AWS SDK for C++ 1.8.98
Aws::Firehose::Model::ParquetSerDe Class Reference

#include <ParquetSerDe.h>

Public Member Functions

 ParquetSerDe ()
 
 ParquetSerDe (Aws::Utils::Json::JsonView jsonValue)
 
ParquetSerDe & operator= (Aws::Utils::Json::JsonView jsonValue)
 
Aws::Utils::Json::JsonValue Jsonize () const
 
int GetBlockSizeBytes () const
 
bool BlockSizeBytesHasBeenSet () const
 
void SetBlockSizeBytes (int value)
 
ParquetSerDe & WithBlockSizeBytes (int value)
 
int GetPageSizeBytes () const
 
bool PageSizeBytesHasBeenSet () const
 
void SetPageSizeBytes (int value)
 
ParquetSerDe & WithPageSizeBytes (int value)
 
const ParquetCompression & GetCompression () const
 
bool CompressionHasBeenSet () const
 
void SetCompression (const ParquetCompression &value)
 
void SetCompression (ParquetCompression &&value)
 
ParquetSerDe & WithCompression (const ParquetCompression &value)
 
ParquetSerDe & WithCompression (ParquetCompression &&value)
 
bool GetEnableDictionaryCompression () const
 
bool EnableDictionaryCompressionHasBeenSet () const
 
void SetEnableDictionaryCompression (bool value)
 
ParquetSerDe & WithEnableDictionaryCompression (bool value)
 
int GetMaxPaddingBytes () const
 
bool MaxPaddingBytesHasBeenSet () const
 
void SetMaxPaddingBytes (int value)
 
ParquetSerDe & WithMaxPaddingBytes (int value)
 
const ParquetWriterVersion & GetWriterVersion () const
 
bool WriterVersionHasBeenSet () const
 
void SetWriterVersion (const ParquetWriterVersion &value)
 
void SetWriterVersion (ParquetWriterVersion &&value)
 
ParquetSerDe & WithWriterVersion (const ParquetWriterVersion &value)
 
ParquetSerDe & WithWriterVersion (ParquetWriterVersion &&value)
 

Detailed Description

A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.

See Also:

AWS API Reference

Definition at line 35 of file ParquetSerDe.h.
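
A minimal usage sketch: each fluent With* setter returns *this, so a fully specified serializer can be built in one expression. The numeric values below simply spell out the documented defaults and minimums; leaving a field unset lets Kinesis Data Firehose apply its default. The enum header paths and value names (ParquetCompression::SNAPPY, ParquetWriterVersion::V1) follow the SDK's usual layout and are an assumption of this sketch.

#include <aws/firehose/model/ParquetSerDe.h>
#include <aws/firehose/model/ParquetCompression.h>
#include <aws/firehose/model/ParquetWriterVersion.h>

using namespace Aws::Firehose::Model;

ParquetSerDe BuildParquetSerDe()
{
    // Each With* setter returns *this, so the calls chain.
    return ParquetSerDe()
        .WithBlockSizeBytes(256 * 1024 * 1024)        // 256 MiB: documented default (minimum 64 MiB)
        .WithPageSizeBytes(1024 * 1024)               // 1 MiB: documented default (minimum 64 KiB)
        .WithCompression(ParquetCompression::SNAPPY)  // documented default codec
        .WithEnableDictionaryCompression(true)
        .WithMaxPaddingBytes(0)                       // documented default
        .WithWriterVersion(ParquetWriterVersion::V1); // documented default
}

In a delivery-stream configuration this object is typically carried inside a Serializer within the destination's OutputFormatConfiguration; see those model classes for the exact wiring.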

Constructor & Destructor Documentation

◆ ParquetSerDe() [1/2]

Aws::Firehose::Model::ParquetSerDe::ParquetSerDe ( )

◆ ParquetSerDe() [2/2]

Aws::Firehose::Model::ParquetSerDe::ParquetSerDe ( Aws::Utils::Json::JsonView  jsonValue)

Member Function Documentation

◆ BlockSizeBytesHasBeenSet()

bool Aws::Firehose::Model::ParquetSerDe::BlockSizeBytesHasBeenSet ( ) const
inline

The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.

Definition at line 58 of file ParquetSerDe.h.
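
As elsewhere in the SDK's model classes, the HasBeenSet accessors report whether a field was explicitly assigned, which is how an unset field (service default) is distinguished from a deliberately chosen value. A small sketch of that pattern (the wrapper function name is illustrative only):

#include <aws/firehose/model/ParquetSerDe.h>

void BlockSizeExample()
{
    Aws::Firehose::Model::ParquetSerDe serde;
    // Freshly constructed: BlockSizeBytesHasBeenSet() is false, so in the
    // usual SDK pattern the field is omitted from the serialized request
    // and the service default (256 MiB) applies.
    serde.SetBlockSizeBytes(64 * 1024 * 1024);  // explicitly request the 64 MiB minimum
    // BlockSizeBytesHasBeenSet() is now true; Jsonize() will emit the field.
}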

◆ CompressionHasBeenSet()

bool Aws::Firehose::Model::ParquetSerDe::CompressionHasBeenSet ( ) const
inline

The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.

Definition at line 122 of file ParquetSerDe.h.

◆ EnableDictionaryCompressionHasBeenSet()

bool Aws::Firehose::Model::ParquetSerDe::EnableDictionaryCompressionHasBeenSet ( ) const
inline

Indicates whether to enable dictionary compression.

Definition at line 169 of file ParquetSerDe.h.

◆ GetBlockSizeBytes()

int Aws::Firehose::Model::ParquetSerDe::GetBlockSizeBytes ( ) const
inline

The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.

Definition at line 50 of file ParquetSerDe.h.

◆ GetCompression()

const ParquetCompression& Aws::Firehose::Model::ParquetSerDe::GetCompression ( ) const
inline

The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.

Definition at line 113 of file ParquetSerDe.h.
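
The trade-off in the description can be encoded directly; a sketch (ChooseCompression is a hypothetical helper, and the enum header path follows the SDK's usual layout):

#include <aws/firehose/model/ParquetSerDe.h>
#include <aws/firehose/model/ParquetCompression.h>

using namespace Aws::Firehose::Model;

// GZIP when output size matters more than decompression speed,
// otherwise the SNAPPY default, per the description above.
ParquetSerDe& ChooseCompression(ParquetSerDe& serde, bool preferSmallerOutput)
{
    return serde.WithCompression(preferSmallerOutput ? ParquetCompression::GZIP
                                                     : ParquetCompression::SNAPPY);
}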

◆ GetEnableDictionaryCompression()

bool Aws::Firehose::Model::ParquetSerDe::GetEnableDictionaryCompression ( ) const
inline

Indicates whether to enable dictionary compression.

Definition at line 164 of file ParquetSerDe.h.

◆ GetMaxPaddingBytes()

int Aws::Firehose::Model::ParquetSerDe::GetMaxPaddingBytes ( ) const
inline

The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.

Definition at line 186 of file ParquetSerDe.h.

◆ GetPageSizeBytes()

int Aws::Firehose::Model::ParquetSerDe::GetPageSizeBytes ( ) const
inline

The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.

Definition at line 82 of file ParquetSerDe.h.
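
Since the setter takes a raw byte count, it can help to name the documented bounds; the constant names below are illustrative only:

// Documented Parquet page-size bounds, in bytes.
constexpr int kMinParquetPageSizeBytes     = 64 * 1024;    // 64 KiB minimum
constexpr int kDefaultParquetPageSizeBytes = 1024 * 1024;  // 1 MiB default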

◆ GetWriterVersion()

const ParquetWriterVersion& Aws::Firehose::Model::ParquetSerDe::GetWriterVersion ( ) const
inline

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.

Definition at line 211 of file ParquetSerDe.h.

◆ Jsonize()

Aws::Utils::Json::JsonValue Aws::Firehose::Model::ParquetSerDe::Jsonize ( ) const

◆ MaxPaddingBytesHasBeenSet()

bool Aws::Firehose::Model::ParquetSerDe::MaxPaddingBytesHasBeenSet ( ) const
inline

The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.

Definition at line 192 of file ParquetSerDe.h.

◆ operator=()

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::operator= ( Aws::Utils::Json::JsonView  jsonValue)

◆ PageSizeBytesHasBeenSet()

bool Aws::Firehose::Model::ParquetSerDe::PageSizeBytesHasBeenSet ( ) const
inline

The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.

Definition at line 89 of file ParquetSerDe.h.

◆ SetBlockSizeBytes()

void Aws::Firehose::Model::ParquetSerDe::SetBlockSizeBytes ( int  value)
inline

The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.

Definition at line 66 of file ParquetSerDe.h.

◆ SetCompression() [1/2]

void Aws::Firehose::Model::ParquetSerDe::SetCompression ( const ParquetCompression &  value)
inline

The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.

Definition at line 131 of file ParquetSerDe.h.

◆ SetCompression() [2/2]

void Aws::Firehose::Model::ParquetSerDe::SetCompression ( ParquetCompression &&  value)
inline

The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.

Definition at line 140 of file ParquetSerDe.h.

◆ SetEnableDictionaryCompression()

void Aws::Firehose::Model::ParquetSerDe::SetEnableDictionaryCompression ( bool  value)
inline

Indicates whether to enable dictionary compression.

Definition at line 174 of file ParquetSerDe.h.

◆ SetMaxPaddingBytes()

void Aws::Firehose::Model::ParquetSerDe::SetMaxPaddingBytes ( int  value)
inline

The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.

Definition at line 198 of file ParquetSerDe.h.

◆ SetPageSizeBytes()

void Aws::Firehose::Model::ParquetSerDe::SetPageSizeBytes ( int  value)
inline

The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.

Definition at line 96 of file ParquetSerDe.h.

◆ SetWriterVersion() [1/2]

void Aws::Firehose::Model::ParquetSerDe::SetWriterVersion ( const ParquetWriterVersion &  value)
inline

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.

Definition at line 223 of file ParquetSerDe.h.

◆ SetWriterVersion() [2/2]

void Aws::Firehose::Model::ParquetSerDe::SetWriterVersion ( ParquetWriterVersion &&  value)
inline

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.

Definition at line 229 of file ParquetSerDe.h.

◆ WithBlockSizeBytes()

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithBlockSizeBytes ( int  value)
inline

The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Kinesis Data Firehose uses this value for padding calculations.

Definition at line 74 of file ParquetSerDe.h.

◆ WithCompression() [1/2]

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithCompression ( const ParquetCompression &  value)
inline

The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.

Definition at line 149 of file ParquetSerDe.h.

◆ WithCompression() [2/2]

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithCompression ( ParquetCompression &&  value)
inline

The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.

Definition at line 158 of file ParquetSerDe.h.

◆ WithEnableDictionaryCompression()

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithEnableDictionaryCompression ( bool  value)
inline

Indicates whether to enable dictionary compression.

Definition at line 179 of file ParquetSerDe.h.

◆ WithMaxPaddingBytes()

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithMaxPaddingBytes ( int  value)
inline

The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.

Definition at line 204 of file ParquetSerDe.h.

◆ WithPageSizeBytes()

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithPageSizeBytes ( int  value)
inline

The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.

Definition at line 103 of file ParquetSerDe.h.

◆ WithWriterVersion() [1/2]

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithWriterVersion ( const ParquetWriterVersion &  value)
inline

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.

Definition at line 235 of file ParquetSerDe.h.

◆ WithWriterVersion() [2/2]

ParquetSerDe& Aws::Firehose::Model::ParquetSerDe::WithWriterVersion ( ParquetWriterVersion &&  value)
inline

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.

Definition at line 241 of file ParquetSerDe.h.

◆ WriterVersionHasBeenSet()

bool Aws::Firehose::Model::ParquetSerDe::WriterVersionHasBeenSet ( ) const
inline

Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.

Definition at line 217 of file ParquetSerDe.h.


The documentation for this class was generated from the following file:

ParquetSerDe.h