Interface OrcSerDe.Builder
- All Superinterfaces:
Buildable, CopyableBuilder<OrcSerDe.Builder,OrcSerDe>, SdkBuilder<OrcSerDe.Builder,OrcSerDe>, SdkPojo
- Enclosing class:
OrcSerDe
-
Method Summary
All configuration methods return OrcSerDe.Builder so that calls can be chained.

blockSizeBytes(Integer blockSizeBytes)
    The Hadoop Distributed File System (HDFS) block size.
bloomFilterColumns(String... bloomFilterColumns)
    The column names for which you want Firehose to create bloom filters.
bloomFilterColumns(Collection<String> bloomFilterColumns)
    The column names for which you want Firehose to create bloom filters.
bloomFilterFalsePositiveProbability(Double bloomFilterFalsePositiveProbability)
    The Bloom filter false positive probability (FPP).
compression(String compression)
    The compression code to use over data blocks.
compression(OrcCompression compression)
    The compression code to use over data blocks.
dictionaryKeyThreshold(Double dictionaryKeyThreshold)
    Represents the fraction of the total number of non-null rows.
enablePadding(Boolean enablePadding)
    Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries.
formatVersion(String formatVersion)
    The version of the file to write.
formatVersion(OrcFormatVersion formatVersion)
    The version of the file to write.
paddingTolerance(Double paddingTolerance)
    A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size.
rowIndexStride(Integer rowIndexStride)
    The number of rows between index entries.
stripeSizeBytes(Integer stripeSizeBytes)
    The number of bytes in each stripe.

Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFields
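The builder methods above are designed to be chained into a single expression. A minimal usage sketch follows; the values shown are illustrative (the column names are hypothetical), and fields left unset keep the service defaults described in the Method Details below:

```java
import java.util.Arrays;

import software.amazon.awssdk.services.firehose.model.OrcCompression;
import software.amazon.awssdk.services.firehose.model.OrcFormatVersion;
import software.amazon.awssdk.services.firehose.model.OrcSerDe;

public class OrcSerDeExample {
    public static void main(String[] args) {
        // Build an OrcSerDe with a few explicit settings; unset fields
        // keep the Firehose defaults (e.g. rowIndexStride = 10,000).
        OrcSerDe orcSerDe = OrcSerDe.builder()
                .stripeSizeBytes(64 * 1024 * 1024)    // 64 MiB (the default)
                .blockSizeBytes(256 * 1024 * 1024)    // 256 MiB (the default)
                .enablePadding(true)
                .paddingTolerance(0.05)
                .compression(OrcCompression.SNAPPY)
                .formatVersion(OrcFormatVersion.V0_12)
                // Hypothetical column names, for illustration only:
                .bloomFilterColumns(Arrays.asList("customer_id", "region"))
                .build();
        System.out.println(orcSerDe);
    }
}
```

This sketch requires the `software.amazon.awssdk:firehose` dependency on the classpath.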
-
Method Details
-
stripeSizeBytes
The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- Parameters:
stripeSizeBytes
- The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
blockSizeBytes
The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- Parameters:
blockSizeBytes
- The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
rowIndexStride
The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- Parameters:
rowIndexStride
- The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
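To make the stride concrete: the ORC writer records one index entry per stride of rows, so the number of index entries for a stripe is the row count divided by the stride, rounded up. A standalone sketch of that arithmetic (not part of the SDK):

```java
public class RowIndexStrideMath {
    // Number of index entries written for a given row count and stride
    // (ceiling division), per the ORC row-index design.
    static long indexEntries(long rows, long stride) {
        return (rows + stride - 1) / stride;
    }

    public static void main(String[] args) {
        long stride = 10_000;  // the Firehose default
        System.out.println(indexEntries(25_000, stride)); // 25,000 rows -> 3 entries
        System.out.println(indexEntries(10_000, stride)); // exactly one stride -> 1 entry
    }
}
```

A smaller stride means finer-grained predicate pushdown at the cost of a larger index.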
-
enablePadding
Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- Parameters:
enablePadding
- Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
paddingTolerance
A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
- Parameters:
paddingTolerance
- A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size.
For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task.
Firehose ignores this parameter when OrcSerDe$EnablePadding is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
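The 3.2 MiB figure quoted above falls directly out of the definition: the tolerance is a fraction of the stripe size, so 0.05 × 64 MiB = 3.2 MiB may be spent on padding within each block. A standalone sketch of that arithmetic (illustrative, not SDK code):

```java
public class PaddingToleranceMath {
    static final long MIB = 1024L * 1024L;

    // Maximum bytes reserved for padding within one HDFS block:
    // the tolerance is applied to the stripe size, not the block size.
    static long maxPaddingBytes(long stripeSizeBytes, double paddingTolerance) {
        return (long) (paddingTolerance * stripeSizeBytes);
    }

    public static void main(String[] args) {
        long stripe = 64 * MIB;   // default stripe size
        double tolerance = 0.05;  // default padding tolerance
        // 5% of 64 MiB ~= 3.2 MiB reserved per 256 MiB block
        System.out.println(maxPaddingBytes(stripe, tolerance));
    }
}
```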
-
compression
The compression code to use over data blocks. The default is SNAPPY.
- Parameters:
compression
- The compression code to use over data blocks. The default is SNAPPY.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
compression
The compression code to use over data blocks. The default is SNAPPY.
- Parameters:
compression
- The compression code to use over data blocks. The default is SNAPPY.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
bloomFilterColumns
The column names for which you want Firehose to create bloom filters. The default is null.
- Parameters:
bloomFilterColumns
- The column names for which you want Firehose to create bloom filters. The default is null.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
bloomFilterColumns
The column names for which you want Firehose to create bloom filters. The default is null.
- Parameters:
bloomFilterColumns
- The column names for which you want Firehose to create bloom filters. The default is null.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
bloomFilterFalsePositiveProbability
The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- Parameters:
bloomFilterFalsePositiveProbability
- The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
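Why a lower FPP makes the filter bigger is standard Bloom filter math rather than anything Firehose-specific: the optimal number of bits per stored key is -ln(p) / (ln 2)^2. A standalone illustration using that textbook formula (an assumption here; it is not the SDK's or ORC writer's internal sizing code):

```java
public class BloomFilterSizing {
    // Optimal bits per key for a target false positive probability p:
    // bits = -ln(p) / (ln 2)^2  (standard Bloom filter result).
    static double bitsPerKey(double p) {
        return -Math.log(p) / (Math.log(2) * Math.log(2));
    }

    public static void main(String[] args) {
        // Lower FPP -> more bits per key -> bigger filter.
        System.out.printf("p=0.05 -> %.2f bits/key%n", bitsPerKey(0.05)); // about 6.2
        System.out.printf("p=0.01 -> %.2f bits/key%n", bitsPerKey(0.01)); // about 9.6
    }
}
```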
-
dictionaryKeyThreshold
Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- Parameters:
dictionaryKeyThreshold
- Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
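In ORC, the threshold is compared against the ratio of distinct keys to non-null rows in a column: dictionary encoding is kept only while that ratio stays at or below the threshold, which is why a threshold of 1 always keeps it on. A standalone sketch of that comparison (an illustration of the semantics, not the ORC writer's actual code):

```java
public class DictionaryThresholdCheck {
    // Dictionary encoding stays enabled while the ratio of distinct keys
    // to non-null rows is at or below the configured threshold.
    static boolean useDictionaryEncoding(long distinctKeys, long nonNullRows, double threshold) {
        return nonNullRows > 0 && (double) distinctKeys / nonNullRows <= threshold;
    }

    public static void main(String[] args) {
        // 500 distinct values in 10,000 rows = ratio 0.05: low cardinality, encode.
        System.out.println(useDictionaryEncoding(500, 10_000, 0.8));   // true
        // 9,500 distinct values in 10,000 rows = ratio 0.95: too many keys, skip.
        System.out.println(useDictionaryEncoding(9_500, 10_000, 0.8)); // false
        // Threshold 1 always permits dictionary encoding, as the Javadoc notes.
        System.out.println(useDictionaryEncoding(9_500, 10_000, 1.0)); // true
    }
}
```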
-
formatVersion
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- Parameters:
formatVersion
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
formatVersion
The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- Parameters:
formatVersion
- The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-