Interface RedshiftSettings.Builder
- All Superinterfaces:
Buildable, CopyableBuilder<RedshiftSettings.Builder,RedshiftSettings>, SdkBuilder<RedshiftSettings.Builder,RedshiftSettings>, SdkPojo
- Enclosing class:
RedshiftSettings
-
Method Summary
- acceptAnyDate(Boolean acceptAnyDate)
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error.
- afterConnectScript(String afterConnectScript)
Code to run after connecting.
- bucketFolder(String bucketFolder)
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
- bucketName(String bucketName)
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- caseSensitiveNames(Boolean caseSensitiveNames)
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true.
- compUpdate(Boolean compUpdate)
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty.
- connectionTimeout(Integer connectionTimeout)
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- databaseName(String databaseName)
The name of the Amazon Redshift data warehouse (service) that you are working with.
- dateFormat(String dateFormat)
The date format that you are using.
- emptyAsNull(Boolean emptyAsNull)
A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL.
- encryptionMode(String encryptionMode)
The type of server-side encryption that you want to use for your data.
- encryptionMode(EncryptionModeValue encryptionMode)
The type of server-side encryption that you want to use for your data.
- explicitIds(Boolean explicitIds)
This setting is only valid for a full-load migration task.
- fileTransferUploadStreams(Integer fileTransferUploadStreams)
The number of threads used to upload a single file.
- loadTimeout(Integer loadTimeout)
The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- mapBooleanAsBoolean(Boolean mapBooleanAsBoolean)
When true, lets Redshift migrate the boolean type as boolean.
- maxFileSize(Integer maxFileSize)
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift.
- password(String password)
The password for the user named in the username property.
- port(Integer port)
The port number for Amazon Redshift.
- removeQuotes(Boolean removeQuotes)
A value that specifies to remove surrounding quotation marks from strings in the incoming data.
- replaceChars(String replaceChars)
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead.
- replaceInvalidChars(String replaceInvalidChars)
A list of characters that you want to replace.
- secretsManagerAccessRoleArn(String secretsManagerAccessRoleArn)
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret.
- secretsManagerSecretId(String secretsManagerSecretId)
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
- serverName(String serverName)
The name of the Amazon Redshift cluster you are using.
- serverSideEncryptionKmsKeyId(String serverSideEncryptionKmsKeyId)
The KMS key ID.
- serviceAccessRoleArn(String serviceAccessRoleArn)
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
- timeFormat(String timeFormat)
The time format that you want to use.
- trimBlanks(Boolean trimBlanks)
A value that specifies to remove the trailing white space characters from a VARCHAR string.
- truncateColumns(Boolean truncateColumns)
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column.
- username(String username)
An Amazon Redshift user name for a registered user.
- writeBufferSize(Integer writeBufferSize)
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance.
Methods inherited from interface software.amazon.awssdk.utils.builder.CopyableBuilder
copy
Methods inherited from interface software.amazon.awssdk.utils.builder.SdkBuilder
applyMutation, build
Methods inherited from interface software.amazon.awssdk.core.SdkPojo
equalsBySdkFields, sdkFields
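Taken together, these methods follow the fluent AWS SDK for Java 2.x builder pattern: each setter returns the builder, and build() produces an immutable RedshiftSettings. A minimal sketch follows; the endpoint, database, bucket, and role ARN are hypothetical placeholders, and the builder is assumed to come from the conventional static RedshiftSettings.builder() factory on the enclosing class:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class RedshiftSettingsExample {
    public static void main(String[] args) {
        // Chain setters; each returns the builder so calls compose fluently.
        RedshiftSettings settings = RedshiftSettings.builder()
                .serverName("my-cluster.example.us-east-1.redshift.amazonaws.com") // hypothetical
                .port(5439)                    // default Redshift port
                .databaseName("analytics")     // hypothetical warehouse name
                .bucketName("my-dms-staging")  // hypothetical intermediate S3 bucket
                .bucketFolder("redshift-load")
                .serviceAccessRoleArn("arn:aws:iam::123456789012:role/dms-redshift") // hypothetical
                .acceptAnyDate(true)
                .dateFormat("auto")
                .build();
        System.out.println(settings);
    }
}
```

Because the interface extends CopyableBuilder, an existing RedshiftSettings can also be turned back into a builder, adjusted, and rebuilt rather than constructed from scratch.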
-
Method Details
-
acceptAnyDate
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default).
This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- Parameters:
acceptAnyDate
- A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
afterConnectScript
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
- Parameters:
afterConnectScript
- Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
bucketFolder
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide.
For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
- Parameters:
bucketFolder
- An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster. For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
bucketName
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- Parameters:
bucketName
- The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
caseSensitiveNames
If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.
- Parameters:
caseSensitiveNames
- If Amazon Redshift is configured to support case sensitive schema names, set CaseSensitiveNames to true. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
compUpdate
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
- Parameters:
compUpdate
- If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
connectionTimeout
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- Parameters:
connectionTimeout
- A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
databaseName
The name of the Amazon Redshift data warehouse (service) that you are working with.
- Parameters:
databaseName
- The name of the Amazon Redshift data warehouse (service) that you are working with.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
dateFormat
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto.
- Parameters:
dateFormat
- The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this to auto.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
emptyAsNull
A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
- Parameters:
emptyAsNull
- A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
encryptionMode
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.
To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
- Parameters:
encryptionMode
- The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
-
encryptionMode
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.
To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
- Parameters:
encryptionMode
- The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connections attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket"
- Returns:
- Returns a reference to this object so that method calls can be chained together.
- See Also:
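The SSE_S3 policy requirement described above can be written as an IAM policy document roughly like the following. This is a sketch only; in practice you would typically scope Resource to your intermediate bucket and its objects rather than the broad "arn:aws:s3:::*" shown here:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}
```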
-
explicitIds
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
- Parameters:
explicitIds
- This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
fileTransferUploadStreams
The number of threads (parallel streams) used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
- Parameters:
fileTransferUploadStreams
- The number of threads (parallel streams) used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
loadTimeout
The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- Parameters:
loadTimeout
- The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
maxFileSize
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
- Parameters:
maxFileSize
- The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1,048,576 KB (1 GB).
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
password
The password for the user named in the username property.
- Parameters:
password
- The password for the user named in the username property.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
port
The port number for Amazon Redshift. The default value is 5439.
- Parameters:
port
- The port number for Amazon Redshift. The default value is 5439.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
removeQuotes
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
- Parameters:
removeQuotes
- A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
replaceInvalidChars
A list of characters that you want to replace. Use with ReplaceChars.
- Parameters:
replaceInvalidChars
- A list of characters that you want to replace. Use with ReplaceChars.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
replaceChars
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
- Parameters:
replaceChars
- A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
serverName
The name of the Amazon Redshift cluster you are using.
- Parameters:
serverName
- The name of the Amazon Redshift cluster you are using.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
serviceAccessRoleArn
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
- Parameters:
serviceAccessRoleArn
- The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
serverSideEncryptionKmsKeyId
The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
- Parameters:
serverSideEncryptionKmsKeyId
- The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
timeFormat
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto.
- Parameters:
timeFormat
- The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
trimBlanks
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
- Parameters:
trimBlanks
- A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
truncateColumns
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
- Parameters:
truncateColumns
- A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
username
An Amazon Redshift user name for a registered user.
- Parameters:
username
- An Amazon Redshift user name for a registered user.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
writeBufferSize
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000 KB).
- Parameters:
writeBufferSize
- The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000 KB).
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
secretsManagerAccessRoleArn
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
- Parameters:
secretsManagerAccessRoleArn
- The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
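The two mutually exclusive credential configurations described above can be sketched as alternative builder chains. All ARNs, names, and values below are illustrative placeholders, not working credentials, and the import path is assumed from the SDK's databasemigration module:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class RedshiftCredentialOptions {
    public static void main(String[] args) {
        // Option 1: reference a Secrets Manager secret via an access role.
        RedshiftSettings viaSecretsManager = RedshiftSettings.builder()
                .secretsManagerAccessRoleArn("arn:aws:iam::123456789012:role/dms-secrets-access") // hypothetical
                .secretsManagerSecretId("my-redshift-secret") // hypothetical friendly name
                .build();

        // Option 2: clear-text connection values. Don't combine these with
        // Option 1 on the same settings object; the service accepts only one set.
        RedshiftSettings viaClearText = RedshiftSettings.builder()
                .username("dms_user")          // hypothetical
                .password("example-password")  // hypothetical
                .serverName("my-cluster.example.us-east-1.redshift.amazonaws.com")
                .port(5439)
                .build();
    }
}
```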
-
secretsManagerSecretId
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
- Parameters:
secretsManagerSecretId
- The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-
mapBooleanAsBoolean
When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
- Parameters:
mapBooleanAsBoolean
- When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
- Returns:
- Returns a reference to this object so that method calls can be chained together.
-