Class RedshiftSettings
- All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<RedshiftSettings.Builder, RedshiftSettings>
Provides information that defines an Amazon Redshift endpoint.
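As with other immutable model classes in the AWS SDK for Java 2.x, a RedshiftSettings instance is created through its fluent builder. A minimal sketch; the server name, bucket, and credentials below are placeholders, not real resources:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class RedshiftSettingsExample {
    public static void main(String[] args) {
        // Build an immutable settings object; all values are placeholder examples.
        RedshiftSettings settings = RedshiftSettings.builder()
                .serverName("my-cluster.example.us-east-1.redshift.amazonaws.com") // hypothetical endpoint
                .port(5439)                            // default Redshift port
                .databaseName("warehouse")
                .username("dms_user")
                .password("example-password")
                .bucketName("my-intermediate-bucket")  // hypothetical S3 staging bucket
                .build();

        System.out.println(settings.port());          // 5439
        System.out.println(settings.databaseName());  // warehouse
    }
}
```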
Nested Class Summary
Nested Classes -
Method Summary
Modifier and Type / Method / Description
final Boolean acceptAnyDate() - A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error.
final String afterConnectScript() - Code to run after connecting.
final String bucketFolder() - An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
final String bucketName() - The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
static RedshiftSettings.Builder builder()
final Boolean caseSensitiveNames() - If Amazon Redshift is configured to support case-sensitive schema names, set CaseSensitiveNames to true.
final Boolean compUpdate() - If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty.
final Integer connectionTimeout() - A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
final String databaseName() - The name of the Amazon Redshift data warehouse (service) that you are working with.
final String dateFormat() - The date format that you are using.
final Boolean emptyAsNull() - A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL.
final EncryptionModeValue encryptionMode() - The type of server-side encryption that you want to use for your data.
final String encryptionModeAsString() - The type of server-side encryption that you want to use for your data.
final boolean equals(Object obj)
final boolean equalsBySdkFields(Object obj) - Indicates whether some other object is "equal to" this one by SDK fields.
final Boolean explicitIds() - This setting is only valid for a full-load migration task.
final Integer fileTransferUploadStreams() - The number of threads used to upload a single file.
final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
final int hashCode()
final Integer loadTimeout() - The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
final Boolean mapBooleanAsBoolean() - When true, lets Redshift migrate the boolean type as boolean.
final Integer maxFileSize() - The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift.
final String password() - The password for the user named in the username property.
final Integer port() - The port number for Amazon Redshift.
final Boolean removeQuotes() - A value that specifies to remove surrounding quotation marks from strings in the incoming data.
final String replaceChars() - A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead.
final String replaceInvalidChars() - A list of characters that you want to replace.
final String secretsManagerAccessRoleArn() - The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret.
final String secretsManagerSecretId() - The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
static Class<? extends RedshiftSettings.Builder> serializableBuilderClass()
final String serverName() - The name of the Amazon Redshift cluster you are using.
final String serverSideEncryptionKmsKeyId() - The KMS key ID.
final String serviceAccessRoleArn() - The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service.
final String timeFormat() - The time format that you want to use.
final RedshiftSettings.Builder toBuilder() - Take this object and create a builder that contains all of the current property values of this object.
final String toString() - Returns a string representation of this object.
final Boolean trimBlanks() - A value that specifies to remove the trailing white space characters from a VARCHAR string.
final Boolean truncateColumns() - A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column.
final String username() - An Amazon Redshift user name for a registered user.
final Integer writeBufferSize() - The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance.
Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
copy
-
Method Details
-
acceptAnyDate
A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
- Returns:
- A value that indicates to allow any date format, including invalid formats such as 00/00/00 00:00:00, to be loaded without generating an error. You can choose true or false (the default). This parameter applies only to TIMESTAMP and DATE columns. Always use ACCEPTANYDATE with the DATEFORMAT parameter. If the date format for the data doesn't match the DATEFORMAT specification, Amazon Redshift inserts a NULL value into that field.
-
afterConnectScript
Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
- Returns:
- Code to run after connecting. This parameter should contain the code itself, not the name of a file containing the code.
-
bucketFolder
An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
- Returns:
- An S3 folder where the comma-separated-value (.csv) files are stored before being uploaded to the target
Redshift cluster.
For full load mode, DMS converts source records into .csv files and loads them to the BucketFolder/TableID path. DMS uses the Redshift
COPY command to upload the .csv files to the target table. The files are deleted once the COPY operation has finished. For more information, see COPY in the Amazon Redshift Database Developer Guide. For change-data-capture (CDC) mode, DMS creates a NetChanges table, and loads the .csv files to this BucketFolder/NetChangesTableID path.
-
bucketName
The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
- Returns:
- The name of the intermediate S3 bucket used to store .csv files before uploading data to Redshift.
-
caseSensitiveNames
If Amazon Redshift is configured to support case-sensitive schema names, set CaseSensitiveNames to true. The default is false.
- Returns:
- If Amazon Redshift is configured to support case-sensitive schema names, set CaseSensitiveNames to true. The default is false.
-
compUpdate
If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
- Returns:
- If you set CompUpdate to true, Amazon Redshift applies automatic compression if the table is empty. This applies even if the table columns already have encodings other than RAW. If you set CompUpdate to false, automatic compression is disabled and existing column encodings aren't changed. The default is true.
-
connectionTimeout
A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
- Returns:
- A value that sets the amount of time to wait (in milliseconds) before timing out, beginning from when you initially establish a connection.
-
databaseName
The name of the Amazon Redshift data warehouse (service) that you are working with.
- Returns:
- The name of the Amazon Redshift data warehouse (service) that you are working with.
-
dateFormat
The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string.
If your date and time values use formats different from each other, set this to auto.
- Returns:
- The date format that you are using. Valid values are auto (case-sensitive), your date format string enclosed in quotes, or NULL. If this parameter is left unset (NULL), it defaults to a format of 'YYYY-MM-DD'. Using auto recognizes most strings, even some that aren't supported when you use a date format string. If your date and time values use formats different from each other, set this to auto.
-
emptyAsNull
A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
- Returns:
- A value that specifies whether DMS should migrate empty CHAR and VARCHAR fields as NULL. A value of true sets empty CHAR and VARCHAR fields to null. The default is false.
-
encryptionMode
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.
To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
If the service returns an enum value that is not available in the current SDK version, encryptionMode will return EncryptionModeValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from encryptionModeAsString().
- Returns:
- The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
- See Also:
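Because an older SDK release may not recognize enum values added to the service later, a defensive check against UNKNOWN_TO_SDK_VERSION (as described above) might look like this sketch:

```java
import software.amazon.awssdk.services.databasemigration.model.EncryptionModeValue;
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class EncryptionModeCheck {
    public static void main(String[] args) {
        RedshiftSettings settings = RedshiftSettings.builder()
                .encryptionMode(EncryptionModeValue.SSE_KMS)
                .build();

        if (settings.encryptionMode() == EncryptionModeValue.UNKNOWN_TO_SDK_VERSION) {
            // Fall back to the raw string when the SDK's enum is out of date.
            System.out.println("Unrecognized mode: " + settings.encryptionModeAsString());
        } else {
            System.out.println("Mode: " + settings.encryptionMode());
        }
    }
}
```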
-
encryptionModeAsString
The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS.
For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS.
To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
If the service returns an enum value that is not available in the current SDK version, encryptionMode will return EncryptionModeValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from encryptionModeAsString().
- Returns:
- The type of server-side encryption that you want to use for your data. This encryption type is part of the endpoint settings or the extra connection attributes for Amazon S3. You can choose either SSE_S3 (the default) or SSE_KMS. For the ModifyEndpoint operation, you can change the existing value of the EncryptionMode parameter from SSE_KMS to SSE_S3. But you can't change the existing value from SSE_S3 to SSE_KMS. To use SSE_S3, create an Identity and Access Management (IAM) role with a policy that allows "arn:aws:s3:::*" to use the following actions: "s3:PutObject", "s3:ListBucket".
- See Also:
-
explicitIds
This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
- Returns:
- This setting is only valid for a full-load migration task. Set ExplicitIds to true to have tables with IDENTITY columns override their auto-generated values with explicit values loaded from the source data files used to populate the tables. The default is false.
-
fileTransferUploadStreams
The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It defaults to 10.
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
- Returns:
- The number of threads used to upload a single file. This parameter accepts a value from 1 through 64. It
defaults to 10.
The number of parallel streams used to upload a single .csv file to an S3 bucket using S3 Multipart Upload. For more information, see Multipart upload overview.
FileTransferUploadStreams accepts a value from 1 through 64. It defaults to 10.
-
loadTimeout
The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
- Returns:
- The amount of time to wait (in milliseconds) before timing out of operations performed by DMS on a Redshift cluster, such as Redshift COPY, INSERT, DELETE, and UPDATE.
-
maxFileSize
The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576 KB (1 GB).
- Returns:
- The maximum size (in KB) of any .csv file used to load data on an S3 bucket and transfer data to Amazon Redshift. It defaults to 1048576 KB (1 GB).
-
password
The password for the user named in the username property.
- Returns:
- The password for the user named in the username property.
-
port
The port number for Amazon Redshift. The default value is 5439.
- Returns:
- The port number for Amazon Redshift. The default value is 5439.
-
removeQuotes
A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
- Returns:
- A value that specifies to remove surrounding quotation marks from strings in the incoming data. All characters within the quotation marks, including delimiters, are retained. Choose true to remove quotation marks. The default is false.
-
replaceInvalidChars
A list of characters that you want to replace. Use with ReplaceChars.
- Returns:
- A list of characters that you want to replace. Use with ReplaceChars.
-
replaceChars
A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
- Returns:
- A value that specifies to replace the invalid characters specified in ReplaceInvalidChars, substituting the specified characters instead. The default is "?".
-
serverName
The name of the Amazon Redshift cluster you are using.
- Returns:
- The name of the Amazon Redshift cluster you are using.
-
serviceAccessRoleArn
The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
- Returns:
- The Amazon Resource Name (ARN) of the IAM role that has access to the Amazon Redshift service. The role must allow the iam:PassRole action.
-
serverSideEncryptionKmsKeyId
The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
- Returns:
- The KMS key ID. If you are using SSE_KMS for the EncryptionMode, provide this key ID. The key that you use needs an attached policy that enables IAM user permissions and allows use of the key.
-
timeFormat
The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string.
If your date and time values use formats different from each other, set this parameter to auto.
- Returns:
- The time format that you want to use. Valid values are auto (case-sensitive), 'timeformat_string', 'epochsecs', or 'epochmillisecs'. It defaults to 10. Using auto recognizes most strings, even some that aren't supported when you use a time format string. If your date and time values use formats different from each other, set this parameter to auto.
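The DateFormat and TimeFormat settings are plain strings on the builder. A minimal sketch that opts into automatic recognition for both:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class FormatSettingsExample {
    public static void main(String[] args) {
        // "auto" is a case-sensitive keyword that recognizes most date/time strings.
        RedshiftSettings settings = RedshiftSettings.builder()
                .dateFormat("auto")
                .timeFormat("auto")
                .build();

        System.out.println(settings.dateFormat()); // auto
        System.out.println(settings.timeFormat()); // auto
    }
}
```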
-
trimBlanks
A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
- Returns:
- A value that specifies to remove the trailing white space characters from a VARCHAR string. This parameter applies only to columns with a VARCHAR data type. Choose true to remove unneeded white space. The default is false.
-
truncateColumns
A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
- Returns:
- A value that specifies to truncate data in columns to the appropriate number of characters, so that the data fits in the column. This parameter applies only to columns with a VARCHAR or CHAR data type, and rows with a size of 4 MB or less. Choose true to truncate data. The default is false.
-
username
An Amazon Redshift user name for a registered user.
- Returns:
- An Amazon Redshift user name for a registered user.
-
writeBufferSize
The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000 KB).
- Returns:
- The size (in KB) of the in-memory file write buffer used when generating .csv files on the local disk at the DMS replication instance. The default value is 1000 (buffer size is 1000 KB).
-
secretsManagerAccessRoleArn
The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint.
You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
- Returns:
- The full Amazon Resource Name (ARN) of the IAM role that specifies DMS as the trusted entity and grants the required permissions to access the value in SecretsManagerSecret. The role must allow the iam:PassRole action. SecretsManagerSecret has the value of the Amazon Web Services Secrets Manager secret that allows access to the Amazon Redshift endpoint. You can specify one of two sets of values for these permissions. You can specify the values for this setting and SecretsManagerSecretId. Or you can specify clear-text values for UserName, Password, ServerName, and Port. You can't specify both. For more information on creating this SecretsManagerSecret and the SecretsManagerAccessRoleArn and SecretsManagerSecretId required to access it, see Using secrets to access Database Migration Service resources in the Database Migration Service User Guide.
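The two mutually exclusive credential styles described above can be sketched as follows; the ARNs, secret name, and endpoint values are placeholders:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class CredentialStyles {
    public static void main(String[] args) {
        // Option 1: Secrets Manager based access (no clear-text credentials).
        RedshiftSettings viaSecret = RedshiftSettings.builder()
                .secretsManagerAccessRoleArn("arn:aws:iam::123456789012:role/dms-secret-access") // hypothetical role
                .secretsManagerSecretId("my-redshift-secret")                                    // hypothetical secret
                .build();

        // Option 2: clear-text values. Do not combine with Option 1.
        RedshiftSettings viaClearText = RedshiftSettings.builder()
                .username("dms_user")
                .password("example-password")
                .serverName("my-cluster.example.us-east-1.redshift.amazonaws.com")
                .port(5439)
                .build();

        System.out.println(viaSecret.secretsManagerSecretId());  // my-redshift-secret
        System.out.println(viaClearText.username());             // dms_user
    }
}
```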
-
secretsManagerSecretId
The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
- Returns:
- The full ARN, partial ARN, or friendly name of the SecretsManagerSecret that contains the Amazon Redshift endpoint connection details.
-
mapBooleanAsBoolean
When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
- Returns:
- When true, lets Redshift migrate the boolean type as boolean. By default, Redshift migrates booleans as varchar(1). You must set this setting on both the source and target endpoints for it to take effect.
-
toBuilder
Description copied from interface: ToCopyableBuilder
Take this object and create a builder that contains all of the current property values of this object.
- Specified by:
toBuilder in interface ToCopyableBuilder<RedshiftSettings.Builder, RedshiftSettings>
- Returns:
- a builder for type T
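Since the model is immutable, toBuilder() is the idiomatic way to derive a modified copy, as in this sketch:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class CopyModifyExample {
    public static void main(String[] args) {
        RedshiftSettings original = RedshiftSettings.builder()
                .databaseName("warehouse") // placeholder value
                .port(5439)
                .build();

        // Copy all current property values, change only the port, and build a new instance.
        RedshiftSettings modified = original.toBuilder()
                .port(5440)
                .build();

        System.out.println(original.port());  // 5439 (unchanged)
        System.out.println(modified.port());  // 5440
    }
}
```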
-
builder
-
serializableBuilderClass
-
hashCode
-
equals
-
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.
If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
- Specified by:
equalsBySdkFields in interface SdkPojo
- Parameters:
obj - the object to be compared with
- Returns:
- true if the other object is equal to this object by SDK fields, false otherwise.
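A short sketch of the SDK-fields comparison; since the modeled fields fully determine equality here, equalsBySdkFields and equals should agree for two identically built instances:

```java
import software.amazon.awssdk.services.databasemigration.model.RedshiftSettings;

public class SdkFieldsEquality {
    public static void main(String[] args) {
        RedshiftSettings a = RedshiftSettings.builder().port(5439).databaseName("warehouse").build();
        RedshiftSettings b = RedshiftSettings.builder().port(5439).databaseName("warehouse").build();

        // Both comparisons consider only the modeled (SDK) fields.
        System.out.println(a.equalsBySdkFields(b)); // true
        System.out.println(a.equals(b));            // true
    }
}
```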
-
toString
-
getValueForField
-
sdkFields
-