Class KafkaSettings
- All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>
Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data information.
Nested Class Summary
- KafkaSettings.Builder
Method Summary
- final String broker() - A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance.
- static KafkaSettings.Builder builder()
- final boolean equals(Object obj)
- final boolean equalsBySdkFields(Object obj) - Indicates whether some other object is "equal to" this one by SDK fields.
- final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
- final int hashCode()
- final Boolean includeControlDetails() - Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output.
- final Boolean includeNullAndEmpty() - Include NULL and empty columns for records migrated to the endpoint.
- final Boolean includePartitionValue() - Shows the partition value within the Kafka message output unless the partition type is schema-table-type.
- final Boolean includeTableAlterOperations() - Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column.
- final Boolean includeTransactionDetails() - Provides detailed transaction information from the source database.
- final MessageFormatValue messageFormat() - The output format for the records created on the endpoint.
- final String messageFormatAsString() - The output format for the records created on the endpoint.
- final Integer messageMaxBytes() - The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
- final Boolean noHexPrefix() - Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format.
- final Boolean partitionIncludeSchemaTable() - Prefixes schema and table names to partition values, when the partition type is primary-key-type.
- final KafkaSaslMechanism saslMechanism() - For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default.
- final String saslMechanismAsString() - For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default.
- final String saslPassword() - The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
- final String saslUsername() - The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
- final KafkaSecurityProtocol securityProtocol() - Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS).
- final String securityProtocolAsString() - Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS).
- static Class<? extends KafkaSettings.Builder> serializableBuilderClass()
- final String sslCaCertificateArn() - The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
- final String sslClientCertificateArn() - The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
- final String sslClientKeyArn() - The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
- final String sslClientKeyPassword() - The password for the client private key used to securely connect to a Kafka target endpoint.
- final KafkaSslEndpointIdentificationAlgorithm sslEndpointIdentificationAlgorithm() - Sets hostname verification for the certificate.
- final String sslEndpointIdentificationAlgorithmAsString() - Sets hostname verification for the certificate.
- KafkaSettings.Builder toBuilder() - Take this object and create a builder that contains all of the current property values of this object.
- final String topic() - The topic to which you migrate the data.
- final String toString() - Returns a string representation of this object.

Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder
- copy
-
Method Details
-
broker
A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
- Returns:
- A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
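The value is a single comma-separated string of host:port entries. A minimal sketch of assembling one (the hostnames below are placeholders taken from the example format above, not real endpoints):

```java
import java.util.List;

public class BrokerList {
    // Join individual broker locations, each in broker-hostname-or-ip:port
    // form, into the single comma-separated string that broker() returns.
    static String brokerString(List<String> locations) {
        return String.join(",", locations);
    }

    public static void main(String[] args) {
        String brokers = brokerString(List.of(
                "ec2-12-345-678-901.compute-1.amazonaws.com:2345",
                "ec2-10-987-654-321.compute-1.amazonaws.com:2345"));
        System.out.println(brokers);
    }
}
```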
-
topic
The topic to which you migrate the data. If you don't specify a topic, DMS specifies "kafka-default-topic" as the migration topic.
- Returns:
- The topic to which you migrate the data. If you don't specify a topic, DMS specifies "kafka-default-topic" as the migration topic.
-
messageFormat
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
If the service returns an enum value that is not available in the current SDK version, messageFormat will return MessageFormatValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from messageFormatAsString().
- Returns:
- The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
- See Also:
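The UNKNOWN_TO_SDK_VERSION behavior described above is shared by all generated SDK enums: the raw service string is kept, and the typed accessor maps it with a fromValue-style lookup. A minimal sketch of that pattern using a stand-in enum (not the real MessageFormatValue; the raw strings are illustrative):

```java
// Stand-in for a generated SDK enum such as MessageFormatValue.
enum Format {
    JSON("json"),
    JSON_UNFORMATTED("json-unformatted"),
    UNKNOWN_TO_SDK_VERSION(null);

    private final String value;
    Format(String value) { this.value = value; }

    // Map a raw service string to an enum constant, falling back to
    // UNKNOWN_TO_SDK_VERSION when this SDK version doesn't know the value.
    static Format fromValue(String raw) {
        for (Format f : values()) {
            if (f.value != null && f.value.equals(raw)) return f;
        }
        return UNKNOWN_TO_SDK_VERSION;
    }
}

public class EnumFallbackDemo {
    public static void main(String[] args) {
        System.out.println(Format.fromValue("json"));        // JSON
        System.out.println(Format.fromValue("some-new-fmt")); // UNKNOWN_TO_SDK_VERSION
    }
}
```

This is why messageFormatAsString() remains useful: it returns the raw string even when the typed accessor can only report UNKNOWN_TO_SDK_VERSION.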
-
messageFormatAsString
The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
If the service returns an enum value that is not available in the current SDK version, messageFormat will return MessageFormatValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from messageFormatAsString().
- Returns:
- The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
- See Also:
-
includeTransactionDetails
Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
- Returns:
- Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
-
includePartitionValue
Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.
- Returns:
- Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.
-
partitionIncludeSchemaTable
Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
- Returns:
- Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
-
includeTableAlterOperations
Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
- Returns:
- Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
-
includeControlDetails
Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.
- Returns:
- Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.
-
messageMaxBytes
The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
- Returns:
- The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
-
includeNullAndEmpty
Include NULL and empty columns for records migrated to the endpoint. The default is false.
- Returns:
- Include NULL and empty columns for records migrated to the endpoint. The default is false.
-
securityProtocol
Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
If the service returns an enum value that is not available in the current SDK version, securityProtocol will return KafkaSecurityProtocol.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from securityProtocolAsString().
- Returns:
- Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
- See Also:
-
securityProtocolAsString
Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
If the service returns an enum value that is not available in the current SDK version, securityProtocol will return KafkaSecurityProtocol.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from securityProtocolAsString().
- Returns:
- Set secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
- See Also:
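Because sasl-ssl is the only option that additionally needs SaslUsername and SaslPassword, a caller can pre-validate its configuration before creating the endpoint. A hypothetical helper (not part of the SDK) sketching that check:

```java
public class KafkaSecurityCheck {
    // Returns true when the chosen security protocol has everything it
    // needs; only "sasl-ssl" requires a SASL user name and password.
    static boolean isComplete(String securityProtocol,
                              String saslUsername,
                              String saslPassword) {
        if (!"sasl-ssl".equals(securityProtocol)) {
            return true; // ssl-encryption and ssl-authentication need no SASL credentials
        }
        return saslUsername != null && !saslUsername.isEmpty()
                && saslPassword != null && !saslPassword.isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(isComplete("ssl-encryption", null, null)); // true
        System.out.println(isComplete("sasl-ssl", null, null));       // false
    }
}
```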
-
sslClientCertificateArn
The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
- Returns:
- The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
-
sslClientKeyArn
The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
- Returns:
- The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
-
sslClientKeyPassword
The password for the client private key used to securely connect to a Kafka target endpoint.
- Returns:
- The password for the client private key used to securely connect to a Kafka target endpoint.
-
sslCaCertificateArn
The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
- Returns:
- The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
-
saslUsername
The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
- Returns:
- The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
-
saslPassword
The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
- Returns:
- The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
-
noHexPrefix
Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
- Returns:
- Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
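The effect of the setting can be pictured with plain hex encoding. A small sketch (a hypothetical helper, not SDK code) of emitting raw bytes with and without the '0x' prefix:

```java
public class HexPrefixDemo {
    // Hex-encode raw bytes, optionally omitting the '0x' prefix that DMS
    // adds by default (i.e. when NoHexPrefix is false or unset).
    static String toHex(byte[] raw, boolean noHexPrefix) {
        StringBuilder sb = new StringBuilder(noHexPrefix ? "" : "0x");
        for (byte b : raw) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        byte[] raw = {(byte) 0xCA, (byte) 0xFE};
        System.out.println(toHex(raw, false)); // 0xcafe
        System.out.println(toHex(raw, true));  // cafe
    }
}
```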
-
saslMechanism
For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.
If the service returns an enum value that is not available in the current SDK version, saslMechanism will return KafkaSaslMechanism.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from saslMechanismAsString().
- Returns:
- For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.
- See Also:
-
saslMechanismAsString
For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.
If the service returns an enum value that is not available in the current SDK version, saslMechanism will return KafkaSaslMechanism.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from saslMechanismAsString().
- Returns:
- For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.
- See Also:
-
sslEndpointIdentificationAlgorithm
Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
If the service returns an enum value that is not available in the current SDK version, sslEndpointIdentificationAlgorithm will return KafkaSslEndpointIdentificationAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from sslEndpointIdentificationAlgorithmAsString().
- Returns:
- Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
- See Also:
-
sslEndpointIdentificationAlgorithmAsString
Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
If the service returns an enum value that is not available in the current SDK version, sslEndpointIdentificationAlgorithm will return KafkaSslEndpointIdentificationAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from sslEndpointIdentificationAlgorithmAsString().
- Returns:
- Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
- See Also:
-
toBuilder
Description copied from interface: ToCopyableBuilder
Take this object and create a builder that contains all of the current property values of this object.
- Specified by:
- toBuilder in interface ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>
- Returns:
- a builder for type T
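The toBuilder()/build() round trip is the standard copy-with-changes idiom for immutable SDK POJOs. A minimal stand-alone sketch of the same pattern (a toy class mirroring the contract, not the real KafkaSettings):

```java
public class Settings {
    private final String topic;
    private final Integer messageMaxBytes;

    private Settings(Builder b) {
        this.topic = b.topic;
        this.messageMaxBytes = b.messageMaxBytes;
    }

    public String topic() { return topic; }
    public Integer messageMaxBytes() { return messageMaxBytes; }

    // Create a builder pre-populated with this object's current values,
    // mirroring what ToCopyableBuilder.toBuilder() provides.
    public Builder toBuilder() {
        return new Builder().topic(topic).messageMaxBytes(messageMaxBytes);
    }

    public static Builder builder() { return new Builder(); }

    public static class Builder {
        private String topic;
        private Integer messageMaxBytes;

        public Builder topic(String topic) { this.topic = topic; return this; }
        public Builder messageMaxBytes(Integer m) { this.messageMaxBytes = m; return this; }
        public Settings build() { return new Settings(this); }
    }

    public static void main(String[] args) {
        Settings original = Settings.builder()
                .topic("kafka-default-topic").messageMaxBytes(1_000_000).build();
        // Copy everything, changing only the topic.
        Settings copy = original.toBuilder().topic("my-topic").build();
        System.out.println(copy.topic() + " " + copy.messageMaxBytes());
    }
}
```

The original object is untouched; only the copy carries the changed field, which is the point of the copyable-builder contract.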
-
builder
-
serializableBuilderClass
-
hashCode
-
equals
-
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.
If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
- Specified by:
- equalsBySdkFields in interface SdkPojo
- Parameters:
- obj - the object to be compared with
- Returns:
- true if the other object is equal to this object by SDK fields, false otherwise.
-
toString
-
getValueForField
-
sdkFields
-