java.lang.Object
software.amazon.awssdk.services.databasemigration.model.KafkaSettings
All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>

@Generated("software.amazon.awssdk:codegen") public final class KafkaSettings extends Object implements SdkPojo, Serializable, ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>

Provides information that describes an Apache Kafka endpoint. This information includes the output format of records applied to the endpoint and details of transaction and control table data.
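
A KafkaSettings instance is immutable and is created through its builder (see builder() and toBuilder() below). A minimal sketch of constructing one, assuming the builder exposes fluent setters named after the accessors documented on this page, which is the convention for generated SDK models:

import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;
import software.amazon.awssdk.services.databasemigration.model.MessageFormatValue;

public class KafkaSettingsExample {
    public static void main(String[] args) {
        // Broker locations are a comma-separated list of host:port pairs.
        KafkaSettings settings = KafkaSettings.builder()
                .broker("ec2-12-345-678-901.compute-1.amazonaws.com:2345")
                .topic("dms-migration-topic")           // if omitted, DMS uses "kafka-default-topic"
                .messageFormat(MessageFormatValue.JSON) // or JSON_UNFORMATTED for single-line records
                .build();

        System.out.println(settings); // sensitive fields are redacted in toString()
    }
}

An object built this way is typically supplied as the Kafka endpoint settings on a request such as CreateEndpointRequest via its kafkaSettings setter.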

  • Method Details

    • broker

      public final String broker()

      A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.

      Returns:
      A comma-separated list of one or more broker locations in your Kafka cluster that host your Kafka instance. Specify each broker location in the form broker-hostname-or-ip:port. For example, "ec2-12-345-678-901.compute-1.amazonaws.com:2345". For more information and examples of specifying a list of broker locations, see Using Apache Kafka as a target for Database Migration Service in the Database Migration Service User Guide.
    • topic

      public final String topic()

      The topic to which you migrate the data. If you don't specify a topic, DMS specifies "kafka-default-topic" as the migration topic.

      Returns:
      The topic to which you migrate the data. If you don't specify a topic, DMS specifies "kafka-default-topic" as the migration topic.
    • messageFormat

      public final MessageFormatValue messageFormat()

      The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

      If the service returns an enum value that is not available in the current SDK version, messageFormat will return MessageFormatValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from messageFormatAsString().

      Returns:
      The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
      See Also: MessageFormatValue
    • messageFormatAsString

      public final String messageFormatAsString()

      The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).

      If the service returns an enum value that is not available in the current SDK version, messageFormat will return MessageFormatValue.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from messageFormatAsString().

      Returns:
      The output format for the records created on the endpoint. The message format is JSON (default) or JSON_UNFORMATTED (a single line with no tab).
      See Also: MessageFormatValue
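
      Because the enum-returning accessor maps unrecognized service values to MessageFormatValue.UNKNOWN_TO_SDK_VERSION, callers that must not lose information can fall back to the raw string. A minimal sketch of that pattern (the helper class and method names are illustrative, not part of the SDK):

      import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;
      import software.amazon.awssdk.services.databasemigration.model.MessageFormatValue;

      final class MessageFormatInspector {
          static String describe(KafkaSettings settings) {
              MessageFormatValue format = settings.messageFormat();
              if (format == MessageFormatValue.UNKNOWN_TO_SDK_VERSION) {
                  // The service returned a value this SDK version does not model yet;
                  // messageFormatAsString() still exposes the raw string.
                  return "Unrecognized message format: " + settings.messageFormatAsString();
              }
              return "Message format: " + format;
          }
      }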
    • includeTransactionDetails

      public final Boolean includeTransactionDetails()

      Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.

      Returns:
      Provides detailed transaction information from the source database. This information includes a commit timestamp, a log position, and values for transaction_id, previous transaction_id, and transaction_record_id (the record offset within a transaction). The default is false.
    • includePartitionValue

      public final Boolean includePartitionValue()

      Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.

      Returns:
      Shows the partition value within the Kafka message output unless the partition type is schema-table-type. The default is false.
    • partitionIncludeSchemaTable

      public final Boolean partitionIncludeSchemaTable()

      Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.

      Returns:
      Prefixes schema and table names to partition values, when the partition type is primary-key-type. Doing this increases data distribution among Kafka partitions. For example, suppose that a SysBench schema has thousands of tables and each table has only a limited range for a primary key. In this case, the same primary key is sent from thousands of tables to the same partition, which causes throttling. The default is false.
    • includeTableAlterOperations

      public final Boolean includeTableAlterOperations()

      Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.

      Returns:
      Includes any data definition language (DDL) operations that change the table in the control data, such as rename-table, drop-table, add-column, drop-column, and rename-column. The default is false.
    • includeControlDetails

      public final Boolean includeControlDetails()

      Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.

      Returns:
      Shows detailed control information for table definition, column definition, and table and column changes in the Kafka message output. The default is false.
    • messageMaxBytes

      public final Integer messageMaxBytes()

      The maximum size in bytes for records created on the endpoint. The default is 1,000,000.

      Returns:
      The maximum size in bytes for records created on the endpoint. The default is 1,000,000.
    • includeNullAndEmpty

      public final Boolean includeNullAndEmpty()

      Include NULL and empty columns for records migrated to the endpoint. The default is false.

      Returns:
      Include NULL and empty columns for records migrated to the endpoint. The default is false.
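
      Taken together, the Boolean settings above control how much transaction and control metadata DMS writes to Kafka, and messageMaxBytes bounds the record size. A sketch that turns the detail options on and raises the size limit (builder setter names assumed to mirror the accessors; the class name is illustrative):

      import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;

      final class DetailedKafkaSettings {
          static KafkaSettings build() {
              return KafkaSettings.builder()
                      .broker("broker-1.example.com:9092,broker-2.example.com:9092")
                      .includeTransactionDetails(true)   // commit timestamp, log position, transaction ids
                      .includePartitionValue(true)       // partition value in the message output
                      .partitionIncludeSchemaTable(true) // prefix schema/table names to spread partitions
                      .includeTableAlterOperations(true) // DDL operations in the control data
                      .includeControlDetails(true)       // table and column definition changes
                      .includeNullAndEmpty(true)         // keep NULL and empty columns
                      .messageMaxBytes(2_000_000)        // default is 1,000,000
                      .build();
          }
      }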
    • securityProtocol

      public final KafkaSecurityProtocol securityProtocol()

      Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.

      If the service returns an enum value that is not available in the current SDK version, securityProtocol will return KafkaSecurityProtocol.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from securityProtocolAsString().

      Returns:
      Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
      See Also: KafkaSecurityProtocol
    • securityProtocolAsString

      public final String securityProtocolAsString()

      Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.

      If the service returns an enum value that is not available in the current SDK version, securityProtocol will return KafkaSecurityProtocol.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from securityProtocolAsString().

      Returns:
      Sets a secure connection to a Kafka target endpoint using Transport Layer Security (TLS). Options include ssl-encryption, ssl-authentication, and sasl-ssl. sasl-ssl requires SaslUsername and SaslPassword.
      See Also: KafkaSecurityProtocol
    • sslClientCertificateArn

      public final String sslClientCertificateArn()

      The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.

      Returns:
      The Amazon Resource Name (ARN) of the client certificate used to securely connect to a Kafka target endpoint.
    • sslClientKeyArn

      public final String sslClientKeyArn()

      The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.

      Returns:
      The Amazon Resource Name (ARN) for the client private key used to securely connect to a Kafka target endpoint.
    • sslClientKeyPassword

      public final String sslClientKeyPassword()

      The password for the client private key used to securely connect to a Kafka target endpoint.

      Returns:
      The password for the client private key used to securely connect to a Kafka target endpoint.
    • sslCaCertificateArn

      public final String sslCaCertificateArn()

      The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.

      Returns:
      The Amazon Resource Name (ARN) for the private certificate authority (CA) cert that DMS uses to securely connect to your Kafka target endpoint.
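
      For the ssl-authentication protocol, the four certificate-related settings above are used together. A sketch, assuming the KafkaSecurityProtocol enum exposes an SSL_AUTHENTICATION constant for the documented ssl-authentication value; the ARNs, password, and class name are placeholders:

      import software.amazon.awssdk.services.databasemigration.model.KafkaSecurityProtocol;
      import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;

      final class SslAuthKafkaSettings {
          static KafkaSettings build() {
              return KafkaSettings.builder()
                      .broker("broker-1.example.com:9093")
                      .securityProtocol(KafkaSecurityProtocol.SSL_AUTHENTICATION)
                      // Placeholder ARNs: substitute the certificates imported into DMS.
                      .sslClientCertificateArn("arn:aws:dms:us-east-1:111122223333:cert:client-cert")
                      .sslClientKeyArn("arn:aws:dms:us-east-1:111122223333:cert:client-key")
                      .sslClientKeyPassword("example-key-password")
                      .sslCaCertificateArn("arn:aws:dms:us-east-1:111122223333:cert:ca-cert")
                      .build();
          }
      }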
    • saslUsername

      public final String saslUsername()

      The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

      Returns:
      The secure user name you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
    • saslPassword

      public final String saslPassword()

      The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.

      Returns:
      The secure password you created when you first set up your MSK cluster to validate a client identity and make an encrypted connection between server and client using SASL-SSL authentication.
    • noHexPrefix

      public final Boolean noHexPrefix()

      Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.

      Returns:
      Set this optional parameter to true to avoid adding a '0x' prefix to raw data in hexadecimal format. For example, by default, DMS adds a '0x' prefix to the LOB column type in hexadecimal format moving from an Oracle source to a Kafka target. Use the NoHexPrefix endpoint setting to enable migration of RAW data type columns without adding the '0x' prefix.
    • saslMechanism

      public final KafkaSaslMechanism saslMechanism()

      For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.

      If the service returns an enum value that is not available in the current SDK version, saslMechanism will return KafkaSaslMechanism.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from saslMechanismAsString().

      Returns:
      For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.
      See Also: KafkaSaslMechanism
    • saslMechanismAsString

      public final String saslMechanismAsString()

      For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.

      If the service returns an enum value that is not available in the current SDK version, saslMechanism will return KafkaSaslMechanism.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from saslMechanismAsString().

      Returns:
      For SASL/SSL authentication, DMS supports the SCRAM-SHA-512 mechanism by default. DMS versions 3.5.0 and later also support the PLAIN mechanism. To use the PLAIN mechanism, set this parameter to PLAIN.
      See Also: KafkaSaslMechanism
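
      For the sasl-ssl protocol, SaslUsername and SaslPassword are required, and the mechanism can optionally be switched from the default SCRAM-SHA-512 to PLAIN. A sketch, assuming the enum constants SASL_SSL and PLAIN correspond to the documented values; the broker address and class name are placeholders:

      import software.amazon.awssdk.services.databasemigration.model.KafkaSaslMechanism;
      import software.amazon.awssdk.services.databasemigration.model.KafkaSecurityProtocol;
      import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;

      final class SaslSslKafkaSettings {
          static KafkaSettings build(String user, String password) {
              return KafkaSettings.builder()
                      .broker("b-1.example-msk.amazonaws.com:9096")
                      .securityProtocol(KafkaSecurityProtocol.SASL_SSL) // requires SaslUsername and SaslPassword
                      .saslUsername(user)
                      .saslPassword(password)
                      .saslMechanism(KafkaSaslMechanism.PLAIN)          // PLAIN requires DMS 3.5.0 or later
                      .build();
          }
      }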
    • sslEndpointIdentificationAlgorithm

      public final KafkaSslEndpointIdentificationAlgorithm sslEndpointIdentificationAlgorithm()

      Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.

      If the service returns an enum value that is not available in the current SDK version, sslEndpointIdentificationAlgorithm will return KafkaSslEndpointIdentificationAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from sslEndpointIdentificationAlgorithmAsString().

      Returns:
      Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
      See Also: KafkaSslEndpointIdentificationAlgorithm
    • sslEndpointIdentificationAlgorithmAsString

      public final String sslEndpointIdentificationAlgorithmAsString()

      Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.

      If the service returns an enum value that is not available in the current SDK version, sslEndpointIdentificationAlgorithm will return KafkaSslEndpointIdentificationAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from sslEndpointIdentificationAlgorithmAsString().

      Returns:
      Sets hostname verification for the certificate. This setting is supported in DMS version 3.5.1 and later.
      See Also: KafkaSslEndpointIdentificationAlgorithm
    • toBuilder

      public KafkaSettings.Builder toBuilder()
      Description copied from interface: ToCopyableBuilder
      Take this object and create a builder that contains all of the current property values of this object.
      Specified by:
      toBuilder in interface ToCopyableBuilder<KafkaSettings.Builder,KafkaSettings>
      Returns:
      a builder for type T
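
      A sketch of using toBuilder() to derive a modified copy without mutating the original immutable object (the helper class is illustrative):

      import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;
      import software.amazon.awssdk.services.databasemigration.model.MessageFormatValue;

      final class CopyKafkaSettings {
          static KafkaSettings withUnformattedJson(KafkaSettings original) {
              // Copies all current property values, then overrides one of them.
              return original.toBuilder()
                      .messageFormat(MessageFormatValue.JSON_UNFORMATTED)
                      .build();
          }
      }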
    • builder

      public static KafkaSettings.Builder builder()
    • serializableBuilderClass

      public static Class<? extends KafkaSettings.Builder> serializableBuilderClass()
    • hashCode

      public final int hashCode()
      Overrides:
      hashCode in class Object
    • equals

      public final boolean equals(Object obj)
      Overrides:
      equals in class Object
    • equalsBySdkFields

      public final boolean equalsBySdkFields(Object obj)
      Description copied from interface: SdkPojo
      Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.

      If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.

      Specified by:
      equalsBySdkFields in interface SdkPojo
      Parameters:
      obj - the object to be compared with
      Returns:
      true if the other object is equal to this object by SDK fields, false otherwise.
    • toString

      public final String toString()
      Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be redacted from this string using a placeholder value.
      Overrides:
      toString in class Object
    • getValueForField

      public final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
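
      getValueForField provides reflective access to a modeled field by name, returning an empty Optional when the field name is not recognized. A sketch, assuming the modeled field name for topic() follows the service model's member naming (for example "Topic"):

      import java.util.Optional;
      import software.amazon.awssdk.services.databasemigration.model.KafkaSettings;

      final class FieldLookup {
          static void printTopic(KafkaSettings settings) {
              // "Topic" is assumed to be the modeled field name; an unrecognized
              // name yields Optional.empty().
              Optional<String> topic = settings.getValueForField("Topic", String.class);
              topic.ifPresent(t -> System.out.println("Topic: " + t));
          }
      }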
    • sdkFields

      public final List<SdkField<?>> sdkFields()
      Specified by:
      sdkFields in interface SdkPojo
      Returns:
      List of SdkField in this POJO. May be an empty list, but should never be null.