Class Connection
All Implemented Interfaces:
Serializable, SdkPojo, ToCopyableBuilder<Connection.Builder, Connection>
Defines a connection to a data source.
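A Connection is normally retrieved from the service rather than constructed directly. A minimal sketch using the v2 GlueClient (the connection name "my-connection" is a placeholder):

```java
import software.amazon.awssdk.services.glue.GlueClient;
import software.amazon.awssdk.services.glue.model.Connection;
import software.amazon.awssdk.services.glue.model.GetConnectionRequest;

public class GetConnectionExample {
    public static void main(String[] args) {
        try (GlueClient glue = GlueClient.create()) {
            Connection connection = glue.getConnection(GetConnectionRequest.builder()
                            .name("my-connection") // placeholder connection name
                            .build())
                    .connection();
            System.out.println(connection.name() + ": " + connection.connectionTypeAsString());
        }
    }
}
```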
Nested Class Summary
- static interface Connection.Builder
Method SummaryModifier and TypeMethodDescriptionConnection properties specific to the Athena compute environment.The authentication properties of the connection.static Connection.Builderbuilder()final List<ComputeEnvironment> A list of compute environments compatible with the connection.A list of compute environments compatible with the connection.final Map<ConnectionPropertyKey, String> These key-value pairs define parameters for the connection when using the version 1 Connection schema:These key-value pairs define parameters for the connection when using the version 1 Connection schema:final IntegerThe version of the connection schema for this connection.final ConnectionTypeThe type of the connection.final StringThe type of the connection.final InstantThe timestamp of the time that this connection definition was created.final StringThe description of the connection.final booleanfinal booleanequalsBySdkFields(Object obj) Indicates whether some other object is "equal to" this one by SDK fields.final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz) final booleanFor responses, this returns true if the service returned a value for the AthenaProperties property.final booleanFor responses, this returns true if the service returned a value for the CompatibleComputeEnvironments property.final booleanFor responses, this returns true if the service returned a value for the ConnectionProperties property.final inthashCode()final booleanFor responses, this returns true if the service returned a value for the MatchCriteria property.final booleanFor responses, this returns true if the service returned a value for the PythonProperties property.final booleanFor responses, this returns true if the service returned a value for the SparkProperties property.final InstantA timestamp of the time this connection was last validated.final StringThe user, group, or role that last updated this connection definition.final InstantThe timestamp of the last time the connection definition was updated.A list of criteria that can be used in selecting this connection.final Stringname()The name of the connection definition.The physical connection requirements, such as virtual private cloud (VPC) andSecurityGroup, that are needed to make this connection successfully.Connection properties specific to the Python compute environment.static Class<? extends Connection.Builder> Connection properties specific to the Spark compute environment.final ConnectionStatusstatus()The status of the connection.final StringThe status of the connection.final StringThe reason for the connection status.Take this object and create a builder that contains all of the current property values of this object.final StringtoString()Returns a string representation of this object.Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuildercopy
Method Details
name
The name of the connection definition.
Returns:
The name of the connection definition.
 
description
The description of the connection.
Returns:
The description of the connection.
 
connectionType
The type of the connection. Currently, SFTP is not supported.

If the service returns an enum value that is not available in the current SDK version, connectionType will return ConnectionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from connectionTypeAsString().
Returns:
The type of the connection. Currently, SFTP is not supported.
See Also:
ConnectionType
 
connectionTypeAsString
The type of the connection. Currently, SFTP is not supported.

If the service returns an enum value that is not available in the current SDK version, connectionType will return ConnectionType.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from connectionTypeAsString().
Returns:
The type of the connection. Currently, SFTP is not supported.
See Also:
ConnectionType
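Because the enum can lag behind the service, callers are expected to fall back to the raw string for unrecognized values. A minimal sketch of that pattern (the helper name is illustrative):

```java
import software.amazon.awssdk.services.glue.model.Connection;
import software.amazon.awssdk.services.glue.model.ConnectionType;

public class ConnectionTypes {
    /** Describes the connection type, falling back to the raw value for unmodeled types. */
    static String describeType(Connection connection) {
        ConnectionType type = connection.connectionType();
        if (type == ConnectionType.UNKNOWN_TO_SDK_VERSION) {
            // The service returned a type this SDK version does not model yet.
            return "unrecognized: " + connection.connectionTypeAsString();
        }
        return type.toString();
    }
}
```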
 
hasMatchCriteria
public final boolean hasMatchCriteria()
For responses, this returns true if the service returned a value for the MatchCriteria property. This DOES NOT check that the value is non-empty (for which you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
matchCriteria
A list of criteria that can be used in selecting this connection.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasMatchCriteria() method.
Returns:
A list of criteria that can be used in selecting this connection.
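The interplay between hasMatchCriteria() and isEmpty() looks like this in practice; a short sketch, assuming the list's String element type:

```java
import java.util.List;
import software.amazon.awssdk.services.glue.model.Connection;

public class MatchCriteriaInspection {
    static void inspect(Connection connection) {
        if (!connection.hasMatchCriteria()) {
            System.out.println("Service did not return a MatchCriteria field");
        } else {
            List<String> criteria = connection.matchCriteria(); // immutable, never null here
            System.out.println(criteria.isEmpty() ? "Present but empty" : "Criteria: " + criteria);
        }
    }
}
```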
 
connectionProperties
These key-value pairs define parameters for the connection when using the version 1 Connection schema:

- HOST - The host URI: either the fully qualified domain name (FQDN) or the IPv4 address of the database host.
- PORT - The port number, between 1024 and 65535, of the port on which the database host is listening for database connections.
- USER_NAME - The name under which to log in to the database. The value string for USER_NAME is "USERNAME".
- PASSWORD - A password, if one is used, for the user name.
- ENCRYPTED_PASSWORD - When you enable connection password protection by setting ConnectionPasswordEncryption in the Data Catalog encryption settings, this field stores the encrypted password.
- JDBC_DRIVER_JAR_URI - The Amazon Simple Storage Service (Amazon S3) path of the JAR file that contains the JDBC driver to use.
- JDBC_DRIVER_CLASS_NAME - The class name of the JDBC driver to use.
- JDBC_ENGINE - The name of the JDBC engine to use.
- JDBC_ENGINE_VERSION - The version of the JDBC engine to use.
- CONFIG_FILES - (Reserved for future use.)
- INSTANCE_ID - The instance ID to use.
- JDBC_CONNECTION_URL - The URL for connecting to a JDBC data source.
- JDBC_ENFORCE_SSL - A case-insensitive Boolean string (true, false) specifying whether Secure Sockets Layer (SSL) with hostname matching is enforced for the JDBC connection on the client. The default is false.
- CUSTOM_JDBC_CERT - An Amazon S3 location specifying the customer's root certificate. Glue uses this root certificate to validate the customer's certificate when connecting to the customer database. Glue only handles X.509 certificates. The certificate provided must be DER-encoded and supplied in Base64-encoded PEM format.
- SKIP_CUSTOM_JDBC_CERT_VALIDATION - By default, this is false. Glue validates the Signature algorithm and Subject Public Key Algorithm for the customer certificate. The only permitted algorithms for the Signature algorithm are SHA256withRSA, SHA384withRSA, or SHA512withRSA. For the Subject Public Key Algorithm, the key length must be at least 2048. You can set the value of this property to true to skip Glue's validation of the customer certificate.
- CUSTOM_JDBC_CERT_STRING - A custom JDBC certificate string used for domain match or distinguished name match to prevent a man-in-the-middle attack. In Oracle Database, this is used as the SSL_SERVER_CERT_DN; in Microsoft SQL Server, it is used as the hostNameInCertificate.
- CONNECTION_URL - The URL for connecting to a general (non-JDBC) data source.
- SECRET_ID - The secret ID used for the secret manager of credentials.
- CONNECTOR_URL - The connector URL for a MARKETPLACE or CUSTOM connection.
- CONNECTOR_TYPE - The connector type for a MARKETPLACE or CUSTOM connection.
- CONNECTOR_CLASS_NAME - The connector class name for a MARKETPLACE or CUSTOM connection.
- KAFKA_BOOTSTRAP_SERVERS - A comma-separated list of host and port pairs that are the addresses of the Apache Kafka brokers in a Kafka cluster to which a Kafka client connects and bootstraps itself.
- KAFKA_SSL_ENABLED - Whether to enable or disable SSL on an Apache Kafka connection. The default value is "true".
- KAFKA_CUSTOM_CERT - The Amazon S3 URL for the private CA cert file (.pem format). The default is an empty string.
- KAFKA_SKIP_CUSTOM_CERT_VALIDATION - Whether to skip the validation of the CA cert file. Glue validates three algorithms: SHA256withRSA, SHA384withRSA, and SHA512withRSA. The default value is "false".
- KAFKA_CLIENT_KEYSTORE - The Amazon S3 location of the client keystore file for Kafka client-side authentication (optional).
- KAFKA_CLIENT_KEYSTORE_PASSWORD - The password to access the provided keystore (optional).
- KAFKA_CLIENT_KEY_PASSWORD - A keystore can consist of multiple keys, so this is the password to access the client key to be used with the Kafka server-side key (optional).
- ENCRYPTED_KAFKA_CLIENT_KEYSTORE_PASSWORD - The encrypted version of the Kafka client keystore password (if the user has the Glue encrypt passwords setting selected).
- ENCRYPTED_KAFKA_CLIENT_KEY_PASSWORD - The encrypted version of the Kafka client key password (if the user has the Glue encrypt passwords setting selected).
- KAFKA_SASL_MECHANISM - "SCRAM-SHA-512", "GSSAPI", "AWS_MSK_IAM", or "PLAIN". These are the supported SASL mechanisms.
- KAFKA_SASL_PLAIN_USERNAME - A plaintext username used to authenticate with the "PLAIN" mechanism.
- KAFKA_SASL_PLAIN_PASSWORD - A plaintext password used to authenticate with the "PLAIN" mechanism.
- ENCRYPTED_KAFKA_SASL_PLAIN_PASSWORD - The encrypted version of the Kafka SASL PLAIN password (if the user has the Glue encrypt passwords setting selected).
- KAFKA_SASL_SCRAM_USERNAME - A plaintext username used to authenticate with the "SCRAM-SHA-512" mechanism.
- KAFKA_SASL_SCRAM_PASSWORD - A plaintext password used to authenticate with the "SCRAM-SHA-512" mechanism.
- ENCRYPTED_KAFKA_SASL_SCRAM_PASSWORD - The encrypted version of the Kafka SASL SCRAM password (if the user has the Glue encrypt passwords setting selected).
- KAFKA_SASL_SCRAM_SECRETS_ARN - The Amazon Resource Name of a secret in Amazon Web Services Secrets Manager.
- KAFKA_SASL_GSSAPI_KEYTAB - The S3 location of a Kerberos keytab file. A keytab stores long-term keys for one or more principals. For more information, see MIT Kerberos Documentation: Keytab.
- KAFKA_SASL_GSSAPI_KRB5_CONF - The S3 location of a Kerberos krb5.conf file. A krb5.conf stores Kerberos configuration information, such as the location of the KDC server. For more information, see MIT Kerberos Documentation: krb5.conf.
- KAFKA_SASL_GSSAPI_SERVICE - The Kerberos service name, as set with sasl.kerberos.service.name in your Kafka configuration.
- KAFKA_SASL_GSSAPI_PRINCIPAL - The name of the Kerberos principal used by Glue. For more information, see Kafka Documentation: Configuring Kafka Brokers.
- ROLE_ARN - The role to be used for running queries.
- REGION - The Amazon Web Services Region where queries will be run.
- WORKGROUP_NAME - The name of an Amazon Redshift serverless workgroup or Amazon Athena workgroup in which queries will run.
- CLUSTER_IDENTIFIER - The cluster identifier of an Amazon Redshift cluster in which queries will run.
- DATABASE - The Amazon Redshift database that you are connecting to.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasConnectionProperties() method.
Returns:
These key-value pairs define parameters for the connection when using the version 1 Connection schema, as listed above.
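Reading a single version 1 property from the returned map; a minimal sketch (the fallback value is illustrative):

```java
import java.util.Map;
import software.amazon.awssdk.services.glue.model.Connection;
import software.amazon.awssdk.services.glue.model.ConnectionPropertyKey;

public class ConnectionPropertyLookup {
    /** Reads the JDBC connection URL from a version 1 connection, if the service returned one. */
    static String jdbcUrl(Connection connection) {
        Map<ConnectionPropertyKey, String> props = connection.connectionProperties();
        // The returned map is immutable and never null; absent keys are simply missing.
        return props.getOrDefault(ConnectionPropertyKey.JDBC_CONNECTION_URL, "<not set>");
    }
}
```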
hasConnectionProperties
public final boolean hasConnectionProperties()
For responses, this returns true if the service returned a value for the ConnectionProperties property. This DOES NOT check that the value is non-empty (for which you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
connectionPropertiesAsStrings
These key-value pairs define parameters for the connection when using the version 1 Connection schema. The keys and their meanings are identical to those documented for connectionProperties() above; this accessor exposes the same map with plain String keys.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasConnectionProperties() method.
Returns:
These key-value pairs define parameters for the connection when using the version 1 Connection schema; see the key list under connectionProperties() above.
hasSparkProperties
public final boolean hasSparkProperties()
For responses, this returns true if the service returned a value for the SparkProperties property. This DOES NOT check that the value is non-empty (for which you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
sparkProperties
Connection properties specific to the Spark compute environment.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasSparkProperties() method.
Returns:
Connection properties specific to the Spark compute environment.
 
hasAthenaProperties
public final boolean hasAthenaProperties()
For responses, this returns true if the service returned a value for the AthenaProperties property. This DOES NOT check that the value is non-empty (for which you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
athenaProperties
Connection properties specific to the Athena compute environment.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasAthenaProperties() method.
Returns:
Connection properties specific to the Athena compute environment.
 
hasPythonProperties
public final boolean hasPythonProperties()
For responses, this returns true if the service returned a value for the PythonProperties property. This DOES NOT check that the value is non-empty (for which you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
pythonProperties
Connection properties specific to the Python compute environment.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasPythonProperties() method.
Returns:
Connection properties specific to the Python compute environment.
 
physicalConnectionRequirements
The physical connection requirements, such as virtual private cloud (VPC) and SecurityGroup, that are needed to successfully make this connection.
Returns:
The physical connection requirements, such as virtual private cloud (VPC) and SecurityGroup, that are needed to successfully make this connection.
 
creationTime
The timestamp of the time that this connection definition was created.
Returns:
The timestamp of the time that this connection definition was created.
 
lastUpdatedTime
The timestamp of the last time the connection definition was updated.
Returns:
The timestamp of the last time the connection definition was updated.
 
lastUpdatedBy
The user, group, or role that last updated this connection definition.
Returns:
The user, group, or role that last updated this connection definition.
 
status
The status of the connection. Can be one of: READY, IN_PROGRESS, or FAILED.

If the service returns an enum value that is not available in the current SDK version, status will return ConnectionStatus.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from statusAsString().
Returns:
The status of the connection. Can be one of: READY, IN_PROGRESS, or FAILED.
See Also:
ConnectionStatus
 
statusAsString
The status of the connection. Can be one of: READY, IN_PROGRESS, or FAILED.

If the service returns an enum value that is not available in the current SDK version, status will return ConnectionStatus.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from statusAsString().
Returns:
The status of the connection. Can be one of: READY, IN_PROGRESS, or FAILED.
See Also:
ConnectionStatus
 
statusReason
The reason for the connection status.
Returns:
The reason for the connection status.
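status(), statusAsString(), and statusReason() combine naturally into a readiness check; a small sketch (the helper and exception choice are illustrative):

```java
import software.amazon.awssdk.services.glue.model.Connection;
import software.amazon.awssdk.services.glue.model.ConnectionStatus;

public class ConnectionReadiness {
    /** Fails fast when the connection is not READY, surfacing the service-provided reason. */
    static void requireReady(Connection connection) {
        if (connection.status() != ConnectionStatus.READY) {
            String reason = connection.statusReason();
            throw new IllegalStateException("Connection " + connection.name() + " is "
                    + connection.statusAsString() + (reason == null ? "" : ": " + reason));
        }
    }
}
```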
 
lastConnectionValidationTime
A timestamp of the time this connection was last validated.
Returns:
A timestamp of the time this connection was last validated.
 
authenticationConfiguration
The authentication properties of the connection.
Returns:
The authentication properties of the connection.
 
connectionSchemaVersion
The version of the connection schema for this connection. Version 2 supports properties for specific compute environments.
Returns:
The version of the connection schema for this connection. Version 2 supports properties for specific compute environments.
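The schema version chiefly affects which accessors carry the parameters: version 1 connections use connectionProperties(), while version 2 adds the compute-environment-specific maps. A hedged sketch of such a dispatch, assuming the environment maps expose Map<String, String> (the choice of sparkProperties() for version 2 is purely illustrative):

```java
import java.util.Map;
import software.amazon.awssdk.services.glue.model.Connection;

public class SchemaVersionDispatch {
    static Map<String, String> effectiveProperties(Connection connection) {
        Integer version = connection.connectionSchemaVersion();
        if (version != null && version == 2 && connection.hasSparkProperties()) {
            return connection.sparkProperties(); // illustrative: pick the map your engine needs
        }
        return connection.connectionPropertiesAsStrings();
    }
}
```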
 
compatibleComputeEnvironments
A list of compute environments compatible with the connection.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasCompatibleComputeEnvironments() method.
Returns:
A list of compute environments compatible with the connection.
 
hasCompatibleComputeEnvironments
public final boolean hasCompatibleComputeEnvironments()
For responses, this returns true if the service returned a value for the CompatibleComputeEnvironments property. This DOES NOT check that the value is non-empty (for which you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
compatibleComputeEnvironmentsAsStrings
A list of compute environments compatible with the connection.

Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasCompatibleComputeEnvironments() method.
Returns:
A list of compute environments compatible with the connection.
 
toBuilder
Description copied from interface: ToCopyableBuilder
Take this object and create a builder that contains all of the current property values of this object.
Specified by:
toBuilder in interface ToCopyableBuilder<Connection.Builder, Connection>
Returns:
a builder for type T
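Since model objects are immutable, toBuilder() is the idiomatic way to produce a modified copy:

```java
import software.amazon.awssdk.services.glue.model.Connection;

public class ConnectionCopy {
    /** Returns a copy of the connection with only the description changed. */
    static Connection withDescription(Connection original, String description) {
        return original.toBuilder()
                .description(description)
                .build();
    }
}
```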
 
builder
public static Connection.Builder builder()
serializableBuilderClass
public static Class<? extends Connection.Builder> serializableBuilderClass()
hashCode
public final int hashCode()
equals
public final boolean equals(Object obj)
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.

If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
Specified by:
equalsBySdkFields in interface SdkPojo
Parameters:
obj - the object to be compared with
Returns:
true if the other object is equal to this object by SDK fields, false otherwise
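A short illustration of field-based equality on freshly built objects:

```java
import software.amazon.awssdk.services.glue.model.Connection;

public class SdkFieldEquality {
    public static void main(String[] args) {
        Connection a = Connection.builder().name("demo").description("example").build();
        Connection b = a.toBuilder().build();
        System.out.println(a.equalsBySdkFields(b)); // true: identical modeled fields
    }
}
```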
 
toString
public final String toString()
Returns a string representation of this object.
getValueForField
public final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
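getValueForField looks a field up by its service-model member name; a sketch, assuming the member name for name() is "Name":

```java
import java.util.Optional;
import software.amazon.awssdk.services.glue.model.Connection;

public class FieldLookup {
    static void printName(Connection connection) {
        // "Name" is assumed to be the modeled member name for the name() property.
        Optional<String> name = connection.getValueForField("Name", String.class);
        name.ifPresent(n -> System.out.println("Name = " + n));
    }
}
```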
sdkFields
Specified by:
sdkFields in interface SdkPojo
sdkFieldNameToField
Specified by:
sdkFieldNameToField in interface SdkPojo
Returns:
The mapping between the field name and its corresponding field.
 
 