detectToxicContent
inline suspend fun ComprehendClient.detectToxicContent(crossinline block: DetectToxicContentRequest.Builder.() -> Unit): DetectToxicContentResponse
Performs toxicity analysis on the list of text strings that you provide as input. The API response contains a results list that matches the size of the input list. For more information about toxicity detection, see Toxicity detection in the Amazon Comprehend Developer Guide.
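A minimal usage sketch follows. It assumes the request builder exposes languageCode and textSegments (a list of TextSegment) and that the response exposes resultList with per-segment toxicity and labels, mirroring the service API shape (LanguageCode, TextSegments, ResultList); verify the exact property and enum names against the generated model classes before relying on them.

import aws.sdk.kotlin.services.comprehend.ComprehendClient
import aws.sdk.kotlin.services.comprehend.model.LanguageCode
import aws.sdk.kotlin.services.comprehend.model.TextSegment

suspend fun main() {
    // Region is an assumption; credentials come from the default provider chain.
    ComprehendClient { region = "us-east-1" }.use { comprehend ->
        val response = comprehend.detectToxicContent {
            // Assumed builder properties, named after the service API fields.
            languageCode = LanguageCode.En
            textSegments = listOf(
                TextSegment { text = "You are a wonderful person." },
                TextSegment { text = "I strongly disagree with that decision." },
            )
        }

        // The results list matches the size and order of the input list.
        response.resultList?.forEachIndexed { index, result ->
            println("Segment $index overall toxicity: ${result.toxicity}")
            result.labels?.forEach { label ->
                println("  ${label.name}: ${label.score}")
            }
        }
    }
}

Because the extension takes a builder lambda, the request object never has to be constructed explicitly; the block configures a DetectToxicContentRequest.Builder and the call suspends until the service responds.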