ModerationLabel

Provides information about a single type of inappropriate, unwanted, or offensive content found in an image or video. Each type of moderated content has a label within a hierarchical taxonomy. For more information, see Content moderation in the Amazon Rekognition Developer Guide.

Types

class Builder
object Companion

Properties

val confidence: Float?

Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.

val name: String?

The label name for the type of unsafe content detected in the image.

val parentName: String?

The name for the parent label. Labels at the top level of the hierarchy have the parent label "".

val taxonomyLevel: Int?

The level of the moderation label with regard to its taxonomy, from 1 to 3.
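Taken together, `name`, `parentName`, and `taxonomyLevel` let a caller reassemble the label taxonomy client-side: top-level labels have `parentName` `""`, and deeper labels point back at their parent by name. A minimal sketch, using a local stand-in data class and hypothetical label values (not the SDK type or real detection output):

```kotlin
// Stand-in mirroring ModerationLabel's properties; illustrative values only.
data class Label(
    val name: String?,
    val confidence: Float?,
    val parentName: String?,
    val taxonomyLevel: Int?,
)

fun main() {
    val labels = listOf(
        Label("Violence", 98.2f, "", 1),
        Label("Graphic Violence", 97.1f, "Violence", 2),
        Label("Weapon Violence", 95.4f, "Violence", 2),
    )

    // Group labels by parent; top-level labels live under the "" key.
    val byParent = labels.groupBy { it.parentName ?: "" }
    for (top in byParent[""].orEmpty()) {
        println(top.name)
        val children = top.name?.let { byParent[it] }.orEmpty()
        children.forEach { println("  ${it.name} (${it.confidence})") }
    }
}
```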

Functions

inline fun copy(block: ModerationLabel.Builder.() -> Unit = {}): ModerationLabel
open operator override fun equals(other: Any?): Boolean
open override fun hashCode(): Int
open override fun toString(): String
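`Builder`, the `Companion`, and `copy` follow the builder pattern used across generated SDK types: instances are constructed via a DSL block, and `copy` derives a modified instance while preserving the original's values. A self-contained sketch of that pattern, assuming nothing beyond the signatures above (`ModLabel` is an illustrative stand-in, not the SDK source):

```kotlin
// Minimal stand-in demonstrating the Builder + copy pattern of generated SDK types.
class ModLabel private constructor(builder: Builder) {
    val name: String? = builder.name
    val confidence: Float? = builder.confidence

    class Builder {
        var name: String? = null
        var confidence: Float? = null
    }

    companion object {
        // Enables the DSL-style construction: ModLabel { name = "..." }
        operator fun invoke(block: Builder.() -> Unit): ModLabel =
            ModLabel(Builder().apply(block))
    }

    // copy seeds a new builder with current values, then applies overrides.
    fun copy(block: Builder.() -> Unit = {}): ModLabel =
        ModLabel(
            Builder().apply {
                name = this@ModLabel.name
                confidence = this@ModLabel.confidence
            }.apply(block)
        )

    override fun toString() = "ModLabel(name=$name, confidence=$confidence)"
}

fun main() {
    val label = ModLabel { name = "Violence"; confidence = 98.2f }
    val updated = label.copy { confidence = 99.0f }
    println(label)
    println(updated)
}
```

Because `ModLabel` is immutable, `copy` is the only way to "change" a field: the original instance is untouched and a new one is returned, which keeps shared label values safe across threads.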