Interface IndexFacesResponse.Builder

All Superinterfaces:
AwsResponse.Builder, Buildable, CopyableBuilder<IndexFacesResponse.Builder,IndexFacesResponse>, RekognitionResponse.Builder, SdkBuilder<IndexFacesResponse.Builder,IndexFacesResponse>, SdkPojo, SdkResponse.Builder
Enclosing class:
IndexFacesResponse

public static interface IndexFacesResponse.Builder extends RekognitionResponse.Builder, SdkPojo, CopyableBuilder<IndexFacesResponse.Builder,IndexFacesResponse>
  • Method Details

    • faceRecords

      IndexFacesResponse.Builder faceRecords(Collection<FaceRecord> faceRecords)

      An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

      Parameters:
      faceRecords - An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
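      Response builders like this one are normally populated by the SDK when it deserializes a service reply, but building an IndexFacesResponse by hand is useful when stubbing IndexFaces results in unit tests. A minimal sketch using the Collection-based setter; the Face field values below are illustrative placeholders, not real service output:

          import java.util.List;

          import software.amazon.awssdk.services.rekognition.model.Face;
          import software.amazon.awssdk.services.rekognition.model.FaceRecord;
          import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;

          public class StubIndexFacesResponse {
              public static void main(String[] args) {
                  // Build a FaceRecord by hand; faceId and confidence are placeholders.
                  FaceRecord record = FaceRecord.builder()
                          .face(Face.builder()
                                  .faceId("11111111-2222-3333-4444-555555555555")
                                  .confidence(99.5f)
                                  .build())
                          .build();

                  // Hand the collection to the builder and finish with build().
                  IndexFacesResponse response = IndexFacesResponse.builder()
                          .faceRecords(List.of(record))
                          .build();

                  System.out.println(response.faceRecords().size()); // prints 1
              }
          }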
    • faceRecords

      IndexFacesResponse.Builder faceRecords(FaceRecord... faceRecords)

      An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

      Parameters:
      faceRecords - An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • faceRecords

      IndexFacesResponse.Builder faceRecords(Consumer<FaceRecord.Builder>... faceRecords)

      An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

      This is a convenience method that creates an instance of the FaceRecord.Builder, avoiding the need to create one manually via FaceRecord.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to faceRecords(List<FaceRecord>).

      Parameters:
      faceRecords - a consumer that will call methods on FaceRecord.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      faceRecords(List<FaceRecord>)
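      The same stub written with the Consumer-based overload: the lambda receives a FaceRecord.Builder, and build() is invoked for you when it returns. The nested face(f -> ...) call relies on the analogous consumer overload on FaceRecord.Builder, which is assumed here from standard SDK for Java 2.x codegen. A sketch:

          import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;

          public class StubWithConsumers {
              public static void main(String[] args) {
                  IndexFacesResponse response = IndexFacesResponse.builder()
                          // Each lambda receives a FaceRecord.Builder; SdkBuilder.build() runs
                          // when the lambda returns. Values are placeholders.
                          .faceRecords(r -> r.face(f -> f.faceId("11111111-2222-3333-4444-555555555555")
                                                         .confidence(99.5f)))
                          .build();

                  System.out.println(response.faceRecords().size()); // prints 1
              }
          }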
    • orientationCorrection

      IndexFacesResponse.Builder orientationCorrection(String orientationCorrection)

      If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

      If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

      • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of OrientationCorrection is null.

      • If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

      Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

      Parameters:
      orientationCorrection - If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

      If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

      • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of OrientationCorrection is null.

      • If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

      Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      OrientationCorrection
    • orientationCorrection

      IndexFacesResponse.Builder orientationCorrection(OrientationCorrection orientationCorrection)

      If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

      If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

      • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of OrientationCorrection is null.

      • If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

      Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

      Parameters:
      orientationCorrection - If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

      If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

      • If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction - the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of OrientationCorrection is null.

      • If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

      Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      OrientationCorrection
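      When the value is known, the enum overload is the safer choice; the String overload mainly matters for raw values not yet modeled by the OrientationCorrection enum. A short sketch; the orientationCorrectionAsString() accessor follows standard SDK for Java 2.x codegen for enum-backed fields and is noted here as an assumption:

          import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;
          import software.amazon.awssdk.services.rekognition.model.OrientationCorrection;

          public class OrientationCorrectionExample {
              public static void main(String[] args) {
                  IndexFacesResponse response = IndexFacesResponse.builder()
                          .orientationCorrection(OrientationCorrection.ROTATE_90)
                          .build();

                  // Typed accessor returns the enum; the *AsString variant preserves the raw value.
                  System.out.println(response.orientationCorrection());         // ROTATE_90
                  System.out.println(response.orientationCorrectionAsString()); // "ROTATE_90"
              }
          }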
    • faceModelVersion

      IndexFacesResponse.Builder faceModelVersion(String faceModelVersion)

      The version number of the face detection model that's associated with the input collection (CollectionId).

      Parameters:
      faceModelVersion - The version number of the face detection model that's associated with the input collection (CollectionId).
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • unindexedFaces

      IndexFacesResponse.Builder unindexedFaces(Collection<UnindexedFace> unindexedFaces)

      An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

      Parameters:
      unindexedFaces - An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • unindexedFaces

      IndexFacesResponse.Builder unindexedFaces(UnindexedFace... unindexedFaces)

      An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

      Parameters:
      unindexedFaces - An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • unindexedFaces

      IndexFacesResponse.Builder unindexedFaces(Consumer<UnindexedFace.Builder>... unindexedFaces)

      An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

      This is a convenience method that creates an instance of the UnindexedFace.Builder, avoiding the need to create one manually via UnindexedFace.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to unindexedFaces(List<UnindexedFace>).

      Parameters:
      unindexedFaces - a consumer that will call methods on UnindexedFace.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
      unindexedFaces(List<UnindexedFace>)
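      As with faceRecords, the Consumer overload lets you populate UnindexedFace entries inline. The reasons(...) setter and the Reason enum used below come from the UnindexedFace model and are assumptions for this sketch rather than anything documented above:

          import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;
          import software.amazon.awssdk.services.rekognition.model.Reason;
          import software.amazon.awssdk.services.rekognition.model.UnindexedFace;

          public class UnindexedFacesExample {
              public static void main(String[] args) {
                  IndexFacesResponse response = IndexFacesResponse.builder()
                          // Each lambda receives an UnindexedFace.Builder; SdkBuilder.build()
                          // runs when it returns. reasons(...) is assumed from the model.
                          .unindexedFaces(u -> u.reasons(Reason.LOW_CONFIDENCE))
                          .build();

                  for (UnindexedFace face : response.unindexedFaces()) {
                      System.out.println(face.reasons()); // [LOW_CONFIDENCE]
                  }
              }
          }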