public static interface IndexFacesResponse.Builder extends RekognitionResponse.Builder, SdkPojo, CopyableBuilder<IndexFacesResponse.Builder,IndexFacesResponse>
| Modifier and Type | Method and Description |
|---|---|
| `IndexFacesResponse.Builder` | `faceModelVersion(String faceModelVersion)`: The version number of the face detection model that's associated with the input collection (`CollectionId`). |
| `IndexFacesResponse.Builder` | `faceRecords(Collection<FaceRecord> faceRecords)`: An array of faces detected and added to the collection. |
| `IndexFacesResponse.Builder` | `faceRecords(Consumer<FaceRecord.Builder>... faceRecords)`: An array of faces detected and added to the collection. |
| `IndexFacesResponse.Builder` | `faceRecords(FaceRecord... faceRecords)`: An array of faces detected and added to the collection. |
| `IndexFacesResponse.Builder` | `orientationCorrection(OrientationCorrection orientationCorrection)`: If your collection is associated with a face detection model that's later than version 3.0, the value of `OrientationCorrection` is always null and no orientation information is returned. |
| `IndexFacesResponse.Builder` | `orientationCorrection(String orientationCorrection)`: If your collection is associated with a face detection model that's later than version 3.0, the value of `OrientationCorrection` is always null and no orientation information is returned. |
| `IndexFacesResponse.Builder` | `unindexedFaces(Collection<UnindexedFace> unindexedFaces)`: An array of faces that were detected in the image but weren't indexed. |
| `IndexFacesResponse.Builder` | `unindexedFaces(Consumer<UnindexedFace.Builder>... unindexedFaces)`: An array of faces that were detected in the image but weren't indexed. |
| `IndexFacesResponse.Builder` | `unindexedFaces(UnindexedFace... unindexedFaces)`: An array of faces that were detected in the image but weren't indexed. |
Methods inherited from interface `RekognitionResponse.Builder`: `build`, `responseMetadata` (getter and setter overloads)

Methods inherited from interface `SdkResponse.Builder`: `sdkHttpResponse` (getter and setter overloads)

Methods inherited from interface `CopyableBuilder`: `copy`

Methods inherited from interface `SdkBuilder`: `applyMutation`, `build`
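For context before the detail sections below: in the AWS SDK for Java 2.x you rarely build a response yourself (the client does that), but the builder is useful for stubbing responses in unit tests. A minimal sketch, assuming the Rekognition SDK module is on the classpath; the face ID and values are illustrative:

```java
import software.amazon.awssdk.services.rekognition.model.Face;
import software.amazon.awssdk.services.rekognition.model.FaceRecord;
import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;
import software.amazon.awssdk.services.rekognition.model.OrientationCorrection;

public class IndexFacesResponseStub {
    public static void main(String[] args) {
        // Build a stub response, e.g. for a test that mocks RekognitionClient.indexFaces(...).
        IndexFacesResponse response = IndexFacesResponse.builder()
                .faceModelVersion("3.0")
                .orientationCorrection(OrientationCorrection.ROTATE_0)
                .faceRecords(FaceRecord.builder()
                        .face(Face.builder().faceId("example-face-id").build())
                        .build())
                .build();

        System.out.println(response.faceModelVersion());   // 3.0
        System.out.println(response.faceRecords().size()); // 1
    }
}
```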
faceRecords

IndexFacesResponse.Builder faceRecords(Collection<FaceRecord> faceRecords)

An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

Parameters:
faceRecords - An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

faceRecords

IndexFacesResponse.Builder faceRecords(FaceRecord... faceRecords)
An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

Parameters:
faceRecords - An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

faceRecords

IndexFacesResponse.Builder faceRecords(Consumer<FaceRecord.Builder>... faceRecords)
An array of faces detected and added to the collection. For more information, see Searching Faces in a Collection in the Amazon Rekognition Developer Guide.

This is a convenience overload that creates instances of FaceRecord.Builder, avoiding the need to create them manually via FaceRecord.builder(). When each Consumer completes, FaceRecord.Builder.build() is called immediately and the results are passed to faceRecords(Collection<FaceRecord>).

Parameters:
faceRecords - a consumer that will call methods on FaceRecord.Builder

See Also: faceRecords(Collection<FaceRecord>)
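The Consumer overload lets callers drop the explicit `FaceRecord.builder()`/`build()` calls. A sketch under the same classpath assumption; the ID and confidence value are illustrative:

```java
import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;

public class FaceRecordsConsumerExample {
    public static void main(String[] args) {
        // Each lambda receives a fresh FaceRecord.Builder; build() is called for us
        // when the lambda returns, and the result is added to the faceRecords list.
        IndexFacesResponse response = IndexFacesResponse.builder()
                .faceRecords(r -> r.face(f -> f.faceId("example-id").confidence(99.0f)))
                .build();

        System.out.println(response.faceRecords().get(0).face().faceId()); // example-id
    }
}
```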
orientationCorrection

IndexFacesResponse.Builder orientationCorrection(String orientationCorrection)

If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

- If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction: the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of OrientationCorrection is null.
- If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

Parameters:
orientationCorrection - If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned. Otherwise, the version-3.0-and-earlier behavior described above applies.

See Also: OrientationCorrection
orientationCorrection

IndexFacesResponse.Builder orientationCorrection(OrientationCorrection orientationCorrection)

If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned.

If your collection is associated with a face detection model that's version 3.0 or earlier, the following applies:

- If the input image is in .jpeg format, it might contain exchangeable image file format (Exif) metadata that includes the image's orientation. Amazon Rekognition uses this orientation information to perform image correction: the bounding box coordinates are translated to represent object locations after the orientation information in the Exif metadata is used to correct the image orientation. Images in .png format don't contain Exif metadata. The value of OrientationCorrection is null.
- If the image doesn't contain orientation information in its Exif metadata, Amazon Rekognition returns an estimated orientation (ROTATE_0, ROTATE_90, ROTATE_180, ROTATE_270). Amazon Rekognition doesn't perform image correction in this case. The bounding box coordinates aren't translated and represent the object locations before the image is rotated.

Bounding box information is returned in the FaceRecords array. You can get the version of the face detection model by calling DescribeCollection.

Parameters:
orientationCorrection - If your collection is associated with a face detection model that's later than version 3.0, the value of OrientationCorrection is always null and no orientation information is returned. Otherwise, the version-3.0-and-earlier behavior described above applies.

See Also: OrientationCorrection
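The two overloads differ only in argument type: the String form accepts the raw service value, while the enum form is type-safe. A sketch, assuming the generated model's usual behavior that the enum-returning getter normalizes both representations:

```java
import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;
import software.amazon.awssdk.services.rekognition.model.OrientationCorrection;

public class OrientationCorrectionOverloads {
    public static void main(String[] args) {
        IndexFacesResponse fromEnum = IndexFacesResponse.builder()
                .orientationCorrection(OrientationCorrection.ROTATE_90)
                .build();
        IndexFacesResponse fromString = IndexFacesResponse.builder()
                .orientationCorrection("ROTATE_90")
                .build();

        // Both readers resolve to the same enum constant.
        System.out.println(fromEnum.orientationCorrection() == fromString.orientationCorrection());
    }
}
```

Prefer the enum overload when the value is known at compile time; the String overload is mainly for passing through values received from the service.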
faceModelVersion

IndexFacesResponse.Builder faceModelVersion(String faceModelVersion)

The version number of the face detection model that's associated with the input collection (CollectionId).

Parameters:
faceModelVersion - The version number of the face detection model that's associated with the input collection (CollectionId).

unindexedFaces

IndexFacesResponse.Builder unindexedFaces(Collection<UnindexedFace> unindexedFaces)
An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

Parameters:
unindexedFaces - An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

unindexedFaces

IndexFacesResponse.Builder unindexedFaces(UnindexedFace... unindexedFaces)
An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

Parameters:
unindexedFaces - An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

unindexedFaces

IndexFacesResponse.Builder unindexedFaces(Consumer<UnindexedFace.Builder>... unindexedFaces)
An array of faces that were detected in the image but weren't indexed. They weren't indexed because the quality filter identified them as low quality, or the MaxFaces request parameter filtered them out. To use the quality filter, you specify the QualityFilter request parameter.

This is a convenience overload that creates instances of UnindexedFace.Builder, avoiding the need to create them manually via UnindexedFace.builder(). When each Consumer completes, UnindexedFace.Builder.build() is called immediately and the results are passed to unindexedFaces(Collection<UnindexedFace>).

Parameters:
unindexedFaces - a consumer that will call methods on UnindexedFace.Builder

See Also: unindexedFaces(Collection<UnindexedFace>)
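Putting the unindexedFaces accessors together: a stub showing how a caller might inspect why faces were skipped. The reason value used here (Reason.EXCEEDS_MAX_FACES) and the stub data are illustrative:

```java
import software.amazon.awssdk.services.rekognition.model.IndexFacesResponse;
import software.amazon.awssdk.services.rekognition.model.Reason;
import software.amazon.awssdk.services.rekognition.model.UnindexedFace;

public class UnindexedFacesExample {
    public static void main(String[] args) {
        // Consumer overload: each lambda receives a fresh UnindexedFace.Builder.
        IndexFacesResponse response = IndexFacesResponse.builder()
                .unindexedFaces(u -> u.reasons(Reason.EXCEEDS_MAX_FACES))
                .build();

        // Each UnindexedFace carries the reasons it was not indexed.
        for (UnindexedFace face : response.unindexedFaces()) {
            System.out.println(face.reasons());
        }
    }
}
```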
Copyright © 2017 Amazon Web Services, Inc. All Rights Reserved.