Class Av1Settings
- All Implemented Interfaces:
- Serializable, SdkPojo, ToCopyableBuilder<Av1Settings.Builder, Av1Settings>
Nested Class Summary
- Av1Settings.Builder
Method Summary
- final Av1AdaptiveQuantization adaptiveQuantization() - Specify the strength of any adaptive quantization filters that you enable.
- final String adaptiveQuantizationAsString() - Specify the strength of any adaptive quantization filters that you enable.
- final Av1BitDepth bitDepth() - Specify the Bit depth.
- final String bitDepthAsString() - Specify the Bit depth.
- static Av1Settings.Builder builder()
- final boolean equals(Object obj)
- final boolean equalsBySdkFields(Object obj) - Indicates whether some other object is "equal to" this one by SDK fields.
- final Av1FilmGrainSynthesis filmGrainSynthesis() - Film grain synthesis replaces film grain present in your content with similar quality synthesized AV1 film grain.
- final String filmGrainSynthesisAsString() - Film grain synthesis replaces film grain present in your content with similar quality synthesized AV1 film grain.
- final Av1FramerateControl framerateControl() - Use the Framerate setting to specify the frame rate for this output.
- final String framerateControlAsString() - Use the Framerate setting to specify the frame rate for this output.
- final Av1FramerateConversionAlgorithm framerateConversionAlgorithm() - Choose the method that you want MediaConvert to use when increasing or decreasing your video's frame rate.
- final String framerateConversionAlgorithmAsString() - Choose the method that you want MediaConvert to use when increasing or decreasing your video's frame rate.
- final Integer framerateDenominator() - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction.
- final Integer framerateNumerator() - When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction.
- final <T> Optional<T> getValueForField(String fieldName, Class<T> clazz)
- final Double gopSize() - Specify the GOP length (keyframe interval) in frames.
- final int hashCode()
- final boolean hasPerFrameMetrics() - For responses, this returns true if the service returned a value for the PerFrameMetrics property.
- final Integer maxBitrate() - Maximum bitrate in bits/second.
- final Integer numberBFramesBetweenReferenceFrames() - Specify the number of B-frames, in the range of 0-15.
- final List<FrameMetricType> perFrameMetrics() - Optionally choose one or more per frame metric reports to generate along with your output.
- final List<String> perFrameMetricsAsStrings() - Optionally choose one or more per frame metric reports to generate along with your output.
- final Av1QvbrSettings qvbrSettings() - Settings for quality-defined variable bitrate encoding with the AV1 codec.
- final Av1RateControlMode rateControlMode() - With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR).
- final String rateControlModeAsString() - With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR).
- static Class<? extends Av1Settings.Builder> serializableBuilderClass()
- final Integer slices() - Specify the number of slices per picture.
- final Av1SpatialAdaptiveQuantization spatialAdaptiveQuantization() - Keep the default value, Enabled, to adjust quantization within each frame based on spatial variation of content complexity.
- final String spatialAdaptiveQuantizationAsString() - Keep the default value, Enabled, to adjust quantization within each frame based on spatial variation of content complexity.
- Av1Settings.Builder toBuilder() - Take this object and create a builder that contains all of the current property values of this object.
- final String toString() - Returns a string representation of this object.
Methods inherited from interface software.amazon.awssdk.utils.builder.ToCopyableBuilder: copy
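For orientation, here is a minimal sketch of building an Av1Settings value with the generated fluent builder. The property values are illustrative only, not recommendations, and enum constant names should be confirmed against your SDK version.

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1RateControlMode;
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;

public class Av1SettingsSketch {
    public static void main(String[] args) {
        // Build an immutable Av1Settings instance; any property you don't set stays null.
        Av1Settings settings = Av1Settings.builder()
                .rateControlMode(Av1RateControlMode.QVBR)  // AV1 supports only QVBR
                .maxBitrate(5_000_000)                     // bits/second, required for QVBR
                .gopSize(121.0)                            // e.g. 1 + ((7 + 1) * 15) frames
                .numberBFramesBetweenReferenceFrames(7)
                .slices(4)
                .build();

        System.out.println(settings);
    }
}
```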
- 
Method Details
- 
adaptiveQuantization
Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization.
If the service returns an enum value that is not available in the current SDK version, adaptiveQuantization will return Av1AdaptiveQuantization.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from adaptiveQuantizationAsString().
- Returns:
- Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization.
- See Also: Av1AdaptiveQuantization
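The enum/raw-string pairing above is the SDK's standard pattern for service enums. A brief sketch, using only the accessors documented on this page, of falling back to the raw string when the modeled enum does not recognize a value returned by a newer service API:

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1AdaptiveQuantization;
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;

final class AdaptiveQuantizationCheck {
    static void describe(Av1Settings settings) {
        Av1AdaptiveQuantization aq = settings.adaptiveQuantization();
        if (aq == Av1AdaptiveQuantization.UNKNOWN_TO_SDK_VERSION) {
            // The current SDK doesn't model this value; fall back to the raw string.
            System.out.println("Unrecognized adaptive quantization: "
                    + settings.adaptiveQuantizationAsString());
        } else {
            System.out.println("Adaptive quantization: " + aq);
        }
    }
}
```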
 
- 
adaptiveQuantizationAsString
Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization.
If the service returns an enum value that is not available in the current SDK version, adaptiveQuantization will return Av1AdaptiveQuantization.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from adaptiveQuantizationAsString().
- Returns:
- Specify the strength of any adaptive quantization filters that you enable. The value that you choose here applies to Spatial adaptive quantization.
- See Also: Av1AdaptiveQuantization
 
- 
bitDepth
Specify the Bit depth. You can choose 8-bit or 10-bit.
If the service returns an enum value that is not available in the current SDK version, bitDepth will return Av1BitDepth.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from bitDepthAsString().
- Returns:
- Specify the Bit depth. You can choose 8-bit or 10-bit.
- See Also: Av1BitDepth
 
- 
bitDepthAsString
Specify the Bit depth. You can choose 8-bit or 10-bit.
If the service returns an enum value that is not available in the current SDK version, bitDepth will return Av1BitDepth.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from bitDepthAsString().
- Returns:
- Specify the Bit depth. You can choose 8-bit or 10-bit.
- See Also: Av1BitDepth
 
- 
filmGrainSynthesis
Film grain synthesis replaces film grain present in your content with similar quality synthesized AV1 film grain. We recommend that you choose Enabled to reduce the bandwidth of your QVBR quality level 5, 6, 7, or 8 outputs. For QVBR quality level 9 or 10 outputs we recommend that you keep the default value, Disabled. When you include Film grain synthesis, you cannot include the Noise reducer preprocessor.
If the service returns an enum value that is not available in the current SDK version, filmGrainSynthesis will return Av1FilmGrainSynthesis.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from filmGrainSynthesisAsString().
- Returns:
- Film grain synthesis replaces film grain present in your content with similar quality synthesized AV1 film grain. We recommend that you choose Enabled to reduce the bandwidth of your QVBR quality level 5, 6, 7, or 8 outputs. For QVBR quality level 9 or 10 outputs we recommend that you keep the default value, Disabled. When you include Film grain synthesis, you cannot include the Noise reducer preprocessor.
- See Also: Av1FilmGrainSynthesis
 
- 
filmGrainSynthesisAsString
Film grain synthesis replaces film grain present in your content with similar quality synthesized AV1 film grain. We recommend that you choose Enabled to reduce the bandwidth of your QVBR quality level 5, 6, 7, or 8 outputs. For QVBR quality level 9 or 10 outputs we recommend that you keep the default value, Disabled. When you include Film grain synthesis, you cannot include the Noise reducer preprocessor.
If the service returns an enum value that is not available in the current SDK version, filmGrainSynthesis will return Av1FilmGrainSynthesis.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from filmGrainSynthesisAsString().
- Returns:
- Film grain synthesis replaces film grain present in your content with similar quality synthesized AV1 film grain. We recommend that you choose Enabled to reduce the bandwidth of your QVBR quality level 5, 6, 7, or 8 outputs. For QVBR quality level 9 or 10 outputs we recommend that you keep the default value, Disabled. When you include Film grain synthesis, you cannot include the Noise reducer preprocessor.
- See Also: Av1FilmGrainSynthesis
 
- 
framerateControl
Use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The frame rates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction.
If the service returns an enum value that is not available in the current SDK version, framerateControl will return Av1FramerateControl.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from framerateControlAsString().
- Returns:
- Use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction.
- See Also: Av1FramerateControl
 
- 
framerateControlAsString
Use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The frame rates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction.
If the service returns an enum value that is not available in the current SDK version, framerateControl will return Av1FramerateControl.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from framerateControlAsString().
- Returns:
- Use the Framerate setting to specify the frame rate for this output. If you want to keep the same frame rate as the input video, choose Follow source. If you want to do frame rate conversion, choose a frame rate from the dropdown list or choose Custom. The framerates shown in the dropdown list are decimal approximations of fractions. If you choose Custom, specify your frame rate as a fraction.
- See Also: Av1FramerateControl
 
- 
framerateConversionAlgorithm
Choose the method that you want MediaConvert to use when increasing or decreasing your video's frame rate. For numerically simple conversions, such as 60 fps to 30 fps: We recommend that you keep the default value, Drop duplicate. For numerically complex conversions, to avoid stutter: Choose Interpolate. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence: Choose FrameFormer to do motion-compensated interpolation. FrameFormer uses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost. When you choose FrameFormer, your input video resolution must be at least 128x96. To create an output with the same number of frames as your input: Choose Maintain frame count. When you do, MediaConvert will not drop, interpolate, add, or otherwise change the frame count from your input to your output. Note that since the frame count is maintained, the duration of your output will become shorter at higher frame rates and longer at lower frame rates.
If the service returns an enum value that is not available in the current SDK version, framerateConversionAlgorithm will return Av1FramerateConversionAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from framerateConversionAlgorithmAsString().
- Returns:
- Choose the method that you want MediaConvert to use when increasing or decreasing your video's frame rate. For numerically simple conversions, such as 60 fps to 30 fps: We recommend that you keep the default value, Drop duplicate. For numerically complex conversions, to avoid stutter: Choose Interpolate. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence: Choose FrameFormer to do motion-compensated interpolation. FrameFormer uses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost. When you choose FrameFormer, your input video resolution must be at least 128x96. To create an output with the same number of frames as your input: Choose Maintain frame count. When you do, MediaConvert will not drop, interpolate, add, or otherwise change the frame count from your input to your output. Note that since the frame count is maintained, the duration of your output will become shorter at higher frame rates and longer at lower frame rates.
- See Also: Av1FramerateConversionAlgorithm
 
- 
framerateConversionAlgorithmAsString
Choose the method that you want MediaConvert to use when increasing or decreasing your video's frame rate. For numerically simple conversions, such as 60 fps to 30 fps: We recommend that you keep the default value, Drop duplicate. For numerically complex conversions, to avoid stutter: Choose Interpolate. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence: Choose FrameFormer to do motion-compensated interpolation. FrameFormer uses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost. When you choose FrameFormer, your input video resolution must be at least 128x96. To create an output with the same number of frames as your input: Choose Maintain frame count. When you do, MediaConvert will not drop, interpolate, add, or otherwise change the frame count from your input to your output. Note that since the frame count is maintained, the duration of your output will become shorter at higher frame rates and longer at lower frame rates.
If the service returns an enum value that is not available in the current SDK version, framerateConversionAlgorithm will return Av1FramerateConversionAlgorithm.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from framerateConversionAlgorithmAsString().
- Returns:
- Choose the method that you want MediaConvert to use when increasing or decreasing your video's frame rate. For numerically simple conversions, such as 60 fps to 30 fps: We recommend that you keep the default value, Drop duplicate. For numerically complex conversions, to avoid stutter: Choose Interpolate. This results in a smooth picture, but might introduce undesirable video artifacts. For complex frame rate conversions, especially if your source video has already been converted from its original cadence: Choose FrameFormer to do motion-compensated interpolation. FrameFormer uses the best conversion method frame by frame. Note that using FrameFormer increases the transcoding time and incurs a significant add-on cost. When you choose FrameFormer, your input video resolution must be at least 128x96. To create an output with the same number of frames as your input: Choose Maintain frame count. When you do, MediaConvert will not drop, interpolate, add, or otherwise change the frame count from your input to your output. Note that since the frame count is maintained, the duration of your output will become shorter at higher frame rates and longer at lower frame rates.
- See Also: Av1FramerateConversionAlgorithm
 
- 
framerateDenominator
When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
- Returns:
- When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateDenominator to specify the denominator of this fraction. In this example, use 1001 for the value of FramerateDenominator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
 
- 
framerateNumerator
When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
- Returns:
- When you use the API for transcode jobs that use frame rate conversion, specify the frame rate as a fraction. For example, 24000 / 1001 = 23.976 fps. Use FramerateNumerator to specify the numerator of this fraction. In this example, use 24000 for the value of FramerateNumerator. When you use the console for transcode jobs that use frame rate conversion, provide the value as a decimal number for Framerate. In this example, specify 23.976.
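Following the 24000 / 1001 example above, a short sketch of specifying a 23.976 fps output through the API. The Av1FramerateControl.SPECIFIED constant name is an assumption based on the SDK's usual naming for custom frame rates.

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1FramerateControl;
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;

final class FrameRateFractionSketch {
    static Av1Settings withFrameRate23976(Av1Settings.Builder builder) {
        return builder
                .framerateControl(Av1FramerateControl.SPECIFIED) // use the explicit fraction below
                .framerateNumerator(24000)                       // 24000 / 1001 = 23.976 fps
                .framerateDenominator(1001)
                .build();
    }
}
```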
 
- 
gopSize
Specify the GOP length (keyframe interval) in frames. With AV1, MediaConvert doesn't support GOP length in seconds. This value must be greater than zero and preferably equal to 1 + ((numberBFrames + 1) * x), where x is an integer value.
- Returns:
- Specify the GOP length (keyframe interval) in frames. With AV1, MediaConvert doesn't support GOP length in seconds. This value must be greater than zero and preferably equal to 1 + ((numberBFrames + 1) * x), where x is an integer value.
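The constraint above ties GOP length to the B-frame count. A small sketch that derives a GOP length satisfying 1 + ((numberBFrames + 1) * x); the helper name is illustrative only.

```java
final class GopSizeHelper {
    // Returns a GOP length of the preferred form 1 + ((numberBFrames + 1) * x),
    // where x is a positive integer multiplier you choose.
    static double preferredGopSize(int numberBFrames, int x) {
        return 1 + ((numberBFrames + 1) * x);
    }

    public static void main(String[] args) {
        // With 7 B-frames and x = 15: 1 + (8 * 15) = 121 frames per GOP.
        System.out.println(preferredGopSize(7, 15));
    }
}
```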
 
- 
maxBitrate
Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.
- Returns:
- Maximum bitrate in bits/second. For example, enter five megabits per second as 5000000. Required when Rate control mode is QVBR.
 
- 
numberBFramesBetweenReferenceFrames
Specify the number of B-frames, in the range of 0-15. For AV1 encoding, we recommend using 7 or 15. Choose a larger number for a lower bitrate and smaller file size; choose a smaller number for better video quality.
- Returns:
- Specify the number of B-frames, in the range of 0-15. For AV1 encoding, we recommend using 7 or 15. Choose a larger number for a lower bitrate and smaller file size; choose a smaller number for better video quality.
 
- 
perFrameMetrics
Optionally choose one or more per frame metric reports to generate along with your output. You can use these metrics to analyze your video output according to one or more commonly used image quality metrics. You can specify per frame metrics for output groups or for individual outputs. When you do, MediaConvert writes a CSV (Comma-Separated Values) file to your S3 output destination, named after the output name and metric type. For example: videofile_PSNR.csv. Jobs that generate per frame metrics will take longer to complete, depending on the resolution and complexity of your output. For example, some 4K jobs might take up to twice as long to complete. Note that when analyzing the video quality of your output, or when comparing the video quality of multiple different outputs, we generally also recommend a detailed visual review in a controlled environment. You can choose from the following per frame metrics:
- PSNR: Peak Signal-to-Noise Ratio
- SSIM: Structural Similarity Index Measure
- MS_SSIM: Multi-Scale Similarity Index Measure
- PSNR_HVS: Peak Signal-to-Noise Ratio, Human Visual System
- VMAF: Video Multi-Method Assessment Fusion
- QVBR: Quality-Defined Variable Bitrate. This option is only available when your output uses the QVBR rate control mode.
- SHOT_CHANGE: Shot Changes
Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasPerFrameMetrics() method.
- Returns:
- Optionally choose one or more per frame metric reports to generate along with your output. You can use these metrics to analyze your video output according to one or more commonly used image quality metrics. You can specify per frame metrics for output groups or for individual outputs. When you do, MediaConvert writes a CSV (Comma-Separated Values) file to your S3 output destination, named after the output name and metric type. For example: videofile_PSNR.csv. Jobs that generate per frame metrics will take longer to complete, depending on the resolution and complexity of your output. For example, some 4K jobs might take up to twice as long to complete. Note that when analyzing the video quality of your output, or when comparing the video quality of multiple different outputs, we generally also recommend a detailed visual review in a controlled environment. You can choose from the following per frame metrics: * PSNR: Peak Signal-to-Noise Ratio * SSIM: Structural Similarity Index Measure * MS_SSIM: Multi-Scale Similarity Index Measure * PSNR_HVS: Peak Signal-to-Noise Ratio, Human Visual System * VMAF: Video Multi-Method Assessment Fusion * QVBR: Quality-Defined Variable Bitrate. This option is only available when your output uses the QVBR rate control mode. * SHOT_CHANGE: Shot Changes
 
- 
hasPerFrameMetrics
public final boolean hasPerFrameMetrics()
For responses, this returns true if the service returned a value for the PerFrameMetrics property. This DOES NOT check that the value is non-empty (for which, you should check the isEmpty() method on the property). This is useful because the SDK will never return a null collection or map, but you may need to differentiate between the service returning nothing (or null) and the service returning an empty collection or map. For requests, this returns true if a value for the property was specified in the request builder, and false if a value was not specified.
- 
perFrameMetricsAsStrings
Optionally choose one or more per frame metric reports to generate along with your output. You can use these metrics to analyze your video output according to one or more commonly used image quality metrics. You can specify per frame metrics for output groups or for individual outputs. When you do, MediaConvert writes a CSV (Comma-Separated Values) file to your S3 output destination, named after the output name and metric type. For example: videofile_PSNR.csv. Jobs that generate per frame metrics will take longer to complete, depending on the resolution and complexity of your output. For example, some 4K jobs might take up to twice as long to complete. Note that when analyzing the video quality of your output, or when comparing the video quality of multiple different outputs, we generally also recommend a detailed visual review in a controlled environment. You can choose from the following per frame metrics:
- PSNR: Peak Signal-to-Noise Ratio
- SSIM: Structural Similarity Index Measure
- MS_SSIM: Multi-Scale Similarity Index Measure
- PSNR_HVS: Peak Signal-to-Noise Ratio, Human Visual System
- VMAF: Video Multi-Method Assessment Fusion
- QVBR: Quality-Defined Variable Bitrate. This option is only available when your output uses the QVBR rate control mode.
- SHOT_CHANGE: Shot Changes
Attempts to modify the collection returned by this method will result in an UnsupportedOperationException. This method will never return null. If you would like to know whether the service returned this field (so that you can differentiate between null and empty), you can use the hasPerFrameMetrics() method.
- Returns:
- Optionally choose one or more per frame metric reports to generate along with your output. You can use these metrics to analyze your video output according to one or more commonly used image quality metrics. You can specify per frame metrics for output groups or for individual outputs. When you do, MediaConvert writes a CSV (Comma-Separated Values) file to your S3 output destination, named after the output name and metric type. For example: videofile_PSNR.csv. Jobs that generate per frame metrics will take longer to complete, depending on the resolution and complexity of your output. For example, some 4K jobs might take up to twice as long to complete. Note that when analyzing the video quality of your output, or when comparing the video quality of multiple different outputs, we generally also recommend a detailed visual review in a controlled environment. You can choose from the following per frame metrics: * PSNR: Peak Signal-to-Noise Ratio * SSIM: Structural Similarity Index Measure * MS_SSIM: Multi-Scale Similarity Index Measure * PSNR_HVS: Peak Signal-to-Noise Ratio, Human Visual System * VMAF: Video Multi-Method Assessment Fusion * QVBR: Quality-Defined Variable Bitrate. This option is only available when your output uses the QVBR rate control mode. * SHOT_CHANGE: Shot Changes
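A hedged sketch of requesting per frame metrics on the builder and reading them back defensively with hasPerFrameMetrics(). The varargs builder overload and the FrameMetricType constant names follow the SDK's usual code generation and should be confirmed against your SDK version.

```java
import java.util.List;

import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;
import software.amazon.awssdk.services.mediaconvert.model.FrameMetricType;

final class PerFrameMetricsSketch {
    static Av1Settings requestMetrics(Av1Settings.Builder builder) {
        // Ask MediaConvert to write PSNR and VMAF CSV reports alongside the output.
        return builder
                .perFrameMetrics(FrameMetricType.PSNR, FrameMetricType.VMAF)
                .build();
    }

    static void printMetrics(Av1Settings settings) {
        // Distinguish "not returned by the service" from "returned but empty".
        if (settings.hasPerFrameMetrics()) {
            List<FrameMetricType> metrics = settings.perFrameMetrics(); // unmodifiable list
            System.out.println("Per frame metrics: " + metrics);
        } else {
            System.out.println("PerFrameMetrics was not set.");
        }
    }
}
```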
 
- 
qvbrSettings
Settings for quality-defined variable bitrate encoding with the AV1 codec. Use these settings only when you set QVBR for Rate control mode.
- Returns:
- Settings for quality-defined variable bitrate encoding with the AV1 codec. Use these settings only when you set QVBR for Rate control mode.
 
- 
rateControlMode
With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.
If the service returns an enum value that is not available in the current SDK version, rateControlMode will return Av1RateControlMode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from rateControlModeAsString().
- Returns:
- With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.
- See Also: Av1RateControlMode
 
- 
rateControlModeAsString
With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.
If the service returns an enum value that is not available in the current SDK version, rateControlMode will return Av1RateControlMode.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from rateControlModeAsString().
- Returns:
- With AV1 outputs, for rate control mode, MediaConvert supports only quality-defined variable bitrate (QVBR). You can't use CBR or VBR.
- See Also: Av1RateControlMode
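Putting rateControlMode, maxBitrate, and qvbrSettings together, a sketch of a QVBR configuration. The Av1QvbrSettings builder property shown (qvbrQualityLevel) is an assumption based on the SDK's usual naming for QVBR settings.

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1QvbrSettings;
import software.amazon.awssdk.services.mediaconvert.model.Av1RateControlMode;
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;

final class QvbrSketch {
    static Av1Settings qvbrAtFiveMbps() {
        return Av1Settings.builder()
                .rateControlMode(Av1RateControlMode.QVBR)  // the only rate control mode for AV1
                .maxBitrate(5_000_000)                     // required when rate control mode is QVBR
                .qvbrSettings(Av1QvbrSettings.builder()
                        .qvbrQualityLevel(7)               // assumed property name; target quality level
                        .build())
                .build();
    }
}
```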
 
- 
slices
Specify the number of slices per picture. This value must be 1, 2, 4, 8, 16, or 32. For progressive pictures, this value must be less than or equal to the number of macroblock rows. For interlaced pictures, this value must be less than or equal to half the number of macroblock rows.
- Returns:
- Specify the number of slices per picture. This value must be 1, 2, 4, 8, 16, or 32. For progressive pictures, this value must be less than or equal to the number of macroblock rows. For interlaced pictures, this value must be less than or equal to half the number of macroblock rows.
 
- 
spatialAdaptiveQuantization
Keep the default value, Enabled, to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
If the service returns an enum value that is not available in the current SDK version, spatialAdaptiveQuantization will return Av1SpatialAdaptiveQuantization.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from spatialAdaptiveQuantizationAsString().
- Returns:
- Keep the default value, Enabled, to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
- See Also: Av1SpatialAdaptiveQuantization
 
- 
spatialAdaptiveQuantizationAsString
Keep the default value, Enabled, to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
If the service returns an enum value that is not available in the current SDK version, spatialAdaptiveQuantization will return Av1SpatialAdaptiveQuantization.UNKNOWN_TO_SDK_VERSION. The raw value returned by the service is available from spatialAdaptiveQuantizationAsString().
- Returns:
- Keep the default value, Enabled, to adjust quantization within each frame based on spatial variation of content complexity. When you enable this feature, the encoder uses fewer bits on areas that can sustain more distortion with no noticeable visual degradation and uses more bits on areas where any small distortion will be noticeable. For example, complex textured blocks are encoded with fewer bits and smooth textured blocks are encoded with more bits. Enabling this feature will almost always improve your video quality. Note, though, that this feature doesn't take into account where the viewer's attention is likely to be. If viewers are likely to be focusing their attention on a part of the screen with a lot of complex texture, you might choose to disable this feature. Related setting: When you enable spatial adaptive quantization, set the value for Adaptive quantization depending on your content. For homogeneous content, such as cartoons and video games, set it to Low. For content with a wider variety of textures, set it to High or Higher.
- See Also: Av1SpatialAdaptiveQuantization
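As the text above notes, Spatial adaptive quantization is tuned through the Adaptive quantization strength. A brief sketch pairing the two settings; the enum constant names mirror the service values and are worth verifying for your SDK version.

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1AdaptiveQuantization;
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;
import software.amazon.awssdk.services.mediaconvert.model.Av1SpatialAdaptiveQuantization;

final class SpatialAqSketch {
    static Av1Settings.Builder withSpatialAq(Av1Settings.Builder builder) {
        return builder
                // Keep spatial adaptive quantization enabled (the default) ...
                .spatialAdaptiveQuantization(Av1SpatialAdaptiveQuantization.ENABLED)
                // ... and match the filter strength to the content; HIGHER suits
                // sources with a wide variety of textures.
                .adaptiveQuantization(Av1AdaptiveQuantization.HIGHER);
    }
}
```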
 
- 
toBuilder
Description copied from interface: ToCopyableBuilder
Take this object and create a builder that contains all of the current property values of this object.
- Specified by:
- toBuilder in interface ToCopyableBuilder<Av1Settings.Builder, Av1Settings>
- Returns:
- a builder for type T
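Because Av1Settings is immutable, changes go through toBuilder(). A minimal copy-and-modify sketch:

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;

final class CopyModifySketch {
    static Av1Settings raiseMaxBitrate(Av1Settings original) {
        // toBuilder() copies every current property value into a new builder;
        // the original instance is left unchanged.
        return original.toBuilder()
                .maxBitrate(8_000_000)
                .build();
    }
}
```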
 
- 
builder
- 
serializableBuilderClass
- 
hashCode
- 
equals
- 
equalsBySdkFields
Description copied from interface: SdkPojo
Indicates whether some other object is "equal to" this one by SDK fields. An SDK field is a modeled, non-inherited field in an SdkPojo class, and is generated based on a service model.
If an SdkPojo class does not have any inherited fields, equalsBySdkFields and equals are essentially the same.
- Specified by:
- equalsBySdkFields in interface SdkPojo
- Parameters:
- obj- the object to be compared with
- Returns:
- true if the other object is equal to this object by SDK fields, false otherwise.
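A short sketch comparing two instances by SDK fields; for Av1Settings, which has no inherited SDK fields, equalsBySdkFields and equals agree.

```java
import software.amazon.awssdk.services.mediaconvert.model.Av1RateControlMode;
import software.amazon.awssdk.services.mediaconvert.model.Av1Settings;

final class SdkFieldEqualitySketch {
    public static void main(String[] args) {
        Av1Settings a = Av1Settings.builder()
                .rateControlMode(Av1RateControlMode.QVBR)
                .maxBitrate(5_000_000)
                .build();
        Av1Settings b = a.toBuilder().build(); // same modeled field values

        System.out.println(a.equals(b));            // true
        System.out.println(a.equalsBySdkFields(b)); // true
    }
}
```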
 
- 
toString
- 
getValueForField
- 
sdkFields
- 
sdkFieldNameToField
- Specified by:
- sdkFieldNameToField in interface SdkPojo
- Returns:
- The mapping between the field name and its corresponding field.
 
 