Interface WebCrawlerConfiguration.Builder

All Superinterfaces:
Buildable, CopyableBuilder<WebCrawlerConfiguration.Builder,WebCrawlerConfiguration>, SdkBuilder<WebCrawlerConfiguration.Builder,WebCrawlerConfiguration>, SdkPojo
Enclosing class:
WebCrawlerConfiguration

public static interface WebCrawlerConfiguration.Builder extends SdkPojo, CopyableBuilder<WebCrawlerConfiguration.Builder,WebCrawlerConfiguration>
  • Method Details

    • crawlerLimits

      WebCrawlerConfiguration.Builder crawlerLimits(WebCrawlerLimits crawlerLimits)

      The configuration of crawl limits for the web URLs.

      Parameters:
      crawlerLimits - The configuration of crawl limits for the web URLs.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • crawlerLimits

      default WebCrawlerConfiguration.Builder crawlerLimits(Consumer<WebCrawlerLimits.Builder> crawlerLimits)

      The configuration of crawl limits for the web URLs.

      This is a convenience method that creates an instance of WebCrawlerLimits.Builder, avoiding the need to create one manually via WebCrawlerLimits.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to crawlerLimits(WebCrawlerLimits).

      Parameters:
      crawlerLimits - a consumer that will call methods on WebCrawlerLimits.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
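The Consumer overload above follows a delegation pattern common to SDK builders. The self-contained sketch below uses made-up MiniLimits and MiniConfigBuilder classes (not SDK types, and maxPages is a hypothetical property) to show how such a convenience method creates the nested builder, hands it to the consumer, then builds and delegates to the plain overload:

```java
import java.util.function.Consumer;

// Illustrative stand-ins for an SDK model class and its enclosing builder.
class ConsumerBuilderDemo {
    static class MiniLimits {
        final int maxPages;
        MiniLimits(int maxPages) { this.maxPages = maxPages; }
        static Builder builder() { return new Builder(); }
        static class Builder {
            private int maxPages;
            Builder maxPages(int maxPages) { this.maxPages = maxPages; return this; }
            MiniLimits build() { return new MiniLimits(maxPages); }
        }
    }

    static class MiniConfigBuilder {
        private MiniLimits limits;
        // Plain overload: the caller supplies an already-built value.
        MiniConfigBuilder crawlerLimits(MiniLimits limits) { this.limits = limits; return this; }
        // Convenience overload: create the nested builder, let the consumer
        // configure it, call build() immediately, delegate to the plain overload.
        MiniConfigBuilder crawlerLimits(Consumer<MiniLimits.Builder> consumer) {
            MiniLimits.Builder b = MiniLimits.builder();
            consumer.accept(b);
            return crawlerLimits(b.build());
        }
        MiniLimits limits() { return limits; }
    }

    public static void main(String[] args) {
        MiniConfigBuilder cfg = new MiniConfigBuilder()
                .crawlerLimits(l -> l.maxPages(100));
        System.out.println(cfg.limits().maxPages); // 100
    }
}
```

The lambda form lets callers skip the explicit builder() / build() calls while still ending up on the same setter.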
    • exclusionFilters

      WebCrawlerConfiguration.Builder exclusionFilters(Collection<String> exclusionFilters)

      A list of one or more regular expression patterns that exclude certain URLs from being crawled. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      exclusionFilters - A list of one or more regular expression patterns that exclude certain URLs from being crawled. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • exclusionFilters

      WebCrawlerConfiguration.Builder exclusionFilters(String... exclusionFilters)

      A list of one or more regular expression patterns that exclude certain URLs from being crawled. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      exclusionFilters - A list of one or more regular expression patterns that exclude certain URLs from being crawled. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inclusionFilters

      WebCrawlerConfiguration.Builder inclusionFilters(Collection<String> inclusionFilters)

      A list of one or more regular expression patterns that include certain URLs for crawling. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      inclusionFilters - A list of one or more regular expression patterns that include certain URLs for crawling. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inclusionFilters

      WebCrawlerConfiguration.Builder inclusionFilters(String... inclusionFilters)

      A list of one or more regular expression patterns that include certain URLs for crawling. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      inclusionFilters - A list of one or more regular expression patterns that include certain URLs for crawling. If both an inclusion filter and an exclusion filter match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
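The precedence rule stated above (an exclusion match always wins) can be sketched in plain Java. The shouldCrawl helper below is illustrative only, not part of the SDK, and it assumes that when no inclusion filters are set, every non-excluded URL is eligible:

```java
import java.util.List;
import java.util.regex.Pattern;

// Illustrative helper (not an SDK class): decides whether a URL would be
// crawled given inclusion and exclusion regex patterns, applying the
// documented rule that an exclusion match takes precedence.
class FilterPrecedence {
    static boolean shouldCrawl(String url, List<String> inclusionFilters, List<String> exclusionFilters) {
        // Exclusion takes precedence: any exclusion match blocks the URL.
        for (String pattern : exclusionFilters) {
            if (Pattern.compile(pattern).matcher(url).find()) {
                return false;
            }
        }
        // Assumption: with no inclusion filters, everything not excluded passes.
        if (inclusionFilters.isEmpty()) {
            return true;
        }
        // Otherwise the URL must match at least one inclusion pattern.
        for (String pattern : inclusionFilters) {
            if (Pattern.compile(pattern).matcher(url).find()) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> include = List.of(".*\\.aws\\.amazon\\.com/.*");
        List<String> exclude = List.of(".*\\.pdf$");
        System.out.println(shouldCrawl("https://docs.aws.amazon.com/bedrock/guide.html", include, exclude)); // true
        // Both filters match this URL, so the exclusion filter wins:
        System.out.println(shouldCrawl("https://docs.aws.amazon.com/bedrock/guide.pdf", include, exclude));  // false
    }
}
```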
    • scope

      WebCrawlerConfiguration.Builder scope(String scope)

      The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Parameters:
      scope - The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
    • scope

      WebCrawlerConfiguration.Builder scope(WebScopeType scope)

      The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Parameters:
      scope - The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
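As a rough illustration of the two scope options described above, the self-contained sketch below (not SDK logic; the method names are made up) contrasts host-only matching with host-plus-subdomain matching using java.net.URI:

```java
import java.net.URI;

// Illustrative sketch (not SDK logic): what "same host" versus
// "host plus subdomains" scoping means for a candidate URL.
class ScopeDemo {
    static boolean inHostOnlyScope(String seedUrl, String candidateUrl) {
        // Host-only: the candidate's host must equal the seed's host exactly.
        return URI.create(candidateUrl).getHost().equals(URI.create(seedUrl).getHost());
    }

    static boolean inSubdomainScope(String seedUrl, String candidateUrl) {
        String seedHost = URI.create(seedUrl).getHost();
        String candidateHost = URI.create(candidateUrl).getHost();
        // Subdomains: exact match, or the candidate host ends with ".<seedHost>".
        return candidateHost.equals(seedHost) || candidateHost.endsWith("." + seedHost);
    }

    public static void main(String[] args) {
        String seed = "https://aws.amazon.com/";
        System.out.println(inHostOnlyScope(seed, "https://docs.aws.amazon.com/bedrock/"));  // false
        System.out.println(inSubdomainScope(seed, "https://docs.aws.amazon.com/bedrock/")); // true
    }
}
```

Under host-only scoping, "docs.aws.amazon.com" is out of scope for a seed on "aws.amazon.com"; under subdomain scoping it is in scope.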