Interface WebCrawlerConfiguration.Builder

  • Method Details

    • crawlerLimits

      WebCrawlerConfiguration.Builder crawlerLimits(WebCrawlerLimits crawlerLimits)

      The configuration of crawl limits for the web URLs.

      Parameters:
      crawlerLimits - The configuration of crawl limits for the web URLs.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • crawlerLimits

      default WebCrawlerConfiguration.Builder crawlerLimits(Consumer<WebCrawlerLimits.Builder> crawlerLimits)

      The configuration of crawl limits for the web URLs.

      This is a convenience method that creates an instance of the WebCrawlerLimits.Builder avoiding the need to create one manually via WebCrawlerLimits.builder().

      When the Consumer completes, SdkBuilder.build() is called immediately and its result is passed to crawlerLimits(WebCrawlerLimits).

      Parameters:
      crawlerLimits - a consumer that will call methods on WebCrawlerLimits.Builder
      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
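      For illustration, a minimal sketch of both crawlerLimits styles. It assumes the Bedrock Agent model package (software.amazon.awssdk.services.bedrockagent.model) and a rateLimit setter on WebCrawlerLimits.Builder (maximum pages crawled per minute); adjust to the setters your SDK version actually exposes.

        import software.amazon.awssdk.services.bedrockagent.model.WebCrawlerConfiguration;
        import software.amazon.awssdk.services.bedrockagent.model.WebCrawlerLimits;

        // Explicit style: build WebCrawlerLimits yourself, then pass it in.
        WebCrawlerLimits limits = WebCrawlerLimits.builder()
                .rateLimit(50)                      // assumed setter: pages crawled per minute
                .build();
        WebCrawlerConfiguration explicitStyle = WebCrawlerConfiguration.builder()
                .crawlerLimits(limits)
                .build();

        // Consumer style: the SDK creates the WebCrawlerLimits.Builder, calls build()
        // when the lambda returns, and passes the result to crawlerLimits(WebCrawlerLimits).
        WebCrawlerConfiguration consumerStyle = WebCrawlerConfiguration.builder()
                .crawlerLimits(l -> l.rateLimit(50))
                .build();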
    • exclusionFilters

      WebCrawlerConfiguration.Builder exclusionFilters(Collection<String> exclusionFilters)

      A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      exclusionFilters - A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • exclusionFilters

      WebCrawlerConfiguration.Builder exclusionFilters(String... exclusionFilters)

      A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      exclusionFilters - A list of one or more exclusion regular expression patterns to exclude certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
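      As a sketch, both exclusionFilters overloads accept the same regular expression strings; the patterns below are illustrative only.

        import java.util.List;

        // Varargs overload
        WebCrawlerConfiguration.builder()
                .exclusionFilters(".*\\.pdf$", ".*/archive/.*")          // example patterns
                .build();

        // Collection overload
        WebCrawlerConfiguration.builder()
                .exclusionFilters(List.of(".*\\.pdf$", ".*/archive/.*"))
                .build();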
    • inclusionFilters

      WebCrawlerConfiguration.Builder inclusionFilters(Collection<String> inclusionFilters)

      A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      inclusionFilters - A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • inclusionFilters

      WebCrawlerConfiguration.Builder inclusionFilters(String... inclusionFilters)

      A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.

      Parameters:
      inclusionFilters - A list of one or more inclusion regular expression patterns to include certain URLs. If you specify an inclusion and exclusion filter/pattern and both match a URL, the exclusion filter takes precedence and the web content of the URL isn’t crawled.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
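      A short sketch combining the two filter types; the patterns are illustrative only. A URL that matches both lists is not crawled, because the exclusion filter takes precedence.

        WebCrawlerConfiguration filtered = WebCrawlerConfiguration.builder()
                // Crawl documentation pages...
                .inclusionFilters(".*/docs/.*")
                // ...but skip their printable versions, even though they also match the inclusion pattern.
                .exclusionFilters(".*/docs/.*print=true.*")
                .build();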
    • scope

      WebCrawlerConfiguration.Builder scope(String scope)

      The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Parameters:
      scope - The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
    • scope

      WebCrawlerConfiguration.Builder scope(WebCrawlerScope scope)

      The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Parameters:
      scope - The scope of what is crawled for your URLs.

      You can choose to crawl only web pages that belong to the same host or primary domain. For example, only web pages that contain the seed URL "https://docs.aws.amazon.com/bedrock/latest/userguide/" and no other domains. You can also choose to include subdomains in addition to the host or primary domain. For example, web pages that contain "aws.amazon.com" can also include the subdomain "docs.aws.amazon.com".

      Returns:
      Returns a reference to this object so that method calls can be chained together.
      See Also:
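      A sketch of both scope overloads, assuming the WebCrawlerScope enum constants HOST_ONLY and SUBDOMAINS from the same model package.

        import software.amazon.awssdk.services.bedrockagent.model.WebCrawlerScope;

        // Enum overload: crawl the host or primary domain plus its subdomains.
        WebCrawlerConfiguration.builder()
                .scope(WebCrawlerScope.SUBDOMAINS)
                .build();

        // String overload: restrict the crawl to the seed URL's host only.
        WebCrawlerConfiguration.builder()
                .scope("HOST_ONLY")
                .build();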
    • userAgent

      WebCrawlerConfiguration.Builder userAgent(String userAgent)

      The user agent suffix for your web crawler.

      Parameters:
      userAgent - The user agent suffix for your web crawler.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
    • userAgentHeader

      WebCrawlerConfiguration.Builder userAgentHeader(String userAgentHeader)

      A string used for identifying the crawler or bot when it accesses a web server. The user agent header value consists of bedrockbot, a UUID, and a user agent suffix for your crawler (if one is provided). By default, it is set to bedrockbot_UUID. You can optionally append a custom suffix to bedrockbot_UUID to allowlist a specific user agent permitted to access your source URLs.

      Parameters:
      userAgentHeader - A string used for identifying the crawler or bot when it accesses a web server. The user agent header value consists of bedrockbot, a UUID, and a user agent suffix for your crawler (if one is provided). By default, it is set to bedrockbot_UUID. You can optionally append a custom suffix to bedrockbot_UUID to allowlist a specific user agent permitted to access your source URLs.
      Returns:
      Returns a reference to this object so that method calls can be chained together.
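      Putting the pieces together, a hedged end-to-end sketch; every value below is a placeholder, and rateLimit is an assumed WebCrawlerLimits setter.

        WebCrawlerConfiguration config = WebCrawlerConfiguration.builder()
                .crawlerLimits(l -> l.rateLimit(100))                   // assumed setter
                .inclusionFilters(".*docs\\.aws\\.amazon\\.com.*")
                .exclusionFilters(".*/feedback/.*")
                .scope(WebCrawlerScope.SUBDOMAINS)
                .userAgent("example-suffix")                            // placeholder suffix
                .userAgentHeader("bedrockbot_UUID-example-suffix")      // placeholder; default is bedrockbot_UUID
                .build();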