Job posting spider template (job_posting)
Basic use
scrapy crawl job_posting -a url="https://books.toscrape.com"
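To store the extracted job postings, Scrapy's standard feed export options apply; for instance, the following sketch writes the output to a JSON Lines file (the file name is illustrative):

```shell
scrapy crawl job_posting \
    -a url="https://books.toscrape.com" \
    -O job_postings.jsonl
```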
Parameters
- pydantic model zyte_spider_templates.spiders.job_posting.JobPostingSpiderParams
- field crawl_strategy: JobPostingCrawlStrategy = JobPostingCrawlStrategy.navigation
Determines how input URLs and follow-up URLs are crawled. See the example invocations after this parameter list.
- field custom_attrs_method: CustomAttrsMethod = CustomAttrsMethod.generate
Which model to use for custom attribute extraction.
- field extract_from: ExtractFrom | None = None
Whether to perform extraction using a browser request (browserHtml) or an HTTP request (httpResponseBody).
- field geolocation: Geolocation | None = None
Country of the IP addresses to use.
- field max_requests: int | None = 100
The maximum number of Zyte API requests allowed for the crawl.
Requests with error responses that cannot be retried or exceed their retry limit also count here, but they incur no costs and do not increase the request count in Scrapy Cloud.
- field search_queries: List[str] [Optional]
A list of search queries, one per line, to submit using the search form found on each input URL. Only works for input URLs that support search, and may not work on every website. See the example invocations after this parameter list.
- field url: str = ''
Initial URL for the crawl. Enter the full URL, including http(s); you can copy and paste it from your browser. Example: https://toscrape.com/
- field urls: List[str] | None = None
Initial URLs for the crawl, separated by new lines. Enter the full URLs, including http(s); you can copy and paste them from your browser. Example: https://toscrape.com/
- field urls_file: str = ''
URL that points to a plain-text file with a list of URLs to crawl, e.g. https://example.com/url-list.txt. The linked file must contain one URL per line.
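The enum-valued parameters above are passed as plain strings with -a. The following is a sketch of typical invocations; the URLs are placeholders, and direct_item and httpResponseBody are assumed to be valid values of JobPostingCrawlStrategy and ExtractFrom (check the enum definitions in zyte_spider_templates for the authoritative lists):

```shell
# Treat each input URL as a job posting page itself, rather than
# navigating from it (assumes direct_item is a supported strategy).
scrapy crawl job_posting -a url="https://example.com/jobs/123" \
    -a crawl_strategy=direct_item

# Extract from raw HTTP responses and use German IP addresses
# (geolocation takes an ISO 3166-1 alpha-2 country code).
scrapy crawl job_posting -a url="https://example.com/jobs" \
    -a extract_from=httpResponseBody -a geolocation=DE

# Allow at most 1000 Zyte API requests for the whole crawl.
scrapy crawl job_posting -a url="https://example.com/jobs" \
    -a max_requests=1000
```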
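For the list-valued parameters (urls, search_queries), multiple values are given as a single newline-separated string, per the descriptions above; on a POSIX shell, $'...' quoting embeds the newline. A sketch with placeholder URLs:

```shell
# Two start URLs in a single argument, separated by a newline.
scrapy crawl job_posting \
    -a urls=$'https://jobs.example/listings\nhttps://careers.example/openings'

# Submit two search queries on the input URL's search form.
scrapy crawl job_posting -a url="https://jobs.example" \
    -a search_queries=$'software engineer\ndata analyst'

# Read start URLs from a remote plain-text file, one URL per line.
scrapy crawl job_posting -a urls_file="https://example.com/url-list.txt"
```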