Requests and Responses¶
Scrapy uses Request and Response objects for crawling web sites.
Typically, Request objects are generated in the spiders and pass across the system until they reach the Downloader, which executes the request and returns a Response object which travels back to the spider that issued the request.
Both Request and Response classes have subclasses which add functionality not required in the base classes. These are described below in Request subclasses and Response subclasses.
Request objects¶
Passing additional data to callback functions¶
The callback of a request is a function that will be called when the response of that request is downloaded. The callback function will be called with the downloaded Response object as its first argument.
Example:
def parse_page1(self, response):
    return scrapy.Request("http://www.example.com/some_page.html",
                          callback=self.parse_page2)

def parse_page2(self, response):
    # this would log http://www.example.com/some_page.html
    self.logger.info("Visited %s", response.url)
In some cases you may be interested in passing arguments to those callback functions so you can receive the arguments later, in the second callback. The following example shows how to achieve this by using the Request.cb_kwargs attribute:
def parse(self, response):
    request = scrapy.Request('http://www.example.com/index.html',
                             callback=self.parse_page2,
                             cb_kwargs=dict(main_url=response.url))
    request.cb_kwargs['foo'] = 'bar'  # add more arguments for the callback
    yield request

def parse_page2(self, response, main_url, foo):
    yield dict(
        main_url=main_url,
        other_url=response.url,
        foo=foo,
    )
Caution
Request.cb_kwargs was introduced in version 1.7. Prior to that, using Request.meta was recommended for passing information around callbacks. After 1.7, Request.cb_kwargs became the preferred way for handling user information, leaving Request.meta for communication with components like middlewares and extensions.
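As an illustration of that split, here is a minimal sketch assuming a hypothetical custom downloader middleware and meta key (RenderMetadataMiddleware and needs_render are not part of Scrapy): cb_kwargs carries data meant only for the spider callback, while meta carries data read by components.
import scrapy


# Hypothetical downloader middleware reading a custom meta key.
class RenderMetadataMiddleware:
    def process_request(self, request, spider):
        if request.meta.get("needs_render"):  # component-facing data travels in meta
            spider.logger.debug("Would hand %s to a rendering service", request.url)
        return None  # continue normal downloading


# In the spider: cb_kwargs for callback-only data, meta for the middleware above.
def parse(self, response):
    yield scrapy.Request(
        "http://www.example.com/details.html",
        callback=self.parse_details,
        cb_kwargs={"listing_url": response.url},  # only the callback receives this
        meta={"needs_render": True},              # only components read this
    )

def parse_details(self, response, listing_url):
    self.logger.info("Details for listing %s: %s", listing_url, response.url)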
Using errbacks to catch exceptions in request processing¶
The errback of a request is a function that will be called when an exception is raised while processing it. It receives a Failure as its first parameter and can be used to track connection establishment timeouts, DNS errors, etc.
Here’s an example spider logging all errors and catching some specific errors if needed:
import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError


class ErrbackSpider(scrapy.Spider):
    name = "errback_example"
    start_urls = [
        "http://www.httpbin.org/",              # HTTP 200 expected
        "http://www.httpbin.org/status/404",    # Not found error
        "http://www.httpbin.org/status/500",    # server issue
        "http://www.httpbin.org:12345/",        # non-responding host, timeout expected
        "https://example.invalid/",             # DNS error expected
    ]

    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.Request(u, callback=self.parse_httpbin,
                                 errback=self.errback_httpbin,
                                 dont_filter=True)

    def parse_httpbin(self, response):
        self.logger.info('Got successful response from {}'.format(response.url))
        # do something useful here...

    def errback_httpbin(self, failure):
        # log all failures
        self.logger.error(repr(failure))

        # in case you want to do something special for some errors,
        # you may need the failure's type:

        if failure.check(HttpError):
            # these exceptions come from HttpError spider middleware
            # you can get the non-200 response
            response = failure.value.response
            self.logger.error('HttpError on %s', response.url)

        elif failure.check(DNSLookupError):
            # this is the original request
            request = failure.request
            self.logger.error('DNSLookupError on %s', request.url)

        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.error('TimeoutError on %s', request.url)
Accessing additional data in errback functions¶
In case of a failure to process the request, you may be interested in accessing arguments to the callback functions so you can process further based on the arguments in the errback. The following example shows how to achieve this by using Failure.request.cb_kwargs:
def parse(self, response):
    request = scrapy.Request('http://www.example.com/index.html',
                             callback=self.parse_page2,
                             errback=self.errback_page2,
                             cb_kwargs=dict(main_url=response.url))
    yield request

def parse_page2(self, response, main_url):
    pass

def errback_page2(self, failure):
    yield dict(
        main_url=failure.request.cb_kwargs['main_url'],
    )
Request fingerprints¶
There are some aspects of scraping, such as filtering out duplicate requests (see DUPEFILTER_CLASS) or caching responses (see HTTPCACHE_POLICY), where you need the ability to generate a short, unique identifier from a Request object: a request fingerprint.
You often do not need to worry about request fingerprints; the default request fingerprinter works for most projects.
However, there is no universal way to generate a unique identifier from a request, because different situations require comparing requests differently. For example, sometimes you may need to compare URLs case-insensitively, include URL fragments, exclude certain URL query parameters, include some or all headers, etc.
To change how request fingerprints are built for your requests, use the REQUEST_FINGERPRINTER_CLASS setting.
REQUEST_FINGERPRINTER_CLASS¶
New in version 2.7.
Default: scrapy.utils.request.RequestFingerprinter
A request fingerprinter class or its import path.
REQUEST_FINGERPRINTER_IMPLEMENTATION¶
New in version 2.7.
Default: '2.6'
Determines which request fingerprinting algorithm is used by the default request fingerprinter class (see REQUEST_FINGERPRINTER_CLASS).
Possible values are:
'2.6' (default)
This implementation uses the same request fingerprinting algorithm as Scrapy 2.6 and earlier versions. Even though this is the default value for backward compatibility reasons, it is a deprecated value.
'2.7'
This implementation was introduced in Scrapy 2.7 to fix an issue of the previous implementation. New projects should use this value. The startproject command sets this value in the generated settings.py file.
If you are using the default value ('2.6') for this setting, and you are using Scrapy components where changing the request fingerprinting algorithm would cause undesired results, you need to carefully decide when to change the value of this setting, or switch the REQUEST_FINGERPRINTER_CLASS setting to a custom request fingerprinter class that implements the 2.6 request fingerprinting algorithm and does not log the deprecation warning (Writing your own request fingerprinter includes an example implementation of such a class).
Scenarios where changing the request fingerprinting algorithm may cause undesired results include, for example, using the HTTP cache middleware (see HttpCacheMiddleware). Changing the request fingerprinting algorithm would invalidate the current cache, requiring you to download all requests again.
Otherwise, set REQUEST_FINGERPRINTER_IMPLEMENTATION to '2.7' in your settings to switch now to the request fingerprinting implementation that will be the only one available in a future version of Scrapy, and to remove the deprecation warning triggered by using the default value ('2.6').
Writing your own request fingerprinter¶
A request fingerprinter is a class that must implement the following method:
- fingerprint(self, request)¶
Return a bytes object that uniquely identifies request.
See also Request fingerprint restrictions.
- Parameters:
request (scrapy.http.Request) – request to fingerprint
Additionally, it may also implement the following methods:
- classmethod from_crawler(cls, crawler)
If present, this class method is called to create a request fingerprinter instance from a Crawler object. It must return a new instance of the request fingerprinter.
crawler provides access to all Scrapy core components like settings and signals; it is a way for the request fingerprinter to access them and hook its functionality into Scrapy.
- Parameters:
crawler (Crawler object) – crawler that uses this request fingerprinter
- classmethod from_settings(cls, settings)¶
If present, and from_crawler is not defined, this class method is called to create a request fingerprinter instance from a Settings object. It must return a new instance of the request fingerprinter.
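For illustration, here is a minimal sketch of a fingerprinter whose from_crawler reads its configuration from the crawler settings; the HeaderAwareRequestFingerprinter class and the FINGERPRINT_HEADERS setting name are hypothetical, not built into Scrapy:
from scrapy.utils.request import fingerprint


class HeaderAwareRequestFingerprinter:

    @classmethod
    def from_crawler(cls, crawler):
        # Read a custom (hypothetical) setting through the crawler.
        return cls(headers=crawler.settings.getlist("FINGERPRINT_HEADERS"))

    def __init__(self, headers=None):
        self.headers = headers or None

    def fingerprint(self, request):
        # Delegate to Scrapy's helper, optionally including the configured headers.
        return fingerprint(request, include_headers=self.headers)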
The fingerprint() method of the default request fingerprinter, scrapy.utils.request.RequestFingerprinter, uses scrapy.utils.request.fingerprint() with its default parameters. For some common use cases you can use scrapy.utils.request.fingerprint() as well in your fingerprint() method implementation.
For example, to take the value of a request header named X-ID into account:
# my_project/settings.py
REQUEST_FINGERPRINTER_CLASS = 'my_project.utils.RequestFingerprinter'

# my_project/utils.py
from scrapy.utils.request import fingerprint


class RequestFingerprinter:

    def fingerprint(self, request):
        return fingerprint(request, include_headers=['X-ID'])
You can also write your own fingerprinting logic from scratch. However, if you do not use scrapy.utils.request.fingerprint(), make sure you use WeakKeyDictionary to cache request fingerprints:
- Caching saves CPU by ensuring that fingerprints are calculated only once per request, and not once per Scrapy component that needs the fingerprint of a request.
- Using WeakKeyDictionary saves memory by ensuring that request objects do not stay in memory forever just because you have references to them in your cache dictionary.
For example, to take into account only the URL of a request, without any prior URL canonicalization and without taking the request method or body into account:
from hashlib import sha1
from weakref import WeakKeyDictionary

from scrapy.utils.python import to_bytes


class RequestFingerprinter:

    cache = WeakKeyDictionary()

    def fingerprint(self, request):
        if request not in self.cache:
            fp = sha1()
            fp.update(to_bytes(request.url))
            self.cache[request] = fp.digest()
        return self.cache[request]
If you need to be able to override the request fingerprinting for arbitrary requests from your spider callbacks, you may implement a request fingerprinter that reads fingerprints from request.meta when available, and then falls back to scrapy.utils.request.fingerprint(). For example:
from scrapy.utils.request import fingerprint


class RequestFingerprinter:

    def fingerprint(self, request):
        if 'fingerprint' in request.meta:
            return request.meta['fingerprint']
        return fingerprint(request)
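With such a fingerprinter enabled, a callback could then pin the fingerprint of a specific request through its meta; the URL and the byte value below are purely illustrative:
def parse(self, response):
    yield scrapy.Request(
        "http://www.example.com/some_page.html",
        meta={"fingerprint": b"custom-fingerprint"},  # used instead of the computed one
        callback=self.parse_page2,
    )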
If you need to reproduce the same fingerprinting algorithm as Scrapy 2.6 without using the deprecated '2.6' value of the REQUEST_FINGERPRINTER_IMPLEMENTATION setting, use the following request fingerprinter:
from hashlib import sha1
from weakref import WeakKeyDictionary

from scrapy.utils.python import to_bytes
from w3lib.url import canonicalize_url


class RequestFingerprinter:

    cache = WeakKeyDictionary()

    def fingerprint(self, request):
        if request not in self.cache:
            fp = sha1()
            fp.update(to_bytes(request.method))
            fp.update(to_bytes(canonicalize_url(request.url)))
            fp.update(request.body or b'')
            self.cache[request] = fp.digest()
        return self.cache[request]
Request fingerprint restrictions¶
Scrapy components that use request fingerprints may impose additional restrictions on the format of the fingerprints that your request fingerprinter generates.
The following built-in Scrapy components have such restrictions:
scrapy.extensions.httpcache.FilesystemCacheStorage (default value of HTTPCACHE_STORAGE)
Request fingerprints must be at least 1 byte long.
Path and filename length limits of the file system of HTTPCACHE_DIR also apply. Inside HTTPCACHE_DIR, the following directory structure is created:
- Spider.name
- first byte of a request fingerprint as hexadecimal
- fingerprint as hexadecimal
- filenames up to 16 characters long
For example, if a request fingerprint is made of 20 bytes (default), HTTPCACHE_DIR is '/home/user/project/.scrapy/httpcache', and the name of your spider is 'my_spider', your file system must support a file path like:
/home/user/project/.scrapy/httpcache/my_spider/01/0123456789abcdef0123456789abcdef01234567/response_headers
scrapy.extensions.httpcache.DbmCacheStorage
The underlying DBM implementation must support keys as long as twice the number of bytes of a request fingerprint, plus 5. For example, if a request fingerprint is made of 20 bytes (default), 45-character-long keys must be supported.
Request.meta special keys¶
The Request.meta attribute can contain any arbitrary data, but there are some special keys recognized by Scrapy and its built-in extensions. Those are:
- dont_merge_cookies
- ftp_password (see FTP_PASSWORD for more info)
- ftp_user (see FTP_USER for more info)
bindaddress¶
The outgoing IP address to use for performing the request.
download_timeout¶
The amount of time (in secs) that the downloader will wait before timing out. See also: DOWNLOAD_TIMEOUT.
download_latency¶
The amount of time spent to fetch the response, measured from when the request was started, i.e. from when the HTTP message was sent over the network. This meta key only becomes available when the response has been downloaded. While most other meta keys are used to control Scrapy behavior, this one is supposed to be read-only.
download_fail_on_dataloss¶
Whether or not to fail on broken responses. See: DOWNLOAD_FAIL_ON_DATALOSS.
max_retry_times¶
This meta key is used to set the maximum number of retries per request. When set, the max_retry_times meta key takes precedence over the RETRY_TIMES setting.
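As a sketch of how these keys are typically combined, a spider can set per-request values through meta and read download_latency once the response arrives (the URL and the values below are illustrative):
def start_requests(self):
    yield scrapy.Request(
        "http://www.example.com/slow-page.html",
        callback=self.parse_timed,
        meta={
            "download_timeout": 10,  # give up after 10 seconds
            "max_retry_times": 1,    # overrides RETRY_TIMES for this request
        },
    )

def parse_timed(self, response):
    # download_latency is filled in by Scrapy once the response has been downloaded
    self.logger.info("Fetched %s in %.2f seconds",
                     response.url, response.meta["download_latency"])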
Stopping the download of a Response¶
Raising a StopDownload exception from a handler for the bytes_received or headers_received signals will stop the download of a given response. See the following example:
import scrapy


class StopSpider(scrapy.Spider):
    name = "stop"
    start_urls = ["https://docs.scrapy.org/en/latest/"]

    @classmethod
    def from_crawler(cls, crawler):
        spider = super().from_crawler(crawler)
        crawler.signals.connect(spider.on_bytes_received, signal=scrapy.signals.bytes_received)
        return spider

    def parse(self, response):
        # 'last_chars' show that the full response was not downloaded
        yield {"len": len(response.text), "last_chars": response.text[-40:]}

    def on_bytes_received(self, data, request, spider):
        raise scrapy.exceptions.StopDownload(fail=False)
which produces the following output:
2020-05-19 17:26:12 [scrapy.core.engine] INFO: Spider opened
2020-05-19 17:26:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-05-19 17:26:13 [scrapy.core.downloader.handlers.http11] DEBUG: Download stopped for <GET https://docs.scrapy.org/en/latest/> from signal handler StopSpider.on_bytes_received
2020-05-19 17:26:13 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://docs.scrapy.org/en/latest/> (referer: None) ['download_stopped']
2020-05-19 17:26:13 [scrapy.core.scraper] DEBUG: Scraped from <200 https://docs.scrapy.org/en/latest/>
{'len': 279, 'last_chars': 'dth, initial-scale=1.0">\n \n <title>Scr'}
2020-05-19 17:26:13 [scrapy.core.engine] INFO: Closing spider (finished)
By default, resulting responses are handled by their corresponding errbacks. To call their callback instead, like in this example, pass fail=False to the StopDownload exception.
Request subclasses¶
Here is the list of built-in Request subclasses. You can also subclass it to implement your own custom functionality.
FormRequest objects¶
The FormRequest class extends the base Request with functionality for dealing with HTML forms. It uses lxml.html forms to pre-populate form fields with form data from Response objects.
- class scrapy.http.request.form.FormRequest¶
- class scrapy.http.FormRequest¶
- class scrapy.FormRequest(url[, formdata, ...])¶
The FormRequest class adds a new keyword parameter to the __init__ method. The remaining arguments are the same as for the Request class and are not documented here.
- Parameters:
formdata (dict or collections.abc.Iterable) – is a dictionary (or iterable of (key, value) tuples) containing HTML Form data which will be url-encoded and assigned to the body of the request.
The FormRequest objects support the following class method in addition to the standard Request methods:
- classmethod FormRequest.from_response(response[, formname=None, formid=None, formnumber=0, formdata=None, formxpath=None, formcss=None, clickdata=None, dont_click=False, ...])¶
Returns a new FormRequest object with its form field values pre-populated with those found in the HTML <form> element contained in the given response. For an example see Using FormRequest.from_response() to simulate a user login.
The policy is to automatically simulate a click, by default, on any form control that looks clickable, like a <input type="submit">. Even though this is quite convenient, and often the desired behaviour, sometimes it can cause problems which could be hard to debug. For example, when working with forms that are filled and/or submitted using javascript, the default from_response() behaviour may not be the most appropriate. To disable this behaviour you can set the dont_click argument to True. Also, if you want to change the control clicked (instead of disabling it) you can also use the clickdata argument.
Caution
Using this method with select elements which have leading or trailing whitespace in the option values will not work due to a bug in lxml, which should be fixed in lxml 3.8 and above.
- Parameters:
response (Response object) – the response containing an HTML form which will be used to pre-populate the form fields
formname (str) – if given, the form with name attribute set to this value will be used.
formid (str) – if given, the form with id attribute set to this value will be used.
formxpath (str) – if given, the first form that matches the xpath will be used.
formcss (str) – if given, the first form that matches the css selector will be used.
formnumber (int) – the number of the form to use, when the response contains multiple forms. The first one (and also the default) is 0.
formdata (dict) – fields to override in the form data. If a field was already present in the response <form> element, its value is overridden by the one passed in this parameter. If a value passed in this parameter is None, the field will not be included in the request, even if it was present in the response <form> element.
clickdata (dict) – attributes to look up the control clicked. If it's not given, the form data will be submitted simulating a click on the first clickable element. In addition to html attributes, the control can be identified by its zero-based index relative to other submittable inputs inside the form, via the nr attribute.
dont_click (bool) – If True, the form data will be submitted without clicking any element.
The other parameters of this class method are passed directly to the FormRequest __init__ method.
Request usage examples¶
Using FormRequest to send data via HTTP POST¶
If you want to simulate an HTML form POST in your spider and send a couple of key-value fields, you can return a FormRequest object (from your spider) like this:
return [FormRequest(url="http://www.example.com/post/action",
                    formdata={'name': 'John Doe', 'age': '27'},
                    callback=self.after_post)]
Using FormRequest.from_response() to simulate a user login¶
It is usual for web sites to provide pre-populated form fields through <input type="hidden"> elements, such as session related data or authentication tokens (for login pages). When scraping, you'll want these fields to be automatically pre-populated and only override a couple of them, such as the user name and password. You can use the FormRequest.from_response() method for this job. Here's an example spider which uses it:
import scrapy


def authentication_failed(response):
    # TODO: Check the contents of the response and return True if it failed
    # or False if it succeeded.
    pass


class LoginSpider(scrapy.Spider):
    name = 'example.com'
    start_urls = ['http://www.example.com/users/login.php']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formdata={'username': 'john', 'password': 'secret'},
            callback=self.after_login
        )

    def after_login(self, response):
        if authentication_failed(response):
            self.logger.error("Login failed")
            return

        # continue scraping with authenticated session...
JsonRequest¶
The JsonRequest class extends the base Request class with functionality for dealing with JSON requests.
- class scrapy.http.JsonRequest(url[, ... data, dumps_kwargs])¶
The JsonRequest class adds two new keyword parameters to the __init__ method. The remaining arguments are the same as for the Request class and are not documented here.
Using the JsonRequest will set the Content-Type header to application/json and the Accept header to application/json, text/javascript, */*; q=0.01.
- Parameters:
data (object) – is any JSON serializable object that needs to be JSON encoded and assigned to the body. If the Request.body argument is provided, this parameter will be ignored. If the Request.body argument is not provided and the data argument is provided, Request.method will be set to 'POST' automatically.
dumps_kwargs (dict) – Parameters that will be passed to the underlying json.dumps() method, which is used to serialize data into JSON format.
JsonRequest usage example¶
Sending a JSON POST request with a JSON payload:
data = {
    'name1': 'value1',
    'name2': 'value2',
}
yield JsonRequest(url='http://www.example.com/post/action', data=data)
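If you need to control how the payload is serialized, dumps_kwargs is forwarded to json.dumps(); for example, sort_keys and indent below are standard json.dumps() parameters:
yield JsonRequest(
    url='http://www.example.com/post/action',
    data=data,
    dumps_kwargs={'sort_keys': True, 'indent': 2},  # passed through to json.dumps()
)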
Response objects¶
Response subclasses¶
Here is the list of available built-in Response subclasses. You can also subclass the Response class to implement your own functionality.
TextResponse objects¶
- class scrapy.http.TextResponse(url[, encoding[, ...]])¶
TextResponse objects add encoding capabilities to the base Response class, which is meant to be used only for binary data, such as images, sounds or any media file.
TextResponse objects support a new __init__ method argument, in addition to the base Response objects. The remaining functionality is the same as for the Response class and is not documented here.
- Parameters:
encoding (str) – is a string which contains the encoding to use for this response. If you create a TextResponse object with a string as body, it will be converted to bytes encoded using this encoding. If encoding is None (default), the encoding will be looked up in the response headers and body instead.
TextResponse objects support the following attributes in addition to the standard Response ones:
- text¶
Response body, as a string.
The same as response.body.decode(response.encoding), but the result is cached after the first call, so you can access response.text multiple times without extra overhead.
Note
str(response.body) is not a correct way to convert the response body into a string:
>>> str(b'body')
"b'body'"
- encoding¶
A string with the encoding of this response. The encoding is resolved by trying the following mechanisms, in order:
- the encoding passed in the __init__ method's encoding argument
- the encoding declared in the Content-Type HTTP header. If this encoding is not valid (i.e. unknown), it is ignored and the next resolution mechanism is tried.
- the encoding declared in the response body. The TextResponse class doesn't provide any special functionality for this. However, the HtmlResponse and XmlResponse classes do.
- the encoding inferred by looking at the response body. This is the more fragile method but also the last one tried.
- selector¶
A Selector instance using the response as target. The selector is lazily instantiated on first access.
TextResponse objects support the following methods in addition to the standard Response ones:
- xpath(query)¶
A shortcut to TextResponse.selector.xpath(query):
response.xpath('//p')
- css(query)¶
A shortcut to TextResponse.selector.css(query):
response.css('p')
- urljoin(url)¶
Constructs an absolute url by combining the Response's base url with a possible relative url. The base url shall be extracted from the <base> tag, or just the Response's url if there is no such tag.
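For example, a typical pagination callback might combine it with a CSS selector (the a.next selector and the URLs involved are illustrative):
def parse(self, response):
    # Relative hrefs such as 'page2.html' are resolved against the response's base URL.
    next_href = response.css('a.next::attr(href)').get()
    if next_href:
        yield scrapy.Request(response.urljoin(next_href), callback=self.parse)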
HtmlResponse objects¶
- class scrapy.http.HtmlResponse(url[, ...])¶
The HtmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the HTML meta http-equiv attribute. See TextResponse.encoding.
XmlResponse objects¶
- class scrapy.http.XmlResponse(url[, ...])¶
The XmlResponse class is a subclass of TextResponse which adds encoding auto-discovering support by looking into the XML declaration line. See TextResponse.encoding.