Scrapy get request url in parse

Python 2.7 · Scrapy · Scrapyd

Python 2.7 Problem Overview


How can I get the request url in Scrapy's parse() function? I have a lot of URLs in start_urls, and some of them redirect my spider to the homepage, and as a result I get an empty item. So I need something like item['start_url'] = request.url to store these URLs. I'm using the BaseSpider.

Python 2.7 Solutions


Solution 1 - Python 2.7

The 'response' variable that's passed to parse() has the info you want. You shouldn't need to override anything.

For example:

def parse(self, response):
    print "URL: " + response.request.url

Solution 2 - Python 2.7

The request object is accessible from the response object, so you can do the following:

def parse(self, response):
    item = MyItem()  # your Item subclass, instantiated as usual
    item['start_url'] = response.request.url
    return item

Solution 3 - Python 2.7

You need to override BaseSpider's make_requests_from_url(url) [function][1] to assign the start_url to the item, and then use the Request.meta [special keys][2] to pass that item to the parse function.

from scrapy.http import Request

# override method
def make_requests_from_url(self, url):
    item = MyItem()

    # assign url
    item['start_url'] = url
    request = Request(url, dont_filter=True)

    # set meta['item'] to use the item in the next callback
    request.meta['item'] = item
    return request

def parse(self, response):
    # access and do something with the item in parse
    item = response.meta['item']
    item['other_url'] = response.url
    return item
        

Hope that helps.

[1]: http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spider.BaseSpider.make_requests_from_url
[2]: http://scrapy.readthedocs.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions

Solution 4 - Python 2.7

There is no need to store the requested URLs yourself; note also that Scrapy does not process URLs in the same sequence as they are listed in start_urls.

Using the following:

response.request.meta['redirect_urls']

will give you the list of redirects that happened, e.g. ['http://requested_url', 'https://redirected_url', 'https://final_redirected_url']

To access the first URL from the list above, you can use

response.request.meta['redirect_urls'][0]

For more, see the Scrapy documentation on RedirectMiddleware, which says:

RedirectMiddleware

This middleware handles redirection of requests based on response status.

The urls which the request goes through (while being redirected) can be found in the redirect_urls Request.meta key.
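Note that the redirect_urls key is only present in meta when at least one redirect actually occurred, so code that assumes it exists will raise a KeyError on non-redirected responses. A minimal sketch of the fallback logic (the helper name original_url is made up for illustration; meta behaves like an ordinary dict here):

```python
def original_url(meta, request_url):
    # 'redirect_urls' is only set on response.request.meta when at least
    # one redirect occurred; fall back to the request URL otherwise.
    return meta.get('redirect_urls', [request_url])[0]

# With redirects: the first entry is the originally requested URL.
print(original_url({'redirect_urls': ['http://requested_url', 'https://final_url']},
                   'https://final_url'))  # http://requested_url

# Without redirects: the key is absent, so the request URL itself is returned.
print(original_url({}, 'http://requested_url'))  # http://requested_url
```

Inside a spider you would call it as original_url(response.request.meta, response.request.url).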

Hope this helps you

Solution 5 - Python 2.7

Python 3.5

Scrapy 1.5.0

from scrapy.http import Request

# override method
def start_requests(self):
    for url in self.start_urls:
        item = {'start_url': url}
        request = Request(url, dont_filter=True)
        # set meta['item'] to use the item in the next callback
        request.meta['item'] = item
        yield request

# use meta variable
def parse(self, response):
    url = response.meta['item']['start_url']

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type            | Original Author | Original Content on Stackoverflow
Question                | Goran           | View Question on Stackoverflow
Solution 1 - Python 2.7 | Jagu            | View Answer on Stackoverflow
Solution 2 - Python 2.7 | gusridd         | View Answer on Stackoverflow
Solution 3 - Python 2.7 | NKelner         | View Answer on Stackoverflow
Solution 4 - Python 2.7 | Rohan Khude     | View Answer on Stackoverflow
Solution 5 - Python 2.7 | SorinP          | View Answer on Stackoverflow