Monday, 24 May 2021

Crawling through multiple links on Scrapy

I'm trying to first crawl the main page of this website for the links to a table for each year. Then I'd like to scrape each of those pages, while keeping a record of which year each table belongs to.

So far, I have constructed my spider as follows:

import re

# Select the sidebar block that holds the per-year links
div = response.xpath('//*[@id="sidebar"]/div[1]/nav/ul/li[5]/div')

# Each entry is a raw '<a href="...">2021</a>' string; split out the href and the year text
hrefs = div.xpath('.//a').extract()
splits = {}

for href in hrefs:
    split = href.split('"')
    link = split[1]    # the href value
    date = split[2]    # the trailing '>2021</a>' fragment
    clean_date = "".join(re.findall("[^><a/]", date))  # strip the leftover tag characters
    clean_link = "http://www.ylioppilastutkinto.fi" + str(link)
    splits[clean_date] = clean_link
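
As an aside, the same pairs can be read straight off the selectors instead of string-splitting the raw anchor HTML. A minimal sketch, assuming the same sidebar structure; response.urljoin also takes care of relative hrefs:

div = response.xpath('//*[@id="sidebar"]/div[1]/nav/ul/li[5]/div')
splits = {}
for a in div.xpath('.//a'):
    link = a.xpath('@href').get()                    # href attribute directly
    date = a.xpath('normalize-space(text())').get()  # link text with whitespace trimmed
    splits[date] = response.urljoin(link)            # resolve relative URLs against the page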

I would then like to go through each link in this dictionary and crawl the corresponding page, using the following logic:

import w3lib.html

table = resp.xpath('//*[@id="content"]/table/tbody')
rows = table.xpath('.//tr')  # relative path; a bare '//tr' would match every row in the document

# The first row holds the column headers
data_dict = {"Category":
            [w3lib.html.remove_tags(num.get()) for num in rows[0].xpath('td')[1:]]
            }

# Remaining rows: the first cell is the row title, the rest are its values
for row in rows[1:]:
    data = row.xpath('td')
    title = w3lib.html.remove_tags(data[0].get())
    nums = [w3lib.html.remove_tags(num.get()) for num in data[1:]]
    data_dict[title] = nums

My problem is that I couldn't find a way to do this effectively. Calling scrapy.Request on the URL returns a response whose content is just <html></html>. If there were a way to get a response object resembling the one returned by the fetch command in the Scrapy shell, that would be ideal, since I've based the selection logic on testing with that command.
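
For reference, a minimal sketch of the overall structure I'm aiming for, with placeholder names (YearTableSpider, parse_table) and cb_kwargs used to carry the year along with each request; my understanding is that a scrapy.Request is only downloaded once it is yielded back to the engine from a callback:

import scrapy
import w3lib.html

class YearTableSpider(scrapy.Spider):
    name = "year_tables"  # placeholder name
    start_urls = ["http://www.ylioppilastutkinto.fi"]

    def parse(self, response):
        div = response.xpath('//*[@id="sidebar"]/div[1]/nav/ul/li[5]/div')
        for a in div.xpath('.//a'):
            link = a.xpath('@href').get()
            year = a.xpath('normalize-space(text())').get()
            # Yielding the Request is what schedules the download;
            # constructing scrapy.Request(url) by itself fetches nothing.
            yield response.follow(link, callback=self.parse_table,
                                  cb_kwargs={"year": year})

    def parse_table(self, response, year):
        rows = response.xpath('//*[@id="content"]/table/tbody//tr')
        data_dict = {"Category": [w3lib.html.remove_tags(td.get())
                                  for td in rows[0].xpath('td')[1:]]}
        for row in rows[1:]:
            cells = row.xpath('td')
            title = w3lib.html.remove_tags(cells[0].get())
            data_dict[title] = [w3lib.html.remove_tags(td.get())
                                for td in cells[1:]]
        yield {"year": year, "table": data_dict}

Note that cb_kwargs requires Scrapy 1.7+; on older versions the request's meta dict serves the same purpose.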



from Crawling through multiple links on Scrapy
