Troubleshooting


Running `scrapy crawl` inside a spider project raises an error (traceback truncated as captured):

    Traceback (most recent call last):
      File "/usr/bin/scrapy", line 11, in <module>
        load_entry_point('Scrapy==1.6.0', 'console_scripts', 'scrapy')()
      File "/usr/lib/python2.7/site-packages/Scrapy-1.6.0-py2.7.egg/scrapy/cmdline.py", line 150, in execute
        _run_print_help(parser, _run_command, cmd, args, opts)
      File "/usr/lib/python2.7/site-packages/Scrapy-1.6.0-py2.7.egg/scrap
    2018-06-20 01:03:53 [twisted] CRITICAL:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
        result = g.send(result)
      File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 98, in crawl
        six.reraise(*exc_info)
      File "/usr/local/lib/python2.7/dist-packages/scrapy/crawler.py", line 79, in crawl
        self.spi
Using the code below to crawl across multiple pages, but it seems the Request is never issued, or its callback is never invoked?
import urlparse  # Python 2; on Python 3 use urllib.parse

from scrapy import Request

def parse(self, response):
    # horizontal crawl: follow the "next page" link
    next_selector = response.xpath('//*[contains(@class, "house-lst-page-box")]//a[last()]/@href')
    for url in next_selector.extract():
        # this yield must be indented inside the for loop,
        # otherwise Python raises an IndentationError (or the
        # Request is never yielded) and no new page is fetched
        yield Request(urlparse.urljoin(response.url, url))
    # vertical crawl
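The most likely cause is the indentation of the `yield`: sitting at the same level as the `for`, it never runs inside the loop, so no follow-up Request is ever generated. The same generator pattern can be checked without Scrapy; this is a minimal sketch where `follow_links` and the sample URLs are illustrative, not part of the original project:

```python
from urllib.parse import urljoin  # Python 3 equivalent of Python 2's urlparse.urljoin

def follow_links(base_url, hrefs):
    # Mirrors the horizontal-crawl loop: each extracted href is joined
    # against the current response URL and yielded as the next target.
    for href in hrefs:
        yield urljoin(base_url, href)  # must be indented inside the loop

urls = list(follow_links("http://example.com/list?page=1",
                         ["/list?page=2", "/list?page=3"]))
print(urls)
# → ['http://example.com/list?page=2', 'http://example.com/list?page=3']
```

If the `yield` is de-indented out of the loop, the generator produces nothing for an empty selector and only one (unjoined) value otherwise, which matches the symptom of the callback never firing.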