How to combine Scrapy and HtmlUnit to crawl URLs with JavaScript

To handle pages with JavaScript you can use WebKit or Selenium.

Here are some snippets from snippets.scrapy.org:

Rendered/interactive javascript with gtk/webkit/jswebkit

Rendered Javascript Crawler With Scrapy and Selenium RC


Here is a working example using Selenium and the PhantomJS headless webdriver in a downloader middleware.

from scrapy.http import HtmlResponse
from selenium import webdriver


class JsDownload(object):

    @check_spider_middleware
    def process_request(self, request, spider):
        # render the page with PhantomJS and hand Scrapy the resulting HTML
        driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs.exe')
        driver.get(request.url)
        return HtmlResponse(request.url, encoding='utf-8',
                            body=driver.page_source.encode('utf-8'))
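One caveat: as written, this starts a new PhantomJS process for every request and never closes it. A minimal sketch of a variant that reuses a single driver and shuts it down when the spider closes (assuming the same check_spider_middleware decorator shown below) could look like this:

from scrapy import signals
from scrapy.http import HtmlResponse
from selenium import webdriver


class JsDownload(object):

    def __init__(self):
        # one long-lived PhantomJS process instead of one per request
        self.driver = webdriver.PhantomJS(executable_path=r'D:\phantomjs.exe')

    @classmethod
    def from_crawler(cls, crawler):
        middleware = cls()
        crawler.signals.connect(middleware.spider_closed, signals.spider_closed)
        return middleware

    def spider_closed(self, spider):
        # shut PhantomJS down when the crawl finishes
        self.driver.quit()

    @check_spider_middleware
    def process_request(self, request, spider):
        self.driver.get(request.url)
        return HtmlResponse(request.url, encoding='utf-8',
                            body=self.driver.page_source.encode('utf-8'))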

I wanted the ability to tell different spiders which middleware to use, so I implemented this wrapper:

import functools

from scrapy import log


def check_spider_middleware(method):
    @functools.wraps(method)
    def wrapper(self, request, spider):
        # '%%s' survives the first substitution so msg can be re-formatted below
        msg = '%%s %s middleware step' % (self.__class__.__name__,)
        if self.__class__ in spider.middleware:
            spider.log(msg % 'executing', level=log.DEBUG)
            return method(self, request, spider)
        else:
            spider.log(msg % 'skipping', level=log.DEBUG)
            return None

    return wrapper

settings.py:

DOWNLOADER_MIDDLEWARES = {'MyProj.middleware.MiddleWareModule.MiddleWareClass': 500}

For the wrapper to work, all spiders must have at minimum:

middleware = set([])

To include a middleware:

middleware = set([MyProj.middleware.ModuleName.ClassName])
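Putting it together, a spider that opts in might look like this (a minimal sketch; JsDownload and the MyProj import path are the hypothetical names used above and would need to match your project layout):

import scrapy

from MyProj.middleware import JsDownload  # adjust to your actual module path


class MySpider(scrapy.Spider):
    name = 'js_spider'
    start_urls = ['http://example.com']

    # opt in to the JavaScript-rendering middleware
    middleware = {JsDownload}

    def parse(self, response):
        # response.body is the PhantomJS-rendered HTML, so JavaScript-built
        # elements are available to normal selectors
        for href in response.xpath('//a/@href').extract():
            yield {'link': href}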

The main advantage of implementing it this way rather than in the spider is that you only end up making one request. In the solution at reclosedev's second link, for example, the download handler processes the request and then hands the response off to the spider. The spider then makes a brand-new request in its parse_page function -- that's two requests for the same content.

Another example: https://github.com/scrapinghub/scrapyjs
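For completeness, a hedged sketch of how scrapyjs (Splash) is typically wired up, based on its README; the Splash URL, middleware priority, and example spider are assumptions you would adapt:

# settings.py -- point Scrapy at a running Splash instance
SPLASH_URL = 'http://localhost:8050'
DOWNLOADER_MIDDLEWARES = {
    'scrapyjs.SplashMiddleware': 725,
}

# in the spider -- ask Splash to render the page before parsing
import scrapy


class SplashSpider(scrapy.Spider):
    name = 'splash_spider'

    def start_requests(self):
        yield scrapy.Request('http://example.com', self.parse_result,
                             meta={'splash': {'endpoint': 'render.html'}})

    def parse_result(self, response):
        # response.body contains the Splash-rendered HTML
        self.log('rendered %d bytes' % len(response.body))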

Cheers!