Multiple (asynchronous) connections with urllib2 or other http library?
So, it's 2016 😉 and we have Python 3.4+ with the built-in asyncio module for asynchronous I/O. We can use aiohttp as an HTTP client to download multiple URLs in parallel.
import asyncio
from aiohttp import ClientSession

async def fetch(url):
    async with ClientSession() as session:
        async with session.get(url) as response:
            return await response.read()

async def run(loop, r):
    url = "http://localhost:8080/{}"
    tasks = []
    for i in range(r):
        task = asyncio.ensure_future(fetch(url.format(i)))
        tasks.append(task)
    responses = await asyncio.gather(*tasks)
    # you now have all response bodies in this variable
    print(responses)

loop = asyncio.get_event_loop()
future = asyncio.ensure_future(run(loop, 4))
loop.run_until_complete(future)
Source: copy-pasted from http://pawelmhm.github.io/asyncio/python/aiohttp/2016/04/22/asyncio-aiohttp.html
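A common refinement, if you are making many requests, is to share one ClientSession across all of them so the connection pool is reused, rather than opening a new session per URL. A minimal sketch of that variation (the URLs are placeholders):

import asyncio
from aiohttp import ClientSession

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.read()

async def main():
    urls = ['http://httpbin.org/get', 'http://python.org']
    # one session shared by every request, so connections can be pooled
    async with ClientSession() as session:
        bodies = await asyncio.gather(*(fetch(session, url) for url in urls))
    print([len(body) for body in bodies])

loop = asyncio.get_event_loop()
loop.run_until_complete(main())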
You can use asynchronous IO to do this.
requests + gevent = grequests
GRequests allows you to use Requests with Gevent to make asynchronous HTTP Requests easily.
import grequests
urls = [
    'http://www.heroku.com',
    'http://tablib.org',
    'http://httpbin.org',
    'http://python-requests.org',
    'http://kennethreitz.com'
]
rs = (grequests.get(u) for u in urls)
grequests.map(rs)
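If you want to work with the results, capture the return value of grequests.map, which is a list of Response objects in the same order as the requests (None for any request that failed). A minimal usage sketch:

rs = (grequests.get(u) for u in urls)
responses = grequests.map(rs)

for url, response in zip(urls, responses):
    if response is not None:
        print(url, response.status_code, len(response.content))
    else:
        print(url, 'failed')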
Take a look at gevent — a coroutine-based Python networking library that uses greenlet to provide a high-level synchronous API on top of the libevent event loop.
Example:
#!/usr/bin/python
# Copyright (c) 2009 Denis Bilenko. See LICENSE for details.
"""Spawn multiple workers and wait for them to complete"""
urls = ['http://www.google.com', 'http://www.yandex.ru', 'http://www.python.org']
import gevent
from gevent import monkey
# patches stdlib (including socket and ssl modules) to cooperate with other greenlets
monkey.patch_all()
import urllib2
def print_head(url):
    print 'Starting %s' % url
    data = urllib2.urlopen(url).read()
    print '%s: %s bytes: %r' % (url, len(data), data[:50])
jobs = [gevent.spawn(print_head, url) for url in urls]
gevent.joinall(jobs)
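Note that urllib2 no longer exists on Python 3; it was folded into urllib.request. A minimal sketch of the same idea for Python 3, still relying on gevent's monkey-patching of socket and ssl:

import gevent
from gevent import monkey

# patch the stdlib so blocking socket/ssl calls cooperate with greenlets
monkey.patch_all()

from urllib.request import urlopen

urls = ['http://www.google.com', 'http://www.python.org']

def print_head(url):
    print('Starting %s' % url)
    data = urlopen(url).read()
    print('%s: %s bytes: %r' % (url, len(data), data[:50]))

jobs = [gevent.spawn(print_head, url) for url in urls]
gevent.joinall(jobs)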
2021 answer using modern async libraries
The 2016 answer is good, but I figured I'd throw in another answer using httpx instead of aiohttp, since httpx is only a client and supports different async environments. I'm leaving out the OP's for loop, which builds URLs by appending a number to a string, in favor of what I feel is a more generic answer.
import asyncio
import httpx

# you can have synchronous code here

async def getURL(url):
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        # we could have some synchronous code here too,
        # for instance to do CPU-bound work on what we just fetched
        return response

# more synchronous code can go here

async def main():
    # url1 and url2 are placeholders for whatever URLs you want to fetch
    response1, response2 = await asyncio.gather(getURL(url1), getURL(url2))
    # do things with the responses

# you can also have synchronous code here

asyncio.run(main())
Code after any await within the async with block will run as soon as the awaited task is done. It is a good spot to parse your response without waiting for all your requests to have completed.
Code after the asyncio.gather will run once all the tasks have completed. It is a good place to do operations requiring information from all the requests, possibly pre-processed in the async function called by gather.
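For example, here is a minimal sketch of that split; fetch_and_parse is a hypothetical name and the URLs are just examples:

import asyncio
import httpx

async def fetch_and_parse(url):
    async with httpx.AsyncClient() as client:
        response = await client.get(url)
        # runs as soon as *this* request finishes, without waiting for the others
        return len(response.text)  # stand-in for your per-response parsing

async def main():
    sizes = await asyncio.gather(
        fetch_and_parse('http://httpbin.org/get'),
        fetch_and_parse('http://python.org'),
    )
    # runs once every request has completed; aggregate across all responses here
    print('total characters fetched:', sum(sizes))

asyncio.run(main())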
I know this question is a little old, but I thought it might be useful to promote another async solution built on the requests library.
list_of_requests = ['http://moop.com', 'http://doop.com', ...]
from simple_requests import Requests
for response in Requests().swarm(list_of_requests):
    print response.content
The docs are here: http://pythonhosted.org/simple-requests/