How to do parallel programming in Python?
You can use the multiprocessing module. For this case I might use a processing pool:
from multiprocessing import Pool
pool = Pool()
result1 = pool.apply_async(solve1, [A]) # evaluate "solve1(A)" asynchronously
result2 = pool.apply_async(solve2, [B]) # evaluate "solve2(B)" asynchronously
answer1 = result1.get(timeout=10)
answer2 = result2.get(timeout=10)
This will spawn worker processes that can do generic work for you. Since we did not pass a value for processes, the pool spawns one worker process per CPU core on your machine, and each core can execute one process at a time.
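If you want to control the pool size yourself, you can pass processes explicitly. A minimal sketch (the worker count of 4 is just an illustrative choice):

from multiprocessing import Pool, cpu_count

print(cpu_count())        # the number of workers Pool() creates by default
pool = Pool(processes=4)  # or choose the pool size explicitly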
If you want to map a single function over a list of arguments, you would do this:
args = [A, B]
results = pool.map(solve1, args)
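One caveat: on platforms that start workers with the spawn method (Windows, and macOS on recent Python versions), pool creation must happen under an if __name__ == "__main__" guard. A self-contained sketch, with solve1 stubbed out for illustration:

from multiprocessing import Pool

def solve1(x):
    # Stub standing in for your real function.
    return x * x

if __name__ == "__main__":
    with Pool() as pool:
        results = pool.map(solve1, [1, 2, 3, 4])
    print(results)  # [1, 4, 9, 16]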
Don't use threads for work like this: the GIL (global interpreter lock) lets only one thread execute Python bytecode at a time, so threads give no speedup for CPU-bound code.
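If you want to verify that for yourself, here is a rough sketch comparing threads and processes on the same CPU-bound workload (the busy-loop function and the iteration count are arbitrary illustrative choices):

import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def burn(n):
    # CPU-bound busy work; never releases the GIL.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    for executor_cls in (ThreadPoolExecutor, ProcessPoolExecutor):
        start = time.perf_counter()
        with executor_cls(max_workers=4) as ex:
            list(ex.map(burn, [5_000_000] * 4))
        print(executor_cls.__name__, round(time.perf_counter() - start, 2))

On a multicore machine the process pool should finish in roughly a quarter of the thread pool's time.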
This can be done very elegantly with Ray.
To parallelize your example, you'd need to define your functions with the @ray.remote decorator, and then invoke them with .remote.
import ray
ray.init()
# Define the functions.
@ray.remote
def solve1(a):
return 1
@ray.remote
def solve2(b):
return 2
# Start two tasks in the background.
x_id = solve1.remote(0)
y_id = solve2.remote(1)
# Block until the tasks are done and get the results.
x, y = ray.get([x_id, y_id])
This approach has a number of advantages over the multiprocessing module.
- The same code will run on a multicore machine as well as a cluster of machines.
- Processes share data efficiently through shared memory and zero-copy serialization (see the ray.put sketch after this list).
- Error messages are propagated nicely.
- These function calls can be composed together, e.g.,

  @ray.remote
  def f(x):
      return x + 1

  x_id = f.remote(1)
  y_id = f.remote(x_id)
  z_id = f.remote(y_id)
  ray.get(z_id)  # returns 4
- In addition to invoking functions remotely, classes can be instantiated remotely as actors (see the actor sketch below).
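To illustrate the shared-memory point, here is a minimal sketch, assuming NumPy is installed: the array is placed in Ray's object store once with ray.put, and both tasks read it without a per-task copy being serialized.

import numpy as np
import ray

ray.init()

@ray.remote
def total(arr):
    return arr.sum()

big = np.ones(10_000_000)
big_ref = ray.put(big)  # store the array in shared memory once
# Both tasks receive the same stored object; it is not pickled per call.
print(ray.get([total.remote(big_ref), total.remote(big_ref)]))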
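And a minimal actor sketch (the Counter class is a hypothetical example): an actor is a stateful worker whose methods are invoked with .remote, just like remote functions.

import ray

ray.init()

@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

counter = Counter.remote()  # starts a dedicated worker process holding the state
ids = [counter.increment.remote() for _ in range(3)]
print(ray.get(ids))  # [1, 2, 3] -- calls on one actor run in order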
Note that Ray is a framework I've been helping develop.