Python: Executing multiple functions simultaneously
You are doing it correctly. :)
Try running this silly piece of code:
from multiprocessing import Process
import sys

rocket = 0

def func1():
    global rocket
    print 'start func1'
    while rocket < sys.maxint:
        rocket += 1
    print 'end func1'

def func2():
    global rocket
    print 'start func2'
    while rocket < sys.maxint:
        rocket += 1
    print 'end func2'

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
You will see it print 'start func1' and then 'start func2' and then after a (very) long time you will finally see the functions end. But they will indeed execute simultaneously.
Because processes take a while to start up, you may even see 'start func2' before 'start func1'.
This is just what I needed. I know it wasn't asked, but I modified Shashank's code to suit Python 3 for anyone else looking :)
from multiprocessing import Process
import sys

rocket = 0

def func1():
    global rocket
    print('start func1')
    while rocket < sys.maxsize:
        rocket += 1
    print('end func1')

def func2():
    global rocket
    print('start func2')
    while rocket < sys.maxsize:
        rocket += 1
    print('end func2')

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
Substitute a small number for sys.maxsize, then add print(rocket) inside the loop, and you can watch it count up one at a time until it reaches that number and stops.
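For anyone who wants to try that, here is a minimal sketch of the modification (the LIMIT value of 10000 is an arbitrary number I picked; everything else follows the code above):

from multiprocessing import Process

LIMIT = 10000  # small bound substituted for sys.maxsize, chosen for illustration

rocket = 0

def func1():
    global rocket
    print('start func1')
    while rocket < LIMIT:
        rocket += 1
        print(rocket)  # watch it count up one at a time
    print('end func1')

def func2():
    global rocket
    print('start func2')
    while rocket < LIMIT:
        rocket += 1
        print(rocket)
    print('end func2')

if __name__ == '__main__':
    # Note: each child process gets its own copy of rocket,
    # so each one counts up to LIMIT on its own.
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()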
This is a very good example by @Shashank. I just want to say that I had to add join() at the end, or else the two processes were not running simultaneously:
from multiprocessing import Process
import sys

rocket = 0

def func1():
    global rocket
    print 'start func1'
    while rocket < sys.maxint:
        rocket += 1
    print 'end func1'

def func2():
    global rocket
    print 'start func2'
    while rocket < sys.maxint:
        rocket += 1
    print 'end func2'

if __name__ == '__main__':
    p1 = Process(target=func1)
    p1.start()
    p2 = Process(target=func2)
    p2.start()
    # This is where I had to add the join() calls.
    p1.join()
    p2.join()
Furthermore, check this thread out: When to call .join() on a process?
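If you want to convince yourself that the two processes really do overlap, here is a minimal sketch of my own (the worker function, the time.sleep(2) calls, and the timing code are illustration only, not part of the answer above): both workers sleep for two seconds, yet the total time measured after both join() calls is roughly two seconds rather than four.

import time
from multiprocessing import Process

def worker(name):
    print('start', name)
    time.sleep(2)  # stand-in for real work
    print('end', name)

if __name__ == '__main__':
    start = time.time()
    p1 = Process(target=worker, args=('func1',))
    p2 = Process(target=worker, args=('func2',))
    p1.start()
    p2.start()
    # join() blocks until each child process has finished,
    # so the line below runs only once both are done.
    p1.join()
    p2.join()
    print('total: %.1f seconds' % (time.time() - start))  # ~2s, not ~4s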
This can be done elegantly with Ray, a system that allows you to easily parallelize and distribute your Python code.
To parallelize your example, you'd need to define your functions with the @ray.remote decorator, and then invoke them with .remote.
import ray

ray.init()

# Define functions you want to execute in parallel using
# the ray.remote decorator.
@ray.remote
def func1():
    pass  # does something

@ray.remote
def func2():
    pass  # does something

# Execute func1 and func2 in parallel.
ray.get([func1.remote(), func2.remote()])
If func1() and func2() return results, you need to rewrite the code as follows:
ret_id1 = func1.remote()
ret_id2 = func2.remote()
ret1, ret2 = ray.get([ret_id1, ret_id2])
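Putting that together, here is a minimal self-contained sketch (the bodies of func1 and func2, which simply return 1 and 2, are placeholders I made up; it assumes Ray is installed):

import ray

ray.init()

@ray.remote
def func1():
    return 1  # placeholder result

@ray.remote
def func2():
    return 2  # placeholder result

# .remote() returns object references immediately; the two tasks
# run in parallel, and ray.get() blocks until both results are ready.
ret_id1 = func1.remote()
ret_id2 = func2.remote()
ret1, ret2 = ray.get([ret_id1, ret_id2])
print(ret1, ret2)  # -> 1 2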
There are a number of advantages of using Ray over the multiprocessing module. In particular, the same code will run on a single machine as well as on a cluster of machines. For more advantages of Ray see this related post.