Using 100% of all cores with the multiprocessing module
Solution 1:
To use 100% of all cores, do not create and destroy new processes.
Create a few processes per core and link them with a pipeline.
At the OS level, all pipelined processes run concurrently.
The less you write (and the more you delegate to the OS), the more likely you are to use as many resources as possible.
python p1.py | python p2.py | python p3.py | python p4.py ...
will make maximal use of your CPU.
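As a minimal sketch of what such a stage might look like (the filenames p1.py ... p4.py above are placeholders; the transform here is an assumed stand-in for real work), each stage is a small filter that reads records from stdin, does its CPU-bound work, and writes results to stdout, and the OS scheduler keeps all stages busy at once:
# stage.py: a hypothetical pipeline stage
import sys

def transform(line):
    # stand-in for real CPU-bound per-record work
    return line.strip().upper()

for line in sys.stdin:
    sys.stdout.write(transform(line) + '\n')
You could try it as, e.g., cat data.txt | python stage.py | python stage.py to watch two stages run in parallel.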
Solution 2:
You can use psutil to pin each process spawned by multiprocessing to a specific CPU:
import multiprocessing as mp
import psutil

def spawn():
    procs = list()
    n_cpus = psutil.cpu_count()
    for cpu in range(n_cpus):
        affinity = [cpu]
        d = dict(affinity=affinity)
        p = mp.Process(target=run_child, kwargs=d)
        p.start()
        procs.append(p)
    for p in procs:
        p.join()
        print('joined')

def run_child(affinity):
    proc = psutil.Process()  # the current (child) process
    print(f'PID: {proc.pid}')
    aff = proc.cpu_affinity()
    print(f'Affinity before: {aff}')
    proc.cpu_affinity(affinity)  # pin this process to the given CPU
    aff = proc.cpu_affinity()
    print(f'Affinity after: {aff}')

if __name__ == '__main__':
    spawn()
Note: as pointed out in the comments, psutil.Process.cpu_affinity is not available on macOS.
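On Linux you can get the same pinning without a third-party dependency. This is a sketch using the standard library's os.sched_setaffinity, which exists only on some POSIX systems (not macOS or Windows):
import multiprocessing as mp
import os

def run_child(cpu):
    # 0 means "the calling process"; the second argument is an iterable of CPU ids
    os.sched_setaffinity(0, {cpu})
    print(f'PID {os.getpid()} pinned to CPUs {os.sched_getaffinity(0)}')

if __name__ == '__main__':
    procs = [mp.Process(target=run_child, args=(cpu,))
             for cpu in range(os.cpu_count())]
    for p in procs:
        p.start()
    for p in procs:
        p.join()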
Solution 3:
A minimal example in pure Python:
def f(x):
    while True:  # infinite loop: use up CPU
        # ---bonus: gradually use up RAM---
        x += 10000  # linear growth; use exponential for a faster end: x *= 1.01
        y = list(range(int(x)))
        # ---------------------------------

if __name__ == '__main__':  # name guard avoids recursive process spawning on Windows
    import multiprocessing as mp
    n = mp.cpu_count() * 32  # oversubscribe in case cpu_count() counts only active cores
    with mp.Pool(n) as p:
        p.map(f, range(n))
Usage: to warm up on a cold day (but feel free to change the loop body to something less pointless).
Warning: to exit, don't pull the plug or hold down the power button; press Ctrl-C instead.
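If you want Ctrl-C to shut things down cleanly rather than leave stray workers, one option (a sketch, not part of the original answer; Ctrl-C behavior with pools varies across platforms and Python versions) is to catch KeyboardInterrupt in the parent and rely on the with-block's exit, which calls Pool.terminate():
import multiprocessing as mp

def f(x):
    while True:
        x += 10000
        y = list(range(int(x)))

if __name__ == '__main__':
    try:
        with mp.Pool(mp.cpu_count()) as p:
            p.map(f, range(mp.cpu_count()))
    except KeyboardInterrupt:
        # by this point Pool.__exit__ has already called terminate() on the workers
        print('interrupted, workers terminated')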