PyTorch DataLoader uses the same random seed for batches run in parallel

Solution 1:

It seems this works, at least in Colab:

import numpy as np
from torch.utils.data import DataLoader

dataloader = DataLoader(dataset, batch_size=1, num_workers=3,
                        worker_init_fn=lambda id: np.random.seed(id))

EDIT:

As iacob points out in a comment, this still produces identical output (i.e. the same problem) when iterated over multiple epochs: the workers are re-created at the start of each epoch and re-seeded with the same id-based values.

Best fix I have found so far:

...
# Offset each worker's seed by the current epoch so the workers are
# re-seeded with new values every time the loader is iterated.
dataloader = DataLoader(ds, num_workers=num_w,
                        worker_init_fn=lambda id: np.random.seed(id + epoch * num_w))

for epoch in range(2):
    for batch in dataloader:
        print(batch)
    print()
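
As a side note (not part of the original answer): the PyTorch FAQ on data loader workers returning identical random numbers suggests deriving the NumPy seed from torch's per-worker seed inside worker_init_fn. Since that seed is re-drawn every time the loader is iterated, the epoch variable is not needed at all. A minimal sketch, reusing the ds and num_w names from the snippet above and a hypothetical seed_worker helper:

import numpy as np
import torch
from torch.utils.data import DataLoader

def seed_worker(worker_id):
    # torch gives each worker the seed base_seed + worker_id, and the base
    # seed is freshly drawn each time the loader is iterated, so this value
    # differs across both workers and epochs.
    np.random.seed(torch.initial_seed() % 2**32)

dataloader = DataLoader(ds, num_workers=num_w, worker_init_fn=seed_worker)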

I still can't suggest a closed form for the epoch-based worker_init_fn above, since the seed depends on a variable (epoch) that is only read when the lambda is called. Ideally it would be something like worker_init_fn = lambda id: np.random.seed(id + EAGER_EVAL(np.random.randint(10000))), where EAGER_EVAL evaluates the seed at loader construction, before the lambda is passed as a parameter. I wonder whether that is possible in Python.
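
To answer that last question (my addition): yes, eager evaluation is possible, because default argument values are evaluated once, when the lambda is defined, i.e. at loader construction. A sketch using the same ds and num_w names as above; note that a seed fixed at construction time will still repeat across epochs unless the loader is re-created each epoch:

import numpy as np
from torch.utils.data import DataLoader

# The default value of `seed` is computed once, when the lambda is created,
# not each time a worker is initialised.
dataloader = DataLoader(
    ds, num_workers=num_w,
    worker_init_fn=lambda id, seed=np.random.randint(10000): np.random.seed(id + seed),
)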