MPI_Sendrecv vs combination of MPI_Isend and MPI_Recv

Recently I came across the following piece of code in a book on computational physics; it is used to shift data across a chain of processes:

MPI_Isend(data_send, length, datatype, lower_process, tag, comm, &request);  /* non-blocking send to the lower neighbor */
MPI_Recv(data_receive, length, datatype, upper_process, tag, comm, &status); /* blocking receive from the upper neighbor */
MPI_Wait(&request, &status);                                                 /* complete the send */

I think the same can be achieved by a single call to MPI_Sendrecv, and I see no reason to believe the former is faster. Does it have any advantages?
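For reference, the single call I have in mind would be something like this (same buffers and neighbors as above):

MPI_Sendrecv(data_send, length, datatype, lower_process, tag,
             data_receive, length, datatype, upper_process, tag,
             comm, &status);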


I believe there is no real difference between the fragment you give and a single MPI_Sendrecv call. The Sendrecv combination is fully compatible with regular sends and receives: when shifting data through a (non-periodic!) chain, for instance, you could use Sendrecv everywhere except at the end points, and do a regular Send/Isend/Recv/Irecv there; see the sketch below.
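As a minimal sketch of what I mean, assuming a shift towards lower ranks as in your fragment, with rank and size taken from the communicator and the end points falling back to plain point-to-point calls:

int rank, size;
MPI_Comm_rank(comm, &rank);
MPI_Comm_size(comm, &size);

if (rank == 0) {
    /* lowest rank: nothing below to send to, only receive from above */
    MPI_Recv(data_receive, length, datatype, rank + 1, tag, comm, &status);
} else if (rank == size - 1) {
    /* highest rank: nothing above to receive from, only send below */
    MPI_Send(data_send, length, datatype, rank - 1, tag, comm);
} else {
    /* interior ranks: combined send down / receive up */
    MPI_Sendrecv(data_send, length, datatype, rank - 1, tag,
                 data_receive, length, datatype, rank + 1, tag,
                 comm, &status);
}

(Alternatively, you can pass MPI_PROC_NULL as the source or destination at the end points and have every rank call MPI_Sendrecv uniformly: a send to or receive from MPI_PROC_NULL completes immediately without transferring data.)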

You can think of two variants of your code fragment: the Isend/Recv combination you give, or Isend/Irecv with a wait on both requests. There are probably minor differences in how these are treated by the underlying protocols, but I wouldn't worry about them.
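For what it's worth, the Isend/Irecv variant would look something like this, using the same names as in your question:

MPI_Request requests[2];
MPI_Isend(data_send, length, datatype, lower_process, tag, comm, &requests[0]);
MPI_Irecv(data_receive, length, datatype, upper_process, tag, comm, &requests[1]);
MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);  /* complete both transfers */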

Your fragment can of course be generalized more easily to patterns other than shifting along a chain, but if you indeed have a setup where each process sends to at most one process and receives from at most one, then I'd use MPI_Sendrecv just for cleanliness. The Isend/Recv version only makes the reader wonder: "is there a deep reason for this?"