About shared_mutex and shared_ptr across multiple threads

I have several remarks:

  1. ~ChunkData: You could change your data member from int* to unique_ptr<int[]> to get the same result without an explicit destructor. Your code is correct though, just less convenient.
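With unique_ptr<int[]> the compiler-generated destructor already does the right thing. A minimal sketch of what remark 1 suggests (the size and dataMutex members are assumed from your question):

```cpp
#include <memory>
#include <shared_mutex>

struct ChunkData
{
    std::shared_mutex dataMutex;  // assumed from the question
    int size = 0;
    std::unique_ptr<int[]> data;  // owns the array; deleted automatically
    // no explicit ~ChunkData() needed any more
};
```

Allocation then becomes data = std::make_unique<int[]>(size), which also value-initializes the elements to zero.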

  2. ~Chunk: I don't think you need the lock or the call to reset. By the time the destructor runs, by definition, no one else should have a reference to the Chunk object, so the lock can never be contended. And reset is unnecessary because the shared_ptr destructor handles that anyway.

  3. InitData: Yes, the lock is needed because InitData can race with DoWork. You could avoid this by moving InitData into the constructor, but I assume there are reasons for this division. You could also change the shared_ptr to std::atomic<std::shared_ptr<ChunkData>> (C++20) to avoid the lock.
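In code, the lock-free variant could look like this sketch (the Data accessor is my invention). It uses the std::atomic_load/std::atomic_store overloads for shared_ptr, which exist since C++11 and are the pre-C++20 spelling of std::atomic<std::shared_ptr<ChunkData>>:

```cpp
#include <memory>

struct ChunkData { int size = 0; };

class Chunk
{
    std::shared_ptr<ChunkData> chunkData; // only touched via atomic_load/atomic_store
public:
    void InitData(int size)
    {
        auto newData = std::make_shared<ChunkData>();
        newData->size = size;
        std::atomic_store(&chunkData, std::move(newData)); // atomic publish, no mutex
    }
    std::shared_ptr<ChunkData> Data() const
    {
        return std::atomic_load(&chunkData); // atomic snapshot, safe against InitData
    }
};
```

Note that this only makes the pointer itself atomic; anything inside ChunkData still needs its own synchronization.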

  4. It is more efficient to write InitData like this:

void InitData(int size)
{
    std::shared_ptr<ChunkData> NewData = std::make_shared<ChunkData>();
    NewData->size = size;
    NewData->data = new int[size]; // or std::make_unique<int[]>(size)
    {
        std::lock_guard<std::mutex> lock(chunkMutex);
        chunkData.swap(NewData);
    }
    // the old ChunkData (now held by NewData, if any) is destroyed here, outside the locked region
}

make_shared avoids an additional memory allocation for the reference counter. This also moves all allocations and deallocations out of the critical section.
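The allocation difference is easy to observe by instrumenting the global operator new (a demonstration only; the counter and helper function are not part of your code). shared_ptr<T>(new T) allocates twice, once for the object and once for the control block, while make_shared does both in one allocation:

```cpp
#include <cstdlib>
#include <memory>
#include <new>

static int g_allocs = 0; // counts calls to the global operator new

void* operator new(std::size_t n)
{
    ++g_allocs;
    if (void* p = std::malloc(n))
        return p;
    throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int countAllocations(bool useMakeShared)
{
    const int before = g_allocs;
    if (useMakeShared)
    {
        auto sp = std::make_shared<int>(0);         // object + control block together: 1 allocation
    }
    else
    {
        auto sp = std::shared_ptr<int>(new int(0)); // object, then control block: 2 allocations
    }
    return g_allocs - before;
}
```

On mainstream implementations countAllocations(false) reports 2 and countAllocations(true) reports 1.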

  5. DoWork: Your comment says "read chunkData->data[i] and modify some members". You only take a shared_lock but say that you modify members. Well, which is it, reading or writing? Or do you mean that you modify Chunk but not ChunkData, with Chunk being protected by its own mutex?

  6. SetNeighbour: You need to lock both your own chunkMutex and the neighbour's. You should not hold both locks at the same time, to avoid deadlock (the dining philosophers problem), though std::lock would also solve that.

    void SetNeighbour(Chunk* neighbourChunk)
    {
        if(! neighbourChunk)
            return;
        std::shared_ptr<ChunkData> newNeighbourData;
        {
            std::lock_guard<std::mutex> lock(neighbourChunk->chunkMutex);
            newNeighbourData = neighbourChunk->chunkData;
        }
        std::lock_guard<std::mutex> lock(this->chunkMutex);
        this->neighbourChunkData = newNeighbourData;
    }
  7. GetDataAt and SetDataAt: You need to lock chunkMutex. Otherwise you might race with InitData. There is no need to use std::lock here because the order of locks is never swapped around.
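For completeness, the locked accessors could look like this sketch (signatures are assumed from your question; only the pointer handoff with InitData is protected here, and element-level synchronization via dataMutex is omitted):

```cpp
#include <memory>
#include <mutex>

struct ChunkData
{
    int size = 0;
    std::unique_ptr<int[]> data;
};

class Chunk
{
    mutable std::mutex chunkMutex;
    std::shared_ptr<ChunkData> chunkData;
public:
    void InitData(int size)
    {
        auto newData = std::make_shared<ChunkData>();
        newData->size = size;
        newData->data = std::make_unique<int[]>(size);
        std::lock_guard<std::mutex> lock(chunkMutex);
        chunkData.swap(newData);
    }
    int GetDataAt(int index) const
    {
        std::lock_guard<std::mutex> lock(chunkMutex); // cannot race with InitData now
        return chunkData->data[index];
    }
    void SetDataAt(int index, int value)
    {
        std::lock_guard<std::mutex> lock(chunkMutex);
        chunkData->data[index] = value;
    }
};
```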

EDIT 1:

  1. DoWork: The line if (shared_ptr<ChunkData> neighbour = neighbourChunkData.lock()) keeps the neighbour alive only for the duration of the if statement. Move the variable declaration out of the if to keep the reference afterwards.
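A generic illustration of the lifetime difference (names are placeholders):

```cpp
#include <memory>

// With `if (auto p = weak.lock()) { ... }` the strong reference ends at the
// closing brace of the if. Declaring it in the enclosing scope keeps it alive:
std::shared_ptr<int> lockAndKeep(const std::weak_ptr<int>& weak)
{
    std::shared_ptr<int> strong = weak.lock(); // declared outside the if
    if (strong)
    {
        // use *strong here ...
    }
    return strong; // the reference is still held at this point
}
```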

EDIT: Alternative design proposal

What I'm bothered with is that your DoWork may be unable to proceed if InitData is still running or waiting to run. How do you want to deal with this? I suggest you make it possible to wait until the work can be done. Something like this:

class Chunk
{
    std::mutex chunkMutex;
    std::shared_ptr<ChunkData> chunkData;
    std::weak_ptr<ChunkData> neighbourChunkData;
    std::condition_variable chunkSet;

    void waitForChunk(std::unique_lock<std::mutex>& lock)
    {
        while(! chunkData)
            chunkSet.wait(lock);
    }
public:
    // modified version of my code above
    void InitData(int size)
    {
        std::shared_ptr<ChunkData> NewData = std::make_shared<ChunkData>();
        NewData->size = size;
        NewData->data = new int[size]; // or std::make_unique<int[]>(size)
        {
            std::lock_guard<std::mutex> lock(chunkMutex);
            chunkData.swap(NewData);
        }
        chunkSet.notify_all();
    }
    void DoWork()
    {
        std::unique_lock<std::mutex> ownLock(chunkMutex);
        waitForChunk(ownLock); // blocks until another thread finishes InitData
        {
            std::shared_lock readLock(chunkData->dataMutex);
            ...
        }
        
        std::shared_ptr<ChunkData> neighbour = neighbourChunkData.lock();
        if(! neighbour)
            return;
        std::shared_lock readLock(neighbour->dataMutex);
        ...
    }
    void SetNeighbour(Chunk* neighbourChunk)
    {
        if(! neighbourChunk)
            return;
        std::shared_ptr<ChunkData> newNeighbourData;
        {
            std::unique_lock<std::mutex> lock(neighbourChunk->chunkMutex);
            neighbourChunk->waitForChunk(lock); // wait until neighbor has finished InitData
            newNeighbourData = neighbourChunk->chunkData;
        }
        std::lock_guard<std::mutex> ownLock(this->chunkMutex);
        this->neighbourChunkData = std::move(newNeighbourData);
    }
};

The downside to this is that you could deadlock if InitData is never called, or if it failed with an exception. There are ways around this, like using a std::shared_future, which knows whether it is valid (set when InitData is scheduled) and whether it failed (it records the exception of the associated promise or packaged_task).
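A sketch of that idea (member names assumed; error handling reduced to the essentials): InitData fulfils a promise, the waiter blocks on the matching shared_future, and an exception in InitData is stored and rethrown to the waiter instead of deadlocking it:

```cpp
#include <future>
#include <memory>

struct ChunkData { int size = 0; };

class Chunk
{
    std::promise<std::shared_ptr<ChunkData>> dataPromise;
    std::shared_future<std::shared_ptr<ChunkData>> dataFuture =
        dataPromise.get_future().share();
public:
    void InitData(int size)
    {
        try
        {
            auto data = std::make_shared<ChunkData>();
            data->size = size;
            dataPromise.set_value(std::move(data));
        }
        catch (...)
        {
            dataPromise.set_exception(std::current_exception()); // failure becomes visible
        }
    }
    std::shared_ptr<ChunkData> WaitForData()
    {
        return dataFuture.get(); // blocks until set_value; rethrows after set_exception
    }
};
```

A wait_for with a timeout on dataFuture would additionally guard against InitData never being scheduled at all.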