Segfault when passing vector of vectors to SSBO using glBufferData

I am trying to pass a vector of vectors to an SSBO, but I get a segfault when passing it through with glBufferData. The structure in C++ is:

const uint16_t MAX_NODE_POOLS = 1927;

union Node
{
    uint32_t childDescriptor;
    uint32_t material;
};

struct NodePool
{
    NodePool() : mNodes({0}) {}
    std::array<Node, 8> mNodes;
};

struct Block
{
    Block(): ID(0) {}
    uint16_t ID;
    std::vector<NodePool> mNodePools;
    std::vector<uint16_t> mNodeMasks;
};

class Octree
{
public:
    ...
    void registerSSBO(GLuint &octreeSSBO) const;
    void generate();
    
    [[nodiscard]] inline uint64_t getMem() const { return mBlocks.size() *
    (
            sizeof(uint16_t) +                      // ID
            (sizeof(NodePool)*MAX_NODE_POOLS) +     // NodePools
            (sizeof(uint16_t)*MAX_NODE_POOLS)       // NodeMasks
            ); }
private:
     ...
    std::vector<Block> mBlocks;
};

...

void Octree::registerSSBO(GLuint &octreeSSBO) const
{
    glGenBuffers(1, &octreeSSBO);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, octreeSSBO);
    std::cout << getMem() << std::endl;
    glBufferData(GL_SHADER_STORAGE_BUFFER, getMem(), mBlocks.data(), GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, octreeSSBO);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
}

I populate the blocks with data and then pass them into the SSBO like so:

...
octree.generate();
glGenVertexArrays(1, &VAO);
glGenBuffers(1, &VBO);
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

octree.registerSSBO(octreeSSBO);
glBindVertexArray(0);
...

In my shader, I have the SSBO structured like so:

#version 430 core
// To use 16 bit integers
#extension GL_NV_gpu_shader5 : enable
#define MAX_NODE_POOLS 1927

struct Node
{
    // Either a child descriptor or material depending on mNodeMasks
    int data;
};

struct NodePool
{
    Node mNodes[8];
};

struct Block
{
    uint16_t ID;
    NodePool mNodePools[MAX_NODE_POOLS];
    uint16_t mNodeMasks[MAX_NODE_POOLS];
};

layout (std430, binding=2) buffer octreeData
{
    Block blocks[];
};

Every time, it segfaults on the glBufferData call inside registerSSBO:

glBufferData(GL_SHADER_STORAGE_BUFFER, getMem(), mBlocks.data(), GL_DYNAMIC_DRAW);

getMem() in this case returns a size of 35773920 bytes, which is the value I expect. Am I calculating it incorrectly? Smaller values like mBlocks.size()*sizeof(mBlocks) or mBlocks.size()*sizeof(Block) don't cause the application to segfault (however, the application doesn't behave as desired).
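(For reference, that total breaks down as 2 + 1927*32 + 1927*2 = 65520 bytes per block, so 35773920 bytes corresponds to 546 blocks.)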

Running under Valgrind prevents the segfault from happening, but it gives me 20 "Invalid read of size 16" warnings on the glBufferData call, and I'm having trouble figuring out exactly what that might indicate.

Each of the separate warnings reports something like this:

Invalid read of size 16
...
Address 0x4cd830b0 is 16 bytes before a block of size 61,664 alloc'd
Invalid read of size 16
...
Address 0x4cd83080 is 0 bytes after a block of size 57,344 alloc'd
Invalid read of size 16
...
Address 0x4cd830a0 is 32 bytes before a block of size 61,664 in arena "client"

etc

Is this extraneous boilerplate, or am I missing something?

Edit:

To show that the vectors are being properly sized, I have changed getMem() to the following function; the results are identical:

inline uint64_t getMem() const 
{
    uint64_t sum = 0;
    for(const auto& b: mBlocks)
    {
        for(const auto& np: b.mNodePools)
            sum += sizeof(np);
        for(const auto& nm: b.mNodeMasks)
            sum += sizeof(nm);
        sum += sizeof(b.ID);
    }
    return sum;
}

Solution 1:

uint16_t ID;
std::vector<NodePool> mNodePools;
std::vector<uint16_t> mNodeMasks;
...
glBufferData(GL_SHADER_STORAGE_BUFFER, getMem(), mBlocks.data(), GL_DYNAMIC_DRAW);

You cannot do that. You cannot do a byte-wise copy of most C++ standard library types into OpenGL (or into anything else, for that matter). As a general rule, if a type is not trivially copyable (and std::vector most assuredly is not), it definitely cannot just be thrown at OpenGL like this. (Note: this does not mean that you can throw any trivially copyable type at OpenGL. Trivial copyability is necessary but not sufficient.)
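As a quick compile-time sanity check (just a sketch against the definitions above), you can ask the compiler directly:

#include <type_traits>

// NodePool passes: it is just a std::array of a plain union.
static_assert(std::is_trivially_copyable_v<NodePool>, "NodePool can be byte-copied");

// Block does not: its std::vector members are small header objects
// (pointer/size/capacity) pointing at separately allocated storage, so
// copying getMem() bytes starting at mBlocks.data() reads far past the
// actual Block allocation -- which is exactly what the segfault and the
// Valgrind "invalid read" warnings are telling you.
// static_assert(std::is_trivially_copyable_v<Block>, "this one fails to compile");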

Your use of std::array works (maybe; the C++ standard doesn't guarantee what you think it does about array's layout) because std::array<T, N> is defined without explicit constructors. As such, it is trivially copyable to the extent that T is trivially copyable.

If you're going to copy C++ objects to GLSL, then the C++ types and layouts must match what GLSL defines. std::vector in no way matches the layout of any GLSL array. If your GLSL defines an array of X items, then the only C++ type that's definitely going to match that is an actual array of X items (again, necessary but not sufficient).

No more, no less.
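If the shader-side layout really is a fixed MAX_NODE_POOLS-sized array per block, one option is to mirror it on the CPU with std::array members and stage the data into that mirror before the upload. The sketch below assumes the definitions from the question (GPUBlock is a made-up name), and the std430 offsets, particularly around the 16-bit members, still need to be verified against what your driver actually produces:

#include <algorithm>
#include <array>
#include <type_traits>
#include <vector>

// Mirrors the shader's Block: fixed-size arrays, no std::vector anywhere.
// (Check that the padding after the 16-bit members lines up with the
// std430 offsets before trusting this layout.)
struct GPUBlock
{
    uint16_t ID;
    std::array<NodePool, MAX_NODE_POOLS> mNodePools;
    std::array<uint16_t, MAX_NODE_POOLS> mNodeMasks;
};
static_assert(std::is_trivially_copyable_v<GPUBlock>, "safe to byte-copy");

void Octree::registerSSBO(GLuint &octreeSSBO) const
{
    // Stage the dynamically sized Blocks into the fixed-layout mirror.
    // Assumes each vector holds at most MAX_NODE_POOLS entries.
    std::vector<GPUBlock> staging(mBlocks.size());
    for (std::size_t i = 0; i < mBlocks.size(); ++i)
    {
        staging[i].ID = mBlocks[i].ID;
        std::copy(mBlocks[i].mNodePools.begin(), mBlocks[i].mNodePools.end(),
                  staging[i].mNodePools.begin());
        std::copy(mBlocks[i].mNodeMasks.begin(), mBlocks[i].mNodeMasks.end(),
                  staging[i].mNodeMasks.begin());
    }

    glGenBuffers(1, &octreeSSBO);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, octreeSSBO);
    glBufferData(GL_SHADER_STORAGE_BUFFER,
                 staging.size() * sizeof(GPUBlock),
                 staging.data(), GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 2, octreeSSBO);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
}

The staging copy costs an extra pass over the data on each upload, but it keeps the GPU-facing layout independent of how the octree is built on the CPU.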