Strict aliasing across DLL boundary

I've been reviewing C++'s strict aliasing rules, which got me thinking about some code at my previous job. I believe that code violated strict aliasing, but I'm curious why we never ran into any issues or compiler warnings. We used a core .DLL to receive network messages that were handed off to a server application. A (very) simplified example of what was done:

#include <iostream>
#include <cstring>

using namespace std;

// These enums/structs lived in a shared .h file consumed by the DLL and server application
enum NetworkMessageId : int
{
    NETWORK_MESSAGE_LOGIN
    // ...
};

struct NetworkMessageBase
{
    NetworkMessageId type;
    size_t size;
};

struct LoginNetworkMessage : NetworkMessageBase
{
    static constexpr size_t MaxUsernameLength = 25;
    static constexpr size_t MaxPasswordLength = 50;
    
    char username[MaxUsernameLength];
    char password[MaxPasswordLength];
};

// This buffer and function were created/exported by the DLL
char* receiveBuffer = new char[sizeof(LoginNetworkMessage)];

NetworkMessageBase* receiveNetworkMessage()
{
    // Simulate receiving data from network, actual production code provided additional safety checks 
    LoginNetworkMessage msg;
    msg.type = NETWORK_MESSAGE_LOGIN;
    msg.size = sizeof(msg);
    
    strcpy(msg.username, "username1");
    strcpy(msg.password, "qwerty");
    
    memcpy(receiveBuffer, &msg, sizeof(msg));
    
    return (NetworkMessageBase*)&receiveBuffer[0]; // I believe this line invokes undefined behavior (strict aliasing)
}


// Pretend main is the server application
int main()
{
    NetworkMessageBase* msg = receiveNetworkMessage();
    switch (msg->type)
    {
    case NETWORK_MESSAGE_LOGIN:
        {
            LoginNetworkMessage* loginMsg = (LoginNetworkMessage*)msg;
            cout << "Username: " << loginMsg->username << " Password: " << loginMsg->password << endl;
        }
        break;
    }
    
    delete [] receiveBuffer; // A cleanup function defined in the DLL actually did this

    return 0;
}

From what I understand, receiveNetworkMessage() invokes undefined behavior. I've read that strict aliasing UB typically stems from compiler optimizations/assumptions. I'm thinking these optimizations are not relevant in this case since the .DLL and server application are compiled separately. Is that correct?

Lastly, the client application also consumed the example .h above and used it to create a LoginNetworkMessage that was streamed byte-for-byte to the server. Is this portable? Packing/endianness issues aside, I believe it's not, since LoginNetworkMessage's layout is non-standard, so member ordering may differ.


From what I understand, receiveNetworkMessage() invokes undefined behavior

Correct.

LoginNetworkMessage which was streamed byte-for-byte to the server. Is this portable?

No, network communication that relies on binary compatibility isn't portable.

Packing/endianness issues aside, I believe it's not since LoginNetworkMessage's layout is non-standard, so member ordering may be different.

The order of members is guaranteed even for non-standard-layout classes (members with the same access control are laid out in declaration order; since C++23 declaration order is guaranteed unconditionally). What is a problem (besides endianness) is the amount and placement of padding inserted for alignment, the sizes of the fundamental types, and the number of bits in a byte (although, to be fair, network-connected hardware without 8-bit bytes probably isn't something you need to support).


I'm thinking these optimizations are not relevant in this case since the .DLL and server application are compiled separately. Is that correct?

Yes. This makes the code safe in practice (even though the standard doesn't know anything about DLLs and considers it undefined regardless).

The same is true for different translation units, unless whole-program-optimization is enabled.

The only potential problem here, as the other answer says, is the portability of struct layouts.


The C Standard makes no attempt to consider the semantics of programs whose entire source code is not available to the C implementation simultaneously, and thus imposes no requirements on how implementations process such programs. Since not all implementations can support such concepts, any programs that rely upon them are inherently "non-portable", and since the Standard imposes no requirements on how they are processed, they technically invoke Undefined Behavior.

On the other hand, the Standard recognizes that Undefined Behavior can occur in programs that are non-portable but correct, and specifies that implementations may process programs that invoke Undefined Behavior "in a documented manner characteristic of the environment". Implementations that allow invocation of separately-built code fragments generally specify how calls to such functions will be processed at the machine level, and should thus be expected to process such calls in that documented fashion without regard for whether the Standard would require that they do so.