How to support both IPv4 and IPv6 connections

I'm currently working on a UDP socket application and I need to build in support so that both IPv4 and IPv6 clients can send packets to the server.

I was hoping someone could point me in the right direction; most of the documentation I found was incomplete. It would also be helpful if you could point out any differences between Winsock and BSD sockets.

Thanks in advance!


Solution 1:

The best approach is to create an IPv6 server socket that can also accept IPv4 connections. To do so, create a regular IPv6 socket, turn off the socket option IPV6_V6ONLY, bind it to the "any" address, and start receiving. IPv4 addresses will be presented as IPv6 addresses, in the IPv4-mapped format.
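A minimal sketch of that approach, assuming a POSIX system (on Winsock you would call WSAStartup() first, cast the option value to (const char *), and use closesocket()); the port number is a placeholder:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    int off = 0;  /* 0 = allow IPv4-mapped addresses (dual stack) */
    if (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off)) < 0)
        perror("setsockopt(IPV6_V6ONLY)");  /* option absent on e.g. Windows XP */

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;  /* "::" accepts both families */
    addr.sin6_port   = htons(5000);  /* placeholder port */
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char buf[1500];
    struct sockaddr_in6 peer;
    socklen_t plen = sizeof(peer);
    recvfrom(fd, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &plen);
    /* An IPv4 sender shows up in peer.sin6_addr as an IPv4-mapped
     * address, e.g. ::ffff:192.0.2.1. */
    close(fd);
    return 0;
}
```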

The major difference across systems is whether IPV6_V6ONLY is (a) available and (b) turned on or off by default. It is off by default on Linux (i.e. dual-stack sockets work without any setsockopt call), and on by default on most other systems.

In addition, the IPv6 stack on Windows XP doesn't support that option. In such cases, you will need to create two separate server sockets and multiplex them with select() or handle them in separate threads.
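A sketch of that two-socket fallback, assuming POSIX select() (Winsock's select() works similarly, but with SOCKET handles and WSAStartup()); port 5000 is a placeholder:

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

/* Create and bind a UDP socket for one address family on the given
 * port; returns the descriptor or -1 on failure. */
static int make_udp_socket(int family, unsigned short port)
{
    int fd = socket(family, SOCK_DGRAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_storage ss;
    socklen_t len;
    memset(&ss, 0, sizeof(ss));

    if (family == AF_INET) {
        struct sockaddr_in *a = (struct sockaddr_in *)&ss;
        a->sin_family      = AF_INET;
        a->sin_addr.s_addr = htonl(INADDR_ANY);
        a->sin_port        = htons(port);
        len = sizeof(*a);
    } else {
        struct sockaddr_in6 *a = (struct sockaddr_in6 *)&ss;
        int on = 1;
        /* Keep the v6 socket v6-only so the two binds don't clash on
         * stacks where dual-stack is the default; ignore failure on
         * stacks that lack the option. */
        setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on, sizeof(on));
        a->sin6_family = AF_INET6;
        a->sin6_addr   = in6addr_any;
        a->sin6_port   = htons(port);
        len = sizeof(*a);
    }

    if (bind(fd, (struct sockaddr *)&ss, len) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    int fd4 = make_udp_socket(AF_INET,  5000);  /* placeholder port */
    int fd6 = make_udp_socket(AF_INET6, 5000);
    if (fd4 < 0 || fd6 < 0) { perror("make_udp_socket"); return 1; }

    int maxfd = fd4 > fd6 ? fd4 : fd6;
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd4, &rfds);
        FD_SET(fd6, &rfds);
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        char buf[1500];
        struct sockaddr_storage peer;  /* big enough for either family */
        socklen_t plen = sizeof(peer);
        if (FD_ISSET(fd4, &rfds))
            recvfrom(fd4, buf, sizeof(buf), 0,
                     (struct sockaddr *)&peer, &plen);
        plen = sizeof(peer);
        if (FD_ISSET(fd6, &rfds))
            recvfrom(fd6, buf, sizeof(buf), 0,
                     (struct sockaddr *)&peer, &plen);
    }
    return 0;
}
```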

Solution 2:

The socket API is governed by IETF RFCs and should be the same on all platforms, including Windows, with respect to IPv6.

For IPv4/IPv6 applications it's ALL about getaddrinfo() and getnameinfo(). getaddrinfo() is a genius: it looks at DNS, port names, and the capabilities of the client to resolve the eternal question of “can I use IPv4, IPv6, or both to reach a particular destination?” Or, if you're going the dual-stack route and want it to return IPv4-mapped IPv6 addresses, it will do that too.

It provides a sockaddr * structure that can be plugged directly into bind(), recvfrom(), and sendto(), plus the address family for socket()… In many cases this means no messy sockaddr_in(6) structures to fill out and deal with.
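A sketch of that flow for a UDP sender, with a placeholder hostname and port; getaddrinfo() hands back a ready-made list of candidate sockaddrs (IPv6 and/or IPv4), and the loop tries each until one works, so the same code handles both families:

```c
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;   /* IPv4, IPv6, or both */
    hints.ai_socktype = SOCK_DGRAM;

    /* Placeholder host and service. */
    int rc = getaddrinfo("server.example.com", "5000", &hints, &res);
    if (rc != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
        return 1;
    }

    int fd = -1;
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        /* ai->ai_addr plugs straight into sendto(); no sockaddr_in(6)
         * structure filled out by hand. */
        if (sendto(fd, "ping", 4, 0, ai->ai_addr, ai->ai_addrlen) >= 0)
            break;
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd >= 0 ? 0 : 1;
}
```

For the server side, pass NULL as the host and set AI_PASSIVE in hints.ai_flags; the resulting address is ready to hand to bind().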

For UDP implementations I would be careful about setting up dual-stack sockets or, more generally, binding to all interfaces (INADDR_ANY). The classic issue: when the socket is not locked down (see bind()) to a specific address and the machine has multiple interfaces, responses may leave from a different address than the one the request was sent to, at the whim of the OS routing table. This confuses application protocols, especially any with authentication requirements.
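If source-address stability matters, one mitigation is to bind each socket to a concrete local address rather than the wildcard. A minimal sketch; the address and port are placeholders:

```c
#include <string.h>
#include <sys/socket.h>
#include <netdb.h>
#include <unistd.h>

/* Bind a UDP socket to one specific local address instead of the
 * wildcard, so replies always originate from that address.
 * "192.0.2.10" and "5000" stand in for one of the host's addresses
 * and the service port. */
int bind_to_one_address(void)
{
    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;

    if (getaddrinfo("192.0.2.10", "5000", &hints, &res) != 0)
        return -1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && bind(fd, res->ai_addr, res->ai_addrlen) < 0) {
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;
}
```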

For UDP implementations where this is not a problem, or for TCP, dual-stack sockets can save a lot of time when IPv*-enabling your system. Be careful not to rely on dual stack where it's not absolutely necessary, as there is no shortage of reasonable platforms (old Linux, BSD, Windows 2003) deployed with IPv6 stacks that are not capable of dual-stack sockets.

Solution 3:

I've been playing with this under Windows, and it actually does appear to be a security issue there: if you bind to the loopback address, the IPv6 socket is correctly bound to [::1], but the mapped IPv4 socket is bound to INADDR_ANY, so your (supposedly) safely local-only app is actually exposed to the world.
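A sketch of the test, written with POSIX headers (adapt with winsock2.h and WSAStartup() on Windows): bind a dual-stack socket to [::1] only, then probe it over plain IPv4. If the probe datagram arrives, the IPv4 side of the socket was bound wider than loopback. Port 5000 is a placeholder, and a thorough check of exposure would also probe from another machine:

```c
#include <stdio.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>

int main(void)
{
    /* Dual-stack socket bound to the IPv6 loopback only. */
    int fd6 = socket(AF_INET6, SOCK_DGRAM, 0);
    int off = 0;
    setsockopt(fd6, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

    struct sockaddr_in6 a6;
    memset(&a6, 0, sizeof(a6));
    a6.sin6_family = AF_INET6;
    a6.sin6_addr   = in6addr_loopback;  /* [::1] */
    a6.sin6_port   = htons(5000);
    if (bind(fd6, (struct sockaddr *)&a6, sizeof(a6)) < 0) {
        perror("bind");
        return 1;
    }

    /* Probe it from a separate plain-IPv4 socket. */
    int fd4 = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in a4;
    memset(&a4, 0, sizeof(a4));
    a4.sin_family = AF_INET;
    a4.sin_port   = htons(5000);
    inet_pton(AF_INET, "127.0.0.1", &a4.sin_addr);
    sendto(fd4, "probe", 5, 0, (struct sockaddr *)&a4, sizeof(a4));

    fd_set rfds;
    FD_ZERO(&rfds);
    FD_SET(fd6, &rfds);
    struct timeval tv = { 1, 0 };  /* 1-second wait */
    if (select(fd6 + 1, &rfds, NULL, NULL, &tv) > 0)
        printf("IPv4 probe arrived: mapped side is NOT [::1]-only\n");
    else
        printf("IPv4 probe did not arrive\n");
    return 0;
}
```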

Solution 4:

The RFCs don't really specify the existence of the IPV6_V6ONLY socket option, but, if it is absent, the RFCs are pretty clear that the implementation should be as though that option is FALSE.

Where the option is present, I would argue that it should default to FALSE but, for reasons passing understanding, BSD and Windows implementations default it to TRUE. There is a bizarre claim that this is a security concern: an unknowing IPv6 programmer could bind to IN6ADDR_ANY thinking they were getting only IPv6 and accidentally accept an IPv4 connection, causing a security problem. I think this is both far-fetched and absurd, in addition to being a surprise to anyone expecting an RFC-compliant implementation.

In the case of Windows, non-compliance won't usually be a surprise. In the case of BSD, this is unfortunate at best.
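The practical upshot: given the divergent defaults, portable code should set IPV6_V6ONLY explicitly to whichever value it wants rather than relying on any platform's default. A minimal sketch:

```c
#include <sys/socket.h>
#include <netinet/in.h>

/* Set IPV6_V6ONLY explicitly on an AF_INET6 socket. Returns 0 on
 * success; fails on stacks that lack the option (e.g. Windows XP),
 * in which case the caller should fall back to two sockets. */
int set_v6only(int fd, int want_v6only)
{
    return setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY,
                      &want_v6only, sizeof(want_v6only));
}
```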