Question

In IPv6 networking, the IPV6_V6ONLY flag is used to ensure that a socket will only use IPv6, and in particular that IPv4-to-IPv6 mapping won't be used for that socket. On many OSes the IPV6_V6ONLY flag is not set by default, but on some OSes (e.g. Windows 7) it is set by default.

My question is: What was the motivation for introducing this flag? Is there something about IPv4-to-IPv6 mapping that was causing problems, and thus people needed a way to disable it? It would seem to me that if someone didn't want to use IPv4-to-IPv6 mapping, they could simply not specify an IPv4-mapped IPv6 address. What am I missing here?

Solution

I don't know why it would be the default, but it's the kind of flag I would always set explicitly, no matter what the default is.

As for why it exists in the first place, I guess it allows you to keep existing IPv4-only servers running and just start new ones on the same port that serve only IPv6 connections. Or the new server can simply proxy clients to the old one, making IPv6 functionality easy and painless to add to old services.

OTHER TIPS

Not all IPv6-capable platforms support dual-stack sockets, so the question becomes: how do applications that need to maximize IPv6 compatibility either know that dual-stack is supported or bind separately when it is not? The only universal answer is IPV6_V6ONLY.
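
As a minimal sketch of that idea (assuming a POSIX-like system), a program can read the platform's default with getsockopt() and then state its intent explicitly with setsockopt(), so behaviour no longer depends on the OS:

/* Sketch: query the platform default for IPV6_V6ONLY, then set it
 * explicitly so the program behaves the same on every OS.
 * Error handling is reduced to perror() for brevity. */
#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int s = socket(AF_INET6, SOCK_STREAM, 0);
    if (s < 0) { perror("socket"); return 1; }

    int v6only = -1;
    socklen_t len = sizeof(v6only);
    if (getsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &v6only, &len) == 0)
        printf("platform default IPV6_V6ONLY = %d\n", v6only);

    /* Do not rely on the default: say what you want. */
    int one = 1;
    if (setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one)) < 0)
        perror("setsockopt(IPV6_V6ONLY)");

    close(s);
    return 0;
}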

An application that ignores IPV6_V6ONLY, or that was written before dual-stack-capable IP stacks existed, may find that binding separately to IPv4 fails in a dual-stack environment, because the IPv6 dual-stack socket also binds to IPv4 and prevents the IPv4 socket from binding. The application may also not be expecting IPv4 over IPv6, due to protocol- or application-level addressing concerns or IP access controls.

This or similar situations most likely prompted Microsoft et al. to default to 1 even though RFC 3493 declares 0 to be the default; 1 theoretically maximizes backwards compatibility. Notably, Windows XP/2003 does not support dual-stack sockets at all.

There is also no shortage of applications that unfortunately need to pass lower-layer information to operate correctly, so this option can be quite useful when planning an IPv4/IPv6 compatibility strategy that best fits the requirements and the existing codebases.

The reason most often mentioned is for the case where the server has some form of ACL (Access Control List). For instance, imagine a server with rules like:

Allow 192.0.2.4
Deny all

It runs on IPv4. Now, someone runs it on a machine with IPv6 and, depending on some parameters, IPv4 requests are accepted on the IPv6 socket, appear as the mapped address ::ffff:192.0.2.4 and are no longer matched by the first ACL. Suddenly, access would be denied.
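
For illustration, here is a hedged sketch (the peer address and the ACL hooks are placeholders) of how a server on a dual-stack socket has to recognise the mapped form explicitly before applying IPv4 rules:

/* Sketch: detect an IPv4 client arriving on a dual-stack IPv6 socket.
 * IN6_IS_ADDR_V4MAPPED spots ::ffff:a.b.c.d addresses so that an ACL
 * written for plain IPv4 can still be applied to them. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>

static void check_peer(const struct sockaddr_in6 *peer)
{
    if (IN6_IS_ADDR_V4MAPPED(&peer->sin6_addr)) {
        struct in_addr v4;
        /* The IPv4 address lives in the last 4 bytes of the IPv6 address. */
        memcpy(&v4, &peer->sin6_addr.s6_addr[12], sizeof(v4));
        char buf[INET_ADDRSTRLEN];
        printf("IPv4 client via mapped address: %s\n",
               inet_ntop(AF_INET, &v4, buf, sizeof(buf)));
        /* ...apply the IPv4 ACL here... */
    } else {
        char buf[INET6_ADDRSTRLEN];
        printf("native IPv6 client: %s\n",
               inet_ntop(AF_INET6, &peer->sin6_addr, buf, sizeof(buf)));
        /* ...apply the IPv6 ACL here... */
    }
}

int main(void)
{
    /* Demonstration only: fabricate the peer address accept() would return. */
    struct sockaddr_in6 peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin6_family = AF_INET6;
    inet_pton(AF_INET6, "::ffff:192.0.2.4", &peer.sin6_addr);
    check_peer(&peer);
    return 0;
}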

Being explicit in your application (using IPV6_V6ONLY) would solve the problem, whatever default the operating system has.

For Linux, when writing a service that listens on both IPv4 and IPv6 sockets on the same service port, e.g. port 2001, you MUST call setsockopt(s, SOL_IPV6, IPV6_V6ONLY, &one, sizeof(one)); on the IPv6 socket. If you do not, the bind() operation for the IPv4 socket fails with "Address already in use".
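
A condensed sketch of that pattern (error handling trimmed to perror(); the option level is written as IPPROTO_IPV6, which on Linux has the same value as SOL_IPV6):

/* Sketch: listen on port 2001 with one IPv4 and one IPv6 socket.
 * Without IPV6_V6ONLY on the IPv6 socket, the second bind() fails on
 * Linux with EADDRINUSE, because the dual-stack socket already claims
 * the IPv4 wildcard as well. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    int one = 1;

    /* IPv6 listener, restricted to IPv6 only. */
    int s6 = socket(AF_INET6, SOCK_STREAM, 0);
    setsockopt(s6, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));
    struct sockaddr_in6 a6;
    memset(&a6, 0, sizeof(a6));
    a6.sin6_family = AF_INET6;
    a6.sin6_addr = in6addr_any;
    a6.sin6_port = htons(2001);
    if (bind(s6, (struct sockaddr *)&a6, sizeof(a6)) < 0)
        perror("bind IPv6");

    /* Separate IPv4 listener on the same port. */
    int s4 = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in a4;
    memset(&a4, 0, sizeof(a4));
    a4.sin_family = AF_INET;
    a4.sin_addr.s_addr = htonl(INADDR_ANY);
    a4.sin_port = htons(2001);
    if (bind(s4, (struct sockaddr *)&a4, sizeof(a4)) < 0)
        perror("bind IPv4"); /* "Address already in use" without IPV6_V6ONLY */

    listen(s6, SOMAXCONN);
    listen(s4, SOMAXCONN);
    /* ...accept() on both sockets from here on... */
    return 0;
}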

There are plausible ways in which the (poorly named) "IPv4-mapped" addresses can be used to circumvent poorly configured systems or bad stacks, and even a well-configured system might just require onerous amounts of bug-proofing because of them. A developer might wish to use this flag to make their application more secure by not utilizing this part of the API.

See: http://ipv6samurais.com/ipv6samurais/openbsd-audit/draft-cmetz-v6ops-v4mapped-api-harmful-01.txt

Imagine a protocol that includes a network address in the conversation, e.g. the data channel for FTP. When using IPv6 you are going to send the IPv6 address; if the recipient happens to be an IPv4-mapped address, it will have no way of connecting to that address.

There's one very common example where the duality of behavior is a problem. The standard getaddrinfo() call with the AI_PASSIVE flag accepts a nodename parameter and returns a list of addresses to listen on. A special value, a NULL nodename, is accepted and implies listening on the wildcard addresses.

On some systems 0.0.0.0 and :: are returned in this order. When dual-stack sockets are enabled by default and you don't set IPV6_V6ONLY on the socket, the server binds to 0.0.0.0 and then fails to bind to the dual-stack :: address, and therefore (1) only works on IPv4 and (2) reports an error.

I would consider that order wrong, as IPv6 is expected to be preferred. But even when you first bind to the dual-stack :: and then to the IPv4-only 0.0.0.0, the server still reports an error for the second call.

I personally consider the whole idea of a dual-stack socket a mistake. In my project I would rather always explicitly set IPV6_V6ONLY to avoid that. Some people apparently saw it as a good idea but in that case I would probably explicitly unset IPV6_V6ONLY and translate NULL directly to 0.0.0.0 bypassing the getaddrinfo() mechanism.
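
For completeness, here is a sketch of the usual getaddrinfo() listening loop, with IPV6_V6ONLY forced on every AF_INET6 result so that both wildcard binds succeed regardless of the order in which the addresses come back (the port number and error handling are illustrative only):

/* Sketch: bind a listening socket for every address returned by
 * getaddrinfo(NULL, ..., AI_PASSIVE). Setting IPV6_V6ONLY on the
 * AF_INET6 sockets keeps the 0.0.0.0 and :: binds from colliding,
 * whichever order the system returns them in. */
#include <netdb.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;      /* wildcard addresses */

    if (getaddrinfo(NULL, "2001", &hints, &res) != 0)
        return 1;

    for (ai = res; ai != NULL; ai = ai->ai_next) {
        int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (s < 0)
            continue;

        if (ai->ai_family == AF_INET6) {
            int one = 1;
            setsockopt(s, IPPROTO_IPV6, IPV6_V6ONLY, &one, sizeof(one));
        }

        if (bind(s, ai->ai_addr, ai->ai_addrlen) < 0) {
            perror("bind");
            close(s);
            continue;
        }
        listen(s, SOMAXCONN);
        /* ...keep s in a list and poll()/select() over all of them... */
    }
    freeaddrinfo(res);
    return 0;
}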

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow