Why would you use IPv6 internally?
Of course, I realize the need to go to IPv6 out on the open Internet since we are running out of addresses, but I really don't understand why there is any need to use it on an internal network. I have done zero with IPv6, so I also wonder: won't modern firewalls do NAT between internal IPv4 addresses and external IPv6 addresses?
I was just wondering, since I have seen so many people struggling with IPv6 questions here: why bother?
There is no NAT for IPv6 (not as you think of NAT, anyway). NAT was an $EXPLETIVE temporary solution to IPv4 running out of addresses (a problem which didn't actually exist, and was solved before NAT was ever necessary, but hindsight is 20/20). It adds nothing but complexity and would do little except cause headaches in IPv6 (we have so many IPv6 addresses that we unabashedly waste them). NAT66 does exist, and is meant to reduce the number of IPv6 addresses used by each host (it's normal for IPv6 hosts to have multiple addresses; IPv6 differs from IPv4 in many ways, and this is one of them).
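To put some numbers on "unabashedly waste them", here is a minimal sketch using Python's standard `ipaddress` module. The prefix is the documentation prefix 2001:db8::/32 and the host addresses are made-up illustrative values, not from any real host:

```python
import ipaddress

# A single /64 subnet already holds 2**64 addresses.
subnet = ipaddress.ip_network("2001:db8:abcd:12::/64")
print(f"{subnet} contains {subnet.num_addresses:,} addresses")
# 2001:db8:abcd:12::/64 contains 18,446,744,073,709,551,616 addresses

# Typical set of addresses one IPv6 host might hold at the same time:
host_addresses = [
    ipaddress.ip_address("fe80::1ff:fe23:4567:890a"),              # link-local
    ipaddress.ip_address("2001:db8:abcd:12::42"),                   # global, stable
    ipaddress.ip_address("2001:db8:abcd:12:a1b2:c3d4:e5f6:789a"),   # privacy/temporary
]
for addr in host_addresses:
    print(addr, "(link-local)" if addr.is_link_local else "(global)")
```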
The Internet was supposed to be end-to-end routable; that is part of the reason IPv4 was invented and why it gained acceptance. That is not to say that all addresses on the Internet were supposed to be reachable. NAT breaks both. Firewalls add layers of security by breaking reachability, but normally not at the expense of routability.
You will want IPv6 in your networks because there is no way to specify an IPv6 endpoint with an IPv4 address. The other way around does work, which is what lets IPv6-only networks still reach the IPv4 Internet using DNS64 and NAT64. It's actually possible today to ditch IPv4 altogether, though setting that up is a bit of a hassle. It would be possible to proxy from internal IPv4 addresses to IPv6 servers, but adding and configuring a proxy server adds configuration, hardware, and maintenance costs to the network; usually much more than simply enabling IPv6.
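For the curious, the core trick behind DNS64 is simple: the resolver takes the IPv4 address from an A record and embeds it in the NAT64 well-known prefix 64:ff9b::/96 (RFC 6052), handing the client a synthetic AAAA answer that the NAT64 gateway knows how to translate. A rough sketch (the example IPv4 address is just an illustration):

```python
import ipaddress

NAT64_PREFIX = ipaddress.ip_network("64:ff9b::/96")

def synthesize_aaaa(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the last 32 bits of the NAT64 well-known prefix."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

print(synthesize_aaaa("192.0.2.10"))   # -> 64:ff9b::c000:20a
```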
NAT causes its own problems too. The router has to be capable of coordinating every connection running through it, keeping track of endpoints, ports, timeouts, and more, and all that traffic is usually funneled through that single point. Though it's possible to build redundant NAT routers, the technology is massively complex and generally expensive, whereas redundant simple routers are easy and cheap (comparatively). Also, to re-establish some of the routability, forwarding and translation rules have to be configured on the NAT device, and that still breaks protocols which embed IP addresses, such as SIP. UPnP, STUN, and other protocols were invented to help with this problem too: more complexity, more maintenance, more that can go wrong.
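To make "keeping track of endpoints, ports, timeouts" concrete, here is a deliberately toy sketch of the per-connection state a NAT box has to maintain. Real implementations (Linux conntrack, for example) track far more than this, and every name and value below is hypothetical:

```python
import time

class NatTable:
    """Minimal outbound port-translation table with idle timeouts."""

    def __init__(self, timeout: float = 300.0):
        self.timeout = timeout
        self.mappings = {}          # (int_ip, int_port, proto) -> (ext_port, last_seen)
        self.next_ext_port = 40000  # next external port to hand out

    def translate_outbound(self, int_ip: str, int_port: int, proto: str) -> int:
        key = (int_ip, int_port, proto)
        if key not in self.mappings:
            self.mappings[key] = (self.next_ext_port, time.monotonic())
            self.next_ext_port += 1
        ext_port, _ = self.mappings[key]
        self.mappings[key] = (ext_port, time.monotonic())   # refresh the idle timer
        return ext_port

    def expire(self):
        """Drop mappings that have been idle longer than the timeout."""
        now = time.monotonic()
        self.mappings = {k: v for k, v in self.mappings.items()
                         if now - v[1] < self.timeout}

nat = NatTable()
print(nat.translate_outbound("10.0.0.5", 51515, "tcp"))   # -> 40000
```

Every packet in both directions has to be matched against (and rewritten from) this table, which is exactly why stateful, redundant NAT is so much harder than plain routing.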
Running out of internal (RFC 1918) IPv4 addresses can also be a very valid reason to go IPv6.
Comcast explained at NANOG 37 why they were going IPv6 for their management addresses:
  20 million video customers
x 2.5 STBs/customer
x 2 IP addresses/STB
--------------------
= 100 million IP addresses
And this is only for video, not data/modems.
They exhausted the RFC 1918 pools in 2005. Then they used public address pools (as NAT isn't an option for management), and finally went to IPv6 to solve their needs.
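A quick back-of-the-envelope check shows why RFC 1918 space was never going to fit (Python's `ipaddress` module just does the counting here):

```python
import ipaddress

rfc1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]
total = sum(net.num_addresses for net in rfc1918)
print(f"Total RFC 1918 addresses: {total:,}")              # 17,891,328
print(f"Needed for video alone:   {20_000_000 * 2.5 * 2:,.0f}")  # 100,000,000
```

All of RFC 1918 combined is under 18 million addresses, well short of the 100 million needed for set-top boxes alone.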
A couple of reasons:
IPv6 doesn't support broadcasting; it is replaced with multicasting. Broadcasting lets one node send traffic to all nodes on a subnet, and managing broadcast domains is a major issue in keeping large IPv4 networks running fast and smoothly. Multicasting requires that nodes which want to receive "broadcast"-style traffic actually sign up for it, so the network isn't flooded with traffic that hits every host.
IPv6 supports IPsec-style encryption natively.
IPv6 supports autoconfiguration. Hosts behind a router can configure their own addresses without the need for DHCP, although you still need a DHCP server if you want to hand out options such as DNS server, TFTP server, etc.
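For a feel of how that autoconfiguration works, here is a sketch of the classic SLAAC approach with a modified EUI-64 interface identifier: the host takes the /64 prefix from a router advertisement, splits its MAC address in half, inserts ff:fe in the middle, and flips the universal/local bit. The prefix and MAC address below are illustrative values only (modern hosts often use randomized identifiers instead):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Derive a SLAAC address from a /64 prefix and a MAC address (modified EUI-64)."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                                  # flip the universal/local bit
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
    iid = int.from_bytes(bytes(eui64), "big")
    net = ipaddress.ip_network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | iid)

print(slaac_address("2001:db8:abcd:12::/64", "00:11:22:33:44:55"))
# -> 2001:db8:abcd:12:211:22ff:fe33:4455
```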
My old job, at a large University, is exactly the kind of place that would use an IPv6 allocation internally. They were assigned an IPv4 /16 back in the day and even today are handing out public IPv4 addresses to nearly every internal client. The RFC 1918 networks were restricted to the telecom-only network and certain specialized uses (the PCI standards required RFC 1918 usage until October 2010).
Because of this, they were actively planning to use IPv6 internally as well. There were some hardware issues still to work out (the edge switches didn't support v6 well enough yet, although the core was ready). The idea was that getting v6 support at the publicly visible end (okay, the publicly responsive end) of the network would involve 70% of the work needed to deploy it to everyone, so they might as well do the extra 30% and go end-to-end with it.
Having lived with a public IP allocation for so long, our people were very aware of the adage: "just because it is public does not mean it is reachable." As Chris S said, routable does not imply reachable.
That is why at least one class of organization would deploy IPv6 internally: because they're already using non-RFC1918 IPv4 internally.