Any technical reason to install "real" Windows to run a Hyper-V-only server?
Please ignore licensing questions here.
Is there any technical reason not to use only Hyper-V Server (the free edition) to run a Hyper-V cluster? From a pure feature comparison, Hyper-V Server looks as capable as the full server editions for a pure Hyper-V role (the pre-R2 version did lack some features).
Solution 1:
From a capability standpoint, Hyper-V Server 2008 R2 is essentially the same as the Hyper-V role on a Windows Server 2008 R2 installation.
From a "technical limitations" standpoint, going with Hyper-V server is actually better than going with Windows Server 2008 R2 Standard. Hyper-V Server R2 supports up to 1TB of memory and 8 (multi-core) CPUs. Windows Server 2008 Standard supports up to 32GB of memory and 4 (multi-core) CPUs.
If you compare memory/CPUs to Windows Server 2008 Enterprise or Datacenter, then Hyper-V starts to fall a little short. Enterprise supports 2TB memory and 8 CPUs, Datacenter supports 2TB and 64 CPUs.
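If you want to sanity-check a given host against those ceilings, WMI exposes the socket count and installed RAM. A minimal check from an elevated command prompt (a sketch; TotalPhysicalMemory is reported in bytes):

    rem Show physical CPU sockets and installed memory on this host
    wmic computersystem get NumberOfProcessors,TotalPhysicalMemory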
I know you said to ignore licensing - but the other big factor is the guest OS licenses included with Windows Server 2008. Hyper-V Server includes none, Standard includes 1 free guest OS license, Enterprise includes 4, and Datacenter allows unlimited guests. You need to compare the cost of purchasing guest OS licenses separately against purchasing the "upgraded" host OS and using its included licenses.
Solution 2:
As Sam Cogan mentioned, it will be a Core install. Make sure all of your drivers, network-card utilities, backup solutions, etc. work on Core before going ahead.
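For instance, with no GUI, driver installation on Core happens from the command line via the driver store. A rough sketch, assuming your NIC driver's INF sits at C:\drivers\nic\ (a placeholder path):

    rem List driver packages already staged in the driver store
    pnputil -e

    rem Stage and install a driver package from its INF file (placeholder path)
    pnputil -i -a C:\drivers\nic\yourdriver.inf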
Also be aware that managing Hyper-V Server outside of a domain (i.e., in a workgroup) requires a lot of extra configuration (I believe the same applies to Server 2008 Core, but I only have experience with Hyper-V Server).
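To give a flavor of that extra configuration: the usual workgroup approach is John Howard's HVRemote script on both ends, plus cached credentials on the management workstation. A sketch from memory of the HVRemote docs (HVSERVER and LocalAdmin are placeholders; run cscript hvremote.wsf /? to confirm the exact switches):

    rem --- On the Hyper-V Server: grant a local account remote management rights ---
    cscript hvremote.wsf /add:LocalAdmin

    rem --- On the Vista/Win7 management client ---
    rem Allow the anonymous DCOM access Hyper-V Manager needs in a workgroup
    cscript hvremote.wsf /anondcom:grant

    rem Cache credentials for the server so Hyper-V Manager can authenticate
    cmdkey /add:HVSERVER /user:HVSERVER\LocalAdmin /pass

    rem Verify the configuration from the client
    cscript hvremote.wsf /show /target:HVSERVER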