Dual-boot or virtual machine for Linux programmer that does some Visual Studio development?

Solution 1:

Others have covered the rest of your question, but I see they have left this part unaddressed:

But if I boot into Linux, I can boot the Windows partition as a virtual machine. Is this possible? And/or: If I boot into Windows, I can boot the Linux partition as a virtual machine. Is this possible?

The short answer is yes.

The long answer is yes, but with a number of caveats and implications.

Until this January (when we got new machines), I had my work computer configured either to dual boot or to use Linux as a host for a Windows 7 VM which directly accessed the Windows partition. I never bothered trying to use Windows as a host for a direct-disk Linux VM, but that is also possible. My computer used legacy BIOS/MBR; I am not sure how to do this if you use UEFI/GPT.

I chose VirtualBox to do it, but from what I understand it should also be possible with VMware.

Performance was perfectly acceptable for my purposes; I gave Windows one CPU core, 2 GB of memory, one monitor, and I don't remember how much video memory. The VM was capable of playing video, working with my Java GUI application, and handling many rich-content websites.

I'll discuss the weirdness before going into details:

  • Switching between native-boot and VM-mode Windows 7 triggered Windows' warning that my copy of Windows might not be genuine. Depending on your license, MS may consider this an installation on two pieces of hardware and therefore a breach of the license. I also needed to "repair" my installation because many of the low-level drivers are different for the real hardware and the VM "hardware."

  • Every time Windows Update ran (or any other large disk I/O operation), my hard disk was crushed under load and everything on both machines became dog slow. Moving each OS to its own dedicated hard drive fixed this problem nicely; using a solid state drive would have also solved the issue.

  • For some reason, Ubuntu occasionally didn't like waking up after suspending (maybe 20% of the time), which was the equivalent of a hard power-off for my Windows VM. As a result I would power down the VM every evening before suspending the host.

  • I had to be very careful never to attempt to mount the Windows partition in Linux, to prevent possible data corruption/loss. I quickly added a udev rule to prevent this (a sketch follows this list).
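
For that last point, here is a minimal sketch of the kind of udev rule I mean. The device name (sda2), the rule file name, and the environment keys are illustrative assumptions, not my exact rule, so adjust them to your own Windows partition and udisks version.

    # Hide the Windows partition (assumed to be /dev/sda2) from udisks so the desktop
    # never offers to mount it; the rule file name is an arbitrary choice.
    echo 'KERNEL=="sda2", ENV{UDISKS_IGNORE}="1", ENV{UDISKS_PRESENTATION_HIDE}="1"' \
        | sudo tee /etc/udev/rules.d/99-hide-windows.rules
    sudo udevadm control --reload-rules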

Alright, if that doesn't sound too bad, here are the details: The most useful reference I found to do this is this blog: http://www.rajatarya.com/website/taming-windows-virtualbox-vm. Here are the steps summarized:

  • RTFM. Really. You will be using commands that could corrupt your file system if done wrong.

  • Find the partitions.

  • Change the permissions for your partitions (alternatively you could boot your VM using sudo, but then any files created on the host/guest shared drive will be owned by root).

  • Create an MBR so the guest doesn't try using the normal bootloader.

  • Use a VirtualBox internal command to create the vmdk image to use for the VM (see the sketch after this list).

  • Create the VM. Have it use the vmdk you just created.

  • Boot and repair your installation.
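
Purely to illustrate those last few steps, here is a rough sketch using VirtualBox's raw-disk support. The device names, partition number, MBR file, and VM name are assumptions you will need to replace with your own, and (per the first bullet) read the manual before running anything like this.

    # Give your user access to the disk devices for this session (example device names).
    sudo chown $USER /dev/sda /dev/sda2

    # Map only the Windows partition into a vmdk, with a custom MBR file (prepared as
    # described in the blog post) so the guest doesn't go through the normal bootloader.
    VBoxManage internalcommands createrawvmdk -filename ~/win7raw.vmdk \
        -rawdisk /dev/sda -partitions 2 -mbr ~/win7.mbr

    # Create the VM and attach the raw vmdk to it.
    VBoxManage createvm --name "win7-raw" --ostype Windows7_64 --register
    VBoxManage storagectl "win7-raw" --name "SATA" --add sata
    VBoxManage storageattach "win7-raw" --storagectl "SATA" --port 0 --device 0 \
        --type hdd --medium ~/win7raw.vmdk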

I would expect many of the same issues when using a Windows host, though Linux would probably handle the changing hardware more gracefully. It might actually be easier to use a shared /home (and maybe /opt) partition and simply have different root partitions for the native and guest machines.

Solution 2:

I generally do the opposite: my home system is a Windows 7 machine and I run various flavors of Linux as VMs. Since Linux can function quite nicely with limited resources, this works for me even though I only have 4 GB of RAM (mind you, this does mean I run a single VM at a time and typically give it 1024 MB of RAM at most, to leave plenty for Windows).

Your 8 GB of RAM should allow you to run a Windows VM quite easily. I think the only big question is what kind of results you will want to measure during development. I would be hesitant to rely on any performance/throughput metrics for software you develop in a VM (unless it will be deployed to a similar VM environment with the same specs across the board). Otherwise you should be good to go.

And to touch on comments in the other thread: naturally, if you go in knowing that you will be developing applications that require a large amount of resources, then optimizing for that is the way to go (in those cases dual boot would certainly be preferable to a VM).

I've used both VMware and VirtualBox, and I have not seen any real performance hit with either. I have not gone to the extent of benchmarking the VMs themselves; I simply have not noticed any negative impact from either platform (I still use VirtualBox at home and VMware at work).

Solution 3:

It depends on how you work and how you want the OS to feel.

Dual booting means restarting and booting whenever you wish to change OS, and this takes time. Since you freelance: do you work 8 hours straight on a project without doing "your" stuff ("your stuff" being whatever you do on your Linux boot)?

If you work 8 hours in a row in Windows and then spend the rest of the day in Linux, use dual boot. If you work 30 minutes, then want to do something in Linux, then switch back to working for 3 hours, then into Linux again, and so on, I would choose a VM.

If you are unsure, do both: dual boot, and when in Windows run a VM. You will see a clear difference between the two, and you WILL prefer one over the other.

I personally only use a VM for testing new Linux distros and sometimes as a fake server when developing (being able to screw up in a VM is priceless, and so much less hassle than doing it on your actual hard drive). A VM also lags and feels less responsive; I need my OS to feel responsive and smooth, and a VM is lacking in this area. So I prefer dual boot.

There is also the issue of sharing files. It's more work to share on a dual boot (imagine editing a file in the Windows boot, then booting into Linux only to realize that you missed a line and have to reboot into Windows. The nightmare!), whereas with a VM you can use NFS or Samba to share files between the OSes in real time, no problem.
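
For example, a hedged sketch of mounting a Windows/Samba share from the Linux side; the host address, share name, and mount point are placeholders, and it assumes the cifs-utils package is installed:

    # Mount the share exported by the Windows side so both OSes see the same files live.
    sudo mkdir -p /mnt/projects
    sudo mount -t cifs //192.168.56.1/projects /mnt/projects \
        -o username=yourname,uid=$(id -u),gid=$(id -g)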

Also, about there not being enough RAM, as Aboba said: rubbish, I say. I run 4 GB and a VM in Windows 8. Do remember, though, what resources your project needs.

Solution 4:

Or how about this: you're interested in .NET/C#, so why not just use Mono on Linux? It's very cross-platform, in the same sense that Java is.
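
As a minimal sanity check of that idea (a sketch assuming a Debian/Ubuntu-style system; the package and file names here are just illustrative):

    sudo apt-get install mono-complete   # Mono runtime plus the mcs C# compiler
    cat > hello.cs <<'EOF'
    using System;
    class Hello { static void Main() { Console.WriteLine("Hello from Mono"); } }
    EOF
    mcs hello.cs     # compiles to hello.exe (CIL bytecode)
    mono hello.exe   # runs it on the Mono runtime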

Solution 5:

Personally, I have had many positive experiences taking the halfway house: use a hypervisor such as Xen, and you can have both Linux and Windows running under a lightweight micro-kernel hypervisor. Neither takes a significant performance hit, and this configuration is far more flexible, allowing you to easily reallocate resources between the two at any time.

This nearly matches the dream setup you mentioned, as you can boot or shut down the two OSes independently at any time to reallocate resources to the other, and you will not be able to notice any performance hit (in reality it may be a fraction of a percent slower in some cases, but you will not notice this unless you are specifically benchmarking to test it).

The only catch is that certain features require your CPU to support hardware virtualization; however, every i7 does (AFAIK), so this should not be an issue for you.
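
If you want to double-check, a quick way to look for the hardware virtualization flags on Linux (nothing here is specific to this setup):

    # A non-zero count means the CPU advertises Intel VT-x (vmx) or AMD-V (svm).
    egrep -c '(vmx|svm)' /proc/cpuinfo
    # Or, more readably:
    lscpu | grep -i virtualization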

EDIT:

Xen contains its own microkernel; Debian is not needed to run it. The only reason one would need a Linux system is to reconfigure Xen, as the configuration tools are Linux programs which you run in one of the guests (guest is the term Xen uses for a VM). You can have just Linux, just Windows, or both running at any time, but you will need Linux running in order to change Xen's settings, so it is advisable to leave a small Linux running in the background to do this when needed.
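
To make that concrete, here is a rough sketch of what defining and controlling a Windows guest looks like with the xl toolstack, run from that small privileged Linux guest (dom0). The volume, bridge name, and sizes are illustrative assumptions, not a tested configuration.

    # Hypothetical HVM (fully virtualized) domain definition for the Windows guest.
    sudo tee /etc/xen/windows7.cfg <<'EOF'
    name    = "windows7"
    builder = "hvm"
    memory  = 4096
    vcpus   = 2
    disk    = [ 'phy:/dev/vg0/win7,hda,w' ]
    vif     = [ 'bridge=xenbr0' ]
    vnc     = 1
    EOF

    sudo xl create /etc/xen/windows7.cfg   # boot the Windows guest
    sudo xl list                           # see which guests are running
    sudo xl shutdown windows7              # stop it and free its RAM/CPUs for Linux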

Hardware can be handled in a few ways. Xen provides a set of virtual devices which allow a piece of hardware, such as a network card or sound card, to be used by multiple guests at once.

For hardware which cannot work this way, or which is only needed by one guest at a time, Xen has pass-through devices. This means you can choose which guest has direct access to the piece of hardware and can switch this at any time. This can be used to give a specific guest true access to the graphics card if you need high-speed accelerated rendering without any virtualisation overhead.
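
A hedged example of how that hand-over looks with the xl toolstack; the PCI address and guest names are placeholders, and graphics cards in particular usually need to be assigned before the guest boots (via pci = [...] in its config) rather than hot-plugged:

    sudo xl pci-assignable-add 01:00.0       # detach the device from dom0, mark it assignable
    sudo xl pci-attach windows7 01:00.0      # give the Windows guest direct access
    sudo xl pci-detach windows7 01:00.0      # take it back...
    sudo xl pci-attach linux-guest 01:00.0   # ...and hand it to another guest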

It is also possible to have one guest directly using the graphics card in pass-through mode as explained above, while still viewing the others through a local VNC-like protocol which makes the screens of the other guests appear in windows (think VirtualBox or Remote Desktop) while one has direct control of the graphics card; or, of course, you can just switch between guests' screens, giving one direct access at a time.

Contrary to what you may assume about the efficiency of this setup, it is in fact one of the few ways to run both OSes at the same time while maintaining effectively native performance in both; if you do some googling, you will find that many people have had success using such a setup to run a high-performance gaming machine or other demanding systems.