Is Windows 7 Inside Linux Just as Good as Running It as the Main OS (Especially for Graphics and Video)?

New Linux user here. I was wondering whether running Windows 7 inside Ubuntu/Linux Mint via VMware Player would be just the same as running Windows 7 as the main OS.

By "same" I particularly mean:

  • Will the graphics and video rendering quality be just as good?

  • Will there be any hardware issues, such as using HDMI or WiDi?

  • Will applications run just as smoothly as long as enough RAM is allocated?

How powerful does the machine have to be for there to be no noticeable difference? The particular specs of my machine are here: http://www.gadgetspecs.info/2011/07/asus-u46e-bal5-review-of-specs-and.html. I also have an SSD installed.

Background: I currently have the opposite setup, with Linux Mint and Ubuntu inside Windows 7, and I'm finding that the video quality is not as good as it is in Windows 7.


Solution 1:

I have appended a great deal to my answer below, but I have kept my original answer intact for reference.

TL;DR: Virtual machines are a tool. While they make it easy to use one OS within another, you have to be very much aware of what the intended primary use of your computer will be in order to make full use of the system.

Your question seems to be strongly slanted towards the graphical and interface performance possibilities of using a virtual machine and so I will answer regarding the possibilities there.

The main problem is that, in order to safely manage the guest operating system's access to devices (and thus prevent the guest OS from trampling over the host and breaking things), all the devices that you want to use must be "emulated".

What this means is that the graphics card your guest OS can see is not the same graphics card your host OS can see. You may be able to enable features like 3D rendering in the guest, but this is handled by an intermediate driver in the guest which passes the requests up to the host in a safe manner so the 3D can be rendered there.
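You can see this intermediate layer for yourself. Here is a minimal sketch, assuming a Linux guest with the `glxinfo` tool (from mesa-utils) installed, that asks OpenGL which renderer is actually in use; inside a VM you will typically see an emulated adapter name such as VMware's "SVGA3D" or the software "llvmpipe" renderer rather than your physical card. The renderer names checked below are typical values, not an exhaustive list.

```python
# Sketch: report the OpenGL renderer the guest actually sees.
# Requires the `glxinfo` command (package mesa-utils on Debian/Ubuntu).
import subprocess

def opengl_renderer() -> str:
    out = subprocess.run(["glxinfo"], capture_output=True,
                         text=True, check=True).stdout
    for line in out.splitlines():
        if "OpenGL renderer string" in line:
            return line.split(":", 1)[1].strip()
    return "unknown"

if __name__ == "__main__":
    renderer = opengl_renderer()
    print("Renderer:", renderer)
    # Typical emulated/para-virtual adapter names (an assumption, not a
    # complete list): VMware's SVGA3D, software llvmpipe, VirtualBox.
    if any(tag in renderer for tag in ("SVGA3D", "llvmpipe", "VirtualBox")):
        print("Looks like an emulated GPU, not the real card.")
```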

It is very doubtful that features like those necessary to play Blu-ray discs securely to a supported HDMI display (HDCP, for example) are emulated by the guest graphics card drivers, so this will likely not work.

Basically, anything that requires hardware support on your host is not likely to work well, if at all, in your guest. I do not know how WiDi works, but if it requires direct access to your video card's memory in order to share it to a television, then it will not work unless you use it from your host (Linux) operating system.

Other than that, in terms of performance a VM can get close to what it would be if it were the main OS, but there will always be penalties in terms of hard drive access or contention with other resources that the host is using.

In the Beginning...

In the beginning we had a computer, and that computer could only run one operating system. That operating system tended to run well only on the particular processor and other hardware in the machine, with other operating systems running badly, if at all, on the native hardware available.

For people to be able to use the software for one particular platform on another platform (for example, using pre-OS X Mac OS software on a Commodore Amiga), more than just "installing the software" was required. These two machines used completely different processor architectures and ancillary hardware. There was simply no way one OS could run on the hardware of the other machine.

Emulation

Emulation is like a cousin to virtualization: the two are related and have similar goals, and one begat the other, as it were.

What these differing hardware platforms meant was that if you wanted to use a piece of software from another OS on your machine, everything about that machine had to be analysed to find out how it worked, and then a piece of code had to be written that functioned the same way as the hardware did. This had to be done for every piece: the processor, the graphics controller, the memory controller, everything.

All these pieces were then put together, and since each piece emulated a bit of hardware, we called the result an emulated machine. We then ran an operating system on top of this emulated machine.

The problem is that this approach is slow. Quite simply, you were lucky if you could achieve 1/10th of the speed of the original hardware. You needed a machine several times faster than your target in order to run the emulated computer at anywhere near full speed.
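To make the overhead concrete, here is a toy sketch of the fetch-decode-execute loop at the heart of any emulator; the tiny instruction set is invented purely for illustration. Every single guest instruction becomes many host-level operations, which is exactly why full emulation struggles to reach even a fraction of native speed.

```python
# A toy emulator core: fetch, decode, and execute every "guest"
# instruction in software. The ISA here is made up for the example.
def run(program, regs):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]            # fetch
        if op == "mov":                   # decode + execute, all in software
            regs[a] = b
        elif op == "add":
            regs[a] += regs[b]
        elif op == "sub":
            regs[a] -= regs[b]
        elif op == "jnz":                 # jump to index b if register a != 0
            if regs[a] != 0:
                pc = b
                continue
        pc += 1
    return regs

# Sum 1..5 on the "emulated CPU": each guest instruction costs
# dozens of host instructions in dispatch alone.
program = [
    ("mov", "acc", 0),
    ("mov", "n", 5),
    ("mov", "one", 1),
    ("add", "acc", "n"),   # acc += n
    ("sub", "n", "one"),   # n -= 1
    ("jnz", "n", 3),       # loop while n != 0
]
print(run(program, {}))    # {'acc': 15, 'n': 0, 'one': 1}
```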

So what changed?

Well, here's the cool thing. Not much really. The only big change was that hardware platforms standardised. We stopped getting custom hardware for every OS and the OSes all moved to, or were created on, a single standard platform.

The components that make up a Mac these days are, by and large, the same components that make up a PC. Linux always ran on PC hardware so nothing new there.

For a good long while, emulation was still the norm if you wanted to run the software from one OS on another. Alternatively, you could dual-boot and run either operating system as you wanted, but that made it painful and annoying to go from coding in Linux to playing games in Windows.

And Then...

There came the idea that as the underlying hardware is the same, why can't both OSes share it?

We ended up with QEMU and Wine and similar software solutions. QEMU had long been a favourite for full emulation of machines, while Wine allowed Windows applications to run on Linux by trapping and patching their OS API calls and letting the code run natively on the processor.

QEMU did something similar to Wine, but did it at a much lower level. It is still effectively an emulator, but for every hardware call that was made it used a "patch and redirect" method so that the call went to its own emulated hardware platform instead. This worked because most of the code in a program does not actually involve hardware calls; most of it is a simple stream of calculations with a call at the end to display the results.

This resulted in an instantaneous speed boost for almost every program in the now not-really-emulated machine. Programs ran with slowdowns dependent more on how much they accessed the "virtual" hardware than on how well the machine could be emulated. Rather than running at 1/10th of the speed, they were now running at nearly the same speed as if they were native.
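As a back-of-the-envelope illustration (the numbers are invented for the example, not measurements of any real hypervisor), you can model this scheme as: ordinary instructions run at native cost, while each trapped hardware access pays a large software penalty. The mix of the two, not the quality of the emulator, dominates the slowdown:

```python
# Toy cost model of "patch and redirect": plain instructions cost 1,
# each trapped device access costs TRAP_COST. Both numbers are
# assumptions chosen for illustration.
TRAP_COST = 500  # assumed cycles per emulated device access

def relative_runtime(device_access_fraction: float) -> float:
    native = 1.0 - device_access_fraction
    trapped = device_access_fraction * TRAP_COST
    return native + trapped  # 1.0 == native speed

for f in (0.0, 0.0001, 0.001, 0.01):
    print(f"{f:.2%} device accesses -> {relative_runtime(f):.2f}x native runtime")
```

A compute-bound program sits near 1x, while a program that hammers the virtual hardware slows down dramatically, which matches the behaviour described above.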

So, if we're running on the processor now, why doesn't my graphics card work?

The only problem with these new virtual machines is that, by its very nature, an operating system assumes it has direct control of all the hardware in the computer, so that it can provide features like memory management and control access to hardware.

What this means, though, is that virtual machines cannot get completely away from emulation, at least in method. They still have to emulate certain functions in software; for example, a graphics card or network card must be presented to the OS running in the virtual machine so that the "guest" operating system thinks it has full control of that hardware. The main OS must (for security) guard itself against programs directly accessing hardware, and this places restrictions on the guest operating system.

In order to do this they have to emulate "virtual" pieces of hardware for everything in the computer. All the code is run natively by the processor now, so it is not slow, but each one of those pieces of virtual hardware must be written in software, and this incurs both a small penalty in performance and potentially a large penalty in terms of functionality.
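To give a feel for what "a piece of virtual hardware written in software" means, here is a minimal sketch of an emulated display adapter. The register layout is invented and does not correspond to any real card; the point is that every register access the guest driver makes ends up as a plain function call on the host.

```python
# Sketch of an emulated device: "hardware" that is nothing but
# software handlers for register reads and writes.
class VirtualDisplayAdapter:
    # Invented register offsets, for illustration only.
    REG_WIDTH, REG_HEIGHT, REG_ENABLE = 0x00, 0x04, 0x08

    def __init__(self):
        self.regs = {self.REG_WIDTH: 0, self.REG_HEIGHT: 0, self.REG_ENABLE: 0}
        self.framebuffer = b""

    def mmio_write(self, offset: int, value: int) -> None:
        """Called by the hypervisor whenever the guest writes to the
        device's address range; every access is a software round-trip."""
        self.regs[offset] = value
        if offset == self.REG_ENABLE and value:
            # Allocate a 32-bit-per-pixel framebuffer in host memory.
            self.framebuffer = bytes(
                self.regs[self.REG_WIDTH] * self.regs[self.REG_HEIGHT] * 4)

    def mmio_read(self, offset: int) -> int:
        return self.regs[offset]

# The "guest driver" programs a 640x480 display mode:
gpu = VirtualDisplayAdapter()
gpu.mmio_write(gpu.REG_WIDTH, 640)
gpu.mmio_write(gpu.REG_HEIGHT, 480)
gpu.mmio_write(gpu.REG_ENABLE, 1)
print(len(gpu.framebuffer), "bytes of emulated framebuffer")  # 1228800
```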

What that means is that your virtual graphics card cannot and will not have the same features as your real graphics card. To get the most performance, the virtual hardware can be written to support the most-used features, and 3D rendering is now possible in a virtual machine, but it's still not the same as real hardware.

What this means is that the host operating system gets the best hardware options, while the guest operating system gets generic hardware options.

A virtual machine is not as good as real hardware; it is only a tool to make it easier to work with the tools from one system on another.

So what do I do?

You have to choose what you want the main purpose of your computer to be.

If you want to play the latest games on your high-powered graphics card and use that same graphics card's power to play full 1080p movies on your 400" HDMI TV, but only occasionally want to do some Linux programming, then Windows may be your best bet, with Linux as a guest.

If you want to work on the Linux kernel, making hardware drivers for devices in your computer, and occasionally write some software for Windows and test it on a good approximation of a "standard" Windows system, then you may be better off with Linux as a host and Windows as a guest.

If you like the ease of use of a Mac but want to program for Windows (or there is a software package you want that is Windows-only) then that is an option too.

I'm not saying that Linux can't play games, or that Mac OS isn't for programmers; that would simply be a pack of lies. It's just that the one person who can say which OS is more suited to what you want to do is you.

You really have to understand what you want your machine to do first. Only then can you work out what a virtual machine can do for you.

To answer your questions:

Will the graphics and video rendering quality be just as good?

No. The emulated graphics card may provide some features of the host graphics card, but it will likely not support complex features such as hardware video acceleration or CUDA programming features.

Will there be any hardware issues, such as using HDMI or WiDi?

Again, these extra features will likely not be present in whatever emulated/virtual hardware is available.

Will applications run just as smoothly as long as enough RAM is allocated?

Most applications (so long as they do not require specific hardware features) will run nearly as fast as they would on real hardware, so long as you do not starve either the host or guest of memory.

Solution 2:

Virtual machines (VMs) always run more slowly than the host system because the guest system is a program running inside the host: it does not have direct control of the hardware, and has to ask the host to interface with devices such as your graphics card, hard drives, and memory. However, if your hardware can handle it, the slowness may not be too noticeable.

The reason for this is that a processor core can only execute one instruction stream at a time, and programs usually consist of thousands or millions of machine instructions. When the machine starts, it scans the Master Boot Record (MBR) for a bootloader, and the bootloader then starts the kernel: the main process that controls all of the hardware. Multitasking, which is rapidly switching between several tasks, allows us to run more than one program at a time, even though only one is being executed on a core at any given moment. Processors also spend much of their time waiting on memory and devices rather than performing calculations; multi-core processors let several tasks execute in parallel, reducing that idle time and speeding up the system significantly. In addition to the kernel, there is the shell (which provides an interface to users), services/daemons (processes that run in the background, e.g. to support the system, security, etc.), and applications.
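As a minimal sketch of the multitasking idea described above, here each "task" is a Python generator and the "kernel" is a loop handing out one time slice at a time, round-robin. A real kernel preempts tasks via timer interrupts, but the interleaving principle is the same:

```python
# Toy round-robin scheduler: tasks take turns, one "time slice" each,
# so several programs appear to run at once on a single core.
from collections import deque

def task(name, steps):
    for i in range(steps):
        yield f"{name}: step {i}"

def scheduler(tasks):
    ready = deque(tasks)
    while ready:
        current = ready.popleft()
        try:
            print(next(current))   # run one time slice
            ready.append(current)  # preempt and requeue
        except StopIteration:
            pass                   # task finished; drop it

scheduler([task("editor", 2), task("browser", 3), task("vm", 2)])
```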

Virtualization software is an application which, like any other application, is managed by the kernel. Thus the VM's kernel must wait for permission from the host kernel to do anything, and it will be interrupted frequently. The more processes that are running on the host system, the less execution time will be allocated to the VM, making it slower. In the worst case a VM can run several times slower than a physical machine, although with hardware virtualization support the overhead is usually far smaller.

If you are going to run games or anything like that, I would allocate plenty of RAM and as much CPU time as possible; multiple processor cores help. However, allocating too much RAM will slow the system down, because the host may be forced to swap to disk. Too little, on the other hand, will cause excessive swapping in the guest. Since Windows is hungry for resources, I would allocate at least 2 to 4 GB of RAM, but don't give more than half of your RAM to the VM.
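A rough sizing helper for that rule of thumb (at least 2 GB, never more than half of physical RAM) might look like the following sketch. It uses `os.sysconf`, so it assumes a Linux host, and the quarter-of-RAM starting point is just this answer's guideline, not a VMware recommendation:

```python
# Sketch: suggest a VM RAM allocation from total physical memory.
# Linux-only: os.sysconf with these names is not available on Windows.
import os

def suggested_vm_ram_mb(min_mb: int = 2048, max_fraction: float = 0.5) -> int:
    total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    total_mb = total // (1024 * 1024)
    # Start from a quarter of RAM, enforce the 2 GB floor,
    # and cap at half of physical memory.
    return min(max(min_mb, total_mb // 4), int(total_mb * max_fraction))

print(f"Suggested VM RAM: {suggested_vm_ram_mb()} MB")
```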

If it responds too slowly, a better option might be to dual-boot. That way both systems can fully utilize the hardware, but unfortunately you can only run one at a time. If you do this you will probably want at least three partitions: one for Linux, one for Windows, and one (or more) for your files.

Solution 3:

So: VMware Player is a Type 2 hypervisor, which means the guest sits on top of a host OS. With a Type 1 hypervisor, the virtualization platform sits directly on the hardware. It is because VMware Player is a Type 2 hypervisor that it will run slower than something on a Type 1. However, VMware Player offers the ability to customize the hardware allocated to your VM. So if you have a system with a quad-core processor and 4 GB of RAM, you can afford to offer up 1 or 2 cores and 2 GB of RAM (the minimum requirement for 64-bit Windows 7) and still have a VM that runs efficiently.

For instance, I have an XPS 14z with Windows 7 on it. I run a Windows 7 VM as well; I have allocated it 2 GB of RAM and 2 of my 4 processor cores.

So when I run programs on it (Notepad++, Transwiz, Outlook, Word, Excel, etc.) there is no noticeable slowdown. I have never tried to run intensive software on it (Photoshop, etc.), so depending on what you're using it for, a Type 2 hypervisor may or may not fit your needs. If you want to use it for gaming, it will depend on the game. I've run a few Steam games on it and haven't had any problems (I RDP from my Fedora machine to my VM sometimes), but it will really come down to the requirements of the game. For serious gaming, though, I would not use a VM at all. I game on my Windows 7 machine and then use a VM (Fedora, actually) to do other things. You want the most intensive applications to have first access to the hardware.
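Whichever way you split the hardware, it is also worth checking that the host CPU advertises hardware virtualization support (Intel VT-x or AMD-V), since that is what lets even a Type 2 hypervisor like VMware Player run guest code at near-native speed. A quick sketch for a Linux host follows; on Windows you would check the firmware settings instead:

```python
# Sketch: look for the hardware-virtualization CPU flags on Linux.
# Intel advertises "vmx"; AMD advertises "svm" in /proc/cpuinfo.
def hw_virt_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx", "svm"} & flags
    return set()

found = hw_virt_flags()
print("Hardware virtualization:",
      ", ".join(sorted(found)) if found else "not advertised")
```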