VMware vs physical machines

funkymatt

Diamond Member
Jun 2, 2005
3,919
1
81
I have several systems that exclusively compile code using ARM's RVDS- ARM2.0 and 3.0 specifically. I've been playing around with virtual machines on various hardware and I still cannot get the VM systems to outperform a physical box.

Test1a:
Fresh install of XP Pro SP2 on a quad core (Q6600) with 2 GB RAM. The system never went over 556 MB of RAM used. Compile time was approximately 20 minutes from start to finish. Great. Network speeds seem good... over 100 Mbit it takes about 20 seconds to copy 150 MB worth of files.
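As a rough sanity check on that copy time (a back-of-the-envelope calculation, assuming a nominal 100 Mbit/s link and ignoring protocol overhead):

```python
# Ideal transfer time for 150 MB over a 100 Mbit/s link, no overhead.
size_mb = 150                          # megabytes to copy
link_mbit_s = 100                      # nominal link rate in megabits/s
ideal_s = size_mb * 8 / link_mbit_s    # convert MB -> Mbit, divide by rate
print(ideal_s)                         # 12.0 s ideal; ~20 s observed is plausible
```

So the ~20 second physical-box copy is already close to the wire-speed limit; the 30-second VM copies are losing time in the virtual NIC path, not the network itself.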

Test1b:
Installed Fedora 8 as the host system and ran XP as a virtual machine, giving the VM 2 cores and 1 GB RAM. Compile times got about 15% worse, to 23-ish minutes. Network copy speed dropped significantly too; the same copy now takes 30 seconds.

Test1c:
Invoked 2 identical environments on the F8 box, and compile times got significantly worse: 32 minutes per build. wtf? Network copy seems to be about the same... around 30 seconds.


Test2a:
Installed VMware ESXi on a dual-processor quad-core Xeon E5410 with 16 GB RAM for the host. BEEFY system. Compile times are around 21 minutes for one instance of XP... about 22 for 2 instances. Network copy times are about the same for all VMs at 30+ seconds. Testing 3+ VM invocations right now, and so far it seems to be handling it well during the compile.

My questions:
Why are the VM sessions significantly slower than a straight install on the quad core q6600?
What can I do to speed up parallel VM sessions on the quad core q6600?
Even with the beefy server, why were compile times still about the same as on a physical box? Shouldn't they be substantially faster?
What can I do to optimize windows on the physical side? What about VM tweaking? Ideally I want to cut compilation time down as much as possible.

thanks in advance.
 

JackMDS

Elite Member
Super Moderator
Oct 25, 1999
29,540
419
126
A VM does not interact directly with the hardware; everything is done through software emulation that goes through the host's OS.

Thus VMs are always slower than direct hardware interaction.
 

Dadofamunky

Platinum Member
Jan 4, 2005
2,184
0
0
If you run 4 GB of RAM in your VM system, bump it up to 8 GB and assign more memory to each VM instance. That should help. But no VM will ever outperform a physical box. Period.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Who told you VMs would be faster than running the OS straight on the hardware? I've never read that claim before. The best you should hope for is a minimal penalty for the extra layer between your application and the hardware.

A VM on 2009 hardware like a 3.16 GHz dual core might be faster than the OS running on something like an old 2 GHz P4 single core, but even that is not guaranteed.

Some of the database-driven server apps I've run in VMs are 10 times slower than their non-VM equivalents.
 

RebateMonger

Elite Member
Dec 24, 2005
11,586
0
0
There've been many studies of the "speed costs" of VMs. Generally, the biggest loss is in disk performance, since you are dealing with a virtual hard drive INSIDE a real hard drive, requiring additional disk seeks beyond those needed for a real hard drive.
 

funkymatt

Diamond Member
Jun 2, 2005
3,919
1
81
Thanks for the replies everyone. What would be the best solution for compiling code? Right now we have 30 individual systems that each get assigned jobs. Most of these systems are at least a core2 duo and compile times are around 20 mins.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
Have you done any profiling on the build process to see the bottlenecks?

Is your build process multi-threaded? If it scales beyond 4 threads then quad cores would reduce build time. If not, can you set up systems to run more than one job on a single PC?

Is it RAM limited or not? If the process is using <1 GB then again you might be able to run multiple jobs on a single PC.

Is it disk I/O limited? The new Raptors or SCSI drives might make sense, or even SSDs.

Is it network limited? Gigabit network cards and a smart switch to partition your bandwidth would help.
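One quick way to answer the multiple-jobs-per-box question above is to launch independent jobs side by side and see whether total wall time stays near a single job's runtime. A minimal sketch (`sleep` here is just a stand-in for a real build invocation like the RVDS compile):

```python
import subprocess
import time

# Stand-in workload; replace with the real build command, e.g. ["make", "-j4"].
build_cmd = ["sleep", "1"]
jobs = 2

start = time.time()
# Launch all jobs without waiting, so they run concurrently.
procs = [subprocess.Popen(build_cmd) for _ in range(jobs)]
for p in procs:
    p.wait()
elapsed = time.time() - start

# If the jobs truly run in parallel, elapsed stays near one job's runtime;
# if it roughly doubles, the jobs are contending for a shared resource.
print(f"{jobs} parallel jobs took {elapsed:.1f} s")
```

If two real builds on one quad core take much longer than one, that points at disk or RAM contention rather than CPU.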
 

Conficio

Junior Member
Oct 14, 2004
9
0
0
If you run 30 individual systems for compilation, then there is more to the picture than (speed) performance. I'd say outright that even with an unlimited budget you will never match the performance of dedicated hardware with a VM system. There is simply a layer between your OS and the hardware that eats up some resources. It's the same as running antivirus software alongside your compile jobs.

However, if you ask yourself questions other than speed, you might come to different answers. I'd look at performance/watt and performance/$ spent.

First, if you consolidate the servers onto a few VM hosts, you might save a lot on power consumption and the costs associated with keeping so many machines running. As a reference for this line of thinking, read Anand's latest report on the Nehalem server processors.

The other factor is performance per dollar spent. Let's assume your machines cost $2,000 each with disks, etc. Then you are $60,000 down. Let's assume you can consolidate that whole load onto 6 servers running some virtualization; you get to spend $10,000 each. That means you can spend more money on the components that matter most to your application.
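The consolidation arithmetic above, spelled out (figures straight from the post):

```python
machines, cost_each = 30, 2000   # current fleet of dedicated build boxes
budget = machines * cost_each    # total spend on dedicated hardware: $60,000
servers = 6                      # consolidated virtualization hosts
per_server = budget // servers   # what each beefier host can cost: $10,000
print(budget, per_server)        # 60000 10000
```

At $10,000 per host you can afford the server-class parts (RAID controllers, gigabit NICs, more RAM) that thirty $2,000 desktops can't.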

As you say it is mostly compile jobs you are running, I'd guess that I/O performance is your bottleneck. But the components you mention are desktop/workstation parts (Q6600, 100 Mbit LAN, no RAID controller). I'd say if you spend more on good RAID controllers and fast RAID disks, you gain more than the virtualization overhead takes away. So test before you commit to the new setup.

You also mention network speeds as being important. That is somewhat surprising for a compilation job. You say you are copying around the sources or results; I'd say you can save a lot by consolidating the network onto a VM box. In other words, have the compile workhorses share the same physical disks as the file servers. Of course you can do that with a SAN too, but a SAN for 30+ machines is an expensive proposition as well. Maybe a SAN for 6 VM servers is more of an option.

Another aspect is flexibility. You can easily reconfigure your VMs with more RAM, more hard disk, etc., even as a temporary measure. Try doing that on physical machines and you are a busy man.
 