VirtualBox performance on openSUSE 12.1

Greetings.

I recently bought a 32-core monster server intended to be used for virtualization. The machine is equipped with two 16-core AMD Bulldozer processors.

I have been using VirtualBox for several years and it’s always been a very pleasant experience, up till now that is. I have installed two guests on this machine and am experiencing terrible performance; frankly it performs like sh*t.

Both VMs have been given 2 cores and 4 GB of memory, and even light loads can easily max out both cores. A simple task such as scp to one of the guests over a Gbit network makes the guest consume more or less both of its cores, yet the throughput seldom goes beyond 3 MB/s. Repeating the same operation against the host gives a more expected throughput of about 30 MB/s. I have used several different source machines when performing these scp tests.

Host, 32 cores, 64 GB Ram, Opensuse (x64) 12.1
Guest#1, 2 cores, 4 GB Ram, Opensuse (x64) 11.4
Guest#2, 2 cores, 4 GB Ram, Opensuse (x64) 12.1
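For what it’s worth, this kind of scp test can be scripted so every configuration gets measured the same way; a small sketch (the guest hostname and file path follow the ones used later in this thread and are placeholders for your own):

```shell
# Create a deterministic 100 MB test file (path is an assumption):
dd if=/dev/zero of=/tmp/100mb.file bs=1M count=100 2>/dev/null

# Copy it a few times so outliers show up; scp prints the rate per run.
# BatchMode/ConnectTimeout keep the loop from hanging on a dead host.
for i in 1 2 3; do
    scp -o BatchMode=yes -o ConnectTimeout=5 /tmp/100mb.file edna.wehay.com:/tmp \
        || echo "run $i: scp failed"
done
```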

I must have tried changing more or less every parameter available on those guests, and nothing seems to help. I have tested both the OSE and the closed-source versions, as well as at least two different releases of the closed-source version.

I’m starting to believe there is some sort of compatibility issue, and I am therefore very interested to hear whether someone with a similar setup either shares my experience or has managed to get this working.

TIA, and happy new year everyone!
Jonas

What happens if you give the vms a single core and/or less RAM?

Now that is a setup I actually have not tried.

Manjula, physical server, used as “copy source”.
Edna, VM (11.4)

One CPU

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
100mb.file 100% 98MB 2.3MB/s 00:42

1 CPU & 1GB RAM

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
100mb.file 100% 98MB 2.3MB/s 00:42

2 CPUs & 1GB RAM

Test 1

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
100mb.file 100% 98MB 3.4MB/s 00:29

2 CPUs & 1GB RAM

Test 2

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
100mb.file 100% 98MB 5.4MB/s 00:18

2 CPUs & 1GB RAM

Test 3

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
100mb.file 100% 98MB 5.4MB/s 00:18

Look at these two tests. Same config, but I waited about 10 minutes between the two runs; something’s fishy…

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
isrjo@edna.wehay.com’s password:
100mb.file 100% 98MB 14.0MB/s 00:07

isrjo@manjula:~> scp ./100mb.file edna.wehay.com:/tmp
isrjo@edna.wehay.com’s password:
100mb.file 100% 98MB 2.0MB/s 00:49

Maybe you should monitor CPU and memory usage, among other things, on the guests (and maybe on the host as well). I’d recommend conky.
See this post: http://forums.opensuse.org/english/get-technical-help-here/how-faq-forums/unreviewed-how-faq/464737-easy-configuring-conky-conkyconf.html
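If conky feels like overkill for a quick look, the stock procps tools give a rough picture from any terminal; a sketch:

```shell
# Sample CPU, run queue and memory once per second, five times;
# a high "wa" (I/O wait) column or a long run queue points at the bottleneck.
if command -v vmstat >/dev/null 2>&1; then
    vmstat 1 5
else
    echo "vmstat not installed"
fi

# Memory snapshot in megabytes:
free -m 2>/dev/null || echo "free not installed"
```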

On Fri, 30 Dec 2011 17:06:03 +0000, swejis wrote:

> Look at these two tests. Same config, but I waited about 10 minutes between
> the two runs; something’s fishy…

This is just a guess and I’m sorry I can’t be more specific…

I seem to remember reading somewhere that VMs can take a severe
performance hit if TCP checksums and other functions are offloaded to the
network adapter. If that is your problem and you can disable those
functions on the adapter, it may help.


Kevin Boyle - Knowledge Partner

It’s worth a try:

/sbin/ethtool -K eth0 tso off
  • eth0, or whatever your NIC is.

Kevin,
Is that what you mean?
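For context, the current offload settings can be listed before changing anything; a sketch, assuming the interface is eth0 (note that lowercase -k shows settings while uppercase -K changes them):

```shell
NIC=eth0   # adjust to your interface name

if [ -x /sbin/ethtool ]; then
    # Show which offloads are currently enabled (tso, gso, gro, checksums, ...):
    /sbin/ethtool -k "$NIC" || echo "no such interface: $NIC"
    # Disable the segmentation offloads most often blamed for poor VM throughput:
    /sbin/ethtool -K "$NIC" tso off gso off gro off || echo "could not change offloads"
else
    echo "ethtool not found at /sbin/ethtool"
fi
```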

On Fri, 30 Dec 2011 18:16:02 +0000, please try again wrote:

>
> Code:
> --------------------
> /sbin/ethtool -K eth0 tso off
> --------------------
>
> Kevin,
> Is that what you mean?

Yes, I think that is it. It would have taken me quite a while to dig it
up. I haven’t had to use it myself. If I remember correctly, it needs to
be done for each VM.


Kevin Boyle - Knowledge Partner

Hi guys, thanks a lot for your efforts, I really appreciate it.

The NICs in the host are Broadcom NetXtreme II adapters (bnx2) with (at least) iSCSI offload. So that might definitely be something worth trying.

However, now that I’m concentrating some of my tests on the host itself, I’m also finding abnormal differences between runs of the same test. And the throughput between other, much older physical machines on the same network is much higher. I think this new machine has some issue, either a broken driver or a bad cable or something. I guess that needs to be resolved first; maybe that is “the” issue…

Copy between two other hosts

isrjo@moleman:~> scp ./100mb.file manjula.wehay.com:/tmp
100mb.file 100% 98MB 48.8MB/s 00:02

Copy to the “vm-host”

isrjo@moleman:~> scp ./100mb.file ft.wehay.com:/tmp
100mb.file 100% 98MB 32.6MB/s 00:03
isrjo@moleman:~> scp ./100mb.file ft.wehay.com:/tmp
100mb.file 100% 98MB 10.9MB/s 00:09

Yes, then the problem could be anywhere.

On the other hand, KVM is not very complicated to set up and would probably be happy with 32 cores (although I don’t have such a monster, so I don’t know). You can easily create a VM with one of the scripts I wrote (vm-create) in a package called vmscripts, available in my repo. By the way, the package also includes vboxlive, which lets you create and run live systems of most Linux distros on the fly in diskless VMs (maybe that would help exclude disk I/O problems in your tests).

Check here:

If you run both VirtualBox and KVM on your 32-core server, you could certainly tell us which one performs better on this monster (it would be great info for this subforum, I guess), and you would also find out whether the issue you’re having is related to VirtualBox (I doubt it though).

Sorry for this somewhat off-topic post, but I just had to check the network. I therefore installed iperf on both my test server and the VM host.

With iperf I always get roughly 1 Gbit speed, which looks like a healthy network to me. No errors found in ifconfig or ethtool statistics.

I really can’t explain why some of my scp tests suddenly drop to about 10 MB/s.


Client connecting to ft.wehay.com, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 217.75.116.20 port 33592 connected with 217.75.116.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec

Client connecting to ft.wehay.com, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 217.75.116.20 port 33609 connected with 217.75.116.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec

Client connecting to ft.wehay.com, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 217.75.116.20 port 33618 connected with 217.75.116.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec

Client connecting to ft.wehay.com, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 217.75.116.20 port 33655 connected with 217.75.116.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec

Client connecting to ft.wehay.com, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 217.75.116.20 port 33664 connected with 217.75.116.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec

Client connecting to ft.wehay.com, TCP port 5001
TCP window size: 16.0 KByte (default)

[  3] local 217.75.116.20 port 52633 connected with 217.75.116.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.10 GBytes   943 Mbits/sec
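For reference, the runs above were presumably produced with something like the following (iperf2 syntax assumed; the -w option tries a larger TCP window, which sometimes changes the picture on gigabit links):

```shell
# On the vm-host (server side):
#   iperf -s
# On the client, a 10-second TCP test against the server:
if command -v iperf >/dev/null 2>&1; then
    iperf -c ft.wehay.com -t 10 || echo "iperf run failed"
    iperf -c ft.wehay.com -t 10 -w 256k || echo "iperf run failed"
else
    echo "iperf not installed"
fi
```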

> If you run both VirtualBox and KVM on your 32-core server, you could certainly tell us which one performs better on this monster (it would be great info for this subforum, I guess), and you would also find out whether the issue you’re having is related to VirtualBox (I doubt it though).

I actually did install KVM some time ago, with the same thought of creating a reference. I haven’t read your links yet, but what I failed to find back then was how to administer KVM in a headless environment. You know, is there some virtual console such as VNC or similar? No X is installed on the server.

Of course! It’s called virt-manager. Look at the first link in my previous post. You connect to the server from anywhere and administer the VMs: start them, shut them down, open a VNC viewer (you can start virt-viewer from virt-manager). It’s a bummer that vbox is not implemented in virt-manager, because libvirt (though not every version) is able to manage VirtualBox VMs as well. So you can administer them from the libvirt shell (virsh).
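For the headless case, the libvirt shell mentioned above works entirely over an SSH terminal; a sketch (the VM name “edna” is taken from this thread and hypothetical on your setup):

```shell
if command -v virsh >/dev/null 2>&1; then
    virsh list --all || echo "could not talk to libvirtd"   # all defined vms and their state
    # virsh start edna        # boot a vm
    # virsh vncdisplay edna   # which VNC display its console is on
    # virsh shutdown edna     # ask the guest to shut down cleanly
else
    echo "virsh (libvirt client) not installed"
fi
```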

Don’t hesitate to open a new thread in the network forum to focus on this particular issue.