I don't think so; it appears to just be some internal bugs in the VMM software.
Right now, we use an OS-level VMM, loaded as part of the Linux kernel, to separate services on the machine. For example, the web and IRC services run in separate VMs, so if one were compromised, the other couldn't be affected, since neither VM can reach into the other. For all intents and purposes, each VM is a separate machine to the internet, even though they all run on the same physical machine.
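If it helps to picture that, here's a minimal sketch of what "separate machines" means, written against the libvirt Python bindings purely as a stand-in (our actual VMM isn't libvirt, and the 'web'/'irc' guest names are hypothetical):

    # Illustrative only: list each guest and its state. Every domain has
    # its own kernel, filesystem, and network identity, so compromising
    # 'web' gets an attacker nothing on 'irc'.
    import libvirt

    conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        print(f"{dom.name():<10} {state}")
    conn.close()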
Personally, I haven't been _that_ impressed with it -- it can be a pain when one of the VMs crashes, and it sometimes takes up to 20 minutes to restart the VM and get the affected services back online. The creator of the software claims a machine like ours could easily support 50 VMs, but ours has always seemed to bog down once more than a few are running.
Galactic and I have looked at switching to a different VMM. For example, we have considered a 'hypervisor' model that runs directly on the hardware and hosts each OS inside it; this would even allow different OSes to run on the same machine simultaneously. I feel this might be a bit more reliable, since the hypervisor isn't 'entangled' with the operating system the way our current setup is -- if one VM goes down, restarting it is as simple as rebooting the operating system it contains.
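To make that concrete, here's a minimal watchdog sketch under the same assumptions as above (libvirt as a stand-in hypervisor interface, hypothetical guest names):

    # Poll each guest; if its OS has gone down, boot just that guest again.
    import time
    import libvirt

    GUESTS = ["web", "irc"]   # hypothetical domain names, one per service

    def watchdog(poll_seconds=60):
        conn = libvirt.open("qemu:///system")
        try:
            while True:
                for name in GUESTS:
                    dom = conn.lookupByName(name)
                    if not dom.isActive():   # guest OS crashed or shut down
                        dom.create()         # boot only this guest again
                time.sleep(poll_seconds)
        finally:
            conn.close()

    if __name__ == "__main__":
        watchdog()

The nice part is that dom.create() boots only the one guest -- nothing else on the host has to come down, which is exactly what we can't count on when the VMM is tangled up with the host kernel.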
Anyway, I'm on a bit of a rant. The main point is that we plan to do something to cut down on these problems. Most of you will see little or no change in how things work, whether we switch to different VMM software or drop the VMM entirely.