The problem does not show up with PVE1.4beta2 on Core2Duo (a DELL).
We'll take a closer look to find out whether it's caused by the platform or the PVE release.
I have had a closer look at that issue.
Probably many or even most users have this issue on their KVM servers, but don't know about it since they don't run a Windows VM in high-resolution timer mode. It's a known problem across virtualisation products, including KVM, QEMU, VirtualBox, VMware and ESX;
some vendors have tried to fix it, some are still trying, and some broke their
fixes again through other enhancements.
Most of the time you won't see the problem, because WindowsXP isn't switched to high-resolution timing. As far as I know so far,
Labview and Quicktime make use of the higher-resolution timing.
Labview is completely unusable because of this, since it has time-triggered
context menus, which simply take a factor of 10 or more longer
to open. If you try to do some automation with it, it fails even for slow operations. Depending on load, time freezes for more than 10 seconds, and
the machine loses 50% or more of wall-clock time.
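To put a rough number on that kind of drift, here is a minimal sleep-overshoot check you can run inside a Linux guest (a sketch, assuming GNU date with %N support and a sleep that accepts fractional seconds):

```shell
# Rough timer sanity check: request a 10 ms sleep and measure how long
# it actually took, using the nanosecond wall clock.
start=$(date +%s%N)
sleep 0.01
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "requested 10 ms, measured ${elapsed_ms} ms"
```

On a healthy host the overshoot is a few milliseconds at most; inside an affected VM under load the measured value can be wildly larger.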
Since synchronisation at <1 s resolution is required, you simply can't do it via time servers; it takes much too long and has too much overhead.
If you have a look at Linux, you'll notice time used to be a problem there too,
but the kernel developers have spent (and still spend) a lot of effort on timing and on getting accurate results out of the various system timers. Linux can use different sources for system time, while older Windows versions can't. RHEL has a basic check integrated that prevents installation on a
machine with more than one core whose timers lack integrity and could
finally permanently damage system resources (filesystems).
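For reference, the clock sources the Linux kernel detected and the one it actually uses can be inspected via sysfs; a minimal sketch (the path is standard on 2.6 kernels, but may be absent on older ones):

```shell
# Show which clocksources the kernel detected and which one is active.
CS=/sys/devices/system/clocksource/clocksource0
if [ -r "$CS/available_clocksource" ]; then
    echo "available: $(cat "$CS/available_clocksource")"
    echo "current:   $(cat "$CS/current_clocksource")"
else
    echo "clocksource sysfs interface not found"
fi
```

Inside a KVM guest you would typically want to see kvm-clock as the current source; on the host, tsc or hpet.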
Unfortunately nobody really made a mistake, but nobody thought of a useful
concept either. If the hardware offers timers based only on the CPU clock, while
that clock changes strongly due to dynamic clocking and dynamic under-/overclocking
inside the chipset, between different cores and on the CPU, it's really hard
for the software to get a useful timebase.
Inside (hardware-based) virtualisation the jitter is simply at its maximum
by design, but that's not really the root cause, only part of it.
The one timer that is actually OK is the RTC, but at least in WindowsXP... it isn't used. The hardware people integrated a high-resolution event timer (HPET) to get down to the sub-ms area with precision... but again did not strictly define a non-changing base clock for it. Maybe it would work on our hardware with the enhanced clocking and energy-saving options disabled, but I don't see the HPET as a hardware resource inside the WindowsXPSP3 VM (to my understanding there should be one... or is it limited to Vista?).
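For XP guests specifically, one documented workaround (Microsoft KB895980; I haven't verified it under PVE, so treat it as an assumption to test) is the /usepmtimer boot.ini switch, which forces the XP kernel to read the ACPI power-management timer instead of the drifting TSC:

```ini
; boot.ini inside the XP guest -- append /usepmtimer to the OS entry
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /usepmtimer
```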
According to the docs I've looked at, XPSP3 offers the timer, but I'm not sure whether w32time makes use of it, or whether its use is limited to the newer media API
functions.
Probably the bochs BIOS doesn't offer the HPET feature inside the VM either.
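One quick way to see whether the qemu-kvm build on the PVE host emulates an HPET at all is to grep its help output; note that the -no-hpet flag name is an assumption that varies between QEMU/KVM versions, so check it against your build:

```shell
# Look for HPET-related options in the local kvm binary; where the
# build supports it, -no-hpet disables the emulated HPET entirely.
kvm --help 2>&1 | grep -i hpet || echo "no HPET option in this build"
```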
In my understanding the best solution would be a modified w32time.dll that simply uses the kvm-clock functions. The <1 ms timer interrupts Windows requires in high-resolution mode (probably 100 us, like Linux used in the past) are simply too fast for virtualisation if the host kernel runs at a lower tick rate for efficiency. Linux time on the host at least has full integrity on our hardware, even with energy saving and dynamic clocking enabled. The whole concept of firing timer interrupts without any process waiting for them is a stupid idea, caused by the
pre-historic hardware design. Other platforms don't do it the same way, do they?
It seems there are things to do regarding this issue. We'll have a look at Win2008R2 and see whether it's fixed there at the OS level.
JP