Time running fast after upgrading to PVE 2 final

I have had the same problems.
My situation:
I have two VMs running at the moment:
1) a fully up-to-date SME Server 7.5.1
2) Windows 7 Ultimate 32-bit
My problem:
The SME server has a massive time drift.
For every minute of real time the SME server counts 1 minute and 57 seconds, so its clock runs nearly twice as fast.
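For reference, that ratio can be turned into a drift rate with a quick bit of shell arithmetic (my own sketch, not output from any Proxmox tool):

```shell
# The guest counts 117 s for every 60 s of wall-clock time.
guest=117
wall=60
# Relative drift in parts per million, using integer arithmetic:
echo $(( (guest - wall) * 1000000 / wall ))   # prints 950000, i.e. the clock runs ~95% fast
```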
Strangely enough, the Windows 7 VM has no time drift at all (or the time server is able to correct it).
Both VMs sync their time with my internet provider's time server.
In my case both VMs were moved from my previous server to my new server, so the images are identical.
This also means that in my case there are two variables.
My first server is Proxmox 1.9 on an AMD Athlon II X4 605e system with 8 GB of memory.
My second server is Proxmox 2.0 on an AMD Opteron 4274 HE system with 48 GB of memory.
On Proxmox 1.9 the SME server runs fine without any time drift (or the time server is able to correct it) and without any changes to the VMID.conf.
The Linux version of the SME server shows up as Red Hat 3.4.6-11.
cat /sys/devices/system/clocksource/clocksource0/current_clocksource gives: No such file or directory
# info qtree gives:
dev: mc146818rtc, id ""
dev-prop: lost_tick_policy = discard
Adding the line below to /etc/pve/qemu-server/vmid.conf also solves the problem for me:
args: -no-hpet -no-kvm-pit-reinjection
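A sketch of applying this from the host shell (VMID 101 is hypothetical; substitute your own; `qm` is the Proxmox VE CLI). As reported elsewhere in the thread, a full stop/start is needed, not just a guest reboot:

```shell
# Hypothetical VMID 101 -- substitute your own VM's ID.
echo 'args: -no-hpet -no-kvm-pit-reinjection' >> /etc/pve/qemu-server/101.conf

# A guest reboot does not re-read 'args'; fully stop and start the VM:
qm stop 101
qm start 101

# Confirm the flags made it onto the generated kvm command line:
qm showcmd 101 | grep -o -e '-no-hpet' -e '-no-kvm-pit-reinjection'
```

This is a config/CLI fragment for a Proxmox VE host, so it can only be run there.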
 
Just another data point for the thread,

Proxmox 2.0 version as below (latest, patched as of Oct 29, 2012):

------paste-------
root@proxmox:/etc# pveversion --verbose
pve-manager: 2.1-14 (pve-manager/2.1/f32f3f46)
running kernel: 2.6.32-10-pve
proxmox-ve-2.6.32: 2.0-63
pve-kernel-2.6.32-10-pve: 2.6.32-63
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.3-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.92-3
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.8-1
pve-cluster: 1.0-27
qemu-server: 2.0-33
pve-firmware: 1.0-18
libpve-common-perl: 1.0-30
libpve-access-control: 1.0-24
libpve-storage-perl: 2.0-31
vncterm: 1.0-3
vzctl: 3.0.30-2pve5
vzprocps: 2.0.11-2
vzquota: 3.0.12-3
pve-qemu-kvm: 1.0-8
ksm-control-daemon: 1.1-1

VIRTIO Installed:

root@proxmox:/var/lib/vz/template/iso# du -sh ./virtio-for-win2008-virtio-win-0.1-30.iso
44M ./virtio-for-win2008-virtio-win-0.1-30.iso

In Windows, the VirtIO disk driver (Red Hat VirtIO SCSI controller) reports a 7/3/12 build.
The driver version reports as 61.63.103.3000.

The VirtIO NIC reports the same build date / version.


I was having severe clock drift problems in a guest VM running an SBS 2011 install (fully patched as of the same time/date, so running SP1, I believe).

(The drift was on the order of running fast by +5 minutes every ~10 minutes.)

I followed the advice of this thread, i.e.,

just append at the bottom of /etc/pve/qemu-server/vmid.conf
Code:
args: -no-hpet -no-kvm-pit-reinjection

And now the problem seems to be entirely resolved. Note that a full power off/power on was required for the new parameters to take effect (not just a "reboot" of the guest OS).


Tim
 
...

just append at the bottom of /etc/pve/qemu-server/vmid.conf
Code:
args: -no-hpet -no-kvm-pit-reinjection


Do not apply these settings manually; remove them. They are not needed if you run 2.2.
 
Hi Tom, thanks for the update. I will update this Proxmox VE host from 2.1 to 2.2 soon, adjust the config (remove this line), and then follow up to the thread with the outcome.

Tim
 
I just ran into this on a W2003 guest. I'm rebooting and will report back, but it's the first time I have noticed it. The clock was running minutes like seconds.

Code:
pve-manager: 2.2-31 (pve-manager/2.2/e94e95e9)
running kernel: 2.6.32-16-pve
proxmox-ve-2.6.32: 2.2-82
pve-kernel-2.6.32-16-pve: 2.6.32-82
lvm2: 2.02.95-1pve2
clvm: 2.02.95-1pve2
corosync-pve: 1.4.4-1
openais-pve: 1.1.4-2
libqb: 0.10.1-2
redhat-cluster-pve: 3.1.93-2
resource-agents-pve: 3.9.2-3
fence-agents-pve: 3.1.9-1
pve-cluster: 1.0-33
qemu-server: 2.0-69
pve-firmware: 1.0-21
libpve-common-perl: 1.0-39
libpve-access-control: 1.0-25
libpve-storage-perl: 2.0-36
vncterm: 1.0-3
vzctl: 4.0-1pve2
vzprocps: 2.0.11-2
vzquota: 3.1-1
pve-qemu-kvm: 1.2-7
ksm-control-daemon: 1.1-1


And the config of the VM in question:

Code:
acpi: 1
boot: c
bootdisk: ide0
cores: 2
cpuunits: 40000
freeze: 0
ide0: LVM01:vm-102-disk-1,size=32772M
ide2: none,media=cdrom
kvm: 1
memory: 2048
name: roes3
net0: virtio=32:24:91:C4:1C:A9,bridge=vmbr0
onboot: 1
ostype: w2k3
sockets: 1
startup: order=2
vga: vmware
 
Just a thought: I wonder if vzdump might have an effect. A snapshot ran on the VM in question right before the time got goofed up, I believe.

I'm guessing, without having done any research yet, that at some point the VM has to stop and then be restarted?
 
This solved my problem, thank you!

args: -no-hpet -no-kvm-pit-reinjection


I had this problem, and the above fixed it.
There was only one VM affected by this issue.

What are the side effects of making this modification? If there aren't any, shouldn't it always be in place for all VMs?
 
