Same here.
tcpdump on the TAP interface shows ARP traffic in and out, but no packet is received inside the Solaris 10 VM.
09:14:48.583479 ARP, Request who-has 10.1.30.1 (ff:ff:ff:ff:ff:ff) tell 10.1.30.10, length 46
09:14:48.590272 ARP, Reply 10.1.30.1 is-at 00:00:0c:07:ac:01, length 46...
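For reference, a capture like the one above can be taken with a plain tcpdump on the VM's tap device, roughly like this (the interface name depends on the VM ID and NIC index, so tap101i0 is only a placeholder):
# capture ARP traffic on the VM's tap interface, without name resolution
tcpdump -n -i tap101i0 arp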
That's why you have mirrors. There are lots of companies and universities that provide mirror space for open source projects. But the Proxmox team obviously didn't want to go that way.
That's why you have to involve the community. Something that Proxmox failed to accomplish, especially in the beginning...
Thanx again.
I think I got it now. All VMs with the same "Order" value are started in ascending order of their VM IDs, and when Proxmox hits a VM with a configured startup delay, it pauses for N seconds and then continues with the next VM or the next "Order" group.
Is this correct?
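For reference, my understanding is that this is driven by the order/up values in the VM config; a minimal sketch, with the VM ID and numbers only as placeholders:
# /etc/pve/qemu-server/101.conf
startup: order=1,up=300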
Hello,
I don't really understand how the startup order in Proxmox works. For example, if I have 4 VMs:
VM1: order=1, up=300
VM2: order=1, up=300
VM3: order=any
VM4: order=any
Will VM1 and VM2 be started concurrently, and VM3 and VM4 after 5 minutes?
Or will Proxmox start VM1 and then wait 5...
Hi,
Is there a possibility to suspend a KVM VM to disk? Especially when rebooting the host, it would be nice not to shut down/power up the VMs but to suspend/resume them.
Hello,
I have one Proxmox node that uses KSM heavily (118GB assigned on 96GB physical RAM). Recently I had to reboot that node and after the reboot the VMs started up and began to eat up the RAM quickly.
cat /sys/kernel/mm/ksm/run showed KSM running, but pages_shared and pages_sharing were...
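In case someone wants to check the same thing, this is roughly how I look at the KSM state (standard sysfs paths, nothing Proxmox-specific):
# 1 = ksmd is running, 0 = stopped
cat /sys/kernel/mm/ksm/run
# current sharing statistics
grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing /sys/kernel/mm/ksm/pages_volatile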
You don't need RDAC at all. Multipath replaces the RDAC kernel driver.
Which is perfectly normal for SAN storage in Linux.
That's why you need multipathd installed and a correct multipath configuration.
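As a rough sketch (not verified against the DS3512, so check the values against your array's documentation), a device section in /etc/multipath.conf could look something like this:
devices {
    device {
        # vendor/product strings are examples only, take them from the output of "multipath -ll"
        vendor "IBM"
        product "1746"
        hardware_handler "1 rdac"
        path_checker rdac
        prio rdac
        path_grouping_policy group_by_prio
        failback immediate
        no_path_retry 30
    }
}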
Hello,
I recently ran into the "metadata too large for circular buffer" problem -> http://forum.proxmox.com/threads/12045-HELP!-metadata-too-large-for-circular-buffer
Because I have to keep the downtime minimal, what I want to do now is add an additional PV with a larger metadata size to my...
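The plan, roughly (the device name and metadata size below are placeholders, not my actual values):
# create the new PV with a larger metadata area
pvcreate --metadatasize 10M /dev/mapper/new-lun
# add it to the existing volume group
vgextend myvg /dev/mapper/new-lun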
Hi!
I had a similar configuration (IBM DS4700). Don't know if it works with the DS3512 too, but you can try the following (but better back up your initrd first)...
In /etc/initramfs-tools/initramfs.conf change MODULES=most to MODULES=dep
In /etc/initramfs-tools/modules add scsi_dh_rdac...
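Roughly, the whole procedure looks like this (treat it as a sketch and keep a copy of the original initrd, as said above):
# back up the current initrd first
cp /boot/initrd.img-$(uname -r) /root/initrd.img-$(uname -r).bak
# only include modules for the boot device plus those listed in /etc/initramfs-tools/modules
sed -i 's/^MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
echo scsi_dh_rdac >> /etc/initramfs-tools/modules
# rebuild the initramfs for the running kernel
update-initramfs -u -k $(uname -r)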
That's not true. KVM with KSM allocates only the amount of memory that the guest OS actually uses. With KSM I can overprovision to around 130-140% (e.g. physical mem: 96GB, provisioned: 130GB, used: 80GB). All are Linux KVM VMs, but with different distributions and different workloads (Oracle DBs...
I copied (dd) the Xen LV over ssh to an empty Proxmox LV. That's why I originally expected to have a raw volume on Proxmox.
I converted the proxmox LV with qemu-img today and live migration now works as expected.
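For anyone hitting the same message: the conversion was essentially a qemu-img convert from the detected 'vpc' format to plain raw onto a fresh LV, roughly like this (the LV names are placeholders):
qemu-img convert -p -f vpc -O raw /dev/pve/vm-57355-disk-1 /dev/pve/vm-57355-disk-2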
Sure. As long as you have a quorum disk (see: http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster) and enough free resources (especially RAM) on both nodes, so that all your VMs can run on one node if the other one fails.
You can have any number of cluster nodes. All nodes can have running VMs. If one node fails, the VMs from that node will be automatically transferred to another running cluster node.
Hello!
I got the strangest error message today when I tried to live migrate a KVM Solaris guest with ZFS disks.
ERROR: online migrate failure - VM 57355 qmp command 'migrate' failed - Block format 'vpc' used by device 'drive-ide0' does not support feature 'live migration'
Now I'm confused...
I'm talking about a NON-HA cluster (no rgmanager running)! You can't simply start a VM on another node with this type of cluster (it's like the "cluster" in 1.x). All you can do (and all I want to do) is live migration between cluster nodes. So this should not be a problem with "expected votes=1".
I had...
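For clarity, by "expected votes=1" I mean temporarily lowering the expected vote count on a node that is already up, e.g. with something like this (use with care, it effectively disables the quorum protection until the other nodes are back):
pvecm expected 1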
Hi all!
Today I had to reboot my 8 node non-HA cluster because of a storage migration. When I turned on the nodes one by one, the cluster did not become quorate on the first 4 nodes and the VMs on these nodes did not start. After the 5th node the cluster got quorum and the VMs on nodes 5 to 8...
When I try to restore a KVM machine with qmrestore on my Proxmox 2.1 cluster, I get a lot of these messages in dmesg.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
vgs D ffff880c7709c280 0 3981 1 0 0x00000007
ffff880c63fd1aa8 0000000000000086...