Hello,
I have the following setup.
On Windows 10 I have VMware Workstation, and on it 3 VMs, each running Proxmox.
I created a Proxmox cluster from these 3 nodes, and I have a VM on one of the nodes.
I configured Ceph and NTP.
KVM hardware virtualization is turned off in the VM's options, because the VM won't start with it enabled.
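To clarify what "KVM off" looks like from the CLI (VMID 100 is taken from the error log; everything here is a sketch of my setup, not a fix):

```shell
# Check whether the Proxmox node (itself a VMware guest) sees VT-x/AMD-V;
# 0 means nested virtualization is not exposed to it, so KVM cannot be used
# and QEMU falls back to software emulation (the accel=tcg in the error below).
grep -c -E 'vmx|svm' /proc/cpuinfo

# CLI equivalent of unticking "KVM hardware virtualization" in the VM options
# (VMID 100 taken from the error log):
qm set 100 --kvm 0
```

As far as I know, the VMware Workstation side of this is the per-VM checkbox "Virtualize Intel VT-x/EPT or AMD-V/RVI" (`vhv.enable = "TRUE"` in the `.vmx`), which is what would expose VT-x/AMD-V to the Proxmox guests if KVM were re-enabled.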
When I switch off the node hosting the VM (call it pve2), the VM moves to another node but can't start, and I get this error.
Any help would be much appreciated! (If you need more info, please tell me.)
Code:
task started by HA resource agent
TASK ERROR: start failed: command '/usr/bin/kvm -id 100 -name 'ubuntu20.04,debug-threads=on' -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server=on,wait=off' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/100.pid -daemonize -smbios 'type=1,uuid=9c97de18-9fda-4bf0-acd5-808365732c83' -smp '1,sockets=1,cores=1,maxcpus=1' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc 'unix:/var/run/qemu-server/100.vnc,password=on' -cpu qemu64,+aes,+pni,+popcnt,+sse4.1,+sse4.2,+ssse3 -m 1024 -object 'iothread,id=iothread-virtioscsi0' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.3,chassis_nr=3,bus=pci.0,addr=0x5' -device 'vmgenid,guid=e74e7d02-3820-4b1b-99d2-554fedb7c038' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -chardev 'socket,path=/var/run/qemu-server/100.qga,server=on,wait=off,id=qga0' -device 'virtio-serial,id=qga0,bus=pci.0,addr=0x8' -device 'virtserialport,chardev=qga0,name=org.qemu.guest_agent.0' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3,free-page-reporting=on' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:862b1cb59b' -drive 'if=none,id=drive-ide2,media=cdrom,aio=io_uring' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=101' -device 'virtio-scsi-pci,id=virtioscsi0,bus=pci.3,addr=0x1,iothread=iothread-virtioscsi0' -drive 'file=rbd:pvepool01/vm-100-disk-0:conf=/etc/pve/ceph.conf:id=admin:keyring=/etc/pve/priv/ceph/pvepool01.keyring,if=none,id=drive-scsi0,format=raw,cache=none,aio=io_uring,detect-zeroes=on' -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 
'type=tap,id=net0,ifname=tap100i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=4A:14:35:BD:2D:FF,netdev=net0,bus=pci.0,addr=0x12,id=net0,rx_queue_size=1024,tx_queue_size=1024,bootindex=102' -machine 'accel=tcg,type=pc+pve0'' failed: got timeout
And when the VM is on the new node and I try to start it manually, I get this error:
Code:
VM 100 qmp command 'set_password' failed - unable to connect to VM 100 qmp socket - timeout after 51 retries
TASK ERROR: Failed to run vncproxy.
So to get it running again, I have to go to Datacenter > HA, click on the VM, and disable it.
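The GUI steps above can also be done from the CLI (the resource name `vm:100` is the one shown in the `ha-manager status` output):

```shell
# Same effect as Datacenter > HA > select the resource > Edit > Request State:
ha-manager set vm:100 --state disabled   # clear the error state
ha-manager set vm:100 --state started    # ask HA to start it again
```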
Adding additional info:
ha-manager status
Code:
root@pve1:~# ha-manager status
quorum OK
master pve3 (active, Thu Jul 27 22:56:35 2023)
lrm pve1 (active, Thu Jul 27 22:56:34 2023)
lrm pve2 (idle, Thu Jul 27 22:56:34 2023)
lrm pve3 (idle, Thu Jul 27 22:56:35 2023)
service vm:100 (pve1, disabled)
ceph df
Code:
root@pve1:~# ceph df
--- RAW STORAGE ---
CLASS    SIZE   AVAIL    USED  RAW USED  %RAW USED
hdd    60 GiB  47 GiB  12 GiB    12 GiB      20.82
TOTAL  60 GiB  47 GiB  12 GiB    12 GiB      20.82

--- POOLS ---
POOL       ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
.mgr        1    1  449 KiB        2  1.3 MiB      0     15 GiB
pvepool01   2   32  4.1 GiB    1.24k   12 GiB  21.66     15 GiB
pveceph lspools
Code:
root@pve1:~# pveceph lspools
┌───────────┬──────┬──────────┬────────┬─────────────┬────────────────┬───────────────────┬──────────────────────────┬───────────────────────────┬─────────────────┬─────────────────────┬─────────────┐
│ Name │ Size │ Min Size │ PG Num │ min. PG Num │ Optimal PG Num │ PG Autoscale Mode │ PG Autoscale Target Size │ PG Autoscale Target Ratio │ Crush Rule Name │ %-Used │ Used │
╞═══════════╪══════╪══════════╪════════╪═════════════╪════════════════╪═══════════════════╪══════════════════════════╪═══════════════════════════╪═════════════════╪═════════════════════╪═════════════╡
│ .mgr │ 3 │ 2 │ 1 │ 1 │ 1 │ on │ │ │ replicated_rule │ 2.9181532227085e-05 │ 1388544 │
├───────────┼──────┼──────────┼────────┼─────────────┼────────────────┼───────────────────┼──────────────────────────┼───────────────────────────┼─────────────────┼─────────────────────┼─────────────┤
│ pvepool01 │ 3 │ 2 │ 32 │ │ 32 │ on │ │ │ replicated_rule │ 0.216631069779396 │ 13158102408 │
└───────────┴──────┴──────────┴────────┴─────────────┴────────────────┴───────────────────┴──────────────────────────┴───────────────────────────┴─────────────────┴─────────────────────┴─────────────┘