ZFS over iSCSI: kvm cannot create PID file

x4x

New Member
Feb 3, 2021
6
0
1
31
AT
Hi! I’m trying to set up ZFS over iSCSI. Both the hypervisor and the storage are Proxmox nodes.
I can create VMs and take snapshots, but when I try to start a VM I get a timeout.
When I run the command from a shell I get ‘kvm: cannot create PID file: Cannot lock pid file’.
Does anyone have an idea how to fix this?


Bash:
root@pm02:~# /usr/bin/kvm -id 2206 -name gle-test2 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/2206.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/2206.pid -daemonize -smbios 'type=1,uuid=782a974f-582b-4175-b416-6df8881c1e64' -smp '8,sockets=1,cores=8,maxcpus=8' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/2206.vnc,password -no-hpet -cpu 'EPYC,enforce,hv_ipi,hv_relaxed,hv_reset,hv_runtime,hv_spinlocks=0x1fff,hv_stimer,hv_synic,hv_time,hv_vapic,hv_vpindex,+kvm_pv_eoi,+kvm_pv_unhalt,vendor=AuthenticAMD' -m 8192 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=613a64d5-b796-48a8-aed6-98903b99d89d' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2,edid=off' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:b2db997192e6' -drive 'file=iscsi://192.168.0.21/iqn.2021-06.local.sfu.storage1:zfs-iscsi0/1,if=none,id=drive-ide0,cache=writethrough,format=raw,aio=threads,detect-zeroes=on' -device 'ide-hd,bus=ide.0,unit=0,drive=drive-ide0,id=ide0' -drive 'file=/var/lib/vz/template/iso/Windows_10_Pro_64Bit.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap2206i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=9E:0B:F6:38:B3:79,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=101' -rtc 'driftfix=slew,base=localtime' -machine 'type=pc-i440fx-5.2+pve0' -global 'kvm-pit.lost_tick_policy=discard'

kvm: cannot create PID file: Cannot lock pid file: Resource temporarily unavailable
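"Resource temporarily unavailable" (EAGAIN) on the PID file means another process still holds an exclusive lock on it, typically a leftover QEMU instance for the same VMID that never fully exited. The locking behaviour can be reproduced with `flock` in isolation (a standalone sketch, not tied to QEMU):

```shell
# Simulate QEMU's pid-file locking: a second exclusive, non-blocking
# lock attempt on the same file fails with EAGAIN.
pidfile=$(mktemp)

# Hold an exclusive lock in the background for a couple of seconds.
flock -x "$pidfile" sleep 2 &

sleep 0.5   # give the background lock time to be taken

# -n = non-blocking: fail immediately instead of waiting, like QEMU does.
if flock -xn "$pidfile" true; then
    echo "lock acquired"
else
    echo "lock held: resource temporarily unavailable"
fi

wait
rm -f "$pidfile"
```

If a leftover `kvm -id 2206` process shows up in `ps auxwf`, killing it releases the lock.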

Bash:
root@storage1:/etc/pve/nodes# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content iso,backup,vztmpl
    prune-backups keep-all=1
    shared 0

zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    sparse 1

zfs: zfs-iscsi0
    blocksize 4k
    iscsiprovider LIO
    pool tank
    portal 192.168.0.21
    target iqn.2021-06.local.sfu.storage1:zfs-iscsi0
    content images
    lio_tpg tpg1
    nodes pm02,pm01
    nowritecache 1
    sparse 1



Bash:
root@storage1:/etc/pve/nodes# targetcli
targetcli shell version 2.1.fb48
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

> ls
o- / ................................................................................................................. [...]
o- backstores ...................................................................................................... [...]
| o- block .......................................................................................... [Storage Objects: 2]
| | o- tank-vm-2206-disk-0 ..................................... [/dev/tank/vm-2206-disk-0 (32.0GiB) write-thru activated]
| | | o- alua ........................................................................................... [ALUA Groups: 1]
| | | o- default_tg_pt_gp ............................................................... [ALUA state: Active/optimized]
| | o- zfs-iscsi0 ..................................................... [/dev/zvol/tank/vms (1.0MiB) write-thru activated]
| | o- alua ........................................................................................... [ALUA Groups: 1]
| | o- default_tg_pt_gp ............................................................... [ALUA state: Active/optimized]
| o- fileio ......................................................................................... [Storage Objects: 0]
| o- pscsi .......................................................................................... [Storage Objects: 0]
| o- ramdisk ........................................................................................ [Storage Objects: 0]
o- iscsi .................................................................................................... [Targets: 1]
| o- iqn.2021-06.local.sfu.storage1:zfs-iscsi0 ................................................................. [TPGs: 1]
| o- tpg1 ....................................................................................... [no-gen-acls, no-auth]
| o- acls .................................................................................................. [ACLs: 4]
| | o- iqn.1993-08.org.debian:01:7cde21d6c42 ........................................................ [Mapped LUNs: 2]
| | | o- mapped_lun0 .................................................................... [lun0 block/zfs-iscsi0 (rw)]
| | | o- mapped_lun1 ........................................................... [lun1 block/tank-vm-2206-disk-0 (rw)]
| | o- iqn.1993-08.org.debian:01:b2db997192e6 ....................................................... [Mapped LUNs: 2]
| | | o- mapped_lun0 .................................................................... [lun0 block/zfs-iscsi0 (rw)]
| | | o- mapped_lun1 ........................................................... [lun1 block/tank-vm-2206-disk-0 (rw)]
| | o- iqn.1993-08.org.debian:01:b3cf9d97eb3 ........................................................ [Mapped LUNs: 2]
| | | o- mapped_lun0 .................................................................... [lun0 block/zfs-iscsi0 (rw)]
| | | o- mapped_lun1 ........................................................... [lun1 block/tank-vm-2206-disk-0 (rw)]
| | o- iqn.1993-08.org.debian:01:fe627aab1aa ........................................................ [Mapped LUNs: 2]
| | o- mapped_lun0 .................................................................... [lun0 block/zfs-iscsi0 (rw)]
| | o- mapped_lun1 ........................................................... [lun1 block/tank-vm-2206-disk-0 (rw)]
| o- luns .................................................................................................. [LUNs: 2]
| | o- lun0 ............................................... [block/zfs-iscsi0 (/dev/zvol/tank/vms) (default_tg_pt_gp)]
| | o- lun1 ................................ [block/tank-vm-2206-disk-0 (/dev/tank/vm-2206-disk-0) (default_tg_pt_gp)]
| o- portals ............................................................................................ [Portals: 1]
| o- 0.0.0.0:3260 ............................................................................................. [OK]
o- loopback ................................................................................................. [Targets: 0]
o- vhost .................................................................................................... [Targets: 0]
o- xen-pvscsi ............................................................................................... [Targets: 0]
/>
 

mira

Proxmox Staff Member
Staff member
Aug 1, 2018
1,762
191
83
Does the PID file `/var/run/qemu-server/2206.pid` already exist?
Please provide the output of `ps auxwf`.
 

x4x

New Member
/var/run/qemu-server/2206.pid and 2206.qmp exist the whole time.
I removed them and restarted the VM; they simply get recreated.

Bash:
root@pm02:~# ps auxwf
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           2  0.0  0.0      0     0 ?        S    Jun04  11:25 [kthreadd]
root           3  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [rcu_gp]
root           4  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [rcu_par_gp]
root           6  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/0:0H-kblockd]
root           9  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [mm_percpu_wq]
root          10  0.0  0.0      0     0 ?        S    Jun04   0:01  \_ [ksoftirqd/0]
root          11  0.0  0.0      0     0 ?        I    Jun04  26:35  \_ [rcu_sched]
root          12  0.0  0.0      0     0 ?        S    Jun04   0:03  \_ [migration/0]
root          13  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/0]
root          14  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/0]
root          15  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/1]
root          16  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/1]
root          17  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/1]
root          18  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/1]
root          20  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/1:0H-kblockd]
root          21  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/2]
root          22  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/2]
root          23  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/2]
root          24  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/2]
root          26  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/2:0H-kblockd]
root          27  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/3]
root          28  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/3]
root          29  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/3]
root          30  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/3]
root          32  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/3:0H-kblockd]
root          33  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/4]
root          34  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/4]
root          35  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/4]
root          36  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/4]
root          38  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/4:0H-kblockd]
root          39  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/5]
root          40  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/5]
root          41  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/5]
root          42  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/5]
root          44  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/5:0H-kblockd]
root          45  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/6]
root          46  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/6]
root          47  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/6]
root          48  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/6]
root          50  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/6:0H-kblockd]
root          51  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/7]
root          52  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/7]
root          53  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/7]
root          54  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/7]
root          56  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/7:0H-kblockd]
root          57  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [cpuhp/8]
root          58  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [idle_inject/8]
root          59  0.0  0.0      0     0 ?        S    Jun04   0:07  \_ [migration/8]
root          60  0.0  0.0      0     0 ?        S    Jun04   0:00  \_ [ksoftirqd/8]
root          62  0.0  0.0      0     0 ?        I<   Jun04   0:00  \_ [kworker/8:0H-kblockd]

root     3534452  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534460  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534495  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534505  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534506  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534507  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534508  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534510  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]
root     3534512  0.0  0.0      0     0 ?        S<   16:20   0:00  \_ [z_wr_int]

root           1  0.0  0.0 171268  8592 ?        Ss   Jun04   3:59 /sbin/init
root        2715  0.0  0.0 488312 338992 ?       Ss   Jun04   2:58 /lib/systemd/systemd-journald
root        2751  0.0  0.0  23496  4372 ?        Ss   Jun04   2:17 /lib/systemd/systemd-udevd
_rpc        3065  0.0  0.0   6820  2300 ?        Ss   Jun04   0:02 /sbin/rpcbind -f -w
systemd+    3070  0.0  0.0  93080  4644 ?        Ssl  Jun04   0:02 /lib/systemd/systemd-timesyncd
root        3127  0.0  0.0   2140   540 ?        Ss   Jun04   0:26 /usr/sbin/watchdog-mux
root        3174  0.0  0.0 224736  1380 ?        Ssl  Jun04   0:00 /usr/bin/lxcfs /var/lib/lxcfs
message+    3185  0.0  0.0   9096  3176 ?        Ss   Jun04   0:30 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root        3200  0.0  0.0   4088  1660 ?        Ss   Jun04   0:00 /usr/sbin/qmeventd /var/run/qmeventd.sock
root        3219  0.0  0.0  19912  5944 ?        Ss   Jun04   0:13 /lib/systemd/systemd-logind
root        3224  0.0  0.0  12108  3744 ?        Ss   Jun04   0:00 /usr/sbin/smartd -n
root        3231  0.0  0.0   6724  2120 ?        S    Jun04   0:30 /bin/bash /usr/sbin/ksmtuned
root     3533397  0.0  0.0   5256   688 ?        S    16:19   0:00  \_ sleep 60
root        3233  0.0  0.0 4331724 2508 ?        Ssl  Jun04   0:00 /usr/lib/x86_64-linux-gnu/pve-lxc-syscalld/pve-lxc-syscalld --system /run/pve/lxc-syscalld.sock
root        3235  0.0  0.0  14944  1888 ?        S<s  Jun04   0:00 ovsdb-server: monitoring pid 3236 (healthy)
root        3236  0.0  0.0  15352  3160 ?        S<   Jun04   1:06  \_ ovsdb-server /etc/openvswitch/conf.db -vconsole:emer -vsyslog:err -vfile:info --remote=punix:/var/run/openvswitch/db.sock --private-key=db
root        3237  0.0  0.0 225820  3748 ?        Ssl  Jun04   1:03 /usr/sbin/rsyslogd -n -iNONE
root        3238  0.0  0.0 166188  3384 ?        Ssl  Jun04   0:00 /usr/sbin/zed -F
root        3239  0.0  0.0  19760  9324 ?        Ss   Jun04   0:22 /usr/bin/python3.7 /usr/bin/ceph-crash
root        3499  0.0  0.0  15592  2332 ?        S<s  Jun04   0:00 ovs-vswitchd: monitoring pid 3500 (healthy)
root        3500  0.0  0.0  16048 13960 ?        S<L  Jun04   0:01  \_ ovs-vswitchd unix:/var/run/openvswitch/db.sock -vconsole:emer -vsyslog:err -vfile:info --mlockall --no-chdir --log-file=/var/log/openvswit
root        3833  0.0  0.0   7292  2128 ?        Ss   Jun04   0:00 /usr/lib/x86_64-linux-gnu/lxc/lxc-monitord --daemon
root        3835  0.0  0.0  15848  6096 ?        Ss   Jun04   0:00 /usr/sbin/sshd -D
root     3523804  0.0  0.0  16628  7716 ?        Ss   16:12   0:00  \_ sshd: root@pts/0
root     3523944  0.0  0.0   8500  4816 pts/0    Ss   16:12   0:00      \_ -bash
root     3534613  0.0  0.0  12776  5092 pts/0    R+   16:20   0:00          \_ ps auxwf
root        3850  0.0  0.0   6888   244 ?        Ss   Jun04   0:20 /sbin/iscsid
root        3853  0.0  0.0   7428  5052 ?        S<Ls Jun04   0:00 /sbin/iscsid
root        3992  0.0  0.0   6920  3320 tty1     Ss   Jun04   0:00 /bin/login -p --
root        6129  0.0  0.0   7868  4116 tty1     S+   Jun04   0:00  \_ -bash
root        4014  0.0  0.0 732760  3740 ?        Ssl  Jun04   3:29 /usr/bin/rrdcached -B -b /var/lib/rrdcached/db/ -j /var/lib/rrdcached/journal/ -p /var/run/rrdcached.pid -l unix:/var/run/rrdcached.sock
root        4033  0.0  0.0 679332 60168 ?        Ssl  Jun04  26:31 /usr/bin/pmxcfs
root        4174  0.0  0.0  43472  3976 ?        Ss   Jun04   0:04 /usr/lib/postfix/sbin/master -w
postfix     4176  0.0  0.0  43876  6432 ?        S    Jun04   0:01  \_ qmgr -l -t unix -u
postfix  3470595  0.0  0.0  43832  6644 ?        S    15:32   0:00  \_ pickup -l -t unix -u -c
ceph        4223  0.1  0.0 1453592 973520 ?      Ssl  Jun04  45:44 /usr/bin/ceph-mon -f --cluster ceph --id pm02 --setuser ceph --setgroup ceph
root        4227  1.0  0.0 572920 177528 ?       SLsl Jun04 310:20 /usr/sbin/corosync -f
root        4230  0.0  0.0   8500  2376 ?        Ss   Jun04   0:02 /usr/sbin/cron -f
root        4447  0.3  0.0 265392 97888 ?        Ss   Jun04 117:29 pvestatd
root        4448  0.2  0.0 267092 99500 ?        Ss   Jun04  90:02 pve-firewall
root        4656  0.0  0.0 354576 138608 ?       Ss   Jun04   0:16 pvedaemon
root     3471278  0.0  0.0 363032 131712 ?       S    15:33   0:02  \_ pvedaemon worker
root     3498456  0.1  0.0 363188 131324 ?       S    15:53   0:01  \_ pvedaemon worker
root     3532500  0.1  0.0 362508 129900 ?       S    16:18   0:00  \_ pvedaemon worker
root        4667  0.0  0.0 337180 101992 ?       Ss   Jun04   1:38 pve-ha-crm
www-data    4823  0.0  0.0 356028 141288 ?       Ss   Jun04   0:20 pveproxy
www-data 3483988  0.1  0.0 364392 133956 ?       S    15:42   0:04  \_ pveproxy worker
www-data 3488265  0.2  0.0 364224 133784 ?       S    15:45   0:04  \_ pveproxy worker
www-data 3514208  0.2  0.0 364380 133424 ?       S    16:05   0:02  \_ pveproxy worker
www-data    4832  0.0  0.0  70600 57052 ?        Ss   Jun04   0:15 spiceproxy
www-data 2211050  0.0  0.0  70856 52540 ?        S    00:00   0:01  \_ spiceproxy worker
root        4834  0.0  0.0 336780 101744 ?       Ss   Jun04   2:45 pve-ha-lrm
root        6116  0.0  0.0  21284  8160 ?        Ss   Jun04   0:00 /lib/systemd/systemd --user
root        6119  0.0  0.0 170928  2904 ?        S    Jun04   0:00  \_ (sd-pam)
root     1343571  0.0  0.0   7372  4160 ?        Ss   Jun15   1:17 /usr/bin/lxc-start -F -n 2251
root     1346289  0.0  0.0 169600  7152 ?        Ss   Jun15   0:04  \_ /sbin/init
root     1346395  0.0  0.0  37784 12748 ?        Ss   Jun15   0:02      \_ /lib/systemd/systemd-journald
root     1346448  0.0  0.0 225844  2412 ?        Ssl  Jun15   0:00      \_ /usr/sbin/rsyslogd -n -iNONE
_rpc     1346449  0.0  0.0   9120  2772 ?        Ss   Jun15   0:00      \_ /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root     1346450  0.0  0.0  19304  4980 ?        Ss   Jun15   0:01      \_ /lib/systemd/systemd-logind
root     1346476  0.0  0.0  15848  4100 ?        Ss   Jun15   0:00      \_ /usr/sbin/sshd -D
postfix  1346762  0.0  0.0 2126600 143160 ?      Ssl  Jun15   6:45      \_ /usr/sbin/mysqld
root     1346823  0.0  0.0   5672  1660 ?        Ss   Jun15   0:01      \_ /usr/sbin/cron -f
root     1346829  0.0  0.0   2416  1076 pts/1    Ss+  Jun15   0:00      \_ /sbin/agetty -o -p -- \u --noclear --keep-baud tty2 115200,38400,9600 linux
root     1346830  0.0  0.0   2416  1116 pts/1    Ss+  Jun15   0:00      \_ /sbin/agetty -o -p -- \u --noclear --keep-baud console 115200,38400,9600 linux
root     1346831  0.0  0.0   7136  2672 pts/0    Ss   Jun15   0:00      \_ /bin/login -p --
root     1349305  0.0  0.0   5052  3456 pts/0    S+   Jun15   0:00      |   \_ -bash
root     1346878  0.0  0.0 235588 24804 ?        Ss   Jun15   0:18      \_ /usr/sbin/apache2 -k start
www-data 2370965  0.0  0.0 236184 15796 ?        S    02:00   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 2370967  0.0  0.0 236184 15768 ?        S    02:00   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 2370968  0.0  0.0 238752 34280 ?        S    02:00   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 2370969  0.0  0.0 236184 15760 ?        S    02:00   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 2370970  0.0  0.0 236616 30252 ?        S    02:00   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 2383697  0.0  0.0 236192 15728 ?        S    02:09   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 3169487  0.0  0.0 235948 15480 ?        S    11:49   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 3169488  0.0  0.0 236184 15668 ?        S    11:49   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 3169667  0.0  0.0 235940 14180 ?        S    11:49   0:00      |   \_ /usr/sbin/apache2 -k start
www-data 3169760  0.0  0.0 236184 15628 ?        S    11:49   0:00      |   \_ /usr/sbin/apache2 -k start
root     1347013  0.0  0.0  43468  3500 ?        Ss   Jun15   0:02      \_ /usr/lib/postfix/sbin/master -w
systemd+ 1347016  0.0  0.0  44160  5656 ?        S    Jun15   0:00          \_ qmgr -l -t unix -u
systemd+ 3503931  0.0  0.0  43956  6004 ?        S    15:57   0:00          \_ pickup -l -t unix -u -c
root     1349175  0.0  0.0   2400  1568 ?        Ss   Jun15   0:00 /usr/bin/dtach -A /var/run/dtach/vzctlconsole2251 -r winch -z lxc-console -n 2251 -e -1
root     1349176  0.0  0.0   7284  3872 pts/3    Ss+  Jun15   0:00  \_ lxc-console -n 2251 -e -1
root     2211053  0.0  0.0  86172  1900 ?        Ssl  00:00   0:03 /usr/sbin/pvefw-logger
root@pm02:~#
 

mira

Proxmox Staff Member
Does the VM start once those files are deleted? Or do you still get the 'Resource temporarily unavailable' message?
 

x4x

New Member
I got the VM running. When creating the storage I added two nodes of the PVE cluster.
On one node I can start and run the VMs normally, but on the other node I get errors. I also can’t migrate to that node.

Bash:
kvm: -drive file=iscsi://192.168.0.21/iqn.2021-06.local.sfu.storage1:zfs-iscsi0/1,if=none,id=drive-ide0,format=raw,cache=none,aio=native,detect-zeroes=on: iSCSI: Failed to connect to LUN : iscsi_service failed with : iscsi_service_reconnect_if_loggedin. Can not reconnect right now.

What am I missing here? Why does it only work on one node and not the other?
 

mira

Proxmox Staff Member
Please provide the output of the following commands:
Code:
pveversion -v
iscsiadm -m node
iscsiadm -m session
df -h
 

x4x

New Member
Bash:
root@storage1:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.119-1-pve)
pve-manager: 6.4-8 (running version: 6.4-8/185e14db)
pve-kernel-5.4: 6.4-3
pve-kernel-helper: 6.4-3
pve-kernel-5.4.119-1-pve: 5.4.119-1
pve-kernel-5.4.114-1-pve: 5.4.114-1
pve-kernel-5.4.73-1-pve: 5.4.73-1
ceph: 15.2.13-pve1~bpo10
ceph-fuse: 15.2.13-pve1~bpo10
corosync: 3.1.2-pve1
criu: 3.11-3
glusterfs-client: 9.0-1
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.20-pve1
libproxmox-acme-perl: 1.1.0
libproxmox-backup-qemu0: 1.0.3-1
libpve-access-control: 6.4-3
libpve-apiclient-perl: 3.1-3
libpve-common-perl: 6.4-3
libpve-guest-common-perl: 3.1-5
libpve-http-server-perl: 3.2-3
libpve-storage-perl: 6.4-1
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve4
lxc-pve: 4.0.6-2
lxcfs: 4.0.6-pve1
novnc-pve: 1.1.0-1
proxmox-backup-client: 1.1.10-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.5-6
pve-cluster: 6.4-1
pve-container: 3.3-5
pve-docs: 6.4-2
pve-edk2-firmware: 2.20200531-1
pve-firewall: 4.1-4
pve-firmware: 3.2-4
pve-ha-manager: 3.1-1
pve-i18n: 2.3-1
pve-qemu-kvm: 5.2.0-6
pve-xtermjs: 4.7.0-3
qemu-server: 6.4-2
smartmontools: 7.2-pve2
spiceterm: 3.1-1
vncterm: 1.6-2
zfsutils-linux: 2.0.4-pve1
root@storage1:~# iscsiadm -m node
192.168.0.10:3260,1 iqn.2021-03.local.home.storage0:iscsi0
192.168.0.11:3260,1 iqn.2021-03.local.home.storage1:iscsi1
root@storage1:~# iscsiadm -m session
iscsiadm: No active sessions.
root@storage1:~# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              126G     0  126G   0% /dev
tmpfs              26G  914M   25G   4% /run
rpool/ROOT/pve-1  225G  4.6G  221G   3% /
tmpfs             126G   63M  126G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs             126G     0  126G   0% /sys/fs/cgroup
rpool             221G  128K  221G   1% /rpool
rpool/ROOT        221G  128K  221G   1% /rpool/ROOT
rpool/data        221G  128K  221G   1% /rpool/data
tank               52T  256K   52T   1% /tank
/dev/fuse          30M   56K   30M   1% /etc/pve
tmpfs              26G     0   26G   0% /run/user/0
root@storage1:~#
 

differential21

New Member
Jun 28, 2021
1
0
1
23
Hello.
I created a VM on one of our Proxmox hypervisors, with the VM disk pointing to another Proxmox hypervisor (shown above as "storage1") that functions purely as a storage server. As soon as the VM was created and I tried to start it, it started but immediately shut down by itself. After I restarted the hypervisor I could start the VM and get noVNC access, all perfect. But when I shut down the VM, waited a day, and then tried to start this specific VM again, it wouldn't boot.

Now when I start the VM via SSH or the web GUI, I get the following error message:

TASK ERROR: start failed: command '/usr/bin/kvm -id 131 -name sh-li1 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/131.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/131.pid -daemonize -smbios 'type=1,uuid=61a719d2-ba4a-415e-9991-929fdb043c44' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/131.vnc,password -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 8192 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=c5bcd9ea-f7f8-49db-891d-35a6ea6e0342' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:fe627aab1aa' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=iscsi://192.168.0.21/iqn.2021-06.local.sfu.storage1:zfs-iscsi0/3,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=101' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz/template/iso/debian-10.10.0-amd64-netinst.iso,if=none,id=drive-sata0,media=cdrom,aio=threads' -device 'ide-cd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap131i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=E2:23:AF:62:E2:C9,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -machine 'type=pc+pve0'' failed: got timeout

Bash:
/usr/bin/kvm -id 131 -name sh-li1 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/131.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/131.pid -daemonize -smbios 'type=1,uuid=61a719d2-ba4a-415e-9991-929fdb043c44' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/131.vnc,password -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 8192 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=c5bcd9ea-f7f8-49db-891d-35a6ea6e0342' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:fe627aab1aa' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=iscsi://192.168.0.21/iqn.2021-06.local.sfu.storage1:zfs-iscsi0/3,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=101' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz/template/iso/debian-10.10.0-amd64-netinst.iso,if=none,id=drive-sata0,media=cdrom,aio=threads' -device 'ide-cd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap131i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=E2:23:AF:62:E2:C9,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -machine 'type=pc+pve0'
 

mira

Proxmox Staff Member
Now when I start the VM via SSH or the web GUI, I get the following error message:

TASK ERROR: start failed: command '/usr/bin/kvm -id 131 -name sh-li1 -no-shutdown -chardev 'socket,id=qmp,path=/var/run/qemu-server/131.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/131.pid -daemonize -smbios 'type=1,uuid=61a719d2-ba4a-415e-9991-929fdb043c44' -smp '4,sockets=1,cores=4,maxcpus=4' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/131.vnc,password -cpu kvm64,enforce,+kvm_pv_eoi,+kvm_pv_unhalt,+lahf_lm,+sep -m 8192 -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'vmgenid,guid=c5bcd9ea-f7f8-49db-891d-35a6ea6e0342' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:fe627aab1aa' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=iscsi://192.168.0.21/iqn.2021-06.local.sfu.storage1:zfs-iscsi0/3,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,rotation_rate=1,bootindex=101' -device 'ahci,id=ahci0,multifunction=on,bus=pci.0,addr=0x7' -drive 'file=/var/lib/vz/template/iso/debian-10.10.0-amd64-netinst.iso,if=none,id=drive-sata0,media=cdrom,aio=threads' -device 'ide-cd,bus=ahci0.0,drive=drive-sata0,id=sata0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap131i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown' -device 'e1000,mac=E2:23:AF:62:E2:C9,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=102' -machine 'type=pc+pve0'' failed: got timeout
This looks like a different issue, please open a new thread for it.
 

mira

Proxmox Staff Member
root@storage1:~# iscsiadm -m node
192.168.0.10:3260,1 iqn.2021-03.local.home.storage0:iscsi0
192.168.0.11:3260,1 iqn.2021-03.local.home.storage1:iscsi1
root@storage1:~# iscsiadm -m session
iscsiadm: No active sessions.
Can you manually connect to the iSCSI storage? Try `iscsiadm -m node -L all`.
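Before (or alongside) the manual login, it can help to confirm the portal is reachable at all from the failing node. A minimal sketch using bash's `/dev/tcp` (portal address taken from the storage.cfg above):

```shell
portal=192.168.0.21
port=3260

# Plain TCP reachability check; no iSCSI protocol involved yet.
if timeout 3 bash -c "exec 3<>/dev/tcp/$portal/$port" 2>/dev/null; then
    echo "portal reachable"
else
    echo "portal unreachable"
fi

# If the portal answers, discover targets and attempt a manual login:
# iscsiadm -m discovery -t sendtargets -p "$portal"
# iscsiadm -m node -L all
```

Note that with ZFS over iSCSI, QEMU connects to the VM disks through its built-in libiscsi initiator rather than open-iscsi, so an empty `iscsiadm -m session` on its own is not necessarily a problem.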
 

x4x

New Member
I got it running! Setting the delay in /usr/share/perl5/PVE/Storage/ZFSPlugin.pm, line 57, to something larger than 10 seconds solved it; I just set 120 seconds.
Interestingly, setting it on only one server and migrating the VMs there also worked, so I think some initialization takes longer than 10 seconds.

Maybe there should be an option in the storage config for a longer timeout.
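For anyone applying the same workaround, a hedged sketch of the edit. The exact content of line 57 varies between libpve-storage-perl versions, so inspect it first; the change is also overwritten whenever the package is upgraded:

```shell
plugin=/usr/share/perl5/PVE/Storage/ZFSPlugin.pm

# Inspect the line in question before touching anything.
sed -n '55,60p' "$plugin"

# Keep a backup, then raise the 10-second value on line 57 to 120.
# (Assumes the literal '10' appears on that line, as in the post above.)
cp "$plugin" "$plugin.bak"
sed -i '57s/\b10\b/120/' "$plugin"

# The PVE daemons need a restart to load the modified module.
systemctl restart pvedaemon pvestatd
```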
 
