pveproxy.service & pvedaemon.service - PVE API Daemon | FAILED

You can try running dpkg --debug=77777 --configure -a to get more information from dpkg, which will hopefully hint at why it gets stuck.
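The value 77777 is just a combination of several octal debug flags; dpkg can print what each bit means, so you can also pick a smaller, less noisy mask if the full output is too much:

# dpkg --debug=help

(That only prints the list of debugging flags, it doesn't change anything on the system.)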

If nothing else helps, you should consider backing up your data and configuration and then reinstalling.
 
# dpkg --debug=77777 --configure -a
D000001: ensure_diversions: new, (re)loading
D000001: process queue pkg initramfs-tools:all queue.len 1 progress 1, try 1
D000040: checking dependencies of initramfs-tools:all (- <none>)
D000400: checking group ...
D000400: checking possibility -> initramfs-tools-core
D000400: checking non-provided pkg initramfs-tools-core:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> linux-base
D000400: checking non-provided pkg linux-base:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000040: ok 2 msgs >><<
D000040: checking Breaks
D000400: checking breaker initramfs-tools-core:all virtbroken <none>
D000400: checking breaker busybox:amd64 virtbroken <none>
D000400: checking breaker klibc-utils:amd64 virtbroken <none>
D000400: checking virtbroken linux-initramfs-tool
Setting up initramfs-tools (0.130) ...
D020000: trigproc_activate_packageprocessing pkg=initramfs-tools:all
D000002: fork/exec /var/lib/dpkg/info/initramfs-tools.postinst ( configure 0.130 )
update-initramfs: deferring update (trigger activated)
D000001: ensure_diversions: same, skipping
D020000: post_postinst_tasks - trig_incorporate
D010000: trigproc_enqueue_deferred pend=initramfs-tools:all
D000001: process queue pkg open-iscsi:amd64 queue.len 0 progress 1, try 1
D000040: checking dependencies of open-iscsi:amd64 (- <none>)
D000400: checking group ...
D000400: checking possibility -> udev
D000400: checking non-provided pkg udev:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> debconf
D000400: checking non-provided pkg debconf:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> libc6
D000400: checking non-provided pkg libc6:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> libisns0
D000400: checking non-provided pkg libisns0:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> libmount1
D000400: checking non-provided pkg libmount1:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> lsb-base
D000400: checking non-provided pkg lsb-base:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> debconf
D000400: checking non-provided pkg debconf:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000040: ok 2 msgs >><<
D000040: checking Breaks
Setting up open-iscsi (2.0.874-7.1) ...
D020000: trigproc_activate_packageprocessing pkg=open-iscsi:amd64
D000002: fork/exec /var/lib/dpkg/info/open-iscsi.postinst ( configure 2.0.874-3~deb9u1 )............ stuck here for around 6 hours
 
How do I save/back up my data and configuration?

Thanks
That depends on your setup. PVE's configuration is saved in /etc/pve (assuming the pve-cluster.service still works); most other configuration is in /etc/, and your data is wherever you put it (check /etc/pve/storage.cfg for the storages used by PVE).
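As a rough sketch (the archive path is just an example, put it somewhere that survives a reinstall, e.g. another machine or a USB disk):

# tar czf /root/pve-config-backup.tar.gz /etc/pve /etc
# cat /etc/pve/storage.cfg

The tar archive captures the cluster filesystem content plus the rest of the host configuration; storage.cfg then tells you which storages hold the actual VM disks and backups, which you would save separately (e.g. with vzdump).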

D000002: fork/exec /var/lib/dpkg/info/open-iscsi.postinst ( configure 2.0.874-3~deb9u1 )............ stuck here for around 6 hours
You could take a look at that script and see where it might hang (maybe when doing modprobe?). Did you try pressing Ctrl+C when it hangs, and does it continue then?
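For example (only as a debugging step, and the version argument is simply the one from your dpkg output above):

# less /var/lib/dpkg/info/open-iscsi.postinst
# sh -x /var/lib/dpkg/info/open-iscsi.postinst configure 2.0.874-3~deb9u1

Running the post-installation script by hand with sh -x traces every command it executes, so the last line printed before it hangs shows exactly what it is waiting on.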
 
While you are running the update, in another terminal run `dmesg -Tw` and see if it says anything is wrong. Maybe you just have a failing disk or something.
 
[Fri Feb 25 11:55:06 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:06 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:06 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:06 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:11 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:11 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:11 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:11 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:16 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:16 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:16 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:16 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:21 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:21 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:21 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:21 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:26 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:26 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:26 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:26 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:31 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:31 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:31 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:31 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:36 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:36 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:36 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:36 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:41 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:41 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:41 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:41 2022] device-mapper: multipath: Failing path 69:64.
[Fri Feb 25 11:55:42 2022] sd 19:0:0:10: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:10: alua: device naa.6000d310058198000000000000000021 port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:10: [sdcg] 21474836480 512-byte logical blocks: (11.0 TB/10.0 TiB)
[Fri Feb 25 11:55:42 2022] sd 19:0:0:10: [sdcg] 4096-byte physical blocks
[Fri Feb 25 11:55:42 2022] sdcg: detected capacity change from 0 to 10995116277760
[Fri Feb 25 11:55:42 2022] sd 19:0:0:11: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:11: alua: device naa.6000d310058198000000000000000022 port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:2: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:2: alua: device naa.6000d31005819800000000000000000b port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:3: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:3: alua: device naa.6000d31005819800000000000000000d port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:4: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:4: alua: device naa.6000d31005819800000000000000000f port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:5: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:5: alua: device naa.6000d310058198000000000000000019 port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:6: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:6: alua: device naa.6000d310058198000000000000000017 port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:7: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:7: alua: device naa.6000d31005819800000000000000001b port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:8: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:8: alua: device naa.6000d31005819800000000000000001a port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:9: alua: supports implicit TPGS
[Fri Feb 25 11:55:42 2022] sd 19:0:0:9: alua: device naa.6000d31005819800000000000000001e port group f03b rel port 3b
[Fri Feb 25 11:55:42 2022] sd 19:0:0:11: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:4: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:3: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:2: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:6: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:7: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:5: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:9: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:42 2022] sd 19:0:0:8: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:46 2022] device-mapper: multipath: Reinstating path 69:64.
[Fri Feb 25 11:55:46 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 11:55:46 2022] sd 19:0:0:10: alua: port group f03b state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] device-mapper: table: 253:51: multipath: error getting device
[Fri Feb 25 13:45:03 2022] device-mapper: ioctl: error adding target to table
[Fri Feb 25 13:45:03 2022] sd 21:0:0:10: alua: port group f03c state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 16:0:0:10: alua: port group f037 state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 21:0:0:10: alua: port group f03c state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 15:0:0:10: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 16:0:0:10: alua: port group f037 state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 15:0:0:10: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 22:0:0:8: alua: port group f03d state A non-preferred supports toluSNA
[Fri Feb 25 13:45:03 2022] sd 20:0:0:8: alua: port group f039 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:4: alua: supports implicit TPGS
[Fri Feb 25 13:51:18 2022] sd 15:0:0:4: alua: device naa.6000d31005819800000000000000000f port group f036 rel port 36
[Fri Feb 25 13:51:18 2022] sd 15:0:0:5: alua: supports implicit TPGS
[Fri Feb 25 13:51:18 2022] sd 15:0:0:5: alua: device naa.6000d310058198000000000000000019 port group f036 rel port 36
[Fri Feb 25 13:51:18 2022] sd 15:0:0:6: alua: supports implicit TPGS
[Fri Feb 25 13:51:18 2022] sd 15:0:0:6: alua: device naa.6000d310058198000000000000000017 port group f036 rel port 36
[Fri Feb 25 13:51:18 2022] sd 15:0:0:7: alua: supports implicit TPGS
[Fri Feb 25 13:51:18 2022] sd 15:0:0:7: alua: device naa.6000d31005819800000000000000001b port group f036 rel port 36
[Fri Feb 25 13:51:18 2022] sd 15:0:0:8: alua: supports implicit TPGS
[Fri Feb 25 13:51:18 2022] sd 15:0:0:8: alua: device naa.6000d31005819800000000000000001a port group f036 rel port 36
[Fri Feb 25 13:51:18 2022] sd 15:0:0:11: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:10: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:9: alua: supports implicit TPGS
[Fri Feb 25 13:51:18 2022] sd 15:0:0:9: alua: device naa.6000d31005819800000000000000001e port group f036 rel port 36
[Fri Feb 25 13:51:18 2022] sd 15:0:0:5: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:4: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:3: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:2: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:8: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:7: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:6: alua: port group f036 state A non-preferred supports toluSNA
[Fri Feb 25 13:51:18 2022] sd 15:0:0:9: alua: port group f036 state A non-preferred supports toluSNA
 


You could take a look at that script and see where it might hang (maybe when doing modprobe?). Did you try pressing Ctrl+C when it hangs, and does it continue then?
If I press Ctrl+C:

# dpkg --debug=77777 --configure -a
D000001: ensure_diversions: new, (re)loading
D000001: process queue pkg initramfs-tools:all queue.len 1 progress 1, try 1
D010000: trigproc initramfs-tools:all
D000040: checking dependencies of initramfs-tools:all (- <none>)
D000400: checking group ...
D000400: checking possibility -> initramfs-tools-core
D000400: checking non-provided pkg initramfs-tools-core:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> linux-base
D000400: checking non-provided pkg linux-base:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000040: ok 2 msgs >><<
D010000: check_triggers_cycle pnow=initramfs-tools:all
D020000: check_triggers_cycle pnow=initramfs-tools:all first
Processing triggers for initramfs-tools (0.130) ...
D000002: fork/exec /var/lib/dpkg/info/initramfs-tools.postinst ( triggered update-initramfs )
update-initramfs: Generating /boot/initrd.img-4.15.18-21-pve
^Cdpkg: error processing package initramfs-tools (--configure):
subprocess installed post-installation script was interrupted
D020000: post_script_tasks - ensure_diversions
D000001: ensure_diversions: same, skipping
D020000: post_script_tasks - trig_incorporate
D000001: process queue pkg open-iscsi:amd64 queue.len 0 progress 1, try 1
D000040: checking dependencies of open-iscsi:amd64 (- <none>)
D000400: checking group ...
D000400: checking possibility -> udev
D000400: checking non-provided pkg udev:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> debconf
D000400: checking non-provided pkg debconf:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> libc6
D000400: checking non-provided pkg libc6:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> libisns0
D000400: checking non-provided pkg libisns0:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> libmount1
D000400: checking non-provided pkg libmount1:amd64
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> lsb-base
D000400: checking non-provided pkg lsb-base:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000400: checking group ...
D000400: checking possibility -> debconf
D000400: checking non-provided pkg debconf:all
D000400: is installed, ok and found
D000400: found 3
D000400: found 3 matched 0 possfixbytrig -
D000040: ok 2 msgs >><<
D000040: checking Breaks
Setting up open-iscsi (2.0.874-7.1) ...
D020000: trigproc_activate_packageprocessing pkg=open-iscsi:amd64
D000002: fork/exec /var/lib/dpkg/info/open-iscsi.postinst ( configure 2.0.874-3~deb9u1 )
debconf: DbDriver "config": /var/cache/debconf/config.dat is locked by another process: Resource temporarily unavailable
dpkg: error processing package open-iscsi (--configure):
subprocess installed post-installation script returned error exit status 1
D020000: post_script_tasks - ensure_diversions
D000001: ensure_diversions: same, skipping
D020000: post_script_tasks - trig_incorporate
D010000: trigproc_run_deferred
Errors were encountered while processing:
initramfs-tools
open-iscsi
root@R740:~#
 
All VMs are still running, thank god, but I don't have the web GUI yet.

# service multipathd status
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-01-18 02:25:08 CST; 1 years 1 months ago
Main PID: 29127 (multipathd)
Status: "idle"
Tasks: 29 (limit: 6144)
Memory: 47.7M
CPU: 7h 8.401s
CGroup: /system.slice/multipathd.service
└─29127 /sbin/multipathd -d -s

Feb 25 13:46:36 R740 multipathd[29127]: 36000d31005819800000000000000000c: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d31005819800000000000000000e: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d310058198000000000000000010: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d310058198000000000000000012: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d310058198000000000000000016: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d310058198000000000000000018: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d310058198000000000000000021: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d31005819800000000000000001a: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: 36000d310058198000000000000000025: event checker started
Feb 25 13:46:36 R740 multipathd[29127]: dm-51: remove map (uevent)
root@R740:~# service pve-cluster status
● pve-cluster.service - The Proxmox VE cluster filesystem
Loaded: loaded (/lib/systemd/system/pve-cluster.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-02-25 13:46:37 CST; 6 days ago
Main PID: 19553 (pmxcfs)
Tasks: 6 (limit: 6144)
Memory: 28.0M
CPU: 22min 6.912s
CGroup: /system.slice/pve-cluster.service
└─19553 /usr/bin/pmxcfs

Feb 25 13:46:36 R740 systemd[1]: Starting The Proxmox VE cluster filesystem...
Feb 25 13:46:37 R740 pvecm[19558]: Experimental "my" subs not enabled at /usr/share/perl5/PVE/APIClient/LWP.pm line 278, <DATA> line 755.
Feb 25 13:46:37 R740 pvecm[19558]: Compilation failed in require at /usr/share/perl5/PVE/API2/ClusterConfig.pm line 12, <DATA> line 755.
Feb 25 13:46:37 R740 pvecm[19558]: BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2/ClusterConfig.pm line 12, <DATA> line 755.
Feb 25 13:46:37 R740 pvecm[19558]: Compilation failed in require at /usr/share/perl5/PVE/CLI/pvecm.pm line 15, <DATA> line 755.
Feb 25 13:46:37 R740 pvecm[19558]: BEGIN failed--compilation aborted at /usr/share/perl5/PVE/CLI/pvecm.pm line 15, <DATA> line 755.
Feb 25 13:46:37 R740 pvecm[19558]: Compilation failed in require at /usr/bin/pvecm line 8, <DATA> line 755.
Feb 25 13:46:37 R740 pvecm[19558]: BEGIN failed--compilation aborted at /usr/bin/pvecm line 8, <DATA> line 755.
Feb 25 13:46:37 R740 systemd[1]: Started The Proxmox VE cluster filesystem.
root@R740:~# service pvedaemon status
● pvedaemon.service - PVE API Daemon
Loaded: loaded (/lib/systemd/system/pvedaemon.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-02-25 13:52:41 CST; 6 days ago
Main PID: 22268 (code=exited, status=0/SUCCESS)
CPU: 536ms

Feb 25 13:52:41 R740 pvedaemon[21658]: Compilation failed in require at /usr/share/perl5/PVE/API2.pm line 13.
Feb 25 13:52:41 R740 pvedaemon[21658]: BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2.pm line 13.
Feb 25 13:52:41 R740 pvedaemon[21658]: Compilation failed in require at /usr/share/perl5/PVE/Service/pvedaemon.pm line 8.
Feb 25 13:52:41 R740 pvedaemon[21658]: BEGIN failed--compilation aborted at /usr/share/perl5/PVE/Service/pvedaemon.pm line 8.
Feb 25 13:52:41 R740 pvedaemon[21658]: Compilation failed in require at /usr/bin/pvedaemon line 11.
Feb 25 13:52:41 R740 pvedaemon[21658]: BEGIN failed--compilation aborted at /usr/bin/pvedaemon line 11.
Feb 25 13:52:41 R740 systemd[1]: pvedaemon.service: Control process exited, code=exited status=255
Feb 25 13:52:41 R740 systemd[1]: Failed to start PVE API Daemon.
Feb 25 13:52:41 R740 systemd[1]: pvedaemon.service: Unit entered failed state.
Feb 25 13:52:41 R740 systemd[1]: pvedaemon.service: Failed with result 'exit-code'.
root@R740:~# service pvestatd status
● pvestatd.service - PVE Status Daemon
Loaded: loaded (/lib/systemd/system/pvestatd.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2022-02-25 13:52:40 CST; 6 days ago
Main PID: 21652 (pvestatd)
Tasks: 10 (limit: 6144)
Memory: 103.9M
CPU: 28.677s
CGroup: /system.slice/pvestatd.service
├─ 1465 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─ 3865 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─11920 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─12960 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─20710 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─21652 pvestatd
├─21770 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─29530 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
├─34214 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count
└─39775 /sbin/vgs --separator : --noheadings --units b --unbuffered --nosuffix --options vg_name,vg_size,vg_free,lv_count

Feb 25 13:52:40 R740 systemd[1]: pvestatd.service: Unit entered failed state.
Feb 25 13:52:40 R740 systemd[1]: pvestatd.service: Failed with result 'timeout'.
Feb 25 13:52:40 R740 systemd[1]: Starting PVE Status Daemon...
Feb 25 13:52:40 R740 pvestatd[21652]: starting server
Feb 25 13:52:40 R740 systemd[1]: Started PVE Status Daemon.
root@R740:~# service pveproxy status
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2022-02-25 13:52:41 CST; 6 days ago
Main PID: 33586 (code=exited, status=0/SUCCESS)
CPU: 549ms

Feb 25 13:52:41 R740 pveproxy[21660]: Compilation failed in require at /usr/share/perl5/PVE/API2.pm line 13.
Feb 25 13:52:41 R740 pveproxy[21660]: BEGIN failed--compilation aborted at /usr/share/perl5/PVE/API2.pm line 13.
Feb 25 13:52:41 R740 pveproxy[21660]: Compilation failed in require at /usr/share/perl5/PVE/Service/pveproxy.pm line 15.
Feb 25 13:52:41 R740 pveproxy[21660]: BEGIN failed--compilation aborted at /usr/share/perl5/PVE/Service/pveproxy.pm line 15.
Feb 25 13:52:41 R740 pveproxy[21660]: Compilation failed in require at /usr/bin/pveproxy line 11.
Feb 25 13:52:41 R740 pveproxy[21660]: BEGIN failed--compilation aborted at /usr/bin/pveproxy line 11.
Feb 25 13:52:41 R740 systemd[1]: pveproxy.service: Control process exited, code=exited status=255
Feb 25 13:52:41 R740 systemd[1]: Failed to start PVE API Proxy Server.
Feb 25 13:52:41 R740 systemd[1]: pveproxy.service: Unit entered failed state.
Feb 25 13:52:41 R740 systemd[1]: pveproxy.service: Failed with result 'exit-code'.
root@R740:~#
 
It looks like you have issues with your disk or SAN or whatever it is you are using for storage. An 11 TB device changed capacity (from 0) in the middle of the operation, among some other errors.
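If you want to narrow that down, a couple of read-only checks might help (the 69:64 and sdcg below are simply the device numbers/names from your own dmesg output):

# multipath -ll
# lsblk -o NAME,MAJ:MIN,SIZE,STATE | grep 69:64

multipath -ll lists every LUN with the state of each path (active/failed), and the lsblk line tells you which SCSI device the flapping path 69:64 actually is.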
 
Hello. I haven't added anything to the server. For storage I use iSCSI, but I have no problems with the storage, only with the web GUI, which is blank.
 
# service multipathd status
● multipathd.service - Device-Mapper Multipath Device Controller
Loaded: loaded (/lib/systemd/system/multipathd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-01-18 02:25:08 CST; 1 years 1 months ago
Process: 3905 ExecReload=/sbin/multipathd reconfigure (code=exited, status=0/SUCCESS)
Main PID: 29127 (multipathd)
Status: "idle"
Tasks: 29 (limit: 6144)
Memory: 47.4M
CPU: 7h 14min 35.006s
CGroup: /system.slice/multipathd.service
└─29127 /sbin/multipathd -d -s

Mar 09 08:35:04 R740 multipathd[29127]: 36000d31005819800000000000000000c: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d31005819800000000000000000e: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d310058198000000000000000010: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d310058198000000000000000012: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d310058198000000000000000016: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d310058198000000000000000018: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d310058198000000000000000021: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d31005819800000000000000001a: event checker started
Mar 09 08:35:04 R740 multipathd[29127]: 36000d310058198000000000000000025: event checker started
It's not a command, it is from the logs you posted.
 
Ok...
 
From what I can see, Proxmox isn't going to work until you get your upgrade/packages fixed. And it may be that the packages can't be fixed until your disks get fixed. I do note that your multipathd.service has been running for 1 year and 1 month. I'm not familiar with your iSCSI setup, but it may be the source of the issues.
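Once the storage side is behaving again, the interrupted upgrade can usually be resumed with the standard sequence; if the debconf lock from your earlier output is still held, fuser will show which process holds it:

# fuser -v /var/cache/debconf/config.dat
# dpkg --configure -a
# apt-get -f install

dpkg --configure -a finishes configuring the half-installed packages (initramfs-tools, open-iscsi), and apt-get -f install repairs any remaining dependency problems. Only then would I expect pvedaemon/pveproxy to start again, since the compilation errors in their logs look like a partially upgraded Perl/PVE stack.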
 
