command failed with status code 5

hugo.hernandez
Jul 26, 2024
Hello members, I need some help. I am new to Proxmox and I have an issue.

After updating to the following versions:
proxmox-ve: 8.2.0 (running kernel: 6.8.8-3-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)

I get the following error:

[screenshot: error message]

Here are the outputs of pvs, vgs & lvs:

[screenshot: pvs, vgs & lvs output]




root@nodo3:~# cat /etc/pve/storage.cfg

dir: local
path /var/lib/vz
content iso,vztmpl,backup

lvmthin: local-lvm
thinpool data
vgname pve
content images,rootdir

rbd: CEPH_PROXMOX
content images,rootdir
krbd 0
pool CEPH_PROXMOX

lvm: SDD_nodo3
vgname vm_SDD_nodo3
content images,rootdir
shared 1

pbs: PBS2
datastore PBS2_Backups_Nodo3
server 172.23.176.28
content backup
fingerprint 91:be:c9:de:62:ad:57:59:96:0d:f8:5e:5e:76:c8:d2:24:c6:c9:09:9a:c2:43:20:74:ca:9b:d6:19:28:13:ba
prune-backups keep-daily=3,keep-last=3,keep-monthly=1,keep-yearly=1
username root@pam

lvm: SDD_1
vgname vmstore_sdd1
content rootdir,images
shared 1

lvm: HDD_1
vgname vmstore_hdd1
content rootdir,images
shared 1

lvm: HDD_2
vgname vmstore_hdd2
content images,rootdir
shared 1

lvm: SDD_2
vgname vmstore_sdd2
content images,rootdir
shared 1

lvm: SDD_3
vgname vmstore_sdd3
content rootdir,images
shared 1

lvm: SDD_4
vgname vmstore_sdd4
content rootdir,images
shared 1

lvm: HDD_3
vgname vm_HDD_nodo3
content rootdir,images
shared 1

lvm: HDD_4
vgname PBS1_2
content rootdir,images
shared 1

lvm: HDD5
vgname vmstore_hdd5
content rootdir,images
shared 1

lvm: SDD_5
vgname SDD_5
content rootdir,images
shared 1

pbs: PBS1
datastore PBS1
server 172.23.176.29
content backup
fingerprint 5b:bd:2c:f0:05:85:a4:e4:27:08:1d:8d:1c:15:46:33:ee:08:47:29:45:cb:ed:27:74:0c:0b:a9:f3:1f:c2:7f
prune-backups keep-all=1
username root@pam

pbs: PBS1_2
datastore PBS1_2
server 172.23.176.29
content backup
fingerprint 5b:bd:2c:f0:05:85:a4:e4:27:08:1d:8d:1c:15:46:33:ee:08:47:29:45:cb:ed:27:74:0c:0b:a9:f3:1f:c2:7f
prune-backups keep-all=1
username root@pam

pbs: PBS2_1
datastore PBS2_1
server 172.23.176.28
content backup
fingerprint 91:be:c9:de:62:ad:57:59:96:0d:f8:5e:5e:76:c8:d2:24:c6:c9:09:9a:c2:43:20:74:ca:9b:d6:19:28:13:ba
prune-backups keep-all=1
username root@pam

lvm: HDD_6
vgname HDD_6
content images,rootdir
shared 1

lvm: POOL_NODO5
vgname POOL_NODO5
content images,rootdir
nodes nodo5
shared 0

lvm: POOL_NODO4
vgname POOL_NODO4
content rootdir,images
nodes nodo4
shared 0

lvm: HDD_7
vgname HDD_7
content images,rootdir
nodes nodo3,nodo2,nodo1
saferemove 0
shared 1

cifs: NAS_OSCAR
path /mnt/pve/NAS_OSCAR
server 172.23.176.9
share NAS_OSCAR
content backup,vztmpl,images
prune-backups keep-all=1
username admin

cifs: NAS_SYN
path /mnt/pve/NAS_SYN
server 172.23.176.69
share NAS_SYN
content images,vztmpl,backup
prune-backups keep-all=1
username Administrador






root@nodo3:~# pveversion -v

proxmox-ve: 8.2.0 (running kernel: 6.8.8-3-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-14
proxmox-kernel-6.8: 6.8.8-3
proxmox-kernel-6.8.8-3-pve-signed: 6.8.8-3
pve-kernel-5.15.158-1-pve: 5.15.158-1
pve-kernel-5.15.152-1-pve: 5.15.152-1
pve-kernel-5.15.149-1-pve: 5.15.149-1
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.15.131-2-pve: 5.15.131-3
pve-kernel-5.15.107-2-pve: 5.15.107-2
pve-kernel-5.15.107-1-pve: 5.15.107-1
ceph: 17.2.7-pve3
ceph-fuse: 17.2.7-pve3
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown: residual config
ifupdown2: 3.2.0-1+pmx9
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.3
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.7-1
proxmox-backup-file-restore: 3.2.7-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-offline-mirror-helper: 0.6.6
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.13-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 9.0.0-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.2
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1


If anybody could help me, I would really appreciate it. Thanks in advance.
 
Hi,
it seems like the physical disks are not present (from the point of view of LVM). Can you see them with lsblk -f? Can you see any errors about the disks in the system's boot log, e.g. journalctl -b?
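For example, something along these lines should show whether LVM sees the disks at all (the grep pattern below is only a suggestion, adjust it to your device names):

# block devices and their filesystem/LVM signatures
lsblk -f
# what LVM itself currently sees
pvs
vgs
lvs
# disk- and LVM-related messages from the current boot
journalctl -b | grep -iE 'lvm|error|sd[a-z]|nvme'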
 
Thank you in advance, I really appreciate your feedback.

Please find attached the lsblk -f output.

In a few moments I will upload the journalctl -b output.

Best regards
 

Attachments

  • lsblk.log (95.2 KB)
I am pasting some parts of the journalctl -b output, since it is too big.

I sincerely appreciate your feedback.

Jul 22 15:39:19 nodo1 pveproxy[11349]: starting 3 worker(s)
Jul 22 15:39:19 nodo1 pveproxy[11349]: worker 11350 started
Jul 22 15:39:19 nodo1 pveproxy[11349]: worker 11351 started
Jul 22 15:39:19 nodo1 pveproxy[11349]: worker 11352 started
Jul 22 15:39:19 nodo1 systemd[1]: Started pveproxy.service - PVE API Proxy Server.
Jul 22 15:39:19 nodo1 systemd[1]: Starting pve-ha-lrm.service - PVE Local HA Resource Manager Daemon...
Jul 22 15:39:19 nodo1 systemd[1]: Starting spiceproxy.service - PVE SPICE Proxy Server...
Jul 22 15:39:19 nodo1 spiceproxy[11355]: starting server
Jul 22 15:39:19 nodo1 spiceproxy[11355]: starting 1 worker(s)
Jul 22 15:39:19 nodo1 spiceproxy[11355]: worker 11356 started
Jul 22 15:39:19 nodo1 systemd[1]: Started spiceproxy.service - PVE SPICE Proxy Server.
Jul 22 15:39:20 nodo1 pve-ha-lrm[11358]: starting server
Jul 22 15:39:20 nodo1 pve-ha-lrm[11358]: status change startup => wait_for_agent_lock
Jul 22 15:39:20 nodo1 systemd[1]: Started pve-ha-lrm.service - PVE Local HA Resource Manager Daemon.
Jul 22 15:39:20 nodo1 systemd[1]: Starting pve-guests.service - PVE guests...
Jul 22 15:39:20 nodo1 ceph-osd[10892]: 2024-07-22T15:39:20.811-0600 74ed0888c3c0 -1 osd.2 5812 log_to_monitors true
Jul 22 15:39:21 nodo1 pve-guests[11438]: <root@pam> starting task UPID:nodo1:00002CD6:00002A51:669ED189:startall::root@pam:
Jul 22 15:39:21 nodo1 pve-guests[11438]: <root@pam> end task UPID:nodo1:00002CD6:00002A51:669ED189:startall::root@pam: OK
Jul 22 15:39:21 nodo1 systemd[1]: Finished pve-guests.service - PVE guests.
Jul 22 15:39:21 nodo1 systemd[1]: Starting pvescheduler.service - Proxmox VE scheduler...
Jul 22 15:39:21 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:21 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:21 nodo1 ceph-osd[10892]: 2024-07-22T15:39:21.512-0600 74ecf98006c0 -1 osd.2 5812 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jul 22 15:39:21 nodo1 pvescheduler[11481]: starting server
Jul 22 15:39:21 nodo1 systemd[1]: Started pvescheduler.service - Proxmox VE scheduler.
Jul 22 15:39:21 nodo1 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 22 15:39:21 nodo1 systemd[1]: Reached target graphical.target - Graphical Interface.
Jul 22 15:39:21 nodo1 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP...
Jul 22 15:39:21 nodo1 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 22 15:39:21 nodo1 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP.
Jul 22 15:39:21 nodo1 systemd[1]: Startup finished in 1min 28.584s (firmware) + 6.362s (loader) + 12.807s (kernel) + 1min 36.227s (userspace) = 3min 23.980s.
Jul 22 15:39:23 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:24 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:24 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:39:24 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:25 nodo1 pvedaemon[10863]: <root@pam> successful auth for user 'root@pam'
Jul 22 15:39:27 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:28 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:30 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:34 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:35 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:39:37 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:37 nodo1 kernel: netfs: FS-Cache loaded
Jul 22 15:39:37 nodo1 kernel: Key type cifs.spnego registered
Jul 22 15:39:37 nodo1 kernel: Key type cifs.idmap registered
Jul 22 15:39:37 nodo1 kernel: CIFS: Attempting to mount //172.23.176.69/NAS_SYN
Jul 22 15:39:37 nodo1 kernel: CIFS: Attempting to mount //172.23.176.9/NAS_OSCAR
Jul 22 15:39:38 nodo1 pvestatd[10817]: status update time (14.148 seconds)
Jul 22 15:39:38 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:39:40 nodo1 systemd-timesyncd[10123]: Contacted time server 132.248.30.3:123 (2.debian.pool.ntp.org).
Jul 22 15:39:40 nodo1 systemd-timesyncd[10123]: Initial clock synchronization to Mon 2024-07-22 15:39:40.559701 CST.
Jul 22 15:39:41 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:44 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:47 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:50 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:51 nodo1 pvestatd[10817]: status update time (12.969 seconds)
Jul 22 15:39:51 nodo1 pvedaemon[10864]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:39:51 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:39:54 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:54 nodo1 pvedaemon[10864]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:55 nodo1 pvedaemon[10862]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:39:57 nodo1 pvedaemon[10863]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:39:57 nodo1 pvedaemon[10864]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:57 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:39:59 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:59 nodo1 pvedaemon[10864]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:39:59 nodo1 pvedaemon[10862]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:02 nodo1 pvedaemon[10862]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:02 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:02 nodo1 pvedaemon[10864]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:02 nodo1 pvestatd[10817]: status update time (11.263 seconds)
Jul 22 15:40:05 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:05 nodo1 pvedaemon[10862]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:05 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:08 nodo1 pvedaemon[10862]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:08 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:09 nodo1 pvedaemon[10862]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:11 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:11 nodo1 pvedaemon[10862]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:14 nodo1 pvedaemon[10862]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:14 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:15 nodo1 pvestatd[10817]: status update time (12.802 seconds)
Jul 22 15:40:15 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:18 nodo1 pvedaemon[10862]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:18 nodo1 pvedaemon[10862]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:18 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:21 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:25 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:27 nodo1 pvedaemon[10864]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:28 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:28 nodo1 pvestatd[10817]: status update time (13.183 seconds)
Jul 22 15:40:28 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:30 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:30 nodo1 pvedaemon[10864]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:33 nodo1 pvedaemon[10864]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:33 nodo1 pvedaemon[10864]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:33 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:36 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:36 nodo1 pvedaemon[10864]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:40 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:40 nodo1 pvestatd[10817]: status update time (11.969 seconds)
Jul 22 15:40:43 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:43 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:43 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:43 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:43 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:45 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:45 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:46 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:46 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:47 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:48 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:48 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:50 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:50 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:52 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:53 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:40:53 nodo1 pvestatd[10817]: status update time (13.506 seconds)
Jul 22 15:40:53 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:53 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:54 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:54 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:56 nodo1 pvestatd[10817]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:40:57 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:40:57 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:57 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:40:59 nodo1 pvedaemon[10864]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 22 15:41:00 nodo1 pvedaemon[10864]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:41:00 nodo1 pvestatd[10817]: PBS1_2: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:41:03 nodo1 pvestatd[10817]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:41:03 nodo1 pvedaemon[10864]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:41:04 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:41:05 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:41:06 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:41:06 nodo1 pvedaemon[10864]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:41:06 nodo1 pvedaemon[10864]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:41:06 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:41:07 nodo1 pvestatd[10817]: status update time (13.508 seconds)
Jul 22 15:41:07 nodo1 pmxcfs[10472]: [status] notice: received log
Jul 22 15:41:07 nodo1 pvestatd[10817]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
 
What does qm config 104 show?


Jul 22 15:39:19 nodo1
All your previous output was for nodo3?

Jul 22 15:41:06 nodo1 pvedaemon[10864]: PBS1: error fetching datastores - 500 Can't connect to 172.23.176.29:8007 (No route to host)
Jul 22 15:41:06 nodo1 pvedaemon[10864]: PBS2: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Jul 22 15:41:06 nodo1 pvestatd[10817]: PBS2_1: error fetching datastores - 500 Can't connect to 172.23.176.28:8007 (No route to host)
Are all your PBS servers really down, or do you have some network problem with PVE?
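A quick way to check that from the node would be something like this (the IP addresses are taken from your own log, 8007 is the PBS API port):

# can the PBS hosts be reached at all?
ping -c 3 172.23.176.28
ping -c 3 172.23.176.29
# does anything answer on the PBS API port?
curl -k https://172.23.176.28:8007
curl -k https://172.23.176.29:8007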
 
Also, your lsblk output appears to be for nodo1 and not nodo3.

I imagine you're running something incorrectly on your cluster.
 
Seems plausible. @hugo.hernandez please note that the storage configuration is shared across nodes, but local storages need to be set up on each node. If a storage is not actually available on a certain node, it needs to be restricted in the configuration (either using the nodes property on the CLI or when editing the storage in the UI). See also: https://pve.proxmox.com/pve-docs/chapter-pvesm.html#_storage_configuration
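For example (using SDD_nodo3 from your storage.cfg purely as an illustration, adjust to whichever storages are actually local to a single node), the restriction can be set on the CLI like this:

# restrict the storage to the node(s) where its volume group actually exists
pvesm set SDD_nodo3 --nodes nodo3
# afterwards /etc/pve/storage.cfg will contain a "nodes nodo3" line for that
# storage, and the other nodes will stop trying to activate a VG they cannot see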
 
