weird problem (no cluster)

TErxleben

Oct 20, 2008
I have a strange problem here with a PBS backup:

1. I have a PVE server (pve1) with three virtual machines.
2. pve1 has an additional internal backup SSD to which I can back up the three VMs without any problems.
3. Additionally, I have a PBS running on another PVE host (pve0).
4. pve1 and pve0 are connected via 1 Gbit.
5. If I now try to back up the three VMs on pve1 to the PBS on pve0, two of them work, but the third VM eventually hangs.
6. Afterwards, neither pve1 nor any of the three VMs hosted on it can be reached.
7. nmap shows the required ports open on pve1 and on every VM.
8. The web GUI does not work, and neither does SSH (connection reset by peer).

I would appreciate ideas and input.
 
Hi,
the fact that you are not able to SSH into the machine might indicate a hard crash of the host.

Are there any errors logged to the systemd journal around the time of the backup that eventually leads to the Proxmox VE node becoming unresponsive?

Check journalctl --since <DATETIME> --until <DATETIME>, and post the VM config (qm config <VMID> --current) for the VM in question as well as the backup task log.
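For reference, a concrete invocation of those commands might look like this. The time window and the VM ID (202) are examples taken from this thread; adjust them to your case:

```shell
# Only meaningful on the Proxmox VE host itself; the guard makes
# the snippet a no-op on any other machine.
if command -v qm >/dev/null 2>&1; then
    # Journal entries around the time the backup of VM 202 started:
    journalctl --since "2024-08-11 16:30" --until "2024-08-11 17:00"
    # Current configuration of the VM that hangs:
    qm config 202 --current
fi
```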
 
Hi Chris,

journalctl:

Code:
Aug 11 16:34:37 pve1 pveproxy[3628482]: worker exit
Aug 11 16:34:37 pve1 pveproxy[987]: worker 3628482 finished
Aug 11 16:34:37 pve1 pveproxy[987]: starting 1 worker(s)
Aug 11 16:34:37 pve1 pveproxy[987]: worker 3632706 started
Aug 11 16:36:21 pve1 pvedaemon[3628350]: worker exit
Aug 11 16:36:21 pve1 pvedaemon[978]: worker 3628350 finished
Aug 11 16:36:21 pve1 pvedaemon[978]: starting 1 worker(s)
Aug 11 16:36:21 pve1 pvedaemon[978]: worker 3633087 started
Aug 11 16:37:41 pve1 pvedaemon[3631748]: INFO: Finished Backup of VM 201 (00:07:05)
Aug 11 16:37:41 pve1 pvedaemon[3631748]: INFO: Starting Backup of VM 202 (qemu)
Aug 11 16:38:39 pve1 pvedaemon[3629712]: <root@pam> successful auth for user 'root@pam'
Aug 11 16:40:51 pve1 pveproxy[3626703]: worker exit
Aug 11 16:40:51 pve1 pveproxy[987]: worker 3626703 finished
Aug 11 16:40:51 pve1 pveproxy[987]: starting 1 worker(s)
Aug 11 16:40:51 pve1 pveproxy[987]: worker 3634158 started
-- Boot 8eab634d744340289401e74b24a167ed --
Jan 26 22:48:48 pve1 chronyd[779]: Selected source 148.251.5.46 (2.debian.pool.ntp.org)
Aug 12 07:57:23 pve1 systemd-journald[363]: Oldest entry in /var/log/journal/af7b4875bb6d49efaa1fa86f>
Aug 12 07:57:23 pve1 systemd-journald[363]: /var/log/journal/af7b4875bb6d49efaa1fa86f0e7c072e/system.>
Jan 26 22:48:48 pve1 chronyd[779]: System clock wrong by 17136514.343054 seconds
Aug 12 07:57:23 pve1 chronyd[779]: System clock was stepped by 17136514.343054 seconds
Aug 12 07:57:23 pve1 chronyd[779]: System clock TAI offset set to 37 seconds
Aug 12 07:57:24 pve1 systemd[1]: Starting dpkg-db-backup.service - Daily dpkg database backup service>
Aug 12 07:57:24 pve1 systemd[1]: Starting logrotate.service - Rotate log files...
Aug 12 07:57:24 pve1 systemd[1]: dpkg-db-backup.service: Deactivated successfully.

qm config:

Code:
acpi: 1
agent: 1,fstrim_cloned_disks=1
balloon: 4096
boot: c
bootdisk: virtio0
cores: 4
cpuunits: 1000
description:
freeze: 0
kvm: 1
memory: 16184
name: PB-Server
net0: virtio=xx:xx:xx:xx:xx:xx,bridge=vmbr0
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-pci
smbios1: uuid=a47e2218-6646-4d2f-a750-f04f8fffc98c
sockets: 1
startup: order=3
tablet: 0
virtio0: local-lvm:vm-202-disk-0,size=148G

Backup log: see attachment.

Brgds
 

Attachments

  • Textdatei (5).txt
    13.4 KB
Jan 26 22:48:48 pve1 chronyd[779]: Selected source 148.251.5.46 (2.debian.pool.ntp.org)
Your system time jumps quite a bit after the reboot; please check whether you can sync the hardware clock to be closer to your local time. But I do not think this is related to your issue.

-- Boot 8eab634d744340289401e74b24a167ed --
Did you manually trigger the reboot here?

Also try attaching a monitor to the host and following the systemd journal via journalctl -f while the host locks up; it could be that the kernel panics but the log is never written to disk.
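Another way to follow the journal without physical access is to stream it to a second machine over SSH: whatever arrives before the crash survives on that machine even if pve1's disk never receives the messages. A minimal sketch (the hostname is the one from this thread):

```shell
# Stream the journal of a remote host and keep a local copy.
# Run this on a machine that stays up; "$1" is the host to watch.
capture_journal() {
    ssh "root@$1" journalctl -f | tee "$1-journal.log"
}
# Interactive usage (left commented out here):
#   capture_journal pve1
```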
 
1. "Quite a bit" is a slight understatement. But hwclock -r shows a synced time.
2. I definitely didn't trigger a manual reboot at this time.
3. OK. I will try to set up remote KVM access to follow the journal. It's a customer far away.
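Should the RTC ever drift again, it can be checked against, and synced from, the NTP-corrected system time. A sketch (needs root and a real-time clock on the host; guarded so it is a no-op elsewhere):

```shell
# Needs root and an RTC device; the guard skips the body otherwise.
if [ "$(id -u)" -eq 0 ] && [ -e /dev/rtc0 ]; then
    hwclock --show       # current RTC reading (same as hwclock -r)
    timedatectl status   # shows "RTC time" next to the system time
    hwclock --systohc    # write the system time back to the RTC
fi
```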
 
What is the output of pveversion -v, and does the issue persist when using a different kernel version?
 
proxmox-ve: 8.2.0 (running kernel: 6.8.8-1-pve)
pve-manager: 8.2.4 (running version: 8.2.4/faa83925c9641325)
proxmox-kernel-helper: 8.1.0
pve-kernel-5.15: 7.4-11
proxmox-kernel-6.8: 6.8.8-1
proxmox-kernel-6.8.8-1-pve-signed: 6.8.8-1
proxmox-kernel-6.8.4-3-pve-signed: 6.8.4-3
proxmox-kernel-6.8.4-2-pve-signed: 6.8.4-2
proxmox-kernel-6.5.13-5-pve-signed: 6.5.13-5
proxmox-kernel-6.5: 6.5.13-5
pve-kernel-5.15.143-1-pve: 5.15.143-1
pve-kernel-5.15.74-1-pve: 5.15.74-1
ceph-fuse: 16.2.11+ds-2
corosync: 3.1.7-pve3
criu: 3.17.1-2
glusterfs-client: 10.3-5
ifupdown2: 3.2.0-1+pmx8
ksm-control-daemon: 1.5-1
libjs-extjs: 7.0.0-4
libknet1: 1.28-pve1
libproxmox-acme-perl: 1.5.1
libproxmox-backup-qemu0: 1.4.1
libproxmox-rs-perl: 0.3.3
libpve-access-control: 8.1.4
libpve-apiclient-perl: 3.3.2
libpve-cluster-api-perl: 8.0.7
libpve-cluster-perl: 8.0.7
libpve-common-perl: 8.2.1
libpve-guest-common-perl: 5.1.3
libpve-http-server-perl: 5.1.0
libpve-network-perl: 0.9.8
libpve-rs-perl: 0.8.9
libpve-storage-perl: 8.2.2
libspice-server1: 0.15.1-1
lvm2: 2.03.16-2
lxc-pve: 6.0.0-1
lxcfs: 6.0.0-pve2
novnc-pve: 1.4.0-3
proxmox-backup-client: 3.2.4-1
proxmox-backup-file-restore: 3.2.4-1
proxmox-firewall: 0.4.2
proxmox-kernel-helper: 8.1.0
proxmox-mail-forward: 0.2.3
proxmox-mini-journalreader: 1.4.0
proxmox-widget-toolkit: 4.2.3
pve-cluster: 8.0.7
pve-container: 5.1.12
pve-docs: 8.2.2
pve-edk2-firmware: 4.2023.08-4
pve-esxi-import-tools: 0.7.1
pve-firewall: 5.0.7
pve-firmware: 3.12-1
pve-ha-manager: 4.0.5
pve-i18n: 3.2.2
pve-qemu-kvm: 8.1.5-6
pve-xtermjs: 5.3.0-3
qemu-server: 8.2.1
smartmontools: 7.3-pve1
spiceterm: 3.3.0
swtpm: 0.8.0+pve1
vncterm: 1.8.0
zfsutils-linux: 2.2.4-pve1
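To test the older 6.5 kernel that is already installed according to the list above, the boot default can be pinned. On systems managed by proxmox-boot-tool this might look like the following (the version string is taken from the output above):

```shell
# No-op on machines without the PVE boot tooling.
if command -v proxmox-boot-tool >/dev/null 2>&1; then
    proxmox-boot-tool kernel list               # installed kernels
    proxmox-boot-tool kernel pin 6.5.13-5-pve   # boot this one by default
    # then reboot; revert later with:
    #   proxmox-boot-tool kernel unpin
fi
```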
 

I investigated the problem further and ran journalctl -f in SSH sessions on both the sending server pve1 and the receiving PBS. I was able to catch the following logs before the server being backed up, pve1, went off the rails:

WebGUI (pve1):
Code:
unable to open file - Input/output error

Source (pve1):
Code:
root@pve1:~# journalctl -f
Aug 13 16:33:54 pve1 systemd[445828]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Aug 13 16:33:54 pve1 systemd[445828]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Aug 13 16:33:54 pve1 systemd[445828]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 13 16:33:54 pve1 systemd[445828]: Reached target sockets.target - Sockets.
Aug 13 16:33:54 pve1 systemd[445828]: Reached target basic.target - Basic System.
Aug 13 16:33:54 pve1 systemd[445828]: Reached target default.target - Main User Target.
Aug 13 16:33:54 pve1 systemd[1]: Started user@0.service - User Manager for UID 0.
Aug 13 16:33:54 pve1 systemd[445828]: Startup finished in 124ms.
Aug 13 16:33:54 pve1 systemd[1]: Started session-41.scope - Session 41 of User root.
Aug 13 16:33:54 pve1 sshd[445823]: pam_env(sshd:session): deprecated reading of user environment enabled
Aug 13 16:34:22 pve1 pvedaemon[983]: <root@pam> starting task UPID:pve1:0006CE0A:00B33195:66BB6EEE:vzdump::root@pam:
Aug 13 16:34:22 pve1 pvedaemon[445962]: INFO: starting new backup job: vzdump 302 201 202 --all 0 --mailto edv@sea-flutlicht.de --mailnotification always --mode snapshot --node pve1 --prune-backups 'keep-last=30' --notes-template '{{guestname}}' --storage pbs --fleecing 0
Aug 13 16:34:22 pve1 pvedaemon[445962]: INFO: Starting Backup of VM 201 (qemu)
Aug 13 16:35:33 pve1 pveproxy[417450]: worker exit
Aug 13 16:35:33 pve1 pveproxy[989]: worker 417450 finished
Aug 13 16:35:33 pve1 pveproxy[989]: starting 1 worker(s)
Aug 13 16:35:33 pve1 pveproxy[989]: worker 446246 started
Aug 13 16:40:30 pve1 pvedaemon[445962]: INFO: Finished Backup of VM 201 (00:06:08)
Aug 13 16:40:30 pve1 pvedaemon[445962]: INFO: Starting Backup of VM 202 (qemu)
Aug 13 16:44:01 pve1 pvedaemon[981]: <root@pam> successful auth for user 'root@pam'
Aug 13 16:45:01 pve1 pveproxy[441161]: worker exit
Aug 13 16:45:01 pve1 pveproxy[989]: worker 441161 finished
Aug 13 16:45:01 pve1 pveproxy[989]: starting 1 worker(s)
Aug 13 16:45:01 pve1 pveproxy[989]: worker 448478 started
Aug 13 16:47:39 pve1 pveproxy[446246]: worker exit
Aug 13 16:47:39 pve1 pveproxy[989]: worker 446246 finished
Aug 13 16:47:39 pve1 pveproxy[989]: starting 1 worker(s)
Aug 13 16:47:39 pve1 pveproxy[989]: worker 449084 started
Aug 13 16:48:12 pve1 pvedaemon[981]: <root@pam> successful auth for user 'root@pam'
Aug 13 16:48:38 pve1 smartd[621]: Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 52 to 54
Aug 13 16:50:40 pve1 pveproxy[444884]: worker exit
Aug 13 16:50:41 pve1 pveproxy[989]: worker 444884 finished
Aug 13 16:50:41 pve1 pveproxy[989]: starting 1 worker(s)
Aug 13 16:50:41 pve1 pveproxy[989]: worker 449761 started
Aug 13 16:51:55 pve1 pvedaemon[983]: worker exit
Aug 13 16:51:55 pve1 pvedaemon[980]: worker 983 finished
Aug 13 16:51:55 pve1 pvedaemon[980]: starting 1 worker(s)
Aug 13 16:51:55 pve1 pvedaemon[980]: worker 450040 started
Aug 13 16:53:46 pve1 pvedaemon[445962]: INFO: Finished Backup of VM 202 (00:13:16)
Aug 13 16:53:47 pve1 pvedaemon[445962]: INFO: Starting Backup of VM 302 (qemu)
Aug 13 16:56:29 pve1 pveproxy[989]: worker 448478 finished
Aug 13 16:56:29 pve1 pveproxy[989]: starting 1 worker(s)
Aug 13 16:56:29 pve1 pveproxy[989]: worker 451118 started
Aug 13 16:56:30 pve1 pveproxy[451117]: worker exit

Destination (pbs):
Code:
root@pbs:~# journalctl -f
Aug 13 16:49:18 pbs systemd[3656]: Listening on gpg-agent-extra.socket - GnuPG cryptographic agent and passphrase cache (restricted).
Aug 13 16:49:18 pbs systemd[3656]: Listening on gpg-agent-ssh.socket - GnuPG cryptographic agent (ssh-agent emulation).
Aug 13 16:49:18 pbs systemd[3656]: Listening on gpg-agent.socket - GnuPG cryptographic agent and passphrase cache.
Aug 13 16:49:18 pbs systemd[3656]: Reached target sockets.target - Sockets.
Aug 13 16:49:18 pbs systemd[3656]: Reached target basic.target - Basic System.
Aug 13 16:49:18 pbs systemd[1]: Started user@0.service - User Manager for UID 0.
Aug 13 16:49:18 pbs systemd[1]: Started session-54.scope - Session 54 of User root.
Aug 13 16:49:18 pbs systemd[3656]: Reached target default.target - Main User Target.
Aug 13 16:49:18 pbs systemd[3656]: Startup finished in 201ms.
Aug 13 16:49:18 pbs sshd[3646]: pam_env(sshd:session): deprecated reading of user environment enabled
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Upload statistics for 'drive-virtio0.img.fidx'
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: UUID: 9bb538a5bfbf49d799b459bff6d2b688
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Checksum: 81964937134951ea1f4c63735f8bff56687da14b64eff642ae53c9ab148ad37e
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Size: 158913789952
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Chunk count: 37888
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Upload size: 106245914624 (66%)
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Duplicates: 12557+30 (33%)
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: Compression: 62%
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: successfully closed fixed index 1
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: add blob "/mnt/datastore/pbs/vm/202/2024-08-13T14:40:30Z/index.json.blob" (329 bytes, comp: 329)
Aug 13 16:53:40 pbs proxmox-backup-proxy[781]: syncing filesystem
Aug 13 16:53:45 pbs proxmox-backup-proxy[781]: successfully finished backup
Aug 13 16:53:45 pbs proxmox-backup-proxy[781]: backup finished successfully
Aug 13 16:53:45 pbs proxmox-backup-proxy[781]: TASK OK
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: retention options: --keep-last 30
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: Starting prune on datastore 'pbs', root namespace group "vm/202"
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-03-29T21:04:17Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-04-11T20:01:51Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-04-19T20:01:18Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-04-26T20:00:16Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-03T20:00:14Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-08T20:06:00Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-09T20:06:06Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-10T20:00:12Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-14T20:06:12Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-15T20:00:15Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-05-21T16:33:57Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: vm/202/2024-08-13T14:40:30Z keep
Aug 13 16:53:46 pbs proxmox-backup-proxy[781]: TASK OK
Aug 13 16:53:47 pbs proxmox-backup-proxy[781]: Upload backup log to datastore 'pbs', root namespace vm/202/2024-08-13T14:40:30Z/client.log.blob
Aug 13 16:53:50 pbs proxmox-backup-proxy[781]: starting new backup on datastore 'pbs' from ::ffff:192.168.5.201: "vm/302/2024-08-13T14:53:47Z"
Aug 13 16:53:50 pbs proxmox-backup-proxy[781]: download 'index.json.blob' from previous backup.
Aug 13 16:53:50 pbs proxmox-backup-proxy[781]: register chunks in 'drive-virtio0.img.fidx' from previous backup.
Aug 13 16:53:50 pbs proxmox-backup-proxy[781]: download 'drive-virtio0.img.fidx' from previous backup.
Aug 13 16:53:50 pbs proxmox-backup-proxy[781]: created new fixed index 1 ("vm/302/2024-08-13T14:53:47Z/drive-virtio0.img.fidx")
Aug 13 16:53:50 pbs proxmox-backup-proxy[781]: add blob "/mnt/datastore/pbs/vm/302/2024-08-13T14:53:47Z/qemu-server.conf.blob" (442 bytes, comp: 442)
Aug 13 17:01:41 pbs proxmox-backup-proxy[781]: backup ended and finish failed: backup ended but finished flag is not set.
Aug 13 17:01:41 pbs proxmox-backup-proxy[781]: removing unfinished backup
Aug 13 17:01:41 pbs proxmox-backup-proxy[781]: TASK ERROR: backup ended but finished flag is not set.
Aug 13 17:01:41 pbs proxmox-backup-proxy[781]: removing backup snapshot "/mnt/datastore/pbs/vm/302/2024-08-13T14:53:47Z"
Aug 13 17:12:52 pbs proxmox-backup-proxy[781]: write rrd data back to disk
Aug 13 17:12:52 pbs proxmox-backup-proxy[781]: starting rrd data sync
Aug 13 17:12:52 pbs proxmox-backup-proxy[781]: rrd journal successfully committed (25 files in 0.031 seconds)
Aug 13 17:17:01 pbs CRON[4149]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Aug 13 17:17:01 pbs CRON[4150]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
Aug 13 17:17:01 pbs CRON[4149]: pam_unix(cron:session): session closed for user root
 