Recent content by mocanub

  1. [SOLVED] Error during PBS sync - create_locked_backup_group failed

    Hi fabian, Thanks for your reply. So this is the group content on the source datastore PBS01: $ ls /mnt/datastore/PBS01_VM_BACKUP/vm/115/ -la total 52 drwxr-xr-x 6 backup backup 7 Dec 17 13:16 . drwxr-xr-x 76 backup backup 76 Nov 14 22:53 .. drwxr-xr-x 2 backup backup 6 Nov 26 12:11...
  2. [SOLVED] Error during PBS sync - create_locked_backup_group failed

    Hi everyone, I'm trying to configure a sync between two PBS servers (PBS01 and PBS02, as per the logs below). For some groups it works without any issues, but for others I get the error create_locked_backup_group failed. Both PBS servers are at version 3.1-2. Does anyone know what is causing this... (a hedged sync-job sketch follows this list.)
  3. node crashing due to soft lockup error - CPU stuck for seconds

    Hi, I've noticed that with the kvm64 CPU type (which was the default in PVE v7), CPU usage on some VMs on the affected node reached as much as 215%. I knew that in PVE v8 the default CPU type is x86-64-v2-AES, so I've switched to that CPU type. Since then the CPU usage no longer exceeds 100% and... (a short sketch of the CPU type change follows this list.)
  4. node crashing due to soft lockup error - CPU stuck for seconds

    Hi everyone, I have a problem with a rather new build where the Proxmox node randomly crashes due to kernel panics with soft lockup errors (see attached logs). CPU: Intel(R) Core(TM) i9-13900K MOBO: Gigabyte Z790 UD RAM: 128 GiB of DDR5 memory STORAGE: - ZFS in RAID1 for OS based on 2x 256GiB...
  5. issues getting corosync to work for the entire cluster

    Hello everyone, I currently manage a 19-node Proxmox cluster, and I believe that one of the nodes failed over the weekend during the weekly VM backup job. Due to that event, the corosync service lost sync at the cluster level, the nodes can't speak to each other (no quorum), and they have been... (a sketch of basic quorum checks follows this list.)
  6. [SOLVED] unable to create zfs pool - mountpoint exists and is not empty

    Hi Dunuin, I was about to ask where the backups will end up if the zfs pool is not mounted. But your later edit clarified this for me. I will set that option in that case. Thank you, Bogdan M.
  7. [SOLVED] unable to create zfs pool - mountpoint exists and is not empty

    Thanks, Hannes, for pointing that out to me. Regards, Bogdan
  8. [SOLVED] unable to create zfs pool - mountpoint exists and is not empty

    Hi Hannes, Sure. Here you go: root@pve-node-18:/home/bogdan# ls -la /backup_node_18 total 10 drwxr-xr-x 3 root root 3 Aug 25 13:09 . drwxr-xr-x 20 root root 26 Aug 27 14:51 .. drwxr-xr-x 2 root root 2 Aug 25 13:09 dump Regards, Bogdan M
  9. [SOLVED] unable to create zfs pool - mountpoint exists and is not empty

    Hi everyone, Has anybody bumped into this before? I have two unused 2TB SSDs which I would like to configure as ZFS in a RAID1 configuration. The problem is that I can't create the ZFS pool with a specific name. I've also tried swapping the disks for new ones, but it makes no... (a hedged zpool create sketch follows this list.)
  10. [SOLVED] can't convince a part of my nodes to communicate with cluster owner

    Right! ... there was no communication between the two subnets. I initially had the false impression that there was; otherwise those nodes would not have been able to communicate with the cluster owner in the first place and then join the cluster. Either way, over the past weekend I've removed all nodes from...
  11. [SOLVED] can't convince a part of my nodes to communicate with cluster owner

    Hi jsterr, I'm trying to separate the management traffic from the cluster traffic. Management traffic for that node is on 10.100.50.28/24 and cluster traffic for that node is sent via 10.100.200.28/24. Basically, these are two Linux VLANs with different tags which rely on the same bridge / NIC (see... (a hedged interfaces sketch follows this list.)
  12. [SOLVED] can't convince a part of my nodes to communicate with cluster owner

    Hi Maximiliano, Thank you for your reply. Sure. This is the corosync.conf file from my cluster owner, which is pve-node-13: root@pve-node-13:/etc/pve/nodes# cat /etc/pve/corosync.conf logging { debug: off to_syslog: yes } nodelist { node { name: pve-node-01 nodeid: 8...
  13. [SOLVED] can't convince a part of my nodes to communicate with cluster owner

    Hi support / community members, Not long ago I migrated from PVE v7 to v8. Everything seemed fine until a couple of days ago. Now only a part of my 19 nodes are willing to communicate with the cluster owner and rejoin the cluster. ... proxmox-ve: 8.0.1 (running kernel...
  14. Proxmox node crashing occasionally

    Thanks for your response @leesteken. I'll try to move the VMs to a different node and run a long self-test as you've suggested (a hedged smartctl sketch follows this list). The SMART values don't look that bad apart from that single Runtime_Bad_Block:
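
For the PBS sync thread (item 2): a minimal sketch of how a pull sync job between two PBS servers can be set up from the CLI, assuming the job is created on the pulling side and that the remote name, host address, fingerprint, credentials and datastore names below are placeholders rather than the thread's actual values.

    # Register the other PBS server as a remote on the pulling side (assumed here to be PBS02).
    proxmox-backup-manager remote create pbs01 \
        --host 192.0.2.10 \
        --auth-id 'sync@pbs' \
        --fingerprint '64:d3:...:2c' \
        --password 'SECRET'

    # Create a pull sync job copying groups from the remote datastore into a local one.
    proxmox-backup-manager sync-job create pbs01-pull \
        --remote pbs01 \
        --remote-store PBS01_VM_BACKUP \
        --store PBS02_VM_BACKUP \
        --schedule 'daily'

    # Confirm the job was created.
    proxmox-backup-manager sync-job list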
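
For the soft-lockup thread (item 3): switching a VM from kvm64 to the PVE 8 default CPU type can be done per VM from the CLI. The VMID 100 is a placeholder, and the change only takes effect after the guest is fully stopped and started again.

    # Show the VM's current CPU type (placeholder VMID).
    qm config 100 | grep '^cpu:'

    # Set the CPU type to the PVE 8 default; restart the guest afterwards.
    qm set 100 --cpu x86-64-v2-AES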
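
For the corosync/quorum thread (item 5): a few generic diagnostic commands that can be run on an affected node. They are not the thread's resolution, just the usual first checks for a cluster that has lost quorum.

    # Quorum and membership overview (expected vs. total votes).
    pvecm status

    # Link status of the local corosync rings.
    corosync-cfgtool -s

    # Recent corosync log entries from the current boot.
    journalctl -u corosync -b --no-pager | tail -n 50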
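
For the "mountpoint exists and is not empty" thread (item 9): zpool create refuses to use a default mountpoint directory that already contains files, which matches the error in the title. A sketch under the assumption that the pool name and disk IDs are placeholders; -m points the pool at an alternative mountpoint instead of clearing the existing directory.

    # Inspect the directory that is blocking the default mountpoint.
    ls -la /backup_node_18

    # Create a mirrored (RAID1) pool on the two SSDs, using a different mountpoint via -m.
    zpool create -o ashift=12 -m /backup_node_18_zfs backup_node_18 mirror \
        /dev/disk/by-id/ata-EXAMPLE_SSD_1 /dev/disk/by-id/ata-EXAMPLE_SSD_2

    # Verify the mirror came up.
    zpool status backup_node_18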
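
For the management/cluster traffic thread (item 11): a sketch of an /etc/network/interfaces fragment with two Linux VLANs on one VLAN-aware bridge, using the addresses quoted in the thread; the NIC name, the VLAN tags (inferred from the subnets) and the gateway are assumptions.

    auto eno1
    iface eno1 inet manual

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # Management traffic (assumed VLAN tag 50)
    auto vmbr0.50
    iface vmbr0.50 inet static
        address 10.100.50.28/24
        gateway 10.100.50.1

    # Corosync / cluster traffic (assumed VLAN tag 200)
    auto vmbr0.200
    iface vmbr0.200 inet static
        address 10.100.200.28/24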
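
For the occasional-crash thread (item 14): the long self-test mentioned in the reply can be started and checked with smartctl; /dev/sdX is a placeholder for the suspect disk.

    # Start an extended (long) offline self-test; it runs in the background on the drive.
    smartctl -t long /dev/sdX

    # Show drive capabilities, including the estimated polling time for the extended test.
    smartctl -c /dev/sdX

    # Once finished, review the self-test log and the SMART attributes (e.g. Runtime_Bad_Block).
    smartctl -l selftest /dev/sdX
    smartctl -A /dev/sdX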
