Hi fabian,
Thanks for your reply.
So this is the group content on the source datastore PBS01:
$ ls /mnt/datastore/PBS01_VM_BACKUP/vm/115/ -la
total 52
drwxr-xr-x 6 backup backup 7 Dec 17 13:16 .
drwxr-xr-x 76 backup backup 76 Nov 14 22:53 ..
drwxr-xr-x 2 backup backup 6 Nov 26 12:11...
Hi everyone,
I'm trying to configure a sync between 2 PBS servers (PBS01 and PBS02 as per the logs below). For some groups it works without any issues, but for others I get this error: create_locked_backup_group failed. Both PBS servers are at version 3.1-2.
Does anyone know what is causing this...
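For reference, a sync job of this kind is created along these lines (the job ID, remote name, and datastore names below are placeholders, not necessarily my exact ones):

# pull job: fetch backup groups from the remote PBS01 into the local datastore
$ proxmox-backup-manager sync-job create pull-pbs01 \
    --store PBS02_VM_BACKUP \
    --remote pbs01 \
    --remote-store PBS01_VM_BACKUP \
    --schedule daily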
Hi,
I've noticed that with the kvm64 CPU type (which was the default in PVE v7), CPU usage on some VMs on the affected node reached as much as 215%.
I knew that in PVE v8 the default CPU type is x86-64-v2-AES, so I've switched to that CPU type. Since then the CPU usage no longer exceeds 100% and...
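For anyone else switching types, the change can also be made per VM from the CLI (VM ID 115 is just an example):

# set the vCPU model for one VM; restart the VM for it to take effect
$ qm set 115 --cpu x86-64-v2-AES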
Hi everyone,
I have a problem with a rather new build where the Proxmox node randomly crashes due to kernel panics with soft lockup errors (see attached logs).
CPU: Intel(R) Core(TM) i9-13900K
MOBO: Gigabyte Z790 UD
RAM: 128 GiB of DDR5 memory
STORAGE:
- ZFS in RAID1 for OS based on 2x 256GiB...
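For anyone digging through similar logs, the lockup traces from the previous boot can be pulled out like this (the exact message text may vary between kernels):

# kernel messages from the boot before the crash, filtered around lockup reports
$ journalctl -k -b -1 | grep -i -A 20 'soft lockup'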
Hello everyone,
I currently manage a 19-node Proxmox cluster, and I believe that during the weekend one of the nodes failed during the weekly VM backup job. Due to that event, the corosync service lost sync at the cluster level, the nodes can't speak to each other (no quorum), and they have been...
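For anyone landing in the same situation, the quorum state can be checked per node like this:

# quorum status and membership as corosync sees it
$ pvecm status
$ pvecm nodes
# last resort on a single node, to make /etc/pve writable again (use with care):
# pvecm expected 1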
Hi Dunuin,
I was about to ask where the backups would end up if the ZFS pool is not mounted, but your later edit clarified this for me.
I will set that option in that case.
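For reference, if I understand the option correctly, it is set on the directory storage like this (the storage name is hypothetical):

# only activate the storage if the path is really a mountpoint,
# so backups cannot silently land on the root filesystem
$ pvesm set backup-dir --is_mountpoint yes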
Thank you,
Bogdan M.
Hi Hannes,
Sure. Here you go:
root@pve-node-18:/home/bogdan# ls -la /backup_node_18
total 10
drwxr-xr-x 3 root root 3 Aug 25 13:09 .
drwxr-xr-x 20 root root 26 Aug 27 14:51 ..
drwxr-xr-x 2 root root 2 Aug 25 13:09 dump
Regards,
Bogdan M
Hi everyone,
Has anybody bumped into this before? I have two unused 2TB SSDs which I would like to configure as ZFS in a RAID1 configuration. The problem is that I can't create the ZFS pool with a specific name. I've also tried swapping the disks with new ones but it makes no...
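For completeness, from the shell a mirror with an explicit pool name would be created roughly like this (the pool name and disk IDs are just examples):

# create a mirrored (RAID1) pool named "ssdpool" from the two SSDs;
# /dev/disk/by-id paths are stable across reboots, unlike /dev/sdX
$ zpool create ssdpool mirror \
    /dev/disk/by-id/ata-SSD1_SERIAL \
    /dev/disk/by-id/ata-SSD2_SERIAL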
Right! ... there was no communication between the two subnets. Initially I was under the false impression that there was, since otherwise those nodes should not have been able to communicate with the cluster owner in the first place and then join the cluster.
Either way, over the past weekend I've removed all nodes from...
Hi jsterr,
I'm trying to separate the management traffic from the cluster traffic. Management traffic for that node is on 10.100.50.28/24 and cluster traffic for that node is sent via 10.100.200.28/24. Basically, these are two Linux VLANs with different tags that rely on the same bridge/NIC (see...
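Roughly, the layout I'm describing maps to something like this in /etc/network/interfaces (the NIC name, VLAN tags, and gateway are illustrative; only the two addresses above are real):

# one NIC, one VLAN-aware bridge, two Linux VLAN interfaces on top of it
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# management traffic
auto vmbr0.50
iface vmbr0.50 inet static
    address 10.100.50.28/24
    gateway 10.100.50.1

# corosync / cluster traffic
auto vmbr0.200
iface vmbr0.200 inet static
    address 10.100.200.28/24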
Hi Maximiliano,
Thank you for your reply.
Sure. This is the corosync.conf file on my cluster owner, which is pve-node-13:
root@pve-node-13:/etc/pve/nodes# cat /etc/pve/corosync.conf
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: pve-node-01
    nodeid: 8...
Hi support / community members,
Not long ago I migrated from PVE v7 to v8. Everything seemed fine until a couple of days ago. Now only some of my 19 nodes are willing to communicate with the cluster owner and rejoin the cluster.
...
proxmox-ve: 8.0.1 (running kernel...
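For anyone hitting the same thing, the cluster services on an affected node can be inspected like this:

# check that corosync and the cluster filesystem are up, then tail their logs
$ systemctl status corosync pve-cluster
$ journalctl -u corosync -b --no-pager | tail -n 50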
Thanks for your response @leesteken
I'll try to move the VMs to a different node and run a long self-test as you've suggested.
The SMART values don't look that bad apart from that single Runtime_Bad_Block:
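For reference, the long self-test and the follow-up check would look like this (/dev/sda stands in for the actual disk):

# start the extended self-test; it runs in the background on the drive itself
$ smartctl -t long /dev/sda
# after the estimated runtime, review the result and the attribute table
$ smartctl -a /dev/sda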