OK, so these started again. I thought the updates and reboots after my last post had solved the issue.
This time the backup to PBS worked, but local vzdump failed.
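For reference, the failing local backup can be re-run by hand from the CLI to capture the full error output. The VMID, storage ID, and mode below are examples; substitute the ones from the failing job:

```shell
# Re-run the failing local backup manually to capture the full error
# output. VMID 902, storage "local", and snapshot mode are assumptions;
# use the values from your backup job.
vzdump 902 --storage local --mode snapshot --compress zstd
```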
# stat /dev/rbd-pve
File: /dev/rbd-pve
Size: 60 Blocks: 0 IO Block: 4096 directory
Device: 0,5 Inode: 2238...
Here is the qm config for a VM that had the issue:
# qm config 902
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
name: ldap-master2
net0: virtio=92:EC:4F:23:5C:37,bridge=vmbr3,tag=3
numa: 0
onboot: 1
ostype: l26
protection: 1
scsi0...
Hello,
we are hitting the rcu issue. 5 nodes, all systems using CPU type "80 x Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz (2 Sockets)".
How do I set -pcid ?
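If I understand the CPU flags syntax correctly, individual flags can be toggled per VM with qm set, using +FLAG to enable and -FLAG to disable, with multiple flags separated by semicolons. A sketch, where the VMID and the "host" CPU type are examples:

```shell
# Disable the pcid flag for VM 902 (VMID and CPU type "host" are examples;
# several flags would be chained like flags=-pcid;+aes).
qm set 902 --cpu host,flags=-pcid

# Verify the resulting config line
qm config 902 | grep ^cpu
```

The change should take effect after the VM is fully stopped and started again, not just rebooted from inside the guest.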
After rebooting the node I could restore the PCT:
recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z'
/dev/rbd0
Creating filesystem with 3670016 4k blocks and 917504 inodes
Filesystem UUID: 4ff60784-0624-452e-abdf-b21ba0f165a5
Superblock backups stored on...
I see the PCT restore that worked above was from a different backup, so I tested restoring the same backup that had failed on another node:
recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z'
/dev/rbd7
Creating filesystem with 3670016 4k blocks and 917504 inodes...
The issue seems to be with the PVE host.
I cannot restore a PCT backup; KVM restore works okay.
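For the record, the restore can also be attempted from the shell with pct restore, which sometimes gives more complete error output than the GUI. The target storage name here is an assumption:

```shell
# Restore container 604 from the PBS backup; the target storage
# ("rbd-pve") is an assumption, use whatever storage the CT should land on.
pct restore 604 pbs-daily:backup/ct/604/2023-08-31T20:37:10Z --storage rbd-pve
```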
here is part of the output:
recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z'
/dev/rbd0
The file...
Also, the error in the first post occurred while using NFS storage.
The same result occurs when using local storage.
604: 2023-10-15 02:10:49 INFO: Starting Backup of VM 604 (lxc)
604: 2023-10-15 02:10:49 INFO: status = running
604: 2023-10-15 02:10:49 INFO: CT Name: bc-sys4
604: 2023-10-15 02:10:49 INFO...
Hello
We have around 15 containers. Just one has had backup failures, twice in the past 4 days.
here is more info:
dmesg
[Sat Oct 14 08:44:46 2023] rbd: rbd1: capacity 15032385536 features 0x1d
[Sat Oct 14 08:44:46 2023]...
Hello
I have followed https://geekistheway.com/2022/12/31/monitoring-proxmox-ve-using-zabbix-agent/ and have that working.
I am confused about how to get Ceph data into Zabbix. You seem to mention that the following needs to be set up:
* Integrated Zabbix Proxmox Template
Key...
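A different route worth mentioning: Ceph's manager has a built-in zabbix module that pushes cluster metrics via zabbix_sender, separate from the agent-based template above. A sketch, where the server hostname and identifier are placeholders:

```shell
# Ship Ceph metrics via the mgr's built-in zabbix module.
# Requires zabbix_sender installed on the active mgr node.
ceph mgr module enable zabbix
ceph zabbix config-set zabbix_host zabbix.example.com   # placeholder hostname
ceph zabbix config-set identifier ceph-cluster          # placeholder identifier
# Trigger an immediate send to test the configuration
ceph zabbix send
```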
Yes, years ago I had set up a Ceph rule like that, and we have since replaced the drives.
Could you point me to documentation on changing the CRUSH map rule?
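As I understand it, the usual approach is to create a new replicated rule for the desired device class and then point the pool at it, rather than editing the old rule in place. The rule name, root, failure domain, and pool name below are all examples:

```shell
# Create a replicated rule restricted to SSD-class OSDs
# (rule name "replicated-ssd", root "default", and failure domain "host"
# are examples).
ceph osd crush rule create-replicated replicated-ssd default host ssd

# Point an existing pool at the new rule ("mypool" is a placeholder)
ceph osd pool set mypool crush_rule replicated-ssd

# Confirm the change
ceph osd pool get mypool crush_rule
```

Switching the rule will trigger data movement as PGs remap to the new device class, so it is worth doing during a quiet period.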
# pveceph pool ls --noborder
Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used
.mgr 3 2 1 1 on...
I think there was probably a glitch due to not following the documentation. I ended up using /usr/sbin/grub-install.real /dev/nvme0n1 .
I'll mark this closed.
So ceph health still has the original warning:
# ceph -s
  cluster:
    id:     220b9a53-4556-48e3-a73c-28deff665e45
    health: HEALTH_WARN
            Reduced data availability: 1 pg inactive

  services:
    mon: 3 daemons, quorum pve15,pve11,pve4 (age 6h)
    mgr: pve11(active, since 6h)...
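To narrow down which PG is inactive and why, the standard Ceph tooling can be used:

```shell
# Identify the inactive PG and its state
ceph health detail
ceph pg dump_stuck inactive

# Then query the specific PG reported above, e.g.:
# ceph pg <pgid> query
```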
We have 5 PVE hosts with 7 OSDs each.
If for some reason I had to reinstall PVE on one of the nodes, is there a way to preserve the OSDs? The reinstall would be fast, and noout would be set beforehand.
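For what it's worth, since ceph-volume OSDs keep their metadata in LVM tags on the OSD disks, a sketch of the recovery path might look like this, assuming the OSD disks are left untouched during the reinstall and the node has rejoined the cluster with the ceph packages and keyrings back in place:

```shell
# Before taking the node down
ceph osd set noout

# ... reinstall PVE on the system disk only, rejoin the cluster,
#     reinstall ceph and restore the keyrings ...

# Rescan the LVM metadata on the untouched OSD disks and start the OSDs
ceph-volume lvm activate --all

# Once the OSDs are back up and in
ceph osd unset noout
```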
PS:
I assume this:
these days with very reliable ssd or nvme [ having good DWPD ] available I do...