I see the PCT restore that worked above was from a different backup, so I tested restoring the same backup, which failed on another node:
recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z'
/dev/rbd7
Creating filesystem with 3670016 4k blocks and 917504 inodes...
The issue seems to be with the PVE host.
I cannot restore a PCT backup. A KVM restore works okay.
Here is part of the output:
recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z'
/dev/rbd0
The file...
Also, the error in the 1st post was using NFS storage.
The same result occurs when using local storage.
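For reference, the same restore can be reproduced from the CLI, which sometimes gives more verbose output than the GUI task log. A minimal sketch, assuming a target storage name (local-lvm here is a placeholder, not taken from the logs above):

pct restore 604 pbs-daily:backup/ct/604/2023-08-31T20:37:10Z --storage local-lvm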
604: 2023-10-15 02:10:49 INFO: Starting Backup of VM 604 (lxc)
604: 2023-10-15 02:10:49 INFO: status = running
604: 2023-10-15 02:10:49 INFO: CT Name: bc-sys4
604: 2023-10-15 02:10:49 INFO...
Hello
We have around 15 containers. Just one has backup failures: twice in the past 4 days.
Here is more info:
dmesg
[Sat Oct 14 08:44:46 2023] rbd: rbd1: capacity 15032385536 features 0x1d
[Sat Oct 14 08:44:46 2023]...
Hello
I have followed https://geekistheway.com/2022/12/31/monitoring-proxmox-ve-using-zabbix-agent/ and have that working.
I am confused about how to get Ceph data to Zabbix. You seem to mention that the following needs to be set up:
* Integrated Zabbix Proxmox Template
Key...
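For what it's worth, one generic way to get Ceph data into Zabbix, independent of any particular template, is a custom agent item. A minimal sketch, assuming the Zabbix agent on the PVE host is permitted to read the Ceph admin keyring (that permission, the file path and the item key are assumptions, not something from the blog post above):

# /etc/zabbix/zabbix_agentd.d/ceph.conf
# expose the full cluster status as JSON; the template side can then
# split out individual metrics with dependent items / JSONPath
UserParameter=ceph.status,ceph -s -f json

After restarting the agent, zabbix_get -s <pve-host> -k ceph.status should return the JSON blob.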
Yes, years ago I had set up a Ceph CRUSH rule like that, and we have since replaced the drives.
Could you point me to documentation on changing the CRUSH map rule?
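For the record, the usual pattern (the rule name and device class below are placeholders, not values from this cluster) is to create a new replicated rule and then point each pool at it:

# create a rule restricted to one device class, with host as the failure domain
ceph osd crush rule create-replicated replicated_nvme default host nvme
# switch a pool over to the new rule
ceph osd pool set <poolname> crush_rule replicated_nvme

The CRUSH maps chapter of the Ceph documentation covers the rule syntax; on current PVE versions, pveceph pool set with the crush_rule option should achieve the same pool change, if I am not mistaken.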
# pveceph pool ls --noborder
Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used
.mgr 3 2 1 1 on...
I think there was probably a glitch due to not following the documentation. I ended up using /usr/sbin/grub-install.real /dev/nvme0n1.
I'll mark this closed.
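For anyone finding this later: on a system booted via proxmox-boot-tool, the documented route after swapping a boot disk is to (re)initialise the new ESP rather than calling grub-install directly. A minimal sketch, assuming the new disk's ESP is partition 2 (the partition number is an assumption, check with lsblk first):

# format the new ESP and register it with proxmox-boot-tool
proxmox-boot-tool format /dev/nvme0n1p2
proxmox-boot-tool init /dev/nvme0n1p2
# confirm the new ESP is now listed
proxmox-boot-tool status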
So Ceph health still has the original warning:
# ceph -s
cluster:
id: 220b9a53-4556-48e3-a73c-28deff665e45
health: HEALTH_WARN
Reduced data availability: 1 pg inactive
services:
mon: 3 daemons, quorum pve15,pve11,pve4 (age 6h)
mgr: pve11(active, since 6h)...
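In case it helps narrow this down, the usual next step for an inactive PG (command names only, no cluster-specific values assumed) is:

# show which PG is inactive and why
ceph health detail
# list PGs stuck in the inactive state together with their acting OSDs
ceph pg dump_stuck inactive

The output normally points at the affected pool; for a freshly created pool it is often a PG that never managed to peer.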
We have 5 PVE hosts with 7 OSDs each.
If for some reason I had to reinstall PVE on one of the nodes, is there a way to preserve the OSDs? The reinstall would be fast, and noout would be set beforehand.
PS:
I assume this:
these days, with very reliable SSD or NVMe [having good DWPD] available, I do...
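On preserving the OSDs: the OSD data and metadata live on the OSD disks themselves as LVM volumes created by ceph-volume, so a reinstall of the node normally only loses the host-side configuration. A minimal sketch of the reactivation step after reinstalling and re-joining the node (assuming the OSD disks were left untouched and /etc/ceph plus the bootstrap keyrings are back in place):

# scan the existing LVM-based OSDs and start them again
ceph-volume lvm activate --all

This is a sketch under those assumptions rather than a tested procedure; if activation succeeds, the ceph-osd services come back with their original OSD IDs.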
I am having trouble with the 2nd step.
Here is the disk layout:
# fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
Disk model: Micron_7450_MTFDKBA960TFR
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes /...
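If the 2nd step here is the partition-table copy from the PVE ZFS boot-disk replacement procedure (an assumption on my part; the healthy source disk below is a placeholder), the documented commands look like:

# copy the partition table from the still-healthy boot disk to the new one
sgdisk /dev/<healthy_disk> -R /dev/nvme0n1
# give the new disk fresh, unique GUIDs
sgdisk -G /dev/nvme0n1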
I was able to delete .mgr using the PVE web page.
After that, the original warning went away:
ceph -s
cluster:
id: 220b9a53-4556-48e3-a73c-28deff665e45
health: HEALTH_WARN
1 mgr modules have recently crashed
services:
mon: 3 daemons, quorum pve15,pve11,pve4 (age...
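The remaining HEALTH_WARN about a recently crashed mgr module can usually be inspected and then cleared with the crash module (no cluster-specific values assumed):

# list recent crash reports and inspect the relevant one
ceph crash ls
ceph crash info <crash-id>
# once reviewed, archive them so the warning clears
ceph crash archive-all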
Hello
I replaced a disk in an rpool. Per my notes, the last step is to run this on the new disk:
grub-install /dev/nvme0n1
However, that returned:
grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real
Is...