Search results

  1. container backup fail

    Also, the first post's error was with NFS storage; the same result occurs when using local storage.
    604: 2023-10-15 02:10:49 INFO: Starting Backup of VM 604 (lxc)
    604: 2023-10-15 02:10:49 INFO: status = running
    604: 2023-10-15 02:10:49 INFO: CT Name: bc-sys4
    604: 2023-10-15 02:10:49 INFO...
  2. container backup fail

    Note: the container is also backed up to a PBS server, and that works okay.
  3. container backup fail

    Hello. We have around 15 containers, and just one has backup failures - twice in the past 4 days. Here is more info:
    dmesg
    [Sat Oct 14 08:44:46 2023] rbd: rbd1: capacity 15032385536 features 0x1d
    [Sat Oct 14 08:44:46 2023]...
  4. Monitoring ceph with Zabbix 6.4

    Thanks. What should be used for $PVE.URL.PORT? 443, or 8006, or something else? (See the API port sketch after this list.)
  5. Monitoring ceph with Zabbix 6.4

    Hello, I have followed https://geekistheway.com/2022/12/31/monitoring-proxmox-ve-using-zabbix-agent/ and have that working. I am confused about how to get ceph data into Zabbix. You seem to mention that the following needs to be set up: * Integrated Zabbix Proxmox Template Key...
  6. [SOLVED] 'ceph pg 55.0 query' not working

    Arron - thank you, I was able to edit the pool in the UI. Proxmox has made it so much easier to edit a pool than I remember from the old days!
  7. [SOLVED] 'ceph pg 55.0 query' not working

    Yes, years ago I had set up a ceph rule like that, and we have since replaced the drives. Could you point me to documentation on changing the CRUSH map rule?
  8. [SOLVED] 'ceph pg 55.0 query' not working

    # pveceph pool ls --noborder
    Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used
    .mgr 3 2 1 1 on...
  9. [SOLVED] replacing an rpool disk, question on grub-install

    I think there was probably a glitch due to not following the documentation. I ended up using /usr/sbin/grub-install.real /dev/nvme0n1. I'll mark this closed. (See the proxmox-boot-tool sketch after this list.)
  10. [SOLVED] 'ceph pg 55.0 query' not working

    So ceph health still has the original warning:
    # ceph -s
      cluster:
        id:     220b9a53-4556-48e3-a73c-28deff665e45
        health: HEALTH_WARN
                Reduced data availability: 1 pg inactive
      services:
        mon: 3 daemons, quorum pve15,pve11,pve4 (age 6h)
        mgr: pve11(active, since 6h)...
  11. re-installing pve on a system with OSDs

    We have 5 pve hosts with 7 OSDs each. If for some reason I had to reinstall pve on one of the nodes, is there a way to preserve the OSDs (see the re-adoption sketch after this list)? The reinstall would be fast, and noout would be set beforehand. PS: I assume this: these days, with very reliable SSD or NVMe [having good DWPD] available, I do...
  12. [SOLVED] replacing an rpool disk, question on grub-install

    I am having trouble with the 2nd step. Here is the disk layout:
    # fdisk -l /dev/nvme0n1
    Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
    Disk model: Micron_7450_MTFDKBA960TFR
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes /...
  13. [SOLVED] 'ceph pg 55.0 query' not working

    I was able to delete .mgr using the pve web page. After that, the original warning went away (see the repair sketch after this list):
    ceph -s
      cluster:
        id:     220b9a53-4556-48e3-a73c-28deff665e45
        health: HEALTH_WARN
                1 mgr modules have recently crashed
      services:
        mon: 3 daemons, quorum pve15,pve11,pve4 (age...
  14. [SOLVED] 'ceph pg 55.0 query' not working

    I was able to delete the .mgr pool from the pve web page. Now I'll follow the link above on recreating the .mgr pool. Thank you for the help.
  15. [SOLVED] 'ceph pg 55.0 query' not working

    Trying to delete hits this issue:
    # ceph pg 55.0 mark_unfound_lost delete
    Couldn't parse JSON : Expecting value: line 1 column 1 (char 0)
    Traceback (most recent call last):
      File "/usr/bin/ceph", line 1326, in <module>
        retval = main()
      File "/usr/bin/ceph", line 1246, in main
        sigdict =...
  16. [SOLVED] replacing an rpool disk, question on grub-install

    Hello, I replaced a disk in an rpool. Per my notes, the last step is to run this on the new disk:
    grub-install /dev/nvme0n1
    However, that returned:
    grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real
    Is...
  17. [SOLVED] 'ceph pg 55.0 query' not working

    I did not create the .mgr pool; I suspect it got created during the upgrade. There was something in the release notes about mgr ....
  18. [SOLVED] 'ceph pg 55.0 query' not working

    # ceph osd pool ls detail
    pool 48 'nvme-4tb' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 300736 lfor 0/294914/294912 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
    pool 55 '.mgr' replicated size 3 min_size...
  19. [SOLVED] 'ceph pg 55.0 query' not working

    I did not replace the disks; the only thing that changed was the ceph software upgrade. We get emails from Zabbix on ceph issues, and we only got an email around the time of the upgrade. Also, I looked at the ceph section of the pve web page before upgrading and all was OK.
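
Notes on the threads above

On the $PVE.URL.PORT question (result 4): the Proxmox VE REST API listens on port 8006, so 8006 is the usual value for that macro; 443 would only apply behind a reverse proxy. A minimal way to confirm the port answers, using a PVE API token - the token id 'zabbix@pve!monitoring', its secret, and the hostname are placeholders, not anything from the threads:

    # Query the PVE API directly on 8006 (self-signed cert, hence -k).
    curl -k -H "Authorization: PVEAPIToken=zabbix@pve!monitoring=<secret>" \
        https://pve11:8006/api2/json/version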
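On the '.mgr' pg repair (results 6-19): the thread resolves in pieces, so here is a hedged consolidation. Pool id 55, the cluster id, and the hostname pve11 come from the quoted output; the CLI pool removal and mgr restart are assumptions standing in for the web-UI steps the poster actually used:

    ceph -s                    # HEALTH_WARN: Reduced data availability: 1 pg inactive
    ceph osd pool ls detail    # maps the stuck pg 55.0 to pool 55 '.mgr'
    ceph pg 55.0 query         # fails while the pg is inactive

    # Assumed CLI equivalent of the web-UI pool deletion
    # (requires mon_allow_pool_delete=true):
    ceph osd pool rm .mgr .mgr --yes-i-really-really-mean-it

    # Restarting the active mgr (pve11 here) lets it recreate the .mgr pool:
    systemctl restart ceph-mgr@pve11.service

    # Review, then acknowledge the leftover "1 mgr modules have
    # recently crashed" warning:
    ceph crash ls
    ceph crash archive-all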
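On the rpool disk replacement (results 9, 12, 16): the quoted message is expected on systems booted via proxmox-boot-tool, and the documented flow avoids grub-install entirely. A sketch assuming the standard PVE ZFS layout (ESP on partition 2, pool on partition 3); the device names are illustrative, not from the thread:

    # Copy the partition table from a healthy mirror member, then
    # randomize the GUIDs on the new disk:
    sgdisk /dev/nvme1n1 -R /dev/nvme0n1
    sgdisk -G /dev/nvme0n1

    # Replace the failed vdev with the new disk's ZFS partition:
    zpool replace -f rpool <old-device> /dev/nvme0n1p3

    # Instead of grub-install, hand the new ESP to proxmox-boot-tool
    # so it stays synced across kernel updates:
    proxmox-boot-tool format /dev/nvme0n1p2
    proxmox-boot-tool init /dev/nvme0n1p2
    proxmox-boot-tool status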
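On preserving OSDs across a reinstall (result 11): if the OSD disks are untouched, ceph-volume can normally re-adopt them once the reinstalled node has rejoined the cluster and has its ceph.conf and keyrings back. A hedged sketch of that sequence:

    # Before taking the node down, keep the cluster from rebalancing:
    ceph osd set noout

    # ... reinstall PVE on the system disk only, rejoin the cluster,
    # and reinstall the ceph packages ...

    # ceph-volume reads the LVM tags on the intact OSD volumes and
    # starts the matching ceph-osd services:
    ceph-volume lvm activate --all

    # Once all OSDs are back up and in:
    ceph osd unset noout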