Search results

  1. migrations on a node restart

    Hello, I think a nice change would be for migrations to occur before the non-high-availability VMs are shut down.
  2. container backup fail

    I see the PCT restore that worked above was from a different backup, so I tested restoring the same backup, which failed on another node: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd7 Creating filesystem with 3670016 4k blocks and 917504 inodes...
  3. container backup fail

    The issue seems to be with the PVE host. I cannot restore a PCT backup; KVM restore works okay. Here is part of the output: recovering backed-up configuration from 'pbs-daily:backup/ct/604/2023-08-31T20:37:10Z' /dev/rbd0 The file...
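    A minimal CLI equivalent of that restore, in case it gives a clearer error than the GUI; the target storage name local-lvm below is a placeholder, adjust it to your setup:

        # restore CT 604 from the PBS archive onto an explicitly chosen storage
        pct restore 604 pbs-daily:backup/ct/604/2023-08-31T20:37:10Z --storage local-lvm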
  4. container backup fail

    Also, the error in the first post was with NFS storage; the same result occurs when using local storage. 604: 2023-10-15 02:10:49 INFO: Starting Backup of VM 604 (lxc) 604: 2023-10-15 02:10:49 INFO: status = running 604: 2023-10-15 02:10:49 INFO: CT Name: bc-sys4 604: 2023-10-15 02:10:49 INFO...
  5. container backup fail

    Note: the container is also backed up to a PBS server, and that works okay.
  6. container backup fail

    Hello, we have around 15 containers. Just one has backup failures, twice in the past 4 days. Here is more info from dmesg: [Sat Oct 14 08:44:46 2023] rbd: rbd1: capacity 15032385536 features 0x1d [Sat Oct 14 08:44:46 2023]...
  7. Monitoring ceph with Zabbix 6.4

    Thanks. What should be used for $PVE.URL.PORT? 443, or 8006, or something else?
  8. Monitoring ceph with Zabbix 6.4

    Hello, I have followed https://geekistheway.com/2022/12/31/monitoring-proxmox-ve-using-zabbix-agent/ and have that working. I am confused about how to get Ceph data into Zabbix. You seem to mention that the following needs to be set up: * Integrated Zabbix Proxmox Template Key...
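    On the port question above: the PVE API (pveproxy) listens on 8006, not 443. A minimal sketch of talking to it directly, assuming a host named pve11 and a monitoring API token, both of which are placeholders here:

        # query the PVE API on port 8006; -k skips verification of the self-signed certificate
        curl -k -H "Authorization: PVEAPIToken=monitor@pve!zabbix=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
            https://pve11:8006/api2/json/version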
  9. [SOLVED] 'ceph pg 55.0 query' not working

    Arron, thank you, I was able to edit the pool in the UI. Proxmox has made it so much easier to edit a pool than I remember from the old days!
  10. [SOLVED] 'ceph pg 55.0 query' not working

    Yes, years ago I had set up a Ceph rule like that, and we have since replaced the drives. Could you point me to documentation on changing the CRUSH map rule?
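    A rough sketch of the usual CLI route for that, assuming a replicated pool and that the new drives report a device class of ssd; the rule and pool names here are illustrative:

        # create a new replicated rule limited to the ssd device class
        ceph osd crush rule create-replicated replicated-ssd default host ssd
        # point the pool at the new rule (data rebalances afterwards)
        ceph osd pool set <poolname> crush_rule replicated-ssd
        # confirm the change
        ceph osd pool get <poolname> crush_rule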
  11. [SOLVED] 'ceph pg 55.0 query' not working

    # pveceph pool ls --noborder Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG Autoscale Target Ratio Crush Rule Name %-Used Used .mgr 3 2 1 1 on...
  12. [SOLVED] replacing an rpool disk , question on grub-install

    I think there was probably a glitch due to not following the documentation. I ended up using /usr/sbin/grub-install.real /dev/nvme0n1. I'll mark this closed.
  13. [SOLVED] 'ceph pg 55.0 query' not working

    So ceph health still has the original warning: # ceph -s cluster: id: 220b9a53-4556-48e3-a73c-28deff665e45 health: HEALTH_WARN Reduced data availability: 1 pg inactive services: mon: 3 daemons, quorum pve15,pve11,pve4 (age 6h) mgr: pve11(active, since 6h)...
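    A few read-only commands that help narrow down an inactive PG like this one; none of them change cluster state:

        # list the exact PGs behind the health warning
        ceph health detail
        # show stuck/inactive PGs with their state and acting OSDs
        ceph pg dump_stuck inactive
        # show which OSDs PG 55.0 currently maps to
        ceph pg map 55.0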
  14. re installing pve on a system with osd's

    We have 5 PVE hosts with 7 OSDs each. If for some reason I had to reinstall PVE on one of the nodes, is there a way to preserve the OSDs? The reinstall would be fast, and noout would be set beforehand. PS: I assume this: these days, with very reliable SSD or NVMe [having good DWPD] available, I do...
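    A rough outline of how that is usually handled, assuming the OSDs were created with ceph-volume (the PVE default) and the node rejoins the same cluster after the reinstall:

        # before the reinstall: keep CRUSH from rebalancing while the node is away
        ceph osd set noout

        # after reinstalling PVE, rejoining the cluster and reinstalling the ceph packages:
        # detect the existing LVM-backed OSDs on the local disks and start them again
        ceph-volume lvm activate --all

        # once all OSDs are back up and in
        ceph osd unset noout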
  15. [SOLVED] replacing an rpool disk , question on grub-install

    I am having trouble with the 2nd step. Here is the disk layout: # fdisk -l /dev/nvme0n1 Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors Disk model: Micron_7450_MTFDKBA960TFR Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes /...
  16. [SOLVED] 'ceph pg 55.0 query' not working

    I was able to delete .mgr using the PVE web page. After that the original warning went away: ceph -s cluster: id: 220b9a53-4556-48e3-a73c-28deff665e45 health: HEALTH_WARN 1 mgr modules have recently crashed services: mon: 3 daemons, quorum pve15,pve11,pve4 (age...
  17. [SOLVED] 'ceph pg 55.0 query' not working

    I was able to delete the .mgr pool from the PVE web page. Now I'll follow the link above on recreating the .mgr pool. Thank you for the help.
  18. [SOLVED] 'ceph pg 55.0 query' not working

    Trying to delete has this issue: # ceph pg 55.0 mark_unfound_lost delete Couldn't parse JSON : Expecting value: line 1 column 1 (char 0) Traceback (most recent call last): File "/usr/bin/ceph", line 1326, in <module> retval = main() File "/usr/bin/ceph", line 1246, in main sigdict =...
  19. [SOLVED] replacing an rpool disk , question on grub-install

    Hello, I replaced a disk in an rpool. Per my notes, the last step is to run this on the new disk: grub-install /dev/nvme0n1 However, that returned: grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real Is...
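    grub-install should indeed not be run directly on such systems; the ESPs are managed by proxmox-boot-tool. A minimal sketch of the documented replacement steps, assuming /dev/nvme0n1 is the new disk, /dev/nvme1n1 is the remaining healthy rpool disk, and partition 2 is the ESP (verify the layout first with lsblk or fdisk):

        # copy the partition layout from the healthy disk to the new one, then randomize its GUIDs
        sgdisk /dev/nvme1n1 -R /dev/nvme0n1
        sgdisk -G /dev/nvme0n1

        # resilver ZFS onto the new disk's third partition
        zpool replace -f rpool <old-part3> /dev/nvme0n1p3

        # format the new ESP, register it with proxmox-boot-tool, then check the result
        proxmox-boot-tool format /dev/nvme0n1p2
        proxmox-boot-tool init /dev/nvme0n1p2
        proxmox-boot-tool status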
