Search results

  1. [SOLVED] 'ceph pg 55.0 query' not working

    So ceph health still has the original warning:

        # ceph -s
          cluster:
            id:     220b9a53-4556-48e3-a73c-28deff665e45
            health: HEALTH_WARN
                    Reduced data availability: 1 pg inactive
          services:
            mon: 3 daemons, quorum pve15,pve11,pve4 (age 6h)
            mgr: pve11(active, since 6h)...
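
    A minimal triage sketch for an inactive pg, using standard ceph CLI (the pg id 55.0 is the one from this thread):

        # show which pgs are unhealthy and why
        ceph health detail
        # list pgs stuck in the inactive state
        ceph pg dump_stuck inactive
        # interrogate the problem pg directly (this is the command that fails in this thread)
        ceph pg 55.0 query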
  2. re installing pve on a system with osd's

    We have 5 PVE hosts with 7 OSDs each. If for some reason I had to reinstall PVE on one of the nodes, is there a way to preserve the OSDs? The reinstall would be fast, and noout would be set beforehand. PS: I assume this: these days, with very reliable SSD or NVMe [having good DWPD] available, I do...
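
    A rough sketch of the usual sequence for keeping ceph-volume (LVM) OSDs across a reinstall; this is an assumption based on standard Ceph tooling, not something tested in this thread:

        # before the reinstall: keep the cluster from rebalancing
        ceph osd set noout
        # after reinstalling PVE and rejoining the node to the cluster,
        # the OSD data and metadata still live on the disks; reactivate them
        ceph-volume lvm activate --all
        # once all OSDs are back up and in
        ceph osd unset noout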
  3. [SOLVED] replacing an rpool disk , question on grub-install

    I am having trouble with the 2nd step. Here is the disk layout:

        # fdisk -l /dev/nvme0n1
        Disk /dev/nvme0n1: 894.25 GiB, 960197124096 bytes, 1875385008 sectors
        Disk model: Micron_7450_MTFDKBA960TFR
        Units: sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes /...
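
    For context, the usual rpool disk-replacement sequence on PVE looks roughly like this (device names are placeholders; partition 3 as the ZFS partition matches the default PVE layout, and the boot-loader part is sketched under result 7 below):

        # copy the partition table from the healthy mirror member, then randomize GUIDs
        sgdisk /dev/<healthy-disk> -R /dev/nvme0n1
        sgdisk -G /dev/nvme0n1
        # swap the failed member for the new disk's ZFS partition
        zpool replace -f rpool <old-part3> /dev/nvme0n1p3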
  4. [SOLVED] 'ceph pg 55.0 query' not working

    I was able to delete .mgr using the PVE web page. After that, the original warning went away:

        ceph -s
          cluster:
            id:     220b9a53-4556-48e3-a73c-28deff665e45
            health: HEALTH_WARN
                    1 mgr modules have recently crashed
          services:
            mon: 3 daemons, quorum pve15,pve11,pve4 (age...
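
    The leftover '1 mgr modules have recently crashed' warning is normally cleared by acknowledging the crash reports (the crash id is a placeholder):

        # list recent crash reports and inspect one
        ceph crash ls
        ceph crash info <crash-id>
        # archive them all so the health warning goes away
        ceph crash archive-all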
  5. [SOLVED] 'ceph pg 55.0 query' not working

    I was able to delete the .mgr pool from the PVE web page. Now I'll follow the link above on recreating the .mgr pool. Thank you for the help.
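
    For the record, the .mgr pool is created automatically by the active mgr, so a sketch of recreating it is simply forcing a mgr restart or failover (pve11 is the active mgr named earlier in this thread):

        # hand the active role to a standby; the new active mgr recreates .mgr on demand
        ceph mgr fail pve11
        # alternatively, restart the mgr daemon on its node
        systemctl restart ceph-mgr@pve11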
  6. [SOLVED] 'ceph pg 55.0 query' not working

    Trying to delete it hits this issue:

        # ceph pg 55.0 mark_unfound_lost delete
        Couldn't parse JSON : Expecting value: line 1 column 1 (char 0)
        Traceback (most recent call last):
          File "/usr/bin/ceph", line 1326, in <module>
            retval = main()
          File "/usr/bin/ceph", line 1246, in main
            sigdict =...
  7. [SOLVED] replacing an rpool disk , question on grub-install

    Hello, I replaced a disk in an rpool. Per my notes, the last step is to run this on the new disk:

        grub-install /dev/nvme0n1

    However, that returned:

        grub-install is disabled because this system is booted via proxmox-boot-tool, if you really need to run it, run /usr/sbin/grub-install.real

    Is...
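
    The proxmox-boot-tool equivalent of that grub-install step looks roughly like this; treating partition 2 as the ESP is an assumption based on the default PVE layout:

        # show which ESPs proxmox-boot-tool currently manages
        proxmox-boot-tool status
        # format the new disk's ESP and register it for kernel/bootloader syncing
        proxmox-boot-tool format /dev/nvme0n1p2
        proxmox-boot-tool init /dev/nvme0n1p2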
  8. [SOLVED] 'ceph pg 55.0 query' not working

    I did not create the .mgr pool. I suspect it got created during the upgrade; there was something in the release notes about mgr ....
  9. [SOLVED] 'ceph pg 55.0 query' not working

    # ceph osd pool ls detail
    pool 48 'nvme-4tb' replicated size 3 min_size 2 crush_rule 4 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 300736 lfor 0/294914/294912 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
    pool 55 '.mgr' replicated size 3 min_size...
  10. [SOLVED] 'ceph pg 55.0 query' not working

    I did not replace the disks; the only thing that changed was upgrading the Ceph software. We get emails from Zabbix on Ceph issues, and we only got an email around the time of the upgrade. Also, I looked at the Ceph section of the PVE web page before upgrading and all was OK.
  11. [SOLVED] 'ceph pg 55.0 query' not working

    # ceph pg dump_stuck
    PG_STAT  STATE                                             UP         UP_PRIMARY  ACTING  ACTING_PRIMARY
    48.7e    active+undersized+degraded+remapped+backfilling   [1,24,19]  1           [1,24]  1
    48.7d    active+undersized+degraded+remapped+backfilling...
  12. [SOLVED] 'ceph pg 55.0 query' not working

    I have not changed any crush rules in the last 5 years.
  13. [SOLVED] 'ceph pg 55.0 query' not working

    Hello, today I upgraded to the latest Ceph. After the first node was upgraded, I noticed an inactive pg warning. I continued and finished the upgrade, hoping the inactive pg would be fixed once the upgrade was complete. But that was not the case. Following info from...
  14. Schedule DAILY restore from PBS possible to standby mode? - need advices.. !

    Have you looked at rsnapshot? It is good for data snapshots, like hourly, daily, weekly, etc. I am not sure if it will do what you want for the system.
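
    A minimal rsnapshot sketch of that hourly/daily/weekly idea (paths and retention counts are made up; rsnapshot.conf fields must be TAB-separated):

        # /etc/rsnapshot.conf
        snapshot_root   /srv/rsnapshot/
        retain  hourly  6
        retain  daily   7
        retain  weekly  4
        backup  /etc/   localhost/
        # then drive it from cron, e.g.:
        #   0 */4 * * *  root  /usr/bin/rsnapshot hourly
        #   30 3 * * *   root  /usr/bin/rsnapshot daily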
  15. log rotation

    For some reason, on PVE, Nextcloud, and other systems, log rotation came to a halt in November. /var/log had over 7G; /var/log/journal had 4G. I think it has something to do with a conflict between rsyslog and the systemd journal. Will check later to fix. Edit: /etc/cron.daily/logrotate #...
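
    A hedged sketch for capping the systemd journal, which is usually what fills /var/log/journal (the 500M limit is only an example):

        # one-off cleanup of the journal on disk
        journalctl --vacuum-size=500M
        # make the cap persistent: in /etc/systemd/journald.conf set
        #   SystemMaxUse=500M
        systemctl restart systemd-journald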
  16. [SOLVED] how to move pbs to new hardware

    I'll edit this post sometime.... in the middle of transferring 8T... I have not tried this.... was researching whether resume is possible... following this: https://unix.stackexchange.com/questions/343675/zfs-on-linux-send-receive-resume-on-poor-bad-ssh-connection supposedly, as we used the...
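
    A sketch of the resumable send/receive flow that link describes (dataset and host names are placeholders):

        # initial transfer: receive with -s so an interrupted stream can be resumed
        zfs send <pool>/<dataset>@move | ssh <target> zfs receive -s <pool>/<dataset>
        # after an interruption, read the resume token on the receiving side
        ssh <target> zfs get -H -o value receive_resume_token <pool>/<dataset>
        # resume the stream from that token
        zfs send -t <token> | ssh <target> zfs receive -s <pool>/<dataset>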
  17. [SOLVED] rbd ... object map is invalid

    Thank you Mira, that fixed the warning:

        rbd -p <pool-name> object-map rebuild vm-160-disk-0
        rbd -p <pool-name> object-map rebuild vm-603-disk-0
  18. [SOLVED] rbd ... object map is invalid

    Some LXCs have had 'object map is invalid' since about 2020. The timestamps show these lines are created when a PBS backup occurs. I assume these are harmless. Is there a simple way to prevent these?

        # dmesg | grep rbd | grep inval | grep disk
        [Fri Nov 4 17:02:54 2022]...
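
    To confirm which images are affected, the broken state shows up in the image flags; a small check-then-rebuild sketch using the pool/image placeholders from above:

        # an affected image lists "object map invalid" on its flags line
        rbd -p <pool-name> info vm-160-disk-0 | grep flags
        # rebuild the object map, as in the fix under result 17
        rbd -p <pool-name> object-map rebuild vm-160-disk-0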