Recent content by Jospeh Huber

  1. [SOLVED] SSDs could not be found after upgrade

    Hi, I found it... There was a filter defined on /dev/sdc in /etc/lvm.conf: global_filter = [ "r|/dev/sdc|", .... After adapting the filter, the thin volume is back again... oh man. :mad: :~# pvscan PV /dev/sdc1 VG ssd2 lvm2 [<476.94 GiB... (A minimal filter sketch follows after this list.)
  2. [SOLVED] SSDs could not be found after upgrade

    Hi, yes it looks like the device name has changed from sdd to sdc... On the second host it is exactly the same. Never had this before. :~# mount /dev/sdc1 /mnt/ mount: /mnt: unknown filesystem type 'LVM2_member'. ##pvscan pvscan /dev/sdd: open failed: No medium found PV /dev/sdb...
  3. [SOLVED] SSDs could not be found after upgrade

    Hi, we use a mix of Samsung SATA and M.2 SSD types: SAMSUNG MZ7KH960HAJR, SAMSUNG 860 PRO, 970 PRO. The missing SSD is a SATA 860 Pro (I guess, because at the moment I have no physical access to check). lsblk does not show the SSD (/dev/sdd): :~# lsblk NAME... (A short diagnostic sketch follows after this list.)
  4. [SOLVED] SSDs could not be found after upgrade

    Hi, today I wanted to upgrade my cluster to PVE 7, and beforehand I wanted to upgrade to the latest version of PVE 6. But on two hosts of my cluster, LVM thin volumes on SSDs cannot be detected anymore. Other SSDs with LVM thin are still there. journalctl -b (parts) ----- Jul 28 17:07:38 vmhost kernel...
  5. Ceph OSD Crash, why?

    In addition to that I found in dmesg: [Mon Jul 26 00:20:38 2021] libceph: osd1 down [Mon Jul 26 00:20:41 2021] rbd: rbd2: encountered watch error: -107 [Mon Jul 26 00:24:55 2021] libceph: osd1 up [Mon Jul 26 00:26:39 2021] libceph: osd1 down [Mon Jul 26 00:37:03 2021] libceph: osd1 weight 0x0...
  6. Ceph OSD Crash, why?

    Hi, in my three-node Ceph cluster with three OSDs I have OSD crashes on only one host, and always on the same OSD. The OSD is then "out" and I cannot restart it and take it "in" again. The only way to heal this is to destroy the OSD and recreate it so it replicates the data again. There are no "ceph crash"... (A condensed destroy-and-recreate sketch follows after this list.)
  7. Help: No Working Ceph Monitors...

    Hi, I found this: https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-May/026957.html It is possible to recover from the OSDs, but for this the OSDs must be shut down. That seems too much risk...
  8. Help: No Working Ceph Monitors...

    Hi, today one of my monitors was in a bad state and did not come up after a node reboot. So I tried to recover it with the solution from here ... which has worked in the past. https://forum.proxmox.com/threads/i-managed-to-create-a-ghost-ceph-monitor.58435/#post-389798 I have three nodes, two...
  9. [SOLVED] I managed to create a ghost ceph monitor

    I had the same problem today, but manually creating the monitor with the built-in Ceph commands did not help. It is described here: https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#adding-a-monitor-manual ... but in my case the result was the same as creating the monitor with... (A condensed sketch of that manual procedure follows after this list.)
  10. Proxmox Thin-Pool Full but reported only half full?

    I think the above is not a good idea if the physical space is fully used ... But I found another approach here that sounds good to me: https://mellowhost.com/billing/index.php?rp=/knowledgebase/68/How-to-Extend-meta-data-of-a-thin-pool.html When I run lvs -a I can see that my metadata pool is... (A minimal metadata-check sketch follows after this list.)
  11. Proxmox Thin-Pool Full but reported only half full?

    I think the problem is similar to this post: https://forum.proxmox.com/threads/drbd9-lvm-thin-provisioning.28584/#post-143856 My metadata space ran out. The values I provided above were taken after restoring the corrupt VMs and moving one VM away. I think that has healed the...
  12. Proxmox Thin-Pool Full but reported only half full?

    Here it is:
    lvs
      LV    VG     Attr       LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
      lvol0 backup -wi-ao---- 2.30t
      data  pve    twi-aotz--...
  13. Proxmox Thin-Pool Full but reported only half full?

    Hi, today one of my VMs (backed by LVM thin on a RAID) ran out of disk space. The only VM without disk monitoring :-(( It seems that this has destroyed the other VMs in my thin pool. Several I/O errors have occurred. My LVM thin pool is not overbooked, total VM space is 871 GB and...
  14. Ceph Question: Replace OSDs

    Hi, the migration has worked perfectly as described above :) The SSDs were added as OSDs nearly at the same time, the cluster was balanced, and then we reweighted the HDDs to zero. The local replication was fast... Thx (A short reweight sketch follows after this list.)
  15. Ceph Question: Replace OSDs

    Thanks for your advice to be careful with step 2 after each one. Is it the same for step 1 (after each one)? Because if there is more space on all nodes, the cluster tries to rebalance onto each node at the same time.
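
For the "[SOLVED] SSDs could not be found after upgrade" thread (items 1-4): the fix described above is adapting the LVM device filter that was rejecting /dev/sdc. A minimal sketch of that kind of check, assuming the stock Debian/PVE config path /etc/lvm/lvm.conf and the VG name ssd2 quoted in the excerpt:

    # Show any filter rules that could hide a disk from LVM
    grep -nE 'global_filter|^[[:space:]]*filter' /etc/lvm/lvm.conf
    # A reject rule like the one quoted above hides the whole device:
    #   global_filter = [ "r|/dev/sdc|", ... ]
    # After removing or narrowing the rule, rescan and re-activate the VG
    pvscan --cache
    vgscan
    vgchange -ay ssd2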
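
The same thread also mentions the device name moving from sdd to sdc and lsblk not listing the disk at all (items 2-3). A hedged diagnostic sketch; the device names here are only examples:

    # Identify disks by model/serial instead of the changeable /dev/sdX name
    lsblk -o NAME,SIZE,TYPE,FSTYPE,MODEL,SERIAL
    # LVM tracks PVs by UUID, so a renamed device is usually harmless by itself;
    # an 'LVM2_member' partition is a PV and is not mounted directly, its LVs are
    pvs -o pv_name,vg_name,pv_uuid
    # If a SATA disk does not show up at all, force a SCSI bus rescan;
    # if it still stays missing, suspect the drive, cabling or controller
    for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done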
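
For the "Ceph OSD Crash, why?" thread (items 5-6): the post describes destroying and recreating the affected OSD as the only way to recover it. A condensed sketch of that cycle on PVE, assuming OSD id 1 from the dmesg excerpt and a placeholder device /dev/sdX:

    # Check whether the crash module or the cluster state recorded anything useful
    ceph crash ls
    ceph status
    # Drain, stop and remove the broken OSD
    ceph osd out osd.1
    systemctl stop ceph-osd@1
    pveceph osd destroy 1 --cleanup
    # Recreate it on the same device; Ceph then backfills the data onto it
    pveceph osd create /dev/sdX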
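
For the monitor threads "Help: No Working Ceph Monitors..." and "I managed to create a ghost ceph monitor" (items 7-9): the manual procedure linked in item 9 boils down to roughly the following. It assumes a surviving quorum and a placeholder monitor id <id>, and as item 9 notes it did not resolve that particular case; this is only a sketch of the upstream docs:

    # Fetch the current monmap and the mon keyring from the running cluster
    ceph mon getmap -o /tmp/monmap
    ceph auth get mon. -o /tmp/mon.keyring
    # Build a fresh data directory for the broken monitor and start it again
    ceph-mon -i <id> --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    systemctl start ceph-mon@<id>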
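
For the "Proxmox Thin-Pool Full but reported only half full?" thread (items 10-13): the cause described is the thin pool's metadata LV filling up, which can corrupt thin volumes even while Data% still looks healthy. A minimal sketch of checking and growing it, assuming the pve VG and data pool from the lvs output above and free extents left in the VG:

    # Meta% and the tmeta size are the numbers to watch, not only Data%
    lvs -a -o+metadata_percent,lv_metadata_size pve
    # Grow the metadata LV of the thin pool (the +1G step is only an example)
    lvextend --poolmetadatasize +1G pve/data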
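
For the "Ceph Question: Replace OSDs" thread (items 14-15): the migration described adds the new SSD OSDs first, lets the cluster balance, and then drains the old HDDs by reweighting them to zero. A hedged sketch, with osd.3 as a placeholder id:

    # Drain one old HDD OSD; its data moves onto the remaining OSDs
    ceph osd crush reweight osd.3 0
    # Wait until the rebalance has finished (HEALTH_OK, no misplaced objects)
    # before reweighting or removing the next OSD
    ceph -s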
