Hi,
I found it...
There was a filter defined for /dev/sdc in /etc/lvm/lvm.conf.
global_filter = [ "r|/dev/sdc|", ....
After adapting the filter, the thin volume is back again... oh man.:mad:
:~# pvscan
PV /dev/sdc1 VG ssd2 lvm2 [<476.94 GiB...
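For anyone hitting the same thing, a rough sketch of the change (the exact filter list depends on your setup; the lines below are only an example, any other entries in your global_filter stay as they are):

# /etc/lvm/lvm.conf
# before: the PV on /dev/sdc was rejected by the filter
global_filter = [ "r|/dev/sdc|" ]
# after: accept the device again (or simply remove the reject rule)
global_filter = [ "a|/dev/sdc|" ]

After saving the file, pvscan should show the PV again, as above.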
Hi,
Yes, it looks like the device name has changed from sdd to sdc...
On the second host it is exactly the same.
Never had this before.
:~# mount /dev/sdc1 /mnt/
mount: /mnt: unknown filesystem type 'LVM2_member'.
:~# pvscan
/dev/sdd: open failed: No medium found
PV /dev/sdb...
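(The "unknown filesystem type 'LVM2_member'" error just means /dev/sdc1 contains an LVM physical volume, so it cannot be mounted directly; the logical volumes inside the VG have to be activated and mounted instead. Roughly, with placeholder VG/LV names:)

# scan for VGs and activate the one on the disk
vgscan
vgchange -ay <vgname>
# list the LVs and mount the one you need
lvs <vgname>
mount /dev/<vgname>/<lvname> /mnt/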
Hi,
we use mixed types of Samsung SATA and M.2 SSDs:
SAMSUNG MZ7KH960HAJR
SAMSUNG 860 PRO, 970 PRO
The missing SSD is a SATA 860 Pro (I guess, because at the moment I have no physical access to check).
lsblk does not show the SSD (/dev/sdd):
:~# lsblk
NAME...
Hi,
today I wanted to upgrade my cluster to PVE 7, and beforehand I wanted to upgrade to the latest version of PVE 6.
But on two hosts of my cluster, LVM thin volumes on SSDs cannot be detected anymore.
Other SSDs with LVM thin are still there.
journalctl -b (parts)
-----
Jul 28 17:07:38 vmhost kernel...
Hi,
In my three-node Ceph cluster with three OSDs, I have OSD crashes on only one host and always on the same OSD.
The OSD is then "out" and I cannot restart it and take it "in" again.
The only way to heal this is to destroy the OSD and recreate it so that it replicates the data again.
There are no "ceph crash"...
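For the record, the destroy/recreate cycle I end up doing is roughly this (OSD id 2 and /dev/sdX are placeholders, not the real values from my cluster):

# take the broken OSD out and stop its service
ceph osd out osd.2
systemctl stop ceph-osd@2
# remove it completely (optionally with the cleanup option to wipe the disk)
pveceph osd destroy 2
# recreate it on the same device; Ceph then backfills the data onto it
pveceph osd create /dev/sdX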
Hi,
I found this,
https://docs.ceph.com/en/latest/rados/troubleshooting/troubleshooting-mon/#recovery-using-osds
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-May/026957.html
It is possible to recover from the OSDs, but for this the OSDs must be shut down.
That seems like too much risk...
Hi,
today one of my monitors was in a bad state and did not come up after a node reboot.
So I tried to recover it with the solution from here, which has worked in the past:
https://forum.proxmox.com/threads/i-managed-to-create-a-ghost-ceph-monitor.58435/#post-389798
I have three nodes, two...
I had the same problem today, but a manual creation of the monitor with the Ceph built-in commands did not help.
It is described here https://docs.ceph.com/en/latest/rados/operations/add-or-rm-mons/#adding-a-monitor-manual ... but in my case the result was the same as creating the monitor with...
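For reference, the manual procedure from those docs boils down to roughly this on the affected node (the mon id is a placeholder):

# grab the mon. keyring and the current monmap from the remaining quorum
ceph auth get mon. -o /tmp/mon-keyring
ceph mon getmap -o /tmp/monmap
# build a fresh data directory for the monitor and start it
ceph-mon -i <mon-id> --mkfs --monmap /tmp/monmap --keyring /tmp/mon-keyring
systemctl start ceph-mon@<mon-id>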
I think the above is not a good idea if the physical space is fully used ...
But I found another approach here that sounds good to me:
https://mellowhost.com/billing/index.php?rp=/knowledgebase/68/How-to-Extend-meta-data-of-a-thin-pool.html
When I do an lvs -a I can see that my metadata pool is...
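The idea from that link is to extend only the metadata LV of the thin pool, roughly like this (VG and pool names are placeholders; the size depends on how full Meta% is in lvs -a):

# check data and metadata usage of the pool
lvs -a -o name,size,data_percent,metadata_percent <vgname>
# grow only the pool metadata, e.g. by 1 GiB
lvextend --poolmetadatasize +1G <vgname>/<thinpool>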
I think the problem is similar to this post here:
https://forum.proxmox.com/threads/drbd9-lvm-thin-provisioning.28584/#post-143856
My metadata space ran out. The values I provided above were from after the restore of the corrupt VMs and after moving one VM away. I think that has healed the...
Hi,
today one of my VMs (backed by LVM thin, which in turn sits on a RAID) ran out of disk space. The only VM without disk monitoring :-((
It seems that this has destroyed my other VMs in the thin pool. Several I/O errors have occurred.
My LVM thin pool is not overbooked; the total VM space is 871 GB and...
Hi,
the migration has worked perfectly as described above :)
The SSDs were added as OSDs at nearly the same time, the cluster rebalanced, and then we reweighted the HDDs to zero.
The local replication was fast...
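For anyone doing the same migration: one way to do the reweighting is via the CRUSH weight, e.g. (OSD ids are placeholders, repeat per HDD OSD):

# drain an HDD OSD by setting its CRUSH weight to 0
ceph osd crush reweight osd.5 0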
Thx
Thanks for your advice to be careful with step 2 after each.
Is it the same for step 1 (after each)? Because if there is more space on all nodes, the cluster tries to rebalance on each node at the same time.