Proxmox VE can handle many different types of storage. Best take a look at the documentation. :)
https://pve.proxmox.com/pve-docs/chapter-pvesm.html
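To get a quick overview of what is already configured on a node, something like this helps (the output depends on your setup, of course):

    # list all configured storages, their type, status and usage
    pvesm status

    # the underlying configuration lives here
    cat /etc/pve/storage.cfg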
Qcow2 is an image format and has to be selected explicitly when creating a disk or during a migration / Move Disk. But more performant is probably...
Bold claim. :)
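Just as an illustration of where the format gets picked on the CLI, a rough sketch (VM ID 100, disk scsi0 and the target storage "local" are only placeholders; the same choice exists in the Move Disk dialog of the GUI):

    # move a disk to a file-based storage and explicitly select qcow2 as the target format
    # (qcow2 only makes sense on file-based storages like a directory or NFS storage)
    qm move_disk 100 scsi0 local --format qcow2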
Ext4 is a filesystem that cannot do snapshots. Here you either have to use qcow2 image files, or LVM(-thin) as storage.
You can also create an LV on the LVM and set up a filesystem on it. Then both options are possible as well.
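A rough sketch of that second route, assuming a volume group called "pve" with enough free space (all names and sizes are placeholders):

    # create a plain LV, put a filesystem on it and mount it
    lvcreate -L 100G -n vmdata pve
    mkfs.ext4 /dev/pve/vmdata
    mkdir -p /mnt/vmdata
    mount /dev/pve/vmdata /mnt/vmdata

    # add it to PVE as a directory storage; qcow2 images can then live on it
    pvesm add dir vmdata --path /mnt/vmdata --content images,rootdir

Don't forget an fstab entry if you go that way, otherwise the mount is gone after a reboot.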
pct help will list all the commands and options for the tool. With e.g. pct listsnapshot you can gather the name of the snapshot in question; then you can run the delsnapshot.
And with man pct you can get the help page for it as well.
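A minimal sketch of that sequence, with container ID 101 and a snapshot name "before_upgrade" as placeholders:

    # list all snapshots of the container to find the right name
    pct listsnapshot 101

    # then remove the one that is no longer needed
    pct delsnapshot 101 before_upgrade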
The autoscaler warns once the number of PGs in a pool is off by more than a factor of 3 from what it considers ideal (over- or under-provisioned). That ideal number is derived from the fill level of the pool.
https://docs.ceph.com/en/octopus/rados/operations/placement-groups/#viewing-pg-scaling-recommendations
One of these pools has more PGs than the autoscaler thinks is necessary, hence the warning. This is regardless of the actual number of PGs per OSD. Though 225 PGs is already a very high number.
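The recommendations can be checked directly on the cluster, e.g.:

    # what the autoscaler considers the ideal PG count per pool
    ceph osd pool autoscale-status

    # and the currently configured value of a single pool (pool name is a placeholder)
    ceph osd pool get <poolname> pg_num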
Yup, there seems to be no module for that one in the kernel. The IPMI interface (openipmi) should still provide some additional data. But Fujitsu also has its own software and plugins, maybe those can help.
https://download.ts.fujitsu.com/prim_supportcd/SVSSoftware/
Otherwise you can disable...
Did you run sensors-detect? I am missing the fan speeds in the output. You can also check the data through IPMI; Fujitsu usually exposes a good amount of sensor data there as well.
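For the IPMI route, a quick sketch with ipmitool (assuming the openipmi kernel modules can be loaded and ipmitool is installed):

    # load the IPMI drivers and install the tool
    modprobe ipmi_si
    modprobe ipmi_devintf
    apt install ipmitool

    # read out the sensor data (fans, temperatures, voltages, ...)
    ipmitool sensor
    ipmitool sdr list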
Yes, but your cluster is Nautilus as well. Hence why I'd look into the logs (any / all of them), to see what might be logged that can explain it.
After a restart of the MONs they are listening on both ports, assuming the msgr2 protocol has been activated. Is there a firewall in between?
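To verify that, something along these lines (standard Nautilus commands):

    # check whether the MONs advertise both the v1 (6789) and v2 (3300) addresses
    ceph mon dump

    # if only v1 shows up, msgr2 can be enabled cluster-wide
    ceph mon enable-msgr2

    # and on the node itself, check that both ports are actually being listened on
    ss -tlnp | grep ceph-mon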
The easiest way is to destroy & re-create the OSD.
You can do it a host at a time.
But if you want to minimize the impact, it is best to reweight the OSD to 0 first. This will cause Ceph to distribute the data on that OSD to the others on that node. Once it is empty, a destroy & create will just...
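A rough sketch of that approach, with OSD ID 7 and /dev/sdX as placeholders (not the complete procedure, just the relevant commands):

    # move the data off this OSD; the host's crush weight stays the same,
    # so the data gets distributed to the other OSDs on that node
    ceph osd reweight 7 0

    # watch it drain
    ceph osd df tree

    # once empty: mark it out, stop it, then destroy & re-create
    ceph osd out 7
    systemctl stop ceph-osd@7
    pveceph osd destroy 7
    pveceph osd create /dev/sdX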