On one host I have a couple of SSDs of the same size; the weight is identical too, yet one (OSD 10) is at 58% usage and the other (OSD 14) at 80%.
When new data is written, it goes to the nearly full one just the same, see screenshot.
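For context, the per-OSD fill levels and a manual rebalance can be checked and nudged roughly like this (a sketch assuming the standard ceph CLI; the threshold and weight values are examples, and OSD ids 10/14 are taken from the post):

```shell
# Show size, weight, and %USE per OSD to confirm the imbalance.
ceph osd df tree

# Option 1: let Ceph automatically lower the weight of over-full OSDs.
# 120 means: only touch OSDs more than 20% above average utilization.
ceph osd reweight-by-utilization 120

# Option 2: manually lower the override weight (0.0-1.0) of the full OSD.
ceph osd reweight 14 0.9
```

Note that reweighting triggers backfill traffic, so it is best done outside peak hours.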
Best regards, Stefan
I have a CT disk with 100G; it was once filled up to 90%, but now it holds just 14% of data.
When I back up the CT now, it takes a lot of time; in the beginning it was faster.
Can I somehow clean that disk or get the sparse space back?
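One hedged approach, assuming the CT sits on thin-provisioned storage that supports discard (the CT id 100 below is a placeholder):

```shell
# On the Proxmox host: tell the container's filesystems to discard unused
# blocks, so the thin/sparse image can shrink back down.
pct fstrim 100

# Inside the container, the equivalent is trimming the mounted filesystem:
fstrim -v /
```

If the underlying storage does not pass discard through, the freed blocks stay allocated and the backup still has to walk them.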
But what if one of the MONs dies? Do I just make the 4th node a monitor?
Note: All my cluster nodes have just a small single boot disk. I wanted to increase redundancy by adding a 5th node, even though I don't need the additional performance or disk space yet.
I am currently running a 4-node Ceph cluster with a 3/2 pool; 3 nodes are monitors with OSDs, and 1 holds just OSDs.
Now I want to add another node and promote both nodes to monitors, but stay with the 3/2 pool to keep the ability to survive 2 failing nodes.
Does the 5th node need to have the same...
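For reference, promoting a joined node to a monitor is roughly this on Proxmox (a sketch; the exact subcommand name has varied slightly between PVE releases, e.g. older versions used `pveceph createmon`):

```shell
# On the new node, after it has joined the Proxmox cluster:
pveceph install        # pull in the Ceph packages
pveceph mon create     # promote this node to a monitor

# Verify the monitor quorum afterwards:
ceph mon stat
```

With 5 monitors the cluster keeps MON quorum with 2 nodes down, which matches the 3/2 pool goal.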
Recently I benchmarked Samsung's enterprise SSD 860 DCT (960 GB) with my usual benchmark setup, and the result was just horrible:
fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test...
Hmm... I tried to add the SSD again and checked the syslog output:
Feb 13 11:10:00 ceph8 systemd: Starting Proxmox VE replication runner...
Feb 13 11:10:01 ceph8 systemd: Started Proxmox VE replication runner.
Feb 13 11:10:52 ceph8 pvedaemon: <root@pam> starting task...
OK, I tried both partprobe and a reboot, but that SSD just won't turn into an OSD.
Then I tried to add a completely new SSD, but I get the same result.
Could the node itself be the faulty part?
What can I try, besides reinstalling Proxmox? ;-)
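Before reinstalling, it may be worth checking what the kernel and Ceph actually see on the device (a sketch; /dev/sdd is the device from this thread, adjust as needed):

```shell
# What does the kernel see on the disk right now?
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT /dev/sdd

# List any leftover filesystem/RAID/partition-table signatures (read-only).
wipefs /dev/sdd

# Does Ceph already claim the device via LVM?
ceph-volume lvm list
```

A stale LVM/Ceph signature that dd of the first blocks missed is a common reason a disk refuses to become an OSD.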
I wanted to add a new SSD and make an OSD out of it, but after 'Create: OSD' it's marked as 'partitions'.
OK, it's not the first time in my life, so I issued dd if=/dev/zero of=/dev/sdd count=1000000 followed by ceph zap /dev/sdd.
But it's still 'partitions'...
I deleted all partitions with fdisk...
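dd over the first blocks leaves the backup GPT header at the end of the disk intact, which can be why the disk still shows 'partitions'. A hedged sketch of a more thorough wipe, demonstrated on a throwaway image file (disk.img) as a safe stand-in for /dev/sdd:

```shell
# Create a small image file as a safe stand-in for the real disk.
truncate -s 16M disk.img

# Write a GPT label onto it, as a leftover partition table to remove.
echo 'label: gpt' | sfdisk disk.img

# Remove every signature wipefs knows about, including the backup GPT
# header at the end of the device that dd of the first blocks misses.
wipefs --all disk.img

# On a real OSD disk you would additionally let Ceph zap it, e.g.:
#   ceph-volume lvm zap /dev/sdd --destroy
wipefs disk.img   # prints nothing once the device is clean
```

After a wipe like this (plus partprobe or a reboot), the GUI should see the disk as unused again.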
I have a virtualized PBX (Askozia) VM which I can't control through the GUI.
If I send a shutdown, nothing happens.
Now I have seen that I could activate VMware Tools (or Hyper-V Linux Integration Services); it's just a checkbox.
Should I try to activate it? Can KVM somehow use those tools to...
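The KVM-side counterpart of VMware Tools is the QEMU guest agent rather than those checkboxes. A sketch, assuming a Debian-based guest (Askozia's base system may differ) and VM id 101 as a placeholder:

```shell
# Inside the guest: install and start the QEMU guest agent.
apt-get install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host: enable the agent for the VM, then shutdown
# requests go through the agent instead of relying on ACPI.
qm set 101 --agent 1
qm shutdown 101
```

If the guest OS can't run the agent, checking that it at least reacts to ACPI power-button events is the fallback for clean shutdowns.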