Hi,
I have a CT disk of 100 G; it was once filled up to 90%, but now only 14% of it holds data.
When I back up the CT now, it takes a lot of time; in the beginning it was faster.
Can I somehow clean that disk or get the sparse space back?
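A minimal sketch of what usually helps here, assuming the underlying storage supports discard/thin provisioning and that your pve-container version ships pct fstrim; the CT ID 101 is hypothetical:

# run fstrim on the container's mounted filesystems from the host
pct fstrim 101
# alternatively, from inside the container (may fail in unprivileged CTs):
fstrim -av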
But what if one of the MONs dies? Do I just make the 4th node a monitor?
Note: All my cluster nodes have just a small single boot disk. I wanted to increase redundancy by adding a 5th node, even if I don't need the additional performance or disk space yet.
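If it comes to that, promoting a node to a monitor is typically a one-liner with PVE's Ceph tooling; a sketch, assuming a PVE 5.x install (newer releases use 'pveceph mon create' instead):

# on the node that should become a monitor
pveceph install        # only needed if the Ceph packages are not installed yet
pveceph createmon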
Hi,
I am currently running a 4-node Ceph cluster with a 3/2 pool; 3 nodes are monitors with OSDs and 1 holds just OSDs.
Now I want to add another node and upgrade both nodes to monitors, but stay with the 3/2 pool to be able to survive 2 failing nodes.
Does the 5th node need to have the same...
Recently I benchmarked Samsung's enterprise SSD 860 DCT (960 GB) with my usual benchmark setup, and the result was just horrible:
FIO Command:
fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test...
Hmm... I tried to add the SSD again and checked the output of syslog:
Feb 13 11:10:00 ceph8 systemd[1]: Starting Proxmox VE replication runner...
Feb 13 11:10:01 ceph8 systemd[1]: Started Proxmox VE replication runner.
Feb 13 11:10:52 ceph8 pvedaemon[2536]: <root@pam> starting task...
OK, I tried both partprobe and a reboot, but that SSD is not going to be turned into an OSD.
Then I tried to add a completely new SSD, but I get the same result.
Could the node be the faulty part?
What can I try, besides reinstalling Proxmox? ;-)
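A few low-risk checks that might narrow it down before blaming the node; a sketch, assuming the disk shows up as /dev/sdd (hypothetical device name):

lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdd    # any leftover partitions or filesystem signatures?
dmesg | tail -n 50                         # kernel errors for the drive or the controller?
smartctl -a /dev/sdd                       # drive health and error counters (smartmontools)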
Hi,
I wanted to add a new SSD and make an OSD out of it, but after 'create: OSD' it's marked as 'partitions'.
OK, it's not the first time in my life, so I issued dd if=/dev/zero of=/dev/sdd count=1000000 followed by ceph-disk zap /dev/sdd
But still 'partitions'...
I deleted all partitions with fdisk...
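For reference, the sequence that usually gets a stubborn disk back to a clean state; a sketch, assuming the disk really is /dev/sdd and holds no data you still need:

# remove all filesystem/RAID/LVM signatures, then the GPT/MBR structures
wipefs --all /dev/sdd
sgdisk --zap-all /dev/sdd
# overwrite the start of the disk, where leftover metadata tends to live
dd if=/dev/zero of=/dev/sdd bs=1M count=200
# make the kernel re-read the (now empty) partition table
partprobe /dev/sdd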
No.
Workaround: the root disks of those VMs are stored on local-zfs.
Since applying this workaround (around Oct 2018), backups to the same NFS store work without any issues.
Hi,
I have a virtualized PBX (Askozia) VM which I can't control through the GUI.
If I send a shutdown, nothing happens.
Now I have seen that I could activate VMware Tools (or Hyper-V Linux Integration Services); it's just a checkbox.
Should I try to activate it? Can KVM somehow use those tools to...
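For what it's worth, the KVM-side counterpart to those tools is the QEMU guest agent; a sketch, assuming the VM ID is 100 (hypothetical) and the guest image ships qemu-guest-agent:

# on the PVE host: enable the agent option for the VM
qm set 100 --agent 1
# once the agent is running inside the guest, shutdown goes through the agent
qm shutdown 100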
A good 2 months ago I moved the 3 CTs from the Ceph storage to local storage, backing up with a local tmp dir to the NFS as before, and the hanging backups have stopped.
I will apply the latest updates to the nodes and then move the CTs back onto Ceph.
Hi folks,
a quick question:
Is replication of VMs from the cluster to a standalone node (i.e., one without a cluster) possible?
BTW: How about starting one of those "short question - short answer" threads? Many other forums have them, and then you don't have to open a ... for every mini question...
Sorry, I phrased that badly:
There are 6 nodes in total: 3 as Ceph mon plus OSDs, 1 with OSDs only (no mon), and 2 without Ceph (no OSDs, no mon).
Between the last "regular" backup job (which always succeeds) and the task with the "bad" CTs, 4:30 hours pass...
Since it...
OK, the reboot didn't help; a backup job hung again today.
The last post also shows that it only happens now and then. 9 days have passed now, and in between it backed up without problems.
6 nodes: 3x Ceph mon, 1x Ceph OSDs only, 2x VM/CT node only.
1x NFS backup
Ceph: BlueStore on SSDs
Ceph network and backup network are on 10G
Cluster and VM traffic each on separate 1G VLANs.
Kernel: Linux 4.15.17-1-pve #1 SMP PVE 4.15.17-9
PVE Manager: pve-manager/5.2-1/0fcd7879
Node...
Hi,
I tried to find an answer to my question across the forum, but I couldn't.
Let's imagine I have a 3-node cluster and want to reboot one node, and that node gets stuck and can't boot up (POST error, bad memory, etc.).
How can I migrate the CTs/VMs on that node to the other 2 remaining nodes?
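A sketch of the usual manual recovery, assuming the cluster still has quorum, the guest disks live on shared storage, and the dead node is named node3 while node1 survives (both names hypothetical):

# on a surviving node: reassign the guests by moving their config files in pmxcfs
mv /etc/pve/nodes/node3/qemu-server/100.conf /etc/pve/nodes/node1/qemu-server/
mv /etc/pve/nodes/node3/lxc/101.conf /etc/pve/nodes/node1/lxc/
# only do this while the dead node is truly powered off, otherwise a guest could run twice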