Then just merge the contracts into one. That can be done in the Robot......
And you can then install PVE directly from the PVE stick, which the folks at Hetzner are happy to "insert" for you....
Well, it is ready when it is ready... as already written, they want it to be nice and stable... announcing any "release dates" would only create pressure to release something that is not finished... a problem that nowadays happens far too often... release something buggy, garbage, just...
Hi, can you go into a bit more detail about "ms_async_rdma_local_gid"?
Is this something that has to be set on each node? Or do I add multiple entries to the one shared /etc/ceph/ceph.conf?
Thanks...
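For what it's worth: since the local GID is specific to each node's RDMA port, one layout I could imagine (an assumption on my part, not verified against the Ceph docs) is scoping the value per daemon section in the shared /etc/pve/ceph.conf, since plain ceph.conf has no per-host sections:

# hypothetical per-daemon scoping; the GIDs below are placeholders, use the ones of your RDMA ports
[osd.0]
        ms_async_rdma_local_gid = fe80:0000:0000:0000:0000:0000:0000:0001
[osd.1]
        ms_async_rdma_local_gid = fe80:0000:0000:0000:0000:0000:0000:0002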
It's your cluster, but in my opinion I would never follow a guide where data loss is possible without having a good and fresh backup.
The rebalancing is the least of the problems. Even if the SSDs/NVMes have only 0.8 DWPD, they aren't that full at the moment. It won't be more than 1/2 % wear, if not less...
OUT.... wait until rebalance is done...
STOP ..... wait until rebalance is done....
DESTROY..... wait until rebalance is done....
Remove DISK
Insert the new one, add it to Ceph, wait until the rebalance is done.... and Ceph is "green" again!
AND! ONE AT A TIME! (Rough command sketch below.)
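For reference, the CLI side of that sequence could look roughly like this on a PVE node (only a sketch; the OSD ID 12 and the device path are placeholders, and the same steps are available in the GUI under Ceph -> OSD):

ceph osd out osd.12                    # mark the OSD out, data starts to move off it
ceph -s                                # repeat until the rebalance is finished
ceph osd safe-to-destroy osd.12        # double-check nothing still depends on it
systemctl stop ceph-osd@12             # stop the daemon
pveceph osd destroy 12                 # remove the OSD from the cluster
# swap the physical disk, then create the new OSD on it
pveceph osd create /dev/sdX
ceph -s                                # wait again until Ceph is "green"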
Hello,
maybe someone can enlighten me on this.
So QEMU supports a kind of "emulation" to make Windows guests believe they are running under Hyper-V.
Is that built into PVE? Is it planned to offer this as an "option" in the GUI? Can it already be done in the config file...
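As an illustration only (not confirmed for the PVE version discussed here): QEMU exposes these Hyper-V enlightenments as hv-* CPU flags, and one way to pass custom flags to a guest is the raw "args" line in its config file; the VM ID and the selection of flags below are placeholders I picked, not a recommendation.

# /etc/pve/qemu-server/101.conf  (101 is a placeholder VM ID)
# extra arguments appended to the generated QEMU command line
args: -cpu host,hv-relaxed,hv-spinlocks=0x1fff,hv-vapic,hv-time

As far as I know, PVE already enables a sensible set of these automatically when the guest OS type is set to one of the Windows variants, so the manual route should only be needed for fine-tuning.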
How to do this properly is described here, straight from Proxmox.
It's called full mesh. The simplest variant is the routed setup.
https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server
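Roughly, the routed variant boils down to something like this per node (a sketch with made-up interface names and addresses; the wiki above is the authoritative version):

# /etc/network/interfaces on node 1 (10.15.15.50); ens19 goes to node 2, ens20 to node 3
auto ens19
iface ens19 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.51/32 dev ens19
        down ip route del 10.15.15.51/32

auto ens20
iface ens20 inet static
        address 10.15.15.50/24
        up ip route add 10.15.15.52/32 dev ens20
        down ip route del 10.15.15.52/32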
It performs very well for our use case. We use the host to run PBS and VEEAM VMs on it, so it is more or less a big TANK for backups.
About 200 TB of usable storage. But we will see in the next few weeks... only about 10% of data on it so far... IOWAIT is below 1% all the time...
Good morning,
I'm trying to find some information about the possibility of expanding a RAIDZ2 vdev (raidz2-0).
Is this already possible, aside from the fact that it might degrade performance?
So can I add another 2 identical disks to the pool shown here without recreating it?
root@TANK:~# zpool status
pool: rpool
state: ONLINE...
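If I understand the feature correctly (an assumption on my side: it needs an OpenZFS release with RAIDZ expansion, i.e. 2.3 or later), the expansion is done one disk at a time by attaching each new disk to the existing raidz2-0 vdev, roughly like this:

# device paths are placeholders; run once per new disk and let each expansion finish first
zpool attach rpool raidz2-0 /dev/disk/by-id/ata-NEWDISK1
zpool status rpool        # shows the expansion progress
zpool attach rpool raidz2-0 /dev/disk/by-id/ata-NEWDISK2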