Can you show the configs of the VMs in comparison to the memory they are actually using? qm config {vmid}.
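For example (VMID 100 below is just a placeholder), comparing the configured memory with what the QEMU process currently reports could look like this:
qm config 100 | grep -Ei 'memory|balloon'
qm status 100 --verbose | grep -Ei 'mem|balloon'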
Some memory overhead is to be expected. But it would be interesting to see how the one with the potentially huge overhead is configured.
How much memory do you assign all guests in total?
Do you...
The tags are available in the GUI by now. This thread is more than a year old.
Have a look at Datacenter -> Options; there are a few settings for how the tags are displayed.
Anything where you can store backups to. A network share on another NAS or other machine for example. Then use the backup functionality to back up your guests. After that you can modify your server and do a new install with ZFS.
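As a rough sketch (storage name, server and export path are made up), adding an NFS share as backup storage and backing up a guest to it could look like this:
pvesm add nfs backup-nas --server 192.168.1.50 --export /export/pve-backups --content backup
vzdump 100 --storage backup-nas --mode snapshot --compress zstd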
Then configure that external storage again to get access to the...
Ok, then a reinstall will be necessary.
Are the guest's images located in `local-lvm`? Then it is easiest to create backups of the guests, do a reinstall and restore the guests with a new target storage.
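The restore step could then look roughly like this (archive name, VMID and target storage are placeholders); the same can be done in the GUI by selecting the backup and choosing a different target storage on restore:
qmrestore /mnt/pve/backup-nas/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-zfs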
For now, if you want to create multiple OSDs per disk, you need to create them with ceph-volume lvm batch --osds-per-device X /dev/nvme….
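For example (device path and OSD count are hypothetical), with --report first only showing what would be done without changing anything:
ceph-volume lvm batch --report --osds-per-device 4 /dev/nvme0n1
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1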
If you want to build a Ceph cluster with blades, keep in mind that failure domains might be a bit different, depending on what is local to each blade and what is...
Proxmox VE only shows the backups for the respective namespace. If you want to add multiple namespaces, you need one storage config per namespace.
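As a sketch (storage IDs, server and namespaces are made up), two entries in /etc/pve/storage.cfg that only differ in the namespace could look like this:
pbs: pbs-prod
    server pbs.example.com
    datastore backup
    namespace prod
    username backup@pbs
    content backup

pbs: pbs-test
    server pbs.example.com
    datastore backup
    namespace test
    username backup@pbs
    content backup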
I am confused. The OSDs should be set up on their own physical disk that is present in the node.
Can you post the output of the following commands inside [CODE][/CODE] tags?
ceph osd df tree
ceph device ls
As long as the same key is used, deduplication should work well. Different keys produce different data that is stored on the PBS. Otherwise it wouldn't be good encryption ;)
AFAIU there is an additional network?
Make sure that a different IP subnet is used.
A vmbrX interface is only needed if the guests should also have access to that network. Otherwise you can configure the IP directly on the interface (enable autostart) or directly on the bond.
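As a sketch in /etc/network/interfaces (interface names, bond mode and subnet are just examples), putting the IP directly on the bond would look roughly like this:
auto bond0
iface bond0 inet static
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-miimon 100
    address 10.10.10.11/24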
For Corosync, you can...
So, 4 nodes with OSDs, pool is using a size/min_size of 3/2.
Two nodes die. Some PGs will only have one replica left. So far, so good. Make sure the OSDs have enough free space so that Ceph can restore the second replica on one of the remaining nodes.
While the pool is IO blocked, the VMs won't be able to access...
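To double-check what a pool is currently set to (the pool name is just an example):
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size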
The additional server is for Ceph, not Proxmox VE. You will need 5 MONs in order to survive the loss of two. -> small Proxmox VE node with a Ceph MON on it.
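As a sketch, once that small node has joined the Proxmox VE cluster, the Ceph packages and the monitor can be set up on it with (run on that node):
pveceph install
pveceph mon create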
If you run the pools with size/min_size 3/2 and lose two nodes, chances are high that some PGs will have lost two replicas. Until Ceph is...
Another thing to consider is whether the clients encrypt their backups. In that case the encryption key is another layer of separation. If two different encryption keys generated the same chunk, it wouldn't be good encryption. ;)
Ideally, anything with Power-Loss-Protection (PLP); the cheapest ones are just below 300€ by now. With consumer SSDs it is always a bit hit or miss whether they are decent enough. Consumer SSDs are optimized for a desktop workload, where they will see writes happening in short bursts and data...
I hope not for VMs… those drives are terribly slow once their internal write cache is full!
That warning is there permanently; we do not try to detect whether the disks are connected via a HW RAID controller or not, as that would be hard to do reliably.