K=6, M=2 gives 6 data chunks per 8 total chunks: 6/8 = 0.75 usable.
With replication you have 1 data copy per 3 stored: 1/3 ≈ 0.33.
It's not exactly the "same" availability, though, because survivability in a replicated pool is much higher; you only need one living OSD per...
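The capacity arithmetic above can be sketched in a few lines (a minimal illustration; the function name is mine, and the parameters are just the 6+2 EC profile and size-3 replication from this thread):

```python
def usable_fraction(data_chunks: int, total_chunks: int) -> float:
    """Fraction of raw capacity that holds actual data."""
    return data_chunks / total_chunks

# Erasure coding K=6, M=2: 6 data chunks out of 8 stored
ec = usable_fraction(6, 6 + 2)   # 0.75

# 3-way replication: 1 data copy out of 3 stored
rep = usable_fraction(1, 3)      # ~0.33

print(f"EC 6+2 usable: {ec:.2f}, replica-3 usable: {rep:.2f}")
```

Note this only compares raw-capacity efficiency; as the post says, it ignores the very different failure-survival behaviour of the two pool types.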
Hello wondimu,
May we know more about the disks? Or perhaps get a PVE report? It is possible that only some of the disks are stored on media that allows replication.
A report can be generated by going to Datacenter > Node > Subscription > Click...
Although they were about replicated pools (so no EC), the following reads might serve as a hint as to why (outside of experiments/lab setups) it's not a good idea to go against the recommendations...
@x509
corosync is on 1G "private" link
no redundancy for 25G: a single node failure is accepted, since the datacenter housing those servers has 24h service and spare parts, so a faulty node will be back up in a matter of minutes or hours
there is total of 70 VMs...
Take this with a huge grain of salt. I don't know you or your customers :)
IMHO you probably don't need HA. Redundant PSUs and local storage are more than enough, and your next point is a good explanation why.
That is not automatically real HA...
I see that we have replication enabled between the two nodes, but one of the VMs is only replicating half of its disks. Is there a solution to fix this?
Hello everyone,
I’m currently working on an automation project to provision virtual machines in Proxmox 9 using Terraform (BPG provider) together with cloud-init. The virtual machines are based on Ubuntu 24.04 cloud images.
The provisioning...
Just upgraded myself. Went just fine, no issues. 3 OSDs. I got some interesting data after upgrading:
Ceph Squid → Tentacle Upgrade Benchmark Summary
Cluster: 3-node Proxmox (Intel NUC14, NVMe)
Pool: ceph-vms (replicated, size 2 / min_size 2)...
Hi Fiona,
No, I did not use snapshot-as-volume-chain; I changed the cluster size to 1M instead of 64k. I would like to share more information on how to implement it (installation scripts, etc.) if you can include this in the next version, etc...
Apologies for necro'ing this thread, but it is the first hit on Google, and it's the one I kept stumbling back to when searching for the error
"create storage failed: failed to spawn fuse mount, process exited with status 65280 (500)" when looking...
My company has been using Proxmox for a while now and has deployed it in a variety of use cases, and I can confirm that Blockbridge has definitely been the most straightforward way to address the problems you've mentioned. While Proxmox doesn't official...
According to a Proxmox developer, this has already been considered, and the conclusion was that it would be counterproductive:
- There used to be a way to donate; the resulting revenue (or rather, the lack thereof)...
Yes, or you pay nothing at all and simply dismiss the message. Or you google how to get rid of it, in case this thing actually gives you sleepless nights. ;)
Regarding supporting them with "peanuts", according to the official...
That will work if you have a quorum machine. We used DRBD+LVM in active/active mode heavily with XEN before we switched to PVE over 10 years ago. We used clvm with XEN, and initially with PVE as well, but it was replaced with PVE-internal logic to...
Backups with PBS are already "application aware" to an extent, as long as the qemu-guest-agent is installed on the VMs, because, as you already wrote, a freeze is executed via the qemu-agent, so any in-flight...
Hi guys,
I would like to upgrade my PVE v.8 to v.9.
pve8to9 --full returns 2 warnings:
Could you please tell me what I should do to get a smooth upgrade?
Thanks for your kind help.
We are a Blockbridge customer for our cloud (2 yrs, 2 zen4-48 clusters) and are working to migrate some larger customers onto their own Blockbridges. Can't say enough about their storage knowledge and support. They are one of my favorite vendors...