@nhh,
no, different storage names are not an option here: replication requires the same storage name on both nodes, otherwise it will not work.
That is simply how the Datacenter summary displays it. Have a look at the...
Syncoid is a tool for "ZFS send/receive". It does work on a PVE installation, but it has nothing to do with PVE --> it lacks integration.
Regarding LVM: see the last sentence of post #2 ;-)
Greetings,
I have put a Proxmox training manual together for my team (and others), and I would like to offer it here for review/use by the community. This is not intended to be a technical deep-dive, but rather a practical point-to-point guide to help...
@shanreich Thank you for the document. Based on what I see in the charts, 25Gb is more than sufficient for our needs (as expected, since that is what our storage traffic currently runs on).
Do you have any input on the design @Nexces has used...
Since I haven't seen it mentioned yet, there is also our Ceph Benchmark paper from late 2023 [1]
[1] https://www.proxmox.com/images/download/pve/docs/Proxmox-VE-Ceph-Benchmark-202312-rev0.pdf
Then imho this warrants writing to office@proxmox.com to ask for help. I mean, you paid for support, so I see no reason why you should figure this out on your own ;) As far as I know, the person/s behind that address handle everything subscription...
@x509
corosync is on 1G "private" link
no redundancy for 25G; a single node failure is accepted, as the datacenter housing those servers has 24h service and spare parts, so a faulty node will be back up in a matter of minutes/hours
there is a total of 70 VMs...
Just wanted to add a quick comment for anyone who might find this post via Google or similar:
This is no longer needed, as the whole functionality (including comments/notes) is now built into vma-to-pbs directly. It can just be pointed at a...
After reading through the various discussions about the O_DIRECT bug/feature, I understand that this primarily affects VMs configured with cache mode set to none (which is the default in Proxmox).
I’m planning to run the C code to reproduce the...
Hi, for everyone who has boot issues on kernel 6.17.4-2 if Intel VMD is enabled: One option would be to try kernel 6.17.9-1 (currently on pve-test [1] ), it contains a potential fix [2] for the issue. If you test 6.17.9-1, it would be great if...
Thanks for your quick reply. I checked and there's no hint:
/etc/fstab is practically empty.
systemctl status '*.mount' gives no hints. (It shows a lot, including the new, working CIFS shares, but not the old ones.)
I scrolled back a bit in journal...
My bad, I checked again: the messages don't appear after the last reboot. So I guess there was some function in the system that still tried (for weeks now!) to communicate with the once-lost CIFS/NFS connections. But after a reboot this...
If by redundancy you mean disk fault tolerance: the higher the number after "raidz", the higher the fault tolerance. In practice, use raidz2 or better (never use single-parity raidz unless you are prepared to lose the pool at any time).
For performance: striped mirrors...
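To make the trade-off concrete, here are hypothetical pool layouts (pool name "tank" and the /dev/sdX device names are made up; these commands destroy data on the named disks, so treat them as a sketch only):

```shell
# raidz2 over six disks: any two disks may fail; capacity-efficient,
# but each vdev delivers roughly the IOPS of a single disk
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# three striped mirrors over the same six disks: one disk per mirror may
# fail; only 50% usable capacity, but much better IOPS and resilver times
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
```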
* FYI, bzfs is written in Python (not a "shell script"), compression is configurable via CLI options, and it is about as robust as zrepl, and far more robust and reliable than syncoid.
* I think zrepl (and syncoid) have different focus and...
Ok, we already did zpool replace -f <pool> <old-device> <new-device>
But now we should remove the device, wipe it, reinsert it, and do the following:
sgdisk <healthy bootable device> -R <new device>
sgdisk -G <new device>
zpool replace -f...
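For illustration, the sequence on a typical PVE ZFS root pool might look like this (device names /dev/sdX and /dev/sdY and the pool name rpool are hypothetical; on a standard PVE install, partition 2 is the ESP and partition 3 holds ZFS, but verify this on your own disks first). sgdisk -R copies the partition table from the healthy disk, -G gives the clone fresh GUIDs, and zpool replace should target the ZFS partition rather than the whole disk:

```shell
# 1. copy the partition layout from the healthy disk to the new one
sgdisk /dev/sdX -R /dev/sdY

# 2. randomize GUIDs so the clone does not collide with the original
sgdisk -G /dev/sdY

# 3. replace the faulted device with the ZFS partition of the new disk
zpool replace -f rpool /dev/sdX3 /dev/sdY3

# 4. if this is the boot pool, make the new disk bootable as well
proxmox-boot-tool format /dev/sdY2
proxmox-boot-tool init /dev/sdY2
```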