16 consumer NVMe drives. Every write Ceph does is a sync write, and any drive without PLP will show high latency and, once its cache fills, poor sequential performance. Keep in mind that you have to write to 3 disks, and besides the data itself it has to write...
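As a rough illustration (not from the post above), a hypothetical fio run that shows the effect: sync writes at queue depth 1 are close to what an OSD's WAL sees, and consumer drives without PLP tend to collapse here. The device name is a placeholder and the test destroys the device's contents.

# WARNING: writes directly to the device (placeholder name) and destroys its contents
fio --name=plp-test --filename=/dev/nvme0n1 --ioengine=libaio \
    --direct=1 --sync=1 --rw=write --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based --group_reporting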
Write Back always writes to RAM first.
The Ceph benchmark always uses multiple streams; your DiskMark runs only one, on a single vDisk. With multiple VMs the performance looks good as well.
In production I have no performance problems anywhere, not even with...
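To illustrate the multi-stream point, a hedged example (pool name is a placeholder): a rados bench write with 16 concurrent threads resembles the Ceph benchmark far more than a single-threaded benchmark inside one guest does.

# 60-second write benchmark with 16 parallel threads, keep objects for the read test
rados bench -p testpool 60 write -t 16 --no-cleanup
# sequential read of those objects, then clean up
rados bench -p testpool 60 seq -t 16
rados -p testpool cleanup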
Try using CPU x86-64-v2-AES or x86-64-v3-AES instead of "host."
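For reference, the CPU type can also be changed on the command line (the VMID below is a placeholder); the change takes effect on the next VM start:

qm set 100 --cpu x86-64-v2-AES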
A status update on this:
Two corosync parameters that are especially relevant for larger clusters are the "token timeout" and the "consensus timeout". When a node goes offline, corosync (or rather the totem protocol it implements) will need to...
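For illustration, both values live in the totem section of /etc/pve/corosync.conf. The numbers below are placeholders, not recommendations; consensus must stay larger than token (corosync derives it from token by default):

totem {
  version: 2
  # total token timeout in milliseconds (example value)
  token: 3000
  # membership consensus timeout, must be larger than token (example value)
  consensus: 3600
}

When editing /etc/pve/corosync.conf, remember to also increment config_version so the change propagates across the cluster.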
Could you please describe your setup in more detail?
ceph osd df tree
qm config <VMID>
ceph osd pool ls detail
ceph osd pool autoscale-status
Please always put your output in [ CODE ] tags (the "</>" in the menu at the top), then it can be much more clearly...
It is good practice to separate Ceph data traffic onto another interface, or at least onto another VLAN, and to use 10 Gb or more for this traffic. You should keep in mind that modern SSDs have very high transfer rates and need a network to match this...
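As a sketch (interface name and subnet are placeholders), a dedicated 10 Gb interface for Ceph could look like this in /etc/network/interfaces, with the matching subnet set in the Ceph configuration:

auto enp65s0f0
iface enp65s0f0 inet static
    address 10.10.10.11/24
    mtu 9000

# in /etc/pve/ceph.conf, point Ceph at that subnet, e.g.
#   public_network  = 10.10.10.0/24
#   cluster_network = 10.10.10.0/24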
It is not a critical error; BlueStore has recovered from the failure. The cause can be a one-off problem with the network or with some hardware element (controller, disk), but it appears to be due to slow responses from these OSDs.
To remove the warning:
ceph...
Hi, from a quick look, the "Retransmit" messages may be a symptom of network stability issues (e.g. lost packets, increased latency, etc.) that are more likely to occur if corosync shares a physical network with other traffic types -- I'd expect...
There is some documentation available for Corosync network setup: https://pve.proxmox.com/wiki/Cluster_Manager#pvecm_cluster_network
Using bonds is not advised, and sharing the link with other traffic types is not advised either.
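A hedged example of what that looks like in practice: when creating the cluster, corosync can be pinned to a dedicated network and given a second, redundant link instead of a bond (all addresses are placeholders):

# first node: dedicated corosync network on link0, a second network as fallback on link1
pvecm create mycluster --link0 10.10.20.11 --link1 10.10.30.11
# joining nodes specify their own addresses for the same links
pvecm add 10.10.20.11 --link0 10.10.20.12 --link1 10.10.30.12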
Yep, not a whole lot to say really - we have a 5-node Ceph cluster (only 4 of which hold storage) and the SQL server's drives are on enterprise-SSD-backed Ceph storage. Zero complaints about stability or performance.
Couldn't see myself moving...
ZFS is local storage, not shared storage. You can't remotely access it across nodes. What's possible is to replicate virtual disks between two IDENTICALLY named ZFS pools so both pools store a local copy of the same data (which also means losing half the...
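For illustration, with identically named ZFS pools on both nodes such a replication job can be set up from the CLI (VMID, target node name and schedule below are placeholders):

# replicate VM 100's disks to node pve2 every 15 minutes (job id is <vmid>-<number>)
pvesr create-local-job 100-0 pve2 --schedule "*/15"
# check replication status
pvesr status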
Alright, well I ended up creating a whole project around this request, starting from the nice work that Jorge did. It just needed a bit more work and features for my use case (which is deploying OpenShift/OpenStack for development and using the...
We're very excited to present the first stable release of our new Proxmox Datacenter Manager!
Proxmox Datacenter Manager is an open-source, centralized management solution to oversee and manage multiple, independent Proxmox-based environments...
The name of the network interface is simply too long when combined with the VLAN, since the maximum length of network interface names is 15 characters. You will need to rename the interface [1].
[1]...
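One common way to do the rename (not necessarily what [1] describes) is a systemd .link file matching the NIC's MAC address; the MAC address and the new name below are placeholders:

# /etc/systemd/network/10-lan0.link
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0

Afterwards, update /etc/network/interfaces to use the new name and reboot.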
We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement.
This release is based on Debian 13.2 "Trixie" but we're...
I'd go with U.2 NVMe enterprise drives in a ZFS mirror. Depending on the size requirements, striped mirrors. All in a tower workstation or server from a known brand. You can of course have it cheaper with parts, but then you also need to get the...
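A minimal sketch of the striped-mirror layout with four hypothetical NVMe devices:

# two mirrored pairs striped together (device names are placeholders)
zpool create -o ashift=12 tank \
    mirror /dev/disk/by-id/nvme-A /dev/disk/by-id/nvme-B \
    mirror /dev/disk/by-id/nvme-C /dev/disk/by-id/nvme-D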
Yes, PVE works excellently with NVMe/TCP. It ships with a recent enough kernel that has stable NVMe/TCP support.
We test and support iSCSI and NVMe/TCP equally for our PVE customers.
Blockbridge : Ultra low latency all-NVME shared storage for...
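For reference, a hedged sketch of attaching an NVMe/TCP namespace from a PVE node with nvme-cli; the address, port and NQN are placeholders, and a production shared-storage setup would normally go through a storage integration rather than manual commands:

modprobe nvme-tcp
# discover and connect to the target (placeholder address/port/NQN)
nvme discover -t tcp -a 192.0.2.10 -s 4420
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2014-08.org.example:subsystem1
nvme list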