@UdoB You are 100% correct and thanks for calling it out! Tables without units are incomplete. I've updated the table headers to clarify appropriate units.
Cheers
"shutdown" ;-)
Actually, "systemd" is the current tool for this these days - see systemctl list-timers. But classic cron often feels simpler.
If you power on the PBS "at some point" during the day and roughly know when...
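As a minimal sketch of the systemd route (the unit name, time, and script path below are hypothetical placeholders, not from this thread):

```ini
# /etc/systemd/system/pbs-backup.timer  (hypothetical unit name)
[Unit]
Description=Run backup shortly after the PBS is usually powered on

[Timer]
OnCalendar=*-*-* 10:30:00
Persistent=true

[Install]
WantedBy=timers.target
```

A matching pbs-backup.service containing the actual command is also needed; enable with `systemctl enable --now pbs-backup.timer`. The classic cron equivalent would be a single crontab line such as `30 10 * * * /path/to/backup.sh`.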
I am sure some people do.
Three nodes is the absolute minimum, as the official documentation states. While I do not run Ceph currently, I did use it last year in my homelab; some findings...
Each K and M chunk must be on a different host, because you want your failure domain to be host (the default), not disk: if the failure domain were disk, some PGs could end up with too many K or M chunks (or both!) on the same host, and if that host goes down...
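To sketch the arithmetic behind this (the profile name and the k/m values are assumed examples, not from the thread): with k data chunks plus m coding chunks and crush-failure-domain=host, every chunk lands on a distinct host, so you need at least k+m hosts and can survive the loss of any m of them.

```shell
#!/bin/sh
# Hypothetical erasure-code profile: k=4 data chunks + m=2 coding chunks.
# On a real cluster you would create it with something like:
#   ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
k=4
m=2
# With failure domain "host", each of the k+m chunks goes to its own host.
echo "hosts required: $((k + m))"
echo "host failures tolerated: $m"
```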
The best practice is to have a dedicated HBA controller for your storage devices (e.g. a SATA HBA controller, or a RAID controller in HBA/IT mode) and pass it through via PCI...
For reference: @UdoB explained why RAIDZ is a bad idea for vm storage here:
https://forum.proxmox.com/threads/fabu-can-i-use-zfs-raidz-for-my-vms.159923/
High peak speeds on consumer NVMe look great in benchmarks but don’t matter in real workloads. Enterprise SSDs are built for consistency, with stable performance even under sustained load, while desktop drives quickly drop off once their cache is...
I really appreciate your posts about testing storages, even though it is one (or two) levels above my world.
But please, a bare numerical value without a known unit is not just problematic, it is incomplete.
Hi @pmvemf,
Following up on this, I asked our performance team to review the kernel iSCSI stack (what you referred to as the "Proxmox native initiator," which is in fact the standard Linux initiator).
Our testing with PVE9 showed no functional...
Well, PBS will need less storage space and can be leveraged for ransomware protection:
https://pbs.proxmox.com/docs/storage.html#ransomware-protection-recovery
It also allows live-restore; if I recall correctly, that's not possible the other way...
Hi,
You can override that URL in the remote settings' "Web UI URL" field. For example, if you want to use the same URL, but without the default 8006 port, you would enter the plain HTTPS URL (as the browser defaults to port 443 anyway), e.g...
That's the curse of such physically small devices. Keep in mind that "not ideal" may mean different things; one not obvious aspect in one of my systems is this:
rpool ONLINE 0 0...
Hi @DJohneys , welcome to the forum.
This is not PVE specific but rather standard Linux administration. There are many ways to do what you want:
echo 'export http_proxy="http://proxy.example.com:8080"' >> ~/.bashrc
echo 'export...
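One caveat worth noting: ~/.bashrc only affects interactive shells of that user. A quick sketch of exporting and verifying the variable (the proxy host is the same placeholder used above, not a real proxy):

```shell
#!/bin/sh
# Export for the current shell and verify; proxy.example.com:8080 is a placeholder.
export http_proxy="http://proxy.example.com:8080"
export https_proxy="http://proxy.example.com:8080"
echo "$http_proxy"
# For a system-wide setting, you could instead append the same two
# assignments (without "export") to /etc/environment.
```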
I operate a small 5-node HA test cluster on PVE 9 with Ceph. During fault-injection tests (power loss, forced resets, cable disconnects) I observed that when Ceph OSDs were located on SSDs without power-loss protection (PLP), virtual machines...
You basically saved your data once encrypted and once unencrypted, thus in the end storing everything twice. As soon as every unencrypted backup has been pruned or manually removed, you should notice that the storage space occupied...
You have basically three options together with the integrated high-availability solution (https://pve.proxmox.com/wiki/High_Availability):
Using Ceph
Using some shared storage (like a NAS with NFS, or a storage array attached via iSCSI or...
If data consistency is important to you, then ZFS is the right choice, and you can only be truly safe with ECC RAM. Because if errors occur in RAM, even ZFS cannot help.
It's not only the IOPS bottleneck, but also:
- Parity must be recalculated and rewritten for every small change
- Sync-write-heavy applications (e.g., databases) suffer massively under RAIDZ
RAIDZ should only be used for "cold" storage. For...
I would try again with "Enterprise Class SSDs" with PLP / "Power-Loss-Protection".
...and with mirrors, not a RAIDZ2 - as that gives you only the IOPS of a single device.
Of course SATA is massively slower than PCIe --> if possible, switch...
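To illustrate the mirrors-vs-RAIDZ2 point with rough numbers (the device names, pool name, and per-disk IOPS figure below are assumptions for illustration, not measurements):

```shell
#!/bin/sh
# Four disks as two striped mirrors; each mirror vdev adds write IOPS:
#   zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
# The same four disks as one RAIDZ2 vdev; roughly single-disk write IOPS:
#   zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
disks=4
per_disk_iops=10000   # assumed per-disk write IOPS
echo "mirror pool vdevs: $((disks / 2)), approx write IOPS: $((disks / 2 * per_disk_iops))"
echo "raidz2 vdevs: 1, approx write IOPS: $per_disk_iops"
```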