What do you want to use the HDDs for? As bulk storage (also for VMs and LXCs)? Then RAIDZ is fine. If, on the other hand, you want to run VMs (i.e. their operating systems and server services) on it, you should reconsider; RAIDZ is rather...
And when estimating memory requirements, also allow 2 GByte for Proxmox VE itself.
I budget roughly 1 GByte of RAM per 1 TB of data storage.
More is always possible.
And depending on the storage type, please set a ZFS quota of about 80 % on the whole pool...
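The 80 % rule of thumb can be applied as a pool-level quota; a minimal sketch, assuming a hypothetical pool named `tank` with 4000 GiB of usable capacity (adjust both to your setup):

```shell
# Hypothetical values; replace with your own pool name and size.
POOL=tank
POOL_SIZE_GIB=4000                          # usable capacity of the pool in GiB
QUOTA_GIB=$(( POOL_SIZE_GIB * 80 / 100 ))   # leave ~20 % headroom for ZFS
# The echo only prints the command; run it directly as root on the PVE host.
echo "zfs set quota=${QUOTA_GIB}G ${POOL}"
```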
Yes, by far.
https://pve.proxmox.com/wiki/ZFS_on_Linux#sysadmin_zfs_limit_memory_usage :
" ... will be set to 10 % of the installed physical memory," - also vielleicht etwas über 3 GiB.
https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Beta_Documentation :
"...with the objective of providing a centralized overview of all your individual nodes and clusters. It also enables basic management like migrations of virtual guests...
Perhaps you can find some hints here: https://forum.proxmox.com/threads/fabu-no-network-connectivity-after-installation-or-after-switching-the-router-can-not-load-the-web-gui-in-a-browser.160091/
I wouldn't trust any production data to something some guy on YouTube claimed works. I also wouldn't trust anything somebody on a forum says who has no experience of their own but just repeats something from YouTube.
I wasn't talking about...
Hi!
What exact error message do you get for the failed backups? Anything of interest in the PVE task log or the corresponding PBS backup task log?
We discovered an issue leading to possible deadlocks. A fix is work in progress, please subscribe...
An important consideration is the needed capacity (rule of thumb: 0.02-0.03 of the HDD storage capacity) and the number of SSDs, since the redundancy of the special device should match the redundancy of the pool:
See also...
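As a back-of-the-envelope check of that rule of thumb (the 20 TB pool size is hypothetical):

```shell
# 2-3 % of HDD pool capacity as special-device budget; 20 TB (20000 GB) assumed.
HDD_POOL_GB=20000
MIN_GB=$(( HDD_POOL_GB * 2 / 100 ))   # lower bound of the rule of thumb
MAX_GB=$(( HDD_POOL_GB * 3 / 100 ))   # upper bound
echo "special device budget: ${MIN_GB}-${MAX_GB} GB (mirrored to match pool redundancy)"
```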
So the feature is certainly not active yet.
# zpool version
zfs-2.3.4-pve1
zfs-kmod-2.3.4-pve1
zpool upgrade
Just test it with raw files instead of a real ZFS pool.
# man fallocate
fallocate -l 1G dev0.raw
fallocate -l 1G...
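A complete throwaway example along those lines (file names and pool name are made up; the `zpool` steps need root on a ZFS-capable host, so they are shown as comments):

```shell
# Create four sparse 1 GiB backing files for a disposable test pool.
for i in 0 1 2 3; do
    fallocate -l 1G "/tmp/dev${i}.raw"
done
# Then, as root on a host with ZFS:
#   zpool create testpool raidz2 /tmp/dev0.raw /tmp/dev1.raw /tmp/dev2.raw /tmp/dev3.raw
#   ... experiment ...
#   zpool destroy testpool && rm -f /tmp/dev?.raw
```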
OK, I think my browser cache was outdated. After refreshing the proxmox webui it is available for selection.
Then it's time to spin up otel collector and test the monitoring.
Oddly enough, after going down a massive Ceph rabbit hole and getting ever-increasing amounts of Ceph+Proxmox content pushed to me by "the algorithm", I was reading through the manual before starting the deployment and spotted ZFS replication...
The Micron 7450 MAX are TLC with PLP, which is good.
With "ZFSPOOL RAIDZ2" you get in fio just the iops of a single disk, recreate pool as mirror.
For ceph you just have a single 25Gb active line for writing default 3 times each block so it's only...
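A rough sanity check of that bandwidth math, assuming a single 25 Gbit/s link and Ceph's default replication factor of 3 (real throughput also depends on topology and protocol overhead):

```shell
LINK_MBIT=25000                       # single 25 Gbit/s link
REPLICAS=3                            # Ceph default size=3
EFF_MBIT=$(( LINK_MBIT / REPLICAS ))  # each block crosses the wire ~3 times
echo "~${EFF_MBIT} Mbit/s (~$(( EFF_MBIT / 8 )) MB/s) effective client write bandwidth"
```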
It seems that the problem has been solved. It is unbelievable, but the problem was the UTP cable between the Proxmox server and the router.
The original 1.5m patch cable was used between the server and the router.
I crimped the RJ45 connectors on...
01/2:00
Let's verify it:
# systemd-analyze calendar --iterations=12 '01/2:00'
Original form: 01/2:00
Normalized form: *-*-* 01/2:00:00
Next elapse: Sat 2025-09-27 13:00:00 CEST
(in UTC): Sat 2025-09-27 11:00:00 UTC
From now...
Well, especially in an "I don't want to take the whole house offline" scenario, I think a two-node cluster with ZFS storage replication is the better approach: if you have a low-power device (like a Raspberry Pi or a NAS) which can act...
That's expected: the more backups you add, the slower GC & verify will get due to how PBS handles deduplication (many small chunk files), and HDDs can't handle that well.
Solutions would be: switch to all-flash storage for your backups, or if...
It would also make host backups a lot easier, because then you would just have to back up the modifiable files (configuration etc.).
So I see the benefits, but I also see a huge issue: a big benefit of Proxmox VE is its flexibility, since it's...