Search results

  1.

    linux headers aren't installed

    No, I think it's because the Proxmox kernel and headers simply have a different name. However, if you use dkms, just install proxmox-headers as well. It will stay installed and update automatically, like leesteken already said. I don't see a downside.
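    A minimal sketch of the suggested install. The post says "proxmox-headers"; on current PVE 8 the meta-package is proxmox-default-headers, so that name is used here as an assumption. The command is printed rather than executed, since it needs a Proxmox node and root:

```shell
# Install dkms plus the Proxmox header meta-package (assumed name:
# proxmox-default-headers, which tracks the running PVE kernel).
# Printed instead of run, so this sketch works anywhere.
cmd="apt install dkms proxmox-default-headers"
echo "$cmd"
```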
  2.

    Help with storage

    - ls -lah /backup/dump/* - You don't need a cluster; just check the Storage category and the Cluster view (the Cluster view is the first item in the GUI, above your server) - If you're sure there can't be anything in it, you can remove that folder, but first check what it contains.
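    A sketch of that check, run against a scratch directory for safety; on a real node point dump_dir at /backup/dump. The vzdump-* file name below is a fabricated example of PVE's backup naming scheme:

```shell
# Inspect a dump directory before deciding whether to remove it.
# A scratch dir stands in for /backup/dump; the file is a fake example.
dump_dir="$(mktemp -d)"
touch "$dump_dir/vzdump-qemu-100-2024_01_01-00_00_00.vma.zst"
ls -lah "$dump_dir"
# vzdump backups follow a vzdump-<type>-<vmid>-<timestamp> pattern:
count=$(find "$dump_dir" -maxdepth 1 -name 'vzdump-*' | wc -l)
echo "backup archives found: $count"
```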
  3.

    Current hardware recommendations for enthusiast home build

    Passthrough with AMD onboard GPUs doesn't work. You can use them only in LXC containers, for Plex for example. PS: I would build a home server based on the new AMD 4004 series. That's a relatively cheap platform with 28 lanes. Everything else has only 16/20/24 lanes, and above that there are only...
  4.

    Help with storage

    Check what's in there. Check your storage configuration in the Proxmox cluster section; that directory must be used somewhere there. Or check your crontab. "dump" folders usually hold backups, and I think images too, but I'm unsure.
  5.

    Ubuntu 24.04 LXC containers fail to boot after upgrade

    But what does this have to do with Proxmox?
  6.

    Ubuntu 24.04 LXC containers fail to boot after upgrade

    What does "lost networking in GUI" mean, and which GUI?
  7.

    Relay to Mailservers based on TO: Email-Addr

    I don't use PMG; I just wanted to evaluate whether I should try it. But without that ability I see no reason to switch from standard Postfix + postfixadmin to PMG. Basically, postfixadmin does that via a mapping. Cheers :-)
  8.

    Relay to Mailservers based on TO: Email-Addr

    Yeah, indeed, I wrote something hard to understand :) What I meant was: rama@golima.de -> Kerio, stoiko@golima.de -> Exchange, lama@golima.de -> Zimbra. Routing to a mail server based on the destination email address, not domain-based. All users are under the same domain. Thanks stoiko :)
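    What postfixadmin does via its mapping corresponds to a plain Postfix transport map keyed on full recipient addresses rather than domains. A sketch, with made-up destination hostnames (only the addresses come from the post):

```
# /etc/postfix/transport -- per-recipient routing (hosts are placeholders)
rama@golima.de     smtp:[kerio.example.internal]
stoiko@golima.de   smtp:[exchange.example.internal]
lama@golima.de     smtp:[zimbra.example.internal]
```

    Activate it with "transport_maps = hash:/etc/postfix/transport" in main.cf and rebuild the map with "postmap /etc/postfix/transport". Full-address entries take precedence over domain entries, which is exactly the per-user routing described above.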
  9.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Let's draw a conclusion, because this thread is getting too long. Local storage / PVE default settings (zstd, 1 thread): ZSTD -> 1 GB/s limit, LZO -> around 600 MB/s, GZIP -> around 40 MB/s, none -> 4.5-5 GB/s. Local storage / zstd with 32 threads + pigz: ZSTD -> 1 GB/s limit, LZO -> didn't test, but I don't expect...
  10.

    [SOLVED] Force delete of "Pending removals"

    #!/bin/bash
    base_dir="/datasets/Backup-HDD-SAS/.chunks/"
    yesterday=$(date --date="yesterday" '+%Y-%m-%d %H:%M')
    total_dirs=$(find "$base_dir" -mindepth 1 -maxdepth 1 -type d | wc -l)
    echo "Total directories to process: $total_dirs"
    processed=0
    find "$base_dir" -mindepth 1 -maxdepth 1 -type d...
  11.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    https://bugzilla.proxmox.com/show_bug.cgi?id=5481 I'm not completely sure it's SSL, but almost nothing else is left. I cannot break the 1 GB/s barrier, no matter which hardware. And I'm absolutely sure that no one can if the destination is a PBS. The alternative that's left is to ditch...
  12.

    [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    iperf3 -c 172.17.1.131 -P4
    Connecting to host 172.17.1.131, port 5201
    [  5] local 172.17.1.132 port 48106 connected to 172.17.1.131 port 5201
    [  7] local 172.17.1.132 port 48108 connected to 172.17.1.131 port 5201
    [  9] local 172.17.1.132 port 48114 connected to 172.17.1.131 port 5201
    [ 11]...
  13.

    [SOLVED] [iperf3 speed] Same Node vs 2 Nodes (Found a Bug)

    Okay, I'm further along in my research; there does indeed seem to be a bug. If I do an iperf3 test from an LXC container to the node directly:
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  4.44 GBytes  37.9 Gbits/sec    0    434 KBytes
    [  5]   1.00-2.00   sec  4.33 GBytes...
  14.

    [SOLVED] NVME disk "Available Spare" problem.

    Thank god you didn't listen to me xD
  15.

    Worse performance with higher specs server

    PS: I forgot to mention: echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor should roughly double fsync results. It's a dirty hack, but it makes it a lot more reliable to test one server against another, because with ondemand you don't know at which frequency your test was...
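    A sketch of that governor switch, wrapped in a small function so it can also be reverted to ondemand after benchmarking. It is exercised here against a scratch directory so it runs anywhere; on a real node, base would be /sys/devices/system/cpu and root is required:

```shell
# Set the cpufreq governor in every cpu*/cpufreq/scaling_governor under $base.
set_governor() {
    base="$1"; gov="$2"
    for f in "$base"/cpu*/cpufreq/scaling_governor; do
        [ -w "$f" ] && echo "$gov" > "$f"
    done
}

# Scratch tree standing in for /sys/devices/system/cpu (assumption for the demo):
base="$(mktemp -d)"
mkdir -p "$base/cpu0/cpufreq" "$base/cpu1/cpufreq"
echo ondemand > "$base/cpu0/cpufreq/scaling_governor"
echo ondemand > "$base/cpu1/cpufreq/scaling_governor"

set_governor "$base" performance
cat "$base"/cpu*/cpufreq/scaling_governor
```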
  16.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Okay, it's not zstd itself but a bug somewhere in Proxmox. Per https://pve.proxmox.com/pve-docs/chapter-vzdump.html#vzdump_configuration you can set "zstd: 1". I did a test backup to local storage and zstd indeed runs with 32 threads!!!! But the backup speed still hits all the same limits! No...
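    The option in question is a node-wide setting in /etc/vzdump.conf; per the linked documentation, the value is the zstd thread count, where 0 means half of the available cores. A sketch of the fragment:

```
# /etc/vzdump.conf -- node-wide vzdump defaults
compress: zstd
zstd: 0          # zstd thread count; 0 = half of the available cores
```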
  17.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Actually, let's think differently: I can use no compression, because the files get compressed with LZ4 on the backup server anyway, at least via ZFS. But this is still a bummer. It simply means that 1 GB/s is the maximum backup speed for everyone who doesn't disable compression. Cheers EDIT...
  18.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Has anyone here reached a backup speed higher than 800 MB/s or 1 GB/s? Maybe that's some sort of PBS limit. It's getting weirder! I created a VM with PBS on another Genoa server; the measured write speed inside the VM is 1.5 GB/s and read is around 5 GB/s. But that's a ZVOL issue that I'm aware of; the...
  19.

    ZFS Datastore on PBS - ZFS Recordsize/Dedup...

    Let's start with the basic tuning parameters that I use:
    -o ashift=12 \
    -O special_small_blocks=128k \
    -O xattr=sa \
    -O dnodesize=auto \
    -O recordsize=1M \
    That means logbias is at its default (latency), plus a special vdev. Tests: logbias=latency + special vdev: INFO: Finished Backup of VM 166...
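    Assembled into a full command, those flags belong on zpool create. Pool name, vdev layout, and device paths below are placeholders (only the option values come from the post); the command is printed rather than executed, since it needs real disks:

```shell
# Options as quoted above; "tank" and the device names are placeholders.
opts="-o ashift=12 -O special_small_blocks=128k -O xattr=sa -O dnodesize=auto -O recordsize=1M"
echo "zpool create $opts tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd special mirror /dev/nvme0n1 /dev/nvme1n1"
```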
  20.

    [SOLVED] Force delete of "Pending removals"

    That's still the only way to delete stupid chunk files if you need to, lol. However, that command loops over all files and touches them individually, which is a big loop, and the execution time of touch comes into play as well. I have a better idea to speed that up by at least a factor of...
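    The post's actual trick is cut off above; one common way to cut the per-file overhead it describes is to let find batch many files into each touch invocation via "-exec ... {} +" instead of spawning touch once per file. A sketch against a scratch directory (the .chunks path is only simulated):

```shell
# Scratch dir stands in for the datastore's .chunks directory.
base_dir="$(mktemp -d)"
touch "$base_dir/a" "$base_dir/b" "$base_dir/c"

# "{} +" appends as many paths as fit into one touch call (like xargs),
# so the number of touch processes drops from N to roughly N/ARG_MAX.
find "$base_dir" -type f -exec touch -d "yesterday" {} +

# All files now carry the backdated mtime:
find "$base_dir" -type f ! -newermt "12 hours ago" | wc -l
```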