Recent content by Dunuin

  1. vzdump over sshfs - How to use RAM instead of /tmp on host ?

    On Debian (and maybe PVE too, as it is based on Debian?) you can also run... cp /usr/share/systemd/tmp.mount /etc/systemd/system/ systemctl enable tmp.mount ...to mount your /tmp as a tmpfs filesystem. That way your /tmp folder is always stored in RAM. I always do that to reduce SSD wear.
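    The two commands above as a runnable snippet (Debian paths as shipped by systemd; run as root):

    ```shell
    # Install the systemd unit that mounts /tmp as a RAM-backed tmpfs
    cp /usr/share/systemd/tmp.mount /etc/systemd/system/
    systemctl enable tmp.mount
    # After the next reboot, verify that /tmp is now tmpfs:
    findmnt -n -o FSTYPE /tmp
    ```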
  2. Feature Request: advanced restore options in GUI

    What I'm missing are more advanced options when restoring backups: 1.) a way to set the target storage for each individual virtual disk. Right now you can only choose a single target storage and all virtual disks will be restored to that storage. But often my VMs have virtual disks on different...
  3. SSD setup for VM host

    Depends on your workload. The S4510 are cheap ones for read-intense workloads. For mixed workloads they have the S4610 series with better write performance and better durability. As far as I know they no longer have an S4700/S4710 series for write-intense workloads like the discontinued S3710...
  4. SSD setup for VM host

    Also keep in mind that the MX500 are advertised as having "powerloss immunity". But they don't have real "powerloss protection" like enterprise/datacenter SSDs, where each SSD has its own internal backup battery (technically they use capacitors, but they work the same as the BBU of a RAID...
  5. Split ZFS mirror rpool into 2 single disks

    Most people also don't use ZFS deduplication because it costs too much. For most workloads deduplication won't save that much space (it's more useful if you have a DB that stores a lot of identical entries) and enabled deduplication needs a lot more RAM. For each TB of deduplicated storage you should...
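    To see what deduplication is actually achieving and costing on an existing pool, zpool can print the dedup table statistics (the pool name `tank` is just an example):

    ```shell
    # Show dedup ratio and the in-core/on-disk size of the dedup table (DDT)
    zpool status -D tank
    # The overall dedup ratio is also available as a pool property:
    zpool list -o name,size,allocated,dedupratio tank
    ```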
  6. Split ZFS mirror rpool into 2 single disks

    Yes, otherwise it is still a ZFS mirror in a degraded state, because ZFS thinks a mirror member is missing (but it would still continue running fine). Yep, no compression, no dedup, no replication. Snapshots still work. But the benefit would be less overhead, so your SSD might wear 2-3 times less...
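    The detach step as a sketch (pool and device names are hypothetical; check yours against `zpool status` first):

    ```shell
    # Remove one member from the mirror so the pool becomes a clean
    # single-disk pool instead of a degraded mirror:
    zpool detach rpool /dev/disk/by-id/ata-SSD2-part3
    # Alternatively, 'zpool split' turns the detached half into its own
    # importable pool instead of just dropping it from the mirror:
    zpool split rpool rpool2
    ```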
  7. [SOLVED] Probleme mit APT nach Hardwaretausch

    Strange, I'll have to check on my side too. Here APT runs normally with the latest PVE behind the latest OPNsense (as a VM on the same host). My PVE runs on a current Debian Bullseye though and wasn't installed from the PVE ISO. Maybe that makes a difference.
  8. Feature request: volblocksize per zvol

    What I'm really missing is an option to set the volblocksize for individual zvols. Right now it isn't really possible to optimize storage performance, even though ZFS itself would totally allow that. Let's say I have this scenario: ZFS pool: ashift=12, two SSDs in a mirror. VM: 1st zvol is storing the ext4...
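    On the ZFS side this is already possible when creating a zvol by hand; volblocksize is fixed at creation time (dataset name and size below are only illustrative):

    ```shell
    # Create a zvol with a non-default volblocksize for a specific workload
    zfs create -V 32G -o volblocksize=64k tank/vm-100-disk-1
    # Verify the value (it cannot be changed after creation):
    zfs get volblocksize tank/vm-100-disk-1
    ```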
  9. Split ZFS mirror rpool into 2 single disks

    Also keep in mind that single-disk pools won't be able to repair corrupted data, so there won't be any bit rot protection anymore. If you don't need other ZFS features, it might perform better to just use LVM-Thin for the new VM/LXC storage.
  10. Disk Performance Probleme bei KVM Server mit ZFS

    You're right, I thought the wiki article also included write tests. In any case you should always be careful with fio. The examples from the Thomas Krenn wiki or the PVE benchmark paper would, for example, destroy your data if you simply copied the commands into the shell.
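    A safer pattern is to point fio at a dedicated test file instead of a block device or an existing file; fio creates and fills the file itself, so nothing else gets overwritten (path and job parameters are just an example):

    ```shell
    # Random-read benchmark against a throwaway test file
    fio --name=randread-test --filename=/tank/fio.testfile --size=1G \
        --rw=randread --bs=4k --ioengine=libaio --direct=1 \
        --runtime=30 --time_based
    # Clean up the test file afterwards:
    rm /tank/fio.testfile
    ```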
  11. Disk Performance Probleme bei KVM Server mit ZFS

    First you would have to figure out what exactly your workload is. The larger you choose the volblocksize, the faster large writes should be, but also the slower small writes become. With zvols you always have to find a compromise somewhere in the middle, because otherwise one or the other always...
  12. PBS killed a virtual server

    Created one: https://bugzilla.proxmox.com/show_bug.cgi?id=4061
  13. Disk Performance Probleme bei KVM Server mit ZFS

    One thing you can do, for example, is adjust the volblocksize to your workload. Especially when using raidz1/2/3 you should always do that, because otherwise you get massive padding overhead. See here for raidz1/2/3...
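    In PVE the volblocksize used for newly created zvols can be set per ZFS storage via the `blocksize` option (the storage ID `local-zfs` is an example):

    ```shell
    # New zvols on this storage will be created with a 16K volblocksize;
    # existing zvols keep the value they were created with.
    pvesm set local-zfs --blocksize 16k
    ```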
  14. Disk Performance Probleme bei KVM Server mit ZFS

    With ZFS, VMs use block devices (zvols) with a fixed block size (8K by default) and LXCs use filesystems (datasets) with a variable block size (128K by default, which means LXCs can write variable blocks of 4K-128K to files). So if you write a 1MB file in a...
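    You can inspect both values directly; recordsize is an upper bound for datasets, while volblocksize is fixed for zvols (dataset names below are hypothetical):

    ```shell
    zfs get volblocksize tank/vm-100-disk-0     # zvol: fixed, 8K default
    zfs get recordsize tank/subvol-101-disk-0   # dataset: up to 128K default
    ```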
  15. PBS killed a virtual server

    I also have a question, as last week someone asked how to change the VMID of a guest. Is there a CLI command to change the VMID of an existing guest without needing to do a backup+restore? It would make it easier to temporarily restore a VM (with VMID 100) for example as VMID 10100 and then...
