Search results

  1.

    [SOLVED] Raidz expansion

    Hello there, I have seen a couple of comments on Reddit & GitHub that raidz expansion is currently being developed / is already in beta. Will that feature also work for PVE, or do I still need all disks available at zpool create?
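    For reference, a hedged sketch of how the expansion is invoked once the feature is available (it shipped with OpenZFS 2.3; the pool and device names here are made up):
        # grow an existing raidz vdev by one extra disk
        zpool attach tank raidz1-0 /dev/sdX
        # watch the expansion progress
        zpool status tank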
  2.

    HDD queue settings for VM

    While doing my tests I stumbled across a very weird behaviour: My Win22k server used IPv4 & IPv6 with dedicated addresses. Current settings: agent: 1 bios: ovmf boot: order=scsi0;ide2 cores: 8 cpu: host efidisk0: local-lvm:vm-109-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M ide2...
  3.

    HDD queue settings for VM

    Hello there, can someone tell me the "secret" of where to place queue settings for a Windows VM? Someone from Proxmox staff told me some years ago to add this to the guest config itself - but I am unable to find that thread, and Google is not turning up anything useful either. Background is that I am...
  4.

    ZFS volblocksize per VM disk instead of pool

    This topic is also a feature request here --> https://forum.proxmox.com/threads/feature-request-volblocksize-per-zvol.109693/
  5.

    Clustering issue

    You need a 3rd vote to prevent split brain. You can use a Raspberry Pi as a QDevice, which is probably the cheapest option. As an alternative you can install PVE on an old PC just to act as a voting device for the cluster.
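    A minimal sketch of the QDevice route, assuming the Pi runs a Debian-based OS and is reachable at 192.168.1.50 (placeholder address):
        # on the Raspberry Pi: install the external vote daemon
        apt install corosync-qnetd
        # on every PVE cluster node: install the qdevice client
        apt install corosync-qdevice
        # from one node: register the QDevice, then confirm the extra vote
        pvecm qdevice setup 192.168.1.50
        pvecm status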
  6.

    Proxmox Arm64 Port. Any review / Advise ?

    PVE is currently not supported on ARM. I guess you can do it via trial and error.
  7.

    Feature request: volblocksize per zvol

    I am also struggling with this.
  8.

    ZFS volblocksize per VM disk instead of pool

    In general yes: I run different services in different VMs, but sometimes also on the same storage, like my Postgres DB VM and my webserver VM. Because of that, my guess was that it might be useful to use a different blocksize per VM instead of one general setting per storage.
  9.

    ZFS volblocksize per VM disk instead of pool

    No, in general I run a bit more on my servers. That was just mentioned as an example, if I understood it right.
  10.

    ZFS volblocksize per VM disk instead of pool

    I run different things on e.g. the same SSD:
    1x Postgres DB
    1x Ubuntu webserver
    On a 2nd PVE I run a Windows server:
    C: for OS and the integrated Windows SQL database
    D: music
    E: backups from users
    F: media
    C = 8k
    D - F = 32k or higher
    Would that make sense?
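    A rough sketch of what that could look like on the ZFS side; the pool and zvol names are invented, and volblocksize can only be set when the zvol is created (PVE itself normally just applies the storage-wide blocksize option):
        # small blocks for the Postgres/OS disk
        zfs create -s -V 100G -o volblocksize=8k tank/vm-101-disk-0
        # larger blocks for the media/backup disks
        zfs create -s -V 2T -o volblocksize=64k tank/vm-102-disk-1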
  11.

    ZFS volblocksize per VM disk instead of pool

    So this also means that if a VM uses different disks on different ZFS pools, I may also use different volblocksizes - am I right? E.g. Ubuntu, root partition with a Postgres DB = 8k volblocksize + 2nd partition used for SMB storage on a different PVE ZFS pool = 1M volblocksize.
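    One way to get that today is a separate PVE storage definition per dataset, each with its own default blocksize; a sketch with invented storage IDs and pool names (very large values like 1M may be rejected depending on the ZFS version and pool features):
        pvesm add zfspool db-disks --pool tank/db --blocksize 8k --content images,rootdir
        pvesm add zfspool smb-disks --pool big/smb --blocksize 128k --content images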
  12.

    ZFS volblocksize per VM disk instead of pool

    Not so sure, as the page mentions: So from this point of view the volblocksize should also match the "thing" you run on it (database, storage, and so on).
  13.

    ZFS volblocksize per VM disk instead of pool

    Hello everyone, I took a deep dive into the ZFS blocksize topic and found some useful information, e.g.: Source: https://klarasystems.com/articles/tuning-recordsize-in-openzfs/ Not sure if this is still valid, but wouldn't it be better in general to define the volblocksize per VM disk within...
  14.

    zfs TB eater

    Just moving the VM disk from one storage pool to another and back again?
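    If that is the idea, a sketch with the qm CLI (VMID 100, disk scsi0 and the storage names are placeholders; older releases spell the command qm move_disk):
        # rewrite the disk by moving it to another storage and back again
        qm move-disk 100 scsi0 other-pool --delete 1
        qm move-disk 100 scsi0 original-pool --delete 1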
  15.

    PCI passthrough in Proxmox 8.0 on TrueNAS Scale does not work

    You could also pass your disks directly through to the VM (TrueNAS) - without using the whole SATA controller.
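    A rough sketch of per-disk passthrough with qm (VMID 100 and the disk ID are placeholders):
        # find the stable by-id path of the disk
        ls -l /dev/disk/by-id/
        # attach it to the VM as an additional SCSI disk
        qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL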
  16.

    PCI passthrough in Proxmox 8.0 on TrueNAS Scale does not work

    Are you passing the SATA controller through from the host, or do you have a dedicated storage controller for that?
  17.

    Migrating a running local Windows installation to a Proxmox VM

    See here --> https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Physical-to-Virtual_.28P2V.29
  18.

    zfs TB eater

    I use this ZFS pool mostly for storing films and audio. Will that also be a benefit on a single-vdev zpool, or only for raidz?
  19.

    zfs TB eater

    But will that also affect the performance of the underlying VM as mentioned in my previous post?
  20.

    AER: Multiple Corrected error received: 0000:00:1c.5

    And on your server itself, have you set up PCIe passthrough accordingly and already tested it successfully? Which "thing" would go in that PCIe slot?