Search results for query: ZFS fragment

  1. K

    Big 'host' backup failed

    Hello! We have a PBS 4.1.0 server used for backing up a number of PVE VMs and file-level sets via backup agents. It has been used successfully for about a month. The total size of the PBS ZFS datastore is 30 TB, with 78% free space. Yesterday I tried to set up a backup script on our mail storage server containing a huge...
  2. U

    [SOLVED] After starting a virtual machine using PCIe passthrough, stopping it prevents the amdgpu driver from binding to a specific iGPU.

    After starting a virtual machine using PCIe passthrough, stopping it prevents the amdgpu driver from binding to a specific iGPU. Background: I was using a hookscript on the Ryzen 7 7700 to enable restarting after shutdown. This worked fine to reduce the hassle of blacklisting and early binding...
  3. UdoB

    VM virtual disks and ZFS's "80-90% rule"

    If you configure, let's say, a 100 GB virtual disk, it is not created on disk as a single contiguous 100 GB region, and it does not stay that way. Full preallocation like that is what LVM-thick provides. ZFS is usually configured sparse: only the actually required space is occupied when the user (the VM) writes...
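UdoB's description of sparse allocation corresponds to how zvols can be created; a minimal sketch with hypothetical names, assuming a pool called `rpool` (substitute your own):

```shell
# -s creates a sparse (thin) zvol: no refreservation, so only blocks
# the VM actually writes consume pool space.
zfs create -s -V 100G rpool/vm-100-disk-0

# A thick zvol (created without -s) would reserve the full 100G up
# front; compare the properties:
zfs get refreservation,volsize,used rpool/vm-100-disk-0
```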
  4. U

    [SOLVED] Proxmox 9.0 + AMD Radeon iGPU (Granite Ridge) Passthrough: A Desperate Plea for Help

    It will function normally unless one of the following operations is performed: a force stop (stop) of the Proxmox Windows 11 virtual machine, a force reset (reset) of it, or a Windows OS BSOD. These operations do not perform proper termination procedures, preventing the passed-through PCI...
  5. aaron

    ZFS Pool Usage Reporting Higher than Actual VM Disk Usage

    Keep in mind that this stems from a time when all we had were HDDs. Given that ZFS is copy-on-write, the data will fragment over time, and if the HDD is full, it will need more time to find unused space on the disk. With SSDs, where the seek time is practically zero, I do not think that the 80%...
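The effect aaron describes can be watched on a live pool: `zpool list` exposes both capacity and free-space fragmentation. A sketch assuming a hypothetical pool named `rpool`; the GiB figures below are made-up example numbers for the arithmetic:

```shell
# On the host (needs ZFS): CAP and FRAG both climb as a
# copy-on-write pool fills up:
#   zpool list -o name,size,alloc,free,frag,cap rpool

# The 80% check itself is plain arithmetic; example numbers in GiB:
ALLOC_GIB=25000
SIZE_GIB=30720
CAP_PCT=$((ALLOC_GIB * 100 / SIZE_GIB))
if [ "$CAP_PCT" -ge 80 ]; then
    echo "pool at ${CAP_PCT}% - expect slower allocation on HDDs"
fi
```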
  6. I

    hardware recommendation for pbs server

    Hi! I have had a very good user experience with the PVE + PBS solutions, and thanks for the good software! Currently the PBS system runs on hardware like: - server: PRIMERGY RX2530 M7S - cpu: 2 x Intel GOLD 6548Y+ (cores/threads 32/64 per CPU) - memory: 512 G DDR5 - storage: 10 x 15 TB Micron 7450 NVMe devices -...
  7. mr44er

    AMD Ryzen 5 8600G w/ Radeon 760M - Working passthrough of this GPU?

    Heya, can somebody confirm that passthrough with this onboard Radeon is working...and how? Mainboard: ASUS B650-PLUS, BIOS 3067 12/10/2024 In the past I really hated the passthrough with Vega64 because of the reset bug. Not nice, but it worked. Switched to GTX 1050 Ti, worked better, arranged...
  8. S

    Optimizing Virtual Disk Volume Block Size (Host) and EXT4 Blocks/Groups (Guest)

    It just occurred to me that the reason why I'm getting such a massive overhead on my backup server (which has 2 x RAIDZ-2 vdevs of 6 disks each) is that the default zvol block size on Proxmox VE is (or probably "was" for a long time, and many of my virtual disks are quite old) 8k, which results in a space...
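There is commonly cited arithmetic behind the overhead S mentions: with 4K sectors (ashift=12), an 8k volblocksize on RAIDZ-2 stores 2 data sectors plus 2 parity sectors, and OpenZFS pads each allocation up to a multiple of (parity + 1) sectors. A sketch of that calculation; the pool layout, dataset name, and ashift are assumptions, not taken from the thread:

```shell
# 8k volblocksize on RAIDZ-2 with 4K sectors (ashift=12):
SECTOR=4096
DATA=$((8192 / SECTOR))               # 2 data sectors per 8k block
PARITY=2                              # RAIDZ-2
PAD_TO=$((PARITY + 1))                # allocations padded to multiples of 3
RAW=$(( (DATA + PARITY + PAD_TO - 1) / PAD_TO * PAD_TO ))
echo "$((RAW * SECTOR)) raw bytes per 8192 logical bytes"   # 24576, i.e. 3x
# The ideal ratio for a 6-disk RAIDZ-2 would be 6/4 = 1.5x,
# hence the "massive" extra space usage at 8k.
# Inspect an existing zvol (needs ZFS; name is a placeholder):
#   zfs get volblocksize rpool/vm-100-disk-0
```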
  9. UdoB

    Proxmox ZFS 80% Rule, Pool or VM Storage?

    No. Probably. But a) ZFS tends to fragment in the long run. And while I do not know Blue Iris, and while a video stream is probably sequential, there may be "events" and "motion detection" resulting in a high number of much smaller files. (Or database operations?) b) In any case, a Special Device...
  10. LnxBil

    OOM - Shut down VM

    Every OS on the planet has built-in swap support; even Windows 3.1 used a virtual memory page file. Swap (compressed in memory as with zram, or disk swap) will solve or at least push the OOM way down the line. Performance-wise, you'll need to monitor not the swap usage itself, but the...
  11. F

    Node proxmox is offline, but VM is online

    DMESG command [ 3.756599] systemd[1]: modprobe@dm_mod.service: Deactivated successfully. [ 3.756764] systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. [ 3.756997] systemd[1]: modprobe@drm.service: Deactivated successfully. [ 3.757160] systemd[1]: Finished...
  12. justinclift

    [SOLVED] Oh no... wake-up with "SMART error (FailedOpenDevice) detected on host..." message!

    @GazdaJezda Looking at your initial post in this thread, it has this: Note the vpath there is /dev/disk/by-id/ata.... [etc] ...17A-part1 ? That's a path from the /dev/disk/by-id/ directory on your Proxmox box, and is the kind of path you'll want to use for the replacement device. Probably a...
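justinclift's advice about using `/dev/disk/by-id/` paths for the replacement device can be sketched as follows; the pool name and device serials here are hypothetical placeholders, not values from the thread:

```shell
# List stable device paths; whole-disk entries have no "-part" suffix.
ls -l /dev/disk/by-id/ | grep -v -- -part

# Replace the failed member using the by-id path of the new disk,
# so the name survives reboots and controller renumbering:
zpool replace rpool \
    ata-OLD_FAILED_DISK_SERIAL \
    /dev/disk/by-id/ata-NEW_DISK_SERIAL
```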
  13. leesteken

    Opt-in Linux 6.8 Kernel for Proxmox VE 8 available on test & no-subscription

    The amdgpu driver on kernel version 6.8.1-1 and 6.8.4-2 crashes my AMD Radeon RX570, while the earlier kernels worked fine. It still works fine with RX6950XT and this issue can be worked-around by blacklisting amdgpu, but I cannot use the RX570 for the Proxmox host console after VM shutdown (or...
  14. SInisterPisces

    Proxmox VE 8.1 released!

    It is. I doubt the performance hit is that meaningful in a home server environment, but if I have problems I'll try turning it off. Once I move the dbStore to a bigger storage (it's on a shared 1 TB pool), I'll definitely turn it off. My VM boot disks live on a ZFS mirror (two disks). Is 64k...
  15. Dunuin

    Restore data insert in VM to ZFSpool

    You shouldn't do that. ZFS will become slow and fragment faster (which is bad, as there is no way to defragment it) when becoming full. Usually it's recommended not to fill it more than 80%, and I personally always set a 90% quota so it can't even be filled more than 90% by accident. And in case you...
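Dunuin's 90% cap maps to the `quota` property; a minimal sketch assuming a hypothetical pool `rpool` with a dataset `rpool/data` (the 30 TiB pool size is just an example):

```shell
POOL_SIZE_GIB=$((30 * 1024))             # e.g. a 30 TiB pool
QUOTA_GIB=$((POOL_SIZE_GIB * 90 / 100))  # leave 10% headroom
echo "quota: ${QUOTA_GIB}G"              # prints "quota: 27648G"

# Apply and verify (needs ZFS):
#   zfs set quota=${QUOTA_GIB}G rpool/data
#   zfs get quota rpool/data
```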
  16. A

    ZFS pool for new proxmox installation

    I get it. Thanks, I will go for it, and thanks for your time.
  17. A

    ZFS pool for new proxmox installation

    Really appreciate it, understood. I learned about the RAID options in the initial installer, but I want to create the ZFS pool later, after installation. I ended up putting the Proxmox system on a 64 GB USB stick, and my idea is to use those 3 x 2 TB HDDs for ZFS. Just for information, but with the...
  18. Dunuin

    ZFS pool for new proxmox installation

    ZSTD for worse performance but a good compression ratio; LZ4 for very good performance and an OK compression ratio. I would stick with the default LZ4 unless the data is well compressible, rarely accessed, and you have tons of CPU power you don't need. It should definitely be in the installer as raidz1 or...
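Dunuin's comparison maps to per-dataset properties; a sketch with hypothetical dataset names (note that changing compression only affects blocks written after the change, existing data stays as-is):

```shell
zfs set compression=lz4 rpool/data       # default: very fast, OK ratio
zfs set compression=zstd rpool/archive   # more CPU, better ratio
# See the resulting ratios per dataset:
zfs get compression,compressratio rpool/data rpool/archive
```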
  19. M

    Fixing grub after disk faliure? SQUASHFS error

    Hi! I've followed this guide, but got stuck at step 8: chroot /mnt /bin/bash. I got: SQUASHFS error: unable to read fragment cache entry [47d57d1] SQUASHFS error: unable to read page, block 47d57d1, size d464. Does anybody have any idea how to solve this? The config: boot from a 1 TB disk on the onboard SATA...