Search results

  1. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    Oh, yes, you're right. I'm not too sure which bits are most relevant though. I switched my VMs to VirtIO block (which is the viostor driver, right?) and all looks OK so far. Strangely, only one of my systems was affected anyway. I saw it when opening the very large event log while it was installing...
  2. Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    Thank you. I'll not mess with it right now. Maybe on a Sunday afternoon, as it is in use 24 hours a day, 6 days a week.
  3. Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    The only issue I am seeing so far is that on my PowerEdge R7615 (Epyc 9174F), after bootup the iDRAC says no-signal on the console. It's like I lost video out... I got the 'Found volume group "pve"' message and the mount message for /dev/mapper/pve-root, and then I lost the console. The system is up and running...
  4. really slow restore from HDD pool

    Backup of the same VM to ext4 on a SAS12 SSD: 16m 10s. Restore: 19m 9s.
  5. really slow restore from HDD pool

    I believe I am being held up by pbs-restore being single-threaded. I see it pegged at 100% on the PVE host... :-/
  6. really slow restore from HDD pool

    I know, you did say so earlier, but prior to that everybody else was telling me that my HDDs are the reason this is performing a bit below my hopes. Now, when I look at the data overall: inside the guest, there is 550GB used on the two virtual disks. A full backup is taking up 175GiB on PBS...
  7. really slow restore from HDD pool

    On the PBS, it has taken 176GiB. The source virtual machine is 1.1TiB in disk size but not fully allocated. All the above tests are with ZFS compression disabled, but PVE/PBS already does zstd, I think (see the compression note after these results).
  8. really slow restore from HDD pool

    Restore from the enterprise SAS12 SSDs in RAIDZ took 18 minutes 45 seconds. That is... 40 seconds quicker than the HDD + special device. However, I have no idea how much data went to the special device and how much to the HDDs. The next test, after I have changed the pet bedding, is ext4 instead of ZFS on a single SAS SSD.
  9. really slow restore from HDD pool

    I have created a RAIDZ from the 3x SAS12 SSDs (an 800GB partition on each) on PBS. Currently running a backup. Most of the time it hovers around 55-70 MiB/s, occasionally spiking to 500 MiB/s for a short while.
  10. really slow restore from HDD pool

    Restored in the same 20 minutes to the other RAID5 on the target PVE.
  11. really slow restore from HDD pool

    Backup of the SAP & SQL server to 4x 4TB RAIDZ HDD with an SSD special device (special_small_blocks=512K): 15 min; restore: 19m 20s. Backup of the SAP & SQL server to 4x 4TB RAIDZ HDD without a special device: 15 min; restore: 26m 15s. Next I will create a RAIDZ of just the SAS12 SSDs and see how that looks. Or perhaps I will...
  12. really slow restore from HDD pool

    Yes, well, since I had bought 3 disks I have done a 3-way mirror for the test (see the pool sketch after these results).
  13. really slow restore from HDD pool

    OK, so the special device can only be a mirror or a single disk; that's fine. First thing: the actual backup is slow too. Most of the 1.1TiB VM backup (of which I think only perhaps 200GB is data) is moving at 60 MiB/s. Occasionally it jumps to 500 MiB/s, but most of the time it hovers at 60. For the...
  14. really slow restore from HDD pool

    I have installed PBS on another server (a PowerEdge T330). It has 3x 960GB SAS 12G enterprise SSDs (100GB of each used for the OS RAIDZ, the remainder free) and 4x 4TB HDDs. I have partitioned the HDDs in half. I will create a zpool using half of each HDD in a 4-disk RAIDZ with a RAIDZ special device from the 3x...
  15. Show SSD wearout - SAS connected SSDs

    OK, I will do. I will have the system switched back on tomorrow. I only powered up the system to check the SSD health on some used SSDs and turned it back off afterwards. It sounds exactly like the description though: it shows N/A in PBS, but shows "Percentage used endurance indicator:" in smartctl.
  16. really slow restore from HDD pool

    Yes, I am referring to 'in restore' (that's what the thread is about... really poor restore performance).
  17. really slow restore from HDD pool

    I see. So you are saying that there won't be an improvement when I add the SSD special devices? Hmm, oh well. I have 2 of the 3 SAS SSDs now; the third went missing, so I have not made the changes yet anyway.
  18. Show SSD wearout - SAS connected SSDs

    Hi, any chance of the patch making it into PBS? I see the same N/A on SAS SSDs, yet smartctl shows the percentage used endurance indicator (example below).
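
For reference on the pool layout discussed in results 11-14, here is a minimal sketch of how such a PBS datastore pool could be created. It is only an illustration under assumptions: the device names (sda-sdd for the 4TB HDDs, sde-sdg for the SAS SSDs), the partition numbers, and the pool name "backuppool" are hypothetical, and the special vdev is a mirror rather than RAIDZ, matching the limitation noted in result 13. The special_small_blocks=512K property corresponds to the "small-files=512K" setting mentioned in result 11.

    # 4-disk RAIDZ of the half-HDD partitions, plus a 3-way mirrored SSD special vdev
    # (-f is needed because the mirror's redundancy level differs from the raidz data vdev)
    zpool create -f backuppool \
        raidz /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 \
        special mirror /dev/sde2 /dev/sdf2 /dev/sdg2
    # store metadata and blocks up to 512K on the SSD special vdev
    zfs set special_small_blocks=512K backuppool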
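
On the compression point in result 7: PBS already stores backup chunks zstd-compressed, so ZFS-level compression on the datastore mostly sees data that does not compress much further. A quick way to check what the pool is configured for and how much it is actually saving (the pool name "backuppool" is again hypothetical):

    # show the compression setting and the achieved ratio on the datastore pool
    zfs get compression,compressratio backuppool
    # optionally turn ZFS compression off for a dataset that only holds PBS chunks
    zfs set compression=off backuppool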
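
On the SAS SSD wearout question in results 15 and 18: SAS/SCSI SSDs report wear through the "Percentage used endurance indicator" in the Solid State Media log page rather than through an ATA wearout attribute, which is presumably why the GUI shows N/A. A way to read it directly with smartctl (the device path is hypothetical):

    # full SMART output; for SAS SSDs this includes "Percentage used endurance indicator"
    smartctl -a /dev/sdg
    # or just the solid state media log page
    smartctl -l ssd /dev/sdg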