Search results

  1. Errors after pve-esxi-import-tools upgrade to 0.7.3

    Something fishy there. Mine was just a 1 GbE connection and the ~40-60 GB (used blocks) VM was done in a reasonable time, like 10-15 minutes.
  2. Errors after pve-esxi-import-tools upgrade to 0.7.3

    Cheers. It's good to see what had happened. Software, huh? Crazy.
  3. Errors after pve-esxi-import-tools upgrade to 0.7.3

    Same here, well, very similar anyway: I have 0.7.4, yet I get "'NoneType' object has no attribute 'files' (500)". It used to work; I migrated a few VMs in the past and there's one left that I wanted to go back and migrate. Edit: just downgraded to 0.7.2 and it is working again.
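
    A minimal sketch of how such a downgrade could look on the PVE host, assuming the older package is still offered by the configured repository; the "0.7.2-1" version string is only an example, use whatever apt-cache policy actually lists:

        # List the versions of the import tool that the repos offer
        apt-cache policy pve-esxi-import-tools

        # Install a specific older version (replace 0.7.2-1 with a listed version)
        apt install pve-esxi-import-tools=0.7.2-1

        # Optionally hold it so a routine upgrade does not pull 0.7.3/0.7.4 back in
        apt-mark hold pve-esxi-import-tools
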
  4. Redhat VirtIO developers would like to coordinate with Proxmox devs re: "[vioscsi] Reset to device ... system unresponsive"

    Oh, yes, you're right. I'm not too sure which bits are most relevant though. I switched my VMs to VirtIO block (which is the viostor driver, right?) and all looks OK so far. Strangely, only one of my systems was affected anyway. I saw it when opening the very large event log while it was installing...
  5. Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    Thank you. I'll not mess with it right now. Maybe on a Sunday afternoon, as it is in use 24 hours a day, 6 days a week.
  6. Opt-in Linux 6.11 Kernel for Proxmox VE 8 available on test & no-subscription

    Only issue I am seeing so far is that on my PowerEdge R7615 (Epyc 9174F), after bootup, the iDRAC says no-signal on the console. It's like I lost video out... I got the 'Found volume group "pve"' message and the mount message for /dev/mapper/pve-root, and then I lost the console. The system is up and running...
  7. really slow restore from HDD pool

    Backup of the same VM to ext4 on SAS12 SSD: 16m 10s. Restore: 19m 9s.
  8. really slow restore from HDD pool

    I believe I am being held up by pbs-restore being single-threaded. I see it pegged at 100% on the PVE host... :-/
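
    One rough way to check this on the PVE host (pbs-restore is the process named above; the rest is a generic sketch) is to watch per-thread CPU usage while the restore runs. A single thread pinned near 100% points at a single-threaded bottleneck rather than slow disks:

        # Per-thread CPU view of the running restore process
        top -H -p "$(pidof pbs-restore)"
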
  9. really slow restore from HDD pool

    I know, you did say so earlier, but prior to that everybody else was telling me that my HDDs are the reason this is performing a bit below my hopes. Now, when I look at the data overall: inside the guest there is 550 GB used on the two virtual disks. A full backup is taking up 175 GiB on PBS...
  10. really slow restore from HDD pool

    On the PBS it has taken 176 GiB. The source virtual machine is 1.1 TiB in disk size but not fully allocated. All the above tests are with ZFS compression disabled, but PVE/PBS already does zstd, I think.
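
    For reference, checking or toggling ZFS compression on the datastore dataset could look like the following; "tank/pbs" is only a placeholder for the actual pool/dataset name:

        # Show the current compression setting on the PBS datastore dataset
        zfs get compression tank/pbs

        # Disable it for the test (only affects newly written data;
        # PBS already compresses its chunks with zstd)
        zfs set compression=off tank/pbs
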
  11. really slow restore from HDD pool

    Restore from enterprise SAS12 SSD in RAIDZ took 18 minutes 45 seconds. That is... 40 seconds quicker than the HDD + special device. However, I have no idea how much data went to the special device and how much to the HDD. Next test, after I have changed the pet bedding, is ext4 instead of ZFS on a single SAS SSD disk.
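
    One way to see how the data on that pool was split between the HDDs and the special device, assuming a pool name such as "backup":

        # Per-vdev capacity and allocation; the special vdev is listed separately,
        # so its ALLOC column shows how much ended up on the SSDs
        zpool list -v backup
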
  12. really slow restore from HDD pool

    I have created a RAIDZ from the 3x SAS12 SSDs (800 GB partitions on each) on PBS. Currently running a backup. Most of the time it is hovering around 55-70 MiB/s. Occasionally it spikes to 500 MiB/s for a short while.
  13. really slow restore from HDD pool

    Restored in the same 20m to the other RAID5 on the target PVE.
  14. really slow restore from HDD pool

    Backup of the SAP & SQL server to 4x 4 TB RAIDZ HDD with SSD special device (small-files=512K): 15 min. Restore: 19m 20s. Backup of the SAP & SQL server to 4x 4 TB RAIDZ HDD without special device: 15 min. Restore: 26m 15s. Next I will create a RAIDZ of just the SAS12 SSDs and see how that looks. Or perhaps I will...
  15. really slow restore from HDD pool

    Yes, well, since I had bought 3 disks I have done a 3-way mirror for the test.
  16. really slow restore from HDD pool

    OK, so the special device can only be a mirror or a single disk. That's fine. First thing: the actual backup is slow too. Most of the 1.1 TiB VM backup (of which I think only perhaps 200 GB is data) is moving at 60 MiB/s. Occasionally it jumps to 500 MiB/s, but most of the time it hovers at 60. For the...
  17. really slow restore from HDD pool

    I have installed PBS on another server (PowerEdge T330). It has 3x 960 GB SAS 12G enterprise SSDs (100 GB of each used for the OS RAIDZ, the remainder free) and 4x 4 TB HDDs. I have partitioned the HDDs in half. I will create a zpool using half of each HDD in a 4-disk RAIDZ with a RAIDZ special device from the 3x...
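
    A minimal sketch of how such a layout could be created, with hypothetical partition names; note that, as another of the results above points out, the special vdev can only be a single disk or a mirror, not a RAIDZ:

        # 4-disk RAIDZ over the first-half partitions of the HDDs,
        # plus a 3-way mirrored special vdev on the SSDs' spare partitions
        zpool create backup \
            raidz /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 \
            special mirror /dev/sde2 /dev/sdf2 /dev/sdg2

        # Metadata goes to the special vdev by default; this additionally
        # stores data blocks up to 512K there
        zfs set special_small_blocks=512K backup
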
  18. Show SSD wearout - SAS connected SSDs

    OK, I will do. I will have the system switched back on tomorrow. I only powered up the system to check the SSD health on some used SSDs and turned it back off afterwards. It sounds exactly like the description though: it shows N/A in PBS, and shows "Percentage used endurance indicator:" in smartctl.
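
    For SAS (SCSI-attached) SSDs, the wear level smartctl reports is that endurance indicator rather than a SATA-style wearout attribute; assuming the drive shows up as /dev/sdX:

        # Full SMART/health output, including "Percentage used endurance indicator"
        smartctl -a /dev/sdX

        # Or just the SSD endurance log page
        smartctl -l ssd /dev/sdX
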