Search results

  1. odd one - scsi / disk / write errors after esxi migrate, server 2012 r2

    did an esxi import, wouldn't boot - BSOD 0x7B, inaccessible boot device. tried the various disk controllers. the only one that would boot the system was the MegaRAID SAS 8708EM2. *however* I had loads of freakin' weird things going on. The OS (Server 2012 R2) booted fine, but I...
  2. Getting VMs to run at more than 1Gb/s on bonded interface

    That is how bonding works. You won't see more than 1Gb/s for a given network stream (which, depending on hash policy, could be keyed on source/dest MAC, source/dest IP, source/dest IP and port, etc.). Some protocols like SMB Multichannel can (theoretically) use multiple connections, but it's something I...
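The snippet above describes how a bonding hash policy pins a stream to one slave. A minimal Python sketch (using `zlib.crc32` as a stand-in for the kernel driver's real transmit hash, not its actual algorithm) shows why a single flow can never exceed one link's 1Gb/s:

```python
# Hypothetical sketch: a layer3+4-style transmit hash maps a flow tuple to
# a slave index deterministically, so one TCP stream always uses one link.
import zlib

def pick_slave(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
               n_slaves: int) -> int:
    """Map a flow tuple to a slave index, deterministically."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % n_slaves

# The same tuple always lands on the same slave, so a single stream is
# capped at that one link's speed; only multiple flows spread across links.
a = pick_slave("10.0.0.1", "10.0.0.2", 50000, 445, 2)
b = pick_slave("10.0.0.1", "10.0.0.2", 50000, 445, 2)
print(a == b)  # True
```

Different tuples (e.g. the extra connections SMB Multichannel opens) can hash to different slaves, which is why multi-connection protocols may aggregate bandwidth while a single stream cannot.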
  3. really slow restore from HDD pool

    ah, yes, of course. I have just ordered 3x 960GB to use for the special device. I think I will put them in a RAIDZ and steal some space for the boot/OS. Maybe I will try small_files too, perhaps everything under 512KB. Here is what I see: root@pbs:/mnt/datastore/RAIDZ/.chunks/aaaa# ls -lh total 94M...
  4. really slow restore from HDD pool

    My obvious reason for leaving out 'small files' is: isn't every PBS backup chunk a fixed-size file? I thought they were all fixed at 4MB? None smaller, none larger?
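On the question above: PBS uses fixed-size 4 MiB chunks for VM disk images, while file-level (host/container) backups produce dynamically sized chunks, so not every chunk in a datastore is exactly 4 MiB. A small sketch of the fixed-chunking case:

```python
# Sketch of fixed-size chunking as used for VM disk images: the image is
# cut into 4 MiB pieces, and only the final chunk may be shorter.
CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB

def fixed_chunks(image_len: int):
    """Yield (offset, length) for each fixed chunk of a disk image."""
    off = 0
    while off < image_len:
        n = min(CHUNK_SIZE, image_len - off)
        yield (off, n)
        off += n

sizes = [n for _, n in fixed_chunks(10 * 1024 * 1024)]  # 10 MiB image
print(sizes)  # [4194304, 4194304, 2097152]
```

Chunks are then stored deduplicated under `.chunks/`, which is why the on-disk sizes seen with `ls -lh` can still vary (compression, sparse/zero chunks), even for fixed-index backups.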
  5. really slow restore from HDD pool

    It seems there are differing comments on that space usage. Perhaps more people are doing 'small files' as well as metadata. I will try with what I have available at first anyway and keep an eye on it.
  6. really slow restore from HDD pool

    I made a small miscalculation... I had it in my head that my datastore is 36TB, but it's 44TB or thereabouts (4x 16TB RAIDZ). I still don't understand the ZFS results in df, but still, at 0.3% for metadata it would be 132GB, so those 200GB Intel enterprise SAS disks should be fine, and I have 4 of them that...
  7. really slow restore from HDD pool

    'Small files' is optional, isn't it? I plan to only put metadata on. 0.3% of my 36TB is about 108GB, if I've done my maths right.
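The sizing arithmetic in the posts above can be checked directly. Note that 0.3% is the rule of thumb quoted in the thread, not a guarantee; the real metadata fraction depends on record size and the number of files/chunks in the pool:

```python
# Rule-of-thumb special vdev sizing from the thread: metadata-only usage
# is estimated at ~0.3% of the pool's data size (integer maths to stay exact).
TB = 10**12
GB = 10**9

def special_vdev_estimate(pool_bytes: int) -> int:
    """Estimated metadata bytes at the ~0.3% rule of thumb."""
    return pool_bytes * 3 // 1000

print(special_vdev_estimate(36 * TB) // GB)  # 108 (GB) for the 36TB figure
print(special_vdev_estimate(44 * TB) // GB)  # 132 (GB) for the 44TB figure
```

Either way, a 200GB special device leaves headroom for the metadata-only case; enabling `special_small_blocks` on top of that would grow usage well beyond the 0.3% estimate.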
  8. really slow restore from HDD pool

    OK. I have some enterprise 200GB SAS SSDs sitting around that I can use, and I can set up another PBS temporarily to shift the data off to and then back.
  9. [SOLVED] Install Proxmox 8.1 on BOSS-N1 and using Dell PERC H965i Controller

    I have an R7615 with 2x H965i. One controller has 1x 960GB Dell-branded Samsung PM9A3 drive, which I was forced to buy from Dell, plus 2x non-Dell-branded identical drives (so 3x in a RAID5, just to make it up to something useful with parity RAID). The other controller has 4x Intel P5520 3.84TB...
  10. really slow restore from HDD pool

    Right, so 30 MB/sec is to be expected. Fine. But throughout this discussion it was said that the 12 MB/sec that I see when I restore to the older PowerEdge R630 is because of the PBS HDD source, and I am saying: how can that be, when I see 2.5x that performance when restoring to a newer PowerEdge...
  11. really slow restore from HDD pool

    Also, still no explanation why it is 30MiB/s to the newer target PVE node? Admittedly, I have only run each restore once. Perhaps I will run again but in reverse order - second target node first, then the other one.
  12. really slow restore from HDD pool

    Thank you for the thorough reply! It is no problem for me to offload the data and recreate the volume. I just need to spend some time investigating the required size of the SSD special devices.
  13. really slow restore from HDD pool

    I believe atime has to stay active for garbage collection. There is some contradictory discussion on it, but if I understood correctly, it can maybe be changed to relatime, but that's all, and it is already mounted as such: RAIDZ on /mnt/datastore/RAIDZ type zfs...
  14. restore with proxmox-backup-debug, pipe to qmrestore - stdin?

    Ah, that is wonderful - perfect and so simple. Thanks very much. I will try it out this weekend.
  15. restore with proxmox-backup-debug, pipe to qmrestore - stdin?

    I am testing out various recovery scenarios. One that I am testing is to restore a disk image from remote object storage (Backblaze) which is mounted through kopia. After I get proxmox-backup-debug onto a PVE host, I can successfully run the command: ~/proxmox-backup-debug recover index...
  16. really slow restore from HDD pool

    That is correct in terms of where the data to restore lies. Thank you for your pointers and links. I have to go out now, but I will do some study over the following week while I am away on holiday.
  17. really slow restore from HDD pool

    That was a restore to an older node (48x Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz, 2 sockets; target datastore 5x 1.92TB RAIDZ SAS 12G ZFS, HBA330, no controller RAID). I am now testing a restore to the original node, which is a modern PowerEdge R7615 with the fastest single-core performance I...
  18. really slow restore from HDD pool

    I have a RAIDZ on PBS with 4x 16TB SATA HDD. I am watching a KVM restore on a 10G LAN at 12MiB/s. The target pool is 5x 1.92TB enterprise SAS 12G ZFS through an HBA330 controller. How can it be so bad? I understand it's many small files, but 12MiB/s? There must be a way to improve this. I read that...
  19. Extremely slow Windows Server 2022 after ~9 days. 10x CPU throughput reduction.

    Yeah, I know, I linked to the GitHub issue, and it looks like there is finally some traction on fixing it. Fingers crossed :-)