Search results

  1. Code 43 and no image with GTX 660 and latest proxmox/pve-kernel

    Thanks for the response. I was just getting ready to update the thread. In the course of the last few hours, I obtained a UEFI BIOS for my GPUs, and I was able to boot the VM with video and audio on two GPUs from the CLI, after removing all the pv_ stuff. Otherwise the VM would crash with a...
  2. Code 43 and no image with GTX 660 and latest proxmox/pve-kernel

    I've followed the famous "vfio tips and tricks" blog up until the 4th installment, which is distro-specific. My VM is able to recognize my GPU; I believe the problem is that my GPU /IS/ able to recognize my VM, as per: http://vfio.blogspot.com/2014/08/vfiovga-faq.html (see question 10) [see the passthrough-config sketch after these results]. I...
  3. PCI passthru: intel_iommu=on and VT-d enabled in BIOS results in HardLock on boot

    This was caused by a patch applied to kernel 4.2ish that breaks IOMMU for certain (most?) X58 chipset motherboards (at least). This is significant because the only LGA1366 boards with x16 PCIe slots are X58. The fix has been merged into kernel 4.7...
  4. PCI passthru: intel_iommu=on and VT-d enabled in BIOS results in HardLock on boot

    I've read that there are issues with the Intel X58 chipset and IOMMU on some kernels, but I'm not sure if this is the cause. Basically, the problem is that I cannot enable IOMMU in grub.conf while VT-d is enabled in the BIOS, or the CPU hardlocks very early in the boot [see the GRUB sketch after these results]. I had the same...
  5. Moving 120 GB disk to ZFS pool taking hours.

    I solved this. It was my fault. The wiki says: [...] I miscalculated, and ZFS was competing violently for RAM, gumming up the entire works. [See the ARC-cap sketch after these results.]
  6. Moving 120 GB disk to ZFS pool taking hours.

    I'll give it a try. Good news is the old system is able to make backups normally after a reboot. Seems things are slowly normalizing.
  7. Moving 120 GB disk to ZFS pool taking hours.

    The wiki seems to imply that JBOD is OK. I have an identical system that I'll set up and try to replicate the problem. Then I'll see if IT mode resolves it. A vzdump to the pool is currently going at about 3 MB per second.
  8. Moving 120 GB disk to ZFS pool taking hours.

    Right now I am moving a disk from local to zfs. It's 8 GB. It's taking about 45 minutes and maxes out the CPU. The KVM with the 120 GB disk in my zpool that I opened this thread about has been brought to a crawl. This is a quad-core Xeon with 8 GB of RAM. The Windows server in the KVM currently has...
  9. Moving 120 GB disk to ZFS pool taking hours.

    I can't remember the model of the disks in the ZFS partition. They were "business class" SATA disks (larger cache, only 7,200 rpm) from Seagate. The root partition is still a HW RAID array; I expect it to be faster, but not by over an order of magnitude.
  10. Moving 120 GB disk to ZFS pool taking hours.

    Disk performance still seems to be very slow.

    # pveperf /pool0/ && pveperf
    CPU BOGOMIPS:    51199.92
    REGEX/SECOND:    1604685
    HD SIZE:         775.22 GB (pool0)
    FSYNCS/SECOND:   76.87
    DNS EXT:         79.13 ms
    DNS INT:         73.84 ms (domain.com)
    CPU BOGOMIPS:    51199.92...
  11. Moving 120 GB disk to ZFS pool taking hours.

    The backup made the VM unusable. I shut it down and started it, and the issue appears to be resolved. I'm going to try a backup in a bit and see how long it takes.
  12. Moving 120 GB disk to ZFS pool taking hours.

    It's done. Users are reporting slowness.

    # zpool status
      pool: pool0
     state: ONLINE
      scan: none requested
    config:

            NAME        STATE     READ WRITE CKSUM
            pool0       ONLINE       0     0     0
              mirror-0  ONLINE       0     0     0
                sdb     ONLINE       0...
  13. Moving 120 GB disk to ZFS pool taking hours.

    As the title suggests, it's taking a long time. I read that ZFS destroy operations may take hours, but I did not expect this to. I'm concerned that when the VM does finish moving it will be unusable. I also have no idea how the server will perform, since CPU usage is very high. If I cancel...
  14. upgrade failed - pve-manager won't install

    The post above mine in this thread solved it for me.
  15. upgrade failed - pve-manager won't install

    Nm for now. I misread whether I should or shouldn't be using systemd; now I gotta fix what I did.
  16. upgrade failed - pve-manager won't install

    Same issue here trying to go from 3.4 to 4.
  17. Proxmox node in the cloud / ProxMox OffSite BDR

    Thanks for the suggestion. It's intriguing enough that I got a cheap DPS to try it on.
  18. Proxmox node in the cloud / ProxMox OffSite BDR

    Having done some consulting, I've seen medium-sized companies sign up for steep monthly contracts for BDR solutions that, while they don't do HA afaik, basically function to keep OS images of your servers on-site and off-site for use as host spares. The main selling point seems to be that you can...
  19. Proxmox Multiseat Gaming

    It can be done using commodity GPUs and VFIO. Also, that article claims >80% of native speed, but I've seen >95% claimed for VFIO and KVM. Thanks for the reply, though. Subsequent to the OP, I've found there are small groups that have done this within the Arch Linux and Ubuntu communities, so...
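
Note on the Code 43 results above (items 1-2): the usual cause is the NVIDIA driver refusing to initialize once it detects that it is running inside a hypervisor, so the common workaround is to hide the KVM signature from the guest. Below is a minimal sketch of a Proxmox VM config excerpt; the VM ID, PCI address, and the exact combination of options are illustrative assumptions, not the settings used in that thread.

    # /etc/pve/qemu-server/<vmid>.conf (excerpt) -- PCI address and IDs are placeholders
    # UEFI guest firmware, to pair with the UEFI GPU BIOS mentioned above
    bios: ovmf
    machine: q35
    # pass the GPU through as the guest's primary display
    hostpci0: 01:00.0,x-vga=on
    # hide the KVM signature so the NVIDIA driver does not raise Code 43
    args: -cpu host,kvm=off

Newer Proxmox releases expose the same idea as a CPU option (cpu: host,hidden=1), which avoids overriding the CPU model through args.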
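
Note on the IOMMU results above (items 3-4): on an Intel board, passthrough needs VT-d enabled in the BIOS plus intel_iommu=on on the kernel command line, and on Debian/Proxmox that flag is normally added through GRUB. A minimal sketch, assuming a stock GRUB setup; whether this hardlocks an X58 board depends on the kernel version, as item 3 explains.

    # /etc/default/grub -- append the IOMMU flag to the default kernel command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

    # regenerate /boot/grub/grub.cfg, then reboot
    update-grub

    # after the reboot, check that IOMMU groups were actually created
    find /sys/kernel/iommu_groups/ -type l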
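
Note on the ZFS slowness results above (items 5-13): on Linux the ZFS ARC may grow to roughly half of physical RAM by default, which on an 8 GB host can leave the VMs and the disk-move job fighting for memory. A hedged sketch of capping the ARC; the 2 GiB value is purely illustrative and should be sized to the host.

    # /etc/modprobe.d/zfs.conf -- cap the ARC at 2 GiB (2 * 1024^3 bytes)
    options zfs zfs_arc_max=2147483648

    # make the module option stick across reboots on Proxmox/Debian
    update-initramfs -u

    # or apply the same limit immediately at runtime
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max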