Search results

  1. e1000 driver hang

    In the past week we have been seeing random e1000e 0000:00:1f.6 eno1: Detected Hardware Unit Hang failures across all our nodes, even on different hardware hosts. We must reset the host. There are lots of references to this issue going back 5+ years. Was there a driver change with the latest...
  2. [SOLVED] Proxmox bridge MTU issue

    Would it still be necessary to set MTU 9000 in the guest also, if you set it in the host bridge?
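For context on the question above: raising the MTU on the host bridge does not propagate into the guest, whose virtual NIC still defaults to 1500, so it typically must be raised in both places. A minimal sketch; the interface names vmbr0 (host bridge) and ens18 (guest NIC) are assumptions:

```shell
# Host side, in /etc/network/interfaces (vmbr0 is an assumed bridge name):
#   auto vmbr0
#   iface vmbr0 inet static
#       ...
#       mtu 9000

# Guest side: the virtio NIC inside the VM still defaults to 1500,
# so raise it there as well (ens18 is an assumed guest interface name)
ip link set ens18 mtu 9000
```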
  3. [SOLVED] 5.0 failure to create partitions

    Hello, I am installing a fresh 5.0 on Intel NUC7I5BNK with a Samsung 960 EVO Series - 500GB NVMe - M.2 Internal SSD (MZ-V6E500BW). Once it gets to the creating partitions stage, it just sits at 0% forever. Any thoughts?
  4. Setting hard drive serial number

    That is a new convention in proxmox 4.
  5. Fresh Install: Unable to find LVM root

    adding "rootdelay=10" to grub seems to fix this issue.
  6. Fresh Install: Unable to find LVM root

    That does work, but it must be entered on every reboot. Very strange. Thanks, Alan
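The reason the rootdelay=10 workaround has to be re-entered each boot is that editing the kernel command line at the GRUB prompt is one-shot; to make it permanent it has to go into /etc/default/grub. A sketch, assuming a standard Debian/Proxmox GRUB setup:

```shell
# In /etc/default/grub, append rootdelay=10 to the default kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# Then regenerate the GRUB configuration so the option applies on every boot
update-grub
```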
  7. Fresh Install: Unable to find LVM root

    Hi, I'm doing a fresh install of 4.1 on an HP z620. I cleaned the drive prior to install with Gparted Live. Install goes fine but on reboot I get the error: Volume group "pve" not found. I've done plenty of Proxmox installs on other hardware no problem, first time on this machine model...
  8. [#153] Hdd serial number.

    This fails if the serial number contains spaces, which many drives have. Example: serial=' kjhgfd21' or serial=" kjhgfd21". Proxmox will invalidate and remove the drive from the VM (bug #153).
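A possible workaround for the bug above is to strip the whitespace before handing the serial to Proxmox. A sketch; the serial value here is the hypothetical one from the report:

```shell
# Hypothetical serial reported by a drive, with a leading space (as in bug #153)
raw_serial=' kjhgfd21'

# Remove all whitespace so Proxmox does not invalidate the drive entry
clean_serial=$(printf '%s' "$raw_serial" | tr -d '[:space:]')

echo "$clean_serial"
```

The cleaned value could then be used in the drive's serial= setting instead of the raw string.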
  9. Problems with GPU passthrough (x-vga=on)

    Hello, I've followed the wiki (https://pve.proxmox.com/wiki/Pci_passthrough). Everything checks out: Virtualization and VT-d are on in the BIOS. But when I add x-vga=on to the .conf, the guest machine fails to boot, there are no errors in the PVE logs, and the physical monitor is...
  10. fence device

    Link to appropriate wiki?
  11. Nat and UDP

    http://forum.proxmox.com/threads/21194-Port-Forward-with-built-in-NAT-and-PVE-Firewall
  12. zfs_arc_max does not seem to work

    I went back to plain old EXT4... much easier, and everything just works.
  13. zfs_arc_max does not seem to work

    Yes, fresh 3.4 install on dual SSD ZFS Raid0. I rebooted the host.
  14. zfs_arc_max does not seem to work

    I have done this; it made no difference.
  15. zfs_arc_max does not seem to work

    It seems that the official Proxmox ZFS is ignoring the options zfs zfs_arc_max=2147483648 value. I added an additional 4 gigs of RAM and ARC swallowed that too.
    root@licvault01:~# grep c_max /proc/spl/kstat/zfs/arcstats
    c_max 4 8413184000
  16. zfs_arc_max does not seem to work

    root@XXXXX:~# cat /etc/modprobe.d/zfs.conf
    # ZFS tuning for a proxmox machine that reserves 64GB for ZFS
    #
    # Don't let ZFS use less than 4GB and more than 64GB
    #options zfs zfs_arc_min=2147483648
    #options zfs zfs_arc_max=4294967296
    options zfs zfs_arc_max=2147483648...
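One likely reason a setting like the one above is ignored: on Proxmox systems that load ZFS from the initramfs (e.g. root on ZFS), /etc/modprobe.d/zfs.conf only takes effect at boot if the initramfs is rebuilt after editing it. A sketch of the full sequence, using the 2 GiB limit from the snippet above:

```shell
# Cap the ARC at 2 GiB (2147483648 bytes)
echo 'options zfs zfs_arc_max=2147483648' > /etc/modprobe.d/zfs.conf

# Rebuild the initramfs so the module option is actually picked up at boot
update-initramfs -u

# The value can also be applied at runtime, without waiting for a reboot
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# Verify: c_max in arcstats should now read 2147483648
grep c_max /proc/spl/kstat/zfs/arcstats
```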
  17. Nvidia nforce ethernet does not work with kernel 3.10

    As per the posts in the linked CentOS bug, the CentOS Plus repository has a kernel with that driver re-enabled. Can you make this happen for the PVE kernel? It would be greatly appreciated. I disagree that his hardware is irrelevant. http://bugs.centos.org/view.php?id=7359#c20558
  18. Nvidia nforce ethernet does not work with kernel 3.10

    The PVE 3.10 kernel does not have the drivers to support the Nvidia nforce ethernet chipset, such as on the HP xw9400. I was hoping to do some GPU passthrough on this machine, but without proper driver support in 3.10 it is pointless, as the machine has no network. http://bugs.centos.org/view.php?id=7359