Search results

  1. J

    Proxmox 4.1 vs 4.4 swap usage

    @eth: You might also want to look at limiting the ZFS ARC size. By default, it will try to use half your RAM.
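
    A minimal sketch of capping the ARC on ZFS-on-Linux; the 8 GiB figure and the reboot-free step are examples/assumptions, not values from this post:

      # /etc/modprobe.d/zfs.conf - cap the ARC at 8 GiB (example value, size it for your host)
      options zfs zfs_arc_max=8589934592

      # rebuild the initramfs so the option applies at boot (needed when ZFS loads from the initramfs)
      update-initramfs -u

      # the running system can usually be adjusted without a reboot via the module parameter
      echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
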
  2. J

    Performance issues on raid 1 SSD

    Yes, thus the "For my UPS backed & well-backed-up servers, I have the following settings:" and the "And the controversial one - which of course made the biggest difference... :)" parts of my message.
  3. J

    Performance issues on raid 1 SSD

    Howdy! Since ZFS is so much more complex than EXT4, it requires some tuning to perform well. For my UPS backed & well-backed-up servers, I have the following settings:
      sysctl.conf:
        vm.swappiness = 1
        vm.min_free_kbytes = 131072
      /etc/modprobe.d/zfs.conf:
        options zfs zfs_arc_max=8589934592
        options zfs...
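
    A sketch of applying those sysctl values to a running node without a reboot; the numbers are the ones quoted above:

      # apply the quoted values immediately
      sysctl -w vm.swappiness=1
      sysctl -w vm.min_free_kbytes=131072

      # or, after adding them to /etc/sysctl.conf, reload that file
      sysctl -p
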
  4. J

    Random Restarting

    Did you try the new kernel?
  5. J

    Random Restarting

    Don't set vm.swappiness to 0, set it to 1 for safety's sake.
      vm.swappiness = 0 - The kernel will swap only to avoid an out-of-memory condition, when free memory falls below the vm.min_free_kbytes limit. See the "VM Sysctl documentation".
      vm.swappiness = 1 - Kernel version 3.5 and over, as well as...
  6. J

    Random Restarting

    Interesting Nemesiz, I saw the exact same behaviour you did, but haven't seen any crashes since limiting the ARC. I still see significant pauses with large writes, but hadn't yet limited other settings as you have. It would be helpful if this issue got some more attention.
  7. J

    Random Restarting

    Have you limited the amount of memory ZFS can use for its cache?
  8. J

    PVE4 reboot hang at "reached target shutdown"

    I've seen a similar I/O related hang & then reboot of the node when doing a disk migration to a pair of SSD drives in 'ZFS Raid1'. Shutdown works, but takes a long time. I've freed up one server that has this issue, and will do some intensive testing on it over the weekend. In addition, I have...
  9. J

    Cannot Initialize CMAP Service

    I ran into this when a node couldn't talk to its name servers & got around it by adding the IP addresses for all the nodes in the cluster to each one's /etc/hosts file.
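
    A sketch of what that looks like; the node names and addresses below are made up for illustration, not taken from the post:

      # /etc/hosts on every node (example entries)
      192.168.1.11  pve1.example.com  pve1
      192.168.1.12  pve2.example.com  pve2
      192.168.1.13  pve3.example.com  pve3
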
  10. J

    [SOLVED] Brocade 1020 on PX4.1 host + VirtIO net on guest == TX Checksum Badness

    Hey Matt! Thankfully I documented everything to some degree or another.
      Command: ethtool --offload eth0 tx off
      Firmware location: https://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/resourcebyos.aspx?productid=1238&oemid=410&oemcatid=135967
      To upgrade the firmware: ISO downloaded...
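
    To keep that setting across reboots, one option is an ethtool hook in /etc/network/interfaces; a sketch assuming the guest uses ifupdown, the interface is named eth0 and gets its address via DHCP (-K is the short form of --offload):

      auto eth0
      iface eth0 inet dhcp
          # disable TX checksum offload on every ifup (same effect as 'ethtool --offload eth0 tx off')
          post-up ethtool -K eth0 tx off
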
  11. J

    [SOLVED] Brocade 1020 on PX4.1 host + VirtIO net on guest == TX Checksum Badness

    Damn you "tx checksumming" - you ate 2 days of my life. Turning it off on the guest seems to solve it.
  12. J

    [SOLVED] Brocade 1020 on PX4.1 host + VirtIO net on guest == TX Checksum Badness

    Okay, some progress on the errors in dmesg. Turns out there's a bug in the kernel where the wrong firmware files are being specified. https://bugzilla.kernel.org/show_bug.cgi?id=104191 I found and copied over the missing firmware file (cbfw-3.2.5.1.bin), then got around the driver calling...
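
    A sketch of the workaround; the bna driver name, paths and reload step are assumptions based on the bug report, not quoted from the post:

      # place the missing firmware file where the kernel looks for it (assumes it was already downloaded)
      cp cbfw-3.2.5.1.bin /lib/firmware/

      # make it available at boot as well
      update-initramfs -u

      # reload the Brocade driver so it retries the firmware load (assumption: the card uses bna)
      modprobe -r bna && modprobe bna
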
  13. J

    [SOLVED] Brocade 1020 on PX4.1 host + VirtIO net on guest == TX Checksum Badness

    Howdy all, I recently upgraded my Proxmox 3.4 install to 4.1 and since the upgrade, I can't use the VirtIO network driver on Linux clients. (We don't run Windows, so I can't test.) The setup is fairly basic, a Quanta LB4M switch with 2x 10G SFP+ ports, two servers each with a Brocade 1020 CNA...
  14. J

    Jessie and 3.10.0 OpenVZ kernel and the future

    When it's ready. Every time that question is asked, the release date slips by 32.4 hours. :p
  15. J

    Does openvz work with openswitch?

    You may be hitting the same bug we did when using bonding balance-alb / mode 6 - the node will fall off the network as the packets bounce between the ports & the switch doesn't know what to do. http://forum.proxmox.com/threads/5914-balance-alb-on-host-causing-problems-with-guests...
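
    If it is that bug, the usual workaround is a bond mode that doesn't juggle MAC addresses; a sketch for /etc/network/interfaces using active-backup instead of balance-alb (interface names are illustrative, not from the thread):

      auto bond0
      iface bond0 inet manual
          bond-slaves eth0 eth1
          # active-backup avoids the MAC flapping that balance-alb (mode 6) can cause on some switches
          bond-mode active-backup
          bond-miimon 100
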
  16. J

    After upgrading from 3.2 to 3.3 + old pve-qemu-kvm + ceph = vm doesnt start

    Thanks Spirit! Bizarre that it had been working under 3.2. I had some other oddness but found an old priest and a young priest and it's now behaving. Thanks again!
  17. J

    After upgrading from 3.2 to 3.3 + old pve-qemu-kvm + ceph = vm doesnt start

    I'm seeing something very similar with pve-qemu-kvm 2.1-10. It's picking up the hostname of the server as a virtio option.
      # dpkg-query -l | grep pve-qemu-kvm
      ii  pve-qemu-kvm  2.1-10  amd64  Full virtualization on x86 hardware
      # /usr/bin/kvm...
  18. J

    [SOLVED] Proxmox and GlusterFS

    Re: Proxmox and GlusterFS
    What about using the "backupvolfile-server=" mount option?
      gluster1:/datastore /mnt/datastore glusterfs defaults,_netdev,backupvolfile-server=gluster2 0 0
    And you can change Gluster's ping-timeout... See...
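
    For the ping-timeout part, a sketch using the gluster CLI; the volume name comes from the fstab line above and the 10-second value is only an example (the default is 42 seconds):

      # lower the client-side ping timeout for the 'datastore' volume
      gluster volume set datastore network.ping-timeout 10
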
  19. J

    Mastering Proxmox - a book about Proxmox VE is finally available

    Congrats Wasim! I know how much of a pain it can be to get a book launched.