Yes, hence the "For my UPS backed & well-backed-up servers, I have the following settings:" and the
"And the controversial one - which of course made the biggest difference... :)" parts of my message.
Howdy! Since ZFS is so much more complex than EXT4, it requires some tuning to perform well.
For my UPS backed & well-backed-up servers, I have the following settings:
sysctl.conf
vm.swappiness = 1
vm.min_free_kbytes = 131072
/etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592
options zfs...
Don't set vm.swappiness to 0; set it to 1 for safety's sake.
vm.swappiness = 0: The kernel will swap only to avoid an out-of-memory condition, when free memory falls below the vm.min_free_kbytes limit. See the "VM Sysctl documentation".
vm.swappiness = 1: Kernel version 3.5 and over, as well as...
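If it helps anyone, this is roughly how I'd apply and check those on a Debian-based node without a reboot (the 8 GiB value is just the one from the zfs.conf above, size the ARC to your own RAM):

sysctl -p                                                  # reload /etc/sysctl.conf
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # apply the ARC cap immediately
cat /sys/module/zfs/parameters/zfs_arc_max                 # confirm the new limit
update-initramfs -u                                        # so the modprobe.d options take effect if ZFS loads from the initramfs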
Interesting Nemesiz, I saw the exact same behaviour you did, but I haven't seen any crashes since limiting the ARC. I still see significant pauses with large writes, though I haven't yet limited the other settings you have.
It would be helpful if this issue got some more attention.
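If anyone else wants to watch the ARC while reproducing this, the kstat file below is the standard ZFS-on-Linux one (the arcstat tool may be named arcstat.py depending on the package version):

grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats   # current vs. maximum ARC size
arcstat 5                                              # ARC size and hit rate every 5 seconds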
I've seen a similar I/O-related hang & then reboot of the node when doing a disk migration to a pair of SSD drives in 'ZFS Raid1'. Shutdown works, but takes a long time.
I've freed up one server that has this issue, and will do some intensive testing on it over the weekend.
In addition, I have...
I ran into this when a node couldn't talk to its name servers & got around it by adding the IP addresses of all the nodes in the cluster to each one's /etc/hosts file.
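For example, something like this on every node (hostnames and addresses here are placeholders, use your actual cluster nodes):

192.168.1.11  pve1.example.local  pve1
192.168.1.12  pve2.example.local  pve2
192.168.1.13  pve3.example.local  pve3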
Hey Matt!
Thankfully I documented everything to some degree or another.
Command:
ethtool --offload eth0 tx off
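One way to make that survive a reboot, assuming the usual Debian/Proxmox ifupdown setup (the interface name is just the one from the command above):

iface eth0 inet manual
        post-up ethtool --offload eth0 tx off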
Firmware location: https://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/resourcebyos.aspx?productid=1238&oemid=410&oemcatid=135967
To upgrade the firmware:
ISO downloaded...
Okay, some progress on the errors in dmesg.
Turns out there's a bug in the kernel where the wrong firmware files are being specified.
https://bugzilla.kernel.org/show_bug.cgi?id=104191
I found and copied over the missing firmware file (cbfw-3.2.5.1.bin), then got around the driver calling...
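For anyone hitting the same messages, the rough sequence (the filename comes from the bug report above; /lib/firmware is the standard location the kernel loads from):

dmesg | grep -i firmware             # see which file the driver is asking for
cp cbfw-3.2.5.1.bin /lib/firmware/   # drop the missing firmware where the kernel expects it
# then reload the Brocade driver (bfa/bna), or just reboot the node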
Howdy all,
I recently upgraded my Proxmox 3.4 install to 4.1 and since the upgrade, I can't use the VirtIO network driver on Linux clients. (We don't run Windows, so I can't test.)
The setup is fairly basic, a Quanta LB4M switch with 2x 10G SFP+ ports, two servers each with a Brocade 1020 CNA...
You may be hitting the same bug we did when using bonding balance-alb / mode 6 - the node will fall off the network as packets bounce between the ports and the switch doesn't know what to do.
http://forum.proxmox.com/threads/5914-balance-alb-on-host-causing-problems-with-guests...
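If you want to rule that out quickly, switching the bond to active-backup is a low-risk test. A rough /etc/network/interfaces sketch (interface names and addresses are examples only):

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0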
Thanks Spirit! Bizarre that it had been working under 3.2.
I had some other oddness but found an old priest and a young priest and it's now behaving. Thanks again!
I'm seeing something very similar with pve-qemu-kvm 2.1-10. It's picking up the hostname of the server as a virtio option.
# dpkg-query -l | grep pve-qemu-kvm
ii pve-qemu-kvm 2.1-10 amd64 Full virtualization on x86 hardware
# /usr/bin/kvm...
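To see the full command line Proxmox generates for a guest without starting it, qm showcmd is handy for spotting the stray option (the VM ID below is just an example):

qm showcmd 101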
Re: Proxmox and GlusterFS
What about using the "backupvolfile-server=" mount option?
gluster1:/datastore /mnt/datastore glusterfs defaults,_netdev,backupvolfile-server=gluster2 0 0
And you can change Gluster's ping-timeout... See...
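For reference, ping-timeout is a volume option, so something along these lines should do it (volume name taken from the fstab line above; 10 seconds is just an example value):

gluster volume set datastore network.ping-timeout 10
gluster volume info datastore     # the new value shows up under "Options Reconfigured"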
It should work great; I've done similar creative networking in the past.
I'd pick up 3 Brocade 1020 cards to link each node with 10GbE. Ballpark $200 including cables.