:) As always, know what you are doing/dealing with. See more at:
http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes. Having an NVRAM-backed controller doesn't break ZFS, but it might be inefficient in some cases:
'Contact your storage vendor for instructions on how...
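For reference, here is a minimal sketch of that tuning on ZFS on Linux (the guide itself is written for Solaris; the module parameter below is the ZoL equivalent, and the modprobe file name is just my own choice):

# Check the current value of the cache-flush tunable (ZFS on Linux).
cat /sys/module/zfs/parameters/zfs_nocacheflush

# Disable cache flushes ONLY if every device behind the pool has a
# non-volatile (battery/flash-backed) write cache, otherwise you risk
# losing the last few seconds of writes on power failure.
echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush

# Make it persistent across reboots.
echo "options zfs zfs_nocacheflush=1" > /etc/modprobe.d/zfs-nocacheflush.conf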
Consider your controller part of your drive(s); this is like an HBA/JBOD IMHO. And it's not that all bets are off: ZFS can also be run across/on top of hardware-RAIDed device(s) if desired, like SAN LUNs etc., no problem; I've done that for large Oracle RDBMSes.
I think this merely refers to 'do not use some kind of volume manager between the device and ZFS', not that you couldn't use a smart controller and its possible write cache.
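As a hedged illustration of that point: creating a pool directly on a controller-exported logical drive or SAN LUN is just an ordinary zpool create; the device name and ashift below are assumptions for the example.

# /dev/sdb is a hypothetical logical drive exported by the RAID controller
# (or a multipathed SAN LUN); adjust to your environment.
zpool create -o ashift=12 tank /dev/sdb

# Redundancy is provided by the controller, so ZFS sees a single
# top-level vdev.
zpool status tank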
Yes, that's what I meant: single RAID0 'volumes', one per disk. This is the ideal setup IMHO.
We run 4.3 on DL360 Gen9 booting in UEFI mode but with hardware RAID, no problem. Why use HBA mode?
Why not benefit from, say, your Smart Array controller's write cache and read-ahead?
If you don't want to use HW RAID, then just make a 1:1 logical-to-physical drive mapping.
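A rough sketch of that 1:1 mapping on an HP Smart Array controller with ssacli (hpssacli on older tool versions); the slot number and drive IDs are made up for the example.

# List the physical drives on the controller in slot 0.
ssacli ctrl slot=0 physicaldrive all show

# Create one RAID0 logical drive per physical drive.
ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
ssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0

# Keep the battery/flash-backed write cache working for you,
# e.g. a 25% read / 75% write split.
ssacli ctrl slot=0 modify cacheratio=25/75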
Thanks, we know!
It's purely a cost-based decision not to separate the networks physically, and our iSCSI isn't heavily loaded anyway; we're using different VLANs of course :) We can do traffic shaping among the VLANs in the switches if desired.
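Just to illustrate (the interface name, VLAN ID and address are hypothetical), an iSCSI VLAN on top of an existing bond can be brought up ad hoc like this:

# Tag iSCSI traffic on its own VLAN (ID 20) on top of an existing bond.
ip link add link bond0 name bond0.20 type vlan id 20
ip addr add 10.20.0.11/24 dev bond0.20
ip link set dev bond0.20 mtu 9000 up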
Nope, not for iSCSI traffic; iSCSI is used by the hypervisor nodes as a shared SAN for VM storage to allow live migrations.
An IP load balancer is run in a VM to balance traffic from remote peers across the other service VMs.
iSCSI is just the main reason to use a large MTU on our internal networks.
I assume this is true insofar as you don't want all your CPU cores to be DoS'ed by outside-generated packets alone (if your pipe is bigger than that number of cores can handle), but instead want to leave at least one core able to handle other stuff, like managing an SSH connection/CLI for yourself :)
I haven't really got any benchmarks, but multi-queue NIC(s) would be useful anytime on any [Linux] OS instance where you want to process more packets/sec from the NIC(s) than a single CPU core can handle, and this is more often the case for central network boxes like routers, FWs, load...
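As a quick sketch (the interface name and queue count are examples), the queue layout can be inspected and spread over several cores with ethtool:

# Show how many RX/TX queues the NIC supports vs. how many are in use.
ethtool -l eth0

# Spread packet processing across 4 combined queues,
# one per core you want handling network load.
ethtool -L eth0 combined 4

# Check that the queue interrupts really land on different cores.
grep eth0 /proc/interrupts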
I believe your ESX link is talking about using the tg3 driver in the hypervisor node, not in a VM.
Anyway, are you talking about using iSCSI from inside a VM, or as the underlying shared VM storage on your hypervisor nodes?
We've dropped the GAIA FW and another big-name FW, as neither could do...
Thanks, that was also our initial reason to run everything internally in our network at MTU 9000, and everything is running fine with MTU 9000. Only it seems to hinder some remote peers, probably also ones with a larger MTU, from talking to our IP load balancers.
Currently using MTU 1500 on the load balancer public NICs and...
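For what it's worth, a quick way to sanity-check which MTU actually passes a given path is a do-not-fragment ping sized just below the MTU (28 bytes go to the IP and ICMP headers; the addresses here are only examples):

# Internal/storage path: should pass end-to-end with MTU 9000.
ping -M do -s 8972 10.20.0.1

# Public path towards a remote peer: should pass with MTU 1500.
ping -M do -s 1472 203.0.113.10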
Hm, I think not, partly because I've only seen issues for some incoming TCP connection attempts: they get as far as DATA in an SMTP dialogue and then the flow stops. I believe the MSS value should be calculated from the NIC's MTU during the TCP SYN/ACK phase, hence the attempt to lower the VMs' NIC MTUs. But I'm not a network expert and my net...
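One alternative to lowering the VM MTUs (only a sketch, not something taken from our setup) would be clamping the MSS on whatever box forwards the traffic, so the SYN/SYN-ACK advertises a segment size that fits the smallest MTU on the path:

# Clamp the advertised MSS on forwarded TCP SYN packets to the path MTU,
# so peers negotiate segments that fit even though the internal side runs MTU 9000.
iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
         -j TCPMSS --clamp-mss-to-pmtu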
We're running our PVE HNs attached to two Cisco Nexus 5672 leaf switches, configured to support MTU 9000. So our hypervisor nodes all allow MTU 9000 on their physical NICs for iSCSI traffic etc., and most of our VMs also allow MTU 9000 on their vNICs.
Two CentOS 6 VMs are used as a HAProxy load-balancing...
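A minimal sketch of such a HAProxy setup (the frontend/backend names and addresses are invented for the example, and the snippet overwrites the live config, so treat it as illustration only):

# Write an example TCP-mode config and check it before reloading.
cat > /etc/haproxy/haproxy.cfg <<'EOF'
global
    daemon
    maxconn 4096

defaults
    mode    tcp
    timeout connect 5s
    timeout client  1m
    timeout server  1m

frontend smtp_in
    bind :25
    default_backend smtp_nodes

backend smtp_nodes
    balance roundrobin
    server mx1 192.168.10.11:25 check
    server mx2 192.168.10.12:25 check
EOF

haproxy -c -f /etc/haproxy/haproxy.cfg && service haproxy reload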
Request For Enhancement
Whenever an HA-managed VM is requested live-migrated, it should be possible for HA to validate whether the destination hypervisor node is electable before performing the migration task; if not, then fail the migration task, or better, only present valid electable HN nodes in the UI pop-up list :)
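In the meantime, the HA group/node constraints can at least be checked from the CLI before migrating; just a sketch using stock PVE tools:

# Show the current HA resource and node status.
ha-manager status

# List the HA groups (and the nodes they restrict resources to)
# plus the managed resources, via the API from the CLI.
pvesh get /cluster/ha/groups
pvesh get /cluster/ha/resources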
Got a PVE-firewalled VM that's only randomly letting me connect to its port 443 from the same allowed source; I cannot figure out why it's not stable.
PVE is the latest 4.2.15 with pve-kernel 4.4.10-1, and the VM is running CentOS 6.8 with no iptables/SELinux, virtio net driver, and no packet loss seen in the VM.
# netstat...
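Things that can be checked on the node while reproducing it (the VMID 100 below is just an example):

# Is the firewall service itself healthy on the node?
pve-firewall status

# The per-VM rules live in /etc/pve/firewall/<vmid>.fw.
cat /etc/pve/firewall/100.fw

# Watch the generated iptables counters while retrying the connection,
# to see whether the SYNs hit a DROP rule or never arrive.
watch -n1 "iptables-save -c | grep 'dport 443'"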