Search results

  1. stefws

    Can't install Proxmox on HP Proliant DL380 Gen9

    :) as always, know what you are doing/dealing with. See more on: http://www.solarisinternals.com/wiki/index.php/ZFS_Evil_Tuning_Guide#Cache_Flushes, having an NVRAM-backed controller doesn't break ZFS, but it might be inefficient in some cases: 'Contact your storage vendor for instructions on how...
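
    A minimal illustration of the tuning that guide section describes, assuming ZFS on Linux (the zfs_nocacheflush module parameter); only consider it when every vdev sits behind non-volatile (battery/flash backed) write cache, and verify the parameter exists on your ZFS version first:

      # disable ZFS cache flushes at runtime (risky unless all cache is non-volatile)
      echo 1 > /sys/module/zfs/parameters/zfs_nocacheflush
      # persist across reboots
      echo "options zfs zfs_nocacheflush=1" > /etc/modprobe.d/zfs.conf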
  2. stefws

    Can't install Proxmox on HP Proliant DL380 Gen9

    Consider your controller part of your drive(s); this is like an HBA/JBOD IMHO. I don't think all bets are off: ZFS can also be run across/on top of HW RAID device(s) if desired, like SAN LUNs etc., no problem; done that for large Oracle RDBMSes.
  3. stefws

    Can't install Proxmox on HP Proliant DL380 Gen9

    Think this merely refers to 'Do not use some kind of volume manager between device and ZFS', not that you couldn't use a smart controller and its possible write cache. Yes, that's what I meant: single RAID0 'volumes' per disk; think this is ideal IMHO.
  4. stefws

    Can't install Proxmox on HP Proliant DL380 Gen9

    We run 4.3 on DL360 Gen9 booting in UEFI mode, but with hardware RAID, no problem. Why use HBA mode? Why not benefit from your Smart Array controller's write cache and read-ahead? If you don't want to use HW RAID, then just make a 1:1 logical-to-physical drive mapping.
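
    A hedged sketch of that 1:1 mapping on a Smart Array controller using HPE's ssacli (older releases ship it as hpssacli); the controller slot and physical drive IDs below are placeholders, so list yours first:

      # list physical drives on the controller in slot 0
      ssacli ctrl slot=0 physicaldrive all show
      # create one RAID0 logical drive per physical drive (1:1 mapping)
      ssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
      ssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0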
  5. stefws

    PMTUD or large MTU size

    Thanks, we know! It's purely a cost-based decision not to separate the networks physically, and our iSCSI isn't heavily loaded either; we're using different VLANs of course :) We can do traffic shaping among VLANs in the switches if desired.
  6. stefws

    PMTUD or large MTU size

    Yep, the storage network is just a VLAN on a shared physical 2x 10 Gbps link to save costs on 10 Gbps switches. So we made it all MTU 9000 :)
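
    A sketch of what that can look like in /etc/network/interfaces on a PVE node, assuming ifupdown bonding; the bond members, VLAN ID 30 and the address are made-up placeholders:

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          mtu 9000

      auto bond0.30
      iface bond0.30 inet static
          address 10.0.30.11
          netmask 255.255.255.0
          mtu 9000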
  7. stefws

    PMTUD or large MTU size

    Nope, not for iSCSI traffic; iSCSI is used by the hypervisor nodes as a shared SAN for VM storage to allow live migrations. The IP load balancer runs in a VM to balance traffic from remote peers across the other service VMs. iSCSI is just the main reason to use a large MTU on our internal networks.
  8. stefws

    KVM and multi queue NICs

    I assume this is true insofar as you don't want all your CPU cores to be DoS'ed by externally generated packets alone (if your pipe is bigger than your cores can handle), but would rather leave at least one core free to handle other stuff, like managing an SSH connection/CLI for yourself :)
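
    A guest-side sketch of that idea, assuming a virtio NIC named eth0 and 4 vCPUs (both placeholders): enable one queue fewer than the vCPU count so one core stays free:

      # show supported and currently active queue counts
      ethtool -l eth0
      # use 3 of 4 cores for packet processing, leaving one free
      ethtool -L eth0 combined 3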
  9. stefws

    KVM and multi queue NICs

    Haven't really got any benchmarks, but multi-queue NIC(s) are useful any time on any [Linux] OS instance where you want to process more packets/sec from the NIC(s) than a single CPU core can handle, and this is more often the case for central network boxes like routers, FWs, load...
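
    On the Proxmox side, multiqueue is enabled per virtual NIC; a sketch, with VMID 101, vmbr0 and 4 queues as placeholder values:

      # give the VM's virtio NIC 4 queues (match the vCPUs meant for packet work)
      qm set 101 -net0 virtio,bridge=vmbr0,queues=4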
  10. stefws

    KVM and multi queue NICs

    Sometimes you end up in a catch-22 state with older HW too :) I'd rather go more stable than performant if it's a choice.
  11. stefws

    KVM and multi queue NICs

    Believe your ESX link is talking about using the tg3 driver in the hypervisor node, not in a VM. Anyway, are you talking about using iSCSI from inside a VM, or as the VMs' underlying shared storage from your hypervisor nodes? We've dropped the GAIA FW and another big-name FW, as neither could do...
  12. stefws

    PMTUD or large MTU size

    Thanks, that was also our initial reason to run everything internal to our network at MTU 9000, and everything is running MTU 9000 fine. Only it seems to hinder some remote peers, probably also with a larger MTU, from talking to our IP load balancers. Currently using MTU 1500 on the load balancer public NICs and...
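
    For reference, lowering the MTU on a CentOS 6 guest's public NIC looks roughly like this (eth1 as the public-facing interface is an assumption):

      # change it immediately
      ip link set dev eth1 mtu 1500
      # persist it in the interface config
      echo "MTU=1500" >> /etc/sysconfig/network-scripts/ifcfg-eth1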
  13. stefws

    PMTUD or large MTU size

    Hm, think not, partly because I've only seen issues for some incoming TCP connection attempts: they get to DATA in an SMTP dialog, then the flow stops. Believe the MSS value should be calculated from the NIC's MTU during the TCP SYN/ACK phase, hence the attempt to lower the VMs' NIC MTUs. But I'm not a network expert and my net...
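
    An alternative some setups use instead of lowering the guest NIC MTU is clamping the TCP MSS to the path MTU on the box that forwards the traffic; a generic iptables sketch (chain and placement depend on your setup):

      iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
          -j TCPMSS --clamp-mss-to-pmtu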
  14. stefws

    PMTUD or large MTU size

    Running our PVE HNs attached to two Cisco Nexus 5672 leaf switches, configured to support MTU 9000. So our hypervisor nodes all allow MTU 9000 on their physical NICs for iSCSI traffic etc., and most of our VMs also allow MTU 9000 on their vNICs. Two CentOS 6 VMs are used as a HAProxy load balancing...
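
    A quick way to confirm the jumbo path actually works end to end is a non-fragmenting ping with an 8972-byte payload (9000 minus 20 bytes IP and 8 bytes ICMP header); the target address is a placeholder:

      ping -M do -s 8972 -c 3 10.0.30.12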
  15. stefws

    [#1049] RFE: HA migrations

    Yes, if this is overlooked and a VM is migrated to a node not allowed by HA, it's migrated straight back again after being resumed; a waste of time etc.
  16. stefws

    [#1049] RFE: HA migrations

    Request for Enhancement: whenever a live migration is requested for an HA-managed VM, HA should validate whether the destination hypervisor node is electable before performing the migration task; if not, fail the migration task, or better, only present the valid electable HNs in the UI pop-up list :)
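
    For context, the set of allowed nodes is already expressed via an HA group, which is what such a check could validate against; a sketch with placeholder group and node names, assuming the ha-manager CLI:

      # restrict the resource to two named nodes
      ha-manager groupadd lb-nodes --nodes "pve1,pve2" --restricted 1
      ha-manager add vm:101 --group lb-nodes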
  17. stefws

    unable to create overlay network using OVS

    Why make an overlay instead of just VLAN-tagging the VM NICs, connecting the VMs to vmbr0, and letting them use whatever IP range they need?
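
    A sketch of the VLAN-tag approach, with VMID 101 and VLAN 30 as placeholders (the tag is applied on the bridge port, no overlay needed):

      qm set 101 -net0 virtio,bridge=vmbr0,tag=30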
  18. stefws

    Access webui without port address

    Bookmark the URL :)
  19. stefws

    Yey another weird cnx issue to a VM

    Got a PVE-firewalled VM that's only randomly letting me connect to its port 443 from the same allowed source; cannot figure out why it's not stable. PVE is the latest 4.2.15 with pve-kernel 4.4.10-1, and the VM is running CentOS 6.8, no iptables/selinux, virtio net driver, no packet loss seen in the VM # netstat...
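
    A debugging sketch, not from the thread itself: compare what arrives on the host-side tap device with what the guest sees, and watch the PVE firewall (VMID 101 and the tap101i0 interface name are assumptions):

      # watch incoming SYNs for port 443 on the VM's tap device
      tcpdump -ni tap101i0 'tcp port 443 and tcp[tcpflags] & tcp-syn != 0'
      # check firewall state and log on the node
      pve-firewall status
      tail -f /var/log/pve-firewall.log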
