Search results

  1. B

    Open vSwitch Bridge + Internal Ports + Untagged traffic

    The example was wrong; you are right, the VLAN should have been set to 1. You are missing the allow-ovs and allow-vmbr0 entries, which need to exist before the interface definitions. Also, your bridge definition should come first, before any interfaces that are part of the bridge (see the sketch after this list).
  2. B

    suggestions for shared storage for a production environment

    I don't run Windows VMs, but I can't imagine the performance difference would be far off from Hyper-V or VMware, since the network and disk are both paravirtualized and all solutions use the hardware extensions provided by the CPU. Maybe I'm wrong, but I've been happy with Proxmox, but run...
  3. B

    RBD : which cache method to decrease iowait ?

    I'm not sure about the underlying implementation details of why it is faster, other than it is 'newer' ;). I primarily moved to virtio-scsi (see the sketch after this list) because virtio-blk did not work with 'discard' support, which is crucial to enable if you want to reclaim space within Ceph when blocks are deleted in the...
  4. B

    RBD : which cache method to decrease iowait ?

    Also, make sure you use virtio-scsi (not standard virtio). That made a huge difference for me.
  5. B

    Help Needed -- I need to get Proxmox 3.4 Installed onto 3 Mac Minis for a Deployment

    Re: Help Needed -- I need to get Proxmox 3.4 Installed onto 3 Mac Minis for a Deploy Here's a full install guide for Proxmox on 3 Mac Minis: http://www.jaxlug.net/wiki/2014/07/16 It was written for 3.3, but there's no reason it wouldn't work for 3.4.
  6. B

    Virtual SAN

    For any sort of HA, 3 nodes is the minimum requirement. Using 2 nodes you can 'hack' it to make it work, but it's not a good idea since you can't get quorum. I would strongly recommend Ceph for your backend storage; it offers high-performance distributed storage with high reliability and quick...
  7. B

    NoVNC Console size adaption

    What OS? For instance, with say CentOS you'd add something like 'vga=0x315 nomodeset' to the kernel boot options to force the console to be smaller (see the sketch after this list). Otherwise I think this is currently an issue with noVNC; the Java VNC viewer also had that issue. I haven't tried SPICE, but I'd assume that...
  8. B

    Rgmanager doesn’t start automatically after reboot

    According to the log, it does appear your network isn't fully up before services try to start. We noticed this ourselves when using Cisco switches without 'portfast', where STP negotiation takes about 45 seconds before the ports enter forwarding mode (see the sketch after this list). Anyhow, in your network...
  9. B

    pveceph - firefly or giant?

    I don't see any reason to deploy with Firefly at this point in time; it'll just mean you've got to do an upgrade to Giant in the future. We did some performance tests, and at least in our environment we didn't see any differences. Giant is supposed to be faster, but we only run with 3 OSDs per...
  10. B

    Move qcow2 from NFS to CEPH - No Sparse?!

    AFAIK, none. The option shouldn't be checkable in Proxmox when using standard virtio, and supposedly there are no plans to extend the standard/old virtio. It works fine with virtio-scsi, though, and it sounds like they (the KVM/QEMU people) want to phase the old virtio out completely at some...
  11. B

    Move qcow2 from NFS to CEPH - No Sparse?!

    Sounds like you need to be doing this in a test environment, since you can't do any testing on these VMs. I really can't help you much further. If your VM isn't imported as sparse, then it sounds like Proxmox's GUI option isn't doing that, in which case your options are to either fstrim...
  12. B

    Move qcow2 from NFS to CEPH - No Sparse?!

    Please provide the full cut-and-paste output of the command provided, along with the command you ran, to confirm. I know for a fact that you cannot simply look at the disk usage of the filesystem Ceph creates to check the used space, as even with ZERO images mine shows 5.6 GB used! Finally, if you use...
  13. B

    Move qcow2 from NFS to CEPH - No Sparse?!

    I've never tried using the GUI for that, so I can't say if it imports it properly, but the fact that it says format 2 is a good thing. Neither rbd info nor rbd ls will show you the on-disk size; they show you the allocated size. You have to do some nasties to query the on-disk size (see the rbd diff sketch after this list): rbd diff...
  14. B

    Move qcow2 from NFS to CEPH - No Sparse?!

    What command did you use for your conversion? Did you just use qemu-img convert directly from qcow2 to a Ceph RBD, such as: qemu-img convert -f qcow2 -O raw debian_squeeze.qcow2 rbd:data/vm-121-disk-1 If so, that's probably the issue, as I believe qemu-img only writes in RBD format 1 (see the import sketch after this list). You need to...
  15. B

    extra VLAN bridges on top on bonded network in a PVE cluster

    LACP does load balancing based on a source/destination hash. So while you won't get double the bandwidth to a single destination, it does utilize both NICs evenly, so it does double your overall bandwidth (see the bond sketch after this list).
  16. B

    extra VLAN bridges on top on bonded network in a PVE cluster

    Right. As for the virtual chassis stuff, it sounds like you didn't set up the switch in the first place, and you stated in your original request that you were "bonded over two NICs eth1+eth2 connected to two separate interconnected switches", and your config example showed you were using...
  17. B

    extra VLAN bridges on top on bonded network in a PVE cluster

    The question doesn't make sense at all. You won't ever untag multiple VLANs, as the other end wouldn't be able to put the VLANs back together. If you don't need the Proxmox boxes to be able to share VLANs, you could just tag the access port as it comes in (see the access-port sketch after this list) by adding: ovs_options vlan_mode=access...
  18. B

    extra VLAN bridges on top on bonded network in a PVE cluster

    That is not my understanding of how access ports work. Access ports are for strictly _untagged_ traffic. Trunk ports can still have a 'native' VLAN that allows untagged traffic while other traffic is tagged, but access explicitly means there is no tagged traffic. Here are the relevant sections...
  19. B

    extra VLAN bridges on top on bonded network in a PVE cluster

    See my reply to your post in the installation/configuration forum.
  20. B

    extra VLAN bridges on top on bonded network in a PVE cluster

    All I can say about that post is no, don't do that. Use Open vSwitch instead; don't use classic Linux bridges and bonds. They simply don't have the feature set you'd want for a virtualized environment, so they cause more management overhead. See the wiki I wrote on using Open vSwitch here...
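
Configuration sketches referenced in the results above. These are minimal, hedged examples, not the posters' exact configurations: interface names, VM IDs, addresses, VLAN tags, and storage/pool names are placeholders unless they come straight from the quoted posts.

For result 1 (Open vSwitch bridge + internal ports), a sketch of /etc/network/interfaces using the allow-ovs/allow-vmbr0 stanzas, with the bridge declared before the interfaces that belong to it, assuming one physical port eth0 and one internal port for the host's untagged management traffic:

    # Declare the bridge first, before any interfaces that are part of it.
    allow-ovs vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eth0 vlan1

    # Physical port attached to the bridge.
    allow-vmbr0 eth0
    iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

    # Internal port for the host's management traffic (tag=1, per the
    # thread's correction). Address and netmask are placeholders.
    allow-vmbr0 vlan1
    iface vlan1 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=1
        address 192.0.2.10
        netmask 255.255.255.0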
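For results 3 and 4 (virtio-scsi and discard), a sketch of switching an existing VM to the virtio-scsi controller and enabling discard on its Ceph-backed disk. The VM ID 101 and the storage name ceph-vm are hypothetical; the guest still has to run fstrim (or mount with the discard option) for space to actually be reclaimed:

    # Use the virtio-scsi controller instead of the old virtio-blk.
    qm set 101 --scsihw virtio-scsi-pci
    # Attach the disk as a SCSI device with discard enabled.
    qm set 101 --scsi0 ceph-vm:vm-101-disk-1,discard=on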
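For result 7 (noVNC console size), a sketch of forcing a smaller console on a CentOS-style guest with the boot options mentioned in the post; the exact file and rebuild command depend on the guest's GRUB version, and the existing options shown are placeholders:

    # /etc/default/grub (GRUB 2): append the options to the kernel command line.
    GRUB_CMDLINE_LINUX="quiet vga=0x315 nomodeset"

    # Then regenerate the GRUB config, e.g.:
    grub2-mkconfig -o /boot/grub2/grub.cfg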
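For result 8 (rgmanager and the STP delay), a sketch of enabling portfast on the Cisco access ports facing the cluster nodes so they skip the listening/learning delay before forwarding; the interface name is a placeholder:

    interface GigabitEthernet0/1
     switchport mode access
     spanning-tree portfast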
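For result 13 (checking the actual on-disk size of an RBD image), a sketch that sums the extents reported by rbd diff, using the pool/image names from the thread's example:

    # Each rbd diff line is "offset length type"; summing the lengths
    # approximates the space actually allocated on disk.
    rbd diff data/vm-121-disk-1 | awk '{ sum += $2 } END { print sum/1024/1024 " MB" }'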
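For result 14 (getting an RBD format 2 image), a sketch of converting to a raw file first and then importing with rbd import, since (per the post) qemu-img at the time wrote format 1 images when targeting rbd: directly; file and image names are taken from the thread's example:

    qemu-img convert -f qcow2 -O raw debian_squeeze.qcow2 debian_squeeze.raw
    rbd import --image-format 2 debian_squeeze.raw data/vm-121-disk-1
    rbd info data/vm-121-disk-1    # should report "format: 2"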
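For result 15 (LACP bonding), a sketch of an Open vSwitch bond over eth1 and eth2 (the NICs named in the thread) in /etc/network/interfaces; balance-tcp hashes on the packet headers, so a single flow stays on one link while traffic overall is spread across both:

    allow-vmbr0 bond0
    iface bond0 inet manual
        ovs_type OVSBond
        ovs_bridge vmbr0
        ovs_bonds eth1 eth2
        ovs_options bond_mode=balance-tcp lacp=active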
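For result 17 (tagging untagged traffic as it enters the bridge), a sketch of the ovs_options mentioned in the post applied to a physical port; the VLAN tag 10 is a placeholder:

    allow-vmbr0 eth0
    iface eth0 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0
        ovs_options tag=10 vlan_mode=access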