Search results

  1. New 3.10.0 Kernel

    Yes, the 3.10 kernel is still experimental for this very reason. If you're feeling adventurous and know the ins and outs of OpenVZ: https://openvz.org/Vzctl_for_upstream_kernel
  2. Community Subscription

    Yea, if you buy the community support, you get just that: community support. Meaning: the non-subscription repository gets all updates first (after they pass internal testing) and then the community can use and test them with the tons of different hardware combinations out there. A little while...
  3. Migrating PVE+CEPH to new IP addresses?

    First of all, it is strongly suggested to NEVER EVER change the IP address of Ceph nodes (more specifically: MONs). If you absolutely have to, you can try this: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
  4. Mastering Proxmox - a book about Proxmox VE is finally available

    Neato. I wonder whether there'd be interest in translating this to German and how complicated that would be editor/publisher-wise.
  5. proxmox snapshot vs xen/vmware snapshot

    As far as I know, the OpenVZ guys are currently developing CRIU to be able to handle snapshotting not only for containers but for individual processes as well. It's still in development though, so I am merely giving you a heads-up. What you can do to at least get filesystem snapshots of openvz...
  6. proxmox snapshot vs xen/vmware snapshot

    There's actually a difference. When you create a snapshot, you create a CoW image of the original image, meaning that any changes to the disk after this point will be saved to the new image, allowing you to go back to the original image (which also becomes read-only ("immutable") when creating a...
  7. PVE Cluster Node Maintenance Mode?

    Doesn't HA automatically move all the VMs back onto the node you're emptying if you were to use something like that?
  8. Proxmox VE Ceph Server released (beta)

    You don't exactly NEED RAID controllers for Ceph to function. - You need a RAID controller if you want more disks in your system than the mainboard's controller offers. - You can use a RAID controller to benefit from its battery-backed RW caches. Do note that regular hard drives already have...
  9. Proxmox VE Ceph Server released (beta)

    You may want to look at this benchmark: http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/. While it is actually trying to determine which IO scheduler is best for several setups, you can also see that the 8xRAID0 setup (meaning: 8 disks as 8 separate RAID0s) in almost all...
  10. 1 IP - 3 Webserver

    You don't need 3 virtual servers to host 3 websites on 1 IP; you can host all of them with one webserver. Now if you want to separate the sites for security reasons, that's completely valid. Just saying that you don't need to use virtualisation if you don't want to. If you want to, you may want...
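    One way to do the single-webserver approach described above is name-based virtual hosting, where the server picks the site by the request's Host header. A minimal nginx sketch (the domain names and document roots are made-up placeholders, not from the thread):

    ```nginx
    # Two sites on one IP/port: nginx matches the request's Host header
    # against server_name to choose the server block.
    server {
        listen 80;
        server_name site1.example.com;
        root /var/www/site1;
    }
    server {
        listen 80;
        server_name site2.example.com;
        root /var/www/site2;
    }
    ```

    Apache offers the same thing via `<VirtualHost>` blocks with `ServerName`.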
  11. ceph-performance and latency

    Just an FYI: people on the ceph-users mailing list have reported drops in ceph performance after upgrading from dumpling to firefly and from emperor to firefly respectively, with decreases of up to 20% in performance. Investigations for this are apparently underway atm ("Yeah, it's fighting for...
  12. more then one network on one network card?

    Jumbo frames are useful for Ceph; just be aware that your switches need to support them.
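    For reference, on a Debian/Proxmox node jumbo frames are typically enabled by raising the interface MTU. A sketch for /etc/network/interfaces (interface name and addresses are placeholders), assuming every switch port on the path is jumbo-enabled:

    ```
    # Hypothetical dedicated storage NIC with jumbo frames
    auto eth1
    iface eth1 inet static
        address 10.0.0.41
        netmask 255.255.255.0
        mtu 9000
    ```

    A quick way to verify the path actually carries jumbo frames is `ping -M do -s 8972 <peer>` (8972 = 9000 minus 28 bytes of IP/ICMP headers); if that fails while smaller payloads work, something in between is dropping them.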
  13. more then one network on one network card?

    The only reason why the Ceph guys recommend putting the cluster communication into a separate, private, non-routed network is for security purposes: so that it's impossible for anybody to disrupt the cluster communication by flooding the network. That said, it's not actually a necessity, ceph...
  14. Ceph + Proxmox HA

    Hm... weird. Maybe you can try this: get ceph.conf and ceph.client.admin.keyring from /etc/ceph of the Ceph cluster (or ceph-deploy) and put them into /etc/ceph on the Proxmox node. With that you can use the Ceph CLI from the Proxmox node and see if that connects okay when you turn off .2. So...
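    The copy-and-check procedure above might look like this (the hostname is a placeholder; run the copy from the Ceph admin node):

    ```
    # Copy cluster config and admin keyring over to the Proxmox node
    scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring \
        root@pve-node:/etc/ceph/

    # Then, on the Proxmox node, check connectivity with the Ceph CLI:
    ceph -s              # overall cluster health and monmap
    ceph quorum_status   # which MONs are currently in quorum
    ```

    If `ceph -s` hangs or errors from the Proxmox node while a MON is down, the problem is reachability of the remaining monitors rather than anything in the Proxmox storage layer.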
  15. Ceph + Proxmox HA

    Did you enter all the monitor addresses into Proxmox? Like so: Because if your Proxmox only knows of 1 monitor and you take that one down for testing... well.
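    For comparison, a storage.cfg entry that lists every monitor, semicolon-separated, so the client can fall back to the remaining MONs when one goes down (the IPs appear elsewhere in this thread; the storage ID and pool name are examples):

    ```
    rbd: my-ceph-storage
            monhost 192.168.178.41:6789;192.168.178.42:6789;192.168.178.43:6789
            pool rbd
            content images
            username admin
    ```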
  16. Ceph + Proxmox HA

    This delay is there because a Ceph cluster doesn't need the data to be instantly moved when a node goes down, because the others will continue to serve the data. The cluster will rebalance data after the timeout has been reached. If your Ceph isn't continuing to serve data when one of the nodes...
  17. Private networking on PV Cluster across the nodes

    Yea, Open vSwitch is a way to implement SDN (software-defined networking). You can either use VLANs to connect VMs on different nodes, or GRE (the switch needs to support MTU > 1500).
  18. Private networking on PV Cluster across the nodes

    The only thing, really, is that the network hardware connecting your nodes needs to have all the VLANs you're going to use mapped to the trunking ports that connect to the nodes.
  19. Logs/debug information why the ceph connection is failing

    I think I fixed it. I noticed how, for reasons that are beyond me, apt was only listing outdated ceph-common versions (aren't Firefly packages supposed to be in the Debian stable main repository?). I had to add deb http://ceph.com/debian-firefly/ wheezy main and do "aptitude upgrade ceph-common"...
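    Spelled out, the fix described above would be roughly the following (the repository line is the one given in the post; the sources.list.d filename is an arbitrary choice):

    ```
    # Add the upstream Ceph Firefly repository and upgrade ceph-common from it
    echo "deb http://ceph.com/debian-firefly/ wheezy main" \
        > /etc/apt/sources.list.d/ceph-firefly.list
    apt-get update
    aptitude upgrade ceph-common
    ```

    Note that apt will warn about an unverifiable repository unless Ceph's release signing key has also been imported with apt-key.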
  20. Logs/debug information why the ceph connection is failing

    Hi, does the pve-manager dump logs or error messages somewhere, indicating why the Ceph connection is failing? In the web interface I can only see a connection problem, which isn't saying much. storage.cfg: rbd: rbd monhost 192.168.178.41:6789;192.168.178.42:6789;192.168.178.43:6789...