Search results

  1.

    VLAN Question

    I am not an advanced network engineer. I would like to connect my modem to the switch and give it a VLAN tag, then set a VLAN tag on the network interface of a KVM container and run the firewall in there. As a test I have given a PC a VLAN tag in my switch; I set the switch port to Access and...
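
A sketch of the Linux side of such a test using iproute2; the interface name eth0, VLAN ID 30, and the address are assumptions, not values from the post:

```shell
# Hedged sketch: create a tagged sub-interface so traffic from this
# host leaves with a VLAN tag (requires root and the 8021q module;
# eth0, VLAN 30 and the address are assumptions).
ip link add link eth0 name eth0.30 type vlan id 30
ip link set eth0.30 up
ip addr add 192.168.30.2/24 dev eth0.30
```

With a tagged sub-interface like this, the switch port would need to be set to trunk/tagged for that VLAN rather than Access.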
  2.

    Proxmox VE Ceph Server released (beta)

    You don't need Proxmox to create mons and OSDs? Sent from my SM-G920F using Tapatalk
  3.

    Ceph (incremental)backup

    This is more a Ceph question than a Proxmox question, I guess. But I want to back up the images in my Ceph pool for offsite backup, preferably incrementally. How would one do this? Can I take a snapshot like rbd snap create vm-105-disk-1@Initial and then maybe for each day rbd snap create...
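
The snapshot idea in the question maps onto rbd's diff export/import. A hedged sketch, assuming a pool named rbd, hypothetical file names, and an assumed image size on the offsite side:

```shell
# Day 0: snapshot, then export everything up to that snapshot.
rbd snap create rbd/vm-105-disk-1@Initial
rbd export-diff rbd/vm-105-disk-1@Initial /backup/initial.diff

# Day 1: snapshot again and export only the changes since Initial.
rbd snap create rbd/vm-105-disk-1@day1
rbd export-diff --from-snap Initial rbd/vm-105-disk-1@day1 /backup/day1.diff

# Offsite: replay the diffs onto an empty image of the same size
# (the 32768 MB size is an assumption).
rbd create rbd/vm-105-disk-1-copy --size 32768
rbd import-diff /backup/initial.diff rbd/vm-105-disk-1-copy
rbd import-diff /backup/day1.diff rbd/vm-105-disk-1-copy
```

Each import-diff leaves the end snapshot in place on the copy, so the next day's diff can be applied on top of it.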
  4.

    KVM to LXC

    Hi, thanks for your message, but the wiki doesn't explain how to take a KVM raw file and boot it in LXC. To be clear, I'm not looking to move from OpenVZ to LXC, but from KVM to LXC.
  5.

    KVM to LXC

    Is there an easy way to move a Linux VM to LXC? Can I just mount the raw file and boot it with LXC?
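
One hedged way to attempt this, assuming a single-partition raw image and container ID 105 (both hypothetical): mount the image via a loop device and copy the filesystem out, rather than booting the raw file directly.

```shell
# Attach the raw image with partition scanning; --show prints the
# allocated loop device (e.g. /dev/loop0). Path is an assumption.
losetup -fP --show /var/lib/vz/images/105/vm-105-disk-1.raw
mount /dev/loop0p1 /mnt/rawroot

# Copy the root filesystem into a directory tree for the container,
# preserving ACLs and xattrs, then point the LXC rootfs config at it.
rsync -aAX /mnt/rawroot/ /var/lib/lxc/105/rootfs/

umount /mnt/rawroot
losetup -d /dev/loop0
```

The container then boots from the copied directory tree; the VM's kernel, initramfs, and fstab entries for the old disk are simply ignored or need pruning.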
  6.

    Upgrade to 4.0 some minor problems

    OK, this took me ages to find... There were multicast problems; at first I thought it was the router. But after entering iptables -A INPUT -m addrtype --dst-type MULTICAST -j ACCEPT, it joined with no problem.
  7.

    Upgrade to 4.0 some minor problems

    One more problem has appeared now that I have upgraded the second node. It is unable to join the cluster; when adding the second node to the first, it stays at waiting for quorum.
  8.

    Upgrade to 4.0 some minor problems

    I upgraded one node and all went fine apart from two small issues. Ceph wouldn't start: it complained there was no config file, but it's there. When starting with -c pointing to the config file it works? And I keep getting the following error when starting things. libust[22857/22857]: Warning: HOME...
  9.

    LXC and Ceph

    I am curious what the best solution would be. CloudStack apparently supports it only as a data disk, not for root -> https://cwiki.apache.org/confluence/display/CLOUDSTACK/LXC+Enhancements while rbd mapping and running on top of this should also work...
  10.

    LXC and Ceph

    So looking forward to finally being able to use containers on Ceph instead of only KVM!!! :cool: Hope this will come in v4.0; I'll instantly convert all Linux VMs to LXC :rolleyes:
  11.

    OpenVZ released 3.X kernel

    A version 3 kernel is now in development for OpenVZ: http://lists.openvz.org/pipermail/announce/2015-April/000579.html
  12.

    mapping a RBD

    Is it possible in any way to directly map an RBD device from Ceph, as the RBD module is not present in the PVE kernel? I would like to convert an OpenVZ container to KVM; to make this "easier" it would help to be able to map an RBD device. Or does anyone know of another method?
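
If a kernel with the rbd module is available, the mapping itself looks roughly like this; the pool/image names, client ID, and target path are assumptions:

```shell
# Load the kernel RBD client and map the image as a block device.
modprobe rbd
rbd map rbd/vm-100-disk-1 --id admin   # appears as e.g. /dev/rbd0

# Copy the block device out (for example, into a raw file), then detach.
dd if=/dev/rbd0 of=/var/lib/vz/images/100/vm-100-disk-1.raw bs=4M
rbd unmap /dev/rbd0
```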
  13.

    Recommendations for Ceph on gigabit?

    Sorry, I do have 2 Gbit adapters. One is used for clients and one for the cluster network of Ceph. I should see if I can increase the journal. In the meantime I will try whether setting a lower osd target transaction size helps anything.....
  14.

    Recommendations for Ceph on gigabit?

    When monitoring my bandwidth with nload, Ceph traffic never reaches 1 Gbit (the max is around 600 Mbit). I can easily reach 1 Gbit with iperf, so the network on the interface is OK. I also enabled CephFS and mounted it with ceph-fuse for OpenVZ. In the performance with RBD I see with rados benchmark that I...
  15.

    Recommendations for Ceph on gigabit?

    I now have everything running on 1 Gbit, but Proxmox shows very high IO delays (the journal is on SSD). Throughput maxes out at about 1 Gbit, but CPU usage and IO delay are quite high. Is this normal? The nodes are 2x Xeon E5335 with 32 GB RAM.
  16.

    Ceph stays in HEALTH_WARN

    For now I have one node with the OSDs, but it has enough OSDs to replicate. Just for kicks I purged everything, restarted, and set the pool size to 1, just to see if it would get healthy. cluster e7720091-1647-4006-84f5-b627bf057609 health HEALTH_WARN 64 pgs stuck unclean monmap...
  17.

    Ceph stays in HEALTH_WARN

    I created a Ceph node, but the health never becomes healthy. I increased the pg count from the default, but I don't know what else I can try. The cluster stays like the following: cluster 6b91e476-9579-43a1-a589-52e01a49bcc6 health HEALTH_WARN 256 pgs incomplete; 256 pgs stuck inactive; 256...
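
On a single-node cluster, one common cause of PGs never becoming clean is the default CRUSH rule, which insists on placing replicas on different hosts. A hedged sketch of the usual workarounds (pool name rbd assumed):

```shell
# Let CRUSH pick replicas across OSDs instead of hosts. In ceph.conf
# before the OSDs are created, this would be:
#   [global]
#   osd crush chooseleaf type = 0
# Alternatively, relax the replication requirements of an existing pool:
ceph osd pool set rbd size 2
ceph osd pool set rbd min_size 1
```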
  18.

    Shared ssd disc for host and journal

    Solved this issue. Partition the SSD with the space you need for Proxmox and keep the rest free; then you can just use the web interface and choose the disc as journal. It will not overwrite the Proxmox installation but will create new partitions in the remaining space. It is a major speed...
  19.

    Shared ssd disc for host and journal

    How do I share an SSD between the Proxmox host and the journal for Ceph? I tried pveceph createosd /dev/sdb --journal_dev /dev/sda2 This works, but creating the second OSD errors out because they use the same journal. Should I make separate partitions for all OSDs? Or is there another way?
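
Separate journal partitions per OSD avoid the collision described here. A hedged sketch with assumed sizes, device names, and partition numbers:

```shell
# Carve one journal partition per OSD out of the free space on the
# shared SSD (/dev/sda); the 10 GiB size is an assumption. sgdisk's
# partition number 0 means "first available".
sgdisk --new=0:0:+10G --change-name=0:osd0-journal /dev/sda
sgdisk --new=0:0:+10G --change-name=0:osd1-journal /dev/sda

# Then give each OSD its own journal partition, e.g.:
pveceph createosd /dev/sdb --journal_dev /dev/sda2
pveceph createosd /dev/sdc --journal_dev /dev/sda3
```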
  20.

    Proxmox VE 3.3 and Freenas 9.2.7 ZFS via ISCSI

    Hi mir, thanks for your reply! Are you sure about this limit? I found the following: http://www.secnetix.de/olli/FreeBSD/svnews/index.py?r=278037 According to this, the limit is now 256 and will be changed to 1024.