There are a couple of servers available at Hetzner (Finland region). You can also order additional hard drives and switches if needed, so you can build a cluster.
We have a smaller server from Hetzner as well. Right now it runs Ubuntu, but there is a request from our dev team for a dozen VMs, so I would like to build a cluster and attach our Ceph storage.
You can also change the limits on an already running process:
#!/usr/bin/env bash
# Raise the open-files limit of every running KVM process.
for PID in $(ps aux | grep /usr/bin/kvm | grep -v grep | awk '{ print $2 }'); do
    SOFT_LIMIT="1048576"
    HARD_LIMIT="2097152"
    echo "Changing the limits for PID ${PID}"
    # prlimit (util-linux); --nofile takes soft:hard
    prlimit --pid "${PID}" --nofile="${SOFT_LIMIT}:${HARD_LIMIT}"
done
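As a quick sanity check before changing anything, `prlimit` can also just report a process's current limits. A minimal sketch against the current shell (assumes `prlimit` from util-linux is installed):

```shell
# Read (not change) the open-files limit of the current shell.
# prlimit without a new value only prints the resource.
NOFILE_LINE=$(prlimit --pid $$ --nofile | tail -n 1)
echo "${NOFILE_LINE}"
```

The same `--pid`/`--nofile` pair, given a `soft:hard` value, performs the actual change as in the script above.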
Please try my solution (steps 3–6) from this reply.
You can check the limits with this script:
#!/usr/bin/env bash
# Print the current soft and hard open-files limits of every KVM process.
for PID in $(ps aux | grep /usr/bin/kvm | grep -v grep | awk '{ print $2 }'); do
    SOFT_LIMIT=$(grep "Max open files" /proc/${PID}/limits 2>/dev/null | awk '{ print $4 }')
    HARD_LIMIT=$(grep "Max open files" /proc/${PID}/limits 2>/dev/null | awk '{ print $5 }')
    echo "PID ${PID}: soft=${SOFT_LIMIT} hard=${HARD_LIMIT}"
done
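The `/proc/<pid>/limits` parsing used here can be tried safely against the current shell itself. A minimal sketch (Linux-only, since it relies on procfs):

```shell
# Extract the soft (column 4) and hard (column 5) open-files limits
# from the current shell's own /proc entry.
SOFT=$(awk '/Max open files/ { print $4 }' /proc/self/limits)
HARD=$(awk '/Max open files/ { print $5 }' /proc/self/limits)
echo "soft=${SOFT} hard=${HARD}"
```

Replacing `self` with a numeric PID gives the same information for any process you can read.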
@kifeo I just found a draft I had created for myself some time ago:
# Migrating a running cluster to new IPs
## Ceph Network overview
A Ceph network overview is given [in this article][Ceph Network Configuration Reference]. Please read it before you
continue with the current page...
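For the migration draft above, the addresses that usually need changing live in `ceph.conf`. A minimal sketch of the relevant options; both subnets are made-up examples, not values from the original post:

```
[global]
    # Client-facing traffic (MONs, clients) - hypothetical subnet
    public network  = 10.0.0.0/24
    # OSD replication/heartbeat traffic - hypothetical subnet
    cluster network = 10.0.1.0/24
```

Note that monitor addresses are also pinned in the monmap and the `mon host` setting, so editing the networks alone is not sufficient for a live cluster.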
I have a similar setup: an IBM storage device presents the disks over 2 HBA adapters, and I'm using multipath to access them.
With Proxmox 3.x we had clustered LVM as storage. It seems that with Proxmox 4.x clustered LVM is not available, because clvmd cannot be started (cannot work with...
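For reference, the multipath side of such a setup is typically configured in `/etc/multipath.conf`. A minimal sketch with a made-up WWID and alias (not taken from the original post):

```
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        # hypothetical WWID of the IBM LUN, as reported by `multipath -ll`
        wwid  3600507680c800001234567890abcdef0
        alias ibm_lun0
    }
}
```

The aliased `/dev/mapper/ibm_lun0` device is then what the LVM layer sits on top of.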