Templates aren't quite full VMs, so I'm not sure how to manage them in terms of making sure they're always available. It looks like I can assign them to HA, but that seems odd to me since it has options like "started" in there. If I do make one HA, will that mean the template will come up on a different...
I can test this live. So if I shut down one of my nodes that has a VM on it that's set to HA for the whole group, I should expect it to appear automagically on another node after the shutdown?
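For what it's worth, this is roughly how I'd wire up the HA side from the CLI while testing. Just a sketch: `ha-manager` is the stock Proxmox tool, but the VMID and the group name `lab` are made-up examples for my setup.

```python
import subprocess

VMID = "100"    # example VM to protect; pick a real VMID in your cluster
GROUP = "lab"   # hypothetical HA group that spans the nodes

def run(cmd):
    """Run a Proxmox CLI command and show what it printed."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout, result.stderr)

# Register the VM as an HA resource. "started" tells the HA stack to keep it
# running, so after a node failure it should be recovered on another group member.
run(["ha-manager", "add", f"vm:{VMID}", "--state", "started", "--group", GROUP])

# See what the HA manager currently thinks about the resource.
run(["ha-manager", "status"])
```

From what I've read, after the dead node gets fenced (a couple of minutes by default) the resource should be restarted on another node in the group.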
I have Docker and Docker Swarm in VMs, and Portainer already on OMV, and I'm looking to spread this out across shared-storage-backed VMs with Docker inside. That way I can move the VMs with their containers freely without losing anything, and eventually automate the VMs being created/added, I think. It's a lot...
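As a first pass at the "automate the VMs being created" part, something like this rough sketch could clone the docker-host VMs out of a template. The template VMID, the storage name, and the new IDs are all placeholders I made up for illustration.

```python
import subprocess

TEMPLATE_ID = 9000          # hypothetical template VMID
STORAGE = "ceph-vm"         # placeholder name for the shared Ceph storage
NEW_IDS = [201, 202, 203]   # example IDs for the new docker-host VMs

for vmid in NEW_IDS:
    # Full clone onto shared storage so the VM can migrate freely between nodes.
    subprocess.run([
        "qm", "clone", str(TEMPLATE_ID), str(vmid),
        "--name", f"docker-host-{vmid}",
        "--full", "1",
        "--storage", STORAGE,
    ], check=True)
    subprocess.run(["qm", "start", str(vmid)], check=True)
```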
Hello,
I have a 3-node Ceph cluster with 4 OSDs in each node, configured 3/2, as a 12-disk, 120 TB HDD pool.
If I have a power outage and my UPS batteries run out, what will happen if:
1. The servers shut off at different times because of the lack of power?
2. I get to the hosts in time and shut them all...
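Not an answer to either scenario, but for when power comes back, this is the quick health check I'd run on one of the nodes (assuming the standard ceph CLI is installed there) to see whether the 3/2 pool came back clean or is still recovering.

```python
import subprocess

# Overall health, which OSDs are up/in, and recovery/backfill progress.
for cmd in (["ceph", "health", "detail"],
            ["ceph", "osd", "tree"],
            ["ceph", "-s"]):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=False)
```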
Hello,
I have a new VLAN 8. This network supports network boot and TFTP settings. Requests network-booted from a KVM host tagged on VLAN 8 get the right IP from DHCP, then go to fetch pxelinux.0 from my NAS. This works, in that the file is downloaded, but then it just...
This is a great idea, as my USG supports it and I already have TFTP ready. With the VLAN tag on the VMs' network config, this is gonna be awesome.
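One quick way I'd sanity-check the TFTP side from a machine on VLAN 8, assuming curl was built with TFTP support (the NAS address below is just a placeholder):

```python
import subprocess

NAS = "192.168.8.10"   # placeholder address of the TFTP server on VLAN 8

# Fetch pxelinux.0 over TFTP the same way a PXE client would, to confirm the
# file is actually reachable before digging into the DHCP boot options.
subprocess.run(
    ["curl", "-o", "/tmp/pxelinux.0", f"tftp://{NAS}/pxelinux.0"],
    check=True,
)
```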
I run about 5 docker-host VMs across 5 nodes, with Portainer in a swarm. Works a treat! Used with Ceph, you can move the VMs that house the containers to other hosts, do maintenance, then migrate again. Very fast and efficient. Love it.
With this setup you can test and use various orchestration methods very...
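For anyone curious, the migration step is just a live migrate of the docker-host VM, roughly like this (the VMID and node name are examples from my lab, not anything special):

```python
import subprocess

VMID = "201"     # example docker-host VM
TARGET = "pve2"  # example target node name

# With the disks on Ceph (shared storage), only RAM state has to move, so an
# online migration keeps the containers inside the VM running.
subprocess.run(["qm", "migrate", VMID, TARGET, "--online"], check=True)
```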
Thanks for helping to clarify this. So why is the Summary so much lower? I still don't understand that very well, because the Ceph total should be higher in my case, since all disks are available but not all disks are OSDs, right?
Also, there is no way I'm using 45 TB of data as the Summary states...
I'm curious why the difference here: 86 TB of Summary storage, but Ceph usage shows 120 TB total. Ceph is much closer to the full raw capacity of all the drives. The amount used in the Summary also seems far too high.
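For reference, the rule-of-thumb math with 3x replication (this is just the raw-to-usable relationship, not an explanation of every number the GUI shows):

\[
\text{usable} \approx \frac{\text{raw capacity}}{\text{replica count}} = \frac{120\ \text{TB}}{3} \approx 40\ \text{TB}
\]

Ceph's own view counts raw bytes across all OSDs, and with size=3 every byte of data is written three times, so a "used" figure can look roughly three times bigger or smaller depending on whether a given view is counting raw bytes or logical (before-replication) data.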
So are you saying to just create a VM disk and mount it in the VM? My current approach is to do that and then rsync the data onto it from a NAS. Feels kludgy, but maybe it's the right way? Then use Docker to access the volume from within the VM.
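For what it's worth, here's a rough sketch of the seeding step as I do it, run inside the VM. The NAS hostname and the paths are just placeholders for my layout.

```python
import subprocess

NAS = "nas.local"                 # placeholder NAS hostname
SRC = f"{NAS}:/export/appdata/"   # example rsync-over-SSH source path on the NAS
DEST = "/srv/appdata/"            # mount point of the extra VM disk

# Seed the VM disk (which lives on shared storage) with the data from the NAS.
subprocess.run(["rsync", "-avh", "--progress", SRC, DEST], check=True)

# Containers then just bind-mount the local path, e.g.:
#   docker run -v /srv/appdata/myapp:/data myimage
```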
Hello,
I am trying to understand how to optimize my Ceph pools and how to assign PGs to pools correctly. I have the following:
12 OSDs in HDD pool in 3 hosts
9 OSDs in NVMe pool in 3 hosts
3 OSDs in SSD pool in 3 hosts
Each one is a 3/2 with a default of 128 PGs
If I used this...
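While I wait for answers, here is the rule-of-thumb calculation I've been playing with (target roughly 100 PGs per OSD, divided by the replica count, rounded up to a power of two), applied to pools shaped like mine. Just a sketch; the pgcalc tool or the pg_autoscaler in newer Ceph releases is the real answer.

```python
import math

def suggested_pgs(num_osds, replicas=3, target_per_osd=100):
    """One common rule of thumb: (OSDs * ~100) / replicas, rounded up to a power of two."""
    raw = num_osds * target_per_osd / replicas
    return 2 ** math.ceil(math.log2(raw))

for pool, osds in (("hdd", 12), ("nvme", 9), ("ssd", 3)):
    print(f"{pool}: {osds} OSDs -> roughly {suggested_pgs(osds)} PGs")
```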
I have been tooling around quite a bit with Ceph and Proxmox over the last few weeks, and really, it is pretty awesome. It's taken my lab's mishmash of machines and hosts, made them somewhat valuable again, and made it much easier to manage my needs and experiments. I mean, it's another level of...