Thanks!
It seems that luminous doesn't have all the commands to manage this yet. I'm searching the docs now...
I'm systematically upgrading this cluster to the latest version, but I need to understand how to limit the memory usage in the process. It's just a test and dev cluster, so...
I see that ceph manages memory automatically according to https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#automatic-cache-sizing
Is the following normal for automatic cache sizing then?
Granted, some of the machines have only 8GB of RAM and are used as storage nodes...
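From what I can make of the docs, the knob behind the automatic cache sizing seems to be osd_memory_target (default 4 GiB per OSD). A rough sketch of how I'd lower it for the 8GB machines; the 2 GiB figure below is just my own guess, not a recommendation:

# on mimic and later the central config db can be used
~# ceph config set osd osd_memory_target 2147483648

# on luminous I'd instead add it to /etc/pve/ceph.conf and restart the OSDs
[osd]
    osd_memory_target = 2147483648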
So, is it one or the other for all LXCs? In other words, if I implement this kernel setting, will all containers revert to using cgroups instead of cgroupv2?
I have read through that, but something is not quite clear to me. In the Ubuntu 14.04 lxc image there is no /etc/default/grub as referred to by this linked reference. So should the systemd.unified_cgroup_hierarchy=0 parameter be set in the proxmox node kernel config?
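Based on the PVE 7 upgrade notes, my understanding is that the parameter goes on the node's kernel command line, roughly like this (which file depends on whether the node boots via GRUB or via proxmox-boot-tool/systemd-boot):

# GRUB-booted node: append the parameter in /etc/default/grub, then run update-grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
~# update-grub

# node managed by proxmox-boot-tool: append it to /etc/kernel/cmdline instead, then
~# proxmox-boot-tool refresh

and reboot the node afterwards. Please correct me if that's not the right place for it.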
After updating to ceph 16 (pacific) on pve7, I have the following condition:
~# ceph health detail
HEALTH_OK
but
~# ceph status
  cluster:
    id:     04385b88-049f-4083-8d5a-6c45a0b7bddb
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum FT1-NodeA,FT1-NodeB,FT1-NodeC (age 13h)...
No, that's the only container. But then it's also the only container that was running Ubuntu 14.04 when the upgrade to pve7 was done.
The lxc was running perfectly before though. Now, when I enter the lxc and start all the services manually, they run. But of course, that should not be, they...
Yes, pct enter 138 works. I'm in the container now, but there's no network, which is probably the main problem. I'll dig around to see what I can find.
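For my own notes, these are the kind of checks I'm doing inside the container (Ubuntu 14.04 still uses ifupdown, so this is a guess at where to look):

~# ip addr                          # is eth0 there and does it have an address?
~# cat /etc/network/interfaces      # is eth0 configured at all?
~# ifup eth0                        # try bringing the interface up by hand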
I've been in contact with them, but it's not quite what I'm looking for. I'd like something more "bare metal" where I can set up my own config, which is why I was hoping OVH would have a suitable offering.
I upgraded my nodes from PVE 6.4 to 7, having checked in advance with pve6to7 for any issues, and all seemed to have gone well, except for one container that starts, but not properly.
If I do pct start 138, no error is returned, but the container doesn't run, although it's reported as running.
~#...
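To get more detail on why it doesn't come up properly, my plan is to start it in the foreground with debug logging (my understanding of the usual troubleshooting approach, happy to be corrected):

~# lxc-start -n 138 -F -l DEBUG -o /tmp/lxc-138.log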
We are looking to add some services in Europe for clients and have been scouting for suitable hosting space. OVH seems to be one of the only options that offer Proxmox hosting. However, with so many options to select from, we can't quite figure out which is the most suitable one. We would...
Can one of the Proxmox staff members give us an indication of how one can set up a bounty for this, or better still, how one can contribute resources so this gets implemented, please? This is something we really want, but I can't find info on how we could do this.
I have since added the ceph sources to /etc/apt/sources.list.d/cepg.list as
deb http://download.ceph.com/debian-jewel jessie main
and added the key manually. The pveceph install -version jewel command would have done that I suppose, but since https://git.ceph.com doesn't have the jewel keys...
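For completeness, this is roughly what the manual steps looked like (I'm assuming the release.asc key on download.ceph.com still covers the jewel packages):

~# echo "deb http://download.ceph.com/debian-jewel jessie main" > /etc/apt/sources.list.d/cepg.list
~# wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
~# apt-get update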
A thought I had today was whether it would be possible to have a newer version of pve (i.e. v7) and use an older version of ceph (pre-bluestore), which doesn't require as much RAM as the bluestore-based versions do?
Would that be possible?
The problem is not actually Proxmox, it's ceph. As soon as I add OSDs to the cluster, the RAM usage goes up, and when I add the 3rd node with a couple of drives, the node crashes with out-of-memory errors. I tried a fresh install with PVE 7, which installed fine, but ceph 16 sank the ship.
This...