Just from experience I would think that you should drop one layer of Proxmox (I cannot see why you need to run Proxmox in a VM inside Proxmox when Proxmox provides nice features like pools & SDN to keep things separated) and run your VM with the containers on a Proxmox instance that is not...
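For reference, resource pools can also be created and populated from the CLI; the pool name and VM IDs below are only illustrative:

pvesh create /pools --poolid dev-pool --comment "development guests"
pvesh set /pools/dev-pool --vms 101,102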
I just thought about it a little more (and read up somewhat) and understand that the linked clone is based on a snapshot of the template. So once a snapshot is made, all changes go to the clone. If the template is changed, the clone still refers to the snapshot and does not include the...
I'm surprised to learn that a linked clone cannot be updated by updating the template the clone is linked to. Why is this? Is there a technical reason that one of the devs here could explain? If it were possible to update a template and have the linked clone pick up the modification, it would be a...
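For anyone following along, the usual workflow looks roughly like this (the VM IDs and name are just examples): the source VM is converted to a template, and each linked clone is then created against that template's base image:

qm template 9000
qm clone 9000 101 --name dev01 --full 0

A full clone (--full 1) copies the disks instead and stays independent of the template, at the cost of the extra space.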
We have recently upgraded to kernel 5.15 and are now having disk errors on one guest running FreeBSD 12.2.
The config is:
Let me clarify that: Because everything is virtualised, I lost the firewalls and thus remote access too. The Remote Management interfaces of the nodes are configured on a non-public network, so I guess I'll have to find a secure way of accessing these via some out-of-band system.
I have ifupdown2 installed... also, all guests on all nodes went offline
# dpkg -l 'ifupdown*'
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
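If the truncated listing is hard to read, the install state can also be checked directly; this is what an installed package should report:

dpkg -s ifupdown2 | grep '^Status'
Status: install ok installed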
I had a really perplexing situation last night. I had previously upgraded one of four nodes running the latest PVE to pve-kernel-5.15. Because the naming of the network interfaces changed at some stage, I had to recreate the /etc/network/interfaces file with the new NIC names on all the...
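For anyone hitting the same rename, this is roughly the shape the rewritten file takes; enp3s0 and the addresses are placeholders for whatever ip link reports on the node:

auto lo
iface lo inet loopback

iface enp3s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0

With ifupdown2 installed, ifreload -a applies the new file without rebooting the node.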
It seems that Luminous doesn't have all the commands to manage this yet. I'm searching the docs now...
I'm systematically upgrading this cluster to the latest version, but I need to understand how to limit the memory usage in the process. This is just a test and dev cluster, so...
I see that Ceph manages memory automatically, according to https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#automatic-cache-sizing
Is the following normal for automatic cache sizing then?
Granted, some of the machines have only 8 GB of RAM and are used as storage nodes...
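If the autotuner needs to be reined in on the 8 GB nodes, the relevant knob is osd_memory_target. A sketch, assuming roughly 2 GiB per OSD is acceptable; on releases old enough to lack the centralized config database, the same value goes into ceph.conf under [osd] instead:

ceph config set osd osd_memory_target 2147483648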
I have read through that, but something is not quite clear to me. In the Ubuntu 14.04 LXC image there is no /etc/default/grub as referred to by the linked reference. So should the systemd.unified_cgroup_hierarchy=0 parameter be set in the Proxmox node's kernel config instead?
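As far as I understand, yes: systemd.unified_cgroup_hierarchy=0 is a boot-time parameter for the host kernel, so it belongs on the Proxmox node, not inside the container. A sketch for a node that boots via GRUB ("quiet" just stands in for whatever is already on the line; nodes using systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

In /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

Then run update-grub and reboot the node.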
After updating to Ceph 16 (Pacific) on PVE 7, I have the following condition:
~# ceph health detail
~# ceph status
mon: 3 daemons, quorum FT1-NodeA,FT1-NodeB,FT1-NodeC (age 13h)...