Thanks for that. I have certbot already running on a separate host grabbing wildcard certs, and I was manually copying them over to my Proxmox nodes. With ACME now running on Proxmox, I don't have to do that any longer. I'll just stick to regular certs for my Proxmox nodes.
That's a pretty amazing increase. I did a quick iperf2 test before and after and the results were similar, ~9.86 Gb/sec for each. No massive increase for me. :( :)
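In case anyone wants to repeat the test, a quick iperf2 run looks roughly like this (the address is just a placeholder for whatever node is on the other end):
iperf -s (on one node)
iperf -c 192.168.1.10 -t 30 (on the other node)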
You mean that it doesn't build with Proxmox 6.2? Yeah, that's unfortunate but we'll have to wait on Intel to update their driver to work with kernel 5.4.
I'm hoping that it won't be too long. Until then I'm going to force myself to put up with the errors.
I would install the latest version of Proxmox. I run a small cluster at home with Intel X56xx processors and everything runs really well.
Personally, if I had your gear, I would take the 3 or 5 newest machines and build out a cluster, maximizing RAM and drives per host. Unless your...
To display a list of messages:
ceph crash ls
If you want to read the message:
ceph crash info <id>
then:
ceph crash archive <id>
or:
ceph crash archive-all
I see the same thing: the Proxmox dashboard shows all memory used, but when I go into the VM, only 2GB is actually in use and the rest is going to caching, which I would totally expect.
So from the Proxmox host's perspective, all RAM is indeed being used.
Seems that, at least on my setup, it's working as...
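If anyone wants to double-check on their own guest, something like this inside the VM shows the split; the memory the Proxmox dashboard counts as "used" mostly shows up under buff/cache:
free -h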
You should be able to use vgs to see what free space there is in the pve volume group. If there's free space, you would use lvextend to grow root by the desired amount and then use resize2fs to resize the filesystem.
Use lvdisplay to see what the path is for your logical volumes.
man lvextend
man resize2fs...
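As a rough sketch of those steps, assuming the root LV turns out to be /dev/pve/root and you want to grow it by 10G (adjust to whatever vgs and lvdisplay actually show on your system):
vgs pve
lvextend -L +10G /dev/pve/root
resize2fs /dev/pve/root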
A lot of times a simple "dd if=/dev/sdg of=/dev/null" can help narrow down which drive it is. Even if the failed drive has already been dropped by the OS, reading from the working drives will still help narrow down the bad one by process of elimination.
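Something like this against each remaining drive, one at a time, while watching the activity LEDs; the device name is just an example, and the bs= and status=progress options are only there to speed the read up and confirm it's actually reading:
dd if=/dev/sdg of=/dev/null bs=1M status=progress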
Sounds like my home lab: 3 Proxmox nodes with Ceph, each connected to the others via a 2-port 10Gb NIC in a broadcast full-mesh network for the Ceph traffic.
If that's the case, you should be able to just move any VMs and containers to another node, take the host down, and perform your maintenance. Proxmox and Ceph...
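For what it's worth, the broadcast mesh variant basically boils down to a bond in broadcast mode over the two 10Gb ports in /etc/network/interfaces, roughly like this (interface names and the address are placeholders, adapt to your hardware):
auto bond0
iface bond0 inet static
        address 10.15.15.1
        netmask 255.255.255.0
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode broadcast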
Since I'm on the latest kernel too, I tried creating some VMs: one Ubuntu 18.04.3 LTS, one Ubuntu 19.10, one Debian 10.1, and a FreeBSD VM.
The Ubuntu 18.04.3 LTS VM fails to start. It just appears to hang after checking for a clean/dirty shutdown.
The Ubuntu 19.10, Debian 10.1 and FreeBSD VMs have no...
You know, I think I ran into this problem as well. It's been a while, but looking back through my apt history, it looks like I did have this problem.
My logs indicate that I must have removed the curl package and then reinstalled it afterwards.
While I do see the apt install curl, I...
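For anyone who wants to check their own history, the rotated logs live under /var/log/apt, so something like this pulls out the curl-related entries (zcat -f handles both the compressed and plain files):
zcat -f /var/log/apt/history.log* | grep -i -B3 curl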