Actually, it turns out that it was Puppet doing this via the https://github.com/puppetlabs/puppetlabs-apt module.
Puppet Apt version 5.0 made a change via this commit, defaulting to a blank auth.conf file: https://github.com/puppetlabs/puppetlabs-apt/pull/752/files
Puppet Apt Version 6.3 added...
Also, I am not sure why, but this does appear to have fixed my issue on the two systems I tested it on.
pvesubscription get | grep key | cut -d : -f2 | xargs -I {} pvesubscription set {}
Essentially, it is just a one-line command that takes the key that is already present and sets it to the same key. I just...
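To see what the pipe actually extracts, here is essentially the same extraction step run against a made-up sample line (the key below is fake; real output comes from `pvesubscription get`):

```shell
# Hypothetical sample line standing in for `pvesubscription get` output
sample="key: pve1c-abcdef0123"
# Same grep/cut extraction as the one-liner; bare xargs strips the whitespace
key=$(echo "$sample" | grep key | cut -d : -f2 | xargs)
echo "$key"   # prints pve1c-abcdef0123
```

On a real host, that extracted value is what gets fed back into `pvesubscription set`.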
I have set them all in the interface. If I run the following, it works just fine. I don't want to leak the keys publicly here.
root@vmhost17:~# pvesubscription get | grep status
status: Active
The whole cluster shows it to be good too.
I have two clusters, one with 7 nodes and one with 12 nodes. Every host in both clusters has this problem, and all 19 hosts have a community edition Proxmox license/subscription attached to them.
The first apt update will always fail like below. However, once I run `pvesubscription update`...
Perfect. In that case, if you enable discard and run `fstrim -av` on the guest, you should reclaim the space.
If you need to reclaim more space, you will need to fill the unused space on the guest with zeros and then run fstrim. The only requirement is that the guest is not encrypted with...
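As a rough sketch of the zero-fill step (the file path and size cap here are examples; on a real guest you would drop `count=` so dd fills all remaining free space, and you need root for fstrim):

```shell
# Example only: write zeros into a scratch file, flush, then delete it.
# Drop `count=` on a real guest to fill all free space before trimming.
ZEROFILE=/tmp/zero.fill
dd if=/dev/zero of="$ZEROFILE" bs=1M count=16 status=none
sync
rm -f "$ZEROFILE"
# Then discard the now-free blocks (requires root and discard enabled):
# fstrim -av
```

The deleted zeros leave the freed blocks trimmable, which is what lets the backend storage reclaim them.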
As far as I am aware, the easiest way is to use the new trim features to reclaim space.
Assuming you are using ZFS/LVM storage and have discard enabled, you can log into the guest and run the following.
~# fstrim -av
/: 5.8 GiB (6173188096 bytes) trimmed
Once you have done this, the backend...
Hello everyone,
Yes, the intel-microcode package is up to date, and with the qemu-guest agent removed, the issue does not seem to occur.
We have to generate heavy IO load on the host for the panic to occur. This usually occurs when moving VMs or restoring VMs from backup.
This occurs on...
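For reference, a crude stand-in for the kind of synchronous write load that triggers it for us (the path and size are examples, not our real workload; we normally hit it during VM moves and restores):

```shell
# Stand-in IO load generator: synchronous 1M writes, then cleanup
LOADFILE=/tmp/ioload.bin
dd if=/dev/zero of="$LOADFILE" bs=1M count=32 oflag=dsync status=none
rm -f "$LOADFILE"
```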
In that case, you can follow the guide and use fstrim, assuming you have a SCSI controller selected and have discard turned on. That will trim the thin volume.
However, if you want to shrink how much is allocated and the actual size of the partition,
then you will want to do a command such...
Is your disk a qcow image or is it in an LVM partition? Without knowing the storage the VM is using, it is hard to say. Generally, you should be able to follow the guide here.
https://pve.proxmox.com/wiki/Shrink_Qcow2_Disk_Files
Hello,
We have been running ZFS in production for a while on various virtual hosts and have been super happy with it.
However, we have been getting these kernel panics recently, such as in the screenshot below. It seems to be tied to an issue between Proxmox and ZFS when using the SCSI disk...
Hey Fabian,
You were right! I had to go into the LSI HBA board configuration and update the number of INT 13 devices. It defaulted to 1; once I updated it to 4 to match the four disks in the server, Proxmox installed and booted without issues.
Thanks for all the help!
Hey Neil,
Assuming you want to restore the VM to the same or a different KVM machine you would have to do the following.
vzdump <vmid>
# move the dump to where it needs to go
qmrestore <path to backed up image> <new vmid>
Here are the steps I took recently to move a VM to a new cluster...
We are running into issues with getting a zfs rpool to boot after the PVE installation completes.
Recently we have been using ZFS more and more, and we are working on setting up a new server to use ZFS instead of a hardware RAID card.
The server we are installing onto has the following specs.
CPU...