I added PVE storage to the root fs and moved the hookscript there.
dir: scripts
        path /usr/local/share/pve
        content snippets
        shared 0
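For reference, pointing the guest at the script on that storage looks roughly like this (the VMID and script name are placeholders, not my real ones):

qm set 100 --hookscript scripts:snippets/guest-hookscript.pl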
Now, when the host shuts down, that guest's hookscript is found and runs as expected:
Question remains why the default local lvm storage...
I have a hookscript with pre-stop and post-start phases defined.
The hookscript is successfully called when I shut down the VM via the UI or when a scheduled backup job runs.
However, if I shut down or reboot the PVE host, when it shuts down the guest concerned, the script is not found...
I believe when backend=journal, logpath is redundant.
Regardless of whether this is set globally in jail.conf or in the bundled jail definitions, you can override it for individual jails. So, if ssh is not being logged to a file, you could set backend=journal for the sshd jail.
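As a rough sketch, a per-jail override in /etc/fail2ban/jail.local might look like this (note that fail2ban reads the journal via its systemd backend; the jail name is the one bundled with Debian):

[sshd]
enabled = true
backend = systemd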
Also, the .local...
Yes, I believe that command would download the package systemd-boot plus any updated dependencies not in your apt cache.
I do not use systemd-boot. But I'd have thought, if you're already using it under PVE7, a dist-upgrade would pull in the new version and its deps, just like all the other...
I have an earlier Intel NIC affected by the same issue. I found I only needed to disable offload when I'd configured VLANs on the interface. Traffic volume didn't appear to be a factor.
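In case it helps, one way to disable offload persistently is a post-up hook in /etc/network/interfaces, something like this (the interface name and the particular offload features here are assumptions, adjust for your NIC):

iface eno1 inet manual
        post-up /sbin/ethtool -K eno1 tso off gso off gro off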
There is always the chance of something going wrong and I wouldn't know which approach has the lowest risk.
I don't think there is any requirement to access a WAN resource during the upgrade, other than to download packages. So the upgrade should run to completion offline without any internet...
I believe the upload of ISOs via http/UI may temporarily require storage on the system disk regardless of its final destination.
If that is the case:
You need sufficient free space on your system volume to store the largest expected ISO.
You may need to ssh to the PVE host and delete the partial...
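If you do need to clean up by hand, something along these lines should work (the temp path and file prefix are assumptions based on where I've seen partial uploads land):

df -h /var/tmp                # free space on the volume holding the upload temp dir
ls -lh /var/tmp/pveupload-*   # any partial uploads left behind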
I don't think it's a new prerequisite. More a precaution as it is a major version upgrade.
I did so this time, but TBH I don't think I shut down all VMs when I upgraded from 6.x to 7.0.
Anyhow, the offline upgrade worked for me, once I'd edited the repositories and run apt dist-upgrade --download-only.
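The repository edits were essentially switching bullseye to bookworm in the apt sources; a rough sketch, assuming the no-subscription repo (adjust if you're on enterprise):

sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
apt dist-upgrade --download-only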
I just created a new Debian 11 container and generated some IO. It appears in the summary page charts.
I then installed docker, started a simple container:
The PVE host Search page shows disk read but zero disk write, whereas the guest summary Disk IO chart shows zero read and zero write. So, also...
Following the upgrade to PVE8, I notice in the web UI two containers that show zero Disk IO. Prior to the upgrade, these displayed disk read/write stats.
All other containers and VMs are displaying levels of disk IO as before.
The two CTs concerned are both unprivileged Debian 11 containers...
I used Clonezilla. I haven't ever tried upgrading with guests running. Though I imagine a VM might survive the process better than a container.
From my recent experience, apt dist-upgrade needs no external connectivity, once all the upgrade packages are in apt cache. That's as long as you don't...
Yes, I did it over ssh. I took a chance. I first made full guest backups and imaged the host system disk.
Something I've just noticed: sshd's (default) config has changed, resulting in this error for the login from my laptop:
sshd[111287]: userauth_pubkey: signature algorithm ssh-rsa not in...
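If anyone else hits this, the quick server-side workaround (rather than moving the client to a newer key type) is roughly the line below, followed by a reload of sshd; the drop-in path is just an assumption, any included sshd config file will do:

# /etc/ssh/sshd_config.d/rsa-compat.conf
PubkeyAcceptedAlgorithms +ssh-rsa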
I have just upgraded offline after downloading packages. It worked fine. No issues or workarounds needed in my case.
Just to add, I also performed the upgrade with shared storage disconnected (forgot to disable it) without impact.
AIUI it is recommended to perform the upgrade with all guests shut down. My home router is a Proxmox guest, so I wondered whether I will encounter problems if I download packages, stop that guest, and continue with the upgrade offline:
apt update
apt dist-upgrade --download-only
qm shutdown 100
apt...
I haven't done this with PMG, but have set up authenticated relay in postfix in the past. So I'm not sure how/if this conflicts with any transport options you might have configured via the web UI. Also, I don't believe it can be set up in the PMG web UI; if that's the case, you'd need to do it via PMG's postfix...
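In plain postfix the relevant main.cf settings were roughly the ones below (hostname and port are placeholders); in PMG you would need to apply them through its template system rather than editing main.cf directly:

relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

Plus a /etc/postfix/sasl_passwd file holding the credentials, hashed with postmap /etc/postfix/sasl_passwd.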
I set up docker in an unprivileged lxc container. In order to get an elasticsearch docker container to run, I set
lxc.prlimit.memlock: unlimited
How can I determine a suitable finite value?
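Would reading the locked-memory counter of the running process be a reasonable way to size it? Something like this (the process name match is an assumption):

grep VmLck /proc/$(pgrep -f elasticsearch | head -n1)/status   # currently locked memory in kB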
They are only warnings and not errors. It isn't necessary to address them, but if you want to, the options as I remember them are:
1. Set explicit gran_size & chunk_size options in the kernel boot parameters. In /etc/default/grub, append to GRUB_CMDLINE_LINUX_DEFAULT (see the sketch after this list).
This will prevent the verbose...
2. With an integrated GPU, you may find that changing the amount of RAM assigned to it in the BIOS (and consequently the amount of system RAM) will help the kernel determine optimal values of gran_size & chunk_size.
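For option 1, the grub change would look something like this (the values are only illustrative; the right ones come from the candidate table the kernel prints in dmesg), followed by update-grub and a reboot:

GRUB_CMDLINE_LINUX_DEFAULT="quiet enable_mtrr_cleanup mtrr_gran_size=64M mtrr_chunk_size=64M"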
Oh yes. However, the significance of that file slipped my notice, because I was unaware of how it is involved in presenting the contents of /etc/pve.
root@odin:~# systemctl stop pve-cluster.service
root@odin:~# ls -l /etc/pve/
total 0
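To spell out what I'd missed: /etc/pve is a FUSE mount provided by pmxcfs (the pve-cluster service) and is backed by an sqlite database, which is why the directory appears empty once the service is stopped. Both are easy to see with the service running (the database path is the default one):

findmnt /etc/pve                        # the pmxcfs fuse mount
ls -l /var/lib/pve-cluster/config.db    # the sqlite file that actually holds the config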
FWIW my backup strategy now, since I do not run a cluster...