/boot is 98% full

boboinmo
Jun 10, 2022
Inherited an old cluster and am auditing the nodes, and ran up on this. What is advisable to remove, and in what way? I searched the forums and am sure I am just missing it, as this seems like it would have been asked and answered. It's obviously an older Proxmox system, v5.4. If there is a link to an old answer I would appreciate the leg up.

-rw-r--r-- 1 root root 212K Feb 25 2019 config-4.15.18-11-pve
-rw-r--r-- 1 root root 213K Jun 12 2020 config-4.15.18-30-pve
-rw-r--r-- 1 root root 212K Oct 30 2018 config-4.15.18-8-pve
-rw-r--r-- 1 root root 212K Nov 15 2018 config-4.15.18-9-pve
-rw-r--r-- 1 root root 186K May 24 2018 config-4.4.128-1-pve
-rw-r--r-- 1 root root 186K Jul 5 2018 config-4.4.134-1-pve
-rw-r--r-- 1 root root 186K Mar 30 2017 config-4.4.49-1-pve
-rw-r--r-- 1 root root 186K Apr 25 2017 config-4.4.59-1-pve
-rw-r--r-- 1 root root 186K Jun 23 2017 config-4.4.67-1-pve
-rw-r--r-- 1 root root 186K Aug 17 2017 config-4.4.76-1-pve
-rw-r--r-- 1 root root 186K Jan 8 2018 config-4.4.98-3-pve
drwxr-xr-x 5 root root 7.0K Nov 11 2020 grub
-rw-r--r-- 1 root root 16M Jun 12 2014 initrd.img-2.6.32-29-pve
-rw-r--r-- 1 root root 33M Mar 14 2019 initrd.img-4.15.18-11-pve
-rw-r--r-- 1 root root 33M Dec 10 11:02 initrd.img-4.15.18-30-pve
-rw-r--r-- 1 root root 33M Nov 14 2018 initrd.img-4.15.18-8-pve
-rw-r--r-- 1 root root 33M Nov 27 2018 initrd.img-4.15.18-9-pve
-rw-r--r-- 1 root root 24M Jun 6 2018 initrd.img-4.4.128-1-pve
-rw-r--r-- 1 root root 25M Nov 14 2018 initrd.img-4.4.134-1-pve
-rw-r--r-- 1 root root 24M May 9 2017 initrd.img-4.4.49-1-pve
-rw-r--r-- 1 root root 24M May 9 2017 initrd.img-4.4.59-1-pve
-rw-r--r-- 1 root root 24M Aug 24 2017 initrd.img-4.4.67-1-pve
-rw-r--r-- 1 root root 24M Sep 13 2017 initrd.img-4.4.76-1-pve
-rw-r--r-- 1 root root 24M Jun 6 2018 initrd.img-4.4.98-3-pve
drwx------ 2 root root 12K Jun 12 2014 lost+found
-rw-r--r-- 1 root root 179K Jun 25 2015 memtest86+.bin
-rw-r--r-- 1 root root 181K Jun 25 2015 memtest86+_multiboot.bin
drwxr-xr-x 2 root root 1.0K Nov 11 2020 pve
-rw-r--r-- 1 root root 3.9M Feb 25 2019 System.map-4.15.18-11-pve
-rw-r--r-- 1 root root 4.0M Jun 12 2020 System.map-4.15.18-30-pve
-rw-r--r-- 1 root root 3.9M Oct 30 2018 System.map-4.15.18-8-pve
-rw-r--r-- 1 root root 3.9M Nov 15 2018 System.map-4.15.18-9-pve
-rw-r--r-- 1 root root 3.8M May 24 2018 System.map-4.4.128-1-pve
-rw-r--r-- 1 root root 3.8M Jul 5 2018 System.map-4.4.134-1-pve
-rw-r--r-- 1 root root 3.8M Mar 30 2017 System.map-4.4.49-1-pve
-rw-r--r-- 1 root root 3.8M Apr 25 2017 System.map-4.4.59-1-pve
-rw-r--r-- 1 root root 3.8M Jun 23 2017 System.map-4.4.67-1-pve
-rw-r--r-- 1 root root 3.8M Aug 17 2017 System.map-4.4.76-1-pve
-rw-r--r-- 1 root root 3.8M Jan 8 2018 System.map-4.4.98-3-pve
-rw-r--r-- 1 root root 8.1M Feb 25 2019 vmlinuz-4.15.18-11-pve
-rw-r--r-- 1 root root 8.2M Jun 12 2020 vmlinuz-4.15.18-30-pve
-rw-r--r-- 1 root root 8.1M Oct 30 2018 vmlinuz-4.15.18-8-pve
-rw-r--r-- 1 root root 8.1M Nov 15 2018 vmlinuz-4.15.18-9-pve
-rw-r--r-- 1 root root 7.0M May 24 2018 vmlinuz-4.4.128-1-pve
-rw-r--r-- 1 root root 7.0M Jul 5 2018 vmlinuz-4.4.134-1-pve
-rw-r--r-- 1 root root 6.9M Mar 30 2017 vmlinuz-4.4.49-1-pve
-rw-r--r-- 1 root root 6.9M Apr 25 2017 vmlinuz-4.4.59-1-pve
-rw-r--r-- 1 root root 6.9M Jun 23 2017 vmlinuz-4.4.67-1-pve
-rw-r--r-- 1 root root 6.9M Aug 17 2017 vmlinuz-4.4.76-1-pve
-rw-r--r-- 1 root root 6.9M Jan 8 2018 vmlinuz-4.4.98-3-pve
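Before removing anything from a listing like this, it helps to confirm how full the partition actually is and which kernel is currently running, since that kernel's files must not be touched. A minimal check (standard Linux commands, nothing Proxmox-specific):

```shell
# How full is /boot, and which kernel is currently running?
# The running kernel's vmlinuz/initrd/config files must be kept.
df -h /boot
uname -r
```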
 
Hi,

have you tried running `apt autoremove`? Can you also post the output of `pveversion --verbose`?
 
> It's obviously an older Proxmox system, v5.4.

With older PVE versions (IIRC older than 5.4), all pve-kernels were marked as exempt from autoremoval, so you need to uninstall them manually:
* `dpkg -l | grep pve-kernel` should give you a list of the installed kernels
* remove the ones you do not need (make sure to keep at least the currently running kernel and the newest one) by running, for example:
`apt remove pve-kernel-4.15.18-8-pve`
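The selection logic ("everything except the running kernel and the newest one") can also be sketched as a small shell filter. The `running` value and the helper `list_removal_candidates` below are illustrative, not a Proxmox tool; on a real node set `running=$(uname -r)` and review the candidate list before feeding it to `apt remove`:

```shell
#!/bin/sh
# Sketch: print pve-kernel packages that are candidates for removal,
# i.e. everything except the running kernel and the newest remaining one.
running="4.15.18-30-pve"   # example value; normally: running=$(uname -r)

list_removal_candidates() {
    # Reads package names on stdin, one per line.
    grep -v "pve-kernel-${running}\$" \
    | sort -V \
    | sed '$d'             # keep the newest of the remainder as well
}

# On a real node the input would come from something like:
#   dpkg -l | awk '/^ii  *pve-kernel-[0-9]/{print $2}' | list_removal_candidates
```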

And, to have it mentioned: please upgrade this system ASAP - this version has been EOL for a few years now, and there have been many fixes (including security fixes) in between.

I hope this helps!
 
Thanks folks:

pveversion --verbose
proxmox-ve: 5.4-2 (running kernel: 4.15.18-30-pve)
pve-manager: 5.4-15 (running version: 5.4-15/d0ec33c6)
pve-kernel-4.15: 5.4-19
pve-kernel-4.15.18-30-pve: 4.15.18-58
pve-kernel-4.15.18-11-pve: 4.15.18-34
pve-kernel-4.15.18-9-pve: 4.15.18-30
pve-kernel-4.15.18-8-pve: 4.15.18-28
pve-kernel-4.4.134-1-pve: 4.4.134-112
pve-kernel-4.4.128-1-pve: 4.4.128-111
pve-kernel-4.4.98-3-pve: 4.4.98-103
pve-kernel-4.4.76-1-pve: 4.4.76-94
pve-kernel-4.4.67-1-pve: 4.4.67-92
pve-kernel-4.4.59-1-pve: 4.4.59-87
pve-kernel-4.4.49-1-pve: 4.4.49-86
corosync: 2.4.4-pve1
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.1-12
libpve-apiclient-perl: 2.0-5
libpve-common-perl: 5.0-56
libpve-guest-common-perl: 2.0-20
libpve-http-server-perl: 2.0-14
libpve-storage-perl: 5.0-44
libqb0: 1.0.3-1~bpo9
lvm2: 2.02.168-pve6
lxc-pve: 3.1.0-7
lxcfs: 3.0.3-pve1
novnc-pve: 1.0.0-3
proxmox-widget-toolkit: 1.0-28
pve-cluster: 5.0-38
pve-container: 2.0-42
pve-docs: 5.4-2
pve-edk2-firmware: 1.20190312-1
pve-firewall: 3.0-22
pve-firmware: 2.0-7
pve-ha-manager: 2.0-9
pve-i18n: 1.1-4
pve-libspice-server1: 0.14.1-2
pve-qemu-kvm: 3.0.1-4
pve-xtermjs: 3.12.0-1
qemu-server: 5.0-56
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3


Here is the output from the above `dpkg -l | grep pve-kernel`:
ii pve-firmware 2.0-7 all Binary firmware code for the pve-kernel
ii pve-kernel-4.15 5.4-19 all Latest Proxmox VE Kernel Image
ii pve-kernel-4.15.18-11-pve 4.15.18-34 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-30-pve 4.15.18-58 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-8-pve 4.15.18-28 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.15.18-9-pve 4.15.18-30 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.128-1-pve 4.4.128-111 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.134-1-pve 4.4.134-112 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.49-1-pve 4.4.49-86 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.59-1-pve 4.4.59-87 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.67-1-pve 4.4.67-92 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.76-1-pve 4.4.76-94 amd64 The Proxmox PVE Kernel Image
ii pve-kernel-4.4.98-3-pve 4.4.98-103 amd64 The Proxmox PVE Kernel Image

So I should, for example, run `apt remove pve-kernel-4.4.98-3-pve`? This is safe and will clean out the old 4.4.98-3 kernel and its files?

`apt autoremove` produces the result below. Is it safe to proceed, or is the above operation safer, or redundant?
The following packages will be REMOVED:
acl corosync-pve cpp-4.9 docutils-common docutils-doc libalgorithm-c3-perl libarchive-extract-perl libb-hooks-endofscope-perl libbind9-90 libboost-iostreams1.55.0 libcgi-fast-perl
libcgi-pm-perl libclass-c3-perl libclass-c3-xs-perl libclass-method-modifiers-perl libclass-xsaccessor-perl libcloog-isl4 libcpan-changes-perl libcpan-meta-perl libdata-optlist-perl
libdata-perl-perl libdata-section-perl libdevel-caller-perl libdevel-globaldestruction-perl libdevel-lexalias-perl libdns-export100 libdns100 libexporter-tiny-perl libfcgi-perl
libfile-slurp-perl libgetopt-long-descriptive-perl libicu52 libimport-into-perl libirs-export91 libisc-export95 libisc95 libisccc90 libisccfg-export90 libisccfg90 libiscsi4 libisl10 libjasper1
liblcms2-2 liblist-moreutils-perl liblog-message-perl liblog-message-simple-perl liblognorm1 liblua5.2-0 liblwres90 libmodule-build-perl libmodule-implementation-perl
libmodule-load-conditional-perl libmodule-pluggable-perl libmodule-runtime-perl libmodule-signature-perl libmoo-perl libmoox-handlesvia-perl libmro-compat-perl libnamespace-autoclean-perl
libnamespace-clean-perl libntdb1 libpackage-constants-perl libpackage-stash-perl libpackage-stash-xs-perl libpaper-utils libpaper1 libparams-classify-perl libparams-util-perl
libparams-validate-perl libpath-tiny-perl libperl4-corelibs-perl libpod-latex-perl libpod-markdown-perl libpod-readme-perl libprocps3 libprotobuf9 libpsl0 libregexp-common-perl
librole-tiny-perl libsnappy1v5 libsoftware-license-perl libstrictures-perl libsub-exporter-perl libsub-exporter-progressive-perl libsub-identify-perl libsub-install-perl libsub-name-perl
libterm-ui-perl libtext-template-perl libtry-tiny-perl libtype-tiny-perl libtype-tiny-xs-perl libunicode-utf8-perl libvariable-magic-perl libwebp5 libwebp6 libwebpdemux1 libwebpdemux2
libwebpmux1 libwebpmux2 libxtables10 python-cffi python-docutils python-ndg-httpsclient python-pil python-ply python-pycparser python-pygments python-roman python-suds
 
> And, to have it mentioned: please upgrade this system ASAP - this version has been EOL for a few years now, and there have been many fixes (including security fixes) in between.

I would like to upgrade; however, given the nature of the live servers running in this cluster, I think I am better off moving the VMs to a new cluster I am creating that runs v7.x. Are there any potential gotcha issues I should be aware of when the QEMU VMs move over to the new cluster? Until I get to that, I will be cleaning up via the above operations to avoid outages.
 
> I would like to upgrade; however, given the nature of the live servers running in this cluster, I think I am better off moving the VMs to a new cluster I am creating that runs v7.x.
If you have the hardware for this - it's something I'd recommend! (since you can always fall back to the old install if something does not work out)

> Are there any potential gotcha issues I should be aware of when the QEMU VMs move over to the new cluster?
In theory vzdump backups should have remained stable across the releases, and KVM does provide quite a good abstraction...
In practice, making such large version jumps always has the potential to run into an issue - on the upside, chances are that by now someone else has already run into it and you'll find some answers online (potentially here in the forum) :)

If not - post the error here and maybe we can see where it's originating - if an error shows up at all.
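For reference, the vzdump round-trip looks roughly like the sketch below. The VM id, paths, hostname, and storage name are placeholders for illustration, wrapped in a function so the commands can be reviewed before running them on real nodes:

```shell
# Sketch of moving one VM via backup/restore (all names are placeholders).
migrate_vm() {
    # On the old 5.4 node: full backup of VM 100
    vzdump 100 --dumpdir /mnt/backup --mode snapshot --compress lzo

    # Copy the archive to the new 7.x node
    scp /mnt/backup/vzdump-qemu-100-*.vma.lzo root@new-node:/mnt/backup/

    # On the new node: restore as VM 100 onto the target storage
    qmrestore /mnt/backup/vzdump-qemu-100-*.vma.lzo 100 --storage local-lvm
}
```

Check the exact archive filename that vzdump prints, and make sure the target storage on the new cluster can hold the restored disks before starting.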
 
