I have some LXC Debian 8 containers on a Proxmox 6 cluster, and I'm now planning an upgrade to Proxmox 7.
However, I have read that cgroupv2 is only supported from systemd version 231 onwards
(Debian 8 ships systemd version 215...)
Unfortunately we can't upgrade the containers to Debian 9...
Any ideas?
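The usual workaround in this situation is not to touch the guests at all but to switch the PVE 7 host back to the hybrid cgroup layout via the kernel command line, so that old systemd versions inside containers keep working. A minimal sketch, assuming a GRUB-booted host (systemd-boot installs use /etc/kernel/cmdline instead):

```shell
# Append systemd.unified_cgroup_hierarchy=0 to the default kernel command
# line so the host boots with the legacy/hybrid cgroup layout.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&systemd.unified_cgroup_hierarchy=0 /' /etc/default/grub
update-grub
reboot
```

This keeps the host on cgroupv1 semantics, so treat it as a stopgap until the containers can be moved to a newer Debian release.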
I am trying out a setup using Proxmox. For the time being (while developing my new server) I've moved to the no-subscription Proxmox repository.
When I do apt-get upgrade, I get:
Hit:1 http://ftp.nl.debian.org/debian bullseye InRelease
Hit:2 http://ftp.nl.debian.org/debian bullseye-updates InRelease...
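For reference, a repository setup like the following is what the no-subscription variant on Bullseye (PVE 7) typically looks like; the file names are conventional, not mandated:

```shell
# Disable the enterprise repo (needs a subscription key) and enable
# the no-subscription repo for PVE 7 on Debian Bullseye.
echo "# deb https://enterprise.proxmox.com/debian/pve bullseye pve-enterprise" \
  > /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" \
  > /etc/apt/sources.list.d/pve-no-subscription.list
apt update
```

Note that on Proxmox hosts the recommended command is `apt dist-upgrade` (or `apt full-upgrade`) rather than plain `apt-get upgrade`, since the latter can leave the system in a partially upgraded state.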
I tried running some updates on my server, as I often do, but this time it completely broke my apt. The latest 5.15 kernel just won't install, and I've tried everything I could find online to fix it. The only thing left that I can do is delete old kernels that are still installed...
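A failed kernel install is often just a full /boot partition, so it is worth checking that before deleting anything. A diagnostic sketch:

```shell
# See which PVE kernel images are installed and whether /boot is full.
dpkg-query -W -f='${Package}\n' 'pve-kernel-*' 2>/dev/null || true
df -h /boot
uname -r   # the currently running kernel; never remove this one
```

If an old image has to go, it can then be removed explicitly (e.g. `apt remove pve-kernel-5.4.166-1-pve`, where the version is a placeholder, not one from this post), after which `apt -f install && dpkg --configure -a` lets apt finish the interrupted install.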
After an upgrade on the server at home, it didn't come back online. I'm only using a single-node installation.
As I was abroad, it had to wait a week.
I started to search/ask on this fine forum and did the following.
I installed Proxmox 7.2 on a new SSD and copied over the...
After performing some upgrades, I noticed that the server didn't come back online. I was abroad at that time.
Now I see via the iDRAC console that I have an ACPI error: No handler for Region[SYSI] 00000000f42c25ab [IPMI.
When I try to log in, it won't accept my xxxxxxxxxxxxxxxxxxx long...
Last year, while upgrading Ceph from Nautilus to Octopus, we were bitten by two Ceph bugs: https://tracker.ceph.com/issues/51619 and https://tracker.ceph.com/issues/51682. While both bugs have since been fixed, one fix was backported to 15.2.14 but the other only to 15.2.16. We have one...
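Given those backport versions, every daemon would need to be on at least 15.2.16 to carry both fixes. Before resuming such an upgrade, the running versions can be confirmed cluster-wide:

```shell
# Per-daemon version summary across the cluster: shows whether any mon,
# mgr, osd or mds is still on an older Octopus point release.
ceph versions
ceph health detail   # surfaces mixed-version and crash warnings
```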
Usual upgrade today, up to "Setting up pve-kernel-5.15.30-2-pve".
Unusual answer from Proxmox:
Setting up pve-kernel-5.15.30-2-pve (5.15.30-3) ...
run-parts: executing /etc/kernel/postinst.d/apt-auto-removal 5.15.30-2-pve /boot/vmlinuz-5.15.30-2-pve
We would like to upgrade our Proxmox server from version 6.4 to 7.0 (or 7.1). Since our server was not installed with UEFI, I have read that I should switch to the "Proxmox Legacy Boot", because GRUB has problems when ZFS runs on the root partition. For that, the vfat...
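The switch described in the wiki is done with `proxmox-boot-tool`, run once per disk against its vfat ESP partition. A sketch, where /dev/sdX2 is a placeholder for the ESP on each mirror disk:

```shell
# Check whether any disks are already managed, then hand the ESP over
# to proxmox-boot-tool (format is destructive to that partition!).
proxmox-boot-tool status
proxmox-boot-tool format /dev/sdX2   # wipe and re-create the vfat ESP
proxmox-boot-tool init /dev/sdX2     # install the bootloader, sync kernels
```

On a non-UEFI system this installs GRUB in legacy mode onto the managed partitions, sidestepping GRUB reading the ZFS root pool directly.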
One thing I noticed during the upgrade is that pve-manager failed:
Setting up pve-manager (7.1-11) ...
Job for pvedaemon.service failed.
See "systemctl status pvedaemon.service" and "journalctl -xe" for details.
dpkg: error processing package pve-manager (--configure):
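When a package's postinst fails like this, the first step is to read why pvedaemon refused to start, then let dpkg retry the configuration once the cause is fixed. A sketch of that triage:

```shell
# Why did pvedaemon die? Check unit state and the recent journal.
systemctl status pvedaemon.service --no-pager
journalctl -u pvedaemon.service -b --no-pager | tail -n 50
dpkg --configure -a   # re-run the pending package configuration
```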
I converted from no-subscription to a community license, but now I have a configuration issue.
python3-ceph-common is now orphaned in my apt repository, and I cannot remove it because of its many dependencies.
Does anyone else have this issue, or a suggestion for fixing it?
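Before forcing anything, it helps to see exactly which installed packages still pull the orphan in, and what a removal would take with it. A hedged diagnostic, not a guaranteed fix:

```shell
# Which installed packages depend on the orphaned package?
apt-cache rdepends --installed python3-ceph-common
# Simulate the removal (-s): prints what would be removed, changes nothing.
apt -s remove python3-ceph-common
```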
I hope I'm in the right forum (if not, please point me in the right direction).
As a preliminary remark: my Linux and virtualization knowledge is at an absolute beginner level. I always follow guides from the internet and am happy when I get whatever I'm attempting to work...
Today I decided to upgrade PVE from V6.2.4 to V7.1.
I did it according to the following steps:
1. First upgrade to the most recent 6.x version with `apt-get update` followed by `apt-get dist-upgrade`.
2. Then I followed the steps at https://pve.proxmox.com/wiki/Upgrade_from_6.x_to_7.0
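That wiki page ships a checklist script that is worth running before and after the repository switch, since it flags most of the known blockers (cgroup issues, storage, guest configs) up front:

```shell
# The upgrade checklist script from the 6.x -> 7.0 wiki page;
# --full runs the complete set of checks.
pve6to7 --full
```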
I've got a 4-host cluster (3 with exactly the same hardware, and 1 with different hardware).
I launched an upgrade on one of my hosts a few hours ago (one of the 3 identical ones).
No problems during the upgrade.
At reboot: no network.
I found that some of my NIC names have changed.
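The quickest way to confirm a rename is to compare the names the kernel assigned this boot with the names the network config still references (generic Linux tooling, nothing PVE-specific):

```shell
# Names the kernel assigned on this boot:
ip -br link
# Names /etc/network/interfaces still expects:
grep -E '^(auto|iface)' /etc/network/interfaces 2>/dev/null || true
```

If they differ, either update /etc/network/interfaces to the new names or pin the old names (e.g. via systemd .link files keyed on the MAC address) before rebooting the remaining nodes.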
On my test cluster I upgraded all my nodes from 7.0 to 7.1.
Ceph went to Pacific 16.2.7 (it was 16.2.5?).
Now the monitors and managers won't start.
I had a pool and CephFS configured with an MDS.
I've read somewhere that a pool in combination with an old CephFS (I came from PVE 6) could...
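With the mons down, the cluster-wide `ceph` commands won't answer, so the failure reason has to come from the local units on each node. A sketch (the hostname substitution assumes the mon is named after the node, which is the PVE default):

```shell
# On an affected node: why does the local monitor refuse to start?
systemctl status ceph-mon@$(hostname -s).service --no-pager
journalctl -u ceph-mon@$(hostname -s).service -b --no-pager | tail -n 50
```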
Hello, I was trying to update my Proxmox installation, and now I get the following when I run apt update && apt dist-upgrade. I have tried apt clean along with apt reinstall proxmox-ve, but I keep getting this error; it stops updating, and now none of my VMs or...
Everything was working well with my cluster on update 6.3-2.
I recently upgraded to 7.1-5, and since the upgrade, nodes are randomly crashing.
Every time a node crashes, it reboots.
For each crash, the log sequence begins with:
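Since the node reboots after each crash, the interesting lines are in the previous boot's journal; that only works if journald logging is persistent (Storage=persistent in /etc/systemd/journald.conf, or an existing /var/log/journal directory):

```shell
# List recorded boots, then tail the log of the boot before the crash.
journalctl --list-boots
journalctl -b -1 --no-pager | tail -n 100
```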
After upgrading from 6.4 to 7.1 I cannot run any VM or container anymore.
VMs show errors that virtualization is no longer enabled, and containers report that they are unable to create network devices.
BIOS settings haven't been changed, and I am unable to verify the current settings...
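Even without BIOS access, the OS side can confirm whether the CPU still exposes virtualization; a zero here usually means VT-x/AMD-V was disabled, for instance by a firmware reset during the upgrade reboot:

```shell
# Count vmx (Intel) / svm (AMD) flags; 0 means no virtualization exposed.
grep -Ec 'vmx|svm' /proc/cpuinfo || true
# The KVM device node should exist when virtualization is usable.
ls -l /dev/kvm 2>/dev/null || echo "/dev/kvm missing"
```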
After a dist-upgrade the GUI no longer works. The symptoms are very weird: if you check the VMs and CTs, they no longer exist, and if you try to install proxmox-ve you receive the message:
The following packages have unmet dependencies: