You can access the change log for each package through our git server interface, at https://git.proxmox.com/?p=<PACKAGE-NAME>.git;a=blob;f=debian/changelog;hb=HEAD
For example, the change log for pve-manager can be accessed at...
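For reference, substituting the package name into the pattern above gives, e.g. for pve-manager:
https://git.proxmox.com/?p=pve-manager.git;a=blob;f=debian/changelog;hb=HEAD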
Are you still experiencing this issue? It doesn't occur for me.
If you are, could you make sure that you're on the most up-to-date 'pve-kernel' branch? You could also try updating the submodule with 'make update_modules'.
If you run the newaliases command, it should rebuild /etc/aliases.db from the contents of /etc/aliases, which should fix the posted error.
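For example, assuming the default Postfix setup where /etc/aliases is the alias source:
nano /etc/aliases    (check that the entries, e.g. 'root:', look as expected)
newaliases           (rebuilds /etc/aliases.db from /etc/aliases)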
Regarding the GUI login issue, am I correct in understanding that it still works periodically? Do you see any error messages following the...
No problem, I'm happy to hear you found the problem! Would you mind marking the thread as "solved", so that others with similar issues can find the solution more easily? :)
Is this node part of a cluster? Is Ceph currently in use? I believe it's old Ceph packages (for which no repository is configured) that are holding the upgrade back.
If Ceph isn't in use, you can simply add a newer Ceph repository (see the sketch below), then run apt update and upgrade. Following the upgrade, you can...
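As a rough sketch, assuming Proxmox VE 7 (Debian Bullseye) with Ceph Pacific (adjust the release names to match your setup):
echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" > /etc/apt/sources.list.d/ceph.list
apt update
apt dist-upgrade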
The original question was about safely moving the contents of an active datastore. If I understand correctly, the topic of discussion is now reattaching an old datastore to a new PBS instance.
For this, you could just mount the drive containing the datastore on the new (or freshly reinstalled)...
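A minimal sketch of one way to do this (the device path, mount point, and datastore name below are just placeholders):
mount /dev/sdb1 /mnt/datastore/old
Then add a matching entry to /etc/proxmox-backup/datastore.cfg, for example:
datastore: old
    path /mnt/datastore/old
The existing backups should then be visible under that datastore.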
In this particular case, it could be failing because the VM doesn't have enough disk space assigned. Kali's installation guide recommends a minimum of 20 GB [1], and since it includes a large number of packages by default, including a desktop environment, I'm not sure 8 GB would be enough...
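If you want to grow the VM's virtual disk before retrying the installation, that can be done from the host with qm resize (the VMID and disk name below are just placeholders):
qm resize 100 scsi0 +15G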
I wouldn't do this, in case the system ends up really messed up. Unless you mean a fresh install, in which case it's whatever you prefer.
If you'd rather not perform a fresh install, could you post:
All of your apt sources: cat /etc/apt/sources.list /etc/apt/sources.list.d/*
The output of: apt update...
I'm not sure why this doesn't work for you, as it works fine on my system. Maybe you could try manually selecting the kernel at boot and seeing if it remains selected after that?
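If the selection still doesn't stick, explicitly pinning the kernel might help; a sketch (replace the version with the one you want, as shown by proxmox-boot-tool kernel list):
proxmox-boot-tool kernel pin <KERNEL-VERSION>
proxmox-boot-tool refresh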
I suspect the problem is that the old datastore files are still in the NFS directory: nas01:/pbs.
Was there any backup data on this datastore? If not, you could delete all the files from the directory (note the hidden .lock file and .chunks directory), then create a new datastore at the path.
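For example, assuming nas01:/pbs is mounted at /mnt/pbs (the datastore name here is just a placeholder):
rm -rf /mnt/pbs/.lock /mnt/pbs/.chunks /mnt/pbs/*
proxmox-backup-manager datastore create store1 /mnt/pbs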
Note...
Hi,
The vmbr0 section of the config looks okay to me, but there are issues with enp5s0.
In short, I think it should look something like this:
allow-hotplug enp5s0
auto enp5s0
iface enp5s0 inet static
    address 12.12.12.65/25
    gateway 12.12.12.1
Changes to enp5s0:
Specify netmask '/25'...
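Once the file looks right, the changes can be applied (assuming ifupdown2, which is the default on current Proxmox VE releases) with:
ifreload -a
or by rebooting the node.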
What was the output of:
proxmox-boot-tool kernel pin 5.11.22-7-pve
and:
proxmox-boot-tool refresh
Could you also post the output of the following (having already run the above commands):
proxmox-boot-tool kernel list
I just tested these commands on a system with the 5.15.27-1-pve kernel...
Hi,
Did you follow the Proxmox VE 6 to 7 upgrade guide for this [1]? The upgrade seems to have been successful; however, I just want to make sure you followed the guide, because the reason for your current issues is the use of apt upgrade instead of apt dist-upgrade.
You should always use apt...
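If apt upgrade was used at some point, running the following now should pull in any packages it held back:
apt update
apt dist-upgrade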
Apologies for my lack of clarity. The VMID is just the number assigned to the VM, which from the system log (journalctl) seems to be 102.
Just to confirm, was VM 102 still running when you accessed the console? The system logs suggest that it wasn't, meaning the issue could be with the VM itself...
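You can check the VM's state on the host with, for example:
qm status 102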
Can you confirm that your DNS server (10.61.71.1) is active and functioning correctly? Is it reachable from the Proxmox VE host?
If any other systems use it, you can also check if DNS resolution functions correctly on them.
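For example, from the Proxmox VE host (dig is provided by the dnsutils package, in case it isn't installed already):
ping -c 3 10.61.71.1
dig @10.61.71.1 www.proxmox.com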
Are the drives still partitioned as before, so that they contain 3 partitions, or are the entire drives used for the zpool?
Could you post the output of:
pveversion -v
zpool status
sgdisk -p <ZPOOL-DRIVE> (for each drive in the pool)