So I am trying to update my nodes, and every single one is stuck at "Processing triggers for pve-manager (8.0.4) ..."
Looking at top, there is nothing unusual going on, and all the VMs are running with no issues. Is anyone else seeing issues in the latest round of upgrades?
Setting up...
Well, I solved it on my own.
It ended up being an issue with a QDevice I had configured. Not sure if it was something during the upgrade or what, but I had to restart the small Linux container running the daemon and reboot each node, and all is good again...
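For anyone who hits the same hang, this is roughly the sequence that got quorum healthy again for me, assuming the QDevice daemon runs under systemd as usual (adjust names to your setup):

# on the small container hosting the QDevice, restart the qnetd daemon
systemctl restart corosync-qnetd
# on each PVE node, restart the qdevice client and confirm quorum
systemctl restart corosync-qdevice
pvecm status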
So I upgraded from 7 to 8 following the directions, and it has completely killed my cluster.
Basic setup with 4 nodes, NFS-backed for all the storage (no Ceph), and a Linux bridge with two VLANs on it. When the nodes start up they seem to come up OK, but logging into each one and looking at...
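For context, the network side of each node is nothing fancy; it looks roughly like this in /etc/network/interfaces (interface name, VLAN IDs, and address are examples, not my real values):

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10 20

auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.10.5/24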
If I recall correctly, it was installed via ISO, but it was quite a while ago. I never had any upgrade issues until now.
However, I ran through the process you mentioned for the case where it was installed on top of Debian, and it seems to be working..
I am trying to upgrade my Proxmox from 7.3-4 to the latest. However, when I run the upgrade I am getting the error below.
W: (pve-apt-hook) !! WARNING !!
W: (pve-apt-hook) You are attempting to remove the meta-package 'proxmox-ve'!
W: (pve-apt-hook)
W: (pve-apt-hook) If you really want to...
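For reference, this is how I am invoking the upgrade. As I understand it, this warning usually shows up when the apt sources still point at the old release, so I am sanity-checking those first (paths below are the stock Debian/PVE locations) and then running a full dist-upgrade rather than a plain upgrade:

# check which release the apt sources reference
grep -r . /etc/apt/sources.list /etc/apt/sources.list.d/
# if moving from 7.x to 8, run the built-in pre-upgrade checker
pve7to8 --full
# perform the upgrade as a dist-upgrade
apt update && apt dist-upgrade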
There is absolutely NO requirement to have 10 NICs on the VM. You can have "up to" 10 NICs on the VM, but you do not have to..
I am running very successfully with two NICs configured, without any issue. You do have to have a minimum of two: one for management and one for data to pass...
So it looks like there are two requirements here. One is the host CPU change and the other is at least two interfaces, which makes sense because the first interface is dedicated to management and you need at least one for the data plane.
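A minimal sketch of setting both of those on the guest with the qm CLI; VM ID 100 and bridge vmbr0 are placeholders:

# use the host CPU type instead of the default kvm64
qm set 100 --cpu host
# give the VM a management NIC and at least one data-plane NIC
qm set 100 --net0 virtio,bridge=vmbr0
qm set 100 --net1 virtio,bridge=vmbr0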
I have now moved mine to a beefier ESXi box, and with the...
I got this working..
The thing I found is that when I had the processor type set to kvm64, the boot failed. When set to "host" or a specific CPU type, like "Nehalem", the VM booted and I was able to access the login page. So it seems it is the "kvm64" type that it doesn't like..
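If you want to see which CPU models your node can expose to guests before picking one, something like this should work (node name is a placeholder):

# list the CPU models available to guests on this node
pvesh get /nodes/pve1/capabilities/qemu/cpu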
The only other thing I...
I am in need of some help on this also. I am running PVE 7.3 and cannot get PANOS v10 or v11 to run. It boots to the point of showing the "vm login" prompt, but no further, and I cannot find any issues when rebooting to maintenance mode and looking at the logs..
Yeah... That was it.. Needed a reboot.
I thought about that after I posted...
All good now. Thanks for walking me through this, as I now have a better understanding of how things work in order to troubleshoot on my own in the future.
root@core-routing:/var/lib/pve-cluster# ls -l
total 40
-rw------- 1 root root 36864 Jun 30 09:23 config.db
crit: found entry with duplicate name 'qemu-server'
[database] debug: name __version__ (inode = 0000000000000000, parent = 0000000000000000) (database.c:375:bdb_backend_load_index)...
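In case it helps anyone else who lands here: config.db is a plain SQLite database, so you can hunt for the duplicate row directly. A sketch, assuming the standard pmxcfs schema; stop the cluster filesystem and take a backup before touching anything:

# stop pmxcfs and back up the database before poking at it
systemctl stop pve-cluster
cp /var/lib/pve-cluster/config.db /root/config.db.bak
# look for the duplicate 'qemu-server' entries
sqlite3 /var/lib/pve-cluster/config.db "SELECT inode, parent, name FROM tree WHERE name = 'qemu-server';"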
Yes, I followed that wiki entry for the renaming, specifically the "Change Hostname" section, with the exception of the mail changes. I did not perform anything from the "Cleanup" section since it was a new system.
/etc/pve is completely empty:
root@core-routing:/etc/pve# ls -l /etc/pve
total 0...
So there is probably a better way of changing the hostname than the way I did it, but I am curious as to why this happened.
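For anyone searching later, the sequence I believe is the intended one goes roughly like this; the old/new names are placeholders, and the config move only applies after the rename has taken effect:

# set the new hostname the Debian way
hostnamectl set-hostname pve-new
# update the address mapping in /etc/hosts to the new name
sed -i 's/pve-old/pve-new/g' /etc/hosts
# after a reboot, move the node's guest configs under the new node directory
mv /etc/pve/nodes/pve-old/qemu-server/* /etc/pve/nodes/pve-new/qemu-server/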
So I updated the hostname in /etc/hosts and /etc/hostname in the latest version of Proxmox. I rebooted, had both the old and the new hostname in the GUI, and realized I forgot to...
Only on the LXC...
What I am seeing on the LXC is the free memory dropping to zero as the cache memory increases. When the free memory hits zero, the rsync transfer drops from 100 Mbps to less than 5 Mbps... Nothing ever swaps..
On the VM, when free memory hits around 110KB, it just bounces around and...
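For anyone trying to reproduce the observation, this is roughly how I watch it from inside the container (the memory.stat path assumes cgroup v2; adjust if you are on v1):

# watch free vs. cached memory tick over once a second
watch -n 1 free -m
# the page-cache figures the container sees come from its cgroup
grep -E '^(file|active_file|inactive_file) ' /sys/fs/cgroup/memory.stat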
Hi all,
I am having an issue when running rsync in an LXC, pulling files in from a remote system that has an rsync daemon running.
So my command line in the LXC looks something like: rsync -av --progress rsync://192.168.196.11/backup/ /mnt/restore/
Initially the restore will...
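For completeness, the daemon side on the remote box is a stock setup; a minimal /etc/rsyncd.conf along these lines (the module name and path match my example URL, the rest are assumptions):

[backup]
    path = /srv/backup
    read only = yes
    uid = nobody
    gid = nogroup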