I use a package called needrestart, which checks if there are services to be restarted after an update or upgrade.
||/ Name Version Architecture Description
+++-==============-============-============-===============================================================
ii needrestart...
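In case it helps, a quick way to see what needrestart would do without actually restarting anything is its list mode; the behaviour during apt runs can also be pinned in its config file (a sketch using the Debian defaults, adjust to your setup):

needrestart -r l   # only list services that need a restart, do not restart them
needrestart -b     # batch (machine-readable) output, handy in scripts

To avoid the interactive prompt during upgrades, the restart mode can be set in /etc/needrestart/needrestart.conf, e.g. $nrconf{restart} = 'l'; ('l' = list, 'i' = interactive, 'a' = automatic).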
Hi Fiona, sorry for my late reply, but today I ran into this problem again with the upgrade to Proxmox 8.1.3.
First of all, apt dist-upgrade is not the issue. I do the upgrade via the Proxmox management GUI.
Another issue with node 1 (pve01) is this: I can upgrade the 3 other nodes and reboot them...
Last week I did package upgrades of my nodes with
apt upgrade
and one of the nodes became unresponsive after I hit Enter/OK to restart a bunch of services; the node started rebooting.
Today the same thing happened, this time on another node. I have never experienced this before.
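As a sketch of how to check afterwards what happened right before such an unexpected reboot (assuming persistent journald logging is enabled on the node):

journalctl --list-boots        # show previous boots and their IDs
journalctl -b -1 -p warning    # warnings and errors from the boot just before the reboot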
I am using Proxmox 8.0.4...
I do have an enterprise subscription. I upgraded my first node without the new "Ceph Quincy Enterprise Repository", and it did not complain at first.
Then I went over to the second node and noticed that "Ceph Quincy" now has an enterprise repository, so I edited the repository entry accordingly. The...
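For reference, the enterprise Ceph Quincy repository entry on Proxmox VE 8 (Debian bookworm) would typically look like this (a sketch; the file name is just the usual convention):

# /etc/apt/sources.list.d/ceph.list
deb https://enterprise.proxmox.com/debian/ceph-quincy bookworm enterprise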
Hi Fiona, many thanks for your input, indeed the SSH banner, or in this case Neofetch, seems to be causing the problem. I have uninstalled it, and my first impression is that live migration is now also working on every node.
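For anyone who wants to keep Neofetch rather than uninstall it, a possible workaround (just a sketch, not something confirmed in this thread) is to run it only for interactive logins, so that non-interactive SSH commands such as the migration helpers get clean output:

# ~/.bashrc - run neofetch only in interactive terminals
if [[ $- == *i* ]] && [ -t 1 ]; then
    neofetch
fi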
I don't have any special SSH banners, although I do have Neofetch. And when I execute that command with get_ssh_info, the Neofetch banner is displayed first and then I get the right IP address, but only the IP address of <target node IP from corosync.conf>.
I have executed the commands you wrote for me, and they all gave me the right IP address of the node. I have 3 nodes, so I executed them in 6 different ways, 2 on each node, to get the IP address. So I didn't see anything strange there.
This is the pveversion -v output:
root@pve01:~# pveversion -v...
I had similar problems when I tried to migrate, for example from node 1 to node 2. First it was: ERROR: online migrate failure - unable to detect remote migration address. Then it became: TASK ERROR: failed to get ip for node 'pve02' in network '10.0.20.xxx/24'
In the first case it was possible...
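For context, the migration network that error refers to is the one configured in /etc/pve/datacenter.cfg; such an entry would look roughly like this (a sketch, the subnet is only an example matching the error message):

# /etc/pve/datacenter.cfg
migration: secure,network=10.0.20.0/24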
Can you tell me in what use case you are using the Lenovo SAN DE2000H?
Furthermore, is it possible to use ZFS or Ceph on the storage pools with the Lenovo DE2000H?
I don't wish to use hardware RAID anymore, not even on storage boxes like the Lenovo DE2000H.
The reason for this is to have choices and price differences between ZFS and Ceph when renting out VPSes.
The only thing I have found so far that does memory sharing is KSM (Kernel Same-page Merging), but I believe this is only useful for VMs on ZFS and only if they run the same OS...
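As a quick way to check whether KSM is actually sharing anything on a node (a sketch using the standard kernel interface; Proxmox ships ksmtuned to drive it):

cat /sys/kernel/mm/ksm/run            # 1 means KSM is active
cat /sys/kernel/mm/ksm/pages_sharing  # number of pages currently being shared

Tuning, e.g. the free-memory threshold at which sharing kicks in, lives in /etc/ksmtuned.conf.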
I use a cluster of 3 nodes, each node with 8 TB SSD (8x 1 TB) for ZFS and 8 TB NVMe (4x 1.92 TB) for Ceph; the Proxmox OS is running on separate disks in a ZFS RAID 1. It's connected with a 10 Gbit network. I was wondering whether memory requirements for Ceph and ZFS should be considered independently or...
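On the ZFS side, one knob that helps keep the ZFS and Ceph memory budgets separate is capping the ARC size (a sketch; the value below is an example for roughly 8 GiB, not a recommendation for this cluster):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=8589934592   # cap the ZFS ARC at ~8 GiB (value in bytes)

After changing it, update-initramfs -u and a reboot are needed for the new limit to apply at boot.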