Ok,
Thank you for the insight. I found out it behaves differently depending on whether I update the node I'm directly connected to in the GUI (node1) or another node (node2):
pstree -a -p -g | grep -A2 termproxy
| | `-termproxy,2386260,2386259 5900 --path /nodes/pvenode1 --perm Sys.Console -- /usr/bin/pveupgrade --shell...
It actually does what it's supposed to do (run the upgrades), so there's no real problem.
I'd still like to understand why it behaves this way, and why only on one node, so I can prevent or fix situations where it could actually become a problem (say, if the normal GUI shell started doing the same and stopped reading the env settings from the right directory).
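In case it helps, a quick way to compare the two shell types is to run a few plain diagnostic commands in each (nothing node-specific, just a sketch of what I'd check):
pwd                                        # working directory the shell starts in
echo "$HOME"                               # should be /root for a root login
env | grep -E '^(HOME|PWD|SHELL|USER)='    # compare between the normal GUI shell and the Updates shell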
Ok, I found out zfs-auto-snapshot was installed and configured!
I uninstalled it and cleared all the snapshots; my PVE nodes also had that package installed and configured, so I cleared those as well.
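For anyone who lands here later, this is roughly what I did (commands from memory, so treat them as a sketch and do the dry run first):
apt remove zfs-auto-snapshot                                                        # stop new auto-snapshots
zfs list -H -t snapshot -o name | grep zfs-auto-snap                                # list what it created
zfs list -H -t snapshot -o name | grep zfs-auto-snap | xargs -n1 zfs destroy -n     # dry run
zfs list -H -t snapshot -o name | grep zfs-auto-snap | xargs -n1 zfs destroy        # actually remove them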
Thank you very much for helping me find the hidden setting!
So, am I right to assume the 1.60T of snapshots (if I'm reading USEDSNAP correctly; commands below) is not shown in the GUI monitor?
I also assume they are linked, or in any case tied, to the backups.
Wouldn't it be better to show the full situation in the monitor? Again, I know this must be the n-th time this comes up, sorry XD
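For reference, this is roughly how I read the USEDSNAP values (rpool is just an example pool name, adjust it to your pool):
zfs list -o space -r rpool                           # AVAIL/USED/USEDSNAP/USEDDS per dataset
zfs list -o name,used,usedbysnapshots -r rpool       # only the snapshot usage column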
Hi,
I'm sure this is one of the many times this comes up, but I'd like some insight on it.
I have a 4 TB SSD that I put on ZFS to create a datastore.
The datastore seems to be only 2.13 TB. Why?
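If it helps with diagnosing, these are the kinds of commands I can post output from (tank/datastore is just a placeholder name here):
zpool list                                                   # raw pool size, before redundancy and reservations
zpool status                                                 # vdev layout (mirror/raidz reduces usable space)
zfs list -o space tank/datastore                             # what ZFS reports for the datastore dataset
zfs get quota,reservation,refreservation tank/datastore      # any limits set on the dataset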
Thank you.
For reference, the root home directory itself seems to be right; here are the three ways I access the console:
SSH (right):
GUI shell (right):
GUI shell from Updates (wrong):
As said, the other nodes are fine.
Thank you.
I tried several different storage solutions for K8s running on Proxmox VMs, and I have probably found some useful things to consider before deploying your cluster, to help you choose the best storage solution for your persistent volumes.
If you have Ceph running on Proxmox, the best way is Ceph...
Hi,
I'm trying to achieve a stable cluster of K8s VMs. It's a 3-node PVE cluster, and the VMs are on a Ceph pool of fast NVMe.
I'm currently using Longhorn on virtual disks on the same Ceph pool, but it always seems to fail when the VM backup jobs run or when I restart the PVE nodes for maintenance: I always...
Hi,
I'm facing a strange behaviour.
I have a 3-node cluster that has been working fine for months (almost a year and a half); now it has started to show this behaviour:
On Node1, when I start the shell from Updates to run the upgrade, it logs into the / directory instead of /root.
On Node2 and Node3 it correctly logs in...
@basil the QEMU agent works great until the first backup run (the internal one); by the second one the VM is already frozen, and the QEMU agent with it.
Actually, @fiona's suggestion worked, and now both backup runs (internal and external) work fine without breaking the VMs.