Update: the HA engine does not follow the VM boot order feature: if a VM is managed by HA, it is started as soon as the node comes up, without waiting for its turn, while VMs not in HA respect the order/wait settings.
So, new question: is there a way to make HA and the VM boot order work together?
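To illustrate what I mean, this is roughly how the two kinds of VM are configured (IDs and delays are just examples):
qm set 101 --startup order=1,up=60     # non-HA VM: honours the boot order and the delay
ha-manager add vm:102 --state started  # HA-managed VM: started by the HA stack as soon as the node is up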
Thank...
Hi,
I've tried to set up the VM order for boot/shutdown (to properly manage the right startup order for the different services), but when rebooting a node (I have a three-node cluster) the order is not used, and VMs just boot and shut down in ID order.
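For reference, this is the kind of setting I applied to the VMs (IDs and delays are just examples):
qm set 101 --onboot 1 --startup order=1,up=30   # start first, then wait 30s before the next VM
qm set 102 --onboot 1 --startup order=2,up=30   # start second
qm set 103 --onboot 1 --startup order=3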
Is there something else to set apart from the order/wait...
Hi,
Thank you for the hints, it was probably an issue due to the low SPICE memory (16MB by default) combined with the higher resolution requested by the client.
I set the memory to 32MB and it seems to be way more stable.
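For reference, this is roughly the setting involved (VM ID is just an example):
qm set 100 --vga qxl,memory=32   # raise the display memory from the 16MB default to 32MB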
PS: I saw the machine getting stuck in the reboot process with 512MB of SPICE memory, is there a...
Hi,
I have an Ubuntu 20.04 Desktop as a VM on my PVE cluster.
As I access it through SPICE, the session breaks and restarts after a few minutes.
Is there a way to properly debug this behaviour?
Is there a guide, or any tips, for setting this machine up properly?
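So far the only idea I have is to look at the guest side, something like this (assuming spice-vdagent is the relevant piece):
# inside the Ubuntu guest
sudo apt install spice-vdagent     # make sure the SPICE guest agent is installed
journalctl -b | grep -i spice      # look for agent/session errors around the time of the disconnect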
Thank you,
I cannot reach any VM on any node, nor can I reach the nodes themselves through the GUI terminals.
I also saw that thread; the change of SSH port there takes it in a slightly different direction.
The only similar thing I did was to add an authorized key on the nodes to enable key-based SSH from a PC.
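For completeness, that was just the usual key copy, something like:
ssh-copy-id root@10.0.10.12   # from the PC, for one of the nodes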
Going back...
Ok, some more info: this is what I get when trying from Chrome:
client connection: 127.0.0.1:44068
failed reading ticket: connection closed before authentication
TASK ERROR: command '/usr/bin/termproxy 5900 --path /nodes/pvenode2 --perm Sys.Console -- /usr/bin/ssh -e none -t 10.0.10.12 -- /bin/login -f...
This is Chrome's debugger, pointing to something in PVE's JS.
I have to say that from Chrome I can't even upload images to the forum (I am on Firefox now).
From Firefox it seems the "popup" terminal (both xterm.js and noVNC) is not working (it fails to connect just like in Chrome), but the embedded term...
It's up and running.
Actually, I can't connect from one particular PC (not with Chrome, Firefox, or Edge), but I can connect from another one.
I'm not able to dig deeper into the logs.
Is there some place/file where I can actually check what's happening while I'm trying to connect?
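So far I've only looked at syslog; I guess the services to watch would be something like this (assuming pveproxy/pvedaemon are the relevant ones):
journalctl -u pveproxy -u pvedaemon -f   # on the node, while attempting to open the console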
Thank you.
Hi,
I no longer seem to be able to connect to the Proxmox shell through the GUI.
Nothing in syslog points in the right direction:
Jun 12 21:42:20 pvenode1 pvedaemon[973976]: launch command: /usr/bin/vncterm -rfbport 5900 -timeout 10 -authpath /nodes/pvenode1 -perm Sys.Console -notls...
The issue is which serial gets passed through.
Let's say I pass the disks through to TrueNAS: it will need the serial to be able to build its zvols.
If I pass those SAS disks through, no serial is passed.
Also, as a side note, those SAS disks are not recognized as such by Proxmox (type: unknown).
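For reference, it looks like a serial can be forced when attaching the disk, something along these lines (path and serial are placeholders):
qm set 100 -scsi1 /dev/disk/by-id/wwn-0x5000cca0xxxxxxxx,serial=SASDISK01   # pass the disk through with an explicit serial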
Hi @dcsapak
Thank you very much, that is actually the one shown in pve.
root@pvenode1:~# udevadm info -p 'block/sdc' --query all
P: /devices/pci0000:00/0000:00:02.2/0000:03:00.0/host0/port-0:0/expander-0:0/port-0:0:2/end_device-0:0:2/target0:0:2/0:0:2:0/block/sdc
N: sdc
L: 0
S...
Hi,
I have the same problem with SAS disks. I have 12 SSD disks across the 3 nodes, and they all show the same serial as smartctl -a /dev/sdX, while for the 6 SAS disks the serial differs between the GUI and smartctl -a /dev/sdX: the GUI shows the logical unit ID instead.
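For example, on one of the SAS disks the two values can be compared like this (sdc is just an example):
smartctl -i /dev/sdc | grep -i serial                                                    # the serial smartctl reports
udevadm info --query=property --name=/dev/sdc | grep -i -E 'id_serial|id_scsi_serial'    # what udev (and presumably the GUI) sees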
Hi,
From the output, it seems the 100 MB difference in the metadata cache does not add up to the total 30GB difference, so I do not think that's the point here.
@Dunuin it's not that it's a problem as such, since it's also a cache and the system can resize it when needed.
I just can't understand the different behaviour on the two nodes.
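For anyone comparing, the ARC size and limit can be read per node with something like:
arc_summary | head -n 25                                                                  # overview of the ARC on this node
awk '/^size|^c_max/ {printf "%s %.1f GiB\n", $1, $3/2^30}' /proc/spl/kstat/zfs/arcstats   # current size vs. configured maximum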
Hi,
I have a 3-node cluster with identical nodes, but ZFS is using very different amounts of RAM on the different nodes:
Each node: 128GB RAM, 2x 500GB SSDs for the PVE OS.
Node1 free -mh:
total used free shared buff/cache available
Mem: 125Gi 57Gi 61Gi...
Ok, auto-solved: I only had to wait for the autoscaler to adapt to the new PG number. I expected it to jump directly to the target PG number, but instead it went down slowly to that number.
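For anyone hitting the same thing, the progress can be followed with:
ceph osd pool autoscale-status   # shows the current PG_NUM and the NEW PG_NUM target per pool
ceph -s                          # overall cluster state while the PGs are being merged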
Hi,
On my 3-node cluster I set up Ceph using a custom device class (sas900, to identify my 900GB SAS devices and put them all in a single pool), leaving new pools to be created later, when devices with different classes are added to the nodes. I created a custom crush rule...
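Roughly, the class and rule setup was along these lines (class, rule and pool names are mine, OSD IDs are just examples):
ceph osd crush rm-device-class osd.0 osd.1 osd.2           # clear the auto-detected class first
ceph osd crush set-device-class sas900 osd.0 osd.1 osd.2   # tag the 900GB SAS OSDs
ceph osd crush rule create-replicated sas900-rule default host sas900
ceph osd pool create sas900-pool 128 128 replicated sas900-rule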