Hi,
Thank you for the update.
We have 4 Proxmox hosts connected to the same switch, and all hosts/VMs have the same issue.
Another physical machine running a Linux OS (no hypervisor), connected to the same switch, looks very stable.
However, I will check the cables for any issues.
Hi, I am attaching 3 screenshots:
1) mtr from proxmox VM
2) mtr from proxmox host
3) mtr from a physical machine (no hypervisor installed) -- showing better response times
Hi,
We are seeing high latency in Proxmox VMs; even pings to the loopback IP show high latency.
We are using Ubuntu 22 as the VM OS.
We are puzzled that the loopback IP also shows high latency. We also noticed that latency from the host machine looks better compared to the VM. PFA.
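In case it helps to quantify this, a generic sketch (not Proxmox-specific) is to run the same test on the VM, the host, and the bare-metal box and compare the averages; high loopback latency inside a VM often points to CPU contention or steal time:

```shell
# Measure loopback latency; compare the "avg" field between VM and host.
ping -c 100 -i 0.2 127.0.0.1 | tail -1

# Inside the VM, check the steal time ("st" column) in the CPU line -
# a consistently nonzero value means the hypervisor is not scheduling
# the vCPU promptly, which inflates even loopback round-trip times.
top -b -n 1 | head -5
```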
Hi,
I have a 3-node cluster and I am using Ceph storage. Now I am trying to enable HA on all VMs, but while trying to add replication I get the error
"No replicatable volumes found (500)"
I am getting an error while migrating a VM to another node. I already tried removing the entry from known_hosts, but it did not help.
Error
====
2023-05-22 14:54:32 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=vcche002' root@10.200.40.21 /bin/true
2023-05-22 14:54:32...
Could anyone reply with suggestions for the problem below?
I am also facing the same issue.
One of the nodes in my cluster seems to be hanging/freezing, and the node goes offline. The node is still powered on, so I have to hard reboot it... I'd like to know what's going on. Which logs should I check?
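Not an authoritative answer, but the usual starting points on a systemd-based node (assuming journald keeps logs across reboots) are:

```shell
# Errors from the previous boot (requires persistent journald storage):
journalctl -b -1 -p err

# Proxmox cluster daemons, which often log before a node drops offline:
journalctl -b -1 -u pve-cluster -u corosync -u pvestatd

# Kernel-level clues in the ring buffer after the hard reboot:
dmesg -T | grep -iE 'hung task|oom|nmi|mce|i/o error'
```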
Please let us know how to add a Dell PowerStore 500T LUN (FC) to the Proxmox nodes.
This is very important, and we are unable to add the SAN LUN to the Proxmox nodes.
Please let us know if you have any ideas.
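For what it's worth, a minimal sketch of the usual approach for an FC LUN; the device and volume-group names are placeholders, and it assumes multipath-tools and sg3-utils are installed:

```shell
# Rescan the FC HBAs so the newly mapped LUN appears:
rescan-scsi-bus.sh

# Verify the PowerStore LUN shows up as a single multipath device:
multipath -ll

# Put LVM on the multipath device and add it as shared storage
# ("mpatha", "san_vg", and "san-lvm" are placeholder names):
pvcreate /dev/mapper/mpatha
vgcreate san_vg /dev/mapper/mpatha
pvesm add lvm san-lvm --vgname san_vg --shared 1
```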
Hello Support,
We are trying to create Ceph storage on a LUN attached over the SCSI storage protocol. It is mapped from a Dell PowerStore array.
1) The actual mapped storage is 5TB, but Proxmox's Disks view shows 8 disks of 5TB each
2) It throws an error while creating an OSD in Ceph
"command...
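Regarding point 1: if the array exports one 5TB LUN over eight FC/SCSI paths and multipath is not configured, Proxmox will list eight identical disks. A quick check (assuming the eight disks are /dev/sdb through /dev/sdi; adjust to your layout):

```shell
# If every device prints the same WWID, they are one LUN seen over
# eight paths and should be aggregated by multipath, not treated as
# eight separate disks/OSDs:
for d in /dev/sd{b..i}; do
    /lib/udev/scsi_id -g -u "$d"
done
multipath -ll
```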