Hello everyone,
We have been doing some network testing on our VMs because of a recent network change. During these tests we noticed a high retransmission rate between VMs on different nodes. This doesn't happen when the VMs are on the same node, or when we test between 2 pve...
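Roughly, a test along these lines shows it (iperf3 is just an example tool, and the IP is a placeholder, not from our setup); the Retr column reports retransmissions:
iperf3 -s                        # on the receiving VM
iperf3 -c 192.168.1.10 -t 30     # on the sending VM; watch the Retr column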
I have made an enquiry with HP; the servers are HP and we pay for hardware support.
It seems that the latest firmware version they installed has caused hardware problems: fans running out of control, network cards that do not work correctly, and disk controller problems, which I...
Does anyone know what causes this log entry and how it affects the functioning of Ceph?
Aug 26 07:46:03 pve01-poz kernel: libceph: osd10 (1)10.0.0.1:6827 socket closed (con state OPEN)
Aug 26 07:47:35 pve01-poz kernel: libceph: osd10 (1)10.0.0.1:6827 socket closed (con state OPEN)
Aug 26...
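A few checks that could help narrow it down (a sketch, assuming a standard Proxmox/Ceph setup; osd10 and 10.0.0.1 come from the log above):
ceph -s               # overall cluster health; these messages are often benign if HEALTH_OK
ceph osd find 10      # locate the host behind osd.10
ping -c 100 10.0.0.1  # basic reachability/latency toward the OSD's address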
I created the directory manually because it was not created automatically.
I set the ownership to backup:backup, but no directories are created inside it.
What I did then was create a datastore inside it, and everything was created correctly.
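For comparison, creating the datastore through the PBS CLI (the name and path below are just examples) lets PBS set up the .chunks directory and the backup:backup ownership itself:
proxmox-backup-manager datastore create store1 /mnt/datastore/store1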
thanks
Hello,
When I try to back up my PVE to the PBS storage 'STORAGE' I get this error:
ERROR: VM 101 qmp command 'backup' failed - backup register image failed: command error: inserting chunk on store 'STORAGE' failed for bb9f8df61474d25e71fa00722318cd387396ca1736605e1248821cc0de3d3af8 - mkstemp...
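(For anyone with the same symptom: an mkstemp failure while inserting chunks usually points at ownership or free space on the datastore; the path below is a placeholder.)
df -h /mnt/datastore/STORAGE             # is the datastore full?
ls -ld /mnt/datastore/STORAGE/.chunks    # should be owned by backup:backup
chown -R backup:backup /mnt/datastore/STORAGE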
Since we upgraded to Ceph 18.2.2, the cluster is noticeably slow; when removing a disk, it takes a whole day to return to a HEALTH_OK state.
So that you understand the architecture a little: I have 3 identical servers, each with 2 sockets of 24 cores and two threads each, 512 GB of RAM, and 6 2 TB SAS disks for...
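One thing worth checking (a sketch, not a confirmed fix): 18.2.x schedules recovery with mClock, which throttles it in favour of client I/O, and the profile can be switched while the cluster rebalances:
ceph config show osd.0 osd_mclock_profile                  # current profile on one OSD
ceph config set osd osd_mclock_profile high_recovery_ops   # prioritise recovery
ceph config rm osd osd_mclock_profile                      # revert to the default afterwards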
Regarding the corosync modification: I have read the documentation you provided, and the change is very easy to make, thank you very much.
I have these TOTEM messages; I understand that these are only notification messages.
May 30 10:40:40 pve01-boa corosync[2465]: [TOTEM ] Retransmit List: b6a93
May 30 10:40:40 pve01-boa corosync[2465]: [TOTEM ] Retransmit List: b6a95
May 30 10:40:41 pve01-boa corosync[2465]: [TOTEM ]...
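Occasional retransmit lines can be harmless, but sustained runs usually mean the totem network is dropping or delaying packets. The link state can be checked on each node (assuming knet, the PVE default):
corosync-cfgtool -s   # shows each knet link and whether it is connected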
There are two 10 Gbit links.
According to what you indicate, I should rebuild the entire cluster. This is impossible without a major service interruption.
Can you tell me which part indicates that connectivity has been lost in the cluster, so I can better understand the log? Is this the line that indicates there is no connectivity in the cluster?
May 29 13:36:44 pve01-boa ceph-osd[3055]: 2024-05-29T13:36:44.217+0200 7fdf3533e700 -1...
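To pull the lines that actually mark a connectivity loss, something like this over the corosync unit log should surface link-down and quorum events (a sketch; adjust the date):
journalctl -u corosync --since "2024-05-29" | grep -Ei "link|quorum|token"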
Thanks for the reply,
Sorry, I didn't realize I should respond in English.
We currently have two 10 Gbit links for corosync and Ceph; do I understand correctly that this is enough, or do I need more bandwidth?
We have had this environment running without problems for 2 years; the growth of the VMs has not been exponential enough to generate a massive load on the disks.
We have detected on one of the nodes that part of the network interface through which the...
Hi everyone,
Since Monday we have been having unexpected simultaneous reboots on all nodes of our Proxmox cluster. It has happened 3 times: once on Monday and twice on Wednesday.
We have checked our hardware and everything seems to be OK; there's no power or network outage, and temperatures are way...
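A sketch of the first things worth pulling after such a reboot (standard PVE tooling, nothing cluster-specific assumed): the tail of the previous boot's journal, and the HA state, since simultaneous reboots across a cluster often point at watchdog fencing after a quorum loss:
journalctl -b -1 -e   # end of the previous boot's log, just before the reset
ha-manager status     # are HA resources (and therefore the watchdog) active?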
The solution is the following:
mkdir -p /usr/share/proxmox-ve                # recreate the directory the apt hook lives in
touch /usr/share/proxmox-ve/pve-apt-hook      # empty stub so apt stops failing on the missing hook
chmod +x /usr/share/proxmox-ve/pve-apt-hook   # the hook must be executable
apt install proxmox-ve                        # reinstalling the meta-package restores the real hook
apt full-upgrade
If you want, you can also run:
apt autoremove                                # remove packages that are no longer needed