Thanks for the info. pvestatd died:
kernel: MCE: Killing pvestatd:2946 due to hardware memory corruption fault at 560a8770c878
I restarted it, but it seems that I have a memory issue.
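For the record, the restart was just the usual service restart, and I now want to pull the recorded error history to see how bad the memory is. A rough sketch of what I have in mind (rasdaemon is an assumption on my part; it is not installed by default):

systemctl restart pvestatd               # bring the status daemon back
apt install rasdaemon                    # assumption: records MCE/EDAC events for later inspection
ras-mc-ctl --errors                      # dump the recorded memory error history
dmesg | grep -iE 'mce|hardware error'    # any machine-check noise since boot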
It has been a week since I upgraded to the latest version. This morning one of my servers showed up in the GUI with question marks, but the server is accessible: I can open a shell and the VMs are all running. I'm not sure what could be missing; a corrupted file, perhaps?
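Since the question marks usually just mean the GUI is missing status data, the first thing I will check is whether the status daemon is still reporting; something like:

systemctl status pvestatd    # the daemon that feeds node/VM status to the GUI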
I am running Ceph also and...
Hi,
I backed up all my VMs prior to converting to Ceph. After creating the Ceph pool, I want to restore the VMs, but the restore tries to create the VM disks on local-lvm instead of the Ceph pool, and the restore fails. How can I make the restore use the Ceph pool for the VM disks?
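For reference, this is roughly what I expect should work from the CLI, pointing the restored disks at the pool; the storage name ceph-pool and the archive path/VMID are placeholders for my setup:

# restore VM 100 from its vzdump archive, placing the disks on the Ceph-backed storage
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2023_09_23-00_00_00.vma.zst 100 --storage ceph-pool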
I used the 6.2.16-6-pve kernel and one LXC being backed up...
I tried the older kernel with one LXC; backups worked and there was no crash. When I added a second LXC to the backup list, it crashed. I read somewhere there was a bug with backups and too many file handles, but I cannot find it anywhere now. For now I am...
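While I keep looking for that bug report, this is roughly how I plan to watch the system-wide handle count during a backup to see whether the theory holds:

watch -n 5 'cat /proc/sys/fs/file-nr'    # allocated handles, unused, and the system max
sysctl fs.file-max                       # current system-wide open-file limit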
Hi,
I switched to an earlier kernel, 6.2.16-6-pve, and will monitor during today's backups. The node is a Dell R830 on the latest BIOS; the other two machines are Lenovo servers.
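In case it helps anyone else, this is how I understand pinning a specific kernel on PVE 8, sketched roughly (the version string is just the one on my node):

proxmox-boot-tool kernel list                # show installed kernels and any current pin
proxmox-boot-tool kernel pin 6.2.16-6-pve    # boot this version until it is unpinned
reboot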
This is the first time I have had problems. At first I suspected memory errors, as I had a bad stick when I first...
Hi,
I am running PVE8 with the latest 6.2.16 kernel, and it has been very stable until now. One of the three hosts crashes during backup. I am also using Proxmox Backup Server, at the latest version.
The logs just before the crash are:
Sep 23 20:30:00 sv4 pvescheduler[2297021]: <root@pam>...
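That is the last line before the reset. For anyone wanting the full context, the surrounding lines can be pulled from the previous (crashed) boot's journal with something like this, assuming the journal is persistent:

journalctl -b -1 -n 200 --no-pager    # last 200 lines of the previous boot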
Hi,
I am unable to see any of the x86-64-vN CPU types as options for a VM. I am running a 3-node cluster with only one node upgraded to VE8; the others are still on VE7 and will be upgraded to VE8 this weekend.
I have attached the output of lscpu, which seems to indicate that x86-64 is available, and...
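Besides lscpu, a quick check I came across for which x86-64 feature levels the host CPU actually supports uses the glibc dynamic loader (needs glibc 2.33+, which PVE 8 on Debian 12 has):

/lib64/ld-linux-x86-64.so.2 --help | grep -E 'x86-64-v[0-9]'    # prints e.g. 'x86-64-v3 (supported, searched)'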