Hi,
Yes, I already have it.
The problem is that I have a user with the VM.Console, VM.Monitor, and VM.PowerMgmt privileges on a single VM, but he cannot see it in the UI.
It only appears once I add the VM.Audit privilege.
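For reference, a minimal sketch of how such a permission set could be granted from the CLI; the role name, user, and VMID below are made-up examples:

```shell
# Hypothetical role/user/VMID -- adjust to your setup.
# Define a custom role that includes VM.Audit alongside the console privileges:
pveum role add PowerUserConsole -privs "VM.Audit,VM.Console,VM.Monitor,VM.PowerMgmt"

# Grant the role to the user on that single VM (VMID 100 here):
pveum aclmod /vms/100 -user john@pve -role PowerUserConsole
```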
Regards,
Actually, yes, it was a network issue; here are the details:
We have the Ceph cluster network, 192.168.0.0/24, and the Ceph public network, 10.10.20.0/24.
Months ago, we added an OVS switch with an OVS bridge on the network 192.168.0.0/16.
Until last week, everything was working just...
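The root cause is containment: 192.168.0.0/16 covers 192.168.0.0 through 192.168.255.255, so the OVS bridge network swallows the Ceph cluster network 192.168.0.0/24. A small POSIX sh sketch (function names are mine) that checks this containment numerically:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
    old_ifs=$IFS; IFS=.
    set -- $1
    IFS=$old_ifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# in_network ADDR NET PREFIX -- succeed if ADDR lies inside NET/PREFIX.
in_network() {
    mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
    [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$2") & mask )) ]
}

# A host on the Ceph cluster network also falls inside the OVS bridge network:
if in_network 192.168.0.10 192.168.0.0 16; then
    echo "192.168.0.0/24 overlaps the OVS bridge network 192.168.0.0/16"
fi
```

With two routes to the same destination, the kernel prefers the more specific /24, but any traffic the bridge claims for the /16 collides with the Ceph cluster network.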
More logs from when we start osd.25, just before it goes down again:
2022-05-18T10:57:31.508+0000 7fc78327f700 1 osd.25 pg_epoch: 18356 pg[2.282( v 17528'2529901 lc 16523'2529900 (11747'2526900,17528'2529901] local-lis/les=18350/18351 n=502 ec=11802/98 lis/c=18353/18353 les/c/f=18354/18354/0 sis=18356...
Just for the record, we found that it was a problem with the system date: it was set to GMT+1, so we brought it back to GMT, restarted the cluster service with "systemctl restart pve-cluster", and the web UI came back.
But now there is another issue: the Ceph cluster is degraded and 6 OSDs (3 * pve0 + 3 * pve6) are...
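For anyone landing here with the same symptom, a sketch of the timezone/clock fix described above, assuming systemd-based nodes (run on every node):

```shell
# Check the current timezone and NTP state on each node:
timedatectl

# Bring the nodes back to a consistent timezone and make sure NTP
# synchronisation is on so the clocks stay aligned across the cluster:
timedatectl set-timezone UTC
timedatectl set-ntp true

# Restart the cluster filesystem service so pmxcfs picks up the change:
systemctl restart pve-cluster
```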
Here it is : https://drive.google.com/file/d/12BWYNEvjK3Hw25K8GaQOwb48sfo89VQk/view?usp=sharing
It's 38 MB, which is why I've uploaded it somewhere else.
Hi,
We have a cluster of 9 nodes with a Ceph pool for VM storage. Lately we lost access to the web UI, but we still have SSH access to the servers, and all the VMs are still running fine.
We've tried to debug the problem with no luck; the only thing we were sure about is that...
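In this situation, the usual first checks over SSH are the PVE services and cluster quorum; a sketch, assuming a standard PVE install:

```shell
# The web UI is served by pveproxy and backed by pve-cluster (pmxcfs):
systemctl status pve-cluster pveproxy pvedaemon

# Check cluster quorum -- losing quorum makes /etc/pve read-only and
# can break the UI while the VMs themselves keep running:
pvecm status

# Look for recent errors from the proxy and the cluster filesystem:
journalctl -u pveproxy -u pve-cluster --since "-1 hour"
```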
Hi everyone,
I have set up a 3-node Proxmox 7.0-11 cluster; each node has 3 HDD drives: the first is used for Proxmox and the other two are OSDs.
I wanted to test HA by removing the last disk of the first node (which doesn't contain Proxmox), but when I remove it, the node reboots.
Is this...
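As an aside, hot-pulling a disk is a harsh test; if the goal is just to watch Ceph react to a failed OSD, a softer sketch (osd.2 is an example id) is to stop the daemon instead:

```shell
# Simulate an OSD failure without pulling the disk; osd.2 is an example.
ceph osd set noout          # optional: hold off automatic rebalancing while testing
systemctl stop ceph-osd@2   # stop the OSD daemon on the node that hosts it

ceph osd tree               # the OSD should now be reported as down

# Undo the test:
systemctl start ceph-osd@2
ceph osd unset noout
```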