Hi Mira, I had to delete all the VM disks, but yes, the cluster is up and running.
I'm waiting to harden it a little before putting it into production again.
I'm changing from bridge networking to 2 switches, and adding two more nodes, for a total of 5.
OK, I force-created the missing PG, and the nodes started working again ...
Now it's rebalancing, so I have no idea what information is missing.
I used the command ceph osd force-create-pg
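Roughly what I ran (a sketch only — the PG id below is a placeholder taken from the stuck-PG listing, and note that a force-created PG comes back empty, so whatever data it held is gone):

```shell
# Identify the inactive/unknown PGs, then force-create each missing one.
# WARNING: a force-created PG is recreated EMPTY -- its data is lost.
ceph pg dump_stuck inactive
ceph osd force-create-pg 2.19 --yes-i-really-mean-it   # 2.19 is a placeholder PG id
```

Newer Ceph releases require the --yes-i-really-mean-it flag; older ones accept the bare command.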
Hi Mira
Any idea how to format the OSDs without reinstalling?
The information is too old for recovery to make sense.
The nodes have been in the same status for 1 week ..
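To be clear, by "format without reinstall" I mean wiping an OSD in place, something like this (a sketch only, assuming OSD id 3 on /dev/sdb — both are placeholders, and this destroys that OSD's data):

```shell
# Take the OSD out, stop it, purge it from the cluster map,
# wipe the device, and create a fresh OSD on it.
ceph osd out 3
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it
ceph-volume lvm zap /dev/sdb --destroy
pveceph osd create /dev/sdb
```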
Hi Mira
Sure, I read some documentation. I am afraid of losing all the data, which is why I'm looking for some advice.
I have a ceph health detail output from before the reboot.
As I thought the info was on node 3, I restarted it, and then the stale+undersized+degraded+peered became...
Hi Mira.
The nodes are synced.
The OSDs are running.
The current ceph.log is uploaded here: ceph.log
The output of ip -details -statistics a is uploaded here: ip details
Thanks
Demian
Hi Mira
The problem started on November 5.
Below are the sizes:
-rw------- 1 ceph ceph 6.7G Nov 8 08:04 ceph.log
-rw------- 1 ceph ceph 1.5G Nov 8 00:00 ceph.log.1.gz
-rw------- 1 ceph ceph 1.1G Nov 6 23:59 ceph.log.2.gz
-rw------- 1 ceph ceph 1.5G Nov 5 23:59 ceph.log.3.gz
-rw------- 1 ceph...
Hi Mira
Thanks for the answer.
I restarted the nodes, with no success.
Below are the results of pveversion -v.
The log is huge because it contains a lot of "cluster [WRN] slow request osd_op" entries (22,811,414 lines) after my rebalance ...
proxmox-ve: 6.4-1 (running kernel: 5.4.143-1-pve)
pve-manager: 6.4-13 (running...
OK, I restored 1-day-old backups on another Proxmox host without Ceph.
But now the Ceph nodes are unusable.
Any idea how to restore the nodes without completely formatting them?
I still have some hope of restoring access to the old VM disks.
Thanks
Hi
I have 3 nodes in a cluster.
After removing an OSD to try to get more speed (to use only SSDs), and adding it back because of lack of space, I ended up with this error:
cluster:
id: XX
health: HEALTH_WARN
Reduced data availability: 12 pgs inactive
220 slow ops...
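This is how I've been inspecting the stuck PGs so far (a sketch; the PG id below is a placeholder):

```shell
ceph health detail            # lists the inactive PGs by id
ceph pg dump_stuck inactive   # stuck PGs and their acting OSDs
ceph pg 2.19 query            # detailed state of one PG (placeholder id)
```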
Hi RokaKen
Thanks for the answer.
So, the idea is to move all the VMs & CTs to another cluster, upgrade the node, and bring it online again, and the other 2 nodes with the old versions should still connect to Ceph?
I mean, is it possible to upgrade 1 node at a time?
Or do I need to shut down/back up everything, and...
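For the record, the per-node procedure I have in mind looks roughly like this (a sketch only — the flags are standard Ceph, the ordering is my assumption):

```shell
# Repeat for each node, one node at a time:
# 1. migrate guests off the node (GUI, or qm migrate / pct migrate)
# 2. keep Ceph from rebalancing while the node is down
ceph osd set noout
ceph osd set norebalance
# 3. upgrade packages and reboot the node
apt update && apt full-upgrade
reboot
# 4. wait for the cluster to return to HEALTH_OK, then clear the flags
ceph -s
ceph osd unset norebalance
ceph osd unset noout
```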
Hi, I am new to Proxmox and have some doubts.
It will be greatly appreciated if someone can help me.
I have installed Proxmox 6.2.12 with Ceph on 3 nodes, with HA.
I was not able to find any post or documentation showing how to upgrade with no downtime.
Is this possible? What is the procedure...