We're facing the problem on one node of our cluster:
3xR610 + SolarFlare SFN5322F.
Kernel Version Linux 4.15.18-9-pve #1 SMP PVE 4.15.18-30 (Thu, 15 Nov 2018 13:32:46 +0100)
PVE Manager Version pve-manager/5.3-5/97ae681d
All 3 nodes are up to date. No Ceph, no ZFS. Feel free to ask if...
As said in my 2nd post, pveceph purge did not work ;)
After a good night's sleep, I found that I still had a ceph-osd process running on one node. I killed the process.
Ran pveceph purge again and managed to recreate my cluster.
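For reference, the cleanup sequence described above can be sketched roughly as follows. This is an outline rather than an exact transcript; the `ceph-osd.target` unit name assumes a systemd-managed Ceph install, and the PID shown is a placeholder:

```shell
# Check every node for leftover Ceph OSD daemons
ps aux | grep '[c]eph-osd'

# Ask systemd to stop any remaining OSD services first
systemctl stop ceph-osd.target

# If a stray process survives, kill it by PID (placeholder PID)
# kill -9 <pid>

# With no Ceph daemons left running, purge the Ceph config from the node
pveceph purge
```

Only after all ceph-osd processes are gone does the purge complete cleanly, which matches what happened here.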
I managed to create the OSDs but got some errors using the web interface, so I used the CLI...
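A minimal sketch of creating an OSD from the CLI on PVE 5.x, which ships the `pveceph createosd` subcommand (newer releases rename it to `pveceph osd create`). `/dev/sdX` is a placeholder for the target disk, and zapping it destroys all data on it:

```shell
# Wipe any previous partition table on the target disk (DESTROYS DATA)
# /dev/sdX is a placeholder -- substitute the real device
ceph-disk zap /dev/sdX

# Create the OSD on that disk (PVE 5.x syntax)
pveceph createosd /dev/sdX

# Verify the new OSD shows up and is "up"
ceph osd tree
</imports>
```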
No I did not, because the only way I had found was to remove the ceph package, but the last time I tried that, I remember it implied removing packages such as pve-manager.
I've just tried it, and that seems not to be true. It's quite late here, so I won't do it right now. I'll do it tomorrow and give feedback...
Cluster is based on a Dell C6100
4 nodes with Proxmox 5.2.9, 96GB RAM, 2x L5639 per node.
Each node has a 1Gb network card that goes to the firewall (through a switch), plus a bond that goes to the storage network (I have another Ceph cluster, and a Synology that shares storage with the Proxmox cluster).
I've tried to get Ceph working on my Proxmox cluster, but it failed. Many OSDs weren't created, and currently I can't destroy the existing pool and/or add OSDs.
I was wondering if there is a way to fully reinitialize Ceph without reinstalling Proxmox.
I have a 4-node cluster on Proxmox 5.2.9 running Ceph...
I'm going to add 10Gb network cards on my Proxmox nodes (Proxmox v4 up to date). I have 3 nodes and each has its own local storage (no NAS/SAN).
Currently, all network traffic goes through the original 1Gb card.
I'd like to use these new cards only for 'storage' actions like migration and...
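One way to do that, sketched under assumptions: give the 10Gb card on each node its own subnet, then point migration traffic at that subnet in `/etc/pve/datacenter.cfg`. The interface name `ens1f0` and the `10.10.10.0/24` subnet are placeholders (check yours with `ip link`), and the `migration` datacenter option may not exist on every PVE 4 point release, so verify it is supported before relying on it:

```text
# /etc/network/interfaces (fragment) -- hypothetical 10Gb storage NIC
auto ens1f0
iface ens1f0 inet static
    address 10.10.10.1
    netmask 255.255.255.0

# /etc/pve/datacenter.cfg -- route migrations over the 10Gb subnet
migration: secure,network=10.10.10.0/24
```

With this in place, normal VM/management traffic stays on the 1Gb card while migrations use the dedicated subnet.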