Search results

  1. osd disk problem ? hmm...

    Hello :) 1) I removed the RAID config, 2) set up the HBA, 3) reinstalled everything, 4) joined the cluster, 5) set up Ceph, 6) created the OSDs ... 7) 14 hours of synchronization, 0 errors!! Everything works perfectly!! Thank you very much :)
  2. osd disk problem ? hmm...

    any other errors ;-( ... yes, ceph-18 and ceph-19 are on the new machine (DELL R540)

    root@pve05:~# ceph -s
      cluster:
        id:     68ed1284-ff0b-4ac0-9de9-3e7c2ab6fe9a
        health: HEALTH_OK
      services:
        mon: 2 daemons, quorum pve03,pve04 (age 11m)
        mgr: pve03(active, since 8d), standbys: pve05...
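
    The health field in `ceph -s` output like the above can also be checked programmatically, e.g. from a monitoring script. A minimal sketch (shown against sample text copied from the post, so it runs without a live cluster; in practice you would pipe `ceph -s` itself):

    ```shell
    # Extract the overall health status from `ceph -s`-style output.
    # The sample below mirrors the output quoted in the post; on a real
    # node replace the printf with:  ceph -s | awk '/health:/ {print $2}'
    sample='  cluster:
        id:     68ed1284-ff0b-4ac0-9de9-3e7c2ab6fe9a
        health: HEALTH_OK'

    health=$(printf '%s\n' "$sample" | awk '/health:/ {print $2}')
    echo "$health"
    ```

    Anything other than `HEALTH_OK` (e.g. `HEALTH_WARN`) is worth investigating before trusting the rebuild.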
  3. osd disk problem ? hmm...

    Hi, I have 3 servers (2x DELL R540 & 1x DELL R510) in a cluster with Ceph. Everything works fine, but... I replaced one old machine with a new one, and syslog shows errors like:

    Nov 29 05:35:22 pve05 ceph-osd[3957]: 2019-11-29 05:35:22.754 7fc5324f2700 -1 bdev(0x55a4dd670000...
