Search results

  1.

    CephFS Replication

    Thanks for the quick reply Alwin. Let's hope it's on the cards for the future.
  2.

    CephFS Replication

    We're experimenting with various Ceph features on the new PVE 6.2 with a view to deployment later in the year. One of the Ceph features that we're very interested in is pool replication for disaster recovery purposes (rbd mirror). This seems to work fine with "images" (like PVE VM images within...
    (a hedged rbd mirror sketch follows the results below)
  3.

    CephFS Mount Question

    You're absolutely right. It took me a couple of days to remember that ceph is set up to use a completely different network to the rest of proxmox. Makes total sense now. Will test tomorrow... Thanks!
  4.

    CephFS Mount Question

    I would like to get CephFS working (I have already been using RBD for quite some time with great success). I've followed the guidance in the wiki to set up CephFS, and things seem fine. I then tried to mount it on an Ubuntu 18.04 client VM, like: root@server2:~# mount -t ceph...
    (a hedged mount sketch follows the results below)
  5.

    Re-adding Ceph Node

    This was the solution, thank you! Removing the non-existent OSDs with 'ceph auth del osd.ID' and then re-adding them using the Web UI worked perfectly. Thanks all so much for the help! Ceph is rebalancing now...
    (a hedged OSD clean-up sketch follows the results below)
  6.

    Re-adding Ceph Node

    It shows the following: root@smiles1:~# ceph auth list installed auth entries: osd.0 key: AQAWH7paYyXFKRAAR955203HR1WKXiDMYmJeJA== caps: [mgr] allow profile osd caps: [mon] allow profile osd caps: [osd] allow * osd.1 key: AQBDNSxYQdD+ARAAGn0DoTbPOKt+6sM5uRlA9Q== caps...
  7.

    Re-adding Ceph Node

    Any more suggestions? I would really like to get this last piece of the puzzle figured out so I can fully restore the cluster.
  8.

    Re-adding Ceph Node

    No, they're not visible through 'ceph osd crush tree': root@smiles3:~# ceph osd crush tree ID CLASS WEIGHT TYPE NAME -1 4.96675 root default -2 2.52208 host smiles1 0 ssd 0.22240 osd.0 1 ssd 0.89999 osd.1 6 ssd 0.95000 osd.6...
  9.

    Re-adding Ceph Node

    Yes, reload doesn't help; it never shows. The OSD is not mounted: root@smiles3:~# mount sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,relatime) udev on /dev type devtmpfs (rw,nosuid,relatime,size=16341840k,nr_inodes=4085460,mode=755) devpts on /dev/pts type...
  10.

    Re-adding Ceph Node

    Many thanks, this was helpful. smiles1 no longer had its symlink, so the changes did not propagate. After fixing the symlink and then removing the entry for the new server manually, I was able to add it back. All monitors now show they have quorum (although the new one is now named...
    (a hedged monitor clean-up sketch follows the results below)
  11.

    Re-adding Ceph Node

    I am using PVE in a 3-node cluster configuration. The PVE part of it works fine (I can see all nodes, they have green arrows, I can move VMs between them all, etc.). It's just Ceph that I can't get working.
  12.

    Re-adding Ceph Node

    Thanks dcsapak, I removed mon.2 from ceph.conf on an old node, then ran pveceph createmon -mon-address 10.15.15.52 on the newly installed node, but still the same issue. It shows as Quorum = No on the list of monitors in the web UI, and if I check ceph.conf on the old node, the new node is not...
  13.

    Re-adding Ceph Node

    I should add that the list of monitors looks like the attached, and if I try to remove it I get "monitor filesystem '/var/lib/ceph/mon/ceph-2' does not exist on this node (500)".
  14.

    Re-adding Ceph Node

    I use a 3-node cluster set up with Ceph. Over the weekend node 3's system disk (SSD, no RAID) failed. I replaced the disk, removed the node from the cluster, re-added it per the instructions, and all is well - the cluster is complete again. Now I'm having trouble with Ceph. I removed all the OSDs and...
  15.

    Mellanox Drivers for PVE 5.4

    Many thanks for this - it worked as you suggested using the included debs (the DEBS_ETH directory, Ethernet-only, in my case). Chapter 2.8.2 of the manual covers the install (add the DEBS_ETH directory as an apt repo, add their key, update and install). dkms status mlnx-en, 4.5: added...
    (a hedged install sketch follows the results below)
  16.

    Mellanox Drivers for PVE 5.4

    We're using Proxmox Virtual Environment version 5.4-3 and have upgraded our cluster to Mellanox Connectx-3 40G NICs. We would like to use the Mellanox drivers if possible, but their download page only provides drivers for Debian up to version 9.5 and I believe PVE 5.4-3 runs on top of Debian...
  17.

    Ceph: active+clean+inconsistent

    Ceph managed to fix this itself after I issued another repair, which is nice. It works in mysterious ways! Logs show the following: 2018-11-19 11:44:26.513656 mon.sb1 mon.0 10.32.113.1:6789/0 153119 : cluster [ERR] Health check update: 3 scrub errors (OSD_SCRUB_ERRORS) 2018-11-19 11:44:26.513677...
    (a hedged repair sketch follows the results below)
  18.

    Ceph: active+clean+inconsistent

    Thanks Udo. This Ceph pool is only used for CCTV footage storage, so it isn't going to be fatal if I lose a clip somewhere. The CCTV system will eventually overwrite the affected block I suppose, but I don't know if that will solve the issue - anyway, it's a good exercise for me to learn how to...
  19.

    Ceph: active+clean+inconsistent

    I've stopped and restarted all affected OSDs (waiting a while in between), but no luck. I saw a hard disk read error in the logs, but was able to successfully read the mentioned block using hdparm, so I would think it was only a temporary issue.
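
A few hedged command sketches for the threads above follow. For the "CephFS Replication" thread (result 2), this is a minimal sketch of per-image RBD mirroring as discussed there; the pool name, the image name, and the peering/rbd-mirror daemon setup on the backup cluster are assumptions, not details from the thread.

rbd mirror pool enable rbd image                  # mirror selected images in pool "rbd" (pool name assumed)
rbd feature enable rbd/vm-100-disk-0 journaling   # journal-based mirroring needs this feature
rbd mirror image enable rbd/vm-100-disk-0         # image name is a placeholder
rbd mirror pool status rbd                        # check replication health
# peer setup ("rbd mirror pool peer add ...") and running the rbd-mirror daemon
# on the backup cluster are omitted here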
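
For the "CephFS Mount Question" thread (result 4), a minimal sketch of mounting CephFS from an Ubuntu 18.04 client with the kernel driver; the monitor address, client name, and secret file path are placeholders, not values from the thread.

apt install ceph-common                        # provides mount.ceph and the client tools
mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
    -o name=admin,secretfile=/etc/ceph/admin.secret
# the secret file holds only the base64 key, and the monitor address must be on
# the Ceph network the client can actually reach (the point result 3 ends on)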
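
For the "Re-adding Ceph Node" thread (result 5), a hedged sketch of clearing a stale OSD before re-adding the disk, built around the 'ceph auth del osd.ID' step the thread settles on; the OSD id and device path are placeholders.

ceph osd crush remove osd.6   # drop a leftover CRUSH entry, if any
ceph auth del osd.6           # remove the stale auth key, as in the thread
ceph osd rm osd.6             # free the OSD id
pveceph osd create /dev/sdb   # "pveceph createosd" on older PVE; the thread used the web UI instead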
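
For the monitor part of the same thread (results 10-13), a hedged sketch of checking the ceph.conf symlink and dropping the stale monitor before re-creating it. On a PVE Ceph node /etc/ceph/ceph.conf is normally a symlink to the cluster-wide /etc/pve/ceph.conf; that this is the symlink the thread means is an assumption. The monitor id "2" and the address come from the posts.

ls -l /etc/ceph/ceph.conf                       # should point at /etc/pve/ceph.conf
ln -sf /etc/pve/ceph.conf /etc/ceph/ceph.conf   # restore the link if it is gone (assumed fix)
ceph mon remove 2                               # drop the stale monitor from the monmap
pveceph createmon --mon-address 10.15.15.52     # re-create it on the new node, as in result 12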
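
For the "Mellanox Drivers for PVE 5.4" thread (result 15), a hedged sketch of the local-repository install described there (Chapter 2.8.2 of the Mellanox manual); the extraction path and the package name are assumptions, so check which debs are actually present in DEBS_ETH.

echo "deb [trusted=yes] file:/root/mlnx-en-4.5/DEBS_ETH ./" \
    > /etc/apt/sources.list.d/mlnx.list   # path is a placeholder; the thread adds Mellanox's GPG key rather than using trusted=yes
apt update
apt install mlnx-en-dkms                  # package name is an assumption
dkms status                               # should report "mlnx-en, 4.5: added/installed", as in the thread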
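
For the "Ceph: active+clean+inconsistent" thread (results 17-19), a minimal sketch of the usual inspect-and-repair flow, plus the kind of raw-read check mentioned in the last post; the PG id, sector number, and device are placeholders.

ceph health detail                                      # lists the inconsistent PG(s) and scrub errors
rados list-inconsistent-obj 2.1f --format=json-pretty   # show which object/shard is bad (PG id assumed)
ceph pg repair 2.1f                                     # the repair that eventually cleared the errors in the thread
hdparm --read-sector 123456789 /dev/sdc                 # confirm the suspect sector is readable (values are placeholders)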
