Yeah, if I cancel a disk move to a Ceph pool, it does say `Removing image: 1% complete...`, but the removal itself is then canceled at 2%. So it seems that cancelling a disk move also cancels the removal of the disk image from the Ceph pool. @Alwin
Derp, I'm dumb. Looks like I had transferred a systemd network config file to /etc/systemd/network/99-default.link on the server, with the contents:

```
NamePolicy=kernel database onboard slot path
```
Removing that and rebuilding the...
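For reference, the stock 99-default.link shipped by systemd (the exact contents vary by systemd version, so treat this as an approximation rather than what was on this server) looks roughly like:

```ini
# /usr/lib/systemd/network/99-default.link (approximate upstream default)
[Link]
NamePolicy=keep kernel database onboard slot path
MACAddressPolicy=persistent
```

A local copy in /etc/systemd/network/ overrides the shipped one, which is why a stray 99-default.link there can change how interfaces get named.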
There was a new kernel update today/last night (9-19-2019), and it seems my servers' interface names reverted from ens0p0 to the old eth0 format. I don't have predictable network interface names disabled in my GRUB startup parameters, so why would they have reverted?
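A couple of quick checks can narrow this down. This is a generic diagnostic sketch (the paths are the standard systemd/Debian locations, not taken from the post): first confirm that nothing on the kernel command line disables predictable naming, then look for local .link overrides.

```shell
# Check whether predictable interface naming is disabled on the kernel
# command line (net.ifnames=0 or biosdevname=0 forces eth0-style names).
grep -o 'net.ifnames=[0-9]' /proc/cmdline || echo "net.ifnames not set on cmdline"

# A stray .link file in /etc/systemd/network/ can also override naming;
# list any local overrides (none listed means the shipped defaults apply).
ls /etc/systemd/network/*.link 2>/dev/null || echo "no local .link overrides"
```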
So my Ceph pool reports its usage in the web GUI as this:
But then the storage summary for the same Ceph pool reports it as this:
Is this normal, or is there an issue going on that I need to resolve?
When viewing the contents of our Ceph RBD storage within the Proxmox GUI, it displays this error message. There are no errors for running VMs on the Ceph cluster, moving disks, or anything else.
It's more annoying than anything, but how can I resolve this issue without having to create a new pool and transfer data...
I have setup a small ceph cluster with the following specs:
Three identical nodes:
- HP DL380p G8
- Intel Xeon E5-2697-v2
- 128GB DDR3 RAM (16GB 2RX4 PC3-14900R)
- OS Drive: Intel DC S4500 240GB
- OSD Drives: 2x Intel DC S3500 800GB
- NIC: Intel X520
`auth client required...`
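That truncated line looks like the start of the cephx settings in ceph.conf. For reference, a typical auth block (these are the usual defaults on a standard deployment, not confirmed values from this cluster) looks like:

```ini
# [global] section of /etc/ceph/ceph.conf -- typical cephx defaults
[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
```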