I'm experiencing this as well. The port disappears when the active manager fails over to another node, and now the active manager isn't binding port 8003 for some reason. I tried disabling and re-enabling the restful module, but to no avail. I have no idea how to debug this.
Has anyone else run into this issue?
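In case it helps anyone else stuck on the same thing, here's roughly how I'd poke at it from the CLI (a sketch; nothing here is specific to my cluster):

# Check which mgr is active and which service URLs it advertises
ceph mgr services
ceph -s | grep mgr

# Confirm the restful module is actually listed as enabled
ceph mgr module ls

# On the active mgr node, see whether anything is listening on 8003
ss -tlnp | grep 8003

# Bounce the module (what I already tried)
ceph mgr module disable restful
ceph mgr module enable restful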
Yeah, if I cancel a disk move to a Ceph pool, it does say `Removing image: 1% complete...`, but the removal is then canceled at 2%, so it seems that cancelling a disk move also cancels the cleanup of the partially copied disk on the Ceph pool. @Alwin
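If that's the case, the half-copied image is presumably left orphaned on the pool. A sketch of how it could be checked and cleaned up by hand; the pool and image names here are placeholders, not from my setup:

# List RBD images on the pool and look for a leftover disk
rbd ls -p ceph-pool

# If an orphaned image (e.g. vm-100-disk-1) is still there, remove it manually
rbd rm ceph-pool/vm-100-disk-1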
Derp, I'm dumb. It looks like I had copied a systemd network config file, /etc/systemd/network/99-default.link, onto the server with the contents:
[Match]
Path=/devices/virtual/net/*
[Link]
NamePolicy=kernel database onboard slot path
MACAddressPolicy=none
Removing that and rebuilding the...
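For anyone who hits the same thing, the cleanup was roughly the following. I'm assuming the truncated step above was rebuilding the initramfs, since the .link file gets copied into it; treat this as a sketch rather than my exact commands:

# Remove the stray .link file that was overriding interface naming
rm /etc/systemd/network/99-default.link

# Rebuild the initramfs for all installed kernels, then reboot
update-initramfs -u -k all
reboot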
There was a new kernel update today/last night (9-19-2019), and it seems my servers' interface names reverted from ens0p0 to the old eth0 format. I don't have predictable network interface names disabled in my GRUB startup parameters, so why would they have reverted?
Thanks!
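For reference, this is the sanity check I'd run to confirm nothing in GRUB is disabling predictable names (naming only falls back to eth0 style if net.ifnames=0 or biosdevname=0 is on the kernel command line):

# Confirm net.ifnames=0 / biosdevname=0 are not set anywhere
grep GRUB_CMDLINE /etc/default/grub
cat /proc/cmdline

# Check whether a systemd .link file is overriding the naming policy instead
ls -l /etc/systemd/network/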
So my Ceph pool reports its usage in the web GUI as this: [screenshot]
But when going to the storage summary for the Ceph pool, it reports it as this: [screenshot]
Is this normal, or is there an issue going on that I need to resolve?
Thanks!
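In case the raw numbers are more useful than the screenshots, these are the two views I'd compare from the CLI; as far as I can tell the GUI pool view and the storage summary are fed by different accounting, so the percentages won't necessarily match:

# Per-pool usage as Ceph itself reports it
ceph df detail

# Usage as the Proxmox storage layer reports it
pvesm status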
When viewing the content of our Ceph RBD storage within the Proxmox GUI, it displays this error message: [screenshot]. There are no errors for running VMs on the Ceph cluster, moving disks, or anything else.
It's more annoying than anything, but how can I resolve this issue without having to create a new pool and transfer the data...
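As far as I can tell the GUI content view is just listing the RBD images, so checking from the CLI might show whether this is a Ceph problem or purely a GUI one (the pool and storage IDs below are placeholders):

# List images roughly the way the storage plugin does
rbd ls -l -p ceph-pool

# Query the Proxmox storage layer directly (ceph-rbd is the storage ID)
pvesm list ceph-rbd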
It's hard to compare, but ~600mbps seems somewhat comparable, although I have fewer OSDs per node than the benchmarks. What configuration changes were made in the benchmark to achieve those results?
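For anyone wanting to reproduce the comparison: I'm assuming the benchmark numbers come from a standard rados bench run, something like this (the pool name is a placeholder):

# 60-second write benchmark (4MB objects by default), keeping objects for the read test
rados bench -p ceph-pool 60 write --no-cleanup

# Sequential read benchmark against the objects written above
rados bench -p ceph-pool 60 seq

# Remove the benchmark objects afterwards
rados -p ceph-pool cleanup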
I have set up a small Ceph cluster with the following specs:
Three identical nodes:
- HP DL380p G8
- Intel Xeon E5-2697-v2
- 128GB DDR3 RAM (16GB 2RX4 PC3-14900R)
- OS Drive: Intel DC S4500 240GB
- OSD Drives: 2x Intel DC S3500 800GB
- NIC: Intel X520
Configuration:
[global]
auth client required...
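(The config got cut off above. For context, the auth lines in a stock ceph.conf [global] section usually look like the following; these are the upstream defaults, not necessarily my exact file:)

auth client required = cephx
auth cluster required = cephx
auth service required = cephx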