new install, error during benchmark: -5

mouk

Renowned Member
May 3, 2016
Hi,

We have started evaluating Proxmox and it looks _very_ interesting. We set up a test environment with three servers, used pveceph, and we're at HEALTH_OK, quorum established, etc.; all looking pretty good, 3 OSDs, 3 up/in.

Now for some testing:
rados bench -p scbench 1000 write --no-cleanup
works perfectly on all three servers. However:

root@pve-ceph-3:~# rados bench -p scbench 10 rand
sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat  avg lat
  0       0         0         0         0         0         -        0
read got -2
error during benchmark: -5
error 5: (5) Input/output error
root@pve-ceph-3:~#

This one works on ONE server (pve-ceph-2), but the other two servers show the error above.
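(For reference while comparing: the return codes are errno values, so -2 is ENOENT, "no such object", and -5 is EIO. As far as I understand, the rand benchmark reads back the objects that an earlier `write --no-cleanup` run left in the pool, so a plausible single-node sequence, sketched here with the `scbench` pool name from above, would be:)

```shell
# Write benchmark objects and keep them in the pool (--no-cleanup),
# then read them back with the rand benchmark on the same node.
rados bench -p scbench 60 write --no-cleanup
rados bench -p scbench 10 rand

# Remove the leftover benchmark objects afterwards.
rados -p scbench cleanup
```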

This is just initial testing, so I have probably done something wrong somewhere..? This is happening basically straight after installation, so I have not yet really harmed the system in any way :-)

Suggestions?
 
root@pve-seph-1:/var/log/ceph# cat /etc/pve/ceph.conf
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 1.2.3.0/24
filestore xattr use omap = true
fsid = 99be1c9e-3c06-40e4-846f-f84423598ab0
keyring = /etc/pve/priv/$cluster.$name.keyring
osd journal size = 5120
osd pool default min size = 1
public network = 1.2.3.0/24

[osd]
keyring = /var/lib/ceph/osd/ceph-$id/keyring

[mon.0]
host = pve-seph-1
mon addr = 1.2.3.55:6789

[mon.2]
host = pve-ceph-3
mon addr = 1.2.3.90:6789

[mon.1]
host = pve-seph-2
mon addr = 1.2.3.69:6789
 
root@pve-seph-1:/var/log/ceph# cat /etc/pve/storage.cfg
dir: local
path /var/lib/vz
content backup,iso,vztmpl

lvmthin: local-lvm
vgname pve
thinpool data
content rootdir,images

rbd: storage-test
monhost 1.2.3.69 1.2.3.55
content images
pool rbd
username admin
 
In my storage.cfg, I notice now that one IP is missing for my third server. I'll try adding that. But surely something as minor as that should not have such a big consequence..?
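(A sketch of what the corrected entry might look like, taking the third monitor's IP, 1.2.3.90, from the ceph.conf quoted above; this is an assumption about the intended fix, not a confirmed solution:)

```
rbd: storage-test
    monhost 1.2.3.90 1.2.3.69 1.2.3.55
    content images
    pool rbd
    username admin
```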
 
Dear Dominik,

And I tried so hard with Google... Thanks for pointing me to that post. :-)