Quote: "Were you in /etc/pve on the Proxmox node while running the CLI commands? If not, then that's why it cannot find the ceph.conf file."

OK, I ran from /etc/pve on both nodes:
Code:
ceph6-ib /etc/pve # ceph-deploy disk list ceph6-ib
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.3): /usr/bin/ceph-deploy disk list ceph6-ib
[ceph6-ib][DEBUG ] connected to host: ceph6-ib
[ceph6-ib][DEBUG ] detect platform information from remote host
[ceph6-ib][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: debian 7.5 wheezy
[ceph_deploy.osd][DEBUG ] Listing disks on ceph6-ib...
[ceph6-ib][INFO ] Running command: ceph-disk list
[ceph6-ib][WARNIN] Traceback (most recent call last):
[ceph6-ib][WARNIN] File "/usr/sbin/ceph-disk", line 2579, in <module>
[ceph6-ib][WARNIN] main()
[ceph6-ib][WARNIN] File "/usr/sbin/ceph-disk", line 2557, in main
[ceph6-ib][WARNIN] args.func(args)
[ceph6-ib][WARNIN] File "/usr/sbin/ceph-disk", line 2197, in main_list
[ceph6-ib][WARNIN] tpath = mount(dev=dev, fstype=fs_type, options='')
[ceph6-ib][WARNIN] File "/usr/sbin/ceph-disk", line 812, in mount
[ceph6-ib][WARNIN] path,
[ceph6-ib][WARNIN] File "/usr/sbin/ceph-disk", line 303, in command_check_call
[ceph6-ib][WARNIN] return subprocess.check_call(arguments)
[ceph6-ib][WARNIN] File "/usr/lib/python2.7/subprocess.py", line 506, in check_call
[ceph6-ib][WARNIN] retcode = call(*popenargs, **kwargs)
[ceph6-ib][WARNIN] File "/usr/lib/python2.7/subprocess.py", line 493, in call
[ceph6-ib][WARNIN] return Popen(*popenargs, **kwargs).wait()
[ceph6-ib][WARNIN] File "/usr/lib/python2.7/subprocess.py", line 679, in __init__
[ceph6-ib][WARNIN] errread, errwrite)
[ceph6-ib][WARNIN] File "/usr/lib/python2.7/subprocess.py", line 1259, in _execute_child
[ceph6-ib][WARNIN] raise child_exception
[ceph6-ib][WARNIN] TypeError: execv() arg 2 must contain only strings
[ceph6-ib][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk list
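The traceback on ceph6-ib ends in `TypeError: execv() arg 2 must contain only strings`, which means the argument list that `ceph-disk` handed to `subprocess.check_call` contained a non-string element — most likely a `None` (for example, an undetected filesystem type for one of the partitions), though which value is `None` on this box is an assumption. A minimal sketch of the same failure:

```python
import subprocess

# Hypothetical reproduction: fs_type ends up as None instead of e.g. "xfs",
# so the mount command list contains a non-string element.
fs_type = None
args = ["mount", "-t", fs_type, "/dev/sdb1", "/mnt/tmp"]

try:
    # Popen/execv requires every element of args to be a string;
    # a None here raises TypeError before anything is executed.
    subprocess.check_call(args)
except TypeError as e:
    print("TypeError:", e)
```

On Python 2.7 (as in the traceback) the message is exactly "execv() arg 2 must contain only strings"; later Python versions word it differently, but the cause is the same. So the listing fails on ceph6-ib before `mount` even runs — worth checking what is different about the disks or partition tables on that node compared to ceph4-ib, where the same command succeeds below.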
Code:
ceph4-ib /etc/pve # ceph-deploy disk list ceph4-ib
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.3): /usr/bin/ceph-deploy disk list ceph4-ib
[ceph4-ib][DEBUG ] connected to host: ceph4-ib
[ceph4-ib][DEBUG ] detect platform information from remote host
[ceph4-ib][DEBUG ] detect machine type
[ceph_deploy.osd][INFO ] Distro info: debian 7.5 wheezy
[ceph_deploy.osd][DEBUG ] Listing disks on ceph4-ib...
[ceph4-ib][INFO ] Running command: ceph-disk list
[ceph4-ib][DEBUG ] /dev/sda :
[ceph4-ib][DEBUG ] /dev/sda1 other, primary
[ceph4-ib][DEBUG ] /dev/sda2 other, ext3, mounted on /boot
[ceph4-ib][DEBUG ] /dev/sda3 other, LVM2_member
[ceph4-ib][DEBUG ] /dev/sdb :
[ceph4-ib][DEBUG ] /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdd1
[ceph4-ib][DEBUG ] /dev/sdc :
[ceph4-ib][DEBUG ] /dev/sdc1 ceph data, active, cluster ceph, osd.1, journal /dev/sdd2
[ceph4-ib][DEBUG ] /dev/sdd :
[ceph4-ib][DEBUG ] /dev/sdd1 ceph journal, for /dev/sdb1
[ceph4-ib][DEBUG ] /dev/sdd2 ceph journal, for /dev/sdc1
[ceph4-ib][DEBUG ] /dev/sde :
[ceph4-ib][DEBUG ] /dev/sde1 ceph data, active, cluster ceph, osd.2, journal /dev/sdg1
[ceph4-ib][DEBUG ] /dev/sdf :
[ceph4-ib][DEBUG ] /dev/sdf1 ceph data, active, cluster ceph, osd.3, journal /dev/sdg2
[ceph4-ib][DEBUG ] /dev/sdg :
[ceph4-ib][DEBUG ] /dev/sdg1 ceph journal, for /dev/sde1
[ceph4-ib][DEBUG ] /dev/sdg2 ceph journal, for /dev/sdf1
[ceph4-ib][DEBUG ] /dev/sr0 other, unknown