I have two clusters: one that runs the VMs and one that provides Ceph storage. When I move a hard drive from local storage on the Proxmox cluster to RBD on the dedicated Ceph cluster I get:
create full clone of drive virtio0 (local-lvm-thin:vm-100-disk-0)
2020-01-20 00:11:54.296691 7f640c7270c0 -1 did not load config file, using default settings.
2020-01-20 00:11:54.302368 7f640c7270c0 -1 Errors while parsing config file!
2020-01-20 00:11:54.302371 7f640c7270c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2020-01-20 00:11:54.302372 7f640c7270c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2020-01-20 00:11:54.302372 7f640c7270c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
2020-01-20 00:11:54.303070 7f640c7270c0 -1 Errors while parsing config file!
2020-01-20 00:11:54.303073 7f640c7270c0 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2020-01-20 00:11:54.303074 7f640c7270c0 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2020-01-20 00:11:54.303074 7f640c7270c0 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
...then the hard drive is moved with no issues. I tried this several times, and also migrated several VMs from one Proxmox node to another, without any problems. Is this something I should be concerned about? There is no ceph.conf on this cluster (the Proxmox cluster running the VMs) because it does not run Ceph itself, so I am not sure why it would even look for that file. I just did a clean "re-install" from 5.4, and this message was not present when I was moving hard drives before (on 5.4).
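Since the move itself succeeds, this looks cosmetic, but I am wondering whether simply dropping a minimal ceph.conf onto the PVE nodes would silence the parse warnings. Something like the following is only my guess (reusing the monitors from storage.cfg), I have not tested it:

# /etc/ceph/ceph.conf on the PVE nodes - untested guess
[global]
        mon_host = 10.221.1.70,10.221.1.71,10.221.1.72,10.221.1.73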
The versions on the Proxmox cluster are:
proxmox-ve: 6.1-2 (running kernel: 5.3.13-1-pve)
pve-manager: 6.1-5 (running version: 6.1-5/9bf06119)
pve-kernel-5.3: 6.1-1
pve-kernel-helper: 6.1-1
pve-kernel-5.3.13-1-pve: 5.3.13-1
pve-kernel-5.3.10-1-pve: 5.3.10-1
ceph-fuse: 12.2.11+dfsg1-2.1+b1
corosync: 3.0.2-pve4
criu: 3.11-3
glusterfs-client: 5.5-3
ifupdown: 0.8.35+pve1
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.13-pve1
libpve-access-control: 6.0-5
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-9
libpve-guest-common-perl: 3.0-3
libpve-http-server-perl: 3.0-3
libpve-storage-perl: 6.1-3
libqb0: 1.0.5-1
libspice-server1: 0.14.2-4~pve6+1
lvm2: 2.03.02-pve3
lxc-pve: 3.2.1-1
lxcfs: 3.0.3-pve60
novnc-pve: 1.1.0-1
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.1-1
pve-cluster: 6.1-2
pve-container: 3.0-15
pve-docs: 6.1-3
pve-edk2-firmware: 2.20191127-1
pve-firewall: 4.0-9
pve-firmware: 3.0-4
pve-ha-manager: 3.0-8
pve-i18n: 2.0-3
pve-qemu-kvm: 4.1.1-2
pve-xtermjs: 3.13.2-1
qemu-server: 6.1-4
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.2-pve2
Storage:
cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir

rbd: cephhdd-pool
        content rootdir,images
        krbd 0
        monhost 10.221.1.70,10.221.1.71,10.221.1.72,10.221.1.73
        pool cephhdd-pool
        username admin

rbd: cephssd-pool
        content images,rootdir
        krbd 0
        monhost 10.221.1.70,10.221.1.71,10.221.1.72,10.221.1.73
        pool cephssd-pool
        username admin
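In case it is relevant: both external RBD storages authenticate as admin with keyring files on the PVE side. As far as I understood the documentation for an external cluster, those live at:

/etc/pve/priv/ceph/cephhdd-pool.keyring
/etc/pve/priv/ceph/cephssd-pool.keyring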
I am using the pve-enterprise.list repository. I noticed that the ceph client package on the PVE nodes (ceph-fuse: 12.2.11+dfsg1-2.1+b1) is an older version than the one on my dedicated Ceph cluster, which might or might not be an issue.
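For comparison, this is how I checked the client version on a PVE node against the daemons on the Ceph cluster (assuming the client packages are what the move actually uses, which I am not certain about):

# on a PVE node
ceph-fuse --version
dpkg -l | grep -i ceph

# on a Ceph cluster node
ceph versions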
Thank you.