Hello all,
I have been using Proxmox for well over 7 years now. Really love the product!
Normally I can find the solution to my issues by searching this forum, but this one seems to be uncommon, as I can't find any details on it.
I currently have a 6-node cluster: 4 compute nodes, plus 2 new R720s installed to start the switch to Ceph storage (it will grow to 4 as I migrate).
Both are fresh installs of PVE, and I just joined them to the cluster. I installed Ceph on both of them, and when I go to set up an OSD I get the following error:
command 'ceph-volume lvm create --cluster-fsid 3bac8555-d9ba-4064-a4b0-460ef1f76605 --crush-device-class ssd --data /dev/sdc' failed: exit code 1
Running lsblk in the shell, I get this error:
lsblk: /lib/x86_64-linux-gnu/libsmartcols.so.1: version `SMARTCOLS_2.34' not found (required by lsblk)
All the other nodes are up to date and have no issues. Even the one I installed Ceph on to serve as a third node has no problem with lsblk (that one is on an HP blade chassis).
Both servers are similar: both are R720s, and both have a Chelsio T580 40GbE card.
Now, I had a hell of a time getting these T580s to even be recognized, so I am wondering if my efforts installing the Chelsio unified drivers may have somehow messed up libsmartcols.so.1.
I tried rerunning the install, and it says it's already installed; uninstalling it would wreck the server.
I also tried copying the libsmartcols file over from the working Ceph server, but the same error comes up.
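In case it helps, this is roughly how I've been checking which copy of the library is actually in place (just a sketch assuming the standard Debian multiarch path from the error message; `libsmartcols1` as the owning package name is my assumption based on PVE's Debian base):

```shell
# Diagnostic sketch: inspect the libsmartcols.so.1 that lsblk complains about.
# Path taken from the lsblk error message above.
LIB=/lib/x86_64-linux-gnu/libsmartcols.so.1

# Which file does the symlink actually resolve to, and does any Debian
# package own it? A vendor driver installer dropping its own copy here
# would typically show up as unowned.
readlink -f "$LIB" 2>/dev/null || echo "library not present at $LIB"
dpkg -S "$LIB" 2>/dev/null || echo "file is not owned by any package"

# Which versioned symbols does the installed copy export? The stock
# build that lsblk links against should list SMARTCOLS_2.34 here; an
# older or vendor-supplied copy would not.
objdump -T "$LIB" 2>/dev/null | grep -o 'SMARTCOLS_[0-9.]*' | sort -u

# If a foreign copy has shadowed the packaged one, reinstalling the
# package should restore the stock build (assumption, not yet tried):
#   apt-get install --reinstall libsmartcols1
```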
Any help would be much appreciated. Thank you!