mix local-lvm and local-zfs in cluster?

pvps1

Well-Known Member
Hi,

I've got a new 4.4-15 node installed. It's the only node in the cluster (of 8) with no ZFS, only lvm-thin.
I cannot see any storage on this node, neither in the web GUI nor with pvesm status (it hangs).

Is it possible that I cannot configure a storage (ZFS in this case) that is NOT available on all nodes?

All nodes should have ceph, nfs, and local.

The 7 nodes with additional local ZFS work fine;
on the 8th node, none is available at all.

any hints?
 
Thanks for the hint.
I configured the 8th host to use only nfs, ceph, and lvm-thin, but I still cannot see any storage backend
(though the ZFS errors in the logs are gone, of course).
I cannot even see the local storage. pvesm status, for example, still hangs (or rather runs forever; I can stop it with Ctrl-C).
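To double-check which storages a given node should see, the per-storage `nodes` lines in /etc/pve/storage.cfg are what matters. A minimal sketch of that visibility rule (a simplified parser for illustration only; the real config format has many more options, and the sample entries below are made up):

```python
# Parse a Proxmox-style storage.cfg and report which storage entries a
# given node can see. A storage with no "nodes" line is available on
# every node; otherwise only on the nodes listed.

def storages_for_node(cfg_text, node):
    visible = []
    storage = None
    nodes = None
    for line in cfg_text.splitlines():
        if line and not line[0].isspace():  # header, e.g. "zfspool: local-zfs"
            if storage is not None and (nodes is None or node in nodes):
                visible.append(storage)
            _stype, _, name = line.partition(":")
            storage = name.strip()
            nodes = None
        elif line.strip().startswith("nodes "):
            nodes = set(line.split(None, 1)[1].split(","))
    if storage is not None and (nodes is None or node in nodes):
        visible.append(storage)
    return visible

cfg = """\
dir: local
        path /var/lib/vz

zfspool: local-zfs
        pool rpool/data
        nodes node1,node2,node3

lvmthin: local-lvm
        thinpool data
        nodes node8
"""

print(storages_for_node(cfg, "node8"))  # ['local', 'local-lvm']
```

If the configuration already looks correct, a hang like this usually points away from storage.cfg itself and toward one backend (here, most likely Ceph) blocking the status query.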

I ran strace on it, but the output doesn't tell me anything:
[...]
ioctl(9, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffd1f56eda0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(9, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
ioctl(11, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffd1f56eda0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(11, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
fcntl(9, F_SETFD, FD_CLOEXEC) = 0
fcntl(11, F_SETFD, FD_CLOEXEC) = 0
pipe([12, 13]) = 0
ioctl(12, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffd1f56eda0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(12, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
ioctl(13, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, 0x7ffd1f56eda0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(13, 0, SEEK_CUR) = -1 ESPIPE (Illegal seek)
fcntl(12, F_SETFD, FD_CLOEXEC) = 0
fcntl(13, F_SETFD, FD_CLOEXEC) = 0
rt_sigprocmask(SIG_SETMASK, ~[RTMIN RT_1], [], 8) = 0
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f8fbfdf29d0) = 6495
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
close(13) = 0
read(12, "", 8192) = 0
close(12) = 0
close(5) = 0
close(8) = 0
close(11) = 0
close(6) = 0
select(16, [7 9], NULL, NULL, {1, 0}) = 1 (in [9], left {0, 998244})
read(9, "2017-07-11 14:21:33.604129 7ff16"..., 4096) = 93
select(16, [7 9], NULL, NULL, {1, 0}) = 0 (Timeout)
select(16, [7 9], NULL, NULL, {1, 0}) = 0 (Timeout)
select(16, [7 9], NULL, NULL, {1, 0}) = 1 (in [9], left {0, 2806})
read(9, "2017-07-11 14:21:36.604112 7ff16"..., 4096) = 140
select(16, [7 9], NULL, NULL, {1, 0}) = 0 (Timeout)
select(16, [7 9], NULL, NULL, {1, 0}) = 1 (in [9], left {0, 717606})
read(9, "2017-07-11 14:21:37.888149 7ff16"..., 4096) = 150
select(16, [7 9], NULL, NULL, {1, 0}) = 0 (Timeout)
select(16, [7 9], NULL, NULL, {1, 0}) = 0 (Timeout)
select(16, [7 9], NULL, NULL, {1, 0}^CProcess 6493 detached

[...]
and so on; it hangs here forever.
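For what it's worth, the trace pattern above (repeated 1-second select() timeouts on a pipe, interleaved with reads that return Ceph-style timestamped log lines) is the syscall shape of a parent process waiting on output from a slow or stuck child. A minimal Python reproduction of that pattern, purely to illustrate what the strace is showing (not Proxmox code):

```python
import os
import select
import subprocess
import sys

# Spawn a child that prints a few lines, and read them through a pipe
# using 1-second select() timeouts -- the same select/read/"(Timeout)"
# sequence visible in the strace above. If the child never finishes,
# the parent loops on select() forever, which is what a hanging
# "pvesm status" looks like.
child = subprocess.Popen(
    [sys.executable, "-u", "-c",
     "import time\n"
     "for i in range(3):\n"
     "    print('log line', i)\n"
     "    time.sleep(0.2)\n"],
    stdout=subprocess.PIPE)

chunks = []
while True:
    ready, _, _ = select.select([child.stdout], [], [], 1.0)
    if not ready:
        continue                      # strace shows this as "(Timeout)"
    data = os.read(child.stdout.fileno(), 4096)
    if not data:                      # EOF: the child has exited
        break
    chunks.append(data)
child.wait()
print(b"".join(chunks).decode(), end="")
```

So the interesting question is what the child process (PID 6495 in the trace) is blocked on; given the Ceph-style timestamps in the reads, a Ceph client call that never returns is a plausible suspect.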

When I try to access the storage summary, I get the following in the logs:
Jul 11 14:27:57 scalpel pveproxy[6271]: proxy detected vanished client connection
Nothing more informative than that.

The network is up and running, the Ceph network is up, I can ping and ssh between all the hosts in both directions, and the other 7 hosts have no problems. The configuration is identical except for the kernel version: the newest host/node runs pve-manager/4.4-15 (the others run 4.4.5 and 4.4.13).

We have several clusters and I have added many hosts so far; I cannot see what I have done wrong this time. :-)
Any help is warmly welcome. Thanks in advance...

Peter
 
