PBS - chunks error after update

vikozo

Active Member
May 4, 2014
684
22
38
suisse
www.wombat.ch
Hello

I did an update on PVE and my PBS, which then needed a reboot, and since then PVE can't connect to PBS.

The error is: .chunks: No such file or directory
(os error 2) (500)

Any idea how to solve it?

have a nice day
vinc
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
5,511
908
163
Please post the full error message, and also check the PBS side for errors in the logs. Maybe your datastore didn't get mounted correctly?
 

fabian

journal, task logs, ..
 

vikozo

The syslog also shows:
Code:
Sep  7 17:19:58 vpbs01 proxmox-backup-api[745]: successful auth for user 'root@pam'
Sep  7 17:19:58 vpbs01 proxmox-backup-proxy[974]: GET /api2/json/admin/datastore/vPBS01-scsi2/status: 400 Bad Request: [client [::ffff:10.18.14.42]:45780] unable to open chunk store 'vPBS01-scsi2' at "/vPBS01-scsi2/.chunks" - No such file or directory (os error 2)
Sep  7 17:19:58 vpbs01 proxmox-backup-api[745]: successful auth for user 'root@pam'
Sep  7 17:19:58 vpbs01 proxmox-backup-proxy[974]: GET /api2/json/admin/datastore/vPBS01-scsi1/status: 400 Bad Request: [client [::ffff:10.18.14.42]:45782] unable to open chunk store 'vPBS01-scsi1' at "/vPBS01-scsi1/.chunks" - No such file or directory (os error 2)
 

vikozo

Code:
# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 22224 10284 ? Ss 17:16 0:00 /sbin/init
root 2 0.0 0.0 0 0 ? S 17:16 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< 17:16 0:00 [rcu_gp]
root 4 0.0 0.0 0 0 ? I< 17:16 0:00 [rcu_par_gp]
root 6 0.0 0.0 0 0 ? I< 17:16 0:00 [kworker/0:0H-kblockd]
root 9 0.0 0.0 0 0 ? I< 17:16 0:00 [mm_percpu_wq]
root 10 0.0 0.0 0 0 ? S 17:16 0:00 [ksoftirqd/0]
root 11 0.0 0.0 0 0 ? I 17:16 0:00 [rcu_sched]
root 12 0.0 0.0 0 0 ? S 17:16 0:00 [migration/0]
root 13 0.0 0.0 0 0 ? S 17:16 0:00 [idle_inject/0]
root 14 0.0 0.0 0 0 ? S 17:16 0:00 [cpuhp/0]
root 15 0.0 0.0 0 0 ? S 17:16 0:00 [cpuhp/1]
root 16 0.0 0.0 0 0 ? S 17:16 0:00 [idle_inject/1]
root 17 0.0 0.0 0 0 ? S 17:16 0:00 [migration/1]
root 18 0.0 0.0 0 0 ? S 17:16 0:00 [ksoftirqd/1]
root 20 0.0 0.0 0 0 ? I< 17:16 0:00 [kworker/1:0H-kblockd]
root 21 0.0 0.0 0 0 ? S 17:16 0:00 [kdevtmpfs]
root 22 0.0 0.0 0 0 ? I< 17:16 0:00 [netns]
root 23 0.0 0.0 0 0 ? S 17:16 0:00 [rcu_tasks_kthre]
root 24 0.0 0.0 0 0 ? S 17:16 0:00 [kauditd]
root 25 0.0 0.0 0 0 ? S 17:16 0:00 [khungtaskd]
root 26 0.0 0.0 0 0 ? S 17:16 0:00 [oom_reaper]
root 27 0.0 0.0 0 0 ? I< 17:16 0:00 [writeback]
root 28 0.0 0.0 0 0 ? S 17:16 0:00 [kcompactd0]
root 29 0.0 0.0 0 0 ? SN 17:16 0:00 [ksmd]
root 30 0.0 0.0 0 0 ? SN 17:16 0:00 [khugepaged]
root 77 0.0 0.0 0 0 ? I< 17:16 0:00 [kintegrityd]
root 78 0.0 0.0 0 0 ? I< 17:16 0:00 [kblockd]
root 79 0.0 0.0 0 0 ? I< 17:16 0:00 [blkcg_punt_bio]
root 80 0.0 0.0 0 0 ? I< 17:16 0:00 [tpm_dev_wq]
root 81 0.0 0.0 0 0 ? I< 17:16 0:00 [ata_sff]
root 82 0.0 0.0 0 0 ? I< 17:16 0:00 [md]
root 83 0.0 0.0 0 0 ? I< 17:16 0:00 [edac-poller]
root 84 0.0 0.0 0 0 ? I< 17:16 0:00 [devfreq_wq]
root 85 0.0 0.0 0 0 ? S 17:16 0:00 [watchdogd]
root 88 0.0 0.0 0 0 ? S 17:16 0:00 [kswapd0]
root 89 0.0 0.0 0 0 ? S 17:16 0:00 [ecryptfs-kthrea]
root 91 0.0 0.0 0 0 ? I< 17:16 0:00 [kthrotld]
root 92 0.0 0.0 0 0 ? I< 17:16 0:00 [acpi_thermal_pm]
root 93 0.0 0.0 0 0 ? I< 17:16 0:00 [nvme-wq]
root 94 0.0 0.0 0 0 ? I< 17:16 0:00 [nvme-reset-wq]
root 95 0.0 0.0 0 0 ? I< 17:16 0:00 [nvme-delete-wq]
root 96 0.0 0.0 0 0 ? S 17:16 0:00 [scsi_eh_0]
root 97 0.0 0.0 0 0 ? I< 17:16 0:00 [scsi_tmf_0]
root 98 0.0 0.0 0 0 ? S 17:16 0:00 [scsi_eh_1]
root 99 0.0 0.0 0 0 ? I< 17:16 0:00 [scsi_tmf_1]
root 101 0.0 0.0 0 0 ? I< 17:16 0:00 [ipv6_addrconf]
root 112 0.0 0.0 0 0 ? I< 17:16 0:00 [kstrp]
root 113 0.0 0.0 0 0 ? I< 17:16 0:00 [kworker/u5:0]
root 126 0.0 0.0 0 0 ? I< 17:16 0:00 [charger_manager]
root 180 0.0 0.0 0 0 ? S 17:16 0:00 [scsi_eh_2]
root 181 0.0 0.0 0 0 ? I< 17:16 0:00 [scsi_tmf_2]
root 182 0.0 0.0 0 0 ? I< 17:16 0:00 [kworker/0:1H-kblockd]
root 196 0.0 0.0 0 0 ? I< 17:16 0:00 [kworker/1:1H-kblockd]
root 228 0.0 0.0 0 0 ? I< 17:16 0:00 [kdmflush]
root 231 0.0 0.0 0 0 ? I< 17:16 0:00 [kdmflush]
root 268 0.0 0.0 0 0 ? S 17:16 0:00 [jbd2/dm-1-8]
root 269 0.0 0.0 0 0 ? I< 17:16 0:00 [ext4-rsv-conver]
root 316 0.0 0.2 43212 8800 ? Ss 17:16 0:01 /lib/systemd/systemd-journald
root 325 0.0 0.0 0 0 ? I< 17:16 0:00 [iscsi_eh]
root 330 0.0 0.0 0 0 ? I< 17:16 0:00 [rpciod]
root 331 0.0 0.0 0 0 ? I< 17:16 0:00 [xprtiod]
root 332 0.0 0.1 23492 5844 ? Ss 17:16 0:00 /lib/systemd/systemd-udevd
root 333 0.0 0.0 0 0 ? I< 17:16 0:00 [ib-comp-wq]
root 334 0.0 0.0 0 0 ? I< 17:16 0:00 [ib-comp-unb-wq]
root 335 0.0 0.0 0 0 ? I< 17:16 0:00 [ib_mcast]
root 336 0.0 0.0 0 0 ? I< 17:16 0:00 [ib_nl_sa_wq]
root 337 0.0 0.0 0 0 ? I< 17:16 0:00 [rdma_cm]
root 338 0.0 0.0 0 0 ? S< 17:16 0:00 [spl_system_task]
root 339 0.0 0.0 0 0 ? S< 17:16 0:00 [spl_delay_taskq]
root 340 0.0 0.0 0 0 ? S< 17:16 0:00 [spl_dynamic_tas]
root 341 0.0 0.0 0 0 ? S< 17:16 0:00 [spl_kmem_cache]
root 407 0.0 0.0 0 0 ? I< 17:16 0:00 [cryptd]
root 452 0.0 0.0 0 0 ? S< 17:16 0:00 [zvol]
root 453 0.0 0.0 0 0 ? S 17:16 0:00 [arc_prune]
root 454 0.0 0.0 0 0 ? SN 17:16 0:00 [zthr_procedure]
root 455 0.0 0.0 0 0 ? SN 17:16 0:00 [zthr_procedure]
root 456 0.0 0.0 0 0 ? S 17:16 0:00 [dbu_evict]
root 457 0.0 0.0 0 0 ? SN 17:16 0:00 [dbuf_evict]
root 460 0.0 0.0 0 0 ? I< 17:16 0:00 [ttm_swap]
root 484 0.0 0.0 0 0 ? SN 17:16 0:00 [z_vdev_file]
root 485 0.0 0.0 0 0 ? S 17:16 0:00 [l2arc_feed]
systemd+ 670 0.0 0.1 93080 6504 ? Ssl 17:16 0:00 /lib/systemd/systemd-timesyncd
_rpc 671 0.0 0.0 6820 3664 ? Ss 17:16 0:00 /sbin/rpcbind -f -w
message+ 679 0.0 0.1 8972 4272 ? Ss 17:16 0:00 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root 680 0.0 0.1 101220 4780 ? Ssl 17:16 0:00 /usr/sbin/zed -F
root 682 0.0 0.1 19520 7292 ? Ss 17:16 0:00 /lib/systemd/systemd-logind
root 683 0.0 0.0 225820 4008 ? Ssl 17:16 0:00 /usr/sbin/rsyslogd -n -iNONE
root 684 0.0 0.1 11916 5416 ? Ss 17:16 0:00 /usr/sbin/smartd -n
root 745 0.3 0.4 162792 16572 ? Ssl 17:16 0:31 /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
root 784 0.0 0.0 6888 296 ? Ss 17:16 0:00 /sbin/iscsid
root 785 0.0 0.1 7392 4956 ? S<Ls 17:16 0:00 /sbin/iscsid
root 792 0.0 0.1 15848 6708 ? Ss 17:16 0:00 /usr/sbin/sshd -D
root 796 0.0 0.0 8500 2756 ? Ss 17:16 0:00 /usr/sbin/cron -f
root 798 0.0 0.0 5608 1496 tty1 Ss+ 17:16 0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
root 969 0.0 0.1 43468 4856 ? Ss 17:16 0:00 /usr/lib/postfix/sbin/master -w
postfix 971 0.0 0.1 43868 7924 ? S 17:16 0:00 qmgr -l -t unix -u
backup 974 0.2 0.6 434092 24532 ? Ssl 17:17 0:24 /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
root 977 0.0 0.3 20752 14004 ? Ss 17:17 0:00 /usr/bin/perl -wT /usr/sbin/munin-node
root 7190 0.0 0.0 0 0 ? I 18:18 0:00 [kworker/1:1-ata_sff]
postfix 10704 0.0 0.1 43816 7784 ? S 18:57 0:00 pickup -l -t unix -u -c
root 14131 0.0 0.0 0 0 ? I 19:33 0:00 [kworker/u4:2-events_power_efficient]
root 14571 0.0 0.1 16716 8004 ? Ss 19:38 0:00 sshd: root@pts/0
root 14588 0.0 0.2 21292 8744 ? Ss 19:39 0:00 /lib/systemd/systemd --user
root 14589 0.0 0.0 23188 2684 ? S 19:39 0:00 (sd-pam)
root 14602 0.0 0.0 0 0 ? I 19:39 0:00 [kworker/1:0-events]
root 14611 0.0 0.0 0 0 ? I 19:39 0:00 [kworker/0:1-events]
root 14617 0.0 0.1 7904 4744 pts/0 Ss 19:39 0:00 -bash
root 15886 0.0 0.0 0 0 ? I 19:45 0:00 [kworker/0:0-events]
root 16351 0.0 0.0 0 0 ? I 19:51 0:00 [kworker/u4:0-events_unbound]
root 16804 0.0 0.0 0 0 ? I 19:56 0:00 [kworker/u4:1-events_unbound]
root 16808 0.0 0.0 10912 3228 pts/0 R+ 19:57 0:00 ps -aux
 

vikozo

Can I provide any other info? If so, what, and where do I find it?

On the PVE side the error is:
TASK ERROR: could not get storage information for 'vPBS01-scsi1': proxmox-backup-client failed: Error: unable to open chunk store 'vPBS01-scsi1' at "/vPBS01-scsi1/.chunks" - No such file or directory (os error 2)
 

fabian

That sounds like something is not mounted that should be. Check your storage config and mount points.
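For example (paths and datastore names taken from this thread; adjust to your setup), the mount situation could be checked on the PBS side with something like:

```shell
# Where does PBS expect each datastore's data? (look at the path= lines)
cat /etc/proxmox-backup/datastore.cfg

# Is anything actually mounted at that path?
findmnt /vPBS01-scsi1

# The .chunks directory only exists inside a properly mounted datastore
ls -ld /vPBS01-scsi1/.chunks
```

If findmnt prints nothing and .chunks is missing, the path is just an empty directory on the root filesystem and the datastore disk was not mounted.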
 

vikozo

Hallo @fabian

Code:
# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pbs/root / ext4 errors=remount-ro 0 1
/dev/pbs/swap none swap sw 0 0
proc /proc proc defaults 0 0
-
# findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/pbs-root ext4 rw,relatime,errors=remount-ro
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup tmpfs tmpfs ro,nosuid,nodev,noexec,mode=755
│ │ ├─/sys/fs/cgroup/unified cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate
│ │ ├─/sys/fs/cgroup/systemd cgroup cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd
│ │ ├─/sys/fs/cgroup/cpuset cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpuset
│ │ ├─/sys/fs/cgroup/rdma cgroup cgroup rw,nosuid,nodev,noexec,relatime,rdma
│ │ ├─/sys/fs/cgroup/cpu,cpuacct cgroup cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct
│ │ ├─/sys/fs/cgroup/perf_event cgroup cgroup rw,nosuid,nodev,noexec,relatime,perf_event
│ │ ├─/sys/fs/cgroup/net_cls,net_prio cgroup cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio
│ │ ├─/sys/fs/cgroup/freezer cgroup cgroup rw,nosuid,nodev,noexec,relatime,freezer
│ │ ├─/sys/fs/cgroup/blkio cgroup cgroup rw,nosuid,nodev,noexec,relatime,blkio
│ │ ├─/sys/fs/cgroup/pids cgroup cgroup rw,nosuid,nodev,noexec,relatime,pids
│ │ ├─/sys/fs/cgroup/hugetlb cgroup cgroup rw,nosuid,nodev,noexec,relatime,hugetlb
│ │ ├─/sys/fs/cgroup/memory cgroup cgroup rw,nosuid,nodev,noexec,relatime,memory
│ │ └─/sys/fs/cgroup/devices cgroup cgroup rw,nosuid,nodev,noexec,relatime,devices
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf none bpf rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/kernel/debug debugfs debugfs rw,relatime
│ ├─/sys/fs/fuse/connections fusectl fusectl rw,relatime
│ └─/sys/kernel/config configfs configfs rw,relatime
├─/proc proc proc rw,relatime
│ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=28,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=1714
├─/dev udev devtmpfs rw,nosuid,relatime,size=1991404k,nr_inodes=497851,mode=755
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev
│ ├─/dev/mqueue mqueue mqueue rw,relatime
│ └─/dev/hugepages hugetlbfs hugetlbfs rw,relatime,pagesize=2M
└─/run tmpfs tmpfs rw,nosuid,noexec,relatime,size=403056k,mode=755
  ├─/run/lock tmpfs tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k
  ├─/run/rpc_pipefs sunrpc rpc_pipefs rw,relatime
  └─/run/user/0 tmpfs tmpfs rw,nosuid,nodev,relatime,size=403052k,mode=700
 

fabian

Yeah, sounds like your big disks are not mounted anywhere. How did you configure the datastore?
 

vikozo

@fabian

My PBS is a VM on my PVE, so a vPBS!
It started as a basic installation with scsi0 and was up and running.
I added scsi1 in PVE.
In vPBS I found the disk and created a datastore: /datastore/vPBS01-scsi1/
In PVE I added the storage for backups, and it worked like this for 2-3 weeks.
10 days later I added a 3rd disk, scsi2, which also worked fine.

I think I will add a 4th disk with 5 GB just to find out what to add in /etc/fstab,
or maybe you know what the missing information looks like ;-)
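For reference (a hedged side note, with datastore/pool names assumed from this thread): if those disks were set up as ZFS when the datastore was created, they would never show up in /etc/fstab at all, since ZFS mounts its datasets itself. A way to compare what PBS has configured against what ZFS actually provides:

```shell
# Datastores PBS knows about, with their backing paths
proxmox-backup-manager datastore list

# ZFS pools and datasets, and where ZFS mounts them
zpool list
zfs list -o name,mountpoint,mounted
```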

have a nice day
vinc
 

vikozo

hello @fabian

I added a small disk and loaded it.
It is visible to PBS and set up there as a datastore.

A
findmnt
will show it at the end:
- /vPBS01-scsi3 vPBS01-scsi3 zfs rw,xattr,noacl

but there is nothing in /etc/fstab,
so where are the disks mounted in PBS?
---
Also, running a backup to this datastore works well, without problems.
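(ZFS datasets are not mounted via /etc/fstab; each dataset carries its own mountpoint property, and the ZFS services mount it at boot. A way to see this for the new disk, with the dataset name taken from the findmnt line above:)

```shell
# The mountpoint is a property of the dataset, not an fstab entry
zfs get mountpoint,mounted vPBS01-scsi3
```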
 

fabian

could you post the output of systemctl status of the PBS VM?
 

vikozo

Code:
# systemctl status
● vpbs01
State: degraded
Jobs: 0 queued
Failed: 1 units
Since: Tue 2020-09-08 17:05:05 CEST; 2 days ago
CGroup: /
├─user.slice
│ └─user-0.slice
│   ├─session-631.scope
│   │ ├─23289 sshd: root@pts/0
│   │ ├─23326 -bash
│   │ ├─24112 systemctl status
│   │ └─24113 pager
│   └─user@0.service
│     └─init.scope
│       ├─23302 /lib/systemd/systemd --user
│       └─23303 (sd-pam)
├─init.scope
│ └─1 /sbin/init
└─system.slice
  ├─systemd-udevd.service
  │ └─331 /lib/systemd/systemd-udevd
  ├─cron.service
  │ └─844 /usr/sbin/cron -f
  ├─munin-node.service
  │ └─1003 /usr/bin/perl -wT /usr/sbin/munin-node
  ├─proxmox-backup.service
  │ └─770 /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-api
  ├─systemd-journald.service
  │ └─307 /lib/systemd/systemd-journald
  ├─ssh.service
  │ └─841 /usr/sbin/sshd -D
  ├─proxmox-backup-proxy.service
  │ └─915 /usr/lib/x86_64-linux-gnu/proxmox-backup/proxmox-backup-proxy
  ├─rsyslog.service
  │ └─698 /usr/sbin/rsyslogd -n -iNONE
  ├─rpcbind.service
  │ └─694 /sbin/rpcbind -f -w
  ├─system-postfix.slice
  │ └─postfix@-.service
  │   ├─ 982 /usr/lib/postfix/sbin/master -w
  │   ├─ 984 qmgr -l -t unix -u
  │   └─15905 pickup -l -t unix -u -c
  ├─smartmontools.service
  │ └─707 /usr/sbin/smartd -n
  ├─iscsid.service
  │ ├─833 /sbin/iscsid
  │ └─834 /sbin/iscsid
  ├─zfs-zed.service
  │ └─702 /usr/sbin/zed -F
  ├─dbus.service
  │ └─701 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
  ├─systemd-timesyncd.service
  │ └─693 /lib/systemd/systemd-timesyncd
  ├─system-getty.slice
  │ └─getty@tty1.service
  │   └─847 /sbin/agetty -o -p -- \u --noclear tty1 linux
  └─systemd-logind.service
    └─700 /lib/systemd/systemd-logind
 


vikozo

I found out: systemctl shows an error:
● zfs-import-cache.service loaded failed failed Import ZFS pools by cache file
After running:
Code:
# systemctl start zfs-import-cache.service
I got:
Code:
# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2020-09-10 17:15:38 CEST; 1min 59s ago
     Docs: man:zpool(8)
  Process: 24563 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=0/SUCCESS)
Main PID: 24563 (code=exited, status=0/SUCCESS)

Sep 10 17:15:38 vpbs01 systemd[1]: Starting Import ZFS pools by cache file...
Sep 10 17:15:38 vpbs01 zpool[24563]: no pools available to import
Sep 10 17:15:38 vpbs01 systemd[1]: Started Import ZFS pools by cache file.

But after a reboot, the same failure is back:
Code:
# systemctl status zfs-import-cache.service
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-09-10 17:19:01 CEST; 3min 36s ago
Docs: man:zpool(8)
Process: 513 ExecStart=/sbin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
Main PID: 513 (code=exited, status=1/FAILURE)

Sep 10 17:19:01 vpbs01 systemd[1]: Starting Import ZFS pools by cache file...
Sep 10 17:19:01 vpbs01 zpool[513]: cannot import 'vPBS01-scsi3': one or more devices is currently unavailable
Sep 10 17:19:01 vpbs01 systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 17:19:01 vpbs01 systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
Sep 10 17:19:01 vpbs01 systemd[1]: Failed to start Import ZFS pools by cache file.

The small disk from post #14 is now also not working after the reboot.
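For anyone hitting the same thing, a hedged sketch (pool name taken from the error above): when zfs-import-cache.service fails at boot with "one or more devices is currently unavailable" although the pool can be imported manually later, a stale /etc/zfs/zpool.cache, or pool devices not being ready yet when the unit runs, is a common cause. Things that could be tried:

```shell
# Import the pool by hand, then rewrite the cache file so that the
# boot-time import sees the current device paths
zpool import vPBS01-scsi3
zpool set cachefile=/etc/zfs/zpool.cache vPBS01-scsi3

# Or fall back to scanning all devices for pools at boot
systemctl enable zfs-import-scan.service
```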
 

fabian

Could you also post the version of PBS running in the VM? What FS did you use for the datastore? Did you create it via the GUI or manually?
 
