Everything works fine, but when shutting down I see I/O errors (see attached screenshot):
print_req_error: I/O error, dev sda (sdb) sector ...
sda and sdb are in rpool:
root@telemachus:~# zpool status rpool
pool: rpool
state: ONLINE
scan: none requested
config:
NAME...
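If these are real disk errors rather than shutdown-ordering noise, something like this could narrow it down (a sketch; smartctl comes from the smartmontools package, which may need to be installed first):

# ls -l /dev/disk/by-id | grep -E 'sda|sdb'    # map the kernel names back to the pool members
# smartctl -a /dev/sda | grep -i -A2 error     # drive's own error counters
# smartctl -a /dev/sdb | grep -i -A2 error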
Thank you! I was missing some build dependencies; one of them was libssl1.0-dev. Now I have a new hpn-ssh:
$ ./ssh -V
OpenSSH_7.4p1-hpn14v12, OpenSSL 1.0.2l 25 May 2017
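For anyone hitting the same build failures, the missing dependencies can be pulled in along these lines (a sketch; apt-get build-dep assumes deb-src entries are enabled in sources.list):

# apt-get build-dep openssh        # installs the declared build dependencies
# apt-get install libssl1.0-dev    # the one that was missing here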
Best regards,
Borut
I need OpenSSH with the HPN patch. I downloaded the source package openssh (1:7.4p1-10+deb9u4) from https://packages.debian.org/source/stretch/openssh, the same version that is already installed:
$ uname -a
Linux xxxx 4.15.18-7-pve #1 SMP PVE 4.15.18-27 (Wed, 10 Oct 2018 10:50:11 +0200) x86_64...
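The plan for the build is roughly this (a sketch; the HPN patch file name below is a hypothetical placeholder, use whatever revision matches 7.4p1):

# apt-get source openssh                         # fetches openssh_7.4p1-10+deb9u4
# cd openssh-7.4p1
# patch -p1 < ../openssh-7_4_P1-hpn-14.12.diff   # hypothetical patch file name
# dpkg-buildpackage -us -uc -b                   # unsigned binary-only build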
KernelCare is more like kernel-uncare. I installed it almost a month ago and it is still not working. Support is bad and very slow. They were missing the last five PVE kernels and needed more than a week to fix this. Now they are still looking into why my kernel is not updated automatically... and this in 30...
It looks like FUSE couldn't be loaded because of ZFS on root, but I am not sure...
# dpkg-query -l 'zfs*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name...
I would like to use LTFS on an HPE StoreEver LTO-7 Ultrium drive under PVE (kernel 4.15.18), but first I need the FUSE kernel module present and loaded:
# apt-get update
# apt-get install fuse
Reading package lists... Done
Building dependency tree
Reading state information... Done
fuse is already the...
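To check that the module itself is present and loadable, independent of the fuse package (a sketch):

# modprobe fuse                  # load the module if it isn't already
# lsmod | grep fuse              # confirm it is loaded
# grep fuse /proc/filesystems    # confirm the kernel registered the filesystem type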
Creating rpool is the default action; without "zfs set mountpoint=none rpool" there is a bug with this result:
May 25 08:33:17 starspot zfs[125987]: cannot mount '/rpool': directory is not empty
May 25 08:33:17 starspot systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
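The workaround is to stop ZFS from trying to mount the pool root at all (the zfs set command is the actual fix; restarting the unit instead of rebooting is an assumption):

# zfs set mountpoint=none rpool          # don't mount the top-level dataset
# systemctl restart zfs-mount.service    # or simply reboot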
Thank you! I set the mountpoint to none for rpool and rebooted:
root@starspot:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: active (exited) since Wed 2018-05-30...
I am wondering whether nobody has installed PVE on root ZFS. I didn't make any changes to PVE... I just installed and upgraded it. Maybe the installation process created the mount point /rpool and then later ZFS tried to create it again.
I don't see how I can empty /rpool...
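If someone does want to clear it out, something along these lines might work, though I haven't verified it (the rm is only safe once findmnt shows nothing mounted there):

# findmnt /rpool                         # make sure nothing is mounted at /rpool
# rm -rf /rpool/*                        # DANGEROUS if a filesystem is mounted there
# systemctl restart zfs-mount.service    # retry the mounts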
You said /dev/random is not permitted in unprivileged containers.
After the restore there are no random and urandom device nodes in /wpool/cts/subvol-1001-disk-1/var/spool/postfix/dev/.
So everything is fine... no need to do anything. Thank you!
Best regards,
Borut
After a backup through the web GUI in stop mode I couldn't restore the CT:
# pct restore 1001 /var/lib/vz/dump/vzdump-lxc-102-2018_05_28-13_57_32.tar -ignore-unpack-errors 1 -unprivileged
400 Parameter verification failed.
storage: storage 'local' does not support container directories
pct restore <vmid>...
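Pointing the restore at a storage that supports container directories should avoid the 'local' error (a sketch; 'wpool-cts' is a hypothetical storage ID, and -unprivileged is given an explicit boolean):

# pct restore 1001 /var/lib/vz/dump/vzdump-lxc-102-2018_05_28-13_57_32.tar \
    -storage wpool-cts -ignore-unpack-errors 1 -unprivileged 1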
O.K., so this is a bug then! When adding a new storage, selecting wpool/cts is offered as a valid choice, which later produces:
TASK ERROR: cannot open directory //wpool: No such file or directory
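For reference, the storage definition in /etc/pve/storage.cfg should look roughly like this (a sketch; 'wpool-cts' is a hypothetical ID):

zfspool: wpool-cts
        pool wpool/cts
        content rootdir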
During the upgrade I found:
zfs-import-scan.service is a disabled or a static unit, not starting it.
Job for zfs-mount.service failed because the control process exited with error code.
See "systemctl status zfs-mount.service" and "journalctl -xe" for details.
zfs-mount.service couldn't start...
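The checks the message suggests look like this (a sketch; -u scopes journalctl to the single unit, -b to the current boot):

# systemctl status zfs-mount.service
# journalctl -u zfs-mount.service -b     # this boot's log for the failing unit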