Search results

  1. Shutdown print_req_error: I/O error

    Everything works fine, but when executing shutdown I see I/O errors (see attached screenshot): print_req_error: I/O error, dev sda (sdb) sector ... sda and sdb are in rpool. root@telemachus:~# zpool status rpool pool: rpool state: ONLINE scan: none requested config: NAME... (disk-check sketch after the list)
  2. OpenSSH HPN

    Thank you. I was missing some build-depends. One of them was libssl1.0-dev. Now I have a new hpn-ssh: $ ./ssh -V OpenSSH_7.4p1-hpn14v12, OpenSSL 1.0.2l 25 May 2017 Best regards, Borut
  3. OpenSSH HPN

    I need OpenSSH with the HPN patch. I downloaded the source package openssh (1:7.4p1-10+deb9u4) from https://packages.debian.org/source/stretch/openssh, the same version that is already installed: $ uname -a Linux xxxx 4.15.18-7-pve #1 SMP PVE 4.15.18-27 (Wed, 10 Oct 2018 10:50:11 +0200) x86_64... (build sketch after the list)
  4. Kernel upgrade without reboot

    KernelCare is more like kernel-uncare. I installed it almost a month ago and it is still not working. Support is bad and very slow. They were missing the last five PVE kernels. They needed more than a week to fix this. Now they are still looking into why my kernel is not updated automatically...and this in 30...
  5. Installation for HPE LTFS 3.2.0 (Linux), FUSE

    It looks like FUSE couldn't be loaded because of ZFS on root, but I am not sure... # dpkg-query -l 'zfs*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name...
  6. Installation for HPE LTFS 3.2.0 (Linux), FUSE

    I would like to use LTFS on an HPE StoreEver LTO-7 Ultrium drive (PVE, kernel 4.15.18), but first I need to have the FUSE kernel module present and loaded: # apt-get update # apt-get install fuse Reading package lists... Done Building dependency tree Reading state information... Done fuse is already the... (module-check sketch after the list)
  7. Upgrade PVE 5.1-41 to 5.2-1 Failed to start Mount ZFS

    Creating rpool is the default action; without "zfs set mountpoint=none rpool" there is a bug with this result: May 25 08:33:17 starspot zfs[125987]: cannot mount '/rpool': directory is not empty May 25 08:33:17 starspot systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE (fix sketched after the list)
  8. Upgrade PVE 5.1-41 to 5.2-1 Failed to start Mount ZFS

    Thank you! I set the mountpoint to none for rpool and rebooted: root@starspot:~# systemctl status zfs-mount.service ● zfs-mount.service - Mount ZFS filesystems Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled) Active: active (exited) since Wed 2018-05-30...
  9. Upgrade PVE 5.1-41 to 5.2-1 Failed to start Mount ZFS

    I am wondering whether nobody else has installed PVE on root ZFS. I didn't make any changes to PVE... I just installed and upgraded it. Maybe the installation process created a mount point /rpool and then later ZFS tried to create it again. I don't see how I can empty /rpool...
  10. Convert privileged to unprivileged container

    You said /dev/random is not permitted in unprivileged containers. After the restore there are no random or urandom nodes in /wpool/cts/subvol-1001-disk-1/var/spool/postfix/dev/, so everything is perfect... No need to do anything. Thank you! Best regards, Borut
  11. Convert privileged to unprivileged container

    I add "-storage cts" and got: # pct restore 1001 /var/lib/vz/dump/vzdump-lxc-102-2018_05_28-13_57_32.tar -ignore-unpack-errors 1 -unprivileged -storage cts extracting archive '/var/lib/vz/dump/vzdump-lxc-102-2018_05_28-13_57_32.tar' tar: ./var/spool/postfix/dev/urandom: Cannot mknod: Operation...
  12. Convert privileged to unprivileged container

    After a backup through the web GUI in stop mode I couldn't restore the CT: # pct restore 1001 /var/lib/vz/dump/vzdump-lxc-102-2018_05_28-13_57_32.tar -ignore-unpack-errors 1 -unprivileged 400 Parameter verification failed. storage: storage 'local' does not support container directories pct restore <vmid>...
  13. Creating CT on ZFS storage failed!

    I didn't change the default mount path and I didn't set the mountpoint /wpool! Now everything is O.K. Thank you.
  14. Creating CT on ZFS storage failed!

    Does that mean I shouldn't create ZFS datasets myself (because PVE will not know about them)?
  15. Creating CT on ZFS storage failed!

    What is the solution if I would like to keep CTs on wpool/ct or wpool/cts? (storage sketch after the list)
  16. Creating CT on ZFS storage failed!

    O.K., so this is a bug then! When adding a new storage, selecting wpool/cts is offered as a valid choice, which later produces: TASK ERROR: cannot open directory //wpool: No such file or directory
  17. Upgrade PVE 5.1-41 to 5.2-1 Failed to start Mount ZFS

    During the upgrade I found: zfs-import-scan.service is a disabled or a static unit, not starting it. Job for zfs-mount.service failed because the control process exited with error code. See "systemctl status zfs-mount.service" and "journalctl -xe" for details. zfs-mount.service couldn't start...
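
For result 1: a quick way to check whether real disk trouble sits behind the shutdown-time print_req_error messages. A minimal sketch, assuming smartmontools is installed and using the pool and device names from the excerpt (rpool, sda, sdb):

    # pool-level view of known errors, then SMART health of each member disk
    zpool status -v rpool
    smartctl -H /dev/sda
    smartctl -H /dev/sdb

If the pool stays ONLINE and SMART reports PASSED, the messages are plausibly shutdown-ordering noise rather than failing hardware.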
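
For results 2 and 3: a sketch of the usual Debian source rebuild, assuming deb-src entries for stretch are enabled; the HPN patch filename is illustrative, not taken from the thread:

    # build tooling plus the package's build dependencies
    # (libssl1.0-dev, the dependency result 2 was missing, comes in here)
    apt-get install build-essential fakeroot devscripts
    apt-get build-dep openssh
    # fetch the exact source version, apply the HPN patch, rebuild
    apt-get source openssh=1:7.4p1-10+deb9u4
    cd openssh-7.4p1
    patch -p1 < ../openssh-7_4p1-hpn-14v12.diff   # illustrative filename
    dpkg-buildpackage -us -uc -b
    # install the resulting packages and verify, as in result 2
    dpkg -i ../openssh-client_*.deb ../openssh-server_*.deb
    ssh -V   # expected: OpenSSH_7.4p1-hpn14v12, OpenSSL 1.0.2l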
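
For results 5 and 6: the fuse package being installed does not mean the kernel module is loaded. A minimal check before running the HPE LTFS installer, using stock Debian paths:

    # load the module and confirm the kernel picked it up
    modprobe fuse
    lsmod | grep fuse
    # the character device LTFS mounts through
    ls -l /dev/fuse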
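
For results 7, 8, and 17: the fix the thread converges on, gathered in one place. It stops zfs-mount.service from trying to mount the pool's root dataset over a non-empty /rpool:

    # keep the root dataset itself unmounted
    zfs set mountpoint=none rpool
    zfs get mountpoint rpool
    # confirm the unit now starts cleanly (a reboot gives the same answer)
    systemctl restart zfs-mount.service
    systemctl status zfs-mount.service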
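
For results 10-12: the full restore command assembled from the excerpts. The target must be a storage that supports container directories ('cts' here; result 12 shows why 'local' fails), and -ignore-unpack-errors 1 skips the mknod failures for device nodes an unprivileged container may not create:

    pct restore 1001 /var/lib/vz/dump/vzdump-lxc-102-2018_05_28-13_57_32.tar \
        -ignore-unpack-errors 1 -unprivileged -storage cts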
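
For results 13-16: one way to keep CTs on wpool/cts without the '//wpool' error, assuming the missing piece was the dataset's mountpoint; the pool and storage names are taken from the thread, the rest is a sketch:

    # give the dataset an explicit mountpoint before pointing PVE at it
    zfs create -o mountpoint=/wpool/cts wpool/cts
    # register it as a ZFS storage so PVE manages subvolumes under it
    pvesm add zfspool cts --pool wpool/cts
    pvesm status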
