Search results

  1. Feature Request : Creating & Managing datasets

    But ZFS Dataset management by the PVE GUI would be really great.
  2. Docker in LXC problem after PVE kernel update.

    Of course, if it were finally pure native ZFS, that would be best.
  3. Docker in LXC problem after PVE kernel update.

    /dev/null | grep 'Storage Driver' Storage Driver: vfs >> which is based on the ZFS subvolumes. There is a difference between a ZFS dataset and a subvolume.
  4. Docker in LXC problem after PVE kernel update.

    1. The answer is ZFS subvolumes >>> an LXC container doesn't have access to ZFS itself (like zfs list or zpool), but it gets the dataset I want to give it. 2. I'm looking forward to this. I believe we can use it with a permanent snapshot (zfs send) concept for LXC ZFS datasets on different nodes far away...
  5. Docker in LXC problem after PVE kernel update.

    1. ZFS as a subvol with block file format spares you the Docker overlay2 file-system stuff. ;-) 2. Very easy with the LXC mount point (MP) concept of PVE, no comparison to the VM stuff. 2.1 PVE creates, along with the MP, a ZFS subvolume of the same name (ID). We are looking forward to testing it in a cluster in 2020...
  6. Docker in LXC problem after PVE kernel update.

    No way. Docker and ZFS with LXC: is it worth going down this fragile path until it becomes rock solid?
  7. Docker in LXC problem after PVE kernel update.

    Solution: apt-get install lxc-pve=3.1.0-61, then apt-mark hold lxc-pve >>> bug solved: apt-mark unhold lxc-pve
  8. Docker in LXC problem after PVE kernel update.

    Not yet. Wait for version lxc-pve 3.1.0-65; maybe it will fix it.
  9. Docker in LXC problem after PVE kernel update.

    Hi, my experience is that since lxc-pve 3.1.0-62 the LXC nested mode is not working. If you want nested LXC with Docker, you need to go back to lxc-pve 3.1.0-61 and wait for a bug fix in lxc-pve. Solution: apt-get install lxc-pve=3.1.0-61, then apt-mark hold lxc-pve >>> bug solved: apt-mark unhold... (these commands are collected in the shell sketch after this list)
  10. PVE 6 Bug with Active Directory

    OK, I get the problem. The SSL stuff is not working: with port 389 it works fine, but with SSL on port 636 it does not. PVE 5.4-11 works with SSL on port 636.
  11. PVE 6 Bug with Active Directory

    Here is the config for the AD connection. Sorry, I took out the original name of the server for security reasons.
  12. PVE 6 Bug with Active Directory

    Same configuration but different behavior. Output from journalctl -f: PVE 5.4.11: pvedaemon[21175]: <root@pam> successful auth for user 'xxxx@xxx.de'. PVE 6.0.4: pvedaemon[30092]: authentication failure; host=192.xx.xxx.xxx user=xxx@xxx msg=no such user ('xxx@xxx')
  13. PVE 6 Bug with Active Directory

    On PVE 6.0.4 the Active Directory connection stopped working. With the same settings it works fine in PVE 5.4.11; on PVE 6.0.4, login with Active Directory accounts is not possible. Where can I check this issue in the CLI? (A diagnostic sketch follows after this list.) Best, Tim.
  14. Proxmox VE 6.0 released!

    On PVE 6 the Active Directory connection stopped working. With the same settings it works fine in PVE 5.4.11; on PVE 6.0.4, login with AD accounts is not possible. Where can I check this issue in the CLI?
  15. Proxmox VE 5.4 released!

    Thanks to the Proxmox team!!! Questions: Is there a plan for improved usability of the ZFS storage plugin? We are waiting for an option for ZFS dataset creation and snapshot management in this nice GUI plugin (the CLI equivalent is sketched after this list). Is this maybe part of Proxmox VE version 6.0, incl. ZFS 0.8 with trim and...
  16. PCI passthrough count maximum?

    Hello, the configuration in PCI.pm has changed in PVE 5.3. >>> /usr/share/perl5/PVE/QemuServer/PCI.pm PCI.pm >> PVE 5.3: my $devices = { vga => { bus => 'pcie.0', addr => 1 }, hostpci0 => { bus => "ich9-pcie-port-1", addr => 0 }, hostpci1 => { bus =>...
  17. VM hangs during boot if passthrough device is Nvidia Tesla K80

    Was the problem solved? args: -machine pc,max-ram-below-4g=2G We are also looking for a solution for the Tesla K80. We transformed an HPC workstation with 4x Tesla K80 (8x K40) into a Proxmox hypervisor but still need to use the Tesla K80 inside Win 10 Pro. With OVMF we get just half a K80...
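
The downgrade-and-hold workaround from results 7 and 9, collected as a small shell sketch. The package name and version numbers come straight from the posts; only the comments are added, and the assumption is simply that the hold is released once a fixed lxc-pve package is available.

    # Downgrade to the last lxc-pve version where nested LXC + Docker still works
    apt-get install lxc-pve=3.1.0-61

    # Hold the package so a routine upgrade does not pull the broken 3.1.0-62 back in
    apt-mark hold lxc-pve

    # Once a fixed lxc-pve is released ("bug solved"), release the hold again
    apt-mark unhold lxc-pve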
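A minimal diagnostic sketch for the PVE 6 Active Directory / LDAPS issue in results 10 to 14. journalctl -f is taken from the posts themselves; the pvedaemon unit name and the openssl s_client check are assumptions about how one might inspect the failing port-636 connection, and ad.example.com is a placeholder, not the real server.

    # Follow the authentication log while attempting an AD login in the GUI
    journalctl -f -u pvedaemon

    # Check whether the LDAPS endpoint on port 636 answers and presents its
    # certificate chain (ad.example.com is a hypothetical domain controller)
    openssl s_client -connect ad.example.com:636 -showcerts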
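The ZFS dataset creation and snapshot management requested for the GUI in results 1 and 15 corresponds roughly to the following CLI operations. Pool, dataset, and host names here are hypothetical examples, not taken from the posts.

    # Create a dataset under an existing pool
    zfs create rpool/data/backup

    # Take a snapshot and list the snapshots of that dataset
    zfs snapshot rpool/data/backup@before-upgrade
    zfs list -t snapshot -r rpool/data/backup

    # Replicate a snapshot to another node, in the spirit of the zfs send idea in result 4
    zfs send rpool/data/backup@before-upgrade | ssh other-node zfs receive tank/backup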