Search results

  1. LXC running docker fails to start after upgrade from 6.4 to 7.0

    I agree, setting "systemd.unified_cgroup_hierarchy=0" causes this same error for me on an LXC container with docker running Turnkey Core.
  2. LXC running docker fails to start after upgrade from 6.4 to 7.0

    I was doing a test upgrade today on one server, following the upgrade guide. I had a number of Turnkey Core 16.0 LXCs running docker, and after the upgrade these failed to start docker. I eventually chased the first issue down to "overlay" not being available, so I added that to...
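The overlay fix described above can be sketched as follows. This is only a sketch under assumptions not stated in the thread: it presumes a privileged container whose kernel modules come from the PVE host, so the module is loaded host-side.

```shell
# Load the overlay kernel module on the PVE host now (assumption:
# privileged container sharing the host kernel)
modprobe overlay

# Make it persistent across reboots
echo overlay >> /etc/modules

# Verify the module is loaded
lsmod | grep overlay
```

After loading the module, restarting docker inside the affected container should get past the missing-overlayfs error, per the reports above.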
  3. Proxmox VE 7.0 released!

    Adding overlay fixed the first issue; now it looks like it's a cgroups issue:
    WARN[2021-07-06T13:53:23.416710405-05:00] Unable to find cpu cgroup in mounts
    WARN[2021-07-06T13:53:23.416730635-05:00] Unable to find blkio cgroup in mounts
    WARN[2021-07-06T13:53:23.416746328-05:00] Unable to find...
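The cgroup warnings above come from Proxmox VE 7 defaulting to the unified cgroup v2 hierarchy, which older docker versions inside containers cannot use. One commonly suggested workaround is booting the host with the legacy hierarchy; note that the first result in this list reports it did not help in their case, so treat it as something to try, not a guaranteed fix. Which file to edit depends on whether the host boots via GRUB or systemd-boot:

```shell
# GRUB-booted hosts: add the flag to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, for example:
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
# then apply and reboot:
#   update-grub && reboot

# systemd-boot hosts: append the same flag to /etc/kernel/cmdline,
# then apply and reboot:
#   proxmox-boot-tool refresh && reboot
```

The longer-term fix is updating the container's userland (systemd and docker) to versions that support cgroup v2.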
  4. Proxmox VE 7.0 released!

    I have a number of containers that I created on PVE 6.4 using turnkey-core 16.0 with docker (nesting enabled, privileged). However, after upgrading to VE 7, docker won't start in these containers. Looking at the docker logs inside the container, it looks like "overlayfs" was missing, so I added...
  5. Proxmox Backup Server backup

    OK, so now I have a PBS installed and backing up my PVE VMs. What steps do I need to take to back up the PBS itself? If the server fails (the Linux OS drive) but the backup drive is intact, can I recover? This is the case where you have a single backup data source and no tape backup or second server. Bob
  6. Native VLAN + tagged VLAN

    Follow-up - yeah, it was a switch issue. Rather, it was a firewall rule issue that wasn't allowing traffic between vlan25 and vlan15. Measure twice, cut once!
  7. Native VLAN + tagged VLAN

    I'm trying to convert over from a native vlan to a native vlan + tagged vlan and I can't seem to make it work. Right now my /etc/network/interfaces looks like this:
    auto lo
    iface lo inet loopback
    auto eno1
    iface eno1 inet manual
    auto eno2
    iface eno2 inet manual
    auto bond0
    iface bond0 inet...
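A native + tagged VLAN setup like the one being attempted above is usually done in Proxmox with a VLAN-aware bridge on top of the bond. This is only a sketch of such a config: the interface names (eno1, eno2, bond0, vmbr0), bond mode, VLAN ID range, and addresses are assumptions, not values taken from the thread.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    # management IP rides the native (untagged) VLAN of the trunk port
    address 192.168.15.2/24
    gateway 192.168.15.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    # allow guests to attach with their own VLAN tags
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With this layout, guests get a tag set per-NIC in their VM/CT config, while untagged traffic stays on the native VLAN; the switch port must be configured as a trunk with the matching native VLAN.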
  8. New 5019D-FTN4 - storage configuration advice

    Looking for advice on how to move my current odd collection of services, hosted across multiple Raspberry Pi 4s (4GB) and an old MacBook Air (4GB), over to a new Proxmox server I have on the way. It's a Supermicro 5019D-FTN4 with 64GB RAM, 4x 1TB SSD, and a 1TB NVMe drive. I'm thinking of setting up the SSDs...