Search results

  1. Move of TPM State disk fails

    That did the trick, thanks!
  2. Move of TPM State disk fails

    I'm trying to move a Win11 VM's storage from an NFS share back to a local (ZFS) store. I moved the system disk and EFI disks, but when I try to move the TPM disk it fails with this error: create full clone of drive tpmstate0 (vmstore-NFS:500/vm-500-disk-0.qcow2) transferred 0.0 B of 4.0 MiB...
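    If the fix from the thread isn't visible above, one common approach is to retry the move from the CLI with the guest powered off. A hedged sketch — the target storage ID `local-zfs` and the need to shut the VM down first are assumptions, not confirmed by the thread:

    ```shell
    # Hypothetical sketch: move VM 500's TPM state volume to an assumed
    # local ZFS storage ("local-zfs") with the guest powered off, and
    # delete the source copy on the NFS store afterwards
    qm shutdown 500
    qm move-disk 500 tpmstate0 local-zfs --delete 1
    ```

    `qm move-disk` performs the same full-clone operation as the GUI, so if the GUI move fails while the VM is running, retrying offline is a reasonable first step.
    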
  3. Updating node IPs after 10G addition

    Thanks, I didn't know that option existed. I'd actually ended up switching the IP of one of the two nodes using the procedure in this post:
  4. Updating node IPs after 10G addition

    I have a 2-node cluster that I originally set up with a 1G network (192.168.15) and I've added 10G cards to the servers. I've migrated the back-end NFS storage over to the 10G network (192.168.10) and that seems to be working well. However, I think there is still traffic (for instance live...
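    Proxmox lets you pin live-migration traffic to a specific network in `/etc/pve/datacenter.cfg`. A minimal sketch, assuming the 10G subnet is 192.168.10.0/24:

    ```
    # /etc/pve/datacenter.cfg -- route live-migration traffic over the 10G link
    migration: secure,network=192.168.10.0/24
    ```

    Cluster (corosync) traffic is configured separately, so this only redirects migration data, not cluster heartbeats.
    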
  5. Proxmox backup client for arm64?

    Great, thanks for the pointer.
  6. Proxmox backup client for arm64?

    Any plans to support an arm64 backup client (Raspberry Pi)? Bob
  7. ZFS pool import fails on boot, but appears to be imported after

    When I'm booting I see two messages: "Failed to start Import ZFS pool vmstore2" and "Failed to start Import ZFS pool vmstore1". I used to have a pool vmstore2 but removed it. I probably deleted and re-created vmstore1, and it's working just fine. Looking at zpool status after reboot, I see one pool...
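    PVE creates a per-pool `zfs-import@<pool>.service` unit when a pool is added, and a stale unit for a deleted pool keeps failing at every boot. A possible cleanup, sketched under the assumption that the unit names match the error messages above:

    ```shell
    # Hypothetical sketch: disable the stale import unit for the removed
    # pool, then list the remaining zfs-import units to verify
    systemctl disable --now zfs-import@vmstore2.service
    systemctl list-units 'zfs-import*'
    ```
    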
  8. LXC running docker fails to start after upgrade from 6.4 to 7.0

    As the original poster, I wanted to confirm this is fixed for me when using lxc-pve 4.0.9-4. Thanks.
  9. LXC running docker fails to start after upgrade from 6.4 to 7.0

    Following up on my previous comment - the docker version in the LXC template "debian-10-turnkey-core_16.0-1" is quite old (18.09). This template is based on Debian buster, which may be the root of the problem. The Ubuntu version I'm running is 20.10.2 and that's fine. I'm going to steer clear of any Debian...
  10. LXC running docker fails to start after upgrade from 6.4 to 7.0

    I agree - setting "systemd.unified_cgroup_hierarchy=0" causes this same error for me on an LXC container running docker on TurnKey Core.
  11. LXC running docker fails to start after upgrade from 6.4 to 7.0

    I was doing a test upgrade today on one server, following the upgrade guide. I had a number of TurnKey Core 16.0 LXCs running docker, and after the upgrade these failed to start docker. I eventually chased the first issue down to "overlay" not being available, so I added that to...
  12. Proxmox VE 7.0 released!

    Adding overlay fixed the first issue, now it looks like it's a cgroups issue: WARN[2021-07-06T13:53:23.416710405-05:00] Unable to find cpu cgroup in mounts WARN[2021-07-06T13:53:23.416730635-05:00] Unable to find blkio cgroup in mounts WARN[2021-07-06T13:53:23.416746328-05:00] Unable to find...
  13. Proxmox VE 7.0 released!

    I have a number of containers that I created on PVE 6.4 using turnkey-core 16.0 with docker (nesting enabled, privileged). However, after upgrading to VE 7, docker won't start on these containers. Looking at the docker logs inside the container, it looks like "overlayfs" was missing, so I added...
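    The "overlayfs was missing" symptom described above is typically addressed by loading the overlay module on the PVE host, so Docker inside the container can use its overlay2 storage driver. A minimal sketch (the modules-load file name is arbitrary):

    ```shell
    # Load the overlay module now, and on every boot
    modprobe overlay
    echo overlay > /etc/modules-load.d/overlay.conf
    ```
    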
  14. Proxmox Backup Server backup

    OK, so now I have a PBS installed and backing up my PVE VMs. What steps do I need to take to "back up" the PBS itself? If the server fails (the Linux OS drive) but the backup drive is intact, can I recover? This is the case where you have a single backup data source and no tape backup or a second server. Bob
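    PBS keeps its own configuration (datastores, users, ACLs, jobs) under `/etc/proxmox-backup`, while the backup data itself lives on the datastore disk. A hedged sketch of a minimal config backup, assuming the archive is then copied somewhere off the OS drive:

    ```shell
    # Archive the PBS configuration directory; on a fresh install, restore
    # this and re-attach the intact datastore disk to recover
    tar czf /root/pbs-config-$(date +%F).tar.gz /etc/proxmox-backup
    ```
    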
  15. Native VLAN + tagged VLAN

    Follow-up - yeah, it was a switch issue. Rather, a firewall rule issue that wasn't allowing traffic between vlan25 and vlan15. Measure twice, cut once!
  16. Native VLAN + tagged VLAN

    I'm trying to convert over from a native VLAN to a native VLAN + tagged VLAN and I can't seem to make it work. Right now my /etc/network/interfaces looks like this:

    auto lo
    iface lo inet loopback

    auto eno1
    iface eno1 inet manual

    auto eno2
    iface eno2 inet manual

    auto bond0
    iface bond0 inet...
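    For reference, a native + tagged setup on a VLAN-aware bridge often looks like the sketch below. The addresses, VLAN ID 25, and the bond members are assumptions based on the post, not its actual config:

    ```
    # /etc/network/interfaces -- hypothetical sketch, ifupdown2 syntax
    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

    # Native (untagged) traffic stays on the bridge itself
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.15.2/24
        gateway 192.168.15.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

    # Tagged VLAN 25 as a VLAN interface on top of the bridge
    auto vmbr0.25
    iface vmbr0.25 inet static
        address 192.168.25.2/24
    ```
    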
  17. New 5019D-FTN4 - storage configuration advice

    Looking for advice on how to move my current odd collection of services, hosted across multiple Raspberry Pi 4's (4GB) and an old MacBook Air (4GB), over to a new Proxmox server I have on the way. It's a Supermicro 5019D-FTN4 with 64GB RAM, 4x 1TB SSD, and a 1TB NVMe drive. I'm thinking of setting up the SSDs...
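    One common layout for four equal SSDs under Proxmox is a single RAID-Z1 pool registered as VM/container storage. A hedged sketch — the pool name and device names are assumptions:

    ```shell
    # Hypothetical sketch: create a RAID-Z1 pool from the four 1TB SSDs
    # and register it with Proxmox as VM/container storage
    zpool create -o ashift=12 vmstore raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
    pvesm add zfspool vmstore -pool vmstore -content images,rootdir
    ```

    RAID-Z1 trades one drive of capacity for single-disk redundancy; a pool of two mirror vdevs is the usual alternative when random-IO performance matters more than usable space.
    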

