Search results

  1. Snapshot feature not available

    100% correct. I found the unimplemented feature request here: https://bugzilla.proxmox.com/show_bug.cgi?id=1007 and this discussion: https://forum.proxmox.com/threads/why-do-bind-mounts-prevent-snapshots.85495/. Annoyingly, the mount points are on CephFS, which itself supports snapshots. I'm not sure...
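
    For context: PVE refuses to snapshot a container whose config contains a host-path bind mount, because a bind mount has no storage backend for PVE to snapshot; only volume-backed mount points qualify. An illustrative pair of mount-point lines (storage name, CT ID, and paths are made up):

        # host-path bind mount -- blocks 'pct snapshot', even when the path lives on CephFS
        mp0: /mnt/pve/cephfs/share,mp=/share
        # volume-backed mount point on snapshot-capable storage -- snapshots work
        mp0: rbd-pool:vm-105-disk-1,mp=/share,size=8G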

  2. Snapshot feature not available

    I'm running the latest PVE with containers backed by Ceph. At some point I stopped being able to snapshot containers. Backup jobs *can and do* make snapshots, but external tools are failing; for example:
    INFO: filesystem type on dumpdir is 'ceph' -using /var/tmp/vzdumptmp547208_105 for...
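
    That INFO line is vzdump noticing the dump directory sits on a network filesystem and falling back to a local directory for temporary files. If the fallback location matters, it can be pinned in /etc/vzdump.conf; a minimal sketch (the path is illustrative):

        # /etc/vzdump.conf
        tmpdir: /var/tmp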

  3. ceph warning post upgrade to v8

    I have this problem as well (both dashboard modules). Will this actually be ported to ceph 17.x? The ceph mailing list says "latest versions" — currently, 17.2.6 is the latest 17.x release.

  4. Backup job locks up PVE 7.0

    Locked up again, different host (does not appear to be a pattern of LXC and host combination yet). Here's the backup job: https://pastebin.com/xxvUiKWS and /var/log/messages: https://pastebin.com/NSRnjxS7. It certainly appears as if something about the snapshot process is not playing nicely...

  5. Backup job locks up PVE 7.0

    @dietmar There's a mix; currently they are on separate VLANs, and on some hosts share physical ports (I'm using openvswitch). No other cluster operations are failing. The problem appears to be intermittent but also possibly narrowed to only one or two LXCs.

  6. Backup job locks up PVE 7.0

    Any thoughts? RBD issues are continuing to plague us. I'm testing moving the backing volumes to a different pool.

  7. Backup job locks up PVE 7.0

    Interestingly, and for future reference, there appear to be rbd errors at the time of the failure:
    Jul 30 00:29:04 quarb kernel: [144026.433830] rbd: rbd2: write at objno 1056 2686976~40960 result -108
    Jul 30 00:29:04 quarb kernel: [144026.433841] rbd: rbd2: write result -108
    Jul 30 00:29:04 quarb...
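
    The write result is a negated errno; a quick one-liner to decode it, here confirming that -108 is ESHUTDOWN, i.e., the client's session to the cluster was shut down:

        python3 -c "import errno, os; print(errno.errorcode[108], '-', os.strerror(108))"
        # ESHUTDOWN - Cannot send after transport endpoint shutdown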

  8. [SOLVED] LXC Container Ubuntu 18.04 Zerotier TAP/TUN Solution

    I should have tagged ALL the jameses… thanks!

  9. Backup job locks up PVE 7.0

    6-node cluster, all running the latest PVE, fully updated. Underlying VM/LXC storage is ceph. Backups -> cephfs. In the backup job, syncfs fails, and then the following things happen.
    • The node and container icons in the GUI have a grey question mark, but no functions of the UI itself appear to fail...

  10. tun/tap broken in LXC in PVE 7

    Thanks @fabian. I had read the release notes about cgroup2 but for some reason I thought the directives were forward compatible. <smh> All working now.

  11. tun/tap broken in LXC in PVE 7

    Even more fascinating… attempting to convert a privileged container (mknod=1 still set) to unprivileged fails and destroys the LXC:
    recovering backed-up configuration from 'cephfs:backup/vzdump-lxc-105-2021_07_08-20_19_10.tar.zst'
    /dev/rbd0
    Creating filesystem with 4194304 4k blocks and 1048576...

  12. tun/tap broken in LXC in PVE 7

    With PVE 6.4, I had functional tun/tap (think ZeroTier) inside a privileged LXC with the following config:
    lxc.cgroup.devices.allow: c 10:200 rwm
    lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
    In PVE 7, with or without features: mknod=1, ZeroTier now fails: zerotier-one[171]...
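
    For anyone landing here: PVE 7 switched containers to a pure cgroup2 layout, so the legacy lxc.cgroup.* keys are silently ignored. A minimal sketch of the cgroup2 equivalent, assuming the same c 10:200 device node for /dev/net/tun:

        lxc.cgroup2.devices.allow: c 10:200 rwm
        lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file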

  13. [SOLVED] LXC Container Ubuntu 18.04 Zerotier TAP/TUN Solution

    This appears to have broken under PVE 7. Has anyone else tried post-upgrade?

  14. tagging instances

    Thanks for the reply, @shantanu. I may have misworded my question. I was wondering if you were using nomad to directly create containers (LXC) on the cluster, but it sounds like you are using nomad inside prebuilt VMs. I'm trying to avoid the overhead of having large, not-very-portable VMs; I'd...

  15. tagging instances

    shantanu, are you using nomad "natively" with consul, i.e., managing containers? I'd like to get into this (already using consul) but not sure if someone's created a solution already.

  16. User mapping breaks unprivileged containers

    What is an ACL? I'm attempting to follow the directions in the wiki for using specific UID mappings in unprivileged containers, and getting the same error. How do I add a UID to a mount? Thanks.
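
    For reference, the wiki's scheme boils down to an lxc.idmap block in the container config plus a subordinate-ID grant, with an ACL giving the passed-through UID access to the mounted path. A sketch that passes host UID/GID 1005 straight through (the UID and path are illustrative; the three ranges must still cover all 65536 IDs):

        # /etc/pve/lxc/<CTID>.conf -- map 0-1004, pass 1005 through, map the rest
        lxc.idmap: u 0 100000 1005
        lxc.idmap: g 0 100000 1005
        lxc.idmap: u 1005 1005 1
        lxc.idmap: g 1005 1005 1
        lxc.idmap: u 1006 101006 64530
        lxc.idmap: g 1006 101006 64530

        # /etc/subuid and /etc/subgid -- allow root to map host ID 1005
        root:1005:1

        # on the host: grant the passed-through UID access to the mount
        setfacl -m u:1005:rwx /srv/share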

  17. openvswitch keeps crashing after proxmox 5 to 6 upgrade

    FWIW, I am also getting occasional openvswitch crashes of a similar nature since the upgrade to 6. Apparently no watchdog reboot is enabled; the system just locks up after dumping kernel logs.

  18. Proxmox 5.4 to 6.0: Strange network issues

    I had to revert to a linux bridge, and also removed LACP. Interestingly, OVS+LACP works just fine with different NICs. I had three nearly identical servers that experienced the problem, but intermittently -- average working uptime per machine was about 36 hours, which meant that on any given day...
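
    For anyone applying the same workaround, a minimal /etc/network/interfaces sketch of the plain linux-bridge setup (the NIC name and addresses are illustrative):

        auto eno1
        iface eno1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0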

  19. PVE 6 and Mellanox 4.x drivers

    In this case the bond was not used for VM access, only for ceph (storage) traffic. 10G interfaces -- similar to your configuration. I have removed both the bond and the Open vSwitch bridge (both OVS features), and it has been stable for two days. Previously we could usually last 8-12 hours before...
