I'm running the latest PVE with containers backed by Ceph. At some point I stopped being able to snapshot containers. Backup jobs *can and do* make snapshots, but external tools are failing. For example:
INFO: filesystem type on dumpdir is 'ceph' -using /var/tmp/vzdumptmp547208_105 for...
6-node cluster, all running the latest PVE, fully updated. Underlying VM/LXC storage is Ceph; backups go to CephFS.
During the backup job, syncfs fails, and then the following things happen:
• The node and container icons in the GUI show a grey question mark, but no functions of the UI itself appear to fail...
With PVE 6.4, I had functional tun/tap (think ZeroTier) inside a privileged LXC with the following config:
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file
In PVE 7, with or without features: mknod=1, ZeroTier now fails:
zerotier-one[171]...
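In case it helps anyone answering: my working theory is that PVE 7's switch to a pure cgroup2 hierarchy makes the old lxc.cgroup.devices.allow key a no-op, so the cgroup2 equivalent may be needed instead. A hedged sketch of the config I'm about to try (10:200 is the standard char device for /dev/net/tun):

lxc.cgroup2.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net/tun dev/net/tun none bind,create=file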
I'm having massive instability with the built-in Mellanox 4.0.0 drivers (mlx4_en). However, I don't seem to be able to compile the Mellanox drivers myself (for Debian 9.6).
Has anyone else had success or failure with this setup?
My particular cards are
lspci -k
82:00.0 Ethernet controller: Mellanox...
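For anyone comparing notes, the exact driver and firmware pairing on the card can be dumped with ethtool; the interface name below is a placeholder, substitute the actual Mellanox port:

ethtool -i enp130s0f0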
Fairly standard installation… and today dist-upgrade is trying to uninstall proxmox-ve. The suggestions from other threads aren't fixing it for me. Any ideas? I'm reluctant to force the override.
apt-get dist-upgrade
Reading package lists... Done
Building dependency tree
Reading state information... Done...
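In case the repository setup is the culprit (a common cause of apt wanting to remove proxmox-ve, as I understand it), here is what I'm checking; the -s flag only simulates the upgrade and prints what would happen:

grep -r proxmox /etc/apt/sources.list /etc/apt/sources.list.d/
apt-get dist-upgrade -s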
Since April 30, I've received 7 warning emails from one host, 5 from another, and 2 from a third; each email reports 1 Current_Pending_Sector, i.e. one currently unreadable (pending) sector. On each host it's the same drive model, a Crucial CT1000MX500SSD1 (1 TB SSD).
Running smartctl...
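For reference, the raw attribute can be read directly with smartctl; /dev/sda below is a placeholder for the affected drive:

smartctl -A /dev/sda | grep -i -e Pending -e Reallocated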