Search results

  1. zfs_arc_max does not seem to work

    Side note: trawling through old archived but useful info here. Yes, decimal is supported now. Secondly, if you use the echo command to change the live ARC size, you must drop the caches to allow them to repopulate at the new size. This may thrash your disk if a lot of reads from cache were happening and...
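
    A minimal sketch of the live resize being described (assuming the standard ZFS-on-Linux module parameter path; the 16 GiB value is only an example):

      # set the ARC ceiling at runtime, in bytes (16 GiB here is just an illustration)
      echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max
      # flush dirty data, then drop caches so the ARC repopulates under the new limit
      sync
      echo 3 > /proc/sys/vm/drop_caches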
  2. Upgrade LXC 16.04 to 18.04 Problems CGROUP

    Help for others encountering this -- I have the same issue, but the services do NOT start. Nov 21 10:07:14 enig systemd[374]: Failed to attach 374 to compat systemd cgroup /system.slice/apache2.service: No such file or directory Nov 21 10:07:14 enig systemd[374]: apache2.service: Failed to set up...
  3. Reinstalled Proxmox, How to add ZFS Pool Back Without Losing Data?

    But how do you reinstall over disks with an existing rpool? There's not even a shell console available to be able to dd if=/dev/zero the drives to erase them, or, say, reinstall with ext3/4/xfs RAID 5 or 10 to overwrite them... And which devices in a ZFS RAID 10 are bootable (i.e. master boot...
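
    A possible wipe procedure for the situation described (a sketch only, run from any live/rescue shell; /dev/sdX is a placeholder for each disk that belonged to the old rpool):

      # clear the ZFS labels so the installer no longer sees the old rpool
      zpool labelclear -f /dev/sdX
      # wipe remaining filesystem/RAID signatures and the start of the disk
      wipefs -a /dev/sdX
      dd if=/dev/zero of=/dev/sdX bs=1M count=100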
  4. cPanel Disk Quotas for LXC - need help

    Yes, this does work of course, and I've detailed how I got this working in cPanel in other threads. However, it does not give the same visibility into the filesystem that ZFS would for other purposes like backups (to check snapshots for a file, for example), as you need to remount the ext4 in loopback...
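
    A rough illustration of the loopback remount mentioned above (the image path, CT ID and mount point are hypothetical):

      # mount the container's ext4 image read-only on a loop device to inspect its files
      mkdir -p /mnt/ct-inspect
      mount -o loop,ro /var/lib/vz/images/741/vm-741-disk-0.raw /mnt/ct-inspect
      ls /mnt/ct-inspect
      umount /mnt/ct-inspect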
  5. PCT list not working

    I fear rebooting the whole server, as nothing may work afterwards if this problem is deep. Also, I come from a culture of "reboot as a last resort, as you may not get access to the machine afterwards" (even with BMC/IPMI). Nothing special, just a Debian 9.8 container, 741. Something is corrupt on this...
  6. PCT list not working

    Something is out of sync; can't pct list: can't open '/sys/fs/cgroup/cpuacct/lxc/741/ns/cpuacct.stat' - No such file or directory. How to fix? proxmox-ve: 5.4-2 (running kernel: 4.15.18-20-pve) pve-manager: 5.4-13 (running version: 5.4-13/aee6f0ec) pve-kernel-4.15: 5.4-8...
  7. [SOLVED] Imported CT doesn't start: Failed to mount "/dev/pts/8" onto "/dev/console"

    Not unrelated: a Google search brought me exactly here to this thread. I think you missed the part where I indicated that I got the 'No such file or directory - Failed to mount "/dev/pts/8" onto "/dev/console"' error to start with from pct start, and from looking at systemctl status / the journal...
  8. [SOLVED] Imported CT doesn't start: Failed to mount "/dev/pts/8" onto "/dev/console"

    BEWARE! All those 'failed to mount console' errors just mean the CT didn't start. It's a generic report for ANY error that stops the container starting. In my case it's: lxc-start 913 20191002173623.626 DEBUG conf - conf.c:run_buffer:326 - Script exec /usr/share/lxc/hooks/lxc-pve-prestart-hook...
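
    To surface the real error behind that generic message, one common approach (a sketch; 913 is the container ID from the excerpt) is to start the container in the foreground with debug logging and then read the log:

      # run CT 913 in the foreground with full debug logging written to a file
      lxc-start -n 913 -F -l DEBUG -o /tmp/lxc-913.log
      # look for the actual failure, e.g. the prestart hook aborting
      grep -i error /tmp/lxc-913.log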
  9. cpanel dovecot resource issue with apparmor

    This seems to have caused, or is related to, another issue I posted about previously; now some elements of /proc and /sys are unavailable: # pct list can't open '/sys/fs/cgroup/cpuacct/lxc/741/ns/cpuacct.stat' - No such file or directory and when I enter an existing container I get...
  10. cpanel dovecot resource issue with apparmor

    Aha!! It does not work with the apparmor profile and pct start, however! # pct start 741 Job for pve-container@741.service failed because the control process exited with error code. See "systemctl status pve-container@741.service" and "journalctl -xe" for details. command 'systemctl start...
  11. cpanel dovecot resource issue with apparmor

    Of course, now it all suddenly works if I have no apparmor profile defined with pct start (no dovecot alerts). If I start it with the manual lxc-start per the URL above, it starts with the apparmor profile (despite some warnings?) :( Will advise if this repeats. I have another container with...
  12. cpanel dovecot resource issue with apparmor

    Forgot to include pve version: pve-manager/5.4-13/aee6f0ec (running kernel: 4.15.18-20-pve)
  13. cpanel dovecot resource issue with apparmor

    Had a situation where constraints from apparmor were causing problems with cPanel's dovecot. The container is NOT unprivileged and not protected. The cPanel support guy said I need lxc.aa_profile = unconfined. But from what I...
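
    For reference, the override being discussed would look roughly like this as a raw LXC key in the container's config (a sketch; 741 is the CT ID from the surrounding posts, and on newer LXC releases the key is lxc.apparmor.profile rather than lxc.aa_profile). Note the security caveat raised elsewhere in these results:

      # appended to /etc/pve/lxc/741.conf -- disables AppArmor confinement for the CT
      lxc.apparmor.profile: unconfined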
  14. /proc and /sys missing for pct enter container but exists for ssh session in

    Something funky with pct enter -- this just started happening; it wasn't occurring before. Something's changed (no packages have been updated on the container that I know of... but obviously something changed while I wasn't looking...) root@arch:/etc/pve/nodes/arch/lxc# pct enter 909 website:/# ps...
  15. Unprivileged containers

    There is some security risk to that. It should not be done without knowledge of what its effects are.
  16. cPanel Disk Quotas for LXC - need help

    The solution is for ZFS to support quotas in LXC, but apparently it can't yet.
  17. Disk quota inside LXC container.

    Did you follow my link to the other thread...?
  18. Proxmox and SACK attack - CVE-2019-11477, CVE-2019-11478, CVE-2019-11479

    Which version is the minimal fixed version #? pve-kernel-4.15.18-16-pve amd64 4.15.18-41 [52.5 MB] pve-kernel-4.15.18-12-pve amd64 4.15.18-36 [52.5 MB] (seen during a single update); I want to be sure which of my other hosts need upgrading.
  19. cPanel Disk Quotas for LXC - need help

    Update: this of course doesn't dynamically generate the lxc.cgroup.devices.allow = b 230:16 rwm entry, which should extend to all 230:* device nodes. If you have a trusted environment, you could add entries for as many volumes as you think you'll ever need (i.e. :32 :48 :64 etc. on up), seems to...
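
    As a sketch of the "pre-allocate entries for future volumes" idea (230 is the zvol block-device major, as in the excerpt; the minors shown are just examples):

      # raw LXC keys in the container config: allow several zvol minors up front
      lxc.cgroup.devices.allow = b 230:16 rwm
      lxc.cgroup.devices.allow = b 230:32 rwm
      lxc.cgroup.devices.allow = b 230:48 rwm
      lxc.cgroup.devices.allow = b 230:64 rwm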
  20. cPanel Disk Quotas for LXC - need help

    Some more helpful details - I guess I hadn't rebooted since tuning - and /dev/zd## devices can renumber randomly if you've created/removed other zvols. At any rate, for whatever reason, they changed on me. So instead of using rootfs:/dev/zd16, for example, in your rootfs lxc/$CTID.conf file options...
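
    The excerpt is cut off, but the stable alternative is presumably ZFS's /dev/zvol symlinks, which keep the same name however the zd## minors get reassigned. A hypothetical example (pool and volume names are placeholders):

      # in /etc/pve/lxc/$CTID.conf: reference the zvol by its stable symlink,
      # not by the renumbering /dev/zd## node
      rootfs: /dev/zvol/rpool/data/vm-741-disk-1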