Search results

  1. Node in grey mode (question mark) up to 7 days

    Maybe chown -R www-data:www-data /var/log/pveproxy; I've seen this before (see the ownership sketch after these results).
  2. Booting Debian with iSCSI root disk

    Hello, community. Excerpt from a quote: There is no way to do this with the standard Debian initrd. The init4boot project supplies the needed infrastructure (especially an adapted initrd). Standard PXE / tftp boot with iSCSI root and XEN guest systems with iSCSI...
  3. Lxc containers are shut down without intervention

    Hi, you can try to find the logs here: /var/log/lxc/CTNUM.log (a debug-logging sketch follows after these results). Regards, dale.
  4. CT: unable to change memory online

    After some googling I found a workaround; so in my case: echo $((1024*1024*1024*16)) > /sys/fs/cgroup/memory///lxc/120/memory.memsw.limit_in_bytes and echo $((1024*1024*1024*10)) > /sys/fs/cgroup/memory///lxc/120/memory.limit_in_bytes... (see the cgroup sketch after these results).
  5. CT: unable to change memory online

    In my case :(
    ~# free -h
                  total        used        free      shared  buff/cache   available
    Mem:          125Gi        43Gi        67Gi       121Mi        15Gi        81Gi
    Swap:          19Gi        10Mi        19Gi
    ~# cat /sys/fs/cgroup/memory///lxc/120/memory.limit_in_bytes...
  6. CT: unable to change memory online

    Hello, this affects running containers only (VMs are OK); the web UI gives the same result.
    ~# pct set 120 --memory 10240
    400 Parameter verification failed.
    memory: unable to hotplug memory: closing file '/sys/fs/cgroup/memory///lxc/120/memory.limit_in_bytes' failed - Invalid argument
    pct set <vmid>...
  7. PVE from RAMFS & LXC

    Feature req. added 2019-09-12 :(
  8. Linux/x86 5.0.21-4-pve Kernel & Linux VM (pvetesting repo)

    Installed pve-kernel-5.3.7-1-pve - Ok.
  9. Linux/x86 5.0.21-4-pve Kernel & Linux VM (pvetesting repo)

    Can confirm that this is a "hardware-dependent" problem. Another system with an Intel(R) Xeon(R) Gold 6144 CPU @ 3.50GHz is OK.
  10. Linux/x86 5.0.21-4-pve Kernel & Linux VM (pvetesting repo)

    The VM reboots at a very early stage with the message "Physical KASLR disabled: no suitable memory region!". Hardware info attached.
    root@b12:~# cat /proc/cmdline
    BOOT_IMAGE=/boot/vmlinuz-5.0.21-4-pve root=UUID=e48d586c-8f58-4b03-a081-cdcf4776c83c ro console=tty0 console=ttyS0,115200n8 nopti nospectre_v2...
  11. Linux/x86 5.0.21-4-pve Kernel & Linux VM (pvetesting repo)

    :( Nothing. Quick reboot right after loading the kernel and jumping to it.
  12. Linux/x86 5.0.21-4-pve Kernel & Linux VM (pvetesting repo)

    After upgrading to the latest pvetesting repo, Linux VMs constantly reboot when using the 5.0.21-4-pve kernel (Windows VMs & CTs are OK). Linux/x86 5.0.21-3-pve - OK.
    root@b11:~# pveversion -v
    proxmox-ve: 6.0-2 (running kernel: 5.0.21-3-pve)
    pve-manager: 6.0-11 (running version: 6.0-11/2140ef37)...
  13. PVE from RAMFS & LXC

    Really? (A loop-mount sketch follows after these results.)
    root@b02:~# ls -la /var/lib/vz/images/101/
    total 9960
    drwxr----- 2 root root      4096 Aug 30 12:35 .
    drwxr-xr-x 5 root root      4096 Sep 11 15:28 ..
    -rw-r----- 1 root root 536870912 Sep 11 15:35 vm-101-disk-0.raw
    root@b02:~# mount -o loop,noatime...
  14. PVE from RAMFS & LXC

    dir: local
        path /var/lib/vz
        content rootdir,iso,snippets,images,vztmpl
        maxfiles 0
    zfspool: z0
        pool zp00l
        content rootdir,images
        sparse 1
  15. PVE from RAMFS & LXC

    lxc.arch = amd64
    lxc.include = /usr/share/lxc/config/alpine.common.conf
    lxc.apparmor.profile = generated
    lxc.apparmor.allow_nesting = 1
    lxc.monitor.unshare = 1
    lxc.tty.max = 2
    lxc.environment = TERM=linux = a1
    lxc.cgroup.memory.limit_in_bytes = 536870912...
  16. PVE from RAMFS & LXC

    IIUC, the CT does not start because of this (from the debugging output submitted earlier); see the bind-mount sketch after these results.
    lxc-start 101 20190830122341.107 ERROR conf - conf.c:lxc_chroot:1389 - Permission denied - Failed to mount "/usr/lib/x86_64-linux-gnu/lxc/rootfs" onto "/" as MS_REC | MS_BIND
    lxc-start 101 20190830122341.107 ERROR...
  17. PVE from RAMFS & LXC

    pvereport output attached
  18. PVE from RAMFS & LXC

    Hello, community. Because the PVE node runs from ramfs, LXC containers do not start (VMs are OK). Config:
    arch: amd64
    cores: 1
    hostname: a
    memory: 512
    net0: name=eth0,bridge=vmbr1,hwaddr=A6:5B:35:4B:2C:85,ip6=auto,tag=6,type=veth
    ostype: alpine
    rootfs: local:101/vm-101-disk-0.raw,size=512M
    swap...
  19. Remove node from 2 nodes cluster

    Hi, Cyberavis. I successfully split a 2-node cluster online, with the VMs kept running, using simple scripts executed on both nodes (see the sketch after these results).
    ~# cat
    #!/bin/sh
    systemctl stop pvestatd.service
    systemctl stop pvedaemon.service
    systemctl stop pve-cluster.service
    systemctl stop corosync
    systemctl stop...
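
A few hedged sketches for the results above follow. For result 1, a minimal sketch of the ownership fix suggested there, assuming the node shows a grey question mark because pveproxy cannot write its log; only the chown line and the log path come from the post, the check and the service restarts afterwards are my own assumption:

    # Check current ownership of the pveproxy log directory (pveproxy runs as www-data)
    ls -ld /var/log/pveproxy
    # Hand the directory and its contents back to www-data, as suggested in the post
    chown -R www-data:www-data /var/log/pveproxy
    # Assumption: restart the status/proxy services so the node status recovers
    systemctl restart pveproxy pvestatd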
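For result 3, a small sketch of how one might read the per-container log mentioned there; only the /var/log/lxc/CTNUM.log path is from the post, the example container ID 120 and the extra lxc-start debug invocation are assumptions:

    # Look at the log the post points to (CT 120 used as an example ID)
    tail -n 100 /var/log/lxc/120.log
    # Assumption: if the log is empty, start the container in the foreground with debug logging
    lxc-start -n 120 -F -l DEBUG -o /tmp/lxc-120.log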
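For results 4-6, a sketch of the cgroup-v1 workaround quoted in result 4, assuming CT 120 and example sizes of 16 GiB memory+swap and 10 GiB memory; the extra slashes in the post's path are equivalent to the single-slash path used here, and the ordering matters because the kernel requires memsw.limit_in_bytes to stay greater than or equal to memory.limit_in_bytes:

    #!/bin/sh
    # Example values only; CT ID and sizes are assumptions taken from the thread
    CT=120
    MEMSW=$((1024*1024*1024*16))   # 16 GiB memory + swap
    MEM=$((1024*1024*1024*10))     # 10 GiB memory
    # Raise the memory+swap limit first, then the plain memory limit
    echo $MEMSW > /sys/fs/cgroup/memory/lxc/$CT/memory.memsw.limit_in_bytes
    echo $MEM > /sys/fs/cgroup/memory/lxc/$CT/memory.limit_in_bytes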
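For result 13, a sketch of the loop mount that the snippet truncates, using the image path shown there; the mountpoint /mnt/ct101 and the inspect/unmount steps are assumptions:

    # Create a mountpoint (hypothetical path) and loop-mount the container's raw disk image
    mkdir -p /mnt/ct101
    mount -o loop,noatime /var/lib/vz/images/101/vm-101-disk-0.raw /mnt/ct101
    # Inspect the container rootfs, then unmount again
    ls /mnt/ct101
    umount /mnt/ct101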
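For result 16, a simplified sketch that tries the failing mount by hand; MS_REC | MS_BIND corresponds to mount --rbind, but the target /mnt/test is an assumption and this is not exactly what lxc_chroot does (it remounts the staged rootfs onto "/"):

    # Reproduce the bind mount outside lxc-start (assumed target directory)
    mkdir -p /mnt/test
    mount --rbind /usr/lib/x86_64-linux-gnu/lxc/rootfs /mnt/test
    # If this also fails with "Permission denied", the ramfs-backed host filesystem is the suspect,
    # not the container configuration
    umount -R /mnt/test 2>/dev/null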
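For result 19, a sketch of the kind of split script the snippet truncates. Only the five systemctl stop lines are from the post; everything after that point (local-mode pmxcfs, removing the corosync configuration, restarting the services) is an assumption based on the usual "separate a node without reinstalling" steps, not the author's exact script:

    #!/bin/sh
    # Stop the cluster-related services (these lines are from the post)
    systemctl stop pvestatd.service
    systemctl stop pvedaemon.service
    systemctl stop pve-cluster.service
    systemctl stop corosync
    # Assumption from here on: start pmxcfs in local mode so /etc/pve is writable without quorum
    pmxcfs -l
    # Drop the cluster configuration on this node
    rm /etc/pve/corosync.conf
    rm -rf /etc/corosync/*
    # Stop the local-mode pmxcfs and bring the services back up standalone
    killall pmxcfs
    systemctl start pve-cluster.service
    systemctl start pvedaemon.service
    systemctl start pvestatd.service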

