Search results

  1. BR_PORT_BITS - limit of bridges

    Please increase BR_PORT_BITS in the pve kernel. It is a must-have with nested containers.
  2. [BUG] Proxmox 6.1 AMD64 Ryzen 3600X edac kernel errors with ECC ram

    This is not only for Ryzen; it is also present on EPYC. The PVE kernel needs backported support for Zen 2 CPUs.
  3. BR_PORT_BITS - limit of bridges

    @proxmox-kernel-team can you increase this limit in pve-kernel by default?
  4. BR_PORT_BITS - limit of bridges

    Hi! The Linux kernel has a hardcoded default parameter BR_PORT_BITS = 10. This parameter limits the maximum number of ports per bridge to 1024 (2^10 = 1024). In some cases this needs to be increased - for example when the number of containers exceeds 1024 (large LXC containers with nested Docker containers). I know the BR_PORT_BITS limit is inspired by...
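
    A rough sketch of the change being requested, assuming the constant is still defined in net/bridge/br_private.h of the pve-kernel source tree (the new value 12 is only an example):

    # show the current limit and the derived BR_MAX_PORTS
    grep -n -e BR_PORT_BITS -e BR_MAX_PORTS net/bridge/br_private.h
    # raise it before rebuilding the kernel package, e.g. 2^12 = 4096 ports per bridge
    sed -i 's/define BR_PORT_BITS.*10/define BR_PORT_BITS 12/' net/bridge/br_private.h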
  5. Live migration LXC

    Is there any progress on this? LXD has live migration, but Proxmox still does not. When is it tentatively planned to add LXC live migration? I read this and I'm upset...
  6. PVE kernel stuck

    Unfortunately I can't check this until the next hang or a planned reboot.
  7. PVE kernel stuck

    Most servers are upgraded but need a reboot, and two nodes are still on 4.x (they will be updated when possible). But I think the error explained in the opening post may be present in the latest kernels too: $ for h in {8,9,10,11a,11b,11c,11d,12a,12b,12c,12d}; do echo host$h; ssh root@host$h pveversion; done host8...
  8. PVE kernel stuck

    Unfortunately the cluster has 2 nodes on Proxmox 4.x, and until those nodes are upgraded to 5.x I can't update to Proxmox 6.x (as you know, corosync has to be updated before the upgrade to 6.x). The upgrade process is going slowly, because these servers run services that support...
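
    For reference, a rough sketch of that corosync-first step on a PVE 5.4 node; the repository line is from the official 5-to-6 upgrade guide as far as I remember, so verify it there before use:

    pve5to6    # readiness checklist script shipped with PVE 5.4
    echo "deb http://download.proxmox.com/debian/corosync-3/ stretch main" > /etc/apt/sources.list.d/corosync3.list
    apt update && apt dist-upgrade    # should only pull in corosync 3.x from that repo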
  9. PVE kernel stuck

    Today one of the servers got stuck, some time after the following message: Feb 17 04:15:11 hostA kernel: [472111.800882] WARNING: CPU: 41 PID: 0 at net/sched/sch_generic.c:323 dev_watchdog+0x222/0x230 Feb 17 04:15:11 hostA kernel: [472111.801110] R13: ffff974a19760000 R14: ffff974a19760478 R15...
  10. Docker support in Proxmox

    Perhaps you need to enable "keyctl" and "Nesting" in the container config under Options --> Features, or add the following params to the config: lxc.apparmor.profile: unconfined lxc.cgroup.devices.allow: a lxc.cap.drop: lxc.mount.auto: proc:rw sys:rw br_netfilter does not need to be loaded on pve kernels, because: $ grep...
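
    As a sketch, in /etc/pve/lxc/<CTID>.conf (<CTID> is the container ID) the GUI route from Options --> Features corresponds to the single "features" line, and the raw-key route is the lxc.* lines quoted above; use one or the other:

    features: keyctl=1,nesting=1

    lxc.apparmor.profile: unconfined
    lxc.cgroup.devices.allow: a
    lxc.cap.drop:
    lxc.mount.auto: proc:rw sys:rw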
  11. Docker support in Proxmox

    On the Proxmox host some modules need to be preloaded: # cat /etc/modules bonding ip_vs ip_vs_dh ip_vs_ftp ip_vs_lblc ip_vs_lblcr ip_vs_lc ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr xfrm_user #for Docker overlay nf_nat br_netfilter xt_conntrack Also, Docker can't start in LXC on ZFS...
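
    A small sketch for loading those modules immediately as well, since /etc/modules only takes effect at boot (module names are the ones listed above):

    for m in bonding ip_vs ip_vs_dh ip_vs_ftp ip_vs_lblc ip_vs_lblcr ip_vs_lc ip_vs_nq ip_vs_rr ip_vs_sed ip_vs_sh ip_vs_wlc ip_vs_wrr xfrm_user nf_nat br_netfilter xt_conntrack; do modprobe "$m"; done
    lsmod | grep -E 'ip_vs|xfrm_user|nf_nat|br_netfilter|xt_conntrack|bonding'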
  12. Docker support in Proxmox

    Stop Docker, create the config: # less /etc/docker/daemon.json { "storage-driver": "overlay2", "iptables": true, "ip-masq": true } Run Docker, save the iptables rules it generates to a file: iptables-save > /var/lib/iptables-docker-default.rules Stop Docker, change the config: # less...
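
    A sketch of those first steps as plain commands, assuming systemd manages docker (the second config change is cut off in the snippet, so it is not reproduced here):

    systemctl stop docker
    echo '{ "storage-driver": "overlay2", "iptables": true, "ip-masq": true }' > /etc/docker/daemon.json
    systemctl start docker
    # save the rules docker generated with its default iptables handling
    iptables-save > /var/lib/iptables-docker-default.rules
    systemctl stop docker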
  13. ext4_multi_mount_protect Delaying Container Migration

    Is that fixed in Proxmox 6? In Proxmox 5.4 this bug is present. It also appeared after moving .raw to XFS
  14. Proxmox VE Ceph Benchmark 2018/02

    Unfortunately, there are problems with RBD too: https://forum.proxmox.com/threads/ceph-rbd-slow-down-write.55055/
  15. ceph rbd slow down read/write

    If you look at the post https://forum.proxmox.com/threads/ceph-rbd-slow-down-write.55055/#post-254003 , you can see that only the read operation is performed from the storage; the write operation goes to /dev/null. And filling the page cache in the container sufficiently, that would...
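
    For context, a read test of that shape (read from the container's storage, discard the data into /dev/null) would look something like this; the file path is just a placeholder:

    dd if=/path/to/testfile of=/dev/null bs=1M status=progress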
  16. ceph rbd slow down read/write

    The only difference (excluding IP, MAC and identifiers) is that the cache is not filled in the second container. This is not expected. What I want to achieve: in the container's page cache, data is synced more often and older synced pages are invalidated to allow overwriting those pages, so performance doesn't drop when...
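
    One knob pointing in that direction (an assumption, not something stated in the thread) is the kernel's dirty-page writeback tuning, which makes dirty data get written back sooner so the pages can be reclaimed; the values below are only illustrative:

    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10
    sysctl -w vm.dirty_expire_centisecs=1000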
  17. ceph rbd slow down read/write

    And why does performance not drop in the second container with RBD (krbd, same pool) at the same time? That leads me to think the problem is not in Ceph or in krbd.
  18. ceph rbd slow down read/write

    And which params need to be tuned? Maybe these (?): "bluestore_cache_kv_max": "536870912", "bluestore_cache_kv_ratio": "0.990000", "bluestore_cache_meta_ratio": "0.010000", "bluestore_cache_size": "0", "bluestore_cache_size_hdd": "1073741824", "bluestore_cache_size_ssd"...
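
    If one did experiment with those, a sketch for inspecting and overriding them (parameter names are the ones listed above; the value is only an example and the change may need an OSD restart to take effect):

    # run on the node hosting osd.0
    ceph daemon osd.0 config show | grep bluestore_cache
    ceph tell osd.* injectargs '--bluestore_cache_size_ssd 2147483648'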