Search results

  1. LXC CSF and IPtables errors

    We have a simple LXC machine with CSF installed on it (PVE 7.x), and we are getting the following errors inside the LXC CT: [root@box ~]# /etc/csf/csftest.pl Testing ip_tables/iptable_filter...OK Testing ipt_LOG...OK Testing ipt_multiport/xt_multiport...OK Testing ipt_REJECT...OK Testing...
  2. LXC Migration between nodes with different storage types (LVM and ZFS)

    When I go to "Datacenter" on the WebUI and then go to the existing `local-zfs` storage, I added the source node (which has LVM-backed disks), and even though `local-zfs` did in fact come up on the source node, it's not usable since `local-zfs` is ZFS and the source node isn't on a ZFS filesystem, it's on...
  3. LXC Migration between nodes with different storage types (LVM and ZFS)

    Does anyone know if there is a better way of migrating VMs/CTs between the nodes?
  4. LXC Migration between nodes with different storage types (LVM and ZFS)

    We have a two-node cluster (no HA, of course). Node 1 (older, has many guest VMs/CTs) Storage: local, local-lvm. Node 2 (new, has no guests at all) Storage: local, local-zfs. I understand that you cannot do a migration between nodes since they don't have "matching storage names" (as in...
  5. CentOS/Fedora LXC doesn't boot with networking applied

    Another little point: the first time the container is booted, on the PVE host I see the following: 284: veth422i0@if5: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master vmbr0 state LOWERLAYERDOWN group default qlen 1000 But after I reboot the LXC container I see: 285...
  6. CentOS/Fedora LXC doesn't boot with networking applied

    Hi all, I am using the default CentOS 7, Fedora, or even CentOS LXC templates from the Proxmox downloader (nothing custom at all), and when the LXC CT is created and started for the first time, the container does not have the static IP addresses applied. However, if you reboot it AFTER it gets...
  7. [SOLVED] PVE 7 - How to make all new CTs have PPP enabled

    Hey Team. So I know that we can make a single selected container have PPP working by simply adding the following to its XXX.conf file: lxc.cgroup.devices.allow: c 108:0 rwm lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file However, what we want to do is that all existing and new CTs...
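    For reference, a minimal sketch of how those two lines sit in a single container's config file (the CT ID in the path is only a placeholder; the two config lines are the ones quoted above):

    ```
    # /etc/pve/lxc/<CTID>.conf -- <CTID> is a placeholder for the container's ID
    lxc.cgroup.devices.allow: c 108:0 rwm
    lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file
    ```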
  8. Migration from 6.4 to 7.0

    Just to confirm, what errors are you seeing on 20.04 containers?
  9. Open file limits on unprivileged LXC

    The way the LXC configs are loaded: first you have /usr/share/lxc/config/common.conf, then whatever is in /usr/share/lxc/config/common.conf.d/*, then the OS-specific config, for example /usr/share/lxc/config/debian.common.conf, and after that your VMID.conf. That's the order in which it's loaded if...
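    Laid out as a list, the load order described there would look roughly like this (the Debian file is just the example from the post; the exact set of files depends on the container's distro):

    ```
    /usr/share/lxc/config/common.conf          # 1. base defaults
    /usr/share/lxc/config/common.conf.d/*      # 2. drop-in overrides
    /usr/share/lxc/config/debian.common.conf   # 3. OS-specific config
    <VMID>.conf                                # 4. the container's own config, loaded last
    ```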
  10. Open file limits on unprivileged LXC

    You could apply it in /usr/share/lxc/config/common.conf; this way it's applied across all LXC containers.
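    A minimal sketch of what such a global override might look like, assuming the open-file limit is raised via LXC's prlimit key (the value is only an illustration):

    ```
    # appended to /usr/share/lxc/config/common.conf so it applies to every container on the node
    # lxc.prlimit.nofile sets RLIMIT_NOFILE (max open files); 1048576 is an example value
    lxc.prlimit.nofile = 1048576
    ```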
  11. Migration from 6.4 to 7.0

    With the new fixes, do we still need to add the following? ``` GRUB_CMDLINE_LINUX_DEFAULT="systemd.unified_cgroup_hierarchy=0 quiet" ```
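    For context, a sketch of how that parameter is typically applied on a GRUB-based PVE install (assuming /etc/default/grub is where your kernel command line is set):

    ```
    # /etc/default/grub -- keep containers on the legacy cgroup hierarchy
    GRUB_CMDLINE_LINUX_DEFAULT="systemd.unified_cgroup_hierarchy=0 quiet"
    ```

    After editing, run update-grub and reboot the node for the change to take effect.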
  12. Unified cgroup v2 layout Upgrade warning PVE 6.4 to 7.0

    Remove the unprivileged var from the conf and instead try adding the following to your LXC conf: lxc.cgroup.devices.allow = lxc.cgroup.devices.deny = Let us know how that goes?
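    Written out, the two lines from that suggestion would look like this in the container's config (the path/CT ID is a placeholder; empty values are commonly used to clear the existing device lists):

    ```
    # /etc/pve/lxc/<CTID>.conf -- <CTID> is a placeholder
    lxc.cgroup.devices.allow =
    lxc.cgroup.devices.deny =
    ```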
  13. Unified cgroup v2 layout Upgrade warning PVE 6.4 to 7.0

    Editing the conf and setting the unprivileged flag manually is not a real solution to the issue; even the PVE docs state this should not be done manually.
  14. Unified cgroup v2 layout Upgrade warning PVE 6.4 to 7.0

    Try setting/adding the following to your LXC conf (/etc/pve/lxc/1234.conf, for example): unprivileged: 1 and then try to boot the CT.
  15. Unified cgroup v2 layout Upgrade warning PVE 6.4 to 7.0

    Yes, and it also happens with Ubuntu 16.x too. Just read the docs and I understand what is happening. I am curious how the pve6to7 test detects this, since the pct conf doesn't state which version of CentOS or Ubuntu they are.
  16. Unified cgroup v2 layout Upgrade warning PVE 6.4 to 7.0

    Before we upgrade I just want to better understand what the following error means and what it could result in after the upgrade: WARN: Found at least one CT (174) which does not support running in a unified cgroup v2 layout. Either upgrade the Container distro or set...
  17. [TUTORIAL] windows cloud init working

    What was your compromise for getting Windows to provision, if not editing the files? Would you mind posting a little how-to guide for it?
  18. [SOLVED] Vlans in a PVE cluster

    Wow, you really solved my issues; after defining the VLAN (123 for testing's sake) it's all up and working fully across the switch, thank you!
  19. [SOLVED] Vlans in a PVE cluster

    Yeah, it's done on purpose with the addressing. In theory I should be able to see ARP and regular non-IP traffic between the VMs. If they are both on the same node then it's working great, hence why I tend to point my finger at the switch.
  20. [SOLVED] Vlans in a PVE cluster

    Thanks for the reply @ph0x. 1) Yes, both are on vmbr0 and are VLAN aware. 2) The VLAN ID is set on both VMs via the WebUI. 3) The switch (Catalyst 2960s) is set up as a trunk. Where did I go wrong? My first assumption is the switch config for the ports that both nodes are on, but it seems to be all...
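    As a point of reference, a VLAN-aware vmbr0 on a PVE node usually looks roughly like this in /etc/network/interfaces (the addresses and the physical port name are examples, not taken from the thread):

    ```
    # /etc/network/interfaces -- example VLAN-aware bridge; eno1 and the addresses are placeholders
    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
    ```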
