Search results

  1. [SOLVED] pass only untagged traffic to vm

    Thanks. :) Adding "tag=0" removes the network interface. The same is true if I add the trunks parameter without vlan ids to it, e.g. trunks,tag=0. If I read man qm correctly trunks itself requires vlan ids separated by ";", like trunks=0;5. I tried net0...
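    For illustration, the ";"-separated trunks list from man qm looks like this on the command line (VM id, bridge and VLAN ids are placeholders, not the configuration tried in the thread; the semicolon needs quoting in the shell):

      # limit net0 to the listed VLANs on the vlan-aware bridge
      qm set 100 --net0 'virtio,bridge=vmbr0,trunks=5;10'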
  2. [SOLVED] pass only untagged traffic to vm

    I have a bond binding two nics connected to two switch ports each with an untagged and multiple tagged vlans (802.1q). On top of the bond is a bridge (linux ifupdown2 not ovs). Now I add some VMs. VMs which receive a vlan tag see only the vlan tagged traffic, all good. VMs without a vlan tag see...
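    A minimal sketch of the setup described here, as an ifupdown2 /etc/network/interfaces fragment (interface names, the bond mode, the address and the VLAN range are assumptions for illustration):

      auto bond0
      iface bond0 inet manual
          bond-slaves eno1 eno2
          bond-mode 802.3ad
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.0.2.10/24
          gateway 192.0.2.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes
          bridge-vids 2-4094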
  3. Integration of AD and pve

    I am currently testing integration of AD with PVE including syncing and found the documentation and wiki somewhat incomplete. In particular there were two points: can I use bind_dn and similar with AD as well (yes), and how to filter a set of users and/or groups. Luckily there are options documented in...
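    A hedged sketch of what such an AD realm entry can look like in /etc/pve/domains.cfg (realm name, server, DNs and the filter are made-up placeholders; the bind password is set via pveum/GUI and not shown here):

      ad: example-ad
          domain example.com
          server1 dc1.example.com
          bind_dn CN=pve-sync,OU=ServiceAccounts,DC=example,DC=com
          filter (&(objectClass=user)(memberOf=CN=pve-users,OU=Groups,DC=example,DC=com))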
  4. [TUTORIAL] Migrating DC from hyper-v cluster to proxmox with ceph storage

    Thank you for your suggestion, but I am using sparse disks on both sides anyway, i.e. while the disk was configured with a max size of 80G the vhdx had <30G and the rbd is now just shy of 20G. Copying was done in <1 min. While qm importdisk took a little bit longer than that (a few minutes) and...
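    For reference, the general shape of the import step (VM id, file name and target storage are placeholders):

      # import the copied VHDX as an unused disk on the ceph storage, then attach it to the VM
      qm importdisk 200 dc01.vhdx ceph-rbd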
  5. [SOLVED] Active-Passive Bond DELL Server (Intel(R) Ethernet 10G 4P X710)

    https://www.kernel.org/doc/html/latest/networking/bonding.html And yes bringing an interface down via OS does not necessarily mean the link is seen as down on the switch side, see https://forum.proxmox.com/threads/802-3ad-failover-time-when-bringing-a-slave-interface-down.87390/#post-383527
  6. [TUTORIAL] Migrating DC from hyper-v cluster to proxmox with ceph storage

    Since I needed to document the process anyway I thought I'd share my experience of migrating a domain controller from a hyper-v cluster to a proxmox cluster with ceph storage. I wrote this with these 2 wiki articles as background information...
  7. upgrade to pve 7 renamed interfaces because of /etc/systemd/network/99-defaults.link

    On our nodes interfaces got renamed from predictable to old naming scheme, i.e. eno1 -> eth0, eno33p0 -> eth3 … But why? No /dev/null links in /etc/systemd/network. No net.ifnames=0 in /proc/cmdline. No … wait, why is there a /etc/systemd/network/99-default.link file. That is new. Contents: #...
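    The checks mentioned there, spelled out as commands (a sketch of the diagnosis, not taken verbatim from the post):

      grep -o net.ifnames=0 /proc/cmdline         # kernel cmdline override? (not present in this case)
      ls -l /etc/systemd/network/                 # any /dev/null symlinks masking .link files? (none)
      cat /etc/systemd/network/99-default.link    # the unexpected file that caused the renames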
  8. 802.3ad failover time when bringing a slave interface down

    Just to finish this up if anyone ever comes across something similar: A link is not necessarily powered down by ip link set $NIC down. We found that across multiple cards from various vendors - and the equivalent holds when disabling a link on Windows. For our Mellanox cards we could...
  9. option to add adhoc backups to Prune Simulator

    Just discovered the PBS Prune Simulator while looking at the pve documentation, really nice. Maybe add the option to create not only scheduled backups but also adhoc backups? That would help to visualize the scenario mentioned in the pve-docu:
  10. renamed interfaces from 5.4 to 5.11

    Thanks for the pointer. What basically happened on my systems is a rename of several ID_NET parameters:

      sudo udevadm info -e | grep ID_NET > interfaces_5.11   # do the same on kernel 5.4 into interfaces_5.4
      diff interfaces_5.4 interfaces_5.11
      3,4c3,4
      < E: ID_NET_NAME_ONBOARD=eno33
      < E...
  11. renamed interfaces from 5.4 to 5.11

    Maybe hardware specific, with kernel pve-5.11 some Mellanox ConnectX-5 cards (onboard and pci) changed their names, which obviously caused havoc on our network config. What does it say on Predictable Network Interface Names again? > Stable interface names when kernels or drivers are...
  12. cifs with dfs broken in pve-kernel-5.4.101-1

    With 5.11.7-1-pve DFS works fine (pve-kernel-5.11: Installed: 7.0-0+3~bpo10).
  13. 802.3ad failover time when bringing a slave interface down

    It seems at least some nics do not actually take the link down when told so via ifdown/ip link down. [1], [2] I played around with mlxconfig yesterday to query and set the parameters. It did not resolve the issue yet, as KEEP_ETH_LINK_UP_P1=0 would result in a non functional network connection...
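    The query/set workflow with mlxconfig looks roughly like this (the MST device path is a placeholder for whatever mst status reports; changes only take effect after a firmware reset or reboot):

      mst start                                                         # load the Mellanox MST modules (mft tools)
      mlxconfig -d /dev/mst/mt4119_pciconf0 query | grep KEEP_ETH_LINK_UP
      mlxconfig -d /dev/mst/mt4119_pciconf0 set KEEP_ETH_LINK_UP_P1=0   # the setting tried in the thread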
  14. cifs with dfs broken in pve-kernel-5.4.101-1

    Will probably be fixed with 5.4.112. For reference: - https://cdn.kernel.org/pub/linux/kernel/v5.x/ChangeLog-5.4.112 - https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1923670 Should already work with the pve 5.11 kernel, but I had no time to test on an actual proxmox installation yet (only arch...
  15. cifs with dfs broken in pve-kernel-5.4.101-1

    The problem was probably introduced somewhere between 5.4.0-67 and 5.4.0-71 as it arrived on my client (Ubuntu 20.04 based) today as well. edit: with 5.4.0-70 it still works.
  16. 802.3ad failover time when bringing a slave interface down

    Thanks, I already had it set to fast on the host side, but not on the switch side. With lacp rate fast on the switch side the problem is dampened: 2 lost pings with interval 1s. The problem is the switch still sees the link as up despite ip link set eno33 down on the host side. Therefore it waits until enough (3 I think...
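    On the host side this corresponds to the lacp-rate option in the bond stanza (ifupdown2; the second slave name is an assumption, and the switch-side command depends on the vendor):

      iface bond0 inet manual
          bond-slaves eno33 eno34
          bond-mode 802.3ad
          bond-lacp-rate 1    # 1 = fast: request LACPDUs every second instead of every 30 seconds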
  17. 802.3ad failover time when bringing a slave interface down

    I noticed something strange recently. If I bring a slave interface of a bond with mode 802.3ad down, the bond loses network connectivity for about 70-80s. If I unplug the cable I lose connectivity for less than one second (2 lost pings when the interval is 0.1s) which is what I would expect. I can...
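    The test itself boils down to something like this (node address is a placeholder; ping -i 0.1 needs root):

      # from another machine: watch for gaps in the replies
      ping -i 0.1 192.0.2.10
      # on the node: take one slave of the 802.3ad bond down, then compare with pulling the cable
      ip link set eno33 down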
  18. cifs with dfs broken in pve-kernel-5.4.101-1

    Sorry, just noticed the wrong sub-forum; this should be in Proxmox VE: Installation and configuration.
  19. cifs with dfs broken in pve-kernel-5.4.101-1

    I have a cifs storage configured in storage.cfg that points to a dfs path. I just noticed that this stopped working. Going back in /var/log/messages* it correlates to the upgrade to pve-kernel-5.4.101-1-pve on 2021-03-02. The same dfs mount still works on my client machine which is still on...
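    For context, such an entry in /etc/pve/storage.cfg looks roughly like this (storage id, server, share and username are placeholders; the DFS namespace is mounted as the share):

      cifs: dfs-backup
          server fileserver.example.com
          share dfsroot
          path /mnt/pve/dfs-backup
          content backup
          username svc-backup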
  20. cgmanager.service does not start

    As additional info: cgmanager was deprecated a while ago and the package can be purged. See https://github.com/lxc/cgmanager
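    A sketch of the cleanup, assuming the package is still installed:

      systemctl disable --now cgmanager.service   # stop and disable the leftover unit
      apt purge cgmanager                         # remove the deprecated package and its config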
