Recent content by Han Boetes

  1. vm on same vlan as the proxmox host

    Hi there, I have a host with a config like this:

        auto lo
        iface lo inet loopback

        iface eno2 inet manual

        auto vlan60
        iface vlan60 inet static
            address 10.10.60.230
            netmask 255.255.255.0
            gateway 10.10.60.1
            vlan_raw_device eno2
            metric 10

        auto vmbr0...
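
    A common pattern for putting VMs on the same VLAN as the host is a VLAN-aware bridge, with the host's address moved onto the bridge. A minimal sketch of how the truncated vmbr0 stanza could continue (the bridge layout below is my assumption, not the poster's actual config):

        auto vmbr0
        iface vmbr0 inet manual
            bridge-ports eno2
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

        # assumed: the host address moves to a VLAN interface of the
        # bridge, replacing the vlan60-on-eno2 stanza above
        auto vmbr0.60
        iface vmbr0.60 inet static
            address 10.10.60.230/24
            gateway 10.10.60.1

    VMs attached to vmbr0 with VLAN tag 60 then share the host's VLAN.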
  2. zfs expansion

    According to https://github.com/openzfs/zfs/pull/12225, zfs expansion is a feature coming soon. So let's look at this zpool status rpool output:

        # zpool status rpool
          pool: rpool
         state: ONLINE
          scan: scrub repaired 0B in 01:10:22 with 0 errors on Sun Aug 14 01:34:23 2022
        config:

            NAME...
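
    For context, that PR implements RAIDZ expansion by attaching a single disk to an existing raidz vdev. A sketch of the expected usage once the feature lands (the vdev and device names are placeholders, not this pool's layout):

        # grow an existing raidz vdev by one disk
        zpool attach rpool raidz1-0 /dev/disk/by-id/ata-NEWDISK
        zpool status rpool    # reports expansion progress while the reflow runs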
  3. zpool upgrade - error

    TL;DR: I just ran into the same problem. The only thing I had to do was switch the boot mode in the BIOS to UEFI.
  4. 2 hosts in my cluster are accessed over their public ip

    I just set up nginx as a reverse proxy for pveproxy on all my cluster hosts, and added this to /etc/default/pveproxy:

        ALLOW_FROM=127.0.0.1,10.10.60.0/24
        DENY_FROM=all
        POLICY=allow

    10.10.60.0/24 is the management VLAN; all proxmox hosts have an entry in /etc/pve/corosync.conf with a 10.10.60 ip...
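
    A minimal sketch of such an nginx reverse proxy in front of pveproxy, which listens on port 8006 (the server_name and certificate paths are assumptions, not the poster's setup):

        server {
            listen 443 ssl;
            server_name pve.example.com;            # assumed name
            ssl_certificate     /etc/pve/local/pve-ssl.pem;
            ssl_certificate_key /etc/pve/local/pve-ssl.key;
            location / {
                proxy_pass https://127.0.0.1:8006;
                proxy_http_version 1.1;
                proxy_set_header Upgrade $http_upgrade;   # keep noVNC websockets working
                proxy_set_header Connection "upgrade";
                proxy_buffering off;
            }
        }

    With ALLOW_FROM limiting pveproxy to 127.0.0.1 and the management VLAN, only nginx faces the public interface.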
  5. problem loading new apparmor profiles because timestamps were in the future

    I can't remember nor reproduce the exact error messages, but basically the new apparmor lxc profiles were not loaded after running systemctl reload apparmor.service. You will also see this in dmesg and when you run lxc-start -F 123, even though the apparmor config files are identical to the...
  6. problem loading new apparmor profiles because timestamps were in the future

    TL;DR: make sure the time is properly configured, or at least in the past, before proceeding with the install. This is probably something the installer itself should do. I noticed my apparmor profiles weren't loading on one of the 2 new, identical machines I set up. After lots of head scratching I...
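
    A minimal sketch of the kind of after-the-fact fix this implies (the use of timedatectl and the find invocation are my suggestion, not commands from the post):

        # fix the clock first, then re-stamp any profile files dated in the future
        timedatectl set-ntp true
        find /etc/apparmor.d -newermt now -exec touch {} +
        systemctl reload apparmor.service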
  7. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    Figured it out: I had set up channel bonding, which always appeared to work but in reality caused a few percent of packet loss under load; only corosync 3 really has problems with that. I had to enable the bonding on the Cisco switch as well.
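
    A minimal sketch of a matching LACP setup on the Proxmox side (the member NICs are assumptions; the point is that the Cisco ports must be in a corresponding port-channel):

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2    # assumed member NICs
            bond-mode 802.3ad        # LACP; requires "channel-group N mode active"
            bond-miimon 100          # on the matching Cisco switch ports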
  8. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    The main server redbaron has 10G NICs, and there was some noticeable packet loss going on. It seems to be a networking issue.
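
    One quick way to make that kind of loss visible (the flood ping is my suggestion, not from the thread; the address is a placeholder for a corosync ring IP):

        # needs root; the summary line reports the loss percentage
        ping -f -c 10000 10.10.60.9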
  9. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    I just had to restart the main production server's corosync again.

        Sep 18 10:15:20 redbaron corosync[27598]: [KNET ] host: host: 7 has no active links
        Sep 18 10:15:20 redbaron corosync[27598]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
        Sep 18 10:15:20 redbaron corosync[27598]...
  10. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    Here are some typical logs from when a single host is having a problem. After I restart all corosync processes it's usually quiet for, say, half an hour, and then this begins. The missing host in this case crashed during the night, and it's our main production server, so I had to get up to reboot it in...
  11. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    Here is the pveversion -v output; the journalctl output I will post when it happens.

        proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
        pve-manager: 6.0-7 (running version: 6.0-7/28984024)
        pve-kernel-5.0: 6.0-7
        pve-kernel-helper: 6.0-7
        pve-kernel-5.0.21-1-pve: 5.0.21-2
        pve-kernel-5.0.18-1-pve...
  12. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    Well, my joy is short-lived. It's running into problems again; back to the drawing board. Do you spot anything out of place in the corosync.conf?

        logging {
          debug: off
          to_syslog: yes
        }

        nodelist {
          node {
            name: batman
            nodeid: 1
            quorum_votes: 1
            ring0_addr: 10.10.60.9
          }...
  13. [SOLVED] proxmox 6: corosync 3 problem caused by unfinished bonding configuration.

    On 3 of our 7 cluster members I had /etc/hosts entries like this:

        10.10.10.100 host100   # normal VLAN, this host
        10.10.60.100 host100   # corosync VLAN, this host
        10.10.60.101 host101   # corosync VLAN, another host

    etc., etc. After removing the normal VLAN line from the hosts file and restarting...
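
    The problem with that layout: the same hostname maps to two addresses, so name resolution can hand corosync the wrong one. A sketch of the cleaned-up file implied by the fix (one address per name, addresses as in the post):

        10.10.60.100 host100   # corosync VLAN, this host
        10.10.60.101 host101   # corosync VLAN, another host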
  14. [SOLVED] Zpools not mounting on boot

    On my server /zpool contained /zpool/subvol-102-disk-1/dev and /zpool/subvol-103-disk-1/dev, resulting in a failed zfs-mount.service and the lxc containers not running. After running 'rm -rf /zpool/*', zfs-mount.service was able to work properly. Thanks for all the hints in this posting.
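
    For anyone hitting the same thing: ZFS (with overlay=off, the old default) refuses to mount a dataset over a non-empty directory, so stray files under the mountpoint break zfs-mount.service. A minimal sketch of the check (the commands are my suggestion; /zpool is the path from the post):

        systemctl status zfs-mount.service   # see why the unit failed
        ls -A /zpool                         # mountpoint must be empty
        # only with the pool NOT mounted there: clear leftovers, then retry
        rm -rf /zpool/*
        systemctl restart zfs-mount.service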
