Hi there,
I have a host with a config like this:
auto lo
iface lo inet loopback

iface eno2 inet manual

auto vlan60
iface vlan60 inet static
    address 10.10.60.230
    netmask 255.255.255.0
    gateway 10.10.60.1
    vlan_raw_device eno2
    metric 10

auto vmbr0...
According to https://github.com/openzfs/zfs/pull/12225, RAIDZ expansion is a ZFS feature coming soon.
So let's look at this zpool status rpool output:
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 01:10:22 with 0 errors on Sun Aug 14 01:34:23 2022
config:

        NAME...
I just set up nginx as a reverse proxy for pveproxy on all my cluster hosts and added this to /etc/default/pveproxy:
ALLOW_FROM=127.0.0.1,10.10.60.0/24
DENY_FROM=all
POLICY=allow
10.10.60.0/24 is the management VLAN; all Proxmox hosts have an entry in /etc/pve/corosync.conf with a 10.10.60 IP...
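For reference, a minimal nginx server block for this kind of setup might look like the sketch below. The only thing taken from Proxmox itself is that pveproxy listens on port 8006 and ships its certificate under /etc/pve/local/; the server_name is a placeholder.

```nginx
# Hypothetical reverse proxy in front of pveproxy (which listens on 8006).
# server_name is illustrative, not from this thread.
server {
    listen 443 ssl;
    server_name pve.example.com;

    ssl_certificate     /etc/pve/local/pveproxy-ssl.pem;
    ssl_certificate_key /etc/pve/local/pveproxy-ssl.key;

    location / {
        proxy_pass https://127.0.0.1:8006;
        proxy_http_version 1.1;
        # Needed for the noVNC/websocket consoles.
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With ALLOW_FROM=127.0.0.1,... in /etc/default/pveproxy as above, pveproxy then only accepts connections from the local nginx and the management VLAN.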
I can't remember nor reproduce the exact error messages, but basically the new AppArmor LXC profiles were not loaded after running systemctl reload apparmor.service. You will also see this in dmesg, and if you run lxc-start -F 123, even though the AppArmor config files are identical to the...
TL;DR: make sure the time is properly configured, or at least not ahead of real time, before proceeding with the install. This is probably something the installer itself should do.
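The TL;DR above can be sketched as a pre-install sanity check. Both epoch values below are made-up examples, not from this post; in practice the reference would come from NTP and the local value from `date -u +%s`.

```shell
#!/bin/sh
# Warn when the machine's clock is ahead of a trusted reference, the
# situation described in the TL;DR. Both timestamps are illustrative.
local_time=1663488000      # what the machine's clock claims (example)
reference_time=1663401600  # trusted reference time (example)
if [ "$local_time" -gt "$reference_time" ]; then
    echo "clock is ahead by $((local_time - reference_time)) seconds"
fi
```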
I noticed my AppArmor profiles weren't loading on one of the two new, identical machines I set up.
After lots of head scratching I...
Figured it out: I had set up channel bonding, which always appeared to work but in reality caused a few percent packet loss under load. Apparently only corosync 3 really has problems with that. I also had to enable the bonding on the Cisco switch.
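For context, an LACP bond in /etc/network/interfaces looks roughly like this. The interface names and the 802.3ad mode are my assumptions; the post doesn't say which bond mode was actually used.

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3
```

The point of "enable the bonding on the Cisco switch as well" is that both ends must agree: if only the host side bonds, you can get exactly the kind of intermittent packet loss under load described above.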
I just had to restart the main production server's corosync again.
Sep 18 10:15:20 redbaron corosync[27598]: [KNET ] host: host: 7 has no active links
Sep 18 10:15:20 redbaron corosync[27598]: [KNET ] host: host: 5 (passive) best link: 0 (pri: 1)
Sep 18 10:15:20 redbaron corosync[27598]...
Here are some typical logs from when a single host is having a problem. After I restart all corosync processes it's usually quiet for, say, half an hour, and then this begins. The missing host in this case crashed during the night, and it's our main production server, so I had to get up to reboot it in...
Here is the pveversion -v output. I will post the journalctl output when it happens again.
proxmox-ve: 6.0-2 (running kernel: 5.0.21-1-pve)
pve-manager: 6.0-7 (running version: 6.0-7/28984024)
pve-kernel-5.0: 6.0-7
pve-kernel-helper: 6.0-7
pve-kernel-5.0.21-1-pve: 5.0.21-2
pve-kernel-5.0.18-1-pve...
Well, my joy was short-lived: it's running into problems again. Back to the drawing board. Do you spot anything out of place in the corosync.conf?
logging {
  debug: off
  to_syslog: yes
}

nodelist {
  node {
    name: batman
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.60.9
  }...
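For comparison, the part of corosync.conf people usually check alongside the nodelist is the totem section. The values below are common Proxmox/corosync 3 defaults, assumed rather than taken from this post:

```
totem {
  cluster_name: examplecluster
  config_version: 7
  ip_version: ipv4
  secauth: on
  version: 2
  interface {
    linknumber: 0
  }
}
```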
On 3 of our 7 cluster members I had /etc/hosts entries like this:
10.10.10.100 host100 # normal VLAN, this host
10.10.60.100 host100 # corosync VLAN, this host
10.10.60.101 host101 # corosync VLAN, another host
etc, etc.
After removing the normal VLAN line from the hosts file and restarting...
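After the fix, only the corosync-VLAN entries remain for those hostnames, so each host resolves unambiguously to its 10.10.60 address (repeating the example IPs quoted above):

```
10.10.60.100 host100 # corosync VLAN, this host
10.10.60.101 host101 # corosync VLAN, another host
```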
On my server, /zpool contained /zpool/subvol-102-disk-1/dev and /zpool/subvol-103-disk-1/dev, which caused zfs-mount.service to fail and the LXC containers not to start. After running 'rm -rf /zpool/*', zfs-mount.service was able to work properly again.
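A less destructive first step than 'rm -rf /zpool/*' is to list what is actually blocking the mounts. The sketch below simulates a blocked mountpoint in a temp directory, since the real check would run against /zpool; the paths are stand-ins.

```shell
#!/bin/sh
# ZFS refuses to mount a dataset onto a non-empty directory, which is
# what made zfs-mount.service fail above. This simulates the check in a
# temp dir; on a real system you would point it at /zpool instead.
mp=$(mktemp -d)        # stand-in for e.g. /zpool/subvol-102-disk-1
mkdir -p "$mp/dev"     # stale content blocking the mount
if [ -n "$(ls -A "$mp")" ]; then
    echo "mountpoint not empty: $mp"
fi
rm -rf "$mp"
```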
Thanks for all the hints in this posting.
I had the same problem, found this thread just now, and fixed it like this. I hope it works for you as well.
I opened a terminator window to all hosts, put them in broadcast mode so that whatever I type goes to every window, then ran
tail -f /var/log/daemon.log | grep...
I just followed https://pve.proxmox.com/wiki/Cluster_Manager#_remove_a_cluster_node to the letter
I had to reinstall the node hab01, so I removed it as instructed, reinstalled it, and brought it up as described. But now I still see both the old and the new node in a different failed...
After adding a new node on which I installed ZFS afterwards, I managed to get the proper incantation right after some testing:
zpool create -f rpool mirror sda sdb
pvesm add zfspool local-zfs -pool rpool
But after that, replication on all nodes fails with:
zfs error: For the delegated...