The same occurs with AlmaLinux 9.2. I upgraded one of the cluster nodes to PVE 8 and migrated this LXC to it without any issues.
But when I run pve7to8 again on the upgraded node, it still reports:
WARN: Found at least one CT (30188) which does not support running in a unified cgroup v2...
I don't think it's a 50/50 chance of using the same link. When one link is fully saturated, it should use the other one. Otherwise it wouldn't make any sense to use a bond (802.3ad) if there is no gain in performance.
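For what it's worth, 802.3ad does not balance on load: the kernel picks the egress link by hashing packet headers, so any single flow stays pinned to one link no matter how saturated it is. What can be tuned is the hash input. A minimal fragment for /etc/network/interfaces, assuming a bond named bond0 (the name is an assumption, adjust to your setup):

```
iface bond0 inet manual
    bond-mode 802.3ad
    # layer3+4 hashes on IP addresses and ports, so several flows
    # between the same two hosts can spread across the links;
    # the default layer2 policy hashes only MAC addresses
    bond-xmit-hash-policy layer3+4
```

Even with layer3+4, one TCP stream will never exceed the speed of a single link; the gain is in aggregate throughput across many flows.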
Hi! I'm having some trouble setting up a bond on a BCM57504 via an S5224F-ON.
Network configuration:
auto lo
iface lo inet loopback
auto enp129s0f0np0
iface enp129s0f0np0 inet manual
auto enp129s0f1np1
iface enp129s0f1np1 inet manual
auto enp129s0f2np2
iface enp129s0f2np2 inet manual
auto...
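The quote cuts off before the bond stanza itself. For comparison, a minimal LACP bond over the first two ports plus a bridge might look like this (a sketch only; the bond name, bridge name, and addresses are common Proxmox defaults and placeholders, not taken from the truncated post):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp129s0f0np0 enp129s0f1np1
    bond-mode 802.3ad
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24
    gateway 192.0.2.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

On the S5224F-ON side, the corresponding switch ports must be members of a matching LACP port-channel, otherwise the bond will not negotiate.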
It's just a wild guess, but I think the interface type is determined from the name.
I would try renaming vmbr0.5 to vmbr5 in /etc/network/interfaces and reloading with ifreload -a.
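If the goal is a bridge carrying VLAN 5 traffic, the renamed stanza could look like this (a sketch; the uplink bond0 is an assumption, substitute your actual uplink interface):

```
auto vmbr5
iface vmbr5 inet manual
    # a VLAN 5 sub-interface of the uplink as the bridge port,
    # instead of naming the bridge itself like a VLAN device
    bridge-ports bond0.5
    bridge-stp off
    bridge-fd 0
```

The point is that a name like vmbr0.5 is parsed as "VLAN 5 on vmbr0", while vmbr5 is treated as a plain bridge.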
Hi, I have a 13-node PVE cluster connected via FC to storage.
I'm going to add another 4 PVE nodes as a Ceph storage cluster.
I would like to know whether it is better to have a separate 4-node cluster with Ceph, connected to the existing cluster as separate storage,
or should I add those 4 nodes with...
There is only an A record for "mail.example.com"; no AAAA record found.
This shouldn't normally be a problem. Most domains we communicate with don't have AAAA records, and delivery to those works.
Here is the error report from when it doesn't work:
Nov 4 09:58:08 pmg postfix/smtp[2812391]: B90D347EA8: to=<user@example.com>, relay=none, delay=0.01, delays=0.01/0/0/0, dsn=5.4.4, status=bounced (Host or domain name not found. Name service error for name=example.com type=AAAA: Host found but no...
Hi, I've experienced this issue many times in the past, and it's starting to get annoying. It looks like this is still not fixed in the current version of PMG. Is this such a low priority because nobody cares, or is the solution somewhat complex? The bug report hasn't been updated for three years now.
When no one replied, I posted a ticket to the customer portal, and here is the solution:
Enable invalid packets with nf_conntrack_allow_invalid on all cluster nodes in /etc/pve/nodes/$NODE_NAME/host.fw:
[OPTIONS]
nf_conntrack_allow_invalid: 1
Thank you Proxmox for your superb support!
Hi,
I have an IPVS direct-routing Load Balancer VM (LB) which only works when the target backend VM (e.g. B1) is on the same cluster node.
If LB is on a different node than the backend (B1), the TCP connection between the client (CL) and the backend (B1) cannot be established.
CL sends a SYN packet to LB, and LB...
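For context, a minimal IPVS direct-routing setup like the one described might be configured as follows (a sketch with hypothetical addresses: 10.0.0.100 as the VIP, 10.0.0.11 as B1):

```
# On the LB VM: create a virtual service and add B1 as a
# direct-routing ("gatewaying", -g) real server
ipvsadm -A -t 10.0.0.100:80 -s rr
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.11:80 -g

# On B1: accept traffic for the VIP without answering ARP for it
ip addr add 10.0.0.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```

With direct routing, the return traffic from B1 to CL bypasses the LB entirely, which is exactly the asymmetric flow that conntrack on the Proxmox firewall can classify as invalid and drop.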
I've experienced the same ugly issue today. The "workaround" of using offline migration first and booting on the other node works, and the VM can be migrated online after that. But it's really sad. It affects all VMs on one of my seven nodes. All nodes are fully upgraded.
I have exported an iSCSI LUN with multipath to a PBS server in the SAN.
How should I initialize this remote disk? It doesn't show up in the drive list.
# lsscsi
[1:0:0:0] cd/dvd QEMU QEMU DVD-ROM 2.5+ /dev/sr0
[2:0:0:0] disk QEMU QEMU HARDDISK 2.5+ /dev/sda
[3:0:0:0]...
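A multipath LUN won't appear in the PBS disk list as a plain drive; it typically has to be formatted and mounted manually before it can back a datastore. A sketch of the usual steps (the device mapper name, mount point, filesystem choice, and datastore name are all hypothetical):

```
# Inspect the multipath topology and find the mapped device
multipath -ll

# Create a filesystem on the multipath device and mount it
mkfs.xfs /dev/mapper/mpatha
mkdir -p /mnt/san-store
mount /dev/mapper/mpatha /mnt/san-store

# Register the mount point as a PBS datastore
proxmox-backup-manager datastore create san-store /mnt/san-store
```

Adding the mount to /etc/fstab (or a systemd mount unit) is needed so the datastore survives a reboot.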
This doesn't seem to be the case. I've set proxy_pass to a single host. Port 8006 works okay, but the SPICE proxy on 3128 does not:
server {
    listen <server_IP>:3128;
    include mime.types;
    server_name <FQDN>;
    location / {
        proxy_pass https://n1:3128;
        proxy_set_header Host $host...
How about the SPICE proxy, is it working for you? I've tried to proxy it through nginx, but without success:
upstream spice { ip_hash; server n1:3128; server n2:3128; server n3:3128; }
server {
    listen <server_IP>:3128;
    include mime.types;
    server_name <FQDN>;
    location / {
        proxy_pass...
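One likely reason the http-context config above fails: the SPICE proxy speaks HTTP CONNECT, which nginx's http proxy module does not forward. A TCP-level pass-through via the stream module is a sketch worth trying (assuming your nginx build includes the stream module; the upstream hosts n1..n3 are the ones from the config above):

```
stream {
    upstream spiceproxy {
        # pin each client to one node, similar to ip_hash in http context
        hash $remote_addr;
        server n1:3128;
        server n2:3128;
        server n3:3128;
    }
    server {
        listen 3128;
        proxy_pass spiceproxy;
    }
}
```

Since the stream module forwards raw TCP, the CONNECT request reaches the node's spiceproxy untouched; directives like mime.types and server_name don't apply at this layer.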