Yes. At scale the cleanest approach is to query the PVE REST API and either consume it directly or wrap it with a small script. The one-call overview endpoint is "pvesh get /cluster/resources --type vm", which returns every VM and container across...
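Since the question is about scripting this, here is a minimal sketch. The pvesh call and endpoint are real; the sample data and formatting below are only illustrative stand-ins (hypothetical VM names and nodes) so the shape of a small wrapper is visible without a cluster at hand.

```shell
#!/bin/sh
# On a cluster node the real call would be (JSON output for scripting):
#   pvesh get /cluster/resources --type vm --output-format json
# Hypothetical, abridged sample of that data, one resource per line:
sample='101 web01 pve1 running
102 db01 pve2 stopped'

# Render a small overview table from the fields:
printf '%-6s %-10s %-6s %s\n' VMID NAME NODE STATUS
echo "$sample" | while read -r vmid name node status; do
  printf '%-6s %-10s %-6s %s\n' "$vmid" "$name" "$node" "$status"
done
```

In practice you would replace the sample with the JSON output and parse it properly (e.g. with jq, which is not installed on PVE by default).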
Hi,
There are two issues in your config; the main one is MTU flapping. KNET's path MTU discovery (PMTUD) runs globally across all links. When a node rejoins on link 0 (eno1, MTU 1500), KNET resets the global PMTUD process for that link. While it's...
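If that is indeed the culprit, one possible workaround is to pin the knet MTU instead of letting PMTUD renegotiate it. This is only a sketch: it assumes corosync 3.1 or newer (where the totem.knet_mtu option exists), and the value is a placeholder that should match the smallest MTU among your links.

```
totem {
  # ... existing options ...
  # Pin the MTU to the smallest MTU among all configured links
  # (here 1500, matching eno1), so a rejoining node does not
  # trigger a global PMTUD renegotiation.
  knet_mtu: 1500
}
```

After changing this on all nodes and restarting corosync, link status can be checked with corosync-cfgtool -s.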
Good night. That looks like a different problem, since USB disks should not be used in a ZFS pool. It could of course be any number of other issues: a failed USB controller, a bad cable, and so on and so forth.
That was my first thought as well, but since @andreisrr asked about the "From: " (note the colon) header, I believe SPF won't help: SPF verifies only the "From " (note: without colon) sender, i.e. the MAIL FROM: address.
To verify (also) "From: "...
We have been running a cluster of 8 Proxmox VE nodes with OCFS2 on a SAN LUN for several years.
It works, but it is not officially supported. There have been some issues with OCFS2 even in recent kernels, as development of OCFS2 seems to have come to a...
I found something else in journalctl (on proxmox host) when the VM was stopped:
Apr 11 23:25:40 p** QEMU[4172743]: kvm: ../block/io.c:444: bdrv_drain_assert_idle: Assertion `qatomic_read(&bs->in_flight) == 0' failed.
It may be important that I...
There is nothing in dmesg; the VM just suddenly stops without any errors inside the VM, and there is no kernel panic or anything on VNC, because the VM is stopped, not in an error state. I have similar VMs (same size, system, configuration...) on PVE 8 and they back up...
I've got a ZFS array of 2TB disks. One of them has failed.
I made this ZFS via the GUI. Is there a way to replace the disk in the GUI?
If not, is this the correct command to do it via text command? (from this wiki)
# zpool replace -f...
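Not a GUI answer, but for the command-line path, the usual workflow looks like the sketch below. Pool and device names are placeholders; take the real ones from your own `zpool status` output before running anything.

```shell
#!/bin/sh
# Placeholders -- substitute your actual pool and disk IDs:
POOL="rpool"
OLD="/dev/disk/by-id/ata-OLDDISK-SERIAL"
NEW="/dev/disk/by-id/ata-NEWDISK-SERIAL"

# 1. Identify the FAULTED/UNAVAIL device:
#      zpool status "$POOL"
# 2. Replace it (-f forces use of a disk that may carry an old label):
#      zpool replace -f "$POOL" "$OLD" "$NEW"
# 3. Watch the resilver progress:
#      zpool status -v "$POOL"
echo "would run: zpool replace -f $POOL $OLD $NEW"
```

One caveat: if this is the boot pool on a PVE host, the replacement disk also needs partitioning and bootloader setup (e.g. via proxmox-boot-tool), which a plain zpool replace does not handle.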
I use headscale, too.
But establishing a common network between my otherwise segregated PVEs means circumventing my inner firewall. If I were willing to do that, it would be easier to remove the firewall than to make an effort to circumvent...
There are also alternatives to Tailscale that work similarly (they are all based on WireGuard), like NetBird or Headscale, and there are even more. Personally I use Headscale, which is an open-source implementation of Tailscale's protocol.
For your...
There is no additional PVE involved ;-)
The "external relay" is just a commonly reachable rendezvous point used to establish the tunnel. All payload traffic would then go through this device, which vastly influences the achievable bandwidth...
Hmm, “classic” is probably the right word here, in the sense that it’s usually not done this way anymore nowadays ;)
Isn’t that a bit overkill for a homelab? I mean, unless you are trying to replicate a large enterprise environment with...
I'm wondering why you don't just put the VMs in the DMZ on this PVE instead of the entire PVE? Don't you trust the VM isolation?
I'm asking because if you had all the PVEs on the same management network and only exposed the VMs to the...
My personal choice would be to establish a Wireguard tunnel.
My preferred method is to handcraft something like https://www.wireguard.com/quickstart/. But this requires one endpoint to be able to reach the other one; in a single direction is...
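For reference, the handcrafted quickstart setup boils down to a pair of configs like the sketch below. Every key, address, and hostname is a placeholder, and it assumes the reachable endpoint accepts UDP on port 51820:

```
# /etc/wireguard/wg0.conf on the reachable endpoint (placeholders throughout)
[Interface]
Address    = 10.10.10.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey  = <client-public-key>
AllowedIPs = 10.10.10.2/32

# On the peer behind the firewall, the [Peer] section instead points
# Endpoint at the reachable side and keeps the tunnel alive through
# NAT/stateful firewalls:
#   Endpoint = reachable.example:51820
#   PersistentKeepalive = 25
```

The PersistentKeepalive setting is what lets the unreachable side initiate and maintain the tunnel in one direction only.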
Yes, that would work, if both PVEs shared the same management network. But they don't. The networks outside my inner firewall and inside are totally segregated. Nothing goes in. That's the problem in my case. And, yes, I could put them on the...