I've found the namespaces... not that "great" (though I am using them in some cases), and have opted instead to create separate datastores for my "DCs"/clusters, even when they're on the same ZFS pool. At least that way I get a better indication of the storage usage for each DC, which you won't see easily in PBS...
1) Check the disk models to see whether they are CMR or shingled/SMR - the SMR drives (typically those with something like 128MB-256MB or more of cache) are known to break horribly with ZFS.
2) Personally, I'd rather set up a dRAID2 or a RAIDZ3 given your number of disks, *AND* have an SSD/NVMe SLOG and...
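A very rough sketch of what that could look like on the CLI (device names, pool name and vdev width are placeholders, not from this thread):

```bash
# Check whether a disk is CMR or SMR: smartctl prints the exact model,
# which you then look up against the vendor's CMR/SMR lists.
smartctl -i /dev/sdb | grep -i -E 'model|rotation'

# Example layout only: a 12-disk RAIDZ3 with a mirrored NVMe SLOG
# (dRAID2 being the alternative layout mentioned above).
zpool create tank \
    raidz3 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg \
           /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl /dev/sdm \
    log mirror /dev/nvme0n1 /dev/nvme1n1
```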
Thank you!
Having those entries added is a good starting point for checking/doing things.
What I'm after/doing just... might not work in the SDN setup, as I don't "want" the PVEs to be the routers/gateways, but rather a dedicated VM (FortiGate/etc.). Still, thank you for the pointers to consider.
Good day,
Is there perhaps a template, documentation, or examples for implementing a custom IPAM for the Proxmox SDN side?
I'm looking at https://spritelink.github.io/NIPAP/ for IPAM as we roll out the next steps of our network, so obviously the question comes up w.r.t. PVE not (yet) supporting...
The last time *I* used pfSense/OPNsense, the reason you disabled hardware acceleration and avoided the VirtIO NICs was that DHCP/UDP had trouble with the checksums not being added to the packets; certain parts of the stack would then drop those packets, so you had to stick with E1000...
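For reference, the Linux-side equivalent of that old workaround was to turn off the checksum offloads on the guest's tap interface on the PVE host (the tap name below is just an example); inside pfSense the corresponding knob was, if I recall correctly, the "Disable hardware checksum offload" option.

```bash
# Example tap interface for the firewall VM's first NIC on the PVE host.
# Disable rx/tx checksum offloading so packets carry proper checksums.
ethtool -K tap100i0 rx off tx off
```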
The case where I used that verification was when I needed to bootstrap a remote (cross-Atlantic) PBS, and the single-connection synchronization was just... plain molasses on a freezing cold winter's day...
The easiest way was to spin up multiple rsync sessions, one for each of the directories...
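Roughly along these lines (paths and remote host are placeholders, not the actual ones I used):

```bash
#!/bin/bash
# Start one rsync per top-level directory of the datastore, so several
# transfers run in parallel over the slow trans-Atlantic link.
SRC=/mnt/datastore/backup
DST=root@remote-pbs:/mnt/datastore/backup

for dir in "$SRC"/*/; do
    name=$(basename "$dir")
    rsync -aH --partial --info=progress2 "$dir" "$DST/$name/" &
done
wait   # block until all the background rsync sessions have finished
```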
Okay, it seems I've found the "culprit":
These are the values set inside the Devuan 4.0 container:
/proc/1/task/1/mounts:devpts /dev/pts devpts rw,nosuid,noexec,relatime,mode=600,ptmxmode=000 0 0
And for a Debian container:
devpts /dev/ptmx devpts...
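For anyone wanting to compare the two themselves, something like this (the CTIDs are just examples) shows the devpts mount each container actually gets:

```bash
# Example CTIDs: dump the devpts mount options as seen from inside the
# Devuan container and the Debian container respectively.
pct exec 999996 -- grep devpts /proc/1/task/1/mounts
pct exec 999997 -- grep devpts /proc/1/task/1/mounts
```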
Good day,
I've been deploying Devuan 4.0 images (PVE 7.4) for the past 4 months and had noticed `byobu` strangeness, but only this week did it cause problems that I had to get to the bottom of.
I've tried all the settings in the GUI panel for UNprivileged containers, and eventually...
that, when you look at the replies to those posts, using the PVE 7.4 GUI to configure Cloud-Init for Debian 12 cloud-init images doesn't configure what's needed, and the VM doesn't boot right
I need to execute a script that allows traffic through the firewall and opens port 80 inbound to the PVE (and next the PBS), and then, once done, closes the ports etc.
Is there a current way to do it in PVE 7.x?
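One approach I'm considering (an untested sketch using the datacenter-level firewall rules through pvesh; the rule position for the delete would need to be checked first):

```bash
# Untested sketch: add a temporary ACCEPT rule for inbound TCP/80 at the
# datacenter level, run the task that needs it, then remove the rule again.
pvesh create /cluster/firewall/rules --action ACCEPT --type in \
    --proto tcp --dport 80 --enable 1 --comment "temporary HTTP"

# ... do whatever needs port 80 open ...

# List the rules to find the temporary rule's position, then delete it.
pvesh get /cluster/firewall/rules
pvesh delete /cluster/firewall/rules/0   # assuming it landed at position 0
```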
Hmmm... it seems to also botch the authorized_keys when I do find a "valid" multiple-key pub file:
==> subvol-999996-disk-0/root/.ssh/authorized_keys <==
# --- BEGIN PVE ---
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKrogu2+b8HXlhr+M7LuCRdPKwpXSYx0ZJeMDrB7E50M ssh-ed25519...
I'm bumping my head against this problem where I need to add several keys for containers at creation time, but keep getting the above error on PVE 7.4 when doing it via the WebUI/GUI interface.
What is the expression/criteria to match, or how do I make it work?
Is it actually supported? As it seems to...
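A possible fallback (which I haven't fully verified) would be to do it from the CLI instead of the GUI; a rough sketch, where the CTID, template name, storage and key file path are just examples:

```bash
# keys.pub contains the multiple public keys, one per line, and is passed
# at creation time instead of being pasted into the WebUI field.
pct create 999996 local:vztmpl/devuan-4.0-standard_4.0_amd64.tar.gz \
    --hostname test-ct \
    --storage local-zfs \
    --ssh-public-keys /root/keys.pub \
    --unprivileged 1
```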
There just isn't anything for Proxmox to respond about: LXCs don't get, and aren't expected to get, live migrations; that will only ever be the QEMU/KVM VMs.
PS: Does K8s provide live migrations? Using which container tech?
Those are the normal states that a bridge interface goes through (especially when spanning tree is configured) as it starts up.
Blocking state: don't forward any traffic, to prevent loops.
Forwarding: this interface doesn't create a loop, or spanning tree chose it to forward traffic, so enable...
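If you want to watch an interface walk through those states on the PVE host, something along these lines works (the bridge name is just an example):

```bash
# Show the STP state (blocking/listening/learning/forwarding) of bridge ports,
# and whether STP is enabled on the example bridge itself.
bridge -d link show
cat /sys/class/net/vmbr0/bridge/stp_state   # 0 = STP off, 1 = STP on
```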
Best advice:
Set up a firewall that has all the IPs on it, and then have the firewall route to the LXCs on their own VLANs.
Other than that, you will have to create a virtual IP/MAC for those IPs and set the LXCs' interfaces to that, on the same bridge as the outside IP/interface.
Yeah, these scripts aren't "guaranteed"; the better/more reliable method is to reboot the Proxmox host whenever something changes on Open vSwitch, or for other host-network-related interface changes.