After banging my head for about a day trying to figure out why a CentOS 7 VM created from the latest official cloud-init image had no networking at all, on both a Proxmox 7 and a Proxmox 8 server with routed networking at Hetzner, here's a short guide that might benefit others...
As always, your...
As of October 2023, there is an updated release of "libssl1.1", and we can also use a repo instead of manually installing the relevant .deb file with dpkg...
# Install the Debian Bullseye repo key first (this repo is "closest" to Ubuntu 22.04 "jammy")
wget...
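For illustration only, a sketch of that approach on Ubuntu 22.04 "jammy": the repo line and key URL below are assumptions based on Debian's standard security-archive layout, so verify them against current Debian documentation before use.

```shell
# Sketch: fetch the Debian 11 (Bullseye) security archive key (armored .asc keys
# are accepted in trusted.gpg.d by modern apt) and register the repo.
wget -qO /etc/apt/trusted.gpg.d/debian-11-security.asc \
    https://ftp-master.debian.org/keys/archive-key-11-security.asc
echo "deb http://security.debian.org/debian-security bullseye-security main" \
    > /etc/apt/sources.list.d/bullseye-security.list

# Install libssl1.1 from the Bullseye security repo.
apt update
apt install libssl1.1
```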
Circling back to this...
What's the point of configuring backup jobs via the web UIs (PBS & Proxmox) if you can't pass any additional parameters to the underlying CLI commands?
And if I add backup jobs manually in a shell script, it rather defeats the purpose of using a UI in the first...
Now, in 2022, since "--entries-max" can apparently be passed to the proxmox-backup-client utility according to the docs, is there perhaps a general config file for that utility?
On a new PVE 7 installation with 7 containers only, I hooked up a PBS (v2) server and when the time came, PVE started backing up the CTs to the PBS server (as expected).
So far so good.
Since these CTs were migrated from a PVE 6 system a few hours before the backup, I (stupidly?) ran a "pct...
Yesterday I went into Proxmox to update the system and got these errors:
Ign:1 http://ftp.debian.org/debian stretch InRelease
Get:2 http://ftp.debian.org/debian stretch-updates InRelease [91.0 kB]
Get:3 http://security.debian.org stretch/updates InRelease [94.3 kB]
Ign:3...
As a side note, let me add that after a reboot, networking is lost and the container's external IP becomes the Proxmox server's IP. But if I run "ifdown eth0 && ifup eth0" right after the reboot, the container picks up the correct networking configuration.
Is there any equivalent trick for Red Hat based OSs (CentOS 7 in particular) where the path to the interfaces is "/etc/sysconfig/network-scripts"?
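For what it's worth, CentOS 7 ships the same ifup/ifdown helpers; they simply read the ifcfg files under /etc/sysconfig/network-scripts, so an equivalent might look like this (assuming the interface is named eth0, which may differ on your system):

```shell
# CentOS 7: ifup/ifdown read /etc/sysconfig/network-scripts/ifcfg-eth0
ifdown eth0 && ifup eth0

# ...or bounce the legacy network service as a whole:
systemctl restart network
```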
Thank you.
I've actually resolved mine; it seems to boil down partly to the hosting provider and partly, I guess, to stricter networking on Proxmox's or Debian's side.
I had to remove the line "pointopoint 195.154.150.YYY" from the eth0 block, as this was causing the primary issue...
Same here. Nothing changed in the network configuration, but after today's updates to system packages (incl. the PVE kernel etc.), CT networking is totally broken.
Typical CT networking for me is like this, where 62.210.7.XXX is the public IP of a CT and 195.154.150.YYY is the gateway:
auto lo
iface lo inet...
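(For reference, a complete /etc/network/interfaces of this shape, with the same placeholder IPs, might look like the sketch below; the exact fields are an assumption based on typical failover-IP setups, so check your provider's documentation.)

```
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 62.210.7.XXX
    netmask 255.255.255.255
    pointopoint 195.154.150.YYY
    gateway 195.154.150.YYY
```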
And for the record, the CSF setup is identical for the VMs I set up. However, the issue I originally mentioned happens between VMs across Proxmox v4 servers; it doesn't happen between a VM on a Proxmox v3 server and a VM on a Proxmox v4 host (or vice versa)... THAT is the weird part.
I have encountered some very strange networking behaviour with the latest 4.1 build of Proxmox, and I'd like the pros' insight on this if possible.
A (quick) little background first...
I currently run 2 x Proxmox v3 and 5 x Proxmox v4.1 hosts at Online.net [FR], and one Proxmox v4 host at i3D.net [NL]...