Sounds like your PDM host cannot reach the Google host here. Are you sure PDM is configured with sufficient access to the internet?
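If you want to verify quickly, here is a rough sketch of an outbound check from the PDM host's shell (the target host is just an example; use whatever host the error message names, and adjust for any proxy you use):

```
# Does DNS resolve and is outbound HTTPS possible from the PDM host?
getent hosts www.googleapis.com
curl -sI --max-time 10 https://www.googleapis.com >/dev/null \
  && echo "outbound HTTPS OK" \
  || echo "no route, DNS failure, or blocked by firewall/proxy"
```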
We are aware of that, and a fix has already been sent to the mailing list [1].
[1]...
That is because Proxmox is not using auto settings but inet manual, so there is no auto for IPv4 and no auto for IPv6.
Not the default I would choose, but it is what it is.
Fortunately, you can change this. Proxmox VE uses the Linux network stack...
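As a rough sketch of what that looks like in /etc/network/interfaces (the interface name and the choice of DHCP are just examples; back up the file and adapt to your setup before applying):

```
# Default PVE style: the NIC is defined but neither brought up automatically
# nor auto-configured:
#   iface eno1 inet manual

# Changed so the interface comes up at boot and gets an address via DHCP:
auto eno1
iface eno1 inet dhcp

# Apply with "ifreload -a" (ifupdown2) or a reboot.
```

Note that on a typical PVE install the physical NIC is enslaved to a bridge (vmbr0) which carries the address, so you would usually change the bridge stanza rather than the NIC itself.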
When you run the pvesm add nfs command, it does two things: it mounts the NFS export and adds the storage definition to PVE.
When you run the pvesm remove command, it only removes the storage definition from PVE. It does not unmount the NFS...
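As a rough sketch of that behaviour (storage name, server address, and export path are made up; adapt to your environment):

```
# Adds the storage definition AND mounts the export under /mnt/pve/<storeid>
pvesm add nfs mynfs --server 192.168.1.50 --export /srv/nfs/pve --content images,backup

# Removes only the storage definition; the mount itself is left in place
pvesm remove mynfs

# Unmount it yourself if you want it gone right away
umount /mnt/pve/mynfs
```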
Of course, there are multiple ways to set this up, and there are definitely more details than would fit into one post. One possible quick-and-dirty approach (a config sketch follows the list):
take a new unused switch (or use VLANs)
connect one NIC on each PVE node here
connect the...
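As a very rough sketch of the node-side part, assuming the spare NIC is eno2 and you pick an otherwise unused subnet such as 10.10.10.0/24 (one address per node):

```
# /etc/network/interfaces on each node (names and addresses are examples)
auto eno2
iface eno2 inet static
    address 10.10.10.11/24   # use .12, .13, ... on the other nodes

# Then use that network as the Corosync link when forming the cluster, e.g.:
#   pvecm create mycluster --link0 10.10.10.11
#   pvecm add 10.10.10.11 --link0 10.10.10.12   # run on the joining node
```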
@brightrgb the docs are a little light on detail.
We consolidated some info and provide a Python and Bash example in the hookscripts folder here: https://github.com/Weehooey/proxmox-lab-notes
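For a quick idea of the shape, here is a minimal Bash hookscript sketch: the argument handling and the four phases follow the PVE docs, while the echo lines are just placeholders. Place it on a snippets-enabled storage, make it executable, and attach it with something like qm set <vmid> --hookscript local:snippets/example-hook.sh.

```
#!/usr/bin/env bash
# Minimal PVE hookscript: PVE calls it as <script> <vmid> <phase>
vmid="$1"
phase="$2"

case "$phase" in
    pre-start)  echo "VM $vmid is about to start" ;;
    post-start) echo "VM $vmid has started" ;;
    pre-stop)   echo "VM $vmid is about to stop" ;;
    post-stop)  echo "VM $vmid has stopped" ;;
    *)          echo "unknown phase: $phase" >&2; exit 1 ;;
esac
exit 0
```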
Please share benchmarks. Until then, I am guessing that you eventually got the VM to address the socket housing the PCI link to the NIC. Generally speaking, adding sockets does no harm AT BEST, and usually slows down the machine by introducing...
So I went through this process this week and found many helpful resources, but no complete guide... maybe this will help someone:
https://github.com/hnzl62/xen-to-pve
Today, we announce the availability of a new archive CDN dedicated to the long-term archival of our old and End-of-Life (EOL) releases.
Effective immediately, this archive hosts all repositories for releases based on Debian 10 (Buster) and older...
When moving from standalone nodes to a cluster, the network you have to be most concerned about is the cluster network. The cluster network carries the Corosync traffic, and best practice is to have a dedicated 1Gbps network for just the cluster...
Hi All,
This issue has been fixed / problem resolved.
It looks like the EFI disk was the problem.
Live migration started working smoothly once I removed the EFI disk from the hardware configuration.
@weehooey-bh helped a lot with this case.
Here is the link which...
Yes, you're right. It works as intended.
The same is true for a vanished storage. Losing all virtual disks is surprisingly not considered a problem.
If you want this problem area to be more visible to the developers, you could open a Feature...
Hi y’all,
I’ve updated my Proxmox hardening guide, it now includes PVE 9 and PBS 4, in addition to PVE 8 and PBS 3. It continues to extend the CIS Debian benchmark with Proxmox specific hardening tasks.
Repo...
The above recap is so good it should be bookmarked by anyone designing Proxmox networking. I wanted to add one item here that wasn't really addressed and took me a while to figure out, and that is 'backup or failover' links and networking...
Some suggestions:
You can use bmon on your PVE host to determine which interface is handling the traffic.
When you run iperf3, use port numbers other than the default (see the sketch after this list).
Disconnect one of the links and test again. Do you get the speeds you are...
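For the iperf3 point, a quick sketch of running two parallel tests on separate non-default ports (addresses, ports, and duration are arbitrary examples):

```
# On the receiving node: one server instance per port
iperf3 -s -p 5201 &
iperf3 -s -p 5202 &

# On the sending node: two streams to the same host on different ports
iperf3 -c 192.168.1.21 -p 5201 -t 30 &
iperf3 -c 192.168.1.21 -p 5202 -t 30 &
wait
```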
You can give the PVE access token access only to read old backups and create new ones, but not remove/prune backups. You can combine this with a sync job on the PBS side to keep an offsite archive that is decoupled from the main PBS system (with...
@bytegenius You can file a feature request here: https://bugzilla.proxmox.com/
Once you do that, please post the link back here so people can follow it.
@lost_avocado where did you see this? Can you post a link to it?
How do you get the port...