I have set up a test SDN configuration on a PVE cluster with OVS, following a simple zone example with SNAT and DHCP. A test VM gets assigned a DHCP IP, but is unable to get out to the Internet. The host on which this VM is located is routing to the Internet successfully.
Can the issue be...
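In case it helps others hitting the same symptom, these are the generic checks I would run on the PVE host for a VM that has a lease but no Internet. The subnet 10.0.1.0/24 is an assumption standing in for your vNet's subnet; the commands themselves are standard Linux tooling, not PVE-specific:

```shell
# 1. Is the host forwarding IPv4 at all? Should print "net.ipv4.ip_forward = 1".
sysctl net.ipv4.ip_forward 2>/dev/null || true

# 2. Is there a MASQUERADE/SNAT rule covering the vNet subnet?
#    (needs root; SDN SNAT is implemented as a POSTROUTING nat rule)
iptables -t nat -S POSTROUTING 2>/dev/null | grep -iE 'MASQUERADE|SNAT' || true

# 3. Does the host hold the vNet gateway address (10.0.1.x assumed)?
ip -br addr show | grep '10\.0\.1\.' || true
```

If step 2 comes back empty after applying the SDN configuration, the SNAT rule was never installed and that alone would explain the behaviour.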
Experimenting with PVE SDN, I noticed an issue: when a vNet is deleted and the configuration is applied, the corresponding files in /etc/dnsmasq.d are not removed. For example:
1. Create a zone called nz1
2. Create a vNet with DHCP
3. Delete everything
4. Still seeing files in...
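For anyone wanting to inspect or clean this up by hand, this is what I used. The per-zone layout (a directory and a .conf per zone) is what I observed on my host, not a documented interface, so verify paths on your system first:

```shell
# See what SDN left behind after the delete (zone "nz1" from the steps above):
ls -la /etc/dnsmasq.d/ 2>/dev/null || true

# Manual cleanup, only after confirming the zone is really gone from the SDN config:
ZONE=nz1
if ! grep -qs "$ZONE" /etc/pve/sdn/zones.cfg; then
    rm -rf "/etc/dnsmasq.d/$ZONE" "/etc/dnsmasq.d/$ZONE.conf"
    systemctl stop "dnsmasq@$ZONE" 2>/dev/null || true
fi
```

The guard against /etc/pve/sdn/zones.cfg is there so the cleanup is a no-op if the zone still exists.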
Thank you very much for the quick reply. I have the latest versions here, and located the Note Template tab in the backup job. After rerunning the backup job, I can now see the VM name under "Notes" in PVE, and under "Comment" in PBS, when expanded.
Would there be a way to display the name of a VM/CT in the list of backups, such as highlighted in the attached screenshot? If not, can an enhancement be made to do this? Seeing the configuration name in addition to the VM/CT numeric ID would be very helpful when recovering archived backups...
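As a side note for anyone scripting backups rather than using the GUI's Note Template tab: the same note can be set from the CLI via vzdump's notes-template option (the storage name pbs-backup is a placeholder; check `man vzdump` on your version for the supported template variables):

```shell
# Put the guest name and numeric ID into each backup's note/comment.
vzdump 100 --storage pbs-backup --notes-template '{{guestname}} ({{vmid}})'
```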
Reviving this briefly (I can open a new thread if that would be better): live migration now works fine with ZFS replication (pvesr), but the current documentation at https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvesr states:
Ah ok thank you - so this is for Remotes. I was hoping this could be done with client PVE clusters, but it seems that a port selection is not currently possible there (i.e. for backup to a PBS storage you must use port 8007).
Can I ask where you are setting the port? We tried this, but don't seem to be able to specify a port in the Add Storage > Proxmox Backup Server dialog. If I put a port in the Server field, like backup.example.com:443, I get an error.
Three nodes in my lab cluster. I've defined a specific HA group for just this VM, and tested both reboot and power off. This worked as expected, except that I had to wait for the ZFS replication to complete prior to migration, otherwise the migration would fail. I think this is a reasonable...
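For reference, the wait I described can be done from the CLI rather than by watching the GUI. This is a sketch assuming VMID 100, replication job 100-0, and target node pve2 (substitute your own IDs):

```shell
# Push a fresh replication run so the target is as current as possible,
# then migrate once the job reports a successful sync.
pvesr schedule-now 100-0    # queue job 100-0 to run immediately
pvesr status                # re-check until 100-0 shows no error and a recent last sync
qm migrate 100 pve2 --online
```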
I have been testing the (very exciting) PVE 6.3 capability to do live migration with ZFS replication. I was assuming that upon an automatic migration for reboot (cluster Options setting to migrate and reboot node), the cluster would pick the PVE host where replication is configured to go as a...
I accidentally shut down 2 out of 3 monitors on a Ceph cluster, external to PVE 6.1. All VMs on Ceph kept running, the cache setting is "No Cache". Is PVE still caching the VM storage somewhere, or how could this work? The only way I noticed is by trying to migrate a VM and that kept failing...
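My understanding (an assumption, not something I have confirmed in the code) is that this is not PVE caching: librbd clients keep their established OSD sessions and cached maps, so already-running VMs keep doing I/O, while anything that needs the monitors, such as a migration setting up a new client connection, fails. The monitor state itself can be checked with the standard Ceph CLI:

```shell
ceph quorum_status -f json-pretty   # blocks or errors while quorum is lost
ceph -s                             # overall cluster health once monitors are back
```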
We are also very excited to see progress on this, as the feature would put Proxmox in the truly enterprise category, with the capability to reliably replicate large data sets on a per-VM basis.
We are seeing the same issue on the following software. Has the cause been found for this issue? We see this on PVE nodes connected to Cumulus switches, but not others.
root@hyperpod3:/var/log# dpkg -l openvswitch-switch
Desired=Unknown/Install/Remove/Purge/Hold
|...