I have a 2013 Mac Pro, I think it is an A1481, running Proxmox 9.1.2 now.
E5-2697 v2 12-core, 64 GB RAM, 500 GB SSD.
I installed it with the proxmox-ve_8.4-1.iso from a USB stick a while ago.
I also reinstalled with proxmox-ve_9.0-1.iso using PiKVM a couple...
Hello
I can confirm this issue; it hit randomly on 5 servers at OVH. Two were Intel and three were AMD; now we have three AMDs. The guests are Debian 11 through 13: one big Python app, three MySQL servers, and HAProxy.
Same case here, but we migrated the VMs to identical hardware...
The migration path is not relevant. You should examine the current state of the LVM as suggested previously, and post the results here as text output wrapped in CODE tags.
The two primary outcomes are: either the snapshot still exists on disk, or it does not...
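Something like the following would show the relevant state (the VG name `pve` and the snapshot LV name are just examples; adjust them to your setup):

```shell
# List all LVs, including snapshots, with their origin and snapshot usage
lvs -a -o lv_name,vg_name,lv_attr,lv_size,origin,snap_percent

# Per-volume-group summary (free space, LV/PV counts)
vgs

# Check whether one specific snapshot LV still exists on disk
lvdisplay /dev/pve/snap_vm-100-disk-0_mysnap
```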
Yes, I'm currently looking into a fix for this - I cannot give any guarantees as to when it will land though. As a workaround you can always use either SDN VNets for VLAN Tags or *not* use VNets and tag network devices directly - without mixing them.
I am also trying to achieve this. I haven't had any luck with either of the proposed workarounds (modifying vnets.cfg to change the tag to 0, or setting up a simple zone and VNet). Any other ideas? Currently running PVE 9.1.2. This seems like a basic...
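For anyone comparing, this is roughly how the two (unmixed) approaches look on the CLI. VMID 100, bridge vmbr0, VLAN tag 30, and the zone/VNet names are placeholders; double-check against your own config:

```shell
# Option 1: tag the guest NIC directly on the bridge, no VNet involved
qm set 100 --net0 virtio,bridge=vmbr0,tag=30

# Option 2: SDN VLAN zone + VNet carrying the tag, NIC attached untagged
pvesh create /cluster/sdn/zones --zone myzone --type vlan --bridge vmbr0
pvesh create /cluster/sdn/vnets --vnet myvnet --zone myzone --tag 30
pvesh set /cluster/sdn          # apply the pending SDN configuration
qm set 100 --net0 virtio,bridge=myvnet
```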
Hi,
Just checking in to see whether this issue is being tracked and whether we can expect any fixes/updates in the future. If not, is there anything else we can do to get it considered for a fix?
Thanks
One thing to consider when using the PBS as a qdevice: if you add a host as a qdevice, every node of the cluster can log in as root on the qdevice (and thus the PBS) without additional authentication. This is something you usually don't want on your...
I'm not using one, but I have a question: that seems plausible during setup, but during operation? Is SSH access actually used while a QDevice is "doing its thing"? If not, one could disable it after installation...
My (possibly wrong) understanding was...
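As far as I understand (please correct me), the quorum traffic itself runs over corosync-qnetd (TCP port 5403 by default), not over SSH; SSH seems to be needed mainly during `pvecm qdevice setup`. So after setup one could lock root logins down in sshd_config on the qdevice host. A sketch only; the admin IP is a placeholder:

```
# /etc/ssh/sshd_config on the qdevice/PBS host
PermitRootLogin no

# Optionally re-allow key-based root login from a single admin machine only
Match Address 192.0.2.10
    PermitRootLogin prohibit-password
```

Verify that later cluster operations (e.g. re-running `pvecm qdevice setup`) still work before relying on this.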
Hi,
For deduplication, YMMV. For example, if I look at our PBS, it's pretty good, with more than 90% on Linux VMs (more than 60% of these Linux machines are Debian) and retention from 14 to 365 days.
We have critical VMs whose backups run every...
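To make those numbers concrete: the dedup factor PBS reports is simply the referenced (logical) bytes divided by the bytes actually stored on disk. A tiny sketch; the function names and example numbers are mine, not a PBS API:

```python
def dedup_factor(logical_bytes: int, physical_bytes: int) -> float:
    """Deduplication factor: bytes referenced by all snapshots / bytes stored."""
    if physical_bytes == 0:
        return 0.0
    return logical_bytes / physical_bytes

def space_saved_pct(logical_bytes: int, physical_bytes: int) -> float:
    """Percentage of space saved compared to storing every snapshot verbatim."""
    if logical_bytes == 0:
        return 0.0
    return (1 - physical_bytes / logical_bytes) * 100

# Example: 10 TiB referenced by all snapshots, 1 TiB actually on disk
print(dedup_factor(10 * 2**40, 1 * 2**40))     # 10.0
print(space_saved_pct(10 * 2**40, 1 * 2**40))  # 90.0
```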
Hi @fabian, is there any new way to sync between namespaces on the same datastore without using the "remote 127.0.0.1" workaround?
We use this to keep a longer weekly retention on some VMs.
Thanks
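For completeness, the 127.0.0.1 workaround looks roughly like this on our side. The remote name, token, datastore and namespace names, and the schedule are placeholders; the token needs read access on the source namespace:

```shell
# Register the PBS itself as a "remote"
proxmox-backup-manager remote create local-loop \
    --host 127.0.0.1 \
    --auth-id 'sync@pbs!token' \
    --fingerprint '<server certificate fingerprint>'

# Pull one namespace of the datastore into another, with its own retention
proxmox-backup-manager sync-job create weekly-archive \
    --store mystore --ns archive \
    --remote local-loop --remote-store mystore --remote-ns production \
    --schedule 'sat 02:00'
```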
OK, so am I doing this the right way?
1. I migrate everything to NodeB.
2. Under the Datacenter > Storage menu, I click on the ZFS pool and remove NodeB from the configured nodes.
3. I delete the ZFS pool on NodeB.
4. The datacenter replaces all the disks.
5. I recreate the...
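Assuming the goal is to rebuild the pool on the new disks, the NodeB-side steps might look like this on the CLI. The storage ID `mypool`, pool name `tank`, node names, and device paths are all assumptions; triple-check before destroying anything:

```shell
# 1. After migrating all guests away, restrict the storage to the other node
#    (same effect as editing the node list in Datacenter > Storage)
pvesm set mypool --nodes nodeA

# 2. Destroy the old pool on NodeB (irreversible!)
zpool destroy tank

# 3. Once the disks are replaced, recreate the pool, e.g. as a mirror
zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# 4. Re-add NodeB to the storage definition
pvesm set mypool --nodes nodeA,nodeB
```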