seems the disks were excluded from the backups, show/compare the config from two backup snapshots to check it.
edit: seems you lost 4 months of data.
in PBS all backup snapshots are full backups; PBS does deduplication during each backup, so data already existing in the PBS datastore is not recopied.
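If you prefer the CLI over the GUI, a minimal sketch using pvesm (storage name, VMID and timestamps are examples, take the real volume IDs from the list output):

# list the VM's backup snapshots stored on the PBS storage
pvesm list pbs-store --vmid 100

# dump the guest config saved inside two of those backups and compare them
pvesm extractconfig pbs-store:backup/vm/100/2024-01-01T00:00:00Z > old.conf
pvesm extractconfig pbs-store:backup/vm/100/2024-05-01T00:00:00Z > new.conf
diff old.conf new.conf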
no experience here, but afaik, set the public IP and the route inside the VM, not on vmbr1 (see the VM-side sketch after the bridge config below):
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp10s0f1np1
    bridge-stp off
    bridge-fd 0
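For the guest side, a minimal sketch of a Debian-style /etc/network/interfaces (interface name, address and gateway are examples; the exact gateway/route depends on how your hoster delivers the extra public IP, routed vs bridged):

auto eth0
iface eth0 inet static
    address 203.0.113.10/24
    gateway 203.0.113.1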
it's expected, in single-thread operation an old CPU can be slower than a CPU that is 4 years newer.
Mainly, a Xeon can do more tasks at the same time, but not necessarily faster.
edit: older CPUs are also slower due to the vulnerability mitigations enabled in Linux/Proxmox.
You can set mitigations=off as a kernel boot option...
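A minimal sketch on a GRUB-booted node (with ZFS on root / systemd-boot you would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

# check which mitigations are currently active
cat /sys/devices/system/cpu/vulnerabilities/*

# /etc/default/grub: append mitigations=off to the default kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

# apply the new command line and reboot
update-grub
reboot

Keep in mind disabling mitigations is a security trade-off, do it only on trusted/isolated hosts.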
fyi, running two hosts here 24/7/365 from a USB key without problems yet, one since 2021, the other since 2022.
https://forum.proxmox.com/threads/running-proxmox-from-a-usb-drive-solved.66762/post-464877
imo, only ZFS is a real TBW eater and therefore requires DataCenter/PLP disks.
I was lost in your post, please edit it and put the copy/pastes into CODE tags (not inline CODE).
There are too many words/things.
Remember inter-VM communication doesn't use your physical NIC, it's only virtual network traffic over the vmbr0 bridge, whose speed is bound by your oldish CPU.
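A quick way to see that ceiling is an iperf3 run between two VMs on the same bridge (the IP is an example):

# VM A: start the server
iperf3 -s

# VM B: test against VM A's address on vmbr0
iperf3 -c 192.168.1.50 -t 30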
There is a known issue, try downgrading the virtio-scsi driver to version 0.1.208 within the Windows Device Manager.
https://forum.proxmox.com/threads/redhat-virtio-developers-would-like-to-coordinate-with-proxmox-devs-re-vioscsi-reset-to-device-system-unresponsive.139160/post-691337
btw, curious about your...
The current mitigation is to downgrade the virtio-scsi driver to version 0.1.208 within the Windows Device Manager.
https://forum.proxmox.com/threads/redhat-virtio-developers-would-like-to-coordinate-with-proxmox-devs-re-vioscsi-reset-to-device-system-unresponsive.139160/post-691337