After a local sync from the namespace where PBS02 stored the synced snapshots to the namespace where PVE stores its backups, now PVE only transfers changed data and in most cases even uses the existing dirty-map (I suspect dirty-map can be...
please post a backup task log.
Client-side deduplication can only happen if there is a previous snapshot on the backup target (datastore+namespace+group!) that is not in a verification-failed state. Based on your description I suspect you have...
This is what I was missing here. Yes, they are in a new, empty namespace. Now that I think about it, it makes sense again, as there is no "list of snapshots with their list of chunks" to compare to and help PBS decide beforehand if a chunk should be...
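For anyone finding this later, a minimal sketch of how such a local sync between namespaces could look on the PBS side; the datastore, namespace and job names are placeholders and the flag names are my assumption for a local (remote-less) sync job, so check proxmox-backup-manager sync-job create --help on your version:

  proxmox-backup-manager sync-job create seed-pve-ns \
      --store mydatastore --ns pve-backups \
      --remote-store mydatastore --remote-ns synced-from-pbs01

Once a snapshot of the same group exists in the target namespace, the client has something to deduplicate against on the next run.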
I've had some bad luck this time with a broken PBS server. This is the sequence of events:
- Cluster PVE does its backups to PBS01 (v3.4.4) for nearly two years.
- PBS02 (v4.1.1) in a remote location syncs backups from PBS01. This has been...
I'm fully aware of the usefulness of snapshots as volume chains for LVM, and of not needing them on any file-based storage. That's not what I'm asking. My question is what is the use case and motivation to use snapshots as volume chains...
That script doesn't change the IP in any of the needed files, just in the network configuration, and doesn't really add anything to what you can do by hand or via the webUI. Don't use it.
Change the entry in /etc/hosts too and restart the pveproxy and pve-cluster services (or reboot the host). Details here [1]. Remember this only works if the host isn't in a cluster, which it probably isn't, as it's a single host.
[1]...
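As a rough sketch of the manual route, assuming a single non-clustered host and placeholder addresses:

  # after updating the IP in /etc/network/interfaces (by hand or via the webUI):
  sed -i 's/192.0.2.10/192.0.2.20/' /etc/hosts   # old/new IPs are placeholders
  systemctl restart pve-cluster pveproxy         # or simply reboot the host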
Just stumbled on this. PVE9 with QEMU 10.1 deprecates VM machine versions older than 6 years [1]. You will have to change the machine version in the VM's hardware settings to be >=6, both for i440fx and Q35. This implies that a new virtual motherboard...
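If you prefer the CLI over the webUI, something along these lines should pin a newer machine version; the VMID and the exact version string are placeholders, pick one your QEMU build actually offers:

  qm set 100 --machine pc-i440fx-9.2   # or e.g. pc-q35-9.2 for Q35-based VMs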
Definitely not the same issue, even if the symptom is the same. I remember having a somewhat similar issue on some Dell long ago (AFAIR it was when PVE7.0 came out) and enabling all of X2APIC, IOMMU and SR-IOV in the BIOS, plus a BIOS update, solved it at the...
We're very excited to present the first stable release of our new Proxmox Datacenter Manager!
Proxmox Datacenter Manager is an open-source, centralized management solution to oversee and manage multiple, independent Proxmox-based environments...
I thought it would be related to nested virtualization / virtio vIOMMU. Haven't seen any issue with bare metal yet.
Can you manually import the pool and continue boot once disks are detected (zpool import rpool)?
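In case it helps, this is roughly the sequence I mean from the initramfs emergency shell (adapt as needed):

  zpool import -N rpool   # -N imports the pool without mounting its datasets
  exit                    # leave the emergency shell and let the boot continue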
For reference, https://forum.proxmox.com/threads/proxmox-ve-8-4-released.164821/page-2#post-762577
CRIU doesn't seem to be powerful/mature enough to be used as an option, and IMHO it seems that Proxmox would have to develop a tool for live migrating...
As mentioned previously, PVE will try to connect iSCSI disks later in the boot process than multipath expects them to be online, so multipath won't be able to use the disks.
You can't use multipath with iSCSI disks managed/connected/configured...
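You can see the effect after boot with plain diagnostics, nothing PVE-specific:

  iscsiadm -m session   # lists the iSCSI sessions that were logged into
  multipath -ll         # shows whether multipath actually built maps on top of them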
Hi @VictorSTS ,
You are correct. If there are existing entries in the iSCSI database (iscsiadm -m node) at the time of the upgrade, they will cause issues when you try to modify them afterward. Other scenarios can also lead to problems, for...
IIUC, this may/will affect iSCSI deployments configured on PVE8.x when updating to PVE9.x, am I right? New deployments with PVE9.x should work correctly?
Thanks!
Hello everyone,
This is a brief public service announcement regarding an iSCSI upgrade incompatibility affecting PVE 9.x.
A bug introduced in iscsiadm in 2023 prevents the tool from parsing the iSCSI database generated by earlier versions. If...
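Purely as an illustration (not an official procedure), this is how one could check whether the node database still parses and where it lives on disk:

  iscsiadm -m node     # parse errors here usually point at the old-format records
  ls /etc/iscsi/nodes/ # default on-disk location of the node database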
Can't really recommend anything specific without infrastructure details, but I would definitely use some VPN and tunnel NFS traffic inside it, both for obvious security reasons and ease of management on the WAN side (you'll only need to expose...
On one of my training labs, I have a series of training VMs running PVE with nested virtualization. These VMs have two disks in a ZFS mirror for the OS, use UEFI with secure boot disabled, and boot with systemd-boot (no GRUB). The VMs use machine: q35,viommu=virtio for...
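For reference, the relevant bits of those lab VMs could be reproduced with something like this; the VMID and storage name are placeholders and I'm assuming qm set accepts the viommu sub-option on your version:

  qm set 9001 --machine q35,viommu=virtio --bios ovmf
  qm set 9001 --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=0   # UEFI vars, no pre-enrolled secure boot keys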
Hi, this should be resolved with Proxmox VE 9.1: When KRBD is enabled, RBD storages will automatically map disks of VMs with a Windows OS type with the rxbounce flag set, so there should be no need for a workaround anymore. See [1] for more...
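For context, "KRBD enabled" refers to the krbd flag on the RBD storage definition. A hedged example (the storage ID is a placeholder; if pvesm set doesn't take the option on your version, the same krbd 1 line can be set in /etc/pve/storage.cfg):

  pvesm set ceph-rbd --krbd 1   # map RBD images through the kernel client instead of librbd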
It's unclear if you are using some VPN or a direct connection with public IPs (I hope not, as NFS has no encryption), but maybe there's some firewall and/or NAT rule that doesn't allow RPC traffic properly? Maybe your Synology uses some port range for...
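A couple of harmless checks that usually show where RPC breaks (hostname is a placeholder):

  rpcinfo -p nas.example.com     # lists the RPC programs and ports the NAS advertises
  showmount -e nas.example.com   # asks mountd for the exported shares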