When we try to create our cluster, or even just add a node, we get the error below. All nodes have been fully updated and upgraded to pve-manager/8.3.2/3e76eec21c4a14a7 (running kernel: 6.8.12-5-pve). All nodes have the same hardware and kernel.
api returned unexpected data - expected json...
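When this comes up, it's worth checking what the node's API is actually returning; often it's an HTML error page or an empty reply rather than JSON (a proxy, certificate, or time-sync problem, for example). A quick check, assuming the default port 8006 and a hypothetical node name pve1:

    # The version endpoint should return a JSON body; anything else
    # (HTML, empty reply) will trigger exactly this join error.
    curl -k https://pve1:8006/api2/json/version

    # Also confirm the API services are healthy on every node:
    systemctl status pveproxy pvedaemon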
I cannot find any documentation on firewall rule precedence between a VM, a node/host, and the datacenter: specifically, which level has final authority when deciding whether traffic is blocked or allowed.
For example, let's say I create a rule at the datacenter level to...
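For context, each level has its own rules file, and the question is how entries like the following interact. A minimal sketch, assuming a hypothetical VM ID 100 and the firewall enabled at every level:

    # Datacenter level: /etc/pve/firewall/cluster.fw
    [RULES]
    IN DROP -p tcp -dport 22     # drop SSH cluster-wide

    # VM level: /etc/pve/firewall/100.fw
    # (host-level rules live in /etc/pve/nodes/<node>/host.fw)
    [RULES]
    IN ACCEPT -p tcp -dport 22   # allow SSH to this one guest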
No, the code is bugged. HA itself will have this issue with any VM and any storage setup whenever that storage is unavailable to that VM and host. The code base needs to verify that the host has access to the needed storage and to the virtual disks for that VM before moving it anywhere, at any time. It's that simple...
It's simple: all our clustered systems (firewall, DHCP, DNS, and AD) have a failover VM. In each of these clusters, one VM is always set up to use no shared storage, as a last-resort fallback. That way only a single node and disk set is required to keep everything limping along.
What needs to happen is that HA checks that the remote host can access the needed storage; if it can't, HA should not move the config there, and should put the resource into an error state if no other host can access that storage. Problem solved.
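Until HA does that validation itself, one workaround is to pin HA resources to the nodes that can actually reach the storage, using a restricted HA group. A sketch, assuming hypothetical node names node1 and node2 and VM ID 100:

    # Create a restricted group containing only the nodes with the storage
    ha-manager groupadd local-stor --nodes node1,node2 --restricted 1

    # Bind the VM's HA resource to that group so it never lands elsewhere
    ha-manager add vm:100 --group local-stor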
HA will live migrate without replication when you vary the priority of a host in...
I never used or talked about storage replication; I specifically outlined an HA setup using local storage only, and this error is repeatable. HA plus local storage is a valid setup and is extremely useful as is. But the fact remains that HA doesn't do any kind of validation checks on the VM...
We continue to run into an issue where we unexpectedly lose a host for some reason (network outage, power, hardware, etc.). When this happens to a VM configured with both HA and local storage, the VM becomes unusable, as HA tries and fails to migrate it to another host...
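Once the config has been moved to a host that can't see the disks, recovery is manual. A hedged sketch, assuming VM ID 100 was taken from a hypothetical node node1 by node2; VM configs live in the clustered /etc/pve filesystem, so this can be run from any quorate node once node1 is back:

    # Take the resource out of HA first so the manager stops moving it
    ha-manager remove vm:100

    # Move the config back to the node that actually holds the local disks
    mv /etc/pve/nodes/node2/qemu-server/100.conf \
       /etc/pve/nodes/node1/qemu-server/100.conf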
@ggoller, thanks for the suggestion. But this bug has made PBS unusable for us, as any new rsync/zsync copies just recreate this attribute, and we need to preserve the original attributes with our copies.
This issue appears to exist in other backup utilities/applications, and with those the...
Sure, this only started after we upgraded the Proxmox servers.
NAME                      PROPERTY  VALUE  SOURCE
storage                   xattr     on     default
storage/backups           xattr     sa     local
storage/backups/backup3   xattr     sa...
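For reference, that looks like zfs get output; the property can be inspected and changed per dataset, for example (using the dataset names above):

    # Show the xattr property for the pool and all children
    zfs get -r xattr storage

    # If sa-based xattrs are the trigger, a dataset can be switched back
    # to directory-based xattrs (only affects newly written attributes)
    zfs set xattr=on storage/backups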
On a few of our Proxmox (8.1.3) servers we're getting the following PBS backup client errors when trying to back up flat files (a typical invocation is sketched below the spec list).
Server Spec:
* Proxmox 8.1.3
* zfs-2.2.2-pve1
* zfs-kmod-2.2.2-pve1
* raidz2; all drives are clean, 3x scrubs have been run and come back clean, SMART data is also clean...
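For reference, the backups are driven with the standard client; a minimal sketch, assuming a hypothetical repository backup@pbs@pbs.example.com:datastore1 and source path /srv/data:

    # Back up a directory tree as a pxar archive to PBS
    proxmox-backup-client backup data.pxar:/srv/data \
        --repository backup@pbs@pbs.example.com:datastore1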
If you're logging into your server's desktop as root, Chrome on Linux won't launch, because it refuses to run as root with its default settings. You can edit the Chrome launcher to address this, or simply install firefox-esr.
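The launcher edit amounts to adding --no-sandbox to the Exec line; a sketch, assuming the stock desktop file path (note this disables Chrome's sandbox, which is a genuine security trade-off):

    # /usr/share/applications/google-chrome.desktop (excerpt)
    Exec=/usr/bin/google-chrome-stable --no-sandbox %U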
Is there a way to transfer backups between namespaces? We started using PBS before the namespace (NS) feature was available, so all our backups ended up in the root namespace. Since then, we've been organizing our backups into separate namespaces, each with its specific use...
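One approach that should work is a sync job pulling from the root namespace into a target namespace on the same datastore, via a remote entry that points back at the PBS itself. A sketch, assuming a hypothetical datastore ds1, a remote named local-pbs, and a target namespace archive (all names are placeholders):

    # Copy root-namespace snapshots into the 'archive' namespace
    proxmox-backup-manager sync-job create root-to-archive \
        --store ds1 --remote local-pbs --remote-store ds1 \
        --ns archive --schedule hourly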
We are trying to back up several read-only volumes using proxmox-backup-client and have encountered the following error. We suspect the issue is related to the application's inability to write a file to the backup path. We have come across posts mentioning this error, but those seem to...
Our current issue stems from using PBS-client to back up approximately 22TB of data every day. This process creates substantial log directories, as mentioned earlier. These extensive logs are not only consuming a significant amount of storage space but also wearing down our SSDs to the point...
This is wonderful to hear, but in the meantime, what would be the impact if these log files couldn't be written, say because the disk filled up?
That's extraordinarily weird, as I wouldn't expect log data of any kind to be transient like that.
It's a simple enough matter of switching rsyslog to ignore those lines if in fact that's what it is.
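For example, a drop-in rule can discard matching messages before they are written; a sketch, assuming a hypothetical match string (adjust it to the actual noisy lines):

    # /etc/rsyslog.d/10-drop-backup-noise.conf
    # Discard any message containing this substring; 'stop' ends processing
    :msg, contains, "proxmox-backup-client" stop

Then restart rsyslog (systemctl restart rsyslog) for the filter to take effect.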