Okay, so the drive is actually attached to a Proxmox VE host, rather than an instance of Proxmox Backup Server?
Do you have anywhere else that you could transfer the backup data to temporarily, so that you can format and re-partition the disk? Even some space on the root filesystem would...
I believe Home Assistant uses systemd, which should also include services for fstrim. You can check systemctl status fstrim.timer to see if it's enabled already, and enable it if not.
Trim operations are used by a system to discard unused blocks. The discard option allows the Proxmox host to...
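In case it helps, a quick sketch of checking and enabling the timer from a shell inside the guest (assuming a systemd-based setup):

systemctl status fstrim.timer
# if it is not already enabled:
systemctl enable --now fstrim.timer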
The problem here is that the configured target does not support ACLs [1], which are in use by the files shown in the error. For suspend mode container backups, setting the temporary target to a location which supports ACLs is a requirement (see Backup modes for Containers [2]). You will need...
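As a rough sketch, assuming a directory on the host's root filesystem (which should support ACLs on a default ext4 installation), the temporary directory for vzdump could be pointed there via /etc/vzdump.conf (the directory name is just an example):

mkdir -p /var/lib/vz/vzdump-tmp

# in /etc/vzdump.conf:
tmpdir: /var/lib/vz/vzdump-tmp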
I'm not too familiar with the arp_interval/arp_ip_target configuration, but if setting it manually works, you could add pre-up configurations for the bond. This would add the configuration just before bond1 is about to be brought up by ifupdown2.
auto bond1
iface bond1 inet manual...
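To sketch the idea (with placeholder slave names, interval, and target IP that you would swap for the values which worked when set manually):

auto bond1
iface bond1 inet manual
        bond-slaves eno1 eno2
        bond-mode active-backup
        pre-up echo 1000 > /sys/class/net/bond1/bonding/arp_interval
        pre-up echo +192.0.2.1 > /sys/class/net/bond1/bonding/arp_ip_target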
Could you try running the failed rsync command with the flags -v --progress added, and the target directory (/var/tmp/vzdumptmp510946_101, in the above case) set to somewhere you can write to? This should show which file(s) are actually failing to transfer.
Note that the number in...
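For example, something along these lines (the source path here is hypothetical and should be taken from the failing task log, with the original command's flags kept and -v --progress added):

mkdir -p /root/rsync-test
rsync -aH -v --progress /path/from/the/task/log/ /root/rsync-test/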
Hi,
Is it a secondary storage, or the system's root disk?
What kind of storage type is it?
What filesystem is it using/would you like to use?
Is there data on the partitions?
Hi,
If you have ifupdown2 [1] installed (optional since PVE 6.1, default since PVE 7), you can reload your network interfaces with ifreload -a. Otherwise, ifdown bond1; ifup bond1 should do it.
[1] https://packages.debian.org/buster/ifupdown2
Have you read over the guide for installing Proxmox VE inside VirtualBox (in particular the network considerations section) [1]? From the console of Proxmox VE, could you share the network configuration by posting the output of 'cat /etc/network/interfaces'? Could you also provide the host computer...
Rsync's exit code 23 signifies a partial transfer due to error, but doesn't give much more information than that.
Could you provide the entire output for the task, to see if it has any more information?
The container config file could also be helpful (pct config <ct_id>).
Hi,
Both options refer to the same thing on the PVE side, i.e., the retention period of the backups on the storage [1]. With VZDump, you are just telling PVE to manually prune backups one time, while configuring it on the storage makes it the default behavior for that storage. It probably makes most...
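For example, a sketch of both variants, assuming a directory storage named 'backup-store' and guest ID 100:

vzdump 100 --storage backup-store --prune-backups keep-last=3,keep-weekly=4

and, to make it the default for that storage, roughly the following in /etc/pve/storage.cfg:

dir: backup-store
        path /mnt/backup
        content backup
        prune-backups keep-last=3,keep-weekly=4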
It looks like you're trying to add the storage on the path '/dev/sde', which I guess is actually the device you are trying to mount?
What you instead need to do is mount the drive (assumed to be /dev/sde): mount /dev/sde /mnt/mountpoint,
then add /mnt/mountpoint as directory storage.
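Roughly like this, assuming the device really is /dev/sde and using 'usb-backup' as an example storage ID:

mkdir -p /mnt/mountpoint
mount /dev/sde /mnt/mountpoint
pvesm add dir usb-backup --path /mnt/mountpoint --content backup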
It's hard to say why the old logs remain in the task history after they get rotated out, but it could have been due to issues in the older (beta) versions. For now I would delete the logs older than 20 days with:
find /var/log/proxmox-backup/tasks/[0-9A-F][0-9A-F]/ -mtime +20...
I can't find any reference to the file '/etc/pve/corosync.qdev' anywhere, and I'm not sure it's something that was placed there by Proxmox VE. What are the contents of this file? Is there any mention of the qdevice in /etc/pve/corosync.conf? Have you tried running pvecm qdevice remove?
Is it only the containers that were upgraded which display this behavior, or all of them?
Could you run du -sh /var/log/proxmox-backup/tasks/* | sort -hr on the node with the +2GB log files to get an overview of where storage is being used (i.e., archives or not-yet-rotated...
Have you removed vm-101 since your original post? From the output of lvs I see that only vm-100 is accounted for, which takes up roughly 13.7GiB, or about 14.7GB (32g * 42.84%). In general, total usage can be calculated with: "LSize" * "Data%"
As far as the growing data is concerned, this could simply be...
There's a 'maxroot' parameter in the installer (for LVM configuration), which sets the maximum size of the root volume (see section 2.3.1 of the admin guide [1]).
To shrink an LVM partition, you need to unmount the filesystem. Given that it's the root file system, this would require booting...
Sorry, I am not very familiar with DRBD/LINSTOR; does it need to run on all three nodes, even if one is just a qdevice? Would PVE's storage replication [1] functionality be of any help as an alternative?
[1] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#chapter_pvesr
The BTRFS filesystem in PVE 7 only supports raw images for VMs. To answer your question, you would also be subject to the same loss of performance as mentioned in the linked post, as BTRFS is also a CoW (copy-on-write) filesystem.
According to the Ceph documentation, the maximum size for rados objects is 128MiB [1]. This is to prevent users from directly adding objects with sizes that could negatively affect system behavior [2]. The manpage for rados also warns against using this command directly, and recommends instead opting for one of...
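If you'd like to verify the limit on your cluster, something like the following should print the configured value (assuming a Ceph release with the central config database):

ceph config get osd osd_max_object_size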