Hello everyone, I'm joining this thread to report that I'm having the same problem.
I tried running the logrotate command, but it didn't solve it; I'm on version 2.3.1-1 of pmg-log-tracker.
Here is a list of my package versions:
proxmox-mailgateway-container: 7.1-2
pmg-api: 7.1-7...
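In case it helps others compare versions, as far as I know the whole list can be generated on the PMG host itself; a minimal sketch:

```shell
# On the Proxmox Mail Gateway host: print the versions of all PMG-related
# packages in one go (the PMG counterpart of `pveversion -v` on PVE).
pmgversion -v
```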
Hi, I'm seeing a disparity in RAM usage between the PVE summary and the PBS dashboard.
My PBS is installed in a VM, the qemu-guest-agent is running, and ballooning is active.
I'm attaching three pictures of the RAM usage that confuse me; I can't tell which one is correct.
Thanks to all.
PVE...
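In case the screenshots aren't enough, the raw numbers can also be read from inside the PBS VM; just a sketch, since each dashboard may count things differently:

```shell
# Inside the PBS VM: `free` splits memory into used, buff/cache and
# available. A view that counts buffers/cache (or the balloon) as "used"
# will show a much higher figure than one based on "available".
free -h
```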
Hello everyone, I have a cluster with 3 nodes (kve, kve2, kve3), and today I noticed during a migration that kve2 cannot migrate to kve.
2022-02-28 16:06:14 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=kve' root@192.168.1.6 /bin/true
2022-02-28 16:06:14...
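A possible way to debug this (not an official procedure, just what I would try): rerun the exact test from the log by hand with verbose output, and if it reports a stale host key, refresh the cluster's shared SSH material. The IP and alias are taken from the log above:

```shell
# Re-run the connectivity check the migration performs, with verbose output:
ssh -vvv -e none -o BatchMode=yes -o HostKeyAlias=kve root@192.168.1.6 /bin/true

# If the verbose output shows a host-key or certificate mismatch (common
# after reinstalling a node), regenerate the cluster SSH certificates and
# known_hosts on the affected node, then retry the migration:
pvecm updatecerts
```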
Hi everyone,
I need your advice for updating my home lab.
Unfortunately I have to decommission my current server (a Dell R710), which has served me well so far, and replace it with a smaller server plus storage on my LAN to cut down on electricity costs.
I need to downsize the server to an i7 PC...
Hello everyone.
I have a problem with disk permissions.
Tonight one of the nodes stopped working and I had to restore the backups of the virtual machines.
Unfortunately the backup was from the day before, but I had active replication from node 1 to node 2.
I then restored my virtual machine 101...
Thanks for your reply!
I've opted to create a new VM and transfer the data; it's definitely faster and more within my reach. I've read a few posts on how to turn a CT into a VM, but that's definitely out of my skill set for now.
Thank you for the clarification.
You have been very kind.
Obviously there is no way to convert an LXC container into a QEMU VM, right?
I have to create the VM and import the data, correct?
Hi guys,
I'd like to raise a problem that has been bothering me a bit.
I have a container running iRedMail as a mail server.
The disk of this container is 1 TB and the total space occupied is about 195 GB.
The first backup, with compression, is about 90 GB, and so far so good.
The incremental backups, the last...
Hello everyone, hello Fabian,
I have re-tested as I mentioned in my previous message.
I backed up a 2.26 GB machine and used rclone to sync it remotely.
Without the --create-empty-src-dirs option, rclone doesn't synchronize the empty folders, so the recovery works but it doesn't allow making new...
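For reference, the sync step I'm describing looks roughly like this; the remote name and paths are from my own setup and purely illustrative:

```shell
# Sync the PBS datastore to a pre-configured rclone remote ("mega" here).
# --create-empty-src-dirs makes rclone recreate empty directories on the
# destination; PBS pre-creates 65536 .chunks subdirectories, most of them
# empty, so the restored datastore breaks without this flag.
rclone sync /mnt/datastore/pbs-android mega:pbs-backup --create-empty-src-dirs
```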
Hi Fabian,
thanks for your reply!
Indeed rclone, without any option, does not synchronize empty directories.
On the datastore "pbs-android", in my case, I have 65540 directories and 1244 files.
On mega I have the same number of files but 1232 directories.
In the documentation I found that with...
Hi,
I tried again as promised.
Let me explain my steps:
1. created a datastore on PBS with an NFS share on the QNAP as destination;
2. connected it in PVE;
3. backed up a 2.25 GB VM;
4. moved the backup to the cloud with rclone;
5. deleted the folder from the QNAP to simulate disaster recovery;
6. created a new...
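The restore direction of this test can be sketched as follows; the datastore name, paths and remote are from my setup and purely illustrative, so treat it as a sketch rather than a recipe:

```shell
# 1. Pull the datastore copy back from the cloud remote:
rclone copy mega:pbs-backup /mnt/testds --create-empty-src-dirs

# 2. PBS runs as user/group "backup"; fix ownership of everything,
#    including the mount point itself:
chown -R backup:backup /mnt/testds

# 3. Make PBS use the restored directory again. Depending on the PBS
#    version, `datastore create` may initialize a fresh store, so it can
#    be safer to re-add the entry to /etc/proxmox-backup/datastore.cfg
#    by hand instead:
proxmox-backup-manager datastore create testds /mnt/testds
```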
I am including here the link to a post of mine regarding this issue.
After that date (December 17, 2020), I did not test again.
For me, the important thing was to be able to restore from the cloud and import VMs/CTs.
Creating a new datastore for new backups is not a priority issue for me.
With...
Hi, I'm going to add to your post.
I set up this configuration:
- Proxmox VE with virtual machines;
- a PBS VM backing up to an NFS share on my QNAP NAS;
- rclone installed on PBS, which copies the datastore to the mega.nz cloud.
I tried, as you suggest, to do disaster recovery:
- I deleted the PBS VM...
I tested again with a machine of a few GB, but it did not solve the problem.
The original folder from my first backup had been copied to an external/remote device.
I deleted the original folder (simulating a disk failure or replacement), then restored the copy of the original folder by mounting it in...
Hello, and thank you for your response.
My mount point is /mnt/testds.
I actually ran the chown -R command only on the contents of the mount point (.lock, .chunks, ct, vm) but not on the mount point itself (/mnt/testds).
I'll redo the test and report back. Thanks again for your reply.
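In case it helps anyone else reading along: the point is that the mount point itself needs the ownership too, not only its contents. Assuming the path from above:

```shell
# PBS services run as user/group "backup"; the datastore root (here the
# mount point itself) must be owned by backup as well, otherwise PBS
# cannot take its lock or write to the store.
chown -R backup:backup /mnt/testds
```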