Yes, I did see the test results, and I agree that this change should only have a minor impact. The real-world example is a different story :-D
I did not change any settings on the pool; it has been carried over across many generations of OpenZFS, so some of the properties now show a "local" source...
I could run another test where I set `sync=always` on the ZFS dataset backing the datastore to see if there is a difference - this should achieve the same effect as setting the PBS sync-level to `file` (see the sketch below).
But please tell me if this is helpful in any way before I change my setup ;-)
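In case it matters, this is roughly what I would run for that test (the dataset name is just a placeholder, not my real pool layout):
zfs set sync=always archivepool/pbs
# run the backup test, then revert to the inherited/previous value
zfs inherit sync archivepool/pbs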
Adding `tuning sync-level=none` to the corresponding datastore in `/etc/proxmox-backup/datastore.cfg` did actually work; the backups are now finishing.
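For reference, the entry now looks roughly like this (name and path shortened, so treat it as a sketch):
datastore: pbs
        path /path/to/datastore
        tuning sync-level=none
If I am not mistaken, the same can be set via `proxmox-backup-manager datastore update pbs --tuning sync-level=none` instead of editing the file by hand.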
I am wondering why this problem occurs on my local ZFS storage, which indeed does not have a separate ZIL (SLOG) device, as this is an archive HDD. During...
Sorry for the late reply - I was really busy and had no time to conduct the tests.
Since my PBS is installed alongside PVE (bare-metal), I have a combined log:
Jan 28 09:12:05 pve pvedaemon[17861]: <root@pam> starting task UPID:pve:00042639:0002E52B:63D4D8D5:vzdump:102:root@pam:
Jan 28 09:12:05...
My PBS is a bare-metal install alongside PVE; the datastore resides on a locally mounted ZFS dataset.
Here is the datastore.cfg:
datastore: pbs
comment
gc-schedule tue *-1..7 23:30
notify gc=error,sync=error,verify=error
notify-user root@pam
path...
Hi there!
I had to redo all the steps because it interfered with my scheduled verification and backup.
Here are the answers to your questions:
- pveversion -v
proxmox-ve: 7.3-1 (running kernel: 5.15.83-1-pve)
pve-manager: 7.3-4 (running version: 7.3-4/d69b70d4)
pve-kernel-5.15: 7.3-1...
INFO: Starting Backup of VM 102 (qemu)
INFO: Backup started at 2022-12-22 12:02:32
INFO: status = running
INFO: VM Name: vWin10
INFO: include disk 'scsi0' 'ssdpool:vm-102-disk-1' 160G
INFO: include disk 'efidisk0' 'ssdpool:vm-102-disk-0' 1M
INFO: include disk 'tpmstate0' 'ssdpool:vm-102-disk-2'...
Nope, I checked that as well - nothing out of the ordinary.
If you can point me somewhere to look, or tell me where to implement additional logging (the above-mentioned .pm file?), I can run some tests if you like.
Hi there,
I have been using PBS for quite some time now, but after the recent switch to 2.3.x a problem occurred:
On some VMs the backup gets stuck after reaching 100% with the message "waiting for server to finish backup validation".
I *don't* have the option "Verify new backups immediately after...
I read the docs and am still a bit confused about how deduplication works:
1. On the Summary page of my Datastore it says: "Deduplication Factor 5.54". What exactly does that mean?
2. Is deduplication done in the scope of machines (host, CT, VM) or in the scope of the datastore? I.e. if I have 3...
OK, as a workaround I activated implicit SSL on port 465 on my mail server with an invalid (aka snakeoil) cert, because the machine is only a relay and not reachable from the public internet.
To do that, one has to remove the smtp_port and smtp_ssl variables via
ceph config rm mgr mgr/alerts/smtp_ssl
ceph...
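For completeness, the removal of both keys looks roughly like this (based on the alerts module's key names, so double-check against your own setup):
ceph config rm mgr mgr/alerts/smtp_ssl
ceph config rm mgr mgr/alerts/smtp_port
# confirm the keys are gone
ceph config dump | grep mgr/alerts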
Thanks for your reply. I think this might be a different issue, though, because I have no interval set at all and the error messages are different.
I also found your reply in this thread:
https://forum.proxmox.com/threads/ceph-manager-alerter.68694/#post-307898
So maybe you meant this tracker...
Hi there,
I am running Proxmox 6.2-10 with Ceph 14.2.9.
I also installed the Ceph manager dashboard (to get the alerts module) via
apt install ceph-mgr-dashboard
and then configured the alerts module like this:
ceph mgr module enable alerts
ceph config set mgr mgr/alerts/smtp_host...
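For context, the full set of keys I configured was along these lines (mail host and addresses below are placeholders, not my real values):
ceph mgr module enable alerts
ceph config set mgr mgr/alerts/smtp_host mail.example.com
ceph config set mgr mgr/alerts/smtp_destination admin@example.com
ceph config set mgr mgr/alerts/smtp_sender ceph@example.com
# the module defaults to SSL on port 465 if these are not set, as far as I know
ceph config set mgr mgr/alerts/smtp_port 465
ceph config set mgr mgr/alerts/smtp_ssl true
# trigger a test report immediately instead of waiting for the interval
ceph alerts send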