My mistake, I should have mentioned that I did check that the option was set using mount | grep -i freenas, and the result was
[proxmox: ~]# mount | grep -i freenas
freenas:/mnt/tank/media on /mnt/pve/media type nfs...
I tried adding the local_lock=all option but that doesn't seem to work.
Here's my storage config before and after adding the option via the command pvesm set pbsDatastore --options local_lock=all
[proxmox: ~]# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content...
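For anyone following along, the relevant NFS stanza in /etc/pve/storage.cfg should end up looking roughly like the sketch below once the pvesm set command has run. The export, content and server values here are only placeholders, not my actual config:

nfs: pbsDatastore
    server freenas
    export /mnt/tank/pbs
    path /mnt/pve/pbsDatastore
    content backup
    options local_lock=all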
Thanks, I'll try to add that option and see if that works.
I had the NFS share mounted via the UI in PVE. PBS runs as a container in the PVE instance and is given the same NFS share as a mount point configured in the container's config file. But the /etc/fstab on the PBS container is still blank...
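For context, the mount point is just a single line in the container's config under /etc/pve/lxc/<vmid>.conf, along these lines (both paths are placeholders rather than my exact ones):

mp0: /mnt/pve/pbsDatastore,mp=/mnt/datastore/pbs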
Thanks @Dunuin
I have just added the NFS share from the web UI (in Proxmox and in PBS) and don't have the NFS mounted via fstab. Are you saying that I would have to change how the NFS share is mounted in Proxmox or in PBS?
I have PBS running in an LXC container on my PVE server. The datastore is mounted as NFS in PVE and made available to the PBS container as a mount point. All this was working fine until the 8th of Nov, when I updated my TrueNAS server from 13.0-U2 to 13.0-U3. This involved a reboot of the NAS and...
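In case it helps anyone reading later, the bind mount can be confirmed from the PVE host with something like the following (105 is just an example container ID), which should list the mp0 line pointing at the NFS path on the host:

pct config 105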
Tried reloading & restarting. Didn't work. Maybe I'll try a reboot. I haven't rebooted in 236 days. Might as well.
Nope, not costing me money. I am running Unbound in recursive resolver mode with AdGuard intercepting the queries for ad-blocking. At one point I was thinking of moving to NextDNS...
Hi @bbgeek17,
Changing to the IP address in storage.cfg did seem to reduce the number of DNS queries; however, my NAS datastores in Proxmox now show a question mark indicating "Status Unknown".
All my containers which use these NAS drives as shares seem to be working and can access those...
I was using hostnames in the storage config. Weirdly, I am not using the hostname for the PBS server but am using the hostname for the FreeNAS server. I have just made the change to use the IP address in the storage config. I will monitor it for 1 more day and see how many queries it reduces just as an...
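Concretely, the change was just swapping the server field in /etc/pve/storage.cfg from the hostname to the address, so the entry now reads roughly like this (the IP and the content line below are made up for illustration):

nfs: media
    server 192.168.1.50
    export /mnt/tank/media
    path /mnt/pve/media
    content iso,backup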
Hi,
My Proxmox seems to be making a large number of DNS queries and I was wondering why that was. I have about 24K queries from the Proxmox box in 25 hours, which accounts for about 40% of the queries.
The top queried domain is for my NAS box -- probably because my NAS is configured as a...
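If anyone wants to reproduce the check, watching the box's DNS traffic directly shows which names it keeps looking up. Something like this on the PVE host should do it (interface selection may differ on other setups):

tcpdump -ni any port 53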
Never mind....
Found this thread after posting which indicates that a fix is coming -- https://forum.proxmox.com/threads/arch-guest-warn-old-systemd-v232-detected-container-wont-run-in-a-pure-cgroupv2-environment.111411/#post-480292
When I back up my containers and VMs, I see the following warning message:
INFO: restarting vm
WARN: old systemd (< v232) detected, container won't run in a pure cgroupv2 environment! Please see documentation -> container -> cgroup version.
INFO: guest is online again after 163 seconds
INFO...
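As a side note, the systemd version the warning is complaining about can be checked from the PVE host without entering the container, e.g. (101 is a placeholder container ID):

pct exec 101 -- systemctl --version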
Thanks @fabian for the reply.
Am I correct in my assumption that setting up a Prune Job in PBS will also apply to the backups that I take from my non-PVE machines, e.g. the backup of my desktop via the PBS client, etc.?
That could possibly be one of the advantages of using Prune Jobs so...
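As an aside, for the non-PVE machines the same kind of retention can at least be dry-run from the client side, which is how I'd sanity-check it. Something along these lines, where the repository, group name and keep counts are only examples:

proxmox-backup-client prune host/mydesktop --keep-daily 7 --keep-weekly 4 --dry-run --repository root@pam@pbs.example:datastore1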
Hello,
With the PVE backup job also providing a retention policy option, what is now the recommended way to set up a retention policy? Implementing it in PVE seems natural since the backup job details will show you the retention settings as well, giving you the complete picture.
Does implementing...
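For illustration, the PVE side can express retention directly on the job or run, e.g. a manual run with pruning would look something like this (the VMID, storage name and keep counts are placeholders):

vzdump 101 --storage pbsDatastore --prune-backups keep-daily=7,keep-weekly=4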
I have the mapAll user set to root and the mapAll group set to wheel, and was able to back up 2 containers, so it seems to be working.
I'll set up some regular jobs and see how the IOPS affect it for these 2 test containers and then start prep for moving to SSDs somehow.
I did indeed have maproot set. I removed it, restarted the NFS service on TrueNAS, but now I get this error instead.
INFO: Starting backup protocol: Wed Sep 14 20:28:34 2022
INFO: Error: Permission denied (os error 13)
INFO: restarting vm
WARN: old systemd (< v232) detected, container won't run...
Thanks @Dunuin.
I double checked the owner for the pbsDatastore from the Proxmox shell and the owner & group are both 100034, as expected.
As for the latency regarding IOPS and network (due to NFS) -- yes I did read about it but then again, this is just my home and I have about 16 CTs and 1 VM. So...
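For completeness, this is roughly how I checked it from the PVE host (the path is assumed from my storage name; adjust accordingly):

ls -ldn /mnt/pve/pbsDatastore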
Hello,
I have Proxmox & TrueNAS on my network. I created a PBS container on Proxmox and want to use the TrueNAS as the backup datastore for PBS, since I have already set up ZFS replication from TrueNAS to another server. So when I use TrueNAS for PBS, I would get 2 backups for the price of 1...
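For anyone trying the same layout: once the NFS share is visible inside the PBS container, the datastore itself still has to be created on top of that path, roughly like this (the datastore name and mount path are placeholders):

proxmox-backup-manager datastore create nasStore /mnt/datastore/pbs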
Thanks @t.lamprecht. I know that my Dell 755 is quite old with an E8200 CPU. The HP thin client is relatively newer, but does not pack much of a punch. So it's a toss-up between the two.