I am now experiencing the same issue.
Here is my most recent backup log:
INFO: trying to get global lock - waiting...
INFO: got global lock
INFO: starting new backup job: vzdump 105 --mailnotification failure --quiet 1 --node tweety --mailto REDACTED --mode stop --storage pbs-ssd...
Thanks for the prompt response, Fabian.
I've gone ahead and attempted this by exporting the ZFS pool from my first node and renaming it to simply `zfs-storage`.
I then added it back into Proxmox using the ID "zfs". No problems here.
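For anyone following along, the sequence looked roughly like this (a sketch; the original pool name follows my `zfs-<hostname>-01` scheme and is assumed here):

```
# Stop/migrate everything using the pool first, then export it
zpool export zfs-tweety-01

# Re-import it under the new, node-agnostic name
zpool import zfs-tweety-01 zfs-storage

# Register it with Proxmox under the storage ID "zfs"
pvesm add zfspool zfs --pool zfs-storage
```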
The problem, however, arises when I attempt to do...
Is there any sort of solution for this?
For instance, I am running ZFS pools on both of my hosts, named `zfs-<hostname>-01`. Could I theoretically migrate/power off everything in the pool, export the pool, and then rename it (on both hosts) so that they'd both end up with the same storage name? Just...
I was wrong about the IP simply establishing the subnet. I have since redone my /etc/network/interfaces with valid IPs, and things appear to be working properly.
My understanding of how some of this works was incorrect.
I have since simplified/cleaned up my `/etc/network/interfaces` file to look like this:
auto lo
iface lo inet loopback
iface enp35s0 inet manual
iface enxd6f6dc0112ee inet manual
iface enp36s0 inet manual
iface enxdad9ab886db9 inet manual
auto vmbr69
iface vmbr69 inet manual...
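(The vmbr69 stanza is cut off above.) For reference, a complete bridge stanza with a valid host address looks something like this; the address matches my vmbr10 setup, but which physical port backs the bridge is illustrative:

```
auto vmbr10
iface vmbr10 inet static
        address 10.10.10.110/24
        gateway 10.10.10.1
        bridge-ports enp35s0
        bridge-stp off
        bridge-fd 0
```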
1. I have confirmed the IP addresses of both the Proxmox host and the VMs
- Proxmox host inet 10.10.10.110/24 scope global vmbr10
2. I have confirmed the IP of the Linux host, can SSH to it, and can access the services it hosts
- inet 10.10.80.45/24 brd...
I have run into a peculiar issue while trying to set up InfluxDB within my PVE environment. Any time I open a console/shell on one of my PVE hosts and try to ping or SSH into a Linux VM in my environment, it fails:
root@lola:~# ping 10.10.80.45
PING...
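Since the host (10.10.10.110/24) and the VM (10.10.80.45/24) sit on different subnets, the traffic has to be routed between them. A diagnostic sketch of what I'd check first (the gateway address is assumed):

```
# Does the host have a route to the VM's subnet, and via what?
ip route get 10.10.80.45

# Is the default gateway reachable at all?
ping -c 3 10.10.10.1

# Watch the bridge while pinging, to see if replies ever come back
tcpdump -ni vmbr10 icmp and host 10.10.80.45
```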
I followed the link at the bottom of the previous post and am now running into:
INFO: starting new backup job: vzdump 128 --storage pbs --mode snapshot --all 0 --mailnotification failure --node pluto
INFO: Starting Backup of VM 128 (qemu)
INFO: Backup started at 2022-03-24 00:13:25
INFO...
Using the regular old backup method within Proxmox, I'm able to back up to an NFS share without issue.
When I try to use PBS, however, I encounter issues like this:
INFO: starting new backup job: vzdump 128 --storage pbs --mode snapshot --node pluto --all 0 --mailnotification failure
INFO...
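While digging into this, it can help to first confirm the node can actually reach the PBS datastore (a sketch, assuming the storage ID is `pbs` as in the job above):

```
# Check that the PBS storage is online and reports capacity
pvesm status --storage pbs
```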
I am currently running TrueNAS Scale on an R720xd and would like to use some of my excess storage there as a home for my Proxmox Backup Server backup data.
Would there be any issues I may encounter by mounting an NFS share to the PBS host at `/mnt/share_name` and then using that NFS share as...
I don't think that's quite what I'm looking to do. I plan on running PBS on an old PC, with an NFS share from my NAS mounted at /mnt/share_name, and using PBS to back up directly to that NFS share.
I wasn't sure whether there'd be issues with this.
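For the record, the plan is roughly this (a sketch; the NFS export path, NAS hostname, and datastore name are placeholders):

```
# /etc/fstab on the PBS host: mount the TrueNAS export
truenas.local:/mnt/tank/pbs  /mnt/share_name  nfs  defaults,_netdev  0  0

# Then point a PBS datastore at the mount
proxmox-backup-manager datastore create nfs-store /mnt/share_name
```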
An update for anyone that stumbles upon this thread in the future:
I simply followed this video from ServeTheHome and things are working as they should:
https://www.youtube.com/watch?v=kJB6BOtKKNU
I went ahead and set a root delay of 10 seconds to be safe. You could probably get away with less.
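Concretely, the change is adding `rootdelay=10` to the kernel command line. On a GRUB-booted install that looks like the sketch below; on a ZFS/UEFI install the parameter goes in `/etc/kernel/cmdline` followed by `proxmox-boot-tool refresh` instead:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"

# Apply the change, then reboot
update-grub
```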
Hello,
I am attempting to boot a new installation of Proxmox on an old Dell R720xd. Proxmox was installed on a single SSD, configured to use ZFS RAID 0.
Also contained within the server are a total of 12 drives: six 10TB and six 3TB. Each of the 10TBs and each of the 3TBs...