We are seeing the same with our PoC PBS installation: PBS installed on a simple Supermicro DOM, with 8 storage disks in a ZFS raidz2 config.
During boot, the import fails:
root@pbs:~# systemctl status zfs-import@storage\\x2dbackup.service
● zfs-import@storage\x2dbackup.service - Import ZFS...
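For anyone else puzzled by the unit name: the \x2d is just systemd's escaping of the dash in the pool name. Assuming the pool is called storage-backup, systemd-escape (if available) shows the mapping:

```shell
# systemd escapes '-' in unit instance names as \x2d:
systemd-escape storage-backup
# -> storage\x2dbackup

# and back again:
systemd-escape --unescape 'storage\x2dbackup'
# -> storage-backup
```

On the command line the backslash itself then has to be escaped or quoted, hence the double backslash in the systemctl call above.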
Right! So the environment variable is always required. That helps, thanks Hannes. Agreed on the documentation; specifically, some more examples would help with understanding. :-)
Anyway, it works now, except this:
"Error: backup owner check failed (root@pam!server != root@pam)
I escaped it as \!, because I was unsure where to start quoting. This gets me further, but now a new error:
root@server:~# proxmox-backup-client backup root.pxar:/ --repository root@pam\!email@example.com:backup-repo --include-dev /var
Error: error building client for...
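In case it helps others with the escaping: the backslash works, but single quotes are simpler, since ! only triggers history expansion in interactive bash (scripts and cron are unaffected). A sketch with a made-up token name and host, not the real ones from this thread:

```shell
# Single quotes keep '!' (and '@', ':') away from the shell entirely;
# history expansion only applies to interactive shells anyway:
REPO='root@pam!mytoken@pbs.example.com:backup-repo'
echo "$REPO"
# prints: root@pam!mytoken@pbs.example.com:backup-repo

# then, for example:
# proxmox-backup-client backup root.pxar:/ --repository "$REPO" --include-dev /var
```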
Trying to set up a cron job for doing regular backups to PBS, using an API token for authentication.
I selected my backup user and created an API token, and now want to use that token with my backup command:
root@server:~# proxmox-backup-client backup root.pxar:/ --repository...
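One way to sidestep the quoting problem in cron entirely: proxmox-backup-client also reads the repository and the token secret from environment variables. A sketch of a wrapper script; the token name, host, and datastore below are placeholders, not from this thread:

```shell
#!/bin/sh
# Hypothetical cron wrapper -- all names are made up.
# PBS_REPOSITORY and PBS_PASSWORD are read by proxmox-backup-client;
# for an API token, PBS_PASSWORD holds the token secret.
PBS_REPOSITORY='backup-user@pbs!cron-token@pbs.example.com:backup-repo'
PBS_PASSWORD='the-secret-shown-once-at-token-creation'
export PBS_REPOSITORY PBS_PASSWORD

exec proxmox-backup-client backup root.pxar:/ --include-dev /var
```

Note that the token also needs suitable permissions on the datastore, and the backup group's owner has to match the token, not the plain user, which is exactly what the "backup owner check failed" message above is complaining about.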
Is it possible to configure synchronization of a local PBS datastore to a remote PBS install?
Perhaps specify which backups to keep offsite, and for how long, etc?
We are currently rsyncing VMs to an onsite FreeNAS box, and then use ZFS snapshots/replication to achieve offsite copies of the...
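For reference, PBS does support this natively via remotes and sync jobs. Sync jobs are pull-based: the offsite PBS pulls from the onsite one, and retention is configured independently per datastore with prune options, so the offsite copy can keep a different set of backups. A rough sketch to run on the offsite box; the names, host, token, and schedule are invented, so check proxmox-backup-manager help for the exact option names on your version:

```shell
# Register the onsite PBS as a remote (hypothetical names/host):
proxmox-backup-manager remote create onsite-pbs \
    --host pbs.example.com \
    --auth-id 'sync@pbs!sync-token' \
    --password 'TOKEN-SECRET' \
    --fingerprint 'AA:BB:...'

# Pull its datastore into the local offsite datastore on a schedule:
proxmox-backup-manager sync-job create offsite-pull \
    --store offsite-store \
    --remote onsite-pbs \
    --remote-store backup-repo \
    --schedule daily
```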
Ah super!! Interesting info, your reply here is _very_ appreciated! :-)
Any idea when this patch will become included in Proxmox? Or is there a way to disable those "hyper_v enlightenments" for this specific machine in its machine-id.conf config file on Proxmox?
Today we changed storage for a (Debian Wheezy) VM to SCSI with VirtIO SCSI and rebooted. It came up fine. Storage is on Ceph (three-node cluster, 10G network, with a total of 12 OSDs).
Then we issued "fstrim -v / " on the VM and some trouble appeared:
in the wheezy guest we received...
We see the following output of ceph bench:
> root@ceph1:~# rados bench -p scbench 600 write --no-cleanup
> Maintaining 16 concurrent writes of 4194304 bytes for up to 600 seconds or 0 objects
> Object prefix: benchmark_data_pm1_36584
> sec Cur ops started finished avg MB/s cur...
We have a three-node proxmox/ceph setup, with a 10G internal (meshed, 10.10.89.0) network and a public (1G) network.
We would like ALL heavy traffic to be on the 10G network. However, we also see a lot of traffic on ports 680x on the 1G public network.
Google tells me: "The "public"...
root@pm2:~# cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
# If you want to manage part of the network configuration manually,
# please utilize the...