[SOLVED] how to move pbs to new hardware

RobFantini

Hello

We are moving PBS to new hardware.

The zpools / ZFS datasets have been set up with the same names.

An rsync of the datastore is in progress and will take some hours.

For configuration, besides /etc/proxmox-backup/, is there anything else that needs to be copied over?
 
If you want to move data between two ZFS pools, look up "zfs send | zfs receive" for next time.

If I remember right the import isn't implemented yet, but you only need to manually add the new datastore by editing datastore.cfg.
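For reference, a datastore entry in datastore.cfg is only a few lines; a minimal sketch (the name "backups" and the path here are assumptions, adjust to your setup):

Code:
datastore: backups
	path /backups
	comment moved from old server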

Almost all relevant configuration files for Proxmox Backup Server are kept in the /etc/proxmox-backup/ directory, so backing it up makes sense - even if only to be sure.

If you mount the CIFS store on the same path as before, you could just restore the datastore.cfg from the backup to that directory after reinstallation and you should be done.

We try to keep our configuration files as simple as possible, so re-adding a datastore is generally quite easy; see my post in an earlier thread:
https://forum.proxmox.com/threads/datastore-recovery.72835/#post-325491
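Instead of editing the file by hand, a datastore can also be re-added on the CLI; a sketch, assuming the pool is mounted at /backups and the datastore is to be named "backups" (note: on a path that already holds datastore contents, restoring datastore.cfg from backup may be the safer route):

Code:
proxmox-backup-manager datastore create backups /backups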
 
Probably there are just a few more things besides the datastore and /etc/proxmox-backup/ to scp / rsync over.
Do you happen to know the location of keys etc.?

After that, we'll move the IP address over.

PS
thanks for the zfs send/recv suggestion.
 
After installing PBS and before doing any config:
when creating a zpool, use the same name as on the original system.

0 - the transfer will create the ZFS dataset on the target. If the target already exists, the receive failed here. [ Could be because I had already set up a datastore on the target. In any case I am unsure on this point. ]
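If the receive does fail because the dataset already exists on the target, one way out (my assumption, not something tested in this thread) is to move the old dataset aside first:

Code:
# rename the existing target out of the way
zfs rename tank2/backups tank2/backups-old

# or, if the data on the target is disposable
zfs destroy -r tank2/backups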

1 - send the datastores. Dunuin's suggestion: use zfs send / receive over netcat. This is roughly 10x faster than rsync.
Code:
# 1 - create a snapshot
zfs snapshot tank2/backups@2021-09-03

# 2 - send (run on the old system); see man zfs-send
zfs send -v tank2/backups@2021-09-03 | nc -l -v -w 20 -p 3333

# 3 - receive (run on the new system); see man zfs-receive
### IMPORTANT: the target dataset [ tank2/backups ] should not already exist ###

apt install pv   # pv - monitor the progress of data through a pipe

nc 10.11.12.80 3333 -w 120 -v | pv | zfs recv -s -v -F tank2/backups
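As an aside: nc sends the stream unencrypted, so it is best kept to a trusted LAN. Over an untrusted link the same transfer can be piped through ssh instead (slower due to encryption); a sketch, assuming root ssh access to the new host:

Code:
zfs send -v tank2/backups@2021-09-03 | ssh root@newip zfs recv -s -v -F tank2/backups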

2 - copy the configuration and logs
Code:
#  send from old to new
rsync -a /etc/proxmox-backup/  newip:/etc/proxmox-backup/
rsync -a --del  /var/log/proxmox-backup/  newip:/var/log/proxmox-backup/

3 - check /etc/proxmox-backup/datastore.cfg
Make sure the path names are the same. Use zfs list, because the mount points may be different. If so, edit the file to use the new mount points, or change the ZFS mount point, for example:
Code:
zfs set mountpoint=/backups tank2/backups
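After fixing the mount points it may be worth confirming that PBS sees the datastore again; a sketch (restarting the proxy service is my assumption, to make sure the restored config is picked up):

Code:
zfs list -o name,mountpoint
systemctl restart proxmox-backup-proxy
proxmox-backup-manager datastore list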
 
I'll edit this post sometime....

In the middle of transferring 8T ...

I have not tried this.... I was researching whether a resume is possible..

following this https://unix.stackexchange.com/ques...end-receive-resume-on-poor-bad-ssh-connection

Supposedly, since we used the -s option of zfs receive, a resumable token will be saved on the receiving side if the transfer fails.

If the transfer fails, go to the recv machine and type:

Code:
zfs get all tank2/backups


Get the receive_resume_token and go to the send machine:

- the rest needs to be modified accordingly:
Code:
# resume over netcat (pair with the nc | zfs recv listener on the receiving side)
zfs send -v -t <token> | nc <host> <port>

# or resume over ssh
zfs send -v -t <token> | ssh ... zfs receive -s -v tank2/backups
 
