[SOLVED] swap ssd with datastore on it

extcon

New Member
Mar 26, 2023
Hi,
My Proxmox Backup Server runs from an ext4 NVMe drive (/dev/nvme0n1), with a single additional SSD as the datastore/backup disk on ZFS (/dev/sdX). I want to replace that SSD. The old SSD is 4TB (overkill), the new one is 1TB. I swapped in the new SSD, created a new datastore with an ext4 filesystem on it, and attached the old SSD via a USB-to-SATA adapter. I can see both the old disk (/dev/sdb) and the new one (/dev/sda), and I can also see both the old datastore (in maintenance mode) and the new datastore.

How do I "mount" the old datastore that is in maintenance mode and is there a way to copy/move the existing backups to the new datastore? I want to repurpose the old SSD so remove it altogether from pbs without loosing the backups already on it.

If it helps, I can easily start over on the new disk since there is nothing on it yet - e.g. move the existing datastore (currently in maintenance mode) with all its backups to the new SSD?

Thanks,
Carl
 

Attachments

  • pbs disks.png
  • pbs datastores.png
First thing: it makes no sense to use ZFS with a single disk - better use ext4.
The datastore is defined via a filesystem path, so it is no problem to copy the data from one disk to another with the same permissions. Mount the new disk to the same path.
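For example, something along these lines (device name, partition and mount point are placeholders - check lsblk for your actual new SSD):

lsblk                                   # identify the new disk, e.g. /dev/sda
mkfs.ext4 /dev/sda1                     # create an ext4 filesystem on its partition
mount /dev/sda1 /mnt/datastore/backup   # mount it at the path the datastore already points to
blkid /dev/sda1                         # note the UUID, then add a line like this to /etc/fstab:
# UUID=<uuid-from-blkid> /mnt/datastore/backup ext4 defaults 0 2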
 
First thing: it makes no sense to use ZFS with a single disk - better use ext4.
LZ4 saves some space, and bit rot detection can detect if the index files got corrupted (but of course not correct them without redundancy). As far as I understand, PBS will only verify chunks, not the index files.
 
Your backup files are already compressed - there is no space to save.
Yes, chunks are already ZSTD-compressed. But it still saves space.
My datastore is on an LZ4-compressed dataset with a 1M recordsize, and ZFS reports a compression factor of 1.21x at the block level: a ~1.15TB datastore only consumes ~1TB of actual pool space.
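If you want to check that on your own pool, something like this should show it (the dataset name is just an example):

zfs get compression,recordsize,compressratio tank/datastore   # current settings and the achieved ratio
zfs list -o name,used,logicalused tank/datastore              # space on disk vs. logical size
# to enable it on a dataset (only affects data written afterwards):
zfs set compression=lz4 tank/datastore
zfs set recordsize=1M tank/datastore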
 
Wait, the PBS datastore first chunks and then compresses? Then it's no surprise you don't get fixed-size chunks on the filesystem.
With variable-size chunks on a filesystem with coarse allocation granularity, no wonder there is wasted space - which compression can reclaim, not so much because of the actual compression, but rather because of the better handling of alignment.
 
Yes. PVE first chunks, then hashes, ZSTD-compresses and optionally also encrypts. Those compressed (and maybe encrypted) chunks are then sent to the PBS, which stores them in the datastore. So chunk size depends on the amount of zero data and on how well that data is compressible.
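You can see that on the PBS side too: the datastore keeps the chunks as content-addressed files in a hidden .chunks directory, split into 4-hex-digit prefix folders (the path below is just an example):

ls /mnt/datastore/old_backups                  # vm/, ct/, .chunks/, ...
ls /mnt/datastore/old_backups/.chunks | head   # 0000 .. ffff prefix directories
du -sh /mnt/datastore/old_backups/.chunks      # this is where practically all the space lives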
 
Very late to the party here. Thanks for your insightful comments, I learned a lot.
The answer to my original question still eludes me, though. Can anyone share some insight and advice on how to move backups from a ZFS datastore to another (ext4 or other) datastore? Is this possible? If so, how? If not, what would be the workaround? My only idea so far is to provision a temporary PVE server, restore all the backups and back them up again to another datastore, which seems doable but very inefficient.

Thanks,
Carl
 
You just move the whole datastore folder to a new location and then edit your datastore's path in /etc/proxmox-backup/datastore.cfg to match the new location.

And I would enable maintenance mode on that datastore so it won't be accessed while moving stuff.
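Roughly like this, if I remember the CLI right (datastore name and paths are only examples - the same can also be done from the datastore's Options tab in the GUI):

proxmox-backup-manager datastore update old_backups --maintenance-mode offline   # block access while moving
# move the folder to the new disk, then adjust the entry in /etc/proxmox-backup/datastore.cfg, e.g.:
#   datastore: old_backups
#       path /mnt/datastore/new_disk/old_backups
proxmox-backup-manager datastore update old_backups --delete maintenance-mode    # re-enable access afterwards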
 
You just move the whole datastore folder to a new location and then edit your datastore's path in /etc/proxmox-backup/datastore.cfg to match the new location.

And I would enable maintenance mode on that datastore so it won't be accessed while moving stuff.
Thank you, I'll try that later today.
 
First thing: it makes no sense to use ZFS with a single disk - better use ext4.
The datastore is defined via a filesystem path, so it is no problem to copy the data from one disk to another with the same permissions. Mount the new disk to the same path.
Thanks, I'll take your kind advice and use ext4.

Sorry about my ignorance. How would I copy the datastore from one disk to another? The datastore is mounted in /mnt/datastore and I'm not sure how I would copy that to the new disk - can you provide guidance? It's probably basic Linux sysadmin knowledge, but I don't have it. I tried to search for it but couldn't work out how it should be done. I guess what I'm missing is how the disks are addressed in the shell ("the path").

I did try to create a new datastore on the new ext4-formatted disk through the PBS GUI and then copy the /mnt/datastore/existing_backup_datastore to /mnt/datastore/new_backup_datastore. The backups showed up in the GUI, but they all failed verification, so I must not understand how to do it properly...
 
and then copy the /mnt/datastore/existing_backup_datastore to /mnt/datastore/new_backup_datastore.
Yes, you copy all the files and folders inside the datastore folder to the new one. Check the permissions afterwards and correct them. Then you can create a new datastore with the new folder path.
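The permission part usually comes down to the "backup" user that the PBS services run as, so something like this (the path is an example):

chown -R backup:backup /mnt/datastore/new_backups                      # PBS daemons run as user/group "backup"
ls -ld /mnt/datastore/new_backups /mnt/datastore/new_backups/.chunks   # quick sanity check of the ownership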
 
Yes, you copy all the files and folders inside the datastore folder to the new one. Check the permissions afterwards and correct them. Then you can create a new datastore with the new folder path.
That didn't work, perhaps a permissions issue. I used Midnight Commander to copy all the files and directories, which must not have retained the permissions. I then tried a copy operation using cp -a -v /mnt/datastore/old_backups/* /mnt/datastore/new_backups/*, but only the VM and CT directories were copied (not .chunks and the other directories). I will keep trying copy operations. Thank you!
 
Files/folders starting with a "." are hidden in Linux, so some commands might skip them.

I usually copy bigger folders with rsync. There are also flags to retain ownership and permissions:
rsync -az --progress /path/to/source/folder/ /path/to/target/folder
Then I verify with rsync that checksums of both folder contents are identical:
rsync -avnc /path/to/source/folder/ /path/to/target/folder
Then I remove the source folder if rsync can't find differences:
rm -r /path/to/source/folder

https://linux.die.net/man/1/rsync
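Applied to the example paths from this thread (run as root so ownership comes across; note the trailing slash on the source - with it, rsync copies the contents of old_backups including .chunks, without it you would end up with new_backups/old_backups):

rsync -az --progress /mnt/datastore/old_backups/ /mnt/datastore/new_backups
rsync -avnc /mnt/datastore/old_backups/ /mnt/datastore/new_backups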
 
Files/folders starting with a "." are hidden in Linux, so some commands might skip them.

I usually copy bigger folders with rsync. There are also flags to retain ownership and permissions:
rsync -az --progress /path/to/source/folder/ /path/to/target/folder
Then I verify with rsync that checksums of both folder contents are identical:
rsync -avnc /path/to/source/folder/ /path/to/target/folder
Then I remove the source folder if rsync can't find differences:
rm -r /path/to/source/folder

https://linux.die.net/man/1/rsync
Thanks, very helpful. I'll give it a try.
 
Thanks all for your kind help!
I learned a lot that will come in handy. In the end I re-installed PBS from scratch, with the new disk as an ext4 backup directory/datastore, because I had ended up in an inconsistent state with the datastores getting confused. Now I will have to figure out how to mount the old single-disk ZFS drive to copy the backup files over. I will mark this thread as solved, since the information in here should be enough for anyone to solve similar issues, and open a new thread for the newly raised question.
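For the ZFS part, the usual route is importing the old single-disk pool rather than mounting it like a plain partition - roughly like this (the pool name is whatever it was created with, often the datastore name):

zpool import                      # lists pools found on attached but not yet imported disks
zpool import <poolname>           # import it (add -f if it complains about another system)
zfs list -o name,mountpoint       # shows where the dataset ends up mounted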
 
I then tried a copy operation using cp -a -v /mnt/datastore/old_backups/* /mnt/datastore/new_backups/*
JFTR, that's not how it works.
The shell expands * before the argument list is passed to cp.
The last argument should NEVER contain wildcards, because the last argument passed to the command is the target - and with a wildcard, what that target ends up being depends on shell expansion. If /mnt/datastore/new_backups/ is a newly created directory, /mnt/datastore/new_backups/* does not match anything, and the behaviour then depends on how the shell handles an unmatched glob. Usually the pattern is passed through literally, which in the case of cp leads to "cp: target '/mnt/datastore/new_backups/*' is not a directory", but it is bad practice to rely on this. If the directory is not empty, cp will still fail if the last expansion is not a directory; but if it is a directory, cp will copy the contents of old_backups, plus any other expanded entries of new_backups, into it, happily overwriting existing files. With mv it is even worse, because older versions of mv will not insist that the target is a directory when there are more than two arguments (after expansion) and will therefore move all the other files onto that single file, which is usually not what you want.

The first argument also shouldn't use a wildcard because of the already-mentioned hidden dotfiles.
However, if you use cp -avi /mnt/datastore/old_backups/ /mnt/datastore/new_backups/, you will end up with /mnt/datastore/new_backups/old_backups. To avoid this, use cp -avi /mnt/datastore/old_backups/. /mnt/datastore/new_backups/.
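A quick way to see what the shell actually does is to prefix the command with echo in a throwaway directory (purely an illustration, the paths are made up):

mkdir -p /tmp/demo/src /tmp/demo/dst
touch /tmp/demo/src/a /tmp/demo/src/.hidden
echo cp -a -v /tmp/demo/src/* /tmp/demo/dst/*   # prints the expanded arguments: .hidden is missing and the target glob stays literal
cp -avi /tmp/demo/src/. /tmp/demo/dst/          # the safe form: copies the contents, dotfiles included
ls -a /tmp/demo/dst                             # .hidden made it across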
 
