Moving from Unraid to Proxmox

Numbleski

New Member
Nov 11, 2025
Hi guys, are there any guides for moving all my Docker containers and files over to Proxmox from Unraid?
 
And yeah, I've been having issues setting up the VM for Unraid this week. I passed through the SATA controller, then tinkered with trying to get coretemp to be seen; after a reboot it couldn't pass through the controller again, Proxmox would freeze up and I couldn't shut down the VM, so I've reinstalled Proxmox. And here I am. From what I've read, Docker in an LXC seems to have issues compared to running it in a Debian VM.
 
Basically you can use any Linux distribution as a Docker VM, so if you prefer Fedora, Ubuntu, Arch, Alpine or whatever, they will work just as well as Debian ;) Docker in LXCs can work, but it tends to break from time to time after updates, so it is possible but not recommended:
https://pve.proxmox.com/wiki/Linux_Container
In theory you can also set up Docker directly on the host, but that is even more error-prone (it can mess with Proxmox SDN and other networking features) and not supported at all.
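If you go the plain Debian (or Ubuntu) VM route, here is a rough sketch of getting Docker Engine installed inside the guest using Docker's convenience script (the username is a placeholder, and you should review the script before running it):

# inside the fresh Debian/Ubuntu VM, as root
apt update && apt install -y curl
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
# optional: let a non-root user run docker (log out and back in afterwards)
usermod -aG docker youruser   # "youruser" is a placeholder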

I wonder why you want to migrate from unRAID? If it works for you, you don't need to change it. Proxmox VE is more flexible than bare-metal unRAID, but it also has higher complexity and a steeper learning curve. unRAID can also host VMs if you happen to need one, e.g. for Home Assistant OS.
 
The problem is the Win 11 VM in Unraid: when I upgraded from Win 10 I would just get a black screen, and I've been trying for over a year to get it to work. On the Unraid forum I tried the things they recommended, then was told I should pay for one of their experts, which I can't afford and which pissed me off to be honest. So I started to look for an alternative, Proxmox seems to be highly recommended, and I wanted to prove that it was Unraid's VM issue. I then set up my Win 11 VM, it works fine, and I can pass through my 1080 Ti happily on Proxmox. The Unraid VMs have been a pain with passthrough of a few things, so I am now trying Docker via a Linux VM, in this case Debian.
 
The problem is the Win 11 VM in Unraid: when I upgraded from Win 10 I would just get a black screen, and I've been trying for over a year to get it to work. On the Unraid forum I tried the things they recommended, then was told I should pay for one of their experts, which I can't afford and which pissed me off to be honest. So I started to look for an alternative, Proxmox seems to be highly recommended, and I wanted to prove that it was Unraid's VM issue. I then set up my Win 11 VM, it works fine, and I can pass through my 1080 Ti happily on Proxmox. The Unraid VMs have been a pain with passthrough of a few things, so I am now trying Docker via a Linux VM, in this case Debian.
Understandable :) The thing is that unRAID and Proxmox VE basically use the same technical foundations (both are Linux systems). VMs are based on KVM + QEMU on both systems, so I would expect that if something doesn't work under unRAID, it most likely wouldn't work under Proxmox VE either.

So the guest drivers for Windows VMs are also needed on both systems; maybe the following helps with your problem:

Basically, some versions of the virtio drivers have problems that other versions don't. So it might be worth a shot, whether your problem persists on Proxmox VE or even to solve it on unRAID, to try out different versions of these drivers/guest tools.

Another thing to consider: if you want to continue running unRAID as a VM, you need a dedicated controller for your disks; see https://www.truenas.com/community/r...guide-to-not-completely-losing-your-data.212/ for details. Although the page is from the TrueNAS forums, the technical reasons are just as valid for unRAID or OpenMediaVault.
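If you do go that route, the passthrough itself is roughly this on the Proxmox side (a sketch only: the VM ID and PCI address are placeholders, and IOMMU must already be enabled in the BIOS and on the kernel command line):

lspci -nn | grep -iE 'sata|sas|raid'   # find the controller, e.g. 0000:03:00.0
qm set 100 -hostpci0 0000:03:00.0      # hand the whole controller to VM 100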

Alternatively you could set up an LXC (Turnkey Fileserver or the Zamba SMB services, https://github.com/bashclub/zamba-lxc-toolbox) or a VM as a fileserver. That might be less comfortable to set up, though.
 
Yeah, the Win 11 VM works with no black screen, so I'm not too worried about that anymore. Now what I would like help with is the Docker side of things, to recreate what was on Unraid and my ecosystem: personal cloud and media server. I.e. my NVMe cache pool handles the day-to-day stuff and passes it off to the array at night, or when the cache is full it moves the files. Yeah, my HBA card didn't play nice with passthrough, so I grabbed a PCI SATA card, and that works fine. Debian sees all the Unraid drives but I haven't mounted them yet. What do you think the next steps should be? All the Docker appdata is on the cache drives, but of course they are not pooled yet, as I'm worried I'll lose all my configs if I do anything!
 
Yeah, the Win 11 VM works with no black screen, so I'm not too worried about that anymore. Now what I would like help with is the Docker side of things, to recreate what was on Unraid and my ecosystem: personal cloud and media server. I.e. my NVMe cache pool handles the day-to-day stuff and passes it off to the array at night, or when the cache is full it moves the files. Yeah, my HBA card didn't play nice with passthrough, so I grabbed a PCI SATA card, and that works fine.

I'm not sure I understand. Did your passthrough work or not? If you have working PCI passthrough with your SATA card, the fastest option would be to set up an unRAID VM and import your (hopefully backed up) configuration or recreate it.
Debian sees all the Unraid drives but I haven't mounted them yet. What do you think the next steps should be? All the Docker appdata is on the cache drives, but of course they are not pooled yet, as I'm worried I'll lose all my configs if I do anything!

Just replicate your Docker configuration in your Debian VM (if unRAID has saved some docker-compose files somewhere, this would help a lot) and configure the volumes of the containers to point to the corresponding directories on your disks. What filesystem did you use in unRAID? For ZFS, something like zpool import -f should help (on Debian you will first need to install ZFS, see https://wiki.debian.org/ZFS); for btrfs or ext4/xfs you should be able to add them to your /etc/fstab and then mount them.
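As a rough sketch of what that looks like inside the Debian VM (pool name, device and mountpoint are placeholders):

# ZFS: install the tools (see the Debian wiki link above), then import the existing pool
apt install -y zfsutils-linux
zpool import           # lists the pools that can be imported
zpool import -f tank   # "tank" is a placeholder for your pool name

# ext4/xfs/btrfs: add the partition to /etc/fstab by UUID, then mount it
blkid /dev/sdb1        # note the UUID
echo 'UUID=<your-uuid>  /mnt/data  xfs  defaults  0 2' >> /etc/fstab
mount /mnt/data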
 
I'm not sure I understand. Did your passthrough work or not? If you have working PCI passthrough with your SATA card, the fastest option would be to set up an unRAID VM and import your (hopefully backed up) configuration or recreate it.
Yeah, it just wouldn't work. I tried a bunch of different things, but the HBA just wouldn't pass through the drives. You could 'manually' add them, but of course that changed their IDs to qemu-blah and stopped the array from starting.
Just replicate your Docker configuration in your Debian VM (if unRAID has saved some docker-compose files somewhere, this would help a lot) and configure the volumes of the containers to point to the corresponding directories on your disks. What filesystem did you use in unRAID? For ZFS, something like zpool import -f should help (on Debian you will first need to install ZFS, see https://wiki.debian.org/ZFS); for btrfs or ext4/xfs you should be able to add them to your /etc/fstab and then mount them.
So would I just save the appdata folder where all the Docker containers are? Yeah, the array drives are in XFS; I need to check what the cache drives are. Also, playing with Debian, I might switch to Ubuntu; I've played with it before and it's more user friendly.
 
You are welcome :) I'm at my wits' end now, since I never used unRAID or the software RAID of Linux itself (I guess they use LVM or mdadm for it, but I really don't know), so I'm afraid I can't help you with your last inquiry. But if you backed up everything beforehand, you can't risk too much. If you didn't do a backup before, now would be a good time to do one ;)
 
So I have backed up my pics, and gave up on saving the media files as they can be ripped again from DVDs and CDs etc. I have installed Docker and Portainer, and created my zpools: one fast cache pool with the NVMes, and then an array with my four 10 TB drives. But I can't see the zpools in Ubuntu, so I can't start sorting out my file structure yet.
 
So for what it's worth, I recently migrated from Unraid to PVE w/ TrueNAS VM. I'm not sure I can be of much help, but maybe explaining my journey will help you a little...

My old setup was bare-metal Unraid with three 12TB HDDs in an XFS Unraid array and a 500GB NVMe as a cache drive. Hardware I purchased as part of the transition was an LSI HBA card and two additional 10TB HDDs. I added the HDDs for two main reasons... One is because I was planning on tossing them in an old PC and running that as my backup server. Second was because I knew in this process I was switching from an XFS/Unraid array to ZFS, so I needed to offload as much of the data as I could before reformatting the main storage HDDs.
I decided I was moving from Unraid to TrueNAS for my primary bulk storage (media, documents, NAS etc.). I wanted to take advantage of ZFS benefits as well as learn ZFS. Yes, Unraid has ZFS now, but I only had the basic (old school) license that allowed up to 6 disks. Being at 4 already and wanting to expand in the future, I decided to part ways with the paid plans of Unraid. I digress... So, TrueNAS was the way for me. I was then faced with the decision of bare-metal TrueNAS vs running it as a VM. Having already really enjoyed using Proxmox on a smaller mini-PC (as a true homelabber, it gave my 2012 Mac mini a new life!), I decided to run Proxmox on my main server too and run TrueNAS as a VM.

Since I decided that TrueNAS was still going to be my primary data storage solution, I then decided to run TrueNAS as my backup server host as well. My thought here was to set up ZFS replication tasks to automate my data backups to the backup server. So I installed TrueNAS on the backup server and created a new ZFS pool with the two 10TB HDDs I just bought. Because I was staying on a budget and was not willing to invest too much more money into the backup server, I elected to run these two HDDs as a ZFS stripe rather than with any redundancy. My thought process here was that this was purely backup data and if I lost all of it, I may be a little freaked out about not having 2-3 copies of the data, but it would not disrupt my life much. So... Now I have ~18TB usable storage set up on a backup server.

I then used rsync to move all my data from the Unraid array and cache (including all my appdata) to a temporary dataset on the backup server. In hindsight... I maybe should have set up more nested datasets, but it still worked with just one large one. I only have a 1Gbps network, so this process took a long time (~12-14TB I had to move). Once all my data was copied over and verified, I moved on to installing PVE 9.0 on the main server. After all the typical setup and config, including the post-install helper scripts and setting up IOMMU for PCI passthrough, I went ahead and spun up the TrueNAS VM, created most of the datasets I knew I needed, and started rsync'ing data back over from the backup copy I made.
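The rsync itself was nothing fancy, roughly along these lines (the paths and hostname here are made up for illustration):

# pushed from the Unraid box to the backup server over SSH
rsync -avh --progress /mnt/user/ root@backup-server:/mnt/backup-pool/unraid-dump/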

As for all my apps and VMs... Luckily I didn't use any VMs in Unraid, so I didn't have to worry about that. So that left basically just all of my Docker containers. Now, my Docker setup was, shall we say, a little messy. As I progressed and learned more about Docker, I started out using Unraid templates found in their CA store but eventually moved on to preferring Docker Compose. So I had a mix of CA templates and compose.yaml files. For the CA templates, I used a plugin called Composerize (sp?) that converted them to yaml files.

I already had one instance of Docker running in an LXC (come at me bro lol) on the mini PC. I spun up an Ubuntu VM on the new PVE server and installed Docker on that to be my new main Docker host (there, happy? LOL). So I suppose I get to be on both sides of the flame war? :D

For containers that had little to no external dependencies (i.e. no bind mounts, no passthrough, etc) the solution was quite simple... Just spin up a new container and be done with it.

Many containers I ran (arrs, Immich, Paperless-NGX) had their own backup solutions built in. So for those containers, I elected to just spin up new containers and restore the backups. That went really well.

Other containers (like Plex) had ways to copy configs from the appdata. So for those I also spun up new instances then copied the config files from the appdata backups I made.

For a solid handful of containers, I elected to convert from Docker to either its own LXC or a full-blown VM. Immich and Plex both use my passthrough GPU, so running those in unprivileged LXCs so they can share it made the most sense (although I did make the mistake at first of setting Plex up in its own VM, then switched to an LXC later after finding out about the restriction on GPU passthrough).

As far as passing the storage (media, photos, etc) from the ZFS pool to the apps, the unprivileged LXCs as well as VMs all have either SMB or NFS shares mounted to them (plenty of tutorials online for this). While the process of moving the data twice over a 1Gbps network was time consuming, it went pretty well all things considered.
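For anyone curious, the mounts look something like this (IPs, container IDs and paths are placeholders; for the unprivileged LXCs one common approach is to mount the share on the PVE host and bind-mount it into the container):

# inside a VM: mount an NFS export from the TrueNAS VM via /etc/fstab
apt install -y nfs-common
echo '192.168.1.50:/mnt/tank/media  /mnt/media  nfs  defaults,_netdev  0 0' >> /etc/fstab
mkdir -p /mnt/media && mount /mnt/media

# for an unprivileged LXC: mount the share on the PVE host, then bind-mount it in
pct set 101 -mp0 /mnt/pve/media,mp=/mnt/media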

Some lessons learned from all this...

1. When you want to share hardware across different "apps" you can either use LXCs and share the hardware from the host or you can spin up one VM and run multiple apps on that single VM. I elected for the former and it's working great for me.

2. I only have 32GB RAM in the main server and allocated 16GB to TrueNAS. Make sure you turn ballooning off if you do this. I understand the concept of ZFS cache and how RAM hungry ZFS is in general, but I sometimes wonder if it would reduce a little overhead if I just let Proxmox manage the ZFS storage?

3. I don't know how common this is, but on my server my Nvidia Quadro P2000 card needs to be "woken up" with a script I made that runs on boot of the PVE node. Otherwise the device nodes will not exist in /dev/ to be passed/mounted to the LXCs, causing the LXCs to fail to start on reboot of the PVE node. This was a simple fix but took a bit to find the solution (see the sketch after this list).

4. I spun up a PBS VM inside the TrueNAS backup server. So yes, I have one machine that is PVE with a TrueNAS VM and another machine that is TrueNAS with a PBS VM... That almost feels like some weird inception, but it all works very well for me at the moment. The TrueNAS VM runs regular replication tasks to back up the main data storage. The two PVE nodes run their backups to PBS; I run both a backup task at the datacenter level and a systemd timer that uses proxmox-backup-client to back up the /etc folders of both PVE hosts (rough example after this list).
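For point 3, the workaround is roughly this (a sketch, not my exact script, and the service name is made up): running nvidia-smi once on boot makes the driver create the /dev/nvidia* nodes before the guests start.

cat <<'EOF' > /etc/systemd/system/nvidia-wake.service
[Unit]
Description=Create NVIDIA device nodes before guests start
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi

[Install]
WantedBy=multi-user.target
EOF
systemctl enable nvidia-wake.service

And for point 4, the host-config backup boils down to a single command run from a systemd timer (the repository string is a placeholder for your PBS user, server and datastore):

proxmox-backup-client backup etc.pxar:/etc --repository backupuser@pbs@192.168.1.60:hostconfigs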
 
So would I just save the appdata folder where all the Docker containers are? Yeah, the array drives are in XFS; I need to check what the cache drives are. Also, playing with Debian, I might switch to Ubuntu; I've played with it before and it's more user friendly.
The solution really depends on a few things. One of those things is whether you used Docker volumes or bind mounts to store the container data. If the former, I would suggest just moving the volume. If the latter, you can spin up a new container, then copy the old data into the config directories of the new container. There are tutorials out there on how to do either.
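For the named-volume case, one common pattern is to archive the volume's contents and restore them on the new host, roughly like this (volume and file names are placeholders):

# on the old host: pack the volume into a tarball
docker run --rm -v myapp_data:/from -v "$PWD":/backup alpine \
    tar czf /backup/myapp_data.tar.gz -C /from .
# copy the tarball over, then on the new host: restore it into a fresh volume
docker volume create myapp_data
docker run --rm -v myapp_data:/to -v "$PWD":/backup alpine \
    tar xzf /backup/myapp_data.tar.gz -C /to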

I have also found Ubuntu to be more "user friendly." I suspect that this is because it comes out of the box with more "things" (dependencies etc.) installed already, so there is less to know about in terms of dependencies you need to install for things to work. Some may see that as bloat. I would agree, but it's a value proposition between resource usage and your time (troubleshooting why things don't work). Neither is wrong.
 
So I have backed up my pics, and gave up on saving the media files as they can be ripped again from DVDs and CDs etc. I have installed Docker and Portainer, and created my zpools: one fast cache pool with the NVMes, and then an array with my four 10 TB drives. But I can't see the zpools in Ubuntu, so I can't start sorting out my file structure yet.
Ubuntu is the VM you spun up to host Docker? Is it a VM or an LXC?

You need to mount them on the Ubuntu machine. The process differs depending on whether it is a VM or an LXC.
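If it's a VM and the disks (or the SATA controller) are passed through to it, something like this is usually enough (pool names are placeholders):

apt install -y zfsutils-linux
zpool import              # should list your pools if the VM can see the disks
zpool import -f fastpool
zpool import -f bigpool
zfs list                  # datasets should now appear under their mountpoints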
 
Sorry I hadn't replied yet. Yeah, it's an Ubuntu VM; I have a fair amount set up now, so I'm just plodding along setting it all up!