Moving from Unraid to Proxmox

Good morning. You can't run Docker on the Proxmox VE host itself; you need to set up a new Debian 13 VM and install Docker inside it. That Debian 13 Docker VM is just another VM, which you have to manage yourself.
 
And yeah, I've been having issues with setting up the unRAID VM this week. I passed through the SATA controller, then tinkered with trying to get coretemp recognised; after a reboot the controller couldn't be passed through again, Proxmox would freeze up, and I couldn't shut down the VM. So I've reinstalled Proxmox, and here I am. From what I've read, Docker in an LXC seems to have issues compared to a Debian VM.
 
Basically you can use any Linux distribution as the Docker VM, so if you prefer Fedora, Ubuntu, Arch, Alpine or whatever, they will work as well as Debian ;) Docker in LXCs can work, but it tends to break from time to time after updates, so it's possible but not recommended:
https://pve.proxmox.com/wiki/Linux_Container
In theory you can also set up Docker directly on the host, but this is even more error-prone (it can interfere with Proxmox SDN and other networking features) and not supported at all.
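If you go the Debian VM route, Docker's own convenience script is the usual shortcut. A sketch (run inside the VM, not on the Proxmox host; review the downloaded script before executing it):

```shell
# Inside the Debian VM, NOT on the Proxmox VE host.
# Download and inspect Docker's upstream install script first.
curl -fsSL https://get.docker.com -o get-docker.sh
less get-docker.sh                  # sanity-check what it does
sudo sh get-docker.sh               # installs docker-ce and the compose plugin
sudo usermod -aG docker "$USER"     # optional: run docker without sudo
docker --version                    # verify the install
```

Distro packages (`apt install docker.io`) also work, but the upstream script tracks newer releases.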

I wonder why you want to migrate from unRAID? If it works for you, you don't need to change it. Proxmox VE is more flexible than bare-metal unRAID, but it also has higher complexity and a steeper learning curve. unRAID can also host VMs if you happen to need one, e.g. for Home Assistant OS.
 
The problem is the Win 11 VM in unRAID: when I upgraded from Win 10, I would just get a black screen. I've been trying for over a year to get it to work. On the unRAID forum I tried the things they recommended, then was told I should pay one of the experts, which I can't afford, and that pissed me off to be honest. So I started to look for an alternative; Proxmox seems to be highly recommended, and I wanted to prove that it was unRAID's VM issue. I then set up my Win 11 VM, it works fine, and I can pass through my 1080 Ti happily on Proxmox. The unRAID VM has been a pain with passthrough of a few things, so I am now trying Docker via a Linux VM, in this case Debian.
 
Understandable :) The thing is that unRAID and Proxmox VE basically use the same technical foundations (both are Linux systems). VMs are based on KVM + QEMU on both systems, so I would expect that if something doesn't work under unRAID, it most likely wouldn't work under Proxmox VE either.

The guest drivers for Windows VMs are also needed on both systems, so maybe the following helps with your problem:

Basically, some versions of the virtio drivers have problems that other versions don't have. So whether your problem persists on Proxmox VE or you want to solve it on unRAID, it might be worth a shot to try out different versions of these drivers/guest tools.

Another thing to consider: if you want to continue running unRAID as a VM, you need a dedicated controller for your disks; see https://www.truenas.com/community/r...guide-to-not-completely-losing-your-data.212/ for details. Although the page is from the TrueNAS forums, the technical reasons are just as valid for unRAID or OpenMediaVault.
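For reference, passing a whole controller into a VM on Proxmox is done per PCI address. The address and VMID below are placeholders, not values from your system:

```shell
# On the Proxmox VE host. PCI address and VMID are placeholders.
lspci -nn | grep -iE 'sata|sas|raid'   # find the controller's PCI address
qm set 101 --hostpci0 0000:03:00.0     # attach the controller to VM 101
```

Remember that once passed through, every disk on that controller disappears from the host.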

Alternatively you could set up an LXC (TurnKey File Server or the Zamba SMB services, https://github.com/bashclub/zamba-lxc-toolbox) or a VM as a fileserver. That might be less comfortable to set up, though.
 
Yeah, the Win 11 VM works with no black screen, so I'm not too worried about that anymore. Now what I would like help with is the Docker side of things, to recreate what was on unRAID and my ecosystem: personal cloud and media server. I.e. my NVMe cache pool handles the daily stuff, then hands it off to the array at night, or when the cache is full it moves the files. My HBA card didn't play nice with passthrough, so I grabbed a PCI SATA card, and that works fine. Debian sees all the unRAID drives, but I haven't mounted them yet. What do you think the next steps should be? All the Docker appdata is on the cache drives, but of course they are not pooled yet, as I'm worried I'll lose all my configs if I do anything!
 
> Yeah, the Win 11 VM works with no black screen, so I'm not too worried about that anymore. Now what I would like help with is the Docker side of things, to recreate what was on unRAID and my ecosystem: personal cloud and media server. I.e. my NVMe cache pool handles the daily stuff, then hands it off to the array at night, or when the cache is full it moves the files. My HBA card didn't play nice with passthrough, so I grabbed a PCI SATA card, and that works fine.

I'm not sure I understand. Did your passthrough work or not? If you have working PCI passthrough with your SATA card, the fastest option would be to set up an unRAID VM and import your (hopefully backed-up) configuration, or recreate it.
> Debian sees all the unRAID drives, but I haven't mounted them yet. What do you think the next steps should be? All the Docker appdata is on the cache drives, but of course they are not pooled yet, as I'm worried I'll lose all my configs if I do anything!

Just replicate your Docker configuration in your Debian VM (if unRAID has saved some docker-compose files somewhere, that would help a lot) and configure the volumes of the containers to point to the corresponding directories on your disks. Which filesystem did you use in unRAID? For ZFS, something like zpool import -f should help (on Debian you will first need to install ZFS, see https://wiki.debian.org/ZFS); for btrfs or ext4/xfs you should be able to add them to your /etc/fstab and then mount them.
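To make that concrete, the mounting steps could look roughly like this. The pool name and UUID are placeholders; check your actual values with lsblk/blkid first:

```shell
# ZFS cache pool (after: sudo apt install zfsutils-linux)
sudo zpool import -f tank              # 'tank' is a placeholder pool name

# xfs/ext4/btrfs array disks: find their stable UUIDs first
lsblk -f
# then add a line per disk to /etc/fstab (UUID below is made up):
# UUID=1234abcd-0000-0000-0000-000000000000  /mnt/disk1  xfs  defaults,nofail  0  2
sudo mkdir -p /mnt/disk1
sudo mount -a                          # mounts everything listed in fstab
```

The `nofail` option keeps the VM booting even if a disk is missing.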
 
> I'm not sure I understand. Did your passthrough work or not? If you have working PCI passthrough with your SATA card, the fastest option would be to set up an unRAID VM and import your (hopefully backed-up) configuration, or recreate it.
Yeah, it just wouldn't work. I tried a bunch of different things, but the HBA just wouldn't pass through the drives. You could 'manually' add them, but of course that changed their IDs to qemu-blah and stopped the array from starting.
> Just replicate your Docker configuration in your Debian VM (if unRAID has saved some docker-compose files somewhere, that would help a lot) and configure the volumes of the containers to point to the corresponding directories on your disks. Which filesystem did you use in unRAID? For ZFS, something like zpool import -f should help (on Debian you will first need to install ZFS, see https://wiki.debian.org/ZFS); for btrfs or ext4/xfs you should be able to add them to your /etc/fstab and then mount them.
So would I just save the appdata folder where all the Docker containers are? Yeah, the array drives are xfs; I need to check what the cache drives are. Also, having played with Debian, I might switch to Ubuntu; I've played with it before and it's more user-friendly.
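Regarding saving the appdata folder: for containers that use bind mounts, the appdata directory plus the compose files generally captures the whole configuration. A hypothetical compose service reusing the old appdata layout (service name, image, and paths are examples only):

```yaml
# docker-compose.yml sketch; service, image, and paths are examples.
services:
  jellyfin:
    image: jellyfin/jellyfin
    restart: unless-stopped
    volumes:
      - /mnt/cache/appdata/jellyfin:/config   # copied unRAID appdata
      - /mnt/array/media:/media:ro            # media library from the array
```

Keeping the same host paths as under unRAID means the containers find their old configs untouched.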