How to create a Samba share on the Proxmox host

I was wondering: what is the best approach to run a Samba share on my Proxmox host? Please note that I'm talking about a home server, so don't worry about HA and the like. I only want my local machines (on the local network) to be able to connect to a hard drive that is physically attached to the Proxmox server.

Of course I can run Ubuntu with Cockpit in a dedicated VM that I call NAS.
Or I could even run a VM with TrueNAS in it.

But the fact is: I would prefer to run this SMB share from within the Proxmox host itself.
So what is the best way to achieve this?
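(For context, what I have in mind on the host is roughly the minimal setup below; the share path /mnt/data and the user name are made up.)

apt install samba                          # Proxmox is Debian-based, so the stock Samba package works
mkdir -p /mnt/data/share                   # hypothetical mount point of the attached drive
cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /mnt/data/share
   read only = no
   valid users = rogier
EOF
adduser rogier                             # made-up local user for SMB access
smbpasswd -a rogier                        # give that user an SMB password
systemctl restart smbd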
 
I am just wondering, why do you want to run it on the host? I run an instance of OpenMediaVault in a VM and I pass a bunch of big spinning drives directly through to the VM. I prefer to do it this way so that my Proxmox host sits in its own VLAN that none of the other VMs or VLANs can access. It seemed like a more secure arrangement to me to isolate the Proxmox host.
 
Hey, thanks! I haven't considered this option.
You mean this, right? (And this)
I was overcomplicating things, I guess.

I thought that I had to create a virtual hard disk for the VM and use THAT as the storage location.
Then I foresaw all kinds of trouble, like huge backups, etc.

I will indeed try to add a new hard disk to the Proxmox host and simply pass it through as a dedicated drive to a VM that has one purpose: to act as a NAS server. I think that is indeed a perfect solution :).
 
That is a virtual disk...just mapped to a physical disk.
There is no real physical passthrough without PCI passthrough.
 
What you'd need for that is a proper mainboard/CPU with BIOS/UEFI wiring that allows for granular IOMMU groups, as well as NVMe SSDs or a dedicated USB/disk controller to pass through with all of its ports.
 
From the command line, execute ls -n /dev/disk/by-id/
You will get a list of drives by disk ID.

For me, I wanted sdb and sdc.


So I issued the following commands, also from the command line:

qm set 103 -scsi1 /dev/disk/by-id/ata-ST1000DM003-1CH162_S1DFPJXR,serial=myserial001

qm set 103 -scsi2 /dev/disk/by-id/ata-ST1000DM003-1CH162_S1DFPKZY,serial=myserial002

Note that I gave them unique (but totally made up) serial numbers so that they would work properly in a BTRFS RAID array that I built in OpenMediaVault. If you are not trying to put your disks in a RAID array, you can probably skip this part, but it doesn't hurt to do it.
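To double-check, the disks should now show up in the VM config, and they can be detached again without touching the data:

qm config 103 | grep -E '^scsi'    # both by-id paths and the serial= options should be listed
qm set 103 -delete scsi1           # example of detaching one again (the data on the disk stays untouched)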
 
What you'd need for that is a proper mainboard/CPU with BIOS/UEFI wiring that allows for granular IOMMU groups, as well as NVMe SSDs or a dedicated USB/disk controller to pass through with all of its ports.
That's true, but there's more than one way to skin a cat. I don't have separate SATA controllers that I can pass through via IOMMU, so I do it by passing through the disk ID using the qm set command. Works fine. You just have to make sure that the disks are not mounted and in use by Proxmox.
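A few quick checks for that on the host (sdb is just a placeholder):

lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/sdb   # nothing should show a mountpoint
pvs | grep -i sdb                               # the disk should not be an LVM physical volume
zpool status 2>/dev/null | grep -i sdb          # and not part of a ZFS pool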
 
Also, you will likely want to make it so that when Proxmox backs up the VM disk, the storage disks are NOT backed up. Otherwise your Proxmox backups will take forever.



Just edit the drives in question and un-check the backup box

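If you prefer the command line, the same thing should work by re-issuing qm set with backup=0 appended (disk IDs from my example above):

qm set 103 -scsi1 /dev/disk/by-id/ata-ST1000DM003-1CH162_S1DFPJXR,serial=myserial001,backup=0
qm set 103 -scsi2 /dev/disk/by-id/ata-ST1000DM003-1CH162_S1DFPKZY,serial=myserial002,backup=0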
 
That's true, but there's more than one way to skin a cat. I don't have separate SATA controllers that I can pass through via IOMMU, so I do it by passing through the disk ID using the qm set command. Works fine. You just have to make sure that the disks are not mounted and in use by Proxmox.
Yes, not the biggest deal. Performance looks fine with very low overhead. But people mostly think the VM will see the real disk, and this isn't the case. That's why stuff like SMART won't work, why the guest OS can't spin down the HDDs, why the disk works with 512B physical+logical sectors even if the physical disk uses 4K sectors, and why sometimes things get screwed up (I for example had a partitioned disk where fdisk on the host was showing the partitions but the guest OS was only showing an unpartitioned disk; after repartitioning it within the guest I was seeing the partitions in the guest OS but no longer on the host...). So when real PCI passthrough is an option, I would prefer that.
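A quick way to see this yourself is to compare sector sizes on the host and inside the guest (device names below are placeholders):

# On the Proxmox host, the physical disk (sdb as a placeholder):
lsblk -o NAME,MODEL,PHY-SEC,LOG-SEC /dev/sdb
# Inside the guest, the passed-in disk appears as a QEMU drive, typically 512/512:
lsblk -o NAME,MODEL,PHY-SEC,LOG-SEC /dev/sda
smartctl -a /dev/sda    # needs smartmontools; usually returns little or nothing useful in the guest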
 
Oh one more thing. I let OMV handle the backups of the data disks using rsync. So it happens outside of Proxmox, and doesn't tie up Proxmox when it does.
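Roughly the kind of rsync job OMV runs for me on a schedule (both paths are made-up placeholders):

rsync -aHAX --delete /srv/dev-disk-by-uuid-SOURCE/data/ /srv/dev-disk-by-uuid-TARGET/backup/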
 
Well, SMART still works through the Proxmox web interface, so that's a non-issue for me. I am not sure I care about spinning these disks down. Maybe I should. Dunno. I have not had any problems so far, and like I said, I have these two disks in a BTRFS raid 1 array, with daily snapshots, pre-scheduled scrubs (I forget if it is monthly or quarterly), and daily backups. It works pretty well, but I am just experimenting with it (mostly to play with BTRFS and Nextcloud). I use a Synology for my important data.
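For anyone curious, the btrfs side boils down to commands like these (the mount point /srv/data is made up; OMV schedules the scrub and snapshots for me):

btrfs filesystem show /srv/data                  # shows both disks in the RAID 1
btrfs scrub start /srv/data                      # the scheduled scrub
btrfs subvolume snapshot -r /srv/data /srv/data/.snapshots/$(date +%F)   # a daily read-only snapshot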
 
Hey @Dunuin , thanks for your good feedback! I have a follow-up question. Over the past few days I indeed passed a disk through to a VM by getting its /dev/disk/by-id value and configuring the VM to attach that disk. This works fine.

But I want to do it the same way as you describe, passing through the REAL thing. I am planning to buy a new home server soon, and there are two options on my list.

  1. An Intel NUC: NUC12WSKi3.
  2. The ASRock DeskMini 310, in combination with an Intel Core i3-9100 processor.
Will both options support your way of passing the disk through to a VM?
My plan is to buy two NVMe SSDs: one for the Proxmox OS (a small one).
The other will be passed through to a VM (running Ubuntu Server with a Samba server on it).

Will my plan work?
And are there specific things I need to consider while buying the SSDs?
 
You can only pass through whole IOMMU groups, not single devices. Which devices end up in which IOMMU group depends on the BIOS and on how the mainboard is wired. Best to ask people with the exact same hardware what will and won't work before buying anything. Everything attached to the chipset (which some M.2 slots often are) will usually not be passable without acs_override. So it would be good if the mainboard has an M.2 slot that is wired directly to the CPU.
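You can check the grouping on your current hardware with the usual shell loop over /sys/kernel/iommu_groups (run on the Proxmox host):

shopt -s nullglob                              # don't print the literal glob if IOMMU is disabled
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"   # every device that shares this group
    done
done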
 
He could add a PCIe host bus adapter and pass that through. Then it really wouldn't matter what the motherboard does or doesn't support.
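Roughly like this (the PCI address is a placeholder, and 103 is just the VM ID from the earlier example; with a q35 machine type you can also append pcie=1):

lspci -nn | grep -iE 'sata|sas|raid'    # find the HBA's address, e.g. 01:00.0
qm set 103 -hostpci0 0000:01:00.0       # pass the whole controller, with every port attached to it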
 
Not all PCIe slots of the motherboard can be used for passthrough either. It's the same problem as with M.2 slots: the PCIe lanes are wired either to the chipset or to the CPU.
 
On a NUC, expansion flexibility tends to be more on the outside of the box for lack of space inside.

I've started to experiment with the latest generation of 10Gbit USB drives, like the Kingston DataTraveler, to regain some of the flexibility I enjoyed when most of my storage was on carrier-less SATA SSDs in hot-swap enclosures. It's nowhere near NVMe bandwidths, but twice ordinary SATA at around 1 GByte/s sequential, and being a full SATA controller it should implement the full scope of wear levelling and defect management you'd need to run an OS reliably. So I hope it's a new class of USB storage that overcomes the typical limits and hazards of USB storage of the past. After all, it's more expensive than the same capacity in NVMe or SATA!

Also, since Proxmox likes to eat disks whole, I've then used the smallest-size 256GB sticks to boot Proxmox and allocated internal NVMe, SATA and external whatever to Proxmox VM or container storage and backup. The Thunderbolt port connects a 10GBase-T network for business, whilst the on-board 1/2.5Gbit ports can do corosync. That's still far from ideal functional network separation, but this is a home lab, not an enterprise.

Having USB boot means you could pass through the entire onboard storage to a VM, but without also passing through a dedicated storage network, I just see bottlenecks being pushed from one corner to another, and I feel safe saying that in most cases the network is the far bigger (or should I say smaller?) bottleneck than storage these days.

Except for spinning rust, but that shouldn't require passthrough to obtain maximum bandwidth either, nor incur significant latency or CPU usage penalties with modern non-emulation hypervisor drivers.

I haven't done tons of benchmarking with Proxmox yet, but so far VM storage with Ceph has always been rather close to network speed (and the 3x write-amplification penalty of three-way full-replica writes), at least with NVMe storage and 10Gbit Ethernet, even without passthrough, just using non-emulation drivers.
 
I came up with this final configuration.
Any thoughts? Would this be a nice, power-efficient home server for Proxmox?

Qty  Category            Product                       Price      Subtotal
1    Processor           Intel Core i3-9100 Tray       € 117.42   € 117.42
1    Barebone            ASRock DeskMini 310           € 162.64   € 162.64
1    Cooling             Noctua NH-L9i (brown)         € 44.90    € 44.90
1    Internal memory     Crucial CT2K16G4SFD824A       € 63.60    € 63.60
1    Solid state drive   Samsung 970 Evo Plus 500GB    € 34.71    € 34.71
1    Solid state drive   WD Red SA500 2.5" 2TB         € 133.00   € 133.00

Total: € 556.27
 
It's hard to say, because if Samba is all you want, Proxmox is overkill. But if you want Proxmox to host other VMs and containers as well, you'd need to state your needs a bit more clearly.

Your subtotals don't add up...

The CPU is ancient and won't be as efficient as anything newer, but I do like (and use) that fan a lot: too bad it's so pricey. If you use a more efficient CPU, you may not need to spend that much on the fan.

You can get far more speed and efficiency using a NUC-class base with a mobile CPU. You could have a look at Erying, who offer Alder Lake i5-based Mini-ITX boards at rather competitive prices. Most will offer 2.5Gbit Ethernet for free, as well as 2-3 NVMe slots. And they run rings around that Coffee Lake.

There are plenty of Mini-ITX chassis around, no need to go with that old ASRock box.

I don't understand the SATA SSD. SATA SSD capacity these days actually tends to be more expensive than NVMe, while it certainly remains much slower.

And I don't understand the functional split between the NVMe and the SATA drive: Is that to do pass-through?

With a Gbit Ethernet interface, I can't see how it could possibly matter if storage is passed-through or not.

You can run Proxmox from a single (large) NVMe drive or a RAID of two, if you forego the GUI setup. So if that's the motive behind the dual drives, it may not be necessary.
 
