How to create a Samba share on a Proxmox host

It's hard to say, because if Samba is all you want, Proxmox is overkill. But if you want Proxmox to also host other VMs and containers, you'd need to state your needs a bit more clearly.

Your subtotals don't add up...

The CPU is ancient and won't be as efficient as anything newer, but I do like (and use) that fan a lot: too bad it's so pricey... but if you use a more efficient CPU, you may not need to spend that much on the fan...

You can get far more speed and efficiency using a NUC class base with a mobile CPU. You could have a look at Erying, who offer Alder Lake i5-based Mini-ITX boards at rather competitive prices. Most will offer 2.5Gbit Ethernet for free, as well as 2-3 NVMe slots. And they run rings around that Cannon Lake.

There are plenty of Mini-ITX chassis around, no need to go with that old ASrock box.

I don't understand the SATA SSD. SATA SSD capacity these days actually tends to be more expensive than NVMe, while it certainly remains much slower.

And I don't understand the functional split between the NVMe and the SATA drive: Is that to do pass-through?

With a Gbit Ethernet interface, I can't see how it could possibly matter if storage is passed-through or not.

You can run Proxmox from a single (large) NVMe drive or a RAID of two, if you forego the GUI setup. So if that's the motive behind the dual drives, it may not be necessary.

Ok, you have a lot of points there :). Second try, now using a NUC. My aim is to install Proxmox on the Samsung 980 Pro, which is a PCIe 4.0 NVMe M.2 SSD.

Then, for storage purposes, I will add another NVMe M.2 SSD drive, possibly a 2 TB one. And indeed my aim is to pass that drive through to the VM running Ubuntu + NAS + Syncthing. Doesn't that make sense?

 
Back to the beginning:
But the fact is: I prefer to run this SMB share from within the Proxmox host itself.
So what is the best way to achieve this?

Take a look at https://github.com/bashclub/zamba-lxc-toolbox

They do Samba with ZFS snapshots which are visible in the Windows File Explorer. It is a container, so there is no VM overhead and no pass-through problems, and all data is easily accessible on the host without any tricks.
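If you really do want the share served by the Proxmox host itself (which is plain Debian underneath), a minimal sketch could look like this; the dataset path, user name and share name are just placeholders, adjust to your setup:

Code:
# Samba directly on the PVE host; /tank/share and 'alice' are example placeholders
apt update && apt install -y samba

cat >> /etc/samba/smb.conf <<'EOF'
[share]
   path = /tank/share
   browseable = yes
   read only = no
   valid users = alice
EOF

# Samba keeps its own password database, separate from /etc/passwd
adduser --no-create-home --disabled-login --gecos "" alice   # skip if the user exists
smbpasswd -a alice
systemctl restart smbd

The container route above keeps the host cleaner, though: anything you install on the host has to survive Proxmox upgrades.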

Good luck!
 

I wonder why you are fixated on passing the disk through to the Samba VM? If the virtualization overhead is a concern, then it should apply to all of the I/O, disk and network alike, because every byte consumed by your Samba clients has to pass through the entire pipeline.

But I believe you overestimate that overhead. In the very old days, say the PDP-11 or so, and before the VAX brought virtual memory to the masses, I/O data blocks had to be copied back and forth between kernel space and user space. That was also still the case in the early days of the PC, because between the 8-bit DMA chips of the original IBM PC and the first bus-master ATA and SCSI controllers, data had to be moved via the CPU.

But for a very long time now, most disk and network I/O only sets up page table entries and controller registers and then lets DMA engines do the minimal amount of transfers: disk data goes to RAM without the CPU being involved and then out to the network, again without being copied within the box, while the CPU only does some PTE manipulations and register setups to tell the devices where things should go.

And these days CPUs have gained the ability to delegate PTE management to VMs, so there is no need to transition to the hypervisor even for the orchestration work, which would otherwise still mean some overhead. Long story short: with the proper hypervisor-aware drivers, there shouldn't be much of a difference whether you operate a Samba server on the host or in a VM. If you use a container, it's pretty much like running it on the host.
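In Proxmox terms, using those hypervisor-aware drivers just means giving the VM a virtio-scsi disk and a virtio NIC instead of the raw device; a rough sketch, with VM ID 100 and a storage called local-lvm as placeholders:

Code:
# attach a 1 TB virtual disk to VM 100 via the paravirtualized SCSI controller
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi1 local-lvm:1024,discard=on
# use virtio for the NIC as well, so network I/O also takes the paravirtual path
qm set 100 --net0 virtio,bridge=vmbr0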

Pass-through of host devices to a VM is typically meant to eliminate virtualization overhead in terms of double-buffering (not happening here) or ring transitions and hypercalls (largely eliminated), which cost CPU time and latency. In the case of a GPU those overheads can still be felt the most, but mostly GPUs are simply hard to manage from two different operating systems at once.
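For completeness: disk "pass-through" on Proxmox usually means either mapping the raw block device into the VM or handing over the whole NVMe controller via PCIe pass-through. A sketch of both, with the VM ID, device serial and PCI address as placeholders:

Code:
# map a physical disk into VM 100 as a raw block device (no IOMMU required)
qm set 100 --scsi2 /dev/disk/by-id/nvme-Samsung_SSD_980_PRO_2TB_XXXXXXXX
# or pass the whole NVMe controller through (needs IOMMU/VT-d enabled)
# qm set 100 --hostpci0 0000:01:00.0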

For network and disk devices the limitations of the medium are far more likely to limit performance.

The NUC at least has 2.5Gbit Ethernet, but unless your network and clients can keep up, throughput might still be limited to 1Gbit or 100MByte/s, which is slower than even a high-performance hard disk these days. To keep up with a SATA SSD, you really need a 5Gbit/s network; cheap PCIe v3 NVMe drives will saturate 10Gbit Ethernet three times over. And with a PCIe v4 device, anything less than 100Gbit Ethernet won't be able to keep up.
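A back-of-the-envelope check, if you want the raw numbers (line rates only, real throughput after protocol overhead is somewhat lower):

Code:
# rough ceilings: SATA SSD ~550 MB/s, PCIe v3 x4 NVMe ~3500 MB/s, PCIe v4 x4 ~7000 MB/s
for gbit in 1 2.5 5 10 25 100; do
    printf '%5s Gbit/s ~ %6.0f MB/s raw\n' "$gbit" "$(echo "$gbit * 125" | bc -l)"
done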

In other words, whether you pass the disk through to the VM or just dedicate most of its capacity to that VM should not make a noticeable difference. But you'll be able to test that once you have the hardware.
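Something like fio makes that comparison easy once the parts arrive; a minimal sketch, run once against the passed-through disk inside the VM and once against a virtual disk (the mount point is a placeholder, and the test writes to a file so it won't clobber a raw device):

Code:
# 30 s sequential read and write against a test file on the mounted filesystem
fio --name=seqread  --filename=/mnt/testdisk/fio.tmp --size=4G --rw=read  \
    --bs=1M --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based
fio --name=seqwrite --filename=/mnt/testdisk/fio.tmp --size=4G --rw=write \
    --bs=1M --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based

Repeating the same test over SMB from a client will quickly show whether the disk or the network is the real bottleneck.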

Other notes on the hardware: that NUC is probably a good match in terms of power consumption/efficiency, but it's not a "great" deal. Note that I said "NUC class", not NUC...

The biggest issue with NUCs is that they tend to have only a single full NVMe slot because of the space constraints. So if you insist on two drives, you might have to either use a tall variant which supports a secondary SATA drive, or use something more exotic, like the Phantom Canyon NUC, which btw. is a steal at €450 and throws in an RTX 2060m and an extremely competent cooling solution that never gets noisy.

The other avenue is to go with a Mini-ITX board that uses "NUC class" mobile technology, and there are some really good deals here from Erying, a Chinese company that sells on AliExpress. I have their G660 with an Alder Lake i7-12700H, which offers 3 full NVMe slots, dual SATA and dual Ethernet, and after replacing the factory-applied paste with liquid metal it runs at 90 watts sustained, very quietly, with that Noctua cooler. I run it with a 10Gbit Ethernet NIC in the "GPU slot", because the Xe iGPU is more than enough for that machine.

But the biggest benefit of these NUCs and "mobile" Mini-ITX boards is that you can limit PL1, PL2 and TAU to whatever suits your efficiency/noise needs and tolerances.

If you want to stay with the NUC, PCIe v4 vs v3 should not be noticeable except on the price per GB, but unfortunately the most logical choice, a Samsung 970 Evo Plus 4TB device, isn't available, because they only make them up to 2TB...

And note, that the 2nd M.2 slot in this NUC is only connected via one PCIe v3 lane, so it's limited to SATA speeds!
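That is easy to verify once the drive is installed: the negotiated link speed and width show up in lspci (take the PCI address from the first command; 01:00.0 below is a placeholder):

Code:
# find the NVMe controllers and their PCI addresses
lspci -nn | grep -i 'non-volatile'
# then check the negotiated link, e.g. "LnkSta: Speed 8GT/s, Width x1"
lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'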

But even with the 2.5GBit/s network, you can really afford to use the cheapest solid state storage you can find, because that's your bottleneck, pass-through or not.
 
Hi all. Sorry to hijack this thread a bit (please suggest creating my own thread if this breaks rules etc.) - it just seems to be a rich collection of everyone who knows about this topic specifically.
I am VERY new to all this, but have engineered myself into a bit of a corner, with no real solutions to get me back out!
I wanted to create (at home) a HA solution for all my services. I have:
  • 3 * Proxmox installs (mini PCs). Each has a 500GB external SSD (USB - this is where Proxmox is installed) and an internal 1TB NVMe drive
  • All are clustered - an attempt to create a HA environment.
  • All three 1TB internal NVMe disks are Ceph-managed (pooled share), hence data is moderately protected from failure (I will also arrange further backups elsewhere)
The intention was to use the pooled share for Proxmox backups and a general store for pictures etc.
The thinking was to partition this into two, one part for backups, the other for general purpose.
The idea was to get OMV to do all the Samba/NFS stuff for this.

The problem: I can’t for the life of me figure out how I can get this share visible in OMV. It just refuses to ‘see’ anything I try. Does anyone have any ideas?
 
I have zero experience with OpenMediaVault (had to look up what OMV might be...)

But I guess it's just another packaged storage solution to install on bare metal or inside a VM.

And I guess it doesn't have CEPH support out of the box, but being based on Debian, you could probably hack something...

The easiest solution is to simply provision an OMV VM with a disk (or two) from the CEPH storage. That virtual disk can then be shared via OMV and will be clustered underneath by CEPH, even if that VM still remains a single point of failure.
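A sketch of that, assuming your Ceph RBD storage is registered in Proxmox under the name 'ceph-pool', VM ID 110 is free and the OMV ISO has been uploaded to the 'local' storage:

Code:
# small system disk plus a big data disk for OMV, both carved out of the Ceph pool
qm create 110 --name omv --memory 4096 --cores 2 --net0 virtio,bridge=vmbr0 \
    --scsihw virtio-scsi-single --scsi0 ceph-pool:32
qm set 110 --scsi1 ceph-pool:1024,discard=on
# attach the OMV installer ISO
qm set 110 --cdrom local:iso/openmediavault.iso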

If it's ok to restart that VM on another node if the one currently running it were to go up in flames, you're pretty much settled.
But true fault-tolerant, seamless high availability starts with the client side being able to support it, too, e.g. via transparent reconnects on a primary failure.

That's very involved with NFS (or basically not supported, AFAIK, that's why they invented Lustre). SAMBA/CIFS seems to have some facilities there.

The only file system I've used where that's really easy (from the client side) is GlusterFS, which you can use pretty much like NFS, except that unlike NFS it's not supported on Windows clients.

And that could be another solution: if you put a CephFS on your pool, that can be exported via NFS and I guess also via CIFS/Samba. But it will involve some hacking and a little more than clicking buttons on a GUI.
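A rough outline of that route, assuming the PVE tooling (pveceph) and Samba installed on the node, with the share path and name as placeholders:

Code:
# create a CephFS on the existing cluster and register it as PVE storage
pveceph fs create --name cephfs --add-storage
# PVE mounts it on every node under /mnt/pve/cephfs; export a subdirectory via Samba
mkdir -p /mnt/pve/cephfs/share
cat >> /etc/samba/smb.conf <<'EOF'
[cephshare]
   path = /mnt/pve/cephfs/share
   read only = no
EOF
systemctl restart smbd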

True HA is never easy and always more complicated than a single node.

If you don't want to spend too much time and aren't worried about ultimate performance, just using the Ceph pool for storage redundancy, then a single OMV VM running from that redundant storage pool may just prove the ticket to some peace of mind. Use thin allocation, but watch for storage oversubscription.
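Monitoring that oversubscription is straightforward from any node:

Code:
# overall pool usage vs. raw capacity
ceph df
# per-image usage of thin-provisioned VM disks ('ceph-pool' is a placeholder)
rbd du -p ceph-pool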

Then you might want to work on automated differential backups to warm and cold secondary and tertiary storage, before you go more deeply into HA.
 
Thanks for your reply. I tried re-creating the VM on the Pooled share, no-go. Still can't see any disks.
I think this is definitely an OMV issue not supporting LXC containers. I will try and build OMV now the old-fashioned way ;-) Hopefully that works. It's a shame really; it was looking very performant (memory etc.)
 
Yes, you need an OMV VM. OMV needs block devices, and LXCs use filesystems. To use filesystems you would need the "sharerootfs" OMV plugin, but that hasn't been working with LXCs for two years or so.
 
You should only see a single logical disk, the one you create for the virtual machine on top of the Ceph pool. The fact that Ceph might do all sorts of smart redundancy would be hidden, VMs are not supposed to care where disks come from or how resilient they are.

Don't try getting smart and doing things with LXC. It ties you to the individual host and you don't get the type of mobility you're aiming for with live migration or auto-recovery.

I love my containers, used to do all sorts of things with OpenVZ and was looking forward to live migration with CRIU...

But VMs have struck back with better abstractions and you have to weigh the overhead vs. the simplicity here.
 
Yup, learned a lot in the past week. I managed to get it up and running on a standard VM. You are correct, LXC containers don't like the shared pool I set up much, although it's not immediately obvious until you do a migration across nodes.
Every day's a school day!
 
I just set this up and can see the OMV VM's SMB share from a Windows laptop. How did you share the SMB to another (say, Plex) VM?
 
