Just starting out ...

kraken99

New Member
Sep 15, 2023
Hi, just starting out on my virtualisation journey after mainly using Synology kit, but wanting to try a more flexible platform and learning something new along the way.

I have been trying to find as much information as possible on initial set-up and configuration, and although I feel I've made some progress in understanding the concepts involved, not being a regular Linux user I'm struggling to get some of the basic underlying command-line syntax under my belt ... which is how I ended up investigating Proxmox, given its more user-friendly GUI option.

To ensure I start off on the right foot, I would appreciate some advice on the initial set-up of my system, so I don't make any schoolboy errors.

My objective is to set up my PVE host with a number of guest systems:
1. VM running Xpenology/DSM 7.2 ... mainly to migrate my current 2-bay NAS to some new hardware and retain a system that I already know - I assume I can pass through the current BTRFS/SHR drives so they are exclusively used by the DSM VM.
2. VM running Unraid ... mainly to try this out; I have never used it, but I'm curious to see how it compares with Synology (or TrueNAS Scale, which I've also dabbled with in the past).
3. A Linux distro (recommendation please, for a beginner who is keen to learn more about the platform and making use of the command line). Not sure if I should run this as a VM or an LXC container?
4. VM running a light install (if there is such a thing) of Windows ... just because :rolleyes:

The hardware I collated for this experiment is as follows:
Motherboard - Topton with Celeron N5105 CPU/ 2x M.2, 6x SATA and 4x 2.5GbE Intel NICs - feels like a great little flexible system
2x 8GB DDR4 RAM
2x 500GB M.2 NVMe drives - currently boot drives for Proxmox/ ZFS RAID 1
4x 1TB 2.5" SSDs - currently not formatted
2x 12TB 3.5" enterprise SATA drives - currently BTRFS/ SHR
Corsair RM550x PSU
All housed in a Fractal Node 304 case

So I feel I've made a start, but I have a few questions that I've not managed to answer during my research:
1. Is my objective of running these 4 systems on this hardware achievable, given that to start with it'll only be my previous Synology setup that will have much to do, and even then it's only serving up the occasional movie, backing up a couple of PCs and a Mac, and syncing my photo collection to the cloud?
2. Have I done the right thing using the 2x M.2 drives as the Proxmox boot drives, and do I need to partition these further to also store any ISOs or VM operating systems?
3. I'm intending to use the 4x 1TB SSDs as storage for my test Unraid/ TrueNAS, Linux and Windows VMs. Would I be better off setting the RAID up in Proxmox and having storage partitions for each VM, or would it be better to let each VM handle the RAID config?
4. Would I be better off installing Docker/ Portainer as an LXC or a VM, or just using the built-in capabilities available on the NAS OSes?
5. Is there anything else I should be doing at this stage ... e.g. one thing I've yet to investigate is how best to back up my VM configurations, in case of failure or corruption, so I can easily recover from a stupid error.

Apologies for the long first post, and thanks if you've managed to get this far ... any help, advice, or highlighting of the error of my ways will be appreciated.
 
One other thing I forgot to mention... with the luxury of having four network connections, would there be benefit in dedicating two of these to the DSM VM only, or would there be some better network config that I should investigate or try - thanks
 
Not sure if I've asked this question in the wrong area of the forum. I thought installation and configuration seemed the right place, but I'm not sure now, as I haven't had a single response ... if there is somewhere more appropriate, or maybe a different forum altogether that is better suited to newcomers trying to get started, I'm happy to try there ...

Given the lack of response here, I thought I'd try feeding my questions into ChatGPT 4, just to see if the latest in AI could help ... and you may be interested in its response ... of course, I know that it might look like a knowledgeable set of answers, but as I'm just learning, I know full well that I should take most of this as a possible fabrication. Hopefully those of you with some expertise in this area might be able to critique its response to my questions ...

Hello, this is Bing. I’m glad you’re interested in Proxmox and Linux. I’ll try to answer your questions as best as I can.

  1. Your objective of running these four systems on your hardware is achievable, but it depends on how many resources you allocate to each VM and how much load they generate. You have 16 GB of RAM, which should be enough for basic tasks, but you may need to upgrade it if you want to run more intensive applications or multiple VMs simultaneously. You also have a quad-core CPU, which should be sufficient for most use cases, but you may encounter some performance issues if you run CPU-heavy workloads on your VMs. You can monitor the resource usage of your VMs using the Proxmox web interface or command-line tools like htop or top.
  2. Using the 2x M.2 drives as the Proxmox boot drives is a good choice, as they offer fast read and write speeds and low latency. You don’t need to partition them further, as Proxmox will create a ZFS pool with them and use it to store the system files, the ISOs, and the VM operating systems. You can also add more storage devices to the pool later if you need more space.
  3. There are pros and cons to both options of setting up RAID in Proxmox or in each VM. If you set up RAID in Proxmox, you will have a single storage pool that can be shared among all your VMs, and you will benefit from the ZFS features like snapshots, compression, encryption, and deduplication. However, you will also lose some flexibility and control over how each VM accesses its storage, and you may encounter some compatibility issues with some operating systems that don’t support ZFS natively. If you set up RAID in each VM, you will have more granular control over how each VM manages its storage, and you can use different file systems and RAID levels for different purposes. However, you will also have to manage each RAID separately, and you will not be able to use the ZFS features that Proxmox offers.
  4. Docker and Portainer are both applications that can run inside a VM or an LXC container. The choice depends on your preference and needs. Running them inside a VM will give you more isolation and security, but it will also consume more resources and add more overhead. Running them inside an LXC container will give you more efficiency and performance, but it will also expose them to more potential risks and vulnerabilities. You can also use the built-in capabilities of the NAS OSes, but they may not offer the same features and functionality as Docker and Portainer.
  5. One thing you should do at this stage is to configure a backup strategy for your Proxmox system and your VMs. Proxmox has a built-in backup tool that can create full or incremental backups of your VMs and store them on local or remote storage devices. You can also schedule backups to run automatically at regular intervals. You can restore your backups using the Proxmox web interface or the command line tools like vzrestore or qmrestore. You should also backup your Proxmox configuration files, which are located in /etc/pve/, using a tool like rsync or tar.
  6. With four network connections, you have several options to optimize your network performance and reliability. One option is to dedicate two of them to the DSM VM only, which will give it more bandwidth and redundancy. Another option is to configure network bonding and LACP on your Proxmox host, which will combine two or more network interfaces into one logical interface that can balance the traffic load and provide failover protection. You can then assign this bonded interface to your DSM VM or any other VM that needs high network performance. A third option is to use VLANs (virtual LANs) to separate your network traffic into different segments based on your needs and preferences.
I hope this helps you with your Proxmox setup and configuration. If you have any other questions, feel free to ask me.
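For what it's worth, the bonding the AI mentions in point 6 is configured in /etc/network/interfaces on the Proxmox host. Below is a minimal sketch of an LACP bond feeding the default bridge; the interface names and addresses are placeholders for your own, and 802.3ad needs matching configuration on the switch:

```
auto lo
iface lo inet loopback

iface enp1s0 inet manual
iface enp2s0 inet manual

# Bond two of the four 2.5GbE ports with LACP (requires switch support)
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0 enp2s0
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# The default Proxmox bridge rides on top of the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Any VM attached to vmbr0 then shares the bonded links without further per-VM configuration.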
 
Hello,

I consider myself a bit of a hobbyist Linux geek and have been playing with all the nice toys for years. 20 years ago I built a Beowulf cluster using something called 'openMosix', and it kept my house very warm during the winter months with its 8 servers/16 CPUs that would now be considered slower than your average mobile phone. That is to say - I am still enjoying my learning experience!

I am now hooked on all things home-lab and spend way too much time getting into silly things that no one else seems to have documented - so I break things - A LOT!!!

I started using Unraid several years ago and felt restricted in what I was able to play around with (due to past play times with XenServer) and made the move to Proxmox - I have broken things several times, but mostly have been able to get out of trouble on my own with research and 'trial and error'.

I have found Proxmox answers all my needs without me building my own custom setup - the learning curve was not difficult thanks to its WebUI. As you state you are a beginner, I would suggest you install Proxmox on separate hardware alongside whatever you are currently running. Get to know it hands-on... also... break it a few times (it is how I learn).

Once you are comfortable with it, then test out what it is capable of (I have had a docker swarm running in containers - it is a bit messy... Virtual Machines are generally a bit more bloaty, but easier - however, you should perhaps test out their ease of deployment first).

I am a big fan of Proxmox's built in Ceph storage, however, you are not limited to it.

Finally, if you are new to Linux, installing Debian as a VM is a good way to go as Proxmox is based upon it (make snapshots and backups within Proxmox so you can break it and recover with ease).

I hope this helps you on your journey - good luck!
 
The best advice I can give you as a new person learning is to not expect to achieve your "production" environment on the first go. I have been using Proxmox for about a year, but I was pretty entrenched with Linux on the desktop when I began this journey and I understood permissions, how the kernel works, basic Linux commands, how the file system is structured, etc. Even still, I must have installed and re-installed Proxmox 6-7 times over the last year, correcting or improving various choices I made. There's a lot to learn, but I feel it's worth it.

1. Is my objective of running these 4 systems on this hardware achievable
Maybe. I thought about getting one of those Topton boards, but I went with a refurbished HP Z640 machine instead, as I got a nice bargain on it (Xeon 2690 v3, 64GB of ECC RAM for ~$300). I have a worry that the N5105 is not enough horsepower, but I have no empirical evidence to back that up.

2. Have I done the right thing using the 2x M.2 drives as the Proxmox boot drives and do I need to partition these further to also store any ISOs or VM operating systems?
You will get conflicting answers on this as to whether these are suitable or not. There seems to be a line of reasoning that Proxmox quickly wears out non-enterprise grade SSDs (NVMe or otherwise). This has not been my experience. I have been running consumer grade M.2 NVMe drives in a ZFS raid 1 mirror to host my VMs and consumer grade SATA SSDs in a ZFS raid 1 mirror for my operating system. I will say that you need nowhere near 500GB for the OS. No need to partition them, Proxmox will handle that during the install. If I was starting from scratch I would go with much smaller OS drives. Maybe 128GB even. I would also not use the M.2 drives for booting, since you really want the speed in my opinion in the drives that host the VMs.

3. I'm intending to use the 4x 1Tb SSDs as storage for my test Unraid/ TrueNAS, Linux and Windows VM. Would I be better off setting the RAID up in proxmox, and having storage partitions for each VM, or would it be better to let each VM handle the raid config?
In my opinion, I would set up your drives as follows (leaving aside the number of drives for a moment): I would have two smaller SSDs in a mirror for the OS, two larger drives (1-2TB, probably SSDs or larger M.2 drives) in a mirror for hosting the VMs, and I would do a pass-through on the other drives and let Unraid, etc. manage those drives. You can't get a good feel for how Unraid or any other software package will perform if Proxmox is controlling the storage for them. Again, just how I do it. I run openmediavault, and I pass through two spinning disks for it to manage. They are formatted with BTRFS in a mirror, and OMV handles the snapshots, file system scrubbing, backups, etc.
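As an aside, passing a whole disk through to a guest as described above is a one-liner on the host. A sketch, where the VM ID (100) and the by-id path are placeholders you would replace with your own (list yours with `ls -l /dev/disk/by-id/`):

```
# Attach a whole physical disk to VM 100 as a second SCSI device.
# Using /dev/disk/by-id/ names survives reboots; /dev/sdX ordering may not.
qm set 100 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```

The guest then sees the raw block device and can format and manage it itself, which is exactly what you want for Unraid or DSM testing.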

I am also a fan of not storing data on Proxmox. I use it to run my VMs, but my data is all stored on a separate NAS that provides NFS shares to the individual VMs and/or Docker packages I have running. I also back up my VMs to NFS shares rather than store them on Proxmox. IMO it makes my backup strategy easier. Data sits in my NAS, then gets backed up to a second NAS I built using a Raspberry Pi. That second NAS uses rsync to make a duplicate copy of everything on a second drive, and it shoots out differential backups to the cloud nightly using rclone (3-2-1 backup strategy). So for me, if my Proxmox server crashes, all I have to do is re-install the OS and restore everything from backup. Easy peasy.

4. Would I be better to install docker/ portainer as an LXC or VM, or just use the inbuilt capabilities available on the NAS OS's?
There's probably no right way to do this, but I keep multiple docker instances in separate VMs. The reason for this is network segregation. I have my network segregated using VLANs and some of my VMs and docker images are public internet facing and some are not. The stuff that is public internet facing goes into a special VLAN with port forwarding and/or more restrictive firewall rules. The stuff that is internal only resides in a more trusted VLAN. I also use VLANs to segregate out IOT devices like Ring cameras and TVs. My VM for Home Assistant sits in that VLAN so it can access all the IOT devices. But that VLAN can't access anything else.
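On the Proxmox side, the usual way to let guests join those VLANs is a VLAN-aware bridge. A sketch of the relevant /etc/network/interfaces stanza (the physical port name is a placeholder); each VM's network device then just gets a VLAN tag in its hardware settings:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    # Allow tagged traffic for these VLAN IDs through the bridge
    bridge-vlan-aware yes
    bridge-vids 2-4094
```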

5. Is there anything else I should be doing at this stage ... e.g. one thing I've yet to investigate is how best to backup my VM configurations, in case of failure or corruption so I can easily recover from a stupid error.
I would think about network segregation. I would also advise you to learn how to do snapshots, since that is a great way to roll back a VM if you mess something up, you can revert back to the previous state in seconds. I think you might also consider adding more memory to your system. I seem to run out of memory when spinning up new VMs faster than I run out of CPU cores. ZFS, if you use it on Proxmox, will consume a lot of memory. This isn't a concern or a bad thing, so just expect it and do some pre-emptive reading on the subject.
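The snapshot workflow mentioned above is a couple of commands on the host; a sketch, where the VM ID and snapshot name are examples:

```
# Take a named snapshot of VM 100 before doing anything risky
qm snapshot 100 pre-upgrade

# If it goes wrong, list snapshots and roll back in seconds
qm listsnapshot 100
qm rollback 100 pre-upgrade
```

The same thing is available as a couple of clicks in the web interface, under the VM's Snapshots tab.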
 
3. A linux distro (recommendation please for a beginner who is keen to learn more about the platform and making use of the command line. Not sure if I should run this as a VM or LXC container?

I think Debian is the best Linux distro. It's all I use. I have been super pleased with the latest release and use it for all my VMs as well as on my laptop as a daily driver. Most every tutorial out there covers Debian. I ran Ubuntu for a while, but it's a little too funky for me. I also ran Linux Mint as my desktop for a long time, along with Red Hat, Fedora, openSUSE, MEPIS Linux, and a few others. I just feel most comfortable with Debian and apt/dpkg as the package manager for the most part. Other people will tell you that CentOS is good, as well as Arch Linux. I think it is like arguing whether vanilla or chocolate ice cream is better. I just like Debian the best.
 
One other thing I forgot to mention... with the luxury of having four network connections, would there be benefit in dedicating two of these to the DSM VM only, or would there be some better network config that I should investigate or try - thanks
Couldn't hurt. I run everything on one 2.5GbE network connection and I don't seem to be bandwidth-constrained for what I do.
 
thanks for the feedback ... getting into the silly things is something I'm looking forward to ... regarding the separate hardware for Proxmox, having looked at the Xpenology install and configure procedure, I think that's good advice, as I wouldn't want to lose or corrupt my photos and other documents backed up by the Synology, and that feels like a risk too far, at least until I feel I understand the way things work.
Using the new and used hardware I listed above does mean I have the option to do that, and if things break along the way, at least there won't be any lasting impact. Once I've worked out how to back up my containers and VMs with their specific configs (as you've recommended above), then hopefully it will only be a couple of commands before I have everything reinstated again.
I've seen Ceph storage in the dashboard menu but not yet investigated what it is, or why I might want to use it ... so it's been added to my list to look at.
Debian it is then - thanks
 
Thanks Louie ... I did look at using a secondhand Xeon-based system, but as I'm intending to leave this powered on 24/7 I also didn't want to overdo it and was hoping for something more power-efficient (this uses about 25W at idle with just Proxmox installed and disks powered up). Not sure if I can run any benchmarks on the system (as it's basically Debian, I guess I can). So whilst it's starting out small and light, it does come with a lot of more modern tricks built in, such as the 6x SATA, 2x M.2 and 4x 2.5GbE interfaces. I'm pretty sure, though, that if the homelab bug sets in, it won't be long until I'm looking for something more capable.
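On the benchmarking point: since PVE is basically Debian underneath, you can indeed run standard benchmarks from the shell. One common tool is sysbench, which is in the Debian repositories; a quick sketch (the install needs root, and the thread count is just an example):

```
apt install sysbench
sysbench cpu --threads=4 run    # simple CPU throughput test
sysbench memory run             # memory bandwidth test
```

Running the same tests inside a VM and on the host gives a rough feel for the virtualisation overhead, too.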

Regarding the size of the boot drives, I am intending to use that space to also host the VMs and containers and use the SSDs just for storage - as you say they are way too big for just Proxmox itself, but it seemed a waste to tie up both M.2 slots with smaller NVMe drives (plus smaller-capacity NVMe drives aren't much cheaper, even if you can find them these days). The way you are managing your drives and snapshots for OMV is pretty much what I'm hoping to do for DSM ... I might still set DSM up on Proxmox as a test platform, but I appreciate the warning regarding using Proxmox for data storage. I have a couple of Pi 4s that I have used previously for various projects, so using Proxmox and a similar backup/recovery strategy would work for me.

I also have separate VLANs for secure/IoT devices and one for media/TVs etc., but I haven't risked opening up ports for access from the internet ... I feel I need to understand a lot more around firewall configurations before I do that, although I have looked into using a ZeroTier solution that would allow me access back into my network whilst out and about.
I have also read that ZFS is memory hungry, and I have the option of replacing my two memory modules with 2x 32GB, which I'm hoping will help significantly. I'm also considering turning off some of the default logging that goes on, to try and reduce the read/writes on the SSDs.
Thanks for your help and the steer in the right direction.
 
Not trying to prove my point, but FWIW, I just re-installed Proxmox again last night because I got some new drives and wanted to reconfigure all of my storage pools. :D

But at least now I have my idle power usage down to 58 watts, which is pretty good for this hardware. Now if I can figure out how to boot without the GPU I may be able to cut that in half. Unfortunately the BIOS won't POST without the GPU, so that is my next puzzle to tackle.
 
Can you indicate whether your VM storage is ZFS or Ceph? Are there any particular tutorials you used for OMV and rclone that I can refer to?
 
