Recommended storage system?

Just one final question: I've been having problems getting both the 10G fibre card and the RAID controller working under OmniOS, while all of the hardware works out of the box under Debian Jessie. What if I install Debian Jessie with ZFS (from the zfsonlinux repository) as the root filesystem and install napp-it on top of that? Will that work? I have been running Debian with ZFS as the root filesystem for some years now and it has been stable. So the only open question is whether ZFS on Linux supports exposing a ZFS volume as an iSCSI LUN.
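For reference, ZFS on Linux can expose a zvol as a block device that the standard Linux iSCSI target stack (LIO/targetcli) can then export - this is outside napp-it/COMSTAR, and the pool name, zvol name and IQN below are only placeholders (ACL and portal setup omitted):

# Create a sparse 100G zvol to be exported as a block device
zfs create -s -V 100G tank/lun0

# Export it over iSCSI with targetcli (LIO); names and IQN are examples
targetcli /backstores/block create name=lun0 dev=/dev/zvol/tank/lun0
targetcli /iscsi create iqn.2017-01.local.nas:target0
targetcli /iscsi/iqn.2017-01.local.nas:target0/tpg1/luns create /backstores/block/lun0
targetcli saveconfig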

Forget this question... Hardware recognition failed during installation of the OmniOS LTS release, but afterwards I tested with the OmniOS stable release and there the hardware was correctly identified. So I will just continue using stable instead of LTS.
 
Was it failing on the LTS after a pkg update, or was it a pristine system from the install CD? AFAIK there should be no difference in hardware support between LTS and stable with regard to NICs and hardware controllers. If it is a matter of CPU micro-architecture then there is a difference: Skylake is not supported on LTS, and with Kaby Lake I don't know.
 
Sorry for the delay... I actually never figured out how to make OmniOS work on the NAS server. The problems were all related to disks larger than 2 TB. In principle OmniOS should support GPT if I added the full disks to the root pool, but the installer just stopped working instead. I was only able to install OmniOS if I used MBR for the rpool disks.

At first I installed OpenIndiana + Napp-it instead, but at the moment I am using Solaris OS + Napp-it, and both systems handle GPT disks much better than OmniOS. I haven't decided what to do yet... The Solaris OS solution is working very well, but I am using the free network developer edition and cannot use that in a production environment. So I will probably either have to figure out how to get the OmniOS installer to work with GPT disks or use OpenIndiana.

I have also been playing around with the Proxmox and Napp-it configurations. Your thread with RobFantini from about a year ago has been enlightening... But so far I am still testing. All 3 PVE servers work without problems and are just waiting for the final iSCSI configuration.
 
Why do you need a rpool of 2+ TB?
The preferred way is to have the rpool on a small disk (an Intel DC S3610 100GB is more than adequate, or two mirrored disks) and then keep the storage in a separate large pool.
 
I have figured that out... But I have 12 x 3TB disks in the NAS and no small disks. In the current Solaris OS configuration I am using the first 2 x 3TB in a mirrored rpool and the remaining 10 disks in 2 x raidz2 (5 disks each). The speed of this 2 x raidz2 is quite good, so I can easily live with it. I still haven't found any particular use for the full rpool, but something will turn up for the remaining 2.? TB.

Solaris OS supports GPT partitioning, so I actually started by creating a 300GB partition for the rpool, using the remaining space of those two disks in a RAID10 configuration (2 x mirrored), and putting the remaining 8 disks in a 2 x raidz2 pool. I switched back to using the full disks when I thought about how we would handle a disk error in one of the first two disks if I wasn't at work: the disk partitioning would have to be done before reinserting the replacement disk into the pools, and I am the only one at work who can do this. So I switched to the 3TB rpool simply because it is easier to maintain in case of disk errors.
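For reference, the storage part of that layout corresponds roughly to the following (the mirrored rpool on the first two disks is created by the installer; the device names below are placeholders, not the actual ones):

# Storage pool: two raidz2 vdevs of 5 disks each (device names are examples)
zpool create tank \
  raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
  raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0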
 
OmniOS supports GPT partitioning as well, but the problem for you is that OmniOS LTS does not support disks larger than 2 TB, whereas OmniOS Stable does. You will, however, not be able to create an install partition larger than 2 TB.
 
I tried installing OmniOS with the LTS, Stable and Bloody releases. For all three the installation failed when I selected the entire 3TB disk as rpool storage, and all the error messages contained fdisk-related output...

So I will probably have to use either OpenIndiana or Solaris OS, where 2+ TB rpool disks are supported.

I miss Linux, where on most distributions you can create the partitions from another system, copy the base files into place, then just chroot into the new system, install the bootloader and configure a few settings. This would be quite easy for OmniOS since I could simply boot an OpenIndiana Live image. Is it possible to do this for OmniOS?
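For comparison, the Linux procedure I am thinking of looks roughly like this on Debian (device names, mountpoints and the mirror URL are just examples; I am not claiming OmniOS supports the same steps):

# Format the target root partition from another running system, then install the base system
mkfs.ext4 /dev/sdb2
mount /dev/sdb2 /mnt
debootstrap jessie /mnt http://ftp.debian.org/debian

# Bind the virtual filesystems and chroot into the new installation
for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
chroot /mnt /bin/bash

# Inside the chroot: install kernel and bootloader, then configure the basics
apt-get install linux-image-amd64 grub-pc
grub-install /dev/sdb
update-grub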
 
And how about using OpenIndiana for the OS? I switched from OpenIndiana to Solaris OS because I like stability and expected greater stability from OmniOS and/or Solaris OS. But I only have positive experiences with OpenIndiana, where the system was easy to configure and the Napp-it installation also worked without glitches. So my assumption that the most consumer-friendly systems are unsuited for servers may very well be wrong...
 
You can prepare the disk before installation, and when the installer asks you to choose a disk you simply choose the prepared slice. If OmniOS recognizes a Solaris partition on the disk it will not reformat the whole disk but use the assigned slice, formatting only that slice before installation.
 
Sorry... But I have just tried it: I started by running the Solaris OS installer and selected to install Solaris OS on a 300GB GPT partition (I used the Solaris OS installer to create the partition and let it finish the installation). Afterwards I started the OmniOS installer (stable) and it did not recognize the Solaris partition created by the Solaris OS installer. So I am sorry to report: the OmniOS installer simply doesn't work with my GPT partitions...

So what's your recommendation: should I choose OpenIndiana or Solaris OS? My patience with OmniOS has expired, and considering that GPT has been the primary partitioning scheme for pretty much all hardware for at least the last 5 years, I don't understand the priorities of the OmniOS maintainers. So I will have to use either OpenIndiana or Solaris OS. We cannot find the money for a full Solaris license, so OpenIndiana is probably the only real option. Within the Network Developer License for Solaris there is, however, a small loophole for educational use, and this server is primarily used for educational purposes at an educational institution (possibly within the license restrictions). But to be on the safe legal side I will probably end up using OpenIndiana.
 
Solaris 11 is incompatible with any other Solaris (blame Oracle), so this will surely not work. As for your question about OpenIndiana versus Solaris: I don't know OpenIndiana, and as for Solaris, avoid anything coming out of Oracle like the plague.
 
From my very, very, very limited experience with the illumos-based OSes, it looks like OpenIndiana is a slightly more user-friendly variant (like Ubuntu compared to Debian). But the structure is very similar to OmniOS (and a little different from Solaris OS), so I guess OpenIndiana and OmniOS use the same kernel and core configuration, whereas Solaris OS probably uses a slightly different development branch.

But I have only played around with Solarish for some weeks... And my knowledge is therefore very limited.
 
OpenIndiana uses the same kernel and userland - the illumos kernel and kernel utilities plus GNU tools for the userland - and the same ZFS implementation, OpenZFS. The main difference between OpenIndiana and OmniOS is that OpenIndiana is a desktop OS while OmniOS is a server OS geared towards datacentres. Another difference is that OmniOS is backed by a company, OmniTI, which provides commercial support should it be needed.
 
I'm quite sure you're right... I just notice small differences, like in Solaris OS it's "ipadm create-ip" whereas the equivalent command in OmniOS/OpenIndiana is "ipadm create-if".
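For example, configuring a static address looks roughly like this on OmniOS/OpenIndiana (interface name and address are placeholders):

# Create the IP interface and assign a static IPv4 address
ipadm create-if e1000g0
ipadm create-addr -T static -a 192.168.1.10/24 e1000g0/v4
# On Solaris 11 the first step is "ipadm create-ip e1000g0" instead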

Regarding the desktop aspect, I compared OpenIndiana to Ubuntu. They both use a lot of resources on the desktop (for some reason), but they also both have server editions. Personally I prefer Debian over Ubuntu when it comes to servers, but every once in a while I need some program that's only officially supported on Ubuntu, and there the Ubuntu server editions are useful and work pretty well. I expect pretty much the same from OpenIndiana...
 
Based on all of your advice I am now using OmniOS, and almost everything has been working perfectly for months. I am using ZFS over iSCSI for the KVM-based VMs without any problems, and LVM over iSCSI for the LXC-based containers. But over the last 2 months I have twice seen problems with the LVM over iSCSI setup. Both times the problem started in the same LXC container, which sits on a 600GB disk where pretty much all data is stored in a B-tree structure (millions of very small files, all created by the FirstClass server program running on a Debian system). Both times the problem started with this one container and was then somehow transferred to the other LXC containers (on all the servers). A simple reboot of the containers solves the problem; there is no need to reboot the PVE servers or the OmniOS storage. I guess the problem is somehow related to the LUN used for LVM over iSCSI, and that it originates on the OmniOS storage system (because it affects all 3 PVE servers).
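For context, the two storage definitions in /etc/pve/storage.cfg look roughly like this - the zfs entry carries the KVM guests and the iscsi + lvm pair carries the LXC guests (storage IDs, portal address, IQNs, pool and VG names are placeholders, not the real values):

zfs: omnios-zfs
        portal 192.168.1.20
        target iqn.2010-09.org.napp-it:proxmox
        pool tank
        iscsiprovider comstar
        blocksize 4k
        content images

iscsi: omnios-iscsi
        portal 192.168.1.20
        target iqn.2010-09.org.napp-it:lvm
        content none

lvm: omnios-lvm
        vgname vg-iscsi
        base omnios-iscsi:0.0.0.scsi-<wwid-of-the-lun>
        shared 1
        content rootdir,images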

Do you know of any settings in OmniOS that allow a large number of open file connections to a LUN? This is frequently the problem with this particular type of program (the B-tree storage variant). I am thinking about creating a KVM-based VM for this machine instead, but it worked very well as an LXC system stored on local ZFS disks before being moved to the new servers (it started out as an OpenVZ system quite a few years ago).
 
I don't think this is a limitation in iSCSI or the iSCSI implementation in OmniOS. I rather think it is either a limitation in LVM, or perhaps you are hitting the maximum number of open files in Linux. Try the info from this article: https://www.cyberciti.biz/faq/linux-increase-the-maximum-number-of-open-files/ on your Proxmox hosts and inside the LXC client.
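Along the lines of that article, checking and raising the limits would look roughly like this on the Proxmox hosts and inside the container (the numbers are only examples, not recommendations):

# System-wide: currently allocated / free / maximum file handles
cat /proc/sys/fs/file-nr
sysctl fs.file-max

# Raise the system-wide maximum persistently (example value)
echo 'fs.file-max = 2097152' >> /etc/sysctl.conf
sysctl -p

# Per-process limit for the current shell, and persistent per-user limits
ulimit -n
cat >> /etc/security/limits.conf <<'EOF'
*  soft  nofile  65535
*  hard  nofile  65535
EOF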

You were right: the sysctl fs.file-max wasn't configured on the new servers. But the default, unconfigured file-max on the new servers is 26350512, which is 3-4 times higher than the configured setting on the old server (8388608), and no user-level limits were defined on either system. The old server did, however, not use LVM for this particular filesystem; it was handled entirely as a local ZFS filesystem. So perhaps there are some LVM limitations... It also looks like the LVM over iSCSI setup defaults to the ext4 filesystem, so there may be limitations there as well. I have been using zfs and/or btrfs for everything for the last couple of years and don't follow normal filesystem development, so my knowledge about ext4 is some years old. But the problems may very well be related to the ext4 filesystems: the kernel logs from the machine shutdowns were all filled with ext4-related errors (mostly read/write errors, which brings my thoughts back to the LVM layer), yet these were not LVM error messages but ext4 messages. The peculiar part is that everything works again after a VM reboot. I guess the filesystem is unmounted during shutdown and remounted during boot, and after that the VMs work again.

But if you have no other suggestions I will probably create a KVM-based VM today and move the user data (about 450GB) from the old LXC machine to the new KVM-based one tonight, when nobody is using the system.
 
