You were right, the sysctl file-max wasn't configured on the new server. But the default unconfigured file-max setting on the new servers is 26350512, which is 3-4 times higher than the configured setting on the old server (8388608). And there were no user-level limits defined on any of the...
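For reference, this is roughly how I would now pin those values explicitly instead of relying on the distribution defaults (standard Linux paths; the file-max number is just the one from the old server, and the per-user nofile values are only example figures):

    # /etc/sysctl.conf - system-wide cap on open file handles
    fs.file-max = 8388608

    # /etc/security/limits.conf - per-user limits (none were defined on either server)
    *   soft   nofile   65536
    *   hard   nofile   131072

    # apply and verify without a reboot
    sysctl -p
    cat /proc/sys/fs/file-max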
Based on all of your advice I am now using OmniOS, and almost everything has been working perfectly for months. I am using ZFS over iSCSI for the KVM-based VMs without any problems, and LVM over iSCSI for the LXC-based VMs. But during the last 2 months I have twice seen problems with the LVM...
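For anyone comparing setups, the two storages are defined along these lines in /etc/pve/storage.cfg (the names, addresses and IQNs below are placeholders, and I am quoting the syntax from memory, so verify it against the Proxmox wiki):

    # ZFS over iSCSI (COMSTAR on the OmniOS box) for the KVM guests
    zfs: omnios-zfs
        iscsiprovider comstar
        portal 192.168.0.10
        target iqn.2010-08.org.illumos:02:storage
        pool tank
        blocksize 8k
        content images

    # a plain iSCSI LUN with shared LVM on top for the LXC guests
    iscsi: omnios-iscsi
        portal 192.168.0.10
        target iqn.2010-08.org.illumos:02:lxc
        content none

    lvm: omnios-lvm
        vgname vg_lxc
        base omnios-iscsi:0.0.0.scsi-36...
        content rootdir
        shared 1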
I'm quite sure you're right... I just notice small differences, like how in Solaris OS it's "ipadm create-ip" whereas the analogous command in OmniOS/OpenIndiana is "ipadm create-if".
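As a concrete example, bringing up a static address looks like this on the two platforms (the interface names are only examples):

    # Solaris 11
    ipadm create-ip net0
    ipadm create-addr -T static -a 192.168.0.10/24 net0/v4

    # OmniOS / OpenIndiana
    ipadm create-if e1000g0
    ipadm create-addr -T static -a 192.168.0.10/24 e1000g0/v4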
Regarding the desktop feature, I compared OpenIndiana to Ubuntu. They both use a lot of resources related to the...
From my very, very, very limited experience with the illumos-based OSes, it looks like OpenIndiana is a slightly more user-friendly variant (like Ubuntu compared to Debian). But elements of the structure are very similar to OmniOS (a little different compared to Solaris OS). So I guess OpenIndiana...
Sorry... But I have just tried something: I started by running the Solaris OS installer and selected to install Solaris OS on a 300GB GPT partition (I used the Solaris OS installer to create the partition and allowed it to finish the installation). Afterwards I started the OmniOS installer...
And how about using OpenIndiana for the OS? I switched from OpenIndiana to Solaris OS because I like stability, and I expect greater stability from OmniOS and/or Solaris OS. But I have only positive experiences with OpenIndiana, where the system was easy to configure and the Napp-it installation...
I tried installing OmniOS using the LTS, Stable and Bloody releases. For all 3 the installation failed when I selected the entire 3TB disk as rpool storage, and all the error messages mentioned fdisk...
So I will probably have to use either OpenIndiana or Solaris OS, where a +2TB rpool...
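The fdisk errors at least make sense to me now: an MBR partition table stores the partition size as a 32-bit sector count, so the largest partition fdisk can describe is 2^32 x 512 bytes = 2 TiB. A 3TB disk therefore needs a GPT/EFI label, which appears to be exactly what the installer cannot create for the rpool.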
I have figured that out... But I have 12 x 3TB in the NAS, and no small disks. In the current Solaris OS configuration I am using the first 2 x 3TB in a mirrored rpool and the remaining 10 disks in 2 x raidz2 (each with 5 disks). The speed of these 2 x raidz2 is quite good, so I can easily live...
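For reference, the data-pool part of that layout corresponds to something like this (the device names are made up; the mirrored rpool itself is created by the installer):

    zpool create tank \
        raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0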
Sorry for the delay... I actually never figured out how to make OmniOS work on the NAS server. The problems were all related to +2TB disk sizes. In principle OmniOS should support GPT if I added the full disks to the root pool, but the installer just stopped working instead. I was able to...
Forget this question... Hardware recognition failed during installation of the OmniOS LTS release, but afterwards I tested with the OmniOS Stable release, and here the hardware was correctly identified. So I will just continue using Stable instead of LTS.
Just one final question: I've been having problems getting both the 10G fibre and the RAID controller working with OmniOS, yet all of the hardware works out of the box with Debian Jessie. What if I install Debian Jessie and use ZFS (from the zfsonlinux repository) as the root filesystem, and on top of...
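If I go that route, my understanding of the zfsonlinux setup on Jessie is roughly the following (package names quoted from memory of the zfsonlinux.org instructions, so treat it as a sketch and check the current docs):

    # add the zfsonlinux.org repository and install ZFS via DKMS
    wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
    dpkg -i zfsonlinux_6_all.deb
    apt-get update
    apt-get install linux-headers-$(uname -r) debian-zfs

    # quick sanity check
    modprobe zfs && zpool status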
Thanks... I just needed to understand which iSCSI systems Proxmox supports.
I have to admit that the 3 Proxmox servers I am managing at the moment all use local ZFS for storage, so I am not accustomed to having these problems with iSCSI. But I am in the process of retiring 2 of these servers and...
So the only Linux iSCSI target versions fully supporting ZFS over iSCSI for Proxmox require kernels earlier than 2.6.38? If so, Debian Squeeze was the last release whose iSCSI target supported ZFS over iSCSI for Proxmox.
I have to admit that I have difficulty understanding why Proxmox hasn't put all resources...
OK... I will follow your recommendation and try the OmniOS and Napp-it combo (and iSCSI for the server). I actually haven't played around with Solaris OS since about 20 years ago at university, where Solaris OS was the only real possibility. But my preferences for Linux and the BSD systems are...
The primary NAS has 12 x 3TB 3.5" SAS disks, and the secondary one for backup contains at the moment just 8 empty 3.5" caddies. I was only planning on using the secondary one for backups, and may still choose cheaper SATA disks. That depends on whether I am going to use it for Ceph or just for backups. Regarding...
Both NAS have 2 x 6-core Xeons and all 3 virtualization servers have 2 x 8-core Xeons. But I will give the old NAS' their deserved retirement...
The first NAS was intended as storage for the VMs and the second one only for backups of the VMs. But since Ceph has built-in disaster...
I could create a Ceph cluster using the 2 NAS' and also most of the local storage space in the 3 virtualization servers. All 3 virtualization servers have 4 HDDs, but all of the VMs are going to use network storage. So this limited local disk space (4 x 300GB enterprise disks) is only for the OS, and...
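If the OSDs end up living on the Proxmox nodes themselves, I assume the setup would follow the standard pveceph flow, something like this (the network and device names are placeholders):

    pveceph install                      # on every node
    pveceph init --network 10.0.0.0/24   # once, on the first node
    pveceph createmon                    # on each monitor node
    pveceph createosd /dev/sdb           # for each spare disk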
I am about to configure a NAS that's supposed to supply the storage for 3 virtualization servers, and I need some advice regarding which base system to choose. The NAS has good processors, sufficient RAM, good disks and 10G connections between the servers… So I am mostly looking for general...