You were right, the sysctl file-max wasn't configured on the new server. But the default unconfigured file-max setting on the new servers is 26350512, which is 3-4 times higher than the configured setting on the old server (8388608). And there were no user-level limits defined at any of the...
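For reference, the kernel-wide limit and the per-user limits are separate knobs and can be checked independently; a minimal diagnostic/config sketch on a Linux box (the limits.conf values are illustrative examples, not values from this thread):

```shell
# Kernel-wide open-file limit (fs.file-max); the thread's values were
# 26350512 on the new server (unconfigured default) vs 8388608 on the old.
sysctl fs.file-max
cat /proc/sys/fs/file-max   # equivalent read

# Per-user limits are independent of fs.file-max; check as the affected user:
ulimit -n    # soft limit on open files
ulimit -Hn   # hard limit on open files

# To define user-level limits explicitly, entries in
# /etc/security/limits.conf such as (example values):
# *    soft    nofile    65536
# *    hard    nofile    65536
```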
Based on all of your advice I am now using OmniOS, and almost everything has been working perfectly for months. I am using ZFS over iSCSI for the KVM-based VMs without any problems, and LVM over iSCSI for the LXC-based VMs. But during the last 2 months I have twice seen problems with the LVM...
I'm quite sure you're right... I just noticed small differences, like in Solaris it's "ipadm create-ip" whereas the analogous command in OmniOS/OpenIndiana is "ipadm create-if".
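For comparison, a minimal static-address setup on OmniOS/OpenIndiana looks like this (the interface name e1000g0 and the address are placeholders; these are admin commands for an illumos host, shown as a sketch):

```shell
# OmniOS/OpenIndiana: create the IP interface, then a static address on it
ipadm create-if e1000g0
ipadm create-addr -T static -a 192.168.1.10/24 e1000g0/v4

# Solaris 11 equivalent of the first step:
# ipadm create-ip e1000g0
```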
Regarding the desktop feature, I compared OpenIndiana to Ubuntu. They both use a lot of resources related to the...
From my very, very, very small experience with the illumos-based OSes, it looks like OpenIndiana is a slightly more user-friendly variant (like Ubuntu compared to Debian). But elements of the structure are very similar to OmniOS (and a little different from Solaris). So I guess OpenIndiana...
Sorry... But I have just tried starting with the Solaris install, where I selected to install Solaris on a 300GB GPT partition (I used the Solaris installer to create the partition and let it finish the installation). Afterwards I started the OmniOS installer...
And how about using OpenIndiana for the OS? I switched from OpenIndiana to Solaris because I like stability, and I expected greater stability from OmniOS and/or Solaris. But I have only positive experiences with OpenIndiana, where the system was easy to configure and the Napp-it installation...
I tried installing OmniOS using the LTS, Stable and Bloody releases. For all three, the installation failed when I selected the entire 3TB disk as rpool storage, and all the error messages contained fdisk comments...
So I will probably have to use either OpenIndiana or Solaris, where a +2TB rpool...
I have figured that out... But I have 12 x 3TB disks in the NAS, and no small disks. In the current Solaris configuration I am using the first 2 x 3TB in a mirrored rpool, and the remaining 10 disks in 2 x raidz2 (each with 5 disks). The speed of this 2 x raidz2 setup is quite good, so I can easily live...
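The layout described above can be sketched as follows (the disk names c0t0d0... and the pool name "tank" are placeholders, and the rpool mirror is normally created by the installer rather than by hand):

```shell
# Mirrored root pool on the first two 3TB disks
# (normally set up by the installer; shown here for completeness)
zpool create rpool mirror c0t0d0 c0t1d0

# Data pool: two raidz2 vdevs of 5 disks each
zpool create tank \
    raidz2 c0t2d0 c0t3d0 c0t4d0 c0t5d0  c0t6d0 \
    raidz2 c0t7d0 c0t8d0 c0t9d0 c0t10d0 c0t11d0
```

Each 5-disk raidz2 vdev spends 2 disks on parity, so the data pool ends up with roughly 6 x 3TB of usable raw capacity across the 10 disks.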
Sorry for the delay... I actually never figured out how to make OmniOS work on the NAS server. The problems were all related to the +2TB disk sizes. In principle OmniOS should support GPT if I added the full disks to the root pool, but the installer just stopped working instead. I was able to...
Forget this question... Hardware recognition failed during installation of the OmniOS LTS release, but I afterwards tested with the OmniOS Stable release, and there the hardware was correctly identified. So I will just continue using Stable instead of LTS.
Just one final question: I've been having problems getting both the 10G fibre and the RAID controller working under OmniOS, while all of the hardware works out of the box under Debian Jessie. What about installing Debian Jessie with ZFS (from the zfsonlinux repository) as the root filesystem, and on top of...
Thanks... I just needed to understand which iSCSI systems support Proxmox.
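For reference, Proxmox configures ZFS over iSCSI in /etc/pve/storage.cfg with an `iscsiprovider` field selecting the target implementation on the storage host (e.g. comstar on Solaris/illumos, istgt on FreeBSD, iet on Linux). A sketch of such an entry, where the storage ID, portal address, pool name and target IQN are all placeholders:

```
zfs: san
        blocksize 4k
        iscsiprovider comstar
        pool tank
        portal 192.168.1.20
        target iqn.2010-08.org.illumos:02:target0
        content images
```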
I have to admit that the 3 Proxmox servers I am managing at the moment all use local ZFS for storage, so I am not accustomed to these problems with iSCSI. But I am in the process of retiring 2 of these servers and...
So the only Linux iSCSI target versions that fully support ZFS over iSCSI for Proxmox require kernels earlier than 2.6.38? If so, Debian Squeeze was the last release whose iSCSI target supported ZFS over iSCSI for Proxmox.
I have to admit that I have difficulty understanding why Proxmox hasn't put all resources...