> It's not "busted", but under load (and this is indicated by the high I/O wait).

Why aren't the RAM statistics showing that it's busted?
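A quick way to watch that I/O wait live, for anyone following along (vmstat comes with procps, iostat needs the sysstat package):

vmstat 1      # the "wa" column is the percentage of CPU time spent waiting for I/O
iostat -x 1   # adds per-device utilisation, so you can see which disk is saturated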
> It can improve things, but depending on what I/O load you impose on your 2 spinning disks, it may not be much.

So you suggest 16 GB of RAM, and then it should run again?
> Check the syslog/journal (journalctl) for clues.

Is it possible that, with my new machine and its (relatively) low RAM, the host reboots due to full RAM?
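A sketch of where one would look first, assuming the journal is persistent (i.e. /var/log/journal exists) so messages survive the reboot:

journalctl -b -1 -p warning   # warnings and errors from the previous boot
journalctl -k -b -1           # kernel messages from the previous boot, e.g. OOM-killer activity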
Okay, I went on with my analysis. Assuming the RAM causes the trouble, and together with my experience with my big server, I got the idea that all the strange trouble is caused by low RAM. So after the new machine ran smoothly all night long, I started an additional VM, and after less than 15 minutes the system rebooted. Nothing to be found in the logs, at least not where I looked.
Just now, the additional RAM for the new machine arrived. I will do some tests that exhaust ("bust") the RAM, to validate whether that is my problem or not. Afterwards I will do the same tests with the additional RAM installed. I will publish my results in the other thread.
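A sketch of such a memory-exhaustion test with stress-ng (the worker count, allocation size and duration are only placeholders):

apt install stress-ng
stress-ng --vm 2 --vm-bytes 90% --timeout 15m   # two workers allocating ~90% of RAM for 15 minutes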
zfs set primarycache=metadata rpool/swap     # cache only metadata in the ARC, not the swap data itself
zfs set secondarycache=metadata rpool/swap   # same for the L2ARC
zfs set compression=zle rpool/swap           # zero-length encoding: only runs of zeroes get compressed
zfs set checksum=off rpool/swap              # skip checksumming for the swap blocks
zfs set sync=always rpool/swap               # commit every write synchronously
zfs set logbias=throughput rpool/swap        # optimise for throughput; don't funnel writes through a separate log device
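For context, these properties apply to a dedicated swap zvol. A minimal sketch of creating and enabling one (the 8 GB size is just an example):

zfs create -V 8G -b $(getconf PAGESIZE) rpool/swap   # 8 GB zvol with page-sized blocks
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
echo '/dev/zvol/rpool/swap none swap defaults 0 0' >> /etc/fstab   # enable it at boot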
> Thanks for your feedback. The only reason I'm testing ZFS is the ability to replicate a VM to a second node for disaster recovery, which (if I understood correctly) is not possible without ZFS.

Thanks mate,
I got the hint about ARC limiting and set it to a maximum of 2 GB. This led to a situation where the system ran stable, but quite slow.
After upgrading to 16 GB of RAM, I enlarged the ARC to 4 GB, which made working significantly more comfortable. But I'm afraid this is only enough for four, maybe five VMs with low workload.
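For reference, a sketch of how the ARC cap is set on ZFS on Linux; the value is in bytes, so 4 GB = 4294967296:

echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf   # applied at boot; run update-initramfs -u afterwards
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max               # apply immediately without a reboot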
The swappiness I haven't tuned yet, but I will try to do so. Thanks for the hint.
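A sketch of that tuning; 10 is just a commonly suggested value for hypervisors, not a measured optimum, and the file name is arbitrary:

sysctl vm.swappiness=10                                        # takes effect immediately
echo "vm.swappiness = 10" > /etc/sysctl.d/99-swappiness.conf   # persists across reboots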
I'm working on my big machine, where I decided to go to 64 GB of RAM (coming from 32) and to introduce Samsung SM863a SSDs for ZIL and L2ARC. This should accelerate the system dramatically. I hate spending all that money, but the big machine will get ZFS running, by any means. ;-)
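Adding the SSDs later is a one-liner each; the pool name and device paths below are only placeholders for the SM863a partitions:

zpool add rpool log /dev/disk/by-id/ata-SAMSUNG_SM863a_XXXX-part1     # small partition as ZIL/SLOG
zpool add rpool cache /dev/disk/by-id/ata-SAMSUNG_SM863a_XXXX-part2   # larger partition as L2ARC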
On the small one I will most probably set up Debian 9 using software RAID1 and then install Proxmox on top.
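Roughly what the software RAID1 part would look like with mdadm before installing Proxmox on top (disk and partition names are placeholders):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # mirror the two disks
mkfs.ext4 /dev/md0                                                       # filesystem for the Debian system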
I have an old server running Debian 4 with RAID1, powering 7 VMs. This old machine has been running VMware Server since 2008 with 2 GB of RAM, which is sufficient performance-wise. So I think that for small machines, ZFS is too mighty a filesystem, eating too many resources. All the nice management features and intelligence are maybe a bit of extreme overkill for little "baby servers"... :-(
Cheers,
Christian