When I first deploy an OS, things work great, but occasionally I feel like the underlying I/O is slurping up bits over iSCSI, leading to lag in my VM. Then, sometimes, it works great again.
Example: I have QuantShare exporting a 10GB database inside my VM. Imagine all the cache that just got overwritten by that DB! My cache is 32GB; assuming the OS working set is in there, that 10GB database sitting on the VM's disk as a .csv is about to be imported into PostgreSQL.
So assuming all that happens, my VM acts slow for a long time, but then magically runs at full speed again sometime later. I SUSPECT this is because of the way the DB data is shuffled around: blocks that were recently held are evicted, and that 10GB churns in and out of my L2ARC.
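One way to test that suspicion rather than guess: watch the ARC/L2ARC hit counters while the import runs. A rough sketch for Linux OpenZFS follows (tool names and kstat paths differ on FreeBSD/illumos, and `arcstat` may not be installed by default):

```shell
# Per-interval ARC summary every 5 seconds, if arcstat is available:
arcstat 5

# Raw counters straight from the kernel, as a fallback.
# If l2_misses climbs while l2_hits stays flat during the import,
# the L2ARC churn theory looks plausible.
grep -E '^(hits|misses|l2_hits|l2_misses|size|l2_size) ' \
    /proc/spl/kstat/zfs/arcstats
```

If the slowdown window lines up with a collapse in the hit ratio, that points at cache eviction rather than the iSCSI transport itself.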
So... does anyone have experience with ZFS iSCSI caching strategies? Has anyone changed the underlying caching mechanism, e.g. from LIFO to FIFO?
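For what it's worth, the ARC's replacement policy isn't a simple LIFO/FIFO queue you can swap out (it's an adaptive mix of most-recently-used and most-frequently-used lists), but you can control what a given dataset is allowed to cache. A hedged sketch, with a hypothetical pool/zvol name `tank/iscsi-vm` standing in for whatever backs the iSCSI target:

```shell
# Keep only metadata in the ARC for the iSCSI zvol, so a one-off
# 10GB streaming import can't evict the VM's hot data:
zfs set primarycache=metadata tank/iscsi-vm

# Or keep ARC caching but stop the L2ARC from absorbing the churn:
zfs set secondarycache=metadata tank/iscsi-vm

# Linux OpenZFS module tunables governing how fast the L2ARC fills:
cat /sys/module/zfs/parameters/l2arc_noprefetch   # 1 = don't feed prefetched (streaming) reads to L2ARC
cat /sys/module/zfs/parameters/l2arc_write_max    # max bytes written to L2ARC per feed interval
```

Whether `metadata` is the right setting depends on the workload; it trades read caching for eviction protection, so it's worth benchmarking the import both ways.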