Have you tested this recently?
I have never used ZFS on Solaris, but I used the BSD port of ZFS for many years; it is the oldest and most mature of the ports and is widely considered on par with the Solaris original.
I actually saw INCREASED performance when I exported my BSD-based pool and imported...
Don't conflate TLER with SAS. Plenty of SATA drives have TLER; it is not a SAS-only feature. I currently use WD Red drives, which do have TLER.
In fact, WD has multiple lines of enterprise-class SATA drives with TLER, including their Re, Se, and Gold lines. What's more, they also...
To OP, I have been running ZFS on SATA disks for years and have never had any performance issues.
In fact, I see SAS as nothing but a waste of money these days.
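For anyone who wants to verify this on their own drives, TLER shows up as SCT Error Recovery Control in smartctl; a quick sketch, with `/dev/sdX` as a placeholder for your device:

```shell
# Query the current SCT Error Recovery Control (TLER) timeouts
smartctl -l scterc /dev/sdX

# Set read and write recovery timeouts to 7.0 seconds
# (values are given in tenths of a second)
smartctl -l scterc,70,70 /dev/sdX
```

Drives without TLER support will report that SCT Error Recovery Control is not supported.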
Let me add one caution to this.
If you created your pool a long time ago on a different system and imported it to your Proxmox box, it MAY still be running the old pool version, despite running on a more modern implementation of ZFS.
You can correct this by upgrading your pool using the "zpool...
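The check-and-upgrade sequence can be sketched like this, assuming a pool named `tank` (substitute your own pool name). Keep in mind the upgrade is one-way: older ZFS implementations will no longer be able to import the pool afterwards:

```shell
# List pools whose on-disk version/feature set is older than
# what the running ZFS implementation supports
zpool upgrade

# Show pool health; an upgradable pool also prints a
# "pool can be upgraded" status message here
zpool status tank

# Upgrade the pool to the version supported by the running ZFS
zpool upgrade tank
```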
Yeah, so "Log Device Removal" was added as a feature in ZFS Pool Version 19 in September 2010.
Ever since then, removing a log device no longer kills the pool. You just lose any uncommitted pending data in the log, which you would only have if your system crashed or lost power at the same...
This USED to be the case several pool revisions ago. I can't find the exact pool revision it changed in, but it hasn't been a problem, probably since 2012 or so. These days when a ZIL fails you just lose any data that needs to be read from the ZIL, and that only happens if the system goes...
So, googling around I found someone else with an identical problem to mine, on a Debian Webserver.
Both df -h and df -ih show plenty of free space, but he, like me, is still getting "no space left on device" error messages.
He seems to have solved his issue by making a change to...
I'd argue you can go with one log drive and one cache drive. These days you only need to mirror your logs if you are REALLY paranoid.
You have to have the system crash/lose power at the same time as your log SSD fails (within a 1-5 second window) in order to have any resulting data loss, and...
Please explain why you are doing this. You should be using separate drives for cache and log.
It used to be very important to mirror your slog, because if you lost it it could be a real problem, these days it is a lot more forgiving. There are still some corner cases where if you are...
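For reference, adding and removing a log device looks like this; the pool and device names are hypothetical, and since pool version 19 the removal is a supported, non-destructive operation:

```shell
# Add a single SSD as a separate log (SLOG) device
zpool add tank log /dev/sdc

# Or, for the truly paranoid case discussed above,
# add a mirrored pair of log devices instead
zpool add tank log mirror /dev/sdc /dev/sdd

# Remove a log device again (safe since pool version 19)
zpool remove tank /dev/sdc
```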
Hey all,
I am trying to run ntopng in an Ubuntu 14.04 LTS container on my Proxmox host.
I set up my switch (Procurve 1810G-24) to mirror both RX and TX of the port connected to my router, to a separate port on the switch.
Then I connected a designated NIC (eth3) on my Proxmox box to that...
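For a mirror-port setup like this, the capture NIC generally needs to be up (with no IP address required) and in promiscuous mode so it accepts frames not addressed to it; a sketch, assuming the interface is eth3 as described:

```shell
# Bring the capture interface up without assigning an address
ip link set eth3 up

# Enable promiscuous mode so mirrored traffic reaches ntopng
ip link set eth3 promisc on
```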
Hey all,
I am having an odd problem I'm hoping someone might help me solve.
I keep randomly getting "Error: No space left on device" error messages in console.
Examples:
# ifup vmbr3
Waiting for vmbr3 to get ready (MAXWAIT is 2 seconds).
Error: No space left on device
Error: No space left on...
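When chasing "No space left on device" errors it is worth checking both blocks and inodes, since exhausting either one produces the same message:

```shell
# Free blocks per filesystem
df -h

# Free inodes per filesystem -- a full inode table also
# reports "No space left on device"
df -ih
```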
Hey all,
Personally I always like to take precautions whenever I upgrade something to make sure that package updates don't break anything for me.
I've been wondering, on a ZFS based install, is snapshotting rpool/ROOT/pve-1 a good way of accomplishing this?
In theory, I could run the...
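The snapshot-before-upgrade idea sketched above might look like this, with a hypothetical snapshot name `pre-upgrade`; note that rolling back a mounted root dataset is typically done from a rescue environment rather than the running system:

```shell
# Snapshot the root dataset before upgrading packages
zfs snapshot rpool/ROOT/pve-1@pre-upgrade

# List snapshots to confirm it exists
zfs list -t snapshot

# If the upgrade breaks something, roll back
# (this discards ALL changes made since the snapshot)
zfs rollback rpool/ROOT/pve-1@pre-upgrade

# Once satisfied the upgrade is fine, clean up
zfs destroy rpool/ROOT/pve-1@pre-upgrade
```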
Looks like my reluctance to reboot the server and restart everything paid off. Now the 4.4.8-1-pve kernel has made it to the enterprise repository :p
Going to have to bite the bullet and reboot now. (I hate rebooting the server :p )
That could be, yes.
Current enterprise repo kernel is 4.4.6-1-pve... hmm.. Trying to decide if I want to try the 4.4.8 kernel.
The changelog on kernel.org DOES mention a patch related to IOMMU errors in the 4.4.8 kernel, but the significance of it goes above my level of understanding.
Is...
Thank you for that suggestion.
I don't have "allocate memory dynamically within this range" enabled in the UI for the VM (it's set to static), but this still shows up in lspci inside the guest:
06:03.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
Is there something else I...
Much obliged for this information, unfortunately still a no go for me.
I duplicated your configs as much as possible. The only differences are:
1.) I'm on the Enterprise repo, not the pvetest repo
2.) I've used the vfio.conf method instead of the grub commands method.
q35 + pcie causes my...
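For reference, the vfio.conf method mentioned here usually means binding the GPU to vfio-pci with a modprobe options file; the PCI vendor:device IDs below are placeholders, which you would replace with the values from `lspci -nn` for your card:

```shell
# /etc/modprobe.d/vfio.conf
# Bind the GPU and its HDMI audio function to vfio-pci at boot
# (10de:xxxx,10de:yyyy are placeholders -- use your own IDs)
options vfio-pci ids=10de:xxxx,10de:yyyy
```

After editing the file, the initramfs generally has to be regenerated (update-initramfs -u on Debian-based systems) and the host rebooted for the binding to take effect.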
So, is it safe to assume that passthrough issues on non-Skylake systems have been resolved with the current 4.4.6-1-pve PVE kernel?
The reason I ask is, I have been fighting Nvidia GPU passthrough on my LGA1366 Xeon system, both in OVMF and Seabios mode and I am about to tear my hair out, as I...