Hey @nlubello
It's most likely the controller card that's the issue.
Stick with dedicated HBA controllers like the LSI/Broadcom 9300 8-port SAS HBA.
It's a common issue with HP and Dell controllers that switch between RAID and HBA mode; they have their own caches and other features that interfere with ZFS...
Hi @rakurtz
I would say the controller cache is another level of cache that can't be controlled by ZFS, while the drive cache may behave differently.
A RAID controller cache is specifically designed to be a middle-man cache, whereas the drive cache sits directly on the drive.
ZFS uses RAM for its cache and...
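Side note: the RAM cache ZFS manages itself is the ARC, and you can see how much it is using straight from the kernel stats. A minimal sketch, assuming a Linux host with ZFS installed:

# current and maximum ARC size in bytes, from the ZFS kernel module stats
awk '/^(size|c_max) / {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# or, if the reporting tool is installed, a human-readable summary
arc_summary | head -n 40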
Thanks for confirming.
I did a check with our new cluster and all the drive caches are on by default.
These proprietary cards from Dell and HP seem to be a common thread for these types of issues, from what I've discovered so far.
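For anyone who wants to do the same check, this is roughly how I'd look at the on-drive write cache from the host (the device names below are just examples, and SAS drives behind an HBA may need sdparm instead of hdparm):

# report whether the drive's own write cache is enabled (example devices)
for d in /dev/sd{a..d}; do
    echo "== $d =="
    hdparm -W "$d"
done

# SAS drives often report the same setting via the WCE mode page
sdparm --get=WCE /dev/sda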
Cheers
G
Hi @shaneshort
Just curious to get a little more info on your server, as I have a theory and wanted to cross-check a few configs.
Are you still experiencing this issue?
Are you OK to share the following info:
Brand of server
RAID/HBA card
Model of server
What RAID configuration (ZFS mirror...)
Hey @stefanzman
Did you ever get an answer to your question?
I can only vouch for PMG being able to handle a lot of email at a time; just last week we saw over 6k emails hit our client's PMG, with 5k of those being spam and viruses, and all the remaining mail was quickly and cleanly filtered.
The...
Hey @rafafell
From my understanding, Proxmox replication is based on snapshots, which are taken incrementally at set time intervals.
It's not streaming replication in real time.
If you are looking for failover in close to real time with close to zero data loss, then some form of shared storage...
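For reference, this is roughly how one of those scheduled jobs is set up from the Proxmox CLI; a sketch where the VM ID (100), job number, target node name (pve2) and 15-minute interval are example values:

# replicate VM 100 to node pve2 every 15 minutes (disks must be on local ZFS)
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# check the state and last sync time of the configured replication jobs
pvesr status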
Thanks for this reference :)
So the point of difference, if selected, is journal-based replication as an option, compared to the snapshot-based replication that ZFS uses.
Both are crash-consistent, except that journal-based replication can be accurate up to the second compared to snapshot...
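To make the difference concrete, the mode is chosen per image when mirroring is enabled in Ceph; a rough sketch with an example pool and image name:

# journal-based: every write is journalled and replayed on the remote cluster
rbd mirror image enable mypool/vm-100-disk-0 journal

# snapshot-based: the remote cluster only catches up at each mirror snapshot
rbd mirror image enable mypool/vm-100-disk-0 snapshot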
OK, so it's DR.
Is there an automatic way to trigger a failover, or is this all done manually at this stage?
So would you say it's equivalent to ZFS replication between sites?
Any additional positives/negatives of using RBD mirroring?
Thanks, always appreciate your input.
Speak soon.
""Cheers
G
Hey @Alwin happy NY.
I'm just checking in to see if there has been any more detailed testing of the rbd-mirror feature in Proxmox?
It's something that has caught my eye vs ZFS replication.
Wondering if the replication is any better or worse for DR.
Would it be considered HA or DR?
Cheers
G
Hey @hacman happy NY.
Just curious, you mentioned DRBD; are you using this per VM, or per host to replicate all VMs?
Are you still using Xen, or have you made the switch to Proxmox?
Cheers
G
Thanks mate, that worked.
For whatever reason the enterprise repo was disabled; no idea why, as it's a new install and it's normally enabled by default.
Anyhow, uncommented it,
ran apt update && apt full-upgrade and the problem was resolved :)
Thanks for the heads up, I may not have checked the repos...
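For anyone else who hits this, these were roughly the steps (the file path and the "buster" suite are the standard ones for PVE 6.x; adjust for your release):

# on PVE 6.x the enterprise repo is defined here; make sure the deb line is not commented out
nano /etc/apt/sources.list.d/pve-enterprise.list
# it should contain something like:
#   deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

# then refresh the package lists and upgrade
apt update && apt full-upgrade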
Hey @wolfgang
Would there be any reason why our 6.2-4 won't upgrade to the latest 6.3-2?
Not sure why, but we have one host that upgraded to 6.3-2 and another that won't upgrade past 6.2-4.
Have rebooted and tried apt update && apt full-upgrade, still no luck; we keep getting the below output...
Not sure if you have the capacity or ability to test ZFS over iSCSI.
If you are not running LACP you may be able to.
It will give you native ZFS features on shared storage:
COW, snapshots, replication, etc.
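A rough sketch of what that storage definition looks like in /etc/pve/storage.cfg (the storage ID, portal IP, target IQN, pool name and iscsiprovider below are placeholders; the provider has to match whatever your storage appliance actually runs):

zfs: truenas-zfs
    portal 192.168.10.50
    target iqn.2005-10.org.freenas.ctl:pve
    pool tank/pve
    iscsiprovider istgt
    blocksize 4k
    content images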
Just food for thought :)
Good luck with the upgrade.
""Cheers
G
From my understanding each VM will have its own zvol, so basically it's a volume unto itself.
Yes, for snapshots and rollback of an individual VM.
If my understanding is correct, the benefit of this is granular management of the VM storage; replication can be set at the TrueNas/FreeNas layer...
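As an example of what that granularity looks like on the storage box (the pool and VM names below are hypothetical):

# each VM disk shows up as its own zvol under the pool
zfs list -t volume
#   tank/vm-100-disk-0
#   tank/vm-101-disk-0   (illustrative output)

# snapshot and roll back one VM's disk without touching any of the others
zfs snapshot tank/vm-100-disk-0@pre-upgrade
zfs rollback tank/vm-100-disk-0@pre-upgrade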
How are you connecting to the TrueNas storage array?
iSCSI, NFS, or ZFS over iSCSI?
iSCSI timeouts can normally be adjusted up or down; the default timeout on VMware is about 20 seconds.
Normally with dual controller modules (active/passive or active/active) the failover between...
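On the Linux/open-iscsi side the equivalent knob is the replacement timeout, i.e. how long I/O is queued while waiting for a path to come back; a sketch of where it is usually tuned (the 30-second value and target name are examples):

# default for new sessions, set in /etc/iscsi/iscsid.conf
node.session.timeo.replacement_timeout = 30

# update an already-configured node record (target name is an example)
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:pve -o update \
    -n node.session.timeo.replacement_timeout -v 30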
Sorry, let me clarify.
For internal comms to local VMs there will be no loss of performance, as it's just an internal bridge; nothing leaves the host or even the NIC, so you should get native I/O speed on the host, which will be greater than the NIC bandwidth.
It's just a bridge device...
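An easy way to sanity-check that is an iperf3 run between two VMs sitting on the same bridge; a sketch with example addresses:

# on VM A (say 10.0.0.11): start the server
iperf3 -s

# on VM B: push traffic to VM A across the internal bridge for 30 seconds
iperf3 -c 10.0.0.11 -t 30
# the result is memory/CPU bound on the host, so it can well exceed the physical NIC speed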