I don't know if it's any different than the Intel download link, but I did get this NIC to work with PVE 8.2. I used this src: https://github.com/intel/ethernet-linux-ixgbe/releases
and before you ask, it was a customer's install and I no longer have access to it ;)
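From memory it was just the stock out-of-tree build; a rough sketch below, assuming matching PVE kernel headers are installed (the 5.21.5 version number is only an example, use whatever release you downloaded):

apt install pve-headers-$(uname -r) build-essential
tar xf ixgbe-5.21.5.tar.gz && cd ixgbe-5.21.5/src
make install                    # builds and installs the module for the running kernel
rmmod ixgbe; modprobe ixgbe     # reload the driver (or just reboot)
update-initramfs -u             # optional, so the new module is picked up at boot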
QuTS 5.2 uses the SCST target. It should be relatively easy to write the plugin to facilitate ZFS-over-iSCSI. I'll look for documentation for user-provided plugins - maybe I'll write it.
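For context, this is the sort of storage.cfg entry such a plugin would enable. The sketch below uses the existing comstar provider purely as an illustration (storage name, pool, addresses and IQN are all made up); an SCST provider would slot in as a new iscsiprovider value:

zfs: qnap-zfs
        portal 192.0.2.10
        target iqn.2004-04.com.qnap:example:iscsi.target
        pool tank
        iscsiprovider comstar
        blocksize 8k
        content images
        sparse 1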
defaults {
        verbosity 1
        path_grouping_policy multibus
        find_multipaths smart
}
blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        property "ID_ATA"
}
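If it helps, a quick sketch of applying and checking it with the standard multipath-tools commands:

# assuming the above is saved as /etc/multipath.conf
systemctl reload multipathd     # or: multipath -r
multipath -ll                   # list the multipath maps and their paths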
I don't understand how this is related to...
just spitballing here, but have you tried the manual?
https://h10032.www1.hp.com/ctg/Manual/c00814876.pdf
but seriously, the default multipath.conf should work just fine. I have a Corvault (same controller tech) and I didn't have to do any customization.
That is really not a true statement. All you need to do is recreate the datastore and make a new vmid.conf - it doesn't even matter if you knew what the original settings were; all the important stuff is in the guest OS anyway.
the qcow filename would be in the form of "vm-<vmid>-disk-<n>". all you...
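As an illustration only (vmid 100, storage name "mystore", sizes and MAC are made up), a minimal recreated /etc/pve/qemu-server/100.conf pointing at an existing disk can be as little as:

boot: order=scsi0
cores: 2
memory: 4096
name: recovered-vm
net0: virtio=BC:24:11:00:00:01,bridge=vmbr0
ostype: l26
scsi0: mystore:100/vm-100-disk-0.qcow2,size=32G
scsihw: virtio-scsi-pci

after which something like "qm rescan --vmid 100" should pick up any other disks still sitting on the storage.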
the consequences of this aren't really impacted by the TYPE of CPU you use. AMD and Intel are both fine for Ceph.
Many variables besides the number of VMs: how many nodes, OSDs, etc.
I... don't think you and your Dell rep were speaking the same language. that, or he doesn't know anything about Ceph...
It depends on your role. If you are in a position of responsibility, you explain to your client that their server control interface should not be exposed to the internet. Full stop. If you don't know why, you should probably not accept a position of responsibility.
If the client doesn't care-...
what.... failover?
I guess I missed this part of your question. Ceph is multiheaded; as long as you don't have a service with only one daemon, there should be no "failover" to speak of. If you only have one active MDS daemon, there will naturally be a "failover" period to a standby; the amount...
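For what it's worth, a quick sketch of checking and shortening that window with the stock ceph CLI (the fs name "cephfs" is just an example):

ceph fs status cephfs                         # shows the active MDS and any standbys
ceph fs set cephfs allow_standby_replay true  # keep a standby-replay daemon warm for faster takeover
ceph fs set cephfs max_mds 2                  # or run more than one active MDS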
3 nodes is the absolute minimum configuration for Ceph. I don't consider 3 nodes to be either good for production or a good predictor of proper performance. On top of that, 2 OSDs per node is way too little. There is no granularity for I/O, which means all your I/Os vie for the...
This isn't an "issue" as such.
You have more PGs than your system can scrub within the defined timeframe values. There are a number of knobs for you to turn (example commands after the list), including:
- increase the allowed time between deep scrubs (osd_deep_scrub_interval, osd_scrub_interval_randomize_ratio)
- increase your scrub load...
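A sketch of turning those knobs with the ceph CLI (values are illustrative, not recommendations):

ceph config set osd osd_deep_scrub_interval 1209600         # 14 days instead of the 7-day default
ceph config set osd osd_scrub_interval_randomize_ratio 1.0  # spread scrub start times out further
ceph config set osd osd_max_scrubs 2                        # allow more concurrent scrubs per OSD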