The most interesting feature in Tentacle with direct relevance to PVE is the instant RBD live migration. @t.lamprecht, have you guys discussed implementation within PVE, and if so, can you share your thoughts?
The issue is not the transport, it's the bridge. I imagine Thunderbolt is less susceptible (if for no other reason than that the bridge chips are more expensive), but I've never used Thunderbolt for server use, so I can't say.
So use larger drives. My advice...
I tried to extract your workload from your description:
2 Docker VMs
1 Windows
1 "hungry"
Don't do that. Disk response times are unpredictable behind a USB bridge and have a tendency to time out, the consequence of which is to lock up your...
yeah that was pretty much a given with your problem description :)
They probably aren't; you're just not aware of the problem because their active bond interfaces all connect to the same switch.
First order of business: get your switches to talk...
Your network is, for lack of a better word, broken.
Are you using bonds for your corosync interface(s)? Do the individual interfaces have a path to ALL members of the bonds on the OTHER nodes?
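If the bonds turn out to be the culprit, one common alternative (purely a sketch, with made-up hostnames and addresses) is to skip bonding for corosync entirely and give it two independent links; in /etc/pve/corosync.conf each node then gets one address per physical network:

    nodelist {
      node {
        name: pve1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1
        ring1_addr: 10.10.20.1
      }
    }

ring0 lives on one switch, ring1 on the other, and corosync (kronosnet) handles failover between the links itself, so each path stays independent of any single switch. Remember to bump config_version when you edit that file.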
This isn't valid for ZFS. ZFS will simply repair any read that fails its checksum from a good copy and rewrite it on the affected vdev.
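You can watch that self-healing happen; pool name here is just an example:

    zpool status -v tank   # the CKSUM column counts reads ZFS already repaired from redundancy
    zpool scrub tank       # walks the whole pool and fixes anything that still has a good copy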
EXCEPT this didn't actually work, which is why you don't see these anymore.
Abstraction doesn't change the underlying device...
There are two ways to accomplish what you're after (three if you include ZFS replication, but that doesn't use the external storage device):
1. ZFS over iSCSI, as @bbgeek17 explained.
2. qcow2 over NFS: install nfsd, map the dataset into exports... (rough sketch of the NFS route below)
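Sketch of option 2, with placeholder dataset, paths and addresses; adjust to your environment:

    # on the storage box: /etc/exports
    /tank/vmstore  192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
    # then reload exports:
    exportfs -ra

    # on PVE, add it via the GUI or append to /etc/pve/storage.cfg:
    nfs: vmstore-nfs
        path /mnt/pve/vmstore-nfs
        server 192.168.1.50
        export /tank/vmstore
        content images

qcow2 disks created on that storage give you thin provisioning and snapshots even though the backing store is plain NFS.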
This isn't actually so. You can think of the monitor quorum rule as 3:1. Fun fact: a cluster with 2 monitors is more prone to PG errors (monitor disagreement) than one with a single monitor. Feel free to try it yourself: shut down all but one of your monitors and see...
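The arithmetic, for illustration: quorum needs floor(n/2)+1 monitors, so

    monitors   quorum needed   monitor failures tolerated
    1          1               0
    2          2               0   (either monitor down = no quorum)
    3          2               1

which is why 2 monitors buy you nothing over 1, and 3 is the first count that actually helps.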
That's true for any virtualized environment. A nested NAS will always incur a penalty; as you mentioned, CoW on CoW kills performance and destroys space efficiency. To avoid this, don't have a nested NAS on your hypervisor; install your NAS on the...
That's not what it means. It means that the PVE devs say "we haven't tested this as completely as other options, and we haven't included controls for all of its functionality." BTRFS is fully supported, just that you'd need to go to the CLI for some/much...
Because I would not expect that VLAN to be accessible to virtual machines. Adjust that as appropriate.
Far be it from me to dissuade you from pursuing NIC-level fault tolerance. Suffice it to say I don't; I care about path redundancy: a switch...
I'm confused. Were you not asking for help troubleshooting this?
SQL performance is a function of two things: query efficiency, and disk I/O latency and IOPS. Since we know your queries are the same, what remains is the storage.
How did you have...
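If you want to put a number on the storage side, something like this fio run (the path and parameters are just a starting point, not gospel) approximates the small sync writes a database does:

    fio --name=sqltest --filename=/path/on/that/storage/testfile \
        --rw=randwrite --bs=4k --iodepth=1 --numjobs=1 --size=1G \
        --ioengine=libaio --direct=1 --fsync=1 --runtime=60 --time_based

Compare the completion latency percentiles between the old and new storage rather than the headline bandwidth; that's what your queries feel.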
Cite your sources, please. 5 monitors are "suggested" with a high number of OSD nodes.
With a typical CRUSH rule of 3:2, this only makes sense IF you have dedicated monitor nodes (e.g., no OSDs) AND you have environmental issues that take your nodes...
Not at all. My (and everyone else's) participation in this forum is voluntary. Nothing you provide (or not) is necessary as long as you don't expect anything in return.
You have 4 ports. How are you attaching them to 7 different devices? More...
Let's back way up.
1. You have 4 physical interfaces. What are they physically connected to?
2. Describe your VLAN plan, and which physical interfaces you want those VLANs to travel over.
3. Describe what traffic you want to use the VLANs for... (an example of the kind of detail that helps is sketched below)
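Purely as a made-up example of what an answer could look like (interface names, addresses and VLAN IDs are placeholders), in /etc/network/interfaces terms:

    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        # both ports go to switch ports configured as an LACP trunk carrying VLANs 10,20,30

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10 20 30
        # e.g. VLAN 10 = management, 20 = VM traffic, 30 = storage

With that level of detail about your side it's possible to tell you whether the plan holds together.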
Unknown, especially since you posted output from the cluster in a healthy state. Size 4 is generally a bad idea (an even replica count; the last copy offers no utility), but it should not cause you any issues like this, especially with only one node out...
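If you do decide to move off size 4, it's just this (pool name is a placeholder, and it triggers data movement, so do it while the cluster is healthy):

    ceph osd pool get yourpool all        # shows current size/min_size among other things
    ceph osd pool set yourpool size 3
    ceph osd pool set yourpool min_size 2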
On reflection, storage.cfg doesn't tell us anything useful; instead, post the storage configuration: RAID level, disk technology and count, and subrank block size (e.g., if you have a striped mirror using 4k disks, subrank size will be 4k; if you...
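If it's ZFS underneath, the quickest way to get all of that in one paste (assuming the pool is called rpool; substitute yours) is:

    zpool status rpool             # vdev layout: mirrors/raidz and disk count
    zpool list -v rpool            # per-vdev capacity
    zpool get ashift rpool         # 12 = 4k sectors
    lsblk -o NAME,MODEL,ROTA,SIZE  # disk technology (ROTA=1 means spinning rust)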
There is nothing unusual about a Windows VM laying claim to all of its assigned memory; this is normal. As for your performance issues, please post the content of:
vmid.conf
/etc/pve/storage.cfg
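Easiest way to grab both (replace 100 with your actual VMID):

    qm config 100
    cat /etc/pve/storage.cfg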