When you migrate from pve2 to 6, you will need to specify a target storage on the destination; but why do you limit node access when all 5 nodes can see the storage?
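For illustration, the target storage can be given right on the CLI when migrating; a sketch, where the VMID, node name, and storage name are examples:

```
# migrate VM 100 to node pve6, mapping its disks onto the storage "local-zfs" there
qm migrate 100 pve6 --targetstorage local-zfs --online
```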
Hi @anthony1956 , welcome to the forum.
While I sympathize with your situation and your request (we have all been there), what you are proposing is essentially a "knee-jerk" reaction and has very little chance of being implemented. It may make...
If you mark a storage pool as shared, PVE will assume it's available on all nodes by default (there is an option to limit it to specific nodes, but you need to set those).
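The node restriction is just a property on the storage definition; something like this, where the storage name and node list are examples:

```
# mark the pool shared, but only offer it on the nodes that can actually reach it
pvesm set tank-nfs --shared 1 --nodes pve1,pve2,pve3
```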
If a "shared" pool doesnt exist on the destination, the migration will...
You're clearly upset, but I don't think your understanding of the situation warrants that conclusion.
The PVE OFFICIAL DOCUMENTATION is in one place: https://pve.proxmox.com/pve-docs/
but that documentation doesn't cover everything applicable to...
Transfer is limited to the slowest link in the chain: your NIC chip, driver, destination NIC/driver, source disk, destination disk, etc. Sounds like you have some tracing to do.
Change your HBA to virtio-scsi-single, with iothread checked for the disks.
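From the CLI that looks something like this; the VMID, storage, and volume name are examples:

```
# switch the SCSI controller type, then enable iothread on the disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
```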
The most interesting feature in Tentacle with direct relevance to PVE is the instant RBD live migration. @t.lamprecht have you guys discussed implementation within PVE, and if so, can you share your thoughts?
The issue is not the transport, it's the bridge. I imagine Thunderbolt is less susceptible (if for no other reason than that the bridge chips are more expensive), but I've never used Thunderbolt for server use, so I can't say.
So use larger drives. My advice...
I tried to extract your workload from your description:
2 docker VMs
1 Windows
1 "hungry"
Don't do that. Disk response times are unpredictable behind a USB bridge and have a tendency to time out, the consequence of which is locking up your...
yeah that was pretty much a given with your problem description :)
They probably aren't; you're just not aware of the problem because their active bond interfaces all connect to the same switch.
First order of business: get your switches to talk...
Your network is, for lack of a better word, broken.
Are you using bonds for your corosync interface(s)? Do the individual interfaces have a path to ALL members of the bonds on the OTHER nodes?
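As an aside, corosync (knet) can handle multiple links itself, which is usually cleaner than bonding for the cluster network. A sketch of the relevant corosync.conf piece for one node (addresses and names are examples; the other nodes get matching entries):

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    ring0_addr: 10.10.10.1   # dedicated corosync network
    ring1_addr: 10.10.20.1   # independent fallback path
  }
}
```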
This isn't valid for ZFS. On a read that fails its checksum, ZFS will simply return a good copy from redundancy and rewrite the bad block on the affected vdev.
EXCEPT this didn't actually work, which is why you don't see these anymore.
Abstraction doesn't change the underlying device...
There are two ways to accomplish what you're after (three if you include ZFS replication, but that doesn't use the external storage device):
1. ZFS over iSCSI, as @bbgeek17 explained.
2. qcow2 over NFS: install nfsd, map the dataset into exports...
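For option 2, a rough sketch of what that looks like on the storage box and on PVE; paths, network, and names are all examples:

```
# on the NAS: export the dataset (/etc/exports)
/tank/pve  10.0.0.0/24(rw,no_root_squash,sync)

# on PVE: add it as NFS storage that can hold qcow2 images
pvesm add nfs nas-nfs --server 10.0.0.50 --export /tank/pve --content images
```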
This isn't actually so. You can think of the monitor quorum rule as 3:1. Fun fact: a cluster with 2 monitors is more prone to PG errors (monitor disagreement) than one with a single monitor. Feel free to try it yourself: shut down all but one of your monitors and see...
That's true for any virtualized environment. A nested NAS will always incur a penalty; as you mentioned, CoW on CoW kills performance and destroys space efficiency. To avoid this, don't run a nested NAS on your hypervisor: install your NAS on the...
That's not what it means. It means the PVE devs are saying "we haven't tested this as completely as other options, and we haven't included controls for all of its functionality." BTRFS is fully supported; it's just that you'd need to go to the CLI for some/much...
Because I would not expect that VLAN to be accessible to virtual machines. Adjust that as appropriate.
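If you do want guests on that VLAN, the usual way is a VLAN-aware bridge plus a tag on the guest NIC. A sketch from /etc/network/interfaces, where the port name and VLAN ID are examples:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

Then tag the guest's NIC, e.g. `qm set 100 --net0 virtio,bridge=vmbr0,tag=30`.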
Far be it from me to dissuade you from pursuing NIC-level fault tolerance. Suffice it to say I don't; I care about path redundancy: a switch...
I'm confused. Were you not asking for help troubleshooting this?
SQL performance is a function of two things: query efficiency, and storage latency/IOPS. Since we know your queries are the same, what remains is the storage.
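If you want to put numbers on the storage side, fio reports latency and IOPS directly; a sketch, run against a scratch file (path and parameters are examples):

```
# 4k random read at queue depth 1 -- approximates the latency a database feels
fio --name=dblat --filename=/tank/test/fio.dat --size=1G \
    --rw=randread --bs=4k --iodepth=1 --direct=1 \
    --ioengine=libaio --runtime=30 --time_based
```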
How did you have...