Assuming serving virtual disks to a cluster is the use case, sure. But if that's the use case, it's worth mentioning the limitations, such as the fact that he will end up with a node as a SPOF, which is not ideal. Proper C&C of the backing store requires some sophistication as well; he will need to use...
Asked and answered many times on this forum. Start here: https://forum.proxmox.com/threads/zfs-tests-and-optimization-zil-slog-l2arc-special-device.67147/
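For a quick taste of what that thread covers, the commands look roughly like this (pool and device names are placeholders, adjust for your own hardware):
  zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1       # mirrored SLOG for sync writes
  zpool add tank cache /dev/nvme2n1                         # L2ARC read cache
  zpool add tank special mirror /dev/nvme3n1 /dev/nvme4n1   # special vdev for metadata/small blocks
Read the thread before adding any of these; a SLOG or L2ARC only helps specific workloads.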
That's the key, really. Unless the user deliberately created an untenable situation, reweighting can only push relative weight DOWN for overloaded OSDs. And if the situation is untenable... well, they're not going to fix the problem this way ;) I'm gonna go out on a limb and say that wasn't the...
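For anyone landing here later, the reweighting I'm referring to looks roughly like this (OSD id and weight are placeholders):
  ceph osd test-reweight-by-utilization   # dry run, shows what would change
  ceph osd reweight-by-utilization        # apply the automatic adjustment
  ceph osd reweight 7 0.85                # or nudge a single overloaded OSD down by hand
The weight here is relative (0-1); you can only push an overloaded OSD's share down, not conjure capacity that isn't there.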
1. You specified 32 cores. Do you have 32 cores? Are they AMD? Try to keep it to 8 or fewer to avoid NUMA traversal issues (see the example below the list). Generally speaking, if your VM needs 32 cores you're probably better off installing it directly on the metal.
2. What does your zpool look like (disk type/organization)? My guess...
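If it were my VM, I'd start with something along these lines and retest (VMID 100 is just an example):
  lscpu | grep -i numa                        # check how many NUMA nodes the host actually has
  qm set 100 --cores 8 --sockets 1 --numa 1   # fewer cores, single socket, NUMA awareness on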
I wonder how relevant that actually is versus theoretical; I am an admittedly light user of PBS, but in my case (approx. 40 VMs across 4 hosts) I'm able to reach a high percentage of wire speed to a relatively slow RAID set of spinners in a single RAID6 volume... in any case, thanks for making me aware.
I have to admit this is mostly secondhand; I run into this mostly with existing VMware shops that already have solutions in place (e.g., ASA). But if you have products that are NAS-first and tailored for virtualization, I'd love to hear about it directly from a NetApp FAE/SE ;)
The biggest challenge in adding a NetApp-type storage device to a PVE network is the limitations of the block- and file-level access it provides.
1. With block device access (e.g., iSCSI, NVMe-oF, etc.), the Linux underpinnings of PVE limit you to using it to back LVM, which means losing thin...
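To illustrate the block path: with a VG created by hand on the LUN, the /etc/pve/storage.cfg entries would look something like this (portal, IQN, and storage names are made up):
  iscsi: netapp-lun
          portal 192.0.2.10
          target iqn.1992-08.com.netapp:sn.example
          content none

  lvm: netapp-lvm
          vgname vg_netapp
          content images
          shared 1
The result is thick LVM; shared thin provisioning isn't on the table with this layout, which is exactly the limitation above.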
I had such an issue which ended up being due to PGs on disks that should have been failed out. Ceph by default is not smart enough to fail OSDs if the underlying disk has not failed completely; ZFS would have kicked out a drive with multiple read faults, but Ceph doesn't.
SMART test all your...
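Something like this per OSD disk, with the device and OSD id as placeholders:
  smartctl -t long /dev/sdX   # kick off a long self-test on the backing disk
  smartctl -a /dev/sdX        # review reallocated/pending sectors and read error counts
  ceph osd out 12             # if a disk looks marginal, push its OSD out so the PGs migrate off it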
I'm not sure what to say here. You knowingly operated in a manner opposite to your experience. Either you're excluding yourself from the group "Anybody with any experience in IT," or you were deliberately courting failure?
The MEANS by which one can achieve recoverability are not the issue. Why did you...
I see. So, in your view, the Proxmox PVE product is deficient because they (the developers) did not actively educate you? Since they do a relatively good job of documenting, what would you have liked them to do differently/additionally? Not criticizing, genuinely curious. More to the point, EVERY...
PBS doesn't use snapshots. The price you pay for this is the need for delta caching for I/O subsequent to backup commencement; this can and does cause severe performance issues on a busy system, and you'd need backup fleecing to mitigate it, which is only supported on specific target...
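For reference, a one-off run with fleecing enabled looks roughly like this on PVE 8.2+ (the storage IDs are made up):
  vzdump 100 --storage pbs-main --fleecing enabled=1,storage=local-lvm
The fleecing storage absorbs the copy-before-write traffic during the backup instead of stalling the guest's own writes behind a slow backup target.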