You can run reasonably with a small number of OSDs; 12 would be fine. Assuming they will be on the two "primary" nodes, you'll want to set replication to "2" (size = 2) and the minimum size to "1", which will allow you to keep writing with one node down. Not really ideal, but workable. You'll...
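For reference, this is roughly what that looks like from the Ceph CLI (the pool name "rbd" below is just an assumed example; substitute your own):

    # set 2-way replication and allow I/O with a single copy remaining
    ceph osd pool set rbd size 2
    ceph osd pool set rbd min_size 1
    # verify
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size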
You are correct that GlusterFS is probably not a stable solution today for VM storage. Your "problems", though undescribed, are not surprising.
With Ceph, however, you have a bigger problem to consider than the minimum number of OSDs. Running Ceph on 2 hosts will likely not give you the...
Since you've already popped the heatsink and can see the case markings, you can always try a warranty check. The problem with that is that scammers have learned how to copy legitimate markings, so it might not be conclusive even if it says A-OK...
Your first post says:
but the case of the processor in the picture is marked E5-2630 V4. Your cpuinfo also indicates 10 cores - but an E5-2698 v4 should have 20. There is no good reason why your BIOS and Debian should not properly recognize this CPU if it were retail.
My money says you've got...
You sure it's a retail CPU and not an engineering sample? IIRC, the model name string is pulled off the CPU itself, and "Genuine Intel(R) CPU 0000 @ 2.20GHz" generally indicates an ES.
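Quick way to check from the shell - nothing Proxmox-specific, just what the kernel reads off the CPU:

    # model name string as reported by the CPU itself
    grep -m1 "model name" /proc/cpuinfo
    # physical cores per socket (an E5-2698 v4 should say 20)
    grep -m1 "cpu cores" /proc/cpuinfo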
You can increase the PG count using the Ceph command line tools, but you cannot decrease it.
I understand why this is true (because Ceph is designed for horizontal scaling and assumes storage always grows). I also understand enough about how it really works to know that shrinking the PG count is...
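For anyone finding this later, the increase side is straightforward (pool name "rbd" assumed as an example; as of the Ceph releases current at the time there was no equivalent for going back down):

    # raise pg_num first, then bring pgp_num up to match
    ceph osd pool set rbd pg_num 256
    ceph osd pool set rbd pgp_num 256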
ScaleIO is available on a "free and frictionless" basis, per their EULA, as long as you want it on an unsupported basis. Not really much different than how competing products like Ceph are available.
If you want support, however, I'm not sure Inktank/Redhat pricing for Ceph is much lower than...
I think the biggest issue with ScaleIO is EMC's lack of support for systemd- and apt-based package management - both required for simple integration with Debian. If they would support this, then integration into Proxmox would be trivial - at least no harder than integrating other storage platforms...
No real need to RAID the journal that way. If an OSD goes down, Ceph just recovers and re-creates the placement groups on other OSDs.
Just make sure you have sufficient free space to cover for a failed OSD (or, if you are holding several OSDs' journals on a single SSD, that you have enough space to...
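A quick way to keep an eye on that headroom with the standard Ceph tools:

    # per-OSD utilisation - the fullest OSDs need room to absorb a rebalance
    ceph osd df
    # overall capacity and cluster health
    ceph df
    ceph -s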
No worries here. I didn't reference the bug tracker to criticize you for not looking - I did it just to show you that they were already working on the same idea.
Still anxious for it to get integrated into the repos. I've tested it from Git, but I don't want to push it out until it's published.
Assuming you are going with 3-way replicas (the default), I'd highly recommend adding a 4th - and perhaps 5th - OSD node. Same number of SSDs, just spread over more nodes. This will significantly improve your outage resiliency.
With exactly the same number of OSD nodes as replicas, you...
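If you want to sanity-check how replicas map onto hosts, the usual commands are (pool name "rbd" assumed as an example):

    # replica count on the pool
    ceph osd pool get rbd size
    # confirm the CRUSH failure domain is "host" so copies land on separate nodes
    ceph osd crush rule dump
    # see how the OSDs are spread across nodes
    ceph osd tree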
If you follow the bug tracker, this is exactly what they have done: https://bugzilla.proxmox.com/show_bug.cgi?id=952. It's checked into Git - just waiting for them to get it into the repos in an upcoming update. Hopefully that won't be long now.
I'll resist the burning desire to mock using Wikipedia as a technical reference. One would have hoped that you'd quote the relevant IETF documents - but I know those docs intimately and I know you wouldn't find support for your approach in them.
In any case, assuming the statement from Wiki is...
Yup - running trunks with VLAN 1 tagged is normal and supported by Linux, Server 20xx, and almost all VLAN-capable routers and switches. In many cases the major network vendors (Cisco, Juniper, etc.) actually recommend it. At least two examples of configs using VLAN 1 tagged are included in Proxmox...
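For illustration, a minimal /etc/network/interfaces sketch of a Linux bridge carrying VLAN 1 tagged on a trunk - the eth0/vmbr1 names and the address are assumed examples, not anything from this thread:

    # VLAN 1 sub-interface on the trunk port
    auto eth0.1
    iface eth0.1 inet manual

    # bridge carrying the tagged VLAN 1 traffic
    auto vmbr1
    iface vmbr1 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge_ports eth0.1
        bridge_stp off
        bridge_fd 0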
I implemented the three edits proposed by Lord_Gaav and can confirm that they completely restore the behavior from 4.1-22:
- The ability to set VLAN 1 as a tagged VLAN is restored
- The VM operates correctly with VLAN 1 tagged.
- You can migrate a VM with VLAN 1 tagged to the fixed PVE host...
@fabian - can we get an update on the bug report with this info (I am not able to update it), and perhaps have the fix expedited into the next point release?
I appreciate your opening the bug report, but the approach is quite soft ("if possible..."). Please note that selecting VLAN 1 as a tagged VLAN was fully supported in 4.1-22 and quietly went away with the UI changes and audits in 4.1-33. Getting this put back the way it was is actually quite...
I think your statement should be revised to read "VLAN 1 is normally used for untagged traffic...".
I agree that this is the case by default, but it is not universally true. Linux networking, including bridges and OVS, fully supports the use of tagged traffic on VLAN 1 and setting the PVID...
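As a concrete sketch on a VLAN-aware Linux bridge using iproute2's bridge tool (the vmbr0/tap100i0 names are assumed examples):

    # make the bridge VLAN-aware
    ip link set vmbr0 type bridge vlan_filtering 1
    # carry VLAN 1 as a tagged VLAN on the VM's port
    bridge vlan add dev tap100i0 vid 1
    # or make VLAN 1 the untagged PVID on that port instead:
    # bridge vlan add dev tap100i0 vid 1 pvid untagged
    # inspect current VLAN membership
    bridge vlan show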