Yeah, I figured as much. I am already keeping an eye out for a better deal to come up in Hetzner's server auctions. To be quite honest, I did not even think about it when renting this server; I was fixated on the maximum space available. I should have known better.
There may be a...
That is 160 GB/day over 18 months. Assuming you have 128/256 GB SSDs, you are only about halfway through the TBW these are rated for (if they are bigger models, you are only looking at roughly 1/4 of the rated TBW).
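Quick back-of-the-envelope in Python, assuming roughly 30.4 days per month and the commonly quoted 150 TBW / 300 TBW endurance ratings for the smaller and larger 850 Pro models (those ratings are assumptions on my side, check the datasheet for your exact model):

# Rough TBW check for 160 GB/day of writes over 18 months.
daily_writes_gb = 160
days = 18 * 30.4
total_tb = daily_writes_gb * days / 1000          # ~87.5 TB written

rated_tbw_small = 150   # assumed rating, 128/256 GB 850 Pro
rated_tbw_large = 300   # assumed rating, 512 GB / 1 TB 850 Pro

print(f"total written: ~{total_tb:.0f} TB")
print(f"endurance used (128/256 GB): {total_tb / rated_tbw_small:.0%}")    # ~58%
print(f"endurance used (512 GB / 1 TB): {total_tb / rated_tbw_large:.0%}") # ~29%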
To be clear: this does not sound like the problem I described above, as in that case the SSDs were past...
You said you have been running 6x Samsung 850 Pro in RAID 6 for about 18 months?
We had a cluster server at work that was exclusively running Samsung 850 Pros for a Ceph cluster (the others used different brands) and showed the same problem, until we noticed that some of them had TBW values beyond the...
So, I made some progress ...
I stumbled upon this explanation regarding the caching modes:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Caching.html
IMHO it is...
TL;DR: Questions at the bottom
Side note: I typically use Ceph for all my professional and personal Proxmox needs (other than the occasional ZFS RAID 1 for the OS disks).
I have a small personal project going, and it's not going as expected at all.
Some Specs:
32 GB RAM
2x 3 TB HDD
Proxmox...
We are talking 57 GiB/day, which comes down to
0.67 Mebibyte/s
5.4 Mebibit/s
Not sure what exactly is generating that amount of data on your SSDs, but it should definitely stick out when you track it down via iotop, iostat and the like.
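If it helps, here is a minimal sketch (Linux host and Python 3 assumed) that samples /proc/diskstats twice and prints the per-device write rate, which is roughly what iostat would show you:

# Print MiB/s written per block device over a short sampling interval.
# /proc/diskstats reports sectors written in 512-byte units.
import time

def sectors_written():
    written = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            written[fields[2]] = int(fields[9])   # device name, sectors written
    return written

INTERVAL = 10   # seconds between the two samples
before = sectors_written()
time.sleep(INTERVAL)
after = sectors_written()

for dev, count in sorted(after.items()):
    mib_per_s = (count - before.get(dev, 0)) * 512 / INTERVAL / 2**20
    if mib_per_s > 0:
        print(f"{dev}: {mib_per_s:.2f} MiB/s written")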
We only use SSD/HDD with copies on different media for a large-capacity single-node cluster (very specific use case) and for datacenter (campus) failure domains where the same node has SSDs, NVMe drives and HDDs, whereby we can lose 2 out of 5 datacenters. We have not run into this issue yet. We do not use it...
Yes and no.
No, since I have never been able to pass the AMD GPU through to the Windows VM.
Yes, in the sense that I found a workaround that works for me.
Basically, I created a Windows VM and passed a whole SSD through to it. On that SSD I installed the boot loader and the OS.
I can run...
Q: Have you checked how much of your 8-12 TB of data is cold data and how much is hot data? It might make a big difference in terms of cache sizing or the choice of RAID level on the NAS (RAIDZ2 / RAID 10).
Am I reading this correctly? You are doing <=600 writes/s and no reads?
That would mean...
I'm personally partial to FreeNAS, mainly because there is a commercial company behind it that pays a larger number of developers, the community is larger (although it has a fair number of anti-social members), and it seems to have a healthier commit rate.
What's the reasoning behind going all-SSD? Is...
Sorry, I do not. As I said, we do not use LXC at work, and we use Gluster only for experimental lab stuff with KVM guests (different from your use case).
Q: What connectivity do your Proxmox nodes have? 1G, 10G, InfiniBand?
The reason I keep asking is as follows:
Whenever you use a SAN, Ceph or...
What's your node-to-node connectivity like? 1G? 10G? Multiple links?
A multi-datacenter Ceph setup is probably too complex anyway, and you already ruled that out:
So only datacenter-internal real-time sync and failover capabilities?
You could still use Ceph for this, but honestly, it is too much...
So I am assuming you will have multiple "pods" of 3x Proxmox nodes in multiple datacenters.
Q1: Are these all in the same Proxmox cluster? As in, 3 nodes in datacenter A and 3 nodes in datacenter B?
Q2: How much IO do you actually need? Do you have a ballpark figure?
Q3: What type of local...
At work we run a cluster with nodes in three datacenters.
The network has very low latency though, since it is our own fibre and network gear all the way, and the datacenters are less than 10 km apart from each other.
Back in 2013, I ran a three-node cluster for a project using OVH servers. 1 in...
I'm not sure I explained the "custom CRUSH hook" part well.
It's basically a script that gets triggered every time an OSD is started on a Ceph node. It makes sure that said OSD is added to the CRUSH map according to characteristics of the disk, perhaps even the hostname or other information...
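To make that a bit more concrete, here is a rough sketch of what such a hook could look like (assuming the usual crush-location-hook calling convention, a BlueStore-style /var/lib/ceph/osd layout, and "ssd"/"hdd" roots plus a hostname suffix as the splitting scheme; adapt all of that to your own CRUSH map):

#!/usr/bin/env python3
# Sketch of a CRUSH location hook: Ceph calls it with --cluster/--id/--type
# and uses the key=value pairs it prints as the OSD's CRUSH location.
import argparse, os, socket

parser = argparse.ArgumentParser()
parser.add_argument("--cluster", default="ceph")
parser.add_argument("--id")     # OSD id passed in by Ceph
parser.add_argument("--type")   # daemon type, e.g. "osd"
args = parser.parse_args()

# Resolve the OSD's backing block device.
block = os.path.realpath(f"/var/lib/ceph/osd/{args.cluster}-{args.id}/block")
part = os.path.basename(block)

# Walk up from partition to whole disk via sysfs, then check if it spins.
parent = os.path.basename(os.path.dirname(os.path.realpath(f"/sys/class/block/{part}")))
disk = part if parent == "block" else parent
with open(f"/sys/block/{disk}/queue/rotational") as f:
    media = "hdd" if f.read().strip() == "1" else "ssd"

# Put the OSD under a per-media root and a per-media "virtual" host bucket.
host = socket.gethostname().split(".")[0]
print(f"root={media} host={host}-{media}")

If I remember correctly, it gets wired up via the osd crush location hook option in ceph.conf, so every OSD on the node runs it at startup.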
http://docs.ceph.com/docs/master/rados/operations/cache-tiering/
That should explain it for you in detail.
There are basically 4 modes:
writeback
readonly
readforward
readproxy
AFAIK, only in "writeback" mode do you need to keep in mind that the pool used as cache tier also replicates data...
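For completeness, a small sketch of the wiring (pool names "cold-hdd" and "hot-ssd" are placeholders, and both pools have to exist already; these are the commands from the cache-tiering doc linked above, just wrapped in Python so it can be scripted):

# Attach an SSD pool as a writeback cache tier in front of an HDD pool.
import subprocess

COLD, HOT = "cold-hdd", "hot-ssd"   # placeholder pool names

def ceph(*args):
    subprocess.run(["ceph", *args], check=True)

ceph("osd", "tier", "add", COLD, HOT)                     # make HOT a tier of COLD
ceph("osd", "tier", "cache-mode", HOT, "writeback")       # or readonly/readforward/readproxy
ceph("osd", "tier", "set-overlay", COLD, HOT)             # route client IO through HOT
ceph("osd", "pool", "set", HOT, "hit_set_type", "bloom")  # newer releases expect a hit set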
I'd use cache tiering (because that is what I am familiar with and use widely at work, although on a much larger scale) with an appropriate cache mode for your use case (see the Ceph documentation). Using a custom CRUSH hook to split HDD OSDs from SSD OSDs is highly recommended, since it makes setting this...