Search results

  1. Q

    [SOLVED] Working with ZFS - Caching Strategies ?

    Yeah, I figured as much. I am already looking for a better deal to come up at Hetzner's server auctions. To be quite honest, I did not even think about it when renting this server; I was fixated on the maximum space available. I should have known better. There may be a...
  2. Q

    SSD Setup with huge write loads

    That is 160 GB/day over 18 months. Assuming you have 128/256 GB SSDs, you are only halfway to the TBW these are rated for (if they are bigger models, you are only looking at a quarter of the rated TBW), to be clear. This does not sound like the problem I described above, as in that case the SSDs were past...
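    A quick back-of-the-envelope version of that comparison (a sketch only: the 160 GB/day figure is from the thread, while the ~150/300 TBW endurance ratings are the commonly cited Samsung 850 Pro figures and should be checked against the datasheet for the exact model):

    ```python
    # Rough cumulative-writes vs. rated-TBW check (assumed figures, see above).
    GB_PER_DAY = 160
    DAYS = 18 * 30                          # ~18 months

    written_tb = GB_PER_DAY * DAYS / 1000   # total TB written so far

    for label, rated_tbw in [("128/256 GB", 150), ("512 GB / 1 TB", 300)]:
        share = written_tb / rated_tbw
        print(f"{label}: ~{written_tb:.0f} TB written = {share:.0%} of {rated_tbw} TBW")
    ```

    That comes out to roughly 86 TB written, i.e. about half of a 150 TBW rating and about a quarter of a 300 TBW rating, which is where the rough "halfway" and "one quarter" figures above come from.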
  3. Q

    SSD Setup with huge write loads

    You said you have been running 6x Samsung 850 Pro in RAID 6 for about 18 months? We had a cluster server at work that was exclusively running Samsung 850 Pros for a Ceph cluster (others were different brands) that showed the same problem, until we noticed that some of them had TBW values beyond the...
  4. Q

    [SOLVED] Working with ZFS - Caching Strategies ?

    So, I made some progress... I stumbled upon this explanation regarding the caching modes: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-BlockIO-Caching.html IMHO it is...
  5. Q

    [SOLVED] Working with ZFS - Caching Strategies ?

    TL;DR: Questions at the bottom. Sidenote: I typically use Ceph for all my professional and personal Proxmox needs (other than the occasional ZFS RAID1 for the OS disks). I have a small personal project going, and it's not going as expected at all. Some specs: 32 GB RAM, 2x 3 TB HDD, Proxmox...
  6. Q

    IPv6 enabled Proxmox repositories ?

    That is too bad. I saw a request from about a year ago and thought they might have added v6 to the mix already :) Let's hope third time's a charm.
  7. Q

    IPv6 enabled Proxmox repositories ?

    Quick question: are there any IPv6-enabled Proxmox repositories available?
  8. Q

    ProxMox 4.x is killing my SSDs

    We are talking 57 GiB/day, which comes down to 0.67 MiB/s (5.4 Mibit/s). Not sure what exactly is generating these amounts of data on your SSDs, but it should for sure stick out when you track it down via iotop, iostat and the like.
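    For reference, a minimal sketch of the unit conversion behind those numbers (assuming 1 GiB = 1024 MiB and an 86,400-second day):

    ```python
    # Convert a daily write volume into an average sustained write rate.
    gib_per_day = 57
    mib_per_s = gib_per_day * 1024 / 86_400   # ~0.67 MiB/s
    mibit_per_s = mib_per_s * 8               # ~5.4 Mibit/s
    print(f"{mib_per_s:.2f} MiB/s ({mibit_per_s:.1f} Mibit/s)")
    ```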
  9. Q

    What if proxmox cluster host fails?

    Check out this link: https://pve.proxmox.com/wiki/Proxmox_Cluster_File_System_(pmxcfs)
  10. Q

    Flashcache vs Cache Tiering in Ceph

    We only use SSD/HDD with copies on different media for a large-capacity single-node cluster (very specific use case) and for datacenter (campus) failure domains where the same node has SSDs, NVMes and HDDs, whereby we can lose 2 out of 5 datacenters. We have not run into this issue yet. We do not use it...
  11. Q

    AMD gpu passthrough issue - windows 10 - steps included

    Yes and no. No, since I have never been able to pass the AMD GPU through to the Windows VM. Yes, in the sense that I found a workaround that works for me. Basically, I created a Windows VM and passed a whole SSD through to it. On that SSD I installed the boot loader and the OS; I can run...
  12. Q

    Suggestions for SAN Config

    Q: Have you checked how much of your 8-12 TB of data is cold data and how much is hot data? It might make a big difference in terms of cache sizing or the choice of RAID level on the NAS (RAIDZ2 / RAID10). Am I reading this correctly? You are doing <=600 writes/s and no reads? That would mean...
  13. Q

    Suggestions for SAN Config

    I'm personally partial to FreeNAS, mainly due to there being a commercial company behind it that pays a larger number of developers, the larger community (although they have a large number of anti-social members), and what seems to be a healthier commit rate. What's the reasoning behind all-SSD? Is...
  14. Q

    Suggestions for SAN Config

    Sorry, I do not. As I said, we do not use LXC at work, and Gluster only for experimental lab stuff with KVM guests (different from your use case). Q: What connectivity do your Proxmox nodes have? 1G, 10G, InfiniBand? The reason I keep asking is as follows: whenever you use a SAN, Ceph or...
  15. Q

    Suggestions for SAN Config

    What's your node-to-node connectivity like? 1G? 10G? Multiple links? A multi-datacenter Ceph setup is probably too complex, and you already ruled that out. So only datacenter-internal real-time sync and failover abilities? You could still use Ceph for this, but honestly, it is too much...
  16. Q

    Suggestions for SAN Config

    So I am assuming you will have multiple "pods" of 3x Proxmox nodes in multiple datacenters. Q1: Are these all in the same Proxmox cluster? As in 3 nodes in datacenter A and 3 nodes in datacenter B? Q2: How much IO do you actually need? Do you have a ballpark figure? Q3: What type of local...
  17. Q

    Cluster over high latency WAN?

    At work we do run a cluster with nodes in three datacenters. The network has very low latency though, since it is our own fibre and network gear all the way. The datacenters are <10 km apart, too. Back in 2013, I ran a three-node cluster for a project using OVH servers: 1 in...
  18. Q

    Flashcache vs Cache Tiering in Ceph

    I'm not sure I explained the "custom CRUSH hook" part well. It's basically a script that gets triggered every time an OSD gets started on a Ceph node. It makes sure that said OSD is added to the CRUSH map according to the characteristics of the disk, perhaps even the hostname or other information...
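    For illustration, a hypothetical hook along those lines (a sketch only: Ceph runs the script configured as its "osd crush location hook" when an OSD starts and uses the printed key=value pairs as that OSD's CRUSH location; the ssd/hdd root names, the device argument and the host naming below are assumptions for this example, not part of any standard setup):

    ```python
    #!/usr/bin/env python3
    # Hypothetical CRUSH location hook (sketch): decide where a starting OSD
    # belongs in the CRUSH map based on whether its backing device is rotational.
    import socket
    import sys

    def is_rotational(device: str) -> bool:
        # /sys/block/<dev>/queue/rotational is "1" for spinning disks, "0" for SSDs.
        with open(f"/sys/block/{device}/queue/rotational") as f:
            return f.read().strip() == "1"

    def main() -> None:
        # Assumption for this sketch: the backing block device (e.g. "sdb") is
        # passed as the first argument; a real hook would derive it from the OSD id.
        device = sys.argv[1] if len(sys.argv) > 1 else "sda"
        root = "hdd" if is_rotational(device) else "ssd"
        host = socket.gethostname().split(".")[0]
        # Ceph takes this output as the OSD's CRUSH location, so SSD-OSDs and
        # HDD-OSDs end up under separate roots (and per-media host buckets).
        print(f"root={root} host={host}-{root}")

    if __name__ == "__main__":
        main()
    ```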
  19. Q

    Flashcache vs Cache Tiering in Ceph

    http://docs.ceph.com/docs/master/rados/operations/cache-tiering/ should cover that for you in detail. There are basically 4 modes: writeback, read-only, read-forward and read-proxy. Afaik, only in "writeback" mode do you need to keep in mind that the pool used as cache tier also replicates data...
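    As a rough illustration of how a cache tier gets attached and a mode gets picked, here is a sketch wrapping the commands from the Ceph cache-tiering documentation (the pool names "cold-storage" and "hot-cache" are made up for this example, and whether you need the overlay step depends on the chosen mode):

    ```python
    import subprocess

    # Sketch: attach a cache pool to a backing pool and select a cache mode,
    # following the commands in the Ceph cache-tiering documentation.
    backing_pool = "cold-storage"   # assumed pool names for this example
    cache_pool = "hot-cache"
    cache_mode = "writeback"        # or "readonly", "readforward", "readproxy"

    for cmd in (
        ["ceph", "osd", "tier", "add", backing_pool, cache_pool],
        ["ceph", "osd", "tier", "cache-mode", cache_pool, cache_mode],
        # set-overlay routes client IO for the backing pool through the cache
        # tier (used for writeback setups in the documentation).
        ["ceph", "osd", "tier", "set-overlay", backing_pool, cache_pool],
    ):
        subprocess.run(cmd, check=True)
    ```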
  20. Q

    Flashcache vs Cache Tiering in Ceph

    I'd use cache tiering (because that is what I am familiar with and use widely at work, although on a much larger scale) with an appropriate cache mode for your use case (see the Ceph documentation). Use of a custom CRUSH hook to split HDD OSDs from SSD OSDs is highly recommended, since it makes setting this...
