alexskysilk's latest activity

  • Generally speaking, your iDRAC interface (physical or shared) will NOT be visible to the host operating system. Treat it like a separate computer. Now that we've got that out of the way, what are you actually trying to do?
  • alexskysilk replied to the thread Physical Server Migration.
    This isn't really a PVE question. Also, I would strongly advise taking this opportunity to migrate your mail server to a currently supportable environment; mail is one of the most obvious places for external attack, after all. Luckily for you...
  • For you, sure. For me, I don't have this hardware or this problem, so it's not useful to me, nor am I able to participate in the troubleshooting. Please be sure to post any solution you uncover. That is, as you pointed out, the point and nature of...
  • Me? I'm not really aware of any ;) Careful with jumping to conclusions. In all seriousness, technology isn't static. A lot of the issues present in earlier/older flash chips and controllers have been mitigated over the years, and wouldn't apply...
  • You already found the answer; the fact that you're moving the goalposts isn't helping you. I'd advise getting rid of your "wants": the newer kernel is probably providing you with no utility at all. Given that the issues with your NIC are known and...
  • Out of curiosity, what was the vexing question you asked that had no results on the internet?
  • U = FOS. Lots of us here are in tech-support-related positions, so forum support gets to seem like "more work" after a while. A) Watch Proxmox-related YouTube videos. B) Read the last 30 days of forum posts, here and on Reddit (free education)...
  • Just to put things in perspective, I have nodes that have been running on consumer-level OS SSDs for OVER 10 YEARS, and that's without local log prevention. As long as you're not commingling payload and OS, even crappy old drives don't get enough writes for it to...
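    A back-of-the-envelope version of that endurance math, as a sketch (the TBW rating and daily write volume below are assumed figures, not measurements):

        # Rough SSD lifetime estimate: rated endurance vs. an assumed write rate.
        tbw_rating_tb = 150          # e.g. a consumer 250 GB SSD rated ~150 TBW (assumption)
        os_writes_gb_per_day = 20    # assumed OS + log writes for one node

        days = (tbw_rating_tb * 1024) / os_writes_gb_per_day
        print(f"~{days / 365:.0f} years to exhaust rated endurance")  # ~21 years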
  • alexskysilk reacted to gurubert's post in the thread Ceph Storage question with Like.
    You will only lose the affected PGs and their objects. This will lead to corrupted files (when the data pool is affected) or a corrupted filesystem (if the metadata pool is affected). Depending on which directory is corrupted, you may not be able...
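    For anyone wanting to see which PGs are actually affected, a minimal sketch that shells out to the ceph CLI (assumed to be available on an admin node):

        # Print PGs sitting in problem states, one state at a time.
        import subprocess

        for state in ("incomplete", "down", "inconsistent"):
            out = subprocess.run(
                ["ceph", "pg", "ls", state],   # ceph pg ls <state>
                capture_output=True, text=True,
            ).stdout.strip()
            if out:
                print(f"--- PGs {state} ---\n{out}")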
  • alexskysilk reacted to SteveITS's post in the thread Ceph Storage question with Like.
    Also of note, you'd have to lose the 2 OSDs at the same time… after one drops, Ceph will immediately copy those PGs to other OSDs (on the same node, if you have only 3 nodes). This also means you need the capacity to handle that.
  • alexskysilk replied to the thread Ceph Storage question.
    No. If you lose three disks on three separate nodes AT THE SAME TIME, the pool will become read-only and you'll lose all payload that had a placement group with shards on ALL THREE of those OSDs. BUT here's the thing: the odds of that happening...
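    To put a rough number on those odds, a sketch under stated assumptions (the per-disk failure rate and recovery window are illustrative, not measured):

        # Odds of three specific OSDs failing inside one recovery window.
        afr = 0.02                 # assumed 2% annual failure rate per disk
        recovery_hours = 8         # assumed window before Ceph has re-replicated

        p_one = afr * recovery_hours / (365 * 24)  # one given disk dies in the window
        print(f"one disk:  ~{p_one:.1e}")          # ~1.8e-05
        print(f"all three: ~{p_one ** 3:.1e}")     # ~6.1e-15, i.e. effectively never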
  • It might not be an obvious question, but why? Your OS needs are pretty meagre, and disk performance will have little (if any) impact on your VMs. The only real consumer of IOPS is the logs, and if you are really concerned with write endurance...
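    If you want to measure what the OS and logs actually write, a minimal sketch that samples /proc/diskstats (Linux only; the device name is an assumption):

        # Sectors written to a device over 60s. Field 10 of /proc/diskstats
        # is sectors written, counted in 512-byte sectors.
        import time

        DEV = "sda"  # assumed OS disk; adjust for your system

        def sectors_written(dev):
            with open("/proc/diskstats") as f:
                for line in f:
                    fields = line.split()
                    if fields[2] == dev:
                        return int(fields[9])
            raise ValueError(f"device {dev} not found")

        before = sectors_written(DEV)
        time.sleep(60)
        delta = sectors_written(DEV) - before
        print(f"{delta * 512 / 1024 / 1024:.1f} MiB written in 60s")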
  • Generally speaking, you won't get much benefit from more than two host connections to a node (one per controller), but it is conceivable you could consume more than 25Gbit on a single host, in which case you will want to ensure that...
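    A quick sanity check on whether a single 25Gbit link is even the bottleneck, as a sketch (the VM count and per-VM throughput are assumed figures):

        # Aggregate storage throughput on one host vs. a single 25 Gbit link.
        link_gbit = 25
        vms = 40                    # assumed VM count on the host
        mb_per_sec_per_vm = 60      # assumed sustained throughput per VM

        aggregate_gbit = vms * mb_per_sec_per_vm * 8 / 1000
        print(f"~{aggregate_gbit:.1f} of {link_gbit} Gbit/s")  # ~19.2: one link suffices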
  • alexskysilk reacted to bbgeek17's post in the thread NetApp & ProxMox VE with Like.
    Hi @daus2936, your question was: The title of the documentation page provided by @alexskysilk is: "Set up the multipath.conf file in E-Series - Linux (iSCSI)". It is very succinct and states: "No changes to /etc/multipath.conf are required." It...
  • That's an interesting take. For someone who derides others for being fanboys, that statement shows an astounding lack of self-awareness. Ceph is a scale-out filesystem with multiple API ingress points; ZFS is a traditional filesystem and not...
  • alexskysilk reacted to UdoB's post in the thread I recommend between 2 solutions with Like.
    Well..., a RAID10 (with two vdevs) will give you double the IOPS of a single RAIDZ vdev. Whether this is relevant depends on your use case, and on the specific test you run to check the behavior.
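    The arithmetic behind that claim, sketched with an assumed per-disk figure (in ZFS, each vdev delivers roughly one disk's worth of random write IOPS):

        # Write IOPS scale with vdev count, not disk count.
        disk_write_iops = 250               # assumed HDD-class random write IOPS

        raid10_iops = 2 * disk_write_iops   # two mirror vdevs
        raidz_iops = 1 * disk_write_iops    # one raidz vdev, same disk count
        print(f"RAID10: ~{raid10_iops} IOPS, RAIDZ: ~{raidz_iops} IOPS")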
  • This is a big pet peeve for me. You don't LOSE anything; you write things multiple times so you can lose a disk and continue functioning. It is irrational to think you get to use 100% of the available disk AND handle its failure. All fault...
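    The same point as capacity arithmetic, a sketch with illustrative sizes and layouts (4 x 4 TB disks assumed):

        # Usable capacity under common redundancy layouts.
        disks, size_tb = 4, 4
        raw = disks * size_tb

        layouts = {
            "mirror/RAID10": raw / 2,                # every block written twice
            "raidz1":        (disks - 1) * size_tb,  # one disk's worth of parity
            "raidz2":        (disks - 2) * size_tb,  # two disks' worth of parity
        }
        for name, usable in layouts.items():
            print(f"{name:14s}: {usable:.0f} of {raw} TB usable")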
  • alexskysilk replied to the thread NetApp & ProxMox VE.
    You could, you know, read the docs. https://docs.netapp.com/us-en/e-series/config-linux/iscsi-setup-multipath-conf-file-concept.html
  • Based on your original criteria, why bother clustering anything at all? Since it appears all you're really after is a single pane of glass, leave them all as standalone servers and use PDM for the control plane. Clustering makes sense when you...
  • That depends on how you dice the data. If a "PVE admin" is just the infrastructure admin, storage is provided by the storage team; if it's a home user, I'm not sure that what they recognize is of particular importance. Not from my viewpoint; these...