Recent content by alexskysilk

  1. Proxmox user base seems rather thin?

    Out of curiosity, what was the vexing question you asked that had no results on the internet?
  2. Write amplification for OS only drives, zfs vs btrfs

    Just to put things in perspective, I have nodes running on consumer-level OS SSDs for OVER 10 YEARS, and that's without local log prevention. As long as you're not commingling payload and OS, even crappy old drives don't get enough writes for it to matter. With current drives (even the cheapest...
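A quick back-of-envelope sketch of why OS-only writes rarely exhaust an SSD; every number here is an illustrative assumption, not a measurement:

```python
# Rough endurance math for an OS-only boot SSD. All inputs are
# assumptions for illustration, not measured values.
GB_PER_DAY = 10          # assumed host writes for an OS-only node (logs, metadata)
TBW_RATING = 150         # assumed endurance rating of a cheap consumer SSD, in TB written
WAF = 3                  # assumed write-amplification factor (zfs/btrfs overhead)

days = TBW_RATING * 1000 / (GB_PER_DAY * WAF)
print(f"~{days / 365:.0f} years to exhaust the rated endurance")
```

Even with a pessimistic 3x amplification factor, the assumed workload takes over a decade to reach the rating, which matches the "over 10 years on consumer drives" experience above.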
  3. Ceph Storage question

    No. If you lose three disks on three separate nodes AT THE SAME TIME, the pool will become read-only and you'll lose all payload that had a placement group with shards on ALL THREE of those OSDs. BUT here's the thing: the odds of that happening are astronomically low, which is the whole point. And...
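For intuition, a purely illustrative estimate of those odds, assuming independent failures, a made-up annual failure rate, and a generous recovery window before Ceph re-replicates:

```python
# Back-of-envelope: chance that three specific OSDs (one per node)
# all die within the same recovery window. Illustrative numbers only.
AFR = 0.03            # assumed annual failure rate per disk (3%)
WINDOW_H = 24         # assumed recovery window before re-replication (hours)

p_window = AFR * WINDOW_H / (365 * 24)   # failure prob. of one disk in the window
p_triple = p_window ** 3                 # three independent disks, same window

print(f"per-disk per-window: {p_window:.2e}")
print(f"three at once:       {p_triple:.2e}")
```

With these assumptions the triple-failure probability lands around 10^-13 per window, i.e. "astronomically low" as stated; real clusters shrink the window further because recovery starts immediately.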
  4. Write amplification for OS only drives, zfs vs btrfs

    Might not be an obvious question, but why? Your OS needs are pretty meagre, and disk performance will have little (if any) impact on your VMs. The only real consumer of IOPS is the logs, and if you are really concerned with write endurance, either log to an outside logging facility or to zram...
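As a sketch of the local-log angle: journald can be kept off the disk entirely. These settings are illustrative; see journald.conf(5) on your release for the authoritative option list:

```
# /etc/systemd/journald.conf -- illustrative fragment
[Journal]
Storage=volatile        # keep the journal in RAM (/run) instead of on disk
RuntimeMaxUse=64M       # cap how much RAM the journal may use
ForwardToSyslog=yes     # optionally hand entries to rsyslog for remote shipping
```

With `Storage=volatile` the journal never touches the boot drive; pairing `ForwardToSyslog` with a remote rsyslog target covers the "outside logging facility" option mentioned above.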
  5. PVE 8.x Cluster Setup of shared LVM/LV with MSA2060

    Generally speaking, you won't get much benefit from more than two host connections to a node (one per controller), but it is conceivable you could consume more than 25Gbit on a single host, in which case you will want to ensure that you have at least two disk volumes and a LUN on each...
  6. !! Voting for feature request for zfs-over-iscsi Storage !!

    That's an interesting take. For someone who derides others for being fanboys, that statement shows an astounding lack of self-awareness. Ceph is a scale-out filesystem with multiple API ingress points. ZFS is a traditional filesystem and is not multi-initiator aware. The fact that you CAN kludge...
  7. I recommend between 2 solutions

    This is a big pet peeve for me. You don't LOSE anything. You write things multiple times so you can lose a disk and continue functioning. It is irrational to think you get to use 100% of the available disk AND handle its failure. All fault-tolerance techniques are a tradeoff: mirrors have the...
  8. NetApp & ProxMox VE

    You could, you know, read the docs. https://docs.netapp.com/us-en/e-series/config-linux/iscsi-setup-multipath-conf-file-concept.html
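For reference, a device stanza of the general shape that doc describes might look like the following. The values here are a sketch of the common E-Series (rdac) settings; the linked NetApp page is authoritative for your array model and OS release:

```
# /etc/multipath.conf -- illustrative sketch, verify against the NetApp doc
devices {
    device {
        vendor "NETAPP"
        product "INF-01-00"
        path_grouping_policy group_by_prio
        prio rdac
        path_checker rdac
        hardware_handler "1 rdac"
        failback immediate
        no_path_retry 30
    }
}
```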
  9. Best practices for host + cluster organization, two DCs

    Based on your original criteria, why bother clustering anything at all? Since it appears all you're really after is a single pane of glass, leave them all as standalone servers and use PDM for the control plane. Clustering makes sense when you intend to use them as resource providers for the...
  10. !! Voting for feature request for zfs-over-iscsi Storage !!

    That depends on how you dice the data. If a "PVE admin" is just the infrastructure admin, storage is provided by the storage team. If it's a home user, I'm not sure that what they recognize is of particular importance. Not from my viewpoint; these are "nice to haves". What makes zfs-over-iscsi a poor...
  11. Proxmox cluster limited to 2 nodes - adding Ceph-only nodes

    Based on that requirement, it seems like option 2 is the only rational solution. Or, BTW, there are other ways to get fault-tolerant storage: you can buy it. A Dell ME50xx or HP MSA26xx would do the trick nicely. Penny wise, pound foolish. (Unless, of course, the original requirement for being...
  12. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    This is only true if the underlying storage is also thick-provisioned. You are conflating hardware snapshots with all snapshots. It's true that PVE does not provide any built-in orchestration tools for hardware snapshotting, but the option to make your own was always available using PVE and storage...
  13. Proxmox cluster limited to 2 nodes - adding Ceph-only nodes

    Any storage solution has a sweet spot, but that spot is completely dependent on its use. Ceph scales well with the number of initiators, which in the hypervisor use case can translate to the number of VMs. If you have 3 VMs, you can scale to 100 nodes and your performance will not improve meaningfully...
  14. New Proxmox Setup for Enterprise - best practices

    Oh, for sure. But I think you're concentrating on the wrong thing. Why? What's wrong with what you have now? "As possible" is, forgive me for my bluntness, a stupid metric. If I were you, I'd start by asking the question "what are the goals, and am I meeting them?" There's no point in trying to...
  15. [TUTORIAL] PVE 7.x Cluster Setup of shared LVM/LV with MSA2040 SAS [partial howto]

    Why is LVM-thin required? Your SAN is likely thin-provisioned anyway, so there's no actual benefit to thin provisioning above that. The problem was that there was no snapshot support, not the LVM-thin part. Your SAN either supports dedup or it doesn't. Not sure what the relevance is for LVM in...