Search results

  1. A

    Request: SAS HBA LUN Sharing Between Proxmox Cluster Hosts (Like VMware)

@RodolfoRibeiro If you want more direct assistance, post the output (from both hosts) of lsblk and multipath -ll -v2. If you have system logs available from the point in time when your VM became corrupted, it would be good to look at what happened.
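For reference, the commands being asked for can be run as-is on each host; a sketch (device names and the log time window will vary per system):

```shell
# Block device layout as each host sees it
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

# Multipath topology; -v2 raises verbosity
multipath -ll -v2

# System logs around the time of the corruption (adjust the window to fit)
journalctl --since "-2 days" -p warning
```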
  2. A

    Request: SAS HBA LUN Sharing Between Proxmox Cluster Hosts (Like VMware)

For the generations of hardware where iSCSI and SAS were offered as available SKUs there was no meaningful performance difference; 16G FC simply had more headroom to fill cache. When 25Gb iSCSI products started shipping, THOSE were faster (even vs 16G FC). It could theoretically be that a SAS Gen4 host...
  3. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

K=6,M=2 results in 6 data chunks per 8 total. 6/8 = 0.75. In replication you have 1 data chunk per 3 total. 1/3 = 0.33. It's not exactly the "same" availability, because survivability in a replication group is much higher; you need one living OSD per PG to recover, whereas with an EC 6+2 you need 6...
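The ratios above can be checked with a one-liner (values taken from the post: k=6 data chunks plus m=2 coding chunks, versus size=3 replication):

```shell
# Usable fraction of raw capacity: EC is k/(k+m), replication is 1/size
awk 'BEGIN { printf "EC 6+2: %.2f  replica 3: %.2f\n", 6/(6+2), 1/3 }'
# prints: EC 6+2: 0.75  replica 3: 0.33
```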
  4. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

"Lower" and "higher" are subjective. Ceph achieves HA using raw capacity. Suit yourself; this is not a recommended deployment. You are far better served by just having two SEPARATE VMs, each serving all those functions without any Ceph at all; you'll have better fault tolerance, better...
  5. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

The number of OSDs isn't relevant to a pool as long as it is larger than the minimum required by the CRUSH rule. For example, if you have an EC profile of K=8,M=2, you need a minimum of 10 OSDs DISTRIBUTED ACROSS 10 NODES, so 1 OSD per node. You can have more OSDs and more nodes, and data...
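As a sketch of what such a profile looks like (the profile and pool names here are made up, and the pg count is illustrative only; crush-failure-domain=host is what forces the k+m=10 chunks onto 10 separate nodes):

```shell
# Hypothetical 8+2 profile; requires 10 hosts with at least one OSD each
ceph osd erasure-code-profile set ec-8-2 k=8 m=2 crush-failure-domain=host

# Create a pool using that profile
ceph osd pool create ecpool 128 erasure ec-8-2
```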
  6. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

from understanding failure domains. damn @UdoB beat me to the punch. I won't "professor" you on this. You can either read and understand, or deploy your preconceived notions and learn on your flesh and blood. I would also note that if your expectation is that this 4 node ceph+EC+HDDs will be faster...
  7. A

    yawgpp (Yet Another Windows Guest Performance Post)

ahhhh that makes so much sense... I so rarely set up Windows guests that I didn't even think of that.
  8. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

Ok, let's touch on this. From my perspective, there are two types of storage (there are more, but these are the ones in scope). There is payload storage (think OS and application) and bulk storage. Bulk storage can most efficiently be served by a single device such as your 36 bay with slow spinning drives...
  9. A

    yawgpp (Yet Another Windows Guest Performance Post)

Caching occurs in multiple layers of the presentation. By the time a virtual disk is presented to a guest, the multiple caching layers can conflict and actually SLOW the guest storage performance. See https://forum.proxmox.com/threads/disk-cache-wiki-documentation.125775/
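If double-caching is suspected, the cache mode of the guest disk can be set explicitly; a sketch (the VM id, storage name, and disk name below are placeholders):

```shell
# cache=none bypasses the host page cache for this virtual disk,
# leaving caching to the guest OS
qm set 100 --scsi0 local-lvm:vm-100-disk-0,cache=none
```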
  10. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

You don't need PCI passthrough for LXC; you'd just need to install the proper NVIDIA driver based on the hardware and kernel deployed. You are better off creating an installation script, especially if you intend on having multiple nodes with GPUs. FYI, that 4x node solution is VERY old and has...
  11. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

I think you need to carefully consider what your end goal is. PCIe passthrough is not a good citizen in a PVE cluster, since VMs with pinned PCIe devices not only cannot move anywhere, but are also liable to hang the host. If you MUST use PCIe passthrough, consider leaving that node outside the cluster. I...
  12. A

    yawgpp (Yet Another Windows Guest Performance Post)

In the many years I've been using PVE, I haven't had much call for using Windows guests, and when I did it was usually Windows 2016 (and older before that) with reasonably good results. In the last few weeks, I had need of a Windows guest for a specific purpose and I figured it's a good time to set...
  13. A

    Pardon my less-than-intelligent question, but is there a way to install Proxmox on a Ceph cluster?

In a cluster you don't need or even want to back up a host. Everything important lives in /etc/pve, which exists on all nodes. If you DID back up a host (or hosts), you'd open the possibility of restoring a node that has been removed from the cluster and causing untold damage when turning it on. This is...
  14. A

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

The dashboard and smb modules are, as the term suggests, OPTIONAL MODULES. They are not required for "basic functionality" and provide no utility to a Ceph installation as a component of PVE.
  15. A

    Splitbrain 1 Node after Hardware Error

The short answer is yes. The longer answer is that you need to take into consideration what Ceph daemons are running on the node and account for them in the interim. Moving all but OSDs is trivial; just create new ones on other nodes and delete the ones on the "broken" one. OSDs add a wrinkle; it's...
  16. A

    Splitbrain 1 Node after Hardware Error

Simplest fix? Evict the "out" node from the cluster and reinstall PVE on it from scratch, WITH A NEW NAME.
  17. A

    Move Storage fails with qemu-img convert (Invalid argument)

Interwebs say this happens when the on-disk block size goes from a 4k source to a 512b destination. Is reformatting the destination volume a possibility?
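Checking the logical/physical sector sizes on both ends would confirm the theory; a sketch (the device path is a placeholder):

```shell
# LOG-SEC / PHY-SEC columns show logical and physical sector sizes per device
lsblk -o NAME,LOG-SEC,PHY-SEC

# Or query a single device directly: logical, then physical sector size
blockdev --getss --getpbsz /dev/sdX
```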
  18. A

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

I didn't know that, but that kinda raises the question of what the dashboard offers you beyond what PVE presents; if it really is something necessary, I'd probably just set up Ceph with cephadm separate from PVE. PVE doesn't consider the entirety of the Ceph stack necessary for full function since it...
  19. A

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

You don't actually need the module; all it does is integrate what you always could do with smbd. It's a nice-to-have, not a showstopper.
  20. A

    SAS pool on one node, SATA pool on the other

Correct, and it doesn't have to be a full PVE either. See https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_corosync_external_vote_support
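Per that chapter, the external vote (QDevice) setup boils down to a few commands; a sketch (the IP is a placeholder):

```shell
# On the external box (any small Debian host, does not need to run PVE):
apt install corosync-qnetd

# On all cluster nodes:
apt install corosync-qdevice

# Then, from any one cluster node:
pvecm qdevice setup <QDEVICE-IP>
```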