Search results

  1. Low disc performance with CEPH pool storage

    Not 50Gbit, 2x25. A single IO request cannot exceed 25Gbit on a single channel of a LAGG, and Ceph transactions are still single threaded. The good news is that it wouldn't really make a difference anyway, since each of your OSD nodes needs two transactions per IO anyway (one on the public interface...
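
    For context on the public-interface point above: Ceph can split client-facing and replication traffic onto separate networks. A minimal ceph.conf sketch, where the subnets are placeholders of my own and not from the thread:

      [global]
          # client/monitor traffic (the "public interface" mentioned above)
          public_network = 10.10.10.0/24
          # OSD-to-OSD replication and heartbeat traffic
          cluster_network = 10.10.20.0/24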
  2. Low disc performance with CEPH pool storage

    Are you SURE? When benchmarking 4k performance, note that: MB/s is irrelevant, what are the IOPS? Data patterns (sequential/random) will have a large impact on the perceived performance. Sequential large read/write performance numbers give the warm and fuzzies but are largely inconsequential...
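
    A minimal benchmarking sketch for the point above, using fio (my choice of tool, not named in the post) to measure 4k random-read IOPS rather than MB/s; the target path and job parameters are illustrative assumptions:

      # 4k random reads, report IOPS rather than throughput
      fio --name=rand4k --filename=/path/to/testfile --size=4G \
          --direct=1 --ioengine=libaio --rw=randread --bs=4k \
          --iodepth=32 --numjobs=4 --runtime=60 --time_based --group_reporting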
  3. Fibre Channel (FC-SAN) support

    Understandable. In my view, deploying a product that is effectively unsupported (by anyone) is a bad solution, regardless of budgetary requirements. Outages, loss of service, or loss of data are more expensive than upfront spending. Given your set of constraints, I'd probably be looking at...
  4. VirGL hardware accelerated h264/h265

    Search the forums. https://forum.proxmox.com/threads/virgl-hardware-accelerated-h264-h265.137023/
  5. Fibre Channel (FC-SAN) support

    True, but there are ways to live with it anyway; it really depends on your commitment to operating PVE in production and the availability of devops talent. Method 1: LVM thick + hardware snapshots. If your storage has internal thin provisioning and snapshot support (which many do), it's possible to...
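
    A rough sketch of how the LVM-thick approach could look on the PVE side, assuming a multipath LUN at /dev/mapper/mpatha and the names vg_san / san-lvm (all placeholders of mine); the snapshots themselves would be taken on the array:

      # put thick LVM on the shared SAN LUN
      pvcreate /dev/mapper/mpatha
      vgcreate vg_san /dev/mapper/mpatha

      # /etc/pve/storage.cfg entry
      lvm: san-lvm
          vgname vg_san
          content images
          shared 1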
  6. Moving Proxmox to a New Server

    Everything in /etc/pve is cluster-wide.
  7. VirGL hardware accelerated h264/h265

    Not... exactly. It's just not included.
  8. Fibre Channel (FC-SAN) support

    Exactly. The drivers are fine. Yes, that is the point. That's kind of a silly point to make. PVE isn't responsible for anything, and the developers choose what they want to support. Again, kind of the point. It's not a TECHNOLOGICAL limitation.
  9. Fibre Channel (FC-SAN) support

    I'm not a Red Hat rep. If you really want to know, reach out to their sales team ;) Otherwise, this may help: https://docs.openstack.org/cinder/latest/reference/support-matrix.html
  10. Fibre Channel (FC-SAN) support

    I know @bbgeek17 is too humble, but I'm pretty sure you can get the functionality you are after using Blockbridge. The state of PVE's support of block-level SANs via API has been "not in scope" for a long time, and it doesn't look like it will be. Linux is not a hypervisor. There are plenty of...
  11. Moving Proxmox to a New Server

    No. It will be reapplied when you join the cluster. I WOULD recommend moving all your assets to your shared storage before proceeding; it will make the whole process much easier and less prone to error. You can move them back when you're done (or better yet, don't bother with the local storage...
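
    For the "move your assets first" step, a hedged example using a VMID, disk, and storage name of my own invention:

      # copy a VM disk to shared storage and drop the local copy
      qm move_disk 100 scsi0 shared-store --delete 1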
  12. Promox Design / Dell Powerstore / Pure Storage

    The more impactful factor would be how you would maintain quorum and manage PVE across a stretched configuration. PVE doesn't have any provisions for this, and you would need to deploy some kind of STONITH at the first-layer cluster (with PVE clusters being the second). If you can have that addressed, the...
  13. Proxmox cluster

    I have to assume the same, but those obviously apply to v1 and v10 according to his network plan, and I left them the same. I'm operating under the assumption that addresses on other VLANs would be arbitrary and can/should use normal reserved IP space. In any case, it can be deduced that his...
  14. Moving Proxmox to a New Server

    You don't actually need to. As long as your host allows legacy boot, you can just boot your existing installation and convert it to UEFI using proxmox-boot-tool. The rest of the answers will apply to a fresh install. Yes, your new OS install will recognize the VGs, as long as your storage is...
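
    The legacy-to-UEFI conversion mentioned above roughly follows the proxmox-boot-tool workflow; a sketch, where /dev/sda2 stands in for whatever your EFI system partition actually is:

      proxmox-boot-tool format /dev/sda2
      proxmox-boot-tool init /dev/sda2
      proxmox-boot-tool status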
  15. DELL fatal error was detected after Proxmox install

    Not wrong, but when it comes to Linux, not accurate; it's the kernel that will matter. PVE 8 has had 4 different kernels during its lifespan to this point (6.2, 6.5, 6.8, 6.11). It's possible that one or more of these will work and can be pinned for the duration.
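
    Pinning a kernel, as suggested, is done with proxmox-boot-tool; a brief sketch (the version string is whatever shows up in the list):

      proxmox-boot-tool kernel list
      proxmox-boot-tool kernel pin <version-from-list>
      # revert later with:
      proxmox-boot-tool kernel unpin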
  16. Proxmox cluster

    I have no idea why you are using CGNAT addresses, or why they are bridges. Instead of waxing poetic, allow me to create a sample interfaces file for you:

      # /etc/network/interfaces
      iface enp33s0f0np0 inet manual
      iface enp33s0f1np1 inet manual

      # Corosync R1
      auto enp33s0f0np0.100
      iface...
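
    As a hedged guess at how such a VLAN stanza usually continues (the address and VLAN ID here are placeholders, not the poster's actual plan):

      auto enp33s0f0np0.100
      iface enp33s0f0np0.100 inet static
          address 10.0.100.11/24
          # dedicated Corosync ring; no gateway needed on this VLAN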
  17. Proxmox cluster

    Might be a good idea to compare all your /etc/network/interfaces files on all nodes, as well as the hosts files. Make sure you have at least one dedicated interface for Corosync (two preferred) and that all nodes can ping each other on the Corosync interface(s). Good practice to also create a...
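
    One way to run that comparison from a single node, assuming the peers are reachable over SSH as pve2 and pve3 (hypothetical names) and that 10.0.100.12 stands in for a peer's Corosync address:

      # diff network and hosts config against each peer
      for n in pve2 pve3; do
          ssh root@$n cat /etc/network/interfaces | diff /etc/network/interfaces -
          ssh root@$n cat /etc/hosts | diff /etc/hosts -
      done

      # confirm reachability on the Corosync network
      ping -c 3 10.0.100.12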
  18. [SOLVED] CEPHFS - SSD & HDD Pool

    Well, one of two things must be true: 1. what's actually mounted is not what PVE is mounting (check mount); 2. the files APPEAR to be the same but are in fact not.
  19. [SOLVED] CEPHFS - SSD & HDD Pool

    Looks like you're mounting the same filesystem twice. Post the output of ceph fs ls and the contents of /etc/pve/storage.cfg.
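
    The checks requested above (and the mount check from the earlier reply) boil down to:

      ceph fs ls
      cat /etc/pve/storage.cfg
      # see what is actually mounted where
      mount | grep ceph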