Search results

  1. VirGL hardware accelerated h264/h265

    Search the forums: https://forum.proxmox.com/threads/virgl-hardware-accelerated-h264-h265.137023/
  2. Fibre Channel (FC-SAN) support

    True, but there are ways to live with it anyway; it really depends on your commitment to operating PVE in production and the availability of devops talent. Method 1: LVM Thick + hardware snapshots. If your storage has internal thin provisioning and snapshot support (which many do), it's possible to...
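
    For a sense of what Method 1 involves, here is a minimal sketch, assuming the FC LUN shows up as the multipath device /dev/mapper/mpatha (a hypothetical name; check multipath -ll for yours):

      # Create a thick LVM volume group on the shared FC LUN
      pvcreate /dev/mapper/mpatha
      vgcreate fc_vg /dev/mapper/mpatha
      # Register it as shared LVM storage in PVE; snapshots then come from the array, not PVE
      pvesm add lvm fc-san --vgname fc_vg --shared 1 --content images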
  3. Moving Proxmox to a New Server

    Everything in /etc/pve is cluster-wide.
  4. VirGL hardware accelerated h264/h265

    Not... exactly. It's just not included.
  5. Fibre Channel (FC-SAN) support

    Exactly. The drivers are fine. Yes, that is the point. That's kind of a silly point to make. PVE isn't responsible for anything, and chooses what they want to support. Again, kind of the point. It's not a TECHNOLOGICAL limitation.
  6. Fibre Channel (FC-SAN) support

    I'm not a Red Hat rep. If you really want to know, reach out to their sales team ;) Otherwise, this may help: https://docs.openstack.org/cinder/latest/reference/support-matrix.html
  7. Fibre Channel (FC-SAN) support

    I know @bbgeek17 is too humble, but I'm pretty sure you can get the functionality you are after using Blockbridge. The state of PVE's support of block-level SANs via API has been "not in scope" for a long time, and it doesn't look like it will be. Linux is not a hypervisor. There are plenty of...
  8. Moving Proxmox to a New Server

    No. It will be reapplied when you join the cluster. I WOULD recommend moving all your assets to your shared storage before proceeding; it will make the whole process much easier and less prone to error. You can move them back when you're done (or better yet, don't bother with the local storage...
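
    As a hedged illustration, moving a guest disk ahead of the join might look like this, assuming VM 100 with disk scsi0 and a shared storage named shared-lvm (both hypothetical):

      # Move the disk to shared storage and drop the local copy
      # (on older PVE releases the command is "qm move-disk")
      qm disk move 100 scsi0 shared-lvm --delete 1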
  9. Proxmox Design / Dell Powerstore / Pure Storage

    The more impactful factor would be how you would maintain quorum and manage PVE across a stretch configuration. PVE doesn't have any provisions for this, and you would need to deploy some kind of STONITH at the first-layer cluster (with PVE clusters being the second). If you can have that addressed, the...
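
    One common way to address the quorum half of this, sketched under the assumption of a small third-site host reachable by both sides at 10.0.0.50 (hypothetical), is a corosync QDevice:

      # On the tie-breaker host (plain Debian): apt install corosync-qnetd
      # On any PVE node:
      pvecm qdevice setup 10.0.0.50
      pvecm status   # the QDevice should now appear with a vote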
  10. Proxmox cluster

    I have to assume the same, but those obviously apply to v1 and v10 according to his network plan, and I left them the same. I'm operating under the assumption that addresses on other VLANs would be arbitrary and can/should use normal reserved IP space. In any case, it can be deduced that his...
  11. Moving Proxmox to a New Server

    You don't actually need to. As long as your host allows legacy boot, you can just boot your existing installation and convert it to UEFI using proxmox-boot-tool. The rest of the answers will apply to a fresh install. Yes, your new OS install will recognize the VGs, as long as your storage is...
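
    A rough sketch of the proxmox-boot-tool conversion, assuming an existing EFI system partition at /dev/sda2 (hypothetical; create one first if you don't have it):

      proxmox-boot-tool format /dev/sda2   # WARNING: wipes that partition
      proxmox-boot-tool init /dev/sda2
      proxmox-boot-tool status             # verify before switching the host to UEFI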
  12. DELL fatal error was detected after Proxmox install

    Not wrong, but when it comes to Linux, not accurate; it's the kernel that will matter. PVE 8 has had four different kernels during its lifespan to this point (6.2, 6.5, 6.8, 6.11). It's possible that one or more of these will work and can be pinned for the duration.
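
    Trying and pinning a specific kernel might look like the following, with the 6.8 series and the exact version string purely illustrative:

      apt install proxmox-kernel-6.8              # install a specific kernel series
      proxmox-boot-tool kernel list               # see what is installed
      proxmox-boot-tool kernel pin 6.8.12-2-pve   # pin one release (version is hypothetical)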
  13. Proxmox cluster

    I have no idea why you are using CGNAT addresses, or why they are bridges. Instead of waxing poetic, allow me to create a sample interfaces file for you:

      # /etc/network/interfaces
      iface enp33s0f0np0 inet manual
      iface enp33s0f1np1 inet manual

      # Corosync R1
      auto enp33s0f0np0.100
      iface...
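
    The snippet cuts off mid-file; a minimal sketch of how the corosync VLAN stanza might continue, assuming VLAN 100 and a 10.0.100.0/24 subnet (both hypothetical):

      iface enp33s0f0np0.100 inet static
          address 10.0.100.11/24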
  14. Proxmox cluster

    Might be a good idea to compare all your /etc/network/interfaces files on all nodes, as well as the hosts files. Make sure you have at least one dedicated interface for corosync (two preferred) and that all nodes can ping each other on the corosync interface(s). Good practice to also create a...
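
    A quick hedged way to run that comparison and check, assuming peer nodes named pve2 and pve3 (hypothetical):

      # diff this node's interfaces file against each peer's
      for n in pve2 pve3; do ssh root@$n cat /etc/network/interfaces | diff /etc/network/interfaces -; done
      # check corosync link health from the local node
      corosync-cfgtool -s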
  15. [SOLVED] CEPHFS - SSD & HDD Pool

    Well, one of two things must be true: 1. what's actually mounted is not what PVE is mounting (check mount), or 2. the files APPEAR to be the same but are in fact not.
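
    For point 1, something like the following shows what is actually mounted (no assumptions beyond CephFS-backed storages):

      findmnt -t ceph   # list every CephFS mount with its source filesystem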
  16. [SOLVED] CEPHFS - SSD & HDD Pool

    Looks like you're mounting the same filesystem twice. Post the output of ceph fs ls and the content of /etc/pve/storage.cfg.
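
    For reference, a hedged sketch of what two distinct CephFS storages might look like in /etc/pve/storage.cfg, with cephfs_ssd and cephfs_hdd as hypothetical filesystem names; without the fs-name line, both entries would mount the default filesystem, i.e. the same one twice:

      cephfs: cephfs-ssd
              path /mnt/pve/cephfs-ssd
              content backup,iso,vztmpl
              fs-name cephfs_ssd

      cephfs: cephfs-hdd
              path /mnt/pve/cephfs-hdd
              content backup
              fs-name cephfs_hdd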
  17. Feature Request: Userscripts in webUI

    The source is not the relevant part, insofar as that makes a software trustworthy or not. There's plenty of open-source malware. The admonition here is not to run software you found on the internet without knowing what it does, especially if it has access to the root of your OS.
  18. Replace RDS server with mass Windows 11

    As others pointed out, this isn't actually an option (at least not a valid one licensing-wise). It's also a whole lot less efficient than a handful of terminal services hosts. I would pause here to discuss WHAT your clients use in their remote sessions; there may be more efficient/less costly ways...
  19. HA Migration

    Isn't is the opposite of is.
  20. Proxmox + Ceph Cluster – Architecture & Technical Validation

    Completely up to you. You can have multiple pools with multiple CRUSH setups using the same disks, but generally speaking, unless you have different OSD classes, a single pool is likely what you want. SQL wants smaller object sizes (like 16k), but trial and error would show you what yields the best...
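
    If you did have different OSD classes, a class-specific pool might be set up along these lines (rule name, pool name, and PG count are illustrative):

      # replicated rule that only selects SSD-class OSDs, failure domain = host
      ceph osd crush rule create-replicated ssd-rule default host ssd
      ceph osd pool create ssd-pool 128 128 replicated ssd-rule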