alexskysilk's latest activity

  • A
    Are you SURE? When benchmarking 4k performance, note that:
    - MB/s is irrelevant; what are the IOPS?
    - data patterns (sequential/random) will have a large impact on the perceived performance.
    Sequential large read/write performance numbers get...
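    For reference, a quick way to measure 4k random IOPS directly is fio; the test name, file path, and sizes below are placeholders for whatever device or file you actually want to test:
        fio --name=randread4k --filename=/path/to/testfile --size=4G \
            --rw=randread --bs=4k --iodepth=32 --numjobs=4 \
            --ioengine=libaio --direct=1 --runtime=60 --time_based --group_reporting
    Swapping --rw=randread and --bs=4k for --rw=write and --bs=1M shows why the large sequential numbers always look so much better.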
  • A
    Understandable. In my view, deploying a product that is effectively unsupported (by anyone) is a bad solution, regardless of budgetary requirements. Outages, loss of service, or loss of data are more expensive than upfront spending. Given your...
  • A
    alexskysilk reacted to bbgeek17's post in the thread Fibre Channel (FC-SAN) support with Like.
    Have you looked at the project's status as of today? This was the last conversation I recall: https://lists.ovirt.org/archives/list/users@ovirt.org/thread/6Z5CCSNZPSBBG2M3GN5YJBNFEMGEHNEA/ Last release 12/1/23 Blockbridge : Ultra low latency...
  • A
    Search the forums. https://forum.proxmox.com/threads/virgl-hardware-accelerated-h264-h265.137023/
  • A
    True, but there are ways to live with it anyway; it really depends on your commitment to operating PVE in production and the availability of devops talent. Method 1: LVM Thick + hardware snapshots. If your storage has internal thin provisioning and...
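    As a rough sketch of the PVE side of that approach: a thick LVM storage backed by a SAN LUN is just a shared LVM entry in /etc/pve/storage.cfg. The storage ID and VG name below are made up, and the snapshot step itself happens on the array with whatever tooling your vendor provides:
        # /etc/pve/storage.cfg (hypothetical entry)
        lvm: san-lvm
                vgname vg_san_lun0
                content images
                shared 1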
  • A
    alexskysilk reacted to spirit's post in the thread Fibre Channel (FC-SAN) support with Like.
    Yes, snapshots on SAN are coming (with qcow2 on top of LVM, like oVirt indeed). I'm working on it; I hope to finish it for PVE 9.
  • A
    The last time I tried this almost 10 years ago, we ran into the same problems and abandoned OCFS2. It's an unsupported solution and it sadly feels like it is not supported for a good reason. We went with FC-based SAN (3 different models over the...
  • A
    Everything in /etc/pve is cluster-wide.
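    A quick way to see this for yourself, assuming the cluster is quorate (the file name here is arbitrary and just for illustration):
        root@node1:~# echo test > /etc/pve/clusterwide-check.txt
        root@node2:~# cat /etc/pve/clusterwide-check.txt
        test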
  • A
    Not... exactly. It's just not included.
  • A
    Exactly. The drivers are fine. Yes, that is the point. That's kind of a silly point to make. PVE isn't responsible for anything, and they choose what they want to support. Again, kind of the point. It's not a TECHNOLOGICAL limitation.
  • A
    I'm not a Red Hat rep. If you really want to know, reach out to their sales team ;) Otherwise, this may help: https://docs.openstack.org/cinder/latest/reference/support-matrix.html
  • A
    I know @bbgeek17 is too humble, but I'm pretty sure you can get the functionality you are after using Blockbridge. The state of PVE's support of block-level SANs via API has been "not in scope" for a long time, and it doesn't look like it will be...
  • A
    No. It will be reapplied when you join the cluster. I WOULD recommend moving all your assets to your shared storage before proceeding. It will make the whole process much easier and less prone to error. You can move them back when you're done...
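    For example, on recent PVE versions the move can be done from the CLI with something like the following (VMIDs, disk names, and the storage ID are placeholders; the Move Storage button in the GUI does the same thing):
        qm disk move 100 scsi0 shared-storage        # move a VM disk to the shared storage
        pct move-volume 101 rootfs shared-storage    # same idea for a container volume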
  • A
    The more impactful factor would be how you would maintain quorum and manage PVE across a stretch configuration. PVE doesn't have any provisions for this, and you would need to deploy some kind of STONITH at the first-layer cluster (with PVE clusters being...
  • A
    alexskysilk replied to the thread Proxmox cluster.
    I have to assume the same, but those obviously apply to v1 and v10 according to his network plan, and I left them the same. I'm operating under the assumption that addresses on other VLANs would be arbitrary and can/should use normal reserved IP...
  • A
    You don't actually need to. As long as your host allows legacy boot, you can just boot your existing installation and convert it to UEFI using proxmox-boot-tool. The rest of the answers will apply to a fresh install. Yes, your new OS install will...
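    The general shape of that conversion, assuming you already have (or create) a ~512M ESP on the boot disk; the partition name below is a placeholder for whatever your layout actually uses:
        proxmox-boot-tool status              # see what is currently registered
        proxmox-boot-tool format /dev/sdX2    # format the ESP (destructive!)
        proxmox-boot-tool init /dev/sdX2      # install the bootloader and sync kernels
        proxmox-boot-tool refresh
    After that, switch the host firmware from legacy/CSM to UEFI boot.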
  • A
    Not wrong, but when it comes to Linux, not accurate; it's the kernel that will matter. PVE 8 has had 4 different kernels during its lifespan to this point (6.2, 6.5, 6.8, 6.11). It's possible that one or more of these will work and can be pinned for the...
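    Pinning is done with proxmox-boot-tool; the version string below is only an example, use whatever your own `kernel list` output shows:
        proxmox-boot-tool kernel list
        proxmox-boot-tool kernel pin 6.8.12-4-pve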
  • A
    alexskysilk replied to the thread Proxmox cluster.
    I have no idea why you are using CGNAT addresses, or why they are bridges. Instead of waxing poetic, allow me to create a sample interfaces file for you:
        # /etc/network/interfaces
        iface enp33s0f0np0 inet manual
        iface enp33s0f1np1 inet manual...
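    For anyone reading along, a generic sketch of how such a file typically continues; the bridge name, address, and gateway below are hypothetical and not taken from the thread:
        auto vmbr0
        iface vmbr0 inet static
                address 192.0.2.10/24       # hypothetical management address
                gateway 192.0.2.1           # hypothetical gateway
                bridge-ports enp33s0f0np0
                bridge-stp off
                bridge-fd 0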
  • A
    alexskysilk replied to the thread Proxmox cluster.
    Might be a good idea to compare all your /etc/network/interfaces files on all nodes, as well as the hosts files. Make sure you have at least one dedicated interface for corosync (two preferred) and that all nodes can ping each other on the...
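    A few quick checks along those lines (the corosync IP is a placeholder for each of your other nodes' addresses):
        pvecm status                          # quorum and membership as corosync sees it
        corosync-cfgtool -s                   # link status for each corosync link
        ping -c 3 <corosync-ip-of-other-node>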
  • A
    Well, one of two things must be true:
    1. What's actually mounted is not what PVE is mounting (check mount).
    2. The files APPEAR to be the same but are in fact not.
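    To check which of the two it is (the storage name and file path are placeholders for your actual mount point):
        findmnt /mnt/pve/<storage>             # what is actually mounted there
        cat /etc/pve/storage.cfg               # what PVE thinks it should be mounting
        md5sum /mnt/pve/<storage>/<some-file>  # compare checksums of the "same" file across nodes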