Search results

  1. Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    This isn't actually so. You can think of the monitor quorum rule as 3:1. Fun fact: a cluster with 2 monitors is more prone to PG errors (monitor disagreement) than one with a single monitor. Feel free to try it yourself: shut down all but one of your monitors and see what happens. This has happened to me on numerous...
  2. Storage/Filesystem recommendations for new Proxmox user

    That's true for any virtualized environment. A nested NAS will always incur a penalty; as you mentioned, CoW on CoW kills performance and destroys space efficiency. To avoid this, don't nest a NAS on your hypervisor; install your NAS on bare metal. Both OMV and TrueNAS have some...
  3. Storage/Filesystem recommendations for new Proxmox user

    That's not what it means. It means the PVE devs are saying "we haven't tested this as completely as other options, and we haven't included controls for all of its functionality." BTRFS is fully supported; it's just that you'd need to go to the CLI for some or much of the functionality, which is to say, outside the...
  4. Bond & Bridge Interfaces - Undesired Behavior

    Because I would not expect that VLAN to be accessible to virtual machines. Adjust that as appropriate. Far be it from me to dissuade you from pursuing NIC-level fault tolerance; suffice it to say I don't. I care about path redundancy: a switch will be rebooted far more often than a NIC will fail...
  5. Proxmox VE 9.1.1 – Windows Server / SQL Server issues

    I'm confused. Were you not asking for help troubleshooting this? SQL performance is a function of two things: query efficiency, and disk I/O latency and IOPS. Since we know your queries are the same, what remains is the storage. How did you have the storage configured in your ESX environment...
  6. Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    Cite your sources, please. Five monitors are "suggested" with a high number of OSD nodes. With a typical CRUSH rule of 3:2, this only makes sense IF you have dedicated monitor nodes (i.e., no OSDs) AND you have environmental issues that take your nodes down routinely. Otherwise, the risk is minuscule...
  7. Bond & Bridge Interfaces - Undesired Behavior

    Not at all. My (and everyone else's) participation in this forum is voluntary; nothing you provide (or don't) is required, as long as you don't expect anything in return. You have 4 ports. How are you attaching them to 7 different devices? More importantly, are they all connected to each other...
  8. Bond & Bridge Interfaces - Undesired Behavior

    Let's back way up. 1. You have 4 physical interfaces; what are they physically connected to? 2. Describe your VLAN plan, and which physical interfaces you want those VLANs to travel over. 3. Describe what traffic you want to use the VLANs for. I can help you create an interfaces file to... (a rough sketch is appended after these results)
  9. Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    Unknown, especially since you posted output from the cluster in a healthy state. Size 4 is generally a bad idea (an even number; the last copy offers no utility), but it should not cause you any issues like this, especially with only one node out. My suggestion: remove 2 monitors (you don't need...
  10. Proxmox VE 9.1.1 – Windows Server / SQL Server issues

    On reflection, storage.cfg doesn't tell us anything useful; instead, post the storage configuration: RAID level, disk technology and count, and subrank block size (e.g., if you have a striped mirror using 4k disks, the subrank size will be 4k; if you have a 10-drive RAID6, the block size will be 32k, etc.)...
  11. Proxmox VE 9.1.1 – Windows Server / SQL Server issues

    There is nothing unusual about a Windows VM laying claim to all of its assigned memory; this is normal. As for your performance issues, please post the content of vmid.conf and /etc/pve/storage.cfg
  12. Ceph performance seems too slow

    What you're describing is a mesh network. For the purposes of this conversation, it is the same as having a single active link on each node for both public and private networking, so you're sharing the I/O for any PG between client and disk traffic on a single 10G link. Keep adding load...
  13. Ceph performance seems too slow

    You DO understand that your "speed" can't be faster than the transport, and if you are using the same interface for both public and private traffic, that essentially caps your performance at 5Gbit/s. This is your drives' observed latency, and has nothing to do with Ceph.
  14. New to Proxmox..

    That doesn't apply to ANY software on an internet-connected device. Security vulnerabilities are constantly identified, exploited, and patched in a never-ending cat-and-mouse game. Moreover, a hypervisor is complex, and problems are constantly identified and patched. Updating isn't...
  15. New to Proxmox..

    Yes, see https://pve.proxmox.com/wiki/User_Management#pveum_permission_management. You will not be DENIED support, but the first answer to any issue would be "make all cluster members the same version." This isn't unique to Nutanix or Proxmox; it's just the design criteria of the software. More to...
  16. Kernel 6.17 bug with megaraid-sas (HPE MR416)

    While I don't see what firmware is on your controller, the host BIOS gives me an idea of how long it's been since you've updated it; time to get the latest SPP.
  17. H740p mini and SAS Intel SSD PX05SMB040

    That's the problem though, isn't it; "linux" won't see the drive until you do.
  18. H740p mini and SAS Intel SSD PX05SMB040

    I just noticed this little bit for the drive in slot 0: you will need to reformat this disk before you can use it. It's possible that the controller firmware will not let you map it until you do, so you will need to plug the drive into a real HBA, use sg_format to reformat it to a 512-byte sector size, and... (an example command is appended after these results)
  19. H740p mini and SAS Intel SSD PX05SMB040

    What about the rest of the devices? If none worked, it's time to look at firmware updates (run Lifecycle Controller) and/or call Dell support.
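Note on result 8 (the Bond & Bridge thread): purely as a rough sketch, assuming two of the four ports (hypothetically eno1 and eno2) are bonded with LACP underneath a VLAN-aware bridge, a PVE /etc/network/interfaces could look like the following. Interface names, the VLAN ID, and addresses are placeholders, not details from the thread.

```
# /etc/network/interfaces (sketch only; interface names, VLAN ID, and addresses are placeholders)
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# management address on VLAN 10, tagged on top of the bridge
auto vmbr0.10
iface vmbr0.10 inet static
    address 192.168.10.2/24
    gateway 192.168.10.1
```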
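Note on result 18 (the H740p / sector-size thread): the low-level reformat to 512-byte logical sectors is typically done with sg_format from the sg3_utils package once the drive is attached to a plain HBA. A sketch only, assuming the drive appears as /dev/sdb (the device name is a placeholder); this operation erases all data on the drive.

```
# sketch only: device path is a placeholder, and FORMAT UNIT wipes the drive
sg_format --format --size=512 /dev/sdb
```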