Search results

  1. Bond & Bridge Interfaces - Undesired Behavior

    Let's back way up. 1. You have 4 physical interfaces; what are they physically connected to? 2. Describe your VLAN plan, and which physical interfaces you want those VLANs to travel over. 3. Describe what traffic you want to use the VLANs for. I can help you create an interfaces file to...
  2. Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    Unknown, especially since you posted output from the cluster in a healthy state. size 4 is generally a bad idea (even number; the last copy offers no utility), but it should not cause you any issues like this, especially with only one node out. My suggestion: remove 2 monitors (you don't need...
  3. Proxmox VE 9.1.1 – Windows Server / SQL Server issues

    On reflection, storage.cfg doesn't tell us anything useful; instead, post the storage configuration: RAID level, disk technology and count, and subrank block size (e.g., if you have a striped mirror using 4k disks, subrank size will be 4k; if you have a 10-drive RAID6, block size will be 32k, etc.)...
  4. Proxmox VE 9.1.1 – Windows Server / SQL Server issues

    There is nothing unusual about a Windows VM laying claim to all assigned memory; this is normal. As for your performance issues, please post the content of: vmid.conf /etc/pve/storage.cfg
  5. Ceph performance seems too slow

    What you're describing is a mesh network. For the purposes of this conversation, that is the same as having a single active link on each node for both public and private networking, so you're sharing the IO for any PG between the client and disk traffic on a single 10g link. Keep adding load...
  6. Ceph performance seems too slow

    You DO understand that your "speed" can't be faster than the transport, and if you are using the same interface for both public and private traffic, that essentially caps your performance at 5gbit/s. This is your drives' observed latency and has nothing to do with Ceph.
  7. New to Proxmox..

    That doesn't apply to ANY software running on an internet-connected device. Security vulnerabilities are constantly identified, exploited, and patched in a never-ending cat and mouse game. Moreover, a hypervisor is complex, and problems are constantly identified and patched. Updating isn't...
  8. New to Proxmox..

    Yes, see https://pve.proxmox.com/wiki/User_Management#pveum_permission_management You will not be DENIED support, but the first answer to any issue would be "make all cluster members the same version." This isn't unique to Nutanix or Proxmox; it's just the design criteria of the software- more to...
  9. Kernel 6.17 bug with megaraid-sas (HPE MR416)

    While I don't see what firmware is on your controller, the host BIOS gives me an idea of how long it's been since you've updated it; time to get the latest SPP.
  10. H740p mini and SAS Intel SSD PX05SMB040

    That's the problem though, isn't it; "linux" won't see the drive until you do.
  11. H740p mini and SAS Intel SSD PX05SMB040

    I just noticed this little bit for the drive in slot 0: you will need to reformat this disk before you can use it. It's possible that the controller firmware will not let you map it until you do, so you will need to plug the drive into a real HBA, use sgdisk to reformat to 512b sector size, and...
  12. H740p mini and SAS Intel SSD PX05SMB040

    What about the rest of the devices? If none worked, it's time to see to firmware updates (run the Lifecycle Controller) and/or call Dell support.
  13. H740p mini and SAS Intel SSD PX05SMB040

    If you intend to use this in production, I suggest you spend some time familiarizing yourself with the tool (megacli); there are many resources online for that. To answer your specific question, megacli expects enclosure:slot, so your command will be: /path/to/MegaCli64 -pdmakegood -physdrv...
  14. H740p mini and SAS Intel SSD PX05SMB040

    That step might not be necessary for your specific controller; PERC cards can serve RAID and JBOD simultaneously.
  15. H740p mini and SAS Intel SSD PX05SMB040

    Ahh, right. ncurses5 is not available on trixie, but the good news is that we don't actually NEED it. We just need to fool the system into thinking we already have it, like so: apt install -y libncurses6; ln -s /usr/lib/x86_64-linux-gnu/libncurses.so.6 /usr/lib/libncurses.so.5; ln -s...
  16. H740p mini and SAS Intel SSD PX05SMB040

    Luckily for you, you don't have to depend on the firmware controls. Once booted into PVE, install megacli/perccli. From there, you can run: /path/to/MegaCli64 -pdlist -a0; /path/to/MegaCli64 -AdpSetProp -EnableJBOD 1 -a0; /path/to/MegaCli64 -pdmakegood -physdrv [your drives identified in line 1]...
  17. OSD errors

    What you're seeing isn't actually errors. It is, however, an indication of OSD resource starvation; if OSDs 4, 6, and 8 are all on the same host, check overall host load, and you might need to make some adjustments hardware-wise. Alternatively, you can lower priority for some OSD garbage collection...
  18. Seagate Exos X16 not detected

    SAS3 drives have a "SAS Power Disable" pin which does not exist on SAS2 or earlier backplanes. How are you attaching the disk to the host bus adapter? What kind of backplane does your server have?
  19. A

    Recovery: OS Drive Died and Now I'm Trying to Recover My VMs

    if you can see the location on the filesystem, you can use this to create matching fstab /etc/pve/storage.cfg entries in your new pve installation. You will need to create the configuration files for your resources manually, however- effectively go through the motions of creating your vms as if...