Search results

  1. Operating System .ISO copied to RAM

    It will be ZFS caching the file: https://www.zfsbuild.com/2010/04/15/explanation-of-arc-and-l2arc/ When the RAM is requested for something else the cache memory should be returned; however, you can tune how much of the RAM ZFS will use for the ARC.
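
    For example, a minimal sketch of capping the ARC on a host running ZFS on Linux (the 8 GiB value is purely illustrative):

    # Persist an ARC limit of 8 GiB (8589934592 bytes)
    echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
    update-initramfs -u
    # Or apply it at runtime without a reboot
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max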
  2. Linux Kernel 5.3 for Proxmox VE

    Perfect, that did the job! Thanks
  3. Linux Kernel 5.3 for Proxmox VE

    Nope, no ZFS or UEFI.
    dpkg -L pve-kernel-5.3.10-1-pve | grep boot
    /boot
    /boot/System.map-5.3.10-1-pve
    /boot/config-5.3.10-1-pve
    /boot/vmlinuz-5.3.10-1-pve
    /lib/modules/5.3.10-1-pve/kernel/drivers/mtd/parsers/redboot.ko
    /lib/modules/5.3.10-1-pve/kernel/drivers/scsi/iscsi_boot_sysfs.ko...
  4. Linux Kernel 5.3 for Proxmox VE

    dpkg -s pve-kernel-5.3.10-1-pve
    Package: pve-kernel-5.3.10-1-pve
    Status: install ok installed
    Priority: optional
    Section: admin
    Installed-Size: 277054
    Maintainer: Proxmox Support Team <support@proxmox.com>
    Architecture: amd64
    Source: pve-kernel
    Version: 5.3.10-1
    Provides: linux-image...
  5. Linux Kernel 5.3 for Proxmox VE

    I have a server I was testing 5.3 on but had rolled back to 5.0. When I did the dist-upgrade to 6.1 it says it has installed 5.3; however, it is not listed under /boot. I even tried a reinstall:
    apt reinstall pve-kernel-5.3
    Reading package lists... Done
    Building dependency tree
    Reading state...
  6. iGPU Stopped working

    Yeah, that fixed it! Thanks, I didn't find the bug in my earlier searching.
  7. iGPU Stopped working

    I have had the iGPU of an i7-7700 passed through and working for a while; however, a recent update seems to have stopped it working after I rebooted the node. It is still working on another identical machine which I haven't yet rebooted, and I have cross-checked all GRUB / VFIO settings between the two...
  8. Ceph Performance within VMs on Crucial MX500

    During the benchmark have you checked top / nmon on some of the CEPH nodes to see if there is any point of saturation / heavy I/O wait on the OSDs? Have you got a spare SSD that you can benchmark directly on the same hardware platform, to make sure you can at least get the throughput your...
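
    For benchmarking a spare SSD directly, one common sketch is a 4k synchronous-write fio run, which is a rough proxy for the small sync writes CEPH OSDs generate (this writes to the raw device, so /dev/sdX must be a disk you are happy to wipe; the device name is a placeholder):

    # Destructive test: measures sustained 4k sync-write performance on the bare SSD
    fio --name=ssd-sync-test --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based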
  9. Ceph Performance within VMs on Crucial MX500

    Firstly, I noticed you're using 2/1; this is very much not recommended, especially when using consumer-grade SSDs, as you could have total data loss from a single SSD with a corrupted bit. I also noticed you're running at quite high usage; CEPH definitely slows down as the usage gets higher, and with...
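
    As a minimal sketch of moving to the commonly recommended 3/2 replication (the pool name here is hypothetical):

    # size = number of replicas, min_size = replicas required for I/O to continue
    ceph osd pool set vm-pool size 3
    ceph osd pool set vm-pool min_size 2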
  10. Ceph Performance within VMs on Crucial MX500

    What settings are you using for the CEPH Pool? Are you using filestore or bluestore?
  11. CEPH Use High Bandwidth Utilization

    No, as previously said, CEPH is not made to run on such a small amount of bandwidth. Most people run CEPH on 10Gbps, as even 1Gbps can have performance issues; you will also have issues if 25Mbps is all you have. I would highly suggest reviewing and changing your setup.
  12. CEPH Use High Bandwidth Utilization

    25Mbps really isn't a lot; CEPH has to send I/O between the hosts for every write and read. Most people run a CEPH cluster on 10Gbps NICs.
  13. CEPH Use High Bandwidth Utilization

    How much is high? CEPH will use plenty of bandwidth during I/O operations.
  14. Cannot open ceph.conf

    Just create the empty file /etc/ceph/ceph.conf. You don't need to put anything in it, but you may want to add some CEPH client values that the VMs will pick up (cache etc.).
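
    A minimal sketch, assuming the librbd cache is what you want to tune (the values shown are only illustrative):

    touch /etc/ceph/ceph.conf

    # optional client section picked up by the VMs' librbd
    [client]
        rbd cache = true
        rbd cache size = 33554432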
  15. Ceph - Bus/Device

    SCSI has the best performance as it acts the most like a real drive to the OS. Discard will make sure the OS passes deletes down to CEPH when a file is removed, so the space can be freed up in CEPH; otherwise old data won't be removed from the CEPH storage layer. Leave Device# as 0; if you had a second...
  16. How many OSDs?

    Select SCSI from Bus/Device and tick Discard. The rest you can leave as default.
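
    The same can be set from the CLI; a rough sketch, assuming VM ID 100 and a Ceph-backed disk with the name shown (both are hypothetical):

    # use the virtio-scsi controller and enable discard on the first SCSI disk
    qm set 100 --scsihw virtio-scsi-pci
    qm set 100 --scsi0 ceph-pool:vm-100-disk-0,discard=on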
  17. Proxmox 6.0 cluster ceph issue

    You can ignore the vmbr1 line. Can you get more lines of the log from the log file in /var/log?
  18. Ceph with slow and fast storage-pool

    Yes, you can use the CEPH CRUSH map to allocate particular OSDs to separate roots and then allocate a pool to each root, so you can have an SSD pool and an HDD pool. Some of the config you will probably need to do via the CLI, but once it's set up you can manage it via the Proxmox GUI as normal.
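
    One way to sketch this from the CLI is with CRUSH device classes rather than hand-edited roots (a newer mechanism that achieves the same split; the pool and rule names here are hypothetical):

    # rules that only select OSDs of a given device class
    ceph osd crush rule create-replicated ssd-rule default host ssd
    ceph osd crush rule create-replicated hdd-rule default host hdd
    # point each pool at its rule
    ceph osd pool set ssd-pool crush_rule ssd-rule
    ceph osd pool set hdd-pool crush_rule hdd-rule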
  19. Proxmox 6.0 cluster ceph issue

    Is the CEPH process stopping? Anything showing in the logs? If you run "service ceph-osd@# status" (# being the OSD ID), what does it show?
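
    For instance, with OSD ID 0 standing in for the "#" placeholder:

    systemctl status ceph-osd@0
    # and the recent log lines for that OSD
    journalctl -u ceph-osd@0 --since "1 hour ago"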
  20. ceph failure scenario

    What model of SSD?