Search results

  1. tane

    CIFS mount kernel module not working after installing NVIDIA Mellanox DOCA-OFED 3.0 drivers for ConnectX-6

    Hi all, I think this is a problem with the DOCA-OFED drivers, which have some dependencies and recompiled parts that cifs needs. This is the error from modprobe -v cifs: modprobe: DEBUG: ../libkmod/libkmod-module.c:1461 kmod_module_get_options() modname=snd_via82xx_modem mod->name=cifs mod->alias=(null)...
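A hedged way to check what happened to the cifs module after a DOCA-OFED install — these are generic Debian/Proxmox module diagnostics, not the poster's exact steps:

```shell
# Check whether a cifs.ko still exists for the running kernel and rebuild
# the module dependency maps; out-of-tree OFED installs can replace or
# shadow in-tree modules that cifs depends on.
uname -r                                          # running kernel version
find /lib/modules/"$(uname -r)" -name 'cifs.ko*'  # is the module file present?
modinfo cifs | head -n 3                          # which file modprobe resolves to
depmod -a                                         # regenerate modules.dep
modprobe -v cifs                                  # retry with verbose output
```

If the module file is missing for the running kernel, reinstalling the matching kernel/extra-modules package (or rebuilding OFED against the current kernel) is the usual next step.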
  2. tane

    Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    Sorry, it's a mesh config. I have dual 2x100G but no switch, which is OK for me. I also have 6x10Gb/s (also no switch for this); maybe I could move the Ceph public network there, but I think it would probably be slower? What do you think?
  3. tane

    Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    Hi, I have a similar setup with a 3-node mesh configuration: AMD EPYC 7543P 32-core processor, 512GB RAM. I have only 2 x 7TB 7450 PRO (MTFDKCB7T6TFR) per node for now. I did the FRR setup on 100Gbps; a test with iperf reaches about 96Gbps. The NVMe test with fio --ioengine=libaio...
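The iperf and fio runs mentioned above could look roughly like this — flags, addresses, and device paths are assumptions, not the poster's exact invocation:

```shell
# Network: run a server on one mesh node, then test from a peer.
iperf3 -s -D                      # node A: server, daemonized
iperf3 -c 10.15.15.1 -P 4 -t 30   # node B: 4 parallel streams for 30 s

# Disk: 4k random-read test against a raw NVMe namespace (read-only, non-destructive).
fio --ioengine=libaio --direct=1 --rw=randread --bs=4k \
    --numjobs=4 --iodepth=32 --runtime=60 --time_based \
    --name=nvme-randread --filename=/dev/nvme0n1
```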
  4. tane

    [SOLVED] Problem with ConnectX5 100Gbps cards using latest drivers

    I'm quite sure my brain got fried from compiling kernel drivers. Thank you so much, I completely overlooked that I am not using a switch. Thank you, Waltar. Sometimes you just need a small push from good people, thanks so much. Anyway, if someone is trying to work with the latest Proxmox and this...
  5. tane

    Assign Public IP to a VM

    I think firewall rules, automated with a lot of scripting, are the only thing you can do.
  6. tane

    [SOLVED] Problem with ConnectX5 100Gbps cards using latest drivers

    Hi, I am trying to set up ConnectX5 cards in a mesh config using DAC cables: ┌───────────┐ │ Node1 │ ├─────┬─────┤ │enp197s0f0np0│enp197s0f1np1│ └──┬──┴──┬──┘...
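A switchless routed mesh usually gives each DAC link its own point-to-point subnet; a minimal sketch for Node1 in /etc/network/interfaces, where the interface names come from the diagram but the addresses and MTU are assumptions:

```shell
# /etc/network/interfaces fragment for Node1 of a 3-node mesh.
# f0 faces Node2, f1 faces Node3; each link gets a /30.
# (Alternatively, run FRR/OpenFabric over these links with loopback
# addresses instead of static per-link subnets.)
auto enp197s0f0np0
iface enp197s0f0np0 inet static
    address 10.15.15.1/30
    mtu 9000

auto enp197s0f1np1
iface enp197s0f1np1 inet static
    address 10.15.15.5/30
    mtu 9000
```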
  7. tane

    [SOLVED] LVM-thin Samsung 970 Pro SSDs in RAID: benchmark differs depending on the storage driver chosen during install

    I hope this saves someone time. It should be in the documentation for Windows when creating virtual machines, or the default when selecting Windows. Thank you very much for responding, @Alwin.
  8. tane

    [SOLVED] LVM-thin Samsung 970 Pro SSDs in RAID: benchmark differs depending on the storage driver chosen during install

    You are correct; the question is why there is such a difference? Also, if you make a mistake, it is not easy to change.
  9. tane

    [SOLVED] LVM-thin Samsung 970 Pro SSDs in RAID: benchmark differs depending on the storage driver chosen during install

    So basically I got poor performance on qcow2 and then tried raw, which fixed the problem, but without snapshots and the other neat features of qcow2. Then I ran into problems again; it looks like the Windows Server install must be done right. The storage driver makes a major...
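On Proxmox, the controller/driver the Windows guest sees is set per VM and per disk; a hedged example using the `qm` CLI, where the VM ID and storage volume are placeholders:

```shell
# Use the VirtIO SCSI controller and attach the disk on the SCSI bus.
# Windows only sees this disk during install if the virtio-win driver
# ISO is loaded when the installer asks for a storage driver.
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on
```

Changing the bus after install is possible but, as the thread notes, Windows may need the driver present before the disk moves, which is why getting it right during install matters.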
  10. tane

    Choosing Performance Cluster FS with Horizontal Scaling

    Hi guys, we are interested in creating a large-scale installation of Proxmox. The system is in one location, so the nodes and cluster filesystem are bonded with 10G network switches; we are thinking of using 100G. Basic idea: create a 3+ node Proxmox cluster, 10G link to a shared-storage FS, create...
  11. tane

    [SOLVED] ZFS vs other filesystems using NVMe SSD as cache

    Hi gvalverde, I will write you a guide ASAP, OK? For now I would say it works well; very satisfied.
  12. tane

    [SOLVED] ZFS vs other filesystems using NVMe SSD as cache

    Hi guys, thank you for your answers. I did the setup exactly as @gkovacs said; it makes the most sense to me. Here are some benchmarks I did; maybe someone can use them as a reference. Slow storage, randread 75% read, ZIL + L2ARC: fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test6 --filename=test7...
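Attaching the ZIL (SLOG) and L2ARC devices described in this thread is done with `zpool add`; a sketch assuming a pool named "tank" and NVMe partitions laid out as in the posts (device paths are assumptions):

```shell
# Mirrored SLOG (ZIL) from two ~50 GB NVMe partitions, plus the
# remaining NVMe space as L2ARC cache devices.
zpool add tank log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2
zpool status tank   # verify the "logs" and "cache" vdev sections
```

Note the asymmetry: the SLOG should be mirrored because losing it during a crash can lose synchronous writes, while L2ARC is a read cache and needs no redundancy.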
  13. tane

    [SOLVED] ZFS vs other filesystems using NVMe SSD as cache

    Hi Mir, the ZIL is mirrored, 2 x 50 GB. I wrote "RAID 1 ZIL NVMe SSD 50 GB RAID 1" <--- maybe I was not clear.
  14. tane

    [SOLVED] ZFS vs other filesystems using NVMe SSD as cache

    Well, that was the question. I thought that ZFS would not benefit much from NVMe, but I was not sure. Mostly we run a lot of small Docker containers, about 50+, at 768M per container; they are small in memory and do not have big data I/O usage, only some peaks where we would need high I/O to finish as fast as possible with...
  15. tane

    [SOLVED] ZFS vs other filesystems using NVMe SSD as cache

    @Lnxbill, thanks for the reply. Well, we are thinking of testing this in detail. For now, probably about 32 GB to 48 GB max for ZFS. Unfortunately we cannot use LXC; we need Docker inside VMs. So we have two options: do RAID 1 NVMe and run the more intensive apps on NVMe, or try to balance speed...
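Capping ZFS memory to the 32–48 GB range mentioned above is done through the `zfs_arc_max` module parameter; a sketch using the 32 GiB lower bound (the value is just that bound converted to bytes, not the poster's config):

```shell
# Limit the ZFS ARC to 32 GiB: 32 * 1024^3 = 34359738368 bytes.
echo "options zfs zfs_arc_max=34359738368" > /etc/modprobe.d/zfs.conf
update-initramfs -u   # persist the setting into the initramfs for next boot
```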
  16. tane

    [SOLVED] ZFS vs Other FS system using NVME SSD as cache

    Hi people, we would like to move our KVM host to a single Proxmox server on OVH, but we are wondering which filesystem we should set up as primary. This is the server configuration: Intel Xeon E5-1650v3 - 6c/12t - 3.5/3.8 GHz, RAM: 128GB DDR4 ECC 2133 MHz, 2x2TB SATA + 2x450GB NVMe SSD. Our...
