Search results

  1. elurex

    Ceph Planning

    wouldn't local ZFS + replication serve your purpose?
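
    For example, a minimal sketch with zfs send/receive (the dataset and host names here are just examples):

    # snapshot the VM dataset and stream it to the second node
    zfs snapshot rpool/data/vm-100-disk-0@rep1
    zfs send rpool/data/vm-100-disk-0@rep1 | ssh pve2 zfs recv rpool/data/vm-100-disk-0
    # later runs only send the delta between snapshots
    zfs snapshot rpool/data/vm-100-disk-0@rep2
    zfs send -i rep1 rpool/data/vm-100-disk-0@rep2 | ssh pve2 zfs recv rpool/data/vm-100-disk-0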
  2. elurex

    CephFS storage limitation?

    For your reference, ..... Based on those considerations and operational experience, Mirantis recommends no less than nine-node Ceph clusters for production environments. Recommendation for test, development, or PoC environments is a minimum of five nodes. See details in Ceph cluster sizes. And...
  3. elurex

    CephFS storage limitation?

    1. I am very doubtful Ceph storage is suitable for you. Ceph is not meant for small environments; just check the Ceph minimum requirements and you will see. 2. What about rbd export or export-diff? 3. What about rbd import? 4. For smaller I/O... CephFS really drops the ball, and this is probably...
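
    As a rough sketch of the rbd workflow from points 2 and 3 (pool, image, and file paths are just examples):

    # full export of an image to a file, and import on the target side
    rbd export mypool/vm-100-disk-0 /backup/vm-100-disk-0.img
    rbd import /backup/vm-100-disk-0.img mypool/vm-100-disk-0
    # incremental: snapshot, then export only the changes since the previous snapshot
    rbd snap create mypool/vm-100-disk-0@snap2
    rbd export-diff --from-snap snap1 mypool/vm-100-disk-0@snap2 /backup/diff-1-2
    rbd import-diff /backup/diff-1-2 mypool/vm-100-disk-0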
  4. elurex

    Dual NIC passthrough

    You can try just passing 04:00 to the pfSense VM.
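
    For example (assuming the pfSense VM has ID 100; adjust to yours):

    # pass the whole PCI device at 04:00 (all functions) through to VM 100
    qm set 100 -hostpci0 04:00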
  5. elurex

    Ceph Planning

    For BlueStore Ceph storage... SATA SSDs as journal/WAL/DB are not fast enough. It is best to use NVMe SSDs. Keep in mind that FileStore is being phased out.
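
    For illustration, placing the BlueStore DB/WAL on NVMe when creating an OSD could look like this (device paths are just examples):

    # data on the SATA disk, RocksDB on NVMe (the WAL follows the DB device by default)
    ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1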
  6. elurex

    CPU isolation

    I am looking into assigning CPU affinity to a particular VM via /usr/sbin/taskset -cpa 0-3 [PID]. However, I notice that the PVE host itself also has other processes trying to use CPU cores 0~3. How do I exclude other processes from using cores 0~3? I tried adding the kernel parameter...
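
    One common approach (an assumption here, since the post is cut off before naming the parameter) is isolcpus, which keeps the scheduler from placing other tasks on those cores:

    # /etc/default/grub: keep the scheduler off cores 0-3 so only explicitly pinned tasks run there
    GRUB_CMDLINE_LINUX_DEFAULT="quiet isolcpus=0-3"
    # apply and reboot
    update-grub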
  7. elurex

    Proxmox Metric

    Log into influxdb:
    > SHOW MEASUREMENTS
    name: measurements
    name
    ----
    ballooninfo
    blockstat
    cpustat
    memory
    nics
    system
    > SHOW FIELD KEYS
    name: ballooninfo
    fieldKey    fieldType
    --------    ---------
    actual      float
    free_mem    float
    last_update float...
  8. elurex

    Kernel 4.15.18-9 incompatible with 10 Gbps card

    Yes.... trying to compile your own ixgbe driver might be an alternative solution.
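
    The usual out-of-tree build looks roughly like this (the version number is just an example; the source comes from Intel's download site, and you need the matching kernel headers installed):

    apt install pve-headers-$(uname -r)
    tar xzf ixgbe-5.x.y.tar.gz
    cd ixgbe-5.x.y/src
    make install
    # reload the driver
    rmmod ixgbe && modprobe ixgbe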
  9. elurex

    Kernel 4.15.18-9 incompatible with 10 Gbps card

    Well.... if it is an embedded NIC... then it is more of a motherboard vendor/firmware issue.
  10. elurex

    Kernel 4.15.18-9 incompatible with 10 Gbps card

    @tuantv here is my dmesg on my ixgbe device:
    [    2.046280] ixgbe 0000:05:00.0 enp5s0f0: renamed from eth1
    [    7.369699] ixgbe 0000:05:00.0 enp5s0f0: changing MTU from 1500 to 9000
    [    7.543537] ixgbe 0000:05:00.0: registered PHC device on enp5s0f0
    [    7.648574] IPv6: ADDRCONF(NETDEV_UP)...
  11. elurex

    Kernel 4.15.18-9 incompatible with 10 Gbps card

    Do you use a DAC or a transceiver? If you are using a transceiver, you may try the allow_unsupported_sfp=1 option for the ixgbe driver.
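
    For example, to set it persistently via modprobe.d:

    # set the module option and rebuild the initramfs so it applies at boot
    echo "options ixgbe allow_unsupported_sfp=1" > /etc/modprobe.d/ixgbe.conf
    update-initramfs -u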
  12. elurex

    [SOLVED] AMD Vega passthrough on Threadripper 2950X

    I am able to PCI-passthrough an AMD Vega 64 with the following config:
    agent: 1
    bios: ovmf
    boot: c
    bootdisk: scsi0
    cores: 8
    cpu: host
    efidisk0: local-zfs:vm-905-disk-0,size=1M
    hostpci0: 06:00
    machine: q35
    memory: 16384
    name: amdgpu
    net0: virtio=EA:8F:88:AB:6D:50,bridge=vmbr0
    numa: 1
    ostype: l26
    scsi0...
  13. elurex

    How can I configure Proxmox and pfSense vm?

    Make sure you use an e1000 vNIC and not a virtio vNIC.
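
    For example (VM ID 100 is just an example):

    # switch net0 from virtio to the e1000 model on bridge vmbr0
    qm set 100 -net0 e1000,bridge=vmbr0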
  14. elurex

    Kernel 4.15.18-9 incompatible with 10 Gbps card

    @tuantv I have no such issue:
    root@pve3:~# pveversion -v
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-5 (running version: 5.3-5/97ae681d)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    pve-kernel-4.15.18-7-pve: 4.15.18-27
    pve-kernel-4.15.18-1-pve: 4.15.18-19...
  15. elurex

    PVE 5.3.6 - pve-api feature request

    I am running a VDI service that provides GPU passthrough instances to users. Each user has their own VM disks, and depending on the servers' GPU availability, each user's VM needs to be dynamically assigned a MAC and PCIe slot ID. I tried to add the following code to Qemu.pm: return 1 if $authuser eq 'myuser@pam'; but I am...
  16. elurex

    PVE 5.3.6 - pve-api feature request

    Ahhh! I think I found the issue. It's that "hostpci0": "05:00,pcie=1,x-vga=1" is not editable by any non-root user. I guess it needs to be a feature request; the current Qemu.pm prevents non-root users from changing any options.
  17. elurex

    PVE 5.3.6 - pve-api feature request

    My PVE version:
    proxmox-ve: 5.3-1 (running kernel: 4.15.18-9-pve)
    pve-manager: 5.3-6 (running version: 5.3-6/37b3c8df)
    pve-kernel-4.15: 5.2-12
    pve-kernel-4.15.18-9-pve: 4.15.18-30
    pve-kernel-4.15.18-8-pve: 4.15.18-28
    pve-kernel-4.15.18-7-pve: 4.15.18-27
    ceph: 12.2.10-pve1
    corosync: 2.4.4-pve1...
