Search results

  1. elurex

    Does Proxmox VE 5.0 Beta1 support kvmgt?

    Answering my own question: in /etc/default/grub, add intel_iommu=on and i915.enable_gvt=1 to GRUB_CMDLINE_LINUX_DEFAULT, and add kvm.ignore_msrs=1 in GRUB or set it via /etc/modprobe.d/kvm.conf: echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf. If you want to run a GPU benchmark test you will see...
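A minimal sketch of the steps above, assuming a stock Proxmox VE/Debian GRUB layout (the paths are the distribution defaults; adjust for your system):

```shell
# /etc/default/grub -- enable the IOMMU and Intel GVT-g on the kernel cmdline
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

# Make KVM ignore unhandled model-specific registers
echo "options kvm ignore_msrs=1" > /etc/modprobe.d/kvm.conf

# Regenerate the boot configuration, then reboot
update-grub
update-initramfs -u -k all
```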
  2. elurex

    Does Proxmox VE 5.0 Beta1 support kvmgt?

    https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_5.3 Any guides on how to make KVMGT (GVT-g) work? The only info I am able to find is at https://youtu.be/0YyMTg9qc74?t=300, but my mdev did not show up. I have a Xeon CPU E3-1585 v5 @ 3.50GHz with an Iris Pro 580 GPU, and it supports GVT-g.
  3. elurex

    How do I license MS Server 2016 standard and assign cores?

    If you set up a three-node PVE cluster capable of live migration between the nodes, then you need to buy licenses on all three nodes with the correct physical core count. MS licensing is very tricky. If you do not create a cluster and just use pvesync to each node, it is considered a cold backup...
  4. elurex

    How do I license MS Server 2016 standard and assign cores?

    You have to buy licenses for all physical cores for Windows Server 2016 Standard. Please read the MS server license; it states pretty clearly that for virtualization you count the physical cores of the hypervisor host, not how many cores the VM guest has been assigned.
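As a back-of-the-envelope check, the Standard-edition per-physical-core rules (2-core license packs, minimum 8 cores per CPU and 16 per host, each full set of packs covering 2 VMs) can be sketched like this; the host sizes are made up, and this is only an arithmetic illustration, not licensing advice:

```shell
sockets=2; cores_per_socket=10; vms=4         # example host, not from the post

cores=$(( cores_per_socket < 8 ? 8 : cores_per_socket ))  # 8-core CPU minimum
billable=$(( cores * sockets ))
if [ "$billable" -lt 16 ]; then billable=16; fi           # 16-core host minimum
packs_per_set=$(( billable / 2 ))             # licenses are sold as 2-core packs
sets=$(( (vms + 1) / 2 ))                     # each full set covers 2 VMs
packs=$(( packs_per_set * sets ))

echo "$packs two-core packs needed"           # 20 packs for this host
```

In a live-migration cluster this count applies to every node, which is why all three hosts above need full licensing.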
  5. elurex

    Can you pass onboard NICs through to a VM?

    /sys/kernel/iommu_groups/15/devices/0000:01:00.0 /sys/kernel/iommu_groups/15/devices/0000:01:00.1 /sys/kernel/iommu_groups/16/devices/0000:02:00.0 /sys/kernel/iommu_groups/16/devices/0000:02:00.1 You can't just pass a single port to your VM; you need to pass both ports, because they are in the...
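Grouping the quoted sysfs entries makes the constraint visible; a sketch in shell (on a live host you would glob /sys/kernel/iommu_groups/*/devices/* instead of this hard-coded list):

```shell
# The four entries quoted in the post above
devices="/sys/kernel/iommu_groups/15/devices/0000:01:00.0
/sys/kernel/iommu_groups/15/devices/0000:01:00.1
/sys/kernel/iommu_groups/16/devices/0000:02:00.0
/sys/kernel/iommu_groups/16/devices/0000:02:00.1"

for d in $devices; do
  g=${d%/devices/*}        # drop "/devices/<pci-address>"
  g=${g##*/}               # keep only the group number
  echo "IOMMU group $g: ${d##*/}"
done
```

Both functions 0000:01:00.0 and 0000:01:00.1 land in group 15, and all devices in one IOMMU group must be passed to the same VM together.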
  6. elurex

    Isolate tenants

    Each vmbr has its own VLAN tag; wouldn't that solve the problem?
  7. elurex

    How to benchmark ceph storage

    You need to check out cbt, the Ceph Benchmarking Tool.
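Besides cbt, Ceph ships a quick built-in benchmark; a sketch assuming a throwaway pool named testbench (the pool name is an example):

```shell
# 30-second write benchmark; keep the objects so they can be read back
rados bench -p testbench 30 write --no-cleanup

# Sequential read benchmark over the objects written above, then clean up
rados bench -p testbench 30 seq
rados -p testbench cleanup
```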
  8. elurex

    Windows 10 Licensing

    That info is not up to date: https://docs.microsoft.com/en-us/windows/deployment/vda-subscription-activation. Windows 10 VDA requires a connection to Microsoft AD or Azure AD, unless you are a Qualified Multitenant Hoster, which 99% of us are not.
  9. elurex

    where to get proxmox stickers?

    where to get proxmox stickers?
  10. elurex

    Proxmox VE 5.0 beta2 released!

    Any update on this subject? GVT-g on KVM/QEMU 2.11, or must we wait for kernel 4.16+ and QEMU 2.12?
  11. elurex

    Proxmox VE 5.1 or 5.2 connected to external Ceph Mimic cluster?

    @Alwin, it is only testing in the lab. When the client is set to Luminous (ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it): monclient: hunting for new mon 2018-07-06 15:27:59.835155 7f14bc69d700 0 will not decode message of type 41 version 4 because compat_version 4 > supported...
  12. elurex

    Proxmox VE 5.1 or 5.2 connected to external Ceph Mimic cluster?

    I can confirm PVE's Luminous client will not work with Ceph 13 Mimic; that's why I went through all the trouble to find an upgrade path to Ceph Mimic: wget -q -O- 'https://static.croit.io/keys/release.asc' | apt-key add - echo 'deb https://static.croit.io/debian-mimic/ stretch main' >>...
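Spelled out, the upgrade path quoted above looks roughly like this; the sources.list filename is my own choice, and held or pinned PVE packages may need extra care:

```shell
# Trust the croit.io release key and add their Mimic repository for stretch
wget -q -O- 'https://static.croit.io/keys/release.asc' | apt-key add -
echo 'deb https://static.croit.io/debian-mimic/ stretch main' \
    > /etc/apt/sources.list.d/croit-mimic.list

# Pull in the Mimic client packages (librbd1, ceph-common, ...)
apt-get update
apt-get install ceph-common librbd1
```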
  13. elurex

    Proxmox VE 5.1 or 5.2 connected to external Ceph Mimic cluster?

    I am running PVE 5.2 with an external Ceph Mimic cluster, and you need to upgrade librbd to 13.2.0-1 (Ceph Mimic) or it will fail to connect. root@pve1:~# pveversion -v proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve) pve-manager: 5.2-3 (running version: 5.2-3/785ba980) pve-kernel-4.15...
  14. elurex

    Ceph How to connect to External RBD w/o auth

    That did it! Thank you, spirit!
  15. elurex

    Ceph How to connect to External RBD w/o auth

    I have an external Ceph storage cluster, and its /etc/ceph/ceph.conf has the following options, which set auth = none: [global] auth client required = none auth cluster required = none auth service required = none ....... My storage configuration: Example for a...
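For reference, an auth-disabled setup looks roughly like the following; the storage name, pool, and monitor addresses are placeholders, not the poster's actual values:

```
# /etc/ceph/ceph.conf on the external cluster
[global]
    auth client required = none
    auth cluster required = none
    auth service required = none

# /etc/pve/storage.cfg on the PVE side
rbd: external-ceph
    monhost 10.0.0.1 10.0.0.2 10.0.0.3
    pool rbd
    content images
    krbd 0
```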
  16. elurex

    glusterfs 4.0 released.. lxc on glusterfs?

    A Gluster cluster's best size is only 3 to 5 nodes... the more nodes it gets, the more problematic it becomes.
  17. elurex

    Ceph blustore over RDMA performance gain

    Well... there is always ZFS rollback if anything goes wrong... I have already turned off RDMA for Ceph. When memory is set to infinite, the monitor runs out of memory and hangs periodically, and the only way to avoid that is a cron job restarting the service, which in my mind is...
  18. elurex

    ixgbe initialize fails

    Currently my test lab has switched out all the ixgbe NICs (Intel X520-DA2 and X550) and replaced them with Mellanox NICs for better driver support. The Intel cards had too much packet loss and very high latency.
  19. elurex

    Ceph blustore over RDMA performance gain

    Your method works great and is much more elegantly done. No error messages at all (but uninstalling the whole of PVE is a bit nerve-racking). Device #1: ---------- Device Type: ConnectX3 Part Number: MCX354A-FCB_A2-A5 Description: ConnectX-3 VPI adapter card; dual-port QSFP; FDR IB...
  20. elurex

    Ceph blustore over RDMA performance gain

    The rbd map command does not work under Ceph over RDMA (rbd map is what PVE uses by default to load a Ceph block device into the kernel); you need to use the rbd-nbd map command in order to mount Ceph block storage. This is really going into Ceph territory and not something PVE can provide. I would suggest checking up...
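A sketch of the workaround, assuming the rbd-nbd package is installed and a pool/image named rbd/vm-disk (the names are examples):

```shell
# krbd (rbd map) cannot speak the RDMA messenger; map through the userspace
# NBD client instead, which goes via librbd and therefore follows ceph.conf
rbd-nbd map rbd/vm-disk          # prints the device node, e.g. /dev/nbd0

# ... use the block device, then unmap it again
rbd-nbd unmap /dev/nbd0
```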
