Recent content by n1nj4888

  1. e1000 driver hang

    I’m still using the disable-TSO/GSO workaround I posted previously in the following post - since I did this (and restarted the node afterwards), I’ve never had any of the “Detected Hardware Unit Hang” errors that I used to get, so I’d suggest you try that...
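For reference, a sketch of that workaround as I apply it, assuming an e1000/e1000e NIC named eno1 (substitute your own interface name):

```shell
# Disable TCP segmentation offload and generic segmentation offload
# on the NIC (replace eno1 with your interface name)
ethtool -K eno1 tso off gso off

# Verify both offloads now report "off"
ethtool -k eno1 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'

# To persist across reboots, add a post-up hook under the physical
# interface stanza in /etc/network/interfaces:
#   post-up /sbin/ethtool -K eno1 tso off gso off
```

Remember the change made with `ethtool -K` alone does not survive a reboot, hence the post-up hook.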
  2. Dlink DUB-E250 2.5Gbps ethernet adapter reports no auto-negotiation and only half duplex?

    To add to this, I also tried a similar Ugreen 2.5Gbps USB Ethernet NIC and saw the same issues as above... Both the Ugreen USB NIC and the DLink DUB-E250 USB NIC use the same Realtek RTL8156-based chipset, and on both the Linux PVE 5.13 and 5.15 kernels they are seemingly using the cdc_ncm driver...
  3. Dlink DUB-E250 2.5Gbps ethernet adapter reports no auto-negotiation and only half duplex?

    Hi All, I thought I'd try a couple of DLink DUB-E250 USB 2.5Gbps Ethernet adapters (on two different Proxmox nodes), but both only seem to connect at half duplex, and both report that auto-negotiation is not supported/enabled, yet the DLink website suggests they should support this...
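For anyone wanting to reproduce the checks above, these are the commands I used, assuming the adapter shows up as enx001122334455 (the name is an example; find yours with `ip link`):

```shell
# What did the kernel negotiate? Check the Speed, Duplex and
# Auto-negotiation lines in the output
ethtool enx001122334455

# Confirm which driver actually bound to the USB device
lsusb -t
readlink /sys/class/net/enx001122334455/device/driver
```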
  4. Reducing rrdcached writes

    Is there a simple configuration item to make rrdcached write to a RAM disk rather than to disk? This approach could leave WRITE_INTERVAL and FLUSH_INTERVAL as standard but write to RAM instead of wearing out SSDs. I don’t fully understand why Proxmox writes so much to disk if it could easily use...
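To illustrate the idea, a rough sketch of what I mean by a RAM disk, assuming PVE keeps its RRD files under /var/lib/rrdcached/db (note tmpfs contents are lost on reboot, so graph history would need syncing back to disk separately):

```shell
# Mount a tmpfs over the PVE RRD directory (contents lost on reboot!)
mount -t tmpfs -o size=128M,mode=0755 tmpfs /var/lib/rrdcached/db

# To make it permanent, add a line like this to /etc/fstab:
#   tmpfs /var/lib/rrdcached/db tmpfs size=128M,mode=0755 0 0
```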
  5. Live Migration with GVT-g (mdev) passthrough device?

    Just following up on this: it looks like the QEMU patches have been applied in QEMU v5.2 and the kernel patches are a work in progress - https://github.com/intel/gvt-linux/issues/175#issuecomment-792516421
  6. [TUTORIAL] Create a Proxmox 6.2 cluster GlusterFS storage

    Thanks for the tutorial! I think the fsidk -l should be fdisk -l... Also, it would be good to update the initial brick-creation step so that the brick is based on a thinly provisioned logical volume rather than the thick-provisioned logical volume that would be created above - the added bonus is...
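As a sketch of the thin-provisioned alternative, assuming a volume group named gluster and example sizes:

```shell
# Create a 100G thin pool in the "gluster" VG, then a thinly
# provisioned LV inside it for the brick
lvcreate -L 100G -T gluster/brickpool
lvcreate -V 100G -T gluster/brickpool -n brick1

# Format and mount the brick as in the tutorial
mkfs.xfs /dev/gluster/brick1
mkdir -p /gluster/brick1
mount /dev/gluster/brick1 /gluster/brick1
```

The thin LV only consumes pool space as data is written, which also makes later snapshots cheaper.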
  7. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    Glad it’s working. You may want to mark the thread as solved?
  8. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    I have a NUC8i5BEH where I pass the Intel GPU through (for QuickSync access only) to an Ubuntu 20.04 VM using a mediated device (GVT-g). I don’t recall having to add any specific drivers to the Ubuntu VM to get it working, so you might want to try that route?
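In case it helps, the gist of how I attach the vGPU on the PVE side - the VMID (100) and the mdev type name below are examples, and the available types depend on your CPU/iGPU:

```shell
# List the GVT-g mediated device types the iGPU offers
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

# Attach a GVT-g vGPU to VM 100
qm set 100 -hostpci0 00:02.0,mdev=i915-GVTg_V5_4
```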
  9. Comet Lake NUC (NUC10i5FNH / NUC10i7FNH) gvt-g support?

    Glad you were able to get GVT-g working. I basically just followed the "General Requirements" and then "Mediated Devices (vGPU, GVT-g)" sections from the wiki: https://pve.proxmox.com/wiki/PCI(e)_Passthrough#_mediated_devices_vgpu_gvt_g. I don't recall changing anything else, but I could check...
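From memory, the host-side steps from those wiki sections boil down to something like this (a sketch, not a substitute for the wiki - module names have changed in newer kernels):

```shell
# In /etc/default/grub, enable the IOMMU and GVT-g, e.g.:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
update-grub

# Load the mediated-device modules at boot
cat >> /etc/modules <<'EOF'
kvmgt
vfio-iommu-type1
vfio-mdev
EOF

reboot
```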
  10. External Metrics dictionary and VM disk size/usage?

    Hi There, Is there a dictionary which defines the external metrics (and their meanings) that PVE pushes out to (for example) InfluxDB? For example, it's not clear to me how I'd get the VM disk size (and ideally usage, though I suspect this is not possible from the metrics alone?) for the disk...
  11. Error: Message too long in External Metric

    As I recall, you’ll need to run systemctl restart pvestatd after making the change on each box. I’ll wait for the patch before making the changes.
  12. Error: Message too long in External Metric

    Looks like @tom has applied a patch for this to allow the MTU to be configured in /etc/pve/status.cfg in a future release of PVE. So we will have to wait for that and then set the MTU to 1450 rather than the default of 1500 in status.cfg. Thanks @tom and @fabian for sorting this...
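Once the patch lands, I'd expect the InfluxDB section of /etc/pve/status.cfg to look something like this (server and port are examples; the mtu key is the new setting the patch adds):

```
influxdb:
    server 192.168.1.10
    port 8089
    mtu 1450
```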
  13. Error: Message too long in External Metric

    Hi @fabian, I raised the bug for this originally over at Bugzilla, so I thought I'd add some more observations here. I have 3 PVE nodes (pve-host1, pve-host2 and pve-host3, all on the latest version). The issue now occurs on only pve-host2 and pve-host3 because I made the change you suggested to...
  14. Comet Lake NUC (NUC10i5FNH / NUC10i7FNH) gvt-g support?

    Hi Guys, I currently have a Coffee Lake NUC (NUC8i5BEH) where I use GVT-g to provide virtual GPUs to some Ubuntu VMs (mainly for Intel QuickSync), and I wondered whether anyone had tried this on the Comet Lake NUCs (either NUC10i5FNH or NUC10i7FNH), as I'm thinking about buying one of those... According to...
  15. Live Migration with GVT-g (mdev) passthrough device?

    Hi @dcsapak, Are there any specific URLs you can share where I could check the status of this exciting feature? I’ve posted the same question on the Phoronix article above, but that article is a year old now... Thanks!
