Search results

  1. e1000 driver hang

    I’m still using the disable TSO/GSO workaround I posted previously in the following post - since I did this (and restarted the node afterwards), I’ve never had any of the “Detected Hardware Unit Hang” errors that I used to get, so I suggest you try that? Post in thread 'e1000 driver hang'... (a hedged ethtool sketch follows after these search results)
  2. Dlink DUB-E250 2.5Gbps ethernet adapter reports no auto-negotiation and only half duplex?

    To add to this, I also tried a similar Ugreen 2.5Gbps USB Ethernet NIC and saw the same issues as above... Both the Ugreen USB NIC and the DLink DUB-E250 USB NIC use the same Realtek 8156-based chipset, and on both Linux PVE kernel 5.13 and 5.15 they are seemingly using the "cdc_ncm" driver... (commands to check the bound driver and duplex follow after these results)
  3. Dlink DUB-E250 2.5Gbps ethernet adapter reports no auto-negotiation and only half duplex?

    Hi All, Thought I'd try a couple of Dlink DUB-E250 USB 2.5Gbps ethernet adapters (on two different Proxmox nodes), but both only seem to connect at half duplex, and both report that auto-negotiation is not supported/enabled, yet the Dlink website suggests they should support this...
  4. Reducing rrdcached writes

    Is it a simple configuration change to point the rrdcached writes at a RAM disk rather than at real disk? This approach could leave the WRITE_INTERVAL and FLUSH_INTERVAL as standard but write to RAM instead of wearing out SSDs. I don’t fully understand why Proxmox writes so much to disk if it could easily use... (a hedged tmpfs sketch follows after these results)
  5. Live Migration with GVT-g (mdev) passthrough device?

    Just following up on this, looks like the qemu patches have been applied to qemu v5.2 and kernel patches are work in progress - https://github.com/intel/gvt-linux/issues/175#issuecomment-792516421
  6. [TUTORIAL] Create a Proxmox 6.2 cluster GlusterFS storage

    Thanks for the tutorial! I think the fsidk -l should be fdisk -l ... Also, it would be good to update the initial brick creation step so that the brick is based on a thinly provisioned logical volume rather than the thick-provisioned logical volume that would be created above - the added bonus is... (see the lvcreate sketch after these results)
  7. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    Glad it’s working. You may want to mark the thread as solved?
  8. [SOLVED] - NUC10 GPU Passthrough (PVE 6.3)

    I have a NUC8i5BEH where I pass through the Intel GPU (for QuickSync access only) to an Ubuntu 20.04 VM using a mediated device (GVT-g). I don’t recall having to add any specific drivers to the Ubuntu VM to get it working, so you might want to try that route?
  9. Comet Lake NUC (NUC10i5FNH / NUC10i7FNH) gvt-g support?

    Glad you were able to get gvt-g working. I basically just followed the "General Requirements" and then "Mediated Devices (vGPU, GVT-g)" sections from the wiki: https://pve.proxmox.com/wiki/PCI(e)_Passthrough#_mediated_devices_vgpu_gvt_g. I don't recall changing anything else but I could check... (a sketch of those steps follows after these results)
  10. External Metrics dictionary and VM disk size/usage?

    Hi there, Is there a dictionary that defines the external metrics (and their meanings) that PVE pushes out to, for example, InfluxDB? It's not clear to me how I'd get the VM disk size (and ideally usage, but I suspect this is not possible from the metrics alone?) for the disk...
  11. Error: Message too long in External Metric

    As I recall, you’ll need to run systemctl restart pvestatd after making the change on each box. I’ll wait for the patch before making the changes.
  12. Error: Message too long in External Metric

    Looks like @tom has applied a patch to allow the MTU to be configured in /etc/pve/status.cfg in a future release of PVE. So we will have to wait for that, and then set the MTU to 1450 rather than the default of 1500 in status.cfg (see the status.cfg sketch after these results). Thanks @tom and @fabian for sorting this...
  13. Error: Message too long in External Metric

    Hi @fabian , I raised the original bug over at Bugzilla for this, so thought I'd put some more observations here. I have 3 PVE nodes (pve-host1, pve-host2 and pve-host3, all on the latest version). The issue now occurs only on pve-host2 and pve-host3 because I made the change you suggested to...
  14. Comet Lake NUC (NUC10i5FNH / NUC10i7FNH) gvt-g support?

    Hi Guys, I currently have a Coffee Lake NUC (NUC8i5BEH) where I use gvt-g for virtual GPUs for some Ubuntu VMs (mainly for Intel QuickSync) and wondered whether anyone had tried this on the Comet Lake NUCs (either NUC10i5FNH or NUC10i7FNH), as I'm thinking about buying one of those... According to...
  15. Live Migration with GVT-g (mdev) passthrough device?

    Hi @dcsapak, Are there any specific URLs you can share where I could check the status of this exciting feature? I’ve posted the same question at the phoronix article above but that article is 1yr old now... Thanks!
  16. proxmox + ceph - which ssd you use / to choose

    @tom - Is the SM863a still a recommended drive for Proxmox Ceph use or are there better alternatives out there nowadays? Thanks!
  17. Pls help: Upgrade community from 6.04 to 6.2

    My /etc/apt/sources.list includes the following no-subscription line (I’m on the latest version): deb http://download.proxmox.com/debian/pve buster pve-no-subscription - but yours is pointing to the old “stretch” repo rather than “buster”? (a full sources.list sketch for buster follows after these results)
  18. Grafana+influxdb monitoring

    Because Proxmox writes to InfluxDB over UDP rather than TCP, you have to specify the target database name in the InfluxDB (not Proxmox) configuration... I’ve not tested this, but if you want to write to multiple InfluxDB databases on the same InfluxDB host, I think you’d need to do something... (see the influxdb.conf sketch after these results)
  19. Full clone feature is not supported for drive 'efidisk0' (500)?

    Thanks @tom, I’ve raised the following bug for this. Note that the error message given alternates randomly between scsi0 and efidisk0, so it may not be 100% related to the efidisk0... https://bugzilla.proxmox.com/show_bug.cgi?id=2805
  20. Full clone feature is not supported for drive 'efidisk0' (500)?

    Hi @tom , pveversion below... Thanks!

    root@pve-host1:~# pveversion -v
    proxmox-ve: 6.2-1 (running kernel: 5.4.41-1-pve)
    pve-manager: 6.2-6 (running version: 6.2-6/ee1d7754)
    pve-kernel-5.4: 6.2-2
    pve-kernel-helper: 6.2-2
    pve-kernel-5.3: 6.1-6
    pve-kernel-5.0: 6.0-11
    pve-kernel-5.4.41-1-pve...
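Example snippets for the search results above

For result 1, a minimal sketch of the TSO/GSO workaround, assuming the e1000/e1000e NIC is called eno1 (substitute your own interface name):

    # one-off test: turn off TCP segmentation offload and generic segmentation offload
    ethtool -K eno1 tso off gso off

    # to make it persistent across reboots, a post-up hook in /etc/network/interfaces:
    # iface eno1 inet manual
    #         post-up /sbin/ethtool -K eno1 tso off gso off

The poster reports that the “Detected Hardware Unit Hang” messages stopped after applying this and rebooting the node.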
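For results 2 and 3, a quick way to confirm which kernel driver the USB NIC ended up with and what the link actually negotiated; the interface name enx001122334455 is just an example:

    # driver bound to the interface (expect cdc_ncm or r8152 for RTL8156 adapters)
    ethtool -i enx001122334455

    # current link state -- check the Speed, Duplex and Auto-negotiation lines
    ethtool enx001122334455

    # USB topology, negotiated USB speed and driver per device
    lsusb -t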
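For result 4 there is no official “write to RAM” switch that I’m aware of; one unofficial approach is to mount a tmpfs over the rrdcached data directory. This is only a sketch: the path is assumed from a stock PVE install, and the RRD history is lost at every reboot unless you copy it back yourself.

    # /etc/fstab -- keep the rrdcached data in RAM (history is NOT persistent!)
    tmpfs  /var/lib/rrdcached  tmpfs  size=256M,mode=0755  0  0

    # after mounting, recreate the db/ and journal/ sub-directories
    # before rrdcached starts, e.g. from a small systemd unit or rc script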
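For result 6, a sketch of basing the brick on a thinly provisioned logical volume instead of a thick one; the volume group, pool name, sizes and mount point are examples, not values from the tutorial:

    # thin pool inside the brick volume group
    lvcreate -L 100G --thinpool brickpool gluster_vg

    # thinly provisioned volume for the brick, carved out of the pool
    lvcreate -V 100G --thin -n brick1 gluster_vg/brickpool

    # filesystem and mount point (paths are examples)
    mkfs.xfs -i size=512 /dev/gluster_vg/brick1
    mkdir -p /data/glusterfs/brick1
    mount /dev/gluster_vg/brick1 /data/glusterfs/brick1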
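For results 8 and 9, the GVT-g steps from the wiki boil down to roughly the following; the mdev type name and PCI address are examples and depend on the CPU generation:

    # /etc/default/grub -- then run update-grub and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"

    # /etc/modules -- modules to load at boot
    vfio
    vfio_iommu_type1
    vfio_pci
    kvmgt

    # list the mdev types the iGPU offers on this host
    ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types

    # VM config (/etc/pve/qemu-server/<vmid>.conf)
    hostpci0: 0000:00:02.0,mdev=i915-GVTg_V5_4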
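For results 11 and 12, once a PVE release with the MTU patch is installed, the change should amount to something like the following; the server address is an example and the exact status.cfg layout may differ between releases:

    # /etc/pve/status.cfg
    influxdb:
            server 192.168.1.10
            port 8089
            mtu 1450

    # pick up the change on every node
    systemctl restart pvestatd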
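For result 17, the usual no-subscription repository set for Proxmox VE 6.x on Debian 10 “buster” looks like this (replace any remaining “stretch” lines):

    # /etc/apt/sources.list
    deb http://ftp.debian.org/debian buster main contrib
    deb http://ftp.debian.org/debian buster-updates main contrib
    deb http://download.proxmox.com/debian/pve buster pve-no-subscription
    deb http://security.debian.org/debian-security buster/updates main contrib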
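For result 18, the database that Proxmox’s UDP metrics land in is chosen on the InfluxDB 1.x side; a minimal sketch, with the database name, port and hostname as examples:

    # /etc/influxdb/influxdb.conf
    [[udp]]
      enabled = true
      bind-address = ":8089"
      database = "proxmox"

    # /etc/pve/status.cfg on the PVE side just points at that UDP listener
    influxdb:
            server your-influx-host
            port 8089

Multiple [[udp]] blocks on different ports, each with its own database, is how you would split metrics into separate databases on the same InfluxDB host.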
