I'm still using the disable-TSO/GSO workaround I posted previously in the post below. Since I did this (and restarted the node afterwards), I've never had any of the "Detected Hardware Unit Hang" errors that I used to get, so I'd suggest you try that?
Post in thread 'e1000 driver hang'...
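For reference, the workaround itself was roughly the following (a sketch only; the interface name eno1 is a placeholder for whatever NIC the e1000e driver is bound to on your node):
# Turn off TCP segmentation offload and generic segmentation offload on the NIC
ethtool -K eno1 tso off gso off
# To make it persist across reboots, one option is a post-up hook in the
# relevant stanza of /etc/network/interfaces, e.g.:
#   post-up /sbin/ethtool -K eno1 tso off gso off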
To add to this, I also tried a similar Ugreen 2.5Gbps USB Ethernet NIC and saw the same issues as above... Both the Ugreen USB NIC and the D-Link DUB-E250 USB NIC use the same Realtek RTL8156-based chipset, and on both PVE kernels 5.13 and 5.15 they are seemingly using the "cdc_ncm" driver...
Hi All,
Thought I'd try a couple of D-Link DUB-E250 USB 2.5Gbps Ethernet adapters (on two different Proxmox nodes), but both only seem to connect at half duplex, and both report that auto-negotiation is not supported/enabled, even though the D-Link website suggests they should support it...
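In case anyone wants to reproduce, this is roughly how I'm checking the negotiated settings (the interface name is just a placeholder for the USB NIC's enx... device on your system):
# Show the reported speed, duplex and auto-negotiation state
ethtool enx001122334455
# Try to (re-)enable auto-negotiation as a test; this has no effect if the
# driver genuinely doesn't support it
ethtool -s enx001122334455 autoneg on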
Is there a simple configuration item to make rrdcached write to a RAM disk rather than to disk? This approach could leave WRITE_INTERVAL and FLUSH_INTERVAL at their defaults but write to RAM instead of wearing out SSDs. I don't fully understand why Proxmox writes so much to disk if it could easily use...
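To illustrate what I mean, a rough, untested sketch (the /var/lib/rrdcached/db path is an assumption based on a default PVE install, and anything in tmpfs is lost on reboot unless you copy it back to persistent storage yourself):
# /etc/fstab entry putting the rrdcached database directory on a RAM disk
tmpfs /var/lib/rrdcached/db tmpfs defaults,size=128M,mode=0755 0 0
# You'd then need something like a systemd unit or cron job to rsync the
# directory to disk periodically and restore it at boot.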
Just following up on this, looks like the qemu patches have been applied to qemu v5.2 and kernel patches are work in progress - https://github.com/intel/gvt-linux/issues/175#issuecomment-792516421
Thanks for the tutorial!
I think the fsidk -l should be fdisk -l ...
Also, it would be good to update this initial brick creation step so that the brick is based on a thinly provisioned logical volume rather than the thick-provisioned logical volume that would be created above, along the lines sketched below. The added bonus is...
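For example, something like this (a sketch only; the volume group name, LV names and sizes are placeholders, not the ones from the tutorial):
# Thin pool inside the volume group used for the brick
lvcreate --type thin-pool -L 900G -n brick_pool gluster_vg
# Thinly provisioned LV carved out of that pool
lvcreate --type thin -V 900G -n brick1 --thinpool brick_pool gluster_vg
# XFS with 512-byte inodes is the usual recommendation for Gluster bricks
mkfs.xfs -i size=512 /dev/gluster_vg/brick1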
I have a NUC8i5BEH where I pass through the Intel GPU (for QuickSync access only) to an Ubuntu 20.04 VM using a mediated device (GVT-g). I don't recall that I had to add any specific drivers to the Ubuntu VM to get it working, so you might want to try that route?
Glad you were able to get gvt-g working. I basically just followed the "General Requirements" and then "Mediated Devices (vGPU, GVT-g)" sections from the WIKI: https://pve.proxmox.com/wiki/PCI(e)_Passthrough#_mediated_devices_vgpu_gvt_g.
I don't recall changing anything else but I could check...
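In case it helps, the host-side bits from those wiki sections boil down to roughly this (from memory, so treat it as a sketch and follow the wiki page above for the authoritative steps):
# /etc/default/grub - enable the IOMMU and the i915 GVT-g support
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on i915.enable_gvt=1"
# /etc/modules - load the mediated-device module at boot
kvmgt
# then apply and reboot:
update-grub
update-initramfs -u -k all
reboot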
Hi There,
Is there a dictionary which defines the external metrics (and their meaning) that PVE pushes out to (for example) InfluxDB? For example, it's not clear to me how I'd get the VM disk size (and ideally usage, though I suspect this is not possible from the metrics alone?) for the disk...
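In the absence of such a dictionary, the closest I've got is introspecting what actually arrives on the InfluxDB side (the database name "proxmox" is just an example; use whatever your UDP listener writes to):
# List the measurements and fields PVE is actually pushing
influx -database proxmox -execute 'SHOW MEASUREMENTS'
influx -database proxmox -execute 'SHOW FIELD KEYS'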
Looks like @tom has applied a patch for this to enable the MTU to be configured in /etc/pve/status.cfg in a future release of PVE. So we will have to wait for that and then set the MTU to 1450 instead of the default of 1500 in status.cfg.
Thanks @tom and @fabian for sorting this...
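Once it's released, I'd expect the config to look something like this (option name and layout assumed from the patch discussion; check the pvestatd docs when it lands):
# /etc/pve/status.cfg (hypothetical example)
influxdb:
        server 192.168.1.10
        port 8089
        mtu 1450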
Hi @fabian ,
I raised the bug originally over at Bugzilla, so I thought I'd add some more observations here.
I have 3 PVE nodes (pve-host1, pve-host2 and pve-host3, all on the latest version). The issue now only occurs on pve-host2 and pve-host3 because I made the change you suggested to...
Hi Guys,
I currently have a Coffee Lake NUC (NUC8i5BEH) where I use GVT-g to provide virtual GPUs to some Ubuntu VMs (mainly for Intel QuickSync), and I wondered whether anyone had tried this on the Comet Lake NUCs (either NUC10i5FNH or NUC10i7FNH), as I'm thinking about buying one of those...
According to...
Hi @dcsapak,
Are there any specific URLs you can share where I could check the status of this exciting feature?
I've posted the same question on the Phoronix article above, but that article is a year old now...
Thanks!
My /etc/apt/sources.list includes the following no-subscription line (I’m on latest version):
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
But yours is pointing at the old “stretch” repo rather than “buster”?
Because Proxmox writes to InfluxDB over UDP rather than TCP, you have to specify the database name to write to in the InfluxDB (not Proxmox) configuration...
I've not tested this, but if you want to write to multiple InfluxDB databases on the same InfluxDB host, I think you'd need to do something...
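For context, the relevant InfluxDB-side piece is the UDP listener block in influxdb.conf, roughly like this (database name is just an example):
# UDP input that PVE's pvestatd sends to; the target database is chosen here,
# not in Proxmox
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "proxmox"
# For a second database you'd presumably add another [[udp]] block listening
# on a different port (untested).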
Thanks @tom, I've raised the following bug for this. Note that the error message given alternates randomly between scsi0 and efidisk0, so it may not be 100% related to the efidisk0...
https://bugzilla.proxmox.com/show_bug.cgi?id=2805