Intel Arc B580

Hello!

I have followed the guide at https://www.reddit.com/r/homelab/comments/1hggz6l/an_updated_newbies_guide_to_setting_up_a_proxmox/ to try to use my Arc B580 in VMs and LXCs. I followed all the steps up to the point where he adds a PCI Device, but I don't see a single DGx device in the raw device menu. The closest things I can find, which I'm speculating would be the GPU, are these two unnamed PCIe devices:
[screenshot: two unnamed PCIe devices in the raw device list]

Hoping for potential solutions to this, as I want to be able to transcode with my GPU.

Assuming this is the GPU, iDRAC does technically recognize it, since the PCI device ID is E20B, which is Battlemage's PCI ID in Linux:
[screenshot: iDRAC PCI device listing showing device ID E20B]
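For what it's worth, the card can also be checked from the Proxmox shell with lspci (a generic check; the e20b ID here is taken from the iDRAC reading above):

Code:
# Filter by vendor:device ID (8086 = Intel, e20b = Battlemage G21 per iDRAC)
lspci -nn -d 8086:e20b
# Or search the full listing by name/ID
lspci -nn | grep -i -e battlemage -e e20b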
 
I have since managed to get Proxmox to see the GPU:
[screenshot: GPU visible in the Proxmox device list]
However, I am still unable to assign the device to the VM, and LXCs are not utilising this GPU either.
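Before assigning it, it's probably worth confirming that IOMMU is actually on and that the card sits in its own group (a generic check, nothing B580-specific):

Code:
# No output here means IOMMU is disabled in the BIOS or on the kernel cmdline
ls /sys/kernel/iommu_groups/
# Print every device per IOMMU group; the GPU should not share a group
# with unrelated devices
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'group %s: %s\n' "$n" "${d##*/}"
done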
 
I have installed the 6.11 kernel and can pass the B580 through to my VM.
The vfio-pci kernel driver is also loaded as soon as I have set up passthrough. So far I have only forwarded the graphics card to a VM, not to an LXC.
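A quick way to verify the binding (sketch; 03:00.0 is a placeholder address, use the one lspci reports for your card):

Code:
lspci -nnk -s 03:00.0
# Look for: "Kernel driver in use: vfio-pci"
# If it still says xe or i915, the host driver grabbed the card first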

The B580 is also recognized there, but regardless of the system (Debian/Ubuntu) it is loaded with the Xe driver only.

I have not yet managed to load it there with i915, not even on the Proxmox host itself with force_probe, and I don't know if the B580 works with i915 at all.
My Intel CPU's UHD 630 loads fine with i915...
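For completeness, the force_probe knobs I mean look like this on the kernel cmdline (a sketch; e20b is the B580's device ID, and forcing i915 onto Battlemage may simply not work at all):

Code:
# in /etc/default/grub, then update-grub and reboot
# try to bind i915 to the e20b device (did not work for me):
GRUB_CMDLINE_LINUX_DEFAULT="quiet i915.force_probe=e20b"
# or the reverse: keep i915 away and let xe take it:
GRUB_CMDLINE_LINUX_DEFAULT="quiet xe.force_probe=e20b i915.force_probe=!e20b"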

I have not managed to get Plex to use the B580 for transcoding.
It always shows this error, and I don't know if it's just a missing package or if the Xe driver is not supported:

Code:
DEBUG - [Req#174de/Transcode] Codecs: hardware transcoding: testing API vaapi for device '/dev/dri/renderD129' (Intel Battlemage G21 [Intel Graphics])
ERROR - [Req#174de/Transcode] [FFMPEG] - libva: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so init failed
INFO - [Req#174de/Transcode] Preparing driver imd for GPU Intel Battlemage G21 [Intel Graphics]

If you have the same scenario, I would be happy to hear from you, especially if you have found a solution for the transcoding issue.
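To separate a Plex problem from a general VA-API problem, it can help to test the render node outside Plex with vainfo (from libva-utils); note this uses the distro's media driver, not the copy Plex bundles in its Cache directory:

Code:
apt install vainfo intel-media-va-driver-non-free
LIBVA_DRIVER_NAME=iHD vainfo --display drm --device /dev/dri/renderD129
# If this also fails to init, the system-level driver stack is the
# problem, not Plex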
 
I am not using a VM for Plex, but I haven't been able to use it for my LXC either. In Plex there simply isn't an option to select the B580 as a transcoding device, though my A380 works fine. I should add that I'm also running the 6.11 kernel, but I assume it's simply unsupported until we get a 6.12 kernel from Proxmox.
 
I have not yet managed to load it there with i915, not even on the Proxmox host itself with force_probe, and I don't know if the B580 works with i915 at all.
FYI, Battlemage is not supported at all with i915, only with xe: https://github.com/intel-gpu/intel-gpu-i915-backports/issues/209

AFAIK Battlemage support is only properly there with kernel 6.13 and Mesa > 24 (current Debian does not even have that; for Mesa 24 you'd have to enable the backports repository).
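For reference, on Debian 12 that would look roughly like this (a sketch assuming bookworm; exact Mesa versions in backports may differ):

Code:
echo "deb http://deb.debian.org/debian bookworm-backports main" \
    > /etc/apt/sources.list.d/backports.list
apt update
apt install -t bookworm-backports mesa-va-drivers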

So for now, using PCI passthrough and a bleeding-edge kernel + Mesa in the guest is necessary (AFAIK).

Also, for the PCI ID listing in the UI and in lspci, you can update the PCI IDs with the

Code:
update-pciids
command. This will query a community-run server for a newer PCI ID database (this is only a visual change, not a functional one).
 
Just for information:
I have not managed to get the Intel B580 running with Plex on any Linux (Proxmox, Debian or Ubuntu), not even with kernel 6.12 and mesa-va-drivers installed.

Code:
ERROR - [Req#241ac/Transcode] [FFMPEG] - libva: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so init failed
DEBUG - [Req#241ac/Transcode] Codecs: hardware transcoding: testing API vaapi for device '/dev/dri/renderD129' (Intel Battlemage G21 [Intel Graphics])
ERROR - [Req#241ac/Transcode] [FFMPEG] - libva: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so init failed
DEBUG - [Req#24202/Transcode/oz675gkbhi8ggzby8lr81gs6] Codecs: hardware transcoding: testing API vaapi for device '/dev/dri/renderD129' (Intel Battlemage G21 [Intel Graphics])
ERROR - [Req#24202/Transcode/oz675gkbhi8ggzby8lr81gs6] [FFMPEG] - libva: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so init failed
INFO - [Req#24202/Transcode/oz675gkbhi8ggzby8lr81gs6] Preparing driver imd for GPU Intel Battlemage G21 [Intel Graphics]
ERROR - [Req#24204/Transcode/oz675gkbhi8ggzby8lr81gs6/4f883795-ad41-40d9-b713-05d0209529ec] [AVHWDeviceContext @ 0x7dc4163aa0c0] libva: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Cache/va-dri-linux-x86_64/iHD_drv_video.so init failed
ERROR - [Req#24207/Transcode/oz675gkbhi8ggzby8lr81gs6/4f883795-ad41-40d9-b713-05d0209529ec] Failed to set value 'vaapi=vaapi:/dev/dri/renderD129' for option 'init_hw_device': I/O error

The Intel Media Driver version that Plex builds and bundles seems to be 24.1.5-3, while initial Battlemage support was only added in 24.3.4.
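You can check which Media Driver version the system stack (as opposed to Plex's bundled copy) reports with vainfo, assuming libva-utils is installed:

Code:
vainfo --display drm --device /dev/dri/renderD129 2>&1 | grep -i 'driver version'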

I am not an expert, but I suspect it will take a while before the card can be used on Linux, which is very unfortunate since I bought the card specifically for Plex.
 
I have managed to run my Intel B580 with Jellyfin, Ollama and Stable Diffusion, all in one VM. I first spent a week trying to reuse the pre-existing Proxmox installation that I had been running a Tesla card on, together with the Ubuntu VM the Tesla was passed through to, without success.
It only worked after downloading the latest Proxmox ISO and doing a fresh install of Proxmox: I updated the kernel to 6.11.11-1, then created a VM, installed Ubuntu 24.04 and installed a 6.13.x kernel.

Now for the Proxmox side:

In GRUB (I have an AMD motherboard) I added the following:

Code:
GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt"

and blacklisted the following:

Code:
echo "blacklist i915" >> /etc/modprobe.d/blacklist.conf
echo "blacklist xe" >> /etc/modprobe.d/blacklist.conf
echo "blacklist snd_hda_intel" >> /etc/modprobe.d/blacklist.conf

After that I added the card to the VM, installed all the GPU drivers, and installed Jellyfin; it detected the card and used it with zero issues.

As for Ollama and Stable Diffusion, I used pre-made Docker files to run them in the Jellyfin VM.

hope this helps.
Cheers
 
How did you add the GPU to the VM? And which drivers did you install, other than the ones from https://dgpu-docs.intel.com/driver/overview.html
 
I added the GPU from the VM's Hardware tab, then chose the Intel graphics PCI device.
Run the following in the Proxmox shell and it will update the device hardware names for you: update-pciids
I only used the drivers from the link you provided.
For the Ollama Docker setup I used this guide: https://syslynx.net/llm-intel-b580-linux/
and for Stable Diffusion I used this guide: https://github.com/simonlui/Docker_IPEX_ComfyUI
 
I haven't had much luck myself. I logged this to the ipex-llm project: https://github.com/intel/ipex-llm/issues/12994

I was intending to go with an LXC container, but that meant using the 6.11 kernel, which wasn't going to work with the B580. So, similar to what you guys did, I installed the 6.11 Proxmox kernel and set up an Ubuntu 24.10 VM with a 6.13 kernel and raw PCI passthrough. Running update-pciids on Proxmox helped with identifying the B580 in the list; before that I was using lspci to find the address.

My original plan with the LXC was to easily mount a local directory so I could store the models outside a blob store; with a VM it was even more important to get these things out of the file store. So I added a 9p device as well:

Code:
args: -fsdev local,security_model=mapped,id=fsdev0,path=/mnt/rocket/llm/ -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare

I mounted it and added it to /etc/fstab, and then in the ipex-llm and open-webui containers I pointed to the mount. All my files now live outside the VM, so I don't have to worry about it bloating up with cruft and old blobs I don't know how to manage properly.
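For anyone copying this, the guest side of the 9p share would look something like the following; the mount_tag (hostshare) comes from the args line above, while /mnt/llm is just a placeholder mount point:

Code:
# one-off mount inside the VM
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/llm
# /etc/fstab entry
hostshare  /mnt/llm  9p  trans=virtio,version=9p2000.L,_netdev  0  0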

I used the https://syslynx.net/llm-intel-b580-linux/ guide but found that ipex-llm would fail to load the model. The logs indicate that ZES_ENABLE_SYSMAN wasn't enabled, even though it's set in the container env. Furthermore, the B580 is noted as being used.
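For comparison, the relevant bits of the container invocation look roughly like this (a sketch; the image name is a placeholder for whatever the syslynx guide uses):

Code:
docker run -d --name ipex-llm \
  -e ZES_ENABLE_SYSMAN=1 \
  --device /dev/dri \
  <ipex-llm-image>  # placeholder: image and remaining flags per the guide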

I tried a bunch of models just in case, since it looks like they can have their own issues. So I tried Phi, Llama and Gemma, all the small ones.

Code:
ggml_sycl_init: found 1 SYCL devices:
time=2025-03-23T10:51:50.559+08:00 level=INFO source=runner.go:967 msg="starting go runner"
time=2025-03-23T10:51:50.559+08:00 level=INFO source=runner.go:968 msg=system info="CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 | cgo(gcc)" threads=6
get_memory_info: [warning] ext_intel_free_memory is not supported (export/set ZES_ENABLE_SYSMAN=1 to support), use total memory as free memory
llama_load_model_from_file: using device SYCL0 (Intel(R) Graphics [0xe20b]) - 11605 MiB free

time=2025-03-23T10:51:50.559+08:00 level=INFO source=runner.go:1026 msg="Server listening on 127.0.0.1:45721"
time=2025-03-23T10:51:50.704+08:00 level=INFO source=server.go:605 msg="waiting for server to become available" status="llm server loading model"
llama_model_loader: loaded meta data with 34 key-value pairs and 883 tensors from /root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
...
llama_model_loader: - type q4_K: 205 tensors
llama_model_loader: - type q6_K: 34 tensors
llama_model_load: error loading model: error loading model hyperparameters: key not found in model: gemma3.attention.layer_norm_rms_epsilon
llama_load_model_from_file: failed to load model
panic: unable to load model: /root/.ollama/models/blobs/sha256-377655e65351a68cddfbd69b7c8dc60c1890466254628c3e494661a52c2c5ada

goroutine 8 [running]:
ollama/llama/runner.(*Server).loadModel(0xc000119560, {0x3e7, 0x0, 0x0, 0x0, {0x0, 0x0, 0x0}, 0xc000503690, 0x0}, ...)
ollama/llama/runner/runner.go:861 +0x4ee
created by ollama/llama/runner.Execute in goroutine 1
ollama/llama/runner/runner.go:1001 +0xd0d
time=2025-03-23T10:51:50.954+08:00 level=ERROR source=sched.go:455 msg="error loading llama server" error="llama runner process has terminated: error loading model: error loading model hyperparameters: key not found in model: gemma3.attention.layer_norm_rms_epsilon\nllama_load_model_from_file: failed to load model"
 
I just updated to 8.4 with the 6.14 opt-in kernel and no dice.

Yeah, I can't get hardware de/encoding to work either on Proxmox 8.4 with the opt-in 6.14 kernel (jellyfin-ffmpeg 7.0.2 does not even show any hardware de/encode capabilities; not just in the LXC, where I could have configured something wrong, but also when installed directly on the host, so I am pretty sure it is not my LXC configuration).
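For anyone wanting to compare results, the checks I mean are roughly these (paths per the Debian jellyfin-ffmpeg package; renderD128 is an example node):

Code:
# list the hw accel methods the jellyfin-ffmpeg build exposes
/usr/lib/jellyfin-ffmpeg/ffmpeg -hide_banner -hwaccels
# and what VA-API itself reports for the node
vainfo --display drm --device /dev/dri/renderD128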

So maybe there is something special about the Proxmox build of the Linux kernel that leaves out something Battlemage needs.

I was blaming my VERY old BIOS (Zen+ is not officially supported by newer BIOS versions for the board, which is why I am hesitant to update, though it should probably just work), but if I am not the only one, maybe it's not that.
 
I've been playing around in the VMs and yeah, it doesn't seem to be working for me there either. The VM and Proxmox are both on the 6.14 kernel, with drivers installed in the VM. I do see card0, but renderD128 isn't showing up no matter what I do. Honestly at a loss.
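For what it's worth, when card0 appears without a render node, checking what actually bound and whether the probe errored usually narrows it down (generic checks, nothing B580-specific):

Code:
ls -l /dev/dri/                      # expect card0 plus a renderD12x node
dmesg | grep -iE '\bxe\b|i915|drm'   # look for probe/firmware errors
lspci -nnk -d 8086:e20b              # "Kernel driver in use:" should say xe
# missing GuC/HuC firmware (linux-firmware package) inside the VM can
# also leave the render node absent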