Ollama for Intel GPU
This repo illustrates the use of Ollama with Intel ARC GPU support via ipex-llm and the Ollama Portable Zip. Run the recently released deepseek-r1 model on your local Intel ARC GPU-based PC using Linux.
!Note: All defects in the ipex-llm-based Ollama should be reported directly to the ipex-llm project at https://github.com/intel/ipex-llm
Screenshot
Prerequisites
- Ubuntu 24.04 or newer, for Intel ARC GPU kernel driver support (tested with Ubuntu 24.04.2)
- Docker and Docker Compose installed
- Intel ARC series GPU (tested with Intel ARC A770 16GB and Intel(R) Core(TM) Ultra 5 155H integrated GPU)
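A quick sanity check of these prerequisites before building (the render node under /dev/dri is created by the Intel GPU kernel driver; the exact renderD* number varies by system):

docker --version
docker compose version
ls -l /dev/dri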
Usage
The following will build Ollama with Intel ARC GPU support and compose it with the public Open WebUI docker image from https://github.com/open-webui/open-webui
Linux
git clone https://github.com/mattcurf/ollama-intel-gpu
cd ollama-intel-gpu
docker compose up
!NOTE: If you have multiple GPUs installed (such as an integrated and a discrete GPU), set the ONEAPI_DEVICE_SELECTOR environment variable in the docker compose file to select the intended device.
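For example, assuming the compose service is named ollama-intel-gpu and the image includes the oneAPI sycl-ls tool (both assumptions; check this repo's docker-compose.yml and image), you can list the visible devices with:

docker compose exec ollama-intel-gpu sycl-ls

Then add an entry such as ONEAPI_DEVICE_SELECTOR=level_zero:0 (the first Level Zero GPU) to that service's environment section in docker-compose.yml.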
Then browse to http://localhost:3000 to open the web UI. Create a local Open WebUI credential, click the settings icon in the top right of the screen, select 'Models', click 'Show', and download a model such as 'llama3.1:8b-instruct-q8_0', which fits in the Intel ARC A770's 16GB of VRAM.
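You can also pull and chat with a model from the CLI instead of the web UI, assuming the service is named ollama-intel-gpu and the ollama binary is on the container's PATH (both assumptions; adjust to match this repo's docker-compose.yml and image):

docker compose exec ollama-intel-gpu ollama pull llama3.1:8b-instruct-q8_0
docker compose exec ollama-intel-gpu ollama run llama3.1:8b-instruct-q8_0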
Custom start-ollama.sh entrypoint
The upstream IPEX-LLM portable zip ships a start-ollama.sh that hardcodes OLLAMA_HOST=127.0.0.1 and OLLAMA_KEEP_ALIVE=10m, which prevents the container from accepting connections through Docker's port mapping and causes environment overrides set in Compose to be ignored.
This repo includes a corrected start-ollama.sh (mounted read-only into the container) that honours environment variables set in docker-compose.yml, falling back to sensible defaults (0.0.0.0:11434, 24h).
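The fix amounts to a few lines of shell. A minimal sketch of such a script, assuming the portable zip's ollama binary is unpacked to /opt/ollama (an assumption; the real path depends on this repo's Dockerfile):

#!/bin/bash
# Honour values supplied via docker-compose.yml; otherwise fall back to
# defaults that accept connections through Docker port mapping and keep
# loaded models resident for a day.
export OLLAMA_HOST="${OLLAMA_HOST:-0.0.0.0:11434}"
export OLLAMA_KEEP_ALIVE="${OLLAMA_KEEP_ALIVE:-24h}"
exec /opt/ollama/ollama serve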
Update to the latest IPEX-LLM Portable Zip Version
To update to the latest portable zip version of IPEX-LLM's Ollama, update the compose file with the build arguments shown below, using the latest ollama-*.tgz release from https://github.com/intel/ipex-llm/releases/tag/v2.3.0-nightly, then rebuild the image.
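A hedged sketch of the rebuild, assuming the Dockerfile exposes the zip location as a build argument (the name OLLAMA_PORTABLE_ZIP_URL is hypothetical; match it to the ARG actually declared in this repo's Dockerfile, and substitute a real ollama-*.tgz asset name from the release page):

docker compose build --build-arg OLLAMA_PORTABLE_ZIP_URL=https://github.com/intel/ipex-llm/releases/download/v2.3.0-nightly/ollama-<version>.tgz
docker compose up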
