# Ollama for Intel GPU
[![GitHub license](https://img.shields.io/github/license/mattcurf/ollama-intel-gpu)](https://github.com/mattcurf/ollama-intel-gpu)
This repo illustrates the use of Ollama with Intel ARC GPU support, via ipex-llm and the Ollama Portable ZIP. Run the recently released [deepseek-r1](https://github.com/deepseek-ai/DeepSeek-R1) model on your local Intel ARC GPU based PC using Linux.
> [!NOTE]
> All defects in the ipex-llm based Ollama should be reported directly to the ipex-llm project at https://github.com/intel/ipex-llm
## Screenshot
![screenshot](doc/screenshot.png)
## Prerequisites
* Ubuntu 24.04 or newer (for Intel ARC GPU kernel driver support); tested with Ubuntu 24.04.2
* Docker and Docker Compose installed
* Intel ARC series GPU (tested with an Intel ARC A770 16GB and an Intel(R) Core(TM) Ultra 5 155H integrated GPU)
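A quick sanity check for the GPU prerequisite is to confirm that the kernel driver exposes a DRM render node (the exact `card`/`renderD` numbers vary by system):
```shell
# The Intel GPU should appear as a render node once the kernel driver is loaded
ls -l /dev/dri/
```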
## Usage
The following will build the Ollama container image with Intel ARC GPU support, and compose it with the public Open WebUI docker image from https://github.com/open-webui/open-webui:
### Linux
```shell
git clone https://github.com/mattcurf/ollama-intel-gpu
cd ollama-intel-gpu
docker compose up
```
> [!NOTE]
> If you have multiple GPUs installed (such as integrated and discrete), set the `ONEAPI_DEVICE_SELECTOR` environment variable in the docker compose file to select the intended device, as in the example below.
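For illustration, some possible selector values (`ONEAPI_DEVICE_SELECTOR` uses `backend:device` syntax; the device indices on your system may differ, and in the compose file the chosen `NAME=value` pair goes under the service's `environment:` key):
```shell
# Restrict oneAPI to a single device; adjust the index for your system
export ONEAPI_DEVICE_SELECTOR=level_zero:0    # only the first Level Zero device
# export ONEAPI_DEVICE_SELECTOR=level_zero:1  # or the second, e.g. a discrete GPU
```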
Then launch your web browser to http://localhost:3000 to open the web UI. Create a local Open WebUI credential, click the settings icon in the top right of the screen, select 'Models', click 'Show', and download a model such as 'llama3.1:8b-instruct-q8_0' (a good fit for the Intel ARC A770's 16GB of VRAM).
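If you prefer the command line, models can also be pulled through Ollama's REST API on the mapped port (this assumes the compose file publishes Ollama's default port 11434 to the host):
```shell
# Pull a model via the Ollama API instead of the Open WebUI settings dialog
curl http://localhost:11434/api/pull -d '{"name": "llama3.1:8b-instruct-q8_0"}'
```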
### Custom `start-ollama.sh` entrypoint
The upstream IPEX-LLM portable zip ships a `start-ollama.sh` that hardcodes
`OLLAMA_HOST=127.0.0.1` and `OLLAMA_KEEP_ALIVE=10m`, which prevents the
container from accepting connections through Docker port mapping and causes
it to ignore environment overrides from Compose.
This repo includes a corrected `start-ollama.sh` (mounted read-only into the
container) that honours environment variables set in `docker-compose.yml`,
falling back to sensible defaults (`0.0.0.0:11434`, `24h`).
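For reference, here is a minimal sketch of what such a wrapper can look like; the script shipped in this repo may differ in detail, and the portable zip's extraction path inside the container is an assumption here:
```shell
#!/bin/bash
# Honour values supplied via docker-compose.yml, falling back to the
# defaults documented above.
export OLLAMA_HOST="${OLLAMA_HOST:-0.0.0.0:11434}"
export OLLAMA_KEEP_ALIVE="${OLLAMA_KEEP_ALIVE:-24h}"

# Assumed extraction path of the IPEX-LLM Ollama portable zip.
cd /llm/ollama
exec ./ollama serve
```
With this in place, the API answers on the mapped port, which can be verified from the host with `curl http://localhost:11434/api/version`.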
### Update to the latest IPEX-LLM Portable Zip Version
To update to the latest portable zip version of IPEX-LLM's Ollama, update the build arguments in the compose file to reference the latest `ollama-*.tgz` release from https://github.com/intel/ipex-llm/releases/tag/v2.3.0-nightly, then rebuild the image.
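After editing the build arguments, the usual Compose rebuild cycle applies:
```shell
# Rebuild the Ollama image against the new portable zip, then restart the stack
docker compose build --no-cache
docker compose up
```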
## References
* https://dgpu-docs.intel.com/driver/client/overview.html
* https://github.com/intel/ipex-llm/blob/main/docs/mddocs/Quickstart/llamacpp_portable_zip_gpu_quickstart.md
* https://github.com/intel/ipex-llm/releases/download/v2.2.0-nightly/ollama-ipex-llm-2.2.0b20250313-ubuntu.tgz