Add GitHub Actions CI to build and push Docker image to GHCR

Workflow triggers on push to main/release branches, tags, PRs, and
manual dispatch. Uses Docker Buildx with GHA cache for faster rebuilds.
Tags images with ollama version, git SHA, and branch/tag names.

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-12 17:34:13 +00:00
parent 971852d3af
commit 52672c34b0
2 changed files with 103 additions and 3 deletions
@@ -0,0 +1,88 @@
name: Build and push Docker image

on:
  push:
    branches:
      - main
      - master
      - "release/**"
    tags:
      - "v*"
  pull_request:
    branches:
      - main
      - master
  workflow_dispatch:
    inputs:
      ollama_version:
        description: "Ollama version to build"
        required: false
        default: "0.15.6"

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
  OLLAMA_VERSION: "0.15.6"

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    timeout-minutes: 60
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Resolve Ollama version
        id: version
        run: |
          if [ "${{ github.event_name }}" = "workflow_dispatch" ] && [ -n "${{ inputs.ollama_version }}" ]; then
            echo "ollama_version=${{ inputs.ollama_version }}" >> "$GITHUB_OUTPUT"
          else
            echo "ollama_version=${{ env.OLLAMA_VERSION }}" >> "$GITHUB_OUTPUT"
          fi

      - name: Extract metadata (tags, labels)
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            # Tag with ollama version on default branch
            type=raw,value=ollama-${{ steps.version.outputs.ollama_version }},enable={{is_default_branch}}
            # Tag "latest" on default branch
            type=raw,value=latest,enable={{is_default_branch}}
            # Tag with git tag (v1.0.0 -> 1.0.0)
            type=semver,pattern={{version}}
            # Tag with branch name for release branches
            type=ref,event=branch,enable=${{ startsWith(github.ref, 'refs/heads/release/') }}
            # Tag with short SHA always
            type=sha,prefix=

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          file: Dockerfile
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          build-args: |
            OLLAMA_VERSION=${{ steps.version.outputs.ollama_version }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
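The "Resolve Ollama version" step gives a manually supplied `workflow_dispatch` input priority and falls back to the pinned `OLLAMA_VERSION` default for every other event. A minimal local sketch of that precedence (the `resolve_version` function name is illustrative, not part of the workflow):

```shell
# Mirrors the workflow's version-resolution logic: a non-empty
# workflow_dispatch input wins; any other event uses the pinned default.
resolve_version() {
  event="$1"; input="$2"; default="$3"
  if [ "$event" = "workflow_dispatch" ] && [ -n "$input" ]; then
    echo "$input"
  else
    echo "$default"
  fi
}

resolve_version workflow_dispatch "0.16.0" "0.15.6"  # manual run with input -> 0.16.0
resolve_version push "" "0.15.6"                     # push event falls back -> 0.15.6
```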
@@ -6,22 +6,34 @@ A Docker-based setup that pairs [Ollama](https://github.com/ollama/ollama) **v0.
**Why this exists:** Ollama's official release ships only a Vulkan backend for Intel GPUs, leaving significant performance on the table. This repo builds the `ggml-sycl` backend from source with Intel oneAPI, unlocking oneMKL, oneDNN, and Level-Zero direct GPU access.
![screenshot](doc/screenshot.png)

---
## Quick start
### Option A: Build from source
```shell
git clone https://github.com/mattcurf/ollama-intel-gpu
cd ollama-intel-gpu
docker compose up
```
Open **http://localhost:3000** — pull a model and start chatting.
The first `docker compose up` builds the SYCL backend from source (~2 min on a modern CPU). Subsequent starts are instant.
### Option B: Use the pre-built image
```shell
docker run -d \
--device /dev/dri:/dev/dri \
--shm-size 16G \
-p 11434:11434 \
-v ollama-data:/root/.ollama \
ghcr.io/mattcurf/ollama-intel-gpu:latest
```
The pre-built image runs Ollama only, so use the API directly at `http://localhost:11434`; for the web UI at **http://localhost:3000**, pair it with the compose setup from Option A.
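Once the container is up, the API on port 11434 can be exercised directly. A quick sketch (the container name and the `llama3.2` model are examples, not fixed by this repo):

```shell
# Pull a model inside the running container (substitute your container name)
docker exec -it ollama-intel-gpu ollama pull llama3.2

# List installed models via the Ollama HTTP API
curl http://localhost:11434/api/tags
```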
> **Multiple GPUs?** Set `ONEAPI_DEVICE_SELECTOR=level_zero:0` in `docker-compose.yml` to pick the right device.
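As a sketch, the selector can be pinned in the compose file like this (the `ollama-intel-gpu` service name is an assumption about this repo's `docker-compose.yml`):

```yaml
# Hypothetical docker-compose.yml fragment: pin Ollama to the first
# Level-Zero (Intel GPU) device when more than one is present.
services:
  ollama-intel-gpu:
    environment:
      - ONEAPI_DEVICE_SELECTOR=level_zero:0
```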
---