release/Intel-Core-Ultra-7-155H #1
## Summary

### Project Structure

Refactored into a clean `<service>/Dockerfile` + `docker-compose.<service>.yml` convention:

#### Custom Dockerfiles
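As an illustration of the cache-mount and single-place version-pin pattern these Dockerfiles use, here is a minimal sketch (the package names, `ARG` names, and versions are placeholders, not the actual ones in the repo):

```dockerfile
# syntax=docker/dockerfile:1.4
FROM ubuntu:24.04

# Pin runtime component versions in one place (value here is illustrative).
ARG LEVEL_ZERO_VERSION=1.17.0

# Cache apt metadata and downloaded .debs across builds; the cache mounts
# live on the build host and never end up in the final image layers.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates
```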
- `ipex-ollama/Dockerfile` — IPEX-LLM bundle-based image (Ollama v0.9.3):
  - `# syntax=docker/dockerfile:1.4` with `--mount=type=cache` for apt and download caching
  - `ARG` version pins for all Intel GPU runtime components (bump in one place)
- `sycl-ollama/Dockerfile` — SYCL-from-source image (Ollama v0.16.1):
  - builds `ggml-sycl` with Intel oneAPI `icpx`; Stage 2 is a minimal runtime
- `sycl-ollama/patch-sycl.py` — backward-compatible API patching (no patches needed since v0.16.1 — APIs converged)
- pinned commit `ec98e200` (llama.cpp tag b7437) matching Ollama v0.16.1
- `libggml-sycl.so` + stripped oneAPI runtime libs alongside the official Ollama binary
- `test-glm-ocr.sh` — vision model test script

#### Docker Compose
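A minimal sketch of the conventions the compose files in this section follow (service names, mount paths, and defaults here are illustrative, not copied from the actual files):

```yaml
services:
  ollama:
    image: "${OLLAMA_IMAGE:-ipex-ollama:latest}"  # ${VAR:-default}: override via .env or shell
    shm_size: "16G"              # SYCL/Level Zero needs more than Docker's 64 MB default
    environment:
      no_proxy: "localhost,127.0.0.1,ollama,open-webui"
      NO_PROXY: "localhost,127.0.0.1,ollama,open-webui"
    volumes:
      - ollama-volume:/root/.ollama   # shared volume so models persist across stacks

volumes:
  ollama-volume:
```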
- `docker-compose.yml` (main stack):
  - `${VAR:-default}` syntax (override with `.env` or shell)
  - `shm_size: "16G"` for SYCL/Level Zero shared memory (Docker defaults to 64 MB)
  - `no_proxy`/`NO_PROXY` on all services — prevents corporate/system HTTP proxies from intercepting container-to-container traffic
  - `open-webui` service with `OLLAMA_BASE_URL`, RAG web search, telemetry opt-out
- `docker-compose.sycl-ollama.yml` (alternative stack):
  - … `no_proxy`, and Open WebUI config
  - shared `ollama-volume` so models persist when switching between stacks

#### Documentation
- `docs/sycl-vs-vulkan.md` — SYCL vs Vulkan comparison; how `patch-sycl.py` works (and why it's a no-op since v0.16.1)
- `docs/intel-arc-a770-context-limits.md` — VRAM & context guide
- `README.md`: …

## Build & test verification (2026-02-16)
### IPEX-LLM stack (`ipex-ollama`)
- `docker build -t ipex-ollama:latest ./ipex-ollama/` — all layers build successfully
- (… using Intel GPU)
- listening on `0.0.0.0:11434`

### SYCL-from-source stack (`sycl-ollama`)
- `docker compose -f docker-compose.sycl-ollama.yml build` — all stages pass (~87s)
- `patch-sycl.py` exits with code 0 — no patches needed (APIs converged in v0.16.1)
- `ggml-sycl` compiled successfully → `libggml-sycl.so` built and stripped
- `ollama list` returns models from the shared volume (7 models loaded)
- `ollama run llama3.2:1b` responded correctly using the `SYCL0 compute buffer` (1074 MiB allocated)
- `docker compose down`

## Test plan (remaining manual checks)
- `docker compose -f docker-compose.sycl-ollama.yml up --build`
- `patch-sycl.py` exits with code 0 during build
- `docker compose up -d`
- … (`OLLAMA_CONTEXT_LENGTH=8192` in `.env`)
- `no_proxy` prevents proxy interference on corporate networks

See https://github.com/eleiton/ollama-intel-arc/pull/38
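For reference, the `${VAR:-default}` override mechanism the compose files rely on (e.g. for `OLLAMA_CONTEXT_LENGTH` via `.env`) behaves like this in any POSIX shell; the `2048` fallback below is a placeholder default, not the value used in the repo:

```shell
#!/bin/sh
# Unset (or empty) variables fall back to the default after :- ...
unset OLLAMA_CONTEXT_LENGTH
echo "${OLLAMA_CONTEXT_LENGTH:-2048}"    # prints: 2048

# ...while a value set in the environment (or loaded from .env) wins.
OLLAMA_CONTEXT_LENGTH=8192
echo "${OLLAMA_CONTEXT_LENGTH:-2048}"    # prints: 8192
```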