Merge pull request #6 from eleiton/add-stable-diffusion

Add Stable Diffusion with SD.Next
Authored by eleiton on 2025-03-15 15:37:06 +01:00; committed by GitHub.
7 changed files with 103 additions and 9 deletions

README.md

@@ -1,9 +1,8 @@
-# Run Ollama using your Intel Arc GPU
+# Run Ollama and Stable Diffusion with your Intel Arc GPU
 
-A Docker-based setup for running Ollama as a backend and Open WebUI as a frontend, leveraging Intel Arc Series GPUs on Linux systems.
-
-## Overview
-This repository provides a convenient way to run Ollama as a backend and Open WebUI as a frontend, allowing you to interact with Large Language Models (LLM) using an Intel Arc Series GPU on your Linux system.
+Effortlessly deploy a Docker-based solution that uses [Open WebUI](https://github.com/open-webui/open-webui) as your user-friendly
+AI Interface, [Ollama](https://github.com/ollama/ollama) for integrating Large Language Models (LLM), and [SD.Next](https://github.com/vladmandic/sdnext) to
+streamline Stable Diffusion capabilities, all while tapping into the power of Intel Arc Series GPUs on Linux systems by using [Intel® Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).
 
 ![screenshot](resources/open-webui.png)
@@ -19,9 +18,14 @@ This repository provides a convenient way to run Ollama as a backend and Open We
 * The official distribution of Open WebUI.
 * `WEBUI_AUTH` is turned off for authentication-free usage.
 * `ENABLE_OPENAI_API` and `ENABLE_OLLAMA_API` flags are set to off and on, respectively, allowing interactions via Ollama only.
+* `ENABLE_IMAGE_GENERATION` is set to true, allowing you to generate images from the UI.
+* `IMAGE_GENERATION_ENGINE` is set to automatic1111 (SD.Next is compatible with the AUTOMATIC1111 API).
+3. SD.Next
+   * Uses the official [Intel® Extension for PyTorch](https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=docker) image as its base container.
+   * Uses a customized version of the SD.Next [Dockerfile](https://github.com/vladmandic/sdnext/blob/dev/configs/Dockerfile.ipex), making it compatible with the Intel Extension for PyTorch image.
 
 ## Setup
-Run the following commands to start your Ollama instance
+Run the following commands to start your AI instance
 ```bash
 $ git clone https://github.com/eleiton/ollama-intel-arc.git
 $ cd ollama-intel-arc
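The remaining setup commands are elided by this hunk. Assuming the stack is started with Docker Compose, as the compose changes below suggest, a typical first start would look like this sketch (the exact commands in the repository's README may differ):

```bash
# Build the new sdnext-ipex image and start all services in the background.
# The first start is slow: the SD.Next repository is cloned into the
# sdnext-app-volume and its Python dependencies are installed before the UI comes up.
$ docker compose up -d --build
```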
@@ -45,8 +49,23 @@ When using Open WebUI, you should see this partial output in your console, indic
 ```
 
 ## Usage
+* Open your web browser to http://localhost:7860 to access the SD.Next web page.
+* For the purposes of this demonstration, we'll use the [DreamShaper](https://civitai.com/models/4384/dreamshaper) model.
+* Follow these steps:
+   * Download the `dreamshaper_8` model by clicking on its image (1).
+   * Wait for it to download (~2GB in size) and then select it in the dropdown (2).
+   * (Optional) If you want to stay in the SD.Next UI, feel free to explore (3).
+![screenshot](resources/sd.next.png)
+* For more information on using SD.Next, refer to the official [documentation](https://vladmandic.github.io/sdnext-docs/).
 * Open your web browser to http://localhost:3000 to access the Open WebUI web page.
-* For more information on using Open WebUI, refer to the official documentation at https://docs.openwebui.com/ .
+* Go to the administrator [settings](http://localhost:3000/admin/settings) page.
+* Go to the Image section (1).
+* Make sure all settings look good, and validate them by pressing the refresh button (2).
+* (Optional) Save your changes, if you made any (3).
+![screenshot](resources/open-webui-settings.png)
+* For more information on using Open WebUI, refer to the official [documentation](https://docs.openwebui.com/).
+* That's it! Go back to the Open WebUI main page and start chatting. Make sure to select the `Image` button to indicate that you want to generate images.
+![screenshot](resources/open-webui-chat.png)
 
 ## Updating the containers
 If there are new updates in the [ipex-llm-inference-cpp-xpu](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu) docker Image or in the Open WebUI docker Image, you may want to update your containers, to stay up to date.
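Beyond the diff itself: Open WebUI reaches SD.Next through the AUTOMATIC1111-compatible endpoint configured in the compose file (`AUTOMATIC1111_BASE_URL`), so a quick way to confirm the backend works is to call that API directly from the host. A minimal sketch, assuming SD.Next exposes the standard `/sdapi/v1/` routes on port 7860 and that `dreamshaper_8` has already been downloaded and selected:

```bash
# List the models SD.Next currently knows about (should include dreamshaper_8).
$ curl -s http://localhost:7860/sdapi/v1/sd-models

# Generate one test image and decode the base64 result into out.png.
$ curl -s http://localhost:7860/sdapi/v1/txt2img \
    -H "Content-Type: application/json" \
    -d '{"prompt": "a lighthouse at sunset", "steps": 30, "width": 400, "height": 400}' \
  | python3 -c 'import sys, json, base64; open("out.png", "wb").write(base64.b64decode(json.load(sys.stdin)["images"][0]))'
```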

Docker Compose file

@@ -32,9 +32,38 @@ services:
       - WEBUI_AUTH=False
       - ENABLE_OPENAI_API=False
       - ENABLE_OLLAMA_API=True
+      - ENABLE_IMAGE_GENERATION=True
+      - IMAGE_GENERATION_ENGINE=automatic1111
+      - IMAGE_GENERATION_MODEL=dreamshaper_8
+      - IMAGE_SIZE=400x400
+      - IMAGE_STEPS=30
+      - AUTOMATIC1111_BASE_URL=http://sdnext-ipex:7860/
+      - AUTOMATIC1111_CFG_SCALE=9
+      - AUTOMATIC1111_SAMPLER=DPM++ SDE
+      - AUTOMATIC1111_SCHEDULER=Karras
     extra_hosts:
       - host.docker.internal:host-gateway
     restart: unless-stopped
+
+  sdnext-ipex:
+    build:
+      context: sdnext
+      dockerfile: Dockerfile
+    image: sdnext-ipex:latest
+    container_name: sdnext-ipex
+    restart: unless-stopped
+    devices:
+      - /dev/dri:/dev/dri
+    ports:
+      - 7860:7860
+    volumes:
+      - sdnext-app-volume:/app
+      - sdnext-mnt-volume:/mnt
+      - sdnext-huggingface-volume:/root/.cache/huggingface
+
 volumes:
   ollama-volume: {}
   open-webui-volume: {}
+  sdnext-app-volume: {}
+  sdnext-mnt-volume: {}
+  sdnext-huggingface-volume: {}
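A quick sanity check for the new service (not part of the change, assuming the stack was started with `docker compose up -d --build`): confirm the container sees the Intel GPU and watch its log during the first start.

```bash
# The Intel GPU render nodes passed through via /dev/dri should be visible inside the container.
$ docker compose exec sdnext-ipex ls /dev/dri

# Follow the first start: SD.Next is cloned into /app and its dependencies
# are installed into the venv under /mnt before the web UI listens on port 7860.
$ docker compose logs -f sdnext-ipex
```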

Binary files (not shown):

* Two new images added (368 KiB and 98 KiB)
* One existing image updated (45 KiB → 259 KiB)
* resources/sd.next.png added (637 KiB)

sdnext/Dockerfile (new file)

@@ -0,0 +1,46 @@
FROM intel/intel-extension-for-pytorch:2.6.10-xpu
# essentials
RUN apt-get update && \
apt-get install -y --no-install-recommends --fix-missing \
software-properties-common \
build-essential \
ca-certificates \
wget \
gpg \
git
# python3.10
RUN apt-get install -y --no-install-recommends --fix-missing python3.10-venv
# jemalloc is not required but it is highly recommended (also used with optional ipexrun)
RUN apt-get install -y --no-install-recommends --fix-missing libjemalloc-dev
ENV LD_PRELOAD=libjemalloc.so.2
# cleanup
RUN /usr/sbin/ldconfig
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# stop pip and uv from caching
ENV PIP_NO_CACHE_DIR=true
ENV UV_NO_CACHE=true
# set paths to use with sdnext
ENV SD_DOCKER=true
ENV SD_DATADIR="/mnt/data"
ENV SD_MODELSDIR="/mnt/models"
ENV venv_dir="/mnt/python/venv"
# git clone and start sdnext
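# startup.sh clones the sdnext repository into /app on first start (persisted in the sdnext-app-volume) and then hands off to webui.sh with the flags given below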
RUN echo '#!/bin/bash\ngit status || git clone https://github.com/vladmandic/sdnext.git .\n/app/webui.sh "$@"' | tee /bin/startup.sh
RUN chmod 755 /bin/startup.sh
# run sdnext
WORKDIR /app
ENTRYPOINT [ "startup.sh", "-f", "--use-ipex", "--uv", "--listen", "--debug", "--api-log", "--log", "sdnext.log" ]
# expose port
EXPOSE 7860
# stop signal
STOPSIGNAL SIGINT
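Not part of the change, but a handy sketch for debugging the image in isolation; the image tag and volume names below are arbitrary and only mirror the compose service above:

```bash
# Build the image from the sdnext/ directory and run it standalone,
# passing through the Intel GPU and publishing the SD.Next port.
$ docker build -t sdnext-ipex ./sdnext
$ docker run --rm --device /dev/dri -p 7860:7860 \
    -v sdnext-app:/app -v sdnext-mnt:/mnt \
    sdnext-ipex
```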