diff --git a/README.md b/README.md
index 630162a..77f8a09 100644
--- a/README.md
+++ b/README.md
@@ -1,9 +1,8 @@
-# Run Ollama using your Intel Arc GPU
+# Run Ollama and Stable Diffusion with your Intel Arc GPU

-A Docker-based setup for running Ollama as a backend and Open WebUI as a frontend, leveraging Intel Arc Series GPUs on Linux systems.
-
-## Overview
-This repository provides a convenient way to run Ollama as a backend and Open WebUI as a frontend, allowing you to interact with Large Language Models (LLM) using an Intel Arc Series GPU on your Linux system.
+Effortlessly deploy a Docker-based solution that uses [Open WebUI](https://github.com/open-webui/open-webui) as your user-friendly
+AI interface, [Ollama](https://github.com/ollama/ollama) for integrating Large Language Models (LLM), and [SD.Next](https://github.com/vladmandic/sdnext) to
+streamline Stable Diffusion capabilities, all while tapping into the power of Intel Arc Series GPUs on Linux systems by using [Intel® Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).

 ![screenshot](resources/open-webui.png)

@@ -18,10 +17,15 @@ This repository provides a convenient way to run Ollama as a backend and Open We
 2. Open WebUI
    * The official distribution of Open WebUI.
    * `WEBUI_AUTH` is turned off for authentication-free usage.
-   * `ENABLE_OPENAI_API` and `ENABLE_OLLAMA_API` flags are set to off and on, respectively, allowing interactions via Ollama only.
+   * `ENABLE_OPENAI_API` and `ENABLE_OLLAMA_API` flags are set to off and on, respectively, allowing interactions via Ollama only.
+   * `ENABLE_IMAGE_GENERATION` is set to true, allowing you to generate images from the UI.
+   * `IMAGE_GENERATION_ENGINE` is set to automatic1111 (SD.Next is compatible).
+3. SD.Next
+   * Uses the official [Intel® Extension for PyTorch](https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=docker) image as its base container.
+   * Uses a customized version of the SD.Next [Dockerfile](https://github.com/vladmandic/sdnext/blob/dev/configs/Dockerfile.ipex), making it compatible with the Intel Extension for PyTorch image.

 ## Setup
-Run the following commands to start your Ollama instance
+Run the following commands to start your AI instance:
 ```bash
 $ git clone https://github.com/eleiton/ollama-intel-arc.git
 $ cd ollama-intel-arc
@@ -45,8 +49,23 @@ When using Open WebUI, you should see this partial output in your console, indic
 ```

 ## Usage
-* Open your web browser to http://localhost:3000 to access the Open WebUI web page.
-* For more information on using Open WebUI, refer to the official documentation at https://docs.openwebui.com/ .
+* Open your web browser to http://localhost:7860 to access the SD.Next web page.
+* For the purposes of this demonstration, we'll use the [DreamShaper](https://civitai.com/models/4384/dreamshaper) model.
+* Follow these steps:
+  * Download the `dreamshaper_8` model by clicking on its image (1).
+  * Wait for it to download (~2 GB in size) and then select it in the dropdown (2).
+  * (Optional) If you want to stay in the SD.Next UI, feel free to explore (3).
+![screenshot](resources/sd.next.png)
+* For more information on using SD.Next, refer to the official [documentation](https://vladmandic.github.io/sdnext-docs/).
+* Open your web browser to http://localhost:3000 to access the Open WebUI web page.
+* Go to the administrator [settings](http://localhost:3000/admin/settings) page.
+* Go to the Image section (1).
+* Make sure all settings look good, and validate them by pressing the refresh button (2).
+* (Optional) Save any changes you made (3).
+![screenshot](resources/open-webui-settings.png)
+* For more information on using Open WebUI, refer to the official [documentation](https://docs.openwebui.com/).
+* That's it! Go back to the Open WebUI main page and start chatting. Make sure to select the `Image` button to indicate you want to generate images.
+![screenshot](resources/open-webui-chat.png)

 ## Updating the images
 Before any updates, be sure to stop your containers
diff --git a/docker-compose.yml b/docker-compose.yml
index 60cf3df..c54aad4 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -24,9 +24,38 @@ services:
       - WEBUI_AUTH=False
       - ENABLE_OPENAI_API=False
       - ENABLE_OLLAMA_API=True
+      - ENABLE_IMAGE_GENERATION=True
+      - IMAGE_GENERATION_ENGINE=automatic1111
+      - IMAGE_GENERATION_MODEL=dreamshaper_8
+      - IMAGE_SIZE=400x400
+      - IMAGE_STEPS=30
+      - AUTOMATIC1111_BASE_URL=http://sdnext-ipex:7860/
+      - AUTOMATIC1111_CFG_SCALE=9
+      - AUTOMATIC1111_SAMPLER=DPM++ SDE
+      - AUTOMATIC1111_SCHEDULER=Karras
     extra_hosts:
       - host.docker.internal:host-gateway
     restart: unless-stopped
+
+  sdnext-ipex:
+    build:
+      context: sdnext
+      dockerfile: Dockerfile
+    image: sdnext-ipex:latest
+    container_name: sdnext-ipex
+    restart: unless-stopped
+    devices:
+      - /dev/dri:/dev/dri
+    ports:
+      - 7860:7860
+    volumes:
+      - sdnext-app-volume:/app
+      - sdnext-mnt-volume:/mnt
+      - sdnext-huggingface-volume:/root/.cache/huggingface
+
 volumes:
   ollama-volume: {}
   open-webui-volume: {}
+  sdnext-app-volume: {}
+  sdnext-mnt-volume: {}
+  sdnext-huggingface-volume: {}
diff --git a/resources/open-webui-chat.png b/resources/open-webui-chat.png
new file mode 100644
index 0000000..709e968
Binary files /dev/null and b/resources/open-webui-chat.png differ
diff --git a/resources/open-webui-settings.png b/resources/open-webui-settings.png
new file mode 100644
index 0000000..4a0a612
Binary files /dev/null and b/resources/open-webui-settings.png differ
diff --git a/resources/open-webui.png b/resources/open-webui.png
index c6687b7..a448cac 100644
Binary files a/resources/open-webui.png and b/resources/open-webui.png differ
diff --git a/resources/sd.next.png b/resources/sd.next.png
new file mode 100644
index 0000000..3f70d26
Binary files /dev/null and b/resources/sd.next.png differ
diff --git a/sdnext/Dockerfile b/sdnext/Dockerfile
new file mode 100644
index 0000000..fa611b6
--- /dev/null
+++ b/sdnext/Dockerfile
@@ -0,0 +1,46 @@
+FROM intel/intel-extension-for-pytorch:2.6.10-xpu
+
+# essentials
+RUN apt-get update && \
+    apt-get install -y --no-install-recommends --fix-missing \
+    software-properties-common \
+    build-essential \
+    ca-certificates \
+    wget \
+    gpg \
+    git
+
+# python3.10
+RUN apt-get install -y --no-install-recommends --fix-missing python3.10-venv
+
+# jemalloc is not required but it is highly recommended (also used with optional ipexrun)
+RUN apt-get install -y --no-install-recommends --fix-missing libjemalloc-dev
+ENV LD_PRELOAD=libjemalloc.so.2
+
+# cleanup
+RUN /usr/sbin/ldconfig
+RUN apt-get clean && rm -rf /var/lib/apt/lists/*
+
+# stop pip and uv from caching
+ENV PIP_NO_CACHE_DIR=true
+ENV UV_NO_CACHE=true
+
+# set paths to use with sdnext
+ENV SD_DOCKER=true
+ENV SD_DATADIR="/mnt/data"
+ENV SD_MODELSDIR="/mnt/models"
+ENV venv_dir="/mnt/python/venv"
+
+# git clone and start sdnext
+RUN echo '#!/bin/bash\ngit status || git clone https://github.com/vladmandic/sdnext.git .\n/app/webui.sh "$@"' | tee /bin/startup.sh
+RUN chmod 755 /bin/startup.sh
+
+# run sdnext
+WORKDIR /app
+ENTRYPOINT [ "startup.sh", "-f", "--use-ipex", "--uv", "--listen", "--debug", "--api-log", "--log", "sdnext.log" ]
+
+# expose port
+EXPOSE 7860
+
+# stop signal
+STOPSIGNAL SIGINT
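The `AUTOMATIC1111_*` variables in docker-compose.yml work because SD.Next exposes an AUTOMATIC1111-compatible REST API, which Open WebUI calls on `/sdapi/v1/txt2img`. As a rough smoke test of that wiring, the sketch below assembles the kind of JSON payload Open WebUI would send, using the values from the compose file; the prompt text is an invented example, and the final `curl` call (left commented out) assumes the compose stack is already up with port 7860 published on localhost.

```shell
# Hypothetical txt2img payload mirroring the compose settings
# (IMAGE_SIZE=400x400, IMAGE_STEPS=30, AUTOMATIC1111_CFG_SCALE=9,
#  AUTOMATIC1111_SAMPLER="DPM++ SDE"); the prompt is made up.
payload='{
  "prompt": "a lighthouse at sunset, detailed, 8k",
  "width": 400,
  "height": 400,
  "steps": 30,
  "cfg_scale": 9,
  "sampler_name": "DPM++ SDE"
}'

# Sanity-check that the payload is valid JSON before sending it.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"

# With the containers running, send the request to SD.Next
# (the response carries base64-encoded images in an "images" array):
# curl -s http://localhost:7860/sdapi/v1/txt2img \
#   -H "Content-Type: application/json" -d "$payload"
```

A request like this going through is a quick way to confirm the `sdnext-ipex` container is reachable before debugging image generation from the Open WebUI side.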