Merge pull request #6 from eleiton/add-stable-diffusion

Add stable diffusion with SD.Next
This commit is contained in:
eleiton
2025-03-15 15:37:06 +01:00
committed by GitHub
7 changed files with 103 additions and 9 deletions


@@ -1,9 +1,8 @@
# Run Ollama using your Intel Arc GPU
# Run Ollama and Stable Diffusion with your Intel Arc GPU
A Docker-based setup for running Ollama as a backend and Open WebUI as a frontend, leveraging Intel Arc Series GPUs on Linux systems.
## Overview
This repository provides a convenient way to run Ollama as a backend and Open WebUI as a frontend, allowing you to interact with Large Language Models (LLM) using an Intel Arc Series GPU on your Linux system.
Effortlessly deploy a Docker-based solution that uses [Open WebUI](https://github.com/open-webui/open-webui) as your user-friendly
AI interface, [Ollama](https://github.com/ollama/ollama) for running Large Language Models (LLMs), and [SD.Next](https://github.com/vladmandic/sdnext) to
provide Stable Diffusion capabilities, all while tapping into the power of Intel Arc Series GPUs on Linux systems through the [Intel® Extension for PyTorch](https://github.com/intel/intel-extension-for-pytorch).
![screenshot](resources/open-webui.png)
@@ -18,10 +17,15 @@ This repository provides a convenient way to run Ollama as a backend and Open We
2. Open WebUI
* The official distribution of Open WebUI.
* `WEBUI_AUTH` is turned off for authentication-free usage.
* `ENABLE_OPENAI_API` and `ENABLE_OLLAMA_API` flags are set to off and on, respectively, allowing interactions via Ollama only.
* `ENABLE_IMAGE_GENERATION` is set to true, allowing you to generate images from the UI.
* `IMAGE_GENERATION_ENGINE` is set to automatic1111 (SD.Next exposes an automatic1111-compatible API); a sketch of how these flags map to environment variables follows this list.
3. SD.Next
* Uses the official [Intel® Extension for PyTorch](https://pytorch-extension.intel.com/installation?platform=gpu&version=v2.6.10%2Bxpu&os=linux%2Fwsl2&package=docker) image as its base container.
* Uses a customized version of the SD.Next [Dockerfile](https://github.com/vladmandic/sdnext/blob/dev/configs/Dockerfile.ipex), making it compatible with the Intel Extension for PyTorch image.
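As a minimal sketch of how the Open WebUI flags above map onto container environment variables: the image tag, container name, port mapping, and the `AUTOMATIC1111_BASE_URL` host below are illustrative assumptions; the repository's docker-compose file is the source of truth.
```bash
# Sketch only: names, ports, and tags here are assumptions,
# not copied from this repository's compose file.
docker run -d --name open-webui -p 3000:8080 \
  -e WEBUI_AUTH=false \
  -e ENABLE_OPENAI_API=false \
  -e ENABLE_OLLAMA_API=true \
  -e ENABLE_IMAGE_GENERATION=true \
  -e IMAGE_GENERATION_ENGINE=automatic1111 \
  -e AUTOMATIC1111_BASE_URL=http://sdnext:7860 \
  ghcr.io/open-webui/open-webui:main
```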
## Setup
Run the following commands to start your Ollama instance
Run the following commands to start your AI instance:
```bash
$ git clone https://github.com/eleiton/ollama-intel-arc.git
$ cd ollama-intel-arc
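# Assumption: the stack is defined in a docker-compose file at the repository
# root, so it is typically brought up with (check the repository for the exact
# command):
$ docker compose up -d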
@@ -45,8 +49,23 @@ When using Open WebUI, you should see this partial output in your console, indic
```
## Usage
* Open your web browser to http://localhost:7860 to access the SD.Next web page.
* For the purposes of this demonstration, we'll use the [DreamShaper](https://civitai.com/models/4384/dreamshaper) model.
* Follow these steps:
* Download the `dreamshaper_8` model by clicking on its image (1).
* Wait for it to download (~2 GB in size) and then select it in the model dropdown (2).
* (Optional) If you want to stay in the SD.Next UI, feel free to explore it (3).
![screenshot](resources/sd.next.png)
* For more information on using SD.Next, refer to the official [documentation](https://vladmandic.github.io/sdnext-docs/).
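Because SD.Next exposes an automatic1111-compatible API, you can also sanity-check it from the command line. A minimal sketch, assuming the API is reachable on port 7860:
```bash
# List the models SD.Next currently serves; once the download completes,
# dreamshaper_8 should appear in this output.
curl -s http://localhost:7860/sdapi/v1/sd-models
```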
* Open your web browser to http://localhost:3000 to access the Open WebUI web page.
* Go to the administrator [settings](http://localhost:3000/admin/settings) page.
* Go to the Image section (1).
* Make sure all settings look correct, and validate them by pressing the refresh button (2).
* (Optional) Save any changes you made (3).
![screenshot](resources/open-webui-settings.png)
* For more information on using Open WebUI, refer to the official [documentation](https://docs.openwebui.com/)
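The refresh button essentially verifies that Open WebUI can reach the image-generation backend. A rough shell equivalent, assuming the automatic1111-compatible API path:
```bash
# Succeeds only if the automatic1111-compatible endpoint responds.
curl -sf http://localhost:7860/sdapi/v1/options >/dev/null && echo "SD.Next API reachable"
```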
* That's it! Go back to the Open WebUI main page and start chatting. Make sure to select the `Image` button to indicate you want to generate images.
![screenshot](resources/open-webui-chat.png)
## Updating the containers
If there are new updates to the [ipex-llm-inference-cpp-xpu](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu) Docker image or to the Open WebUI Docker image, you may want to update your containers to stay current.
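Assuming the stack is managed with Docker Compose (as above), the usual pattern is to pull the newer images and recreate the affected containers:
```bash
# Pull newer versions of the images referenced by the compose file,
# then recreate any containers whose image changed.
docker compose pull
docker compose up -d
```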