Merge pull request #5 from eleiton/remove-custom-image

Remove the redundant Dockerfile
eleiton
2025-03-12 19:54:03 +01:00
committed by GitHub
4 changed files with 20 additions and 45 deletions

@@ -1,8 +0,0 @@
FROM intelanalytics/ipex-llm-inference-cpp-xpu:latest
ENV DEBIAN_FRONTEND=noninteractive
ENV OLLAMA_HOST=0.0.0.0:11434
COPY ./scripts/serve.sh /usr/share/lib/serve.sh
ENTRYPOINT ["/bin/bash", "/usr/share/lib/serve.sh"]
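For context, this Dockerfile was only consumed by the compose file's `build: .` directive, which this commit also removes. A hypothetical manual build of the old image (tag taken from the removed compose line) would have looked like this:
```bash
# Hypothetical: build the old custom image the same way `build: .` did
$ podman build -t ollama-intel-arc:latest .
```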

@@ -48,42 +48,25 @@ When using Open WebUI, you should see this partial output in your console, indic
* Open your web browser to http://localhost:3000 to access the Open WebUI web page.
* For more information on using Open WebUI, refer to the official documentation at https://docs.openwebui.com/ .
## Updating the images
## Updating the containers
If there are new updates to the [ipex-llm-inference-cpp-xpu](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu) Docker image or to the Open WebUI Docker image, you may want to update your containers to stay up to date.
Before updating, be sure to stop your containers:
```bash
$ podman compose down
```
### ollama-intel-arc Image
If there are new updates to the [ipex-llm Docker image](https://hub.docker.com/r/intelanalytics/ipex-llm-inference-cpp-xpu), you may want to update the Ollama image and containers to stay up to date.
First, check for any containers running the Docker image and remove them:
```bash
$ podman ps -a
CONTAINER ID IMAGE
111479fde20f localhost/ollama-intel-arc:latest
$ podman rm <CONTAINER ID>
```
Then go ahead and remove the Docker image:
```bash
$ podman image list
REPOSITORY TAG
localhost/ollama-intel-arc latest
$ podman rmi <IMAGE ID>
```
After that, you can run compose up to rebuild the image from scratch:
```bash
$ podman compose up
```
### open-webui Image
If there are new updates to Open WebUI, just do a pull and the new changes will be retrieved automatically.
Then run a pull command to retrieve the `latest` images:
```bash
$ podman compose pull
```
After that, you can run compose up to start your services again:
```bash
$ podman compose up
```
## Manually connecting to your Ollama container
You can connect directly to your Ollama container by running these commands:
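The README's actual commands sit outside this diff hunk, so they are not shown here; a hedged sketch of one way to open a shell in the container, assuming the `ollama-intel-arc` container name from the compose file, is:
```bash
# Sketch only: open an interactive shell inside the running Ollama container
$ podman exec -it ollama-intel-arc /bin/bash
```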

@@ -1,8 +1,7 @@
version: '3'
services:
  ollama-intel-arc:
    build: .
    image: ollama-intel-arc:latest
    image: intelanalytics/ipex-llm-inference-cpp-xpu:latest
    container_name: ollama-intel-arc
    restart: unless-stopped
    devices:
@@ -11,6 +10,15 @@ services:
      - ollama-volume:/root/.ollama
    ports:
      - 11434:11434
    environment:
      - no_proxy=localhost,127.0.0.1
      - OLLAMA_HOST=0.0.0.0
      - DEVICE=Arc
      - OLLAMA_INTEL_GPU=true
      - OLLAMA_NUM_GPU=999
      - ZES_ENABLE_SYSMAN=1
    command: sh -c 'mkdir -p /llm/ollama && cd /llm/ollama && init-ollama && exec ./ollama serve'
  open-webui:
    image: ghcr.io/open-webui/open-webui:latest
    container_name: open-webui
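With the prebuilt image wired in above, a minimal sketch for verifying the change is to bring the stack up detached and check that Ollama answers on the published port; `curl` and the `/api/version` endpoint are assumptions here, not part of this change:
```bash
# Sketch: start the services in the background and ping Ollama on port 11434
$ podman compose up -d
$ curl http://localhost:11434/api/version
```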

@@ -1,8 +0,0 @@
#!/bin/sh
cd /llm/scripts/
source ipex-llm-init --gpu --device Arc
bash start-ollama.sh
tail -f /dev/null
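The startup steps from this removed script are now inlined in the compose `command:` shown above. A hedged way to confirm the inlined startup worked is to call the ollama binary that `init-ollama` sets up under `/llm/ollama` (path assumed from the compose command):
```bash
# Sketch: list models through the binary set up by init-ollama inside the container
$ podman exec ollama-intel-arc /llm/ollama/ollama list
```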