Use the start-ollama.sh script provided in the official docker image.

This commit is contained in:
eleiton
2025-03-12 19:08:33 +01:00
parent 316eb23905
commit c5fda8f9ac
2 changed files with 4 additions and 5 deletions


@@ -99,4 +99,4 @@ $ /llm/ollama/ollama -v
 ## References
 * [Open WebUI documentation](https://docs.openwebui.com/)
-* [Intel - ipex-llm](https://github.com/intel/ipex-llm/blob/main/docs/mddocs/DockerGuides/docker_cpp_xpu_quickstart.md)
+* [Intel ipex-llm releases](https://github.com/intel/ipex-llm/releases)


@@ -3,7 +3,6 @@
 cd /llm/scripts/
 source ipex-llm-init --gpu --device Arc
-mkdir -p /llm/ollama
-cd /llm/ollama
-init-ollama
-./ollama serve
+bash start-ollama.sh
+tail -f /dev/null
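After this commit, the changed portion of the container startup script would read roughly as follows. This is a sketch reconstructed from the hunk above, assuming the two context lines shown; the shebang and any surrounding lines outside the hunk are not part of the diff and are assumptions here.

```shell
#!/bin/bash
# (shebang assumed; not shown in the diff hunk)
cd /llm/scripts/
# Set up the Intel GPU environment for ipex-llm (Arc discrete GPU)
source ipex-llm-init --gpu --device Arc
# Start Ollama via the helper script shipped in the official docker image,
# replacing the previous manual init-ollama / ./ollama serve sequence
bash start-ollama.sh
# Keep a foreground process alive so the container does not exit
tail -f /dev/null
```

Using the image's own start-ollama.sh avoids duplicating its setup steps (creating /llm/ollama and symlinking the binary), which is what the removed lines did by hand.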