This repo illustrates running Ollama on Intel ARC GPUs via ipex-llm, with Ollama Portable ZIP support. Run the recently released [deepseek-r1](https://github.com/deepseek-ai/DeepSeek-R1) model on your local Intel ARC GPU based PC under Linux.
## Important Note

All defects in ipex-llm's Ollama support should be reported directly to the ipex-llm project at https://github.com/intel/ipex-llm
## Screenshot


```
$ cd ollama-intel-gpu
$ docker compose up
```

*Note:* If you have multiple GPUs installed (such as integrated and discrete), set the ONEAPI_DEVICE_SELECTOR environment variable in the docker compose file to select the intended device.
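As a minimal sketch of what that looks like, the selector takes the form `backend:index`; the `level_zero:0` value below is only an example, and the actual device list should be checked with `sycl-ls` on your system:

```shell
# Example only: pin oneAPI to a single device. The value is an assumption;
# run `sycl-ls` to see which backends and indices exist on your machine.
# Syntax is backend:index, e.g. the first Level Zero GPU:
export ONEAPI_DEVICE_SELECTOR=level_zero:0

# In docker-compose.yml the same setting goes under the service's
# `environment:` key as ONEAPI_DEVICE_SELECTOR=level_zero:0.
echo "$ONEAPI_DEVICE_SELECTOR"
```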
Then open your web browser to http://localhost:3000 to access the web UI. Create a local Open WebUI credential, then click the settings icon in the top right of the screen, select 'Models', click 'Show', and download a model such as 'llama3.1:8b-instruct-q8_0', which fits in the 16GB VRAM of an Intel ARC A770.
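The model choice above can be sanity-checked with back-of-the-envelope arithmetic: at q8_0 quantization each parameter takes roughly one byte, so an 8B-parameter model needs on the order of 8GB of VRAM plus runtime overhead, comfortably inside the A770's 16GB. A rough sketch (the 1.2x overhead factor for context and activations is an assumption, not a measured value):

```python
def fits_in_vram(n_params_billions: float, bytes_per_param: float,
                 vram_gb: float, overhead: float = 1.2) -> bool:
    """Rough check: quantized weight size times an overhead factor vs VRAM.

    Params in billions and 1 byte/param give GB directly (decimal units).
    """
    estimated_gb = n_params_billions * bytes_per_param * overhead
    return estimated_gb <= vram_gb

# llama3.1 8B at q8_0 (~1 byte/param) on a 16 GB ARC A770:
print(fits_in_vram(8, 1.0, 16))   # fits under these assumptions
# A 70B model at the same quantization would not:
print(fits_in_vram(70, 1.0, 16))
```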