Running the DeepSeek R1 32B Model on Ollama with Open WebUI
Introduction
In this guide, we will walk through installing Ollama, setting up the DeepSeek R1 32B model, running it in the console, and then using Open WebUI to interact with the model through a ChatGPT-like interface on Windows 11.

Step 1: Installing Ollama on Windows 11
Ollama is an open-source framework that allows you to run large language models (LLMs) locally on your system. Follow these steps to install Ollama on Windows 11:
- Download Ollama: Visit Ollama’s official website and download the Windows installer.
- Install Ollama: Run the .exe installer and follow the on-screen instructions.
- Verify Installation: Open Command Prompt (press Win + R, type cmd, and hit Enter), then run:
ollama --version
If installed correctly, this should return the current version of Ollama.
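If you prefer to check the service from code, the short Python sketch below queries Ollama's local HTTP API. This is a minimal sketch that assumes a default installation, where Ollama listens on port 11434; adjust the URL if you changed the host or port.

import json
import urllib.request

# Ask the local Ollama server for its version (default port 11434).
# A connection error here means the Ollama service is not running.
with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
    print("Ollama version:", json.load(resp)["version"])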
Step 2: Download and Run DeepSeek R1 32B Model on Windows 11
DeepSeek R1 32B is a powerful AI model that can be run locally using Ollama. To set it up, follow these steps:
- Open Command Prompt as Administrator: Press Win + R, type cmd, then press Ctrl + Shift + Enter.
- Pull the DeepSeek Model: Run the following command to download the DeepSeek R1 32B model:
ollama pull deepseek-r1:32b
This may take some time depending on your internet speed.
- Run DeepSeek in Console: Once the model is downloaded, start interacting with it by running:
ollama run deepseek-r1:32b
This opens an interactive session where you can enter prompts and get responses from the model.
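Beyond the interactive console, the same model can be called from a script through Ollama's local REST API. The following is a minimal Python sketch, assuming the model was pulled under the deepseek-r1:32b tag and Ollama is listening on its default port 11434:

import json
import urllib.request

# Send a single, non-streaming prompt to the locally running model.
payload = {
    "model": "deepseek-r1:32b",   # tag used in the pull/run commands above
    "prompt": "Explain what a context window is in one paragraph.",
    "stream": False,              # return one complete JSON response
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])

Because the 32B model is large, the first response can take a while as the weights are loaded into memory.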
Step 3: Running DeepSeek R1 32B on Open WebUI (Windows 11)
To create a UI similar to ChatGPT, we will use Open WebUI, which is freely available and integrates with Ollama to provide a web-based chat interface.
- Install Docker on Windows 11:
- Download and install Docker Desktop.
- Ensure Docker is running by searching for “Docker Desktop” and opening it.
- Pull the Open WebUI Docker Image: Open Command Prompt and run:
docker pull ghcr.io/open-webui/open-webui:main
- Run Open WebUI with Docker:
docker run -d --name open-webui -p 3000:8080 -e OLLAMA_BASE_URL=http://host.docker.internal:11434 ghcr.io/open-webui/open-webui:main
This starts Open WebUI, maps the container's internal port 8080 to port 3000 on your machine, and points it at the local Ollama instance. (Optionally, add -v open-webui:/app/backend/data to keep your chats and settings when the container is recreated.)
- Access Open WebUI in Browser: Open your web browser and go to:
http://localhost:3000
You should now see the Open WebUI interface. After creating a local account on first launch, select deepseek-r1:32b from the model picker and you can interact with DeepSeek R1 32B just like ChatGPT!
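If you also want to use the model from your own scripts while the web interface is running, recent versions of Ollama additionally expose an OpenAI-compatible endpoint alongside the native API used earlier. The sketch below is a minimal example and again assumes the deepseek-r1:32b tag and the default Ollama port 11434:

import json
import urllib.request

# Chat-style request against Ollama's OpenAI-compatible endpoint.
payload = {
    "model": "deepseek-r1:32b",
    "messages": [{"role": "user", "content": "Give me three tips for writing good prompts."}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])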
Conclusion
By following this guide, you have set up and run the DeepSeek R1 32B model locally using Ollama on Windows 11, and integrated Open WebUI to create a ChatGPT-like experience for interacting with it. This setup allows for powerful, offline AI interactions with a user-friendly web interface.
