If you’re looking to run DeepSeek AI locally on your computer, you’re in the right place! DeepSeek is one of the most powerful open-source AI models in 2025, offering advanced text generation, coding assistance, and research capabilities.
This step-by-step guide will walk you through installing DeepSeek on your Windows, macOS, or Linux machine. We’ll cover:
- Step 1: Check System Requirements
- Step 2: Install Python & Required Tools
- Step 3: Install DeepSeek AI Model
- Step 4: Run DeepSeek with a Web Interface (Optional)
- Step 5: Testing DeepSeek
- Troubleshooting Tips
Let’s get started!
Step 1: Check System Requirements
Before installing DeepSeek, ensure your PC meets these minimum requirements:
| Component | Minimum Requirement | Recommended for Better Performance |
|---|---|---|
| OS | Windows 10/11, macOS 12+, Linux (Ubuntu 20.04+) | Latest OS version |
| RAM | 16GB | 32GB or more |
| GPU | NVIDIA GTX 1080 (8GB VRAM) | RTX 3090/4090 (24GB+ VRAM) |
| Storage | 50GB free space (for models) | SSD recommended |
| Python | Python 3.10 or later | Python 3.11+ |
💡 Note:
- If you don’t have a powerful GPU, you can still run smaller versions of DeepSeek on CPU (but it will be slower).
- For Mac users with Apple Silicon (M1/M2/M3), performance is good but may require extra optimizations.
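Not sure which device you'll end up on? Once you've installed PyTorch in Step 3, a quick sanity check like this (a minimal sketch, nothing DeepSeek-specific) shows whether CUDA, Apple's MPS backend, or the CPU will be used:

```python
import torch

# Pick the best available backend: NVIDIA GPU, Apple Silicon, or CPU.
if torch.cuda.is_available():
    device = "cuda"
    print(f"CUDA GPU: {torch.cuda.get_device_name(0)}")
    print(f"VRAM: {torch.cuda.get_device_properties(0).total_memory / 1e9:.1f} GB")
elif torch.backends.mps.is_available():
    device = "mps"
    print("Apple Silicon (MPS) backend available")
else:
    device = "cpu"
    print("No GPU detected -- expect slow generation on CPU")
```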
Step 2: Install Python & Required Tools
DeepSeek runs on Python, so we need to set it up first.
📥 For Windows
- Download Python 3.11+ from python.org.
- Check “Add Python to PATH” during installation.
- Open Command Prompt (Admin) and verify:
```
python --version
pip --version
```
📥 For macOS/Linux
- Open Terminal and install Python:
```
# macOS (if not installed)
brew install python

# Linux (Debian/Ubuntu)
sudo apt update && sudo apt install python3 python3-pip
```
- Verify installation:
```
python3 --version
pip3 --version
```
Step 3: Install DeepSeek AI Model
Option 1: Using Hugging Face (Recommended)
DeepSeek models are available on Hugging Face. Install the `transformers` library and download the model:
- Install required libraries:
```
pip install torch transformers accelerate
```
(Use `pip3` on macOS/Linux if needed.)
- Run Python and load DeepSeek:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# On Hugging Face, the 7B chat model lives at "deepseek-ai/deepseek-llm-7b-chat"
# (use "deepseek-ai/deepseek-llm-7b-base" for the base model, or a
# deepseek-coder variant for coding tasks).
model_name = "deepseek-ai/deepseek-llm-7b-chat"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" spreads the weights across your GPU(s) and falls back to CPU.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
print("DeepSeek is ready!")
```
Option 2: Using Ollama (Simpler for Beginners)
If you prefer a one-click install, use Ollama:
- Download Ollama from ollama.ai.
- Open Terminal and run (the model is published in the Ollama library as `deepseek-llm`, without a namespace prefix):
```
ollama pull deepseek-llm
ollama run deepseek-llm
```
Now you can chat with DeepSeek directly in your terminal!
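Ollama also serves a local REST API (on port 11434 by default), so you can call the model from your own scripts. A minimal sketch using the `requests` library:

```python
import requests

# Ollama's generate endpoint; "stream": False returns a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-llm",
        "prompt": "Explain quantum computing in simple terms.",
        "stream": False,
    },
)
print(resp.json()["response"])
```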
Step 4: Run DeepSeek with a Web Interface (Optional)
If you want a ChatGPT-like UI, use Text Generation WebUI:
- Install it:
```
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
```
- Download the DeepSeek model (e.g., `deepseek-ai/deepseek-llm-7b-chat`).
- Launch the WebUI:
```
python server.py --model deepseek-ai_deepseek-llm-7b-chat --auto-devices
```
Open http://localhost:7860 in your browser to use DeepSeek with a GUI!
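Recent versions of Text Generation WebUI can also expose an OpenAI-compatible API if you launch the server with the extra `--api` flag (it listens on port 5000 by default; the flag and port may differ by version). A sketch under those assumptions:

```python
import requests

# Assumes the WebUI was started with --api (OpenAI-compatible endpoint).
resp = requests.post(
    "http://localhost:5000/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "Write a haiku about local LLMs."}],
        "max_tokens": 100,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```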
Step 5: Testing DeepSeek
Let’s ask DeepSeek a question (continuing from the Hugging Face session in Step 3):
```python
input_text = "Explain quantum computing in simple terms."

# Send the inputs to whatever device the model landed on (GPU, MPS, or CPU),
# rather than hard-coding "cuda".
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
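Generation is fairly deterministic by default. For longer, more varied answers, you can pass standard sampling parameters to `generate()`:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=200,   # allow a longer answer
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # lower = more focused, higher = more creative
    top_p=0.9,            # nucleus sampling
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```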
🎉 You’re done! DeepSeek is now running locally on your PC.
Troubleshooting Tips
- Out of Memory? Try a smaller model (e.g., the 1.3B `deepseek-ai/deepseek-coder-1.3b-instruct`), or load the 7B model in 4-bit as shown in Step 3.
- Slow on CPU? Use quantized GGUF models (the successor to the older GGML format) for better performance — see the sketch below.
- Mac M1/M2 Issues? Install `mlx` for optimized Apple Silicon support.
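For the GGUF route, here's a minimal sketch using `llama-cpp-python` (`pip install llama-cpp-python`). The file path is hypothetical — download a community GGUF conversion of a DeepSeek model from Hugging Face first:

```python
from llama_cpp import Llama

# Hypothetical local path -- point this at the GGUF file you downloaded.
llm = Llama(model_path="./models/deepseek-llm-7b-chat.Q4_K_M.gguf", n_ctx=2048)

output = llm("Explain quantum computing in simple terms.", max_tokens=100)
print(output["choices"][0]["text"])
```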
What’s Next
Now that DeepSeek is installed, you can:
✔ Use it as a coding assistant (try a `deepseek-coder` model).
✔ Integrate it into your apps via a local API (a sketch follows below).
✔ Fine-tune it for custom tasks.
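As one example of the API route, here's a minimal sketch that wraps the Hugging Face model from Step 3 in a tiny Flask endpoint (assumes `pip install flask`; the endpoint name and payload shape are my own, not a standard):

```python
from flask import Flask, request, jsonify
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/deepseek-llm-7b-chat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

app = Flask(__name__)

@app.post("/generate")
def generate():
    # Expects JSON like {"prompt": "..."}.
    prompt = request.json["prompt"]
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200)
    return jsonify({"text": tokenizer.decode(outputs[0], skip_special_tokens=True)})

if __name__ == "__main__":
    app.run(port=8000)
```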
Final Thoughts
Running DeepSeek locally gives you full privacy, offline access, and customization. Whether you’re a developer, researcher, or AI enthusiast, this setup unlocks powerful AI capabilities on your own machine.
Need help? Drop a comment below! 🚀