From zero to building real AI applications with Large Language Models.
This is a free, open-source course on Large Language Models (LLMs) — designed for everyone from complete beginners to developers who want to build production-grade AI apps.
No fluff. Just clean explanations, working code, and real examples.
```
📦 llm-course
┣ 📂 01-introduction → What are LLMs? How do they work?
┣ 📂 02-transformer-architecture → How transformers work under the hood
┣ 📂 03-prompt-engineering → Craft prompts like a pro
┣ 📂 04-llm-apis → Use OpenAI, Claude, Gemini APIs
┣ 📂 05-rag → Build retrieval-augmented generation apps
┗ 📂 06-fine-tuning → Fine-tune models on your own data
```
| 👤 Profile | ✅ You'll Learn |
|---|---|
| Beginners | What LLMs are, how to prompt them, use APIs |
| Developers | Build RAG pipelines, integrate LLM APIs into apps |
| ML Engineers | Fine-tune models, evaluate outputs, go to production |
Perfect for beginners. No code required.
- What is an LLM?
- How does training work? (Tokens, transformers, attention)
- Famous models: GPT-4, Claude, Gemini, Mistral, LLaMA
- Key concepts: Temperature, context window, hallucinations
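Temperature is easier to grasp with numbers: it rescales the model's raw scores (logits) before the softmax that turns them into next-token probabilities. A minimal, library-free sketch (the logit values are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw model scores into next-token probabilities.
    Lower temperature -> sharper, more deterministic distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up logits for three candidate next tokens
logits = [2.0, 1.0, 0.1]

cold = softmax_with_temperature(logits, temperature=0.2)  # near-greedy
hot = softmax_with_temperature(logits, temperature=2.0)   # more random
```

Low temperature concentrates almost all probability on the top token; high temperature flattens the distribution, which is why outputs get more "creative" (and more error-prone).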
Understand what's actually happening inside LLMs.
- Tokenization & embeddings
- Queries, Keys, Values — how attention works
- Multi-head attention with math + diagrams
- Feed-forward networks, residuals, layer norm
- How it all stacks into a full model
- Build a mini GPT from scratch (~150 lines of PyTorch)
📁 02-transformer-architecture/
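The Q/K/V attention step described above fits in a few lines. This sketch uses NumPy instead of PyTorch so the math stays visible; the shapes and random values are toy examples:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each query matches each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V, weights      # output = weighted mix of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))  # 3 positions, d_k = 4
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))

out, weights = scaled_dot_product_attention(Q, K, V)
```

Each row of `weights` sums to 1: every output position is a convex combination of the value vectors. Multi-head attention simply runs several of these in parallel on different learned projections.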
The skill that can 10x your LLM output quality.
- Zero-shot, few-shot, chain-of-thought prompting
- Role prompting & system prompts
- Prompt templates & reusability
- Common mistakes and how to fix them
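A reusable few-shot template can be as simple as a function that assembles a message list. A sketch using the common chat-message format (the classification examples are invented):

```python
def build_few_shot_messages(system, examples, user_input):
    """Assemble a chat-style message list: system prompt,
    few-shot example pairs, then the real query."""
    messages = [{"role": "system", "content": system}]
    for question, answer in examples:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_input})
    return messages

messages = build_few_shot_messages(
    system="You classify movie reviews as positive or negative.",
    examples=[
        ("Loved every minute of it!", "positive"),
        ("A dull, lifeless slog.", "negative"),
    ],
    user_input="Surprisingly moving and beautifully shot.",
)
```

Keeping the template in one function makes prompts versionable and testable, instead of being scattered string literals.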
Connect your apps to powerful AI models.
- OpenAI API (GPT-4o, GPT-3.5)
- Anthropic Claude API
- Google Gemini API
- Open-source models via Ollama & HuggingFace
- Building a simple chatbot
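Calling a hosted model usually comes down to one request. A sketch using the official `openai` Python SDK (requires `pip install openai` and an `OPENAI_API_KEY` in your environment; the model name is an example):

```python
def build_request(user_prompt, model="gpt-4o-mini"):
    """Build the parameters for a chat completion request."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": 0.7,
    }

def ask(user_prompt):
    # Imported here so the sketch loads even without the SDK installed.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(**build_request(user_prompt))
    return response.choices[0].message.content

params = build_request("Explain tokenization in one sentence.")
```

The Claude and Gemini SDKs follow the same shape: build a message list, pick a model, read the first completion back.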
Give your LLM access to your own data.
- Why RAG? The problem with LLM memory
- Embeddings & vector databases (FAISS, Pinecone, ChromaDB)
- Build a RAG pipeline from scratch
- Chat with your PDF
📁 05-rag/
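The retrieval loop behind RAG fits in a screenful. This sketch fakes embeddings with word-count vectors so it runs without any model; a real pipeline would swap in an embedding API and a vector store such as FAISS or ChromaDB:

```python
import math
from collections import Counter

documents = [
    "The context window is the amount of text an LLM can attend to at once.",
    "RAG retrieves relevant documents and adds them to the prompt.",
    "Fine-tuning updates model weights using your own dataset.",
]

def embed(text):
    """Stand-in embedding: bag-of-words counts.
    Replace with a real embedding model in practice."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("What does RAG do?", documents)
```

The pattern is always the same three steps: embed the query, fetch the nearest documents, and stuff them into the prompt before asking the model.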
Train a model on your own data.
- When to fine-tune vs. prompt engineer vs. RAG
- Preparing your dataset (JSONL format)
- Fine-tuning with OpenAI & HuggingFace
- Evaluating your fine-tuned model
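Preparing a dataset mostly means writing one JSON object per line. A sketch of the chat-style JSONL format that OpenAI fine-tuning expects (the example rows are invented):

```python
import json

examples = [
    {"question": "What is a token?",
     "answer": "A small chunk of text the model processes."},
    {"question": "What is temperature?",
     "answer": "A knob that controls output randomness."},
]

def to_chat_record(ex):
    """One training example = one JSON object with a 'messages' list."""
    return {
        "messages": [
            {"role": "system", "content": "You answer LLM questions concisely."},
            {"role": "user", "content": ex["question"]},
            {"role": "assistant", "content": ex["answer"]},
        ]
    }

# JSONL: one serialized record per line, no enclosing array
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_chat_record(ex)) + "\n")
```

HuggingFace trainers accept similar records; only the exact field names differ, so keep the conversion in one small function like `to_chat_record`.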
```bash
# Clone the repo
git clone https://github.com/YOUR_USERNAME/llm-course.git
cd llm-course

# Install Python dependencies
pip install -r requirements.txt

# Set your API keys
cp .env.example .env
# → Edit .env and add your keys
```

- Python 3.9+
- Basic Python knowledge (for code modules)
- API keys: OpenAI | Anthropic (free tiers available)
Contributions are welcome! Found a bug, have a better example, or want to add a new module?
- Fork the repo
- Create a branch: `git checkout -b feature/your-idea`
- Commit and push
- Open a Pull Request
See CONTRIBUTING.md for details.
MIT License — free to use, share, and build on.
If this helped you, give it a star! It helps others find it.
Made with ❤️ to make AI education accessible to everyone.