
Artificial Intelligence Search Methods


A collection of fundamental AI search algorithm implementations from an Artificial Intelligence Applications class. All implementations are written in Python using Jupyter Notebooks, covering the four major families of classical AI search: blind search, informed search, local search, and adversarial minimax search.

Repository Structure

ai-search-methods/
├── BlindSearch/        # BFS and DFS implementations
├── InformedSearch/     # Uniform Cost Search, Greedy Best First, and A* implementations
├── LocalSearch/        # Hill Climbing and Simulated Annealing implementations
├── Minimax/            # Standard Minimax and Alpha-Beta Pruning implementations
├── .gitignore
└── README.md           # This file

Search Methods Overview

1. Blind Search

Located in BlindSearch/

Uninformed search strategies that explore the state space without any problem-specific knowledge about the goal.

| Algorithm | Description |
| --- | --- |
| BFS — Breadth-First Search | Explores all nodes at the current depth before moving deeper. Guarantees the shortest path in unweighted graphs. |
| DFS — Depth-First Search | Explores as far as possible along each branch before backtracking. Memory-efficient but does not guarantee the shortest path. |
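The two strategies differ only in how the frontier is managed: a FIFO queue for BFS versus recursion (a LIFO stack) for DFS. A minimal sketch, independent of the notebook implementations, using a hypothetical adjacency-list graph:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search: returns the shortest path (fewest edges), or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()        # FIFO: oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

def dfs(graph, start, goal, path=None, visited=None):
    """Depth-first search: returns *a* path, not necessarily the shortest."""
    if path is None:
        path, visited = [start], {start}
    if start == goal:
        return path
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            visited.add(neighbor)
            result = dfs(graph, neighbor, goal, path + [neighbor], visited)
            if result:
                return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}  # made-up example
shortest = bfs(graph, 'A', 'D')
some_path = dfs(graph, 'A', 'D')
```

Note that both functions share the same visited-set bookkeeping; only the expansion order changes.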

2. Informed Search

Located in InformedSearch/

Heuristic-guided search strategies that use problem-specific knowledge to find solutions more efficiently.

Code adapted from this GitHub Gist.

| Algorithm | Description |
| --- | --- |
| UCS — Uniform Cost Search | Expands the lowest-cost node first: f(n) = g(n). Optimal for weighted graphs with non-negative edge costs. |
| GBF — Greedy Best-First Search | Uses only the heuristic, f(n) = h(n), to always expand the node that appears closest to the goal. Fast but not guaranteed to be optimal. |
| A* — A-Star Search | Combines path cost and heuristic estimate: f(n) = g(n) + h(n). Optimal and complete when using an admissible heuristic. |
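All three algorithms fit one priority-queue template and differ only in the priority f(n). A minimal sketch of A* (not the adapted Gist code; the graph and heuristic values below are made-up examples):

```python
import heapq

def a_star(graph, h, start, goal):
    """A* search on a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h:     heuristic function estimating cost-to-goal (must be admissible)
    """
    frontier = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if g > best_g.get(node, float('inf')):
            continue                             # stale entry, skip
        if node == goal:
            return path, g
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(frontier,
                               (new_g + h(neighbor), new_g, neighbor, path + [neighbor]))
    return None, float('inf')

graph = {'A': [('B', 1), ('C', 4)],
         'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)],
         'D': []}
h = {'A': 2, 'B': 2, 'C': 1, 'D': 0}.get   # admissible: never overestimates
path, cost = a_star(graph, h, 'A', 'D')
```

Replacing `h` with `lambda n: 0` reduces this to UCS, and ordering the queue by h(n) alone gives Greedy Best-First.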

3. Local Search

Located in LocalSearch/

Optimization-based search strategies that operate on a single current state and move to neighboring states, without tracking full search paths.

Code adapted from this GitHub Repo.

| Algorithm | Description |
| --- | --- |
| HC — Hill Climbing | Iteratively moves to the best neighboring state. Efficient but can get stuck in local optima. |
| SA — Simulated Annealing | Probabilistically accepts worse states early on (based on a "temperature" schedule) to escape local optima. Modeled after the metallurgical annealing process. |
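The acceptance rule is the core of simulated annealing: a candidate that worsens the cost by delta is still accepted with probability exp(-delta / T), so escapes from local optima become rarer as the temperature T decays. A minimal sketch on a toy 1-D objective (not the adapted repository code; the objective, step function, and cooling parameters are all illustrative):

```python
import math
import random

def simulated_annealing(cost, neighbor, start, t0=10.0, cooling=0.995, t_min=1e-3):
    """Minimise `cost` from `start`, accepting uphill moves with
    probability exp(-delta / T) while temperature T decays geometrically."""
    current, current_cost = start, cost(start)
    best, best_cost = current, current_cost
    t = t0
    while t > t_min:
        candidate = neighbor(current)
        delta = cost(candidate) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / t):
            current, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        t *= cooling                      # cooling schedule
    return best, best_cost

# Toy example: minimise f(x) = (x - 3)^2 with random +/-1 steps.
random.seed(0)
f = lambda x: (x - 3) ** 2
step = lambda x: x + random.uniform(-1, 1)
x, fx = simulated_annealing(f, step, start=-10.0)
```

Hill climbing is the degenerate case of the same loop with the acceptance rule replaced by "accept only if delta < 0".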

4. Minimax

Located in Minimax/

Adversarial search strategies for two-player, zero-sum games. Assumes both players play optimally.

Code adapted from this web page.

| Algorithm | Description |
| --- | --- |
| Minimax | Recursively evaluates all possible game states, maximizing the AI's score and minimizing the opponent's score. |
| Alpha-Beta Pruning | An optimization of Minimax that prunes branches that cannot influence the final decision, significantly reducing the number of nodes evaluated. |
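A minimal sketch of minimax with alpha-beta pruning over a generic game tree (not the adapted web-page code; the nested-list tree below is a made-up example in which internal nodes are lists and leaves are static scores):

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax with alpha-beta pruning.

    children(state) -> list of successor states (empty at terminal nodes)
    evaluate(state) -> static score from the maximizer's point of view
    """
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:          # beta cut-off: MIN never allows this branch
                break
        return value
    else:
        value = math.inf
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:          # alpha cut-off
                break
        return value

tree = [[3, 5], [6, 9]]                # MAX at the root, MIN one level down
children = lambda s: s if isinstance(s, list) else []
evaluate = lambda s: s                 # leaves are the static scores
value = alphabeta(tree, 10, -math.inf, math.inf, True, children, evaluate)
```

Dropping the two cut-off checks turns this into plain minimax; the returned value is identical, only the number of nodes visited changes.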

Algorithm Comparison

| Algorithm | Complete? | Optimal? | Time Complexity | Space Complexity | Notes |
| --- | --- | --- | --- | --- | --- |
| BFS | ✅ Yes | ✅ Yes (unweighted) | O(b^d) | O(b^d) | Best for shallow goals |
| DFS | ✅ Yes* | ❌ No | O(b^m) | O(bm) | *In finite spaces only |
| UCS | ✅ Yes | ✅ Yes | O(b^(1+⌊C*/ε⌋)) | O(b^(1+⌊C*/ε⌋)) | Optimal with non-negative costs |
| GBF | ❌ No | ❌ No | O(b^m) | O(b^m) | Fast but not optimal |
| A* | ✅ Yes | ✅ Yes | O(b^d) | O(b^d) | Best of both UCS & GBF |
| Hill Climbing | ❌ No | ❌ No | O(∞) | O(1) | Prone to local optima |
| Simulated Annealing | ✅ Yes* | ❌ No | O(∞) | O(1) | *With a sufficiently slow cooling schedule |
| Minimax | ✅ Yes | ✅ Yes | O(b^m) | O(bm) | Full game tree |
| Alpha-Beta | ✅ Yes | ✅ Yes | O(b^(m/2)) | O(bm) | ~Doubles effective search depth (best case) |

b = branching factor, d = depth of the shallowest goal, m = maximum depth, C* = cost of the optimal solution, ε = minimum edge cost

Key Concepts

  • State Space: The set of all possible states reachable from the initial state via actions.
  • Heuristic Function h(n): An estimate of the cost from node n to the goal. Must be admissible (never overestimates) for A* to be optimal.
  • Admissibility: A heuristic is admissible if h(n) ≤ h*(n) for all n, where h*(n) is the true cost to the goal.
  • Consistency (Monotonicity): A heuristic is consistent if h(n) ≤ c(n, a, n') + h(n') for every successor n'. Consistency implies admissibility.
  • Zero-Sum Game: A game in which one player's gain is exactly the other player's loss — the foundation of Minimax.
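Admissibility can be checked empirically on a small problem: compute the true cost-to-goal h*(n) for every state (e.g. by a backward BFS from the goal) and verify h(n) ≤ h*(n) for all n. A sketch on a hypothetical 5×5 grid with unit step costs, using Manhattan distance as the heuristic:

```python
from collections import deque

def true_costs(walls, goal, width, height):
    """Exact cost-to-goal h*(n) for every reachable cell, via backward BFS
    on a 4-connected grid with unit step cost."""
    dist = {goal: 0}
    frontier = deque([goal])
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in walls and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                frontier.append((nx, ny))
    return dist

manhattan = lambda n, g: abs(n[0] - g[0]) + abs(n[1] - g[1])

goal = (4, 4)
h_star = true_costs(walls={(2, y) for y in range(4)}, goal=goal, width=5, height=5)
admissible = all(manhattan(n, goal) <= c for n, c in h_star.items())
```

On a unit-cost grid Manhattan distance never overestimates (walls only make real paths longer), so the check passes; a heuristic such as 2× Manhattan distance would fail it.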

Requirements

  • Python 3.7 or higher
  • Jupyter Notebook or JupyterLab

Setup

  1. Clone the repository:

     git clone https://github.com/ADolbyB/ai-search-methods.git
     cd ai-search-methods

  2. Install dependencies:

     pip install notebook

  3. Launch Jupyter Notebook:

     jupyter notebook

  4. Navigate to the desired folder and open the .ipynb file.

License

This project is licensed under the MIT License - see the LICENSE.md file for details.
