
LLM WebSearch CLI 🌐🤖

A high-performance, completely free and open-source command-line interface (CLI) that connects your local or cloud LLMs to the open web. It allows your AI models to search the web in real-time, pulling in actual search results to answer your prompts accurately without relying on outdated training data.

No paid API keys are required for web search: instead of a commercial search API, the tool queries DuckDuckGo's HTML results directly and scrapes the snippets.

CLI Screenshot

✨ Features

  • Real-Time Web Search: Dynamically fetches up-to-date web results via an intelligent DuckDuckGo HTML scraper.
  • Customizable Search Counts: Define exactly how many web search snippets to retrieve (from 1 to 10+).
  • 100% Free & Open Source: The web search requires zero API keys. No SerpApi, no Google Custom Search API—just pure Python.
  • Universally Compatible LLM Client: Works out-of-the-box with any OpenAI-compatible endpoint. Easily plug in Ollama, LM Studio, or cloud providers.
  • Interactive CLI: Built on the rich library, offering streaming responses, colored output, and a seamless chat loop.
  • On-the-fly Toggles: Turn web search on (/web on [count]) or off (/web off) natively mid-conversation.

🛠️ Modules and Frameworks Used

This project relies on a few robust, well-established Python packages:

  • requests & beautifulsoup4: Power the API-less web scraper that parses DuckDuckGo search snippets.
  • openai: Provides the standardized client layer for any OpenAI-compatible LLM REST API.
  • rich: Delivers the colorful, interactive command-line experience.
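The snippet-extraction step can be sketched with the standard library alone (the project itself uses requests and beautifulsoup4, and the result__snippet class name reflects DuckDuckGo's current HTML markup; treat both as assumptions, not the project's actual code):

```python
from html.parser import HTMLParser

class SnippetParser(HTMLParser):
    """Collect the text of elements whose class contains 'result__snippet'."""

    def __init__(self):
        super().__init__()
        self._tag = None      # tag name of the snippet element we are inside
        self.snippets = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class") or ""
        if self._tag is None and "result__snippet" in classes:
            self._tag = tag
            self.snippets.append("")

    def handle_endtag(self, tag):
        if tag == self._tag:
            self._tag = None

    def handle_data(self, data):
        if self._tag is not None:
            self.snippets[-1] += data

def top_snippets(html, count=3):
    """Return up to `count` whitespace-normalized snippets from a results page."""
    parser = SnippetParser()
    parser.feed(html)
    return [" ".join(s.split()) for s in parser.snippets[:count]]

sample = ('<div><a class="result__snippet">Python is a <b>language</b>.</a>'
          '<a class="result__snippet">Second hit.</a></div>')
```

In the real tool, the HTML would come from an HTTP request to DuckDuckGo rather than a hardcoded string, and the `--count` setting would feed the `count` argument.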

🚀 Getting Started

1. Prerequisites

Ensure you have Python installed; using a virtual environment is recommended.

git clone https://github.com/klsdfernando/LLM-Web-Search-CLI.git
cd LLM-Web-Search-CLI
python -m venv venv
.\venv\Scripts\activate    # On Windows
source venv/bin/activate   # On Mac/Linux
pip install -r requirements.txt

2. Connecting to an LLM

The tool requires an LLM to chat with. It uses OpenAI's standard API format, which means you can connect it anywhere:

🦙 Local: Ollama (Default)

By default, Ollama serves an OpenAI-compatible API on localhost:11434. Ensure Ollama is running and that you have pulled a model (e.g., llama3).

python main.py --model llama3

🖥️ Local: LM Studio

Start the "Local Server" in LM Studio (usually on port 1234). The model name you pass is largely cosmetic, since LM Studio serves whichever model is currently loaded.

python main.py --endpoint "http://localhost:1234/v1" --model "local-model"

☁️ Cloud Providers (Groq, Together AI, etc.)

You can also connect to any cloud provider that exposes an OpenAI-compatible API. The application ships with no hardcoded keys, so to use a secured cloud provider, set an OPENAI_API_KEY environment variable before launching the script and point --endpoint at the provider's base URL.
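A sketch of the key-resolution logic this implies (the function name and the localhost check are illustrative assumptions, not the project's actual code):

```python
import os

def resolve_api_key(endpoint: str) -> str:
    """Return an API key for the given endpoint.

    Local servers (Ollama, LM Studio) accept any placeholder key, while
    cloud endpoints require OPENAI_API_KEY to be set in the environment.
    """
    if "localhost" in endpoint or "127.0.0.1" in endpoint:
        return "not-needed"  # local servers ignore the key entirely
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise SystemExit("Set OPENAI_API_KEY to use a cloud endpoint.")
    return key
```

The resolved key would then be passed to the OpenAI client alongside the endpoint URL.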

3. CLI Arguments Reference

You can configure the CLI right from launch using these arguments:

| Argument | Description | Default | Example |
| --- | --- | --- | --- |
| `--endpoint` | Base URL of your OpenAI-compatible API endpoint. | `http://localhost:11434/v1` (Ollama) | `--endpoint http://localhost:1234/v1` |
| `--model` | The model name to query. | `llama2` | `--model mistral` |
| `--web` | Enable web search at launch. | Disabled | `--web` |
| `--count` | Default number of search results to fetch from DuckDuckGo. | `3` | `--count 5` |
| `--show` | Display the scraped website URLs alongside search results. | Disabled | `--show` |
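These flags map naturally onto an argparse parser. A minimal sketch (flag names and defaults are taken from the table above; the real main.py may differ in details):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Build a parser mirroring the CLI arguments documented above."""
    p = argparse.ArgumentParser(description="LLM WebSearch CLI")
    p.add_argument("--endpoint", default="http://localhost:11434/v1",
                   help="Base URL of an OpenAI-compatible API")
    p.add_argument("--model", default="llama2",
                   help="Model name to query")
    p.add_argument("--web", action="store_true",
                   help="Enable web search at launch")
    p.add_argument("--count", type=int, default=3,
                   help="Number of DuckDuckGo results to fetch")
    p.add_argument("--show", action="store_true",
                   help="Display scraped URLs alongside results")
    return p

args = build_parser().parse_args(["--web", "--count", "5"])
```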

Full Launch Example:

python main.py --endpoint "http://localhost:1234/v1" --model "local-model" --web --count 5

💬 In-Chat Commands

Once the CLI is running, you can use these commands right in the interactive prompt:

  • /web on: Enables real-time web search for all following questions (defaults to checking top 3 results).
  • /web on <number>: Enables web search and sets exactly how many top results to retrieve (e.g., /web on 5).
  • /web off: Disables web search; the LLM answers from its training data alone.
  • /show on: Displays the clean URLs of the websites scraped during a web search as a list.
  • /show off: Hides the display of scraped website URLs.
  • quit or exit: Safely terminates the CLI session.
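The command handling above can be sketched as a small dispatcher; the function name and the state tuple it returns are illustrative assumptions about how the loop might track its toggles:

```python
def handle_command(line, web_enabled, count, show_urls):
    """Parse an in-chat command line.

    Returns the updated (web_enabled, count, show_urls) state, or None if
    the line is a regular prompt rather than a command.
    """
    parts = line.strip().split()
    if not parts or not parts[0].startswith("/"):
        return None  # regular chat message, send it to the LLM
    if parts[0] == "/web":
        if parts[1:] and parts[1] == "on":
            # optional count argument: "/web on 5"
            n = int(parts[2]) if len(parts) > 2 else count
            return (True, n, show_urls)
        if parts[1:] and parts[1] == "off":
            return (False, count, show_urls)
    if parts[0] == "/show" and parts[1:]:
        return (web_enabled, count, parts[1] == "on")
    return (web_enabled, count, show_urls)  # unknown command: state unchanged
```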

📄 License

Distributed under the MIT License. See LICENSE for more information.
