fapomar/ai-chatbot-local


Spring AI Chatbot with Ollama

This project is a simple Spring Boot chatbot powered by Ollama and Spring AI. It demonstrates how to integrate a local LLM (such as Mistral) with retrieval-augmented generation (RAG) through a Spring Boot REST API, and how to manage chat sessions and prompt history.

Overview

This application provides a REST endpoint that accepts chat messages and returns model responses using Ollama as the local inference server.

It supports multi-turn conversations through a simple history_id field that represents the chat session ID.

The RAG pipeline ingests the book Winnie-the-Pooh (public domain) as its sample knowledge base.
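Spring AI's Ollama support is typically wired through application properties. A minimal sketch, assuming the Spring AI Ollama starter is on the classpath (property names can vary between Spring AI versions, so verify them against the project's actual configuration file):

```properties
# Ollama's default local endpoint.
spring.ai.ollama.base-url=http://localhost:11434
# Chat model used to generate responses.
spring.ai.ollama.chat.options.model=mistral
# Embedding model used to index the Winnie-the-Pooh text for RAG.
spring.ai.ollama.embedding.options.model=nomic-embed-text
```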

Prerequisites

  • Java 21
  • Spring Boot
  • Ollama (installed locally)
  • IntelliJ IDEA (optional, for development)
  • Bruno or any API testing tool (e.g., Postman, cURL)

Setup

Install Ollama

brew install ollama

Start the Ollama server

ollama serve

Run or pull models

ollama run mistral
ollama pull nomic-embed-text

To run the application in IntelliJ, create a new run configuration with the following:

  • Type: Application
  • Main class: dev.fp.aichatbot.Application
  • JDK: Java 21

Testing the Chat Endpoint

Use Bruno, Postman, or cURL to test the chat API.

Endpoints

Request Body:

{
  "prompt_message": "What's the name of the donkey in Winnie-the-Pooh?",
  "history_id": "1"
}

  • The history_id acts as a chat session ID, allowing the chatbot to maintain context across multiple messages.
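The request above can be sent with cURL. This is a sketch: the /api/chat path and port 8080 are assumptions (check the controller's request mapping for the actual values), and the trailing `|| true` keeps the command from failing when the server is not running.

```shell
# Hypothetical request: the /api/chat path and port 8080 are assumptions;
# check the controller's request mapping for the actual values.
BODY='{"prompt_message": "What'\''s the name of the donkey in Winnie-the-Pooh?", "history_id": "1"}'
curl -s -X POST http://localhost:8080/api/chat \
  -H 'Content-Type: application/json' \
  -d "$BODY" || true
```

Reuse the same history_id in follow-up requests to continue the conversation in context.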

License

This project is distributed under the MIT License. See LICENSE for details.
