
🖋️ Private Novel Strategist

Offline Character & Plot Consistency Engine for Ongoing Web Novels


📖 Project Purpose

This project was created to help an author maintain consistency in characters, plotlines, and world-building while writing an ongoing online novel.

It acts as:

  • 📚 A private story library
  • 🧠 A semantic memory engine
  • 🔎 A fact-checking assistant
  • 🧩 A plot consistency verifier

The goal is to ensure that as new chapters are written, the author can verify:

  • Character motivations remain consistent
  • Timeline continuity is maintained
  • Plot threads are not contradicted
  • Previously established facts are preserved

🎯 Problem It Solves

In long-running serialized fiction:

  • Characters evolve over many chapters
  • Minor details are easily forgotten
  • Retcons accidentally occur
  • Subplots may contradict earlier arcs

This system provides a searchable semantic memory of the entire novel, allowing the author to verify facts instantly instead of manually rereading everything.

Example questions:

  • “Has this character ever met the king before?”
  • “What was the original reason for the betrayal?”
  • “Did I already define this magic rule earlier?”
  • “Is this action consistent with Chapter 12?”

🧠 How It Works (Architecture Overview)

  1. Chapters are stored as .txt files.
  2. Each chapter's text is:
    • Split into chunks
    • Converted into embeddings
    • Stored in a vector database (ChromaDB)
  3. When a question is asked:
    • Relevant story fragments are retrieved
    • The fragments are injected into the prompt
    • The local LLM (phi3) generates a context-aware response

This creates a lightweight Retrieval-Augmented Generation (RAG) pipeline optimized for fiction tracking.
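The pipeline above can be sketched in plain Python. The helpers below are illustrative only: `chunk_text` and `build_prompt` are hypothetical names (the real app.py may split text and phrase prompts differently), and the embedding/retrieval wiring via the chromadb and ollama packages is shown in comments rather than executed.

```python
def chunk_text(text: str, size: int = 600) -> list[str]:
    """Split chapter text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def build_prompt(question: str, fragments: list[str]) -> str:
    """Inject retrieved story fragments into the prompt sent to the local LLM."""
    context = "\n---\n".join(fragments)
    return (
        "You are a continuity assistant for a novel. "
        "Answer using ONLY the story excerpts below.\n\n"
        f"Excerpts:\n{context}\n\n"
        f"Question: {question}"
    )

# In the real app, each chunk would be embedded with nomic-embed-text and
# stored in a persistent ChromaDB collection, roughly:
#   emb = ollama.embeddings(model="nomic-embed-text", prompt=chunk)["embedding"]
#   collection.add(ids=[chunk_id], documents=[chunk], embeddings=[emb])
# and a question would retrieve the top-k fragments before calling phi3:
#   hits = collection.query(query_embeddings=[q_emb], n_results=2)
#   ollama.generate(model="phi3", prompt=build_prompt(q, hits["documents"][0]))
```

Because only the retrieved fragments enter the prompt, the model never needs the full manuscript in context, which keeps responses fast on modest hardware.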


🔒 Fully Offline

  • No external APIs
  • No cloud dependency
  • No data leaves your machine
  • Safe for unpublished manuscripts

⚙️ Tech Stack

  • Streamlit – User Interface
  • ChromaDB – Vector Database
  • Ollama – Local LLM Runtime
  • phi3 – Fast inference model
  • nomic-embed-text – Local embedding model

🚀 How To Run The Project

Follow these steps carefully.


1️⃣ Install Ollama

Download and install from:

👉 https://ollama.com

After installation, pull required models:

```
ollama pull phi3
ollama pull nomic-embed-text
```

Verify Ollama is running:

```
ollama run phi3
```

Then stop it with Ctrl+C.

Optional (check GPU usage on Apple Silicon):

```
ollama ps
```

It should show:

```
PROCESSOR: gpu
```

2️⃣ Create a Virtual Environment

Inside your project folder:

```
python3 -m venv venv
source venv/bin/activate
```

3️⃣ Install Dependencies

```
pip install streamlit chromadb ollama
```

4️⃣ Project Folder Structure

Ensure your structure looks like this:

```
NovelAssistant/
│
├── app.py
├── story_db/              # Auto-created vector database
├── my_novel_pages/        # Add your .txt chapters here
└── README.md
```

5️⃣ Add Your Chapters

Place your novel chapters as .txt files inside:

```
my_novel_pages/
```

Example:

```
my_novel_pages/
├── chapter1.txt
├── chapter2.txt
└── notes.txt
```
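Only .txt files in this folder are picked up at sync time. A minimal sketch of that discovery step, assuming a hypothetical `find_chapters` helper (app.py may load files differently):

```python
import tempfile
from pathlib import Path

def find_chapters(folder) -> list[str]:
    """Return .txt chapter filenames in a stable (sorted) order."""
    return sorted(p.name for p in Path(folder).glob("*.txt"))

# Demo with a throwaway folder mirroring the example layout:
demo = Path(tempfile.mkdtemp())
for name in ["chapter2.txt", "chapter1.txt", "notes.txt", "cover.jpg"]:
    (demo / name).write_text("...")

print(find_chapters(demo))  # → ['chapter1.txt', 'chapter2.txt', 'notes.txt']
```

Sorting keeps chunk ordering deterministic across syncs; non-text files (like the cover image above) are ignored.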

6️⃣ Run The Application

From the project root directory:

```
streamlit run app.py
```

Your browser will open automatically.


7️⃣ Sync Your Story

Inside the sidebar:

Click:

“Sync New Chapters/Notes”

This will:

  • Split chapters into chunks
  • Generate embeddings
  • Store them in ChromaDB

Once synced, you can start asking questions.
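A sync pass only needs to embed material it has not already stored. One way to sketch that bookkeeping, assuming a hypothetical "filename:index" chunk-id scheme (not necessarily what app.py uses):

```python
def pending_chunks(chapter_chunks: dict[str, list[str]],
                   synced_ids: set[str]) -> dict[str, str]:
    """Map chunk-id -> text for chunks not yet in the vector store."""
    todo = {}
    for fname, chunks in chapter_chunks.items():
        for i, chunk in enumerate(chunks):
            cid = f"{fname}:{i}"      # hypothetical id scheme
            if cid not in synced_ids:
                todo[cid] = chunk
    return todo

# Each pending chunk would then be embedded (nomic-embed-text) and added
# to the ChromaDB collection under its id, so re-syncing is incremental.
```

With ids like these, clicking the sync button after adding chapter3.txt embeds only the new chapter instead of reprocessing the whole library.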


🧪 If You Change Embedding Models

If you ever switch embedding models and get:

```
Embedding function conflict
```

Delete the vector database:

```
rm -rf story_db
```

Then restart and sync again.


⚡ Performance Tips (Apple M1 Recommended)

For best performance:

  • Use phi3
  • Keep chunk size around 600
  • Use retrieval k = 2
  • Limit output tokens (num_predict = 300)

Expected response time: 1–3 seconds on M1
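One way to pin these settings in code, assuming the ollama Python client (whose generate call accepts an options dict; `num_predict` caps the number of generated tokens):

```python
CHUNK_SIZE = 600                     # characters per chunk
TOP_K = 2                            # fragments retrieved per question
GEN_OPTIONS = {"num_predict": 300}   # cap on generated output tokens

# Hypothetical call shape for the answer step:
#   ollama.generate(model="phi3", prompt=prompt, options=GEN_OPTIONS)
```

Smaller chunks and a low `TOP_K` keep the prompt short, which is what keeps phi3's latency in the 1–3 second range on Apple Silicon.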


🧩 Future Improvements

  • Character profile auto-extraction
  • Timeline visualization
  • Relationship graph mapping
  • Contradiction detection alerts
  • Arc-level summarization memory

👨‍💻 Built For

Writers of serialized fiction who want:

  • Long-term narrative consistency
  • A private AI assistant
  • Structured plot validation

"Write freely. Let the system remember everything."
