A Multi-Agent System with Lore-Aware Memory for Maintaining Long-Term Coherence in Narrative Generation
LoreSpring is a narrative generation system that uses multiple AI agents in a coordinated graph-based workflow to create coherent stories. The system maintains lore consistency through a local knowledge graph and employs specialized agents for writing, revision, continuity checking, summarization, and lore management.
- Multi-Agent Architecture: Specialized agents handle different aspects of narrative generation
- Lore-Aware Memory: Local knowledge graph system for maintaining consistency across chapters
- Iterative Refinement: Built-in revision cycles with continuity and quality checks
- Graph-Based Workflow: LangGraph orchestration for agent coordination
- Local Storage: JSON-based persistence for lore and chapter data
lore-spring/
├── api/ # FastAPI application
│ ├── main.py # FastAPI app initialization
│ ├── websocket.py # WebSocket handlers
│ └── routes/ # API endpoints
│ ├── generate.py # Chapter generation endpoints
│ ├── health.py # Health check endpoints
│ └── review.py # Review and revision endpoints
├── auth/ # Authentication & security
│ └── hashing.py # Password hashing utilities
├── database/ # Database management
│ ├── base.py # SQLAlchemy declarative base
│ ├── session.py # Database session management
│ └── models/ # ORM models
│ └── chapter.py # Chapter model
├── src/
│ ├── agents/ # Agent implementations
│ │ ├── writer.py # Initial draft generation
│ │ ├── revision.py # Content revision and quality assessment
│ │ ├── continuity.py # Lore consistency validation
│ │ ├── summarizer.py # Chapter summarization
│ │ ├── lore_keeper.py # Lore database updates
│ │ ├── quality.py # Quality assessment
│ │ ├── human_review.py # Human review workflow
│ │ └── utils.py # Shared utilities
│ ├── graph/ # Workflow orchestration
│ │ ├── state.py # NarrativeState definition
│ │ ├── main.py # StateGraph setup and compilation
│ │ ├── pipeline.py # Graph pipeline management
│ │ └── subgraphs.py # Subgraph implementations
│ ├── memory/ # Memory management
│ │ ├── lightrag.py # LightRAG integration
│ │ └── embedding.py # Sentence transformer embeddings
│ ├── schemas/ # Data models
│ │ ├── lore.py # Lore data structures
│ │ ├── continuity.py # Continuity check results
│ │ ├── revision.py # Revision feedback
│ │ ├── summarizer.py # Summary formats
│ │ └── api/ # API schemas
│ │ ├── generation_request.py
│ │ └── generation_response.py
│ ├── services/ # External service integrations
│ │ ├── neo4j.py # Neo4j graph database
│ │ ├── pinecone.py # Pinecone vector store
│ │ └── postgres.py # PostgreSQL database
│ └── llm/ # Language model interfaces
│ ├── groq_client.py # Groq API client
│ └── prompts.py # Prompt templates
├── config/ # Configuration management
│ └── settings.py # Pydantic settings
├── alembic/ # Database migrations
│ ├── env.py # Migration environment setup
│ ├── versions/ # Migration scripts
│ │ ├── 66dd3f3870f4_init.py # Initial schema
│ │ └── 407d26302ee2_timezone_fix.py # Timezone updates
│ └── script.py.mako # Migration template
├── lore_db/ # Local knowledge graph storage
│ ├── *.json # Key-value stores
│ ├── *.graphml # Graph structure
│ └── vdb_*.json # Vector databases
├── alembic.ini # Alembic configuration
├── memory.json # Chapter memory store
├── requirements.txt # Python dependencies
├── Dockerfile # Container deployment
└── README.md # This file
Generates initial narrative drafts based on:
- Previous chapter summaries
- User direction and metadata
- Lore context from knowledge graph
- Revision feedback and continuity issues
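The writer stage can be pictured as a state-transforming node that assembles its context and produces a draft. The following is a minimal, dependency-free sketch; the field names and the `call_llm` stub are assumptions for illustration, not the actual code in src/agents/writer.py:

```python
# Hypothetical sketch of a writer node; in the real pipeline this would be
# a LangGraph node backed by the Groq client.
from typing import Any, Dict


def call_llm(prompt: str) -> str:
    """Stub standing in for the Groq LLM call."""
    return f"[draft based on: {prompt[:40]}...]"


def writer_node(state: Dict[str, Any]) -> Dict[str, Any]:
    """Combine prior summaries, user direction, lore, and feedback
    into a prompt, then produce a new draft."""
    prompt = "\n".join(
        filter(
            None,
            [
                state.get("previous_summary", ""),
                state.get("user_direction", ""),
                state.get("lore_context", ""),
                state.get("revision_feedback", ""),
            ],
        )
    )
    # LangGraph-style nodes return partial state updates to be merged in.
    return {"draft": call_llm(prompt)}
```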
Reviews drafts for:
- Plot coherence and pacing
- Character development
- Prose clarity and style
- Prompt adherence
The agent also assigns quality scores (0-10) across these dimensions.
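The per-dimension scores can be captured in a small validated record. The project's actual schema (src/schemas/revision.py) is built on Pydantic; this is a dependency-free sketch with assumed field names:

```python
# Hypothetical revision-result record; field names are assumptions.
from dataclasses import dataclass


@dataclass
class RevisionResult:
    pacing: float
    character_depth: float
    prose_clarity: float
    tension: float
    prompt_adherence: float

    def __post_init__(self) -> None:
        # Enforce the 0-10 scoring range described above.
        for name, score in vars(self).items():
            if not 0 <= score <= 10:
                raise ValueError(f"{name} must be in 0-10, got {score}")

    @property
    def overall(self) -> float:
        """Unweighted mean across all dimensions."""
        scores = vars(self)
        return sum(scores.values()) / len(scores)
```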
Validates lore consistency by checking:
- Character traits and relationships
- Location descriptions and rules
- Object properties and holders
- Timeline coherence against established facts
Creates structured chapter summaries including:
- Key events and plot progression
- Character developments and updates
- Lore changes and new information
Updates the knowledge graph with new information from completed chapters:
- Inserts chapter content into LightRAG
- Maintains evolving lore database
LoreSpring uses LightRAG for lore-aware memory:
- Knowledge Graph: Stores entities (characters, locations, objects) and relationships
- Vector Embeddings: Semantic search using Sentence Transformers
- Local Storage: JSON files for persistence (lore_db/)
- Hybrid Querying: Combines keyword and semantic search
- Dynamic Updates: Lore evolves with each generated chapter
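The hybrid-querying idea above can be illustrated with a toy ranker that blends exact keyword overlap with a semantic score. LoreSpring delegates the real version to LightRAG's hybrid query mode; everything below is illustrative only:

```python
# Toy hybrid retrieval: weighted blend of keyword overlap and a
# caller-supplied semantic similarity function.
from typing import Callable, List, Tuple


def keyword_score(query: str, doc: str) -> float:
    """Jaccard overlap of whitespace tokens."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q | d) if q | d else 0.0


def hybrid_rank(
    query: str,
    docs: List[str],
    semantic: Callable[[str, str], float],
    alpha: float = 0.5,
) -> List[Tuple[str, float]]:
    """Rank docs by alpha * keyword + (1 - alpha) * semantic score."""
    scored = [
        (doc, alpha * keyword_score(query, doc) + (1 - alpha) * semantic(query, doc))
        for doc in docs
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

In the real system the semantic component comes from Sentence Transformer embeddings rather than a hand-written function.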
1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/lore-spring.git
   cd lore-spring
   ```

2. Create a virtual environment:

   ```bash
   python -m venv .venv
   # On Windows: .venv\Scripts\activate
   # On Unix/Mac: source .venv/bin/activate
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Configure the environment by creating a `.env` file with the required API keys:

   ```bash
   GROQ_API_KEY=your_groq_api_key_here

   # Optional LangSmith tracing:
   LANGCHAIN_TRACING_V2=true
   LANGCHAIN_ENDPOINT=https://api.smith.langchain.com
   LANGCHAIN_API_KEY=your_langsmith_key
   LANGCHAIN_PROJECT=your_project_name
   ```
Execute the main narrative generation workflow:
```bash
python -m src.graph.main
```

This runs the complete agent pipeline:
- Writer generates initial draft
- Revision agent reviews and scores quality
- Continuity agent checks lore consistency
- Summarizer creates chapter summary
- Lore Keeper updates knowledge graph
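The five stages above can be sketched as a chain of state-updating functions. In the project itself these are wired as LangGraph nodes (src/graph/main.py); the stub bodies here are placeholders, not the actual agents:

```python
# Toy, dependency-free version of the five-stage pipeline.
from typing import Any, Callable, Dict, List

State = Dict[str, Any]


def writer(state: State) -> State:
    return {"draft": f"Draft for: {state['user_direction']}"}


def revision(state: State) -> State:
    return {"revision_result": {"overall": 7}}


def continuity(state: State) -> State:
    return {"continuity_issues": []}


def summarizer(state: State) -> State:
    return {"chapter_summary": state["draft"][:60]}


def lore_keeper(state: State) -> State:
    return {"lore_updated": True}


def run_pipeline(state: State, nodes: List[Callable[[State], State]]) -> State:
    # Each node returns a partial update that is merged into the state,
    # mirroring how LangGraph merges node outputs.
    for node in nodes:
        state = {**state, **node(state)}
    return state


result = run_pipeline(
    {"user_direction": "Open with a storm."},
    [writer, revision, continuity, summarizer, lore_keeper],
)
```

The real graph also adds conditional edges, e.g. looping back to the writer when the revision or continuity checks fail.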
Start the API server for HTTP and WebSocket access:
```bash
uvicorn api.main:app --reload
```

The API will be available at http://localhost:8000, with interactive docs at /docs.
- POST /api/generate - Generate a new chapter with given parameters
- GET /api/health - Health check endpoint
- POST /api/review - Submit chapter for review and revision
- WebSocket /ws - Real-time streaming of chapter generation
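A chapter-generation call can be composed with only the standard library. The payload fields below are assumptions about the API schema (src/schemas/api/generation_request.py), and the request is only constructed here, not sent:

```python
# Hypothetical client sketch for POST /api/generate.
import json
import urllib.request


def make_generate_request(base_url: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        url=f"{base_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = make_generate_request(
    "http://localhost:8000",
    {"project_id": "demo", "chapter_number": 1, "user_direction": "Open with a storm."},
)
# urllib.request.urlopen(req) would send it once the server is running.
```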
The system maintains memory in:
- `memory.json`: Chapter summaries and key events
- `lore_db/`: LightRAG knowledge graph files
LoreSpring uses Alembic for database migrations. To set up the database:
```bash
# Create a new database migration
alembic revision --autogenerate -m "Description of changes"

# Apply all pending migrations
alembic upgrade head

# Roll back to the previous migration
alembic downgrade -1
```

Currently supported migrations:
- Initial schema setup (66dd3f3870f4_init.py)
- Timezone fixes (407d26302ee2_timezone_fix.py)
Update `sqlalchemy.url` in `alembic.ini` to point to your database (PostgreSQL, MySQL, etc.).
The system uses a typed state dictionary (NarrativeState) containing:
- `project_id`: Unique project identifier
- `chapter_number`: Current chapter being generated
- `user_direction`: User-provided writing instructions
- `metadata`: Genre, style, and other settings
- `lore_context`: Retrieved relevant lore facts
- `draft`: Current chapter draft
- `revision_count`: Number of revision iterations
- `continuity_issues`: Detected consistency problems
- `revision_result`: Quality assessment and feedback
- `final_chapter`: Completed chapter text
- `chapter_summary`: Structured summary
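The fields listed above suggest a typed dictionary along these lines; the exact value types are assumptions about src/graph/state.py:

```python
# Sketch of the NarrativeState shape; value types are assumed.
from typing import Any, Dict, List, TypedDict


class NarrativeState(TypedDict, total=False):
    project_id: str
    chapter_number: int
    user_direction: str
    metadata: Dict[str, Any]
    lore_context: str
    draft: str
    revision_count: int
    continuity_issues: List[str]
    revision_result: Dict[str, Any]
    final_chapter: str
    chapter_summary: Dict[str, Any]


state: NarrativeState = {
    "project_id": "demo",
    "chapter_number": 1,
    "user_direction": "Open with a storm.",
    "revision_count": 0,
}
```

`total=False` lets nodes populate fields incrementally as the pipeline runs, which matches how partial state updates are merged.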
The revision agent evaluates drafts on:
- Pacing: Scene rhythm and timing
- Character Depth: Psychological complexity
- Prose Clarity: Writing precision and style
- Tension: Dramatic buildup
- Prompt Adherence: Following user direction
Scores range from 0 to 10; most drafts score between 5 and 7.
Core dependencies include:
- LangGraph: Graph-based workflow orchestration
- LangChain-Groq: Groq LLM integration
- LightRAG: Knowledge graph memory system
- Sentence Transformers: Text embeddings
- Pydantic: Data validation and settings
- LangSmith: Optional tracing and monitoring
- FastAPI: Web framework for REST API
- Uvicorn: ASGI server for FastAPI
- SQLAlchemy: ORM for database access
- Alembic: Database migration tool
- python-multipart: Form data parsing
- Core agent pipeline (Writer, Revision, Continuity, Summarizer, Lore Keeper)
- Quality agent for content assessment
- Human review workflow agent
- LightRAG knowledge graph integration
- Local JSON-based persistence
- Groq LLM client with retry logic
- Structured data schemas and API schemas
- Configuration management via Pydantic
- FastAPI REST application structure
- WebSocket streaming support
- Authentication layer with password hashing
- Database models (Chapter) and ORM setup
- Alembic database migrations (versioned)
- Service integration stubs (Neo4j, Pinecone, PostgreSQL)
- API endpoints expansion (generate, health, review routes)
- WebSocket full integration for real-time streaming
- Database integration with PostgreSQL via Alembic
- Complete containerization (Dockerfile)
- External database integrations (Neo4j, Pinecone, PostgreSQL production setup)
- Comprehensive test suite
- Advanced subgraphs for complex workflows
- Prompt template management system
- Deployment pipeline (CI/CD)
- Implement dedicated Quality agent
- Add FastAPI REST endpoints structure
- Implement WebSocket infrastructure
- Add database models and ORM setup
- Setup Alembic for database migrations
- Complete all FastAPI endpoints (generate, review, status)
- Full WebSocket streaming implementation
- PostgreSQL production deployment
- Add external database support (Neo4j, Pinecone)
- Create comprehensive test suite
- Implement advanced subgraphs for workflows
- Add prompt template management system
- Complete Dockerfile and deploy pipeline
- Fork the repository
- Create a feature branch
- Implement changes with proper type hints
- Test locally with the graph workflow
- Submit a pull request
- Built with LangGraph for agent orchestration
- Powered by Groq for fast LLM inference
- Memory system based on LightRAG
- Embeddings via Sentence Transformers
Note: WebSocket streaming and authentication are not yet supported.