This project provides a comprehensive tool for processing YouTube videos and engaging in AI-powered discussions about their content. It combines video downloading, audio transcription, and interactive AI chat capabilities using state-of-the-art open-source tools.
## Features

- Download audio from YouTube videos using `yt-dlp`
- Transcribe audio to text using OpenAI's `whisper` (or process a local `.mp3` file directly)
- Engage in interactive chat sessions about video content using `ollama` AI models
- Runtime model selection from available or running Ollama models
- Smart handling of model availability and startup
- Robust error handling and graceful session management
- Support for multiple exit commands and interrupt handling
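The dual-input feature above (YouTube URL or local `.mp3`) implies the tool must classify what was passed to it. The helper below is a hypothetical sketch of that decision, not code from the repository:

```python
from pathlib import Path

def classify_source(source: str) -> str:
    """Classify an input argument as a YouTube URL or a local mp3 file.

    Hypothetical helper: yt2post.py may implement this check
    differently; shown only to illustrate the dual-input feature.
    """
    if source.startswith(("http://", "https://")):
        return "youtube"
    if Path(source).suffix.lower() == ".mp3":
        return "local_mp3"
    raise ValueError(f"unsupported input: {source}")
```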
## Installation

1. Clone the repository:

   ```shell
   git clone https://github.com/fdepierre/YouTube2Post.git
   cd YouTube2Post
   ```

2. Install the required packages. Make sure you have Python installed, then run:

   ```shell
   pip install -r requirements.txt
   ```

3. Additional setup:
   - Ensure `yt-dlp` is installed and accessible in your PATH.
   - Configure your `config.json` file with the necessary parameters: `tmp_directory`, `work_directory`, and `model`.
## Ollama Setup

Ollama must be installed and running on your local machine. To start the Ollama server and load a model, use:

```shell
ollama serve &
ollama run <model_version>
```

The model version is defined in the `config.json` file under the `"model"` key.
## Usage

The tool provides several command-line options for different workflows:

```shell
# Transcribe a YouTube video
python yt2post.py -t <YouTube_URL>

# Transcribe a local mp3 file (no user prompts, metadata auto-extracted)
python yt2post.py -t /path/to/audio.mp3

# Chat about an existing transcript
python yt2post.py -c <transcript_file>

# Full process: download, transcribe, and chat
python yt2post.py -f <YouTube_URL>

# Use the model selection interface
python yt2post.py -m -c <transcript_file>
```

When using the `-m` option:

- If models are running, you can select one directly
- If no models are running, available models will be listed
- Press Enter to use a single running model, or type 'no' to see alternatives
- Type 'exit' during chat to end the session
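The tool supports multiple exit commands during chat; a minimal sketch of how such input checking could work follows. Only `exit` is documented, so the other commands in the set are illustrative assumptions:

```python
# Assumed set of exit commands: only 'exit' is documented,
# 'quit' and 'bye' are illustrative guesses.
EXIT_COMMANDS = {"exit", "quit", "bye"}

def should_exit(user_input: str) -> bool:
    """Return True when the chat loop should end for this input."""
    return user_input.strip().lower() in EXIT_COMMANDS
```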
## Configuration

Create a `config.json` file in the root directory (note that JSON does not support comments, so the file contains only the keys):

```json
{
  "tmp_directory": "path/to/tmp",
  "work_directory": "path/to/work",
  "model": "llama3.2"
}
```

- The `model` key in `config.json` specifies the default model
- Use the `-m` option to override this and select a model at runtime
- Available models can be listed using `ollama list`
- Running models can be viewed using `ollama ps`
## Testing

The project maintains high code quality through comprehensive testing:

```shell
# Run all tests
pytest

# Run tests with coverage report
pytest --cov=modules tests/

# Generate an HTML coverage report
pytest --cov=modules --cov-report=html tests/
```

View the HTML coverage report at `htmlcov/index.html`.
## Project Structure

```
YouTube2Post/
├── modules/                  # Core functionality modules
│   ├── config_manager.py     # Configuration handling
│   ├── directory_manager.py  # File and directory management
│   ├── ollama_manager.py     # AI model interaction
│   ├── transcriber.py        # Audio transcription
│   └── youtube_downloader.py # YouTube video processing
├── tests/                    # Test suite
├── config.json               # Configuration file
├── requirements.txt          # Python dependencies
└── yt2post.py                # Main script
```
## Contributing

- Fork the repository
- Create a feature branch
- Make your changes
- Add tests for new functionality
- Submit a pull request

Ensure your code follows the project's style and passes all tests.
## License

This project is licensed under the MIT License - see the LICENSE file for details.