This repository contains the backend and frontend services for the amazee.ai application. The project is built using a modern tech stack including Python FastAPI for the backend, Next.js with TypeScript for the frontend, and PostgreSQL for the database.
- Backend: Python FastAPI
- Frontend: Next.js + TypeScript
- Database: PostgreSQL
- Testing: Pytest (backend), Jest (frontend)
- Containerization: Docker & Docker Compose
- Orchestration: Kubernetes with Helm
This project uses semantic versioning (MAJOR.MINOR.PATCH). Version information is maintained in:
- `app/__version__.py` - Python application version
- `helm/Chart.yaml` - Main Helm chart version
- `helm/charts/backend/Chart.yaml` - Backend chart version
- `helm/charts/frontend/Chart.yaml` - Frontend chart version
To bump the version across all files:
```bash
# Install bump-my-version (if not already installed)
pip install bump-my-version

# Bump patch version (2.0.0 -> 2.0.1)
bump-my-version bump patch

# Bump minor version (2.0.0 -> 2.1.0)
bump-my-version bump minor

# Bump major version (2.0.0 -> 3.0.0)
bump-my-version bump major
```

The version bump will automatically update all version references and create a git tag.
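As an illustration of the semantic-versioning rules above, here is a minimal sketch of the patch/minor/major bump behaviour. This is a simplified model for clarity, not how bump-my-version is implemented:

```python
def bump(version: str, part: str) -> str:
    """Bump a MAJOR.MINOR.PATCH version string; lower parts reset to zero."""
    major, minor, patch = (int(p) for p in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part!r}")

print(bump("2.0.0", "patch"))  # 2.0.1
print(bump("2.0.0", "minor"))  # 2.1.0
print(bump("2.0.0", "major"))  # 3.0.0
```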
- Docker and Docker Compose
- Make (for running convenience commands)
- Node.js and npm (for local frontend development)
- Python 3.x (for local backend development)
- Clone the repository:

  ```bash
  git clone [repository-url]
  cd [repository-name]
  ```

- Install node dependencies:

  ```bash
  cd frontend
  npm install
  cd ../
  ```

- Environment Setup:
  - Copy any example environment files and configure as needed
  - Ensure all required API keys are set
  - Ensure you have set the `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` variables

- Start the services:

  ```bash
  docker-compose up -d
  ```
This will start:
- PostgreSQL database (port 5432)
- Backend service (port 8000)
- Frontend service (port 3000)
- litellm service (port 4000)
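Once the containers are up, you can quickly verify that each service is listening. A small sketch, using the port list above; the helper function is illustrative and not part of the repo:

```python
import socket

# Ports as listed above
SERVICES = {"postgres": 5432, "backend": 8000, "frontend": 3000, "litellm": 4000}

def is_up(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        print(f"{name:10s} {'up' if is_up('localhost', port) else 'down'}")
```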
```bash
make backend-test        # Run backend tests
make backend-test-cov    # Run backend tests with coverage report
make backend-test-regex  # Waits for a string which pytest will parse to only collect a subset of tests
```

See `tests/stripe_test_trigger.md` for detailed instructions on testing Stripe integration for billing.

```bash
make frontend-test  # Run frontend tests if they exist
make test-all       # Run both backend and frontend tests
make test-clean     # Clean up test containers and images
```

To clean up test containers and images:

```bash
make test-clean
```
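The string you give `make backend-test-regex` is handed to pytest's `-k` test-selection option. A simplified sketch of how substring-based selection narrows the collected set (real pytest `-k` also supports `and`/`or`/`not` expressions; the test names here are hypothetical):

```python
def select_tests(collected: list[str], expression: str) -> list[str]:
    """Keep only test names that contain the expression (substring match)."""
    return [name for name in collected if expression in name]

collected = ["test_create_user", "test_delete_user", "test_stripe_webhook"]
print(select_tests(collected, "user"))    # ['test_create_user', 'test_delete_user']
print(select_tests(collected, "stripe"))  # ['test_stripe_webhook']
```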
- Start all services in development mode:

  ```bash
  docker-compose up -d
  ```

- View logs for all services:

  ```bash
  docker-compose logs -f
  ```

- View logs for a specific service:

  ```bash
  docker-compose logs -f [service]  # e.g. frontend, backend, postgres
  ```

- Restart a specific service:

  ```bash
  docker-compose restart [service]
  ```

- Stop all services:

  ```bash
  docker-compose down
  ```
The development environment includes:
- Hot reloading for frontend (Next.js) on port 3000
- Hot reloading for backend (Python) on port 8800
- PostgreSQL database on port 5432
Access the services at:
- Frontend: http://localhost:3000
- Backend API: http://localhost:8800
If you have a database dump, you can restore it into your local PostgreSQL service following these steps:
- Extract the dump:

  ```bash
  mkdir -p ./restore-data
  tar -xf the-postgres-database-dump.tar -C ./restore-data
  ```

- Prepare the restore script: The dump should contain a `restore.sql` file, which contains placeholders and likely a different database name. Update it for your local environment:

  ```bash
  # Replace the data path placeholder
  sed -i '' 's/\$\$PATH\$\$/\/tmp\/restore/g' ./restore-data/restore.sql
  # Replace the dumped database name (`dumped-database-example-name`) with your local one (e.g. `postgres_service`)
  sed -i '' 's/dumped-database-example-name/postgres_service/g' ./restore-data/restore.sql
  ```
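If the BSD/GNU differences in `sed -i` are a nuisance, the same substitutions can be done in Python. A hedged sketch; the placeholder and database names are the examples from above, and the helper is not part of the repo:

```python
from pathlib import Path

def prepare_restore_sql(path: str, db_name: str = "postgres_service") -> None:
    """Replace the $$PATH$$ placeholder and the dumped database name in restore.sql."""
    sql_file = Path(path)
    sql = sql_file.read_text()
    sql = sql.replace("$$PATH$$", "/tmp/restore")
    sql = sql.replace("dumped-database-example-name", db_name)
    sql_file.write_text(sql)

# Example: prepare_restore_sql("./restore-data/restore.sql")
```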
- Stop the backend: To prevent active connections during the restoration, stop the backend container:

  ```bash
  docker compose stop backend
  ```

- Get the name of the postgres container: Copy the name, e.g. `amazeeai-postgres-1`, and replace `<postgres-container-name>` in the following commands.

  ```bash
  docker compose ps
  ```

- Transfer and restore: Copy the files to the database container, fix permissions, and run the restoration:

  ```bash
  # Create directory and copy files
  docker exec <postgres-container-name> mkdir -p /tmp/restore
  docker cp ./restore-data/. <postgres-container-name>:/tmp/restore/

  # Fix permissions so the postgres user can read the .dat files
  docker exec <postgres-container-name> chown -R postgres:postgres /tmp/restore

  # Run the restoration script
  docker exec <postgres-container-name> psql -U postgres -f /tmp/restore/restore.sql
  ```

- Restart and clean up:

  ```bash
  # Start the backend again
  docker compose start backend

  # Optional: remove temporary files from the container
  docker exec <postgres-container-name> rm -rf /tmp/restore
  ```
We follow a structured branching and deployment process to ensure stability across environments.
- Default Branch: `dev` is the default branch, and it is linked to the `dev` environment on Lagoon.
- Branching: Always create new feature branches from `dev`. Bugfixes can potentially be created from the `main` branch if they need to be merged into `main` and `prod` faster than in-progress `dev` work.
- Review: Create a Pull Request (PR) back into `dev`. All PRs must be reviewed and tested locally before merging.
- Dev Testing: After merging, verify your changes on the `dev` environment.
- Staging: Once verified on dev, create a PR from `dev` to `main`. The `main` branch serves as our Stage environment.
- Lagoon: Deployments are managed via Lagoon.
- Promotion: Deploy to Prod by promoting the build from the `main` branch directly on the Lagoon Dashboard or via Lagoon CLI.
- Create a new branch from `dev`:

  ```bash
  git checkout -b feature/my-feature
  ```

- Make your changes and commit.
- Run the test suite:

  ```bash
  make test-all
  ```

- Submit a pull request to the `dev` branch.
```
.
├── app/                  # Backend Python code
├── docs/                 # Documentation around design decisions
├── frontend/             # React frontend application
├── tests/                # Backend tests
├── scripts/              # Utility scripts
├── docker-compose.yml    # Docker services configuration
├── Dockerfile            # Backend service Dockerfile
├── Dockerfile.test       # Test environment Dockerfile
└── Makefile              # Development and test commands
```
- `DATABASE_URL`: PostgreSQL connection string
- `SECRET_KEY`: Application secret key
- `DYNAMODB_ROLE_NAME`: Role to assume for accessing DDB resources (created by terraform)
- `SES_ROLE_NAME`: Role to assume for SES access (created by terraform)
- `SES_SENDER_EMAIL`: Validated identity in SES from which emails are sent
- `ENV_SUFFIX`: Naming suffix to differentiate resources from different environments. Defaults to `dev`.
- `SES_REGION`: Optional, defaults to `eu-central-1`
- `DYNAMODB_REGION`: Optional, defaults to `eu-central-2`
- `NEXT_PUBLIC_API_URL`: Backend API URL
- Create a new branch for your feature
- Make your changes
- Run the test suite
- Submit a pull request
This project is licensed under the Apache License, Version 2.0 - see below for details:
Copyright 2024 amazee.io
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
For the full license text, please see http://www.apache.org/licenses/LICENSE-2.0