Conversation
Add three health check endpoints to support container orchestration and cloud deployment monitoring:

- /health/live - basic liveness probe
- /health/ready - readiness probe with database connectivity check
- /health/status - detailed status information with pool metrics

These endpoints are required for Render's health check configuration during the GCP to Render migration.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Add support for the DATABASE_URL environment variable in db/engine.py to enable seamless deployment on the Render platform. Maintains full backward compatibility with existing POSTGRES_* environment variables.

Changes:

- Check for DATABASE_URL first (standard for Render/Heroku)
- Handle both postgres:// and postgresql:// URL schemes
- Fall back to individual POSTGRES_* env vars if DATABASE_URL is not set
- Automatic conversion to postgresql+pg8000:// for SQLAlchemy

This allows the application to work with Render's auto-provided DATABASE_URL while preserving the existing local development workflow.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
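The resolution order described in this commit can be sketched as a small helper. This is an illustrative sketch, not the actual code in db/engine.py; the function name and the POSTGRES_* defaults are assumptions:

```python
import os


def resolve_database_url() -> str:
    """Build a SQLAlchemy URL, preferring DATABASE_URL over POSTGRES_* vars."""
    url = os.environ.get("DATABASE_URL")
    if url:
        # Render/Heroku hand out postgres:// or postgresql:// schemes;
        # SQLAlchemy needs the pg8000 driver spelled out explicitly.
        for scheme in ("postgres://", "postgresql://"):
            if url.startswith(scheme):
                return "postgresql+pg8000://" + url[len(scheme):]
        return url
    # Fall back to individual variables for local development
    # (defaults here are illustrative).
    user = os.environ.get("POSTGRES_USER", "postgres")
    password = os.environ.get("POSTGRES_PASSWORD", "")
    host = os.environ.get("POSTGRES_HOST", "localhost")
    port = os.environ.get("POSTGRES_PORT", "5432")
    db = os.environ.get("POSTGRES_DB", "postgres")
    return f"postgresql+pg8000://{user}:{password}@{host}:{port}/{db}"
```

Either scheme spelling normalizes to the same driver-qualified URL, so the same code path serves Render, Heroku-style platforms, and local docker-compose.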
Refactor entrypoint and Dockerfile to support Render's deployment requirements while maintaining the local development workflow.

Changes:

- Switch from uvicorn to gunicorn with uvicorn workers for production
- Add $PORT environment variable support (Render requirement)
- Conditional PostgreSQL readiness check (skip on Render)
- Development mode toggle via RELOAD_ENABLED env var
- Update docker-compose.yml to enable hot-reload locally
- Dynamic port exposure in Dockerfile

Deployment modes:

- Production (Render): gunicorn with 4 workers on $PORT
- Development (local): uvicorn with --reload on port 8000

Maintains full backward compatibility with existing local setup.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
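The mode toggle described above can be expressed as a small decision helper. This is a Python sketch of the shell entrypoint's logic, not the entrypoint itself; the `main:app` module path is an assumption, and the real script would exec the returned command:

```python
import os


def server_command() -> list[str]:
    """Pick the launch command the entrypoint would exec, based on env vars."""
    port = os.environ.get("PORT", "8000")  # Render injects PORT at runtime
    if os.environ.get("RELOAD_ENABLED") == "true":
        # Development: single uvicorn process with hot reload.
        return ["uvicorn", "main:app", "--host", "0.0.0.0",
                "--port", port, "--reload"]
    # Production: gunicorn supervising async uvicorn workers.
    return ["gunicorn", "main:app",
            "-k", "uvicorn.workers.UvicornWorker",
            "--workers", "4",
            "--bind", f"0.0.0.0:{port}"]
```

Keeping the branch in one place means local docker-compose (RELOAD_ENABLED=true, port 8000) and Render ($PORT set, no reload flag) share a single entrypoint.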
Add comprehensive Render Blueprint (render.yaml) for infrastructure-as-code deployment with separate staging and production environments.

Features:

- Dual environment setup (staging and production)
- PostgreSQL 17 databases with PostGIS extension
- Environment variable groups for organized config management
- Automatic DATABASE_URL injection from managed databases
- Health check integration with /health/ready endpoint
- Pre-deploy hooks for PostGIS setup and migrations
- Smart build filtering to optimize rebuild triggers
- Auto-deploy staging, manual approval for production

Services created:

- ocotillo-api-staging (web service, auto-deploy from staging branch)
- ocotillo-api-production (web service, manual deploy from main branch)
- ocotillo-db-staging (PostgreSQL 17 + PostGIS)
- ocotillo-db-production (PostgreSQL 17 + PostGIS)

Also includes RENDER_DEPLOYMENT.md with a complete deployment guide covering initial setup, environment configuration, workflows, monitoring, troubleshooting, and cost optimization.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Add comprehensive database setup files to help kickstart new OcotilloAPI instances with schema and realistic sample data.

Files added:

- db/schema_dump.sql (721 lines) - Complete PostgreSQL schema
  * 45+ tables with PostGIS spatial support
  * Lexicon/vocabulary system for controlled terminology
  * Full-text search indexes (TSVECTOR)
  * Spatial indexes for geographic queries (SRID 4326)
  * Foreign keys, unique constraints, and check constraints
  * Audit trail columns (created_at, created_by, etc.)
  * Table comments for documentation
- db/sample_data.sql (379 lines) - Realistic sample data
  * 5 contacts (scientists from USGS, NMBG, NMED, LANL, Sandia)
  * 5 locations (NM coordinates: ABQ, Santa Fe, Las Cruces, Los Alamos, Carlsbad)
  * 5 monitoring wells with complete metadata
  * 5 sensors (pressure transducers, barometers, acoustic probes)
  * 5 deployments, field events, samples
  * 10 observations (water level and temperature)
  * 3 aquifer systems and geologic formations
  * 3 monitoring groups/projects
  * Supporting data: well screens, notes, data provenance
- db/README_DATABASE_SETUP.md (255 lines) - Setup documentation
  * Quick start instructions (Alembic vs SQL dump)
  * Schema details and table organization
  * Prerequisites and dependencies
  * Docker and Render deployment guides
  * Verification queries and troubleshooting

Usage:

- Option 1: alembic upgrade head && load sample data
- Option 2: psql -f schema_dump.sql && load sample data

These files enable rapid deployment of new database instances with production-ready schema and test data.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
- Use free plan for database and web service
- Staging only (no production)
- Reduced pool sizes for 256MB RAM limit
- Disabled authentication for testing
- Deploy from render-deploy branch

Note: Free tier has 30-day DB expiration and 15-min spin-down

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The entrypoint was waiting for db:5432 even on Render because POSTGRES_HOST wasn't set. Now checks for DATABASE_URL first.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use Render's native Python buildpack
- Simpler build/start commands
- No Docker entrypoint issues
- Python 3.13.1

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Render auto-detects the Dockerfile and ignores runtime: python in render.yaml. Removing the Docker files ensures the Blueprint uses the native Python runtime.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Procfile was overriding render.yaml's startCommand with transfers.transfer.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Alembic env.py was not checking for DATABASE_URL, always falling back to individual POSTGRES_* env vars (defaulting to localhost). Now checks DATABASE_URL first (the Render/Heroku standard) before falling back to individual env vars for backward compatibility.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
```python
        status_code=status.HTTP_503_SERVICE_UNAVAILABLE, detail=response
    )

    return response
```
Check warning

Code scanning / CodeQL: Information exposure through an exception (Medium)
Copilot Autofix
In general, to fix this class of problem you should avoid including raw exception messages or stack traces in API responses. Instead, log the detailed error server-side (using your logging framework) and return a generic, non-sensitive status or error description to the client.
For this specific code:

- In `readiness_check`, the `detail` field currently includes `"error": str(e)`. This should be replaced with a generic message such as `"error": "database connectivity check failed"` to avoid leaking `str(e)` to the user. Optionally, log the real exception with a logger.
- In `status_check`, the `except` block sets `db_status = f"error: {str(e)}"`, which is then returned to the client in `response["database"]["status"]`. Replace this with a generic description like `"error: database connectivity check failed"` and, again, optionally log the exception.

We can introduce a logging import at the top of `api/health.py` and a module logger via `logger = logging.getLogger(__name__)`. In both `except` blocks, call `logger.exception(...)` so developers still get full tracebacks in the logs, while the API only returns sanitized messages. No changes to function signatures or overall behavior (HTTP status codes, JSON structure, keys like `"status"`/`"database"`/`"checks"`) are needed; only the content of the error strings changes.
```diff
@@ -2,12 +2,15 @@
 from datetime import datetime
 from typing import Any
+import logging

 from fastapi import APIRouter, HTTPException, status
 from sqlalchemy import text

 from core.dependencies import session_dependency

+logger = logging.getLogger(__name__)
+
 router = APIRouter(prefix="/health", tags=["Health"])

@@ -47,12 +44,13 @@
             "checks": {"db_connection": True},
         }
     except Exception as e:
+        logger.exception("Readiness database connectivity check failed")
         raise HTTPException(
             status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
             detail={
                 "status": "not_ready",
                 "database": "disconnected",
-                "error": str(e),
+                "error": "database connectivity check failed",
                 "checks": {"db_connection": False},
             },
         )
@@ -96,7 +89,8 @@
         pool_info = {k: v for k, v in pool_info.items() if v is not None}

     except Exception as e:
-        db_status = f"error: {str(e)}"
+        logger.exception("Status database connectivity check failed")
+        db_status = "error: database connectivity check failed"

     response = {
         "status": "healthy" if db_connected else "degraded",
```
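The sanitize-and-log pattern the autofix describes can be exercised in isolation. This standalone sketch uses hypothetical names (`readiness_detail`, `check_db`) and is not the code in api/health.py; it shows the client-facing payload staying generic while the full traceback goes to server logs:

```python
import logging

logger = logging.getLogger(__name__)

GENERIC_ERROR = "database connectivity check failed"


def readiness_detail(check_db) -> dict:
    """Run a connectivity check and build a client-safe response body."""
    try:
        check_db()
        return {"status": "ready", "checks": {"db_connection": True}}
    except Exception:
        # Full traceback is recorded server-side only.
        logger.exception("Readiness database connectivity check failed")
        return {
            "status": "not_ready",
            "database": "disconnected",
            # Never str(e) here: it may leak hostnames, DSNs, or credentials.
            "error": GENERIC_ERROR,
            "checks": {"db_connection": False},
        }
```

The JSON keys and shapes mirror the diff above, so swapping the generic string in changes only the error text, not the API contract.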