94 changes: 94 additions & 0 deletions .env.example
@@ -0,0 +1,94 @@
# ============================================================
# CodeTrans — Environment Configuration
# ============================================================

# Backend port
BACKEND_PORT=5001

# ============================================================
# Inference Provider
# ============================================================
# "remote" — Cloud or enterprise OpenAI-compatible API (e.g. CodeLlama via gateway)
# "ollama" — Local Ollama running natively on the host machine (recommended for Mac)
INFERENCE_PROVIDER=remote

# ============================================================
# Option A: Remote OpenAI-compatible API (INFERENCE_PROVIDER=remote)
# ============================================================
# INFERENCE_API_ENDPOINT: Base URL of your inference service (no /v1 suffix)
# - GenAI Gateway: https://genai-gateway.example.com
# - APISIX Gateway: https://apisix-gateway.example.com/CodeLlama-34b-Instruct-hf
INFERENCE_API_ENDPOINT=https://your-api-endpoint.com/deployment
INFERENCE_API_TOKEN=your-pre-generated-token-here
INFERENCE_MODEL_NAME=codellama/CodeLlama-34b-Instruct-hf

# ============================================================
# Option B: Ollama — native host inference (INFERENCE_PROVIDER=ollama)
# ============================================================
#
# IMPORTANT — Why Ollama runs on the host, NOT in Docker:
# On macOS (Apple Silicon / M-series), running Ollama as a Docker container
# bypasses Metal GPU acceleration. The model falls back to CPU-only inference
# which is dramatically slower. Ollama must be installed natively so the Metal
# Performance Shaders (MPS) backend is used for hardware-accelerated inference.
#
# Setup:
# 1. Install Ollama: https://ollama.com/download
# 2. Pull your model (see options below)
# 3. Ollama starts automatically; confirm it is running:
# curl http://localhost:11434/api/tags
# 4. Set the variables below in your .env
#
# The backend container reaches host-side Ollama via the special DNS name
# `host.docker.internal` which Docker Desktop resolves to the Mac host.
# (On Linux with Docker Engine this requires the extra_hosts entry in docker-compose.yaml,
# which is already configured.)
#
# --- Production / high-quality translation ---
# INFERENCE_PROVIDER=ollama
# INFERENCE_API_ENDPOINT=http://host.docker.internal:11434
# INFERENCE_MODEL_NAME=codellama:34b
# ollama pull codellama:34b # ~20 GB, best quality
#
# --- Testing / SLM performance benchmarking ---
# INFERENCE_PROVIDER=ollama
# INFERENCE_API_ENDPOINT=http://host.docker.internal:11434
# INFERENCE_MODEL_NAME=codellama:7b
# ollama pull codellama:7b # ~4 GB, fast — use this for gauging SLM perf
#
# --- Other recommended code models ---
# ollama pull deepseek-coder:6.7b # ~4 GB, strong at code tasks
# ollama pull qwen2.5-coder:7b # ~4 GB, excellent multilingual code
# ollama pull codellama:13b # ~8 GB, good balance of speed vs quality
#
# Note: INFERENCE_API_TOKEN is not required when using Ollama.
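
For reference, the `extra_hosts` mapping mentioned above looks like this in a Compose file (a sketch of the entry already configured in this repo's docker-compose.yaml; the `backend` service name is illustrative, not taken from this repo):

```yaml
services:
  backend:
    extra_hosts:
      # Lets the container resolve the host machine on Linux Docker Engine;
      # Docker Desktop on macOS resolves host.docker.internal automatically.
      - "host.docker.internal:host-gateway"
```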

# ============================================================
# LLM Settings
# ============================================================
LLM_TEMPERATURE=0.2
LLM_MAX_TOKENS=4096
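
These two values are typically passed straight through to the inference request. A minimal sketch of how they might map onto an OpenAI-compatible chat-completions payload (the payload shape is the standard OpenAI schema; the surrounding client code is assumed, not taken from this repo):

```python
import os

# Illustrative request body; env vars fall back to the defaults in this file.
payload = {
    "model": os.environ.get(
        "INFERENCE_MODEL_NAME", "codellama/CodeLlama-34b-Instruct-hf"
    ),
    "temperature": float(os.environ.get("LLM_TEMPERATURE", "0.2")),
    "max_tokens": int(os.environ.get("LLM_MAX_TOKENS", "4096")),
    "messages": [{"role": "user", "content": "Translate this code to Go."}],
}
```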

# ============================================================
# Code Translation Settings
# ============================================================
MAX_CODE_LENGTH=8000
MAX_FILE_SIZE=10485760
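
A sketch of how a backend might enforce these limits; the `check_submission` helper is hypothetical, and only the variable names and defaults come from this file:

```python
import os

MAX_CODE_LENGTH = int(os.environ.get("MAX_CODE_LENGTH", "8000"))    # characters
MAX_FILE_SIZE = int(os.environ.get("MAX_FILE_SIZE", "10485760"))    # bytes (10 MiB)

def check_submission(source: str, file_size: int) -> None:
    """Reject inputs that exceed the configured limits (illustrative helper)."""
    if len(source) > MAX_CODE_LENGTH:
        raise ValueError(f"code exceeds {MAX_CODE_LENGTH} characters")
    if file_size > MAX_FILE_SIZE:
        raise ValueError(f"file exceeds {MAX_FILE_SIZE} bytes")
```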

# ============================================================
# CORS Configuration
# ============================================================
CORS_ALLOW_ORIGINS=["http://localhost:5173", "http://localhost:3000"]
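
Note the value is a JSON array, so it can be parsed directly. A sketch (frameworks such as pydantic-settings parse list-typed env vars this way; whether this backend uses one is an assumption):

```python
import json
import os

# Default mirrors the value in this file.
raw = os.environ.get(
    "CORS_ALLOW_ORIGINS",
    '["http://localhost:5173", "http://localhost:3000"]',
)
origins = json.loads(raw)
```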

# ============================================================
# Local URL Endpoint
# ============================================================
# Only needed if your remote API endpoint is a private domain mapped in /etc/hosts.
# Otherwise leave as "not-needed".
LOCAL_URL_ENDPOINT=not-needed

# ============================================================
# SSL Verification
# ============================================================
# Set to false only for development with self-signed certificates.
VERIFY_SSL=true
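
Env values are strings, so "false" must be parsed explicitly; `bool("false")` is truthy in Python. A minimal sketch (the `env_bool` helper name is illustrative, not taken from this repo):

```python
import os

def env_bool(name: str, default: bool = True) -> bool:
    """Parse a boolean-ish env var; unset falls back to the default."""
    return os.environ.get(name, str(default)).strip().lower() in ("1", "true", "yes")

verify_ssl = env_bool("VERIFY_SSL")
```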
104 changes: 104 additions & 0 deletions .github/workflows/code-scans.yaml
@@ -0,0 +1,104 @@
name: SDLE Scans

on:
  workflow_dispatch:
    inputs:
      PR_number:
        description: 'Pull request number'
        required: true
  push:
    branches: [ main ]
  pull_request:
    types: [opened, synchronize, reopened, ready_for_review]

concurrency:
  group: sdle-${{ github.event.pull_request.number || github.ref }}
  cancel-in-progress: true

jobs:

  # -----------------------------
  # 1) Trivy Scan
  # -----------------------------
  trivy_scan:
    name: Trivy Vulnerability Scan
    runs-on: ubuntu-latest
    env:
      TRIVY_REPORT_FORMAT: table
      TRIVY_SCAN_TYPE: fs
      TRIVY_SCAN_PATH: .
      TRIVY_EXIT_CODE: '1'
      TRIVY_VULN_TYPE: os,library
      TRIVY_SEVERITY: CRITICAL,HIGH
    steps:
      - uses: actions/checkout@v4

      - name: Create report directory
        run: mkdir -p trivy-reports

      - name: Run Trivy FS Scan
        uses: aquasecurity/trivy-action@0.24.0
        with:
          scan-type: 'fs'
          scan-ref: '.'
          scanners: 'vuln,misconfig,secret,license'
          ignore-unfixed: true
          format: 'table'
          exit-code: '1'
          output: 'trivy-reports/trivy_scan_report.txt'
          vuln-type: 'os,library'
          severity: 'CRITICAL,HIGH'

      - name: Upload Trivy Report
        # Run even when the scan step fails, so the report is available
        # precisely when there are findings to review.
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: trivy-report
          path: trivy-reports/trivy_scan_report.txt

      - name: Show Trivy Report in Logs
        if: failure()
        run: |
          echo "========= TRIVY FINDINGS ========="
          cat trivy-reports/trivy_scan_report.txt
          echo "================================="

  # -----------------------------
  # 2) Bandit Scan
  # -----------------------------
  bandit_scan:
    name: Bandit security scan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          submodules: 'recursive'
          fetch-depth: 0

      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"

      - name: Install Bandit
        run: pip install bandit

      - name: Create Bandit configuration
        shell: bash
        run: |
          cat > .bandit << 'EOF'
          [bandit]
          exclude_dirs = tests,test,venv,.venv,node_modules
          skips = B101
          EOF

      - name: Run Bandit scan
        # Write the HTML report first: Bandit exits non-zero when it finds
        # issues, which would otherwise skip the second command.
        run: |
          bandit -r . -ll -iii -f html -o bandit-report.html
          bandit -r . -ll -iii -f screen

      - name: Upload Bandit Report
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: bandit-report
          path: bandit-report.html
          retention-days: 30
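
The heredoc above writes an INI-style `.bandit` file. The same settings could also live in `pyproject.toml` and be selected with `bandit -c pyproject.toml` (a sketch; assumes a Bandit version with TOML support and the `pip install bandit[toml]` extra):

```toml
[tool.bandit]
exclude_dirs = ["tests", "test", "venv", ".venv", "node_modules"]
skips = ["B101"]
```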
14 changes: 14 additions & 0 deletions .gitignore
@@ -63,10 +63,24 @@ temp/
# Python type checker cache
.mypy_cache/

# Testing
.pytest_cache/
.coverage
htmlcov/
.tox/
.cache/

# Security scan outputs
bandit-*.html
bandit-*.txt

# Local project references (not part of this repo)
Audify/

# Langfuse observability stack (local testing only, never commit)
langfuse/
api/services/observability.py

# Reference documents (local working files, not part of this repo)
*.docx
*.docx.pdf
27 changes: 27 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,27 @@
# Contributing to CodeTrans

Thank you for your interest in contributing to **CodeTrans — AI-Powered Code Translation** by Cloud2 Labs.

## Scope of Contributions

Appropriate contributions include:

- Documentation improvements
- Bug fixes
- Reference architecture enhancements
- Additional LLM provider configurations
- Educational clarity and examples

Major feature additions or architectural changes (e.g., new inference backends,
new supported languages, UI framework changes) require prior discussion with the
Cloud2 Labs maintainers.

## Contribution Guidelines

- Follow existing coding and documentation standards
- Avoid production-specific assumptions
- Do not introduce sensitive, proprietary, or regulated data into examples or tests
- Ensure any new environment variables are documented in `.env.example` and the README

By submitting a contribution, you agree that your work may be used, modified,
and redistributed by Cloud2 Labs under the terms of the project license.
21 changes: 21 additions & 0 deletions DISCLAIMER.md
@@ -0,0 +1,21 @@
# Disclaimer

This blueprint is provided by Cloud2 Labs "as is" and "as available" for
educational and demonstration purposes only.

The **CodeTrans — AI-Powered Code Translation** blueprint is a reference
implementation and does not constitute a production-ready system or
regulatory-compliant solution.

This software is not designed to provide professional software engineering,
legal, or compliance advice. All code translations generated by this blueprint
require independent human review and validation before use in any production
system.

Cloud2 Labs does not assume responsibility or liability for any data loss,
security incident, service disruption, regulatory non-compliance, or adverse
outcome resulting from the use or modification of this blueprint.

Do not submit confidential, proprietary, or sensitive source code to third-party
inference API providers (OpenAI, Groq, OpenRouter, etc.) without first reviewing
their data handling, privacy, and retention policies.
21 changes: 21 additions & 0 deletions LICENSE.md
@@ -0,0 +1,21 @@
MIT License

© 2026 cld2labs

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.