
Implement license key generation with batch metadata and error handling #48

Open
saudzahirr wants to merge 3 commits into main from feature/10-license-key-generator

Conversation

saudzahirr (Member) commented Mar 9, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added license activation code generation and validation
    • Introduced device fingerprint identification for node-locked licenses
    • Added license signature verification for authenticity validation
  • Bug Fixes

    • Improved database connectivity handling during startup
  • Refactor

    • Enhanced error handling for license generation failures
    • Improved JWT token parameter flexibility
  • Tests

    • Expanded test coverage with comprehensive authentication and licensing test suites

saudzahirr self-assigned this Mar 9, 2026
saudzahirr added the enhancement (New feature or request), feature, and sprint-2 labels Mar 9, 2026
coderabbitai bot commented Mar 9, 2026

📝 Walkthrough

This PR introduces comprehensive licensing infrastructure with activation codes and Ed25519 signatures. It adds domain models for licenses, activation codes, and device fingerprints; implements Crockford Base32 encoding for activation codes; introduces license signature generation and verification; refactors test infrastructure with session-scoped database fixtures; adds new security utilities for Ed25519 key handling; and updates database schema to rename license_key to activation_code. Removes legacy auth unit/integration tests in favor of new API-level integration tests.

Changes

Cohort / File(s) Summary
Exception & Error Handling
backend/app/core/exceptions.py, backend/app/schemas/response.py, backend/app/core/exception_handlers.py
Added LicenseKeyGenerationError exception and LICENSE_KEY_GENERATION_ERROR enum member; renamed _build_error_details to build_error_details (public); simplified validation error field construction via or operator.
Security & Cryptography
backend/app/core/ed25519.py, backend/app/core/security.py
New Ed25519 key loading module with load_private_ed25519_key and load_public_ed25519_key; updated JWT/password hashing internals: renamed ALGORITHM to JWT_ALGORITHM and password_hash to PASSWORD_HASH; expanded token creation signatures to accept str | UUID for vendor_id.
Domain Models
backend/app/domain/activation.py, backend/app/domain/fingerprint.py, backend/app/domain/license.py
Introduced ActivationCode with Base32 Crockford encoding/decoding and UUIDv7 round-tripping; Device fingerprinting with SHA-256 caching; BaseLicense and NodeLockedLicense with canonical JSON serialization and Ed25519 signature support via LicenseFile.
Encoding Implementation
backend/app/internal/base32_crockford.py
New Crockford Base32 implementation with encode, decode, normalize supporting checksum validation, hyphen grouping, and ambiguous character normalization.
API Routes & Routing
backend/app/api/routes/auth.py, backend/app/api/routes/login.py
Removed explicit response_model declarations from auth endpoint decorators; deleted login.py router module (consolidation with auth routes).
License Services
backend/app/services/license_gen.py
New license signature verification service exposing verify_license for Ed25519 signature validation with LicenseKeyGenerationError on failure.
Database Schema
migrations/03_app.sql, migrations/04_audit.sql, migrations/07_audit_triggers.sql, migrations/tests/helpers.py, migrations/tests/test_rls_isolation.py
Renamed node_locked_license_data."license_key" column to "activation_code"; updated view projections, audit triggers, and test helpers to reference activation_code; updated audit documentation.
Core Utilities
backend/app/crud/vendor.py, backend/app/main.py, backend/app/pre_start.py
Refactored SQL strings to multi-line triple-quoted format; changed ConnectionPool initialization to include open=True; updated retry condition logic in pre-start initialization.
Test Infrastructure
backend/tests/conftest.py, backend/tests/__init__.py
Replaced module-scoped Testcontainer with session-scoped pooled database (pg_container, migrated_db_pool); added session-scoped app_settings and faker; introduced PgReadyWaitStrategy and transactional db_conn fixture; added test-only /tests/protected-test endpoint; removed module-level environment variable bootstrapping.
Unit Tests
backend/tests/core/test_exceptions.py, backend/tests/core/test_config.py, backend/tests/core/test_ed25519.py, backend/tests/core/test_security.py, backend/tests/core/test_exception_handlers.py
Extended exception matrix with LicenseKeyGenerationError; switched parametrize decorators to explicit tuple style; replaced hardcoded literals with faker-generated values; added tests for JWT/password hashing round-trips, Settings DSN construction, Ed25519 key loading (positive/negative), and error detail normalization.
Domain Unit Tests
backend/tests/domain/test_activation.py, backend/tests/domain/test_fingerprint.py, backend/tests/domain/test_license.py
Validates ActivationCode round-tripping, UUIDv7 format enforcement, length constraints; verifies Device fingerprint determinism and caching; confirms BaseLicense/NodeLockedLicense validation (expiry ordering, session limits), canonical JSON sorting, Ed25519 key type enforcement.
Internal & Service Unit Tests
backend/tests/internal/test_base32_crockford.py, backend/tests/services/test_license_gen.py
Comprehensive Crockford Base32 encode/decode coverage with parametrized inputs, checksum validation, error paths, normalization; license signature verification with real Ed25519 keys and tampered signature rejection.
CRUD & Schema Unit Tests
backend/tests/schemas/test_auth.py, backend/tests/crud/test_vendor.py, backend/tests/schemas/test_response.py
Added auth schema validation (payload acceptance, error location assertions, token pair defaults); vendor CRUD integration tests (case-insensitive email lookup, duplicate prevention, soft-delete filtering); updated response schema tests with parametrized error/validation assertions and faker integration.
API Integration Tests
backend/tests/api/routes/test_auth.py, backend/tests/api/routes/test_health.py, backend/tests/api/test_deps.py, backend/tests/api/test_main.py, backend/tests/api/test_middlewares.py
New comprehensive auth flow tests (signup 201/409/422, login 200/401, refresh 200/401, protected route access 401/200); health endpoint integration with context manager pattern; dependency injection tests verifying get_db, get_settings, get_rls_cursor, get_current_vendor_id behaviors; router registration validation; request-id middleware preservation/generation.
Service Integration Tests
backend/tests/services/test_auth.py
New auth service integration tests covering signup persistence, duplicate email conflict, login token issuance, credential validation, password hash upgrade detection, refresh token issuance, and multiple invalid refresh token scenarios.
Startup & Lifespan Tests
backend/tests/test_main.py, backend/tests/test_pre_start.py
Lifespan tests verifying pool initialization, state assignment, connectivity checks, error handling with proper cleanup; startup tests for pool construction and error propagation.
Legacy Test Removals
backend/tests/unit/test_auth.py, backend/tests/integration/test_auth.py
Deleted unit and integration auth test modules (functionality replicated in new API/service layer tests).
Dependencies & Configuration
backend/pyproject.toml, .github/workflows/code_test.yml, .github/workflows/sonarcloud.yml, .gitignore
Removed python-multipart and pydantic dependencies; added cryptography, base58, uuid6; updated Postgres driver from psycopg[binary] to psycopg[binary,pool]; reorganized dev/test groups with test as new group; expanded Ruff linting rules (FAST, ASYNC, C4, LOG, G, PT, FURB); updated CI workflows to use --group test and set UV_NO_DEV=true; added cache/database directory ignores.
Parametrization Style Updates
migrations/tests/test_constraints.py, migrations/tests/test_indexes.py, migrations/tests/test_partitioning.py, migrations/tests/test_privileges.py, migrations/tests/test_rls_structure.py, migrations/tests/test_seed_data.py
Standardized pytest.mark.parametrize decorators to use explicit tuple syntax instead of comma-separated string literals.
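The change summary above mentions that BaseLicense serializes to canonical JSON before Ed25519 signing. A stdlib-only sketch of what such a serializer typically looks like (the field names below are illustrative, not taken from the PR):

```python
import json

def canonical_json(data: dict) -> str:
    """Serialize deterministically: sorted keys, no insignificant whitespace.

    Signing and verification must hash byte-identical payloads, so the
    serialization has to be stable regardless of dict insertion order.
    """
    return json.dumps(data, sort_keys=True, separators=(",", ":"), ensure_ascii=True)

# Two dicts with different insertion order produce identical bytes.
a = canonical_json({"vendor_id": "v1", "license_id": "l1"})
b = canonical_json({"license_id": "l1", "vendor_id": "v1"})
```

Because both the signer and the verifier call the same serializer, any accidental whitespace or key-order difference would otherwise invalidate every signature.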

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant API as Auth API
    participant Service as Auth Service
    participant DB as Database
    participant Security as Security Util

    Client->>API: POST /signup (email, password, client_id)
    API->>Service: signup(email, password, client_id)
    Service->>Security: get_password_hash(password)
    Security-->>Service: hashed_password
    Service->>DB: create_vendor(email, hashed_password)
    DB-->>Service: vendor_id
    Service-->>API: Vendor
    API-->>Client: 201 Created

    Client->>API: POST /login (email, password)
    API->>Service: login(email, password)
    Service->>DB: get_vendor_by_email(email)
    DB-->>Service: vendor + stored_hash
    Service->>Security: verify_password(password, stored_hash)
    Security-->>Service: valid, upgraded_hash?
    Service->>Security: create_access_token(vendor_id)
    Service->>Security: create_refresh_token(vendor_id)
    Security-->>Service: access_token, refresh_token
    Service-->>API: TokenPair
    API-->>Client: 200 OK

    Client->>API: POST /refresh (refresh_token)
    API->>Security: decode_token(refresh_token)
    Security-->>API: vendor_id, token_type
    API->>Service: refresh(refresh_payload)
    Service->>Security: create_access_token(vendor_id)
    Service->>Security: create_refresh_token(vendor_id)
    Security-->>Service: new_access_token, new_refresh_token
    Service-->>API: TokenPair
    API-->>Client: 200 OK

    Client->>API: GET /protected (Authorization: Bearer access_token)
    API->>Security: decode_token(access_token)
    Security-->>API: vendor_id (if valid)
    API->>API: get_current_vendor_id() yields vendor_id
    API-->>Client: 200 Protected Resource

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~75 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 72.92% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Title check ❓ Inconclusive The PR title 'Implement license key generation with batch metadata and error handling' describes the overall objective but is misleading about the primary implementation scope. The changeset focuses on introducing license domain models, activation codes, and supporting infrastructure, but the title emphasizes 'batch metadata' and 'error handling' which are not substantial features in this PR. The actual work involves architectural refactoring, domain model introduction, test infrastructure improvements, and migration updates. Consider revising the title to better reflect the primary changes: e.g., 'Add license domain models, activation codes, and test infrastructure' or 'Introduce license generation domain layer with comprehensive test suite' would more accurately summarize the main deliverables.
✅ Passed checks (1 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



codecov bot commented Mar 9, 2026

Codecov Report

❌ Patch coverage is 99.49367% with 6 lines in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
backend/tests/conftest.py 94.50% 5 Missing ⚠️
backend/tests/test_main.py 98.41% 1 Missing ⚠️


coderabbitai bot left a comment

Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/services/license_key_generator.py`:
- Around line 61-73: GeneratedLicenseKey.metadata (a BatchMetadata containing
batch_id, campaign, issued_by) is never persisted to the DB so batch metadata is
lost on save; update the license persistence logic that inserts/updates license
rows to include metadata.batch_id, metadata.campaign, and metadata.issued_by
into the new columns added by the migration. Locate where GeneratedLicenseKey
instances are stored (the code path that builds INSERT/UPDATE statements or the
repository/DAO methods handling license writes) and add mapping from
GeneratedLicenseKey.metadata -> DB fields (batch_id, campaign, issued_by),
ensure null-safe defaults if metadata is missing, update any SQL/ORM parameter
lists and tests accordingly, and keep GeneratedLicenseKey and BatchMetadata
types as the source of truth for these values.
- Around line 191-193: The debug log is emitting raw license credentials via
logger.debug("Batch key %d/%d generated: %s", i + 1, count, generated.key);
remove the raw generated.key from logs and instead log only non-secret context
(batch index and count) and a masked or hashed fingerprint; for example, compute
a short deterministic fingerprint of generated.key (e.g., SHA-256 hex prefix) or
mask all but 4 chars, then replace generated.key in the logger.debug call with
that fingerprint/mask so no full key is ever written to logs.
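The masking fix described above could be sketched as follows. Only the hash-prefix idea comes from the review comment; `key_fingerprint` and `log_batch_progress` are hypothetical helper names:

```python
import hashlib
import logging

logger = logging.getLogger("license_gen")

def key_fingerprint(key: str, length: int = 8) -> str:
    """Short deterministic SHA-256 hex prefix: safe to log, not reversible."""
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:length]

def log_batch_progress(i: int, count: int, key: str) -> None:
    # Never log the raw key; only the index, total, and a hash prefix.
    logger.debug("Batch key %d/%d generated: sha256=%s",
                 i + 1, count, key_fingerprint(key))
```

The fingerprint is still deterministic, so support staff can correlate a log line with a specific key they hold, without the log itself leaking the credential.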

In `@migrations/06_license_key_updates.sql`:
- Around line 30-31: Add a preflight validation before the ALTER COLUMN change
to VARCHAR(24) on app."node_locked_license_data"."license_key": run a SELECT to
detect any rows where length("license_key") > 24 and fail the migration with a
clear error (or output offending keys) instead of proceeding; if none are found,
proceed with ALTER COLUMN; reference the table name
app."node_locked_license_data", the column "license_key", and the ALTER COLUMN
statement so reviewers can find and verify the guard.

In `@migrations/down/06_license_key_updates_down.sql`:
- Around line 21-23: The rollback currently only reverts the column type for
app."node_locked_license_data"."license_key" but does not restore the original
column comment; add a statement to reset the column comment in
migrations/down/06_license_key_updates_down.sql (use a COMMENT ON COLUMN for
app."node_locked_license_data"."license_key") to the prior text (i.e., remove
the forward-migration note about VARCHAR(24) and restore the original comment
content) so schema docs are correct after rollback.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: efc78644-8dca-4a22-98da-ffc5e3c8b23f

📥 Commits

Reviewing files that changed from the base of the PR and between b4cdcf9 and fed2fc2.

📒 Files selected for processing (7)
  • backend/app/core/exceptions.py
  • backend/app/schemas/response.py
  • backend/app/services/license_key_generator.py
  • backend/tests/core/test_exceptions.py
  • backend/tests/unit/test_license_key_generator.py
  • migrations/06_license_key_updates.sql
  • migrations/down/06_license_key_updates_down.sql

coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.codacy.yml:
- Around line 7-9: The change disables tsqllint in .codacy.yml which removes
linting for SQL migration files; restore SQL linting by re-enabling tsqllint and
adding a configuration block that includes the migrations/ and migrations/down/
paths, or remove the tsqllint: enabled: false entry and instead add an
alternative linter configuration (for example sqlfluff) targeted at the same
directories; update .codacy.yml to reference the chosen linter and ensure it
covers the 12 migration files so Codacy will validate them again.

In `@backend/tests/unit/test_license_key_generator.py`:
- Around line 71-80: Replace the probabilistic uniqueness tests with
deterministic, API-contract-aligned tests by patching the underlying key
generator used by generate_license_key to first yield a colliding value then a
unique value; update test_generated_keys_are_unique to patch the generator (or
its random/uuid helper) to return a fixed sequence of distinct keys and assert
both are produced and different, and update
test_collision_with_existing_keys_triggers_retry to patch the helper to return
the same colliding key first and a different key second, call
generate_license_key(existing_keys={colliding_key}), then assert the returned
key is the second value and that the patched generator was invoked twice
(verifying the retry path in generate_license_key).
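The deterministic retry test suggested above can be sketched like this; `generate_license_key` and its `key_factory` parameter are simplified stand-ins for the service's real API, not its actual signature:

```python
from unittest import mock

def generate_license_key(existing_keys=frozenset(), *, key_factory, max_attempts=5):
    """Draw keys from key_factory until one avoids existing_keys (sketch)."""
    for _ in range(max_attempts):
        key = key_factory()
        if key not in existing_keys:
            return key
    raise RuntimeError("could not generate a unique key")

def test_collision_triggers_retry():
    # Deterministic: the first draw collides, the second succeeds.
    factory = mock.Mock(side_effect=["DUPKEY", "NEWKEY"])
    key = generate_license_key(existing_keys={"DUPKEY"}, key_factory=factory)
    assert key == "NEWKEY"
    assert factory.call_count == 2  # proves the retry path ran exactly once
```

Patching the generator this way turns a probabilistic "keys are probably unique" assertion into an exact contract check on the retry loop.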

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: ca8f76a6-5755-4492-9fe0-982d0d53b68b

📥 Commits

Reviewing files that changed from the base of the PR and between fed2fc2 and e472ce2.

📒 Files selected for processing (2)
  • .codacy.yml
  • backend/tests/unit/test_license_key_generator.py

saudzahirr requested a review from laraibg786 on March 9, 2026 07:56
coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
migrations/03_app.sql (1)

174-179: ⚠️ Potential issue | 🟠 Major

Use a forward migration for this type change.

This only changes the bootstrap definition for fresh installs. Any database that already created app."node_locked_license_data" will hit duplicate_table and keep the old license_key type, so upgrade environments drift from new ones. Add a new numbered ALTER TABLE ... ALTER COLUMN migration and handle rows longer than 24 before tightening the type.

🛠️ Safer migration strategy
-        "license_key"             VARCHAR(24) NOT NULL UNIQUE,
+        "license_key"             TEXT NOT NULL UNIQUE,
ALTER TABLE app."node_locked_license_data"
    ALTER COLUMN "license_key" TYPE VARCHAR(24);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@migrations/03_app.sql` around lines 174 - 179, The migration currently only
changes the CREATE TABLE for app."node_locked_license_data" and will not alter
existing DBs; add a new forward migration file (next sequence number) that runs
an ALTER TABLE app."node_locked_license_data" ALTER COLUMN "license_key" TYPE
VARCHAR(24) and before changing the type handle existing rows longer than 24
(e.g., trim, validate, or fail with a clear message) so upgrades don't break;
reference the table name app."node_locked_license_data" and column "license_key"
in the new migration and include a safe remediation step for overlength values
before tightening the type.
♻️ Duplicate comments (1)
.codacy.yml (1)

3-5: ⚠️ Potential issue | 🟠 Major

Don't drop SQL lint coverage from migrations/**.

include_paths still scopes Codacy to migrations/**, but tsqllint now excludes that same tree. With no replacement SQL engine configured here, the migration set stops being linted again.

#!/bin/bash
set -euo pipefail

# Verify whether SQL files still have any linter coverage configured.
echo "SQL files under migrations/:"
fd -e sql . migrations || true

echo
echo "Repository references to SQL linters:"
rg -n --hidden -S 'tsqllint|sqlfluff|sql[- ]lint' || true

Expected result: either migrations/** remains covered by tsqllint, or another SQL linter is configured for those files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.codacy.yml around lines 3 - 5, The Codacy config currently excludes
migrations/** from tsqllint (tsqllint.exclude_paths) while include_paths still
targets migrations/**, effectively removing SQL linting for migrations; either
remove "migrations/**" from the tsqllint.exclude_paths entry so tsqllint
continues to cover migrations, or instead add/configure an alternative SQL
linter block (e.g., sqlfluff or sql-lint) for the migrations/** path in
.codacy.yml so those files remain linted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/services/license_key_generator.py`:
- Around line 267-268: The code currently decodes license_file.signature with
base64.b64decode which tolerates junk; update the decoding in
license_key_generator.py to call base64.b64decode(license_file.signature,
validate=True) and then check the resulting signature length against the
expected signature byte length (the value used by the verify() call or the
public key algorithm) and raise/reject if lengths mismatch or decoding fails,
before calling verify(payload, signature) (references: _canonical_json, payload,
license_file.signature, base64.b64decode, verify).
- Around line 77-84: Replace the loose string types on the LicenseData model
with stricter types and validations: import UUID from uuid and change license_id
and vendor_id to type UUID, and set activation_code to a constrained str
validated against ACTIVATION_CODE_PATTERN (e.g., using a regex constraint or
Pydantic Field with regex); ensure issued_at, expires_at, and metadata remain
unchanged. Update any places that construct LicenseData (tests like
test_license_data_default_issued_at, test_license_data_optional_fields,
test_license_data_with_metadata, test_license_data_with_expiry) to pass UUID
objects (e.g., uuid.uuid4()) for license_id and vendor_id and ensure
activation_code fixtures conform to ACTIVATION_CODE_PATTERN so invalid payloads
are rejected before signing.
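The strict-decoding fix from the first inline comment could look like the following sketch; `decode_signature` is a hypothetical helper, and the 64-byte constant reflects the fixed Ed25519 signature size:

```python
import base64
import binascii

ED25519_SIG_LEN = 64  # Ed25519 signatures are always exactly 64 bytes

def decode_signature(sig_b64: str) -> bytes:
    """Strictly decode a base64 signature and enforce its length.

    validate=True rejects non-alphabet characters instead of silently
    discarding them, and the length check catches truncated or padded
    signatures before the comparatively expensive verify() call.
    """
    try:
        raw = base64.b64decode(sig_b64, validate=True)
    except binascii.Error as exc:
        raise ValueError("signature is not valid base64") from exc
    if len(raw) != ED25519_SIG_LEN:
        raise ValueError(f"expected {ED25519_SIG_LEN}-byte signature, got {len(raw)}")
    return raw
```

Failing fast here also gives callers a clear ValueError for malformed input, distinct from a genuine signature mismatch raised later by verification.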

---

Outside diff comments:
In `@migrations/03_app.sql`:
- Around line 174-179: The migration currently only changes the CREATE TABLE for
app."node_locked_license_data" and will not alter existing DBs; add a new
forward migration file (next sequence number) that runs an ALTER TABLE
app."node_locked_license_data" ALTER COLUMN "license_key" TYPE VARCHAR(24) and
before changing the type handle existing rows longer than 24 (e.g., trim,
validate, or fail with a clear message) so upgrades don't break; reference the
table name app."node_locked_license_data" and column "license_key" in the new
migration and include a safe remediation step for overlength values before
tightening the type.

---

Duplicate comments:
In @.codacy.yml:
- Around line 3-5: The Codacy config currently excludes migrations/** from
tsqllint (tsqllint.exclude_paths) while include_paths still targets
migrations/**, effectively removing SQL linting for migrations; either remove
"migrations/**" from the tsqllint.exclude_paths entry so tsqllint continues to
cover migrations, or instead add/configure an alternative SQL linter block
(e.g., sqlfluff or sql-lint) for the migrations/** path in .codacy.yml so those
files remain linted.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 288331b3-d154-469b-b539-121e2fa00e95

📥 Commits

Reviewing files that changed from the base of the PR and between e472ce2 and b004e81.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
  • .codacy.yml
  • backend/app/services/license_key_generator.py
  • backend/pyproject.toml
  • backend/tests/integration/test_auth.py
  • backend/tests/services/__init__.py
  • backend/tests/services/test_license_key_generator.py
  • backend/tests/unit/__init__.py
  • backend/tests/unit/test_auth.py
  • migrations/03_app.sql
💤 Files with no reviewable changes (2)
  • backend/tests/unit/test_auth.py
  • backend/tests/integration/test_auth.py

saudzahirr requested a review from laraibg786 on March 11, 2026 04:47
- add preflight check for license key length in migration scripts

- fix migration tests
- update backend tests and add docstrings
- update ruff configuration
- dependency cleanup and add missing deps
laraibg786 force-pushed the feature/10-license-key-generator branch from b004e81 to b16931e on March 19, 2026 11:07
sonarqubecloud bot commented
coderabbitai bot left a comment

Actionable comments posted: 21

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.gitignore:
- Around line 19-21: Remove the unrelated patterns from .gitignore: delete the
lines for ".fastembed_cache", ".amdb", and ".database" since FastEmbed isn't a
dependency and those artifacts aren't used by the project; keep only
repository-specific, relevant ignore entries (e.g., OS, editor, build artifacts,
or any actual local DB files if required), then commit the updated .gitignore so
the unnecessary cache/database patterns are no longer ignored.

In `@backend/app/core/ed25519.py`:
- Around line 17-30: The function load_public_ed25519_key has an inconsistent
type hint for its pem parameter (bytes) compared to load_private_ed25519_key
(Buffer); update load_public_ed25519_key's signature to accept the same Buffer
type as load_private_ed25519_key (or the equivalent union like bytes | bytearray
| memoryview) so both loaders use consistent input types, and keep the existing
runtime behavior (call load_pem_public_key and the isinstance check against
Ed25519PublicKey).

In `@backend/app/domain/activation.py`:
- Around line 52-55: The check that currently raises TypeError when uuid.version
!= 7 should instead raise ValueError because the UUID is the correct type but
has an invalid value; update the raise in the function/method performing the
version check (the block referencing uuid.version != 7) to raise ValueError with
the same descriptive message, and update any tests that assert a TypeError to
expect ValueError instead.
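The distinction the comment draws (wrong type vs. right type with wrong value) can be illustrated with a small sketch; `require_uuid7` is a hypothetical helper, not the PR's actual validator:

```python
from uuid import UUID

def require_uuid7(value: UUID) -> UUID:
    """TypeError for non-UUID inputs; ValueError for non-v7 UUIDs."""
    if not isinstance(value, UUID):
        # Wrong type entirely: the caller passed something that isn't a UUID.
        raise TypeError(f"expected UUID, got {type(value).__name__}")
    if value.version != 7:
        # Correct type, invalid value: this is ValueError territory.
        raise ValueError(f"expected UUIDv7, got version {value.version}")
    return value
```

Following the stdlib convention (TypeError for type mismatches, ValueError for bad values of the right type) also lets callers catch the two failure modes separately.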
- Around line 39-44: The uuid property re-normalizes and re-decodes the
activation code redundantly; fix by caching the decoded integer during
validation (e.g., in the validate_code validator store the decoded int on a
private attribute like _decoded_uuid_int) and then have the uuid property return
UUID(int=self._decoded_uuid_int), or alternatively make the uuid property a
cached result (use functools.cached_property or pydantic model caching) so
base32_crockford.normalize/decode only run once; update the validate_code (or
switch to a model_validator) and the uuid property accordingly to use the cached
value.
- Around line 46-55: The isinstance(uuid, UUID) check in Activation.generate is
redundant after assigning uuid6.uuid7() when uuid is None; update the validation
to only enforce type when an external value was passed: after the initial if
uuid is None: uuid = uuid6.uuid7() block, validate with "if not isinstance(uuid,
UUID): raise TypeError(...)" but only when the original argument was not None
(or simply remove the unreachable branch by performing the type check inside an
else branch of the None handling), then keep the existing UUID version check
(uuid.version != 7) to raise the same TypeError if needed; target the
classmethod generate in activation.py.
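The decode-once caching suggested for the `uuid` property above can be sketched with `functools.cached_property`; the class and its stand-in decoder are illustrative only, and note that `cached_property` needs extra care on pydantic models (the real ActivationCode is a pydantic type):

```python
from functools import cached_property

class ActivationCode:
    """Sketch: decode the code once and cache the result.

    int(code, 32) stands in for the real Crockford Base32 decoder;
    decode_calls is instrumentation to make the caching observable.
    """

    decode_calls = 0

    def __init__(self, code: str) -> None:
        self.code = code

    @cached_property
    def uuid_int(self) -> int:
        type(self).decode_calls += 1  # count how often we actually decode
        return int(self.code, 32)     # stand-in decode

code = ActivationCode("1F")
first = code.uuid_int   # decodes
second = code.uuid_int  # served from the per-instance cache
```

After the first access, `cached_property` stores the value in the instance `__dict__`, so repeated `uuid` reads never touch the decoder again.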

In `@backend/app/domain/license.py`:
- Around line 44-46: NodeLockedLicense currently allows any string for
device_fingerprint which can embed malformed fingerprints into signed licenses;
update the model to validate/normalize device_fingerprint at the boundary by
either using the existing fingerprint value object (e.g., a
Fingerprint/DeviceFingerprint type) or adding a pydantic validator on
NodeLockedLicense.device_fingerprint that enforces the canonical format
(normalize case, strip separators, check length/charset) and raises
ValidationError on invalid input so only normalized, validated fingerprints
become part of the signed payload.
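A boundary validator of the kind suggested above might look like this sketch, assuming the canonical fingerprint form is a 64-character lowercase SHA-256 hex digest (an assumption for illustration; the PR's actual format may differ):

```python
import re

# Assumed canonical form: SHA-256 hex digest, lowercase, no separators.
HEX_SHA256 = re.compile(r"[0-9a-f]{64}")

def normalize_fingerprint(raw: str) -> str:
    """Normalize then validate, so only canonical fingerprints are signed.

    Lowercases, strips common separators, and rejects anything that is
    not exactly 64 hex characters afterwards.
    """
    cleaned = raw.strip().lower().replace(":", "").replace("-", "")
    if not HEX_SHA256.fullmatch(cleaned):
        raise ValueError("device_fingerprint is not a canonical SHA-256 hex digest")
    return cleaned
```

Running this in a pydantic field validator on NodeLockedLicense would guarantee that the signed payload only ever contains the normalized form.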

In `@backend/app/internal/base32_crockford.py`:
- Around line 117-125: normalize() currently allows a trailing checksum symbol
to remain in symbol_string and DECODE_SYMBOLS contains mappings for checksum
characters; when checksum=False inputs like "0U" get decoded instead of
rejected. After calling normalize(symbol_string, strict=strict) and before
decoding loop, if checksum is False validate that symbol_string contains no
checksum characters (e.g., any symbol whose DECODE_SYMBOLS value >=
BASE_CHECK_INDEX or explicitly any of the checksum chars "*~$=U"); if any are
present raise a ValueError (or the existing InvalidSymbol error) so
checksum-only symbols are rejected; keep the existing checksum=True branch that
strips the last symbol into check_symbol.
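The suggested guard can be sketched on a minimal Crockford Base32 decoder (a simplified reimplementation for illustration, not the PR's `base32_crockford` module):

```python
ENCODE_SYMBOLS = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"  # 32 data symbols
CHECK_SYMBOLS = ENCODE_SYMBOLS + "*~$=U"             # 37 symbols valid as checksum

def decode(symbol_string: str, *, checksum: bool = False) -> int:
    """Decode Crockford Base32; with checksum=False, reject check-only symbols.

    Normalization folds case, drops hyphens, and maps ambiguous O/I/L.
    """
    s = (symbol_string.upper().replace("-", "")
         .replace("O", "0").replace("I", "1").replace("L", "1"))
    check_symbol = None
    if checksum:
        s, check_symbol = s[:-1], s[-1]
    elif any(c in "*~$=U" for c in s):
        # The guard from the review: 'U' etc. are checksum-only symbols,
        # so an input like "0U" must fail rather than decode.
        raise ValueError("checksum symbol in data position")
    number = 0
    for c in s:
        number = number * 32 + ENCODE_SYMBOLS.index(c)  # ValueError on bad symbol
    if checksum and CHECK_SYMBOLS.index(check_symbol) != number % 37:
        raise ValueError("checksum mismatch")
    return number
```

With the guard in place, checksum-only symbols are always interpreted by position: valid as the final check digit when checksum=True, invalid anywhere else.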

In `@backend/app/services/license_gen.py`:
- Around line 22-26: Replace the inappropriate LicenseKeyGenerationError raised
in the except InvalidSignature block with a validation-oriented exception:
either introduce a new LicenseVerificationError exception class or reuse an
existing validation exception (e.g., ValidationError) and raise that from the
caught InvalidSignature (preserving the original exception via "from exc");
update the except block that currently references InvalidSignature and
LicenseKeyGenerationError to raise LicenseVerificationError (or the chosen
validation type) with a clear message like "License signature verification
failed" so callers can distinguish verification failures from generation errors.
- Around line 17-19: Wrap the base58.b58decode call in a try/except to catch
ValueError from invalid Base58 input (the call that produces sig_bytes from
signature) and raise or return a clearer error (e.g., raise ValueError("invalid
base58-encoded signature") from err) so the failure is handled instead of
propagating raw; keep the rest of the flow (payload =
lic.canonical_json().encode(), public_key = load_public_ed25519_key(...))
unchanged and ensure the exception chaining uses "from" to preserve the original
error.
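The exception split suggested for the `InvalidSignature` branch above can be sketched as follows; both exception classes here are local stand-ins (`InvalidSignature` normally comes from `cryptography.exceptions`), and `verify_license` is a simplified shape, not the service's real function:

```python
class InvalidSignature(Exception):
    """Stand-in for cryptography.exceptions.InvalidSignature."""

class LicenseVerificationError(Exception):
    """Verification failure, distinct from key-generation errors."""

def verify_license(payload: bytes, signature: bytes, verifier) -> None:
    """Run the supplied verifier and translate its crypto error.

    'raise ... from exc' preserves the original exception as __cause__,
    so the crypto-level traceback survives the translation.
    """
    try:
        verifier(signature, payload)  # e.g. public_key.verify(signature, payload)
    except InvalidSignature as exc:
        raise LicenseVerificationError("License signature verification failed") from exc
```

Callers can now catch LicenseVerificationError for tampered or mismatched licenses without also swallowing unrelated generation failures.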

In `@backend/tests/api/routes/test_auth.py`:
- Around line 411-427: The test builds an auth payload then creates an
invalid_token using create_access_token with a random UUID, which may not match
the signed-up vendor; instead, when token_kind == "access_token" call
signup(client, payload) then call login(client, payload) and extract the access
token from that login response to use as invalid_token so the vendor/client pair
matches and only token_type differs; update the branch that currently calls
create_access_token(faker.uuid4(), app_settings) to reuse the logged-in access
token returned by login(client, payload) (referencing build_auth_payload,
create_access_token, signup, login, and invalid_token).

In `@backend/tests/api/test_deps.py`:
- Around line 214-219: The tests currently set request.app.state attributes by
calling build_request(db_pool=None) or build_request(settings=None), so they
never exercise the true "attribute missing" path; update the failing-case tests
that call build_request (the ones exercising deps.get_db and any similar
deps.get_settings) to construct a request that does not include the keys on
request.app.state at all (i.e., call/build a request without adding
app.state.db_pool or app.state.settings) so that next(deps.get_db(request))
raises ServiceUnavailableException and the same for the other tests; locate
usages of build_request and deps.get_db / deps.get_settings in the test file and
change the setup to omit adding those state attributes rather than adding them
with None.
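The distinction the prompt draws can be sketched with plain namespaces; `get_db` here is a hypothetical stand-in for the project's `deps.get_db`, not its implementation:

```python
from types import SimpleNamespace


def get_db(request):
    # Hypothetical dependency: the pool may be absent entirely or present
    # but None; both must be rejected, but they are different code paths.
    pool = getattr(request.app.state, "db_pool", None)
    if pool is None:
        raise LookupError("service unavailable: no db_pool on app.state")
    return pool


with_none = SimpleNamespace(app=SimpleNamespace(state=SimpleNamespace(db_pool=None)))
missing = SimpleNamespace(app=SimpleNamespace(state=SimpleNamespace()))

# Only the second request exercises the true "attribute missing" branch.
assert hasattr(with_none.app.state, "db_pool")
assert not hasattr(missing.app.state, "db_pool")
```

Building the failing-case request without the attribute at all, rather than setting it to None, is what makes the test cover the missing-attribute path.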

In `@backend/tests/conftest.py`:
- Around line 312-315: The autouse fixture override_function_dependencies (which
depends on db_conn) forces PostgreSQL to start for every test; change it so it
no longer runs globally: remove autouse=True (or set autouse=False) and either
register a new fixture (e.g., integration_override_function_dependencies) that
depends on db_conn for API/integration tests only, or require explicit use via
pytest.mark.usefixtures on integration test modules; ensure any references to
override_function_dependencies in test modules are updated to opt in so pure
unit tests (like tests/domain/test_license.py and
tests/services/test_license_gen.py) no longer pull in db_conn.

In `@backend/tests/core/test_exception_handlers.py`:
- Around line 47-143: Docstrings in the tests refer to the old internal name
`_build_error_details`; update each docstring to reference the public function
name `build_error_details` instead (affecting tests:
test_build_error_details_none_returns_empty_list,
test_build_error_details_accepts_error_detail_instance,
test_build_error_details_dict_uses_fallback_message_when_missing,
test_build_error_details_non_dict_item_is_stringified) so the documented
"Covers" section matches the actual symbol under test; simply replace
`app.core.exception_handlers._build_error_details` with
`app.core.exception_handlers.build_error_details` in those docstrings.

In `@backend/tests/core/test_security.py`:
- Around line 63-66: The test may occasionally generate identical values for
original_password and wrong_password using faker.password(), causing a false
positive; modify the test around original_password, wrong_password,
get_password_hash, and verify_password to guarantee they differ (e.g., generate
wrong_password in a loop or mutate it until wrong_password != original_password)
before calling get_password_hash(original_password) and
verify_password(wrong_password, hashed_password) so the “wrong” case is
deterministic.
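The regenerate-until-different loop the prompt suggests is straightforward; `random_password` here is a stdlib stand-in for `faker.password()`:

```python
import random
import string


def random_password(rng: random.Random, length: int = 12) -> str:
    alphabet = string.ascii_letters + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))


rng = random.Random()
original_password = random_password(rng)
wrong_password = random_password(rng)
# Regenerate until the two differ, so the "wrong password" branch is
# deterministic even if the generator happens to repeat a value.
while wrong_password == original_password:
    wrong_password = random_password(rng)
```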

In `@backend/tests/domain/test_activation.py`:
- Around line 128-144: The code currently treats a non-v7 UUID as a TypeError;
change the validation in ActivationCode.generate to raise ValueError for wrong
UUID version (e.g., if uuid.version != 7: raise ValueError("UUID version 7
required") ) and update any tests (the test in test_activation.py) to expect
ValueError instead of TypeError so the error semantics reflect “correct type,
invalid value.”
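A sketch of the suggested semantics; `require_uuid7` is a hypothetical helper, and the hand-built UUID stands in for `uuid6.uuid7()`:

```python
from uuid import UUID, uuid4


def require_uuid7(value: UUID) -> UUID:
    # Wrong type is a TypeError; correct type with the wrong version is a
    # ValueError ("correct type, invalid value"), per the suggestion above.
    if not isinstance(value, UUID):
        raise TypeError(f"uuid cannot be of type {type(value).__name__}")
    if value.version != 7:
        raise ValueError(f"uuid must be a UUID version 7, got {value.version}")
    return value


# Hand-built version-7 UUID: byte 6 carries the version nibble (0x70) and
# byte 8 the RFC 4122 variant bits (0x80).
u7 = UUID(bytes=b"\x00" * 6 + b"\x70\x00" + b"\x80" + b"\x00" * 7)
assert require_uuid7(u7) is u7
```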

In `@backend/tests/internal/test_base32_crockford.py`:
- Around line 134-135: The test's expected error message is misleading for
base32_crockford.encode when passing a negative integer; update the assertion to
match the actual validation (which allows zero) by changing the pytest.raises
match from "is not a positive integer" to a correct message such as "is
negative" or "is not a non-negative integer" so the test reflects that the
function rejects negative values but accepts zero (refer to
base32_crockford.encode and the test invocation base32_crockford.encode(-1)).
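The assertion fix above hinges on the encoder's domain: negatives are rejected, zero is allowed. A minimal sketch of a Crockford Base32 encoder with that validation (the function and messages are illustrative, not the project's `base32_crockford.encode`):

```python
# Crockford Base32 alphabet: digits plus letters, excluding I, L, O, U.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"


def encode(n: int) -> str:
    if not isinstance(n, int) or isinstance(n, bool):
        raise TypeError(f"{n!r} is not an integer")
    if n < 0:
        # Zero is a valid input; only negative values are rejected.
        raise ValueError(f"{n} is negative")
    if n == 0:
        return "0"
    digits = []
    while n:
        n, rem = divmod(n, 32)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))
```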

In `@backend/tests/schemas/test_auth.py`:
- Around line 116-122: The test's scenario branching silently treats any
unexpected scenario as "missing_client_id"; update the branch in the test (where
`scenario` is checked and `payload` is mutated) to replace the final `else` with
an explicit `elif scenario == "missing_client_id": payload.pop("client_id")` and
add an `else: raise ValueError(f"Unknown scenario: {scenario}")` (or an assert)
to fail fast on typos/new scenarios; locate this logic in the test function
handling `scenario` in test_auth.py and apply the check to ensure unknown
scenarios are guarded.

In `@backend/tests/services/test_auth.py`:
- Around line 281-314: The test currently only checks tokens are non-empty but
doesn't assert they are newly rotated; update
test_refresh_success_returns_new_token_pair to assert that result.access_token
!= tokens.access_token and result.refresh_token != tokens.refresh_token to
guarantee new values were issued, and optionally attempt calling
refresh(tokens.refresh_token, client_id, db_cursor, app_settings) again (or the
equivalent invalidation check) and assert it fails to ensure the old refresh
token was invalidated.
- Around line 354-389: The test test_refresh_rejects_invalid_tokens uses a
random client id (faker.uuid4()) when calling refresh, which can make failures
depend on validation order; change the call in that test to use a
known/persisted client id instead (e.g., obtain an existing vendor/client id set
up in the test fixtures or create and persist a test vendor before invoking
refresh) so the test isolates token-related errors; update the invocation of
refresh(refresh_token, faker.uuid4(), db_cursor, app_settings) to use the known
id and ensure the fixture or setup code that creates the persisted id is
referenced/used in this test.

In `@backend/tests/test_main.py`:
- Around line 86-88: The test docstring for the route-id test is truncated;
update the docstring in the test in backend/tests/test_main.py to complete the
sentence so it clearly states that the test verifies
app.main.custom_generate_unique_id prefixes generated route ids (so generated
OpenAPI operation names remain stable across tagged and untagged routes). Locate
the test that references app.main.custom_generate_unique_id and replace the
incomplete sentence with a full, clear sentence describing that intent.
- Around line 209-210: The failure-path test currently monkeypatches
app_main.ConnectionPool with a lambda but doesn't assert what constructor args
were passed; modify the patch to record the arguments and assert them (mirroring
the success-path test). Replace the lambda used in monkeypatch.setattr(app_main,
"ConnectionPool", ...) with a small spy/factory that captures the dsn and open
keyword (e.g., store into captured_dsn/captured_open variables or a list) and
returns the same pool object, then add assertions that the captured dsn and open
flag equal the expected values; keep the existing monkeypatch of
app_main.Settings unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: e231ffb5-982b-4960-9e63-f9cc47e7e923

📥 Commits

Reviewing files that changed from the base of the PR and between b004e81 and b16931e.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (57)
  • .github/workflows/code_test.yml
  • .github/workflows/sonarcloud.yml
  • .gitignore
  • backend/app/api/routes/auth.py
  • backend/app/api/routes/login.py
  • backend/app/core/ed25519.py
  • backend/app/core/exception_handlers.py
  • backend/app/core/exceptions.py
  • backend/app/core/security.py
  • backend/app/crud/vendor.py
  • backend/app/domain/activation.py
  • backend/app/domain/fingerprint.py
  • backend/app/domain/license.py
  • backend/app/internal/.gitkeep
  • backend/app/internal/base32_crockford.py
  • backend/app/main.py
  • backend/app/pre_start.py
  • backend/app/schemas/response.py
  • backend/app/services/license_gen.py
  • backend/pyproject.toml
  • backend/tests/__init__.py
  • backend/tests/api/routes/test_auth.py
  • backend/tests/api/routes/test_health.py
  • backend/tests/api/test_deps.py
  • backend/tests/api/test_main.py
  • backend/tests/api/test_middlewares.py
  • backend/tests/conftest.py
  • backend/tests/core/test_config.py
  • backend/tests/core/test_ed25519.py
  • backend/tests/core/test_exception_handlers.py
  • backend/tests/core/test_exceptions.py
  • backend/tests/core/test_security.py
  • backend/tests/crud/test_vendor.py
  • backend/tests/domain/test_activation.py
  • backend/tests/domain/test_fingerprint.py
  • backend/tests/domain/test_license.py
  • backend/tests/integration/test_auth.py
  • backend/tests/internal/test_base32_crockford.py
  • backend/tests/schemas/test_auth.py
  • backend/tests/schemas/test_response.py
  • backend/tests/services/__init__.py
  • backend/tests/services/test_auth.py
  • backend/tests/services/test_license_gen.py
  • backend/tests/test_main.py
  • backend/tests/test_pre_start.py
  • backend/tests/unit/test_auth.py
  • migrations/03_app.sql
  • migrations/04_audit.sql
  • migrations/07_audit_triggers.sql
  • migrations/tests/helpers.py
  • migrations/tests/test_constraints.py
  • migrations/tests/test_indexes.py
  • migrations/tests/test_partitioning.py
  • migrations/tests/test_privileges.py
  • migrations/tests/test_rls_isolation.py
  • migrations/tests/test_rls_structure.py
  • migrations/tests/test_seed_data.py
💤 Files with no reviewable changes (4)
  • backend/app/api/routes/login.py
  • backend/tests/__init__.py
  • backend/tests/unit/test_auth.py
  • backend/tests/integration/test_auth.py

Comment on lines +19 to +21
.fastembed_cache
.database
.amdb

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify usage of fastembed and identify what .amdb files are

# Check for fastembed imports or references
echo "=== Checking for fastembed usage ==="
rg -i 'fastembed' --type=py -C 2

# Check for .amdb file references or existing files
echo -e "\n=== Checking for .amdb references ==="
rg -i '\.amdb' -C 2

# Look for any existing files matching these patterns
echo -e "\n=== Checking for existing files matching patterns ==="
fd -H -t f '^\.fastembed_cache$'
fd -H -t f '^\.database$'
fd -H '\.amdb$'

# Check requirements/dependencies for fastembed
echo -e "\n=== Checking dependencies ==="
fd -t f 'requirements.*\.txt|pyproject\.toml|setup\.py|Pipfile' --exec cat {}

Repository: eggzec/permit

Length of output: 5047


Remove .fastembed_cache from .gitignore — it's not a project dependency.

The verification shows FastEmbed is neither listed in pyproject.toml dependencies nor referenced in the codebase. This pattern appears to be out of scope for the license key generation PR. Additionally, .amdb is unidentified and lacks any references in the codebase. The .database pattern is unclear given the project uses PostgreSQL (server-based) rather than file-based databases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.gitignore around lines 19 - 21, Remove the unrelated patterns from
.gitignore: delete the lines for ".fastembed_cache", ".amdb", and ".database"
since FastEmbed isn't a dependency and those artifacts aren't used by the
project; keep only repository-specific, relevant ignore entries (e.g., OS,
editor, build artifacts, or any actual local DB files if required), then commit
the updated .gitignore so the unnecessary cache/database patterns are no longer
ignored.

Comment on lines +17 to +30
def load_private_ed25519_key(
pem: Buffer, password: bytes | None = None
) -> Ed25519PrivateKey:
key = load_pem_private_key(pem, password=password)
if not isinstance(key, Ed25519PrivateKey):
raise TypeError("Provided key is not an Ed25519 private key")
return key


def load_public_ed25519_key(pem: bytes) -> Ed25519PublicKey:
key = load_pem_public_key(pem)
if not isinstance(key, Ed25519PublicKey):
raise TypeError("Provided key is not an Ed25519 public key")
return key

🧹 Nitpick | 🔵 Trivial

Type signature inconsistency between private and public key loaders.

load_private_ed25519_key accepts Buffer (which includes bytes, bytearray, memoryview) for the pem parameter, but load_public_ed25519_key only accepts bytes. Since load_pem_public_key from cryptography also accepts Buffer, consider using consistent types for both functions.

♻️ Proposed fix for type consistency
-def load_public_ed25519_key(pem: bytes) -> Ed25519PublicKey:
+def load_public_ed25519_key(pem: Buffer) -> Ed25519PublicKey:
     key = load_pem_public_key(pem)
     if not isinstance(key, Ed25519PublicKey):
         raise TypeError("Provided key is not an Ed25519 public key")
     return key
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/core/ed25519.py` around lines 17 - 30, The function
load_public_ed25519_key has an inconsistent type hint for its pem parameter
(bytes) compared to load_private_ed25519_key (Buffer); update
load_public_ed25519_key's signature to accept the same Buffer type as
load_private_ed25519_key (or the equivalent union like bytes | bytearray |
memoryview) so both loaders use consistent input types, and keep the existing
runtime behavior (call load_pem_public_key and the isinstance check against
Ed25519PublicKey).
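A round-trip through the public-key loader illustrates the behavior either signature preserves. The loader body below reproduces the diff above (with the `bytes` annotation as posted); the rest is a usage sketch and requires the `cryptography` package:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.hazmat.primitives.serialization import (
    Encoding,
    PublicFormat,
    load_pem_public_key,
)


def load_public_ed25519_key(pem: bytes) -> Ed25519PublicKey:
    key = load_pem_public_key(pem)
    if not isinstance(key, Ed25519PublicKey):
        raise TypeError("Provided key is not an Ed25519 public key")
    return key


# Generate, serialize to PEM, reload through the checked loader, verify.
private_key = Ed25519PrivateKey.generate()
public_pem = private_key.public_key().public_bytes(
    Encoding.PEM, PublicFormat.SubjectPublicKeyInfo
)
reloaded = load_public_ed25519_key(public_pem)
signature = private_key.sign(b"payload")
reloaded.verify(signature, b"payload")  # raises InvalidSignature on mismatch
```

Since `load_pem_public_key` itself accepts any bytes-like buffer, widening the annotation to `Buffer` changes only the declared contract, not the runtime behavior.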

Comment on lines +39 to +44
@computed_field
@property
def uuid(self) -> UUID:
flat = base32_crockford.normalize(self.code)
n = base32_crockford.decode(flat, checksum=True)
return UUID(int=n)

🧹 Nitpick | 🔵 Trivial

Redundant normalization and decode in uuid property.

The validate_code validator already normalizes and decodes the code. The uuid property re-normalizes and re-decodes the same value. Consider caching or restructuring to avoid duplicate work.

♻️ Option: Store decoded value during validation

One approach is to store the decoded integer as a private attribute during validation, then use it in the uuid property. Alternatively, accept the small overhead since the operations are fast and the model is likely short-lived.

+    _decoded_int: int | None = None  # Private field for cached decode result
+
     @field_validator("code")
     @classmethod
     def validate_code(cls, v: str) -> str:
         normalized = base32_crockford.normalize(v)
         if len(normalized) != cls.LENGTH:
             raise ValueError("Activation code must contain 30 symbols")
-        base32_crockford.decode(normalized, checksum=True)
+        # Decode is performed for checksum validation; result used by uuid property
+        base32_crockford.decode(normalized, checksum=True)
         return "-".join(
             normalized[i : i + cls.GROUP]
             for i in range(0, cls.LENGTH, cls.GROUP)
         )

Note: Pydantic's field_validator doesn't easily allow setting other fields. If performance matters, consider using model_validator instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/domain/activation.py` around lines 39 - 44, The uuid property
re-normalizes and re-decodes the activation code redundantly; fix by caching the
decoded integer during validation (e.g., in the validate_code validator store
the decoded int on a private attribute like _decoded_uuid_int) and then have the
uuid property return UUID(int=self._decoded_uuid_int), or alternatively make the
uuid property a cached result (use functools.cached_property or pydantic model
caching) so base32_crockford.normalize/decode only run once; update the
validate_code (or switch to a model_validator) and the uuid property accordingly
to use the cached value.
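The caching idea can be shown with `functools.cached_property`, independent of Pydantic; `ActivationCodeDemo` and its hex decode are illustrative stand-ins for the real model and `base32_crockford.decode`:

```python
from functools import cached_property
from uuid import UUID


class ActivationCodeDemo:
    """Hypothetical stand-in for the model above: decode once, reuse it."""

    def __init__(self, code: str) -> None:
        self.code = code
        self.decode_calls = 0

    def _decode(self) -> int:
        # Stand-in for base32_crockford.normalize + decode.
        self.decode_calls += 1
        return int(self.code, 16)

    @cached_property
    def uuid(self) -> UUID:
        return UUID(int=self._decode())


demo = ActivationCodeDemo("0" * 31 + "1")
first, second = demo.uuid, demo.uuid
assert demo.decode_calls == 1  # decode ran once despite two accesses
```

In a Pydantic model the same effect needs `model_config` to permit cached properties (or a private attribute set in a `model_validator`), as the comment notes.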

Comment on lines +46 to +55
@classmethod
def generate(cls, uuid: UUID | None = None) -> ActivationCode:
if uuid is None:
uuid = uuid6.uuid7()
if not isinstance(uuid, UUID):
raise TypeError(f"uuid cannot be of type {uuid.__class__.__name__}")
if uuid.version != 7: # noqa: PLR2004
raise TypeError(
f"uuid must be a UUID version 7, got {uuid.version}"
)

🧹 Nitpick | 🔵 Trivial

Redundant type check after None handling.

The isinstance(uuid, UUID) check at line 50 is unreachable when uuid was initially None, since uuid6.uuid7() returns a UUID object. The check only matters when a non-None value was passed, but the type hint already indicates UUID | None.

♻️ Suggested refactor to clarify intent
     @classmethod
     def generate(cls, uuid: UUID | None = None) -> ActivationCode:
+        if uuid is not None and not isinstance(uuid, UUID):
+            raise TypeError(f"uuid cannot be of type {uuid.__class__.__name__}")
         if uuid is None:
             uuid = uuid6.uuid7()
-        if not isinstance(uuid, UUID):
-            raise TypeError(f"uuid cannot be of type {uuid.__class__.__name__}")
         if uuid.version != 7:  # noqa: PLR2004
             raise TypeError(
                 f"uuid must be a UUID version 7, got {uuid.version}"
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/domain/activation.py` around lines 46 - 55, The isinstance(uuid,
UUID) check in Activation.generate is redundant after assigning uuid6.uuid7()
when uuid is None; update the validation to only enforce type when an external
value was passed: after the initial if uuid is None: uuid = uuid6.uuid7() block,
validate with "if not isinstance(uuid, UUID): raise TypeError(...)" but only
when the original argument was not None (or simply remove the unreachable branch
by performing the type check inside an else branch of the None handling), then
keep the existing UUID version check (uuid.version != 7) to raise the same
TypeError if needed; target the classmethod generate in activation.py.

Comment on lines +52 to +55
if uuid.version != 7: # noqa: PLR2004
raise TypeError(
f"uuid must be a UUID version 7, got {uuid.version}"
)

🧹 Nitpick | 🔵 Trivial

Consider ValueError instead of TypeError for version mismatch.

The UUID type is correct; only the version value is wrong. ValueError is more semantically appropriate for "correct type, invalid value" scenarios.

♻️ Proposed change
         if uuid.version != 7:  # noqa: PLR2004
-            raise TypeError(
+            raise ValueError(
                 f"uuid must be a UUID version 7, got {uuid.version}"
             )

Note: This would require updating the corresponding test to expect ValueError instead of TypeError.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/domain/activation.py` around lines 52 - 55, The check that
currently raises TypeError when uuid.version != 7 should instead raise
ValueError because the UUID is the correct type but has an invalid value; update
the raise in the function/method performing the version check (the block
referencing uuid.version != 7) to raise ValueError with the same descriptive
message, and update any tests that assert a TypeError to expect ValueError
instead.

Comment on lines +116 to +122
if scenario == "invalid_email":
payload["email"] = faker.word()
elif scenario == "short_password":
payload["password"] = faker.password(length=7, special_chars=False)
else:
payload.pop("client_id")

🧹 Nitpick | 🔵 Trivial

Guard unknown invalid-payload scenarios explicitly.

The current else path silently treats any new/typo scenario as missing_client_id, which can mask test mistakes.

Suggested hardening
-    if scenario == "invalid_email":
+    if scenario == "invalid_email":
         payload["email"] = faker.word()
     elif scenario == "short_password":
         payload["password"] = faker.password(length=7, special_chars=False)
-    else:
+    elif scenario == "missing_client_id":
         payload.pop("client_id")
+    else:
+        raise AssertionError(f"Unhandled scenario: {scenario}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/schemas/test_auth.py` around lines 116 - 122, The test's
scenario branching silently treats any unexpected scenario as
"missing_client_id"; update the branch in the test (where `scenario` is checked
and `payload` is mutated) to replace the final `else` with an explicit `elif
scenario == "missing_client_id": payload.pop("client_id")` and add an `else:
raise ValueError(f"Unknown scenario: {scenario}")` (or an assert) to fail fast
on typos/new scenarios; locate this logic in the test function handling
`scenario` in test_auth.py and apply the check to ensure unknown scenarios are
guarded.
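The fail-fast branching suggested above, sketched as a standalone helper (the payload keys mirror the test; values are illustrative):

```python
def mutate_payload(payload: dict, scenario: str) -> dict:
    # One explicit branch per scenario; unknown names fail fast instead of
    # silently falling through to the last case.
    if scenario == "invalid_email":
        payload["email"] = "not-an-email"
    elif scenario == "short_password":
        payload["password"] = "short"
    elif scenario == "missing_client_id":
        payload.pop("client_id")
    else:
        raise ValueError(f"Unknown scenario: {scenario}")
    return payload
```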

Comment on lines +281 to +314
def test_refresh_success_returns_new_token_pair(
db_conn: Connection, app_settings: Settings, faker
) -> None:
"""
Verifies that `app.services.auth.refresh` exchanges a valid refresh token for a replacement token pair.
This matters because the auth service owns the token-refresh contract used by the API route.

Covers:
- `app.services.auth.login`
- `app.services.auth.refresh`

Rationale:
The test performs the full signup-login-refresh sequence through the real service functions because refresh correctness depends on issued token state.

Fixtures:
db_conn: Transactional database connection rolled back after the test.
app_settings: Shared `Settings` object used to sign and decode tokens.
faker: Session-scoped `Faker` instance used to generate auth inputs.
"""
email, password, client_id = build_signup_args(faker)
with db_conn.cursor() as db_cursor:
signup(db_cursor, email, password, client_id, app_settings)
tokens = login(db_cursor, email, password, client_id, app_settings)
result = refresh(
tokens.refresh_token, client_id, db_cursor, app_settings
)

assert result.access_token, (
"Expected refresh to return a non-empty replacement access token"
)
assert result.refresh_token, (
"Expected refresh to return a non-empty replacement refresh token"
)

⚠️ Potential issue | 🟠 Major

Assert rotation semantics explicitly in refresh success test.

Line 308-Line 313 only verify non-empty tokens. This does not prove a new token pair was issued, so token-reuse regressions could pass undetected.

Suggested test assertion update
@@
     with db_conn.cursor() as db_cursor:
         signup(db_cursor, email, password, client_id, app_settings)
         tokens = login(db_cursor, email, password, client_id, app_settings)
         result = refresh(
             tokens.refresh_token, client_id, db_cursor, app_settings
         )
@@
     assert result.refresh_token, (
         "Expected refresh to return a non-empty replacement refresh token"
     )
+    assert result.access_token != tokens.access_token, (
+        "Expected refresh to rotate and return a new access token"
+    )
+    assert result.refresh_token != tokens.refresh_token, (
+        "Expected refresh to rotate and return a new refresh token"
+    )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/services/test_auth.py` around lines 281 - 314, The test
currently only checks tokens are non-empty but doesn't assert they are newly
rotated; update test_refresh_success_returns_new_token_pair to assert that
result.access_token != tokens.access_token and result.refresh_token !=
tokens.refresh_token to guarantee new values were issued, and optionally attempt
calling refresh(tokens.refresh_token, client_id, db_cursor, app_settings) again
(or the equivalent invalidation check) and assert it fails to ensure the old
refresh token was invalidated.

Comment on lines +354 to +389
def test_refresh_rejects_invalid_tokens(
db_conn: Connection,
app_settings: Settings,
faker,
token_factory,
expected_message: str,
) -> None:
"""
Verifies that `app.services.auth.refresh` rejects invalid refresh tokens, malformed payloads, and unknown-vendor claims.
This matters because the service must not mint new tokens from unusable or untrusted refresh input.

Covers:
- `app.services.auth.refresh`

Rationale:
The test signs real token variants instead of patching decode helpers, so it documents the actual service boundary. This shape came from REM-003.

Fixtures:
db_conn: Transactional database connection rolled back after the test.
app_settings: Shared `Settings` object used to sign token variants.
faker: Session-scoped `Faker` instance used to generate claims and ids.

Parametrize:
token_factory: Produces the invalid refresh token variant for the scenario.
expected_message: The authentication error expected from the service.
Cases:
- <id="access_token"> — supplies an access token where a refresh token is required.
- <id="expired_refresh_token"> — supplies a refresh token whose expiry is already in the past.
- <id="malformed_vendor_id"> — supplies a refresh token whose vendor id claim is not a UUID.
- <id="unknown_vendor"> — supplies a refresh token whose vendor id does not exist in the database.
"""
refresh_token = token_factory(faker, app_settings)
with db_conn.cursor() as db_cursor:
with pytest.raises(AuthenticationException, match=expected_message):
refresh(refresh_token, faker.uuid4(), db_cursor, app_settings)

🧹 Nitpick | 🔵 Trivial

Use a known client id in invalid-token scenarios to isolate failure causes.

Line 388 currently passes a random client id, which can couple expected errors to validation order. Using a persisted/known client id keeps these cases focused on token invalidity only.

Suggested stabilization change
@@
 def test_refresh_rejects_invalid_tokens(
@@
 ) -> None:
@@
-    refresh_token = token_factory(faker, app_settings)
     with db_conn.cursor() as db_cursor:
+        email, password, client_id = build_signup_args(faker)
+        signup(db_cursor, email, password, client_id, app_settings)
+        refresh_token = token_factory(faker, app_settings)
         with pytest.raises(AuthenticationException, match=expected_message):
-            refresh(refresh_token, faker.uuid4(), db_cursor, app_settings)
+            refresh(refresh_token, client_id, db_cursor, app_settings)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/services/test_auth.py` around lines 354 - 389, The test
test_refresh_rejects_invalid_tokens uses a random client id (faker.uuid4()) when
calling refresh, which can make failures depend on validation order; change the
call in that test to use a known/persisted client id instead (e.g., obtain an
existing vendor/client id set up in the test fixtures or create and persist a
test vendor before invoking refresh) so the test isolates token-related errors;
update the invocation of refresh(refresh_token, faker.uuid4(), db_cursor,
app_settings) to use the known id and ensure the fixture or setup code that
creates the persisted id is referenced/used in this test.

Comment on lines +86 to +88
Verifies that `app.main.custom_generate_unique_id` prefixes generated
This matters because route ids feed generated OpenAPI operation names
and should stay stable across tagged and untagged routes.

⚠️ Potential issue | 🟡 Minor

Fix incomplete sentence in the route-id test docstring.

The sentence at Line 87 is truncated, which reduces readability of test intent documentation.

Suggested wording fix
-    Verifies that `app.main.custom_generate_unique_id` prefixes generated
+    Verifies that `app.main.custom_generate_unique_id` prefixes generated route ids.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Verifies that `app.main.custom_generate_unique_id` prefixes generated
This matters because route ids feed generated OpenAPI operation names
and should stay stable across tagged and untagged routes.
Verifies that `app.main.custom_generate_unique_id` prefixes generated route ids.
This matters because route ids feed generated OpenAPI operation names
and should stay stable across tagged and untagged routes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_main.py` around lines 86 - 88, The test docstring for the
route-id test is truncated; update the docstring in the test in
backend/tests/test_main.py to complete the sentence so it clearly states that
the test verifies app.main.custom_generate_unique_id prefixes generated route
ids (so generated OpenAPI operation names remain stable across tagged and
untagged routes). Locate the test that references
app.main.custom_generate_unique_id and replace the incomplete sentence with a
full, clear sentence describing that intent.

Comment on lines +209 to +210
monkeypatch.setattr(app_main, "Settings", lambda: settings)
monkeypatch.setattr(app_main, "ConnectionPool", lambda dsn, *, open: pool) # noqa: A006

🧹 Nitpick | 🔵 Trivial

Assert ConnectionPool constructor arguments in the failure-path test too.

This keeps contract coverage symmetric with the success-path test and guards regressions in startup wiring.

Suggested update
-    monkeypatch.setattr(app_main, "ConnectionPool", lambda dsn, *, open: pool)  # noqa: A006
+    def build_pool(dsn: str, *, open: bool) -> SimpleNamespace:  # noqa: A002
+        assert dsn == str(settings.DATABASE_DSN), (
+            f"Expected ConnectionPool DSN '{settings.DATABASE_DSN}', got '{dsn}'"
+        )
+        assert open is True, (
+            "Expected lifespan to construct ConnectionPool with open=True"
+        )
+        return pool
+
+    monkeypatch.setattr(app_main, "ConnectionPool", build_pool)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_main.py` around lines 209 - 210, The failure-path test
currently monkeypatches app_main.ConnectionPool with a lambda but doesn't assert
what constructor args were passed; modify the patch to record the arguments and
assert them (mirroring the success-path test). Replace the lambda used in
monkeypatch.setattr(app_main, "ConnectionPool", ...) with a small spy/factory
that captures the dsn and open keyword (e.g., store into
captured_dsn/captured_open variables or a list) and returns the same pool
object, then add assertions that the captured dsn and open flag equal the
expected values; keep the existing monkeypatch of app_main.Settings unchanged.
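The spy pattern the prompt describes can be sketched independently of pytest; `build_pool` and the sentinel value are illustrative stand-ins for the patched `ConnectionPool`:

```python
captured = {}


def build_pool(dsn: str, *, open: bool):  # noqa: A002 - mirrors patched signature
    # Spy: record the constructor arguments, then return the sentinel pool.
    captured["dsn"] = dsn
    captured["open"] = open
    return "pool-sentinel"


# Later, after the code under test has run, assert on what was captured.
pool = build_pool("postgresql://localhost/app", open=True)
assert captured == {"dsn": "postgresql://localhost/app", "open": True}
```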

3 participants