Implement license key generation with batch metadata and error handling #48
saudzahirr wants to merge 3 commits into main from
Conversation
📝 Walkthrough

This PR introduces comprehensive licensing infrastructure with activation codes and Ed25519 signatures. It adds domain models for licenses, activation codes, and device fingerprints; implements Crockford Base32 encoding for activation codes; introduces license signature generation and verification; refactors test infrastructure with session-scoped database fixtures; adds new security utilities for Ed25519 key handling; and updates the database schema to rename license_key to activation_code. Removes legacy auth unit/integration tests in favor of new API-level integration tests.

Changes
Sequence Diagram(s)

sequenceDiagram
participant Client as Client
participant API as Auth API
participant Service as Auth Service
participant DB as Database
participant Security as Security Util
Client->>API: POST /signup (email, password, client_id)
API->>Service: signup(email, password, client_id)
Service->>Security: get_password_hash(password)
Security-->>Service: hashed_password
Service->>DB: create_vendor(email, hashed_password)
DB-->>Service: vendor_id
Service-->>API: Vendor
API-->>Client: 201 Created
Client->>API: POST /login (email, password)
API->>Service: login(email, password)
Service->>DB: get_vendor_by_email(email)
DB-->>Service: vendor + stored_hash
Service->>Security: verify_password(password, stored_hash)
Security-->>Service: valid, upgraded_hash?
Service->>Security: create_access_token(vendor_id)
Service->>Security: create_refresh_token(vendor_id)
Security-->>Service: access_token, refresh_token
Service-->>API: TokenPair
API-->>Client: 200 OK
Client->>API: POST /refresh (refresh_token)
API->>Security: decode_token(refresh_token)
Security-->>API: vendor_id, token_type
API->>Service: refresh(refresh_payload)
Service->>Security: create_access_token(vendor_id)
Service->>Security: create_refresh_token(vendor_id)
Security-->>Service: new_access_token, new_refresh_token
Service-->>API: TokenPair
API-->>Client: 200 OK
Client->>API: GET /protected (Authorization: Bearer access_token)
API->>Security: decode_token(access_token)
Security-->>API: vendor_id (if valid)
API->>API: get_current_vendor_id() yields vendor_id
API-->>Client: 200 Protected Resource
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly Related PRs
🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
📝 Generate docstrings
🧪 Generate unit tests (beta)
📝 Coding Plan
Codecov Report

❌ Patch coverage is
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/services/license_key_generator.py`:
- Around line 61-73: GeneratedLicenseKey.metadata (a BatchMetadata containing
batch_id, campaign, issued_by) is never persisted to the DB so batch metadata is
lost on save; update the license persistence logic that inserts/updates license
rows to include metadata.batch_id, metadata.campaign, and metadata.issued_by
into the new columns added by the migration. Locate where GeneratedLicenseKey
instances are stored (the code path that builds INSERT/UPDATE statements or the
repository/DAO methods handling license writes) and add mapping from
GeneratedLicenseKey.metadata -> DB fields (batch_id, campaign, issued_by),
ensure null-safe defaults if metadata is missing, update any SQL/ORM parameter
lists and tests accordingly, and keep GeneratedLicenseKey and BatchMetadata
types as the source of truth for these values.
- Around line 191-193: The debug log is emitting raw license credentials via
logger.debug("Batch key %d/%d generated: %s", i + 1, count, generated.key);
remove the raw generated.key from logs and instead log only non-secret context
(batch index and count) and a masked or hashed fingerprint; for example, compute
a short deterministic fingerprint of generated.key (e.g., SHA-256 hex prefix) or
mask all but 4 chars, then replace generated.key in the logger.debug call with
that fingerprint/mask so no full key is ever written to logs.
In `@migrations/06_license_key_updates.sql`:
- Around line 30-31: Add a preflight validation before the ALTER COLUMN change
to VARCHAR(24) on app."node_locked_license_data"."license_key": run a SELECT to
detect any rows where length("license_key") > 24 and fail the migration with a
clear error (or output offending keys) instead of proceeding; if none are found,
proceed with ALTER COLUMN; reference the table name
app."node_locked_license_data", the column "license_key", and the ALTER COLUMN
statement so reviewers can find and verify the guard.
In `@migrations/down/06_license_key_updates_down.sql`:
- Around line 21-23: The rollback currently only reverts the column type for
app."node_locked_license_data"."license_key" but does not restore the original
column comment; add a statement to reset the column comment in
migrations/down/06_license_key_updates_down.sql (use a COMMENT ON COLUMN for
app."node_locked_license_data"."license_key") to the prior text (i.e., remove
the forward-migration note about VARCHAR(24) and restore the original comment
content) so schema docs are correct after rollback.
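The log-masking fix described above for license_key_generator.py (lines 191-193) could be sketched as follows; `key_fingerprint` is a hypothetical helper, and the `generated.key` / `logger.debug` names are taken from the comment:

```python
import hashlib

def key_fingerprint(key: str, prefix_len: int = 8) -> str:
    """Short, deterministic, non-reversible fingerprint of a secret key,
    safe to write to logs."""
    return hashlib.sha256(key.encode("utf-8")).hexdigest()[:prefix_len]

# Instead of:
#   logger.debug("Batch key %d/%d generated: %s", i + 1, count, generated.key)
# log only the fingerprint:
#   logger.debug("Batch key %d/%d generated: %s",
#                i + 1, count, key_fingerprint(generated.key))
```

The same fingerprint can later be correlated with a stored key without ever exposing the raw value.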
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: efc78644-8dca-4a22-98da-ffc5e3c8b23f
📒 Files selected for processing (7)
- backend/app/core/exceptions.py
- backend/app/schemas/response.py
- backend/app/services/license_key_generator.py
- backend/tests/core/test_exceptions.py
- backend/tests/unit/test_license_key_generator.py
- migrations/06_license_key_updates.sql
- migrations/down/06_license_key_updates_down.sql
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.codacy.yml:
- Around line 7-9: The change disables tsqllint in .codacy.yml which removes
linting for SQL migration files; restore SQL linting by re-enabling tsqllint and
adding a configuration block that includes the migrations/ and migrations/down/
paths, or remove the tsqllint: enabled: false entry and instead add an
alternative linter configuration (for example sqlfluff) targeted at the same
directories; update .codacy.yml to reference the chosen linter and ensure it
covers the 12 migration files so Codacy will validate them again.
In `@backend/tests/unit/test_license_key_generator.py`:
- Around line 71-80: Replace the probabilistic uniqueness tests with
deterministic, API-contract-aligned tests by patching the underlying key
generator used by generate_license_key to first yield a colliding value then a
unique value; update test_generated_keys_are_unique to patch the generator (or
its random/uuid helper) to return a fixed sequence of distinct keys and assert
both are produced and different, and update
test_collision_with_existing_keys_triggers_retry to patch the helper to return
the same colliding key first and a different key second, call
generate_license_key(existing_keys={colliding_key}), then assert the returned
key is the second value and that the patched generator was invoked twice
(verifying the retry path in generate_license_key).
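The deterministic retry test described above can be sketched as follows; the `generate_license_key` / `_random_key` pair is a minimal stand-in for the project's actual implementation, re-created here only so the patching pattern is runnable:

```python
import secrets
from unittest.mock import patch

def _random_key() -> str:
    """Candidate key generator; this is what the test patches."""
    return secrets.token_hex(12).upper()

def generate_license_key(existing_keys: frozenset[str] = frozenset()) -> str:
    """Retry until the candidate does not collide with an existing key."""
    while True:
        candidate = _random_key()
        if candidate not in existing_keys:
            return candidate

def test_collision_with_existing_keys_triggers_retry() -> None:
    # First call collides, second succeeds -> exactly two generator calls.
    with patch(f"{__name__}._random_key",
               side_effect=["DUPE", "FRESH"]) as mock_gen:
        assert generate_license_key(existing_keys=frozenset({"DUPE"})) == "FRESH"
        assert mock_gen.call_count == 2
```

Patching the helper makes the collision path deterministic instead of relying on birthday-paradox probabilities.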
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: ca8f76a6-5755-4492-9fe0-982d0d53b68b
📒 Files selected for processing (2)
- .codacy.yml
- backend/tests/unit/test_license_key_generator.py
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
migrations/03_app.sql (1)
174-179: ⚠️ Potential issue | 🟠 Major

Use a forward migration for this type change.

This only changes the bootstrap definition for fresh installs. Any database that already created app."node_locked_license_data" will hit duplicate_table and keep the old license_key type, so upgrade environments drift from new ones. Add a new numbered ALTER TABLE ... ALTER COLUMN migration and handle rows longer than 24 before tightening the type.

🛠️ Safer migration strategy

- "license_key" VARCHAR(24) NOT NULL UNIQUE,
+ "license_key" TEXT NOT NULL UNIQUE,

ALTER TABLE app."node_locked_license_data"
    ALTER COLUMN "license_key" TYPE VARCHAR(24);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@migrations/03_app.sql` around lines 174 - 179, The migration currently only changes the CREATE TABLE for app."node_locked_license_data" and will not alter existing DBs; add a new forward migration file (next sequence number) that runs an ALTER TABLE app."node_locked_license_data" ALTER COLUMN "license_key" TYPE VARCHAR(24) and before changing the type handle existing rows longer than 24 (e.g., trim, validate, or fail with a clear message) so upgrades don't break; reference the table name app."node_locked_license_data" and column "license_key" in the new migration and include a safe remediation step for overlength values before tightening the type.
♻️ Duplicate comments (1)
.codacy.yml (1)
3-5: ⚠️ Potential issue | 🟠 Major

Don't drop SQL lint coverage from migrations/**.

include_paths still scopes Codacy to migrations/**, but tsqllint now excludes that same tree. With no replacement SQL engine configured here, the migration set stops being linted again.

#!/bin/bash
set -euo pipefail
# Verify whether SQL files still have any linter coverage configured.
echo "SQL files under migrations/:"
fd -e sql . migrations || true
echo
echo "Repository references to SQL linters:"
rg -n --hidden -S 'tsqllint|sqlfluff|sql[- ]lint' || true

Expected result: either migrations/** remains covered by tsqllint, or another SQL linter is configured for those files.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.codacy.yml around lines 3 - 5, The Codacy config currently excludes migrations/** from tsqllint (tsqllint.exclude_paths) while include_paths still targets migrations/**, effectively removing SQL linting for migrations; either remove "migrations/**" from the tsqllint.exclude_paths entry so tsqllint continues to cover migrations, or instead add/configure an alternative SQL linter block (e.g., sqlfluff or sql-lint) for the migrations/** path in .codacy.yml so those files remain linted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/services/license_key_generator.py`:
- Around line 267-268: The code currently decodes license_file.signature with
base64.b64decode which tolerates junk; update the decoding in
license_key_generator.py to call base64.b64decode(license_file.signature,
validate=True) and then check the resulting signature length against the
expected signature byte length (the value used by the verify() call or the
public key algorithm) and raise/reject if lengths mismatch or decoding fails,
before calling verify(payload, signature) (references: _canonical_json, payload,
license_file.signature, base64.b64decode, verify).
- Around line 77-84: Replace the loose string types on the LicenseData model
with stricter types and validations: import UUID from uuid and change license_id
and vendor_id to type UUID, and set activation_code to a constrained str
validated against ACTIVATION_CODE_PATTERN (e.g., using a regex constraint or
Pydantic Field with regex); ensure issued_at, expires_at, and metadata remain
unchanged. Update any places that construct LicenseData (tests like
test_license_data_default_issued_at, test_license_data_optional_fields,
test_license_data_with_metadata, test_license_data_with_expiry) to pass UUID
objects (e.g., uuid.uuid4()) for license_id and vendor_id and ensure
activation_code fixtures conform to ACTIVATION_CODE_PATTERN so invalid payloads
are rejected before signing.
---
Outside diff comments:
In `@migrations/03_app.sql`:
- Around line 174-179: The migration currently only changes the CREATE TABLE for
app."node_locked_license_data" and will not alter existing DBs; add a new
forward migration file (next sequence number) that runs an ALTER TABLE
app."node_locked_license_data" ALTER COLUMN "license_key" TYPE VARCHAR(24) and
before changing the type handle existing rows longer than 24 (e.g., trim,
validate, or fail with a clear message) so upgrades don't break; reference the
table name app."node_locked_license_data" and column "license_key" in the new
migration and include a safe remediation step for overlength values before
tightening the type.
---
Duplicate comments:
In @.codacy.yml:
- Around line 3-5: The Codacy config currently excludes migrations/** from
tsqllint (tsqllint.exclude_paths) while include_paths still targets
migrations/**, effectively removing SQL linting for migrations; either remove
"migrations/**" from the tsqllint.exclude_paths entry so tsqllint continues to
cover migrations, or instead add/configure an alternative SQL linter block
(e.g., sqlfluff or sql-lint) for the migrations/** path in .codacy.yml so those
files remain linted.
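The stricter signature decoding suggested for license_key_generator.py (lines 267-268) might look like this sketch; `decode_signature` is a hypothetical helper, and the 64-byte length is the fixed size of an Ed25519 signature:

```python
import base64

ED25519_SIGNATURE_LEN = 64  # Ed25519 signatures are always 64 bytes

def decode_signature(encoded: str) -> bytes:
    """Strictly decode a base64 signature and check its length before verify()."""
    try:
        # validate=True rejects non-alphabet characters instead of
        # silently discarding them.
        sig = base64.b64decode(encoded, validate=True)
    except ValueError as exc:  # binascii.Error is a ValueError subclass
        raise ValueError("signature is not valid base64") from exc
    if len(sig) != ED25519_SIGNATURE_LEN:
        raise ValueError(
            f"expected {ED25519_SIGNATURE_LEN}-byte signature, got {len(sig)}"
        )
    return sig
```

Rejecting malformed input here means `verify(payload, signature)` only ever sees byte strings of the correct shape.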
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 288331b3-d154-469b-b539-121e2fa00e95
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
- .codacy.yml
- backend/app/services/license_key_generator.py
- backend/pyproject.toml
- backend/tests/integration/test_auth.py
- backend/tests/services/__init__.py
- backend/tests/services/test_license_key_generator.py
- backend/tests/unit/__init__.py
- backend/tests/unit/test_auth.py
- migrations/03_app.sql
💤 Files with no reviewable changes (2)
- backend/tests/unit/test_auth.py
- backend/tests/integration/test_auth.py
- add preflight check for license key length in migration scripts
- fix migration tests
- update backend tests and add docstrings
- update ruff configuration
- dependency cleanup and add missing deps
Force-pushed from b004e81 to b16931e
Actionable comments posted: 21
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.gitignore:
- Around line 19-21: Remove the unrelated patterns from .gitignore: delete the
lines for ".fastembed_cache", ".amdb", and ".database" since FastEmbed isn't a
dependency and those artifacts aren't used by the project; keep only
repository-specific, relevant ignore entries (e.g., OS, editor, build artifacts,
or any actual local DB files if required), then commit the updated .gitignore so
the unnecessary cache/database patterns are no longer ignored.
In `@backend/app/core/ed25519.py`:
- Around line 17-30: The function load_public_ed25519_key has an inconsistent
type hint for its pem parameter (bytes) compared to load_private_ed25519_key
(Buffer); update load_public_ed25519_key's signature to accept the same Buffer
type as load_private_ed25519_key (or the equivalent union like bytes | bytearray
| memoryview) so both loaders use consistent input types, and keep the existing
runtime behavior (call load_pem_public_key and the isinstance check against
Ed25519PublicKey).
In `@backend/app/domain/activation.py`:
- Around line 52-55: The check that currently raises TypeError when uuid.version
!= 7 should instead raise ValueError because the UUID is the correct type but
has an invalid value; update the raise in the function/method performing the
version check (the block referencing uuid.version != 7) to raise ValueError with
the same descriptive message, and update any tests that assert a TypeError to
expect ValueError instead.
- Around line 39-44: The uuid property re-normalizes and re-decodes the
activation code redundantly; fix by caching the decoded integer during
validation (e.g., in the validate_code validator store the decoded int on a
private attribute like _decoded_uuid_int) and then have the uuid property return
UUID(int=self._decoded_uuid_int), or alternatively make the uuid property a
cached result (use functools.cached_property or pydantic model caching) so
base32_crockford.normalize/decode only run once; update the validate_code (or
switch to a model_validator) and the uuid property accordingly to use the cached
value.
- Around line 46-55: The isinstance(uuid, UUID) check in Activation.generate is
redundant after assigning uuid6.uuid7() when uuid is None; update the validation
to only enforce type when an external value was passed: after the initial if
uuid is None: uuid = uuid6.uuid7() block, validate with "if not isinstance(uuid,
UUID): raise TypeError(...)" but only when the original argument was not None
(or simply remove the unreachable branch by performing the type check inside an
else branch of the None handling), then keep the existing UUID version check
(uuid.version != 7) to raise the same TypeError if needed; target the
classmethod generate in activation.py.
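The error semantics recommended above ("correct type, invalid value" raises ValueError; wrong type raises TypeError) can be sketched with a stdlib-only check; `require_uuid7` is a hypothetical helper, and the project's uuid6.uuid7() generator is assumed from the comment:

```python
from uuid import UUID

def require_uuid7(value: UUID) -> UUID:
    """Validate that `value` is a version-7 UUID."""
    if not isinstance(value, UUID):
        # Wrong type entirely -> TypeError
        raise TypeError(f"expected UUID, got {type(value).__name__}")
    if value.version != 7:
        # Correct type, invalid value -> ValueError
        raise ValueError(f"UUID version 7 required, got version {value.version}")
    return value
```

Callers that generate a fresh uuid7 when none is supplied only need the version check on externally supplied values.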
In `@backend/app/domain/license.py`:
- Around line 44-46: NodeLockedLicense currently allows any string for
device_fingerprint which can embed malformed fingerprints into signed licenses;
update the model to validate/normalize device_fingerprint at the boundary by
either using the existing fingerprint value object (e.g., a
Fingerprint/DeviceFingerprint type) or adding a pydantic validator on
NodeLockedLicense.device_fingerprint that enforces the canonical format
(normalize case, strip separators, check length/charset) and raises
ValidationError on invalid input so only normalized, validated fingerprints
become part of the signed payload.
In `@backend/app/internal/base32_crockford.py`:
- Around line 117-125: normalize() currently allows a trailing checksum symbol
to remain in symbol_string and DECODE_SYMBOLS contains mappings for checksum
characters; when checksum=False inputs like "0U" get decoded instead of
rejected. After calling normalize(symbol_string, strict=strict) and before
decoding loop, if checksum is False validate that symbol_string contains no
checksum characters (e.g., any symbol whose DECODE_SYMBOLS value >=
BASE_CHECK_INDEX or explicitly any of the checksum chars "*~$=U"); if any are
present raise a ValueError (or the existing InvalidSymbol error) so
checksum-only symbols are rejected; keep the existing checksum=True branch that
strips the last symbol into check_symbol.
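The guard described above can be illustrated with a minimal, self-contained Crockford Base32 decoder (a sketch, not the project's base32_crockford module); the key point is that checksum-only symbols are rejected in the data portion:

```python
# Crockford Base32 data alphabet (I, L, O normalize to 1, 1, 0).
DATA_ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"
# Extra symbols for values 32..36, legal only as a trailing check digit.
CHECK_EXTRAS = "*~$=U"
DECODE_SYMBOLS = {c: i for i, c in enumerate(DATA_ALPHABET + CHECK_EXTRAS)}

def normalize(symbol_string: str) -> str:
    s = symbol_string.upper().replace("-", "")
    return s.translate(str.maketrans("ILO", "110"))

def decode(symbol_string: str, checksum: bool = False) -> int:
    symbol_string = normalize(symbol_string)
    check_symbol = None
    if checksum:
        symbol_string, check_symbol = symbol_string[:-1], symbol_string[-1]
    number = 0
    for ch in symbol_string:
        value = DECODE_SYMBOLS.get(ch)
        # Reject unknown symbols AND checksum-only symbols in the data
        # portion, so an input like "0U" fails when checksum=False.
        if value is None or value >= 32:
            raise ValueError(f"invalid data symbol: {ch!r}")
        number = number * 32 + value
    if check_symbol is not None:
        if DECODE_SYMBOLS.get(check_symbol) != number % 37:
            raise ValueError(f"checksum mismatch: {check_symbol!r}")
    return number
```

With this shape, the checksum=True branch still consumes the trailing symbol, while checksum=False inputs containing any of `*~$=U` raise instead of silently decoding.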
In `@backend/app/services/license_gen.py`:
- Around line 22-26: Replace the inappropriate LicenseKeyGenerationError raised
in the except InvalidSignature block with a validation-oriented exception:
either introduce a new LicenseVerificationError exception class or reuse an
existing validation exception (e.g., ValidationError) and raise that from the
caught InvalidSignature (preserving the original exception via "from exc");
update the except block that currently references InvalidSignature and
LicenseKeyGenerationError to raise LicenseVerificationError (or the chosen
validation type) with a clear message like "License signature verification
failed" so callers can distinguish verification failures from generation errors.
- Around line 17-19: Wrap the base58.b58decode call in a try/except to catch
ValueError from invalid Base58 input (the call that produces sig_bytes from
signature) and raise or return a clearer error (e.g., raise ValueError("invalid
base58-encoded signature") from err) so the failure is handled instead of
propagating raw; keep the rest of the flow (payload =
lic.canonical_json().encode(), public_key = load_public_ed25519_key(...))
unchanged and ensure the exception chaining uses "from" to preserve the original
error.
In `@backend/tests/api/routes/test_auth.py`:
- Around line 411-427: The test builds an auth payload then creates an
invalid_token using create_access_token with a random UUID, which may not match
the signed-up vendor; instead, when token_kind == "access_token" call
signup(client, payload) then call login(client, payload) and extract the access
token from that login response to use as invalid_token so the vendor/client pair
matches and only token_type differs; update the branch that currently calls
create_access_token(faker.uuid4(), app_settings) to reuse the logged-in access
token returned by login(client, payload) (referencing build_auth_payload,
create_access_token, signup, login, and invalid_token).
In `@backend/tests/api/test_deps.py`:
- Around line 214-219: The tests currently set request.app.state attributes by
calling build_request(db_pool=None) or build_request(settings=None), so they
never exercise the true "attribute missing" path; update the failing-case tests
that call build_request (the ones exercising deps.get_db and any similar
deps.get_settings) to construct a request that does not include the keys on
request.app.state at all (i.e., call/build a request without adding
app.state.db_pool or app.state.settings) so that next(deps.get_db(request))
raises ServiceUnavailableException and the same for the other tests; locate
usages of build_request and deps.get_db / deps.get_settings in the test file and
change the setup to omit adding those state attributes rather than adding them
with None.
In `@backend/tests/conftest.py`:
- Around line 312-315: The autouse fixture override_function_dependencies (which
depends on db_conn) forces PostgreSQL to start for every test; change it so it
no longer runs globally: remove autouse=True (or set autouse=False) and either
register a new fixture (e.g., integration_override_function_dependencies) that
depends on db_conn for API/integration tests only, or require explicit use via
pytest.mark.usefixtures on integration test modules; ensure any references to
override_function_dependencies in test modules are updated to opt in so pure
unit tests (like tests/domain/test_license.py and
tests/services/test_license_gen.py) no longer pull in db_conn.
In `@backend/tests/core/test_exception_handlers.py`:
- Around line 47-143: Docstrings in the tests refer to the old internal name
`_build_error_details`; update each docstring to reference the public function
name `build_error_details` instead (affecting tests:
test_build_error_details_none_returns_empty_list,
test_build_error_details_accepts_error_detail_instance,
test_build_error_details_dict_uses_fallback_message_when_missing,
test_build_error_details_non_dict_item_is_stringified) so the documented
"Covers" section matches the actual symbol under test; simply replace
`app.core.exception_handlers._build_error_details` with
`app.core.exception_handlers.build_error_details` in those docstrings.
In `@backend/tests/core/test_security.py`:
- Around line 63-66: The test may occasionally generate identical values for
original_password and wrong_password using faker.password(), causing a false
positive; modify the test around original_password, wrong_password,
get_password_hash, and verify_password to guarantee they differ (e.g., generate
wrong_password in a loop or mutate it until wrong_password != original_password)
before calling get_password_hash(original_password) and
verify_password(wrong_password, hashed_password) so the “wrong” case is
deterministic.
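The deterministic "wrong password" setup could use a small hypothetical helper (distinct_password is not part of the codebase; it wraps whatever generator the test uses, such as faker.password):

```python
from typing import Callable

def distinct_password(generate: Callable[[], str], original: str,
                      max_tries: int = 10) -> str:
    """Return a password guaranteed to differ from `original`."""
    for _ in range(max_tries):
        candidate = generate()
        if candidate != original:
            return candidate
    # Deterministic fallback: mutate the original so equality is impossible.
    return original + "!"
```

The fallback makes the test immune to the (rare) case where the generator keeps producing the original value.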
In `@backend/tests/domain/test_activation.py`:
- Around line 128-144: The code currently treats a non-v7 UUID as a TypeError;
change the validation in ActivationCode.generate to raise ValueError for wrong
UUID version (e.g., if uuid.version != 7: raise ValueError("UUID version 7
required") ) and update any tests (the test in test_activation.py) to expect
ValueError instead of TypeError so the error semantics reflect “correct type,
invalid value.”
In `@backend/tests/internal/test_base32_crockford.py`:
- Around line 134-135: The test's expected error message is misleading for
base32_crockford.encode when passing a negative integer; update the assertion to
match the actual validation (which allows zero) by changing the pytest.raises
match from "is not a positive integer" to a correct message such as "is
negative" or "is not a non-negative integer" so the test reflects that the
function rejects negative values but accepts zero (refer to
base32_crockford.encode and the test invocation base32_crockford.encode(-1)).
In `@backend/tests/schemas/test_auth.py`:
- Around line 116-122: The test's scenario branching silently treats any
unexpected scenario as "missing_client_id"; update the branch in the test (where
`scenario` is checked and `payload` is mutated) to replace the final `else` with
an explicit `elif scenario == "missing_client_id": payload.pop("client_id")` and
add an `else: raise ValueError(f"Unknown scenario: {scenario}")` (or an assert)
to fail fast on typos/new scenarios; locate this logic in the test function
handling `scenario` in test_auth.py and apply the check to ensure unknown
scenarios are guarded.
In `@backend/tests/services/test_auth.py`:
- Around line 281-314: The test currently only checks tokens are non-empty but
doesn't assert they are newly rotated; update
test_refresh_success_returns_new_token_pair to assert that result.access_token
!= tokens.access_token and result.refresh_token != tokens.refresh_token to
guarantee new values were issued, and optionally attempt calling
refresh(tokens.refresh_token, client_id, db_cursor, app_settings) again (or the
equivalent invalidation check) and assert it fails to ensure the old refresh
token was invalidated.
- Around line 354-389: The test test_refresh_rejects_invalid_tokens uses a
random client id (faker.uuid4()) when calling refresh, which can make failures
depend on validation order; change the call in that test to use a
known/persisted client id instead (e.g., obtain an existing vendor/client id set
up in the test fixtures or create and persist a test vendor before invoking
refresh) so the test isolates token-related errors; update the invocation of
refresh(refresh_token, faker.uuid4(), db_cursor, app_settings) to use the known
id and ensure the fixture or setup code that creates the persisted id is
referenced/used in this test.
In `@backend/tests/test_main.py`:
- Around line 86-88: The test docstring for the route-id test is truncated;
update the docstring in the test in backend/tests/test_main.py to complete the
sentence so it clearly states that the test verifies
app.main.custom_generate_unique_id prefixes generated route ids (so generated
OpenAPI operation names remain stable across tagged and untagged routes). Locate
the test that references app.main.custom_generate_unique_id and replace the
incomplete sentence with a full, clear sentence describing that intent.
- Around line 209-210: The failure-path test currently monkeypatches
app_main.ConnectionPool with a lambda but doesn't assert what constructor args
were passed; modify the patch to record the arguments and assert them (mirroring
the success-path test). Replace the lambda used in monkeypatch.setattr(app_main,
"ConnectionPool", ...) with a small spy/factory that captures the dsn and open
keyword (e.g., store into captured_dsn/captured_open variables or a list) and
returns the same pool object, then add assertions that the captured dsn and open
flag equal the expected values; keep the existing monkeypatch of
app_main.Settings unchanged.
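The spy/factory replacement for the lambda could be sketched like this; `PoolSpy` and the `captured_*` names are hypothetical, and the real ConnectionPool signature may take additional arguments:

```python
class PoolSpy:
    """Stands in for ConnectionPool and records constructor arguments."""

    def __init__(self) -> None:
        self.captured_dsn = None
        self.captured_open = None

    def __call__(self, dsn: str, *, open: bool):
        self.captured_dsn = dsn
        self.captured_open = open
        return self  # stand-in for the pool instance

# Sketch of use in the test:
#   spy = PoolSpy()
#   monkeypatch.setattr(app_main, "ConnectionPool", spy)
#   ... trigger app startup ...
#   assert spy.captured_dsn == expected_dsn
#   assert spy.captured_open is False
```

Unlike a bare lambda, the spy lets the failure-path test assert the same constructor contract as the success-path test.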
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: e231ffb5-982b-4960-9e63-f9cc47e7e923
⛔ Files ignored due to path filters (1)
uv.lock is excluded by !**/*.lock
📒 Files selected for processing (57)
- .github/workflows/code_test.yml
- .github/workflows/sonarcloud.yml
- .gitignore
- backend/app/api/routes/auth.py
- backend/app/api/routes/login.py
- backend/app/core/ed25519.py
- backend/app/core/exception_handlers.py
- backend/app/core/exceptions.py
- backend/app/core/security.py
- backend/app/crud/vendor.py
- backend/app/domain/activation.py
- backend/app/domain/fingerprint.py
- backend/app/domain/license.py
- backend/app/internal/.gitkeep
- backend/app/internal/base32_crockford.py
- backend/app/main.py
- backend/app/pre_start.py
- backend/app/schemas/response.py
- backend/app/services/license_gen.py
- backend/pyproject.toml
- backend/tests/__init__.py
- backend/tests/api/routes/test_auth.py
- backend/tests/api/routes/test_health.py
- backend/tests/api/test_deps.py
- backend/tests/api/test_main.py
- backend/tests/api/test_middlewares.py
- backend/tests/conftest.py
- backend/tests/core/test_config.py
- backend/tests/core/test_ed25519.py
- backend/tests/core/test_exception_handlers.py
- backend/tests/core/test_exceptions.py
- backend/tests/core/test_security.py
- backend/tests/crud/test_vendor.py
- backend/tests/domain/test_activation.py
- backend/tests/domain/test_fingerprint.py
- backend/tests/domain/test_license.py
- backend/tests/integration/test_auth.py
- backend/tests/internal/test_base32_crockford.py
- backend/tests/schemas/test_auth.py
- backend/tests/schemas/test_response.py
- backend/tests/services/__init__.py
- backend/tests/services/test_auth.py
- backend/tests/services/test_license_gen.py
- backend/tests/test_main.py
- backend/tests/test_pre_start.py
- backend/tests/unit/test_auth.py
- migrations/03_app.sql
- migrations/04_audit.sql
- migrations/07_audit_triggers.sql
- migrations/tests/helpers.py
- migrations/tests/test_constraints.py
- migrations/tests/test_indexes.py
- migrations/tests/test_partitioning.py
- migrations/tests/test_privileges.py
- migrations/tests/test_rls_isolation.py
- migrations/tests/test_rls_structure.py
- migrations/tests/test_seed_data.py
💤 Files with no reviewable changes (4)
- backend/app/api/routes/login.py
- backend/tests/__init__.py
- backend/tests/unit/test_auth.py
- backend/tests/integration/test_auth.py
.fastembed_cache
.database
.amdb
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Verify usage of fastembed and identify what .amdb files are
# Check for fastembed imports or references
echo "=== Checking for fastembed usage ==="
rg -i 'fastembed' --type=py -C 2
# Check for .amdb file references or existing files
echo -e "\n=== Checking for .amdb references ==="
rg -i '\.amdb' -C 2
# Look for any existing files matching these patterns
echo -e "\n=== Checking for existing files matching patterns ==="
fd -H -t f '^\.fastembed_cache$'
fd -H -t f '^\.database$'
fd -H '\.amdb$'
# Check requirements/dependencies for fastembed
echo -e "\n=== Checking dependencies ==="
fd -t f 'requirements.*\.txt|pyproject\.toml|setup\.py|Pipfile' --exec cat {}

Repository: eggzec/permit
Length of output: 5047
Remove .fastembed_cache from .gitignore — it's not a project dependency.
The verification shows FastEmbed is neither listed in pyproject.toml dependencies nor referenced in the codebase. This pattern appears to be out of scope for the license key generation PR. Additionally, .amdb is unidentified and lacks any references in the codebase. The .database pattern is unclear given the project uses PostgreSQL (server-based) rather than file-based databases.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.gitignore around lines 19 - 21, Remove the unrelated patterns from
.gitignore: delete the lines for ".fastembed_cache", ".amdb", and ".database"
since FastEmbed isn't a dependency and those artifacts aren't used by the
project; keep only repository-specific, relevant ignore entries (e.g., OS,
editor, build artifacts, or any actual local DB files if required), then commit
the updated .gitignore so the unnecessary cache/database patterns are no longer
ignored.
def load_private_ed25519_key(
    pem: Buffer, password: bytes | None = None
) -> Ed25519PrivateKey:
    key = load_pem_private_key(pem, password=password)
    if not isinstance(key, Ed25519PrivateKey):
        raise TypeError("Provided key is not an Ed25519 private key")
    return key


def load_public_ed25519_key(pem: bytes) -> Ed25519PublicKey:
    key = load_pem_public_key(pem)
    if not isinstance(key, Ed25519PublicKey):
        raise TypeError("Provided key is not an Ed25519 public key")
    return key
🧹 Nitpick | 🔵 Trivial
Type signature inconsistency between private and public key loaders.
load_private_ed25519_key accepts Buffer (which includes bytes, bytearray, memoryview) for the pem parameter, but load_public_ed25519_key only accepts bytes. Since load_pem_public_key from cryptography also accepts Buffer, consider using consistent types for both functions.
♻️ Proposed fix for type consistency
-def load_public_ed25519_key(pem: bytes) -> Ed25519PublicKey:
+def load_public_ed25519_key(pem: Buffer) -> Ed25519PublicKey:
     key = load_pem_public_key(pem)
     if not isinstance(key, Ed25519PublicKey):
         raise TypeError("Provided key is not an Ed25519 public key")
     return key

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/core/ed25519.py` around lines 17 - 30, The function
load_public_ed25519_key has an inconsistent type hint for its pem parameter
(bytes) compared to load_private_ed25519_key (Buffer); update
load_public_ed25519_key's signature to accept the same Buffer type as
load_private_ed25519_key (or the equivalent union like bytes | bytearray |
memoryview) so both loaders use consistent input types, and keep the existing
runtime behavior (call load_pem_public_key and the isinstance check against
Ed25519PublicKey).
@computed_field
@property
def uuid(self) -> UUID:
    flat = base32_crockford.normalize(self.code)
    n = base32_crockford.decode(flat, checksum=True)
    return UUID(int=n)
🧹 Nitpick | 🔵 Trivial
Redundant normalization and decode in uuid property.
The validate_code validator already normalizes and decodes the code. The uuid property re-normalizes and re-decodes the same value. Consider caching or restructuring to avoid duplicate work.
♻️ Option: Store decoded value during validation
One approach is to store the decoded integer as a private attribute during validation, then use it in the uuid property. Alternatively, accept the small overhead since the operations are fast and the model is likely short-lived.
+    _decoded_int: int | None = None  # Private field for cached decode result
+
     @field_validator("code")
     @classmethod
     def validate_code(cls, v: str) -> str:
         normalized = base32_crockford.normalize(v)
         if len(normalized) != cls.LENGTH:
             raise ValueError("Activation code must contain 30 symbols")
-        base32_crockford.decode(normalized, checksum=True)
+        # Decode is performed for checksum validation; result used by uuid property
+        base32_crockford.decode(normalized, checksum=True)
         return "-".join(
             normalized[i : i + cls.GROUP]
             for i in range(0, cls.LENGTH, cls.GROUP)
         )

Note: Pydantic's field_validator doesn't easily allow setting other fields. If performance matters, consider using model_validator instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/domain/activation.py` around lines 39 - 44, The uuid property
re-normalizes and re-decodes the activation code redundantly; fix by caching the
decoded integer during validation (e.g., in the validate_code validator store
the decoded int on a private attribute like _decoded_uuid_int) and then have the
uuid property return UUID(int=self._decoded_uuid_int), or alternatively make the
uuid property a cached result (use functools.cached_property or pydantic model
caching) so base32_crockford.normalize/decode only run once; update the
validate_code (or switch to a model_validator) and the uuid property accordingly
to use the cached value.
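A stdlib-only way to express the "decode once" idea from the suggestion above is `functools.cached_property`. The sketch below uses a plain class and a `fake_decode` stand-in; both are hypothetical simplifications, since the real model is a Pydantic `BaseModel` and the real decode is `base32_crockford.decode` with a checksum:

```python
from functools import cached_property
from uuid import UUID

DECODE_CALLS = 0  # module-level counter, only to show how often decode runs


def fake_decode(code: str) -> int:
    """Stand-in for base32_crockford.decode; real checksum handling is omitted."""
    global DECODE_CALLS
    DECODE_CALLS += 1
    # Hypothetical: treat the code as hex so the sketch stays dependency-free.
    return int(code, 16)


class Activation:
    """Plain-class sketch; the real model is a Pydantic BaseModel."""

    def __init__(self, code: str) -> None:
        self.code = code

    @cached_property
    def uuid(self) -> UUID:
        # First access decodes; later accesses reuse the cached value.
        return UUID(int=fake_decode(self.code))


act = Activation("0123456789abcdef0123456789abcdef")
first = act.uuid
second = act.uuid  # served from the cache, no second decode
```

With `cached_property` the decode cost is paid once per instance, without wiring a private field through a validator.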
@classmethod
def generate(cls, uuid: UUID | None = None) -> ActivationCode:
    if uuid is None:
        uuid = uuid6.uuid7()
    if not isinstance(uuid, UUID):
        raise TypeError(f"uuid cannot be of type {uuid.__class__.__name__}")
    if uuid.version != 7:  # noqa: PLR2004
        raise TypeError(
            f"uuid must be a UUID version 7, got {uuid.version}"
        )
🧹 Nitpick | 🔵 Trivial
Redundant type check after None handling.
The isinstance(uuid, UUID) check at line 50 is unreachable when uuid was initially None, since uuid6.uuid7() returns a UUID object. The check only matters when a non-None value was passed, but the type hint already indicates UUID | None.
♻️ Suggested refactor to clarify intent
 @classmethod
 def generate(cls, uuid: UUID | None = None) -> ActivationCode:
+    if uuid is not None and not isinstance(uuid, UUID):
+        raise TypeError(f"uuid cannot be of type {uuid.__class__.__name__}")
     if uuid is None:
         uuid = uuid6.uuid7()
-    if not isinstance(uuid, UUID):
-        raise TypeError(f"uuid cannot be of type {uuid.__class__.__name__}")
     if uuid.version != 7:  # noqa: PLR2004
         raise TypeError(
             f"uuid must be a UUID version 7, got {uuid.version}"
         )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/domain/activation.py` around lines 46 - 55, The isinstance(uuid,
UUID) check in Activation.generate is redundant after assigning uuid6.uuid7()
when uuid is None; update the validation to only enforce type when an external
value was passed: after the initial if uuid is None: uuid = uuid6.uuid7() block,
validate with "if not isinstance(uuid, UUID): raise TypeError(...)" but only
when the original argument was not None (or simply remove the unreachable branch
by performing the type check inside an else branch of the None handling), then
keep the existing UUID version check (uuid.version != 7) to raise the same
TypeError if needed; target the classmethod generate in activation.py.
if uuid.version != 7:  # noqa: PLR2004
    raise TypeError(
        f"uuid must be a UUID version 7, got {uuid.version}"
    )
🧹 Nitpick | 🔵 Trivial
Consider ValueError instead of TypeError for version mismatch.
The UUID type is correct; only the version value is wrong. ValueError is more semantically appropriate for "correct type, invalid value" scenarios.
♻️ Proposed change
 if uuid.version != 7:  # noqa: PLR2004
-    raise TypeError(
+    raise ValueError(
         f"uuid must be a UUID version 7, got {uuid.version}"
     )

Note: This would require updating the corresponding test to expect ValueError instead of TypeError.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/domain/activation.py` around lines 52 - 55, The check that
currently raises TypeError when uuid.version != 7 should instead raise
ValueError because the UUID is the correct type but has an invalid value; update
the raise in the function/method performing the version check (the block
referencing uuid.version != 7) to raise ValueError with the same descriptive
message, and update any tests that assert a TypeError to expect ValueError
instead.
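The TypeError/ValueError split the suggestion describes can be sketched with a stdlib-only helper (`require_uuid7` is a hypothetical name, not project code):

```python
from uuid import UUID, uuid4


def require_uuid7(value: UUID) -> UUID:
    """Sketch of the suggested convention: wrong type -> TypeError, wrong value -> ValueError."""
    if not isinstance(value, UUID):
        # The argument is not a UUID at all, so the *type* is wrong.
        raise TypeError(f"uuid cannot be of type {value.__class__.__name__}")
    if value.version != 7:
        # Correct type, but the version field holds an invalid *value*.
        raise ValueError(f"uuid must be a UUID version 7, got {value.version}")
    return value


# A UUID whose version nibble is 7 passes; construct one explicitly
# since uuid6.uuid7() is a third-party helper.
v7 = UUID("00000000-0000-7000-8000-000000000000")
checked = require_uuid7(v7)
```

This mirrors how the stdlib itself behaves: `int("x")` raises ValueError (right type, bad value), while `int(None)` raises TypeError.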
if scenario == "invalid_email":
    payload["email"] = faker.word()
elif scenario == "short_password":
    payload["password"] = faker.password(length=7, special_chars=False)
else:
    payload.pop("client_id")
🧹 Nitpick | 🔵 Trivial
Guard unknown invalid-payload scenarios explicitly.
The current else path silently treats any new/typo scenario as missing_client_id, which can mask test mistakes.
Suggested hardening
 if scenario == "invalid_email":
     payload["email"] = faker.word()
 elif scenario == "short_password":
     payload["password"] = faker.password(length=7, special_chars=False)
-else:
+elif scenario == "missing_client_id":
     payload.pop("client_id")
+else:
+    raise AssertionError(f"Unhandled scenario: {scenario}")

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/schemas/test_auth.py` around lines 116 - 122, The test's
scenario branching silently treats any unexpected scenario as
"missing_client_id"; update the branch in the test (where `scenario` is checked
and `payload` is mutated) to replace the final `else` with an explicit `elif
scenario == "missing_client_id": payload.pop("client_id")` and add an `else:
raise ValueError(f"Unknown scenario: {scenario}")` (or an assert) to fail fast
on typos/new scenarios; locate this logic in the test function handling
`scenario` in test_auth.py and apply the check to ensure unknown scenarios are
guarded.
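The fail-fast dispatch pattern recommended above can be exercised standalone. This is an illustrative sketch; `mutate_payload` and the literal payload values are hypothetical stand-ins for the test's faker-driven helpers:

```python
def mutate_payload(payload: dict, scenario: str) -> dict:
    """Mutate a signup payload for a named invalid-input scenario."""
    if scenario == "invalid_email":
        payload["email"] = "not-an-email"
    elif scenario == "short_password":
        payload["password"] = "short"
    elif scenario == "missing_client_id":
        payload.pop("client_id")
    else:
        # Fail fast on typos or newly added, unhandled scenario ids,
        # instead of silently falling into a default branch.
        raise AssertionError(f"Unhandled scenario: {scenario}")
    return payload


base = {"email": "a@b.c", "password": "longenough", "client_id": "cid"}
result = mutate_payload(dict(base), "missing_client_id")
```

A misspelled scenario id like `"missing_clientid"` now raises immediately, so a typo in a parametrize case cannot masquerade as a passing test.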
def test_refresh_success_returns_new_token_pair(
    db_conn: Connection, app_settings: Settings, faker
) -> None:
    """
    Verifies that `app.services.auth.refresh` exchanges a valid refresh token for a replacement token pair.
    This matters because the auth service owns the token-refresh contract used by the API route.

    Covers:
        - `app.services.auth.login`
        - `app.services.auth.refresh`

    Rationale:
        The test performs the full signup-login-refresh sequence through the real service functions because refresh correctness depends on issued token state.

    Fixtures:
        db_conn: Transactional database connection rolled back after the test.
        app_settings: Shared `Settings` object used to sign and decode tokens.
        faker: Session-scoped `Faker` instance used to generate auth inputs.
    """
    email, password, client_id = build_signup_args(faker)
    with db_conn.cursor() as db_cursor:
        signup(db_cursor, email, password, client_id, app_settings)
        tokens = login(db_cursor, email, password, client_id, app_settings)
        result = refresh(
            tokens.refresh_token, client_id, db_cursor, app_settings
        )

    assert result.access_token, (
        "Expected refresh to return a non-empty replacement access token"
    )
    assert result.refresh_token, (
        "Expected refresh to return a non-empty replacement refresh token"
    )
Assert rotation semantics explicitly in refresh success test.
Line 308-Line 313 only verify non-empty tokens. This does not prove a new token pair was issued, so token-reuse regressions could pass undetected.
Suggested test assertion update
@@
     with db_conn.cursor() as db_cursor:
         signup(db_cursor, email, password, client_id, app_settings)
         tokens = login(db_cursor, email, password, client_id, app_settings)
         result = refresh(
             tokens.refresh_token, client_id, db_cursor, app_settings
         )
@@
     assert result.refresh_token, (
         "Expected refresh to return a non-empty replacement refresh token"
     )
+    assert result.access_token != tokens.access_token, (
+        "Expected refresh to rotate and return a new access token"
+    )
+    assert result.refresh_token != tokens.refresh_token, (
+        "Expected refresh to rotate and return a new refresh token"
+    )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/services/test_auth.py` around lines 281 - 314, The test
currently only checks tokens are non-empty but doesn't assert they are newly
rotated; update test_refresh_success_returns_new_token_pair to assert that
result.access_token != tokens.access_token and result.refresh_token !=
tokens.refresh_token to guarantee new values were issued, and optionally attempt
calling refresh(tokens.refresh_token, client_id, db_cursor, app_settings) again
(or the equivalent invalidation check) and assert it fails to ensure the old
refresh token was invalidated.
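The gap this comment flags can be demonstrated without any JWT machinery. Below, `issue_pair` is a toy stand-in for the real signer: the weak "non-empty" assertion would pass even if the service echoed the old tokens back, while the inequality assertions prove a new pair was minted:

```python
import secrets


def issue_pair() -> tuple[str, str]:
    """Toy issuer: every call mints fresh random tokens (stand-in for real JWT signing)."""
    return secrets.token_hex(16), secrets.token_hex(16)


old_access, old_refresh = issue_pair()
new_access, new_refresh = issue_pair()

# The weak check passes for any non-empty token, rotated or not:
assert new_access and new_refresh
# The strict checks actually prove rotation happened:
assert new_access != old_access
assert new_refresh != old_refresh
```

In the real test, `result` and `tokens` play the roles of the new and old pairs respectively.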
def test_refresh_rejects_invalid_tokens(
    db_conn: Connection,
    app_settings: Settings,
    faker,
    token_factory,
    expected_message: str,
) -> None:
    """
    Verifies that `app.services.auth.refresh` rejects invalid refresh tokens, malformed payloads, and unknown-vendor claims.
    This matters because the service must not mint new tokens from unusable or untrusted refresh input.

    Covers:
        - `app.services.auth.refresh`

    Rationale:
        The test signs real token variants instead of patching decode helpers, so it documents the actual service boundary. This shape came from REM-003.

    Fixtures:
        db_conn: Transactional database connection rolled back after the test.
        app_settings: Shared `Settings` object used to sign token variants.
        faker: Session-scoped `Faker` instance used to generate claims and ids.

    Parametrize:
        token_factory: Produces the invalid refresh token variant for the scenario.
        expected_message: The authentication error expected from the service.
        Cases:
        - <id="access_token"> — supplies an access token where a refresh token is required.
        - <id="expired_refresh_token"> — supplies a refresh token whose expiry is already in the past.
        - <id="malformed_vendor_id"> — supplies a refresh token whose vendor id claim is not a UUID.
        - <id="unknown_vendor"> — supplies a refresh token whose vendor id does not exist in the database.
    """
    refresh_token = token_factory(faker, app_settings)
    with db_conn.cursor() as db_cursor:
        with pytest.raises(AuthenticationException, match=expected_message):
            refresh(refresh_token, faker.uuid4(), db_cursor, app_settings)
🧹 Nitpick | 🔵 Trivial
Use a known client id in invalid-token scenarios to isolate failure causes.
Line 388 currently passes a random client id, which can couple expected errors to validation order. Using a persisted/known client id keeps these cases focused on token invalidity only.
Suggested stabilization change
@@
 def test_refresh_rejects_invalid_tokens(
@@
 ) -> None:
@@
-    refresh_token = token_factory(faker, app_settings)
     with db_conn.cursor() as db_cursor:
+        email, password, client_id = build_signup_args(faker)
+        signup(db_cursor, email, password, client_id, app_settings)
+        refresh_token = token_factory(faker, app_settings)
         with pytest.raises(AuthenticationException, match=expected_message):
-            refresh(refresh_token, faker.uuid4(), db_cursor, app_settings)
+            refresh(refresh_token, client_id, db_cursor, app_settings)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/services/test_auth.py` around lines 354 - 389, The test
test_refresh_rejects_invalid_tokens uses a random client id (faker.uuid4()) when
calling refresh, which can make failures depend on validation order; change the
call in that test to use a known/persisted client id instead (e.g., obtain an
existing vendor/client id set up in the test fixtures or create and persist a
test vendor before invoking refresh) so the test isolates token-related errors;
update the invocation of refresh(refresh_token, faker.uuid4(), db_cursor,
app_settings) to use the known id and ensure the fixture or setup code that
creates the persisted id is referenced/used in this test.
Verifies that `app.main.custom_generate_unique_id` prefixes generated
This matters because route ids feed generated OpenAPI operation names
and should stay stable across tagged and untagged routes.
Fix incomplete sentence in the route-id test docstring.
The sentence at Line 87 is truncated, which reduces readability of test intent documentation.
Suggested wording fix
-Verifies that `app.main.custom_generate_unique_id` prefixes generated
+Verifies that `app.main.custom_generate_unique_id` prefixes generated route ids.

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Verifies that `app.main.custom_generate_unique_id` prefixes generated route ids.
This matters because route ids feed generated OpenAPI operation names
and should stay stable across tagged and untagged routes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_main.py` around lines 86 - 88, The test docstring for the
route-id test is truncated; update the docstring in the test in
backend/tests/test_main.py to complete the sentence so it clearly states that
the test verifies app.main.custom_generate_unique_id prefixes generated route
ids (so generated OpenAPI operation names remain stable across tagged and
untagged routes). Locate the test that references
app.main.custom_generate_unique_id and replace the incomplete sentence with a
full, clear sentence describing that intent.
monkeypatch.setattr(app_main, "Settings", lambda: settings)
monkeypatch.setattr(app_main, "ConnectionPool", lambda dsn, *, open: pool)  # noqa: A006
🧹 Nitpick | 🔵 Trivial
Assert ConnectionPool constructor arguments in the failure-path test too.
This keeps contract coverage symmetric with the success-path test and guards regressions in startup wiring.
Suggested update
-monkeypatch.setattr(app_main, "ConnectionPool", lambda dsn, *, open: pool)  # noqa: A006
+def build_pool(dsn: str, *, open: bool) -> SimpleNamespace:  # noqa: A002
+    assert dsn == str(settings.DATABASE_DSN), (
+        f"Expected ConnectionPool DSN '{settings.DATABASE_DSN}', got '{dsn}'"
+    )
+    assert open is True, (
+        "Expected lifespan to construct ConnectionPool with open=True"
+    )
+    return pool
+
+monkeypatch.setattr(app_main, "ConnectionPool", build_pool)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_main.py` around lines 209 - 210, The failure-path test
currently monkeypatches app_main.ConnectionPool with a lambda but doesn't assert
what constructor args were passed; modify the patch to record the arguments and
assert them (mirroring the success-path test). Replace the lambda used in
monkeypatch.setattr(app_main, "ConnectionPool", ...) with a small spy/factory
that captures the dsn and open keyword (e.g., store into
captured_dsn/captured_open variables or a list) and returns the same pool
object, then add assertions that the captured dsn and open flag equal the
expected values; keep the existing monkeypatch of app_main.Settings unchanged.
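The spy-factory pattern the prompt describes can also record arguments into a dict and assert afterwards, which keeps the assertions in the test body rather than inside the factory. A minimal sketch; the DSN string and the `close` attribute on the fake pool are illustrative assumptions, not the project's real values:

```python
from types import SimpleNamespace

captured: dict = {}
fake_pool = SimpleNamespace(close=lambda: None)  # stand-in for the real pool object


def build_pool(dsn: str, *, open: bool) -> SimpleNamespace:  # noqa: A002
    # Record the constructor arguments so the test can assert on startup wiring.
    captured["dsn"] = dsn
    captured["open"] = open
    return fake_pool


# In the real test this factory would be installed with
# monkeypatch.setattr(app_main, "ConnectionPool", build_pool);
# here we call it directly to show what it records.
pool = build_pool("postgresql://localhost/permit", open=True)
```

After the app's lifespan runs, the test can assert `captured["dsn"]` and `captured["open"]` match the expected settings, keeping the failure-path coverage symmetric with the success path.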
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Refactor
Tests