
Feature/dev gateway auth wall #215

Closed
fdaei wants to merge 95 commits into main from feature/dev-gateway-auth-wall

Conversation

@fdaei
Collaborator

@fdaei fdaei commented Feb 15, 2026

Summary by CodeRabbit

  • New Features

    • Admin file upload for bulk imports with job creation, processing, worker pool, retries, job status and failed-records endpoints
    • Role-aware auth: contributor role returned in responses and used for permissions; /me endpoint supports token rotation and refresh
  • Bug Fixes / Improvements

    • File upload validation (size & MIME) via middleware; DB migrations and seeds for roles/permissions; build/infra config consistency updates
  • Documentation

    • Added authentication guideline and updated integration/MVP workflow docs

mzfarshad and others added 30 commits December 1, 2025 22:03
- Add Watermill publisher to historical fetcher
- Publish events to rankr_raw_events topic after DB save
- Create NATS publisher in fetch-historical command
- Add project gRPC adapter
- Add RedisLeaderboardRepository for public leaderboard storage
- Implement SetPublicLeaderboard and GetPublicLeaderboard
- Add scheduler cron job for public leaderboard update
- Add run-scheduler command for manual testing
- Add gRPC handler for GetPublicLeaderboard
- Add migration to change resource_id to TEXT
Update SaveHistoricalEventsBulk to return list of inserted events
alongside BulkInsertResult. The fetcher now only publishes events
that were actually inserted, not duplicates skipped by ON CONFLICT.
arash-mosavi and others added 13 commits February 13, 2026 18:22
When page parameter is 0 or negative, Redis ZREVRANGE would use
negative indices and return data from the end of the sorted set
instead of the beginning. This fix normalizes page < 1 to page = 1.
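The normalization described above can be sketched as follows (function names are illustrative, not the repository's actual code):

```go
package main

import "fmt"

// normalizePage clamps page to a minimum of 1 so the start/stop
// offsets passed to ZREVRANGE are never negative.
func normalizePage(page int) int {
	if page < 1 {
		return 1
	}
	return page
}

// zrevrangeBounds computes the inclusive start/stop offsets for a
// 1-based page of pageSize entries.
func zrevrangeBounds(page, pageSize int) (start, stop int) {
	page = normalizePage(page)
	start = (page - 1) * pageSize
	stop = start + pageSize - 1
	return start, stop
}

func main() {
	start, stop := zrevrangeBounds(0, 10) // page 0 is treated as page 1
	fmt.Println(start, stop)              // 0 9
}
```

Without the clamp, page 0 would yield start -10 and stop -1, which Redis interprets as offsets from the end of the sorted set.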
- Fix pgx.ErrNoRows comparison (was using sql.ErrNoRows incorrectly)
- Fix context key mismatch in handler (Authorization -> userInfo)
- Fix migration numbering (duplicate 000003, 000004)
- Simplify role migration syntax for gorp compatibility
- Add missing production config sections (middleware, validate, job, broker, worker)
…b.com:gocasters/rankr into feature/integrate-auth-and-contributor-import
…b.com:gocasters/rankr into feature/integrate-auth-and-contributor-import
@coderabbitai
Contributor

coderabbitai bot commented Feb 15, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Consolidates RBAC to enum roles, adds role-aware token verification/rotation (me + refresh-token), extends contributor VerifyPassword to return role, and implements a Redis-backed job pipeline (broker, worker pool, job service) with upload endpoints, file validation, migrations, config and deployment updates. (50 words)

Changes

Cohort / File(s) / Summary

  • Auth RBAC & Token
    Files: authapp/repository/migrations/001_rbac.sql, authapp/repository/migrations/002_seed_roles_permissions.sql, authapp/repository/role_repository.go, authapp/service/auth/*, authapp/delivery/http/handler.go, authapp/delivery/http/server.go, authapp/service/tokenservice/*.go, pkg/role/*, pkg/echo_middleware/require_auth.go
    Summary: Replace role CRUD with enum-based RBAC and permission-list lookup; add access list to token claims; replace verifyToken with me + refresh-token endpoints; enforce permission checks; remove DEV API-key bypass; add role parsing utilities.

  • Contributor VerifyPassword / Proto / Client
    Files: adapter/contributor/client.go, protobuf/contributor/v1/contributor.proto, contributorapp/delivery/grpc/handler.go, contributorapp/service/contributor/*, contributorapp/repository/contributor.go
    Summary: Extend VerifyPassword to return role (proto, adapter, gRPC handler, service, repo); add Contributor.Role field and persist/scan role; update responses and validators.

  • Job Domain: Service, Repo, Entities, Validator, Processor
    Files: contributorapp/service/job/*, contributorapp/repository/job.go, contributorapp/repository/fail_records.go, contributorapp/repository/migrations/000003_*.sql, 000004_*.sql, contributorapp/service/job/validator.go
    Summary: Add job domain: Job/ProduceJob/ContributorRecord/FailRecord types; CSV/XLSX processors; job service with CreateImportJob/ProcessJob/JobStatus/GetFailRecords; DB repos, fail_records, and migrations; request validation.

  • Broker & Worker Pool
    Files: contributorapp/repository/broker.go, contributorapp/worker/*
    Summary: New Redis Streams broker (publish/consume/DLQ/requeue/retry) and worker Pool coordinating consumption, acking, failure handling and dispatch to processors.

  • HTTP Upload, Middleware & Server Routing
    Files: contributorapp/delivery/http/handler.go, contributorapp/delivery/http/middleware/middleware.go, contributorapp/delivery/http/server.go
    Summary: Add admin-gated upload endpoint, job status and fail-record endpoints, file size/type middleware, handler wiring for JobService and AuthClient, and server route integration.

  • Contributor Upsert & Validation
    Files: contributorapp/service/contributor/service.go, contributorapp/service/contributor/validator.go, contributorapp/service/contributor/param.go, contributorapp/service/contributor/message.go
    Summary: Add Upsert flow and types, privacy-mode validation, NotFound sentinel errors, and include Role in VerifyPassword responses.

  • Migrations & Seeds
    Files: authapp/repository/migrations/*, contributorapp/repository/migrations/*
    Summary: Add RBAC enum and permission seeds; add jobs and fail_records tables; add contributors.role enum and seed admin contributor.

  • App Wiring & Config
    Files: contributorapp/app.go, contributorapp/config.go, contributorapp/client/auth.go
    Summary: Wire worker pool consumer into Application, add broker/worker/job/auth client configs, initialize components and support degraded contributor client.

  • Build & Deploy
    Files: .air/*, many deploy/*/Dockerfile, docker-compose and env files, go.mod, .gitignore
    Summary: Add go mod tidy / GOFLAGS=-mod=mod build flags and copy module cache into images, update go.mod (excelize + indirects), add .gitignore rule, and remove many host port mappings.

  • OpenResty / Gateway & Docs
    Files: deploy/infrastructure/openresty/development/conf.d/*, doc/auth-guideline.md, doc/mvp-guideline.md
    Summary: Gateways call auth /v1/me (forward X-Original-*), propagate X-User-ID/X-Role/X-User-Info/(X-Access), add many service Nginx blocks, and new auth documentation.

  • Misc / Utilities & Tests
    Files: leaderboardstatapp/repository/redis-leaderboardstat.go, pkg/migrator/migrator.go, authapp/delivery/http/handler_test.go, pkg/role/access_test.go, pkg/authhttp/helpers.go
    Summary: Pagination normalization, minor import cleanup, token helper tests, role access tests, and new auth HTTP helper functions (bearer/refresh token extraction, user-info encoding).

Sequence Diagram(s)

sequenceDiagram
    rect rgba(200,200,255,0.5)
    participant Client as Client
    participant Gateway as OpenResty
    participant Auth as Auth Service
    participant Service as Upstream Service
    end
    Client->>Gateway: Request (Authorization: Bearer <token>)
    Gateway->>Auth: GET /v1/me (forward Authorization + X-Original-URI/Method/Host)
    Auth->>Auth: Verify token, check role, compute access list, possibly rotate tokens
    alt Unauthorized or permission missing
        Auth-->>Gateway: 401/403
        Gateway-->>Client: 401/403
    else Authorized
        Auth-->>Gateway: 200 + headers (X-User-ID, X-Role, X-User-Info, X-Access)
        Gateway->>Service: Forward request + user headers
        Service-->>Client: Response
    end
sequenceDiagram
    rect rgba(200,255,200,0.5)
    participant Uploader as Uploader
    participant App as Contributor App Handler
    participant JobSvc as Job Service
    participant DB as Postgres
    participant Broker as Redis Broker
    participant Pool as Worker Pool
    participant Proc as CSV/XLSX Processor
    participant Contr as Contributor Adapter
    end
    Uploader->>App: POST /jobs/upload (file + token)
    App->>Auth: Verify role (via AuthClient)
    App->>App: Validate file (middleware)
    App->>JobSvc: CreateImportJob(file)
    JobSvc->>DB: INSERT job (status=pending) -> job_id
    JobSvc->>Broker: Publish(job_id)
    JobSvc-->>App: 202 Accepted (job_id)
    Pool->>Broker: Consume()
    Broker-->>Pool: Message(job_id)
    Pool->>JobSvc: ProcessJob(job_id)
    JobSvc->>DB: Update status -> processing
    JobSvc->>Proc: Process(file)
    Proc-->>JobSvc: ProcessResult (success/fail records)
    loop per fail record
        JobSvc->>DB: Create fail_records
    end
    loop per success record
        JobSvc->>Contr: UpsertContributor(record)
    end
    JobSvc->>DB: Update job counts/status
    Pool->>Broker: Ack(message_id)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

  • login #198: Overlapping changes to VerifyPassword signature and contributor auth flows (proto, client, service).
  • add nginx for auth #124: Related OpenResty/Nginx gateway auth_request integration and config updates.
  • Feature/auth #95: Related auth HTTP verification/token handling changes and helpers.

Suggested labels

done

Suggested reviewers

  • gocastsian

Poem

🐇 I twitched my nose and scanned the code,
Jobs hop in queues where Redis flowed.
Roles compacted, tokens gently spun,
Files checked and workers run.
Hooray — imports land, and processors have fun!

🚥 Pre-merge checks: 2 passed, 2 failed

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive: The title 'Feature/dev gateway auth wall' is vague and uses a generic 'Feature/' prefix. It does not clearly convey the main technical change, which spans authentication refactoring, role-based access control, API gateway integration, job-processing infrastructure, and multiple service updates. Resolution: revise the title to name the primary change, e.g. 'Implement role-based access control and API gateway authentication', avoiding generic prefixes.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Merge Conflict Detection — ✅ Passed: no merge conflicts detected when merging into main.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
deploy/notification/development/Dockerfile (1)

1-14: ⚠️ Potential issue | 🟡 Minor

Missing GOPROXY env — inconsistent with other development service Dockerfiles.

All other development Dockerfiles in this repo (auth, contributor, leaderboardscoring, leaderboardstat, project, realtime, task, userprofile, webhook) set ENV GOPROXY=https://mirrors.aliyun.com/goproxy,direct, but this one doesn't. The go mod tidy on Line 12 and build on Line 14 will fall back to the default proxy, which may fail or be slow in restricted network environments.

Proposed fix
 WORKDIR /app
 
+ENV GOPROXY=https://mirrors.aliyun.com/goproxy,direct
+
 COPY go.mod go.sum ./
cmd/contributor/command/migrate.go (1)

70-79: ⚠️ Potential issue | 🟠 Major

Migration errors are silently swallowed — process exits 0 on failure.

Two issues:

  1. Line 72: fmt.Printf without a newline, and no os.Exit(1) or log.Fatalf — a failed Up migration will appear to succeed to CI/CD pipelines.
  2. Line 75: mgr.Down() return value (if any error) is completely ignored, matching the same pattern.

This should use log.Fatalf (consistent with the rest of this file) or at minimum exit non-zero on failure.

Proposed fix
 	if up {
-		if err := mgr.Up(); err != nil {
-			fmt.Printf("failed to migrate up: %v", err)
-		}
+		if err := mgr.Up(); err != nil {
+			log.Fatalf("failed to migrate up: %v", err)
+		}
 	} else if down {
-		mgr.Down()
+		if err := mgr.Down(); err != nil {
+			log.Fatalf("failed to migrate down: %v", err)
+		}
 	} else {
contributorapp/app.go (1)

165-188: ⚠️ Potential issue | 🟠 Major

shutdownServers ignores the passed context's deadline.

Line 169 creates a new context.Background() for the shutdown goroutines, discarding the timeout context passed as the ctx parameter. The actual shutdown operations (shutdownHTTPServer, shutdownGRPCServer) run without a deadline — the timeout only governs the select statement. If the HTTP/gRPC shutdown hangs, the goroutine leaks even after the function returns false.

Pass the received ctx (which already carries the timeout) to the shutdown goroutines instead:

Proposed fix
 func (app Application) shutdownServers(ctx context.Context) bool {
 	app.Logger.Info("Starting server shutdown process...")
 	shutdownDone := make(chan struct{})

-	parentCtx := context.Background()
 	go func() {
 		var shutdownWg sync.WaitGroup
 		shutdownWg.Add(2)
-		go app.shutdownHTTPServer(parentCtx, &shutdownWg)
+		go app.shutdownHTTPServer(ctx, &shutdownWg)
 		go app.shutdownGRPCServer(&shutdownWg)

 		shutdownWg.Wait()
 		close(shutdownDone)

Based on learnings: "In the taskapp/app.go shutdown flow, the shutdownServers method receives a context with timeout (TotalShutdownTimeout) from the Start method, but the current implementation ignores this by creating context.Background() instead of using the passed context parameter."

authapp/delivery/http/handler.go (1)

109-117: ⚠️ Potential issue | 🟡 Minor

handleError fallback exposes raw error messages to clients.

Line 116 returns err.Error() directly in the response body for untyped errors. This could leak internal implementation details (e.g., database errors, connection strings). Consider returning a generic message for the fallback case.

Proposed fix
-	return c.JSON(http.StatusInternalServerError, echo.Map{"error": err.Error()})
+	return c.JSON(http.StatusInternalServerError, echo.Map{"error": "internal server error"})
contributorapp/delivery/http/server.go (1)

46-53: ⚠️ Potential issue | 🟡 Minor

Redundant RequireAuth middleware on the upload route.

The v1 group (line 46) already applies echomiddleware.RequireAuth to every route in the group. Line 51 adds it again as route-level middleware on /jobs/upload, causing the auth check to execute twice for every upload request.

Suggested fix
-	v1.POST("/jobs/upload", s.Handler.uploadFile, echomiddleware.RequireAuth, s.Middleware.CheckFile)
+	v1.POST("/jobs/upload", s.Handler.uploadFile, s.Middleware.CheckFile)
🤖 Fix all issues with AI agents
In `@contributorapp/app.go`:
- Around line 119-125: The shutdownServers function ignores the timeout-bound
context by creating parentCtx := context.Background(); change this to use the
passed ctx (parentCtx := ctx) so the shutdownTimeoutCtx created earlier (with
TotalShutdownTimeout) is respected; locate shutdownServers, the parentCtx
variable and any places where server shutdown contexts are derived (e.g.,
shutdownTimeoutCtx) and ensure they use the passed ctx to enforce the graceful
shutdown timeout.

In `@contributorapp/repository/broker.go`:
- Around line 118-146: Malformed messages in the loop over stream.Messages
(where you build Message structs from m.Values and append to msg) are currently
skipped with continue and never acknowledged; update the error branches for
missing "job_id", missing "retry", and retry parse errors to acknowledge the
message via the Redis consumer group XAck call (use the same stream and consumer
identifiers you use elsewhere in this file) before continuing, and optionally
route the bad payload to your DLQ handler or log with the message ID; reference
the loop over stream.Messages, the message variable m (m.ID and m.Values), and
the Message struct so you add XAck calls in those three error paths to prevent
infinite redelivery.

In `@contributorapp/service/job/process.go`:
- Around line 159-169: The map lookup for column indices (colIdx) is using
direct indexing so missing keys yield Go's zero value (0) and safeGet(r.data, 0)
returns the first column instead of empty; update the code that builds
ContributorRecord (where fields like DisplayName, ProfileImage, Bio,
PrivacyMode, Email are fetched) to first check the map with the comma-ok idiom
(idx, ok := colIdx[strings.ToLower(DisplayName.String())]) and only call safeGet
when ok is true, otherwise use an empty string; alternatively add a small helper
(e.g., getOptional(colIdx, key, data) or safeGetOptional) that performs the
comma-ok check and returns "" when the column key is missing, then replace the
direct colIdx[...] usages in the ContributorRecord construction with that
helper.

In `@contributorapp/service/job/service.go`:
- Around line 175-185: The publish-failure path sets the job status to
PendingToQueue (via s.jobRepo.UpdateStatus after s.publisher.Publish fails) but
ProcessJob only accepts jobs with status Pending, causing retries to fail;
update ProcessJob to treat PendingToQueue as a valid processable status (i.e.,
accept both Pending and PendingToQueue in the status check) or normalize
PendingToQueue to Pending before validation so retry/scheduler flows can
proceed; adjust any status-check branches in ProcessJob and unit tests
referencing Pending to include PendingToQueue handling.
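The widened status check could look like this (status names follow the comment above; the surrounding types are illustrative):

```go
package main

import "fmt"

type JobStatus string

const (
	StatusPending        JobStatus = "pending"
	StatusPendingToQueue JobStatus = "pending_to_queue"
	StatusProcessing     JobStatus = "processing"
)

// processable reports whether ProcessJob should accept a job in the
// given status: PendingToQueue (set after a failed publish) must be
// retryable alongside Pending.
func processable(s JobStatus) bool {
	switch s {
	case StatusPending, StatusPendingToQueue:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(processable(StatusPendingToQueue)) // true
	fmt.Println(processable(StatusProcessing))     // false
}
```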

In `@contributorapp/worker/consumer.go`:
- Around line 83-92: When CheckMaxRetry(ctx, m) returns an error you must not
simply continue — either acknowledge the message or move it to a
dead-letter/poison queue; update the loop in consumer.go where
p.consumer.CheckMaxRetry is called so that on error you call the failure
handling path (e.g., p.consumer.HandleFailure(ctx, m, err) or the equivalent
failure handler) or explicitly acknowledge/commit the message (e.g.,
m.Ack()/p.consumer.Ack(ctx, m)) instead of skipping it, and keep the existing
log; this prevents the message from being re-consumed forever.

In `@deploy/auth/production/docker-compose.yml`:
- Line 8: The docker-compose port mapping "18081:8080" is incorrect for the auth
service; update the service's port mapping from "18081:8080" to "18081:8081" so
the external port 18081 maps to the container/app port 8081 (matching the
config.yml setting port: 8081).

In `@deploy/infrastructure/openresty/development/conf.d/webhook.conf`:
- Around line 38-52: The current catch-all location block (location /) enforces
auth_request /auth_verify, which will reject GitHub webhook deliveries; add a
separate unauthenticated location for the webhook path (e.g., location
/github-webhook/ { ... }) placed before the existing location / block that omits
auth_request /auth_verify and proxies to the same upstream ($webhook_upstream);
include the same necessary proxy_set_header lines (Host, X-Real-IP,
X-Forwarded-For, X-Forwarded-Proto) and ensure the X-Hub-Signature-256 header is
preserved/forwarded so GitHub can authenticate the payload, leaving other
internal endpoints under location / protected by auth_request /auth_verify.
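A sketch of such an unauthenticated webhook location, following the comment above (the path and $webhook_upstream variable are taken from it; treat this as illustrative, not the final config):

```nginx
# Placed BEFORE the protected `location /` block.
# No auth_request here: GitHub authenticates the payload via
# X-Hub-Signature-256, which the webhook service itself verifies.
location /github-webhook/ {
    proxy_pass http://$webhook_upstream;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    # X-Hub-Signature-256 is forwarded by default; nginx only drops
    # hop-by-hop headers, so no extra directive is needed for it.
}
```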

In `@deploy/webhook/production/.env.production`:
- Around line 1-8: This file commits hardcoded production secrets (e.g.,
WEBHOOK_POSTGRES_DB__PASSWORD, WEBHOOK_POSTGRES_DB__USER,
WEBHOOK_POSTGRES_DB__DB_NAME, WEBHOOK_REDIS__HOST) — remove the committed
.env.production from version control, add a rule like deploy/**/.env.production
to .gitignore, replace the committed file with a .env.production.example
containing placeholder values for the keys shown (WEBHOOK_POSTGRES_DB__HOST,
WEBHOOK_POSTGRES_DB__PORT, WEBHOOK_POSTGRES_DB__USER,
WEBHOOK_POSTGRES_DB__PASSWORD, WEBHOOK_POSTGRES_DB__DB_NAME,
WEBHOOK_REDIS__HOST, WEBHOOK_REDIS__PORT, WEBHOOK_NATS__URL), and update
deployment/CI to inject real secrets at runtime via Kubernetes Secrets, Vault,
or cloud secret manager instead of storing them in this file.
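The placeholder file described above could look like this (values are illustrative defaults, not real credentials):

```env
# deploy/webhook/production/.env.production.example
# Placeholders only — real values are injected at deploy time
# (Kubernetes Secrets, Vault, or a cloud secret manager).
WEBHOOK_POSTGRES_DB__HOST=changeme
WEBHOOK_POSTGRES_DB__PORT=5432
WEBHOOK_POSTGRES_DB__USER=changeme
WEBHOOK_POSTGRES_DB__PASSWORD=changeme
WEBHOOK_POSTGRES_DB__DB_NAME=changeme
WEBHOOK_REDIS__HOST=changeme
WEBHOOK_REDIS__PORT=6379
WEBHOOK_NATS__URL=changeme
```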
🟠 Major comments (20)
deploy/leaderboardscoring/production/.env.production-1-5 (1)

1-5: ⚠️ Potential issue | 🟠 Major

Do not commit plaintext DB credentials.

These values include a database password in a production env file, which risks secret exposure and credential reuse. Store secrets in a secrets manager or CI/CD-injected runtime env vars, and keep only non-sensitive defaults in repo.

deploy/infrastructure/openresty/development/conf.d/realtime.conf-38-52 (1)

38-52: ⚠️ Potential issue | 🟠 Major

Missing WebSocket upgrade headers breaks realtime service.

The realtime service handles WebSocket connections via /v1/ws (using gorilla/websocket), but the Nginx configuration lacks the required headers to proxy WebSocket frames. Without proxy_http_version 1.1, Upgrade, and Connection headers, Nginx will not upgrade the HTTP connection, causing all WebSocket clients to fail.

Proposed fix
     location / {
         auth_request /auth_verify;
         auth_request_set $auth_user_id $upstream_http_x_user_id;
         auth_request_set $auth_role $upstream_http_x_role;
         auth_request_set $auth_user_info $upstream_http_x_user_info;

         proxy_set_header X-User-ID $auth_user_id;
         proxy_set_header X-Role $auth_role;
         proxy_set_header X-User-Info $auth_user_info;
         proxy_pass http://$realtime_upstream;
         proxy_set_header Host $host;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection "upgrade";
     }
deploy/leaderboardstat/production/.env.production-1-5 (1)

1-5: ⚠️ Potential issue | 🟠 Major

Plaintext credentials in production config committed to version control.

Committing passwords (even placeholder ones) in a file under deploy/*/production/ risks them reaching actual production. Consider using a secrets manager, CI/CD-injected env vars, or a .env.production.example with dummy values while keeping real .env.production in .gitignore.

deploy/contributor/production/.env.production-1-8 (1)

1-8: ⚠️ Potential issue | 🟠 Major

Production credentials should not be committed to version control.

Hardcoded database passwords (contributor_pass) and connection details in a production .env file checked into the repo is a security risk. Even if these are placeholder values, they tend to leak into actual deployments. Use a secrets manager (e.g., Vault, AWS Secrets Manager, Docker secrets) or inject these via CI/CD at deploy time, and add this file to .gitignore.

deploy/task/production/.env.production-1-5 (1)

1-5: ⚠️ Potential issue | 🟠 Major

Same concern: production credentials committed to the repository.

task_pass is hardcoded on Line 4. Same recommendation as the contributor production env file — keep production secrets out of version control and use a secrets management solution.

contributorapp/client/auth.go-36-37 (1)

36-37: ⚠️ Potential issue | 🟠 Major

Unsanitized id is interpolated into the URL path.

If id contains slashes, query strings, or other special characters (e.g., from user input or a compromised upstream), fmt.Sprintf("%s/%s", ...) can produce an unintended URL. Use url.JoinPath or url.PathEscape(id) to safely construct the path.

Proposed fix
+	"net/url"
 ...
 func (a AuthClient) GetRole(ctx context.Context, id string) (GetRoleResponse, error) {
-	url := fmt.Sprintf("%s/%s", a.config.Url, id)
+	reqURL, err := url.JoinPath(a.config.Url, id)
+	if err != nil {
+		return GetRoleResponse{}, fmt.Errorf("invalid auth URL: %w", err)
+	}
 
-	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
+	req, err := http.NewRequestWithContext(ctx, http.MethodGet, reqURL, nil)
contributorapp/delivery/http/middleware/middleware.go-42-51 (1)

42-51: ⚠️ Potential issue | 🟠 Major

Fix middleware validation to handle MIME type charset suffixes for CSV uploads.

http.DetectContentType returns "text/plain; charset=utf-8" for CSV files (not "text/csv"), but the middleware's validFileType function at lines 79-82 uses exact string matching (v == ext). Since the config includes "text/plain" without the charset suffix, CSV uploads will be rejected. Update validFileType to handle charset like the service-level validator does (lines 62-65 in contributorapp/service/job/validator.go), or check the file extension from fileHeader.Filename instead.
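The charset handling can be done with the standard library's mime.ParseMediaType, which splits a media type from its parameters (a sketch; the helper name is illustrative):

```go
package main

import (
	"fmt"
	"mime"
)

// baseMIME strips any parameters (e.g. "; charset=utf-8") so the
// detected type can be compared against the configured allow-list.
func baseMIME(detected string) string {
	mediaType, _, err := mime.ParseMediaType(detected)
	if err != nil {
		return detected // fall back to exact matching on parse failure
	}
	return mediaType
}

func main() {
	// http.DetectContentType reports CSV content as text/plain + charset.
	fmt.Println(baseMIME("text/plain; charset=utf-8")) // text/plain
	fmt.Println(baseMIME("text/csv"))                  // text/csv
}
```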

contributorapp/repository/migrations/000006_seed_admin_contributor.sql-1-22 (1)

1-22: ⚠️ Potential issue | 🟠 Major

Hardcoded admin password hash in a migration is a security risk.

This seeds an admin account with a bcrypt hash committed to version control. Anyone with repo access can attempt an offline brute-force attack against this hash. If this migration runs in production, the password is effectively public.

Recommendations:

  1. Seed the admin account without a password in the migration, and set credentials through a secure out-of-band process (e.g., CLI tool, env-var-driven script).
  2. If this is strictly for local development, gate it so it cannot run in production (e.g., separate seed script, not a numbered migration).
  3. The DOWN migration deletes the entire contributor row — this could cascade-delete associated data if foreign keys exist. Consider only resetting the role instead.
pkg/role/access.go-46-48 (1)

46-48: ⚠️ Potential issue | 🟠 Major

strings.Contains for health-check paths is overly permissive — potential auth bypass.

Any request whose path contains "health-check" or "health_check" anywhere (e.g., /v1/admin/health-check-secret, /v1/contributors?q=health_check) will be treated as public and skip permission checks. Use exact-match or prefix-match on a well-known segment instead.

Proposed fix — match on specific path prefixes
-	if strings.Contains(path, "health-check") || strings.Contains(path, "health_check") {
-		return true
-	}
+	if path == "/health-check" || path == "/health_check" ||
+		strings.HasPrefix(path, "/health-check/") || strings.HasPrefix(path, "/health_check/") ||
+		path == "/v1/health-check" || path == "/v1/health_check" {
+		return true
+	}
authapp/delivery/http/handler.go-76-79 (1)

76-79: ⚠️ Potential issue | 🟠 Major

Unknown or unmapped paths silently bypass authorization (allow-by-default).

When RequiredPermission returns "" (because the module or operation couldn't be determined from the path/host), HasPermission treats an empty permission as "no check needed" and returns true. This means any request to an unmapped or unexpected path passes authorization.

A deny-by-default approach is safer for a gateway auth wall: if the system can't determine the required permission, it should reject rather than allow.

#!/bin/bash
# Inspect RequiredPermission and isPublicPath to understand which paths are explicitly allowed
rg -n -A5 'func RequiredPermission|func isPublicPath|func moduleFromHost|func moduleFromPath|func operationFromMethod' --type go
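The deny-by-default behavior argued for above can be sketched as follows (names are illustrative, not the repository's actual API):

```go
package main

import "fmt"

// hasPermission illustrates deny-by-default: an empty required
// permission (i.e. an unmapped path) is rejected rather than treated
// as "no check needed".
func hasPermission(required string, granted []string) bool {
	if required == "" {
		return false // unmapped path: deny rather than skip the check
	}
	for _, g := range granted {
		if g == required {
			return true
		}
	}
	return false
}

func main() {
	granted := []string{"contributor:read"}
	fmt.Println(hasPermission("", granted))                 // false: unknown path is denied
	fmt.Println(hasPermission("contributor:read", granted)) // true
}
```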
contributorapp/service/job/param.go-15-18 (1)

15-18: ⚠️ Potential issue | 🟠 Major

ID fields use uint instead of uint64 / types.ID — potential truncation on 32-bit platforms.

The domain ID type is uint64 (see type/id.go), but ImportContributorResponse.JobID, GetJobResponse.ID, and GetFailRecordsResponse.ID/JobID are all uint. In Go, uint is platform-dependent (32-bit on 32-bit architectures), which could silently truncate IDs. Use uint64 or types.ID for consistency with the rest of the codebase.

Also applies to: 20-21, 34-36

contributorapp/worker/process.go-9-10 (1)

9-10: ⚠️ Potential issue | 🟠 Major

uint truncation risk propagated through the Processor interface.

strconv.ParseUint(jobID, 10, 64) correctly parses a 64-bit value, but the cast uint(uID) on line 31 and the Processor interface parameter type (jobID uint) will silently truncate on 32-bit platforms. Consider using uint64 consistently across the worker and Processor interface, aligned with types.ID (uint64).

Also applies to: 26-31
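The fix amounts to keeping the parsed value as uint64 end to end (a sketch with an illustrative helper name):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseJobID keeps the full 64-bit range: ParseUint already yields a
// uint64, so no narrowing cast to uint is needed. On a 32-bit
// platform, uint(x) would silently truncate IDs above 2^32-1.
func parseJobID(raw string) (uint64, error) {
	id, err := strconv.ParseUint(raw, 10, 64)
	if err != nil {
		return 0, fmt.Errorf("invalid job id %q: %w", raw, err)
	}
	return id, nil
}

func main() {
	id, _ := parseJobID("5000000000") // larger than a 32-bit uint can hold
	fmt.Println(id)                   // 5000000000
}
```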

contributorapp/service/job/param.go-28-28 (1)

28-28: ⚠️ Potential issue | 🟠 Major

Exposing FilePath in API response may leak internal server paths.

FilePath exposes the server-side storage path (e.g., ./uploads/import_contributors/...) to API consumers. This is an information disclosure risk that can aid attackers in path-traversal or enumeration attacks. Consider omitting it from the response or replacing it with a sanitized/opaque reference.

contributorapp/service/job/adapter.go-23-26 (1)

23-26: 🛠️ Refactor suggestion | 🟠 Major

Use errors.As instead of a direct type assertion to handle wrapped errors.

If the Upsert implementation (or anything in its call chain) wraps a validator.Error with fmt.Errorf("...: %w", vErr), the type assertion err.(validator.Error) will fail and the error will be misclassified as ErrTypeUnexpected. errors.As unwraps the error chain.

Suggested fix
+import "errors"
+
 func (c ContributorAdapter) UpsertContributor(ctx context.Context, req ContributorRecord) error {
 	_, err := c.contributorSvc.Upsert(ctx, req.mapContributorRecordToUpsertRequest())
 	if err != nil {
-		if vErr, ok := err.(validator.Error); ok {
+		var vErr validator.Error
+		if errors.As(err, &vErr) {
 			return RecordProcessError{Err: vErr, Type: ErrTypeValidation}
 		}
deploy/contributor/development/config.yml-150-150 (1)

150-150: ⚠️ Potential issue | 🟠 Major

YAML key path_of_migrations does not match koanf tag path_of_migration at the top level.

In contributorapp/config.go line 26, the struct field uses koanf:"path_of_migration" (singular), but line 150 in this YAML file specifies path_of_migrations (plural). Koanf will not bind this value, leaving cfg.PathOfMigration empty.

This does not cause a runtime failure because the application correctly reads from the nested postgres_db.path_of_migrations (line 109, which matches pkg/database/config.go). However, it breaks the deprecation handling logic: the fallback to the top-level field and its associated warning message will never execute.

Change line 150 from path_of_migrations: to path_of_migration: to fix the key name.

contributorapp/service/contributor/validator.go-42-49 (1)

42-49: 🛠️ Refactor suggestion | 🟠 Major

Redundant PrivacyMode validation: the When block is unreachable.

Lines 44-45 make PrivacyMode required and validate it's in {real, anonymous}. The When(req.PrivacyMode != "", ...) block on lines 46-48 can never add value because Required already rejects empty values, and the In on line 45 already covers non-empty values.

The same duplication exists in ValidateUpdateProfileRequest (lines 65-71).

Proposed fix for ValidateCreateContributorRequest
 		validation.Field(
 			&req.PrivacyMode,
 			validation.Required.Error(ErrValidationRequired),
 			validation.In(PrivacyModeReal, PrivacyModeAnonymous).Error(ErrValidationEnumPrivacy),
-			validation.When(req.PrivacyMode != "",
-				validation.In(PrivacyModeReal, PrivacyModeAnonymous).Error(ErrValidationEnumPrivacy),
-			),
 		),

Apply the same to ValidateUpdateProfileRequest.

contributorapp/service/job/process.go-79-80 (1)

79-80: ⚠️ Potential issue | 🟠 Major

Silently ignoring Columns() error can corrupt the header.

If rowsIter.Columns() returns an error on the first row, header will be nil/empty, and processRowsWithHeader will flag every required column as missing — a confusing error message when the real cause is a read failure. For subsequent rows, corrupted data could silently slip through.

Proposed fix
-		row, _ := rowsIter.Columns()
+		row, err := rowsIter.Columns()
+		if err != nil {
+			return ProcessResult{}, fmt.Errorf("failed to read row columns: %w", err)
+		}
contributorapp/delivery/http/handler.go-193-212 (1)

193-212: ⚠️ Potential issue | 🟠 Major

getJobStatus and getFailRecords lack role-based authorization and job ownership validation.

These endpoints are protected by the group-level RequireAuth middleware, so they require authentication. However, they lack the admin role check that uploadFile enforces, and neither endpoint validates that the authenticated user owns or is authorized to view the requested job. Any authenticated user can query job status or fail records for any job ID, potentially exposing sensitive contributor data from uploads they did not create.

contributorapp/service/job/service.go-311-323 (1)

311-323: ⚠️ Potential issue | 🟠 Major

FilePath is exposed in GetJobResponse, leaking internal server paths.

Returning the absolute filesystem path (e.g., /var/data/uploads/1234_file.csv) to API consumers reveals server directory structure, which is an information disclosure concern.

♻️ Return only the filename or a download URL instead
 	return GetJobResponse{
 		ID:             job.ID,
 		SuccessCount:   job.SuccessCount,
 		TotalRecords:   job.TotalRecords,
 		FailCount:      job.FailCount,
-		FilePath:       job.FilePath,
 		FileName:       job.FileName,
 		FileHash:       job.FileHash,
 		IdempotencyKey: job.IdempotencyKey,
contributorapp/service/job/service.go-249-270 (1)

249-270: ⚠️ Potential issue | 🟠 Major

Non-RecordProcessError upsert failures are counted but never persisted to failRepo.

When UpsertContributor returns an error that does not satisfy RecordProcessError (Line 251 type assertion fails), the loop increments failCount (Line 266) but skips the failRepo.Create call entirely. This means the failure is silently lost — no raw data, no reason, no error type recorded — making it impossible to debug or retry these records.

🐛 Proposed fix — record all upsert failures
 	for _, record := range pResult.SuccessRecords {
 		if err := s.contributorAdapter.UpsertContributor(ctx, record); err != nil {
-			if aErr, ok := err.(RecordProcessError); ok {
-				cErr := s.failRepo.Create(ctx, FailRecord{
-					JobID:        jobID,
-					RecordNumber: record.RowNumber,
-					Reason:       err.Error(),
-					RawData:      record.mapToSlice(),
-					RetryCount:   0,
-					LastError:    aErr.Err.Error(),
-					ErrType:      aErr.Type,
-				})
-				if cErr != nil {
-					logger.L().Error("failed to insert fail record", "row", record.RowNumber, "error", cErr.Error())
-				}
+			fr := FailRecord{
+				JobID:        jobID,
+				RecordNumber: record.RowNumber,
+				Reason:       err.Error(),
+				RawData:      record.mapToSlice(),
+				RetryCount:   0,
+				LastError:    err.Error(),
+			}
+			if aErr, ok := err.(RecordProcessError); ok {
+				fr.LastError = aErr.Err.Error()
+				fr.ErrType = aErr.Type
+			}
+			if cErr := s.failRepo.Create(ctx, fr); cErr != nil {
+				logger.L().Error("failed to insert fail record", "row", record.RowNumber, "error", cErr.Error())
 			}
 
 			failCount++
 			continue
 		}
🟡 Minor comments (21)
leaderboardstatapp/repository/redis-leaderboardstat.go-31-35 (1)

31-35: ⚠️ Potential issue | 🟡 Minor

Consider also validating pageSize.

The page guard is a good addition, but pageSize is left unchecked. If pageSize <= 0, then stop becomes negative — and in Redis, a negative stop index in ZREVRANGE means "count from the end," which would return the entire sorted set instead of an empty page.

Suggested fix
 if page < 1 {
     page = 1
 }
+if pageSize < 1 {
+    pageSize = 1
+}
 start := (page - 1) * pageSize
 stop := start + pageSize - 1

Alternatively, return early with an empty slice if pageSize is invalid, depending on the desired contract.
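A runnable sketch of the index arithmetic with both guards in place (zrevrangeBounds is a hypothetical helper, not code from this PR):

```go
package main

import "fmt"

// zrevrangeBounds computes the start/stop indices passed to ZREVRANGE,
// clamping page and pageSize so stop can never go negative. Without the
// pageSize guard, pageSize=0 would yield stop=-1, which Redis interprets
// as "the last element of the sorted set".
func zrevrangeBounds(page, pageSize int) (int, int) {
	if page < 1 {
		page = 1
	}
	if pageSize < 1 {
		pageSize = 1
	}
	start := (page - 1) * pageSize
	stop := start + pageSize - 1
	return start, stop
}

func main() {
	fmt.Println(zrevrangeBounds(0, 10)) // 0 9
	fmt.Println(zrevrangeBounds(2, 10)) // 10 19
	fmt.Println(zrevrangeBounds(1, 0))  // 0 0 (unclamped, stop would be -1)
}
```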

deploy/auth/production/docker-compose.yml-10-11 (1)

10-11: ⚠️ Potential issue | 🟡 Minor

Production config references a "development" network.

The file lives under deploy/auth/production/, sets ENV=production, but uses rankr-shared-development-network. If this config is ever used for an actual production deployment, the network name is misleading and could cause the service to join the wrong network. If all environments intentionally share this single Docker network for local development, consider renaming it to something environment-neutral (e.g., rankr-shared-network) to avoid confusion.

Also applies to: 18-21

deploy/infrastructure/openresty/development/conf.d/auth.conf-37-45 (1)

37-45: ⚠️ Potential issue | 🟡 Minor

Verify the /auth/ rewrite behavior for edge cases.

The rewrite ^/auth/?(.*)$ /$1 break; strips the /auth/ prefix — so /auth/v1/login becomes /v1/login. This is fine for typical use, but note that a request to exactly /auth/ will rewrite to / and be proxied to auth-app's root, and a request to /auth (no trailing slash) won't match this location block at all (it would fall through to the catch-all 404). Ensure clients always use the trailing-slash form or adjust the location directive.
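To illustrate, a small Go sketch of what the rewrite regex itself does (assuming the enclosing directive is location /auth/, so /auth never reaches this block even though the regex would match it):

```go
package main

import (
	"fmt"
	"regexp"
)

// Mirrors the nginx rule: rewrite ^/auth/?(.*)$ /$1 break;
var authRewrite = regexp.MustCompile(`^/auth/?(.*)$`)

// rewrite applies the prefix-stripping rewrite to a request path.
func rewrite(path string) string {
	return authRewrite.ReplaceAllString(path, "/$1")
}

func main() {
	fmt.Println(rewrite("/auth/v1/login")) // /v1/login
	fmt.Println(rewrite("/auth/"))         // / (proxied to auth-app's root)
	fmt.Println(rewrite("/auth"))          // / (but location /auth/ never matches this path)
}
```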

deploy/contributor/development/docker-compose.yml-19-21 (1)

19-21: ⚠️ Potential issue | 🟡 Minor

Lowercase env var prefix (contributor_) — same inconsistency flagged in the auth production env.

Consider standardizing to uppercase (CONTRIBUTOR_POSTGRES_DB_HOST, etc.) across all services.

deploy/auth/production/.env.production-1-5 (1)

1-5: ⚠️ Potential issue | 🟡 Minor

Inconsistent env var prefix casing across services.

This file uses a lowercase prefix (auth_POSTGRES_DB__*) while the leaderboardstat production env uses an uppercase prefix (STAT_POSTGRES_DB__*). Standardize on one convention (uppercase is the norm for environment variables) to avoid confusion and potential config-parsing mismatches.

Suggested fix
-auth_POSTGRES_DB__HOST=shared-postgres
-auth_POSTGRES_DB__PORT=5432
-auth_POSTGRES_DB__USER=auth_user
-auth_POSTGRES_DB__PASSWORD=auth_pass
-auth_POSTGRES_DB__DB_NAME=auth_db
+AUTH_POSTGRES_DB__HOST=shared-postgres
+AUTH_POSTGRES_DB__PORT=5432
+AUTH_POSTGRES_DB__USER=auth_user
+AUTH_POSTGRES_DB__PASSWORD=auth_pass
+AUTH_POSTGRES_DB__DB_NAME=auth_db
contributorapp/repository/migrations/000003_create_jobs_table.sql-6-6 (1)

6-6: ⚠️ Potential issue | 🟡 Minor

Redundant unique index on idempotency_key.

Line 6 declares UNIQUE on idempotency_key, which already causes PostgreSQL to create a unique index automatically. The explicit CREATE UNIQUE INDEX on Line 18 is redundant and results in a duplicate index consuming extra storage and write overhead.

Proposed fix

Either remove the explicit index:

-CREATE UNIQUE INDEX IF NOT EXISTS idx_jobs_idempotency_key ON jobs(idempotency_key);

Or remove the UNIQUE constraint from the column definition and keep only the named index (if you want a specific index name).

Also applies to: 18-18

authapp/repository/migrations/001_rbac.sql-21-21 (1)

21-21: ⚠️ Potential issue | 🟡 Minor

Index idx_role_permissions_role is redundant with the composite primary key.

The PRIMARY KEY (role, permission_id) on Line 21 already creates a B-tree index with role as the leading column, which satisfies lookups filtering by role alone. The additional index on Line 24 is a duplicate that doubles write overhead for no query benefit.

Proposed fix
-CREATE INDEX IF NOT EXISTS idx_role_permissions_role ON role_permissions (role);

And in the down migration:

-DROP INDEX IF EXISTS idx_role_permissions_role;

Also applies to: 24-24

contributorapp/repository/migrations/000004_create_fail_records_table.sql-11-12 (1)

11-12: ⚠️ Potential issue | 🟡 Minor

updated_at has no default, unlike created_at and the jobs table convention.

created_at defaults to NOW() and the jobs table sets both created_at and updated_at to NOW(). Here, updated_at will be NULL on insert, which creates an inconsistency. If the intent is "not yet updated", that's fine but should be deliberate; otherwise add DEFAULT NOW().

contributorapp/repository/migrations/000003_create_jobs_table.sql-11-13 (1)

11-13: ⚠️ Potential issue | 🟡 Minor

Count columns are nullable — arithmetic on NULLs may surprise.

total_records, success_count, and fail_count have a DEFAULT 0 but no NOT NULL constraint. An explicit UPDATE ... SET total_records = NULL (or an ORM zero-value mishap) would cause downstream arithmetic (success_count + fail_count) to return NULL. Add NOT NULL to match the intent.

Proposed fix
-    total_records BIGINT DEFAULT 0,
-    success_count BIGINT DEFAULT 0,
-    fail_count BIGINT DEFAULT 0,
+    total_records BIGINT NOT NULL DEFAULT 0,
+    success_count BIGINT NOT NULL DEFAULT 0,
+    fail_count BIGINT NOT NULL DEFAULT 0,
deploy/task/development/Dockerfile-24-25 (1)

24-25: ⚠️ Potential issue | 🟡 Minor

Air binary and module cache are copied but unused in CMD.

Air is installed and copied to the runtime image (line 23), and the module cache is copied (lines 24-25), but the CMD on line 30 runs the pre-built binary directly instead of invoking Air for hot-reload. Either remove these unnecessary copies or update CMD to use Air (e.g., CMD ["air", "-c", "/app/.air.toml"]). This pattern appears in most development Dockerfiles, but only deploy/leaderboardstat/development/Dockerfile and deploy/auth/development/Dockerfile actually invoke Air in their CMD.

contributorapp/service/job/validator.go-62-66 (1)

62-66: ⚠️ Potential issue | 🟡 Minor

MIME prefix-match logic has an inverted first condition.

strings.HasPrefix(m, mime) checks if the allowed type starts with the detected type. This is backwards — it would, for example, let a detected "text/" (hypothetically) match any "text/..." config entry. The second condition already correctly handles base-type prefix matching. The first condition should either be removed or flipped.

Proposed fix
 	for _, m := range v.config.HttpFileType {
-		if strings.HasPrefix(m, mime) || strings.HasPrefix(mime, strings.Split(m, ";")[0]) {
+		if m == mime || strings.HasPrefix(mime, strings.Split(m, ";")[0]) {
 			return nil
 		}
 	}
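A self-contained sketch of the corrected matching logic (allowedMIME is a stand-in for the validator method; the allowed list mimics entries from config.HttpFileType):

```go
package main

import (
	"fmt"
	"strings"
)

// allowedMIME reports whether the detected MIME type matches one of the
// configured types: either an exact match, or a match on the base type
// before any ";charset=..." parameters. Note there is no reversed
// HasPrefix(allowed, detected), so a short detected value like "text/"
// cannot match arbitrary config entries.
func allowedMIME(detected string, allowed []string) bool {
	for _, m := range allowed {
		if m == detected || strings.HasPrefix(detected, strings.Split(m, ";")[0]) {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"text/csv; charset=utf-8"}
	fmt.Println(allowedMIME("text/csv; charset=utf-8", allowed)) // true (exact)
	fmt.Println(allowedMIME("text/csv", allowed))                // true (base type)
	fmt.Println(allowedMIME("application/pdf", allowed))         // false
}
```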
contributorapp/service/job/param.go-21-21 (1)

21-21: ⚠️ Potential issue | 🟡 Minor

Inconsistent JSON tag casing: "ID" vs snake_case used elsewhere.

GetJobResponse.ID has JSON tag "ID" (PascalCase), while every other field in this file and other response structs uses snake_case. This will produce inconsistent API output.

Suggested fix
-	ID             uint      `json:"ID"`
+	ID             uint      `json:"id"`
contributorapp/service/job/param.go-8-13 (1)

8-13: ⚠️ Potential issue | 🟡 Minor

json:"file" tag on multipart.File is misleading and non-functional.

multipart.File is an interface (io.Reader + io.ReaderAt + io.Seeker) and cannot be marshaled/unmarshaled via encoding/json. While this struct is likely populated manually from a multipart form, the JSON tag is misleading and will silently produce null on marshal or fail on unmarshal.

Suggested fix
 type ImportContributorRequest struct {
-	File           multipart.File `json:"file"`
+	File           multipart.File `json:"-"`
 	FileName       string         `json:"file_name"`
 	FileType       string         `json:"file_type"`
 	IdempotencyKey string         `json:"idempotency_key"`
 }
contributorapp/service/contributor/param.go-63-63 (1)

63-63: ⚠️ Potential issue | 🟡 Minor

Typo: CreateAt should be CreatedAt (and JSON tag created_at).

Every other timestamp field in this file and the codebase uses CreatedAt / created_at (e.g., line 16 GetProfileResponse.CreatedAt, line 51 UpdateProfileResponse.CreatedAt). The inconsistency is also evident in the service layer, which maps this field to CreatedAt on internal structures (contributorapp/service/contributor/service.go lines 179, 203).

Suggested fix
-	CreateAt       time.Time   `json:"create_at"`
+	CreatedAt      time.Time   `json:"created_at"`
contributorapp/delivery/http/handler.go-136-142 (1)

136-142: ⚠️ Potential issue | 🟡 Minor

Missing/malformed file upload should return 400, not 500.

c.FormFile("file") fails when the client doesn't include a file field or sends a malformed multipart request — that's a client error, not a server error.

Proposed fix
 	fileHeader, err := c.FormFile("file")
 	if err != nil {
-		return c.JSON(http.StatusInternalServerError, map[string]string{
+		return c.JSON(http.StatusBadRequest, map[string]string{
 			"message": "fail to get file",
 			"error":   err.Error(),
 		})
 	}
contributorapp/delivery/http/handler.go-109-109 (1)

109-109: ⚠️ Potential issue | 🟡 Minor

roleName should be a const, not a var.

A mutable package-level variable for a security-critical policy value is risky — it can be accidentally reassigned at runtime. Since "admin" is a fixed value, declare it as a constant.

Proposed fix
-var roleName = "admin"
+const roleName = "admin"
contributorapp/service/contributor/service.go-152-215 (1)

152-215: ⚠️ Potential issue | 🟡 Minor

Remove CreatedAt assignment in the update path (line 203).

The Upsert method sets upRequest.CreatedAt = req.CreateAt when updating an existing contributor. While the database does not actually use this field (it's not included in the SQL UPDATE parameters), assigning CreatedAt is semantically incorrect. A record's creation timestamp should remain immutable after the initial insert. This causes confusion and makes the code's intent unclear to maintainers.

UpdatedAt is already properly handled by the database (automatically set to NOW() in the update query), so no changes are needed there.

Proposed fix: remove CreatedAt from the update path
 	upRequest.ID = c.ID
 	upRequest.GitHubID = req.GitHubID
 	upRequest.GitHubUsername = req.GitHubUsername
 	upRequest.DisplayName = req.DisplayName
 	upRequest.Bio = req.Bio
 	upRequest.PrivacyMode = req.PrivacyMode
 	upRequest.ProfileImage = req.ProfileImage
 	upRequest.Email = req.Email
-	upRequest.CreatedAt = req.CreateAt
contributorapp/service/job/service.go-280-285 (1)

280-285: ⚠️ Potential issue | 🟡 Minor

Empty file (zero records) is marked Failed.

If both successCount and failCount are 0 (e.g., an empty CSV), the default finalStatus is Failed. An empty file isn't really a failure — consider handling this edge case explicitly, perhaps as Success or a dedicated status.

contributorapp/service/job/service.go-86-99 (1)

86-99: ⚠️ Potential issue | 🟡 Minor

Idempotency-key lookup swallows errors when j != nil.

If GetJobByIdempotencyKey returns both a non-nil j and a non-nil err (e.g., a partial result plus a warning/error), Lines 87-88 short-circuit and the error is never inspected. While most repository implementations return one or the other, this ordering is fragile. Checking the error first is the safer Go convention.

♻️ Suggested reordering
 	j, err := s.jobRepo.GetJobByIdempotencyKey(ctx, req.IdempotencyKey)
-	if j != nil {
-		return ImportContributorResponse{JobID: j.ID, Message: "The file with this idempotency key exists"}, nil
-	}
-
-	if err != nil {
+	if err != nil {
 		if !errors.Is(err, ErrJobNotExists) {
 			return ImportContributorResponse{}, errmsg.ErrorResponse{
 				Message:         "failed to get job by idempotency key",
 				Errors:          map[string]interface{}{"error": err.Error()},
 				InternalErrCode: statuscode.IntCodeUnExpected,
 			}
 		}
 	}
+
+	if j != nil {
+		return ImportContributorResponse{JobID: j.ID, Message: "The file with this idempotency key exists"}, nil
+	}
contributorapp/service/job/service.go-132-136 (1)

132-136: ⚠️ Potential issue | 🟡 Minor

Inconsistent error handling — raw error returned from io.Copy.

Every other failure path in CreateImportJob wraps the error in errmsg.ErrorResponse, but Line 135 returns the bare io.Copy error. This may cause downstream error-handling logic (e.g., HTTP middleware mapping ErrorResponse to status codes) to behave differently for this path.

♻️ Wrap consistently
 	if _, err := io.Copy(writer, req.File); err != nil {
-		return ImportContributorResponse{}, err
+		return ImportContributorResponse{}, errmsg.ErrorResponse{
+			Message:         "failed to save uploaded file",
+			Errors:          map[string]interface{}{"error": err.Error()},
+			InternalErrCode: statuscode.IntCodeUnExpected,
+		}
 	}
contributorapp/service/job/entity.go-64-75 (1)

64-75: ⚠️ Potential issue | 🟡 Minor

mapToSlice() omits Email, causing incomplete raw data in fail records.

mapToSlice() is used to populate FailRecord.RawData (in service.go Line 256), but it excludes Email and CreatedAt. Since mapContributorRecordToUpsertRequest() maps all fields including Email, a failed upsert's raw-data snapshot will be missing the email, making debugging and retries harder.

Proposed fix — include Email
 func (c ContributorRecord) mapToSlice() []string {
-	s := make([]string, 0)
-	idStr := strconv.Itoa(int(c.GithubID))
-	s = append(s, idStr)
-	s = append(s, c.GithubUsername)
-	s = append(s, c.DisplayName)
-	s = append(s, c.ProfileImage)
-	s = append(s, c.Bio)
-	s = append(s, c.PrivacyMode)
-
-	return s
+	return []string{
+		strconv.FormatInt(c.GithubID, 10),
+		c.GithubUsername,
+		c.DisplayName,
+		c.ProfileImage,
+		c.Bio,
+		c.PrivacyMode,
+		c.Email,
+	}
 }

Note: strconv.FormatInt is preferred over strconv.Itoa(int(...)) since GithubID is int64 — the narrowing cast to int can silently truncate on 32-bit platforms.

Comment on lines +119 to +125
wg.Add(1)
go func() {
	defer wg.Done()
	app.Logger.Info("consumer started")
	app.Consumer.Start(ctx)
	app.Logger.Info("consumer stopped")
}()

⚠️ Potential issue | 🔴 Critical

Consumer lifecycle is correct, but graceful shutdown timeout is being ignored.

The consumer goroutine properly respects context cancellation and returns promptly when ctx.Done() is signalled. However, there's a critical issue on line 169: shutdownServers() receives a timeout-bound context but ignores it by creating parentCtx := context.Background(). This causes the shutdownTimeoutCtx (created on line 131 with TotalShutdownTimeout) to be discarded, defeating the graceful shutdown timeout mechanism. The passed ctx parameter should be used instead:

parentCtx := ctx

This matches the pattern used in taskapp and ensures the shutdown process respects the configured timeout.

🤖 Prompt for AI Agents
In `@contributorapp/app.go` around lines 119 - 125, The shutdownServers function
ignores the timeout-bound context by creating parentCtx := context.Background();
change this to use the passed ctx (parentCtx := ctx) so the shutdownTimeoutCtx
created earlier (with TotalShutdownTimeout) is respected; locate
shutdownServers, the parentCtx variable and any places where server shutdown
contexts are derived (e.g., shutdownTimeoutCtx) and ensure they use the passed
ctx to enforce the graceful shutdown timeout.

Comment on lines +118 to +146
for _, m := range stream.Messages {

	jobIDRaw, ok := m.Values["job_id"]
	if !ok {
		logger.L().Error("message missing job_id", "message_id", m.ID)

		continue
	}

	retryRaw, ok := m.Values["retry"]
	if !ok {
		logger.L().Error("message missing retry", "message_id", m.ID)

		continue
	}

	retry, err := strconv.Atoi(fmt.Sprint(retryRaw))
	if err != nil {
		logger.L().Error("invalid retry value", "message_id", m.ID, "error", err)

		continue
	}

	msg = append(msg, Message{
		ID:      m.ID,
		Payload: []byte(fmt.Sprint(jobIDRaw)),
		Retry:   retry,
	})
}

⚠️ Potential issue | 🔴 Critical

Malformed messages are skipped but never acknowledged — they will be redelivered infinitely.

When a message is missing job_id, retry, or has an unparseable retry value (Lines 121–139), the loop continues without calling XAck. Since the consumer group has already delivered these messages, they will reappear on every Consume call (as pending entries), flooding logs indefinitely and wasting resources.

Either acknowledge (and optionally DLQ) the bad messages, or explicitly handle them as failures.

Proposed fix — ack malformed messages
 		for _, m := range stream.Messages {
 
 			jobIDRaw, ok := m.Values["job_id"]
 			if !ok {
 				logger.L().Error("message missing job_id", "message_id", m.ID)
-
-				continue
+				_ = b.Ack(ctx, m.ID)
+				continue
 			}
 
 			retryRaw, ok := m.Values["retry"]
 			if !ok {
 				logger.L().Error("message missing retry", "message_id", m.ID)
-
-				continue
+				_ = b.Ack(ctx, m.ID)
+				continue
 			}
 
 			retry, err := strconv.Atoi(fmt.Sprint(retryRaw))
 			if err != nil {
 				logger.L().Error("invalid retry value", "message_id", m.ID, "error", err)
-
-				continue
+				_ = b.Ack(ctx, m.ID)
+				continue
 			}
🤖 Prompt for AI Agents
In `@contributorapp/repository/broker.go` around lines 118 - 146, Malformed
messages in the loop over stream.Messages (where you build Message structs from
m.Values and append to msg) are currently skipped with continue and never
acknowledged; update the error branches for missing "job_id", missing "retry",
and retry parse errors to acknowledge the message via the Redis consumer group
XAck call (use the same stream and consumer identifiers you use elsewhere in
this file) before continuing, and optionally route the bad payload to your DLQ
handler or log with the message ID; reference the loop over stream.Messages, the
message variable m (m.ID and m.Values), and the Message struct so you add XAck
calls in those three error paths to prevent infinite redelivery.

Comment on lines +159 to +169
cr := ContributorRecord{
	RowNumber:      r.rowNumber,
	GithubID:       id,
	GithubUsername: safeGet(r.data, colIdx[strings.ToLower(GithubUsername.String())]),
	DisplayName:    safeGet(r.data, colIdx[strings.ToLower(DisplayName.String())]),
	ProfileImage:   safeGet(r.data, colIdx[strings.ToLower(ProfileImage.String())]),
	Bio:            safeGet(r.data, colIdx[strings.ToLower(Bio.String())]),
	PrivacyMode:    safeGet(r.data, colIdx[strings.ToLower(PrivacyMode.String())]),
	Email:          safeGet(r.data, colIdx[strings.ToLower(Email.String())]),
	CreatedAt:      time.Now(),
}

⚠️ Potential issue | 🔴 Critical

Optional column lookups silently return index 0 when the column is missing, causing incorrect field assignments.

When a column like DisplayName is absent from the header, colIdx[strings.ToLower(DisplayName.String())] returns 0 (Go's zero value for missing map keys). safeGet(r.data, 0) then returns the first column's value instead of an empty string.

Use a comma-ok idiom to distinguish "column at index 0" from "column not found":

Proposed fix — helper that returns empty string for missing columns
+func getField(row []string, colIdx map[string]int, col ColumnName) string {
+	idx, ok := colIdx[strings.ToLower(col.String())]
+	if !ok {
+		return ""
+	}
+	return safeGet(row, idx)
+}

Then replace usages:

 			cr := ContributorRecord{
 				RowNumber:      r.rowNumber,
 				GithubID:       id,
-				GithubUsername: safeGet(r.data, colIdx[strings.ToLower(GithubUsername.String())]),
-				DisplayName:    safeGet(r.data, colIdx[strings.ToLower(DisplayName.String())]),
-				ProfileImage:   safeGet(r.data, colIdx[strings.ToLower(ProfileImage.String())]),
-				Bio:            safeGet(r.data, colIdx[strings.ToLower(Bio.String())]),
-				PrivacyMode:    safeGet(r.data, colIdx[strings.ToLower(PrivacyMode.String())]),
-				Email:          safeGet(r.data, colIdx[strings.ToLower(Email.String())]),
+				GithubUsername: getField(r.data, colIdx, GithubUsername),
+				DisplayName:    getField(r.data, colIdx, DisplayName),
+				ProfileImage:   getField(r.data, colIdx, ProfileImage),
+				Bio:            getField(r.data, colIdx, Bio),
+				PrivacyMode:    getField(r.data, colIdx, PrivacyMode),
+				Email:          getField(r.data, colIdx, Email),
 				CreatedAt:      time.Now(),
 			}
🤖 Prompt for AI Agents
In `@contributorapp/service/job/process.go` around lines 159 - 169, The map lookup
for column indices (colIdx) is using direct indexing so missing keys yield Go's
zero value (0) and safeGet(r.data, 0) returns the first column instead of empty;
update the code that builds ContributorRecord (where fields like DisplayName,
ProfileImage, Bio, PrivacyMode, Email are fetched) to first check the map with
the comma-ok idiom (idx, ok := colIdx[strings.ToLower(DisplayName.String())])
and only call safeGet when ok is true, otherwise use an empty string;
alternatively add a small helper (e.g., getOptional(colIdx, key, data) or
safeGetOptional) that performs the comma-ok check and returns "" when the column
key is missing, then replace the direct colIdx[...] usages in the
ContributorRecord construction with that helper.
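A runnable sketch of the comma-ok helper versus direct indexing (column names and safeGet are simplified stand-ins for the PR's versions):

```go
package main

import (
	"fmt"
	"strings"
)

// safeGet returns row[idx], or "" when idx is out of range.
func safeGet(row []string, idx int) string {
	if idx < 0 || idx >= len(row) {
		return ""
	}
	return row[idx]
}

// getField uses the comma-ok idiom to distinguish "column at index 0"
// from "column not found", returning "" only in the latter case.
func getField(row []string, colIdx map[string]int, col string) string {
	idx, ok := colIdx[strings.ToLower(col)]
	if !ok {
		return ""
	}
	return safeGet(row, idx)
}

func main() {
	// Header contains only github_id; display_name is absent.
	colIdx := map[string]int{"github_id": 0}
	row := []string{"12345"}

	// Direct indexing: the missing key yields 0, so the GitHub ID leaks in.
	fmt.Println(safeGet(row, colIdx["display_name"])) // 12345
	// Comma-ok helper returns an empty string instead.
	fmt.Printf("%q\n", getField(row, colIdx, "display_name")) // ""
}
```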

Comment on lines +175 to +185
if err := s.publisher.Publish(ctx, ProduceJob{JobID: job.ID,
	IdempotencyKey: fmt.Sprintf("%s:%s", job.IdempotencyKey, job.FileHash)}); err != nil {
	_ = s.jobRepo.UpdateStatus(ctx, job.ID, PendingToQueue)

	successSaveFile = true

	return ImportContributorResponse{
		JobID:   job.ID,
		Message: "job saved and will be queued shortly",
	}, nil
}

⚠️ Potential issue | 🔴 Critical

When Publish fails, job is marked PendingToQueue — but ProcessJob only accepts Pending.

Line 177 sets the status to PendingToQueue on publish failure, and the response message says the job "will be queued shortly." However, ProcessJob (Line 208) rejects any job whose status is not Pending. This means a retry/scheduler that later picks up PendingToQueue jobs and calls ProcessJob will always fail with "invalid job status: pending_to_queue".

Either ProcessJob should also accept PendingToQueue, or the retry path needs a separate re-publish flow that resets status to Pending first.

🐛 Minimal fix — accept PendingToQueue in ProcessJob
-	if job.Status != Pending {
+	if job.Status != Pending && job.Status != PendingToQueue {
 		return fmt.Errorf("invalid job status: %s", job.Status)
 	}
🤖 Prompt for AI Agents
In `@contributorapp/service/job/service.go` around lines 175 - 185, The
publish-failure path sets the job status to PendingToQueue (via
s.jobRepo.UpdateStatus after s.publisher.Publish fails) but ProcessJob only
accepts jobs with status Pending, causing retries to fail; update ProcessJob to
treat PendingToQueue as a valid processable status (i.e., accept both Pending
and PendingToQueue in the status check) or normalize PendingToQueue to Pending
before validation so retry/scheduler flows can proceed; adjust any status-check
branches in ProcessJob and unit tests referencing Pending to include
PendingToQueue handling.

Comment on lines +83 to +92
for _, m := range msgs {
	if err := p.consumer.CheckMaxRetry(ctx, m); err != nil {
		logger.L().Error("max retry done",
			"message id", m.ID,
			"error", err.Error())

		continue
	}
	msgCh <- m
}
⚠️ Potential issue | 🔴 Critical

Messages that fail CheckMaxRetry are never acknowledged — they will be re-consumed forever.

When CheckMaxRetry returns an error (line 84), the message is logged and skipped via continue, but it is never acknowledged or moved to a dead-letter queue. On the next Consume call, the same message will be fetched again, causing an infinite retry loop and log spam.

Either acknowledge the message or invoke HandleFailure to move it to a dead-letter / poison queue when max retries are exceeded.

🤖 Prompt for AI Agents
In `@contributorapp/worker/consumer.go` around lines 83 - 92, When
CheckMaxRetry(ctx, m) returns an error you must not simply continue — either
acknowledge the message or move it to a dead-letter/poison queue; update the
loop in consumer.go where p.consumer.CheckMaxRetry is called so that on error
you call the failure handling path (e.g., p.consumer.HandleFailure(ctx, m, err)
or the equivalent failure handler) or explicitly acknowledge/commit the message
(e.g., m.Ack()/p.consumer.Ack(ctx, m)) instead of skipping it, and keep the
existing log; this prevents the message from being re-consumed forever.

container_name: auth-app-prod
ports:
- "8081:8080"
- "18081:8080"
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for references to port 8081 in various file types
echo "=== Searching for '8081' references in YAML files ==="
rg '8081' --type yaml 2>/dev/null || echo "No matches in YAML"

echo -e "\n=== Searching for '8081' in config/env files ==="
rg '8081' -g '*.conf' -g '*.env*' -g '*.sh' -g '*.lua' -g '*.nginx' 2>/dev/null || echo "No matches in config/env files"

echo -e "\n=== Searching for '8081' in all files (excluding node_modules) ==="
rg '8081' --type-list | head -20
rg '8081' -g '!node_modules/**' -g '!.git/**' 2>/dev/null | head -30 || echo "No other matches found"

Repository: gocasters/rankr

Length of output: 50780


🏁 Script executed:

#!/bin/bash
# Also check the actual docker-compose files to understand the context
echo "=== Checking deploy/auth/production/docker-compose.yml ==="
cat -n deploy/auth/production/docker-compose.yml | grep -A 2 -B 2 "8080\|8081\|18081" || echo "File not found or no matches"

echo -e "\n=== Checking for other docker-compose files that may reference this service ==="
find . -name "docker-compose*.yml" -o -name "docker-compose*.yaml" 2>/dev/null | grep -v node_modules | head -20

Repository: gocasters/rankr

Length of output: 1455


Fix the docker-compose port mapping to match the application configuration port.

The port mapping "18081:8080" is inconsistent with deploy/auth/production/config.yml, which specifies port: 8081. With this mismatch, traffic sent to host port 18081 reaches container port 8080, where the auth app is not listening, so all requests to the auth service will fail. Change the docker-compose port mapping to "18081:8081" to align with the internal app port, consistent with other services in the deployment.

🤖 Prompt for AI Agents
In `@deploy/auth/production/docker-compose.yml` at line 8, The docker-compose port
mapping "18081:8080" is incorrect for the auth service; update the service's
port mapping from "18081:8080" to "18081:8081" so the external port 18081 maps
to the container/app port 8081 (matching the config.yml setting port: 8081).

Comment on lines +38 to +52
location / {
auth_request /auth_verify;
auth_request_set $auth_user_id $upstream_http_x_user_id;
auth_request_set $auth_role $upstream_http_x_role;
auth_request_set $auth_user_info $upstream_http_x_user_info;

proxy_set_header X-User-ID $auth_user_id;
proxy_set_header X-Role $auth_role;
proxy_set_header X-User-Info $auth_user_info;
proxy_pass http://$webhook_upstream;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
⚠️ Potential issue | 🔴 Critical

GitHub webhook deliveries will be rejected — external callers don't carry JWT tokens.

GitHub sends webhook payloads with its own authentication mechanism (X-Hub-Signature-256), not with a JWT bearer token from your auth service. Requiring auth_request /auth_verify on the catch-all location / means every incoming GitHub webhook POST will receive a 401, effectively breaking the integration.

You likely need to either:

  1. Exempt the webhook delivery path (e.g., /github-webhook/) from the auth wall entirely, or
  2. Add a separate location block for the webhook delivery path that bypasses JWT auth (similar to the health-check).
Option 1: Add an unauthenticated location for the webhook path
+    location /github-webhook/ {
+        proxy_pass http://$webhook_upstream;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+    }
+
     location / {
         auth_request /auth_verify;

If there are other internal/admin endpoints on the webhook service that do need JWT auth, they can remain under the authenticated location / block.

🤖 Prompt for AI Agents
In `@deploy/infrastructure/openresty/development/conf.d/webhook.conf` around lines
38 - 52, The current catch-all location block (location /) enforces auth_request
/auth_verify, which will reject GitHub webhook deliveries; add a separate
unauthenticated location for the webhook path (e.g., location /github-webhook/ {
... }) placed before the existing location / block that omits auth_request
/auth_verify and proxies to the same upstream ($webhook_upstream); include the
same necessary proxy_set_header lines (Host, X-Real-IP, X-Forwarded-For,
X-Forwarded-Proto) and ensure the X-Hub-Signature-256 header is
preserved/forwarded so GitHub can authenticate the payload, leaving other
internal endpoints under location / protected by auth_request /auth_verify.

Comment on lines +1 to +8
WEBHOOK_POSTGRES_DB__HOST=shared-postgres
WEBHOOK_POSTGRES_DB__PORT=5432
WEBHOOK_POSTGRES_DB__USER=webhook_user
WEBHOOK_POSTGRES_DB__PASSWORD=webhook_pass
WEBHOOK_POSTGRES_DB__DB_NAME=webhook_db
WEBHOOK_REDIS__HOST=shared-redis
WEBHOOK_REDIS__PORT=6379
WEBHOOK_NATS__URL=nats://shared-nats:4222
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify the critical security concerns in the review comment

echo "=== 1. Checking if deploy/webhook/production/.env.production exists ==="
if [ -f "deploy/webhook/production/.env.production" ]; then
  echo "✓ File exists"
  echo "Contents:"
  cat -n "deploy/webhook/production/.env.production"
else
  echo "✗ File does not exist"
fi

echo -e "\n=== 2. Checking .gitignore for .env patterns ==="
if [ -f ".gitignore" ]; then
  echo "Lines matching .env patterns:"
  rg '\.env' .gitignore -n || echo "No .env patterns found in .gitignore"
else
  echo "✗ No .gitignore found"
fi

echo -e "\n=== 3. Listing all .env files in repository ==="
fd -H -t f '\.env' --exclude node_modules --exclude .git --exclude .venv

echo -e "\n=== 4. Checking git status for .env.production file ==="
git ls-files | grep -E '\.env' || echo "No .env files tracked by git"

echo -e "\n=== 5. Searching for hardcoded password patterns ==="
rg -i 'password\s*=\s*\w+' --type-add 'env:*.env*' --type env -n 2>/dev/null || echo "Search completed"

echo -e "\n=== 6. Checking deployment/config structure ==="
find deploy -type f -name ".*" -o -name "*.env*" -o -name "*secret*" -o -name "*config*" | head -20

Repository: gocasters/rankr

Length of output: 4148


🏁 Script executed:

# Verify .gitignore pattern specificity
echo "=== Checking .gitignore .env pattern ==="
head -35 .gitignore | tail -10

echo -e "\n=== Understanding .env pattern matching ==="
echo "Testing what '.env' pattern matches:"
echo ".env would match: .env (exact)"
echo ".env would NOT match: .env.production"
echo "To match both, need: .env* or .env**"

Repository: gocasters/rankr

Length of output: 423


CRITICAL: Hardcoded credentials in production environment file.

This production configuration file contains hardcoded credentials in plain text (e.g., webhook_pass on line 4) and is committed to version control. The .gitignore pattern .env does not exclude .env.production files, leaving these credentials exposed.

Recommended fixes:

  1. Remove this file from version control and add deploy/**/.env.production to .gitignore
  2. Use a template file (e.g., .env.production.example) with placeholder values instead
  3. Use secrets management for production deployments:
    • Kubernetes Secrets
    • HashiCorp Vault
    • Cloud provider secret managers (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault)
    • Environment variables injected at runtime by CI/CD

Note: This pattern appears to affect multiple services across the deploy directory.

🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 4-4: [UnorderedKey] The WEBHOOK_POSTGRES_DB__PASSWORD key should go before the WEBHOOK_POSTGRES_DB__PORT key

(UnorderedKey)


[warning] 5-5: [UnorderedKey] The WEBHOOK_POSTGRES_DB__DB_NAME key should go before the WEBHOOK_POSTGRES_DB__HOST key

(UnorderedKey)


[warning] 8-8: [UnorderedKey] The WEBHOOK_NATS__URL key should go before the WEBHOOK_POSTGRES_DB__DB_NAME key

(UnorderedKey)

🤖 Prompt for AI Agents
In `@deploy/webhook/production/.env.production` around lines 1 - 8, This file
commits hardcoded production secrets (e.g., WEBHOOK_POSTGRES_DB__PASSWORD,
WEBHOOK_POSTGRES_DB__USER, WEBHOOK_POSTGRES_DB__DB_NAME, WEBHOOK_REDIS__HOST) —
remove the committed .env.production from version control, add a rule like
deploy/**/.env.production to .gitignore, replace the committed file with a
.env.production.example containing placeholder values for the keys shown
(WEBHOOK_POSTGRES_DB__HOST, WEBHOOK_POSTGRES_DB__PORT,
WEBHOOK_POSTGRES_DB__USER, WEBHOOK_POSTGRES_DB__PASSWORD,
WEBHOOK_POSTGRES_DB__DB_NAME, WEBHOOK_REDIS__HOST, WEBHOOK_REDIS__PORT,
WEBHOOK_NATS__URL), and update deployment/CI to inject real secrets at runtime
via Kubernetes Secrets, Vault, or cloud secret manager instead of storing them
in this file.
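A template file along the lines of recommendation 2 could look like this (placeholder values only; the key names are taken from the committed file above):

```
# deploy/webhook/production/.env.production.example
# Placeholders only -- real values are injected at deploy time by CI or a secret manager.
WEBHOOK_POSTGRES_DB__HOST=CHANGE_ME
WEBHOOK_POSTGRES_DB__PORT=5432
WEBHOOK_POSTGRES_DB__USER=CHANGE_ME
WEBHOOK_POSTGRES_DB__PASSWORD=CHANGE_ME
WEBHOOK_POSTGRES_DB__DB_NAME=CHANGE_ME
WEBHOOK_REDIS__HOST=CHANGE_ME
WEBHOOK_REDIS__PORT=6379
WEBHOOK_NATS__URL=nats://CHANGE_ME:4222
```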

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
authapp/delivery/http/handler.go (1)

111-119: ⚠️ Potential issue | 🟠 Major

Avoid exposing raw error messages in 500 responses.

Line 118 returns err.Error() directly to the client, which can leak internal details (stack traces, DB errors, etc.).

Proposed fix
-	return c.JSON(http.StatusInternalServerError, echo.Map{"error": err.Error()})
+	return c.JSON(http.StatusInternalServerError, echo.Map{"error": "internal server error"})
🤖 Fix all issues with AI agents
In `@authapp/app.go`:
- Around line 61-71: When grpc.NewClient returns successfully but
contributor.New fails, the rpcClient connection is leaked; update the error path
in the block that calls grpc.NewClient and contributor.New so that
rpcClient.Close() is invoked before logging/returning (or use a short-lived
defer right after successful grpc.NewClient until contributor.New succeeds), and
avoid shadowing the outer err so you call rpcClient.Close() when contributor.New
returns an error; modify the code around grpc.NewClient, contributor.New,
contributorClient and contributorProvider to ensure rpcClient.Close() is always
called on the contributor.New failure path.

In `@authapp/delivery/http/handler.go`:
- Around line 84-93: The handler currently calls h.tokenService.IssueTokens
whenever originalURI == "" && originalMethod == "" (direct /v1/me), effectively
doing silent refresh; change this to require and validate a refresh token for
re-issuance: instead of unconditionally calling IssueTokens in that branch,
check for a refresh token in the request (e.g., Authorization cookie/header or
form field), validate it via the token service (e.g.,
h.tokenService.ValidateRefreshToken or an IssueTokensFromRefresh method), and
only call IssueTokens (or the dedicated refresh method) when the refresh token
is present and valid; otherwise return 401/403. Keep use of claims.UserID,
claims.Role, claims.Access for authorization and still set
response["access_token"] and response["refresh_token"] only after successful
refresh validation.

In `@doc/auth-guideline.md`:
- Around line 38-49: Add a clear disclaimer to doc/auth-guideline.md near the
hardcoded bcrypt hash and plaintext password (the UPDATE contributors statement
that sets password = '$2a$10$gPAOVqKrU6Vtew1eqQU35...') stating this credential
(demo_pass_123) is only for local development/seeding, must never be used in
staging/production, and should not be committed or deployed; also add a short
note to ensure CI, deployment scripts, and .gitattributes/.gitignore do not
package or expose this seed data (mention contributor_user and github_username
'fdaei' as the example record).
- Around line 79-82: Replace the hardcoded JWT in the curl example with a
placeholder environment variable to avoid committing secrets; specifically
update the Authorization header in the auth-guideline.md curl example (the line
starting with -H "Authorization: Bearer ...") to use an env var like
$ADMIN_ACCESS_TOKEN (or similar) and update the surrounding example text to
instruct readers to export/set that variable locally before running the curl
command.

In `@pkg/role/access.go`:
- Around line 21-36: RequiredPermission currently returns empty Permission when
module or operation can't be resolved which is treated by HasPermission as "no
permission required" and thus allows requests; change RequiredPermission (and
callers) to fail-closed by returning a clear sentinel Permission value (e.g.,
Permission("deny") or Permission("unresolvable")) whenever
moduleFromHost/moduleFromPath or operationFromMethod yields empty, and update
HasPermission to explicitly deny any request whose required permission equals
that sentinel so unclassifiable requests are rejected by default.
- Around line 46-48: The current check using strings.Contains on the variable
path is too permissive and can allow auth bypass; change the logic in
pkg/role/access.go that tests path so it only treats requests as public when the
final path segment exactly equals "health-check" or "health_check" (or when the
full path exactly/prefix-matches the known public endpoint), e.g. trim
leading/trailing slashes, split on "/", inspect the last segment and compare for
equality instead of using strings.Contains; update the conditional that returns
true accordingly.
🧹 Nitpick comments (5)
pkg/role/access.go (2)

79-83: Dead code: len(parts) == 0 is unreachable.

strings.Split on a non-empty string always returns a slice with at least one element, so the guard on line 80 can never be true.

Simplify
 	parts := strings.Split(host, ".")
-	if len(parts) == 0 {
-		return ""
-	}
 	return strings.ToLower(parts[0])

96-101: Same dead-code pattern: len(parts) == 0 after SplitN on a non-empty string.

Same as moduleFromHost: strings.SplitN with a non-empty input always returns at least one element.

Simplify
 	parts := strings.SplitN(path, "/", 2)
-	if len(parts) == 0 {
-		return ""
-	}
 	return strings.ToLower(parts[0])
authapp/app.go (1)

81-91: Contributor field can be nil in degraded mode — consider documenting this.

Since contributorClient may remain nil when the contributor service is unavailable, Application.Contributor (an exported field) will be nil. The shutdown path at line 173 handles this correctly, but other consumers of Application should be aware. A brief doc comment on the field would help.

deploy/infrastructure/openresty/development/conf.d/auth.conf (1)

64-79: Contributor service path prefixes are quite generic.

Patterns like create, update, and password under /v1/ are very generic. If another service later introduces /v1/create-* or /v1/update-* paths, they'll silently route to the contributor service due to regex match order. Consider more specific prefixes (e.g., /v1/contributor/) if the API design allows it.

authapp/delivery/http/handler.go (1)

133-139: extractBearerToken is correct but could be slightly more robust.

The implementation is fine for standard usage. Minor note: strings.CutPrefix (Go 1.20+) would be slightly more idiomatic and avoids the manual length check.

Optional refactor
 func extractBearerToken(r *http.Request) string {
 	authz := r.Header.Get("Authorization")
-	if len(authz) > 7 && authz[:7] == "Bearer " {
-		return authz[7:]
+	if token, ok := strings.CutPrefix(authz, "Bearer "); ok {
+		return token
 	}
 	return ""
 }

Comment on lines +38 to +49
If you do not know the current password, set a known one:

```bash
docker exec -it rankr-shared-postgres psql -U contributor_user -d contributor_db -c "
UPDATE contributors
SET password = '\$2a\$10\$gPAOVqKrU6Vtew1eqQU35.XMAhxtIEqo0hiyBOnVWzgh27WwOe0Zq',
updated_at = NOW()
WHERE github_username = 'fdaei';
"
```

`$2a$10$gPAOVqKrU6Vtew1eqQU35.XMAhxtIEqo0hiyBOnVWzgh27WwOe0Zq` is bcrypt hash for `demo_pass_123`.
⚠️ Potential issue | 🟡 Minor

Hardcoded password and bcrypt hash in docs — acceptable for dev seed, but add a disclaimer.

The plaintext password demo_pass_123 and its bcrypt hash are documented for local development seeding. This is reasonable for a dev guide, but consider adding a note that this must never be used in staging/production, and ensure CI/.gitattributes don't accidentally deploy this seed.

🤖 Prompt for AI Agents
In `@doc/auth-guideline.md` around lines 38 - 49, Add a clear disclaimer to
doc/auth-guideline.md near the hardcoded bcrypt hash and plaintext password (the
UPDATE contributors statement that sets password =
'$2a$10$gPAOVqKrU6Vtew1eqQU35...') stating this credential (demo_pass_123) is
only for local development/seeding, must never be used in staging/production,
and should not be committed or deployed; also add a short note to ensure CI,
deployment scripts, and .gitattributes/.gitignore do not package or expose this
seed data (mention contributor_user and github_username 'fdaei' as the example
record).

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
authapp/delivery/http/handler.go (1)

124-132: ⚠️ Potential issue | 🟡 Minor

handleError leaks internal error details to the client.

Line 131 returns err.Error() for untyped errors. This can expose internal implementation details (stack traces, DB errors, etc.) to external callers.

Proposed fix
-	return c.JSON(http.StatusInternalServerError, echo.Map{"error": err.Error()})
+	return c.JSON(http.StatusInternalServerError, echo.Map{"error": "internal server error"})
authapp/service/tokenservice/auth_service.go (1)

10-14: ⚠️ Potential issue | 🟠 Major

Same signing secret and claim structure for access and refresh tokens — token confusion risk.

Both accessManager and refreshManager are initialized with the identical secret and produce tokens with the same UserClaims structure (no distinguishing claim such as "type" or "aud"). This means:

  1. An access token will pass VerifyRefreshToken, and a refresh token will pass VerifyToken.
  2. In the me handler, an attacker can submit the same access token as both Authorization: Bearer <tok> and X-Refresh-Token: <tok> — both verifications succeed, the user-ID/role/access equality check passes, and new tokens are issued. This turns any leaked access token into a self-refreshing credential.

Add a distinguishing claim (e.g., "type": "access" vs "type": "refresh") to UserClaims and reject tokens whose type doesn't match the manager that verifies them.

🤖 Fix all issues with AI agents
In `@authapp/delivery/http/handler.go`:
- Around line 154-170: The extractRefreshToken function currently accepts tokens
from headers, a cookie, and r.FormValue("refresh_token") which allows query
string/form extraction; remove the form/query extraction so extractRefreshToken
only returns tokens from the X-Refresh-Token or Refresh-Token headers and the
refresh_token cookie, i.e., delete the r.FormValue branch in extractRefreshToken
and ensure callers of extractRefreshToken still handle empty string cases
appropriately.

In `@pkg/role/access.go`:
- Around line 53-58: The current public endpoint check in pkg/role/access.go
uses publicEndpoints ({"v1/login","v1/me"}) plus
strings.HasPrefix(normalizedPath, endpoint+"/") which allows subpaths to bypass
auth but has no tests or docs; either (A) add unit tests asserting the function
that contains publicEndpoints returns true for subpaths like "/v1/me/settings"
and "/v1/login/callback" (create tests referencing the function or variable in
pkg/role/access_test.go to document intent), or (B) if only exact matches should
be public, remove the strings.HasPrefix branch from the function in access.go
and add tests that ensure "/v1/me" and "/v1/login" are allowed but
"/v1/me/settings" and "/v1/login/callback" are denied. Ensure tests reference
the same function/variable used in the diff (publicEndpoints / the
endpoint-checking function) so behavior is explicit.
🧹 Nitpick comments (3)
authapp/delivery/http/handler_test.go (2)

9-64: Good coverage of extraction sources; consider adding a "no token present" case.

The table-driven tests cover all four extraction paths. A negative case where no source provides a token (expecting "") would round out the coverage.

Suggested additional test case
+		{
+			name:    "no token anywhere",
+			setup:   func(r *http.Request) {},
+			wantTok: "",
+		},

66-92: sameAccessList tests cover the key scenarios — consider adding a duplicate-element case.

A case like a=["x","x"], b=["x","y"] would explicitly verify the counting logic rejects mismatched duplicates.

Suggested additional test case
+		{name: "duplicates differ", a: []string{"a:read", "a:read"}, b: []string{"a:read", "b:read"}, want: false},
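The counting logic that the duplicate-element case pins down can be sketched as a multiset comparison; this is an assumed shape for illustration, not the project's actual implementation:

```go
package main

// sameAccessList reports whether a and b contain the same elements with the
// same multiplicities, regardless of order. Counting up over a and down over
// b means a mismatched duplicate drives some count negative.
func sameAccessList(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	counts := make(map[string]int, len(a))
	for _, s := range a {
		counts[s]++
	}
	for _, s := range b {
		counts[s]--
		if counts[s] < 0 {
			return false
		}
	}
	return true
}
```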
pkg/role/access.go (1)

94-100: Dead code: len(parts) == 0 is unreachable after strings.Split / strings.SplitN.

strings.Split and strings.SplitN(..., 2) always return at least one element, so the len(parts) == 0 guards on lines 96 and 114 are dead code.

Cleanup
 	parts := strings.Split(host, ".")
-	if len(parts) == 0 {
-		return ""
-	}
 	return strings.ToLower(parts[0])
 	parts := strings.SplitN(path, "/", 2)
-	if len(parts) == 0 {
-		return ""
-	}
 	return strings.ToLower(parts[0])

Also applies to: 112-117

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@doc/auth-guideline.md`:
- Around line 66-67: The doc incorrectly states missing Authorization header
returns 400; update doc/auth-guideline.md to state that a missing Authorization
header returns http.StatusUnauthorized (401) to match the middleware
implementation in authapp/delivery/http/middleware.go which currently returns
http.StatusUnauthorized when the token is missing; ensure the bullet list
replaces "Missing `Authorization` header returns `400`." with "Missing
`Authorization` header returns `401`." (or equivalent wording referencing
401/Unauthorized).
🧹 Nitpick comments (3)
pkg/role/access_test.go (1)

29-29: Remove unnecessary loop variable capture on line 29.

The tc := tc pattern is not needed for Go 1.25.3—loop variables have been scoped per-iteration since Go 1.22. This can be safely removed as cleanup.

authapp/delivery/http/handler.go (2)

57-62: Consider using && instead of || for the gateway permission check.

With ||, a request carrying only one of the two headers (e.g., X-Original-Method without X-Original-URI) will enter the permission branch with incomplete context, causing RequiredPermission to return PermissionUnresolvable and always yielding a 403. Using && would ensure you only enforce permission when the gateway has supplied both routing hints, and silently skip the check when either is absent.

-	if originalURI != "" || originalMethod != "" {
+	if originalURI != "" && originalMethod != "" {

If the gateway is guaranteed to always send both headers together, this is a no-op change, but && communicates the intent more clearly and is safer against partial-header edge cases.


78-83: Silent failure on EncodeUserInfo may hinder debugging.

If claims.UserID is not a parseable uint64, the X-User-Info header is silently omitted. Consider logging the error at debug/warn level so operators can trace missing identity headers.

Comment on lines +66 to +67
- Missing `Authorization` header returns `400`.
- Invalid token or invalid role returns `401`.
⚠️ Potential issue | 🟡 Minor

Documentation says 400 for missing Authorization, but code returns 401.

The middleware in authapp/delivery/http/middleware.go (line 18) returns http.StatusUnauthorized (401) when the token is missing, not 400. Update the doc to match the implementation.

 Notes:
-- Missing `Authorization` header returns `400`.
+- Missing `Authorization` header returns `401`.
 - Invalid token or invalid role returns `401`.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- Missing `Authorization` header returns `400`.
- Invalid token or invalid role returns `401`.
Notes:
- Missing `Authorization` header returns `401`.
- Invalid token or invalid role returns `401`.
🤖 Prompt for AI Agents
In `@doc/auth-guideline.md` around lines 66 - 67, The doc incorrectly states
missing Authorization header returns 400; update doc/auth-guideline.md to state
that a missing Authorization header returns http.StatusUnauthorized (401) to
match the middleware implementation in authapp/delivery/http/middleware.go which
currently returns http.StatusUnauthorized when the token is missing; ensure the
bullet list replaces "Missing `Authorization` header returns `400`." with
"Missing `Authorization` header returns `401`." (or equivalent wording
referencing 401/Unauthorized).

@fdaei fdaei closed this Feb 18, 2026
@fdaei fdaei reopened this Feb 18, 2026
@fdaei fdaei closed this Feb 18, 2026
@fdaei fdaei reopened this Feb 18, 2026
@fdaei fdaei closed this Feb 18, 2026
