Conversation
- Add Watermill publisher to historical fetcher
- Publish events to rankr_raw_events topic after DB save
- Create NATS publisher in fetch-historical command
- Add project gRPC adapter
- Add RedisLeaderboardRepository for public leaderboard storage
- Implement SetPublicLeaderboard and GetPublicLeaderboard
- Add scheduler cron job for public leaderboard update
- Add run-scheduler command for manual testing
- Add gRPC handler for GetPublicLeaderboard
- Add migration to change resource_id to TEXT
Update SaveHistoricalEventsBulk to return list of inserted events alongside BulkInsertResult. The fetcher now only publishes events that were actually inserted, not duplicates skipped by ON CONFLICT.
…ntributor-from-file
When page parameter is 0 or negative, Redis ZREVRANGE would use negative indices and return data from the end of the sorted set instead of the beginning. This fix normalizes page < 1 to page = 1.
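The clamping described above can be sketched as a small helper. This is an illustrative sketch, not the repository's actual code — the function names (`normalizePage`, `zRangeBounds`) are hypothetical:

```go
package main

import "fmt"

// normalizePage clamps a 1-based page number so that derived Redis
// range indices never go negative: page values of 0 or below are
// treated as page 1, mirroring the fix described above.
func normalizePage(page int) int {
	if page < 1 {
		return 1
	}
	return page
}

// zRangeBounds converts (page, pageSize) into ZREVRANGE start/stop indices.
func zRangeBounds(page, pageSize int) (start, stop int) {
	page = normalizePage(page)
	start = (page - 1) * pageSize
	stop = start + pageSize - 1
	return start, stop
}

func main() {
	start, stop := zRangeBounds(0, 10) // page 0 is clamped to page 1
	fmt.Println(start, stop)           // 0 9
}
```

Without the clamp, page 0 would yield `start = -10, stop = -1`, and Redis interprets negative indices as offsets from the end of the sorted set.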
- Fix pgx.ErrNoRows comparison (was using sql.ErrNoRows incorrectly)
- Fix context key mismatch in handler (Authorization -> userInfo)
- Fix migration numbering (duplicate 000003, 000004)
- Simplify role migration syntax for gorp compatibility
- Add missing production config sections (middleware, validate, job, broker, worker)
This reverts commit 7599198.
…b.com:gocasters/rankr into feature/integrate-auth-and-contributor-import
…b.com:gocasters/rankr into feature/integrate-auth-and-contributor-import
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
📝 Walkthrough

Consolidates RBAC to enum roles, adds role-aware token verification/rotation (me + refresh-token), extends contributor VerifyPassword to return role, and implements a Redis-backed job pipeline (broker, worker pool, job service) with upload endpoints, file validation, migrations, config and deployment updates. (50 words)

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    rect rgba(200,200,255,0.5)
        participant Client as Client
        participant Gateway as OpenResty
        participant Auth as Auth Service
        participant Service as Upstream Service
    end
    Client->>Gateway: Request (Authorization: Bearer <token>)
    Gateway->>Auth: GET /v1/me (forward Authorization + X-Original-URI/Method/Host)
    Auth->>Auth: Verify token, check role, compute access list, possibly rotate tokens
    alt Unauthorized or permission missing
        Auth-->>Gateway: 401/403
        Gateway-->>Client: 401/403
    else Authorized
        Auth-->>Gateway: 200 + headers (X-User-ID, X-Role, X-User-Info, X-Access)
        Gateway->>Service: Forward request + user headers
        Service-->>Client: Response
    end
```
```mermaid
sequenceDiagram
    rect rgba(200,255,200,0.5)
        participant Uploader as Uploader
        participant App as Contributor App Handler
        participant JobSvc as Job Service
        participant DB as Postgres
        participant Broker as Redis Broker
        participant Pool as Worker Pool
        participant Proc as CSV/XLSX Processor
        participant Contr as Contributor Adapter
    end
    Uploader->>App: POST /jobs/upload (file + token)
    App->>Auth: Verify role (via AuthClient)
    App->>App: Validate file (middleware)
    App->>JobSvc: CreateImportJob(file)
    JobSvc->>DB: INSERT job (status=pending) -> job_id
    JobSvc->>Broker: Publish(job_id)
    JobSvc-->>App: 202 Accepted (job_id)
    Pool->>Broker: Consume()
    Broker-->>Pool: Message(job_id)
    Pool->>JobSvc: ProcessJob(job_id)
    JobSvc->>DB: Update status -> processing
    JobSvc->>Proc: Process(file)
    Proc-->>JobSvc: ProcessResult (success/fail records)
    loop per fail record
        JobSvc->>DB: Create fail_records
    end
    loop per success record
        JobSvc->>Contr: UpsertContributor(record)
    end
    JobSvc->>DB: Update job counts/status
    Pool->>Broker: Ack(message_id)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested labels
Suggested reviewers
Poem
🚥 Pre-merge checks: ✅ 2 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches

🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
deploy/notification/development/Dockerfile (1)
1-14: ⚠️ Potential issue | 🟡 Minor

Missing `GOPROXY` env — inconsistent with other development service Dockerfiles.

All other development Dockerfiles in this repo (auth, contributor, leaderboardscoring, leaderboardstat, project, realtime, task, userprofile, webhook) set `ENV GOPROXY=https://mirrors.aliyun.com/goproxy,direct`, but this one doesn't. The `go mod tidy` on Line 12 and build on Line 14 will fall back to the default proxy, which may fail or be slow in restricted network environments.

Proposed fix

```diff
 WORKDIR /app

+ENV GOPROXY=https://mirrors.aliyun.com/goproxy,direct
+
 COPY go.mod go.sum ./
```

cmd/contributor/command/migrate.go (1)
70-79: ⚠️ Potential issue | 🟠 Major

Migration errors are silently swallowed — process exits 0 on failure.

Two issues:

- Line 72: `fmt.Printf` without a newline, and no `os.Exit(1)` or `log.Fatalf` — a failed `Up` migration will appear to succeed to CI/CD pipelines.
- Line 75: `mgr.Down()` return value (if any error) is completely ignored, matching the same pattern.

This should use `log.Fatalf` (consistent with the rest of this file) or at minimum exit non-zero on failure.

Proposed fix

```diff
 if up {
-	if err := mgr.Up(); err != nil {
-		fmt.Printf("failed to migrate up: %v", err)
-	}
+	if err := mgr.Up(); err != nil {
+		log.Fatalf("failed to migrate up: %v", err)
+	}
 } else if down {
-	mgr.Down()
+	if err := mgr.Down(); err != nil {
+		log.Fatalf("failed to migrate down: %v", err)
+	}
 } else {
```

contributorapp/app.go (1)
165-188: ⚠️ Potential issue | 🟠 Major

`shutdownServers` ignores the passed context's deadline.

Line 169 creates a new `context.Background()` for the shutdown goroutines, discarding the timeout context passed as the `ctx` parameter. The actual shutdown operations (`shutdownHTTPServer`, `shutdownGRPCServer`) run without a deadline — the timeout only governs the `select` statement. If the HTTP/gRPC shutdown hangs, the goroutine leaks even after the function returns `false`.

Pass the received `ctx` (which already carries the timeout) to the shutdown goroutines instead:

Proposed fix

```diff
 func (app Application) shutdownServers(ctx context.Context) bool {
 	app.Logger.Info("Starting server shutdown process...")
 	shutdownDone := make(chan struct{})
-	parentCtx := context.Background()
 	go func() {
 		var shutdownWg sync.WaitGroup
 		shutdownWg.Add(2)
-		go app.shutdownHTTPServer(parentCtx, &shutdownWg)
+		go app.shutdownHTTPServer(ctx, &shutdownWg)
 		go app.shutdownGRPCServer(&shutdownWg)
 		shutdownWg.Wait()
 		close(shutdownDone)
```

Based on learnings: "In the taskapp/app.go shutdown flow, the shutdownServers method receives a context with timeout (TotalShutdownTimeout) from the Start method, but the current implementation ignores this by creating context.Background() instead of using the passed context parameter."
authapp/delivery/http/handler.go (1)
109-117: ⚠️ Potential issue | 🟡 Minor

`handleError` fallback exposes raw error messages to clients.

Line 116 returns `err.Error()` directly in the response body for untyped errors. This could leak internal implementation details (e.g., database errors, connection strings). Consider returning a generic message for the fallback case.

Proposed fix

```diff
-	return c.JSON(http.StatusInternalServerError, echo.Map{"error": err.Error()})
+	return c.JSON(http.StatusInternalServerError, echo.Map{"error": "internal server error"})
```

contributorapp/delivery/http/server.go (1)
46-53: ⚠️ Potential issue | 🟡 Minor

Redundant `RequireAuth` middleware on the upload route.

The `v1` group (line 46) already applies `echomiddleware.RequireAuth` to every route in the group. Line 51 adds it again as route-level middleware on `/jobs/upload`, causing the auth check to execute twice for every upload request.

Suggested fix

```diff
-	v1.POST("/jobs/upload", s.Handler.uploadFile, echomiddleware.RequireAuth, s.Middleware.CheckFile)
+	v1.POST("/jobs/upload", s.Handler.uploadFile, s.Middleware.CheckFile)
```
🤖 Fix all issues with AI agents
In `@contributorapp/app.go`:
- Around line 119-125: The shutdownServers function ignores the timeout-bound
context by creating parentCtx := context.Background(); change this to use the
passed ctx (parentCtx := ctx) so the shutdownTimeoutCtx created earlier (with
TotalShutdownTimeout) is respected; locate shutdownServers, the parentCtx
variable and any places where server shutdown contexts are derived (e.g.,
shutdownTimeoutCtx) and ensure they use the passed ctx to enforce the graceful
shutdown timeout.
In `@contributorapp/repository/broker.go`:
- Around line 118-146: Malformed messages in the loop over stream.Messages
(where you build Message structs from m.Values and append to msg) are currently
skipped with continue and never acknowledged; update the error branches for
missing "job_id", missing "retry", and retry parse errors to acknowledge the
message via the Redis consumer group XAck call (use the same stream and consumer
identifiers you use elsewhere in this file) before continuing, and optionally
route the bad payload to your DLQ handler or log with the message ID; reference
the loop over stream.Messages, the message variable m (m.ID and m.Values), and
the Message struct so you add XAck calls in those three error paths to prevent
infinite redelivery.
In `@contributorapp/service/job/process.go`:
- Around line 159-169: The map lookup for column indices (colIdx) is using
direct indexing so missing keys yield Go's zero value (0) and safeGet(r.data, 0)
returns the first column instead of empty; update the code that builds
ContributorRecord (where fields like DisplayName, ProfileImage, Bio,
PrivacyMode, Email are fetched) to first check the map with the comma-ok idiom
(idx, ok := colIdx[strings.ToLower(DisplayName.String())]) and only call safeGet
when ok is true, otherwise use an empty string; alternatively add a small helper
(e.g., getOptional(colIdx, key, data) or safeGetOptional) that performs the
comma-ok check and returns "" when the column key is missing, then replace the
direct colIdx[...] usages in the ContributorRecord construction with that
helper.
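The comma-ok helper suggested in the prompt above can look like this. A minimal sketch — `getOptional` and the column map shape are hypothetical, not the repository's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// getOptional looks up a column index with the comma-ok idiom and
// returns "" when the column is absent, instead of silently reading
// column 0 (Go's zero value for a missing map key).
func getOptional(colIdx map[string]int, key string, row []string) string {
	idx, ok := colIdx[strings.ToLower(key)]
	if !ok || idx < 0 || idx >= len(row) {
		return ""
	}
	return row[idx]
}

func main() {
	colIdx := map[string]int{"email": 0, "display_name": 1}
	row := []string{"a@example.com", "Ada"}

	fmt.Println(getOptional(colIdx, "Display_Name", row)) // Ada
	fmt.Println(getOptional(colIdx, "bio", row) == "")    // true: missing column, not row[0]
}
```

With direct indexing, `colIdx["bio"]` would be 0 and the email column would leak into the bio field.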
In `@contributorapp/service/job/service.go`:
- Around line 175-185: The publish-failure path sets the job status to
PendingToQueue (via s.jobRepo.UpdateStatus after s.publisher.Publish fails) but
ProcessJob only accepts jobs with status Pending, causing retries to fail;
update ProcessJob to treat PendingToQueue as a valid processable status (i.e.,
accept both Pending and PendingToQueue in the status check) or normalize
PendingToQueue to Pending before validation so retry/scheduler flows can
proceed; adjust any status-check branches in ProcessJob and unit tests
referencing Pending to include PendingToQueue handling.
In `@contributorapp/worker/consumer.go`:
- Around line 83-92: When CheckMaxRetry(ctx, m) returns an error you must not
simply continue — either acknowledge the message or move it to a
dead-letter/poison queue; update the loop in consumer.go where
p.consumer.CheckMaxRetry is called so that on error you call the failure
handling path (e.g., p.consumer.HandleFailure(ctx, m, err) or the equivalent
failure handler) or explicitly acknowledge/commit the message (e.g.,
m.Ack()/p.consumer.Ack(ctx, m)) instead of skipping it, and keep the existing
log; this prevents the message from being re-consumed forever.
In `@deploy/auth/production/docker-compose.yml`:
- Line 8: The docker-compose port mapping "18081:8080" is incorrect for the auth
service; update the service's port mapping from "18081:8080" to "18081:8081" so
the external port 18081 maps to the container/app port 8081 (matching the
config.yml setting port: 8081).
In `@deploy/infrastructure/openresty/development/conf.d/webhook.conf`:
- Around line 38-52: The current catch-all location block (location /) enforces
auth_request /auth_verify, which will reject GitHub webhook deliveries; add a
separate unauthenticated location for the webhook path (e.g., location
/github-webhook/ { ... }) placed before the existing location / block that omits
auth_request /auth_verify and proxies to the same upstream ($webhook_upstream);
include the same necessary proxy_set_header lines (Host, X-Real-IP,
X-Forwarded-For, X-Forwarded-Proto) and ensure the X-Hub-Signature-256 header is
preserved/forwarded so GitHub can authenticate the payload, leaving other
internal endpoints under location / protected by auth_request /auth_verify.
In `@deploy/webhook/production/.env.production`:
- Around line 1-8: This file commits hardcoded production secrets (e.g.,
WEBHOOK_POSTGRES_DB__PASSWORD, WEBHOOK_POSTGRES_DB__USER,
WEBHOOK_POSTGRES_DB__DB_NAME, WEBHOOK_REDIS__HOST) — remove the committed
.env.production from version control, add a rule like deploy/**/.env.production
to .gitignore, replace the committed file with a .env.production.example
containing placeholder values for the keys shown (WEBHOOK_POSTGRES_DB__HOST,
WEBHOOK_POSTGRES_DB__PORT, WEBHOOK_POSTGRES_DB__USER,
WEBHOOK_POSTGRES_DB__PASSWORD, WEBHOOK_POSTGRES_DB__DB_NAME,
WEBHOOK_REDIS__HOST, WEBHOOK_REDIS__PORT, WEBHOOK_NATS__URL), and update
deployment/CI to inject real secrets at runtime via Kubernetes Secrets, Vault,
or cloud secret manager instead of storing them in this file.
🟠 Major comments (20)
deploy/leaderboardscoring/production/.env.production (1)

1-5: ⚠️ Potential issue | 🟠 Major

Do not commit plaintext DB credentials.

These values include a database password in a production env file, which risks secret exposure and credential reuse. Store secrets in a secrets manager or CI/CD-injected runtime env vars, and keep only non-sensitive defaults in repo.
deploy/infrastructure/openresty/development/conf.d/realtime.conf (1)

38-52: ⚠️ Potential issue | 🟠 Major

Missing WebSocket upgrade headers breaks realtime service.

The realtime service handles WebSocket connections via `/v1/ws` (using gorilla/websocket), but the Nginx configuration lacks the required headers to proxy WebSocket frames. Without `proxy_http_version 1.1`, `Upgrade`, and `Connection` headers, Nginx will not upgrade the HTTP connection, causing all WebSocket clients to fail.

Proposed fix

```diff
 location / {
     auth_request /auth_verify;
     auth_request_set $auth_user_id $upstream_http_x_user_id;
     auth_request_set $auth_role $upstream_http_x_role;
     auth_request_set $auth_user_info $upstream_http_x_user_info;
     proxy_set_header X-User-ID $auth_user_id;
     proxy_set_header X-Role $auth_role;
     proxy_set_header X-User-Info $auth_user_info;
     proxy_pass http://$realtime_upstream;
     proxy_set_header Host $host;
     proxy_set_header X-Real-IP $remote_addr;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header X-Forwarded-Proto $scheme;
+    proxy_http_version 1.1;
+    proxy_set_header Upgrade $http_upgrade;
+    proxy_set_header Connection "upgrade";
 }
```

deploy/leaderboardstat/production/.env.production (1)
1-5: ⚠️ Potential issue | 🟠 Major

Plaintext credentials in production config committed to version control.

Committing passwords (even placeholder ones) in a file under `deploy/*/production/` risks them reaching actual production. Consider using a secrets manager, CI/CD-injected env vars, or a `.env.production.example` with dummy values while keeping real `.env.production` in `.gitignore`.

deploy/contributor/production/.env.production (1)
1-8: ⚠️ Potential issue | 🟠 Major

Production credentials should not be committed to version control.

Hardcoded database passwords (`contributor_pass`) and connection details in a production `.env` file checked into the repo is a security risk. Even if these are placeholder values, they tend to leak into actual deployments. Use a secrets manager (e.g., Vault, AWS Secrets Manager, Docker secrets) or inject these via CI/CD at deploy time, and add this file to `.gitignore`.

deploy/task/production/.env.production (1)
1-5: ⚠️ Potential issue | 🟠 Major

Same concern: production credentials committed to the repository.

`task_pass` is hardcoded on Line 4. Same recommendation as the contributor production env file — keep production secrets out of version control and use a secrets management solution.

contributorapp/client/auth.go (1)
36-37: ⚠️ Potential issue | 🟠 Major

Unsanitized `id` is interpolated into the URL path.

If `id` contains slashes, query strings, or other special characters (e.g., from user input or a compromised upstream), `fmt.Sprintf("%s/%s", ...)` can produce an unintended URL. Use `url.JoinPath` or `url.PathEscape(id)` to safely construct the path.

Proposed fix

```diff
+import "net/url"
 ...
 func (a AuthClient) GetRole(ctx context.Context, id string) (GetRoleResponse, error) {
-	url := fmt.Sprintf("%s/%s", a.config.Url, id)
+	reqURL, err := url.JoinPath(a.config.Url, id)
+	if err != nil {
+		return GetRoleResponse{}, fmt.Errorf("invalid auth URL: %w", err)
+	}

-	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
+	req, err := http.NewRequestWithContext(ctx, http.MethodGet, reqURL, nil)
```

contributorapp/delivery/http/middleware/middleware.go (1)
42-51: ⚠️ Potential issue | 🟠 Major

Fix middleware validation to handle MIME type charset suffixes for CSV uploads.

`http.DetectContentType` returns `"text/plain; charset=utf-8"` for CSV files (not `"text/csv"`), but the middleware's `validFileType` function at lines 79-82 uses exact string matching (`v == ext`). Since the config includes `"text/plain"` without the charset suffix, CSV uploads will be rejected. Update `validFileType` to handle charset like the service-level validator does (lines 62-65 in `contributorapp/service/job/validator.go`), or check the file extension from `fileHeader.Filename` instead.

contributorapp/repository/migrations/000006_seed_admin_contributor.sql (1)
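Charset-aware matching can be done with the standard library's `mime.ParseMediaType`, which strips parameters like `; charset=utf-8` before comparison. A sketch under stated assumptions — `allowedType` and the allow-list are illustrative, not the repository's actual validator:

```go
package main

import (
	"fmt"
	"mime"
)

// allowedType reports whether a detected Content-Type matches an
// allow-list entry once optional parameters such as "; charset=utf-8"
// are stripped. Hypothetical helper name.
func allowedType(detected string, allowed []string) bool {
	mediaType, _, err := mime.ParseMediaType(detected)
	if err != nil {
		return false
	}
	for _, a := range allowed {
		if mediaType == a {
			return true
		}
	}
	return false
}

func main() {
	allowed := []string{"text/plain", "text/csv"}
	// http.DetectContentType reports CSV content as text/plain plus a charset.
	fmt.Println(allowedType("text/plain; charset=utf-8", allowed)) // true
	fmt.Println(allowedType("application/zip", allowed))           // false
}
```

Exact string matching (`v == ext`) fails here because the detected value carries the charset parameter while the config entry does not.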
1-22: ⚠️ Potential issue | 🟠 Major

Hardcoded admin password hash in a migration is a security risk.

This seeds an admin account with a bcrypt hash committed to version control. Anyone with repo access can attempt an offline brute-force attack against this hash. If this migration runs in production, the password is effectively public.

Recommendations:

- Seed the admin account without a password in the migration, and set credentials through a secure out-of-band process (e.g., CLI tool, env-var-driven script).
- If this is strictly for local development, gate it so it cannot run in production (e.g., separate seed script, not a numbered migration).
- The `DOWN` migration deletes the entire contributor row — this could cascade-delete associated data if foreign keys exist. Consider only resetting the role instead.

pkg/role/access.go (1)
46-48: ⚠️ Potential issue | 🟠 Major

`strings.Contains` for health-check paths is overly permissive — potential auth bypass.

Any request whose path contains `"health-check"` or `"health_check"` anywhere (e.g., `/v1/admin/health-check-secret`, `/v1/contributors?q=health_check`) will be treated as public and skip permission checks. Use exact-match or prefix-match on a well-known segment instead.

Proposed fix — match on specific path prefixes

```diff
-	if strings.Contains(path, "health-check") || strings.Contains(path, "health_check") {
-		return true
-	}
+	if path == "/health-check" || path == "/health_check" ||
+		strings.HasPrefix(path, "/health-check/") || strings.HasPrefix(path, "/health_check/") ||
+		path == "/v1/health-check" || path == "/v1/health_check" {
+		return true
+	}
```

authapp/delivery/http/handler.go (1)
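The stricter matching from the proposed fix can be factored into one function and exercised against the bypass examples the comment cites. An illustrative sketch — `isHealthCheckPath` is a hypothetical name, not the repository's function:

```go
package main

import (
	"fmt"
	"strings"
)

// isHealthCheckPath allows only exact matches or prefix matches on a
// well-known segment, instead of strings.Contains anywhere in the path.
func isHealthCheckPath(path string) bool {
	known := []string{"/health-check", "/health_check", "/v1/health-check", "/v1/health_check"}
	for _, p := range known {
		if path == p || strings.HasPrefix(path, p+"/") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isHealthCheckPath("/health-check"))                 // true
	fmt.Println(isHealthCheckPath("/v1/admin/health-check-secret")) // false: no bypass
	fmt.Println(isHealthCheckPath("/health-checker"))               // false: segment boundary respected
}
```

Requiring either exact equality or a trailing `/` after the known prefix is what closes the `health-check-secret` style bypass.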
76-79: ⚠️ Potential issue | 🟠 Major

Unknown or unmapped paths silently bypass authorization (allow-by-default).

When `RequiredPermission` returns `""` (because the module or operation couldn't be determined from the path/host), `HasPermission` treats an empty permission as "no check needed" and returns `true`. This means any request to an unmapped or unexpected path passes authorization.

A deny-by-default approach is safer for a gateway auth wall: if the system can't determine the required permission, it should reject rather than allow.

```bash
#!/bin/bash
# Inspect RequiredPermission and isPublicPath to understand which paths are explicitly allowed
rg -n -A5 'func RequiredPermission|func isPublicPath|func moduleFromHost|func moduleFromPath|func operationFromMethod' --type go
```

contributorapp/service/job/param.go (1)
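The deny-by-default policy argued for above can be sketched in a few lines. This is illustrative only — `hasPermission` and the permission strings are hypothetical, not the auth service's actual API:

```go
package main

import "fmt"

// hasPermission rejects an empty required permission (unmapped path)
// instead of waving it through: deny-by-default.
func hasPermission(granted []string, required string) bool {
	if required == "" {
		return false // unmapped path: deny rather than allow
	}
	for _, g := range granted {
		if g == required {
			return true
		}
	}
	return false
}

func main() {
	granted := []string{"contributor:read", "contributor:write"}
	fmt.Println(hasPermission(granted, "contributor:read")) // true
	fmt.Println(hasPermission(granted, ""))                 // false: unknown path no longer bypasses auth
}
```

The single changed branch — returning `false` for an empty permission — converts the allow-by-default gateway into a deny-by-default one.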
15-18: ⚠️ Potential issue | 🟠 Major

ID fields use `uint` instead of `uint64`/`types.ID` — potential truncation on 32-bit platforms.

The domain ID type is `uint64` (see `type/id.go`), but `ImportContributorResponse.JobID`, `GetJobResponse.ID`, and `GetFailRecordsResponse.ID`/`JobID` are all `uint`. In Go, `uint` is platform-dependent (32-bit on 32-bit architectures), which could silently truncate IDs. Use `uint64` or `types.ID` for consistency with the rest of the codebase.

Also applies to: 20-21, 34-36

contributorapp/worker/process.go (1)
contributorapp/worker/process.go-9-10 (1)
9-10:⚠️ Potential issue | 🟠 Major
uinttruncation risk propagated through theProcessorinterface.
strconv.ParseUint(jobID, 10, 64)correctly parses a 64-bit value, but the castuint(uID)on line 31 and theProcessorinterface parameter type (jobID uint) will silently truncate on 32-bit platforms. Consider usinguint64consistently across the worker andProcessorinterface, aligned withtypes.ID(uint64).Also applies to: 26-31
contributorapp/service/job/param.go-28-28 (1)
28-28:⚠️ Potential issue | 🟠 MajorExposing
FilePathin API response may leak internal server paths.
FilePathexposes the server-side storage path (e.g.,./uploads/import_contributors/...) to API consumers. This is an information disclosure risk that can aid attackers in path-traversal or enumeration attacks. Consider omitting it from the response or replacing it with a sanitized/opaque reference.contributorapp/service/job/adapter.go-23-26 (1)
23-26: 🛠️ Refactor suggestion | 🟠 Major

Use `errors.As` instead of a direct type assertion to handle wrapped errors.

If the `Upsert` implementation (or anything in its call chain) wraps a `validator.Error` with `fmt.Errorf("...: %w", vErr)`, the type assertion `err.(validator.Error)` will fail and the error will be misclassified as `ErrTypeUnexpected`. `errors.As` unwraps the error chain.

Suggested fix

```diff
+import "errors"
+
 func (c ContributorAdapter) UpsertContributor(ctx context.Context, req ContributorRecord) error {
 	_, err := c.contributorSvc.Upsert(ctx, req.mapContributorRecordToUpsertRequest())
 	if err != nil {
-		if vErr, ok := err.(validator.Error); ok {
+		var vErr validator.Error
+		if errors.As(err, &vErr) {
 			return RecordProcessError{Err: vErr, Type: ErrTypeValidation}
 		}
```

deploy/contributor/development/config.yml (1)
150-150: ⚠️ Potential issue | 🟠 Major

YAML key `path_of_migrations` does not match koanf tag `path_of_migration` at the top level.

In `contributorapp/config.go` line 26, the struct field uses `koanf:"path_of_migration"` (singular), but line 150 in this YAML file specifies `path_of_migrations` (plural). Koanf will not bind this value, leaving `cfg.PathOfMigration` empty.

This does not cause a runtime failure because the application correctly reads from the nested `postgres_db.path_of_migrations` (line 109, which matches `pkg/database/config.go`). However, it breaks the deprecation handling logic: the fallback to the top-level field and its associated warning message will never execute.

Change line 150 from `path_of_migrations:` to `path_of_migration:` to fix the key name.

contributorapp/service/contributor/validator.go (1)
42-49: 🛠️ Refactor suggestion | 🟠 Major

Redundant PrivacyMode validation: the `When` block is unreachable.

Lines 44-45 make `PrivacyMode` required and validate it's in `{real, anonymous}`. The `When(req.PrivacyMode != "", ...)` block on lines 46-48 can never add value because `Required` already rejects empty values, and the `In` on line 45 already covers non-empty values.

The same duplication exists in `ValidateUpdateProfileRequest` (lines 65-71).

Proposed fix for ValidateCreateContributorRequest

```diff
 validation.Field(
 	&req.PrivacyMode,
 	validation.Required.Error(ErrValidationRequired),
 	validation.In(PrivacyModeReal, PrivacyModeAnonymous).Error(ErrValidationEnumPrivacy),
-	validation.When(req.PrivacyMode != "",
-		validation.In(PrivacyModeReal, PrivacyModeAnonymous).Error(ErrValidationEnumPrivacy),
-	),
 ),
```

Apply the same to `ValidateUpdateProfileRequest`.

contributorapp/service/job/process.go (1)
79-80: ⚠️ Potential issue | 🟠 Major

Silently ignoring `Columns()` error can corrupt the header.

If `rowsIter.Columns()` returns an error on the first row, `header` will be nil/empty, and `processRowsWithHeader` will flag every required column as missing — a confusing error message when the real cause is a read failure. For subsequent rows, corrupted data could silently slip through.

Proposed fix

```diff
-	row, _ := rowsIter.Columns()
+	row, err := rowsIter.Columns()
+	if err != nil {
+		return ProcessResult{}, fmt.Errorf("failed to read row columns: %w", err)
+	}
```

contributorapp/delivery/http/handler.go (1)
193-212: ⚠️ Potential issue | 🟠 Major

`getJobStatus` and `getFailRecords` lack role-based authorization and job ownership validation.

These endpoints are protected by the group-level `RequireAuth` middleware, so they require authentication. However, they lack the admin role check that `uploadFile` enforces, and neither endpoint validates that the authenticated user owns or is authorized to view the requested job. Any authenticated user can query job status or fail records for any job ID, potentially exposing sensitive contributor data from uploads they did not create.

contributorapp/service/job/service.go (1)
311-323: ⚠️ Potential issue | 🟠 Major

`FilePath` is exposed in `GetJobResponse`, leaking internal server paths.

Returning the absolute filesystem path (e.g., `/var/data/uploads/1234_file.csv`) to API consumers reveals server directory structure, which is an information disclosure concern.

♻️ Return only the filename or a download URL instead

```diff
 return GetJobResponse{
 	ID:             job.ID,
 	SuccessCount:   job.SuccessCount,
 	TotalRecords:   job.TotalRecords,
 	FailCount:      job.FailCount,
-	FilePath:       job.FilePath,
 	FileName:       job.FileName,
 	FileHash:       job.FileHash,
 	IdempotencyKey: job.IdempotencyKey,
```

contributorapp/service/job/service.go (1)
249-270: ⚠️ Potential issue | 🟠 Major

Non-`RecordProcessError` upsert failures are counted but never persisted to `failRepo`.

When `UpsertContributor` returns an error that does not satisfy `RecordProcessError` (Line 251 type assertion fails), the loop increments `failCount` (Line 266) but skips the `failRepo.Create` call entirely. This means the failure is silently lost — no raw data, no reason, no error type recorded — making it impossible to debug or retry these records.

🐛 Proposed fix — record all upsert failures

```diff
 for _, record := range pResult.SuccessRecords {
 	if err := s.contributorAdapter.UpsertContributor(ctx, record); err != nil {
-		if aErr, ok := err.(RecordProcessError); ok {
-			cErr := s.failRepo.Create(ctx, FailRecord{
-				JobID:        jobID,
-				RecordNumber: record.RowNumber,
-				Reason:       err.Error(),
-				RawData:      record.mapToSlice(),
-				RetryCount:   0,
-				LastError:    aErr.Err.Error(),
-				ErrType:      aErr.Type,
-			})
-			if cErr != nil {
-				logger.L().Error("failed to insert fail record", "row", record.RowNumber, "error", cErr.Error())
-			}
+		fr := FailRecord{
+			JobID:        jobID,
+			RecordNumber: record.RowNumber,
+			Reason:       err.Error(),
+			RawData:      record.mapToSlice(),
+			RetryCount:   0,
+			LastError:    err.Error(),
+		}
+		if aErr, ok := err.(RecordProcessError); ok {
+			fr.LastError = aErr.Err.Error()
+			fr.ErrType = aErr.Type
+		}
+		if cErr := s.failRepo.Create(ctx, fr); cErr != nil {
+			logger.L().Error("failed to insert fail record", "row", record.RowNumber, "error", cErr.Error())
 		}
 		failCount++
 		continue
 	}
```
🟡 Minor comments (21)
leaderboardstatapp/repository/redis-leaderboardstat.go (1)

31-35: ⚠️ Potential issue | 🟡 Minor

Consider also validating `pageSize`.

The `page` guard is a good addition, but `pageSize` is left unchecked. If `pageSize <= 0`, then `stop` becomes negative — and in Redis, a negative stop index in `ZREVRANGE` means "count from the end," which would return the entire sorted set instead of an empty page.

Suggested fix

```diff
 if page < 1 {
 	page = 1
 }
+if pageSize < 1 {
+	pageSize = 1
+}
 start := (page - 1) * pageSize
 stop := start + pageSize - 1
```

Alternatively, return early with an empty slice if `pageSize` is invalid, depending on the desired contract.

deploy/auth/production/docker-compose.yml (1)
10-11: ⚠️ Potential issue | 🟡 Minor

Production config references a "development" network.

The file lives under `deploy/auth/production/`, sets `ENV=production`, but uses `rankr-shared-development-network`. If this config is ever used for an actual production deployment, the network name is misleading and could cause the service to join the wrong network. If all environments intentionally share this single Docker network for local development, consider renaming it to something environment-neutral (e.g., `rankr-shared-network`) to avoid confusion.

Also applies to: 18-21
deploy/infrastructure/openresty/development/conf.d/auth.conf (1)

37-45: ⚠️ Potential issue | 🟡 Minor

Verify the `/auth/` rewrite behavior for edge cases.

The rewrite `^/auth/?(.*)$ /$1 break;` strips the `/auth/` prefix — so `/auth/v1/login` → `/v1/login`. This is fine for typical use, but note that a request to exactly `/auth/` will rewrite to `/` and be proxied to auth-app's root, and a request to `/auth` (no trailing slash) won't match this location block at all (it would fall through to the catch-all 404). Ensure clients always use the trailing-slash form or adjust the location directive.

deploy/contributor/development/docker-compose.yml (1)
19-21: ⚠️ Potential issue | 🟡 Minor

Lowercase env var prefix (`contributor_`) — same inconsistency flagged in the auth production env.
CONTRIBUTOR_POSTGRES_DB_HOST, etc.) across all services.deploy/auth/production/.env.production-1-5 (1)
1-5: ⚠️ Potential issue | 🟡 Minor

Inconsistent env var prefix casing across services.
This file uses a lowercase prefix (`auth_POSTGRES_DB__*`) while the leaderboardstat production env uses an uppercase prefix (`STAT_POSTGRES_DB__*`). Standardize on one convention (uppercase is the norm for environment variables) to avoid confusion and potential config-parsing mismatches.

Suggested fix
```diff
-auth_POSTGRES_DB__HOST=shared-postgres
-auth_POSTGRES_DB__PORT=5432
-auth_POSTGRES_DB__USER=auth_user
-auth_POSTGRES_DB__PASSWORD=auth_pass
-auth_POSTGRES_DB__DB_NAME=auth_db
+AUTH_POSTGRES_DB__HOST=shared-postgres
+AUTH_POSTGRES_DB__PORT=5432
+AUTH_POSTGRES_DB__USER=auth_user
+AUTH_POSTGRES_DB__PASSWORD=auth_pass
+AUTH_POSTGRES_DB__DB_NAME=auth_db
```
6-6:⚠️ Potential issue | 🟡 MinorRedundant unique index on
idempotency_key.Line 6 declares
UNIQUEonidempotency_key, which already causes PostgreSQL to create a unique index automatically. The explicitCREATE UNIQUE INDEXon Line 18 is redundant and results in a duplicate index consuming extra storage and write overhead.Proposed fix
Either remove the explicit index:
-CREATE UNIQUE INDEX IF NOT EXISTS idx_jobs_idempotency_key ON jobs(idempotency_key);Or remove the
UNIQUEconstraint from the column definition and keep only the named index (if you want a specific index name).Also applies to: 18-18
authapp/repository/migrations/001_rbac.sql-21-21 (1)
21-21: ⚠️ Potential issue | 🟡 Minor

Index `idx_role_permissions_role` is redundant with the composite primary key.
PRIMARY KEY (role, permission_id)on Line 21 already creates a B-tree index withroleas the leading column, which satisfies lookups filtering byrolealone. The additional index on Line 24 is a duplicate that doubles write overhead for no query benefit.Proposed fix
-CREATE INDEX IF NOT EXISTS idx_role_permissions_role ON role_permissions (role);And in the down migration:
-DROP INDEX IF EXISTS idx_role_permissions_role;Also applies to: 24-24
contributorapp/repository/migrations/000004_create_fail_records_table.sql-11-12 (1)
11-12: ⚠️ Potential issue | 🟡 Minor

`updated_at` has no default, unlike `created_at` and the `jobs` table convention.

`created_at` defaults to `NOW()` and the `jobs` table sets both `created_at` and `updated_at` to `NOW()`. Here, `updated_at` will be NULL on insert, which creates an inconsistency. If the intent is "not yet updated", that's fine but should be deliberate; otherwise add `DEFAULT NOW()`.

contributorapp/repository/migrations/000003_create_jobs_table.sql (1)
11-13: ⚠️ Potential issue | 🟡 Minor

Count columns are nullable — arithmetic on NULLs may surprise.

`total_records`, `success_count`, and `fail_count` have a `DEFAULT 0` but no `NOT NULL` constraint. An explicit `UPDATE ... SET total_records = NULL` (or an ORM zero-value mishap) would cause downstream arithmetic (`success_count + fail_count`) to return NULL. Add `NOT NULL` to match the intent.

Proposed fix

```diff
-    total_records BIGINT DEFAULT 0,
-    success_count BIGINT DEFAULT 0,
-    fail_count BIGINT DEFAULT 0,
+    total_records BIGINT NOT NULL DEFAULT 0,
+    success_count BIGINT NOT NULL DEFAULT 0,
+    fail_count BIGINT NOT NULL DEFAULT 0,
```

deploy/task/development/Dockerfile (1)
24-25: ⚠️ Potential issue | 🟡 Minor

Air binary and module cache are copied but unused in CMD.

Air is installed and copied to the runtime image (line 23), and the module cache is copied (lines 24-25), but the CMD on line 30 runs the pre-built binary directly instead of invoking Air for hot-reload. Either remove these unnecessary copies or update CMD to use Air (e.g., `CMD ["air", "-c", "/app/.air.toml"]`). This pattern appears in most development Dockerfiles, but only `deploy/leaderboardstat/development/Dockerfile` and `deploy/auth/development/Dockerfile` actually invoke Air in their CMD.

contributorapp/service/job/validator.go (1)
62-66: ⚠️ Potential issue | 🟡 Minor

MIME prefix-match logic has an inverted first condition.

`strings.HasPrefix(m, mime)` checks if the allowed type starts with the detected type. This is backwards — it would, for example, let a detected `"text/"` (hypothetically) match any `"text/..."` config entry. The second condition already correctly handles base-type prefix matching. The first condition should either be removed or flipped.

Proposed fix

```diff
 for _, m := range v.config.HttpFileType {
-	if strings.HasPrefix(m, mime) || strings.HasPrefix(mime, strings.Split(m, ";")[0]) {
+	if m == mime || strings.HasPrefix(mime, strings.Split(m, ";")[0]) {
 		return nil
 	}
 }
```

contributorapp/service/job/param.go (1)
21-21: ⚠️ Potential issue | 🟡 Minor

Inconsistent JSON tag casing: `"ID"` vs snake_case used elsewhere.

`GetJobResponse.ID` has JSON tag `"ID"` (PascalCase), while every other field in this file and other response structs uses `snake_case`. This will produce inconsistent API output.

Suggested fix

```diff
-	ID uint `json:"ID"`
+	ID uint `json:"id"`
```

contributorapp/service/job/param.go (1)
8-13: ⚠️ Potential issue | 🟡 Minor

`json:"file"` tag on `multipart.File` is misleading and non-functional.

`multipart.File` is an interface (`io.Reader` + `io.ReaderAt` + `io.Seeker`) and cannot be marshaled/unmarshaled via `encoding/json`. While this struct is likely populated manually from a multipart form, the JSON tag is misleading and will silently produce `null` on marshal or fail on unmarshal.

Suggested fix

```diff
 type ImportContributorRequest struct {
-	File           multipart.File `json:"file"`
+	File           multipart.File `json:"-"`
 	FileName       string         `json:"file_name"`
 	FileType       string         `json:"file_type"`
 	IdempotencyKey string         `json:"idempotency_key"`
 }
```

contributorapp/service/contributor/param.go (1)
63-63: ⚠️ Potential issue | 🟡 Minor

Typo: `CreateAt` should be `CreatedAt` (and JSON tag `created_at`).

Every other timestamp field in this file and the codebase uses `CreatedAt`/`created_at` (e.g., line 16 `GetProfileResponse.CreatedAt`, line 51 `UpdateProfileResponse.CreatedAt`). The inconsistency is also evident in the service layer, which maps this field to `CreatedAt` on internal structures (contributorapp/service/contributor/service.go lines 179, 203).

Suggested fix

```diff
-	CreateAt time.Time `json:"create_at"`
+	CreatedAt time.Time `json:"created_at"`
```

contributorapp/delivery/http/handler.go (1)
136-142: ⚠️ Potential issue | 🟡 Minor

Missing/malformed file upload should return 400, not 500.

`c.FormFile("file")` fails when the client doesn't include a file field or sends a malformed multipart request — that's a client error, not a server error.

Proposed fix

```diff
 fileHeader, err := c.FormFile("file")
 if err != nil {
-	return c.JSON(http.StatusInternalServerError, map[string]string{
+	return c.JSON(http.StatusBadRequest, map[string]string{
 		"message": "fail to get file",
 		"error":   err.Error(),
 	})
 }
```

contributorapp/delivery/http/handler.go (1)
109-109: ⚠️ Potential issue | 🟡 Minor

`roleName` should be a `const`, not a `var`.

A mutable package-level variable for a security-critical policy value is risky — it can be accidentally reassigned at runtime. Since `"admin"` is a fixed value, declare it as a constant.

Proposed fix

```diff
-var roleName = "admin"
+const roleName = "admin"
```

contributorapp/service/contributor/service.go (1)
152-215: ⚠️ Potential issue | 🟡 Minor

Remove `CreatedAt` assignment in the update path (line 203).

The `Upsert` method sets `upRequest.CreatedAt = req.CreateAt` when updating an existing contributor. While the database does not actually use this field (it's not included in the SQL UPDATE parameters), assigning `CreatedAt` is semantically incorrect. A record's creation timestamp should remain immutable after the initial insert. This causes confusion and makes the code's intent unclear to maintainers.

`UpdatedAt` is already properly handled by the database (automatically set to `NOW()` in the update query), so no changes are needed there.

Proposed fix: remove CreatedAt from the update path

```diff
 upRequest.ID = c.ID
 upRequest.GitHubID = req.GitHubID
 upRequest.GitHubUsername = req.GitHubUsername
 upRequest.DisplayName = req.DisplayName
 upRequest.Bio = req.Bio
 upRequest.PrivacyMode = req.PrivacyMode
 upRequest.ProfileImage = req.ProfileImage
 upRequest.Email = req.Email
-upRequest.CreatedAt = req.CreateAt
```

contributorapp/service/job/service.go (1)
280-285: ⚠️ Potential issue | 🟡 Minor

Empty file (zero records) is marked `Failed`.

If both `successCount` and `failCount` are 0 (e.g., an empty CSV), the default `finalStatus` is `Failed`. An empty file isn't really a failure — consider handling this edge case explicitly, perhaps as `Success` or a dedicated status.

contributorapp/service/job/service.go (1)
86-99: ⚠️ Potential issue | 🟡 Minor

Idempotency-key lookup swallows errors when `j != nil`.

If `GetJobByIdempotencyKey` returns both a non-nil `j` and a non-nil `err` (e.g., a partial result plus a warning/error), Lines 87-88 short-circuit and the error is never inspected. While most repository implementations return one or the other, this ordering is fragile. Checking the error first is the safer Go convention.

♻️ Suggested reordering

```diff
 j, err := s.jobRepo.GetJobByIdempotencyKey(ctx, req.IdempotencyKey)
-if j != nil {
-	return ImportContributorResponse{JobID: j.ID, Message: "The file with this idempotency key exists"}, nil
-}
-
-if err != nil {
+if err != nil {
 	if !errors.Is(err, ErrJobNotExists) {
 		return ImportContributorResponse{}, errmsg.ErrorResponse{
 			Message:         "failed to get job by idempotency key",
 			Errors:          map[string]interface{}{"error": err.Error()},
 			InternalErrCode: statuscode.IntCodeUnExpected,
 		}
 	}
 }
+
+if j != nil {
+	return ImportContributorResponse{JobID: j.ID, Message: "The file with this idempotency key exists"}, nil
+}
```

contributorapp/service/job/service.go (1)
132-136: ⚠️ Potential issue | 🟡 Minor

Inconsistent error handling — raw error returned from `io.Copy`.

Every other failure path in `CreateImportJob` wraps the error in `errmsg.ErrorResponse`, but Line 135 returns the bare `io.Copy` error. This may cause downstream error-handling logic (e.g., HTTP middleware mapping `ErrorResponse` to status codes) to behave differently for this path.

♻️ Wrap consistently

```diff
 if _, err := io.Copy(writer, req.File); err != nil {
-	return ImportContributorResponse{}, err
+	return ImportContributorResponse{}, errmsg.ErrorResponse{
+		Message:         "failed to save uploaded file",
+		Errors:          map[string]interface{}{"error": err.Error()},
+		InternalErrCode: statuscode.IntCodeUnExpected,
+	}
 }
```

contributorapp/service/job/entity.go (1)
64-75: ⚠️ Potential issue | 🟡 Minor

`mapToSlice()` omits `Email`.

`mapToSlice()` is used to populate `FailRecord.RawData` (in `service.go` Line 256), but it excludes `Email` (and `CreatedAt`). Since `mapContributorRecordToUpsertRequest()` maps all fields including `Email`, failed rows lose that field in their raw-data snapshot.

Proposed fix — include Email

```diff
 func (c ContributorRecord) mapToSlice() []string {
-	s := make([]string, 0)
-	idStr := strconv.Itoa(int(c.GithubID))
-	s = append(s, idStr)
-	s = append(s, c.GithubUsername)
-	s = append(s, c.DisplayName)
-	s = append(s, c.ProfileImage)
-	s = append(s, c.Bio)
-	s = append(s, c.PrivacyMode)
-
-	return s
+	return []string{
+		strconv.FormatInt(c.GithubID, 10),
+		c.GithubUsername,
+		c.DisplayName,
+		c.ProfileImage,
+		c.Bio,
+		c.PrivacyMode,
+		c.Email,
+	}
 }
```

Note: `strconv.FormatInt` is preferred over `strconv.Itoa(int(...))` since `GithubID` is `int64` — the narrowing cast to `int` can silently truncate on 32-bit platforms.
```go
wg.Add(1)
go func() {
	defer wg.Done()
	app.Logger.Info("consumer started")
	app.Consumer.Start(ctx)
	app.Logger.Info("consumer stopped")
}()
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find the contributorapp/app.go file and examine the relevant code
find . -name "app.go" -path "*/contributorapp/*" | head -5

# Explore the directory structure to understand the repo layout
ls -la

# Search for the worker Pool.Start implementation
rg "func.*Pool.*Start" --type go -A 10

# Read the complete consumer implementation
wc -l ./contributorapp/worker/consumer.go
cat -n ./contributorapp/worker/consumer.go

# Read app.go around the lines of interest (119-125) and the shutdown logic
wc -l ./contributorapp/app.go
cat -n ./contributorapp/app.go | sed -n '100,209p'

# Search for the shutdownServers implementation
rg "func.*shutdownServers" --type go -A 20
```
Consumer lifecycle is correct, but graceful shutdown timeout is being ignored.
The consumer goroutine properly respects context cancellation and returns promptly when ctx.Done() is signalled. However, there's a critical issue on line 169: shutdownServers() receives a timeout-bound context but ignores it by creating parentCtx := context.Background(). This causes the shutdownTimeoutCtx (created on line 131 with TotalShutdownTimeout) to be discarded, defeating the graceful shutdown timeout mechanism. The passed ctx parameter should be used instead:
```go
parentCtx := ctx
```
This matches the pattern used in taskapp and ensures the shutdown process respects the configured timeout.
🤖 Prompt for AI Agents
In `@contributorapp/app.go` around lines 119 - 125, The shutdownServers function
ignores the timeout-bound context by creating parentCtx := context.Background();
change this to use the passed ctx (parentCtx := ctx) so the shutdownTimeoutCtx
created earlier (with TotalShutdownTimeout) is respected; locate
shutdownServers, the parentCtx variable and any places where server shutdown
contexts are derived (e.g., shutdownTimeoutCtx) and ensure they use the passed
ctx to enforce the graceful shutdown timeout.
```go
for _, m := range stream.Messages {
	jobIDRaw, ok := m.Values["job_id"]
	if !ok {
		logger.L().Error("message missing job_id", "message_id", m.ID)

		continue
	}

	retryRaw, ok := m.Values["retry"]
	if !ok {
		logger.L().Error("message missing retry", "message_id", m.ID)

		continue
	}

	retry, err := strconv.Atoi(fmt.Sprint(retryRaw))
	if err != nil {
		logger.L().Error("invalid retry value", "message_id", m.ID, "error", err)

		continue
	}

	msg = append(msg, Message{
		ID:      m.ID,
		Payload: []byte(fmt.Sprint(jobIDRaw)),
		Retry:   retry,
	})
}
```
Malformed messages are skipped but never acknowledged — they will be redelivered infinitely.
When a message is missing job_id, retry, or has an unparseable retry value (Lines 121–139), the loop continues without calling XAck. Since the consumer group has already delivered these messages, they will reappear on every Consume call (as pending entries), flooding logs indefinitely and wasting resources.
Either acknowledge (and optionally DLQ) the bad messages, or explicitly handle them as failures.
Proposed fix — ack malformed messages
```diff
 for _, m := range stream.Messages {
 	jobIDRaw, ok := m.Values["job_id"]
 	if !ok {
 		logger.L().Error("message missing job_id", "message_id", m.ID)
-
-		continue
+		_ = b.Ack(ctx, m.ID)
+		continue
 	}
 	retryRaw, ok := m.Values["retry"]
 	if !ok {
 		logger.L().Error("message missing retry", "message_id", m.ID)
-
-		continue
+		_ = b.Ack(ctx, m.ID)
+		continue
 	}
 	retry, err := strconv.Atoi(fmt.Sprint(retryRaw))
 	if err != nil {
 		logger.L().Error("invalid retry value", "message_id", m.ID, "error", err)
-
-		continue
+		_ = b.Ack(ctx, m.ID)
+		continue
 	}
```
🤖 Prompt for AI Agents
In `@contributorapp/repository/broker.go` around lines 118 - 146, Malformed
messages in the loop over stream.Messages (where you build Message structs from
m.Values and append to msg) are currently skipped with continue and never
acknowledged; update the error branches for missing "job_id", missing "retry",
and retry parse errors to acknowledge the message via the Redis consumer group
XAck call (use the same stream and consumer identifiers you use elsewhere in
this file) before continuing, and optionally route the bad payload to your DLQ
handler or log with the message ID; reference the loop over stream.Messages, the
message variable m (m.ID and m.Values), and the Message struct so you add XAck
calls in those three error paths to prevent infinite redelivery.
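The ack-on-malformed pattern can be exercised without Redis. A minimal sketch against a stubbed broker; the `ackBroker` and `rawMsg` types below are illustrative stand-ins, not the repository's actual types:

```go
package main

import "fmt"

// ackBroker is a hypothetical stand-in for the Redis stream broker.
type ackBroker struct{ acked []string }

func (b *ackBroker) Ack(id string) { b.acked = append(b.acked, id) }

type rawMsg struct {
	ID     string
	Values map[string]interface{}
}

func main() {
	msgs := []rawMsg{
		{ID: "1-0", Values: map[string]interface{}{"job_id": 42, "retry": 0}},
		{ID: "2-0", Values: map[string]interface{}{"retry": 0}}, // malformed: no job_id
	}

	b := &ackBroker{}
	for _, m := range msgs {
		if _, ok := m.Values["job_id"]; !ok {
			// Acknowledge so the pending entry is not redelivered forever.
			b.Ack(m.ID)
			continue
		}
		// Well-formed: build a Message and hand it to the worker pool.
	}
	fmt.Println(b.acked) // [2-0]
}
```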
```go
cr := ContributorRecord{
	RowNumber:      r.rowNumber,
	GithubID:       id,
	GithubUsername: safeGet(r.data, colIdx[strings.ToLower(GithubUsername.String())]),
	DisplayName:    safeGet(r.data, colIdx[strings.ToLower(DisplayName.String())]),
	ProfileImage:   safeGet(r.data, colIdx[strings.ToLower(ProfileImage.String())]),
	Bio:            safeGet(r.data, colIdx[strings.ToLower(Bio.String())]),
	PrivacyMode:    safeGet(r.data, colIdx[strings.ToLower(PrivacyMode.String())]),
	Email:          safeGet(r.data, colIdx[strings.ToLower(Email.String())]),
	CreatedAt:      time.Now(),
}
```
Optional column lookups silently return index 0 when the column is missing, causing incorrect field assignments.
When a column like DisplayName is absent from the header, colIdx[strings.ToLower(DisplayName.String())] returns 0 (Go's zero value for missing map keys). safeGet(r.data, 0) then returns the first column's value instead of an empty string.
Use a comma-ok idiom to distinguish "column at index 0" from "column not found":
Proposed fix — helper that returns empty string for missing columns
```diff
+func getField(row []string, colIdx map[string]int, col ColumnName) string {
+	idx, ok := colIdx[strings.ToLower(col.String())]
+	if !ok {
+		return ""
+	}
+	return safeGet(row, idx)
+}
```

Then replace usages:
```diff
 cr := ContributorRecord{
 	RowNumber:      r.rowNumber,
 	GithubID:       id,
-	GithubUsername: safeGet(r.data, colIdx[strings.ToLower(GithubUsername.String())]),
-	DisplayName:    safeGet(r.data, colIdx[strings.ToLower(DisplayName.String())]),
-	ProfileImage:   safeGet(r.data, colIdx[strings.ToLower(ProfileImage.String())]),
-	Bio:            safeGet(r.data, colIdx[strings.ToLower(Bio.String())]),
-	PrivacyMode:    safeGet(r.data, colIdx[strings.ToLower(PrivacyMode.String())]),
-	Email:          safeGet(r.data, colIdx[strings.ToLower(Email.String())]),
+	GithubUsername: getField(r.data, colIdx, GithubUsername),
+	DisplayName:    getField(r.data, colIdx, DisplayName),
+	ProfileImage:   getField(r.data, colIdx, ProfileImage),
+	Bio:            getField(r.data, colIdx, Bio),
+	PrivacyMode:    getField(r.data, colIdx, PrivacyMode),
+	Email:          getField(r.data, colIdx, Email),
 	CreatedAt:      time.Now(),
 }
```
🤖 Prompt for AI Agents
In `@contributorapp/service/job/process.go` around lines 159 - 169, The map lookup
for column indices (colIdx) is using direct indexing so missing keys yield Go's
zero value (0) and safeGet(r.data, 0) returns the first column instead of empty;
update the code that builds ContributorRecord (where fields like DisplayName,
ProfileImage, Bio, PrivacyMode, Email are fetched) to first check the map with
the comma-ok idiom (idx, ok := colIdx[strings.ToLower(DisplayName.String())])
and only call safeGet when ok is true, otherwise use an empty string;
alternatively add a small helper (e.g., getOptional(colIdx, key, data) or
safeGetOptional) that performs the comma-ok check and returns "" when the column
key is missing, then replace the direct colIdx[...] usages in the
ContributorRecord construction with that helper.
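The zero-value behavior behind this bug can be verified on its own (the column names below are examples):

```go
package main

import "fmt"

func main() {
	colIdx := map[string]int{"github_id": 0, "github_username": 1}

	// Direct indexing on a missing key yields the zero value 0,
	// indistinguishable from "column is at index 0".
	idx := colIdx["display_name"]
	fmt.Println(idx) // 0

	// The comma-ok idiom distinguishes the two cases.
	if _, ok := colIdx["display_name"]; !ok {
		fmt.Println("column not present") // this branch is taken
	}
}
```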
```go
if err := s.publisher.Publish(ctx, ProduceJob{JobID: job.ID,
	IdempotencyKey: fmt.Sprintf("%s:%s", job.IdempotencyKey, job.FileHash)}); err != nil {
	_ = s.jobRepo.UpdateStatus(ctx, job.ID, PendingToQueue)

	successSaveFile = true

	return ImportContributorResponse{
		JobID:   job.ID,
		Message: "job saved and will be queued shortly",
	}, nil
}
```
When Publish fails, job is marked PendingToQueue — but ProcessJob only accepts Pending.
Line 177 sets the status to PendingToQueue on publish failure, and the response message says the job "will be queued shortly." However, ProcessJob (Line 208) rejects any job whose status is not Pending. This means a retry/scheduler that later picks up PendingToQueue jobs and calls ProcessJob will always fail with "invalid job status: pending_to_queue".
Either ProcessJob should also accept PendingToQueue, or the retry path needs a separate re-publish flow that resets status to Pending first.
🐛 Minimal fix — accept PendingToQueue in ProcessJob
```diff
-	if job.Status != Pending {
+	if job.Status != Pending && job.Status != PendingToQueue {
 		return fmt.Errorf("invalid job status: %s", job.Status)
 	}
```

🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
In `@contributorapp/service/job/service.go` around lines 175 - 185, The
publish-failure path sets the job status to PendingToQueue (via
s.jobRepo.UpdateStatus after s.publisher.Publish fails) but ProcessJob only
accepts jobs with status Pending, causing retries to fail; update ProcessJob to
treat PendingToQueue as a valid processable status (i.e., accept both Pending
and PendingToQueue in the status check) or normalize PendingToQueue to Pending
before validation so retry/scheduler flows can proceed; adjust any status-check
branches in ProcessJob and unit tests referencing Pending to include
PendingToQueue handling.
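One way to keep the accepted-status check readable as more states appear is a switch over the processable statuses. A hypothetical sketch; the real `Status` type and constants live in the job package:

```go
package main

import "fmt"

type Status string

const (
	Pending        Status = "pending"
	PendingToQueue Status = "pending_to_queue"
	Failed         Status = "failed"
)

// processable reports whether a job in the given status may be picked up.
func processable(s Status) bool {
	switch s {
	case Pending, PendingToQueue:
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(processable(PendingToQueue)) // true
	fmt.Println(processable(Failed))         // false
}
```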
```go
for _, m := range msgs {
	if err := p.consumer.CheckMaxRetry(ctx, m); err != nil {
		logger.L().Error("max retry done",
			"message id", m.ID,
			"error", err.Error())

		continue
	}
	msgCh <- m
}
```
Messages that fail CheckMaxRetry are never acknowledged — they will be re-consumed forever.
When CheckMaxRetry returns an error (line 84), the message is logged and skipped via continue, but it is never acknowledged or moved to a dead-letter queue. On the next Consume call, the same message will be fetched again, causing an infinite retry loop and log spam.
Either acknowledge the message or invoke HandleFailure to move it to a dead-letter / poison queue when max retries are exceeded.
🤖 Prompt for AI Agents
In `@contributorapp/worker/consumer.go` around lines 83 - 92, When
CheckMaxRetry(ctx, m) returns an error you must not simply continue — either
acknowledge the message or move it to a dead-letter/poison queue; update the
loop in consumer.go where p.consumer.CheckMaxRetry is called so that on error
you call the failure handling path (e.g., p.consumer.HandleFailure(ctx, m, err)
or the equivalent failure handler) or explicitly acknowledge/commit the message
(e.g., m.Ack()/p.consumer.Ack(ctx, m)) instead of skipping it, and keep the
existing log; this prevents the message from being re-consumed forever.
```diff
   container_name: auth-app-prod
   ports:
-    - "8081:8080"
+    - "18081:8080"
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Search for references to port 8081 in various file types
echo "=== Searching for '8081' references in YAML files ==="
rg '8081' --type yaml 2>/dev/null || echo "No matches in YAML"

echo -e "\n=== Searching for '8081' in config/env files ==="
rg '8081' -g '*.conf' -g '*.env*' -g '*.sh' -g '*.lua' -g '*.nginx' 2>/dev/null || echo "No matches in config/env files"

echo -e "\n=== Searching for '8081' in all files (excluding node_modules) ==="
rg '8081' -g '!node_modules/**' -g '!.git/**' 2>/dev/null | head -30 || echo "No other matches found"

# Check the actual docker-compose files to understand the context
cat -n deploy/auth/production/docker-compose.yml | grep -A 2 -B 2 "8080\|8081\|18081" || echo "File not found or no matches"
find . -name "docker-compose*.yml" -o -name "docker-compose*.yaml" 2>/dev/null | grep -v node_modules | head -20
```
Fix the docker-compose port mapping to match the application configuration port.
The port mapping "18081:8080" is inconsistent with deploy/auth/production/config.yml which specifies port: 8081. This mismatch will cause the auth service to fail. Change the docker-compose port mapping to "18081:8081" to align with the internal app port, consistent with other services in the deployment.
🤖 Prompt for AI Agents
In `@deploy/auth/production/docker-compose.yml` at line 8, The docker-compose port
mapping "18081:8080" is incorrect for the auth service; update the service's
port mapping from "18081:8080" to "18081:8081" so the external port 18081 maps
to the container/app port 8081 (matching the config.yml setting port: 8081).
```nginx
location / {
    auth_request /auth_verify;
    auth_request_set $auth_user_id $upstream_http_x_user_id;
    auth_request_set $auth_role $upstream_http_x_role;
    auth_request_set $auth_user_info $upstream_http_x_user_info;

    proxy_set_header X-User-ID $auth_user_id;
    proxy_set_header X-Role $auth_role;
    proxy_set_header X-User-Info $auth_user_info;
    proxy_pass http://$webhook_upstream;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```
GitHub webhook deliveries will be rejected — external callers don't carry JWT tokens.
GitHub sends webhook payloads with its own authentication mechanism (X-Hub-Signature-256), not with a JWT bearer token from your auth service. Requiring auth_request /auth_verify on the catch-all location / means every incoming GitHub webhook POST will receive a 401, effectively breaking the integration.
You likely need to either:
- Exempt the webhook delivery path (e.g., `/github-webhook/`) from the auth wall entirely, or
- Add a separate `location` block for the webhook delivery path that bypasses JWT auth (similar to the health-check).
Option 1: Add an unauthenticated location for the webhook path
```diff
+    location /github-webhook/ {
+        proxy_pass http://$webhook_upstream;
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+    }
+
     location / {
         auth_request /auth_verify;
```

If there are other internal/admin endpoints on the webhook service that do need JWT auth, they can remain under the authenticated `location /` block.
🤖 Prompt for AI Agents
In `@deploy/infrastructure/openresty/development/conf.d/webhook.conf` around lines
38 - 52, The current catch-all location block (location /) enforces auth_request
/auth_verify, which will reject GitHub webhook deliveries; add a separate
unauthenticated location for the webhook path (e.g., location /github-webhook/ {
... }) placed before the existing location / block that omits auth_request
/auth_verify and proxies to the same upstream ($webhook_upstream); include the
same necessary proxy_set_header lines (Host, X-Real-IP, X-Forwarded-For,
X-Forwarded-Proto) and ensure the X-Hub-Signature-256 header is
preserved/forwarded so GitHub can authenticate the payload, leaving other
internal endpoints under location / protected by auth_request /auth_verify.
```env
WEBHOOK_POSTGRES_DB__HOST=shared-postgres
WEBHOOK_POSTGRES_DB__PORT=5432
WEBHOOK_POSTGRES_DB__USER=webhook_user
WEBHOOK_POSTGRES_DB__PASSWORD=webhook_pass
WEBHOOK_POSTGRES_DB__DB_NAME=webhook_db
WEBHOOK_REDIS__HOST=shared-redis
WEBHOOK_REDIS__PORT=6379
WEBHOOK_NATS__URL=nats://shared-nats:4222
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify the critical security concerns in the review comment
echo "=== 1. Checking if deploy/webhook/production/.env.production exists ==="
if [ -f "deploy/webhook/production/.env.production" ]; then
  echo "✓ File exists"
  cat -n "deploy/webhook/production/.env.production"
else
  echo "✗ File does not exist"
fi

echo -e "\n=== 2. Checking .gitignore for .env patterns ==="
rg '\.env' .gitignore -n || echo "No .env patterns found in .gitignore"

echo -e "\n=== 3. Listing all .env files in repository ==="
fd -H -t f '\.env' --exclude node_modules --exclude .git --exclude .venv

echo -e "\n=== 4. Checking git status for .env.production file ==="
git ls-files | grep -E '\.env' || echo "No .env files tracked by git"

echo -e "\n=== 5. Searching for hardcoded password patterns ==="
rg -i 'password\s*=\s*\w+' --type-add 'env:*.env*' --type env -n 2>/dev/null || echo "Search completed"

echo -e "\n=== 6. Verifying .gitignore .env pattern specificity ==="
head -35 .gitignore | tail -10
# A bare ".env" pattern matches only ".env" exactly; it does NOT match
# ".env.production". Matching both requires ".env*".
```
CRITICAL: Hardcoded credentials in production environment file.
This production configuration file contains hardcoded credentials in plain text (e.g., webhook_pass on line 4) and is committed to version control. The .gitignore pattern .env does not exclude .env.production files, leaving these credentials exposed.
Recommended fixes:

- Remove this file from version control and add `deploy/**/.env.production` to `.gitignore`
- Use a template file (e.g., `.env.production.example`) with placeholder values instead
- Use secrets management for production deployments:
  - Kubernetes Secrets
  - HashiCorp Vault
  - Cloud provider secret managers (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault)
  - Environment variables injected at runtime by CI/CD

Note: This pattern appears to affect multiple services across the deploy directory.
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 4-4: [UnorderedKey] The WEBHOOK_POSTGRES_DB__PASSWORD key should go before the WEBHOOK_POSTGRES_DB__PORT key
(UnorderedKey)
[warning] 5-5: [UnorderedKey] The WEBHOOK_POSTGRES_DB__DB_NAME key should go before the WEBHOOK_POSTGRES_DB__HOST key
(UnorderedKey)
[warning] 8-8: [UnorderedKey] The WEBHOOK_NATS__URL key should go before the WEBHOOK_POSTGRES_DB__DB_NAME key
(UnorderedKey)
🤖 Prompt for AI Agents
In `@deploy/webhook/production/.env.production` around lines 1 - 8, This file
commits hardcoded production secrets (e.g., WEBHOOK_POSTGRES_DB__PASSWORD,
WEBHOOK_POSTGRES_DB__USER, WEBHOOK_POSTGRES_DB__DB_NAME, WEBHOOK_REDIS__HOST) —
remove the committed .env.production from version control, add a rule like
deploy/**/.env.production to .gitignore, replace the committed file with a
.env.production.example containing placeholder values for the keys shown
(WEBHOOK_POSTGRES_DB__HOST, WEBHOOK_POSTGRES_DB__PORT,
WEBHOOK_POSTGRES_DB__USER, WEBHOOK_POSTGRES_DB__PASSWORD,
WEBHOOK_POSTGRES_DB__DB_NAME, WEBHOOK_REDIS__HOST, WEBHOOK_REDIS__PORT,
WEBHOOK_NATS__URL), and update deployment/CI to inject real secrets at runtime
via Kubernetes Secrets, Vault, or cloud secret manager instead of storing them
in this file.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
authapp/delivery/http/handler.go (1)
111-119: ⚠️ Potential issue | 🟠 Major — Avoid exposing raw error messages in 500 responses.
Line 118 returns `err.Error()` directly to the client, which can leak internal details (stack traces, DB errors, etc.).
Proposed fix:

- return c.JSON(http.StatusInternalServerError, echo.Map{"error": err.Error()})
+ return c.JSON(http.StatusInternalServerError, echo.Map{"error": "internal server error"})
🤖 Fix all issues with AI agents
In `@authapp/app.go`:
- Around line 61-71: When grpc.NewClient returns successfully but
contributor.New fails, the rpcClient connection is leaked; update the error path
in the block that calls grpc.NewClient and contributor.New so that
rpcClient.Close() is invoked before logging/returning (or use a short-lived
defer right after successful grpc.NewClient until contributor.New succeeds), and
avoid shadowing the outer err so you call rpcClient.Close() when contributor.New
returns an error; modify the code around grpc.NewClient, contributor.New,
contributorClient and contributorProvider to ensure rpcClient.Close() is always
called on the contributor.New failure path.
In `@authapp/delivery/http/handler.go`:
- Around line 84-93: The handler currently calls h.tokenService.IssueTokens
whenever originalURI == "" && originalMethod == "" (direct /v1/me), effectively
doing silent refresh; change this to require and validate a refresh token for
re-issuance: instead of unconditionally calling IssueTokens in that branch,
check for a refresh token in the request (e.g., Authorization cookie/header or
form field), validate it via the token service (e.g.,
h.tokenService.ValidateRefreshToken or an IssueTokensFromRefresh method), and
only call IssueTokens (or the dedicated refresh method) when the refresh token
is present and valid; otherwise return 401/403. Keep use of claims.UserID,
claims.Role, claims.Access for authorization and still set
response["access_token"] and response["refresh_token"] only after successful
refresh validation.
In `@doc/auth-guideline.md`:
- Around line 38-49: Add a clear disclaimer to doc/auth-guideline.md near the
hardcoded bcrypt hash and plaintext password (the UPDATE contributors statement
that sets password = '$2a$10$gPAOVqKrU6Vtew1eqQU35...') stating this credential
(demo_pass_123) is only for local development/seeding, must never be used in
staging/production, and should not be committed or deployed; also add a short
note to ensure CI, deployment scripts, and .gitattributes/.gitignore do not
package or expose this seed data (mention contributor_user and github_username
'fdaei' as the example record).
- Around line 79-82: Replace the hardcoded JWT in the curl example with a
placeholder environment variable to avoid committing secrets; specifically
update the Authorization header in the auth-guideline.md curl example (the line
starting with -H "Authorization: Bearer ...") to use an env var like
$ADMIN_ACCESS_TOKEN (or similar) and update the surrounding example text to
instruct readers to export/set that variable locally before running the curl
command.
In `@pkg/role/access.go`:
- Around line 21-36: RequiredPermission currently returns empty Permission when
module or operation can't be resolved which is treated by HasPermission as "no
permission required" and thus allows requests; change RequiredPermission (and
callers) to fail-closed by returning a clear sentinel Permission value (e.g.,
Permission("deny") or Permission("unresolvable")) whenever
moduleFromHost/moduleFromPath or operationFromMethod yields empty, and update
HasPermission to explicitly deny any request whose required permission equals
that sentinel so unclassifiable requests are rejected by default.
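A fail-closed sketch of the sentinel approach; the `PermissionUnresolvable` name matches the later review comments, but the module/operation helpers here are simplified stand-ins for the real classifiers in `pkg/role/access.go`:

```go
package main

import "strings"

type Permission string

// PermissionUnresolvable is the fail-closed sentinel: any request whose
// required permission cannot be classified is denied by default.
const PermissionUnresolvable Permission = "unresolvable"

// operationFromMethod is a simplified stand-in for the real classifier.
func operationFromMethod(method string) string {
	switch strings.ToUpper(method) {
	case "GET", "HEAD":
		return "read"
	case "POST", "PUT", "PATCH", "DELETE":
		return "write"
	default:
		return ""
	}
}

// RequiredPermission returns the sentinel instead of an empty Permission
// whenever module or operation cannot be resolved.
func RequiredPermission(module, method string) Permission {
	op := operationFromMethod(method)
	if module == "" || op == "" {
		return PermissionUnresolvable
	}
	return Permission(strings.ToLower(module) + ":" + op)
}

// HasPermission explicitly denies the sentinel, so unclassifiable
// requests are rejected rather than silently allowed.
func HasPermission(granted []Permission, required Permission) bool {
	if required == PermissionUnresolvable {
		return false
	}
	for _, p := range granted {
		if p == required {
			return true
		}
	}
	return false
}
```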
- Around line 46-48: The current check using strings.Contains on the variable
path is too permissive and can allow auth bypass; change the logic in
pkg/role/access.go that tests path so it only treats requests as public when the
final path segment exactly equals "health-check" or "health_check" (or when the
full path exactly/prefix-matches the known public endpoint), e.g. trim
leading/trailing slashes, split on "/", inspect the last segment and compare for
equality instead of using strings.Contains; update the conditional that returns
true accordingly.
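The exact-segment check described above can be sketched as follows (the `isHealthCheck` name is illustrative):

```go
package main

import "strings"

// isHealthCheck implements the suggested fix: only the final path
// segment may equal "health-check" or "health_check". Unlike
// strings.Contains, a substring elsewhere in the path never matches.
func isHealthCheck(path string) bool {
	trimmed := strings.Trim(path, "/") // tolerate leading/trailing slashes
	parts := strings.Split(trimmed, "/")
	last := parts[len(parts)-1]
	return last == "health-check" || last == "health_check"
}
```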
🧹 Nitpick comments (5)

pkg/role/access.go (2)

79-83: Dead code: `len(parts) == 0` is unreachable.
`strings.Split` on a non-empty string always returns a slice with at least one element, so the guard on line 80 can never be true. Simplify:

  parts := strings.Split(host, ".")
- if len(parts) == 0 {
-     return ""
- }
  return strings.ToLower(parts[0])

96-101: Same dead-code pattern: `len(parts) == 0` after `SplitN` on a non-empty string.
Same as `moduleFromHost` — `strings.SplitN` with a non-empty input always returns at least one element. Simplify:

  parts := strings.SplitN(path, "/", 2)
- if len(parts) == 0 {
-     return ""
- }
  return strings.ToLower(parts[0])

authapp/app.go (1)

81-91: `Contributor` field can be `nil` in degraded mode — consider documenting this.
Since `contributorClient` may remain `nil` when the contributor service is unavailable, `Application.Contributor` (an exported field) will be `nil`. The shutdown path at line 173 handles this correctly, but other consumers of `Application` should be aware. A brief doc comment on the field would help.

deploy/infrastructure/openresty/development/conf.d/auth.conf (1)

64-79: Contributor service path prefixes are quite generic.
Patterns like `create`, `update`, and `password` under `/v1/` are very generic. If another service later introduces `/v1/create-*` or `/v1/update-*` paths, they'll silently route to the contributor service due to regex match order. Consider more specific prefixes (e.g., `/v1/contributor/`) if the API design allows it.

authapp/delivery/http/handler.go (1)

133-139: `extractBearerToken` is correct but could be slightly more robust.
The implementation is fine for standard usage. Minor note: `strings.CutPrefix` (Go 1.20+) would be slightly more idiomatic and avoids the manual length check. Optional refactor:

  func extractBearerToken(r *http.Request) string {
      authz := r.Header.Get("Authorization")
-     if len(authz) > 7 && authz[:7] == "Bearer " {
-         return authz[7:]
+     if token, ok := strings.CutPrefix(authz, "Bearer "); ok {
+         return token
      }
      return ""
  }
doc/auth-guideline.md
Outdated
If you do not know the current password, set a known one:

```bash
docker exec -it rankr-shared-postgres psql -U contributor_user -d contributor_db -c "
UPDATE contributors
SET password = '\$2a\$10\$gPAOVqKrU6Vtew1eqQU35.XMAhxtIEqo0hiyBOnVWzgh27WwOe0Zq',
    updated_at = NOW()
WHERE github_username = 'fdaei';
"
```

`$2a$10$gPAOVqKrU6Vtew1eqQU35.XMAhxtIEqo0hiyBOnVWzgh27WwOe0Zq` is bcrypt hash for `demo_pass_123`.
Hardcoded password and bcrypt hash in docs — acceptable for dev seed, but add a disclaimer.
The plaintext password demo_pass_123 and its bcrypt hash are documented for local development seeding. This is reasonable for a dev guide, but consider adding a note that this must never be used in staging/production, and ensure CI/.gitattributes don't accidentally deploy this seed.
🤖 Prompt for AI Agents
In `@doc/auth-guideline.md` around lines 38 - 49, Add a clear disclaimer to
doc/auth-guideline.md near the hardcoded bcrypt hash and plaintext password (the
UPDATE contributors statement that sets password =
'$2a$10$gPAOVqKrU6Vtew1eqQU35...') stating this credential (demo_pass_123) is
only for local development/seeding, must never be used in staging/production,
and should not be committed or deployed; also add a short note to ensure CI,
deployment scripts, and .gitattributes/.gitignore do not package or expose this
seed data (mention contributor_user and github_username 'fdaei' as the example
record).
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
authapp/delivery/http/handler.go (1)

124-132: ⚠️ Potential issue | 🟡 Minor
`handleError` leaks internal error details to the client. Line 131 returns `err.Error()` for untyped errors. This can expose internal implementation details (stack traces, DB errors, etc.) to external callers. Proposed fix:

- return c.JSON(http.StatusInternalServerError, echo.Map{"error": err.Error()})
+ return c.JSON(http.StatusInternalServerError, echo.Map{"error": "internal server error"})

authapp/service/tokenservice/auth_service.go (1)
10-14: ⚠️ Potential issue | 🟠 Major — Same signing secret and claim structure for access and refresh tokens — token confusion risk.
Both `accessManager` and `refreshManager` are initialized with the identical secret and produce tokens with the same `UserClaims` structure (no distinguishing claim such as `"type"` or `"aud"`). This means:
- An access token will pass `VerifyRefreshToken`, and a refresh token will pass `VerifyToken`.
- In the `me` handler, an attacker can submit the same access token as both `Authorization: Bearer <tok>` and `X-Refresh-Token: <tok>` — both verifications succeed, the user-ID/role/access equality check passes, and new tokens are issued. This turns any leaked access token into a self-refreshing credential.

Add a distinguishing claim (e.g., `"type": "access"` vs `"type": "refresh"`) to `UserClaims` and reject tokens whose type doesn't match the manager that verifies them.
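A sketch of the type-claim check; the `TokenType` field and `checkTokenType` helper are illustrative names, not the project's actual `UserClaims` definition:

```go
package main

import "fmt"

// UserClaims sketches the suggested shape: TokenType is the new
// distinguishing claim (e.g. serialized as "type" in the JWT payload).
type UserClaims struct {
	UserID    string
	Role      string
	TokenType string // "access" or "refresh"
}

// checkTokenType is what each manager would run after signature
// verification: reject tokens minted for the other purpose.
func checkTokenType(c UserClaims, want string) error {
	if c.TokenType != want {
		return fmt.Errorf("token type %q, want %q", c.TokenType, want)
	}
	return nil
}
```

With this in place, the access token submitted as `X-Refresh-Token` in the attack above fails the refresh manager's type check even though the signature is valid.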
🤖 Fix all issues with AI agents
In `@authapp/delivery/http/handler.go`:
- Around line 154-170: The extractRefreshToken function currently accepts tokens
from headers, a cookie, and r.FormValue("refresh_token") which allows query
string/form extraction; remove the form/query extraction so extractRefreshToken
only returns tokens from the X-Refresh-Token or Refresh-Token headers and the
refresh_token cookie, i.e., delete the r.FormValue branch in extractRefreshToken
and ensure callers of extractRefreshToken still handle empty string cases
appropriately.
In `@pkg/role/access.go`:
- Around line 53-58: The current public endpoint check in pkg/role/access.go
uses publicEndpoints ({"v1/login","v1/me"}) plus
strings.HasPrefix(normalizedPath, endpoint+"/") which allows subpaths to bypass
auth but has no tests or docs; either (A) add unit tests asserting the function
that contains publicEndpoints returns true for subpaths like "/v1/me/settings"
and "/v1/login/callback" (create tests referencing the function or variable in
pkg/role/access_test.go to document intent), or (B) if only exact matches should
be public, remove the strings.HasPrefix branch from the function in access.go
and add tests that ensure "/v1/me" and "/v1/login" are allowed but
"/v1/me/settings" and "/v1/login/callback" are denied. Ensure tests reference
the same function/variable used in the diff (publicEndpoints / the
endpoint-checking function) so behavior is explicit.
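Option (B) from the prompt can be sketched as an exact-match check (the `isPublic` name is illustrative; `publicEndpoints` mirrors the variable named in the review):

```go
package main

import "strings"

var publicEndpoints = []string{"v1/login", "v1/me"}

// isPublic sketches option (B): exact matches only, with no HasPrefix
// branch, so subpaths like /v1/me/settings stay behind auth.
func isPublic(path string) bool {
	normalized := strings.Trim(path, "/")
	for _, e := range publicEndpoints {
		if normalized == e {
			return true
		}
	}
	return false
}
```

Whichever option is chosen, tests like these make the intended behavior explicit.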
🧹 Nitpick comments (3)

authapp/delivery/http/handler_test.go (2)

9-64: Good coverage of extraction sources; consider adding a "no token present" case.
The table-driven tests cover all four extraction paths. A negative case where no source provides a token (expecting `""`) would round out the coverage. Suggested additional test case:

+ {
+     name:    "no token anywhere",
+     setup:   func(r *http.Request) {},
+     wantTok: "",
+ },

66-92: `sameAccessList` tests cover the key scenarios — consider adding a duplicate-element case.
A case like `a=["x","x"], b=["x","y"]` would explicitly verify the counting logic rejects mismatched duplicates. Suggested additional test case:

+ {name: "duplicates differ", a: []string{"a:read", "a:read"}, b: []string{"a:read", "b:read"}, want: false},

pkg/role/access.go (1)

94-100: Dead code: `len(parts) == 0` is unreachable after `strings.Split`/`strings.SplitN`.
`strings.Split` and `strings.SplitN(..., 2)` always return at least one element, so the `len(parts) == 0` guards on lines 96 and 114 are dead code. Cleanup:

  parts := strings.Split(host, ".")
- if len(parts) == 0 {
-     return ""
- }
  return strings.ToLower(parts[0])

  parts := strings.SplitN(path, "/", 2)
- if len(parts) == 0 {
-     return ""
- }
  return strings.ToLower(parts[0])

Also applies to: 112-117
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@doc/auth-guideline.md`:
- Around line 66-67: The doc incorrectly states missing Authorization header
returns 400; update doc/auth-guideline.md to state that a missing Authorization
header returns http.StatusUnauthorized (401) to match the middleware
implementation in authapp/delivery/http/middleware.go which currently returns
http.StatusUnauthorized when the token is missing; ensure the bullet list
replaces "Missing `Authorization` header returns `400`." with "Missing
`Authorization` header returns `401`." (or equivalent wording referencing
401/Unauthorized).
🧹 Nitpick comments (3)
pkg/role/access_test.go (1)

29-29: Remove unnecessary loop variable capture on line 29.
The `tc := tc` pattern is not needed for Go 1.25.3 — loop variables have been scoped per-iteration since Go 1.22. This can be safely removed as cleanup.

authapp/delivery/http/handler.go (2)

57-62: Consider using `&&` instead of `||` for the gateway permission check.
With `||`, a request carrying only one of the two headers (e.g., `X-Original-Method` without `X-Original-URI`) will enter the permission branch with incomplete context, causing `RequiredPermission` to return `PermissionUnresolvable` and always yielding a 403. Using `&&` would ensure you only enforce permission when the gateway has supplied both routing hints, and silently skip the check when either is absent.

- if originalURI != "" || originalMethod != "" {
+ if originalURI != "" && originalMethod != "" {

If the gateway is guaranteed to always send both headers together, this is a no-op change, but `&&` communicates the intent more clearly and is safer against partial-header edge cases.

78-83: Silent failure on `EncodeUserInfo` may hinder debugging.
If `claims.UserID` is not a parseable uint64, the `X-User-Info` header is silently omitted. Consider logging the error at debug/warn level so operators can trace missing identity headers.
- Missing `Authorization` header returns `400`.
- Invalid token or invalid role returns `401`.
Documentation says 400 for missing Authorization, but code returns 401.
The middleware in authapp/delivery/http/middleware.go (line 18) returns http.StatusUnauthorized (401) when the token is missing, not 400. Update the doc to match the implementation.
Notes:
-- Missing `Authorization` header returns `400`.
+- Missing `Authorization` header returns `401`.
- Invalid token or invalid role returns `401`.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Notes:
- Missing `Authorization` header returns `401`.
- Invalid token or invalid role returns `401`.
🤖 Prompt for AI Agents
In `@doc/auth-guideline.md` around lines 66 - 67, The doc incorrectly states
missing Authorization header returns 400; update doc/auth-guideline.md to state
that a missing Authorization header returns http.StatusUnauthorized (401) to
match the middleware implementation in authapp/delivery/http/middleware.go which
currently returns http.StatusUnauthorized when the token is missing; ensure the
bullet list replaces "Missing `Authorization` header returns `400`." with
"Missing `Authorization` header returns `401`." (or equivalent wording
referencing 401/Unauthorized).
Summary by CodeRabbit
New Features
Bug Fixes / Improvements
Documentation