
Feature/integrate auth and contributor import #214

Merged
gocastsian merged 102 commits into main from feature/integrate-auth-and-contributor-import
Feb 18, 2026

Conversation

@arash-mosavi
Collaborator

@arash-mosavi arash-mosavi commented Feb 13, 2026

Summary by CodeRabbit

  • New Features

    • Bulk contributor import (CSV/XLSX) with job creation, progress/status endpoints, fail record reporting, and retries
    • Background worker pool and broker for queued job processing
    • File upload validation (size/type) with idempotent uploads
  • Enhancements

    • Per-request role & permission enforcement; tokens include role and explicit access scopes
    • Contributors now carry assigned roles used during auth and token issuance
  • Chores

    • Database and infra updates (migrations, configs, Docker) to support jobs, roles, and routing

mzfarshad and others added 30 commits December 1, 2025 22:03
- Add Watermill publisher to historical fetcher
- Publish events to rankr_raw_events topic after DB save
- Create NATS publisher in fetch-historical command
- Add project gRPC adapter
- Add RedisLeaderboardRepository for public leaderboard storage
- Implement SetPublicLeaderboard and GetPublicLeaderboard
- Add scheduler cron job for public leaderboard update
- Add run-scheduler command for manual testing
- Add gRPC handler for GetPublicLeaderboard
- Add migration to change resource_id to TEXT
Update SaveHistoricalEventsBulk to return list of inserted events
alongside BulkInsertResult. The fetcher now only publishes events
that were actually inserted, not duplicates skipped by ON CONFLICT.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
contributorapp/repository/contributor.go (1)

228-273: ⚠️ Potential issue | 🔴 Critical

Bug: scanning nullable github_id directly into int64 will fail for NULL values.

GetContributorByID and GetContributorByGitHubUsername correctly use sql.NullInt64 for github_id (which is nullable per the schema and seed migration). Here, github_id is scanned directly into c.GitHubID (an int64), which will cause a scan error for any contributor with a NULL github_id.

🐛 Proposed fix
 	for rows.Next() {
 		var c contributor.Contributor
+		var githubID sql.NullInt64
 		err := rows.Scan(
 			&c.ID,
-			&c.GitHubID,
+			&githubID,
 			&c.GitHubUsername,
 			&c.Email,
 			&c.IsVerified,
 			&c.TwoFactor,
 			&c.PrivacyMode,
 			&c.DisplayName,
 			&c.ProfileImage,
 			&c.Bio,
 			&c.CreatedAt,
 		)
 		if err != nil {
 			return nil, fmt.Errorf("failed to scan contributor: %w", err)
 		}
+		if githubID.Valid {
+			c.GitHubID = githubID.Int64
+		}
 		contributors = append(contributors, &c)
 	}
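As a standalone illustration of why the fix works (a minimal sketch, not the project's code): `sql.NullInt64` accepts a `nil` (SQL NULL) source value where a plain `int64` destination would produce a scan error.

```go
package main

import (
	"database/sql"
	"fmt"
)

// nullableToInt64 mirrors the pattern in the fix: scan into sql.NullInt64,
// then copy the value out only when the column was non-NULL.
func nullableToInt64(src interface{}) (int64, error) {
	var n sql.NullInt64
	// Scan(nil) represents a NULL column; a direct *int64 target errors here.
	if err := n.Scan(src); err != nil {
		return 0, err
	}
	if n.Valid {
		return n.Int64, nil
	}
	return 0, nil // NULL maps to the zero value, matching the proposed fix
}

func main() {
	v, err := nullableToInt64(nil)
	fmt.Println(v, err) // 0 <nil> — NULL handled without a scan error
	v, _ = nullableToInt64(int64(42))
	fmt.Println(v) // 42
}
```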
contributorapp/delivery/http/handler.go (1)

31-48: ⚠️ Potential issue | 🟠 Major

Unsafe type assertion on error at line 45 will panic if err is not errmsg.ErrorResponse.

This is pre-existing code, but worth noting: getProfile (line 45) performs a bare err.(errmsg.ErrorResponse) without the comma-ok guard. If err is neither validator.Error nor errmsg.ErrorResponse, this panics. The newer handlers (createContributor, updateProfile, uploadFile) correctly use the two-value form. Consider aligning this handler with the safer pattern.

Proposed fix
 	if err != nil {
 		if vErr, ok := err.(validator.Error); ok {
 			return c.JSON(vErr.StatusCode(), vErr)
 		}
-		return c.JSON(statuscode.MapToHTTPStatusCode(err.(errmsg.ErrorResponse)), err)
+		if eResp, ok := err.(errmsg.ErrorResponse); ok {
+			return c.JSON(statuscode.MapToHTTPStatusCode(eResp), eResp)
+		}
+		return c.JSON(http.StatusInternalServerError, map[string]string{"error": err.Error()})
 	}
🤖 Fix all issues with AI agents
In `@contributorapp/delivery/http/handler.go`:
- Around line 171-190: The handlers getJobStatus and getFailRecords convert the
job_id param with strconv.Atoi then cast to uint, which allows negative values
(e.g. "-1") to wrap to a large uint; after parsing (in both getJobStatus and
getFailRecords) validate that jobID > 0 and return a 400 JSON error like
"invalid job id" if not, before calling h.JobService.JobStatus /
h.JobService.FailRecords; ensure you check the parsed int rather than the
unsigned value to prevent silent wrapping.
- Line 143: The current idempotencyKey built as fmt.Sprintf("%s-%d-%d",
fileHeader.Filename, fileHeader.Size, claim.ID) can collide for different files
with identical name+size; compute a content-based hash (e.g. SHA-256 of the full
upload or of the first N bytes) and include it in the idempotency key instead of
or in addition to fileHeader.Size (e.g. "%s-%s-%d" with filename, hexHash,
claim.ID). When computing the hash, avoid losing the upload stream—hash while
streaming (io.TeeReader or hash during file save) and use that resulting hex
string in the idempotencyKey variable so distinct contents produce distinct
keys.
- Around line 114-120: The handler currently treats a failure from
c.FormFile("file") as a server error (HTTP 500); change it to return a client
error (HTTP 400) when FormFile fails—update the error response in the block
handling fileHeader, err (the c.FormFile call) to use http.StatusBadRequest and
a clearer message (e.g., "missing or invalid file upload") while still including
err.Error() for debugging; ensure this logic is in the same handler function
where fileHeader is used so client faults are correctly reported.
- Around line 108-112: The current check in handler.go conflates unauthenticated
and unauthorized cases; update the conditional around claim, claim.Role.String()
and types.Admin.String() so that if the claim is missing (!ok || claim == nil)
you return HTTP 401 via c.JSON with an "unauthorized" error, and if the claim
exists but claim.Role.String() != types.Admin.String() you return HTTP 403 via
c.JSON with an "forbidden" (or "insufficient permissions") error; keep using the
same c.JSON pattern and status constants to implement the two distinct
responses.

In `@contributorapp/repository/migrations/000005_create_fail_records_table.sql`:
- Around line 11-12: The fail_records table's updated_at column is missing a
DEFAULT value while jobs.updated_at uses DEFAULT NOW(); to make them consistent,
update the migration so fail_records.updated_at is defined as "updated_at
TIMESTAMP WITH TIME ZONE DEFAULT NOW()" (or explicitly document if it should
remain NULL-only and only set on updates); locate the fail_records table
definition (symbol: fail_records) and change the updated_at column declaration
accordingly or add a comment documenting the intentional difference.

In `@deploy/contributor/production/.env.production`:
- Around line 1-4: Remove the extra blank lines in the placeholder
.env.production so dotenv-linter no longer flags it: trim the file down to a
single trailing newline (or optionally add commented example variables as
templates) and commit the cleaned file; ensure the file ends with exactly one
newline and contains no consecutive blank lines.

In `@deploy/contributor/production/config.yml`:
- Around line 12-19: The MIME allowlists diverge between middleware.file_type
and validate.http_file_type causing inconsistent accept/reject behavior; update
them to match by either adding "application/octet-stream" to
middleware.file_type or replacing "text/plain; charset=utf-8" with a canonical
"text/plain" entry (or vice versa) so both lists are identical, or refactor to a
single shared constant used by both middleware.file_type and
validate.http_file_type to ensure one source of truth.
- Around line 130-134: The config currently allows application/zip but the job
processor in contributorapp/service/job/service.go (the logic around handling
file extensions at the branch handling CSV/XLSX, e.g., the code that checks for
".csv" and ".xlsx" around lines 222-229) only supports CSV and XLSX; update
either the config or the processor: either remove "application/zip" from the
MIME allowlists used by middleware.file_type and validate.http_file_type to
prevent ZIP uploads, or implement ZIP extraction in the job pipeline (add a
handler in the service.go upload/ingest flow that detects "application/zip",
unpacks the archive, finds contained .csv/.xlsx files and forwards them to the
existing CSV/XLSX processing path). Ensure you modify middleware.file_type and
validate.http_file_type to match the chosen approach and update
contributorapp/service/job/service.go to route extracted files into the existing
CSV/XLSX handlers.
🧹 Nitpick comments (4)
contributorapp/repository/contributor.go (1)

228-228: provider parameter is accepted but never used.

The query always filters on github_username regardless of the provider value. If multi-VCS support is planned, the query should branch on the provider; otherwise remove the parameter to avoid a misleading API.

contributorapp/repository/migrations/000004_create_jobs_table.sql (1)

6-6: Redundant unique index on idempotency_key.

The UNIQUE constraint on line 6 already creates a unique index in PostgreSQL. The explicit CREATE UNIQUE INDEX on line 18 is unnecessary and just adds clutter.

♻️ Remove the redundant index
-CREATE UNIQUE INDEX IF NOT EXISTS idx_jobs_idempotency_key ON jobs(idempotency_key);

Also applies to: 18-18

contributorapp/repository/migrations/000005_create_fail_records_table.sql (1)

4-5: Consider adding a unique constraint on (job_id, record_number).

Without it, duplicate failure records for the same record within a job can be inserted. If the application retries and re-inserts fail records, this could lead to duplicates rather than updates.

♻️ Add composite unique constraint
 CREATE TABLE IF NOT EXISTS fail_records(
     id BIGSERIAL PRIMARY KEY,
     job_id BIGINT NOT NULL REFERENCES jobs(id) ON DELETE CASCADE,
     record_number INT NOT NULL,
+    UNIQUE (job_id, record_number),
contributorapp/delivery/http/handler.go (1)

152-156: Typo: vEer → vErr.

Minor naming inconsistency with every other handler in this file.

Proposed fix
-		if vEer, ok := err.(validator.Error); ok {
-			return c.JSON(vEer.StatusCode(), map[string]interface{}{
-				"message": vEer.Err,
-				"errors":  vEer.Fields,
+		if vErr, ok := err.(validator.Error); ok {
+			return c.JSON(vErr.StatusCode(), map[string]interface{}{
+				"message": vErr.Err,
+				"errors":  vErr.Fields,
 			})

Also note that other handlers return the validator.Error struct directly (e.g., line 43, 63, 90), whereas here a custom map is built. This creates an inconsistent error response shape for clients. Consider returning vErr directly for consistency, unless there's a specific reason to reshape it.

Comment on lines +108 to +112
if !ok || claim == nil || claim.Role.String() != types.Admin.String() {
return c.JSON(http.StatusUnauthorized, map[string]string{
"error": "unauthorized",
})
}

⚠️ Potential issue | 🟡 Minor

Authorization failure should return 403 Forbidden, not 401 Unauthorized.

The user is authenticated (claim exists), but lacks the admin role. HTTP 401 signals missing/invalid credentials; HTTP 403 signals insufficient permissions. Also, the nil claim / missing userInfo case (truly unauthenticated) could be separated from the non-admin case for clearer semantics.

Proposed fix
 	claimVal := c.Get("userInfo")
 	claim, ok := claimVal.(*types.UserClaim)
-	if !ok || claim == nil || claim.Role.String() != types.Admin.String() {
-		return c.JSON(http.StatusUnauthorized, map[string]string{
-			"error": "unauthorized",
+	if !ok || claim == nil {
+		return c.JSON(http.StatusUnauthorized, map[string]string{
+			"error": "unauthenticated",
+		})
+	}
+	if claim.Role.String() != types.Admin.String() {
+		return c.JSON(http.StatusForbidden, map[string]string{
+			"error": "forbidden: admin role required",
 		})
 	}
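The two-step decision can be sketched framework-free (a minimal illustration, assuming a simplified claim type rather than the project's `types.UserClaim`):

```go
package main

import "fmt"

// claim is a hypothetical stand-in for the decoded token claim.
type claim struct{ role string }

// authStatus separates the two failure modes: 401 when there is no valid
// identity at all, 403 when the identity lacks the required role.
func authStatus(c *claim, required string) int {
	if c == nil {
		return 401 // unauthenticated: missing or invalid credentials
	}
	if c.role != required {
		return 403 // authenticated, but insufficient permissions
	}
	return 200
}

func main() {
	fmt.Println(authStatus(nil, "admin"))                   // 401
	fmt.Println(authStatus(&claim{role: "user"}, "admin"))  // 403
	fmt.Println(authStatus(&claim{role: "admin"}, "admin")) // 200
}
```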

Comment on lines +114 to +120
fileHeader, err := c.FormFile("file")
if err != nil {
return c.JSON(http.StatusInternalServerError, map[string]string{
"message": "fail to get file",
"error": err.Error(),
})
}

⚠️ Potential issue | 🟡 Minor

FormFile failure is most likely a client error — return 400, not 500.

When the client doesn't attach a file or uses the wrong field name, c.FormFile returns an error. Responding with 500 misattributes the fault.

Proposed fix
 	fileHeader, err := c.FormFile("file")
 	if err != nil {
-		return c.JSON(http.StatusInternalServerError, map[string]string{
+		return c.JSON(http.StatusBadRequest, map[string]string{
 			"message": "fail to get file",
 			"error":   err.Error(),
 		})

Comment on line +143
})
}

idempotencyKey := fmt.Sprintf("%s-%d-%d", fileHeader.Filename, fileHeader.Size, claim.ID)

⚠️ Potential issue | 🟠 Major

Weak idempotency key — different file contents with identical name + size collide.

fmt.Sprintf("%s-%d-%d", fileHeader.Filename, fileHeader.Size, claim.ID) means two distinct files that happen to share the same name and byte length (uploaded by the same user) produce the same key, silently deduplicating different imports. Consider incorporating a content hash (e.g. SHA-256 of the first N bytes or the full stream) or a client-supplied nonce.


Comment on lines +171 to +190
func (h Handler) getJobStatus(c echo.Context) error {
jobIDStr := c.Param("job_id")
jobID, err := strconv.Atoi(jobIDStr)
if err != nil {
return c.JSON(http.StatusBadRequest, map[string]string{
"error": "invalid job id",
})
}

res, err := h.JobService.JobStatus(c.Request().Context(), uint(jobID))
if err != nil {
if eRes, ok := err.(errmsg.ErrorResponse); ok {
return c.JSON(statuscode.MapToHTTPStatusCode(eRes), eRes)
}

return c.JSON(http.StatusInternalServerError, map[string]string{"error": err.Error()})
}

return c.JSON(http.StatusOK, res)
}

⚠️ Potential issue | 🔴 Critical

Negative jobID wraps to a large uint — validate > 0 after Atoi.

strconv.Atoi happily parses "-1", and the subsequent uint(jobID) silently wraps it to math.MaxUint64 (on 64-bit), which will query a nonexistent row or, worse, hit an unintended record. Both getJobStatus and getFailRecords share this bug.

Proposed fix (apply to both handlers)
 	jobID, err := strconv.Atoi(jobIDStr)
-	if err != nil {
+	if err != nil || jobID <= 0 {
 		return c.JSON(http.StatusBadRequest, map[string]string{
 			"error": "invalid job id",
 		})
 	}

Also applies to: 230-249
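The wrap and the guarded parse can be shown in isolation (a sketch of the fix, with an assumed helper name):

```go
package main

import (
	"fmt"
	"strconv"
)

// parseJobID rejects non-positive IDs before the int → uint conversion,
// which would otherwise wrap negative input to a huge unsigned value.
func parseJobID(s string) (uint, error) {
	n, err := strconv.Atoi(s)
	if err != nil || n <= 0 { // check the signed int, not the converted uint
		return 0, fmt.Errorf("invalid job id: %q", s)
	}
	return uint(n), nil
}

func main() {
	n := -1
	fmt.Println(uint(n)) // on 64-bit: 18446744073709551615 — the silent wrap
	if _, err := parseJobID("-1"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```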


Comment on lines +11 to +12
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE

⚠️ Potential issue | 🟡 Minor

Inconsistent updated_at default with jobs table.

jobs.updated_at defaults to NOW() but fail_records.updated_at has no default and will be NULL on insert. If this is intentional (set only on update), that's fine — just flagging the inconsistency for awareness.


Comment on lines +1 to +4
# Contributor service production environment




⚠️ Potential issue | 🟡 Minor

Remove extra blank lines flagged by dotenv-linter.

Static analysis (dotenv-linter) flags lines 3-4 as extra blank lines. Since this is currently just a placeholder file, consider either trimming it to a single trailing newline or populating it with the expected production environment variables (even as commented-out templates) so it's immediately useful.

Proposed fix
 # Contributor service production environment
-
-
-


Comment on lines +12 to +19
middleware:
file_size: 10485760
file_type:
- "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet"
- "application/zip"
- "text/csv"
- "text/plain"
- "text/plain; charset=utf-8"

⚠️ Potential issue | 🟠 Major

MIME type allowlists diverge between middleware and validate — likely unintentional.

middleware.file_type includes "text/plain; charset=utf-8" but not "application/octet-stream", while validate.http_file_type (lines 122-128) includes "application/octet-stream" but not "text/plain; charset=utf-8". This mismatch means:

  • A file detected as text/plain; charset=utf-8 passes middleware but fails validation.
  • A file detected as application/octet-stream fails middleware but would be accepted by validation.

These two lists should be kept in sync (or consolidated into a single source of truth) to avoid confusing upload rejections.
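One way to keep the lists comparable is to canonicalize entries before checking them, since values like "text/plain; charset=utf-8" carry parameters. A small sketch using the standard library's `mime.ParseMediaType`:

```go
package main

import (
	"fmt"
	"mime"
)

// normalize strips parameters like "; charset=utf-8" so both allowlists can
// be compared (or maintained) on canonical media types.
func normalize(v string) string {
	mediatype, _, err := mime.ParseMediaType(v)
	if err != nil {
		return v // leave malformed entries as-is for the caller to flag
	}
	return mediatype
}

func main() {
	fmt.Println(normalize("text/plain; charset=utf-8")) // text/plain
	fmt.Println(normalize("text/csv"))                  // text/csv
}
```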



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
contributorapp/repository/contributor.go (2)

248-261: ⚠️ Potential issue | 🔴 Critical

github_id scanned without NullInt64 — will fail for rows with NULL github_id.

Other methods in this file (GetContributorByID, GetContributorByGitHubUsername) correctly use sql.NullInt64 to handle nullable github_id. Here rows.Scan reads directly into &c.GitHubID (int64), which will return a scan error when the column is NULL.

Proposed fix
 	for rows.Next() {
 		var c contributor.Contributor
+		var githubID sql.NullInt64
 		err := rows.Scan(
 			&c.ID,
-			&c.GitHubID,
+			&githubID,
 			&c.GitHubUsername,
 			&c.Email,
 			&c.IsVerified,
 			&c.TwoFactor,
 			&c.PrivacyMode,
 			&c.DisplayName,
 			&c.ProfileImage,
 			&c.Bio,
 			&c.CreatedAt,
 		)
 		if err != nil {
 			return nil, fmt.Errorf("failed to scan contributor: %w", err)
 		}
+		if githubID.Valid {
+			c.GitHubID = githubID.Int64
+		}
 		contributors = append(contributors, &c)
 	}

228-238: ⚠️ Potential issue | 🔴 Critical

Fix multi-provider support or remove unused provider parameter — currently hardcoded to GitHub only.

The provider parameter is never used; the query always filters on github_username. This causes silent failures when GITLAB or BITBUCKET providers are passed—the query returns empty results without error. The Contributor entity also lacks schema fields for non-GitHub providers (only github_id and github_username exist). Either extend the schema and use the provider parameter to query the correct column, or remove the parameter from the signature until multi-provider support is fully implemented.
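If the parameter is kept, the lookup should fail loudly for unsupported providers rather than silently matching nothing. A sketch (column and provider names are assumptions for illustration):

```go
package main

import "fmt"

// usernameColumn maps a VCS provider to the column to filter on, returning
// an explicit error for providers the schema does not yet support.
func usernameColumn(provider string) (string, error) {
	switch provider {
	case "GITHUB":
		return "github_username", nil
	default:
		return "", fmt.Errorf("unsupported provider: %s", provider)
	}
}

func main() {
	col, err := usernameColumn("GITHUB")
	fmt.Println(col, err)
	_, err = usernameColumn("GITLAB")
	fmt.Println(err) // explicit error instead of an empty result set
}
```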

🤖 Fix all issues with AI agents
In `@contributorapp/repository/migrations/000003_add_role_to_contributors.sql`:
- Around line 1-31: Two migration files share the same numeric prefix (both
000003 and both 000004) which breaks ordering; rename the colliding files so
every migration has a unique sequential prefix (e.g., change the second 000003
to 000005 and the second 000004 to 000006 or adjust to whatever next sequence
is), update any references/dependencies in your migration list if present, and
ensure the altered files (including the one creating role_enum and ALTER TABLE
contributors statements) keep their contents unchanged aside from the filename
prefix so the DB tool will run them in the correct order.

In `@contributorapp/repository/migrations/000004_seed_admin_contributor.sql`:
- Around line 1-25: The migration file
contributorapp/repository/migrations/000004_seed_admin_contributor.sql collides
with 000004_create_fail_records_table.sql because they share the same numeric
prefix `000004`; rename this file to a unique next sequence (e.g., change the
prefix to `000005` or another unused number) so the migration tool sees a
distinct migration order, keep the file contents (including the "-- +migrate Up"
and "-- +migrate Down" blocks) unchanged, and ensure any references or
deployment scripts that rely on the migration filename are updated to the new
name.
🧹 Nitpick comments (4)
contributorapp/repository/migrations/000003_create_jobs_table.sql (2)

6-6: Redundant unique index on idempotency_key.

Line 6 already declares idempotency_key VARCHAR(255) NOT NULL UNIQUE, which makes PostgreSQL automatically create a unique index to enforce the constraint. The explicit CREATE UNIQUE INDEX on line 18 adds a second, redundant index that doubles write overhead and storage for no benefit.

Remove line 18, or drop the UNIQUE keyword from line 6 and rely solely on the named index.

Option A: Remove the explicit index (preferred)
-CREATE UNIQUE INDEX IF NOT EXISTS idx_jobs_idempotency_key ON jobs(idempotency_key);
Option B: Keep only the named index
-    idempotency_key VARCHAR(255) NOT NULL UNIQUE,
+    idempotency_key VARCHAR(255) NOT NULL,

Also applies to: 18-18


2-2: CREATE TYPE lacks an existence guard, unlike role_enum in the sibling migration.

If status_enum already exists (e.g., from a partial or re-run migration), this statement will error out. The other 000003 migration wraps the equivalent CREATE TYPE role_enum in a DO $$ ... IF NOT EXISTS block. Consider applying the same defensive pattern here for consistency.

Defensive CREATE TYPE
-CREATE TYPE status_enum AS ENUM('pending', 'pending_to_queue', 'success', 'failed', 'partial_success', 'processing');
+DO $$
+BEGIN
+    IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'status_enum') THEN
+        CREATE TYPE status_enum AS ENUM ('pending', 'pending_to_queue', 'success', 'failed', 'partial_success', 'processing');
+    END IF;
+END $$;
contributorapp/repository/contributor.go (1)

233-238: FindByVCSUsernames omits role from the SELECT — inconsistent with entity definition.

The Contributor struct has a Role field, and the other read methods (GetContributorByID, GetContributorByGitHubUsername) now include role in their projections. This method leaves Role as a zero-value string, which could cause subtle downstream bugs if callers expect it to be populated.

contributorapp/delivery/http/handler.go (1)

180-180: Align service method parameters with types.ID for consistency.

types.ID is defined as uint64, but JobService.JobStatus(ctx context.Context, id uint) and JobService.GetFailRecords(ctx context.Context, jobID uint) accept uint. Other services in the codebase (e.g., notifapp) use types.ID for ID parameters, and updating these methods to accept uint64 or types.ID would improve consistency and avoid potential truncation on 32-bit platforms when converting from int.

Applies to lines 180 and 239 in contributorapp/delivery/http/handler.go.

Comment on lines +1 to +31
-- +migrate Up
DO $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM pg_type WHERE typname = 'role_enum') THEN
CREATE TYPE role_enum AS ENUM ('admin', 'user');
END IF;
END $$;

ALTER TABLE contributors
ADD COLUMN IF NOT EXISTS role role_enum NOT NULL DEFAULT 'user';

DO $$
BEGIN
IF EXISTS (
SELECT 1
FROM information_schema.columns
WHERE table_name = 'contributors'
AND column_name = 'role'
AND udt_name <> 'role_enum'
) THEN
ALTER TABLE contributors
ALTER COLUMN role TYPE role_enum
USING role::role_enum;
END IF;
END $$;

-- +migrate Down
ALTER TABLE contributors
DROP COLUMN IF EXISTS role;

DROP TYPE IF EXISTS role_enum;

⚠️ Potential issue | 🔴 Critical



Duplicate migration numbers 000003 and 000004 — both have collisions in contributorapp/repository/migrations/.

The directory contains:

  • 000003_add_role_to_contributors.sql AND 000003_create_jobs_table.sql
  • 000004_create_fail_records_table.sql AND 000004_seed_admin_contributor.sql

Migration tools rely on numeric prefixes for ordering and uniqueness. Duplicate numbers will prevent the migrations from executing correctly or cause unpredictable ordering.

Renumber one file in each pair so every migration in the directory has a unique sequence (e.g., rename the second 000003 to 000005 and the second 000004 to 000006, then adjust subsequent migrations accordingly).


Comment on lines +1 to +25
-- +migrate Up
INSERT INTO contributors (
github_id,
github_username,
email,
password,
role,
privacy_mode,
created_at,
updated_at
) VALUES (
NULL,
'fdaei',
NULL,
'$2y$10$RpwjZmD10eub5hSvUENoP.R7G6mtKn/3gt3F6LujsZutUsEGEpBCK',
'admin',
'real',
NOW(),
NOW()
)
ON CONFLICT (github_username)
DO UPDATE SET role = EXCLUDED.role;

-- +migrate Down
DELETE FROM contributors WHERE github_username = 'fdaei';

⚠️ Potential issue | 🔴 Critical

Duplicate migration number 000004 — collides with 000004_create_fail_records_table.sql.

Same issue as with the 000003 pair. Both files share the 000004 prefix in the same directory. One set must be renumbered to avoid migration tool conflicts.

🤖 Prompt for AI Agents
In `@contributorapp/repository/migrations/000004_seed_admin_contributor.sql`
around lines 1 - 25, The migration file
contributorapp/repository/migrations/000004_seed_admin_contributor.sql collides
with 000004_create_fail_records_table.sql because they share the same numeric
prefix `000004`; rename this file to a unique next sequence (e.g., change the
prefix to `000005` or another unused number) so the migration tool sees a
distinct migration order, keep the file contents (including the "-- +migrate Up"
and "-- +migrate Down" blocks) unchanged, and ensure any references or
deployment scripts that rely on the migration filename are updated to the new
name.

…b.com:gocasters/rankr into feature/integrate-auth-and-contributor-import

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deploy/leaderboardscoring/production/Dockerfile (1)

5-9: ⚠️ Potential issue | 🟠 Major

-mod=mod makes the vendor directory copy dead weight and requires network access at build time.

Line 6 copies the vendor/ directory, but -mod=mod on line 9 tells Go to resolve dependencies via the module cache/proxy, completely ignoring the vendored code. This means:

  1. The COPY vendor ./vendor layer is unnecessary bloat in the build context/cache.
  2. Production builds now require network access to download modules, reducing reproducibility and reliability.

Either keep -mod=vendor to use the already-copied vendor directory (hermetic, reproducible builds), or switch to -mod=mod and remove the COPY vendor ./vendor line (and replace it with RUN go mod download for layer caching).

This same issue applies to all 7 Dockerfiles in this PR (auth, contributor, leaderboardscoring, notification, task, userprofile, webhook).

Option A: Keep vendor mode (recommended for hermetic builds)
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=mod -ldflags="-s -w" -o /app/main ./cmd/leaderboardscoring/main.go
+RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -ldflags="-s -w" -o /app/main ./cmd/leaderboardscoring/main.go
Option B: Use module mode properly
 COPY go.mod go.sum ./
-COPY vendor ./vendor
+RUN go mod download
 COPY . .
 
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=mod -ldflags="-s -w" -o /app/main ./cmd/leaderboardscoring/main.go
+RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app/main ./cmd/leaderboardscoring/main.go
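
Since the same mismatch appears in seven Dockerfiles, a small lint step can keep them from drifting again. A sketch, assuming a POSIX shell; the function name and message are illustrative:

```shell
# Flag a Dockerfile that copies vendor/ but builds with -mod=mod,
# i.e. the vendored code is copied into the image and then ignored.
check_vendor_mode() {
  if grep -q 'COPY vendor' "$1" && grep -q 'mod=mod' "$1"; then
    echo "$1: vendor copied but ignored by -mod=mod"
    return 1
  fi
}
```

Wired into CI as `for f in deploy/*/production/Dockerfile; do check_vendor_mode "$f" || exit 1; done`, it fails the build whenever a Dockerfile mixes the two modes.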

Comment on lines +1 to +5
auth_POSTGRES_DB__HOST=shared-postgres
auth_POSTGRES_DB__PORT=5432
auth_POSTGRES_DB__USER=auth_user
auth_POSTGRES_DB__PASSWORD=auth_pass
auth_POSTGRES_DB__DB_NAME=auth_db

⚠️ Potential issue | 🟠 Major

Avoid committing production DB credentials.

This file hardcodes username/password in the repo, which is a security/compliance risk for a production environment. Please move these values to a secret manager/CI-injected environment and keep only non-sensitive placeholders in the repo.

🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 1-1: [LowercaseKey] The auth_POSTGRES_DB__HOST key should be in uppercase

(LowercaseKey)


[warning] 2-2: [LowercaseKey] The auth_POSTGRES_DB__PORT key should be in uppercase

(LowercaseKey)


[warning] 3-3: [LowercaseKey] The auth_POSTGRES_DB__USER key should be in uppercase

(LowercaseKey)


[warning] 4-4: [LowercaseKey] The auth_POSTGRES_DB__PASSWORD key should be in uppercase

(LowercaseKey)


[warning] 4-4: [UnorderedKey] The auth_POSTGRES_DB__PASSWORD key should go before the auth_POSTGRES_DB__PORT key

(UnorderedKey)


[warning] 5-5: [LowercaseKey] The auth_POSTGRES_DB__DB_NAME key should be in uppercase

(LowercaseKey)


[warning] 5-5: [UnorderedKey] The auth_POSTGRES_DB__DB_NAME key should go before the auth_POSTGRES_DB__HOST key

(UnorderedKey)

🤖 Prompt for AI Agents
In `@deploy/auth/production/.env.production` around lines 1 - 5, The
.env.production currently contains hardcoded credentials
(auth_POSTGRES_DB__USER, auth_POSTGRES_DB__PASSWORD, and similar) which must be
removed; replace those values with non-sensitive placeholders and update
deployment/CI configuration to inject real secrets from your secret manager or
CI environment variables instead of committing them, ensure deployment
scripts/readers (the code that consumes
auth_POSTGRES_DB__HOST/PORT/DB_NAME/USER/PASSWORD) read from the injected env
vars or secret manager API at runtime, add the production .env to gitignore if
applicable and document the required secret names for CI/ops.
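
Committed plaintext credentials are also easy to catch mechanically. A minimal pre-commit sketch, assuming a POSIX shell and the `KEY=value` convention these env files use; the pattern only looks for literal `*PASSWORD` assignments and is illustrative, not exhaustive:

```shell
# Print any env-file line under the given directory that assigns a
# literal, non-empty value to a *PASSWORD key. Empty output = clean.
find_plaintext_passwords() {
  grep -rniE '^[A-Za-z_]*PASSWORD=..*' "$1" 2>/dev/null
}
```

Running it over deploy/ as a CI gate (failing when output is non-empty) would have flagged every one of the env files called out in this review.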

Comment on lines +1 to +5
STAT_POSTGRES_DB__HOST=shared-postgres
STAT_POSTGRES_DB__PORT=5432
STAT_POSTGRES_DB__USER=leaderboardstat_user
STAT_POSTGRES_DB__PASSWORD=leaderboardstat_pass
STAT_POSTGRES_DB__DB_NAME=leaderboardstat_db

⚠️ Potential issue | 🟠 Major

Plaintext database credentials committed in a production environment file.

Hardcoded username (leaderboardstat_user) and password (leaderboardstat_pass) are checked into version control under a production/ path. Even if these are overridden at deploy time, committing credentials to the repo is a security risk. Consider:

  • Using a secrets manager (e.g., Vault, AWS Secrets Manager) or Docker/K8s secrets injection.
  • Adding *.env.production to .gitignore and using a .env.production.example with placeholder values instead.
🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 4-4: [UnorderedKey] The STAT_POSTGRES_DB__PASSWORD key should go before the STAT_POSTGRES_DB__PORT key

(UnorderedKey)


[warning] 5-5: [UnorderedKey] The STAT_POSTGRES_DB__DB_NAME key should go before the STAT_POSTGRES_DB__HOST key

(UnorderedKey)

🤖 Prompt for AI Agents
In `@deploy/leaderboardstat/production/.env.production` around lines 1 - 5, Remove
the hardcoded credentials for STAT_POSTGRES_DB__USER and
STAT_POSTGRES_DB__PASSWORD (and any real secrets in
STAT_POSTGRES_DB__DB_NAME/HOST/PORT) from the committed production env file;
instead reference external secrets by replacing those values with non-sensitive
placeholders and load real values at deploy-time from your secrets manager
(e.g., Vault/AWS Secrets Manager) or container orchestration secrets injection.
Add the production env filename pattern to .gitignore so the real file is never
committed, and add a .env.production.example (with placeholder values for
STAT_POSTGRES_DB__HOST, STAT_POSTGRES_DB__PORT, STAT_POSTGRES_DB__USER,
STAT_POSTGRES_DB__PASSWORD, STAT_POSTGRES_DB__DB_NAME) to the repo to document
required variables.

Comment on lines +6 to +9
 COPY vendor ./vendor
 COPY . .
 
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go
+RUN CGO_ENABLED=0 GOOS=linux go build -mod=mod -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go

⚠️ Potential issue | 🟡 Minor

Vendor directory is copied but unused with -mod=mod.

Line 6 copies the vendor directory into the build context, but Line 9 uses -mod=mod, which resolves dependencies from the module cache instead of the vendor directory. This means the vendor copy is dead weight—it increases the build context size and wastes a layer without being used.

Either remove the vendor copy or switch back to -mod=vendor.

Proposed fix (if vendor is no longer needed)
 COPY go.mod go.sum ./
-COPY vendor ./vendor
 COPY . .
 
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=mod -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go
+RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go

Note: The explicit COPY vendor ./vendor before COPY . . is typically used to leverage Docker layer caching for vendor-mode builds. With -mod=mod, this pattern no longer provides that benefit, and the subsequent COPY . . already includes the vendor directory anyway.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-COPY vendor ./vendor
-COPY . .
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=vendor -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go
-RUN CGO_ENABLED=0 GOOS=linux go build -mod=mod -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go
+COPY . .
+RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-s -w" -o /app/main ./cmd/realtime/main.go
🤖 Prompt for AI Agents
In `@deploy/realtime/production/Dockerfile` around lines 6 - 9, The Dockerfile
copies a vendor directory with "COPY vendor ./vendor" but the build uses the go
build command with "-mod=mod" (RUN CGO_ENABLED=0 GOOS=linux go build -mod=mod
-ldflags="-s -w" -o /app/main ./cmd/realtime/main.go), so the vendor layer is
unused; fix by either removing the "COPY vendor ./vendor" line to avoid the dead
layer and rely on module mode, or change the build flag to "-mod=vendor" so the
vendor directory is honored during the build—update the Dockerfile accordingly
and keep only the matching COPY and RUN changes.

Comment on lines +3 to +4
task_POSTGRES_DB__USER=task_user
task_POSTGRES_DB__PASSWORD=task_pass

⚠️ Potential issue | 🟠 Major

Hardcoded production credentials in source control.

Committing plaintext passwords (task_pass) to the repository is a security risk. Consider using a secrets manager, CI/CD secret injection, or at minimum a .env.production.example with placeholder values, keeping actual secrets out of version control.

🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 3-3: [LowercaseKey] The task_POSTGRES_DB__USER key should be in uppercase

(LowercaseKey)


[warning] 4-4: [LowercaseKey] The task_POSTGRES_DB__PASSWORD key should be in uppercase

(LowercaseKey)


[warning] 4-4: [UnorderedKey] The task_POSTGRES_DB__PASSWORD key should go before the task_POSTGRES_DB__PORT key

(UnorderedKey)

🤖 Prompt for AI Agents
In `@deploy/task/production/.env.production` around lines 3 - 4, Remove the
hardcoded production credentials from .env.production by replacing the values
for task_POSTGRES_DB__USER and task_POSTGRES_DB__PASSWORD with placeholders (or
remove the file entirely from the repo), add .env.production to .gitignore, and
create a .env.production.example containing the keys task_POSTGRES_DB__USER and
task_POSTGRES_DB__PASSWORD with placeholder values; update CI/CD or deployment
scripts to inject the real secrets from your secrets manager or pipeline secret
store instead of committing them to source control.

Comment on lines +1 to +8
WEBHOOK_POSTGRES_DB__HOST=shared-postgres
WEBHOOK_POSTGRES_DB__PORT=5432
WEBHOOK_POSTGRES_DB__USER=webhook_user
WEBHOOK_POSTGRES_DB__PASSWORD=webhook_pass
WEBHOOK_POSTGRES_DB__DB_NAME=webhook_db
WEBHOOK_REDIS__HOST=shared-redis
WEBHOOK_REDIS__PORT=6379
WEBHOOK_NATS__URL=nats://shared-nats:4222

⚠️ Potential issue | 🟠 Major

Do not commit production secrets in plaintext.

WEBHOOK_POSTGRES_DB__PASSWORD=webhook_pass is a hardcoded credential in a production env file. If this file is committed, it risks credential leakage and violates least‑privilege/secret‑management practices. Move secrets to a secure store (e.g., Vault/KMS/Secrets Manager) and reference them via injected envs at deploy time.

🧰 Tools
🪛 dotenv-linter (4.0.0)

[warning] 4-4: [UnorderedKey] The WEBHOOK_POSTGRES_DB__PASSWORD key should go before the WEBHOOK_POSTGRES_DB__PORT key

(UnorderedKey)


[warning] 5-5: [UnorderedKey] The WEBHOOK_POSTGRES_DB__DB_NAME key should go before the WEBHOOK_POSTGRES_DB__HOST key

(UnorderedKey)


[warning] 8-8: [UnorderedKey] The WEBHOOK_NATS__URL key should go before the WEBHOOK_POSTGRES_DB__DB_NAME key

(UnorderedKey)

🤖 Prompt for AI Agents
In `@deploy/webhook/production/.env.production` around lines 1 - 8, The production
env contains a hardcoded secret (WEBHOOK_POSTGRES_DB__PASSWORD=webhook_pass);
remove this plaintext credential from the committed file, replace it with a
placeholder reference (e.g., expect WEBHOOK_POSTGRES_DB__PASSWORD to be injected
at deploy time), and wire the deployment to fetch the real secret from your
secret manager (Vault/KMS/Secrets Manager) and inject it as an environment
variable for the webhook service; also ensure any CI/CD or k8s manifests/Helm
charts reference the secret store and confirm WEBHOOK_POSTGRES_DB__PASSWORD is
no longer present in the repo history or committed files.

@gocastsian gocastsian merged commit 0749cd9 into main Feb 18, 2026
2 checks passed
5 participants