
fix: add input validation and zero-score guard to evaluate-assessments Edge Function#172

Open
Varadraj75 wants to merge 3 commits into AOSSIE-Org:main from Varadraj75:fix/evaluate-assessments-input-validation

Conversation

Contributor

@Varadraj75 Varadraj75 commented Mar 1, 2026

Closes #171

📝 Description

The evaluate-assessments Edge Function had two bugs:

  1. No input validation — missing patient_id, assessment_id, or questions
    caused questions.map() to crash with TypeError: Cannot read properties of undefined, returning an unhelpful 500 error.
  2. Silent zero score — if none of the submitted question/answer IDs matched
    the assessment data, the scoring loop silently skipped every question,
    totalScore stayed 0, and the patient was incorrectly told they are not
    autistic — a clinically dangerous false negative.
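
To illustrate bug 1, here is a minimal sketch of why a missing `questions` field crashed the function. The request shape is hypothetical, inferred from the description above; this is not the actual function code:

```typescript
// Hypothetical request shape inferred from the PR description.
type EvalRequest = {
  patient_id?: string;
  assessment_id?: string;
  questions?: { question_id: string; answer_id: string }[];
};

// Without validation, a missing `questions` field makes .map() throw
// "TypeError: Cannot read properties of undefined (reading 'map')",
// which the function previously surfaced as an unhelpful 500.
function collectQuestionIds(body: EvalRequest): string[] {
  return body.questions!.map((q) => q.question_id);
}
```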

🔧 Changes Made

  • supabase/functions/evaluate-assessments/index.ts:
    • Added early input validation using req.json().catch(() => null); the function now returns 400 Bad Request if patient_id, assessment_id, or questions are missing or malformed, before any processing begins.
    • Added scoredCount tracker in the scoring loop — returns 422 Unprocessable Entity if 0 out of N questions were successfully scored, preventing a
      silent false negative result from being saved and returned to the patient.
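
The two guards can be sketched in isolation as follows. Field names and the score-lookup shape are assumptions drawn from this PR description, not the exact diff:

```typescript
// Sketch of the two guards described above, with assumed shapes.
type Submitted = { question_id: string; answer_id: string };

// Guard 1: reject malformed bodies early (the caller turns false into a 400).
function isValidBody(
  body: unknown
): body is { patient_id: string; assessment_id: string; questions: Submitted[] } {
  const b = body as Record<string, unknown> | null;
  return !!b && !!b.patient_id && !!b.assessment_id &&
    Array.isArray(b.questions) && b.questions.length > 0;
}

// Guard 2: count how many submitted answers matched a known score.
// A result with scoredCount === 0 should become a 422, never a saved zero score.
function score(
  submitted: Submitted[],
  scores: Map<string, number>
): { total: number; scoredCount: number } {
  let total = 0;
  let scoredCount = 0;
  for (const q of submitted) {
    const s = scores.get(`${q.question_id}:${q.answer_id}`);
    if (s !== undefined) {
      total += s;
      scoredCount++;
    }
  }
  return { total, scoredCount };
}
```

The key design point is that unmatched IDs must fail loudly: silently skipping them is what produced the false negative.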

📷 Screenshots or Visual Changes (if applicable)

N/A — Edge Function logic fix, no visual changes.

🤝 Collaboration

Collaborated with: N/A

✅ Checklist

  • I have read the contributing guidelines.
  • I have added tests that prove my fix is effective or that my feature works.
  • I have added necessary documentation (if applicable).
  • Any dependent changes have been merged and published in downstream modules.

Summary by CodeRabbit

  • New Features

    • Assessment evaluation responses now include assessment score and autism status indicators.
  • Bug Fixes

    • Implemented comprehensive validation for assessment requests with clearer error messages for invalid data.
    • Added error detection for questions that cannot be successfully scored.

Copilot AI review requested due to automatic review settings March 1, 2026 07:33

coderabbitai bot commented Mar 1, 2026

Warning

Rate limit exceeded

@Varadraj75 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 24 minutes and 57 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between d4c717f and 19a3994.

📒 Files selected for processing (1)
  • supabase/functions/evaluate-assessments/index.ts
📝 Walkthrough

Walkthrough

The evaluate-assessments Edge Function now validates incoming request bodies for required fields (patient_id, assessment_id, non-empty questions array) and returns a 400 error if invalid. It also tracks successful question scoring and returns a 422 error if no questions match the assessment data, preventing silent zero-score outcomes.

Changes

  • supabase/functions/evaluate-assessments/index.ts (Input Validation & Error Handling): Added request body validation with early 400 error response; introduced scoredCount tracking to detect unscored questions and return a 422 error; expanded the response payload to include an assessment_score field alongside existing fields.

Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant Function as evaluate-assessments
    participant DB as Database

    Client->>Function: POST request body

    rect rgba(200, 100, 100, 0.5)
    Function->>Function: Parse & validate body
    alt Missing/Invalid fields
        Function-->>Client: 400 Bad Request
    end
    end

    rect rgba(100, 150, 200, 0.5)
    Function->>DB: Fetch assessment data
    DB-->>Function: Assessment details
    end

    rect rgba(100, 200, 100, 0.5)
    Function->>Function: Score questions<br/>(track scoredCount)
    alt scoredCount === 0
        Function-->>Client: 422 Unprocessable Entity
    end
    end

    Function->>DB: Insert assessment_results
    DB-->>Function: Confirm
    Function-->>Client: 200 OK + assessment_score
```

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Poem

🐰 A validation spell, so true and bright,
No more crashes in the dead of night!
Zero scores now caught with care,
Bugs be gone—the fix is fair!

🚥 Pre-merge checks: ✅ 5 passed

  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title accurately summarizes the main changes: adding input validation and a zero-score guard to the evaluate-assessments function.
  • Linked Issues Check (✅ Passed): The PR addresses both objectives from issue #171: it implements input validation returning 400 for missing/malformed data, and adds scoredCount tracking to return 422 when no questions score.
  • Out of Scope Changes Check (✅ Passed): All changes are directly related to the two bugs in issue #171; no unrelated modifications to other files or unrelated functionality detected.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.


Copilot AI left a comment

Pull request overview

This PR fixes two bugs in the evaluate-assessments Supabase Edge Function identified in issue #171: (1) a crash on missing/malformed input that produced an unhelpful 500 error, and (2) a silent zero-score false negative when submitted IDs didn't match assessment data.

Changes:

  • Added early input validation via req.json().catch(() => null) to return 400 Bad Request before any processing when patient_id, assessment_id, or questions are missing/malformed.
  • Added a scoredCount tracker in the scoring loop and a 422 Unprocessable Entity guard when zero questions are successfully scored, preventing a clinically dangerous false-negative result from being persisted and returned.
  • Removed blank lines throughout the file (whitespace-only cleanup).

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines +86 to +91:

```typescript
}
if (scoredCount === 0) {
  return new Response(
    JSON.stringify({ error: "No questions could be scored. Check that question_id and answer_id values match the assessment." }),
    { status: 422, headers: { "Content-Type": "application/json" } }
  );
```
Copilot AI Mar 1, 2026

The scoredCount === 0 guard only catches the case where all submitted questions fail to match. If even one question matches (e.g. 1 out of 10), the guard passes and a score based on a single matched question is silently accepted and stored as the final result. This can produce a clinically misleading score — for example, a score of 0 from 1 matched question while 9 questions were silently skipped.

Consider also checking whether scoredCount is significantly less than the number of submitted questions (e.g. scoredCount < answered_questions.questions.length) and returning a warning or error when only a fraction of questions could be scored.

Suggested change (replacing the block above):

```typescript
}
const submittedCount = answered_questions.questions.length;
if (scoredCount === 0) {
  return new Response(
    JSON.stringify({ error: "No questions could be scored. Check that question_id and answer_id values match the assessment." }),
    { status: 422, headers: { "Content-Type": "application/json" } }
  );
} else if (scoredCount < submittedCount) {
  return new Response(
    JSON.stringify({
      error: "Only a subset of submitted questions could be scored. Check that question_id and answer_id values match the assessment.",
      scored_questions: scoredCount,
      submitted_questions: submittedCount,
    }),
    { status: 422, headers: { "Content-Type": "application/json" } }
  );
```
Comment on lines +26 to +37:

```typescript
if (
  !body ||
  !body.patient_id ||
  !body.assessment_id ||
  !Array.isArray(body.questions) ||
  body.questions.length === 0
) {
  return new Response(
    JSON.stringify({ error: "Missing or invalid request body. Required fields: patient_id, assessment_id, questions (non-empty array)." }),
    { status: 400, headers: { "Content-Type": "application/json" } }
  );
}
```
Copilot AI Mar 1, 2026

The !body.patient_id and !body.assessment_id checks use JavaScript's truthiness, which does not distinguish between a missing field, null, false, 0, or an empty string "". Since these fields are expected to be non-empty UUID strings, using typeof body.patient_id !== 'string' || body.patient_id.trim() === '' (and similarly for assessment_id) would be a more explicit and robust validation. With the current check, a caller passing patient_id: "" gets a 400, which is correct, but the error message does not indicate which specific field is missing or invalid, making debugging harder for API consumers.

Suggested change (replacing the block above):

```typescript
if (!body || typeof body !== "object") {
  return new Response(
    JSON.stringify({ error: "Missing or invalid request body. Expected JSON object with fields: patient_id, assessment_id, questions (non-empty array)." }),
    { status: 400, headers: { "Content-Type": "application/json" } }
  );
}
const invalidFields: string[] = [];
if (typeof body.patient_id !== "string" || body.patient_id.trim() === "") {
  invalidFields.push("patient_id");
}
if (typeof body.assessment_id !== "string" || body.assessment_id.trim() === "") {
  invalidFields.push("assessment_id");
}
if (!Array.isArray(body.questions) || body.questions.length === 0) {
  invalidFields.push("questions (non-empty array)");
}
if (invalidFields.length > 0) {
  return new Response(
    JSON.stringify({ error: `Missing or invalid fields: ${invalidFields.join(", ")}.` }),
    { status: 400, headers: { "Content-Type": "application/json" } }
  );
}
```

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents

```text
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@supabase/functions/evaluate-assessments/index.ts`:
- Around lines 25-31: The request body validation currently only checks that
  body.questions is a non-empty array, but individual items can be null or
  missing required fields and later crash when mapped; update the validation
  in index.ts to ensure every item in body.questions is an object with
  required keys (e.g., question_id and answer_id) and valid types before
  proceeding (the same stricter check should also be applied to the secondary
  validation block around the mapping at the area referenced by the comment
  for lines 52-55); if any item fails validation, return the existing
  400/invalid request response rather than allowing a runtime error during
  the questions.map handling.
```
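
A possible shape for the per-item check this prompt describes is sketched below. The field names (`question_id`, `answer_id`) are assumed from the discussion, and the helper names are hypothetical:

```typescript
// Hedged sketch of the stricter per-item validation suggested above.
// Field names (question_id, answer_id) are assumed from the review thread.
function isQuestionItem(item: unknown): item is { question_id: string; answer_id: string } {
  if (typeof item !== "object" || item === null) return false;
  const rec = item as Record<string, unknown>;
  return typeof rec.question_id === "string" && typeof rec.answer_id === "string";
}

// Returns true only when every submitted item is well-formed; the caller
// would respond 400 instead of letting questions.map() crash later.
function allItemsValid(questions: unknown[]): boolean {
  return questions.every(isQuestionItem);
}
```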

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d7a6d17 and d4c717f.

📒 Files selected for processing (1)
  • supabase/functions/evaluate-assessments/index.ts



Development

Successfully merging this pull request may close these issues.

BUG: evaluate-assessments Edge Function crashes on missing/malformed input and silently returns zero score on unmatched answers

2 participants