
Unify LLM model config, env-based frontend API routing, and reliability fixes#155

Open
ashi2004 wants to merge 1 commit into AOSSIE-Org:main from ashi2004:main

Conversation

@ashi2004

@ashi2004 ashi2004 commented Mar 22, 2026

Fixes #154

What I changed

  • Standardized on the same Groq model (llama-3.1-8b-instant) across the backend so behavior stays consistent
  • Replaced hardcoded backend URLs with NEXT_PUBLIC_API_URL
  • Fixed logging so both the console and file handlers use UTF-8
  • Added better error handling around the Google search call in the fact-check module
  • Improved pipeline logs so success/failure is easier to trace
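The UTF-8 logging fix listed above can be sketched as follows. This is a minimal illustration, not the project's actual `logging_config.py`; the logger name, format string, and log path are assumptions:

```python
import logging
import sys

def build_logger(log_path: str = "app.log") -> logging.Logger:
    """Configure a logger whose console and file handlers both emit UTF-8."""
    logger = logging.getLogger("app")
    logger.setLevel(logging.INFO)

    console = logging.StreamHandler(sys.stdout)
    # Text streams gained reconfigure() in Python 3.7; guard for wrapped streams
    # (e.g. pytest capture) that may not expose it.
    if hasattr(console.stream, "reconfigure"):
        console.stream.reconfigure(encoding="utf-8")

    # Passing encoding explicitly avoids the Windows default (often cp1252),
    # which raised UnicodeEncodeError on emoji and accented characters.
    file_handler = logging.FileHandler(log_path, encoding="utf-8")

    fmt = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
    for handler in (console, file_handler):
        handler.setFormatter(fmt)
        logger.addHandler(handler)
    return logger
```

The guard matters because `sys.stdout` can be replaced by objects without `reconfigure()`, while the file handler always accepts `encoding=` directly.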

Why I did this

  • Different models in different places produced inconsistent outputs
  • Hardcoded URLs break when running in different environments
  • Logging occasionally failed on Windows due to encoding issues
  • Google API failures were not handled, so errors crashed the pipeline
  • Clearer logs were needed for debugging

Testing I did

  • Ran /api/process and /api/bias with a real article → both succeeded
  • Called /api/chat from the frontend → responses returned correctly
  • Verified logs print to the console and are saved to app.log
  • Simulated a broken Google API response → handled without crashing

Screenshots

Backend logs
(screenshot)

Frontend result
(screenshot)


Summary by CodeRabbit

  • New Features

    • Backend API URL configurable via environment variables
    • Chat API accepts optional article context and rejects empty messages
  • Bug Fixes

    • Improved UTF-8 handling in logging
    • Better detection and reporting of failed web search responses
  • Improvements

    • Updated AI model used for analysis across modules
    • UI: prevents duplicate chat submissions, disables send while loading, article highlights and read-time stats, adjusted fact-check badge styles
  • Observability

    • Added clearer pipeline execution and verification logs; improved endpoint error handling

@coderabbitai
Contributor

coderabbitai bot commented Mar 22, 2026

📝 Walkthrough

Walkthrough

Standardizes LLM model usage to llama-3.1-8b-instant, replaces hard-coded frontend backend URLs with env var NEXT_PUBLIC_API_URL, enables UTF-8 for logging handlers, adds robust error handling for Google search responses, and extends chat to accept optional article_context with input validation and improved endpoint error handling.

Changes

Cohort / File(s) Summary
LLM Model Standardization
backend/app/modules/bias_detection/check_bias.py, backend/app/modules/chat/llm_processing.py, backend/app/modules/facts_check/llm_processing.py, backend/app/modules/langgraph_nodes/judge.py, backend/app/modules/langgraph_nodes/sentiment.py
Replaced Groq model gemma2-9b-it with llama-3.1-8b-instant across modules; preserved request structure and response parsing. Note: ask_llm signature now accepts optional article_context.
Frontend API Endpoint Configuration
frontend/app/analyze/loading/page.tsx, frontend/app/analyze/results/page.tsx
Introduced backend_url = process.env.NEXT_PUBLIC_API_URL and replaced hard-coded HF host with env-driven base URL; added chat loading-state guards, message trimming checks, included article_context in chat requests, and adjusted summary rendering and fact-check badge variants.
Logging & Encoding
backend/app/logging/logging_config.py
Console handler attempts reconfigure(encoding='utf-8') when available; file handler created with encoding="utf-8" to ensure UTF-8 output.
Search Error Handling
backend/app/modules/facts_check/web_search.py
Wrapped the Google Custom Search call in try/except, validated the presence of items, extracted res["error"]["message"] when items are missing, and re-raised with a logged error on failure.
Pipeline & Endpoint Behavior
backend/app/utils/fact_check_utils.py, backend/app/routes/routes.py
Fact-check pipeline: cleaned logging (removed emoji prefixes), added status logs and error when searches yield no results. /chat endpoint: added article_context field to ChatQuery, strips/validates message, rejects empty messages (400), includes article_context in ask_llm call, and centralizes exception handling with logged 500 responses.
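The search-response guard summarized in the table above can be reduced to a framework-free sketch. The response shape mirrors the Google Custom Search JSON format; the function name and returned fields are illustrative, not the project's actual code:

```python
def first_search_result(res: dict) -> dict:
    """Return the first item from a Custom Search response dict,
    or raise with the API's own error message when no results exist."""
    items = res.get("items")
    if not items:  # covers both a missing "items" key and an empty list
        message = res.get("error", {}).get("message", "No search results")
        raise RuntimeError(f"Google search failed: {message}")
    first = items[0]
    return {
        "title": first.get("title", ""),
        "link": first.get("link", ""),
        "snippet": first.get("snippet", ""),
    }
```

Checking `not items` rather than only `"items" not in res` is the key point: it also catches the empty-list case that would otherwise raise an IndexError at `items[0]`.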

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Backend
  participant LLM as GroqLLM
  Client->>Backend: POST /chat { message, article_context? }
  Backend->>Backend: validate & strip message\nnormalize article_context
  Backend->>GroqLLM: ask_llm(question, docs, article_context)
  GroqLLM-->>Backend: completion (answer)
  Backend-->>Client: 200 { answer } / 400 / 500
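The validation step in the diagram (strip the message, reject empty input, pass optional article context through) can be sketched independently of the web framework. Names here are illustrative; the real route uses a Pydantic ChatQuery model:

```python
from typing import Optional

class ChatValidationError(ValueError):
    """Raised when a chat message fails validation (maps to HTTP 400)."""

def prepare_chat_input(message: str, article_context: Optional[str] = None) -> dict:
    """Strip and validate the incoming message; normalize optional context."""
    cleaned = (message or "").strip()
    if not cleaned:
        raise ChatValidationError("Message must not be empty.")
    return {
        "message": cleaned,
        "article_context": article_context.strip() if article_context else None,
    }
```

Keeping validation in a plain function like this makes the 400-vs-500 split easy to test without spinning up the server.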

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Poem

🐰 I hopped from Gemma to Llama bright,

env keys guiding my midnight flight,
logs now sing in UTF‑8 tune,
searches safe beneath the moon,
tiny paws applaud the fix — delight.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 23.08%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (4 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title clearly and specifically summarizes the main changes: unifying LLM model configuration, enabling environment-based frontend API routing, and addressing reliability issues in logging and search handling.
  • Linked Issues Check — ✅ Passed: all objectives from issue #154 are met: models unified to llama-3.1-8b-instant across modules [bias_detection, chat, facts_check, judge, sentiment], hardcoded URLs replaced with NEXT_PUBLIC_API_URL in the frontend [analyze/loading, analyze/results, routes.py], UTF-8 logging fixed [logging_config.py], Google search error handling added [web_search.py], and pipeline logging improved [fact_check_utils.py].
  • Out of Scope Changes Check — ✅ Passed: all changes align with issue #154 objectives. The refactoring of article-context handling, memoized layout computation in the results page, and badge-variant mapping updates are supporting changes that enable the context-aware chat feature, an expected extension of the core objectives.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
frontend/app/analyze/results/page.tsx (1)

81-96: ⚠️ Potential issue | 🟠 Major

Add error handling for the chat API call.

The axios.post call has no error handling. If the request fails (network error, server error, timeout), the promise rejection will be unhandled, potentially leaving the UI in an inconsistent state with the user's message displayed but no response.

🛡️ Proposed fix with error handling
   async function handleSendMessage(e: React.FormEvent) {
     e.preventDefault();
     if (!message.trim()) return;
     const newMessages = [...messages, { role: "user", content: message }];
     setMessages(newMessages);
     setMessage("");
 
-    const res = await axios.post(`${backend_url}/api/chat`, {
-      message: message,
-    });
-    const data = res.data;
-
-    console.log(data);
-
-    // 🔹 Step 2: Append LLM's response
-    setMessages([...newMessages, { role: "assistant", content: data.answer }]);
+    try {
+      const res = await axios.post(`${backend_url}/api/chat`, {
+        message: message,
+      });
+      const data = res.data;
+      setMessages([...newMessages, { role: "assistant", content: data.answer }]);
+    } catch (error) {
+      console.error("Chat API error:", error);
+      setMessages([
+        ...newMessages,
+        { role: "assistant", content: "Sorry, something went wrong. Please try again." },
+      ]);
+    }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/app/analyze/results/page.tsx` around lines 81 - 96, The
handleSendMessage function currently calls axios.post without error handling,
which can leave the UI inconsistent; wrap the API call and subsequent logic in a
try/catch around the axios.post(`${backend_url}/api/chat`, { message }) call
inside handleSendMessage, and in the catch block handle failures by logging the
error and updating state (e.g., append an assistant message indicating the error
or set an error state) while ensuring setMessage("") and setMessages remain
consistent (use the same newMessages variable when appending either the
successful response ({ role: "assistant", content: data.answer }) or an error
fallback message). Also consider disabling the send control or showing a
loading/error indicator while the request is in flight to avoid duplicate sends.
backend/app/modules/facts_check/llm_processing.py (1)

143-148: ⚠️ Potential issue | 🔴 Critical

NameError when JSON parsing fails — parsed is undefined.

When json.loads(content) raises an exception, the error is logged but parsed remains undefined. Line 148 then attempts to append parsed, causing a NameError that propagates up and aborts the entire verification loop.

🐛 Proposed fix to handle parse failures gracefully
             # Try parsing the JSON response
             try:
                 parsed = json.loads(content)
             except Exception as parse_err:
                 logger.error(f"LLM JSON parse error: {parse_err}")
+                parsed = {
+                    "verdict": "Unknown",
+                    "explanation": f"Failed to parse LLM response: {parse_err}",
+                    "original_claim": claim,
+                    "source_link": source,
+                }
 
             results_list.append(parsed)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/modules/facts_check/llm_processing.py` around lines 143 - 148,
The current JSON parsing block using json.loads(content) can raise and leave
parsed undefined, causing a NameError at results_list.append(parsed); fix by
initializing or handling failures: within the try/except around
json.loads(content) (the block referencing parsed, parse_err, and logger.error),
either set parsed to a safe fallback (e.g., None or an error marker) before
appending or simply continue/skip appending on exception so
results_list.append(parsed) is not called with an undefined variable; update the
code paths around json.loads and results_list.append to ensure parsed is always
defined or not appended when parsing fails.
🧹 Nitpick comments (1)
frontend/app/analyze/results/page.tsx (1)

22-22: Consider validating NEXT_PUBLIC_API_URL at build/runtime.

If the environment variable is unset, backend_url will be undefined, resulting in requests to "undefined/api/chat". Consider adding a fallback or build-time validation.

♻️ Proposed validation
-const backend_url = process.env.NEXT_PUBLIC_API_URL;
+const backend_url = process.env.NEXT_PUBLIC_API_URL;
+if (!backend_url) {
+  console.error("NEXT_PUBLIC_API_URL is not configured");
+}

Alternatively, add the variable to your .env.example and document it as required.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/app/analyze/results/page.tsx` at line 22, The code defines
backend_url from process.env.NEXT_PUBLIC_API_URL which can be undefined and
cause broken requests; update the initialization to validate and fail-fast or
provide a sane fallback: check NEXT_PUBLIC_API_URL at module load or app startup
(where backend_url is defined) and either throw a clear error (including the
variable name) to stop the build/runtime or set a documented fallback (e.g.,
empty string or default host) and log a warning; reference the backend_url
constant in frontend/app/analyze/results/page.tsx and ensure callers that append
"/api/chat" handle the validated value accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/modules/facts_check/web_search.py`:
- Around line 38-46: The code assumes res["items"][0] exists and will IndexError
when items is an empty list; update the check in the web_search logic to handle
an empty items list (e.g., replace the current if "items" not in res guard with
a check like if not res.get("items") or len(res["items"]) == 0) and raise or
return a clear error (including res.get("error", {}).get("message", "No search
results")) before accessing res["items"][0]; ensure the variables first, title,
link, snippet are only populated after this guard.
- Around line 32-35: The GET call building the Google Custom Search URL using
f-strings should be hardened: switch to using a params dict including "key",
"cx" and "q" (using the existing GOOGLE_SEARCH and query variables), call
requests.get with a sensible timeout, immediately call
response.raise_for_status() to surface HTTP errors, parse JSON into res, and
then validate that "items" exists and is a non-empty list before accessing
res["items"][0] (return or handle the no-results case explicitly to avoid
IndexError); locate and update the code around the variables/results/res/query
in web_search.py to implement these checks.

In `@backend/app/utils/fact_check_utils.py`:
- Around line 47-49: The status check is using the wrong variable: it inspects
state.get("status") but the extraction result is stored in result, causing an
early failure; update the conditional to check result.get("status") != "success"
(and keep the existing logger.error and return path) so the function uses the
actual claim extraction result (`result`) instead of `state`.

In `@frontend/app/analyze/loading/page.tsx`:
- Line 19: Validate NEXT_PUBLIC_API_URL before using it: ensure the module-level
const backend_url is checked for presence and a valid base URL, and if missing
throw or return an explicit error instead of constructing requests like
`${backend_url}/api/process`; update the fetch calls that build URLs (the code
that uses backend_url around the API call sites in page.tsx — the construct used
in the try/catch near the fetch at lines ~77–80) to first guard backend_url and
produce a clear error message (or use a validated URL object) so malformed
requests like "undefined/api/process" cannot be made; apply the same guard
pattern to the analogous usage in frontend/app/analyze/results/page.tsx where
NEXT_PUBLIC_API_URL is read and used.

---

Outside diff comments:
In `@backend/app/modules/facts_check/llm_processing.py`:
- Around line 143-148: The current JSON parsing block using json.loads(content)
can raise and leave parsed undefined, causing a NameError at
results_list.append(parsed); fix by initializing or handling failures: within
the try/except around json.loads(content) (the block referencing parsed,
parse_err, and logger.error), either set parsed to a safe fallback (e.g., None
or an error marker) before appending or simply continue/skip appending on
exception so results_list.append(parsed) is not called with an undefined
variable; update the code paths around json.loads and results_list.append to
ensure parsed is always defined or not appended when parsing fails.

In `@frontend/app/analyze/results/page.tsx`:
- Around line 81-96: The handleSendMessage function currently calls axios.post
without error handling, which can leave the UI inconsistent; wrap the API call
and subsequent logic in a try/catch around the
axios.post(`${backend_url}/api/chat`, { message }) call inside
handleSendMessage, and in the catch block handle failures by logging the error
and updating state (e.g., append an assistant message indicating the error or
set an error state) while ensuring setMessage("") and setMessages remain
consistent (use the same newMessages variable when appending either the
successful response ({ role: "assistant", content: data.answer }) or an error
fallback message). Also consider disabling the send control or showing a
loading/error indicator while the request is in flight to avoid duplicate sends.

---

Nitpick comments:
In `@frontend/app/analyze/results/page.tsx`:
- Line 22: The code defines backend_url from process.env.NEXT_PUBLIC_API_URL
which can be undefined and cause broken requests; update the initialization to
validate and fail-fast or provide a sane fallback: check NEXT_PUBLIC_API_URL at
module load or app startup (where backend_url is defined) and either throw a
clear error (including the variable name) to stop the build/runtime or set a
documented fallback (e.g., empty string or default host) and log a warning;
reference the backend_url constant in frontend/app/analyze/results/page.tsx and
ensure callers that append "/api/chat" handle the validated value accordingly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 28e61eeb-d89c-4593-a1d0-a3ee2dead10b

📥 Commits

Reviewing files that changed from the base of the PR and between 68a64c3 and 1b69fb1.

📒 Files selected for processing (10)
  • backend/app/logging/logging_config.py
  • backend/app/modules/bias_detection/check_bias.py
  • backend/app/modules/chat/llm_processing.py
  • backend/app/modules/facts_check/llm_processing.py
  • backend/app/modules/facts_check/web_search.py
  • backend/app/modules/langgraph_nodes/judge.py
  • backend/app/modules/langgraph_nodes/sentiment.py
  • backend/app/utils/fact_check_utils.py
  • frontend/app/analyze/loading/page.tsx
  • frontend/app/analyze/results/page.tsx

@ashi2004
Author

Hey @ParagGhatage, @Zahnentferner, please review this PR!

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@backend/app/routes/routes.py`:
- Around line 95-97: In the except block that currently logs the error with
logger.exception and then raises HTTPException(status_code=500, detail="Failed
to generate chat response."), preserve exception causality by re-raising the
HTTPException with explicit chaining from the caught exception (i.e., use "raise
HTTPException(... ) from e"); update the raise statement that references
HTTPException in the routes.py handler so the original exception 'e' remains the
__cause__ for proper traceback chaining.
- Around line 88-89: The /chat route calls blocking functions
search_pinecone(...) and ask_llm(...) directly; change those calls to run in a
thread pool (use asyncio.to_thread) so the async route doesn't block the event
loop. Specifically, replace results = search_pinecone(query) and answer =
ask_llm(query, results, article_context) with awaiting
asyncio.to_thread(search_pinecone, query) and awaiting
asyncio.to_thread(ask_llm, query, results, article_context) (or equivalent
helper that delegates to threads) to match the non-blocking pattern used in the
/process route.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 79b258b0-68eb-4322-a96b-09171b735fe0

📥 Commits

Reviewing files that changed from the base of the PR and between 1b69fb1 and 06df29c.

⛔ Files ignored due to path filters (1)
  • backend/package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (3)
  • backend/app/modules/chat/llm_processing.py
  • backend/app/routes/routes.py
  • frontend/app/analyze/results/page.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
  • backend/app/modules/chat/llm_processing.py

Comment on lines +88 to +89
results = search_pinecone(query)
answer = ask_llm(query, results, article_context)
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "routes.py" | head -20

Repository: AOSSIE-Org/Perspective

Length of output: 96


🏁 Script executed:

wc -l backend/app/routes/routes.py

Repository: AOSSIE-Org/Perspective

Length of output: 97


🏁 Script executed:

sed -n '80,100p' backend/app/routes/routes.py

Repository: AOSSIE-Org/Perspective

Length of output: 735


🏁 Script executed:

rg -B 20 "search_pinecone|ask_llm" backend/app/routes/routes.py | head -80

Repository: AOSSIE-Org/Perspective

Length of output: 2196


🏁 Script executed:

rg -n "@app|@router|async def" backend/app/routes/routes.py | head -30

Repository: AOSSIE-Org/Perspective

Length of output: 330


🏁 Script executed:

cat backend/app/modules/chat/get_rag_data.py

Repository: AOSSIE-Org/Perspective

Length of output: 1533


🏁 Script executed:

cat backend/app/modules/chat/llm_processing.py

Repository: AOSSIE-Org/Perspective

Length of output: 2297


🏁 Script executed:

rg "import asyncio" backend/app/routes/routes.py

Repository: AOSSIE-Org/Perspective

Length of output: 80


Offload blocking I/O operations to thread pool in /chat route.

The /chat route is async but calls search_pinecone() and ask_llm() synchronously. Both perform blocking network I/O (Pinecone and Groq API calls) that will block the event loop and degrade performance under concurrent requests. The /process route in the same file already demonstrates the correct pattern using asyncio.to_thread().

⚙️ Proposed fix
-        results = search_pinecone(query)
-        answer = ask_llm(query, results, article_context)
+        results = await asyncio.to_thread(search_pinecone, query)
+        answer = await asyncio.to_thread(ask_llm, query, results, article_context)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/routes/routes.py` around lines 88 - 89, The /chat route calls
blocking functions search_pinecone(...) and ask_llm(...) directly; change those
calls to run in a thread pool (use asyncio.to_thread) so the async route doesn't
block the event loop. Specifically, replace results = search_pinecone(query) and
answer = ask_llm(query, results, article_context) with awaiting
asyncio.to_thread(search_pinecone, query) and awaiting
asyncio.to_thread(ask_llm, query, results, article_context) (or equivalent
helper that delegates to threads) to match the non-blocking pattern used in the
/process route.
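The pattern the review recommends — offloading blocking calls so an async route does not stall the event loop — looks like this in isolation. The two stub functions stand in for the real Pinecone and Groq calls and are illustrative only:

```python
import asyncio
import time
from typing import List, Optional

def search_pinecone(query: str) -> List[str]:
    """Stand-in for the blocking vector-search call."""
    time.sleep(0.05)  # simulate network I/O
    return [f"doc for {query}"]

def ask_llm(query: str, docs: List[str], article_context: Optional[str] = None) -> str:
    """Stand-in for the blocking LLM completion call."""
    time.sleep(0.05)  # simulate network I/O
    return f"answer to {query!r} using {len(docs)} doc(s)"

async def chat(query: str, article_context: Optional[str] = None) -> str:
    # Each blocking call runs in the default thread pool via asyncio.to_thread,
    # so other coroutines keep running while the network I/O is in flight.
    results = await asyncio.to_thread(search_pinecone, query)
    return await asyncio.to_thread(ask_llm, query, results, article_context)
```

`asyncio.to_thread` (Python 3.9+) submits the function to the loop's default executor and awaits the result; inside the worker the call is still synchronous, which is exactly what the blocking SDK clients expect.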

Comment on lines +95 to +97
except Exception as e:
logger.exception(f"Chat request failed: {e}")
raise HTTPException(status_code=500, detail="Failed to generate chat response.")
Contributor


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

cat -n backend/app/routes/routes.py | sed -n '85,105p'

Repository: AOSSIE-Org/Perspective

Length of output: 627


Preserve exception causality when mapping to HTTP 500.

When re-raising a different exception type in the except block, use explicit exception chaining with from e to preserve the original exception as the cause. This maintains the traceback chain for debugging and follows Python 3 best practices.

🧯 Proposed fix
     except Exception as e:
-        logger.exception(f"Chat request failed: {e}")
-        raise HTTPException(status_code=500, detail="Failed to generate chat response.")
+        logger.exception("Chat request failed")
+        raise HTTPException(
+            status_code=500,
+            detail="Failed to generate chat response.",
+        ) from e
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 97-97: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/routes/routes.py` around lines 95 - 97, In the except block that
currently logs the error with logger.exception and then raises
HTTPException(status_code=500, detail="Failed to generate chat response."),
preserve exception causality by re-raising the HTTPException with explicit
chaining from the caught exception (i.e., use "raise HTTPException(... ) from
e"); update the raise statement that references HTTPException in the routes.py
handler so the original exception 'e' remains the __cause__ for proper traceback
chaining.
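Exception chaining with `from e` preserves the original error as `__cause__`; a minimal illustration outside FastAPI, with a stand-in exception class so the sketch stays self-contained:

```python
class HTTPException(Exception):
    """Minimal stand-in for fastapi.HTTPException, for illustration only."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def chat_handler() -> str:
    try:
        raise RuntimeError("Groq call timed out")  # simulated upstream failure
    except Exception as e:
        # 'from e' stores the RuntimeError in __cause__, so tracebacks show
        # "The above exception was the direct cause of the following exception".
        raise HTTPException(
            status_code=500,
            detail="Failed to generate chat response.",
        ) from e
```

Without `from e`, Python would still record `__context__` implicitly, but the traceback would read as "During handling of the above exception, another exception occurred", which suggests a bug in the handler rather than an intentional translation to HTTP 500.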

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
frontend/app/analyze/results/page.tsx (1)

22-22: Consider stripping trailing slashes from the backend URL.

If NEXT_PUBLIC_API_URL is set with a trailing slash (e.g., http://localhost:8000/), the resulting URL will have a double slash (http://localhost:8000//api/chat). While most servers tolerate this, it's cleaner to normalize.

Proposed fix
-const backendUrl = (process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:8000").trim();
+const backendUrl = (process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:8000").trim().replace(/\/+$/, "");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/app/analyze/results/page.tsx` at line 22, The backendUrl constant
may retain trailing slashes from NEXT_PUBLIC_API_URL, causing double slashes
when concatenating paths; update the initialization of backendUrl (the
backendUrl constant in frontend/app/analyze/results/page.tsx) to normalize the
value by trimming whitespace then removing any trailing slashes (e.g., use a
replace or regex to strip trailing '/' characters) so concatenations like
`${backendUrl}/api/chat` never produce double slashes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@frontend/app/analyze/results/page.tsx`:
- Line 22: The backendUrl constant may retain trailing slashes from
NEXT_PUBLIC_API_URL, causing double slashes when concatenating paths; update the
initialization of backendUrl (the backendUrl constant in
frontend/app/analyze/results/page.tsx) to normalize the value by trimming
whitespace then removing any trailing slashes (e.g., use a replace or regex to
strip trailing '/' characters) so concatenations like `${backendUrl}/api/chat`
never produce double slashes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5ffbce09-39be-474b-883f-d097ba7ebb14

📥 Commits

Reviewing files that changed from the base of the PR and between 06df29c and b48d433.

📒 Files selected for processing (1)
  • frontend/app/analyze/results/page.tsx

Development

Successfully merging this pull request may close these issues.

[BUG]: Inconsistent model config, hardcoded API endpoints, and fragile logging/search handling

1 participant