Unify LLM model config, env-based frontend API routing, and reliability fixes #155
ashi2004 wants to merge 1 commit into AOSSIE-Org:main from
Conversation
…earch error handling
📝 Walkthrough

Standardizes LLM model usage to llama-3.1-8b-instant across the backend, routes frontend API calls through NEXT_PUBLIC_API_URL, and applies reliability fixes.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Backend
    participant GroqLLM
    Client->>Backend: POST /chat { message, article_context? }
    Backend->>Backend: validate & strip message<br/>normalize article_context
    Backend->>GroqLLM: ask_llm(question, docs, article_context)
    GroqLLM-->>Backend: completion (answer)
    Backend-->>Client: 200 { answer } / 400 / 500
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
frontend/app/analyze/results/page.tsx (1)
81-96: ⚠️ Potential issue | 🟠 Major

Add error handling for the chat API call.

The `axios.post` call has no error handling. If the request fails (network error, server error, timeout), the promise rejection will be unhandled, potentially leaving the UI in an inconsistent state with the user's message displayed but no response.

🛡️ Proposed fix with error handling

```diff
 async function handleSendMessage(e: React.FormEvent) {
   e.preventDefault();
   if (!message.trim()) return;

   const newMessages = [...messages, { role: "user", content: message }];
   setMessages(newMessages);
   setMessage("");

-  const res = await axios.post(`${backend_url}/api/chat`, {
-    message: message,
-  });
-  const data = res.data;
-
-  console.log(data);
-
-  // 🔹 Step 2: Append LLM's response
-  setMessages([...newMessages, { role: "assistant", content: data.answer }]);
+  try {
+    const res = await axios.post(`${backend_url}/api/chat`, {
+      message: message,
+    });
+    const data = res.data;
+    setMessages([...newMessages, { role: "assistant", content: data.answer }]);
+  } catch (error) {
+    console.error("Chat API error:", error);
+    setMessages([
+      ...newMessages,
+      { role: "assistant", content: "Sorry, something went wrong. Please try again." },
+    ]);
+  }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@frontend/app/analyze/results/page.tsx` around lines 81 - 96, The handleSendMessage function currently calls axios.post without error handling, which can leave the UI inconsistent; wrap the API call and subsequent logic in a try/catch around the axios.post(`${backend_url}/api/chat`, { message }) call inside handleSendMessage, and in the catch block handle failures by logging the error and updating state (e.g., append an assistant message indicating the error or set an error state) while ensuring setMessage("") and setMessages remain consistent (use the same newMessages variable when appending either the successful response ({ role: "assistant", content: data.answer }) or an error fallback message). Also consider disabling the send control or showing a loading/error indicator while the request is in flight to avoid duplicate sends.

backend/app/modules/facts_check/llm_processing.py (1)
143-148: ⚠️ Potential issue | 🔴 Critical

`NameError` when JSON parsing fails — `parsed` is undefined.

When `json.loads(content)` raises an exception, the error is logged but `parsed` remains undefined. Line 148 then attempts to append `parsed`, causing a `NameError` that propagates up and aborts the entire verification loop.

🐛 Proposed fix to handle parse failures gracefully

```diff
 # Try parsing the JSON response
 try:
     parsed = json.loads(content)
 except Exception as parse_err:
     logger.error(f"LLM JSON parse error: {parse_err}")
+    parsed = {
+        "verdict": "Unknown",
+        "explanation": f"Failed to parse LLM response: {parse_err}",
+        "original_claim": claim,
+        "source_link": source,
+    }
 results_list.append(parsed)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/modules/facts_check/llm_processing.py` around lines 143 - 148, The current JSON parsing block using json.loads(content) can raise and leave parsed undefined, causing a NameError at results_list.append(parsed); fix by initializing or handling failures: within the try/except around json.loads(content) (the block referencing parsed, parse_err, and logger.error), either set parsed to a safe fallback (e.g., None or an error marker) before appending or simply continue/skip appending on exception so results_list.append(parsed) is not called with an undefined variable; update the code paths around json.loads and results_list.append to ensure parsed is always defined or not appended when parsing fails.
🧹 Nitpick comments (1)
frontend/app/analyze/results/page.tsx (1)
22-22: Consider validating `NEXT_PUBLIC_API_URL` at build/runtime.

If the environment variable is unset, `backend_url` will be `undefined`, resulting in requests to `"undefined/api/chat"`. Consider adding a fallback or build-time validation.

♻️ Proposed validation

```diff
-const backend_url = process.env.NEXT_PUBLIC_API_URL;
+const backend_url = process.env.NEXT_PUBLIC_API_URL;
+if (!backend_url) {
+  console.error("NEXT_PUBLIC_API_URL is not configured");
+}
```

Alternatively, add the variable to your `.env.example` and document it as required.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/analyze/results/page.tsx` at line 22, The code defines backend_url from process.env.NEXT_PUBLIC_API_URL which can be undefined and cause broken requests; update the initialization to validate and fail-fast or provide a sane fallback: check NEXT_PUBLIC_API_URL at module load or app startup (where backend_url is defined) and either throw a clear error (including the variable name) to stop the build/runtime or set a documented fallback (e.g., empty string or default host) and log a warning; reference the backend_url constant in frontend/app/analyze/results/page.tsx and ensure callers that append "/api/chat" handle the validated value accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/modules/facts_check/web_search.py`:
- Around line 38-46: The code assumes res["items"][0] exists and will IndexError
when items is an empty list; update the check in the web_search logic to handle
an empty items list (e.g., replace the current if "items" not in res guard with
a check like if not res.get("items") or len(res["items"]) == 0) and raise or
return a clear error (including res.get("error", {}).get("message", "No search
results")) before accessing res["items"][0]; ensure the variables first, title,
link, snippet are only populated after this guard.
- Around line 32-35: The GET call building the Google Custom Search URL using
f-strings should be hardened: switch to using a params dict including "key",
"cx" and "q" (using the existing GOOGLE_SEARCH and query variables), call
requests.get with a sensible timeout, immediately call
response.raise_for_status() to surface HTTP errors, parse JSON into res, and
then validate that "items" exists and is a non-empty list before accessing
res["items"][0] (return or handle the no-results case explicitly to avoid
IndexError); locate and update the code around the variables/results/res/query
in web_search.py to implement these checks.
In `@backend/app/utils/fact_check_utils.py`:
- Around line 47-49: The status check is using the wrong variable: it inspects
state.get("status") but the extraction result is stored in result, causing an
early failure; update the conditional to check result.get("status") != "success"
(and keep the existing logger.error and return path) so the function uses the
actual claim extraction result (`result`) instead of `state`.
In `@frontend/app/analyze/loading/page.tsx`:
- Line 19: Validate NEXT_PUBLIC_API_URL before using it: ensure the module-level
const backend_url is checked for presence and a valid base URL, and if missing
throw or return an explicit error instead of constructing requests like
`${backend_url}/api/process`; update the fetch calls that build URLs (the code
that uses backend_url around the API call sites in page.tsx — the construct used
in the try/catch near the fetch at lines ~77–80) to first guard backend_url and
produce a clear error message (or use a validated URL object) so malformed
requests like "undefined/api/process" cannot be made; apply the same guard
pattern to the analogous usage in frontend/app/analyze/results/page.tsx where
NEXT_PUBLIC_API_URL is read and used.
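Taken together, the two `web_search.py` suggestions above amount to something like the following sketch (the endpoint constant, function signature, and the `ValueError` choice are assumptions for illustration, not the project's actual code):

```python
import requests

# Assumed Google Custom Search endpoint; the real project reads this from config.
GOOGLE_SEARCH = "https://www.googleapis.com/customsearch/v1"


def web_search(query: str, api_key: str, cx: str) -> dict:
    # Build the request with a params dict instead of f-string URL concatenation.
    response = requests.get(
        GOOGLE_SEARCH,
        params={"key": api_key, "cx": cx, "q": query},
        timeout=10,  # avoid hanging forever on a slow upstream
    )
    response.raise_for_status()  # surface HTTP errors immediately
    res = response.json()

    items = res.get("items")
    if not items:  # covers both a missing "items" key and an empty list
        raise ValueError(res.get("error", {}).get("message", "No search results"))

    # Only touch items[0] after the guard above, so no IndexError is possible.
    first = items[0]
    return {
        "title": first.get("title"),
        "link": first.get("link"),
        "snippet": first.get("snippet"),
    }
```

The guard on `items` is the key change: `res.get("items")` is falsy for both a missing key and an empty list, so the no-results case is handled before any indexing.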
---
Outside diff comments:
In `@backend/app/modules/facts_check/llm_processing.py`:
- Around line 143-148: The current JSON parsing block using json.loads(content)
can raise and leave parsed undefined, causing a NameError at
results_list.append(parsed); fix by initializing or handling failures: within
the try/except around json.loads(content) (the block referencing parsed,
parse_err, and logger.error), either set parsed to a safe fallback (e.g., None
or an error marker) before appending or simply continue/skip appending on
exception so results_list.append(parsed) is not called with an undefined
variable; update the code paths around json.loads and results_list.append to
ensure parsed is always defined or not appended when parsing fails.
In `@frontend/app/analyze/results/page.tsx`:
- Around line 81-96: The handleSendMessage function currently calls axios.post
without error handling, which can leave the UI inconsistent; wrap the API call
and subsequent logic in a try/catch around the
axios.post(`${backend_url}/api/chat`, { message }) call inside
handleSendMessage, and in the catch block handle failures by logging the error
and updating state (e.g., append an assistant message indicating the error or
set an error state) while ensuring setMessage("") and setMessages remain
consistent (use the same newMessages variable when appending either the
successful response ({ role: "assistant", content: data.answer }) or an error
fallback message). Also consider disabling the send control or showing a
loading/error indicator while the request is in flight to avoid duplicate sends.
---
Nitpick comments:
In `@frontend/app/analyze/results/page.tsx`:
- Line 22: The code defines backend_url from process.env.NEXT_PUBLIC_API_URL
which can be undefined and cause broken requests; update the initialization to
validate and fail-fast or provide a sane fallback: check NEXT_PUBLIC_API_URL at
module load or app startup (where backend_url is defined) and either throw a
clear error (including the variable name) to stop the build/runtime or set a
documented fallback (e.g., empty string or default host) and log a warning;
reference the backend_url constant in frontend/app/analyze/results/page.tsx and
ensure callers that append "/api/chat" handle the validated value accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 28e61eeb-d89c-4593-a1d0-a3ee2dead10b
📒 Files selected for processing (10)
- backend/app/logging/logging_config.py
- backend/app/modules/bias_detection/check_bias.py
- backend/app/modules/chat/llm_processing.py
- backend/app/modules/facts_check/llm_processing.py
- backend/app/modules/facts_check/web_search.py
- backend/app/modules/langgraph_nodes/judge.py
- backend/app/modules/langgraph_nodes/sentiment.py
- backend/app/utils/fact_check_utils.py
- frontend/app/analyze/loading/page.tsx
- frontend/app/analyze/results/page.tsx
Hey @ParagGhatage, @Zahnentferner please review this PR!
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/routes/routes.py`:
- Around line 95-97: In the except block that currently logs the error with
logger.exception and then raises HTTPException(status_code=500, detail="Failed
to generate chat response."), preserve exception causality by re-raising the
HTTPException with explicit chaining from the caught exception (i.e., use "raise
HTTPException(... ) from e"); update the raise statement that references
HTTPException in the routes.py handler so the original exception 'e' remains the
__cause__ for proper traceback chaining.
- Around line 88-89: The /chat route calls blocking functions
search_pinecone(...) and ask_llm(...) directly; change those calls to run in a
thread pool (use asyncio.to_thread) so the async route doesn't block the event
loop. Specifically, replace results = search_pinecone(query) and answer =
ask_llm(query, results, article_context) with awaiting
asyncio.to_thread(search_pinecone, query) and awaiting
asyncio.to_thread(ask_llm, query, results, article_context) (or equivalent
helper that delegates to threads) to match the non-blocking pattern used in the
/process route.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 79b258b0-68eb-4322-a96b-09171b735fe0
⛔ Files ignored due to path filters (1)
`backend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (3)
- backend/app/modules/chat/llm_processing.py
- backend/app/routes/routes.py
- frontend/app/analyze/results/page.tsx
🚧 Files skipped from review as they are similar to previous changes (1)
- backend/app/modules/chat/llm_processing.py
backend/app/routes/routes.py (Outdated)

```python
results = search_pinecone(query)
answer = ask_llm(query, results, article_context)
```
🧩 Analysis chain
🏁 Scripts executed (repository: AOSSIE-Org/Perspective):

```shell
find . -type f -name "routes.py" | head -20
wc -l backend/app/routes/routes.py
sed -n '80,100p' backend/app/routes/routes.py
rg -B 20 "search_pinecone|ask_llm" backend/app/routes/routes.py | head -80
rg -n "@app|@router|async def" backend/app/routes/routes.py | head -30
cat backend/app/modules/chat/get_rag_data.py
cat backend/app/modules/chat/llm_processing.py
rg "import asyncio" backend/app/routes/routes.py
```
Offload blocking I/O operations to thread pool in /chat route.
The /chat route is async but calls search_pinecone() and ask_llm() synchronously. Both perform blocking network I/O (Pinecone and Groq API calls) that will block the event loop and degrade performance under concurrent requests. The /process route in the same file already demonstrates the correct pattern using asyncio.to_thread().
⚙️ Proposed fix

```diff
-results = search_pinecone(query)
-answer = ask_llm(query, results, article_context)
+results = await asyncio.to_thread(search_pinecone, query)
+answer = await asyncio.to_thread(ask_llm, query, results, article_context)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
results = await asyncio.to_thread(search_pinecone, query)
answer = await asyncio.to_thread(ask_llm, query, results, article_context)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/routes/routes.py` around lines 88 - 89, The /chat route calls
blocking functions search_pinecone(...) and ask_llm(...) directly; change those
calls to run in a thread pool (use asyncio.to_thread) so the async route doesn't
block the event loop. Specifically, replace results = search_pinecone(query) and
answer = ask_llm(query, results, article_context) with awaiting
asyncio.to_thread(search_pinecone, query) and awaiting
asyncio.to_thread(ask_llm, query, results, article_context) (or equivalent
helper that delegates to threads) to match the non-blocking pattern used in the
/process route.
backend/app/routes/routes.py (Outdated)

```python
except Exception as e:
    logger.exception(f"Chat request failed: {e}")
    raise HTTPException(status_code=500, detail="Failed to generate chat response.")
```
🧩 Analysis chain
🏁 Script executed (repository: AOSSIE-Org/Perspective):

```shell
cat -n backend/app/routes/routes.py | sed -n '85,105p'
```
Preserve exception causality when mapping to HTTP 500.
When re-raising a different exception type in the except block, use explicit exception chaining with from e to preserve the original exception as the cause. This maintains the traceback chain for debugging and follows Python 3 best practices.
🧯 Proposed fix

```diff
 except Exception as e:
-    logger.exception(f"Chat request failed: {e}")
-    raise HTTPException(status_code=500, detail="Failed to generate chat response.")
+    logger.exception("Chat request failed")
+    raise HTTPException(
+        status_code=500,
+        detail="Failed to generate chat response.",
+    ) from e
```

🧰 Tools
🪛 Ruff (0.15.6)
[warning] 97-97: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/app/routes/routes.py` around lines 95 - 97, In the except block that
currently logs the error with logger.exception and then raises
HTTPException(status_code=500, detail="Failed to generate chat response."),
preserve exception causality by re-raising the HTTPException with explicit
chaining from the caught exception (i.e., use "raise HTTPException(... ) from
e"); update the raise statement that references HTTPException in the routes.py
handler so the original exception 'e' remains the __cause__ for proper traceback
chaining.
🧹 Nitpick comments (1)
frontend/app/analyze/results/page.tsx (1)
22-22: Consider stripping trailing slashes from the backend URL.

If `NEXT_PUBLIC_API_URL` is set with a trailing slash (e.g., `http://localhost:8000/`), the resulting URL will have a double slash (`http://localhost:8000//api/chat`). While most servers tolerate this, it's cleaner to normalize.

Proposed fix

```diff
-const backendUrl = (process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:8000").trim();
+const backendUrl = (process.env.NEXT_PUBLIC_API_URL ?? "http://localhost:8000").trim().replace(/\/+$/, "");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/app/analyze/results/page.tsx` at line 22, The backendUrl constant may retain trailing slashes from NEXT_PUBLIC_API_URL, causing double slashes when concatenating paths; update the initialization of backendUrl (the backendUrl constant in frontend/app/analyze/results/page.tsx) to normalize the value by trimming whitespace then removing any trailing slashes (e.g., use a replace or regex to strip trailing '/' characters) so concatenations like `${backendUrl}/api/chat` never produce double slashes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@frontend/app/analyze/results/page.tsx`:
- Line 22: The backendUrl constant may retain trailing slashes from
NEXT_PUBLIC_API_URL, causing double slashes when concatenating paths; update the
initialization of backendUrl (the backendUrl constant in
frontend/app/analyze/results/page.tsx) to normalize the value by trimming
whitespace then removing any trailing slashes (e.g., use a replace or regex to
strip trailing '/' characters) so concatenations like `${backendUrl}/api/chat`
never produce double slashes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5ffbce09-39be-474b-883f-d097ba7ebb14
📒 Files selected for processing (1)
frontend/app/analyze/results/page.tsx
Fixes #154
What I changed
- Unified the LLM model config (llama-3.1-8b-instant) everywhere in backend so behavior stays consistent
- Frontend API routing now uses NEXT_PUBLIC_API_URL

Why I did this
Testing I did
- /api/process and /api/bias with a real article → working fine
- /api/chat from frontend → response coming correctly
- Checked backend logs in app.log

Screenshots
Backend logs

Frontend result

Summary by CodeRabbit
New Features
Bug Fixes
Improvements
Observability