
Improve Article Tab Layout & Make AI Chat Use Article Context#157

Open
ashi2004 wants to merge 3 commits into AOSSIE-Org:main from ashi2004:feat/chat-article-context-ui-fix

Conversation


@ashi2004 ashi2004 commented Mar 23, 2026

Addressed Issues:

Fixes #156

Screenshots/Recordings:

[Two screenshots attached]

What’s Changed

  • Cleaned up the Article tab UI. It was previously unstructured, with text scattered across the page; it now has clearer sectioning, better readability, and metadata at the top.
  • Fixed a bug where the AI chat was not using the article context for its answers. The AI Discussion now pulls in the article being analyzed, so responses are far more relevant.
  • Fixed a React hook order bug that was causing console warnings.

How to Test

  • Go to the Analyze Results page.
  • Click the Article tab and check that the layout looks organized rather than scattered.
  • Try the AI Discussion; responses should now reference the article content instead of generic information.

Additional Notes:

This PR builds on top of #155, which unifies model usage across files and updates the deprecated models.

AI Usage Disclosure:

We encourage contributors to use AI tools responsibly when creating Pull Requests. While AI can be a valuable aid, it is essential to ensure that your contributions meet the task requirements, build successfully, include relevant tests, and pass all linters. Submissions that do not meet these standards may be closed without warning to maintain the quality and integrity of the project. Please take the time to understand the changes you are proposing and their impact. AI slop is strongly discouraged and may lead to banning and blocking. Do not spam our repos with AI slop.

Check one of the checkboxes below:

  • This PR does not contain AI-generated code at all.
  • This PR contains AI-generated code. I have read the AI Usage Policy and this PR complies with this policy. I have tested the code locally and I am responsible for it.

Checklist

  • My PR addresses a single issue, fixes a single bug or makes a single improvement.
  • My code follows the project's code style and conventions
  • If applicable, I have made corresponding changes or additions to the documentation
  • If applicable, I have made corresponding changes or additions to tests
  • My changes generate no new warnings or errors
  • I have joined the Discord server and I will share a link to this PR with the project maintainers there
  • I have read the Contribution Guidelines
  • Once I submit my PR, CodeRabbit AI will automatically review it and I will address CodeRabbit's comments.
  • I have filled this PR template completely and carefully, and I understand that my PR may be closed without review otherwise.

Summary by CodeRabbit

Release Notes

  • New Features

    • Chat now accepts article context for more contextually relevant responses
    • Article analysis displays enhanced visualization with highlights and computed sections
  • Bug Fixes

    • Improved error handling and user-friendly messages for chat failures
    • Enhanced text encoding and validation for chat input
  • Refactor

    • Updated underlying LLM model for improved performance
    • Made backend configuration environment-driven for better deployment flexibility


coderabbitai bot commented Mar 23, 2026

📝 Walkthrough

Walkthrough

Updates LLM models from gemma2-9b-it to llama-3.1-8b-instant across multiple backend modules, adds UTF-8 encoding to logging, introduces article context parameter to chat functionality, enhances Google search error handling with validation checks, and restructures frontend article rendering with improved layout and statistics.

Changes

Cohort / File(s) Summary
LLM Model Migration
backend/app/modules/bias_detection/check_bias.py, backend/app/modules/chat/llm_processing.py, backend/app/modules/facts_check/llm_processing.py, backend/app/modules/langgraph_nodes/judge.py, backend/app/modules/langgraph_nodes/sentiment.py
Replaced model parameter from "gemma2-9b-it" to "llama-3.1-8b-instant" across five Groq chat completion integrations while preserving all other request parameters and control flow.
Logging Configuration
backend/app/logging/logging_config.py
Enhanced UTF-8 support: console handler attempts to reconfigure stream encoding, file handler explicitly uses UTF-8 encoding.
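The UTF-8 handling described above can be sketched roughly as follows. This is a hypothetical `setup_logger`; the log file name and format string are assumptions for illustration, not the project's actual values:

```python
import logging
import sys

def setup_logger(name: str, log_file: str = "app.log") -> logging.Logger:
    """Configure a logger whose console and file handlers are UTF-8 aware."""
    logger = logging.getLogger(name)
    if logger.handlers:          # avoid attaching duplicate handlers on re-import
        return logger
    logger.setLevel(logging.INFO)
    fmt = logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")

    # Console: attempt to reconfigure the stream to UTF-8. Streams that are
    # not TextIOWrapper instances (pipes, test captures) lack reconfigure(),
    # so guard with hasattr instead of assuming it exists.
    if hasattr(sys.stdout, "reconfigure"):
        sys.stdout.reconfigure(encoding="utf-8")
    console = logging.StreamHandler(sys.stdout)
    console.setFormatter(fmt)
    logger.addHandler(console)

    # File: make the encoding explicit instead of relying on the locale.
    file_handler = logging.FileHandler(log_file, encoding="utf-8")
    file_handler.setFormatter(fmt)
    logger.addHandler(file_handler)
    return logger
```

Making the file handler's encoding explicit matters on Windows, where the default locale encoding is often not UTF-8 and emoji or non-Latin characters in log messages would otherwise raise `UnicodeEncodeError`.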
Chat Context Integration
backend/app/modules/chat/llm_processing.py, backend/app/routes/routes.py
Extended chat to accept optional article_context parameter; build_context() now strips whitespace; ask_llm() combines RAG context with article context when both available; endpoint validates/sanitizes inputs with HTTP error handling.
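The context-combination behavior summarized above might look roughly like this. It is a sketch only; the function names and the `text` field are assumptions about the document shape, not the project's exact code:

```python
def build_context(docs: list[dict]) -> str:
    """Join retrieved document snippets, stripping stray whitespace."""
    parts = [d.get("text", "").strip() for d in docs]
    return "\n\n".join(p for p in parts if p)

def combine_contexts(rag_context: str, article_context: str = "") -> str:
    """Merge RAG context with optional article context; either may be empty."""
    rag_context = rag_context.strip()
    article_context = article_context.strip()
    if rag_context and article_context:
        # Both available: build a composite prompt context.
        return (f"Article:\n{article_context}\n\n"
                f"Retrieved Insights:\n{rag_context}")
    # Fall back to whichever single context is available, else "".
    return article_context or rag_context
```

Defaulting `article_context` to `""` keeps the endpoint backward compatible: existing callers that send only a message still get plain RAG behavior.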
Web Search Error Handling
backend/app/modules/facts_check/web_search.py
Wrapped Google Custom Search request/parsing in try/except; added validation to check for items field in response and raise ValueError with error message if missing.
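The validation described above can be sketched like so. The `key`, `cx`, and `q` parameters are real Custom Search query fields, but the function shape and the injectable `http_get` parameter are illustrative assumptions, not the module's actual signature:

```python
import requests

GOOGLE_ENDPOINT = "https://www.googleapis.com/customsearch/v1"

def search_google(query: str, api_key: str, cx: str,
                  http_get=requests.get) -> list[dict]:
    """Run a Custom Search query and validate the response payload."""
    try:
        resp = http_get(
            GOOGLE_ENDPOINT,
            params={"key": api_key, "cx": cx, "q": query},
            timeout=10,  # avoid hanging indefinitely on an unresponsive API
        )
        resp.raise_for_status()
        data = resp.json()
    except requests.RequestException as e:
        raise ValueError(f"Search Google Error: {e}") from e
    if "items" not in data:
        # Google returns an "error" object on quota or key problems.
        raise ValueError(
            data.get("error", {}).get("message", "no items in response"))
    return data["items"]
```

Checking for `items` is important because the API returns HTTP 200 with an empty body (no `items` key) when a query matches nothing, and a structured `error` object when the key or quota is invalid.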
Fact-Check Logging
backend/app/utils/fact_check_utils.py
Removed emoji/prefix characters from log statements; added informational logs for claim verification count and pipeline completion; added explicit error log when no search results collected.
Frontend Backend URL Configuration
frontend/app/analyze/loading/page.tsx
Replaced hard-coded Hugging Face Spaces endpoint URLs with environment-driven process.env.NEXT_PUBLIC_API_URL in concurrent API requests.
Frontend Article Layout & Chat Enhancement
frontend/app/analyze/results/page.tsx
Added isChatLoading state with input validation; implemented configurable backend URL with fallback; changed chat payload to include article_context from cleaned_text; restructured Article tab to display highlights, sections with derived titles, and metadata (word count, read time); updated verdict badge styling; enhanced error handling with fallback messages.
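The word-count and read-time metadata mentioned above is a simple derived computation. The actual code is TypeScript in page.tsx, but the logic can be sketched in Python; the 200 words-per-minute constant is an assumed average reading speed, not necessarily what the PR uses:

```python
from math import ceil

WORDS_PER_MINUTE = 200  # assumed average adult reading speed

def article_stats(text: str) -> dict:
    """Compute the word count and estimated read time for the metadata header."""
    word_count = len(text.split())
    return {
        "word_count": word_count,
        # Floor of 1 so very short articles don't display "0 min read".
        "read_time_min": max(1, ceil(word_count / WORDS_PER_MINUTE)),
    }
```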

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant Frontend
    participant Backend
    participant LLM as Groq LLM

    User->>Frontend: Submit chat message + article context
    Frontend->>Frontend: Trim input, validate non-empty
    Frontend->>Frontend: Set isChatLoading = true
    Frontend->>Backend: POST /chat {message, article_context}
    
    Backend->>Backend: Validate & sanitize message
    Backend->>Backend: Normalize article_context (default "")
    Backend->>Backend: Build RAG context from docs
    Backend->>Backend: Combine RAG + article context
    
    alt Both contexts available
        Backend->>Backend: Create composite context<br/>"Article: ... Retrieved Insights: ..."
    else Only RAG context
        Backend->>Backend: Use RAG context only
    else Only article context
        Backend->>Backend: Use article context only
    else No context
        Backend->>Backend: Return empty response message
    end
    
    Backend->>LLM: Request with combined context
    LLM->>Backend: Return answer
    Backend->>Frontend: HTTP 200 {answer}
    
    Frontend->>Frontend: Set isChatLoading = false
    Frontend->>User: Display answer
    
    alt Request fails
        Backend->>Frontend: HTTP error
        Frontend->>Frontend: Create fallback error message
        Frontend->>User: Display error message
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Poem

🐰 Hoppy code hops to a newer, faster model,
With context now flowing from article to dossier,
UTF-8 whispers in the logger's ear,
The frontend leaps with structured flair and cheer,
Error-handling springs protect the search so dear! 🌟

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 23.08%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Out of Scope Changes check — ❓ Inconclusive: The PR includes additional supporting changes (model standardization, logging improvements, error handling) that enable the primary objectives but are not directly scoped by issue #156. Resolution: clarify whether they align with the PR #155 dependencies mentioned in the objectives or require explicit scoping.

✅ Passed checks (3 passed)

  • Description Check — ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title check — ✅ Passed: The title 'Improve Article Tab Layout & Make AI Chat Use Article Context' directly and specifically summarizes the main changes: improved Article tab UI and integration of article context in AI chat.
  • Linked Issues check — ✅ Passed: The PR addresses all primary objectives from issue #156: improved Article tab layout with better visual hierarchy, metadata at top, and AI chat now using article context in responses.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (2)
backend/app/modules/chat/llm_processing.py (1)

17-19: Docstring is outdated.

The function signature in the docstring still shows ask_llm(question: str, docs: list[dict]) but the actual signature now includes article_context: str = "". Update the docstring to reflect the new parameter.

📝 Proposed fix
-    ask_llm(question: str, docs: list[dict]) -> str:
+    ask_llm(question: str, docs: list[dict], article_context: str = "") -> str:
         Builds context from the provided documents, sends it along with the
-        question to the LLM, and returns the model's answer.
+        question to the LLM, and returns the model's answer. Optionally
+        incorporates article context for more relevant responses.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/modules/chat/llm_processing.py` around lines 17 - 19, The ask_llm
docstring is stale: update it to reflect the current signature ask_llm(question:
str, docs: list[dict], article_context: str = "") by documenting the new
article_context parameter and its purpose (optional additional context to
prepend/augment the docs), and adjust any parameter descriptions/examples in the
docstring accordingly so they match the function ask_llm, including types and
default value.
backend/app/modules/facts_check/llm_processing.py (1)

131-131: LGTM on model change, but note pre-existing bug in error handling.

The model update is correct. However, there's a pre-existing bug in the JSON parsing block (lines 143-148): if json.loads(content) fails, parsed is never assigned, causing an UnboundLocalError at line 148.

Consider fixing this while you're in the file:

🐛 Proposed fix for JSON parse error handling
             # Try parsing the JSON response
             try:
                 parsed = json.loads(content)
             except Exception as parse_err:
                 logger.error(f"LLM JSON parse error: {parse_err}")
+                parsed = {
+                    "verdict": "Unknown",
+                    "explanation": f"Failed to parse LLM response: {parse_err}",
+                    "original_claim": claim,
+                    "source_link": source,
+                }

             results_list.append(parsed)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/modules/facts_check/llm_processing.py` at line 131, The JSON
parsing block uses json.loads(content) but doesn’t set parsed on exception,
causing an UnboundLocalError when parsed is referenced later; modify the block
around the json.loads call (variable names: content, parsed) to ensure parsed is
always assigned—either initialize parsed = None before the try, or assign parsed
= {}/appropriate fallback in the except, and log the JSONDecodeError (or
re-raise a clear exception) so subsequent code that uses parsed (the later
reference) won’t crash with UnboundLocalError; update the error handling to
return/skip processing when parsed is invalid if that matches existing flow.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c4a8cffe-1b7b-42fa-8a63-186c8ab66276

📥 Commits

Reviewing files that changed from the base of the PR and between 68a64c3 and 6deed46.

📒 Files selected for processing (11)
  • backend/app/logging/logging_config.py
  • backend/app/modules/bias_detection/check_bias.py
  • backend/app/modules/chat/llm_processing.py
  • backend/app/modules/facts_check/llm_processing.py
  • backend/app/modules/facts_check/web_search.py
  • backend/app/modules/langgraph_nodes/judge.py
  • backend/app/modules/langgraph_nodes/sentiment.py
  • backend/app/routes/routes.py
  • backend/app/utils/fact_check_utils.py
  • frontend/app/analyze/loading/page.tsx
  • frontend/app/analyze/results/page.tsx

Comment on lines +32 to +34
results = requests.get(
    f"https://www.googleapis.com/customsearch/v1?key={GOOGLE_SEARCH}&cx=f637ab77b5d8b4a3c&q={query}"
)

⚠️ Potential issue | 🟡 Minor

Add a timeout to the HTTP request.

The requests.get call has no timeout, which can cause the request to hang indefinitely if the Google API is unresponsive. This is a reliability concern flagged by static analysis (S113).

🛡️ Proposed fix to add timeout
         results = requests.get(
-            f"https://www.googleapis.com/customsearch/v1?key={GOOGLE_SEARCH}&cx=f637ab77b5d8b4a3c&q={query}"
+            f"https://www.googleapis.com/customsearch/v1?key={GOOGLE_SEARCH}&cx=f637ab77b5d8b4a3c&q={query}",
+            timeout=10
         )
🧰 Tools
🪛 Ruff (0.15.6)

[error] 32-32: Probable use of requests call without timeout

(S113)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/modules/facts_check/web_search.py` around lines 32 - 34, The HTTP
call assigning to results uses requests.get without a timeout; update the
requests.get call in web_search.py (the line that builds results =
requests.get(...)) to include a reasonable timeout (e.g., timeout=10 or a
configurable constant) so the request cannot hang indefinitely, and ensure any
surrounding error handling will catch requests.exceptions.Timeout if needed.

Comment on lines +49 to +51
except Exception as e:
    print(f"Search Google Error: {e}")
    raise

🛠️ Refactor suggestion | 🟠 Major

Use logger instead of print for error output.

The module uses print() for error logging while other modules in the codebase use the configured logger. This inconsistency affects log aggregation and monitoring.

♻️ Proposed fix
+from app.logging.logging_config import setup_logger
+
+logger = setup_logger(__name__)
+
 def search_google(query):
     try:
         # ... existing code ...
     except Exception as e:
-        print(f"Search Google Error: {e}")
+        logger.error(f"Search Google Error: {e}")
         raise
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/modules/facts_check/web_search.py` around lines 49 - 51, Replace
the print-based error reporting in the web_search module's exception handler
(the except Exception as e block that prints "Search Google Error: {e}") with
the project logger used elsewhere: call logger.exception or logger.error
(including exc_info) to log the exception and context instead of print, keeping
the same message text and the exception variable e so logs are captured by the
app's logging/monitoring pipeline.

Comment on lines +88 to +89
results = search_pinecone(query)
answer = ask_llm(query, results, article_context)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

sed -n '79,97p' backend/app/routes/routes.py

printf '\n--- search_pinecone definition ---\n'
rg -n -C3 '^(async\s+def|def)\s+search_pinecone\b' backend/app/modules/chat

printf '\n--- ask_llm definition ---\n'
rg -n -C3 '^(async\s+def|def)\s+ask_llm\b' backend/app/modules/chat

Repository: AOSSIE-Org/Perspective

Length of output: 1755


Wrap synchronous I/O calls in asyncio.to_thread() to prevent event loop blocking.

Lines 88–89 invoke search_pinecone() and ask_llm() synchronously inside an async route. Both perform remote I/O (Pinecone queries and LLM calls) and will block the event loop, delaying other requests.

Suggested fix
-        results = search_pinecone(query)
-        answer = ask_llm(query, results, article_context)
+        results = await asyncio.to_thread(search_pinecone, query)
+        answer = await asyncio.to_thread(ask_llm, query, results, article_context)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/routes/routes.py` around lines 88 - 89, The calls to
search_pinecone and ask_llm are synchronous and will block the async route; wrap
them in asyncio.to_thread (or loop.run_in_executor) and await the results so the
event loop isn't blocked. Specifically, replace direct calls to
search_pinecone(query) and ask_llm(query, results, article_context) in the async
handler with awaited asyncio.to_thread(...) invocations (e.g., results = await
asyncio.to_thread(search_pinecone, query); answer = await
asyncio.to_thread(ask_llm, query, results, article_context)), preserving
existing error handling and variable names.

Comment on lines 47 to 49
if state.get("status") != "success":
    logger.error("Claim extraction failed.")
    return [], "Claim extraction failed."

⚠️ Potential issue | 🟠 Major

Bug: Checking wrong variable for extraction status.

Line 47 checks state.get("status") but should check result.get("status") since result is the return value from run_claim_extractor_sdk. The input state won't have a status field, so this condition will never be true when extraction fails.

🐛 Proposed fix
-    if state.get("status") != "success":
+    if result.get("status") != "success":
         logger.error("Claim extraction failed.")
         return [], "Claim extraction failed."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/app/utils/fact_check_utils.py` around lines 47 - 49, In the claim
extraction path inside fact_check_utils (where run_claim_extractor_sdk is
called), change the conditional that checks the extraction outcome to inspect
result.get("status") instead of state.get("status"); update the error branch
that currently logs "Claim extraction failed." and returns to use result (e.g.,
include result or its error field) so failures from run_claim_extractor_sdk are
correctly detected and reported (look for the block referencing
run_claim_extractor_sdk, the local variable result, and the current check using
state.get("status")).

Comment on lines 76 to 82
const [processRes, biasRes] = await Promise.all([
  axios.post(`${backend_url}/api/process`, {
    url: storedUrl,
  }),
  axios.post(`${backend_url}/api/bias`, {
    url: storedUrl,
  }),

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

printf 'Loading page requests:\n'
sed -n '75,83p' frontend/app/analyze/loading/page.tsx

printf '\nChat request config for comparison:\n'
sed -n '92,100p' frontend/app/analyze/results/page.tsx

Repository: AOSSIE-Org/Perspective

Length of output: 686


Add a timeout to the analysis requests.

The POST calls to /api/process and /api/bias on lines 77–80 lack timeout configuration. If either request hangs, Promise.all never settles and the page stays on the spinner indefinitely because the redirect only runs after axios rejects. The codebase already uses a 45s timeout for the chat request in results/page.tsx (line 97); apply the same timeout here for consistency and to make this failure mode recoverable.

Suggested fix
+          const requestConfig = { timeout: 45000 };
           const [processRes, biasRes] = await Promise.all([
-            axios.post(`${backend_url}/api/process`, {
-              url: storedUrl,
-            }),
-            axios.post(`${backend_url}/api/bias`, {
-              url: storedUrl,
-            }),
+            axios.post(`${backend_url}/api/process`, { url: storedUrl }, requestConfig),
+            axios.post(`${backend_url}/api/bias`, { url: storedUrl }, requestConfig),
           ]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/app/analyze/loading/page.tsx` around lines 76 - 82, The two axios
POST calls to /api/process and /api/bias (the Promise.all block creating
processRes and biasRes) lack a request timeout and can hang indefinitely; update
both axios.post calls to pass a timeout option of 45000 ms (match the existing
chat request timeout used in results/page.tsx) so each request will reject after
45s and allow Promise.all to settle and the page to recover.

Comment on lines +164 to +169
const lastSection = sections[sections.length - 1];
if (lastSection && lastSection.paragraphs.length < 2) {
  lastSection.paragraphs.push(block);
} else {
  sections.push({ title: null, paragraphs: [block] });
}

⚠️ Potential issue | 🟠 Major

Don’t split a section after exactly two paragraphs.

This branch turns the third paragraph of any section into a new card, which inflates sectionCount and recreates the over-segmented Article tab the PR is trying to fix. If you need visual length limits, handle that in rendering, not in the structural grouping step.

Suggested fix
       const lastSection = sections[sections.length - 1];
-      if (lastSection && lastSection.paragraphs.length < 2) {
+      if (lastSection) {
         lastSection.paragraphs.push(block);
       } else {
         sections.push({ title: null, paragraphs: [block] });
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/app/analyze/results/page.tsx` around lines 164 - 169, The current
grouping logic splits a section when it already has two paragraphs
(lastSection.paragraphs.length < 2), causing the third paragraph to start a new
section and over-segment content; change the condition so we append the new
block to the existing section whenever a lastSection exists (i.e., remove the
strict length check) and only create a new section when there is no lastSection,
updating the block grouping code that references lastSection, sections,
paragraphs, and block accordingly.



Development

Successfully merging this pull request may close these issues.

[BUG]: Improve Article Tab Layout and Fix AI Chat Context
