
[BUG]: Inconsistent model config, hardcoded API endpoints, and fragile logging/search handling #154

@ashi2004

Description


Bug Description

There are multiple issues in the current pipeline:

- Different modules use different LLM models, causing inconsistent results
- Frontend API calls are hardcoded instead of read from environment variables
- Logging has encoding issues (especially on Windows)
- The fact-check search does not handle invalid or empty Google API responses

This makes the system harder to maintain, debug, and deploy across environments.
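A minimal sketch of how the first three points could be addressed together: a single shared config module that reads the model name and API base URL from environment variables, plus a logger factory that forces UTF-8 output so non-ASCII log messages don't crash on Windows cp1252 consoles. All names here (`LLM_MODEL`, `API_BASE_URL`, `get_logger`) are hypothetical, since the issue doesn't show the project's actual module layout:

```python
import logging
import os
import sys

# Hypothetical env-var names; the defaults are placeholders, not the project's real values.
DEFAULT_MODEL = os.environ.get("LLM_MODEL", "gpt-4o-mini")       # single source of truth for every module
API_BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8000")  # no hardcoded endpoints


def get_logger(name: str) -> logging.Logger:
    """Return a logger whose stream handler writes UTF-8, even on Windows consoles."""
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid duplicate handlers on repeated calls
        stream = sys.stdout
        # Python 3.7+: reconfigure the stream to UTF-8 so non-ASCII messages
        # don't raise UnicodeEncodeError on legacy-codepage consoles.
        if hasattr(stream, "reconfigure"):
            stream.reconfigure(encoding="utf-8", errors="replace")
        handler = logging.StreamHandler(stream)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger
```

The frontend would read the same base URL from its own environment mechanism (e.g. a build-time variable) rather than a hardcoded string, so both sides deploy consistently.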

Steps to Reproduce

- Run different analysis modules (bias, sentiment, fact-check, chat)
- Notice inconsistent outputs due to different model usage
- Run the frontend without the required environment variables
- Check logs on a Windows system
- Trigger a failed or empty Google search API response
- Observe errors or crashes in the fact-check flow
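The last two steps suggest the fact-check flow trusts the Google response shape. A defensive sketch of the parsing side, assuming the Google Custom Search JSON API (which omits `items` entirely when there are no results and reports failures in an `error` object); the function name and result shape are illustrative, not the project's actual code:

```python
def extract_search_results(payload):
    """Return a list of {title, link, snippet} dicts; [] for invalid or empty responses."""
    if not isinstance(payload, dict):
        return []                      # non-JSON or unexpected body
    if "error" in payload:
        return []                      # Google APIs report failures in an "error" object
    items = payload.get("items")
    if not isinstance(items, list):
        return []                      # "items" is absent when the query has no results
    results = []
    for item in items:
        if isinstance(item, dict) and item.get("link"):
            results.append({
                "title": item.get("title", ""),
                "link": item["link"],
                "snippet": item.get("snippet", ""),
            })
    return results
```

With this shape, the fact-check flow always receives a (possibly empty) list and can degrade gracefully instead of crashing on a `KeyError`.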

Logs and Screenshots

No response

Environment Details

No response

Impact

High - Major feature is broken

Code of Conduct

  • I have joined the Discord server and will post updates there
  • I have searched existing issues to avoid duplicates

Metadata

Assignees: No one assigned
Labels: bug (Something isn't working)
Type: No type
Projects: No projects
Milestone: No milestone
Relationships: None yet
Development: No branches or pull requests