Frontend for Findatalab projects, built with Next.js (App Router).
## Tech Stack

- Next.js 15
- React 19
- App Router API routes (`app/api/**`)
- Markdown rendering (`marked`) + sanitization (`isomorphic-dompurify`)
## Pages

- `/` — project landing page
- `/fingpt` — chat UI for admissions assistant
- `/llmcity` — news sentiment analysis + generated comments

Server-side proxy routes for LLM calls: `/api/chat`, `/api/llmcity/chat`.
## Prerequisites

- Node.js 20+
- npm 10+
- Running LLM backend (for local usage), e.g. Ollama or a compatible API
## Quick Start

1. Install dependencies:

   ```bash
   npm install
   ```

2. Create a local env file:

   ```bash
   cp .env.example .env.local
   ```

   If `.env.example` does not exist yet, create `.env.local` manually from the section below.

3. Start the development server:

   ```bash
   npm run dev
   ```

4. Open the app: http://localhost:3000

If port 3000 is busy, Next.js may use 3001 automatically.
## Environment Variables

Use `.env.local` for local development:

```env
# Common chat endpoint (used by /api/chat as fallback)
CHAT_ENDPOINT=http://localhost:1416/chat/completions

# LLM model for /fingpt requests
NEXT_PUBLIC_CHAT_MODEL=

# Optional fixed chat history id in /fingpt
NEXT_PUBLIC_CHAT_HISTORY_ID=

# LLM City specific endpoint/model (optional)
LLMCITY_ENDPOINT=http://localhost:11434/api/chat
LLMCITY_MODEL=qwen3.5
```

Notes:

- Do not put secrets into `NEXT_PUBLIC_*` variables; they are inlined into the client bundle and visible in the browser.
- Browser pages call internal routes (`/api/...`), and server routes call the upstream LLM APIs.
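The fallback noted in the `CHAT_ENDPOINT` comment can be sketched as a small helper (hypothetical name and shape; the actual route code may resolve the endpoint differently):

```typescript
// Hypothetical helper sketching how a server route might resolve its
// upstream endpoint. `env` is injectable so the fallback is easy to test;
// the default mirrors the CHAT_ENDPOINT example above.
function resolveChatEndpoint(env: Record<string, string | undefined>): string {
  return env.CHAT_ENDPOINT ?? "http://localhost:1416/chat/completions";
}
```

Server code would call it as `resolveChatEndpoint(process.env)`; because the variable has no `NEXT_PUBLIC_` prefix, it is only available on the server.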
## Scripts

- `npm run dev` — start development server
- `npm run build` — create production build
- `npm run start` — start production server
- `npm run lint` — run lint checks
## Deployment (PM2)

Build first:

```bash
npm run build
```

Start with PM2 on a custom port:

```bash
pm2 start npm --name "findatalab-frontend" -- run start -- -p 33000
```

Check status/logs:

```bash
pm2 status
pm2 logs findatalab-frontend
```

## How It Works

`/fingpt`: the client page sends a request to `/api/chat`; the server route forwards it to `CHAT_ENDPOINT`.
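The `/api/chat` forwarding step might look roughly like the sketch below (hypothetical handler; the real `app/api/chat/route.ts` may differ in shape and error handling). The `fetchImpl` parameter is injectable purely to keep the sketch testable:

```typescript
// Sketch of a server-side proxy: forward the client's JSON body to the
// upstream endpoint and relay the JSON reply. Hypothetical code, not the
// project's actual route handler.
const CHAT_ENDPOINT = "http://localhost:1416/chat/completions"; // from env in real code

async function proxyChat(req: Request, fetchImpl: typeof fetch = fetch): Promise<Response> {
  const body = await req.json();
  const upstream = await fetchImpl(CHAT_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  if (!upstream.ok) {
    // Technical details go to server logs; the client gets a friendly message.
    console.error("[chat] upstream_error", upstream.status);
    return Response.json({ error: "LLM backend unavailable" }, { status: 502 });
  }
  return Response.json(await upstream.json());
}
```

Keeping this on the server means the upstream endpoint (and any credentials) never reach the browser.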
`/llmcity`: the client page sends `{ news, commentCount }` to `/api/llmcity/chat`; the server route builds a prompt, calls the LLM endpoint, parses the model response, and returns structured JSON:

```json
{
  "sentiment": {
    "label": "positive|negative|neutral|mixed",
    "score": 0.32,
    "explanation": "..."
  },
  "comments": ["...", "..."]
}
```

## Troubleshooting

- Restart the dev server (`npm run dev`) after adding or changing route files.
- If needed, clear the cache and restart: `rm -rf .next && npm run dev`
- An "HTML instead of JSON" parse error usually means the wrong URL was called, a 404 page came back, or a proxy is interfering.
- Verify the frontend is calling `/api/llmcity/chat` on the same host/port as the app.
- If the upstream LLM responds in an unexpected format or cannot be parsed, check the server logs for `[llmcity/chat] response_parse_error` or `upstream_error`.
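The parse step behind `response_parse_error` could be sketched as a small validator (hypothetical function; the route's actual parsing logic may differ). It strips an optional Markdown code fence, then checks the response shape shown in the JSON example above:

```typescript
// Hypothetical parser for the /api/llmcity/chat model output. Models often
// wrap JSON in Markdown code fences; strip them before parsing, then
// validate the expected { sentiment, comments } shape. Returns null on
// failure, which the route would log as response_parse_error.
interface LlmcityResult {
  sentiment: { label: string; score: number; explanation: string };
  comments: string[];
}

function parseLlmcityResponse(text: string): LlmcityResult | null {
  const cleaned = text.replace(/^\s*```(?:json)?\s*/, "").replace(/\s*```\s*$/, "");
  try {
    const data = JSON.parse(cleaned);
    if (
      typeof data?.sentiment?.label === "string" &&
      typeof data?.sentiment?.score === "number" &&
      typeof data?.sentiment?.explanation === "string" &&
      Array.isArray(data?.comments)
    ) {
      return data as LlmcityResult;
    }
  } catch {
    // fall through: caller logs response_parse_error
  }
  return null;
}
```

Returning `null` instead of throwing lets the route log the technical detail server-side and send the client a friendly error, per the guidelines below.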
## Guidelines

- Keep all external LLM calls in server-side routes.
- Keep client error messages user-friendly; keep technical details in server logs.
- Validate and sanitize all user-visible model output.
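The sanitization point exists because model output is untrusted input. The project renders Markdown with `marked` and sanitizes with `isomorphic-dompurify` (see the stack list above); the core idea can be illustrated with a dependency-free escape helper (hypothetical, for illustration only — prefer a real sanitizer like DOMPurify in production):

```typescript
// Minimal illustration of output escaping: neutralize HTML metacharacters
// so model text cannot inject markup or scripts. The real app sanitizes
// rendered Markdown with isomorphic-dompurify instead of hand-rolling this.
function escapeHtml(untrusted: string): string {
  return untrusted
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```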