I am a Computer Science undergraduate based in Bangladesh, working at the intersection of software engineering and AI reliability research. I build production-grade web systems and ask hard questions about when — and whether — we should trust the outputs of language models.
My engineering work spans the full MERN stack. My research addresses a problem with direct safety implications: LLMs generate confident, fluent, and sometimes factually incorrect outputs. I am designing a framework to detect and mitigate these failures before they reach end users.
Both efforts share the same standard — systems should be correct, robust, and accountable, not merely functional.
I write openly, collaborate across disciplines, and actively seek environments where rigor is expected. I am looking for internships, research collaborations, and graduate programs that match that ambition.
| 🔬 AI Reliability | 🛡️ Privacy Engineering | ⚙️ Full-Stack Systems | 🧮 Algorithmic Reasoning |
|---|---|---|---|
| Hallucination detection<br>Trust calibration<br>Verifiable generation | Consent architecture<br>Data governance<br>Digital rights | MERN stack<br>REST API design<br>Scalable web systems | Graph theory<br>Combinatorics<br>Optimization |
Large language models can produce fluent, confident, and factually incorrect outputs — a failure mode with serious consequences in high-stakes deployment. My thesis proposes a post-generation verification pipeline in which Small Language Models (SLMs) act as lightweight auditors, evaluating the factual reliability of LLM outputs before they surface to users.
The framework addresses three interconnected challenges: uncertainty estimation, output grounding against structured knowledge, and verifiable generation. The objective is not to eliminate LLMs from the pipeline, but to make their outputs inspectable, interpretable, and calibrated for trust.
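The auditing pattern can be sketched in a few lines. This is a minimal illustration of the gating idea only, not the thesis implementation: all names (`StubAuditor`, `gate`, `Verdict`) are hypothetical, and the SLM auditor is stubbed with an exact-match lookup where the real system would emit a calibrated reliability score.

```python
# Illustrative sketch of post-generation verification: an auditor scores
# each claim, and only claims above a trust threshold reach the user.
from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    supported: bool
    confidence: float


class StubAuditor:
    """Stand-in for an SLM auditor; here, a lookup against known facts."""

    def __init__(self, facts: set[str]):
        self.facts = facts

    def audit(self, claim: str) -> Verdict:
        supported = claim in self.facts
        # A real auditor would produce a calibrated probability;
        # fixed values stand in for that here.
        return Verdict(claim, supported, 0.9 if supported else 0.2)


def gate(claims: list[str], auditor: StubAuditor, threshold: float = 0.5):
    """Split claims into those surfaced to users and those held back."""
    verdicts = [auditor.audit(c) for c in claims]
    passed = [v.claim for v in verdicts if v.confidence >= threshold]
    flagged = [v.claim for v in verdicts if v.confidence < threshold]
    return passed, flagged


auditor = StubAuditor({"Dhaka is the capital of Bangladesh"})
passed, flagged = gate(
    ["Dhaka is the capital of Bangladesh", "The moon is made of cheese"],
    auditor,
)
print(passed)   # claims judged reliable
print(flagged)  # claims held back for review
```

The point of the structure is that the verdict, not just the text, is what crosses the boundary to the user: every surfaced claim carries an inspectable confidence.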
DPPA is a full-stack application designed to protect individuals from unauthorized digital exposure. It implements structured consent workflows, granular privacy controls, and user-controlled data governance — treating privacy as a first-class system property rather than a compliance feature.
The project applies software engineering directly to a socially significant problem, reflecting a commitment to building systems that carry ethical accountability alongside technical correctness.
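The consent model can be illustrated with a small default-deny ledger. The class and method names below are hypothetical, not DPPA's actual API; the sketch only shows the principle that access is denied unless an explicit grant exists, and that every grant, revocation, and check leaves an audit trail.

```python
# Hypothetical sketch of consent-gated data access with an audit trail.
from datetime import datetime, timezone


class ConsentLedger:
    """Records per-user, per-purpose consent grants and logs every event."""

    def __init__(self):
        self._grants: dict[tuple[str, str], bool] = {}
        self.audit_log: list[tuple[str, str, str, str]] = []

    def _log(self, action: str, user: str, purpose: str):
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action, user, purpose)
        )

    def grant(self, user: str, purpose: str):
        self._grants[(user, purpose)] = True
        self._log("grant", user, purpose)

    def revoke(self, user: str, purpose: str):
        self._grants[(user, purpose)] = False
        self._log("revoke", user, purpose)

    def allowed(self, user: str, purpose: str) -> bool:
        # Default-deny: the absence of a grant means no consent.
        self._log("check", user, purpose)
        return self._grants.get((user, purpose), False)


ledger = ConsentLedger()
ledger.grant("alice", "photo_sharing")
print(ledger.allowed("alice", "photo_sharing"))  # explicit grant exists
print(ledger.allowed("alice", "analytics"))      # no grant: denied by default
```

Default-deny is the design choice that makes privacy a system property rather than a feature: a missing record fails closed instead of open.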
▸ Completing the SLM-based verification pipeline for the undergraduate thesis
▸ Implementing the consent enforcement and audit layers in DPPA
▸ Deepening applied knowledge in machine learning and NLP
▸ Contributing to open-source work at the intersection of AI and reliability
▸ Preparing applications for graduate research programs
Algorithmic problem-solving is where I sharpen engineering judgment — practicing correctness under constraint, asymptotic reasoning, and the discipline of producing verifiable solutions. I focus on graph algorithms, dynamic programming, and combinatorial optimization.
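A small worked example in that spirit, breadth-first search for shortest paths in an unweighted graph, chosen because its correctness argument is easy to state and check (the graph and helper name here are illustrative):

```python
# BFS shortest path: the first time a node is reached is via a minimum-edge
# path, because the queue explores nodes in nondecreasing distance order.
from collections import deque


def bfs_shortest(graph: dict[str, list[str]], start: str, goal: str) -> int:
    """Return the minimum edge count from start to goal, or -1 if unreachable."""
    if start == goal:
        return 0
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in dist:           # first visit = shortest distance
                dist[nxt] = dist[node] + 1
                if nxt == goal:
                    return dist[nxt]
                queue.append(nxt)
    return -1


g = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_shortest(g, "a", "d"))  # 2
```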
I am open to substantive conversations about software engineering, AI reliability, privacy systems, and meaningful open-source work. If you are building something at the intersection of these areas — or looking for a collaborator with both engineering and research grounding — I would be glad to hear from you.
