Find your celebrity twin with AI • Open source • Powered by Inference.net
- Upload a photo (PNG/JPG, paste, or camera)
- AI analysis via Inference.net Vision + LLM returns structured JSON with your top celebrity matches
- Results display with hi-res images, similarity scores, and shareable cards
- Privacy-first - no accounts, no storage, everything processed in memory
- Transparent AI - structured JSON responses from Inference.net, not black-box results
- Fast development - full-stack multimodal demo in minutes with `pnpm run dev`
- Type-safe - JSON schema enforcement with TypeScript + Zod validation
- Share-ready - built-in card generator for social media (copy, download, platform links)
- Production-ready - Bun + Hono backend, Docker support, no vendor lock-in
- Privacy-focused - optional analytics, no data storage
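The "Type-safe" bullet above comes down to validating the model's structured output before rendering it. As an illustration, here is the response shape in plain TypeScript with a minimal runtime guard (the project itself uses Zod; this stand-in guard and its names are illustrative, not code from the repo):

```typescript
// Shape of the structured output returned by the vision model
interface CelebrityMatch {
  name: string;
  percentage: number;   // similarity score, 0-100
  description: string;  // which facial features matched
}

interface MatchesResponse {
  matches: CelebrityMatch[];
}

// Minimal runtime guard, standing in for the Zod schema
function isMatchesResponse(value: unknown): value is MatchesResponse {
  if (typeof value !== "object" || value === null) return false;
  const matches = (value as { matches?: unknown }).matches;
  return (
    Array.isArray(matches) &&
    matches.every(
      (m) =>
        typeof m === "object" && m !== null &&
        typeof (m as CelebrityMatch).name === "string" &&
        typeof (m as CelebrityMatch).percentage === "number" &&
        typeof (m as CelebrityMatch).description === "string"
    )
  );
}
```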
- Frontend – Vite • React 18 • shadcn/ui • TailwindCSS
- Backend – Bun runtime • Hono router • TypeScript end‑to‑end
- AI – Inference.net Vision API + Structured Outputs
- Deploy – Railway, Vercel, Fly.io, or any Docker host
```bash
git clone https://github.com/yourrepo/lookalikeceleb.git
cd lookalikeceleb
pnpm install             # or bun install / npm i
cp .env.example .env     # add INFERENCE_API_KEY
pnpm run dev             # frontend at http://localhost:5173
bun run server/index.ts  # backend at http://localhost:3000
```

Tip: the Vite proxy is pre-configured, so uploads hit `/api` on port 3000 automatically.
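The proxy behind that tip corresponds to a `vite.config.ts` block along these lines (a sketch assuming a standard Vite `server.proxy` setup; the repo's actual config may differ):

```typescript
// vite.config.ts - forward /api requests to the Bun backend during dev
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:3000", // Hono backend
        changeOrigin: true,
      },
    },
  },
});
```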
LookalikeCeleb includes optional Plausible Analytics for privacy-friendly tracking:
```ts
// In App.tsx - automatically skips if env vars not set
const plausible = Plausible({
  domain: import.meta.env.VITE_PLAUSIBLE_DOMAIN,
  apiHost: import.meta.env.VITE_PLAUSIBLE_HOST,
  trackLocalhost: false, // Only tracks in production
});
```

Add to your `.env` file:

```bash
VITE_PLAUSIBLE_DOMAIN=yourdomain.com
VITE_PLAUSIBLE_HOST=https://plausible.io
# Or use your own Plausible instance
```

To disable analytics entirely, delete the useEffect block in `src/App.tsx` or leave the env vars unset.
Note: Analytics only tracks in production (ignores localhost). No tracking = no data collected.
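The "skips if env vars unset, localhost ignored" behavior can be factored into a tiny predicate. This is a hypothetical helper for illustration, not code from the repo:

```typescript
// Decide whether analytics should initialize (hypothetical helper)
function shouldInitAnalytics(
  domain: string | undefined,
  hostname: string
): boolean {
  if (!domain) return false;                  // env var unset -> no tracking
  if (hostname === "localhost") return false; // mirrors trackLocalhost: false
  return true;                                // production -> track
}
```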
```mermaid
flowchart TD
    A[Client React] --> B[API Matches Hono+Bun]
    B --> C[Inference.net Vision LLM]
    C --> B
    B --> D[Image Search Proxy]
    B --> A
```
Flow:
- Client uploads image to Hono API
- Server sends vision prompt to Inference.net with JSON schema
- AI returns structured celebrity matches
- Server fetches hi-res images via search proxy (avoids CORS)
- Combined response sent back to client
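The vision call in step 2 can be sketched as an OpenAI-compatible chat request carrying a `json_schema` response format. The model id, prompts, and field names below are illustrative assumptions, not the project's exact payload:

```typescript
// Build the vision request sent to Inference.net (a sketch; assumes
// an OpenAI-compatible chat completions API with structured outputs)
function buildVisionRequest(imageBase64: string) {
  return {
    model: "example-vision-model", // illustrative model id
    messages: [
      {
        role: "system",
        content: "Identify the celebrities this face most resembles.",
      },
      {
        role: "user",
        content: [
          { type: "text", text: "Who are my top celebrity lookalikes?" },
          {
            type: "image_url",
            image_url: { url: `data:image/jpeg;base64,${imageBase64}` },
          },
        ],
      },
    ],
    // JSON schema enforcing the { matches: [...] } response shape
    response_format: {
      type: "json_schema",
      json_schema: {
        name: "celebrity_matches",
        schema: {
          type: "object",
          properties: {
            matches: {
              type: "array",
              items: {
                type: "object",
                properties: {
                  name: { type: "string" },
                  percentage: { type: "number" },
                  description: { type: "string" },
                },
                required: ["name", "percentage", "description"],
              },
            },
          },
          required: ["matches"],
        },
      },
    },
  };
}
```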
Inference.net Request → Response
Request
Response
```json
{
  "matches": [
    { "name": "Emma Stone", "percentage": 94,
      "description": "Wide-set green eyes, pronounced cheekbones…" },
    { "name": "Ryan Gosling", "percentage": 87,
      "description": "Similar jawline, nose bridge, blue eyes…" },
    { "name": "Zendaya", "percentage": 82,
      "description": "Matching eyebrow arch, chin profile…" }
  ]
}
```

| Platform | Instructions |
|---|---|
| Railway | |
| Docker | `docker build -t lookalikeceleb . && docker run -p 3000:3000 --env-file .env lookalikeceleb` |
Set INFERENCE_API_KEY in your environment variables.
We welcome contributions! Ideas for improvements:
- New share card templates
- Additional AI providers
- Performance optimizations
- Dark mode
- Mobile app version
- Fork this repo
- Create a feature branch: `git checkout -b my-feature`
- Commit your changes and open a PR
MIT - feel free to use this in your own projects.
A practical demo of multimodal AI for the open source community.
Built to show what's possible with modern vision models.
