JovannyEspinal/lead-qualifier-simulation

Lead Qualifier

A learning project I built while studying agentic AI. Paste a list of leads, define your ICP, and a GPT-4o agent scores each one — calling tools in the right order, deciding when it's done, without any hardcoded orchestration.

Good excuse to get function calling and the OpenAI API working together in something real.


How it works

The function calling loop

The agent drives the scoring. The code just runs whatever it asks for.

while True:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=[lookup_lead_info_tool, calculate_lead_score_tool],
    )
    msg = response.choices[0].message
    messages.append(msg)  # keep the assistant turn (with its tool calls) in history

    if msg.tool_calls:
        for tool_call in msg.tool_calls:
            result = execute(tool_call)
            messages.append(
                {"role": "tool", "tool_call_id": tool_call.id, "content": result}
            )
    else:
        break  # model decided it's done

The model is given two tools and instructed to use them in order for every lead. It calls them across multiple loop iterations until all leads are processed, then stops on its own.

The tools

lookup_lead_info(lead) — takes a raw lead string and extracts structured fields: name, title, company, size, industry.
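As a sketch, the tool might be declared to the API like this. The field list comes from this README; the exact schema in backend/main.py is an assumption:

```python
# Hypothetical tool declaration in the OpenAI function-calling format.
# The extracted fields match the README; the real schema may differ.
lookup_lead_info_tool = {
    "type": "function",
    "function": {
        "name": "lookup_lead_info",
        "description": "Extract name, title, company, size, and industry "
                       "from a raw lead string.",
        "parameters": {
            "type": "object",
            "properties": {
                "lead": {"type": "string", "description": "Raw, unstructured lead text"}
            },
            "required": ["lead"],
        },
    },
}
```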

calculate_lead_score(name, title_match, company_size_match, industry_match, intent_signals_match, reasoning) — the model fills in each dimension score (0–25) based on ICP fit. The function just adds them up. Scoring is the model's judgment; the math is deterministic.

score = title_match + company_size_match + industry_match + intent_signals_match
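A minimal sketch of the function body, assuming it returns the total alongside the model's reasoning (the return shape is a guess; the sum itself is straight from the formula above):

```python
def calculate_lead_score(name, title_match, company_size_match,
                         industry_match, intent_signals_match, reasoning):
    # Each dimension is a model-assigned integer from 0-25; the sum is a 0-100 score.
    # The model supplies the judgment; this function only does the deterministic math.
    score = title_match + company_size_match + industry_match + intent_signals_match
    return {"name": name, "score": score, "reasoning": reasoning}
```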

The constraint

The model must call lookup_lead_info before calculate_lead_score for every lead. This enforces a consistent two-step pattern — enrich first, then score — across every iteration of the loop.
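In prompt form, that constraint might look something like this. This is an illustrative prompt, not the repo's actual wording:

```python
# Illustrative system prompt; the repo's actual wording may differ.
# Ordering is enforced only through instructions like these, not by the API.
SYSTEM_PROMPT = (
    "You are a lead qualification agent. For every lead, first call "
    "lookup_lead_info to extract structured fields, then call "
    "calculate_lead_score with a 0-25 score for each ICP dimension. "
    "Never score a lead you have not looked up. Stop when all leads are scored."
)
```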


Architecture

frontend/          Next.js 14 + React 18
  app/
    page.tsx       8-step wizard (leads → ICP → key → score → message → simulate → results)
  components/
    steps/
      Step3ICPDefinition   Editable ICP patterns + free-text definition
      Step4ApiKey          OpenAI key input (never stored)
      Step5LeadScoring     Live scoring results with per-lead AI reasoning
      Step6MessageDraft    AI-generated cold outreach message

backend/           FastAPI + Python
  main.py          Tool definitions, agent loop, and all endpoints

Data flow:

  1. User uploads leads + defines ICP
  2. /score-leads runs the agent loop — model calls lookup_lead_info then calculate_lead_score per lead until all are scored
  3. /generate-message writes a cold outreach message targeting the top leads

Stack

Component      Tool                         Why
Agent loop     GPT-4o + function calling    Model controls tool order and call count — no hardcoded orchestration
Lead lookup    lookup_lead_info             Structured extraction from raw lead strings
Lead scoring   calculate_lead_score         Model scores each ICP dimension; function handles the math
Backend        FastAPI                      REST wrapper around the agent loop
Frontend       Next.js 14 + Framer Motion   8-step wizard

Setup

Requirements: Python 3.12+, Node 18+, uv, OpenAI API key

Backend

cd backend
uv run uvicorn main:app --reload

Frontend

cd frontend
npm install
npm run dev

Open http://localhost:3000. Enter your OpenAI key at the API Key step — used for that session only, never stored.


Cost

Under $0.10 for a typical run (5 leads).

  • Lead lookup: ~$0.001 per lead (lookup_lead_info makes a GPT-4o call)
  • Scoring: negligible — model fills in four integers
  • Message generation: ~$0.003 one-time
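Worked out for the 5-lead run above, with the negligible scoring cost treated as zero:

```python
leads = 5
lookup = 0.001 * leads    # one GPT-4o extraction call per lead
message = 0.003           # one-time message generation
total = lookup + message  # about $0.008, well under the $0.10 ceiling
```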

Limitations

  • lookup_lead_info makes a separate GPT-4o call per lead. 10 leads = at least 20 model calls before a score is returned.
  • No streaming — the frontend blocks until the full agent loop completes.
  • Intent signals are inferred from the lead string alone. Without real enrichment data, this dimension mostly scores low.
  • Tool call order is enforced through the system prompt, not the API. The model can still call them out of order if it decides to.
