TL;DR: Rendy is a Render Blueprint that deploys a React UI (rendy-web), a private orchestration service (rendy-orchestration), and managed Postgres (rendy-postgres) as a production-shaped starter for AI apps.
Rendy is a Render-native starter for shipping an AI product with real service boundaries. Instead of forcing the UI, orchestration runtime, and data layer into one process, this repo uses Render Services the way an AI app actually benefits from them: one browser-facing service, one private orchestration service, and one managed database.
That matters because AI product work changes in layers. Frontend copy changes. Prompting and tool wiring changes. Retrieval and indexing changes. Render makes that easier to manage when each layer can evolve and redeploy independently.
- end-to-end AI app deployed at https://rendy-web-ltnn.onrender.com/, running browser UI -> private LangChain orchestration -> managed Postgres / pgvector
- browser-facing UI deployed as the `rendy-web` Render service
- same-origin proxy from the UI to the private `rendy-orchestration` service
- managed Postgres provisioned and wired through the Blueprint
- importable starter chatflows for ingestion and assistant behavior
- ETL examples for rendered crawl-to-JSON, JSON-to-Pinecone, and sitemap-to-Pinecone ingestion
- a future-facing commented cron scaffold in `render.yaml` for scheduled ETL
Rendy is an end-to-end Render Blueprint for going from idea to a working AI stack with minimal glue code and clear service boundaries.
The real thing being demonstrated is Render's service model:
- Blueprint-driven provisioning in `render.yaml`
- service-to-service private networking
- managed Postgres plus persistent disk-backed orchestration state
- service-level redeploys when you push code or change env vars
Flowise is the current orchestration workload inside one Render Service. Render is the thing that makes the overall app easier to ship, isolate, and iterate on.
Render looks especially good here because it removes a lot of the glue work that usually slows AI teams down:
- Service separation without platform sprawl: the UI, orchestration runtime, and database are separate Render Services, but they still deploy from one Blueprint.
- Private networking by default: the orchestration layer can stay behind the browser-facing app instead of being exposed directly to the public web.
- Infrastructure as code: the working topology lives in `render.yaml`, not only in a dashboard.
- Managed state: Postgres is provisioned for you, and the orchestration layer gets persistent state for logs, stored assets, and runtime data.
- Fast iteration loops: pushing code or changing env vars on a service gives you a clean redeploy path, which is exactly how AI apps tend to evolve.
- Clear expansion path: ETL can stay ad hoc at first and later move into cron jobs or workers without redesigning the whole repo.
For AI development specifically, that is a real productivity win. You spend less time wiring secrets, hostnames, ports, proxy rules, and data services by hand, and more time iterating on the assistant itself.
| Service | Role | Why it helps AI development |
|---|---|---|
| `rendy-web` | React UI plus same-origin orchestration proxy | Lets you iterate on the user experience without exposing orchestration internals directly to the browser. |
| `rendy-orchestration` | Private orchestration layer | Keeps prompt/tool/agent orchestration isolated from the frontend and gives you a place for Flowise today plus Python, LangChain, SDK-driven integrations, and more custom backend logic over time. |
| `rendy-postgres` | Managed Postgres for orchestration metadata and optional pgvector data | Gives you durable app state and a straightforward place to keep retrieval data if you want one database. |
Request path: Browser -> rendy-web -> rendy-orchestration -> rendy-postgres
```mermaid
graph LR
  A[Browser] --> B[rendy-web]
  B --> C[rendy-orchestration]
  C --> D[(rendy-postgres)]
```
Starting with a more custom Python backend would also be valid. This repo uses Flowise as the current runtime because it speeds up prompt, retrieval, and tool composition. But the more important architectural choice is the private orchestration layer itself. That layer can continue to host Flowise, and it is also the natural place to add Python code, LangChain components, SDK-driven integrations, and more bespoke backend logic as the app grows.
For an AI product, Render Services line up cleanly with the actual responsibilities in the app:
- the browser-facing service owns UX, sessions, and request proxying
- the orchestration service owns prompts, tools, models, and workflow logic
- the database service owns durable state
That split is useful because AI apps rarely change all layers at once. Render lets you redeploy the layer you changed without pretending every prompt edit is a full platform event.
The Blueprint in `render.yaml` does useful platform work before you even touch a chatflow:
- provisions the three core resources
- injects Postgres connection details into the orchestration service
- generates the current orchestration secret values used by Flowise in this starter
- mounts a persistent disk for orchestration state
- passes the orchestration service hostport into the UI service over Render's private network
- seeds the UI with the proxy and streaming defaults it needs
This is one of the main reasons Render is nice for AI apps. You are not just getting "hosting." You are getting the wiring between services, secrets, storage, and deploys already encoded in the stack definition.
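As a condensed sketch of that wiring, the Blueprint looks roughly like the fragment below. Service names match this repo, but the runtimes, plan names, disk size, and the specific secret env var are illustrative and may differ from the actual `render.yaml`:

```yaml
databases:
  - name: rendy-postgres
    plan: basic-256mb            # illustrative plan name

services:
  - type: pserv                  # private service: unreachable from the public web
    name: rendy-orchestration
    runtime: docker              # illustrative; this service runs Flowise
    disk:
      name: orchestration-data   # persistent disk for orchestration state
      mountPath: /root/.flowise
      sizeGB: 1
    envVars:
      - key: DATABASE_URL
        fromDatabase:
          name: rendy-postgres
          property: connectionString
      - key: FLOWISE_SECRETKEY_OVERWRITE
        generateValue: true      # Render generates the secret at provision time

  - type: web                    # browser-facing service
    name: rendy-web
    runtime: node
    envVars:
      - key: FLOWISE_INTERNAL_HOSTPORT
        fromService:
          name: rendy-orchestration
          type: pserv
          property: hostport     # private-network host:port, never public
      - key: VITE_FLOWISE_CHATFLOW_ID
        sync: false              # set later, once the chatflow exists
```

The `fromDatabase` / `fromService` references are what remove the hand-wiring: connection strings and private hostports flow between services without anyone pasting them into a dashboard.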
rendy-orchestration is not just a single chat endpoint. It is the private orchestration and custom-backend layer for the app.
That means it is where you get:
- the current Flowise API, CLI, and assistant/chatflow runtime surfaces
- model/tool/prompt orchestration separate from the browser UI
- the LangChain / LlamaIndex / MCP / custom-tool integration surface that makes the assistant useful
- a natural place for Python code, SDK-driven integrations, and more custom backend logic as the product evolves
That separation is a big deal for AI dev. Inside Render, it means orchestration behaves like a real private backend service instead of frontend-adjacent glue code.
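To make that boundary concrete, here is a minimal hedged sketch of how custom backend code running beside the orchestration layer could call Flowise's prediction endpoint over Render's private network. The endpoint shape follows Flowise's documented API; the chatflow ID and hostport values are placeholders, and real code would add auth headers and error handling:

```python
import json
import os
import urllib.request


def prediction_url(hostport: str, chatflow_id: str) -> str:
    """Build the Flowise prediction endpoint for a chatflow.

    `hostport` is the private host:port Render injects as
    FLOWISE_INTERNAL_HOSTPORT; no public hostname is involved.
    """
    return f"http://{hostport}/api/v1/prediction/{chatflow_id}"


def ask(chatflow_id: str, question: str) -> dict:
    """POST a question to a chatflow and return the parsed JSON reply."""
    hostport = os.environ["FLOWISE_INTERNAL_HOSTPORT"]
    body = json.dumps({"question": question}).encode("utf-8")
    req = urllib.request.Request(
        prediction_url(hostport, chatflow_id),
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because the hostport comes from the Blueprint, the same code works in any environment Render deploys without hardcoding hostnames.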
This repo is intentionally biased toward the real edit loops AI teams go through:
- UI changes live under `UI/rendy_rt/` and redeploy `rendy-web`.
- Prompt, tool, retrieval-flow, and orchestration changes live in the private orchestration layer and affect `rendy-orchestration`.
- Data ingestion changes live under `ETL/` and can later move into scheduled jobs if needed.
That is a better fit for AI development than a monolith, because most changes are not full-stack changes. Render Services make it easy to keep those concerns separate.
- Fork this repo for the Blueprint and UI service.
- Fork the current orchestration runtime repo, Flowise, and point the `rendy-orchestration` `repo` value in `render.yaml` at your fork.
- Review the `repo:` entries in `render.yaml` so `rendy-web` points at your repo and `rendy-orchestration` points at your orchestration fork.
- Deploy the Blueprint from `render.yaml`.
- Open the `rendy-orchestration` service in Render, complete the Flowise first-run setup, and create the OpenAI/Postgres credentials you want to use inside the imported nodes.
- If you want pgvector in Postgres, enable it in the target database: `CREATE EXTENSION vector;`
- Import `chatflows/pgvector-template.json` and `chatflows/assistant-template.json`.
- Set `VITE_FLOWISE_CHATFLOW_ID` on the `rendy-web` service in Render.
- Let Render redeploy `rendy-web`, then test the full browser path end to end.
For this repo, the normal workflow is service-level iteration on Render:
- push a UI change to the branch connected to `rendy-web`
- or change an env var on `rendy-web`
- let Render rebuild and redeploy
- review service logs, deploy output, and health in Render
- test the deployed service, not only a local shell session
That is especially useful for AI products because the deployed environment matters. Proxying, service-to-service networking, env vars, model credentials, and retrieval settings are part of the behavior you are validating. Render makes those checks visible at the service level instead of leaving them scattered across ad hoc scripts and local assumptions.
Local development is still available, but it is secondary here. The Render deployment path is the primary path. See `UI/rendy_rt/README.md` for the optional local workflow.
- `VITE_FLOWISE_CHATFLOW_ID` is intentionally blank in `render.yaml`. The UI should not claim to be wired until the assistant chatflow actually exists.
- `UI/rendy_rt/api/flowiseProxy.js` keeps the orchestration runtime behind the UI service instead of sending the browser directly to the orchestration host.
- `FLOWISE_INTERNAL_HOSTPORT` comes from the `rendy-orchestration` service, which keeps the browser-facing app simple and the private orchestration path explicit.
The current ETL examples are:
- `ETL/crawl-ETL/crawl_to_json.py`: Playwright-based crawler that executes client-side JavaScript and captures rendered DOM text plus inline/external JS and CSS into JSON/JSONL `{url, text}` records
- `ETL/json-ETL/json_to_pinecone.py`: JSON/JSONL `{url, text}` ingestion into Pinecone with chunking, ledgers, and optional sync-delete
- `ETL/sitemap-ETL/sitemap.py`: Playwright-based sitemap fetcher that extracts `<loc>` URLs and embeds URL strings
See ETL/README.md for ETL details.
That gives you a practical ETL flow today:
- crawl rendered pages into local JSON/JSONL
- optionally reshape or review the dataset
- ingest `{url, text}` records into Pinecone
- or use sitemap-only discovery when you want URL coverage without full page-body capture
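The ingest step can be sketched in miniature. This is not the actual `ETL/json-ETL/json_to_pinecone.py` logic — it is a hedged illustration of reading JSONL `{url, text}` records and chunking them before embedding, with made-up chunk sizes and without the real script's ledgers or sync-delete:

```python
import json
from typing import Iterator


def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping character windows (sizes are illustrative)."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def records_to_chunks(jsonl_lines) -> Iterator[dict]:
    """Yield per-chunk records shaped for an embed-and-upsert step."""
    for line in jsonl_lines:
        rec = json.loads(line)
        for i, chunk in enumerate(chunk_text(rec["text"])):
            # Keying each chunk id to its source URL lets re-ingestion of a
            # changed page replace that page's stale vectors.
            yield {"id": f'{rec["url"]}#{i}', "url": rec["url"], "text": chunk}
```

Keeping the chunker as a pure function like this makes the eventual move from ad hoc runs to a Render cron job mostly a scheduling change, not a rewrite.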
The nice part from a Render perspective is that this does not need to stay manual forever. The Blueprint already includes a commented cron-job scaffold in `render.yaml` as a future direction. That keeps the repo "AI app first" today while still showing a clean path toward scheduled ingestion later.
- `VITE_FLOWISE_CHATFLOW_ID` is missing, so the UI deploys but cannot talk to the intended assistant
- the orchestration service is healthy, but the UI cannot reach it because `FLOWISE_INTERNAL_HOSTPORT` or the service linkage is wrong
- retrieval quality looks weak because the assistant flow is not pointed at the same data the ingestion path populated
- people test only locally and miss behavior that depends on Render env vars, private networking, or deploy state
- `render.yaml`: Render Blueprint and future cron scaffold
- `UI/rendy_rt/`: deployed UI service and orchestration proxy
- `chatflows/`: Flowise chatflow exports plus setup documentation for the current orchestration path
- `chatflows/README.md`: quick reference for the included Flowise templates and how they fit together
- `chatflows/HOWTO.md`: step-by-step guide for importing, configuring, and using the chatflow templates
- `ETL/`: rendered crawl, JSON ingestion, and sitemap ingestion helpers