feat: lease expiration notifications via cron + API #70

chitcommit wants to merge 1 commit into main from
Conversation
Add automated lease expiration monitoring:

- Daily cron trigger (9 AM UTC) checks for leases expiring within 30/60/90 days
- Creates deduped tasks linked to each lease with urgency-based priority
- Sends email (SendGrid) and SMS (Twilio) reminders when configured
- New GET /api/leases/expiring?days=N endpoint for frontend consumption
- ExpiringLeases dashboard widget with urgency badges
- Wrangler cron triggers added to all 3 env blocks

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.
@coderabbitai review Please evaluate:
Deploying with

| Status | Name | Latest Commit | Updated (UTC) |
|---|---|---|---|
| ✅ Deployment successful! View logs | chittyfinance | 8bfa110 | Mar 26 2026, 05:18 AM |
📝 Walkthrough

This PR introduces a lease expiration notification system spanning frontend, backend, and infrastructure layers. It adds a React dashboard component displaying expiring leases, API endpoints retrieving lease data within configurable day windows, database queries for lease selection, a scheduled worker task that runs daily to process expirations, and notification handlers for email and SMS alerts.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Scheduler as Scheduler<br/>(Cron)
    participant Worker as Worker<br/>(server/worker.ts)
    participant Storage as Database<br/>(SystemStorage)
    participant TaskSys as Task System
    participant Email as SendGrid
    participant SMS as Twilio
    Scheduler->>Worker: Trigger at 09:00 UTC daily
    Worker->>Storage: getExpiringLeases(90/60/30 days)
    Storage-->>Worker: Return expiring lease data
    loop For each notification window
        loop For each expiring lease
            Worker->>Storage: getTasksByRelation(leaseId)
            Storage-->>Worker: Existing tasks
            alt Task not already created
                Worker->>TaskSys: Create expiration task<br/>(with priority & metadata)
                TaskSys-->>Worker: Task created
                Worker->>Email: Send notification<br/>(if tenant email exists)
                Email-->>Worker: Email sent or error caught
                Worker->>SMS: Send notification<br/>(if tenant phone exists)
                SMS-->>Worker: SMS sent or error caught
            end
        end
    end
    Worker-->>Scheduler: Return stats<br/>(checked, tasksCreated,<br/>emailsSent, smsSent)
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
PR Review: feat: lease expiration notifications via cron + API

Overall this is solid work: the cron integration, dedup logic, and component design are well-structured. A few issues need attention before merging.

🔴 Critical: Tenant isolation missing in
| Severity | Issue |
|---|---|
| 🔴 Critical | getExpiringLeases has no tenant filter — cross-tenant data leak in API |
| 🟡 Medium | 3 redundant DB queries + N×3 getTasksByRelation calls in cron |
| 🟡 Medium | useExpiringLeases dead import in Dashboard.tsx |
| 🟡 Medium | stats.checked overcounts leases |
| 🟠 Minor | Missing try/catch in route handler |
The tenant isolation issue is a blocker. Everything else can be addressed in follow-up or in this PR. The overall design — cron handler, dedup by task title, opt-in email/SMS, dashboard widget — is clean and matches the existing patterns well.
Reviewed by Claude Sonnet 4.6
Pull request overview
Adds lease-expiration visibility and notifications to the system: a daily Cloudflare Scheduled handler generates lease-related tasks and optionally sends tenant reminders, plus a new API + dashboard widget for expiring leases.
Changes:
- Add a daily scheduled (cron) worker handler that scans for expiring leases and creates deduped tasks; optionally sends SendGrid/Twilio reminders.
- Add GET /api/leases/expiring?days=N and underlying storage queries to return expiring leases with daysRemaining.
- Add a dashboard widget to display expiring leases with urgency badges.
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 9 comments.
Summary per file:
| File | Description |
|---|---|
| server/worker.ts | Adds scheduled() handler to run lease expiration processing via cron. |
| server/lib/lease-expiration.ts | Implements the cron-driven lease expiration scan, task creation, and optional email/SMS notifications. |
| server/storage/system.ts | Adds expiring-lease query and a task lookup helper used for notification dedup. |
| server/routes/leases.ts | Introduces GET /api/leases/expiring endpoint. |
| server/app.ts | Mounts new lease routes and protects /api/leases with auth+tenant middleware. |
| deploy/system-wrangler.toml | Adds cron trigger configuration for scheduled execution across envs. |
| client/src/hooks/use-property.ts | Adds useExpiringLeases() hook + response typing. |
| client/src/components/property/ExpiringLeases.tsx | Adds dashboard card/table UI for expiring leases. |
| client/src/pages/Dashboard.tsx | Places the widget on the dashboard right column (tenant-selected). |
```typescript
const days = parseInt(c.req.query('days') || '90', 10);

if (isNaN(days) || days < 1 || days > 365) {
  return c.json({ error: 'days must be between 1 and 365' }, 400);
}

const expiring = await storage.getExpiringLeases(days);
```
This new route calls storage.getExpiringLeases(days) without scoping to the active tenant (c.get('tenantId')). Since tenantMiddleware only validates membership (it does not scope the storage instance), the endpoint currently returns leases for all tenants, leaking PII. Fix by passing tenantId into storage and enforcing properties.tenantId = tenantId in the query.
Suggested change:

```diff
+const tenantId = c.get('tenantId');
 const days = parseInt(c.req.query('days') || '90', 10);
 if (isNaN(days) || days < 1 || days > 365) {
   return c.json({ error: 'days must be between 1 and 365' }, 400);
 }
-const expiring = await storage.getExpiringLeases(days);
+const expiring = await storage.getExpiringLeases(tenantId, days);
```
```typescript
export async function processLeaseExpirations(env: Env): Promise<{
  checked: number;
  tasksCreated: number;
  emailsSent: number;
  smsSent: number;
}> {
  const db = createDb(env.DATABASE_URL);
  const storage = new SystemStorage(db);

  const stats = { checked: 0, tasksCreated: 0, emailsSent: 0, smsSent: 0 };

  // Set up optional notification clients
  const sendgrid = env.SENDGRID_API_KEY && env.SENDGRID_FROM_EMAIL
    ? new SendGridClient({ apiKey: env.SENDGRID_API_KEY, fromEmail: env.SENDGRID_FROM_EMAIL })
    : null;
  const twilio = env.TWILIO_ACCOUNT_SID && env.TWILIO_AUTH_TOKEN && env.TWILIO_PHONE_NUMBER
    ? new TwilioClient({ accountSid: env.TWILIO_ACCOUNT_SID, authToken: env.TWILIO_AUTH_TOKEN, fromNumber: env.TWILIO_PHONE_NUMBER })
    : null;

  // Process each notification window (90, 60, 30 days)
  for (const window of NOTIFICATION_WINDOWS) {
    const expiring = await storage.getExpiringLeases(window.days);
    stats.checked += expiring.length;

    for (const { lease, unit, property } of expiring) {
      // Dedup: check if a task already exists for this lease + window
      const existingTasks = await storage.getTasksByRelation('lease', lease.id);
      const alreadyNotified = existingTasks.some((t) => t.title === window.taskTitle);
      if (alreadyNotified) continue;

      // Create task for property manager
      await storage.createTask({
        tenantId: property.tenantId,
        title: window.taskTitle,
        description: `${lease.tenantName}'s lease at ${property.name} (Unit ${unit.unitNumber || 'N/A'}) expires ${formatDate(lease.endDate)}.`,
        dueDate: lease.endDate,
        priority: window.days <= 30 ? 'urgent' : window.days <= 60 ? 'high' : 'medium',
        status: 'pending',
        relatedTo: 'lease',
        relatedId: lease.id,
        metadata: {
          notificationWindow: window.days,
          leaseEndDate: formatDate(lease.endDate),
          propertyId: property.id,
          unitId: unit.id,
        },
      });
      stats.tasksCreated++;

      const endDateStr = formatDate(lease.endDate);

      // Send email notification if tenant has email and SendGrid is configured
      if (sendgrid && lease.tenantEmail) {
        try {
          const email = EMAIL_TEMPLATES.lease_reminder(lease.tenantName, property.name, endDateStr);
          await sendgrid.sendEmail({ to: lease.tenantEmail, ...email });
          stats.emailsSent++;
        } catch (err) {
          console.error(`Failed to send lease reminder email to ${lease.tenantEmail}:`, err);
        }
      }

      // Send SMS notification if tenant has phone and Twilio is configured
      if (twilio && lease.tenantPhone) {
        try {
          const smsBody = SMS_TEMPLATES[window.smsTemplate](lease.tenantName, endDateStr);
          await twilio.sendSms(lease.tenantPhone, smsBody);
          stats.smsSent++;
        } catch (err) {
          console.error(`Failed to send lease reminder SMS to ${lease.tenantPhone}:`, err);
        }
      }
    }
  }

  return stats;
}
```
New scheduled notification logic is being introduced without corresponding tests. Given the repo already has server-side route/storage tests (Vitest), it would be valuable to add tests that cover window selection (30/60/90 behavior), deduplication, and “backfill” behavior when a lease is already close to expiry at first run.
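For the deduplication case in particular, factoring the check into a pure helper keeps the test framework-free. The sketch below (names are illustrative, not from the PR) shows the invariant such a test would assert: running the same candidates twice creates no duplicate tasks.

```typescript
// Hypothetical pure extraction of the title-based dedup used by the cron.
interface TaskLike {
  title: string;
}

// Returns only the candidate task titles that don't already exist for the lease.
function tasksToCreate(existing: TaskLike[], candidateTitles: string[]): string[] {
  const have = new Set(existing.map((t) => t.title));
  return candidateTitles.filter((title) => !have.has(title));
}
```

A Vitest case would then call this once with an empty task list (expecting one creation) and again with the created task present (expecting a no-op).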
```typescript
// GET /api/leases/expiring — list leases expiring within N days (default 90)
leaseRoutes.get('/api/leases/expiring', async (c) => {
  const storage = c.get('storage');
  const days = parseInt(c.req.query('days') || '90', 10);

  if (isNaN(days) || days < 1 || days > 365) {
    return c.json({ error: 'days must be between 1 and 365' }, 400);
  }

  const expiring = await storage.getExpiringLeases(days);

  return c.json(
    expiring.map(({ lease, unit, property }) => ({
      leaseId: lease.id,
      tenantName: lease.tenantName,
      tenantEmail: lease.tenantEmail,
      tenantPhone: lease.tenantPhone,
      endDate: lease.endDate,
      monthlyRent: lease.monthlyRent,
      unitNumber: unit.unitNumber,
      propertyId: property.id,
      propertyName: property.name,
      address: property.address,
      daysRemaining: Math.ceil(
        (new Date(lease.endDate).getTime() - Date.now()) / (24 * 60 * 60 * 1000),
      ),
    })),
  );
});
```
This new API route should have tests (similar to other route tests under server/__tests__) to validate: (1) tenant isolation (can’t see other tenants’ leases), (2) days validation bounds, and (3) response shape including daysRemaining correctness.
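For the daysRemaining correctness check, extracting the computation into a pure helper with an injectable clock makes the boundary cases testable without mocking Date globally. A sketch (the helper name is hypothetical, mirroring the Math.ceil expression in the route):

```typescript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// Days until endDate, rounded up; `now` is injectable for deterministic tests.
function daysRemaining(endDate: string | Date, now: Date = new Date()): number {
  return Math.ceil((new Date(endDate).getTime() - now.getTime()) / MS_PER_DAY);
}
```

A route test could then assert the response field matches this helper for fixed dates, e.g. a lease ending in exactly ten days reports 10, and a partial day rounds up to 1.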
```diff
 import { formatCurrency } from '@/lib/utils';
-import { usePortfolioSummary } from '@/hooks/use-property';
+import { usePortfolioSummary, useExpiringLeases } from '@/hooks/use-property';
+import ExpiringLeases from '@/components/property/ExpiringLeases';
```
useExpiringLeases is imported but never used in this file. Removing it avoids dead imports and keeps the module’s public surface clearer.
```typescript
import ExpiringLeases from '@/components/property/ExpiringLeases';
```
```typescript
async getTasksByRelation(relatedTo: string, relatedId: string) {
  return this.db
    .select()
    .from(schema.tasks)
    .where(
      and(
        eq(schema.tasks.relatedTo, relatedTo),
        eq(schema.tasks.relatedId, relatedId),
      ),
    );
}
```
getTasksByRelation() is not scoped by tenant. Even if it’s currently only used for cron deduplication, leaving it unscoped makes it easy to accidentally use in a request path later and leak cross-tenant task data. Consider requiring tenantId and filtering by schema.tasks.tenantId, or at least naming it getTasksByRelationUnsafe/documenting intended use.
```typescript
// Process each notification window (90, 60, 30 days)
for (const window of NOTIFICATION_WINDOWS) {
  const expiring = await storage.getExpiringLeases(window.days);
  stats.checked += expiring.length;

  for (const { lease, unit, property } of expiring) {
    // Dedup: check if a task already exists for this lease + window
    const existingTasks = await storage.getTasksByRelation('lease', lease.id);
    const alreadyNotified = existingTasks.some((t) => t.title === window.taskTitle);
    if (alreadyNotified) continue;
```
Notification window selection is inconsistent with the SMS templates: getExpiringLeases(window.days) returns leases expiring anytime within that cutoff, so on first rollout (or after cron downtime) a lease expiring in ~10–20 days will trigger 90/60/30 notifications immediately, including sending “expires in 60 days” messages when it’s not true. Consider enforcing disjoint date ranges per window (e.g., 61–90, 31–60, 0–30 days remaining) or selecting the most urgent applicable window per lease per run.
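One way to make the ranges disjoint is a pure helper that derives a [min, max] daysRemaining bucket per window; a sketch under the assumption that windows are the PR's 30/60/90 values (names here are illustrative, not from the PR):

```typescript
// Cutoffs in ascending order; each window covers (previous cutoff, this cutoff].
const WINDOWS = [30, 60, 90];

// windowRange(60) -> [31, 60]: a lease 45 days out matches only this bucket,
// so a first rollout can't fire 90- and 60-day messages for it at once.
function windowRange(days: number): [number, number] {
  const idx = WINDOWS.indexOf(days);
  if (idx === -1) throw new Error(`unknown window: ${days}`);
  const lower = idx === 0 ? 0 : WINDOWS[idx - 1] + 1;
  return [lower, days];
}
```

The SQL side would then filter on both bounds (endDate between now + lower days and now + upper days) instead of a single cutoff.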
```typescript
for (const { lease, unit, property } of expiring) {
  // Dedup: check if a task already exists for this lease + window
  const existingTasks = await storage.getTasksByRelation('lease', lease.id);
  const alreadyNotified = existingTasks.some((t) => t.title === window.taskTitle);
  if (alreadyNotified) continue;

  // Create task for property manager
  await storage.createTask({
    tenantId: property.tenantId,
    title: window.taskTitle,
    description: `${lease.tenantName}'s lease at ${property.name} (Unit ${unit.unitNumber || 'N/A'}) expires ${formatDate(lease.endDate)}.`,
    dueDate: lease.endDate,
    priority: window.days <= 30 ? 'urgent' : window.days <= 60 ? 'high' : 'medium',
    status: 'pending',
    relatedTo: 'lease',
    relatedId: lease.id,
    metadata: {
      notificationWindow: window.days,
      leaseEndDate: formatDate(lease.endDate),
      propertyId: property.id,
      unitId: unit.id,
    },
  });
  stats.tasksCreated++;

  const endDateStr = formatDate(lease.endDate);

  // Send email notification if tenant has email and SendGrid is configured
  if (sendgrid && lease.tenantEmail) {
    try {
      const email = EMAIL_TEMPLATES.lease_reminder(lease.tenantName, property.name, endDateStr);
      await sendgrid.sendEmail({ to: lease.tenantEmail, ...email });
      stats.emailsSent++;
    } catch (err) {
      console.error(`Failed to send lease reminder email to ${lease.tenantEmail}:`, err);
    }
  }

  // Send SMS notification if tenant has phone and Twilio is configured
  if (twilio && lease.tenantPhone) {
    try {
      const smsBody = SMS_TEMPLATES[window.smsTemplate](lease.tenantName, endDateStr);
      await twilio.sendSms(lease.tenantPhone, smsBody);
      stats.smsSent++;
    } catch (err) {
      console.error(`Failed to send lease reminder SMS to ${lease.tenantPhone}:`, err);
    }
  }
}
```
The current approach does an N+1 query pattern (getTasksByRelation per lease) and then does email/SMS sends sequentially inside nested loops. For a tenant set with many leases, this can exceed scheduled event time limits and/or generate significant DB load. Consider: (1) querying existing notification tasks in bulk for the window (or doing a SELECT 1 ... WHERE relatedTo/relatedId/title LIMIT 1), and (2) sending notifications with bounded concurrency (e.g., a small promise pool) so you don’t serialize all network calls.
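A bounded-concurrency helper along those lines might look like the following generic sketch (not code from the PR; the worker-pool pattern is one of several ways to cap in-flight sends):

```typescript
// Runs `fn` over `items` with at most `limit` calls in flight at once,
// preserving result order. Workers share a cursor; each awaits its call
// before claiming the next index.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker),
  );
  return results;
}
```

The cron could then send each window's notifications via something like `mapWithConcurrency(leases, 5, notifyLease)` instead of awaiting them one by one.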
```typescript
async getExpiringLeases(withinDays: number) {
  const now = new Date();
  const cutoff = new Date(now.getTime() + withinDays * 24 * 60 * 60 * 1000);
  return this.db
    .select({
      lease: schema.leases,
      unit: schema.units,
      property: schema.properties,
    })
    .from(schema.leases)
    .innerJoin(schema.units, eq(schema.leases.unitId, schema.units.id))
    .innerJoin(schema.properties, eq(schema.units.propertyId, schema.properties.id))
    .where(
      and(
        eq(schema.leases.status, 'active'),
        sql`${schema.leases.endDate} >= ${now}`,
        sql`${schema.leases.endDate} <= ${cutoff}`,
      ),
    )
    .orderBy(schema.leases.endDate);
}

async getTasksByRelation(relatedTo: string, relatedId: string) {
  return this.db
    .select()
    .from(schema.tasks)
    .where(
      and(
        eq(schema.tasks.relatedTo, relatedTo),
        eq(schema.tasks.relatedId, relatedId),
      ),
    );
}
```
SystemStorage has an existing test suite (server/__tests__/storage-system.test.ts), but these new query helpers aren’t covered. Adding tests for getExpiringLeases() (including tenant scoping once fixed) and getTasksByRelation() would help prevent cross-tenant leaks/regressions and verify date boundary behavior.
```typescript
async getExpiringLeases(withinDays: number) {
  const now = new Date();
  const cutoff = new Date(now.getTime() + withinDays * 24 * 60 * 60 * 1000);
  return this.db
    .select({
      lease: schema.leases,
      unit: schema.units,
      property: schema.properties,
    })
    .from(schema.leases)
    .innerJoin(schema.units, eq(schema.leases.unitId, schema.units.id))
    .innerJoin(schema.properties, eq(schema.units.propertyId, schema.properties.id))
    .where(
      and(
        eq(schema.leases.status, 'active'),
        sql`${schema.leases.endDate} >= ${now}`,
        sql`${schema.leases.endDate} <= ${cutoff}`,
      ),
    )
    .orderBy(schema.leases.endDate);
}
```
There was a problem hiding this comment.
getExpiringLeases() returns leases across all tenants. Because leases don’t have a tenantId column (tenant scoping is via units -> properties.tenantId), this method needs a tenant filter when used by authenticated API routes; otherwise callers can enumerate other tenants’ leases/PII. Consider changing the signature to require tenantId (and add eq(schema.properties.tenantId, tenantId)), and add a separate admin/cron-only method if you truly need cross-tenant results.
Actionable comments posted: 5
🧹 Nitpick comments (1)
server/lib/lease-expiration.ts (1)
78-98: LGTM: resilient notification handling with proper error isolation. Good pattern: catching and logging individual notification failures allows processing to continue for remaining leases. This prevents a single email/SMS failure from blocking the entire batch.

One minor observation: errors are logged to console but there's no aggregation of failed notifications in the returned stats. Consider tracking emailsFailed and smsFailed if monitoring notification reliability is important.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@server/lib/lease-expiration.ts` around lines 78 - 98, Add failure counters to the notification stats and increment them in the respective catch blocks: ensure the stats object (used by lease-expiration.ts) includes emailsFailed and smsFailed initialized to 0, then in the email catch block increment stats.emailsFailed and in the SMS catch block increment stats.smsFailed; use the existing symbols (sendgrid, twilio, EMAIL_TEMPLATES, SMS_TEMPLATES, window.smsTemplate, stats) so the try/catch blocks keep logging errors but also aggregate failures for reporting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@client/src/components/property/ExpiringLeases.tsx`:
- Around line 16-46: The component currently treats failed fetches the same as
an empty result; update ExpiringLeases to handle the error state from
useExpiringLeases: read isError and error (or equivalent) from the hook
alongside isLoading and leases, and when isError is true render a Card (same
header) showing an error message and optional retry hint or error.message to
distinguish API failures from "no leases"; ensure you reference
useExpiringLeases, leases, isLoading, isError and error when implementing this
change.
In `@server/lib/lease-expiration.ts`:
- Around line 46-55: The loop over NOTIFICATION_WINDOWS can create multiple
notifications for the same lease because getExpiringLeases(window.days) is
called per window and dedupe only checks taskTitle; fix by tracking
already-processed lease IDs within a single run: create a Set (e.g.,
processedLeaseIds) outside the for (const window of NOTIFICATION_WINDOWS) loop,
then before handling each { lease, unit, property } check if
processedLeaseIds.has(lease.id) and skip if true, and after creating a
notification/task add processedLeaseIds.add(lease.id); update code around
getExpiringLeases, the inner loop, and where tasks are created (references:
NOTIFICATION_WINDOWS, getExpiringLeases, storage.getTasksByRelation,
existingTasks, window.taskTitle, lease.id).
In `@server/routes/leases.ts`:
- Around line 9-13: Replace the manual parseInt/isNaN check for the days query
param with a Zod schema: create a schema (e.g., daysQuerySchema) using
z.coerce.number().int().min(1).max(365).optional().default(90) (or equivalent
preprocess) and call schema.parse or safeParse on c.req.query() to obtain the
validated days value instead of using parseInt; on validation failure return the
same 400 response with the Zod error details, and update references to the local
variable days to use the parsed value from the schema (look for uses of days and
c.req.query in this route handler).
In `@server/storage/system.ts`:
- Around line 354-374: getExpiringLeases currently returns leases across all
tenants; update its signature to accept a tenantId parameter and add a tenant
filter (e.g., eq(schema.leases.tenantId, tenantId)) to the where clause in
getExpiringLeases, then update the caller in server/routes/leases.ts to read
tenantId from the request context (c.get('tenantId')) and pass it into
storage.getExpiringLeases so queries are scoped to the requesting tenant; ensure
parameter types and any tests are updated accordingly.
In `@server/worker.ts`:
- Around line 13-17: Replace the .then()/.catch() promise chain on
processLeaseExpirations(env) with an async IIFE passed into ctx.waitUntil();
specifically, call ctx.waitUntil((async () => { try { const stats = await
processLeaseExpirations(env); console.log('Lease expiration check complete:',
JSON.stringify(stats)); } catch (err) { console.error('Lease expiration check
failed:', err); } })()); this uses async/await and preserves the original
logging while ensuring the task is awaited by the worker runtime (referencing
processLeaseExpirations and ctx in server/worker.ts).
---
Nitpick comments:
In `@server/lib/lease-expiration.ts`:
- Around line 78-98: Add failure counters to the notification stats and
increment them in the respective catch blocks: ensure the stats object (used by
lease-expiration.ts) includes emailsFailed and smsFailed initialized to 0, then
in the email catch block increment stats.emailsFailed and in the SMS catch block
increment stats.smsFailed; use the existing symbols (sendgrid, twilio,
EMAIL_TEMPLATES, SMS_TEMPLATES, window.smsTemplate, stats) so the try/catch
blocks keep logging errors but also aggregate failures for reporting.
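A minimal sketch of the failure-counter idea from the nitpick above (all names here, including NotificationStats and trackedSend, are illustrative, not from the PR):

```typescript
// Counts both outcomes so the cron's return value can drive monitoring.
interface NotificationStats {
  checked: number;
  tasksCreated: number;
  emailsSent: number;
  emailsFailed: number;
  smsSent: number;
  smsFailed: number;
}

// Wraps one send attempt: success bumps `<kind>Sent`, failure bumps
// `<kind>Failed` and logs, but never throws past the caller's loop.
async function trackedSend(
  stats: NotificationStats,
  kind: 'emails' | 'sms',
  send: () => Promise<void>,
): Promise<void> {
  try {
    await send();
    stats[`${kind}Sent`]++;
  } catch (err) {
    stats[`${kind}Failed`]++;
    console.error(`Failed to send ${kind} notification:`, err);
  }
}
```

Each existing try/catch in the cron could then collapse into a single `trackedSend(stats, 'emails', …)` call while keeping the error-isolation behavior the reviewer praised.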
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0319498e-f11d-47c0-b70d-3b8e7c599f78
📒 Files selected for processing (9)

- client/src/components/property/ExpiringLeases.tsx
- client/src/hooks/use-property.ts
- client/src/pages/Dashboard.tsx
- deploy/system-wrangler.toml
- server/app.ts
- server/lib/lease-expiration.ts
- server/routes/leases.ts
- server/storage/system.ts
- server/worker.ts
```tsx
const { data: leases, isLoading } = useExpiringLeases(90);

if (isLoading) {
  return (
    <Card>
      <CardHeader>
        <CardTitle className="text-base flex items-center gap-2">
          <Clock className="h-4 w-4" /> Expiring Leases
        </CardTitle>
      </CardHeader>
      <CardContent>
        <Skeleton className="h-24" />
      </CardContent>
    </Card>
  );
}

if (!leases || leases.length === 0) {
  return (
    <Card>
      <CardHeader>
        <CardTitle className="text-base flex items-center gap-2">
          <Clock className="h-4 w-4" /> Expiring Leases
        </CardTitle>
      </CardHeader>
      <CardContent>
        <p className="text-sm text-muted-foreground">No leases expiring in the next 90 days.</p>
      </CardContent>
    </Card>
  );
}
```
Add an explicit error state for failed lease fetches.
Right now, request failures fall through to the same UI as “no expiring leases,” which can hide backend/API issues.
Proposed fix
```diff
 export default function ExpiringLeases() {
-  const { data: leases, isLoading } = useExpiringLeases(90);
+  const { data: leases, isLoading, isError } = useExpiringLeases(90);
@@
   if (!leases || leases.length === 0) {
@@
   }
+
+  if (isError) {
+    return (
+      <Card>
+        <CardHeader>
+          <CardTitle className="text-base flex items-center gap-2">
+            <Clock className="h-4 w-4" /> Expiring Leases
+          </CardTitle>
+        </CardHeader>
+        <CardContent>
+          <p className="text-sm text-muted-foreground">Could not load expiring leases. Please retry.</p>
+        </CardContent>
+      </Card>
+    );
+  }
```
+ }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@client/src/components/property/ExpiringLeases.tsx` around lines 16 - 46, The
component currently treats failed fetches the same as an empty result; update
ExpiringLeases to handle the error state from useExpiringLeases: read isError
and error (or equivalent) from the hook alongside isLoading and leases, and when
isError is true render a Card (same header) showing an error message and
optional retry hint or error.message to distinguish API failures from "no
leases"; ensure you reference useExpiringLeases, leases, isLoading, isError and
error when implementing this change.
```typescript
// Process each notification window (90, 60, 30 days)
for (const window of NOTIFICATION_WINDOWS) {
  const expiring = await storage.getExpiringLeases(window.days);
  stats.checked += expiring.length;

  for (const { lease, unit, property } of expiring) {
    // Dedup: check if a task already exists for this lease + window
    const existingTasks = await storage.getTasksByRelation('lease', lease.id);
    const alreadyNotified = existingTasks.some((t) => t.title === window.taskTitle);
    if (alreadyNotified) continue;
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check how getExpiringLeases is implemented to understand the range behavior
ast-grep --pattern $'getExpiringLeases($DAYS) {
  $$$
}'
# Also search for the method signature and implementation
rg -n -A 20 'getExpiringLeases' --type ts
```

Repository: chittyapps/chittyfinance. Length of output: 4261

🏁 Script executed:

```shell
# Find NOTIFICATION_WINDOWS definition
rg -n -B 5 -A 10 'NOTIFICATION_WINDOWS' --type ts server/lib/lease-expiration.ts
```

Repository: chittyapps/chittyfinance. Length of output: 1699

🏁 Script executed:

```shell
# Get full context around the processing loop in lease-expiration.ts
head -100 server/lib/lease-expiration.ts | tail -70
```

Repository: chittyapps/chittyfinance. Length of output: 2952
Multiple confusing notifications will be sent when a lease falls into multiple notification windows.
When a lease has 45 days remaining, getExpiringLeases(90) and getExpiringLeases(60) both return it in the same cron execution. The deduplication at line 54 only checks for matching taskTitles within the current window, so it cannot prevent creating both the 90-day and 60-day tasks (which have different titles). This results in a single tenant receiving multiple notifications simultaneously (e.g., "expiring in 90 days" and "expiring in 60 days").
Suggested approaches:
- Track already-processed lease IDs within a single run
- Query for exclusive ranges (e.g., 60 < days ≤ 90)
- Process windows in ascending order (30→60→90) and skip leases that matched a smaller window
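The third approach reduces to a small pure function; a sketch assuming the PR's 30/60/90 cutoffs (the function name is illustrative, not from the PR):

```typescript
// Cutoffs in ascending order so the first match is the most urgent window.
const WINDOW_DAYS = [30, 60, 90] as const;

// Returns the single window that should fire for a lease, or null if the
// lease is more than 90 days out. A lease 45 days out gets only 60.
function mostUrgentWindow(daysRemaining: number): number | null {
  for (const w of WINDOW_DAYS) {
    if (daysRemaining <= w) return w;
  }
  return null;
}
```

The cron would then call `getExpiringLeases(90)` once, compute each lease's daysRemaining, and create the single task for `mostUrgentWindow(daysRemaining)`, which also eliminates the per-window redundant queries flagged elsewhere in this review.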
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/lib/lease-expiration.ts` around lines 46 - 55, The loop over
NOTIFICATION_WINDOWS can create multiple notifications for the same lease
because getExpiringLeases(window.days) is called per window and dedupe only
checks taskTitle; fix by tracking already-processed lease IDs within a single
run: create a Set (e.g., processedLeaseIds) outside the for (const window of
NOTIFICATION_WINDOWS) loop, then before handling each { lease, unit, property }
check if processedLeaseIds.has(lease.id) and skip if true, and after creating a
notification/task add processedLeaseIds.add(lease.id); update code around
getExpiringLeases, the inner loop, and where tasks are created (references:
NOTIFICATION_WINDOWS, getExpiringLeases, storage.getTasksByRelation,
existingTasks, window.taskTitle, lease.id).
```typescript
const days = parseInt(c.req.query('days') || '90', 10);

if (isNaN(days) || days < 1 || days > 365) {
  return c.json({ error: 'days must be between 1 and 365' }, 400);
}
```
🧩 Analysis chain (scripts executed against chittyapps/chittyfinance; per-script output lengths omitted)

```shell
#!/bin/bash
# Verify manual parsing vs schema-based validation in this route.
rg -nP --type=ts 'parseInt\(|isNaN\(|safeParse|z\.coerce\.number|@shared/schema' server/routes/leases.ts
find . -path "*shared/schema*" -o -path "*@shared/schema*" | head -20
fd -t f "schema" --full-path | grep -E "(shared|@shared)" | head -20
# Look at the shared directory structure
git ls-files | grep -E "^shared/" | head -30
wc -l shared/schema.ts shared/finance.schema.ts
cat -n shared/schema.ts
cat -n shared/finance.schema.ts
cat -n server/routes/leases.ts
# Check other routes for schema validation patterns
fd -t f "\.ts$" server/routes/ | head -10
# Search for query parameter validation patterns in other routes
rg -n "safeParse|z\.object|c\.req\.query" server/routes/ --type=ts -A 2 | head -50
# Check if there are any pre-defined query parameter validation schemas anywhere
rg -n "z\.object.*query|QuerySchema|query.*z\." server/ --type=ts
# Verify the import path to `@shared/schema` works from server/routes
head -20 server/routes/properties.ts | grep -E "^import|^export"
# Check what path is used for schema imports across different routes
rg -n "from.*schema" server/routes/*.ts | grep -E "import.*schema"
# Verify the tsconfig paths to understand `@shared` vs ../db
cat tsconfig.json | grep -A 5 "paths"
# Check what db/schema actually is and where shared/schema.ts exists
ls -la server/db/ 2>/dev/null || echo "No server/db directory"
# Check tsconfig baseUrl to understand path resolution
cat tsconfig.json | head -30
# Compare server/db/schema.ts with shared/schema.ts
cat server/db/schema.ts
# Check if `@shared/schema` is the same as server/db/schema or different
head -50 shared/schema.ts | diff -u - <(head -50 server/db/schema.ts) || true
```
Use Zod schema validation for the `days` query parameter instead of manual `parseInt`.

The manual parsing with `parseInt` and an `isNaN` check does not follow the project's standardized validation approach. Per coding guidelines, all backend inputs must be validated using Zod schemas.
Suggested fix
```diff
+import { z } from 'zod';
 import { Hono } from 'hono';
 import type { HonoEnv } from '../env';

 export const leaseRoutes = new Hono<HonoEnv>();
+
+const expiringLeasesQuerySchema = z.object({
+  days: z.coerce.number().int().min(1).max(365).default(90),
+});

 // GET /api/leases/expiring — list leases expiring within N days (default 90)
 leaseRoutes.get('/api/leases/expiring', async (c) => {
   const storage = c.get('storage');
-  const days = parseInt(c.req.query('days') || '90', 10);
-
-  if (isNaN(days) || days < 1 || days > 365) {
+  const parsed = expiringLeasesQuerySchema.safeParse({ days: c.req.query('days') });
+  if (!parsed.success) {
     return c.json({ error: 'days must be between 1 and 365' }, 400);
   }
+  const { days } = parsed.data;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
import { z } from 'zod';
import { Hono } from 'hono';
import type { HonoEnv } from '../env';

export const leaseRoutes = new Hono<HonoEnv>();

const expiringLeasesQuerySchema = z.object({
  days: z.coerce.number().int().min(1).max(365).default(90),
});

// GET /api/leases/expiring — list leases expiring within N days (default 90)
leaseRoutes.get('/api/leases/expiring', async (c) => {
  const storage = c.get('storage');
  const parsed = expiringLeasesQuerySchema.safeParse({ days: c.req.query('days') });
  if (!parsed.success) {
    return c.json({ error: 'days must be between 1 and 365' }, 400);
  }
  const { days } = parsed.data;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/routes/leases.ts` around lines 9 - 13, Replace the manual
parseInt/isNaN check for the days query param with a Zod schema: create a schema
(e.g., daysQuerySchema) using
z.coerce.number().int().min(1).max(365).optional().default(90) (or equivalent
preprocess) and call schema.parse or safeParse on c.req.query() to obtain the
validated days value instead of using parseInt; on validation failure return the
same 400 response with the Zod error details, and update references to the local
variable days to use the parsed value from the schema (look for uses of days and
c.req.query in this route handler).
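For reference, the validation contract that the proposed Zod schema encodes (coerce the raw query string to an integer, default to 90 when the parameter is absent, reject values outside 1..365) can be sketched without any dependency. This is illustrative only; `parseDays` is not a function in the project:

```typescript
// Dependency-free sketch of the contract the Zod schema expresses:
// undefined -> default 90; otherwise coerce and bound-check; null means "return 400".
function parseDays(raw: string | undefined): number | null {
  if (raw === undefined) return 90; // mirrors .default(90)
  const n = Number(raw); // mirrors z.coerce.number()
  if (!Number.isInteger(n) || n < 1 || n > 365) return null; // mirrors .int().min(1).max(365)
  return n;
}
```

A caller would map a `null` result to the same 400 response the route already returns.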
```ts
async getExpiringLeases(withinDays: number) {
  const now = new Date();
  const cutoff = new Date(now.getTime() + withinDays * 24 * 60 * 60 * 1000);
  return this.db
    .select({
      lease: schema.leases,
      unit: schema.units,
      property: schema.properties,
    })
    .from(schema.leases)
    .innerJoin(schema.units, eq(schema.leases.unitId, schema.units.id))
    .innerJoin(schema.properties, eq(schema.units.propertyId, schema.properties.id))
    .where(
      and(
        eq(schema.leases.status, 'active'),
        sql`${schema.leases.endDate} >= ${now}`,
        sql`${schema.leases.endDate} <= ${cutoff}`,
      ),
    )
    .orderBy(schema.leases.endDate);
}
```
Missing tenant filter in `getExpiringLeases` can leak cross-tenant lease data.
This method is used by tenant-scoped API flow, but it currently queries all tenants. Add tenant scoping support and pass tenantId from the route.
Proposed fix

```diff
-  async getExpiringLeases(withinDays: number) {
+  async getExpiringLeases(withinDays: number, tenantId?: string) {
     const now = new Date();
     const cutoff = new Date(now.getTime() + withinDays * 24 * 60 * 60 * 1000);
-    return this.db
+    const baseQuery = this.db
       .select({
         lease: schema.leases,
         unit: schema.units,
         property: schema.properties,
       })
       .from(schema.leases)
       .innerJoin(schema.units, eq(schema.leases.unitId, schema.units.id))
-      .innerJoin(schema.properties, eq(schema.units.propertyId, schema.properties.id))
-      .where(
-        and(
-          eq(schema.leases.status, 'active'),
-          sql`${schema.leases.endDate} >= ${now}`,
-          sql`${schema.leases.endDate} <= ${cutoff}`,
-        ),
-      )
-      .orderBy(schema.leases.endDate);
+      .innerJoin(schema.properties, eq(schema.units.propertyId, schema.properties.id));
+
+    const dateRange = and(
+      eq(schema.leases.status, 'active'),
+      sql`${schema.leases.endDate} >= ${now}`,
+      sql`${schema.leases.endDate} <= ${cutoff}`,
+    );
+
+    return tenantId
+      ? baseQuery.where(and(dateRange, eq(schema.properties.tenantId, tenantId))).orderBy(schema.leases.endDate)
+      : baseQuery.where(dateRange).orderBy(schema.leases.endDate);
   }
```

```ts
// server/routes/leases.ts
const tenantId = c.get('tenantId');
const expiring = await storage.getExpiringLeases(days, tenantId);
```

As per coding guidelines: "Multi-tenant routes must enforce tenant isolation by checking tenant_id and user permissions via middleware before accessing data".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/storage/system.ts` around lines 354 - 374, getExpiringLeases currently
returns leases across all tenants; update its signature to accept a tenantId
parameter and add a tenant filter (e.g., eq(schema.leases.tenantId, tenantId))
to the where clause in getExpiringLeases, then update the caller in
server/routes/leases.ts to read tenantId from the request context
(c.get('tenantId')) and pass it into storage.getExpiringLeases so queries are
scoped to the requesting tenant; ensure parameter types and any tests are
updated accordingly.
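The effect of the missing filter can be shown with an in-memory sketch of the same predicate (active leases ending within the window, optionally scoped to one tenant). The `Lease` shape and sample data below are hypothetical, not the project's Drizzle schema:

```typescript
// Hypothetical model: without a tenantId argument, every tenant's leases come back.
interface Lease { id: string; tenantId: string; status: string; endDate: Date }

function expiringLeases(leases: Lease[], withinDays: number, now: Date, tenantId?: string): Lease[] {
  const cutoff = new Date(now.getTime() + withinDays * 24 * 60 * 60 * 1000);
  return leases
    .filter((l) => l.status === 'active' && l.endDate >= now && l.endDate <= cutoff)
    .filter((l) => tenantId === undefined || l.tenantId === tenantId) // the missing scope
    .sort((a, b) => a.endDate.getTime() - b.endDate.getTime());
}

const now = new Date('2026-01-01T00:00:00Z');
const sample: Lease[] = [
  { id: 'a', tenantId: 't1', status: 'active', endDate: new Date('2026-01-20T00:00:00Z') },
  { id: 'b', tenantId: 't2', status: 'active', endDate: new Date('2026-01-10T00:00:00Z') },
  { id: 'c', tenantId: 't1', status: 'ended',  endDate: new Date('2026-01-15T00:00:00Z') },
];
```

Calling `expiringLeases(sample, 30, now)` returns leases for both `t1` and `t2`, while passing `'t1'` as the fourth argument restricts the result to that tenant, which is the behavior the fix adds.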
```ts
processLeaseExpirations(env).then((stats) => {
  console.log('Lease expiration check complete:', JSON.stringify(stats));
}).catch((err) => {
  console.error('Lease expiration check failed:', err);
}),
```
🛠️ Refactor suggestion | 🟠 Major
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify promise-chain usage in the changed worker entrypoint.
rg -nP --type=ts '\.then\s*\(|\.catch\s*\(' server/worker.ts
```

Length of output: 152

🏁 Script executed:

```shell
#!/bin/bash
# Check if there are other promise chains in server/worker.ts
rg -n '\.then\(|\.catch\(' server/worker.ts
# Also check the full context of the scheduled method
sed -n '10,25p' server/worker.ts
```

Length of output: 703
Convert the `.then()`/`.catch()` chain to async/await.

TypeScript code in this repository must use async/await instead of promise chains. Refactor the scheduled task to wrap the async operation in an async IIFE passed to `ctx.waitUntil()`:
Refactor required
```diff
 async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
   // Daily cron: check lease expirations and send notifications
-  ctx.waitUntil(
-    processLeaseExpirations(env).then((stats) => {
-      console.log('Lease expiration check complete:', JSON.stringify(stats));
-    }).catch((err) => {
-      console.error('Lease expiration check failed:', err);
-    }),
-  );
+  ctx.waitUntil((async () => {
+    try {
+      const stats = await processLeaseExpirations(env);
+      console.log('Lease expiration check complete:', JSON.stringify(stats));
+    } catch (err) {
+      console.error('Lease expiration check failed:', err);
+    }
+  })());
 },
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
async scheduled(controller: ScheduledController, env: Env, ctx: ExecutionContext) {
  // Daily cron: check lease expirations and send notifications
  ctx.waitUntil((async () => {
    try {
      const stats = await processLeaseExpirations(env);
      console.log('Lease expiration check complete:', JSON.stringify(stats));
    } catch (err) {
      console.error('Lease expiration check failed:', err);
    }
  })());
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/worker.ts` around lines 13 - 17, Replace the .then()/.catch() promise
chain on processLeaseExpirations(env) with an async IIFE passed into
ctx.waitUntil(); specifically, call ctx.waitUntil((async () => { try { const
stats = await processLeaseExpirations(env); console.log('Lease expiration check
complete:', JSON.stringify(stats)); } catch (err) { console.error('Lease
expiration check failed:', err); } })()); this uses async/await and preserves
the original logging while ensuring the task is awaited by the worker runtime
(referencing processLeaseExpirations and ctx in server/worker.ts).
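The async-IIFE pattern can be exercised outside the Workers runtime with a stub context. `StubContext` and `processJob` below are illustrative stand-ins, not the real `ExecutionContext` or the project's `processLeaseExpirations`; the point is that the promise handed to `waitUntil` always resolves, because the `try`/`catch` swallows failures after logging them:

```typescript
// Minimal stand-in for the Workers ExecutionContext contract.
class StubContext {
  pending: Promise<unknown>[] = [];
  waitUntil(p: Promise<unknown>): void { this.pending.push(p); }
}

// Illustrative job in place of processLeaseExpirations(env).
async function processJob(): Promise<{ checked: number }> { return { checked: 3 }; }

const log: string[] = [];
const ctx = new StubContext();
ctx.waitUntil((async () => {
  try {
    const stats = await processJob();
    log.push(`complete: ${JSON.stringify(stats)}`);
  } catch (err) {
    log.push(`failed: ${String(err)}`);
  }
})());
```

Awaiting `ctx.pending` then drains the scheduled work, which is what the real runtime does before tearing the worker down.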
Summary

- Creates deduped `relatedTo='lease'` tasks with window-specific titles, skips if already notified
- Sends SendGrid (`lease_reminder`) and Twilio (`lease_reminder_30/60/90`) templates when configured
- `GET /api/leases/expiring?days=N` returns leases with tenant, property, unit, and `daysRemaining`
- `ExpiringLeases` component shows upcoming expirations with urgency badges (red <=30d, default <=60d, muted <=90d)
- `[triggers] crons = ["0 9 * * *"]` added to top-level + all 3 env blocks

Test plan

- `npm run check` passes (TypeScript clean)
- `npm run build` succeeds
- `GET /api/leases/expiring?days=90` returns correct data shape
- Cron verified with `wrangler dev --test-scheduled`

Generated with Claude Code
Summary by CodeRabbit
Release Notes