We Built a Client's SaaS MVP in 2 Days Using Cursor — Here's the Full Process
A behind-the-scenes look at how Maximal Studio ships AI-powered software in 2 days using Cursor and modern tooling — and what that means for clients who've been quoted months by traditional dev shops.

The Claim That Gets Me Skeptical Looks
When I tell agency owners I can ship an MVP in 2 days, I get one of two reactions:
- "That's exactly what I need" (rare)
- "Sure you can" (common)
The skepticism is fair. They've been burned before — by agencies that said "quick turnaround" and delivered 3 months later, by freelancers who disappeared mid-project, by their own attempts to build things internally that never shipped.
The pushback I hear most is on timeline: "Two days? That's just a prototype, not something we can actually use." Or quality: "It'll be full of bugs." The real source of doubt is that people are used to software taking months. They haven't seen what AI-assisted development with a tight scope and an experienced engineer actually looks like.
So this post is the answer to that skepticism. Here's exactly how a 2-day build works at Maximal Studio — not the pitch version, the actual version.
Why 2 Days Is Possible (And When It Isn't)
First, the honest caveat: not everything ships in 2 days. Here's what makes 2 days possible:
- The scope is tight and agreed on before a line of code is written
- The client is available to unblock decisions during the build
- The core functionality doesn't require custom model training or novel research
- We're building on top of existing infrastructure (LLM APIs, cloud services, standard databases)
Here's what pushes a project past 2 days: scope creep ("can we also add X?"), unclear requirements that only surface mid-build, complex third-party integrations with poor docs, or the client going dark when we need a decision. The 2-day constraint isn't a sales promise. It's a discipline. It forces scope clarity upfront in a way that "let's see how it goes" projects never achieve.
The Stack We Use and Why
Frontend
We use Next.js. It gives us a single codebase for SSR and API routes, fast iteration with hot reload, and a deployment story (Vercel) that takes zero config. For the kind of AI-powered dashboards and forms our clients need, nothing else gets us from zero to deployed as fast.
Backend / API Layer
Next.js API routes or a small Node/Express layer when we need a separate service. AI orchestration lives in either Next.js server actions or a thin API that calls LLMs and returns structured data. We avoid heavy frameworks — the less boilerplate, the more we ship in 2 days.
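To make "a thin API that calls LLMs and returns structured data" concrete, here is a minimal sketch of a Next.js-style route handler with the LLM call injected as a function. The names `makeDraftHandler` and `generateDraft` are illustrative, not from the actual client build:

```typescript
// Sketch of a thin Next.js-style API route (e.g. app/api/draft/route.ts).
// The LLM call is injected so the handler stays small and testable;
// `generateDraft` is a hypothetical name, not a real library function.
type Lead = { name: string; company: string };
type DraftFn = (lead: Lead) => Promise<string>;

export function makeDraftHandler(generateDraft: DraftFn) {
  return async function POST(req: Request): Promise<Response> {
    const body = (await req.json()) as Partial<Lead>;
    if (!body.name || !body.company) {
      // Validate before spending an LLM call
      return new Response(
        JSON.stringify({ error: "name and company are required" }),
        { status: 400, headers: { "content-type": "application/json" } },
      );
    }
    const draft = await generateDraft({ name: body.name, company: body.company });
    // Return structured JSON, not raw LLM text mixed with markup
    return new Response(JSON.stringify({ draft }), {
      headers: { "content-type": "application/json" },
    });
  };
}
```

Keeping the LLM behind a function boundary like this also makes it trivial to swap providers mid-build without touching the route.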
LLM Integration
Claude and OpenAI, depending on the task. We use raw API calls more than LangChain; for MVPs, fewer abstractions mean fewer surprises. Claude for longer context and nuanced copy; OpenAI when the client already has an account or we need a specific model feature.
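A raw LLM call really is just one HTTP request. Here is a sketch against Anthropic's Messages API; the model name is illustrative (check Anthropic's docs for current models), and splitting request construction from the fetch keeps the shape easy to test:

```typescript
// A raw call to the Anthropic Messages API, no framework in between.
// Model name is illustrative; see Anthropic's docs for current models.
export function buildClaudeRequest(apiKey: string, prompt: string) {
  return {
    url: "https://api.anthropic.com/v1/messages",
    init: {
      method: "POST",
      headers: {
        "x-api-key": apiKey,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
      },
      body: JSON.stringify({
        model: "claude-3-5-sonnet-latest",
        max_tokens: 1024,
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

export async function callClaude(apiKey: string, prompt: string): Promise<string> {
  const { url, init } = buildClaudeRequest(apiKey, prompt);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`Claude API error: ${res.status}`);
  const data = await res.json();
  // The reply text lives in the first content block of the response
  return data.content[0].text;
}
```

This is the whole integration for a simple draft-generation step — which is why, for an MVP, an abstraction layer often costs more than it saves.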
Database
Postgres (often via Supabase or Neon) for anything relational. We need migrations, auth, and real-time-ish updates without fighting the database. Supabase gets us auth + DB + storage in one place, which saves a full day on a 2-day build.
Hosting / Infrastructure
Vercel for Next.js front ends; Railway or Fly for anything that needs a long-running process. Default is Vercel — one push and it's live. We only switch when we need workers, queues, or persistent connections.
Cursor (The Actual Workflow)
Cursor is in the loop for every file. We use it for: generating components from a one-line description, writing tests, refactoring, and debugging with full codebase context. It doesn't replace architecture or product decisions — we still define the spec, the data model, and the user flow. Cursor turns "build this screen" into a first draft in seconds so we can refine instead of start from zero.
A Real 2-Day Build: Hour by Hour
I'll walk through a lead-gen tool we built for an agency owner — anonymized.
The brief: An agency needed a single dashboard to ingest leads, enrich them via a few API calls, run personalization through an LLM, and let the team review and send (or schedule) outreach. No custom training, no legacy systems — just a clear pipeline from CSV/API → enriched lead → draft email → out.
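That pipeline maps cleanly onto a few plain types, with each stage adding data to the previous one. A sketch with illustrative field names, not the client's actual schema:

```typescript
// The pipeline stages as plain types: each step adds fields, nothing
// is overwritten. Field names are illustrative, not the real schema.
type IngestedLead = {
  id: string;
  email: string;
  company: string;
  source: "csv" | "api";
};

type EnrichedLead = IngestedLead & {
  companySize?: number;  // from public-data enrichment APIs
  industry?: string;
  enrichedAt: string;    // ISO timestamp
};

type DraftedLead = EnrichedLead & {
  draftEmail: string;    // LLM-personalized outreach draft
  status: "draft" | "queued" | "sent";
};

// A lead moves strictly forward through the pipeline
function toDrafted(lead: EnrichedLead, draft: string): DraftedLead {
  return { ...lead, draftEmail: draft, status: "draft" };
}
```

Agreeing on these shapes in the scoping call is most of what makes the 2-day timeline hold: every screen and API call is just a view over one of these stages.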
Day 1 — Morning (Scoping + foundation):
We locked the scope in a 45-minute call: three main screens (lead list, lead detail + enrichment, send queue), one LLM step for personalization, and Supabase for auth and data. By midday we had the repo, auth, and the lead list view with real data from a seed script. The client could already see the shape of the app.
Day 1 — Afternoon (Core feature):
Enrichment pipeline and LLM integration. We wired the enrichment APIs (public data only), stored results in Postgres, and added a "Generate draft" button that called Claude with the lead context and returned an email draft. By end of Day 1 the client could click a lead, run enrichment, and get a personalized draft. That was the "it works" moment.
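The "Generate draft" step is mostly prompt construction from the lead context. A sketch of the pattern — the template wording and field names are illustrative, not the client's actual prompt:

```typescript
// Building the personalization prompt from lead context.
// Template and fields are a sketch of the pattern, not the real prompt.
type LeadContext = {
  name: string;
  company: string;
  industry: string;
  recentNews?: string;
};

function buildDraftPrompt(lead: LeadContext, brandVoice: string): string {
  const lines = [
    `You are writing outreach for an agency. Voice guidelines: ${brandVoice}`,
    `Write a short, personalized cold email to ${lead.name} at ${lead.company} (${lead.industry}).`,
  ];
  if (lead.recentNews) {
    // Only mention enrichment data we actually have
    lines.push(`Reference this recent news naturally: ${lead.recentNews}`);
  }
  lines.push("Keep it under 120 words. Return only the email body.");
  return lines.join("\n");
}
```

The voice guidelines string is what got tweaked on Day 2 to keep drafts on-brand; isolating it as a parameter means prompt iteration never touches the pipeline code.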
Day 2 — Morning (Integration + edge cases):
Send path (we used Resend for email), scheduling logic, and error handling. A few edge cases showed up — rate limits on one API, prompt tweaks so the draft stayed on-brand. We fixed those in the same morning. The client did a first full pass: import leads, enrich, generate, send. We caught and fixed three small bugs before lunch.
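The rate-limit fix was the standard one: wrap the flaky enrichment call in retry with exponential backoff. A minimal sketch, with illustrative delay values:

```typescript
// Retry-with-backoff wrapper for flaky third-party APIs, the kind of
// fix the rate-limit issue above needed. Delay values are illustrative.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
  throw lastError;
}
```

Usage is one line around the existing call, e.g. `await withRetry(() => enrichLead(lead))`, which is why this kind of edge case fits inside a morning.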
Day 2 — Afternoon (Client review + ship):
The client tested with real leads and asked for two copy changes and one UI tweak. We made those without adding scope. We documented the env vars, added a short README, and handed off the repo and the deployed URL. "Done" meant: they could run the full workflow themselves, and we'd fixed every bug they'd found.
The Role Cursor Actually Plays
Cursor gets more credit than it deserves in some discussions and is dismissed too quickly in others. Here's my honest take:
What Cursor is genuinely great at: Boilerplate (components, API route skeletons, tests), repetitive refactors, and "implement this function given this spec." When we have a clear data shape and acceptance criteria, Cursor can produce a first version in one shot. It's also strong at debugging with full repo context — "why is this failing?" with the whole codebase in the prompt.
Where Cursor needs human judgment: Architecture (what to build first, where to draw service boundaries), prompt design for the client's voice, and anything that depends on the client's business logic. It doesn't know their market or their definition of "good outreach." We stay in the loop for that.
The frame I use: Cursor is a senior developer who writes fast but needs to be told what to build. My job is being the architect. Cursor's job is being the builder.
On a 2-day build, we often split by layer: one person owns frontend and UX, the other owns data + LLM and APIs. We sync at the end of Day 1 and again before ship so the integration doesn't become a bottleneck.
What Clients Actually Get at the End
The deliverable isn't just code. A completed 2-day handoff includes:
- Source code in a private repo (theirs or ours, their choice)
- App deployed and running (we send the URL and env setup)
- README with how to run locally and what each env var does
- A short walkthrough (recorded or live) so they can run the workflow and tweak prompts if needed
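For the env-var documentation specifically, the README usually ships alongside an annotated `.env.example`. An illustrative sketch for the lead-gen build above — the variable names are assumptions, not the client's real config:

```shell
# .env.example — illustrative names, not the client's real config
ANTHROPIC_API_KEY=      # LLM drafts (from the Anthropic console)
SUPABASE_URL=           # project URL from the Supabase dashboard
SUPABASE_ANON_KEY=      # public anon key for client-side auth
RESEND_API_KEY=         # transactional email sending
ENRICHMENT_API_KEY=     # hypothetical enrichment provider key
```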
What's not in scope for the base 2-day engagement: ongoing maintenance, new features, or an SLA. Those can be a separate retainer or follow-on project.
The Cost Question
We price 2-day builds as a flat scope-based fee. A typical 2-day MVP — one clear workflow, standard stack, no custom ML — lands in a range that's a fraction of what a traditional shop would charge for the same outcome. For context: traditional dev shops building the same scope would typically quote 2–3 months and 3–5x the cost. The difference isn't that we cut corners. It's that AI-assisted development with a tight scope and experienced engineers removes the drag that inflates most software projects: unclear requirements, large teams, over-engineering for scale that doesn't exist yet, and revision cycles that happen because no one saw a prototype early enough.
Is This Right for Your Project?
A 2-day AI-powered MVP makes sense if:
- You have a clear workflow you want to automate or a specific tool you want to exist
- You want to see something working quickly to validate before investing more
- You don't need enterprise-grade security or compliance on day one
- You can make decisions fast and stay available during the build
It probably doesn't make sense if:
- The scope is genuinely complex and uncertain (explore before building)
- You need deep integrations with legacy enterprise systems from day one
- You want to own a product that scales to millions of users immediately (different architecture decisions)
We've also turned down projects where the client wanted SOC2 or heavy compliance from day one, or where the "MVP" was actually a list of 10 different ideas with no single clear first step. A 2-day build needs one sharp problem and one clear path to a first version.
Start With the Audit
If you have a specific tool or automation in mind, the free audit is the fastest way to figure out if it's a 2-day build, a longer project, or something you can actually do with an existing tool.
Shubham and his team at Maximal Studio have shipped AI tools for lead gen agencies, e-commerce marketers, and other agency owners. They work in public — follow the build on X @MaximalStudio.
