How I Keep Shipping with AI

By Lz on 2026-04-27 · 4 min read


I have a full-time job, and I also run several apps on the side: Habite, TimeCrunch, theDay, and Budgitify. For a long time the math on that did not really work: there are only so many hours in a week after a real job, and shipping anything substantial took months, and only in the stretches when I could afford to put real focus into it. AI is the thing that has actually changed that math for me, more than any productivity system or tooling change I have tried in the last decade.

Most of that happens inside Claude Code, which lives in my editor and can read my files, edit them, and run commands the same way I would. I still use the chat-based tools for the off-the-cuff questions, but the bulk of the actual work happens where the project already lives.

1. Where I Actually Lean On It

I lean on it most for the parts of the work where the answer already exists somewhere but would take me an hour to assemble: boilerplate and scaffolding, working with unfamiliar APIs, cross-file refactors that are not quite mechanical enough for a regex, and writing tests for code I just shipped. None of that was ever the interesting part of the job, but all of it used to eat the kind of time I do not have to give.

The unfamiliar API piece is the biggest one for me right now. I have been integrating Plaid into Budgitify, and Plaid is the kind of API where the documentation is reasonably complete but the actual behavior in edge cases is its own learning curve. Being able to ask very specific questions and validate the answer against my own logs has saved me days of trial-and-error. A typical question looks like:

In Plaid Sandbox, when a TRANSFER returns with code R01, what does
the webhook payload look like, and which fields do I need to act on
versus log and ignore?

The answer comes back in a form I can compare directly against what I am already seeing in my own code, which is what turns it from a guess into something I can ship the same evening.
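The act-versus-log split usually ends up as a small routing function in my code. This is an illustrative sketch only: the payload shape below mirrors what I see in my own logs, and the field names (`webhook_type`, `webhook_code`) should be treated as assumptions to verify against Plaid's webhook reference, not as a definitive schema.

```typescript
// Hypothetical payload shape for an incoming Plaid webhook. Verify the
// exact fields against the Plaid webhook docs before relying on them.
interface PlaidWebhookPayload {
  webhook_type: string; // e.g. "TRANSFER"
  webhook_code: string; // e.g. "TRANSFER_EVENTS_UPDATE"
}

// Decide whether a webhook needs action (kick off a sync, update state)
// or should only be logged. Everything not explicitly routed is logged.
function classifyWebhook(payload: PlaidWebhookPayload): "act" | "log" {
  if (
    payload.webhook_type === "TRANSFER" &&
    payload.webhook_code === "TRANSFER_EVENTS_UPDATE"
  ) {
    // A transfer event (such as an R01 return) landed; go fetch the
    // details rather than trying to parse them out of the webhook itself.
    return "act";
  }
  return "log";
}
```

The useful property is that the default path is "log and ignore," so an unexpected webhook never silently mutates state.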

2. How I Actually Prompt

The biggest difference between this being useful and being a frustration is in how you talk to it. Vague prompts produce vague results. The more clearly you describe the constraint, the file structure, and the conventions you care about, the better the output, and that clarity comes from knowing your own codebase well enough to ask the right questions in the first place. In practice the difference looks like this:

Vague:
"Write a webhook handler for Plaid transaction updates"

Specific:
"Write a webhook handler for the TRANSACTIONS:DEFAULT_UPDATE event in
apps/budgitify/src/api/webhooks/plaid.ts. Follow the existing pattern
in apps/budgitify/src/api/webhooks/auth.ts, use the PlaidWebhookEvent
type from apps/budgitify/src/types/plaid.ts, and call syncTransactions
from apps/budgitify/src/lib/plaid/sync.ts. Only handle the
new_transactions and removed_transactions arrays for now."

The second one takes thirty seconds longer to write and saves the round-trip of correcting the first one back into something I can actually use.
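For a sense of what the specific prompt buys you, here is the rough shape of the handler it tends to produce. This is a self-contained sketch, not the real Budgitify code: `PlaidWebhookEvent` and `syncTransactions` are names from my own project, and the minimal stand-in versions here exist only so the example runs on its own.

```typescript
// Stand-in for the project's real type. The prompt treats
// new_transactions and removed_transactions as arrays of IDs.
interface PlaidWebhookEvent {
  webhook_code: string;
  new_transactions?: string[];
  removed_transactions?: string[];
}

// Stand-in for the real sync in src/lib/plaid/sync.ts; here it just
// reports how many items it was asked to reconcile.
async function syncTransactions(
  added: string[],
  removed: string[]
): Promise<number> {
  return added.length + removed.length;
}

// Handle only the DEFAULT_UPDATE event, as the prompt specifies;
// everything else is a no-op for now.
export async function handleDefaultUpdate(
  event: PlaidWebhookEvent
): Promise<number> {
  if (event.webhook_code !== "DEFAULT_UPDATE") return 0;
  const added = event.new_transactions ?? [];
  const removed = event.removed_transactions ?? [];
  return syncTransactions(added, removed);
}
```

Because the prompt named the file, the type, and the function to call, the model has almost no room to invent structure, which is exactly the point.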

3. Where I Do Not Use It

Where I do not lean on it is anywhere that requires real judgment. What to build, why to build it, how to design something so it is still maintainable in two years when I have half-forgotten the original context. Those calls are still on me, and they are still the parts of the work that actually separate good from average.

The other thing to watch for is that the failure mode is usually subtle. A function name that does not quite exist, a return type that is one field off, an import that resolves to nothing. Something like this:

// What looked correct at a glance:
const accessToken = await plaid.exchange(publicToken)

// What the actual plaid-node API is (the response is Axios-wrapped,
// so the token lives under .data):
const response = await plaidClient.itemPublicTokenExchange({
  public_token: publicToken,
})
const accessToken = response.data.access_token

The first version compiles in your head and only falls over when you actually run it. Catching that kind of thing is the work that does not go away no matter how good the tooling gets.

4. What Has Actually Changed

The honest measurement here is throughput. Habite v3, a multi-month complete rebuild of the entire app, would have taken me twice as long a year ago, and probably would not have happened at all the year before that. That is not a transformation, though. I still ship the things I would have shipped, just more of them in the same window of time. The difference is at the margin, and the margin happens to be the part of the equation that decides whether a side project actually gets across the line or sits in a folder for a year waiting on a stretch of focus that never quite shows up.

If you have been on the fence about whether any of this is worth your time, spend a real week using one of these tools as if it were already part of your workflow rather than a curiosity. A week of that will teach you more about the actual capabilities and the actual limits than a year of opinions formed from a distance. If your time is already constrained the way mine is, that is the gap worth closing.