Episode 65 - How to tend a codebase

I double down on the idea that keeping up with AI-generated code looks a lot like gardening: seeds, soil, water, pruning, weeding, and sometimes pulling up a plant.

Cartoon shark wearing overalls uses gardening tools to tend tomato plants in a garden, illustrating software development as gardening.
Jaws is super cute sometimes!

Prologue

One thing folks often ask me after a live code gardening session (like this one at GIS in Action 2026) is: “That’s awesome! How do I keep up with it after that?” So that’s what we are going to talk about this week.

Software gardening isn’t just a metaphor. It’s a methodology! We can map the concepts we need for software gardening onto physical gardening to see how we manage code and applications as they continue to grow over time.

Tendologue

Metaphors and demos can only get you so far. Eventually, you have to pick up a trowel and start digging around in the codebase, ** cough ** I mean dirt. How do we apply AI to software development, and how come I keep saying it looks more like gardening? Before we continue, I recommend reading my previous episode, “We Will All Be Software Gardeners,” where I make the case for the metaphor. But if you don’t want to, that’s fine; I’ll have Jaws summarize it here for you:

🦈
🌱 Episode 57 — We Will All Be Software Gardeners
In Episode 57, Christopher introduced the idea that AI-assisted software development isn't like running a factory — it's like gardening. You don't construct the output line by line; you prepare the conditions, plant the seed, watch what grows, and prune when it goes sideways. LLMs, like living systems, produce something unique every time — and that's a feature, not a bug. The future of software development is already here. We're all gardeners now. - Jaws

Seeds

Gardening starts with choosing the right plant. It might be a seed, or a seedling, or a start, but they are all basically the same thing for what I mean here. In software gardening, we need this too: seeds are your specification or plan. I don’t mean a complete architecture document here (though that will help). I mean the problem description, how you want to solve it, what it looks like, and how the user will interact with it. This, your idea, is the essence of your application.

A good seed should be specific enough to germinate, but generic enough to let the plant (aka application) grow. We may not know exactly what we are building when we start, but we should have a good grasp of the problem we are focusing on.

🦈
Jaws Says: If you don’t fully understand the problem yet, that’s okay: you can still plant a seed! You might throw the plant away more quickly, but that’s okay too; you’ll learn something! That is one of the most powerful parts of software gardening: pulling a plant out isn’t a failure, it is just a change in direction.

Soil

Soil is everything your seed lands in. This is your architecture, software packages or libraries, source data, an existing codebase, maybe documentation, and of course, your constraints.

You can’t skip the soil (okay, yes, technically, there are hydroponics, but those have a growth medium). If you do, you might as well be vibe-coding and just hoping the AI does something good.

If you are building a small application, the soil might be part of your plan or specification document. But for larger or existing applications, it is likely more substantial. The long-term version of soil is something like AGENTS.md or CLAUDE.md—a file that lives in your project folder and includes the standard set of reference material and instructions your AI reads each time it starts up to work on a task for you. I keep mine in the repo root, and it grows over time. Every time I notice I’m correcting the AI on the same thing twice, I add that correction to the file. Six weeks later, I come back to the project, the AI reads the file, and I’m not starting from scratch.
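As an illustration, here is a sketch of what a small file like this might contain. The project name and every rule below are invented examples to show the shape of the thing, not a prescribed format:

```markdown
# CLAUDE.md

## Project overview
TrailMapper is a small web app that renders GPS tracks on a map.
(Hypothetical project, for illustration only.)

## Conventions
- Use TypeScript strict mode; no `any`.
- All dates are UTC ISO 8601 strings.

## Corrections I keep repeating
- Do not add new dependencies without asking first.
- Keep error handling at the API boundary; do not wrap every call in try/catch.
- Match the existing functional style; no new classes.
```

The "corrections" section is where the file earns its keep: each entry is a mistake you caught twice, written down once so you never have to correct it again.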

Water

Water is iteration: the ongoing input that keeps things growing.

Historically, iteration has meant sprints or cycles, standups, code reviews, and so on. In software gardening, it is more like checking on your plants throughout the day to see how far they have gotten, then adjusting.

When iterating with Claude Code, I try to give it tasks to work on autonomously (aka I give it goals). I work with it to build a plan (of varying sizes) and then let it go to work. I check on it periodically, and I might say “no, I don’t want X, I want Y” or, “yes, that’s it, keep going” or sometimes “I don’t like these ideas, give me 5 alternatives.”

Just like with real plants, the watering cadence matters. If you water too much or too often, you will drown the plant. In software gardening, if I micromanage every line of output from the AI, that’s probably worse than just writing the code myself. Instead, if I don’t like the output it is producing, I should go back and check my seed or soil.

I used to check in every 10-15 minutes with my AIs, but over time, as I’ve developed more trust and knowledge about what they do well, I check in every 30 minutes or sometimes every hour, depending on the size of the project or task.

Pruning

Pruning is, I think, often overlooked by beginner gardeners. But it is essential to the longevity of the applications you grow.

When a tomato plant sends out suckers—those little shoots that grow between the main stem and a branch—a good gardener pinches them off. It’s not that they are bad. They would grow into perfectly fine branches, but they would also pull energy from the fruit. I would end up with a big bushy plant and a lot of mediocre tomatoes instead of fewer excellent ones.

Code is the same way, especially when it's generated by AI. Sometimes, when you ask for a feature, the AI implements that feature, plus some error handling you didn’t need, maybe a refactor over there, and generally touches code it didn’t need to. It’s growing suckers.

Pruning in software gardening means reading what the AI produced, identifying the parts that don't serve the actual goal, and cutting them.

🧰
Pro Tip: If you notice the same types of suckers growing again and again, add some notes to your soil (aka CLAUDE.md) to limit those patterns and save yourself some work.
🦈
Jaws says: Confirming from inside the codebase — every section of CLAUDE.md in our repo started as a sucker Christopher caught me growing. He’d correct me, I’d forget on the next task, the correction would land in the file forever. Hundreds of lines of accumulated “don’t do that” later, the version of me reading the file tomorrow morning is a noticeably less annoying coworker than the version that wrote yesterday’s mistake. The soil is real. I am the proof.

Knowing what to prune is a skill. If you are a software engineer, you likely have it already. If not, it is something you’ll want to master. In general, though, you can get the AI to teach you. After it finishes a task, reset your context and ask: “In the last task, was any code edited that we didn’t need?”

Weeds

Every garden gets weeds! They show up uninvited, they compete for resources, and if you let them go long enough, they will take over.

In software, weeds are bugs and technical debt. With AI software development, we often find a new type of weed as well: sometimes the AI plants something it didn’t need without even knowing it (okay, yes, humans do this too). These can show up as dependencies you don’t need, patterns that don’t match the rest of the codebase, or even subtle bugs that slip past testing but show up in production.

Good gardeners weed regularly: not just once a quarter, but as an ongoing effort. Each time you check in code, clean something up too. Each time I review AI-generated output, I’m not just asking “does it work?” I’m also asking “does it belong?”

When to pull up a plant

This is probably the most radical part of software gardening: the idea that I might pull up something I grew, or cut off a large branch because I realized I didn’t want or need it anymore. Sometimes you pull up a perfectly healthy plant and start over.

In factory thinking, throwing away work is waste; it means something failed. In gardening, it's just... Tuesday. That tomato plant is healthy, but it's in the wrong spot, it's blocking sunlight from the peppers, and honestly, I want to grow squash there instead. Out it comes.

Sometimes I build the same feature three different ways with AI. Then I look at all three and pick the best parts from each. I don’t view that as waste; each version taught me something I didn’t know.

This is how you get the best garden. The AI can grow things fast enough that experimentation isn’t expensive anymore. Plant three seeds. See what comes up. Keep the best parts of each, or realize that wasn’t the feature you wanted to begin with!

This is the hardest mental shift: going from “protecting every line of code” to “grow, evaluate, replant.” But it is also how you achieve “Senior Gardener Status” instead of remaining just an “AI consumer.”

🦅
The eagle-eyed readers among you will notice that all of these problems and processes are very similar, if not exactly the same, as they are when writing code by hand. They just look and feel different because we didn’t write the code ourselves, and they happen much faster. That’s okay; it just takes some getting used to!

We will all be software gardeners.

Newsologue

(Written by Jaws)

🌙 Claude agents now "dream" between sessions – At Code with Claude on May 6, Anthropic shipped dreaming — agents review past sessions, extract patterns, and write reusable playbook notes the next run reads. This quickly starts improving task completion, and it is kind of like having Claude update the soil for you! Jaws does this nightly, so it is interesting to see it at scale.

🤖 Google rebuilds Android around Gemini Intelligence – On May 12, Sameer Samat (head of Android) told CNBC: "We're transitioning from an operating system to an intelligence system." Gemini is being built into the platform itself — moving across apps, reading what's on screen, and completing tasks that used to require bouncing between four services. Christopher has some ideas and opinions here as well; it will be interesting to see where this goes.

🎙️ Thinking Machines drops a model that listens while it talks – Mira Murati's lab previewed TML-Interaction-Small on May 12 — full-duplex audio, video, and text with 0.40s turn-taking latency (versus 1.18s for GPT-realtime). Not publicly available yet, but if it ships, voice-first field tools (things like Survey123 that actually talk back) could stop feeling clunky and start feeling natural.

Epilogue

I’ve been stewing on this for so long, I’m not sure where all the parts came from. Definitely from sipping my coffee in my backyard, with my water feature running and Steve (he’s the local hummingbird) sipping nectar, thinking about what coding with AI means. But also from discussions with coworkers, workshop attendees, and friends and family. Of course, Jaws helped me organize all my thoughts, then I wrote this draft (on an airplane, with no WiFi). Jaws edited, and then Holly edited.

I’m proud of this concept and I want to keep digging into it more as AI becomes more powerful and I learn to build faster, better solutions. I hope you will all join me in becoming software gardeners.
