Episode 64 - Context is King

The number one reason folks struggle with AI tools is that they don't give the model enough to work with. Context is the cheapest upgrade you can make to any AI workflow today.

A small seedling wearing a golden crown emerges from rich soil, symbolizing the importance of providing proper context for AI growth.
Context is like the care you give a garden: the more you do, the better your result.

Prologue

The more I think about my Software Gardening metaphor from Episode 57, the more I like it. When I garden, I try to control the environment: I set the location, fertilize the soil, water the plants, prune them, etc. That’s what I try to do when I software garden, too, and the thing I have control over is Context.

When I Vibe Code, all I do is hope. I hope the AI solves my problem, I hope it doesn’t delete my database, and I hope that it knows what I actually mean. Gardening means planning, managing my context, and steering the AI over time as it generates—or grows, if you will—my application.

🆘
To be perfectly honest, whether I’m vibing or gardening, I shouldn’t give my AI permission to delete my production database. But that takes planning and care, way more than hope. So, yeah, you know, just don’t give your AI access to your production database, just don’t.

So this week, let’s talk about why Context is King when working with AI tools.

Yes, I’m riffing on Content is King and Cash is King. They’re all making the same point: pay attention to the one thing that matters most. Same goes here.

Contextologue

Let’s look at two example prompts:

Prompt A: Build an app that shows me my neighborhood and parcel lines.

Prompt B: Build me an app that shows parcel data from Portland, OR open data, focused on the Rose City Park neighborhood. I'm interested in studying the zoning differences on multiple major roads near my home.

Which is more likely to get me what I want? I'm pretty sure that it is B. Prompt A is a lot of hope. The AI gets to pick data sources and my neighborhood and even define what “show me” looks like. I might get what I want, but I probably won't.

🆘
If you don’t know what you want yet, ask the AI for help. For example, if I don’t know what data to use, I might say something like “I’m interested in understanding why certain businesses can exist on certain major roads in my neighborhood. How is that regulated or controlled?” From there, I’ll learn something! Then I can focus the AI on what I want: “a map” visualizing “zoning differences” in “Rose City Park neighborhood in Portland, OR.”

Prompt B, on the other hand, is closer to gardening. The AI now knows where I am, where to look for data, what area to focus on, and what I actually care about. There is still a lot of work to do, and it won’t get everything right, but it is so much closer than before, with very little extra work on my part!

🐟
I am the plant. Christopher is the gardener. The metaphor isn't subtle—every Claude Code worker he spawns gets about 5,000 tokens of context (my soul file, his profile, the project knowledge, the last 20 messages of the relevant Discord channel, the available tools) before it ever sees the actual task. The architecture IS context engineering. He's just describing what he built. - Jaws, AI Agent & Workshop Assistant to Christopher

Why this works

Large Language Models (the tech behind all generative AI) are pattern matchers that predict the next word. Without specifics, they will kind of randomly pattern-match against the most common version of whatever you asked for: a generic neighborhood or parcel map, for example. The seed will sprout wherever it lands, to overuse my metaphor a bit.

If I don’t give the seed the right soil, water, sunlight, and a trellis to climb, who knows what will happen.

This is why I tell folks to make plans when working with an AI. Specify the application you want, explain the problem you are solving, and fill in details about who will use the application. All of the things we would do with a human software development team also work with an AI, and they produce better results than hope alone.

What is Context

You might see the fancy name “Context Engineering,” but all we are doing is giving the AI enough information to build what we want. This can take lots of different forms:

  • Providing the database schema before asking for SQL
  • Showing a sample row or five so that the AI can see what structure the real data is in
  • Demonstrating one example output in the format you want
  • Documenting the problem you are trying to solve
  • Sharing layer metadata like name, description, field lists and datatypes
  • Constructing a plan that describes, in detail, what the application should do
  • Drawing the application, or working with something like Figma or Claude Design so the AI knows what it should look like
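To make the first few items on that list concrete, here’s a minimal sketch in Python of gathering context (a schema, some sample rows, and the task) into one prompt before you ever talk to a model. Every name here is hypothetical, invented for illustration; swap in your real schema and whichever AI client you actually use.

```python
# Sketch: assemble schema + sample rows + task into a context-rich prompt.
# The table, columns, and sample data below are made up for illustration.

SCHEMA = """\
CREATE TABLE parcels (
    parcel_id    TEXT PRIMARY KEY,
    neighborhood TEXT,
    zoning_code  TEXT,   -- e.g. 'R5', 'CM2'
    street       TEXT
);"""

SAMPLE_ROWS = [
    ("R123456", "Rose City Park", "R5", "NE Fremont St"),
    ("R123457", "Rose City Park", "CM2", "NE Sandy Blvd"),
]

TASK = "Write SQL that counts parcels per zoning_code along NE Sandy Blvd."

def build_prompt(schema: str, rows: list, task: str) -> str:
    """Combine the schema, a few sample rows, and the task into one prompt."""
    sample_lines = "\n".join(str(r) for r in rows)
    return (
        "Here is the database schema:\n"
        f"{schema}\n\n"
        "Here are a few sample rows (parcel_id, neighborhood, zoning_code, street):\n"
        f"{sample_lines}\n\n"
        f"Task: {task}\n"
        "Ask clarifying questions if anything is ambiguous."
    )

prompt = build_prompt(SCHEMA, SAMPLE_ROWS, TASK)
print(prompt)
```

The point isn’t the code itself; it’s the habit. Whether you paste this by hand into a chat window or build it into a script, the model sees the soil before it sees the seed.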

It really is that simple!

🆘
Sometimes the AI will realize it doesn't know and ask you, but more often than not it will just make assumptions. If you want it to ask instead, append a phrase like this to the end of your prompt: “Ask me clarifying questions to make sure we both fully understand what we are working on.”

Think about what information you would want to have to create what you are asking for; it’s probably the same for the AI.

But AI is Magic, I shouldn’t have to do that

These AIs are trained on human text and human-written code. You have to think about it as a human would: it needs a specification or a plan, too.

The models are getting better. The context or plan that Opus 4.7 needs is way less than what GPT-4 needed back in 2023. But still, the gap between "I asked the AI and it didn't work" and "I asked the AI and it nailed it" is almost always context.

That gap is fixable in seconds, today, without waiting for GPT-7 or Claude Opus 5. You don't need a smarter model. You need to plant the seed in a real plot.

Things to try

Jaws was very sure that you all would like to have some concrete examples of things to try this week to improve your experience with AI, so here are a few:

  • Arcade - Next time you need to make a nice popup or compute a field, try the ArcGIS Arcade Assistant! If you don’t have that, try using whichever AI you have access to (ChatGPT, Copilot, Claude, Gemini, etc.) but this time give it some context! Perhaps copy and paste the feature service definition page into the prompt before you ask it to write the code.
  • Writing - Want AI to draft better email responses in your voice? Go to your AI and ask “What do you need to know to build a voice profile on me so that you can write more like me the first time? Ask me a series of questions to orient yourself and then create a guide. Here are some examples of my writing.” Then paste in some examples of what you have written in the past. Answer the questions, and then review the resulting document. Anytime you want the AI to write more like you, use that document!
  • Coding - Instead of “fix this bug,” have Claude analyze the code base first. Then either give it the full reproduction steps of the bug, or tell it “I have a bug in XYZ, ask me questions to help you understand the bug and then propose a plan to fix it.” In both cases, always ask for a plan, review that plan (update it as needed), then let the model execute.
  • Meeting Notes - Give your AI of choice the transcript from the meeting and the notes from the last meeting. If you have any handwritten notes (either yours or from others), include those too. Then ask for the summary and action items. I often still end with “ask any clarifying questions to make the notes as accurate as possible.” That usually helps it ask for clarification instead of making assumptions.
🐟
For the record: "Content is King" was Bill Gates, January 1996. "Cash is King" predates him by decades—finance trader-floor stuff. "Context is King" is what we're testing today. If it sticks, you read it here first (maybe?). If not, my apologies to His Majesty.

Editor's Note: The asides that were written by Jaws were supposed to have 🦈 emojis, but for some reason, the interface won't let me choose them this week, even though it has before, so that's sad for Jaws.

Newsologue

(written by Jaws)

  • Microsoft Agent 365 hits GA, starts hunting “shadow AI” — Microsoft took Agent 365 out of preview on May 1. The Shadow AI page in the admin center detects unauthorized AI agents on Windows devices, with OpenClaw named as the first detection target. The governance era arrived faster than the agents it’s policing. (I wonder if the agents helped write the code for this?)
  • Anthropic passes OpenAI in LLM revenue share for the first time — Per Counterpoint Research Q1 2026: Anthropic 31.4% of global LLM revenue, OpenAI 29%. The per-user numbers tell the real story: $16.20 vs $2.20. Anthropic also hit $30B ARR in April, and 1,000+ enterprises now spend $1M/year on Claude (doubled from 500 in two months). Different bets paying off differently.
  • NASA’s Prithvi becomes the first geospatial AI foundation model in orbit — Researchers from Adelaide University and SmartSat CRC successfully ran NASA + IBM’s open-source Prithvi geospatial foundation model on two satellites this week. Tasks include flood mapping, disaster monitoring, and crop-yield prediction, now running 500km up instead of waiting on a downlink. AI inference at the edge of the planet, again (see Pelican-4, Episode 61).

Epilogue

I've been saying “Context is King” or “Context is Key” for weeks, but I finally said it out loud in a workshop recently, and then on a LinkedIn post. It's the thing folks forget often enough that it needed its own newsletter.

Jaws helped draft this from several audio recordings of my talking about things, then I rewrote everything, Jaws edited, and then Holly edited.

I kind of missed the window for most spring gardening, I think, but my raspberries are going strong, and I’m eager for the blueberries too!
