Self-analysis
I knew something had shifted in how I work with AI. The way I approached ideation felt different than it had six months ago. Building moved faster. But I couldn't articulate what had actually changed. Just that it had.
So I ran an experiment.
I pointed Claude at five months of my own conversations, three projects, 398 commits. Not to discover patterns I didn't know existed. To validate the ones I sensed and find the gaps I couldn't see from inside my own process.
It took one night. What came back was sharper than I expected.
I'd developed two distinct modes—thinking and building—with deliberately different approaches to each. I'd learned to withhold context strategically, not dump everything upfront. I'd stopped describing problems and started demonstrating fixes. The analysis confirmed the evolution I'd felt. It also revealed growth edges: where I undersell myself, where I skip crucial context on structural decisions, where I accept Claude's reversals too easily without probing why.
The popular advice misses all of this. Everyone's chasing the perfect prompt when the real skill is knowing what kind of conversation you're having.
Context is constraint
We were building a design system at work. Healthcare company, complex product, lots of components to systematize. We did what felt responsible: we wrote a comprehensive brief. Company context, medical terminology, patient workflows, the whole picture.
Claude absorbed it beautifully. Then it started generating components with names like "PatientErrorChip" and "MedicalStatusLabel." Every output dripped with healthcare language, baked into the component names, the documentation, the suggested variants.
We needed agnostic building blocks. Buttons that didn't know they lived in a hospital. Error states that worked whether you were scheduling surgery or ordering coffee.
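A minimal sketch of the difference, in TypeScript. "PatientErrorChip" comes from the story above; `StatusChipProps`, `statusChip`, and the `tone` values are hypothetical names for illustration, not the actual design system's API. The point is where the domain knowledge lives.

```typescript
// Domain-coupled: healthcare leaks into the component's name and API.
// Reusing this outside a patient context means fighting the abstraction.
type PatientErrorChipProps = {
  patientName: string;
  errorMessage: string;
};

// Agnostic building block: it only knows about intent, not the domain.
// The same chip works for a failed lab sync or a failed coffee order.
type StatusChipProps = {
  tone: "info" | "success" | "warning" | "error";
  label: string;
};

function statusChip({ tone, label }: StatusChipProps): string {
  return `<span class="chip chip--${tone}">${label}</span>`;
}

// Domain meaning lives at the call site, not inside the building block.
const labError = statusChip({ tone: "error", label: "Lab result failed to sync" });
const coffeeError = statusChip({ tone: "error", label: "Order could not be placed" });
```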
Front-load vision. Drip-feed implementation.
I'd given Claude too much context. Context isn't just information. It's constraint. The model couldn't see past what I'd told it to care about.
This is the thing nobody teaches. More detail doesn't mean better outputs. The right detail does. And "right" depends entirely on what you're trying to accomplish.
I've learned to drip-feed context deliberately. Start agnostic. Let the model explore broadly before I tell it which direction matters. Add specificity only when the conversation needs it. When we're making structural decisions, not tactical ones.
The "why" threshold
But withholding context has a limit. I learned this the hard way too.
I was setting up project architecture. Folder structure, component patterns, how things would connect. I gave Claude the what without the why. Stayed agnostic, like I'd trained myself to do.
The implementation worked. But when I needed to extend it later, Claude couldn't generalize. It didn't know why we'd made those choices, so it couldn't apply the same reasoning to new situations. I'd saved tokens upfront and spent them tenfold on corrections.
Now I distinguish between two types of decisions:
Structural decisions need the why. Project architecture, system boundaries, design patterns. These set the foundation everything else builds on. Claude needs the reasoning to generalize correctly.
Tactical decisions don't. CSS tweaks, copy changes, bug fixes. These are isolated. The why just adds noise.
Vision and scope get the why. Implementation details don't.
Two modes, deliberately different
The analysis made this explicit: I operate completely differently depending on what I'm trying to accomplish.
When I'm ideating, I want Claude to challenge assumptions and identify biases. If I dump my whole worldview upfront, I get my own thinking reflected back at me. Polished, articulate, completely useless for finding blind spots. So I stay agnostic. I ask for challenges. I signal the mode explicitly.
When I'm building, precision matters. I describe outcomes and behaviors, not implementations: "Summary cards move on hover. Remove that." Not: "Change the CSS transform property on line 47." I care about what it looks like. Claude figures out how.
I ideate in open-ended conversation. Exploratory, circular. I'm not trying to finish anything. I'm trying to understand what I'm actually thinking. I ask lots of meta questions. You know, questions about questions.
I build with direct, linear exchanges. Commits as checkpoints. Branches as experiments that can be discarded. Terse corrections: "Copy icon is black, should be white." No wasted words.
The switch matters. When I hit a wall while building, I don't debug harder. I step back and explore. When ideation clarifies into something buildable, I stop talking and start executing.
Most people don't know which mode they're in. They try to build while exploring and wonder why the output feels scattered. They try to explore while building and wonder why they're burning tokens on philosophy when they need working code.
Describe and show
When Claude gets something wrong, I don't just tell it it's wrong. I describe the problem and show the fix if possible.
I describe the current status and the expected outcome. Paste the correct structure. Point to documentation or a working example. Reference existing implementations: "Use the same pattern we have in the footer." This eliminates ambiguity.
Saying "that's not right" creates loops. Saying "here's what right looks like" creates forward motion.
Better than showing the fix is pointing to something that already works. "Use the same icon we have on Pre-Note." Claude can read code. Let it reference what's already there instead of describing from scratch.
The analysis revealed I do this instinctively now, but it took months to develop. Early on, I had a hard time describing problems. Now I demonstrate solutions with examples.
When not to prompt
Sometimes the fastest collaboration is no collaboration.
For CSS tweaks I need to test five times, I edit directly. For small copy changes, I just make them. Claude comes back in for structural changes, complex logic, things I genuinely can't see clearly.
This sounds obvious but it's not. I've watched myself and others prompt for changes that would take ten seconds to make manually. So locked into the "AI-assisted workflow" that we forget we can just do things. Same for debugging.
Knowing when to prompt and when to act. That's part of the skill too.
Commits as checkpoints
My git workflow is part of my collaboration system.
Commits aren't milestones. They're savepoints. Every logical stopping point gets a commit, not because I care about clean history, but because I want the freedom to throw things away.
Branches are experiments I might discard entirely. When I can delete a direction without losing real work, I explore more freely. The cost of being wrong drops. The speed of iteration rises.
This isn't just about version control. It's about psychological safety. I move faster when mistakes are cheap.
What most people actually get wrong
The non-technical people I watch struggle with AI are looking for magic prompts. They think the secret is in the wording. Some incantation that unlocks the good outputs. I've watched people paste entire specs into a prompt and wonder why the output feels generic, or why it just doesn't work until they start removing things.
The technical people I watch struggle are using AI as autocomplete. A faster way to write code they already know how to write. They dismiss exploration and ideation completely.
The quality of your output depends on two things: the clarity of your context and your domain expertise. If you don't know what good looks like, you can't recognize it when Claude produces it. If you can't communicate what you need clearly, Claude can't help you get there.
The magic prompt doesn't exist. Mapping your own process does.
What changed for me
I'm a designer. I don't write production code. Not really. HTML and CSS, sure. But a full-stack application with authentication, database, PDF processing? That wasn't "slow" for me. It was impossible.
CESFAM took six days. A healthcare dashboard that processes patient data, calculates eligibility, and handles Chilean lab formats. I designed it and shipped it. Not because I learned to code. Because I learned to collaborate with something that could.
The bottleneck didn't just move. It disappeared.
AI doesn't replace your thinking. It lets you operate at the speed of your ideas instead of the speed of your limitations.
But only if you know what conversation you're having.
The models evolved. So did I.
The way I worked with GPT-3.5 doesn't work with Claude now.
Early models needed comprehensive specs upfront. Scaffolding to stay coherent over long tasks. You had to front-load everything because the model couldn't maintain context or course-correct mid-conversation.
Current models handle ambiguity. They maintain context longer. They can course-correct mid-task without losing the thread. So I've shifted from comprehensive specs to incremental guidance. Build a piece, review it, add the next piece.
The optimal collaboration style isn't fixed. It evolves with the models. What worked six months ago might be inefficient now. What works now will probably change.
I've been in this long enough to see the shifts. That's not expertise. It's just paying attention. But paying attention matters. The people still writing prompts like it's 2023 are leaving performance on the table.
Growth edges
The analysis didn't just confirm what I was doing well. It showed me where I still fall short.
I undersell myself. I present modestly, Claude calibrates to that, and then I have to push back to get the depth I actually need. The fix is honest self-assessment upfront. Not arrogance, just accuracy.
I accept reversals too easily. When Claude changes position after I push back, I've started asking: "What specifically changed your mind?" A weak answer means Claude is capitulating, not updating. I need to probe, not just accept.
I catch myself calling things "proof of concept" when they're really products I'm afraid to ship imperfectly. The sandbox framing is useful for exploration. It becomes a shield when it stops me from claiming the work.
These edges only became visible because I mapped my own patterns. I couldn't see them from inside the process. Running the analysis created distance. And distance created clarity.
Start here
Do what I did. Point the AI at your own history. Ask it to find your patterns. Not the story you tell yourself, but what you actually do. Validate the shifts you sense. Find the gaps you can't see.
I know this whole piece is about not giving magic prompts, but this is how I started that conversation.
The prompt isn't the skill. Knowing which conversation you're in is.