How the Creator of Claude Code Actually Uses Opus 4.7
Boris Cherny shared his playbook
Boris Cherny is the creator of Claude Code. On April 16th, after dogfooding Opus 4.7 for several weeks, he posted a thread on X breaking down six ways to get more out of the new model. Not marketing copy. Not benchmarks. Just the workflow he landed on after living with it daily.
The thread resonated because developers want to know how the person who built the thing actually uses the thing. Here is what he shared, and what it means for the rest of us.
Stop babysitting your AI
The first tip is auto mode. Before this, you had two choices when running long tasks: sit there clicking "approve" every thirty seconds, or use the `--dangerously-skip-permissions` flag and hope nothing went sideways.
Auto mode sits in the middle. Permission prompts get routed to a model-based classifier that decides whether a command is safe. Safe commands run automatically. Risky ones still pause and ask you. The downstream effect is significant: you can run multiple Claude sessions in parallel without being the bottleneck for every permission prompt.
One important caveat Boris mentioned: auto mode is currently limited to Max, Teams, and Enterprise users. If you are on a Pro plan, you will have to wait.
Use Shift+Tab to cycle between Ask permissions, Plan mode, and Auto mode in the CLI, or choose it from the dropdown in Desktop or VS Code.
Let the tool audit itself
Alongside auto mode, Anthropic shipped a skill called /fewer-permission-prompts. It scans through your session history, finds bash and MCP commands that are safe but repeatedly trigger permission prompts, and recommends adding them to your allowlist.
This is smart for a subtle reason. Most people never tune their permissions because they do not know which commands come up often enough to matter. The skill does that analysis for you. Run it after a normal day of work and you will probably cut your permission prompts significantly.
If you are not ready for auto mode — or not on a plan that supports it — this is the practical alternative.
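For reference, allowlist entries live in Claude Code's settings file under `permissions.allow`. A sketch of what accepting the skill's recommendations might produce is below — the `permissions.allow` structure follows Claude Code's documented settings format, but the specific entries are invented for illustration, not taken from Boris's thread:

```json
{
  "permissions": {
    "allow": [
      "Bash(npm run test:*)",
      "Bash(git diff:*)",
      "Bash(git log:*)"
    ]
  }
}
```

Each entry whitelists a command pattern so matching invocations run without a prompt; anything not matched still pauses and asks.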
Recaps help you context-switch
Anthropic shipped recaps shortly before Opus 4.7 launched, specifically to prep for the new model. They are short summaries of what an agent did and what is next, shown when you return to a session after stepping away.
This sounds like a minor quality-of-life feature. It is not. If you run multiple Claude sessions, context-switching between them is the real bottleneck. Before recaps, you had to scroll through the transcript to figure out where things stood. Now you get a few lines of summary and you are back up to speed.
Boris highlighted how useful these are when returning to long-running sessions. You can disable them in /config if they are not your thing.
Focus mode is about trust
The /focus command hides all the intermediate work — tool calls, file reads, bash outputs — and shows you only the final result. Boris said he has been loving this because he generally trusts the model to pick the right commands and make the right edits.
This is a mindset shift more than a feature. Most developers watch every single step because they are anxious about what the model might do. Focus mode forces you to ask: do I actually need to see every file read, or do I just care about the outcome?
If you are still watching every intermediate step, try focus mode for a day. You might realize you were spending mental energy on oversight that was not changing the result.
Effort levels replaced thinking budgets
Opus 4.7 uses adaptive thinking instead of fixed thinking budgets. You no longer tell the model how much to think. You tell it how much effort to apply, and it figures out the thinking on its own.
Five levels: low, medium, high, xhigh, and max. The default in Claude Code is xhigh. The max level only applies to the current session and does not persist — it is meant for specific hard problems, not as a permanent setting.
One thing worth noting: community members testing 4.7 have observed that it uses more thinking tokens than 4.6 at the same effort level, because the adaptive system decides independently how much reasoning is needed. Anthropic raised rate limits for all subscribers to offset this, but if you are cost-conscious, it is worth monitoring.
Verification is still the biggest multiplier
Boris's final tip is the same one he has emphasized across multiple threads: always give Claude a way to verify its own work.
What verification looks like depends on the task. Backend work — have Claude start the server and test end-to-end. Frontend — use the Claude Chromium extension so the model can control a browser. Desktop apps — use computer use.
His personal pattern is a /go skill that does three things in sequence: test the change end-to-end, run /simplify to clean up the code, then put up a PR. Every prompt he writes ends with /go. The model builds, verifies, cleans up, and ships — in one command.
This is the tip that separates good results from great ones. Without a verification loop, the model writes code and hopes it works. With one, it writes code, tests it, sees failures, fixes them, and iterates until it passes. In Boris's experience across multiple Claude versions, this feedback loop improves output quality dramatically.
The meta-lesson
Reading Boris's thread, the thing that stands out is not any single tip. It is how deliberately he has built his workflow around the model's strengths. Auto mode for parallelism. Recaps for context-switching. Focus mode for trust. Effort levels for tuning. Verification for quality.
None of these are complicated. They are just intentional. Most people install Claude Code, leave everything on default, and then wonder why their experience does not match what they see online. The gap is not talent or secret knowledge. It is configuration.
The model is genuinely capable enough now that the bottleneck is not intelligence — it is how you set up the environment around it. Boris's thread is proof that even the person who built the tool still has to tune it to get the best results.
That should be encouraging. It means the ceiling is high and most people have not hit it yet.
Based on Boris Cherny's thread posted April 16, 2026. Boris is the creator of Claude Code at Anthropic. Some technical details on token usage and effort defaults confirmed by community testing.