AI
When Using AI Leads to “Brain Fry”
- Intensifies instead of simplifies
- Meta includes LOC generated by AI as a performance metric for engineers (read: output rather than outcomes/impact)
- Juggling and multitasking; “mental fatigue from excessive use or oversight of AI tools beyond one’s cognitive capacity”
- This is basically induced demand (the new capacity gets swallowed up with more work).
- The article is based on a survey (N=1,488).
- Replacing repetitive (toil) tasks -> high fatigue, but lower burnout
- Fatigue predictors: amount of oversight, increased work due to AI
- “Acute mental fatigue, on the other hand, is caused by marshalling attention, working memory, and executive control beyond the limited capacity of these systems. This is exactly what intensive AI oversight requires.”
- Using more than 3 tools decreases productivity.
- Business impacts of AI brain fry: decision fatigue, minor errors, top AI users (the ones that get brain fry) are 39% more likely to quit
- Offloading unpleasant activities to AI leaves more time for joyful/creative tasks and more time to connect with peers.
- What doesn’t help: leaving people to figure AI out on their own, being pressured to use AI (especially to complete more work), unclear AI role/strategy in the org
- “In our work with software developers, we’ve found that the ones who are most advanced with AI start to feel blocked in progress unless they can develop critical new skills such as problem framing, analysis planning, and strategic prioritization.”
Software Engineering
- “Vibe coding” is a slur now. What’s needed is a more disciplined approach with agentic engineering (i.e., how to engineer systems using agents).
- Pillar 1: Context engineering. Most important. Only give what the model needs (don’t throw everything in there). Build a “second brain” for information that’s not in the code (i.e., what engineers have in their heads) so AI can see and use it.
- Pillar 2: Agentic validation. Give the AI tools to validate its work.
- Pillar 3: Agentic tooling. What things block the agent or give it friction? Find ways that humans are the bottlenecks in execution.
- Pillar 4: Agentic codebase. Optimize for agents (e.g., remove dead code, use good patterns). Clean, well-engineered code has never been more important.
- Pillar 5: Compound engineering. The above pillars make things better overall and have compounding effects. Having buy-in with your team on agentic engineering and how all this works makes things more consistent over time.
This article contrasts how we used to think about AI (prompt engineering) and more current thinking (agentic engineering). Instead of asking AI with prompts, we’re creating systems that do work. These systems solve problems with goal-driven execution loops. You engineer behavior, not just output generation; it’s about completing multi-step tasks over time. Prompt engineering is more about “one question, one answer,” but real-world problems are rarely this simple; agentic systems are driven by try/evaluate/correct/continue loops. “The future is not about better prompts. It is about building reliable agent ecosystems that can plan, act, and improve while humans stay in control.”
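The try/evaluate/correct/continue loop described above can be sketched as a minimal Python skeleton. This is an illustrative toy, not a real framework; `act` and `evaluate` stand in for "call the model" and "check the result" (e.g., run the tests):

```python
from dataclasses import dataclass, field

@dataclass
class AgentLoop:
    """Goal-driven execution loop: try, evaluate, correct, continue."""
    goal: str
    max_steps: int = 5
    history: list = field(default_factory=list)

    def run(self, act, evaluate):
        """`act(goal, feedback)` produces a candidate; `evaluate(result)` returns (ok, feedback)."""
        feedback = None
        for _ in range(self.max_steps):
            result = act(self.goal, feedback)       # try
            ok, feedback = evaluate(result)         # evaluate
            self.history.append((result, ok, feedback))
            if ok:                                  # goal satisfied: stop
                return result
            # otherwise: loop again, correcting with the evaluator's feedback
        raise RuntimeError(f"goal not reached in {self.max_steps} steps")
```

The point of the shape: the human engineers the loop (goal, evaluator, step budget) and stays in control, rather than engineering a single prompt.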
- Run from your project’s root directory.
- Run `/init` immediately to create the `.claude/CLAUDE.md` file.
- CLAUDE.md is hierarchical. Edit it with `/memory`: the one from Tip 2 is project memory; global memory is at `~/.claude/CLAUDE.md`.
- Keep CLAUDE.md concise (~300 lines). Bigger -> bloat -> not getting what you want.
- CLAUDE.md should have: what (tech stack), domain context (what each part does), validation steps (build, tests, linters, type checkers). Having a good validation loop will dramatically improve how good AI will be.
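A minimal CLAUDE.md following that shape might look like this (the project name, stack, layout, and commands are placeholders, not a recommendation for any particular tooling):

```markdown
# Project: Acme API (example)

## Tech stack
- Python 3.12, FastAPI, PostgreSQL

## Domain context
- `api/` — HTTP handlers
- `core/` — business rules
- `db/` — persistence

## Validation (run after every change)
- Tests: `pytest -q`
- Lint/types: `ruff check . && mypy .`
- Build: `make build`
```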
- Shift+Tab = toggle modes (edit, plan)
- Esc = interrupt. Helps to steer in plan mode. Press Up to resume.
- Esc+Esc (when you’ve typed stuff) = clear input
- Esc+Esc (empty) = rewind to a previous context point, like an undo button.
- Drag and drop screenshots.
- Add context to screenshots so it knows what it’s working with.
- `/clear` to start a new session.
- `/context` to see the current context content. (MCPs consume more tokens than you may expect.)
- Let auto-compaction work to summarize the context when it’s getting full (or use `/compact` to save it to your “second brain”).
- `/model` switches models. Opus is a recommended default.
- `/resume` to see previous sessions so you can recover.
- `/mcp` shows installed MCPs. Try to minimize which ones you need to keep your context window condensed.
- `/help` to see other commands.
- Git is your safety net. Commit often, before big changes, before risky refactors; revert when needed.
- Claude reads CLAUDE.md top to bottom. Add a critical rules section (never do X, always do Y — use examples). Document mistakes whenever Claude makes mistakes you don’t want repeated. Put homegrown things or deviations from common patterns here.
- Ask Claude to update rules by saying “add this to my rules”.
- Use workflow triggers (e.g., “when user says deploy, then run deploy script”).
- Commit CLAUDE.md to your repository. Be mindful of size and absolute directories.
- `--dangerously-skip-permissions` is basically YOLO mode. Only use it in environments you can throw away (e.g., inside a Docker container) where it can’t mess with OS-level stuff.
- Combine the above with `/permissions` to confirm which tools are allowed.
- Start with Plan mode. It reads without executing, so you can explore without risk. Don’t always take the first result. Put effort into this stage.
- Fresh context beats bloated. Less is more.
- Persist before ending sessions.
- Lazy-load context from what you persisted (e.g., todo lists from previous sessions) instead of carrying everything forward.
- Give verification commands (e.g., run tests, run linter).
- Consider using Opus. It requires less steering, is better at tool use, and is almost always faster for multistep tasks.
- Read the “thinking” blocks to course correct sooner. Look for “I assume” or “I’m not sure”.
- Four composability primitives: skills, commands, MCPs, subagents.
- Skills = recurring workflows. (Workflow = sequence of steps, e.g., fetch a website and then summarize it into a local file). The full skill is loaded only when needed.
- Command = quick shorthand. Starts with `/`.
- Never create commands manually; have Claude do it.
- MCPs = access to external services and tools.
- Have Claude install and configure MCPs for you.
- Subagents = isolated context.
- `Task()` spawns clones for parallel work. Each gets a fresh context window. Use subagents for parallel work and to protect your context window. Subagents are good for atomic tasks where shared context isn’t needed.
- Avoid instruction overload. Quality over quantity.
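A recurring workflow like the fetch-and-summarize example above can be packaged as a skill. A sketch of `.claude/skills/summarize-site/SKILL.md` (the name, description, and steps are invented for illustration; check the skills docs for the exact frontmatter schema):

```markdown
---
name: summarize-site
description: Fetch a web page and summarize it into a local notes file.
---

1. Fetch the URL the user provides.
2. Extract the main content, ignoring navigation and ads.
3. Write a bullet summary to `notes/` in a file named after the site.
```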
- Run multiple instances in different terminals on different tasks.
- Split panes in your terminal.
- Enable notifications so you get a sound when Claude finishes.
- Use Git worktrees for isolation. Multiple Claude instances, same repo, isolated files. Each worktree = separate checkout.
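A possible worktree setup for running two instances side by side. The repo and branch names are illustrative; the scratch-repo steps just make the snippet safe to run anywhere:

```shell
# Scratch repo so the demo is self-contained (skip this in a real project)
tmp=$(mktemp -d) && cd "$tmp"
git init -q main-checkout && cd main-checkout
git -c user.email=you@example.com -c user.name=you commit -q --allow-empty -m "init"

# One Claude instance works here; give a second instance its own checkout:
git worktree add ../feature-x-checkout -b feature-x

# Each worktree has isolated files but shares the same underlying repo
git worktree list

# Clean up once the branch is merged or abandoned
git worktree remove ../feature-x-checkout
```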
- `/chrome` connects to the browser. See and interact with web pages. Useful when you don’t have API keys for something but can see it in the browser.
- Use it for debugging.
- Hooks intercept actions (before/after tool execution, after tool fails, when notification sent, when user submits a prompt).
- Auto-format and lint with `PostToolUse`.
- Use `PreToolUse` to block dangerous commands (guardrails for destructive operations).
- Explore the plugin ecosystem for pre-built skills, commands, and hooks.
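A sketch of how that wiring can look in `.claude/settings.json`. The matchers and commands are illustrative (here, `ruff` for formatting and a hypothetical guard script); verify the exact schema against the hooks documentation:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "ruff format . && ruff check --fix ." }
        ]
      }
    ],
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          { "type": "command", "command": "./scripts/block-dangerous-commands.sh" }
        ]
      }
    ]
  }
}
```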
- Single agent = agent relies on a model’s reasoning capability to choose a tool to complete the request. Works best when you only have a few tools and your task is simple.
- Sequential agent = output of one agent becomes the input for the next one. Works best for structured repeatable tasks.
- Parallel agent = multiple specialized agents run independently. You’ll need another agent (orchestrator?) to pull the results together.
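The two composition patterns can be sketched with plain functions standing in for agents (everything here is illustrative; a real system would call models and tools instead):

```python
from concurrent.futures import ThreadPoolExecutor

# Each "agent" is modeled as a function: input -> output.

def sequential(agents, task):
    """Pipeline: each agent's output becomes the next agent's input.
    Suits structured, repeatable tasks."""
    for agent in agents:
        task = agent(task)
    return task

def parallel(agents, task, orchestrator):
    """Fan the same task out to independent specialists, then let an
    orchestrator pull the results together."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(task), agents))
    return orchestrator(results)
```

For example, `sequential([draft, review, polish], spec)` versus `parallel([security_check, perf_check], code, orchestrator=merge_reports)`.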
The Simplest Way to Make Your Architecture Testable and Reproducible
- Projects fail because they’re unpredictable. If you don’t know how changes will impact your system, it’s difficult to evolve it.
- Anything that’s flaky is non-deterministic and untrustworthy. The goal should be to move fast without gambling.
- Common problems: using the current date/time, concurrency and thread scheduling (Heisenbugs)
- Separate the code that decides from the code that acts. See also: hexagonal architecture, ports and adapters, deterministic core, imperative shell. When the core is pure, you don’t need mocks, frameworks, and complex test scaffolding.
- The narrower the scope, the more control you have over state. “If you don’t know what state your system is in, you have entropy more than architecture.”
- Other topics: hermetic builds, idempotent deployments
- “Determinism isn’t a coding trick, it’s a systems property.” The best organizations don’t just focus on features: They optimize for speed of learning through determinism.
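The decide/act split (and the current-time pitfall) can be shown in a few lines; the renewal domain is invented for illustration:

```python
from datetime import datetime, timezone

# Deterministic core: a pure function that only decides.
# No clock reads, no I/O, so it's testable without mocks or scaffolding.
def decide_renewal(expires_at: datetime, now: datetime) -> str:
    days_left = (expires_at - now).days
    if days_left < 0:
        return "expired"
    return "renew" if days_left <= 7 else "ok"

# Imperative shell: reads the real clock and performs effects at the edge.
def renewal_job(expires_at: datetime) -> str:
    decision = decide_renewal(expires_at, now=datetime.now(timezone.utc))
    # side effects (emails, DB writes) would go here, driven by `decision`
    return decision
```

Because `now` is an argument rather than a hidden `datetime.now()` call inside the core, every code path is reproducible on demand.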
What’s the Exact Technical Gap That Separates AI Success from AI Failure?
- Letting GenAI loose in an enterprise if you don’t understand tech fundamentals is not a good idea
- Small, personal, one-off apps are fine for vibe coding. This isn’t professional development, where you work on more complex systems.
- You still need technical insight and problem decomposition skills even if AI is writing most of the code.
- Technical alignment via standards and working practices enables AI success.
- The constraint for developing software systems is thinking, not typing. Verification is the most important constraint when using AI.
- It’s important to look carefully at the claims around AI. For example, Anthropic claimed it coded a C compiler; however, the code for the compiler was likely in the training data.
- The real work is to build things that people haven’t seen before (i.e., aren’t in the model’s training data).
- There are some tools, frameworks, and ways of building things that are not public. This means that some of the advanced patterns aren’t in the training data.
- The most effective unit of delivery for software is a team, yet these tools are built for individuals. How individuals use these tools also varies.
- AI is non-deterministic and requires oversight. This is different than everyone on the team using the same version of a deterministic IDE.
- One strategy is having two developers pair together and use AI as a third member.
- Clear specification of intent is critical for success with modern tools.
- Architecture decision records are important so AI can understand them.
- Coding standards, domain models, and team knowledge should be consumable by AI tools.
- Use executable specifications and TDD.
- A disciplined approach to engineering is essential for success with AI tools.
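One way to make a specification executable, sketched in Python (the discount domain and rules are invented for illustration): each assertion states one behavior in domain language, is written before the implementation (TDD), and doubles as intent an AI tool can consume and verify against.

```python
def discount(total: float, is_member: bool) -> float:
    """Members get 10% off orders of 100 or more; everyone else pays full price."""
    if is_member and total >= 100:
        return round(total * 0.9, 2)
    return total

# Executable spec: members get 10% off large orders
assert discount(100.0, is_member=True) == 90.0
# Executable spec: small orders are never discounted
assert discount(99.99, is_member=True) == 99.99
# Executable spec: non-members always pay full price
assert discount(500.0, is_member=False) == 500.0
```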