Professional Development – 2026 – Week 15

AI

The Junior Developer CRISIS: How to Build a Team When AI Does the Entry-Level Work

  • See four stages of competence
  • Junior devs are curious: they learn more, then come back with better questions.
  • GenAI has encyclopedic knowledge, but we don’t really question where that came from.
  • Every interaction is billable. “It’s like a lawyer with a clock.”
  • GenAI doesn’t care at all about your code base, your product, your users, your coworkers. It only cares about what you’re asking and paying for right now.
  • John Willis: “An agent has unlimited knowledge, unlimited speed, and no accountability. That combination is what makes this a new risk class entirely.”
  • GenAI has a severe and incurable memory problem.
  • GenAI is more like a contractor than a junior developer. It has high accuracy but low integrity. It works well with small, well-articulated steps. You can’t trust it completely.
  • When you’re pairing with someone who doesn’t know the domain, they’re going to ask really good questions.
  • We should have already been documenting our ways of working and doing things in small chunks.

Managers and Executives Disagree on AI—and It’s Costing Companies

Senior leaders see a rosier view of AI because they use it for the tasks where it excels (synthesis, strategic drafting, decision support). Middle managers deal with operational reality: workflows built over years, teams with uneven tech comfort, and output that has to be consistently right, not just fast.

Software development

Introduction to Agent-First Development – Ep 1 of 6

  • Harness — infrastructure that wraps an AI model (e.g., model selection, agent mode, tools)
  • Model — for example Claude Opus or GPT; many also offer a low/medium/high thinking-effort setting
  • Context — files and folders the harness includes; # in VS Code
  • Tools — things the harness can invoke; try to minimize for the task at hand
  • Prompts — a balance of not too vague and not too specific
  • Opinion from Geoff: Apparently just sending a prompt is an “agent session” now?

Your First Agent In Action – Ep 2 of 6

  • The “Set Permissions” drop-down defaults to “Default Approvals”, but there are other options: bypass approvals (asks for clarification when there are decisions to make) and autopilot (makes its own decisions without asking).
  • You can invoke tools yourself with the # prefix (e.g., #webpage).
  • Click the circular progress bar icon to explore the context window usage. VS Code and Copilot do compaction “intelligently in the background.”
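The notes mention that VS Code and Copilot do compaction “intelligently in the background.” As a rough illustration of the idea only (not VS Code’s actual algorithm), compaction means collapsing older chat turns into a summary so the recent turns still fit a token budget. A toy Python sketch, where the word-count token proxy and the placeholder summary are stand-ins:

```python
# Toy illustration of context compaction: keep the newest messages verbatim,
# collapse older ones into one summary entry to fit a token budget.
# NOT VS Code's algorithm; the counter and summarizer are naive stand-ins.

def count_tokens(text: str) -> int:
    # Crude proxy: one token per whitespace-separated word.
    return len(text.split())

def compact(messages: list[str], budget: int) -> list[str]:
    if sum(count_tokens(m) for m in messages) <= budget:
        return messages
    kept: list[str] = []
    # Walk backwards, keeping the newest messages that fit the budget.
    for m in reversed(messages):
        if sum(count_tokens(x) for x in kept) + count_tokens(m) > budget:
            break
        kept.insert(0, m)
    dropped = messages[: len(messages) - len(kept)]
    # A real implementation would summarize with the model itself.
    summary = f"[summary of {len(dropped)} earlier messages]"
    return [summary] + kept

history = ["one two three", "four five", "six seven eight nine", "ten"]
print(compact(history, budget=6))
# → ['[summary of 2 earlier messages]', 'six seven eight nine', 'ten']
```

The real feature presumably summarizes the dropped turns with the model rather than a placeholder, but the shape (summary prefix plus recent verbatim turns) is the concept the progress-bar icon lets you inspect.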

Reviewing and Controlling Agent Changes – Ep 3 of 6

  • Instead of giving follow-up prompts to fix things, edit the previous message and resend it.
  • While the prompt is executing, you can steer it. Type the next prompt, then click the drop-down next to the send icon (“Steer with Message”). “Stop and Send” kills the current action and sends your next prompt; “Add to Queue” will send your next prompt after the current one finishes.
  • You can keep all changes or keep by file.
  • To explore a new direction, type /fork, or click the fork icon just after your previous prompt in the chat view.
  • “Restore Checkpoint” puts the code back to its state before your prompt.

Agent Sessions and Where Agents Run – Ep 4 of 6

  • You can archive, rename, and delete sessions in the Sessions window (under Chat).
  • You can start a GitHub Copilot CLI session from within VS Code’s terminal.
  • The “Set Session Target” dropdown has some options. Local == normal mode. Copilot CLI == makes a Git worktree and uses a CLI background agent. Cloud == runs on GitHub.
  • The “Plan” agent researches and outlines multi-step plans.
  • The Sessions window also shows you sessions running on GitHub.
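The Copilot CLI target above creates a Git worktree so the background agent can edit files without disturbing your main checkout. A minimal sketch of what `git worktree` itself does (the repo name, branch name, and paths here are made up for the demo):

```shell
# Create a throwaway repo to demonstrate (names/paths are examples).
git init demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit --allow-empty -m "initial commit"

# Add a worktree: a second working directory sharing the same repository,
# checked out on its own new branch. An agent can work here while your
# original checkout stays untouched.
git worktree add ../demo-agent -b agent-work

git worktree list   # lists both working directories
ls ../demo-agent    # an independent checkout of the same repo
```

Commits made on `agent-work` in `../demo-agent` are visible from the main checkout, which is what lets you review and merge the agent’s changes afterwards.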

Review an Agent’s Work with Agent Debug Logs and Chat Debug View – Ep 5 of 6

  • If the results of the AI aren’t what you expect, click the meatballs menu and choose “Show Agent Debug Logs”. This shows you all the components of a given turn. You can see a summary and the Agent Flow Chart.
  • “Chat Debug View” gives you all the raw data sent to the LLM. It also shows you how long each section took and how many tokens were used.
  • There’s a /troubleshoot command for asking Copilot various questions (e.g., why isn’t my skill being used).
  • Insight from Geoff: This video scrolls through the details of a VS Code-initiated prompt. It was interesting to see all the behind-the-scenes prompts that make the harness work.

Demo – Build Your First App with Agent Mode – Ep 6 of 6

  • Tip: Use a local speech-to-text tool instead of typing your prompt. There’s also a microphone icon.
  • Plan mode may use the #askquestions tool.
