AI
How to ride the shifting tide of AI correctly
The author proposes that we can either be the dinosaur staring at the oncoming meteor or the surfer riding the big wave. AI has gone through several phases recently: analytical (machine learning), generative, reasoning (chain of thought), and now agentic. Double-check anything factual, because these tools suffer from hallucinations (making things up), accommodation bias (the AI wants to keep us “happy”), and optimism bias (giving us overly rosy answers). They don’t understand broader context or emotional resonance. To be successful with them, (1) provide context, specificity, and precision, and (2) actively learn what they can do.
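To make point (1) concrete, here is a minimal sketch using the OpenAI Python SDK; the model name, the schema details, and the prompt wording are my own illustrative assumptions, not something from the article:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Vague prompt: the model has to guess the audience, scope, and format.
vague = "Write about database indexing."

# Specific prompt: context, constraints, and expected output are explicit.
specific = (
    "You are reviewing a PostgreSQL 16 schema for an e-commerce app. "
    "The orders table has 50M rows and is filtered by customer_id and "
    "created_at. Recommend indexes, explain the trade-offs in three "
    "bullet points, and flag any claim I should double-check."
)

# Run both prompts to compare how much the added context changes the answer.
for label, prompt in [("vague", vague), ("specific", specific)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```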
A.I. Companies Believe They’re Making God (with Karen Hao)
- Quasi-religious atmosphere (where Sam Altman of OpenAI claimed we can’t even imagine what solutions AGI will bring)
- Claims are rooted in belief rather than evidence. These companies say the work is so important that only they can be trusted to steer it; if they don’t do it, a bad actor will. What makes them believe they’re the good guys?
- “Silicon Valley is developing into the most extreme version of itself.” It’s had success making money with “change the world” ideas that we’re now finding out are not so good (screen dependence, social media, etc.).
- Current plan: LLMs, scale, and growth at all costs (despite having set out not to be evil like capitalistic Google)
- OpenAI was initially created as a non-profit AI research lab to counteract Google’s DeepMind, but now it’s basically for-profit with a large valuation and no longer releases research.
- OpenAI stated they need to figure out how to monetize the free tier (likely ads based on the data they’re collecting).
- Empires of old — claimed resources that weren’t theirs, made rules to justify why they now “own” those resources, exploited labor, competed with one another to win because each one thinks they are morally superior.
- AI companies claim resources that are not theirs (Miyazaki artwork), but frame the taking as legitimate (fair use), which takes economic opportunities away from those creators. They exploit labor to clean and label data, and they aim to build labor-replacing technologies (highly autonomous systems that outperform humans at most economically valuable work), which undermines labor bargaining power.
- “If you join us and allow us to do this, and give us all the resources, all the capital, and just close your eyes to the enormous environmental, social, and labor impacts all around the world, we will eventually bring modernity and progress to all of humanity.”
- There are parallels between the East India Company (started as a company but ended up being an arm of the British empire) and empires of AI getting political leverage with the Trump administration. They can act in their own best interest with no material consequences.
- People are becoming more frustrated with tech companies because they feel like they’re losing control over their agency and their lives (self-determination). They feel like they can’t do anything about it (nihilism).
- In the Atacama Desert in Chile, indigenous people are being displaced by lithium and copper mining operations that feed computational infrastructure. It is literally the same thing colonial empires did.
- Trump announced Stargate, a $500B initiative to build compute infrastructure solely for OpenAI. There were many closed-door meetings to work around Musk (who had been snubbed by Altman yet was elevated to a position controlling cuts/spending). The administration is also trying to put a clause in the next funding bill that blocks states from legislating on AI for 10 years.
- It’s also concerning that Musk wants to replace public workers with AI, such that this public work (and data more importantly) is now privatized. What kinds of checks/oversight do we have on these companies?
- There are examples of small, targeted uses of AI that are quite useful yet don’t require massive computation (e.g., AlphaFold from Google DeepMind, which “solved” the protein-folding problem). Another example is an AI system that predicts and optimizes what green energy generation systems can produce.
- Instead of scaling at all costs, figure out the specific problems we have and use AI to solve them. The reason we don’t pursue these efforts is money, power, and ideology.
- There are poor populations globally (e.g., in Kenya) giving up data rights for amounts of cash that are trivial to us. Pursued honestly, OpenAI’s stated goal could be a tide that lifts all of humanity, instead of raising the ceiling while lowering the floor.
- Empires make you feel like they are inevitable; however, every empire falls. They have a supply chain that must feed growth at all costs, and if those sources are chipped away, the empire cannot hold.
- Artists are suing, and using tools like Glaze to cloak their work from AI companies. Workers are rising up and talking about their working conditions. Communities are forcing conversations about new data centers being built (which consume power, land, water). Individuals can also force conversations with institutions (e.g., schools, healthcare providers) about their AI policies.
After months of coding with LLMs, I’m going back to using my brain
“No consistency, no overarching plan. It’s like I’d asked 10 junior-mid developers to work on this codebase, with no Git access, locking them in a room without seeing what the other 9 were doing.” The post rounds up other stories of how LLMs aren’t the godsend their creators claim they are.
Engineering leadership
Five opinions I’ve kept, let go, and picked up as a software builder and leader
- Keep: typed languages are better, engineering managers must be technical, CD is key for high-performing teams, writing is a super power, hiring QA is dysfunctional.
- Drop: Scala is the best JVM language, deadlines are bad, invest time in fine-grained tasks to avoid blockers, test coverage / unit tests / test pyramid are non-negotiable, you should have a preproduction environment just like production
- Add: support is critical for internal platform teams, give support right where people are, shipping builds momentum and complaining creates distrust, infrastructure teams must ease migrations as much as possible, brag documents are better than your performance review
Software development
Test-driven development with GitHub Copilot: A beginner’s practical guide
This video introduces the concepts of unit tests and TDD. It demonstrates how you can write characterization tests using GitHub Copilot, and then shows how to implement TDD (at least the Red+Green parts of Red/Green/Refactor) using the tool.
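A minimal sketch of that Red/Green loop using pytest; the slugify function and the test names are my own hypothetical example, not taken from the video:

```python
# test_slug.py -- runnable with `pytest test_slug.py`. In real TDD the
# implementation lives in its own module; it is inlined here so the
# sketch is self-contained.
import re


def slugify(text: str) -> str:
    # Green step: the minimal implementation that makes the tests pass.
    # Drop anything that isn't a word character or whitespace, then join
    # the remaining words with single hyphens.
    words = re.sub(r"[^\w\s]", "", text).lower().split()
    return "-".join(words)


# Red step: these tests are written first and fail until slugify() works.
def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Ship it, now!") == "ship-it-now"


# Characterization test: pins down how the existing code behaves today
# (repeated spaces collapse) so a later refactor can't change it silently.
def test_collapses_repeated_spaces():
    assert slugify("a  b") == "a-b"
```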
TDD: The Missing Protocol for Effective AI Assisted Software Development
LLMs aren’t successful when given little direction and context; I enjoyed the call-out to the PB&J experiment (writing step-by-step instructions for making a peanut butter and jelly sandwich, which fails without precision). TDD communicates objectives, constraints, and context; ensures tests get written; breaks problems into manageable pieces; and documents the intended behavior. Using LLMs in your code editor reduces context switching, improves team alignment via a shared language (tests), and builds living documentation. “The goal isn’t to replace human developers but to offload repetitive tasks so we can focus on creativity and architecture—where human expertise is irreplaceable.”
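A rough sketch of that tests-as-spec workflow: write the test file first so it encodes the objective, constraints, and edge cases in executable form, then ask the assistant for an implementation that makes it pass. The parse_price function and its cases below are my invention, not from the article:

```python
# The parametrized cases double as living documentation: each row states
# one constraint the implementation (human- or LLM-written) must satisfy.
import pytest


def parse_price(raw: str) -> int:
    """Convert a display price like '$1,299.50' into integer cents."""
    digits = raw.replace("$", "").replace(",", "")
    dollars, _, cents = digits.partition(".")
    return int(dollars) * 100 + int(cents or 0)


@pytest.mark.parametrize(
    ("raw", "cents"),
    [
        ("$19.99", 1999),       # objective: dollars-and-cents to cents
        ("$1,299.50", 129950),  # constraint: thousands separators allowed
        ("$5", 500),            # constraint: the cents part is optional
    ],
)
def test_parse_price(raw: str, cents: int) -> None:
    assert parse_price(raw) == cents
```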