AI
How Behavioral Science Can Improve the Return on AI Investments
- Touches on many behavioral constructs: technosolutionism, loss aversion, inventor’s bias, availability heuristic, illusion of explanatory depth, algorithm aversion.
- Use a bottom-up approach by involving end users early.
- For adoption: frame AI as an augmenter (not a replacer), make AI’s mistakes relatable, and provide transparency about how AI arrives at its decisions to demystify it.
- Most companies measure only adoption, when they should also measure trust and perceived fairness.
- The article references a Boston Consulting Group report (Oct 2025) stating that 74% of companies are not seeing “tangible ROI from AI.”
AI Agents Aren’t Ready for Consumer-Facing Work — But They Can Excel at Internal Processes
- “In our view, many of the leading AI proponents are overhyping when they make bold statements that entire swaths of the economy will be shortly replaced by AI. That’s because real, functional AI in established companies is hard work: it takes relatively clean data, process mapping, and deep experimentation—and even then often requires a human in the loop.”
- Focus on the next 2 years, not 10 years from now.
- The gains are good, but not dramatic (read: hype).
- To succeed: cut through the hype, understand what AI can actually do, and take a practical experiment-and-learn approach to multi-agent systems.
- Use small, specialized agents with heavy verification (treat them like toddlers).
- “Customer-facing contexts are a bad fit for the current capabilities of AI agents. They’re messy and unpredictable; inputs are unstructured, tone and context shift constantly, and regulators and consumers have little tolerance for hallucinations or errors.”
- “In many ways, it feels like rediscovering Lean—reengineering work from the ground up. The difference is that today’s toolset is vastly more powerful, enabling not just incremental optimization but full process redesign, even across departments.”