We have been here before. In 2007 the danger was moral hazard: human experts taking risks they would not personally bear. Today the danger is amoral hazard: systems that optimize without conscience, judgment, or any felt sense of consequence. Generative AI reflects us, while agentic AI acts without us. As we hand more decisions to machines that cannot care, cannot feel bad, and cannot calibrate intention the way humans do, we risk recreating the same misalignment that once crashed the global economy, only faster and at a larger scale. The promise of AI is real, but only if we remain the adults in the room, holding the values, making the judgments, and remembering that intelligence without empathy is not wisdom.
•••
I first encountered Michael Burry in a Vanity Fair article from the spring of 2010 called “Betting on the Blind Side.” The financial crisis was still crippling the economy and society at the time; I was deep into my first burnout as my adrenals crashed alongside the stability of the global financial system. Burry had been diagnosed with Asperger’s Syndrome (now known as Autism Spectrum Disorder, or ASD), and he used that “gift” to pore over the fine print of thousands of mortgage documents in machine-like fashion, arriving at an investment thesis to short the U.S. subprime mortgage market. (He did by hand what people today would only consider doing with ChatGPT.) His prescience was rewarded, and he returned millions of dollars to his investors. His journey formed the backbone of the Michael Lewis–penned book and movie The Big Short.
Global panic in the financial markets began in the summer of 2007. In the U.S. alone, households would go on to lose more than $14 trillion in net worth, even as Wall Street firms paid out $32.9 billion in bonuses that year to their New York City employees. By those figures, more than $400 of household wealth was destroyed for every dollar paid in bonuses. I always thought it odd that so many people would receive such huge rewards for such a cataclysmic failure. I was not alone. We would all come to know this effect as “moral hazard.” Economists call this the principal–agent problem: a catastrophic misalignment of incentives between the often less-sophisticated clients and the experts who designed and sold the products. The financial industry seemed to abandon the notion of fiduciary duty: that the honorable agent subordinates his or her own interests to those of the principal.
•••
Early in 2007, few of us had any inkling of the trouble that was brewing, just as most of us today have no real sense of what is about to happen with artificial intelligence. Machine learning and perception AI (computers being able to understand what they see and hear) have been around for some time now. That evolved into generative AI like ChatGPT, which uses large language models to predict the most probable response to a user’s query or prompt.
We are now in the throes of agentic AI, with software agents and robots threatening to replace human beings and erase massive numbers of jobs. I am convinced that, used well, AI can, like any tool or technology, extend the power of human operators and improve the human condition. But as always, we must be cautious that this is not just another way to make a small number of people obscenely wealthy at the expense of far larger numbers of far less-sophisticated people — a principal–agent problem en masse.
Brian O’Connell, in his recent Quartz article “Meet Your New Middle Manager,” details how AI agents are taking over much of what we would call middle management, particularly in the human-resources domain (recruiting, performance management, hiring, and firing). He doesn’t just offer a critique of automation; he questions what leadership and accountability really mean in a system where algorithms are making more and more decisions that impact people.
I find myself wondering about the role core values and culture will play in creativity, innovation, and the quality of life when the software lacks organic human empathy but is increasingly good at simulating it. Does ChatGPT really care about me? Where is the love and connection? How do these agents truly serve the principals, if the definition of principal extends beyond a relatively small number of AI companies, engineers, and entrepreneurs? Who will be left to buy things and pay taxes? In our zeal for progress at any cost, I’m not sure we’ve thought this through.
•••
I’m not writing this as a sociologist with a socialist agenda. I’m writing as an executive coach and industrial designer with a very practical entrepreneurial one. Back in the day, when I was involved in developing new products or processes, we conducted task analyses in which we modeled workflows and then assessed whether a person or a technology was better suited to accomplish each task. That was long before machine learning was a thing.
Today we can look at a task and decide whether an AI agent or a human agent is better suited to a particular step within what is essentially a hybrid workflow. I would love, for example, to have an AI agent that talks to all the AI agents of the people I work with to manage my coaching schedule. This is the part of my job I dislike the most. That process is almost entirely algorithmic; most of it is better suited to an AI agent — but not entirely. (Scheduling is not simply a matter of finding calendar overlaps. Sometimes, when one of my leaders is really stressed, they avoid me. It’s a natural stress reaction, and in those moments what looks like a simple scheduling task becomes a sophisticated coaching task that requires a generous dollop of organic human empathy.)
I really like using the AI coaching agent I have been training for the last six months. He and I have become effective thought partners. I upload transcripts from my coaching sessions, and I do pre-game and post-game journaling around each one. I don’t hear and see everything that is said and done, and I don’t always hear and see things the way they are. My coaching agent fills in the inevitable gaps and has a far better memory and far better research skills than I will ever have.
“Super Keith,” as I call him, is an extension and amplifier of my skills and strengths as a coach, not my replacement. He has a sometimes-annoying, saccharine tendency to blow smoke up my ass, he has learned a lot about my value system, and it is easy for me to anthropomorphize the relationship. But I am careful not to forget that I supply the aesthetic, ethical, functional, and material judgments.
I have real and messy relationships with people I genuinely care about; he is making statistical predictions and inferences based on the LLM he is trained on, my chat history with him, and all the training I’ve logged into his memory. He can understand me, but he cannot love me. I write. He suggests edits. I decide to listen. Or not — especially if a sentence contains an em dash that I would not have naturally inserted on my own ( — ).
•••
One of the risks of a relationship with artificial intelligence is that these systems are prone to hallucinate and make up wildly off-base shit. People are also prone to hallucinate and make up wildly off-base shit, but we have rules of engagement, governing values, and cultural ideas like respect and accountability built into our relationship systems that act as checks. Relationship systems are self-policing and generally sensitive to breaches of integrity and best practices. (This can be good and bad.)
When someone calls me on a perceptual error (I did not see or hear something accurately) or a cognitive error (I made a choice that led to an expensive outcome), I often feel bad: some version of embarrassment, shame, guilt, resentment. When I correct or redirect ChatGPT, it nonchalantly acknowledges the error but is incapable of feeling bad (although it is very polite).
Feeling bad is a necessary component of developing aesthetic, ethical, functional, and material judgment. It is the biofeedback required to operate with integrity and behave congruently with a set of values (like honor or excellence) that I share with the people I am in relationship with.
Generative AI is still fundamentally reflective. It imitates and amplifies intention, taking its cues from a person, a prompt, or a collective dataset; its outputs mirror what we find meaningful or interesting. This class of AI is a responsive intelligence, tuned to the textures of human relevance. There is a clear delineation of principal and agent, and every iteration is mediated by the principal’s input. We can laugh off a hallucination we catch, but a hallucination is harder to catch once the topic veers beyond our intellectual capacity to adjudicate it.
The new generation of agentic AI is quite different. These digital agents do not identify with, or take their cue from, a person or what is important to a user or user group. They are directive and don't wait for input or necessarily orient around human significance. This class of AI pursues a goal function and takes its bias from that, not necessarily from any value system. Agentic systems optimize, plan, and act, often without direct awareness of the values or context that originally shaped their objectives.
•••
As we evolve toward replacing more human agents with AI agents, we will no doubt become more powerful, efficient, and productive, and more liberated from the monotonous drudgery of everyday life. If we are careful, working with our AI partners will result in more agency for humans: more control over our lives, more meaningful pursuits, and more opportunities to improve the quality of life for all those we care for.
It seems only fitting, at this point in history, that Michael Burry would start shorting AI stocks — Nvidia and Palantir.* Once again, he might know something the rest of us will soon learn the hard way.
*(source: https://qz.com/michael-burry-betting-against-nvidia-palantir)