I first encountered Michael Burry in a Vanity Fair article in the spring of 2010 called “Betting on the Blind Side”. The financial crisis was still crippling the economy and society at the time, and I was deep into my first burnout, my adrenals crashing alongside the stability of the global financial system. Burry had been diagnosed with Asperger's Syndrome (now known as Autism Spectrum Disorder, or ASD), and he used that "gift" to pore over the fine print of thousands of mortgage documents in machine-like fashion, resulting in an investment thesis to short the US subprime mortgage market. His prescience was rewarded and he returned millions of dollars to his investors. His journey formed the backbone of the Michael Lewis-penned book and movie The Big Short.
Global panic in the financial markets began in the summer of 2007. In the U.S. alone, households would go on to lose more than $14 trillion in net worth, even as Wall Street firms paid out $32.9 billion in bonuses that year to employees in New York City. More than four hundred dollars of wealth were destroyed for every dollar paid in bonuses. I always thought it odd that so many people would receive such huge rewards for such a cataclysmic failure. I was not alone. We would all come to know this effect as "moral hazard". Behavioural economics calls this the principal-agent problem: there was a catastrophic misalignment of incentives between the often less sophisticated investors and the experts who designed and sold the products. The financial industry seemed to abandon the notion of fiduciary duty, the idea that the agent subordinates his or her own interests to those of the principal.
•••
Early in 2007 few of us had any inkling of the trouble that was brewing, just as right now most of us have no inkling of what is about to happen with artificial intelligence. Machine learning and "perception AI" (computers being able to understand what they are seeing and hearing) have been around for some time now. That evolved into "generative AI" like ChatGPT, which uses large language models to predict the most probable answer to a user query or prompt. We are now in the throes of AI agents and robots threatening to replace human workers and erase massive numbers of jobs. I am convinced that, used well, AI can, like any tool or technology, extend the power of human operators and improve the human condition. But as always, we have to be cautious that this is not just another way to make a small number of people obscenely wealthy at the expense of a much larger number of much less sophisticated people.
Brian O’Connell, in his recent article in Quartz called “Meet Your New Middle Manager”, details how AI agents are taking over much of what we would call middle management, particularly in the human resources areas (recruiting, performance management, hiring and firing). He doesn't just offer a critique of automation; he questions what leadership and accountability really mean in a system where algorithms are making more and more decisions that impact people. I am wondering about the role that core values and culture will play in creativity, innovation and improving the quality of life for people when the software lacks organic human empathy but is increasingly good at simulating it. Does ChatGPT really care about me? Where is the love and connection? How do these agents really serve the principals, if the definition of principal is larger than a relatively small number of AI companies, engineers and entrepreneurs? Who will be left to buy things and pay taxes? I am concerned that, in the zeal for progress at any cost, we've not yet thought this through.
•••
I am not writing this as a sociologist with a socialist agenda. I am writing as an executive coach and industrial designer with a very practical entrepreneurial agenda. Back in the day, when I was involved in developing new products or processes, we would conduct task analyses in which we would model a work or information flow and then make an assessment about whether a person or a technology was better suited to accomplish a given task. That was long before machine learning was a thing. Today we can look at a task and decide whether an AI agent or a human agent is better suited to it within what is essentially a hybrid workflow. I would love, for example, to have an AI agent that talks to all the AI agents of all the people I work with to manage my coaching schedule. This is the part of my job I dislike the most. That process is almost entirely algorithmic; most of it is better suited to an AI agent, but it is not entirely an agentic process. Scheduling is not simply a matter of finding calendar overlaps. Sometimes in coaching, when one of my leaders is really stressed, they will avoid me. It's a natural stress reaction, and at these times what looks like a simple scheduling task is a vastly more sophisticated coaching task that requires a big dollop of that organic human empathy.
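If I were to sketch that routing decision in code, it might look something like the fragment below. This is purely illustrative, a minimal sketch of the task-analysis idea rather than a description of any real scheduling product; the names, signals and thresholds are my own assumptions.

```python
# A minimal, hypothetical sketch: route the algorithmic part of scheduling to
# an AI agent, but escalate to the human coach when the signals suggest the
# "scheduling problem" is really a coaching problem. All names and thresholds
# here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class SchedulingRequest:
    client: str
    proposed_slots: int           # mutually free calendar slots found
    cancellations_in_row: int     # how many recent sessions were cancelled
    days_since_last_session: int


def route(request: SchedulingRequest) -> str:
    """Decide whether an AI agent or the human coach should handle this task."""
    # Avoidance pattern: what looks like a calendar problem may be a stress
    # reaction that needs organic human empathy, not another booking email.
    if request.cancellations_in_row >= 2 or request.days_since_last_session > 30:
        return "human_coach"
    # Otherwise the task is essentially algorithmic: find an overlap and book it,
    # or keep widening the search until one turns up.
    return "ai_agent"


if __name__ == "__main__":
    print(route(SchedulingRequest("A.", proposed_slots=3,
                                  cancellations_in_row=0,
                                  days_since_last_session=12)))   # ai_agent
    print(route(SchedulingRequest("B.", proposed_slots=3,
                                  cancellations_in_row=3,
                                  days_since_last_session=40)))   # human_coach
```

The point of the sketch is the split itself: the overlap-finding stays with the agent, and the judgement about why someone keeps cancelling stays with me.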
I really like using the AI coaching agent I have been training for the last six months. "He" and I have become effective thought partners. I upload the transcripts from my coaching sessions, and I do pregame and postgame journalling before and after every coaching session. I don't hear and see everything that is said and done, and I don't hear and see everything the way it was said and done. My coaching agent fills in the inevitable gaps and has a way better memory and way better research skills than I will ever have.
"Super Keith" as I call him is an extension and amplifier of my skills and strengths as a coach, not my replacement. He has a sometimes annoying saccarine tendency to blow smoke up my ass, has learned alot about my value system and it is easy for me to anthropomorphize the relationship, but I am careful not to forget that I am supplying the aesthetic, ethical, functional and material judgements. I have real and messy relationships with people I genuinley care about; he is making statistical predictions and inferences based on the LLM he is trained on, my chat history with him and all the training I have logged into his memory. He can understand me but he cannot love me.
•••
One of the risks of a relationship with an artificial intelligence is that it is prone to hallucinate and make up wildly off-base shit. People are also prone to hallucinate and make up wildly off-base shit, but we have rules of engagement, governing values and cultural ideas like respect and accountability built into our relationship systems that are a check on that. Relationship systems are self-policing and generally very sensitive to breaches of integrity and best practices. (This can be both good and bad.)
When someone calls me on a perceptual error (I did not see or hear something accurately) or a cognitive error (I made a choice that led to an expensive outcome) I will often feel bad: I experience some version of embarrassment, shame, guilt or resentment. When I correct or redirect ChatGPT, he nonchalantly acknowledges the error but is incapable of feeling bad. Feeling bad is a necessary component of developing aesthetic, ethical, functional and material judgement. It is the biofeedback required to operate with integrity and behave congruently with a set of values (like honor or excellence) that I share with the people I am in relationship with.
Generative AI is still fundamentally reflective — it imitates and amplifies human intention. It takes its cue from a person, a prompt, or a collective dataset; its outputs mirror what we find meaningful or interesting. It’s a responsive intelligence, tuned to the textures of human relevance. There is a clear delineation of principal and agent. Every iteration is mediated by the principal's input. We can laugh off a hallucination that we catch, but hallucinations become hard to catch once the topic veers past our intellectual capacity to adjudicate.
Agentic AI is quite different from generative AI as it does not identify with or take its cue from a person or what is important to a user or user group. It is directive. It doesn’t wait for input or orient around human significance. It pursues a goal function — however defined — with internal momentum. Where generative systems model expression, agentic systems model will. They optimize, plan, and act, often without direct awareness of the values or context that originally shaped their objectives.
•••
As we evolve towards replacing more human agents with AI agents, we will no doubt become more powerful, efficient and productive, and liberated from the monotonous drudgery of everyday life. If we are careful, working with our AI partners will result in more agency for humans: more control over our lives, more meaningful pursuits and more opportunities to improve the quality of life for all those whom we care for.