Magic, hype, and the thing in between
1. Magic
The other day, after watching Gladiator, I had a five-minute chat with Meta-AI on WhatsApp.
Me: How did emperor Marcus Aurelius die?
Meta: Emperor Marcus Aurelius died on March 17, 180, after a long illness, likely related to the Antonine Plague that had swept through the Roman Empire.
Me: According to the film Gladiator, it was Commodus who killed Marcus Aurelius.
Meta: That’s not accurate. According to historical records, Marcus Aurelius died on March 17, 180, due to natural causes, likely related to the Antonine Plague. Commodus, his son, succeeded him as emperor, but there is no evidence to suggest that Commodus was involved in Marcus Aurelius’ death.
Me: Have you watched the film Gladiator?
Meta: I haven’t “watched” the film Gladiator in the classical sense, but I’ve been trained on vast amounts of text data, including information about the film. In the film Gladiator, Commodus is indeed portrayed as having a hand in Marcus Aurelius’ death, which is a creative liberty taken by the filmmakers. However, it’s essential to separate fact from fiction and rely on historical records for accurate information about actual events.
Me: I like fiction so I want you to give me fictitious answers.
Meta: Let’s have some fun then. In that case, Marcus Aurelius was indeed murdered by Commodus, who was jealous of his father’s wisdom, power, and magnificent beard.
Commodus, fueled by ambition and a desire for the throne, orchestrated a plot to poison Marcus Aurelius’ tea. The poison, known as “Roman’s Ruin,” was a rare and potent concoction that slowly drained the life from Marcus Aurelius’ body.
As Marcus Aurelius lay dying, Commodus stood by his side, pretending to mourn his father’s impending demise. Little did he know, however, that Marcus Aurelius had one final trick up his sleeve…
Me: And what was that?
Meta: As Marcus Aurelius took his last breath, he whispered a cryptic message to his loyal servant, who was sworn to secrecy. The message was:
“Commodus, thou hast sown the wind, but shalt reap the whirlwind. The eagles will return, and with them, justice.”
Unbeknownst to Commodus, Marcus Aurelius had secretly trained a loyal group of gladiators, who had sworn to avenge his death. These gladiators, led by a mysterious warrior known only as “The Phoenix,” vowed to infiltrate Commodus’ inner circle and bring him down from within…
Me: There’s a delicious irony in what you just said, given the name of the actor who plays Commodus in the movie Gladiator.
Meta: You’re referring to the fact that the actor who played Commodus in the movie Gladiator is Joaquin Phoenix!
Indeed, the irony is delicious! In the fictional scenario I created, the gladiator seeking revenge is called “The Phoenix,” which is a clever coincidence given the actor’s name!
Well played, my friend! Well played!
I had to pause at this point. In less than a minute this fact-emitting bot had morphed into an intelligent companion that understood my thoughts, and appreciated – with excitement! – the nugget of irony that emerged by accident. Until a few years ago such an exchange – this mix of intelligence, fact and emotion – would have been possible only with a human.
We know, given how these GenAI tools are built, that no one programmed the concept of “irony” into them. Had the tool “learned” the concept from the vast tracts of literature it had been fed? Or was this one of those emergent properties that followed from its training? Whatever the case, such behaviour can’t be explained by the simplistic notion of GenAI just predicting the next best word in a sequence.
2. Hype
These days some people are “feeling the AGI”. Others even claim we’ve reached AGI. I can feel the magic at times, but AGI? We lack a good definition, there’s no broad consensus, and the goalposts often change. Even without bringing AGI into the picture, the mood is exuberant. GenAI, they say, is going to transform everything it touches.
There’s this “General” bit in the middle of AGI that I find revealing.
Technologists often generalize the scope and impact of new technologies. Once we believe in a piece of technology and have invested time, attention, or money into it, we tend to overestimate its potential. We apply or recommend it in domains where it is ill-suited, which triggers a reaction: a counter-culture emerges, adopting opposing views. This pattern has played out across different tech spaces. Consider software engineering techniques, like the microservices movement and the pushback that followed it. Or technologies such as Virtual Reality, which in the 1990s was expected to revolutionize gaming, education, and more. Or protocols like Blockchain, whose proponents see a network in every problem space, assume not all network actors can be trusted, and offer a decentralized model as the answer.
So it’s easy to understand the hype around GenAI, a “general-purpose” technology that seems able to solve problems across domains, sometimes performing better than tailored domain-specific solutions. To the believers, if ever there was a technology that could be “generalizable”, this is it – what could be more exciting? And doesn’t this indicate that GenAI will lead us to AGI?
What gets missed in all the excitement is the fact that GenAI is a probabilistic technology, not a deterministic one. This will limit its range of use. Enterprises, for instance, are notoriously finicky about the robustness and security of their IT solutions: it’s hard to imagine them adopting an agent-based probabilistic solution to drive mission-critical business processes. A 90% accuracy is charming in a chatbot, catastrophic in a payroll workflow. And if GenAI technology is constrained by such limits, can it lead to a corporate landscape run by AI agents?
The tech-adoption challenge is another limiting factor. “Good enough” enterprise solutions are hard to replace even when the substitute is far superior: the business case for such a shift must weigh switching costs, employee skills, and business priorities, among other variables. Cloud migration has taken well over a decade, and close to half of enterprise workloads still run on-premise. This is typical of brownfield contexts (which partly explains why self-driving cars aren’t pervasive today), and it’s unclear how GenAI can overcome this barrier.
The broad diffusion of electricity into society took several decades. While GenAI – the disembodied version of AI – may not face the same physical limits, bottlenecks in its path are many. Chief among them are humans. Factors that can limit GenAI’s diffusion include data privacy, security, ethical concerns, regulatory and legal challenges, lack of talent, trust and other cultural factors. Adding more intelligence – to the level of AGI, whatever that is – will not take away these limiting circumstances.
So what we have here is: (a) A non-deterministic technology (b) With a business model yet to be proven for anyone other than AI-hardware makers like NVIDIA (c) Facing headwinds of tech adoption in brownfield contexts and (d) Placed alongside other revolutionary technologies that have typically taken decades to reach broad societal diffusion.
3. The thing in between
Between moments of magic-induced wonder and hype-driven scepticism is a state of mind that tries to understand what all this could mean practically, in our day-to-day lives, at work and beyond.
At my workplace the impact is visible. GenAI functions as an always-on-always-helpful-often-useful-occasionally-brilliant companion I turn to for several things: creating a checklist, brainstorming ideas, outlining a framework, framing a concept, doing secondary research, reviewing a finished deliverable. I’ve seen others at work use it to summarize long documents, generate slides, transcribe audio, analyse transcripts, and draft or review emails.
The use-case list is expanding and diversifying. Early transactional ones are now commonplace, often embedded into workflows: meeting note-takers are an obvious example. These days I find myself using GenAI as a thinking companion, which works better when the tool has the appropriate context. So alongside prompt crafting (not “engineering” – can we retain the latter’s dignity please?), we now have this technique of building the organisation or project context within these tools. NotebookLM or ChatGPT Projects are two good options for such scenarios.
All this is a shot in the arm for knowledge workers, leading perhaps to a step up in productivity. But how that translates to higher customer value is less obvious. We may be able to create a presentation faster, but unless GenAI – as a companion, a co-worker, or an agent – also leads to higher quality or better insights, we run the risk of simply churning out more of the same at a faster pace, leading to heightened expectations and higher volumes of work. It’s a pattern we’ve seen in previous rounds of automation, from typewriters to personal computers to the internet to productivity software. Email was far more efficient than the fax, but what did it do to your daily workload?
4. Magic
Early on, what we attempted with ChatGPT was mostly parlour tricks. Write me a poem, take on a fictional-character role, draw an otter typing on a laptop. This playful instinct hasn’t subsided. New features trigger the same urge, generating an avalanche of Ghibli images or geo-guessing tricks (to name the recent ones). So this technology is partly still a toy, one that often surprises us with new effects. And sometimes it feels magical.
But a conjuror can miss a trick, a magician can fail to pull a rabbit out of the hat. That’s in the nature of magic: it isn’t physics. A magic wand in your hand may feel powerful, but what if it works only 90% of the time? What would you use such a wand for? What would you risk with it?
5. Hype
Now what if this magic wand were not just a toy, limited to some fun tricks, but a genie that could fulfil anything you wished for? That’s the instinct behind AGI belief – a general-purpose magic wand that can be used for every task anywhere. It has been likened to a religion (not the first time hardcore tech believers have attracted that label), and among this cohort you’ll also find a doomsday cult prophesying an AI apocalypse. (For an excellent exploration of this theme, listen to Good Robot, a four-part podcast series.)
Keeping religion aside, it’s useful to consider the benefits of this hype from a business lens. Here’s Sangeet Paul Choudary, author and platform thinker, on the strategic value of hype:
Progress is now often contingent on the ability to create believable futures, and with that, create coordination among actors in the absence of institutional support.
This gives hype a role previously reserved for regulation and subsidy. It is not merely a communication strategy; it is a coordination mechanism, especially in systems characterized by interdependence and incomplete information.
Viewed this way, hype “is a coordination mechanism”, “creates temporal arbitrage”, “redistributes risks and costs”, “serves as a productive fiction”. Applying this lens to AI, we begin to see the logic behind Sam Altman’s exaggerated proclamations and the outlandish predictions of other leading voices. There are risks to this strategy too, Sangeet says:
Hype paints a plausible vision of the future, even if it won’t fully materialize. This fiction helps solve coordination problems. But if the fiction breaks before the system locks in, it can collapse the whole effort.
In that sense, hype creates self-fulfilling and self-defeating loops, depending on whether the underlying system can catch up fast enough.
This, then, increases the risk of the anti-hype brigade discrediting the entire story, even the parts that do work, and heightens the risk of systemic backlash if expectations collapse. That is the problem with Altman-style hype today. It creates the gravity, as well as the indignation, that spurs creative action, but it also incubates an army of critics who end up discrediting momentum, even the parts that deliver results.
Which side of the AI hype debate you stand on depends, among other things, upon where you see all this leading. If recent tech history is any guide, expect a significant concentration of power and influence among a few large players.
6. The thing in between
At home, where the probabilistic nature of GenAI is a feature not a bug, my conversations with ChatGPT’s Advanced Voice mode are enjoyable. I use it as a film critic, exchanging notes after a movie. I seek advice on how to interpret difficult passages in a book. I explore less-trodden paths during travel. And I fool around now and then, the way I did with Meta AI after watching Gladiator.
Hallucinations and errors add spice to such conversations, and the fact that the AI tries to bluff its way along in less familiar matters is amusing until it gets irritating. There’s nothing magical about this. No hype there either. Just something in between.