You’d be forgiven for thinking that the most significant global event of the millennium had occurred last week, as the news feeds were choked with speculation and commentary on Sam Altman’s unexpected sacking from probably the highest-profile job in the world of AI, followed swiftly by his reinstatement and the consequent humiliation of those who had engineered the coup.
I can’t add anything to the frenzied and often wild theorising about why it happened, let alone bring any sense to the febrile pre-reinstatement predictions of the demise of civilisation on Earth as we know it. If only Elon had started SpaceX while still in infant school, perhaps a lucky few could have escaped to start a safer, more virtuous new world order on Mars.
It would not have been worth my while contributing to what I suspect is ultimately a moot debate, nor worth your time reading what I had to say on the subject, since I can only speculate as well as any other commentator given the same sketchy information. And while my pithy observations would surely be insightful, somebody somewhere would likely be saying something very similar. That wouldn’t move us forward at all.
Focus on the Important Stuff
Why do I say this? Because the pantomime we've witnessed this week eclipsed a much more important debate that the grown-ups need to continue having about the future of AI: how it’s developed, who develops and owns it, and who uses it and for what purposes. These issues should, and must, transcend any individual or any one company, albeit those individuals and companies will be overly influential if they are not themselves required to play by a universally agreed set of ‘rules’. Whether OpenAI’s reported breakthrough, Q*, lives up to scrutiny probably matters little at this stage, although if it does turn out to be a genuine step-change that takes us closer to AGI then it will have profound implications.
I have no academic background in data science or AI, and despite my borderline nerdy geekiness on these subjects I do run out of intellectual horsepower when things get too technical (or mathematical). I have, however, worked in the tech world for the last three decades and in that time have seen astonishing claims and promises made about the latest technology, only to find that the reality didn’t live up to the hype. That said, I fully accept that AI is different, to the extent that it has the potential to outstrip human intellectual capability within most of our lifetimes and to affect us more profoundly than any other technology in history. What a time to be alive indeed.
A Dose of Healthy Scepticism
So what causes me to suggest it’s time to pop on our monocle of scepticism and stop pretending we are seeing, or conditioning ourselves to expect to see, order-of-magnitude advances in AI technology on a weekly basis? That level of unrealistic expectation can cause us to turn a blind eye to well-documented shortcomings. We can choose to ignore issues such as hallucinations in LLMs all we like, but there is little point denying their existence. In fact, we might have to face the reality that the design philosophy underpinning current LLMs has reached its limit, and that a new paradigm, as yet unconceived, will be needed to take us beyond those inherent limitations.
It’s a truism that most eternal optimists have a pathological aversion to pessimism and naturally find it difficult to be sufficiently rigorous in situations calling for truly objective, critical thinking. It’s easy to hide behind the exception that disproves the rule; there are many examples where an almost blind faith in a concept that looked hopeless resulted in incredible progress and success. For me, the strength of the lens of pessimism we choose ought to be proportional to what’s at stake. And for the eight billion of us fortunate enough to be alive today, the stakes could be existential. We had better go for a pretty powerful one, at least for now.
“So”, I hear you say, “you must be one of those zealous subscribers to the virtually cult-like effective altruism movement”. Sorry to disappoint, but no. Cautious, yes. Cognisant of the inherent danger posed by unfettered commercialisation (or totalitarianism in the case of certain nation states), of course. These are, however, not sufficient reasons to slam on the brakes. And frankly, we’re currently behind the wheel of the most high-performance car yet built, one that has not been fully road-tested in the real world, with only a provisional driving licence, perhaps even slightly tipsy, on a road we’ve never driven before, at night, in the rain. To make things worse, the designers of this hypercar have fitted it with brakes made of cheese. Slowing the car down will be difficult enough; doing an emergency stop isn’t an option.
Let’s Choose AI For Good
I'm firmly of the view that AI can and will propel humanity forward in spectacular ways that were pure science fiction less than 50 years ago and are now beyond what most of us can even begin to imagine. We’ve become accustomed, almost to the point of nonchalance, to breakthroughs in the prevention and treatment of diseases that previously killed or affected millions of people. With AI harnessed for good, the pace of advancement will make current medical discoveries and treatments look sluggish and mediaeval.
There has been much debate about the mass extinction of jobs in white-collar professions that have so far been largely untouched by automation, and there will surely be casualties in the fullness of time. What I actually believe, though, is that if a genuinely existential threat comes to pass, it is the billions of people who are already the most economically disadvantaged who are most likely to bear the brunt.
If we (humanity as a whole) tread a very careful path that enables us to maximise the immense value AI promises without succumbing to self-inflicted, mutually assured destruction, we can look forward to a reasonably peaceful next few hundred thousand years as a species. Who knows, if AGI can work out how to halt or reverse ageing, some of us might even still be around in some form. My sense is that the path we need to tread could be as wide as a motorway or as precarious as a tightrope. We are grown-ups, or should at least pretend to be while the great AI debate rumbles on, and the outcome is still in our gift. If we choose unwisely, our demise won’t be the fault of AI; it’ll be entirely of our own making.