All Hail the Humans
There's so much AI hype going around these days that it's hard to figure out what's real and what's noise. The right thing to do is to bet on humans.
There’s much fear and confusion in AI these days. We’ve got developers harnessing multi-agent swarms: collective societies of agents fanning out to define and refine their work. Agents are now running ML experiments. Articles speculate about how AI agents are going to start improving themselves. It’s hard to imagine a world where there isn’t going to be a robot uprising. All of this can stir up a special kind of anxiety and fear about some new world order where we’re rendered obsolete.
When I read the “AI as Normal Technology” paper by Arvind Narayanan and Sayash Kapoor, researchers at Princeton, it all made sense. I’m going to do a quick recap of what their paper says, and reassure you that it’ll be okay. Really.
Tech’s adoption and impact is slowed by societal “speed bumps”. This is no different for AI.
The reason I’m so confident in saying this is that technology doesn’t develop in a vacuum. The dissemination, usage, application, and improvement of technology happen in the context of socio-technical systems, whose processes are governed by human institutions and have their own “speed limits”. Essentially, the authors argue that the “fast” view of AGI (a superintelligent species or rapid human extinction) is unlikely to happen because of these socio-technical limits. It’s not as if an AGI would just appear and humans would become fully AI-pilled and surrender completely to it. The authors note:
…the speed of diffusion is inherently limited by the speed at which not only individuals, but also organizations and institutions, can adapt to technology. This is a trend that we have also seen for past general-purpose technologies: Diffusion occurs over decades, not years.
The authors argue that a better modern-day analogy is electrification. They cite analysis from Paul A. David (“The Dynamo and the Computer: An Historical Perspective on the Modern Productivity Paradox”), who argued that even after electricity was invented and the infrastructure built, it took decades for electrification to be utilized in a way that dramatically affected productivity:
What eventually allowed gains to be realized was redesigning the entire layout of factories around the logic of production lines. In addition to changes to factory architecture, diffusion also required changes to workplace organization and process control, which could only be developed through experimentation across industries. Workers had more autonomy and flexibility as a result of the changes, which also necessitated different hiring and training practices.
Thus we see that there are practical limits to the adoption and transformative effects of new technology, and that adoption, with its society-altering effects on a grand scale, will be slow.
Humans remain in control of AI systems and are unlikely to delegate control.
But isn’t there a chance that this time it’s different? That AI superintelligence could break out of a lab like a computer virus and start to wreak havoc on society? After all, aren’t there researchers broadly and deeply concerned about misalignment risks?
There’s that old AI tale about an AI instructed to make as many paperclips as possible. In the misaligned scenario, the AI takes over the world, converting everything into paperclip factories, deems humanity a risk to its goal, and destroys us.
The authors’ response:
Misalignment concerns often presume that AI systems will operate autonomously, making high-stakes decisions without human oversight. But as we argued in Part II, human control will remain central to AI deployment. Existing institutional controls around consequential decisions—from financial controls to safety regulations—create multiple layers of protection against catastrophic misalignment.
Humans are the ones building or designing products; humans are the ones who decide how they’re deployed, what safeguards to deploy around them, how much or how little responsibility they should take on, how their outputs are parsed and used, and whether or not they should be automated and deployed into the wild. We are very unlikely to ever see the total delegation of decision making to an AI system.
AI is in service of a human; AI is in service to humanity. Humans use AI as a tool, and if there’s anything to be reassured of in this day and age, it’s that human creativity and oversight will durably sit above the machines.
This is actually a subtle, and perhaps unsettling, insight. The true risks and choke points lie in the providers and organizations who run the services, develop the models, and license their technology for whatever purpose. What are their values? What are their motivations? These questions matter more than worrying about what the machines are doing.
Disruption in the software industry
Circling back to the first point - if you’ve been watching the layoffs at Block and Oracle and (likely) Meta, you’re probably still just as discomfited as you were before reading this. After all, it doesn’t matter whether AGI or superintelligence is here (or not). The AI hype bubble is disrupting our industry, especially software.
I know I started this article by telling you that “it’s going to be okay”. The general anxiety around how AI will disrupt our industry is very real, and the disruption is certainly real for roles where the work is rote and easily automated.
The best thing to do is to understand the technology. Build product taste and exercise technical judgement. Be shrewd enough to learn from the new technology: how to harness it and how it works under the hood. But don’t delegate your thinking or your skills away.
Can I guarantee that mass layoffs aren’t around the corner? No, I can’t predict that. The sheer variability and noise in the world today is incredible. But the best thing we can do is lean in, focus on what we can control, and bet on ourselves. We humans are going to figure it out.