I remain incredibly bullish on the future of AI and its impact on the world and humanity. It's not without risk, but the upsides are tremendous. Here's my short version of the case for being optimistic about AI's ultimate impact.
1) The doomers are overstating the case.
We've all seen The Terminator. I would be wary of giving an AI control over life-or-death decisions, but we do not seem to be close to either (a) handing control of vital infrastructure or weapons to AIs or (b) AIs being capable enough to seize control for themselves.
That isn't to say there aren't concerns about AI; there are plenty. But I tend to think they are manageable. We've weathered major technological transitions before, and while this one may be different, our starting assumption should be that it will look mostly like previous technology revolutions: disruptive, but manageable.
2) The upsides are more easily missed.
If the doomers are over-selling the risks, I think many people under-appreciate the upsides. I'm most excited not about LLMs like ChatGPT, but about what domain-specific models are doing across science, medicine, and materials science. AlphaFold, for example, predicted protein structures that had stumped scientists for decades, and DeepMind's GNoME recently identified over two million new crystal structures that could transform batteries and semiconductors. The opportunity to accelerate discovery and innovation across scientific domains is immense, and still largely under-appreciated.
3) BUT we need to get the big decisions right.
So: I think the downsides are overplayed and the upsides underplayed. But both require smart decision-making and policies. That means preventing truly bad outcomes (think making bio-terrorism easier) without stopping the positive ones (like faster treatments and cures for diseases, or better materials for sustainable building).
We need regulation and policies in the Goldilocks zone: not too little, but not too much. That balance is achievable, though the path to it is rarely straight. I'm cautiously optimistic that we'll find sensible policy frameworks over time. In the end, my biggest uncertainty isn't about AI's technical trajectory; it's about whether we humans can navigate the tradeoffs wisely enough to harness its potential without choking off innovation.