Data Centers in Space

It seems there are more and more discussions of the possibility of putting data centers in space. At first the idea seems crazy. I mean - why launch a bunch of computers into space on a rocket instead of just, you know, plugging them in here on Earth? But the more you think about it, the more sense it starts to make.

Data centers are challenging because they need physical space, energy, and cooling. It turns out that all three are easier in space! The cold of deep space provides natural (radiative) cooling, solar power is far more efficient when collected in orbit (and the satellites can stay pointed at the sun 24 hours a day), and there is lots of, well, space in space.
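
To see how big the solar advantage might be, here's a rough back-of-envelope sketch. The constants are my own illustrative assumptions (a standard solar-constant figure and a typical ground capacity factor), not numbers from any specific proposal:

```python
# Back-of-envelope: solar energy per square meter of panel,
# continuous orbital sunlight vs. a typical ground site.
# All constants below are rough, illustrative assumptions.

SOLAR_CONSTANT_W_M2 = 1361      # irradiance above the atmosphere
GROUND_PEAK_W_M2 = 1000         # typical clear-sky peak at the surface
GROUND_CAPACITY_FACTOR = 0.20   # assumed average for a good ground site
                                # (night, weather, sun angle included)

orbit_kwh_day = SOLAR_CONSTANT_W_M2 * 24 / 1000
ground_kwh_day = GROUND_PEAK_W_M2 * GROUND_CAPACITY_FACTOR * 24 / 1000

print(f"Orbit:  {orbit_kwh_day:.1f} kWh/m^2/day")        # ~32.7
print(f"Ground: {ground_kwh_day:.1f} kWh/m^2/day")       # ~4.8
print(f"Ratio:  {orbit_kwh_day / ground_kwh_day:.1f}x")  # ~6.8x
```

Under those assumptions, a panel in continuous sunlight collects roughly 6-7x the energy of the same panel on the ground - before you even account for transmission losses or land costs.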

I’m not sure that this will turn out to be the best way to operate data centers, but I love seeing new ideas being tested and real innovation in the space. And as a sci-fi nerd, a data center in orbit just sounds cool!

AI Can’t Do Your Job (Yet)

One of the reasons I’m less worried about AI models leading to mass unemployment is that I don’t think the models are nearly ready to take over actual jobs. All of our benchmarks look at small, isolated tasks and assume that getting better at these makes the models more capable of doing whole jobs. I disagree. And now there is some evidence to support that view.

Scale AI and the Center for AI Safety just released their Remote Labor Index. It used freelance projects on Upwork as a proxy for more realistic job requirements and tested leading models to see whether they could complete these projects at a level a client would be satisfied with. Most models succeeded less than 2% of the time. That’s pretty different from the benchmark results we see for specific tasks or skills.

And I think that represents the best-case scenario. Most jobs aren’t nearly as short-term as Upwork freelance projects - and don’t have such neatly defined inputs and expected outputs. That neatness helps the team measure the concept - but it also means that 2% is more like an upper bound for the work most readily tackled by AI. The number would be far lower for real-world jobs.

If you’re interested in a more software engineering take on this argument, check out the recent post A Project Is Not a Bundle of Tasks by Steve Newman.

Winning the Wrong Race

We would do well to focus as much on the diffusion of AI technology as on its development.

Every time a new model comes out, there is a debate about how much smarter it is than previous models. Developers cite all sorts of tests and metrics to evaluate each model’s capabilities. And while we can argue about whether advances in capability are slowing down or not, it’s clear that we have made remarkable progress.

It seems clear at this point that we are engaged in a race of sorts with China around AI. And, like the space race, I think it can be an incredibly powerful motivator - and I think it’s important to the future of democracy and capitalism that we win. Which is why I’m worried we’re running the wrong race.

Much of our discussion about the AI race centers on either investment in infrastructure or development of the most capable models - measured in model capabilities or data center capacity. And this is an important part of the race. There is a good case to be made that we are largely winning in this area (though I would love to see the competition spur greater investment in energy infrastructure in the US).

But it misses a critical component: the diffusion of AI technologies - how broadly we are actually taking advantage of these capabilities. It is far less clear that we are winning this part of the race. Laws and regulations in the US often prevent us from leveraging many AI (or even just basic ML) technologies in critical areas like healthcare, financial services, and education. Having the best models is only useful if you can take advantage of them where it matters most.

So, as we debate how to make sure the most powerful AIs are made in America, and how to regulate AI so it can be developed and deployed safely, we also need to think about how we enable broad, reasonable deployment of these technologies - so that we can actually benefit from these amazing inventions.

What is AGI Anyway?

Despite all the debate around Artificial General Intelligence (AGI), the concept itself feels misguided. It doesn’t reflect how AI models are actually evolving.

AI systems have jagged edges — brilliant in some areas, clueless in others. They can ace science Olympiads yet miss how many r’s are in “blueberry.”
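For the record, the right answer is the kind of thing a one-line script gets right every time (a throwaway check of my own, not from any benchmark):

```python
# Counting letters is trivial for ordinary code,
# even when language models famously stumble on it.
word = "blueberry"
print(word.count("r"))  # -> 2
```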

So are they geniuses or idiots? Both. That’s why the AGI discussion misses the point. A model that makes silly mistakes can still be transformative when used in the right context with the right guardrails. We don’t need “general intelligence” to get extraordinary value.

We should stop fixating on when AI will become “general” or “super.” What matters is what these models can do, where they fail, and how we build systems that amplify their strengths while containing their flaws.


The Case for AI Optimism

I remain incredibly bullish on the future of AI and its impact on the world and humanity. It’s not without risk - but the upsides are tremendous. Here’s my short version of the case for optimism about AI’s ultimate impact.

1) The doomers are overstating the case.

We’ve all seen The Terminator. I would be wary of giving an AI control over life-or-death decisions - but we do not seem to be close to either (a) handing control of vital infrastructure or weapons to AIs or (b) AIs being capable enough to seize control themselves.

That isn’t to say that there aren’t concerns about AI - there are plenty. But I tend to think they are manageable. We’ve managed major technological transitions before, and while this one may be different, our starting prediction should be that it will look mostly like previous major technology revolutions. Disruptive, but manageable.

2) The upsides are more easily missed.

If the doomers are over-selling the risks, I think many people under-appreciate the upsides. I’m most excited not about LLMs like ChatGPT, but about what domain-specific models are doing across science, medicine, and materials science. AlphaFold, for example, solved protein structures that had stumped scientists for decades, and DeepMind’s GNoME recently identified over two million new crystal materials that could transform batteries and semiconductors. The opportunity to accelerate discoveries and innovations across scientific domains is immense - and still largely under-appreciated.

3) BUT, we need to get the big decisions right.

So, I think the downsides are overplayed and the upsides underplayed. But both require smart decision-making and policies. That means balancing the desire to prevent truly bad outcomes (think making bio-terrorism easier) against not blocking the positive ones (like making it faster to find treatments and cures for diseases, or better materials for building sustainably).

We need regulation and policies in the Goldilocks zone - not too little, but not too much. That balance is achievable, though getting there will rarely be a straight line. I’m cautiously optimistic that we’ll find sensible policy frameworks over time. In the end, my biggest uncertainty isn’t about AI’s technical trajectory - it’s about whether we humans can navigate the tradeoffs wisely enough to harness its potential without choking off innovation.


ChatGPT Atlas - Initial Review

I promised my quick review, so here it is. I like it, and it has become my primary browser. Though, given how much I like new toys, perhaps the better test is whether it still is in 3-4 weeks.

What do I like? Here are my top few items.

1) ChatGPT as the default search as soon as you open a new tab is great.

2) The new set of options to switch to web, image, video, or news results is also great. This is now a real, functional Google replacement.

3) The sidebar is very useful. Having access to ChatGPT beside any web page is a game changer.

4) AgentMode with access to my browser (so it’s already logged into all of my sites/services) is much more useful. I’ve already used it many more times than the original version.

5) Cursor Chat - the ability to select text in any input field (like a Google Doc) and pull up ChatGPT inline - seems great, but I haven’t really used it yet.

6) The UX just feels simple, fresh, and sort of delightful. Definitely a nice change of pace from Chrome. Another example of OpenAI just having great product sense.

7) It seems like they are iterating quickly. There are already announcements of new features (e.g., opening a Project in the sidebar) and some fixes (I can’t wait for the 1Password extension to work properly with the app). Seems like a good sign.

So, that’s the quick take. Very little I’m not enjoying so far. It’s been a nice replacement for Comet and Chrome for me.


If you’ve got other takes - or questions for me! - I’d love to hear them.

Alpha School - AI in Education

About the only thing I’m sure of when it comes to AI in education is that it will be a very big deal. In many ways, it already is. Many teachers report rampant use of ChatGPT to complete assignments. But what the right answer is for how AI should shape education - that’s much harder to say.

With schools taking every approach from teaching students ChatGPT skills to banning all AI tools from campus, I most enjoy watching the places trying to truly re-imagine our education system for a world with AI.

In particular, I’ve loved learning about Alpha School. Just imagine: customized lesson plans that let students excel academically in just two hours of focused time per day - with the rest of the day dedicated to learning important life skills.

If you haven’t listened to the episode of the Invest Like the Best podcast with Joe Liemandt, you really should. It’s an amazing story, and Joe is a fantastic storyteller.

You may not agree with Alpha School’s approach. They may not get it right. But they will certainly help us iterate our way to the best path forward more quickly than almost anyone else.

AI Skepticism vs Optimism

I was a bit disappointed to see this survey from Pew showing that more people are concerned than excited about the impacts of AI. I was even more disappointed to see that the US public shows one of the most extreme imbalances on this question.

I’m obviously an AI optimist - I see so much upside that it’s hard not to be excited. Perhaps people fixate on LLMs and the potential to replace white-collar jobs. But the opportunity to expand our capacity in areas from healthcare to education to materials science - and to improve human flourishing - is dramatic.

I’m concerned about the risks here too. I just wish the public were more balanced in their take - and more excited about the opportunities. I guess AI optimists like myself have more work to do to convince the broader public.

Social Media, Revealed Preferences, and Discipline

Matt Yglesias reminded me of an awkward truth about today’s “social” media: most of it isn’t social. Early feeds were friends sharing to a small circle. Now every app is a short‑form video firehose tuned for entertainment. That’s closer to TV than to the original idea of social networks.

One take is the revealed‑preferences argument: if people keep watching reels, that must be what we truly want. Platforms are just serving demand; don’t blame them for our scrolling. There’s some truth there.

But it’s incomplete. Humans routinely fail to live up to stated preferences. I want to lose a few pounds, get stronger, and study more. My daily behavior doesn’t always match those goals. That gap isn’t “real preferences revealed.” It’s immediate gratification beating long‑term aims. Call it willpower, self‑control, or just being human, but it’s not a truer desire.

Endless feeds exploit that gap. They are the potato chips on the counter: engineered, omnipresent, easy to grab, hard to stop. Snack foods reveal very little about our values and a lot about our impulses. Same with autoplaying video.

So when we debate revealed preferences, we should separate choice from choice architecture. Platforms optimize for engagement, not our well‑being. If the bowl is always full and always in reach, chips will “win” more than we want them to. That doesn’t mean chips are what we most desire. It means the environment was designed so our short‑term selves win the vote.

Revealed preferences are useful. They’re just not sacred—especially when the system is built to reveal our weakest ones.