Alpha School - AI in Education

About the only thing I’m sure about in terms of AI in Education is that it will be a very big deal. In many ways, it already is. Many teachers report rampant usage of ChatGPT to complete assignments. But what the right answer is for how AI should shape education, that’s much harder to say.

With schools taking every approach from teaching students ChatGPT skills to trying to ban all AI tools from their campuses, I most enjoy watching the places trying to truly re-imagine our education system for a world with AI.

In particular, I’ve loved learning about Alpha School. Just imagine: customized lesson plans that allow students to excel academically in just two hours of focused time per day, with the rest of the day dedicated to learning important life skills.

If you haven’t listened to the episode of the Invest Like the Best podcast with Joe Liemandt, you really should. It’s an amazing story, and Joe is a fantastic storyteller.

You may not agree with Alpha School’s approach. They may not get it right. But they will certainly help us iterate our way to the best path forward more quickly than almost anyone else.

AI Skepticism vs Optimism

I was a bit disappointed to see this survey from Pew showing that more people are concerned than excited about the impacts of AI. I was even more disappointed to see that the US public shows one of the most extreme imbalances on this question.

I’m obviously an AI optimist - I see so much upside it’s hard not to be excited. Perhaps people focus on LLMs and the potential to replace white-collar jobs. But the opportunity to expand our capacity in areas from healthcare to education to materials science, and to improve human flourishing, is dramatic.

I’m concerned about the risks here too. I just wish the public took a more balanced view - and were more excited about the opportunities. I guess AI optimists like myself have more work to do to convince the broader public.

Social Media, Revealed Preferences, and Discipline

Matt Yglesias reminded me of an awkward truth about today’s “social” media: most of it isn’t social. Early feeds were friends sharing to a small circle. Now every app is a short‑form video firehose tuned for entertainment. That’s closer to TV than to the original idea of social networks.

One take is the revealed‑preferences argument: if people keep watching reels, that must be what we truly want. Platforms are just serving demand; don’t blame them for our scrolling. There’s some truth there.

But it’s incomplete. Humans routinely fail to live up to stated preferences. I want to lose a few pounds, get stronger, and study more. My daily behavior doesn’t always match those goals. That gap isn’t “real preferences revealed.” It’s immediate gratification beating long‑term aims. Call it willpower, self‑control, or just being human, but it’s not a truer desire.

Endless feeds exploit that gap. They are the potato chips on the counter: engineered, omnipresent, easy to grab, hard to stop. Snack foods reveal very little about our values and a lot about our impulses. Same with autoplaying video.

So when we debate revealed preferences, we should separate choice from choice architecture. Platforms optimize for engagement, not our well‑being. If the bowl is always full and always in reach, chips will “win” more than we want them to. That doesn’t mean chips are what we most desire. It means the environment was designed so our short‑term selves win the vote.

Revealed preferences are useful. They’re just not sacred—especially when the system is built to reveal our weakest ones.