April 2026: A Country of Savants in a Data Center
my takes on AGI, the “permanent underclass”, and what it means for careers
In my role as head of product at a seed-stage startup, I’m obligated to stay up-to-date on AI news. We’re an “AI-native” startup, both in how we use AI to build and in that the end product is AI. I use AI daily (even outside work), and every time a new model gets released, I try to suss out where things are headed.
Despite sensationalist headlines about mass unemployment and AGI on the horizon, I’m not worried.
Let me set the record straight (we can see in 5 years if I was right or if we’re all unemployed living off UBI or if some superintelligent AI wipes us all off planet earth):
I don’t think we’ll see mass unemployment beyond large corporations (mostly publicly traded companies) laying off workers to juice their stock price and appease shareholders. Although more layoffs will likely occur, people will find new jobs (which is a critical distinction from the fear-mongering narrative of never being able to find a job again).
No one can agree on a definition for AGI (artificial general intelligence). People clump together smarter-than-human, substitutable-for-human, and recursively self-improving. The first two definitions are already here. LLMs are already smarter than humans. They’ve passed the Turing test. When comparing outputs, LLMs are indistinguishable from humans, and in some cases, even preferred.
When it comes to the third flavor, an AI that recursively self-improves without limit (also known as artificial superintelligence), that’s where I struggle. There are two outlooks here. One where AI replaces all human workers, we face mass unemployment, and we need things like UBI or AI systems to allocate resources to us second-class citizens. The other, scarier vision is where AI becomes so smart that it comes up with its own goals, like ensuring its own survival through any means necessary. It would then exploit humans, hack into the energy grid and other critical systems, and wreak havoc on society through bioweapons, financial manipulation, or swarms of killer drones. Neither will happen.
There are big capital, energy, and hardware constraints on the AI industry’s progress, but let’s set those aside for now. In simple terms, LLMs predict tokens. This technology doesn’t think for itself; it imitates. These models are trained on large amounts of information, largely text from the internet. Text is a compressed version of reality. Even if you added images and video, it still wouldn’t be an accurate representation. It would be skewed and incomplete, like how ChatGPT has been using the word “goblin” way too much. The corpus of knowledge loaded into LLMs is of a particular type: the type that can be formalized, codified, and articulated. What’s missing is tacit knowledge, the kind rooted in experience that’s difficult to transfer to other people. Watching YouTube tutorials on how to ride a bike doesn’t guarantee taking off the training wheels. Consuming travel vlogs is incomparable to the actual experience of being there.
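To make “LLMs predict tokens” concrete, here’s a toy sketch: a word-level bigram model that imitates its training text by returning the most frequent continuation. This is a deliberate oversimplification, not how production models work; real LLMs use neural networks over subword tokens, but the training objective, predicting the next token, is the same in spirit.

```python
from collections import Counter, defaultdict

# Toy illustration of next-token prediction: count which word follows
# which in the training text, then "predict" by picking the most
# frequent continuation. The model can only imitate what it has seen.

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat"/"fish" once each
```

Ask it about a word it never saw in training and it has nothing to say, which is the tacit-knowledge gap in miniature.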
I suspect the folks who claim that AI will replace humans start from a place of overvaluing explicit knowledge over tacit knowledge. Made up of brainy programmers, math nerds, and spelling bee champions, this crowd sees how AI has already surpassed what were once their crowning achievements. I bet if you asked the best teachers, chefs, nurses, salespeople, therapists, or recruiters whether AI could replace them, you’d get a different response.
For decades, the most valuable workers were the ones who knew the most. Going forward, it’ll be the ones who can read a room, hold a hand, earn trust, and solve real problems. Being the smartest person in the room is about to matter a lot less than being the most human one.
The “permanent underclass” meme is flawed. First of all, there’s already an underclass: just look at the stats on wealth inequality, student loans, medical bills, and housing costs. Over 25% of Americans are living paycheck to paycheck. When I visualize someone worried about becoming part of the permanent underclass, I see a young knowledge worker living in a major city. They drink $7 oat milk lattes ($6, but the cafe charges $1 for non-dairy milk) and know what “p(doom)” means. They’re not a farmer, truck driver, or grocery store cashier. I’ll call a spade a spade: these fears are more about losing a high-paying job (and the clout that comes with it) than actual societal collapse. It’s like Regina George in Mean Girls worrying about becoming an average girl. In other words, part of the majority.
While it’s entirely possible that the number of current knowledge-work jobs will decrease in roles largely made up of tasks that AI can do, that doesn’t equate to the dark forecast of the underclass. New categories of work will emerge, just like how Twitch streamer wasn’t a thing 10 years ago. Small existing categories will expand. More people will start their own businesses because AI is good enough at legal, accounting, copywriting, graphic design, admin, and data analysis that you don’t need to hire dedicated people for those functions (while you’re still small). Categories of work previously deemed “low status” will rise in value. On runs along Bryant Street in Palo Alto, I’ve had the thought that none of these homeowners (starting price is ~$3.5M) know how to unclog a sink or patch a drywall hole. Plumbers, carpenters, and especially electricians (climate change + data centers) will always have work.
It’s odd to me that we idolize content creators, craftspeople, and athletes who all make a living without needing AI (they might still use it), yet we fear what we’d do if we lost our jobs. I don’t have TikTok and I deactivated my Instagram, but I’ve seen enough posts about corporate workers (they refer to themselves as “corporate girlies”) quitting their jobs and then bopping over to Bali for yoga or pilates teacher training. (I personally know three women who have done some version of this.) This suggests a mismatch between what the market values and what we value. We complain and burn out in W-2 mode, then leap into reskilling ourselves as fitness and wellness coaches the second we’re free. The catch is that many of us aren’t willing to accept a lower income.
My final beef with the “permanent underclass” is the first word: “permanent”. I quit my job as a product manager at the end of 2022, when ChatGPT had just come out, and didn’t start my new job until nearly three years later, a month before Claude Opus 4.6 came out and accelerated things. I never worried about falling behind on AI because for most of that time I wasn’t even thinking about getting a tech job again. But even if I were, the notion of “falling behind” deserves further inspection, like panning for gold to separate what’s valuable from what’s metaphorical dirt. It’s true that by not engaging with AI, you don’t get the hands-on experience of learning how to prompt, build agents, and automate workflows. I do think it’s worth using Claude Code because it’s a useful tool, just like how investors learn Excel, software engineers use GitHub, and designers immerse themselves in Figma.
Falling behind doesn’t mean you can’t catch up. It didn’t take long for me to become proficient with AI, both during the interview process and once I started working again. As a thought experiment, imagine you delayed using the internet by five years once it came out and did the same with smartphones and computers. If you were in a career where these technologies dramatically changed how you did your job (most knowledge work), it would be quite the shock. But it wouldn’t mean being stuck in a deep hole with no way out. You could ask your friends for help, hire a tutor, or learn from books. You might suffer from mild embarrassment like a grown adult learning how to use chopsticks at sushi or me struggling to ride a bike without training wheels at 14. But eventually, you’d be just fine.
In Machines of Loving Grace, Anthropic CEO Dario Amodei refers to the future state of AI as “a country of geniuses in a data center.” First, what a fire phrase. I wish I came up with it myself. Second, I think it’s completely inaccurate.
It’s difficult to describe how smart LLMs are. They’re smart in some ways, dumb in others. Ask one to solve a tough math problem, code you a new website, or explain a scientific principle, and you’ll be wowed. But there are glaring gaps. Even the most powerful model in thinking mode still seems like a try-hard know-it-all with the overeager personality of a teacher’s pet.
LLMs are trained on a large corpus of information (i.e., the entire internet, plus all the rare books Anthropic tore apart). They don’t actually experience what they’re talking about. So if I were chatting with an LLM about what it feels like to ski a steep powder line and all that comes with it (adrenaline, stoke, flow state, satisfaction, fulfillment), it would reflect it back like some kid who’s watched every single Red Bull ski video and read every book about skiing but has never actually been on a lift himself.
Instead of a genius, or an entire country of geniuses, I think of LLMs more like savants. Rather than being amazing at everything, savants exhibit “islands of genius”: extraordinary capabilities in specific, limited fields. Human savants may impress us with photographic memory or naming the day of the week for any date, but they also have limitations. They struggle with social cues, verbal communication, and basic daily activities.
This framing is subtle, but shifts how I view the technology. Instead of worrying about being fully replaced, I see opportunities to delegate to AI, but am also reminded of my qualities, strengths, and gifts that AI will never be able to replace. Instead of contemplating a possible dark future where we’re all bowing down to AI, I can live in the present. Rather than avoid AI out of fear, I can keep chugging along with Claude Code, figuring out where it moves the needle, but also where it doesn’t.
AI is a powerful technology that will reshape how we work, but it’s just a tool. And like all tools, we must learn how to use them.
I think of the future through two questions. First, how will AI change the way we live? Second, how will AI not change the way we live? Feel free to tack on “how we work” or “how we play” to “how we live” if you want. Most people (at least from living in the Bay Area bubble) are thinking through the first lens, but I actually find the second more interesting.
On a random weeknight last month, my girlfriend and I drove two miles to downtown Palo Alto for ice cream. It wasn’t even a particularly warm evening, but there was a long line outside Salt & Straw. Typical. As the line shrank, grew, and moved along, I couldn’t help but think that this will never change. Unless the world actually ends (which I’ve already established is not going to happen), we will continue to line up for ice cream. Unless we all move to Antarctica or somehow evolve to have crazy lactose intolerance (in which case load up on Lactaid), ice cream is going to be around.
This is the Lindy effect: the longer something has already survived, the longer it is likely to keep surviving.
So what won’t change?
We’ll keep sharing stories over delicious meals with friends and family. Storytelling has been around as long as mankind. Everyone wants to be part of something bigger than themselves. Whether it’s a village, community, religion, sports team, or company, we all want to belong. We want to be part of the in-group, which means there has to be an out-group. Polarization in politics is a negative example, but just look at how fervently we root for our favorite sports teams. And once basic needs are covered, we find ever more creative ways to meet them: there will always be the next food craze or the new workout trend (Shake Weight in 2010, pilates now). Humans will always listen and dance to music, laugh at jokes told by friends or comedians, and gather around fires to stare at the starry sky.
AI will transform how we work, but the pace at which it reshapes how we live is slower than the headlines imply. While the pundits predict when AGI will arrive and what it means for humanity, I’ll be waiting in line for ice cream.
Other Tidbits from April
On roughly the tenth time hanging out at my place, my friend Geo finally asked me for the wifi password. Across all the past dinner parties, board game nights, and poker games, we never once needed to be online. I like that my friends rarely need my wifi password.
During this month’s Free Agents call, my friend Julia said something like “All my friends in the AI space want to make a lot of money, but I kind of just want to bake bread and do crafts,” elaborating that her friends have big wants and she has small wants. I resonate with this. My version of small wants includes playing outdoors (skiing, surfing, hiking) and yummy meals with great company. The quest for the highest salary or the fanciest car has never seemed worth being the main quest. Experiencing the big-money life via business trips, or witnessing it in others, has shown me its limitations. But it’s not easy to stay grounded during volatile times. When AI researchers, online creators, and prediction market traders can make millions, it’s easy to get sucked in. Unless you want to live like a monk, some baseline financial resources are necessary, but I suspect most people’s “minimum viable lifestyle” requires much less than they think. It’s also rarely about the numbers (let’s be real, if you’re reading this, running out of money is unlikely), but rather the discomfort with uncertainty. The good thing is, small wants often don’t cost much. I wish I had embodied this lesson last year when I was looking for a job. I felt like I had to skip summer and stay cooped up in my barely furnished apartment. I put myself in a mental prison. Even though I went to Tahoe and hiked in the Eastern Sierras, I remember being stressed and anxious about finding a job. I had to fulfill my responsibility of reaching financial stability, but that didn’t have to come at the cost of my small wants. There’s always enough time for our small wants.
I went camping in Big Sur this month with my friends Niles and Parker. While I wrapped up work, they bought tarps since it was supposed to rain all weekend. I only woke up six times the first night. The highlight was hiking up a ridge along the ocean and then coming back via a barefoot hike on the beach. When the beach turned to trail, I kept going barefoot since I’d have to anyways for the river crossing. Walking on dirt, mud, rocks, and leaves is a great way to stay present.
From Machines of Loving Grace: “I suspect that some new and stranger thing will be needed, and that it’s something no one today has done a good job of envisioning. It could be as simple as a large universal basic income for everyone, although I suspect that will only be a small part of a solution. It could be a capitalist economy of AI systems, which then give out resources (huge amounts of them, since the overall economic pie will be gigantic) to humans based on some secondary economy of what the AI systems think makes sense to reward in humans (based on some judgment ultimately derived from human values).” For the record, I don’t think it’s a good idea to let AI decide what is worth rewarding to us.