Generation G: Why The Generalist Generation Is Officially Upon Us
We've been captivated by the sudden rise of AI tools like ChatGPT because of the dystopian future they seemingly portend. In reality, the only people who should be worried are the specialists.
A lot of readers ask, “What’s a sciolist?” After I restrain myself from the urge to tell them to “Google it,” I explain that it’s basically an archaic word for someone who pretends to know a lot about, well, a lot. If they don’t punch me in the face or walk away out of sheer lack of interest, their natural follow-up question is, “Why did you pick such an obnoxious name?” Which is fair. But while the name of this newsletter is largely tongue in cheek, the ethos underlying it is not. At its core, this newsletter is predicated on the belief that a broad but interconnected base of knowledge is the best way to navigate this crazy little thing called society.
A short while ago, I stumbled upon a “mind blowing” video of chess grandmaster Magnus Carlsen’s memory being put to the test by a fanboying journalist who was able to afford almost an entire suit:
The clip is no doubt impressive, and shouldn’t come as a shock given that Carlsen is widely considered to be one of the best chess players of all time. But the feat loses some of its luster—shifting from an inspiring display of memory and cognition to something more akin to a party trick—when you realize that even the cheapest of chess computers can take Carlsen down handily in a match. In fact, Carlsen “won't even play his computer . . . because he just loses all the time and there's nothing more depressing than losing without even being in the game.” Computers first started beating the best chess players in the world in 1997—when IBM's Deep Blue defeated Garry Kasparov—and it’s basically been downhill for man ever since.
Carlsen is the paradigmatic specialist. Since the age of five, he’s been learning to master the game of chess through repetitive practice. Now, at 32 years old, he’s likely logged more than 10x Malcolm Gladwell’s infamous “10,000-hour rule” from Outliers. But, while we can certainly marvel at the tremendous skill he has developed, Carlsen and the game of chess should serve as a cautionary tale about over-specialization in today’s society. We may not be chess grandmasters, but we should all be wary of becoming shackled to a narrow skill, because when IBM’s Deep Blue comes for our jobs, we won’t have the luxury of simply refusing to play the computer.
The argument in favor of specialization—the process of concentrating on, and becoming expert in, a particular subject or skill—dates all the way back to Adam Smith’s book The Wealth of Nations, originally published in the uneventful year of 1776. In his book, Smith used the very relatable job of pin-making as an example to illustrate why specializing creates value. Smith argued that by “taking the important business of making a pin” (a phrase so absurd that you would think he was joking were it not for the fact that The Wealth of Nations is 692 pages of quite possibly the least funny content since The Diary of Anne Frank) and dividing it up into eighteen distinct operations, each person in the factory could make about 4,800 more pins a day than if each person did the entire process herself from start to finish. Setting aside Smith’s dubious math and the fact that historians have questioned whether he even knew anything about pin-making, the point is that people have long thought that specialization creates optimal outcomes.
Stay with me, I promise I’ll stop talking about 18th century pin-making.
Not much has changed since Smith argued in favor of specialization all those years ago. As Victor Matheson, a professor who “specializes in all things sports economics,” put it, “We talk about the term, ‘this guy’s a jack of all trades.’ But the reality is, being a jack of all trades kind of means you’re a jackass of all trades.” In fact, each new generation seems to be specializing earlier and earlier—whether due to overzealous parents or systemic pressures. Want your kid to be the next Magnus Carlsen, Tiger Woods or Yo-Yo Ma? Better get her practicing the same rote, repetitive tasks for multiple hours a day once she turns five so that she can hit 10,000 hours before she gets to high school. Even if your aspirations fall short of being the GOAT, it’s nearly impossible to escape forced choices like a college “major” (and typically a “focus” or “specialization,” too) or pin-making-esque jobs like “data analyst.” Ultimately, whether you’re vying to be the next Tiger Woods or you’re an average Joe forced into it by a pin factory (read: large corporation), the name of the game is specialization.
For the last 200+ years this worked, for the most part, rather well. Specialization gave us sports heroes, musical prodigies and groundbreaking products and services. It also enabled companies to scale fast and get minimum viable products to market even faster. Then, somewhere around, well, call it 1997, a sea change began to take shape.
This sea change was not immediate, but creeping. At first, the status quo largely held. Companies like Microsoft and IBM were predicated, and thrived, on specialization. But, after the dot-com bubble burst and the Internet age took hold, it became easier than ever to interact with one another, exchange ideas and, most importantly, problem-solve. Millions of people were suddenly connected in a way that let them harness the wisdom of crowds to answer questions, rather than seek out a local expert on the matter (a phenomenon Michael Mauboussin calls the “expert squeeze”). Meanwhile, a burgeoning supply of engineers began to flock to Silicon Valley to come up with the next big problem-solving idea for venture capitalists to throw a seemingly endless supply of money at. But there are a lot of problems out there, and some are easier to solve than others. So, in the race to be the first mover, it’s natural to target the lowest hanging fruit. For tech-literate engineers, that low-hanging fruit was systematizing and automating pin-making. Not literally pin-making, but jobs that were effectively 21st century pin-making. Soon, factory workers and other machine operators, customer service and travel agents, postal workers, cashiers, and millions of others like them would be rendered obsolete by automation. Why were these systematized roles targeted first? Well, like chess, they’re what’s known as “tame” problems for engineers to solve (i.e., code for). Horst Rittel and Melvin Webber first explained the concept in their 1973 paper, “Dilemmas in a General Theory of Planning”:
The kinds of problems that planners deal with — societal problems — are inherently different from the problems that scientists and perhaps some classes of engineers deal with . . . The problems that scientists and engineers have usually focused upon are mostly ‘tame’ or ‘benign’ ones. As an example, consider a problem of mathematics, such as solving an equation; or the task of an organic chemist in analyzing the structure of some unknown compound; or that of the chess player attempting to accomplish checkmate in five moves. For each the mission is clear. It is clear, in turn, whether or not the problems have been solved.
Wicked problems, in contrast, have neither of these clarifying traits; and they include nearly all public policy issues — whether the question concerns the location of a freeway, the adjustment of a tax rate, the modification of school curricula, or the confrontation of crime.
See, up to this point, technological advancements—as rapid and revolutionary as they may seem—have all addressed tame problems, because they are solvable problems. We can map out the right moves on a chess board and then program a computer to get to those right answers in an optimal manner. The same goes for specialized roles where the task involves a finite set of variables with predictable outcomes. Wicked problems, however, cannot be so easily addressed through code. Rittel and Webber again:
In the sciences and in fields like mathematics, chess, puzzle-solving or mechanical engineering design, the problem-solver can try various runs without penalty. Whatever his outcome on these individual experimental runs, it doesn’t matter much to the subject-system or to the course of societal affairs. A lost chess game is seldom consequential for other chess games or for non-chess-players. With wicked planning problems, however, every implemented solution is consequential. It leaves ‘traces’ that cannot be undone. One cannot build a freeway to see how it works, and then easily correct it after unsatisfactory performance.
Nowhere is this distinction more apparent right now than in the hot-button world of artificial intelligence. AI that solves tame problems is called “narrow AI” because, as the name suggests, it has a narrow range of abilities. We’ve gotten really good at narrow AI, and basically any smart feature (unlocking your phone with your face) or automated role (customer service chatbots) relies on it. On the other hand, tech that’s more on par with human capabilities is called “general AI.” General AI should, in theory, be able to solve wicked problems. There’s also a third category, “superintelligent AI,” which describes tech that surpasses human intelligence and presumably spells the end of humanity as we know it.
Despite what you may think, we haven’t yet reached general AI. It may seem like tech is approximating—or even surpassing—human beings, but the reality is we’re far from that place. ChatGPT, for instance, is groundbreaking technology that, at first glance, looks and feels a lot like general or even superintelligent AI, but the fact of the matter is it’s nothing more than a very impressive chatbot. Its answers may appear humanlike—and it’s certainly more impressive than any of its predecessor chatbots because of its ability to carry context across a conversation—but it’s still fundamentally narrow AI because it’s limited to a single medium: text-based chat. Not only does ChatGPT make plenty of mistakes, but it doesn’t know it’s making those mistakes, which leads to frequent instances of very confident-sounding wrong answers.
Another example is the self-driving car, which, again, is just narrow AI trying to tackle a tame problem. The first viable iteration of the self-driving car made its way across America in 1995, albeit with humans retaining control over braking and speed (which seems sort of like saying you ran across America but with the help of a car, but whatever). Almost 30 years later, we still haven’t mastered the driverless car, largely because, while driving is a tame problem, it’s a tame problem with just enough variables to make it, as of today, beyond the reach of coders to master (but maybe if we all just click a few more images of red lights and crosswalks, we’ll get there soon).
The point is that we’re far, far away from murderous robots that transcend human intelligence and have the ability to solve wicked problems. And that’s to be expected. Today, we still “know very little about the brain” and fundamentally “don’t know how information is processed.” It’s hard to see how we can produce general or superintelligent AI that mirrors or surpasses human intelligence if we still don’t know the underlying mechanics of human intelligence.
This puts us humans in a unique position: Although we’re much farther away from general AI than we think, we’re also better than ever at narrow AI, which is quickly pervading more and more aspects of our society. In fact, we’re getting so good at narrow AI that automation is currently predicted to eliminate an estimated 73 million US jobs by 2030. The common denominator among all those jobs—regardless of whether they’re blue collar or white collar, or in rural America or urban America—is that they’re specialized roles involving a predictable set of tasks. David Epstein wrote about this phenomenon in his popular book Range:
Like chess masters and firefighters, premodern villagers relied on things being the same tomorrow as they were yesterday. They were extremely well prepared for what they had experienced before, and extremely poorly equipped for everything else. Their very thinking was highly specialized in a manner that the modern world has been telling us is increasingly obsolete. They were perfectly capable of learning from experience, but failed at learning without experience. And that is what a rapidly changing, wicked world demands—conceptual reasoning skills that can connect new ideas and work across contexts. Faced with any problem they had not directly experienced before, the remote villagers were completely lost. That is not an option for us. The more constrained and repetitive a challenge, the more likely it will be automated, while great rewards will accrue to those who can take conceptual knowledge from one problem or domain and apply it in an entirely new one.
In 2023—an era where specialists are effectively being rendered obsolete in developed nations—the real question isn’t whether one should be a specialist or a generalist, but rather where to draw the line. How much generalization is too much? At what point does a generalist skillset become so disparate and unrelated that it becomes ineffectual? Those are difficult questions to answer, and there may not be one right answer. But with the global market for AI expected to be worth over half a trillion dollars by 2024, and over 1.5 trillion dollars by 2030, it’s probably best to start figuring it out.
It’s fitting that Victor Matheson, the self-proclaimed specialist in sports economics that I quoted earlier, paraphrased the line “A jack of all trades is a master of none” to criticize generalists. Had he not been so focused on the life-altering world of sports economics, Matheson might have stumbled across the full quote: “A jack of all trades is a master of none, but oftentimes better than a master of one.” Your job may be safe if you’re one of the few professionals specializing in the sports that Matheson has devoted his life to studying (we seem averse, as of now, to replacing athletes with machines, even though, as Magnus Carlsen knows well, machines can technically do the job better). As for the rest of us, the writing on the wall is clear: If you want to be valuable in Generation G, where all the tame problems have been automated away and only wicked problems remain, it’s time to join team sciolist.