In February of 2009, the LessWrong forums were launched by some guy named Eliezer Yudkowsky. Yudkowsky is a member of the demographic I call the Very Smartest Boys - affluent men who, despite lacking any particular expertise or accomplishments, consider themselves intellectual titans because they're good at a particular set of narrow, societally rewarded tasks, like taking standardized tests. LessWrong was a haven for this type, and the forums echoed with heady philosophical and scientific discussions, even though the participants rarely had any idea what they were talking about. Yudkowsky himself would occasionally say something stupid enough to break out into broader nerd circles, and we'd roll our eyes and chuckle and then go back to ignoring him.

After sixteen years of self-promotion, he managed to get a book into the mainstream. Now I have to explain Eliezer Yudkowsky to my dad.

Most people assume that a book couldn't get published, that its authors couldn't get space in The Atlantic, and that critics wouldn't glowingly endorse it, if the authors didn't actually have any relevant subject matter expertise. And yet here we are. Yudkowsky claims relevant expertise, but here's the thing: he's full of shit. Let's take a look at his book's website:

Eliezer Yudkowsky is a founding researcher of the field of AI alignment and the co-founder of the Machine Intelligence Research Institute.

Yudkowsky does not produce works that scientists cite. Neither does MIRI. They write stuff, but that's not "research," even if you put it in a fancy document on a fancy website that carries your "institute's" fancy name. Yudkowsky's and MIRI's output is only ever "cited" in Wikipedia articles written by other Yudkowsky disciples. No AI researcher has ever built any notable work on top of it.

With influential work spanning more than twenty years,

This "work" is - I swear I am not making this up - a six hundred thousand word Harry Potter fanfiction. This is a great example of a lie so audacious that nobody even suspects it. I sound like a crazy person telling you this.

Yudkowsky has played a major role in shaping the public conversation about smarter-than-human AI. He appeared on Time magazine's 2023 list of the 100 Most Influential People In AI, and has been discussed or interviewed in The New Yorker, Newsweek, Forbes, Wired, Bloomberg, The Atlantic, The Economist, the Washington Post, and elsewhere.

We know his publicist is competent, but this says nothing about Yudkowsky or his ideas.

What about Yudkowsky's co-author, Nate Soares? Maybe he's the actual brain here. Let's look at his bio on the book's website:

Nate Soares is the President of the Machine Intelligence Research Institute.

Nope.

I suspect this book is blowing up enough that you'll get asked about it, and "oh, some guy on the internet said the authors are charlatans" won't be a confident answer, so let's dive into the actual content a bit. Now, I did not read the book, and I'm not going to, for the same reason that I don't test the sound quality of speakers being sold out of the back of a truck before deciding whether to buy them. I did take a gander at a podcast appearance in which Soares discusses the book, and I will presume it can stand in for the book itself.

Here is the crux of the argument:

So, you know, the basic argument is people are making smarter and smarter AIs. The AIs right now aren't the smartest, but, you know, the machines are talking now. They weren't talking five years ago. They can do all the high school homework now. They couldn't do that five years ago. And there's a question of where will that be in the next five years? Where will that be when the next breakthroughs happen? Yeah. And, you know, in my book, I argue the default thing that happens here, the most likely thing that happens is we make these AIs smarter and smarter.

"The AIs started out dumb but they got smarter pretty quick and they're just gonna keep getting smarter until they're really really smart" is received wisdom (at least publicly) within the LLM research community. This is not a Yudkowsky/Soares original. I think it's wrong, but the mistake isn't theirs. And I think it's wrong for two related reasons.

The first is that "intelligence" is not coherently defined. Soares tries to wave this away as mere linguistic ambiguity:

We sort of have this difficulty where the word intelligence in the English language has two meanings. One meaning is the thing that jocks lack and that nerds possess. And the other meaning is the thing that mice lack and humans possess. And the type of intelligence that these people are trying to automate is the latter.

But the cognitive line between mice and humans is not nearly as bright as Soares implies. You don't need to be a specialized researcher to know this; as an apocryphal park ranger once lamented, there is considerable overlap between the intelligence of the smartest bears and the dumbest tourists. LLMs can, today, provide answers to technical questions unanswerable by the median human. Is the LLM "smarter" than that person? Is that question even useful?

But maybe you want to say, well, reality on the ground is just outpacing our ability to describe it. We don't have a good word for the thing that LLMs will eventually be better at than humans, but that thing definitely exists, and it's definitely getting "more" somehow, and once it gets "enough," things will be really bad. But this argument ignores the incentives in play. Right now, there are two groups of people influencing the direction of AI research: the nerds on the ground building models, and the CEO class making strategic decisions. The first group wants to create cool party tricks. The second group wants to get rich. "Superintelligence" is a sexy marketing word, but it doesn't describe anything either group actually wants, so why would they try to build it?

Maybe the AI systems will make themselves smarter? This is the "singularity" scenario. But we're back to our first problem: What is "smarter?" How can you optimize for something you can't even define?

I agree with Soares's broader point that these things could be prompting drastic social change that nobody's talking or thinking about, and that it kinda sucks that we're handing control over them to megacorporations. But that's true of all sorts of shit. This particular topic only grabbed his attention because it allows him to write science fiction under the guise of Serious Policy. His motivation is laughably transparent:

Like this is the part of the movie right now where the scientists go to the elected officials and say, like, we're headed in a bad direction and need to change. Like, that's what's happening to you right now. I'm the scientist here.

You want to discuss a system that processes, transforms, and even creates knowledge on a broad scale; that has the potential for drastic effects on the material world; and that can, if not actively monitored and maintained, collapse into a disastrous feedback loop? We've had that for centuries. It's called childhood education. Write a book about that, you goddamn nerd.