
What can you say to a title like that, except "Noted"?
Before you accuse the authors, Eliezer Yudkowsky and Nate Soares, of fearmongering, alarmism, and attention-grabbing, let me reassure you: that's something they readily admit to. They want your attention and your alarm, and (above all) they want to make you afraid of the path AI research and development is on.
Their argument centers on the "AI alignment problem," which is (see Wikipedia) a real thing. (Not that you should trust Wikipedia.) The alignment concept is pretty simple: making sure that your AI shares its designers' "intended goals, preferences, or ethical principles."
Fortunately, this is not yet a major problem. Chess-playing programs will beat you, sure. At chess, because that's their goal. They won't start taking over NORAD, like in *WarGames*.
But it is, the authors allege, an unsolved problem. Worse, it may be insoluble with the current state of the AI craft. Once AIs reach the level of "superintelligence", and given even a shred of autonomy, we are inevitably in for it. And, since AI operates millions of times faster than puny human intelligence, we are destined to see it spiral out of control before we even understand what's happening.
The only "solution", the authors argue, is an effective worldwide ban on AI R&D. We don't know where the critical threshold for AI-doom lies; we only know, since we are not all dead, that we haven't passed it yet. Probably not, anyway.
The book is written for the lay reader, with lots of analogies and metaphors. (E.g., Chernobyl, leaded gasoline, freon, nuclear weapon proliferation.) There is an accompanying website that goes into detail on technical issues, and encourages your activism.
Counterpoint is readily available: see Neil Chilson's review of the book at Reason: "Superintelligent AI Is Not Coming To Kill You". And the relevant Wikipedia article (which you shouldn't trust, see above) states: "Reviews of the book by critics have been mixed." For example, the NYT reviewer "compared the book to a Scientology manual and said reading it was like being trapped in a room with irritating college students on their first mushroom trip."
What do I think? I readily admit I don't know. I want to be optimistic, and I'm introspective enough to realize that's likely to bias my beliefs. It also seems to me that we are treading into areas we don't understand all that well: natural intelligence, human consciousness, and free will. Would we even recognize artificial superintelligence if it occurred? Or would it be so utterly alien that predicting its behavior would be impossible? Over to you, sci-fi authors.
![[The Blogger]](/ps/images/barred.jpg)


