Apocalypse No

Superintelligent AI is highly unlikely. Here's why.

There is no shortage of jeremiads about the apocalyptic possibilities of a so-called superintelligence: defined by the term's coiner as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Leaving aside the presumption implied in this definition ("smarter than...brains"?), I contend that AI is extremely unlikely to advance past the machine-learning-based automation of language tasks. Focusing instead on the unique challenges of human-machine teaming would be far more fruitful than handwringing over naught.

For a superintelligence to come into existence, it would need to bootstrap off of a large portion of the combined knowledge of human civilization. To perform this bootstrapping, an intelligence or proto-intelligence would need a suitable hermeneutic model. No such hermeneutic model can exist.

Definitions

What we are really worried about is a dangerous superintelligence, which I define as an intelligence exhibiting:

  • Agency: A dangerous intelligence must have agency of its own.
  • Superiority: A dangerous intelligence must be more intellectually capable than any human could be.
  • Effectivity: A dangerous intelligence must be able to intervene in our world in a malicious way.

By "hermeneutic model" I mean a framework for understanding a given document, which we assume must at least somewhat overlap with the intent of the document's creator. This would include language, worldview, references, etc.

The need for a bootstrap

In order for a dangerous intelligence to surpass us at anything we care about, it would have to stand on our shoulders. To use a characteristically morbid example, let's imagine that a dangerous intelligence wanted to build a nuclear weapon. It is almost taken for granted that the basis for the design, parts selection, and build would come from a repository of human knowledge.

You, a philosopher, may object: but why is that the case? Given the cycle speeds of a supercomputer and some magical evolutionary algorithms, could not an intelligence merely simulate all of human history, including the entire biosphere of the earth? Surely then it could, from first principles, derive a nuclear weapon by itself.

Even if we accept this clearly absurd proposition, a simulation that would take the lifetime of the universe to run, it would still not necessarily yield an intelligence that intersects with what we mean when we say intelligence. The simulation approach assumes that intelligence evolved teleologically, with human-like intelligence the only possible outcome of evolution. The only other option would be to sprout every possible intelligence and cull those that aren't fit for our task. Doing so would require a reasonably parameterized model of intelligence (i.e., the very thing we are trying to derive), so we've reached a dead end.
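
To make that dead end concrete, here is a minimal sketch of the evolutionary search the objection imagines. Every name is hypothetical; the point is that the fitness function such a search requires is precisely the parameterized model of intelligence we do not have.

```python
def evolve(population, fitness, mutate, generations=1000):
    """Generic evolutionary search: score candidates, keep the best, mutate, repeat."""
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: len(ranked) // 2]
        population = survivors + [mutate(s) for s in survivors]
    return max(population, key=fitness)

def fitness(candidate_mind) -> float:
    # The dead end: scoring a candidate for "human-like intelligence" presupposes
    # a parameterized model of the very thing we are trying to derive.
    raise NotImplementedError
```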

So, we agree that a dangerous intelligence would need to bootstrap off of human knowledge, and all of it, if possible. In order to do so, however, the intelligence bootstrapping protocol needs to ingest all of this knowledge and convert it into a format that the superintelligence can learn from.

Nobody thinks of hermeneutics

We now have a translation problem: we need a way to translate every single document in human history into a format appropriate for training a learning algorithm. Unfortunately, the translation task that we have set before us is more fraught than translating a Chinese food menu into Bantu. The mechanism by which human documents are translated into machine documents will be the limiting factor in any development of intelligence.

First: we can't make presumptions about the relative importance of documents, since if we knew which documents were important we'd be superintelligent ourselves. This means that we need to make an effort to translate as close to every document as possible.

Second: these documents must also be "interpreted," or rather "normalized," for the training regime that we are using. Most of the machine-translation tasks to which we are accustomed translate between languages and assume a shared hermeneutic model. Often this is safe, because the source and target messages both refer to a contemporary, hyper-globalized world. Translating from one hermeneutic model into another (interpretation), across time, is a different story. It would require either the creation of a third hermeneutic model between source and target, or for the target hermeneutic model to grow, pseudopod-like, towards the source.

The only way to properly ingest these documents and begin training the algorithm would be to have a system that could match a given document with its proper hermeneutic model and then translate the document into a format that the learning algorithm can use. I submit that this is impossible without foreknowledge of the subjective experience of human intellect, which is the unacknowledged baseline for hermeneutics: we assume that the creator of a text had an intellect that somewhat mirrors our own.
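
For concreteness, here is roughly what that ingestion system would have to look like. This is a hypothetical sketch, not anyone's actual pipeline; every name is invented, and the first function, matching a document to its hermeneutic model, is exactly the step I am claiming cannot be built.

```python
from typing import Any, Iterable

def match_hermeneutic_model(document: str) -> Any:
    """Recover the language, worldview, and references of the document's creator.
    The impossible step: it presupposes the subjective experience of a human
    intellect, which no algorithm has access to."""
    raise NotImplementedError

def normalize(document: str, hermeneutic_model: Any) -> list[float]:
    """Translate the document, under its hermeneutic model, into training features."""
    raise NotImplementedError

def ingest_corpus(documents: Iterable[str]) -> list[list[float]]:
    # As close to every document as possible, since we cannot know in advance
    # which ones matter.
    return [normalize(doc, match_hermeneutic_model(doc)) for doc in documents]
```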

The best one could hope for in this respect is that the "smartest" person in the world be tasked with personally converting every document from human history into a format that the AI understands. So first, we'll have to engineer an extremely long-lived human who is also superintelligent. Wait, that almost sounds like...

Centaurs, not SkyNet

I am not saying that the introduction of "Language Automation" software into the economy will be a non-event. Far from it.

Many of the economic repercussions that the press warns about do not require general-purpose AI. All they require is application-specific, machine-learning-based NLP, which I think is one of the real stories of the last couple of decades. I am bullish on any company that understands this and is taking steps to train, or apply reinforcement learning to, algorithms in its own application area. This is more economically important than trying to get an AI to win Jeopardy. Oh wait.
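
As a minimal sketch of what application-specific, machine-learning-based NLP looks like in practice (the tickets, labels, and routing task here are hypothetical, and scikit-learn is just one convenient choice), consider a model that routes support tickets to the right queue:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical in-domain training data: support tickets labeled by queue.
tickets = [
    "refund not received for order 1182",
    "charged twice for the same invoice",
    "password reset link is broken",
    "cannot log in after updating the app",
]
queues = ["billing", "billing", "account", "account"]

# A narrow, task-specific model: no agency, no general intelligence required.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tickets, queues)

print(model.predict(["charged two times on one invoice"]))  # likely ['billing']
```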

Ultimately, no business actually wants an agential super-AI; what they want is an advanced question-answering system. Their business rationale is that if they can train a general-purpose algorithm that can answer any question forever, they can fire everyone. I doubt very seriously that this is possible with just a machine. Human-machine teaming will be able to accomplish much of what has been sold as AI by the more egregious hype-mongers, and do so without the statistical anomalies of machine learning, which are still unacceptable in most applications. I'd even wager that a well-run Centaur will be more cost-effective in the long run.
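
Here is a minimal sketch of that Centaur pattern, with entirely hypothetical names: the machine answers what it is confident about, and everything else is routed to a human.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    confidence: float  # model's self-reported confidence in [0, 1]

def centaur_answer(question: str,
                   model: Callable[[str], Answer],
                   ask_human: Callable[[str], str],
                   threshold: float = 0.9) -> str:
    """Route a question through the machine first, and a human second."""
    draft = model(question)
    if draft.confidence >= threshold:
        return draft.text
    # Below the confidence bar, machine-learning anomalies are unacceptable:
    # escalate to the human half of the team.
    return ask_human(question)
```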

Cross-posted from my Urbit