by Eric Schwitzgebel

In a series of fascinating recent articles, philosopher Susan Schneider argues that

(1.) Most of the intelligent beings in the universe might be Artificial Intelligences rather than biological life forms.

(2.) These AIs might entirely lack conscious experiences.

Schneider’s argument for (1) is simple and plausible: Once a species develops sufficient intelligence to create Artificial General Intelligence (as human beings appear to be on the cusp of doing), biological life forms are likely to be outcompeted, due to AGI’s probable advantages in processing speed, durability, repairability, and environmental tolerance (including deep space). I’m inclined to agree. For a catastrophic perspective on this issue, see Nick Bostrom. For a Pollyannaish perspective, see Ray Kurzweil.

The argument for (2) is trickier, partly because we don’t yet have a consensus theory of consciousness. Here’s how Schneider expresses the central argument in her recent Nautilus article:

Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.

On this issue, I’m more optimistic than Schneider. Two reasons:

First, Schneider probably underestimates the capacity of the universe to create problems that require novel solutions. Mathematical problems, for example, can be arbitrarily difficult (including problems that are neither finitely solvable nor provably unsolvable). Of course AGI might not care about such problems, so that alone is a thin thread on which to hang hope for consciousness. More importantly, if we assume Darwinian mechanisms, including the existence of other AGIs that present competitive and cooperative opportunities, then there ought to be advantages for AGIs that can outthink the other AGIs around them. And here, as in the mathematical case, I see no reason to expect an upper bound of difficulty. If your Darwinian opponent is a superintelligent AGI, you’d probably love to be an AGI with superintelligence + 1. (Of course, there are other paths to evolutionary success than intelligent creativity. But it’s plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.)

Second, unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing. Schneider is right that many current scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus. But most current scientific approaches to consciousness also associate consciousness with some sort of broad information sharing — a “global workspace” or “fame in the brain” or “availability to working memory” or “higher-order” self-representation. On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity’s other subsystems and/or reportable in some sort of “introspective” summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation “lots of visible light from that direction!” would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. Otherwise, it’s unlikely that such a system could act coherently over the long term. Its left hand wouldn’t know what its right hand is doing.
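To make the functional picture concrete, here is a minimal toy sketch in Python. It is purely illustrative, not drawn from Schneider or from any particular workspace theorist, and the class and method names (GlobalWorkspace, Subsystem, broadcast) are mine, invented for this example. It just cartoons the idea that a representation counts as "globally available" when one subsystem's content is broadcast to the others and can be summarized in a report.

```python
# Toy illustration only: a cartoon of "global workspace"-style broadcasting,
# not a model of any real AI system or a claim about how consciousness works.

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.received = []  # contents that have been broadcast to this subsystem

    def receive(self, content, source):
        # The subsystem can now modulate its own processing using this content.
        self.received.append((source, content))


class GlobalWorkspace:
    def __init__(self):
        self.subsystems = []

    def register(self, subsystem):
        self.subsystems.append(subsystem)

    def broadcast(self, content, source):
        """Make one subsystem's representation available to all the others.

        On global-workspace-style views, it is roughly this broad availability,
        plus reportability, that marks a state as conscious.
        """
        for s in self.subsystems:
            if s is not source:
                s.receive(content, source.name)
        return f"[report] {source.name}: {content}"  # an "introspective" summary


# Usage: the visual subsystem registers bright light, and the workspace
# broadcasts that representation to goal, audio, and language subsystems.
workspace = GlobalWorkspace()
vision, goals, audio, language = (Subsystem(n) for n in
                                  ("vision", "goal-processing", "audio", "language"))
for s in (vision, goals, audio, language):
    workspace.register(s)

report = workspace.broadcast("lots of visible light from direction alpha", vision)
print(report)             # the reportable summary
print(language.received)  # other subsystems now have access to the content
```

Nothing about this sketch settles the metaphysics, of course; it only shows how cheaply the functional ingredients (broad availability and reportability) can be wired into a complex system, which is part of why I expect large general AI systems to have them.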

I share with Schneider a high degree of uncertainty about what the best theory of consciousness is. Perhaps it will turn out that consciousness depends crucially on some biological facts about us that aren’t likely to be replicated in systems made of very different materials (see John Searle and Ned Block for concerns). But to the extent there’s any general consensus or best guess about the science of consciousness, I believe it suggests hope rather than pessimism about the consciousness of large superintelligent AI systems.

Related:

Possible Psychology of a Matrioshka Brain (Oct 9, 2014)

If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).

Susan Schneider on How to Prevent a Zombie Dictatorship (Jun 27, 2016)

[image source]
[cross-posted at The Splintered Mind]


2 responses to “Is Most of the Intelligence in the Universe Non-Conscious AI?”

  1. Kenny Easwaran

    Interesting stuff. To me, this seems to be the most important point: “it’s plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.”
    But if the question is whether most of the intelligence in the universe is conscious or not, then it’s not enough that there are some evolutionary niches that reward high levels of creative intelligence (assuming that this sort of creative intelligence is correlated with consciousness) – it matters whether most evolutionary niches that favor intelligence favor this kind of creative intelligence.
    If we take the question down a notch, and ask whether most life in the universe would be intelligent or not, it seems that our experience on Earth suggests that the answer is no. It looks like copepods and krill are the two largest sources of animal biomass, and I’m assuming that they aren’t conscious, and that consciousness is confined to some animals and at most a small fraction of plants, leaving out the vast majority of life in single-celled and fungal form.
    Maybe there’s some reason to think that once you’ve reached the threshold of intelligence, there would be strong forces favoring consciousness. But I’m not sure why that would be any more plausible than saying that once you reach multicellularity there would be strong forces favoring intelligence.
    But perhaps the point about global workspace or fame or whatever means that effective intelligence must come along with consciousness on one level or other.


  2. Eric Schwitzgebel

    Thanks for the comment, Kenny! Yes, that seems right about krill and such, and I probably didn’t give the idea enough of its due in my parenthetical aside. I was vague about “intelligence” but I was assuming that the target here was intelligence of human level or greater — which presumably means things like long-term planning, especially in a hostile or competitive environment, and some sort of competition and cooperation with other beings of the same type. I’m thinking, then, that the Darwinian winners here would either dip below human intelligence or have creativity and/or something in the direction of a central workspace, brain fame, or higher-order self-representation. Human-level or greater intelligence, without either of those things, seems like a strange and unstable solution in a Darwinian environment. I guess part of my implicit thinking here is that a workspace doesn’t seem that expensive to design and it seems like it would bring large advantages of the left-hand right-hand sort.
