I have been reading Daniel Hutto and Erik Myin’s book Radicalizing Enactivism for a critical notice in the Canadian Journal of Philosophy. Enactivism is the view that cognition consists of a dynamic interaction between the subject and her environment, and not in any kind of contentful representation of that environment. I am struck by H&M’s reliance on a famous 1991 paper by the MIT roboticist Rodney Brooks, “Intelligence Without Representation.” Brooks’s paper is quite a romp—it has attracted the attention of a number of philosophers, including Andy Clark in his terrific book, Being There (1996). It’s worth a quick revisit today.

To soften his readers up for his main thesis, Brooks starts out his paper with an argument so daft that it cannot have been intended seriously, but which encapsulates an important strand of enactivist thinking. Here it is: Biological evolution has been going on for a very long time, but “Man arrived in his present form [only] 2.5 million years ago.” (Actually, that’s a considerable overestimate: Homo sapiens is not more than half a million years old, if that.)

He invented agriculture a mere 19,000 years ago, writing less than 5000 years ago and “expert” knowledge only over the last few hundred years.

This suggests that problem solving behaviour, language, expert knowledge and application, and reason are all pretty simple once the essence of being and reacting are available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of intelligence is where evolution has concentrated its time—it is much harder. (141) 


To summarize:

(a) That evolution took a long time to do something shows that it is hard.

(b) Animals other than humans have intelligence only in the form of being able to react to the environment in real time.

(c) Other kinds of intelligence took a very short time to evolve because they are “pretty simple” add-ons to “being and reacting.”

Conceptual reasoning and language are just gewgaws, Brooks suggests: to focus on them in the attempt to understand cognition is like time-travelling engineers of the 1890s being taken on a flight aboard a Boeing 747. Asked to duplicate its amazing capacity for “artificial flight,” they recreate its seats and windows and suppose they have grasped the central innovation.

Now, premise (b) of this argument is dead wrong. The evolution of non-sensorimotor cognition and learning is very ancient indeed (in evolutionary terms). For example, even simple invertebrates sense and learn—Eric Kandel found conditioning in Aplysia, a very simple and ancient organism possessing only about 20,000 neurons in total. (In fact, Kandel demonstrated conditioning at the cellular level, so the complexity of the larger system was, up to a point, irrelevant.) Classical conditioning (and learning in general) consists of modifications to the internal states of organisms as a consequence of exposure to the environment; it is an outside-in mode of non-behavioural cognition: the creation of inner states corresponding to environmental regularities that the organism has been exposed to. Operant conditioning is a bit more complex: it is an internal modification that results from sensorimotor interaction with the environment. (It is worth noting that though it results from dynamic interaction, it is nonetheless an internal modification, not a mode of interaction.) The evolutionary history of conditioning and learning shows that there is a very long history of cognitive evolution that is independent of sensorimotor evolution. Language is the product of that evolutionary stream as much as it is of any other. It is neither discontinuous with what went before nor a simple add-on to environmental interaction.
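
To make this concrete, here is a toy sketch in Python (my own illustration, not anything from Brooks or Kandel; the class, the learning rate, and the Rescorla-Wagner-style update rule are stand-ins I have invented). It shows learning as a change to an internal state driven purely by exposure to paired stimuli, with no motor loop anywhere:

    # Toy sketch: classical conditioning as a change to an internal state,
    # driven purely by exposure to the environment (no motor interaction).
    class ClassicalConditioner:
        def __init__(self, learning_rate=0.1):
            self.association = 0.0           # inner state tracking a CS->US regularity
            self.learning_rate = learning_rate

        def expose(self, cs_present, us_present):
            """Update the internal state after one exposure to the environment."""
            if cs_present:
                target = 1.0 if us_present else 0.0
                self.association += self.learning_rate * (target - self.association)

        def expects_us(self, cs_present):
            """Read off what has been 'learned' about the CS-US regularity."""
            return cs_present and self.association > 0.5

    conditioner = ClassicalConditioner()
    for _ in range(30):                      # repeated CS-US pairings
        conditioner.expose(cs_present=True, us_present=True)
    print(conditioner.expects_us(cs_present=True))   # True: an inner state now mirrors the regularity

Nothing in this little system is a mode of interaction with the environment; the learning just is the modification of an inner variable.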

Brooks’s flagship example is a robot (dating back to 1987) that wanders around avoiding obstacles. In his introductory description, he says:

It is necessary to build this system by decomposing it into parts, but there need be no distinction between a “perception subsystem,” a central system, and an “action system.” In fact, there may well be two independent channels connecting sensing to action (one for initiating motion, and one for emergency halts), so there is no single place where “perception” delivers a representation of the world in the traditional sense. (147, emphasis added)
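
Read concretely, the design is something like the following toy sketch (mine, not Brooks’s code; the function names, the 0.3-metre halt threshold, and the velocity values are invented for illustration). Each channel runs straight from sensing to action, and nothing in it assembles or consults a central model of the world:

    # Toy version of the two-channel idea: one channel keeps the robot moving,
    # the other independently watches for obstacles and issues an emergency halt.
    import random

    def read_sonar():
        """Stand-in for a sonar reading: distance (metres) to whatever is dead ahead."""
        return random.uniform(0.0, 5.0)

    def motion_channel(distance_ahead):
        """Channel 1: initiate motion (a constant forward-velocity command)."""
        return {"forward_velocity": 0.5}

    def halt_channel(distance_ahead):
        """Channel 2: emergency halt if anything is too close dead ahead."""
        if distance_ahead < 0.3:
            return {"forward_velocity": 0.0}
        return None

    def control_step():
        distance = read_sonar()
        command = motion_channel(distance)
        override = halt_channel(distance)
        # when active, the halt channel simply suppresses the motion command;
        # no "perception subsystem" ever delivers a representation of the world
        return override if override is not None else command

    for _ in range(5):
        print(control_step())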

The traditional idea of obstacle avoidance relied on an egocentric map of the surrounding area. Brooks found that this was not necessary. He talks repeatedly about “data” and the like, but protests:

Even at a local level we do not have traditional AI representations. We never use tokens that have semantics that can be attached to them. The best that can be said in our implementation is one number is passed from a process to another. (149)

The second sentence above sounds perversely like Fodor’s syntactic theory of mind: the machine runs by the interactions of its internal tokens without knowing its own semantics. But that is not the question. The question is: Does it have semantics? Or: Why is this number passed from one process to another? What is the significance of the transfer? The answers to such questions are embedded in Brooks's description of his machine:

The finite state machine labelled sonar simply runs the sonar devices and every second emits an instantaneous map with the readings converted to polar coordinates. This map is passed on to the collide and feelforce finite state machine. The first of these simply watches to see if there is anything dead ahead, and if so sends a halt message . . . Simultaneously, the other finite state machine computes a repulsive force on the robot, based on an inverse square law . . . (153)
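
That description translates almost line by line into code. Here is a toy rendering (mine, not Brooks’s implementation; the fake sonar readings and the thresholds are invented) of the message passing he describes: sonar emits an instantaneous polar map, collide watches for anything dead ahead, and feelforce sums inverse-square repulsions:

    # Toy rendering of the sonar -> collide / feelforce message passing.
    import math

    def sonar():
        """Emit the instantaneous map: (angle, range) readings in polar coordinates."""
        return [(math.radians(a), 2.0 + 0.5 * math.sin(math.radians(a)))
                for a in range(0, 360, 30)]          # fake readings, for illustration

    def collide(polar_map, halt_range=0.4, dead_ahead=math.radians(15)):
        """Send a halt message if any reading dead ahead is too close."""
        return any(abs(angle) < dead_ahead and r < halt_range for angle, r in polar_map)

    def feelforce(polar_map):
        """Compute a net repulsive force on the robot, inverse square in range."""
        fx = fy = 0.0
        for angle, r in polar_map:
            magnitude = 1.0 / (r * r)          # inverse square law
            fx -= magnitude * math.cos(angle)  # push away from the obstacle
            fy -= magnitude * math.sin(angle)
        return fx, fy

    polar_map = sonar()
    print("halt:", collide(polar_map))
    print("repulsive force:", feelforce(polar_map))

The collide and feelforce processes consume the same polar map independently; nothing aggregates their outputs into a single model of the world.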

I am not suggesting that this kind of agentive talk should be taken literally. My point is that it provides a design perspective on the machine without which you cannot comprehend the setup. In an evolutionary setting, this kind of description shows us why an organic system has the external connections that it does. In short, it tells us what environmental significance various state transitions possess. And if the machine could learn, we’d want to figure out the environmental significance of its interactions, wouldn’t we? How else could we figure out what it had learned?

Two points, then. First, the evolution of cognition has cognitive starting points. Second, even Brooks's robots have cognitive states. But I am surely not saying anything new. Dan Dennett said it all, didn’t he, a decade or so earlier? I am just a bit surprised to find all of this still being taken so seriously nearly a quarter of a century later.

 


9 responses to “Rodney Brooks and the Enactivists”

  1. Michael Barkasi

    Mohan,
    Do you have a citation to Dennett handy? I’d like to see what he has to say about Enactivists.


  2. Eric Schliesser

    That’s a great post, Mohan! Brooks and Dennett started an official collaboration in 1993, so one wonders about their original conversations. (Dennett has published about their collaboration in his “The Practical Requirements for Making a Conscious Robot.”)


  3. Mohan Matthen

    I was talking about the intentional stance.


  4. Mohan Matthen

    For what it is worth, Eric, “Practical Requirements” seems to come down in favour of robotic representations, but is hesitant about whether robots have the right kind of context to ground those symbols. He writes:

    It is all very well for large AI programs to have data structures that purport to refer to Chicago, milk, or the person to whom I am now talking, but imaginary reference is not the same as real reference according to this line of criticism. These internal ‘symbols’ are not properly ‘grounded’ in the world, and the problems thereby eschewed by pure, non-robotic, AI are not trivial or peripheral. . . I submit that Cog [i.e., the robot MM] moots the problem of symbol grounding, without having to settle its status as a criticism of strong AI. Anything in Cog that might be a candidate for real symbolhood will automatically be grounded in Cog’s real predicament, as surely as its counterpart in any child, so the issue does not arise, except as a practical problem for the Cog team, to be solved or not as fortune dictates. (Proc Roy Soc A, 1994, 144)

    In other words, the intentional stance applies fully to robots.


  5. Orwin O’Dowd

    For the record, solutions to the Hodgkin-Huxley equations for the nerve impulse run to two orders of infinity, and what results is certainly not a finite-state machine. I saw this a generation ago but ran into a brick wall of dogmatic metaphysics, which now carries the ultimate authority of Proc. Roy. Soc.
    So what? Robotics shop-talk is just shop-talk and has no conceivable bearing on life, for all its material salience for the British economy. And if you swallowed some drivel on von Neumann automata, please realise that the genome does not copy itself, it divides. A genome is not a virus and failing to grasp that distinction may indeed be deemed criminally negligent…


  6. plus.google.com/114527471475176681943

    Orwin, Could you please explain what you mean? What are two orders of infinity? What dogmatic metaphysics? Who swallowed drivel about von Neumann? Who thought that the genome copies itself?


  7. Catarina Dutilh Novaes

    Hi Mohan, it’s funny that you have a post now about the Brooks article; I’ll be discussing it in my phil of cogsci course in a few weeks! (It was already in the program before I took over the course, and it’s a classic anyway…) I am sympathetic to some of your criticism here, but I’m also sympathetic to the general idea that cognition does not always involve representations. Radical enactivists like Hutto and Myin seem to claim that cognition never involves representations, but that seems too strong to me. I am somewhere in between, rejecting both universal claims.
    Have you read Louise Barrett’s ‘Beyond the Brain’? I’d be very curious to hear what you make of it.


  8. Mohan Matthen

    Hi Catarina,
    Where one draws the line between cognition-involving-representation and its opposite depends on what you think a representation is. One of the problems for representation-making robots is that if they store (for instance) a map in a separate memory location (or locations), that map becomes difficult to update in real time, which is obviously a huge limitation if the map is in egocentric coordinates and the robot is moving. Brooks’s big breakthrough is to have no discrete map. (See, however, Gallistel, The Organization of Learning, chapter 1, for a lovely discussion of ant navigation.) One question to ask here is whether other modifications to a robot’s internal states count as representing its external surroundings. The quote I gave from p. 153 of Brooks’s paper suggests that the answer could be ‘yes’. Hutto and Myin say that cognition is “intentionally directed” but has no content. I don’t exactly know what they mean. In their view, cognition is a simple causal interaction. How can such a process be “intentionally directed?” And if it is, why does it lack representational content? I don’t know the answers to these questions, and their text is quite unhelpful. (But for what it is worth, they do allow that “intellectual” cognition involves representation.)
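
    To make the real-time worry concrete, here is a toy illustration (my own sketch, not anything from Brooks or Gallistel; the numbers are invented): every point stored in an egocentric map has to be re-expressed after each motion before the map can be trusted again.

        # Toy sketch: an egocentric map must be re-expressed after every motion.
        import math

        def update_egocentric_map(points, forward, turn):
            """Re-express every stored (x, y) point after the robot moves
            `forward` metres along its heading and then turns by `turn` radians."""
            cos_t, sin_t = math.cos(-turn), math.sin(-turn)
            updated = []
            for x, y in points:
                x_shifted = x - forward            # the robot moved along its own x-axis
                updated.append((x_shifted * cos_t - y * sin_t,
                                x_shifted * sin_t + y * cos_t))
            return updated

        obstacles = [(2.0, 0.0), (1.0, 1.5), (-0.5, 3.0)]   # in the robot's current frame
        obstacles = update_egocentric_map(obstacles, forward=0.5, turn=math.radians(10))
        print(obstacles)   # every entry changed, though the world itself did not

    Brooks’s layered controllers sidestep this bookkeeping by never storing such a map in the first place.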


  9. Lucas

    Catarina,
    You write, “Radical enactivists like Hutto and Myin seem to claim that cognition never involves representations”. I think this is too strong. I take it that their position is that representational cognition is made possible by public language. So without, or prior to, language, cognition never involves representations. I think something like this is Hutto’s view.
    I guess a number of people inspired by the late Wittgenstein (and/or Sellars) have views like this: Representational cognition requires some sort of normativity, and the right type of normativity can only be supplied by the community.

