The brilliant but controversial Stephen Wolfram is up to something again:

[H]e is proclaiming his new project, the Wolfram Language, to be the biggest computer language of all time. It has been in the works for more than 20 years, and, while in development, formed the underlying basis of Wolfram’s popular Mathematica software. In the words of Wolfram, now 54, his new language “knows about the world” and makes the world computable.

From the point of view of the philosophical debates on artificial intelligence, the crucial bit is the claim that his new language, unlike all other computer languages, “knows about the world”. Could it be that this language does indeed constitute a convincing reply to Searle’s Chinese Room argument?

To be clear, I take Searle’s argument to be problematic in a number of ways (some of which are very aptly discussed in M. Boden’s classic paper), but the challenge posed by the Chinese Room still seems to me to stand; it remains one of the main questions in the philosophy of artificial intelligence. So if Wolfram’s new language does indeed differ from the other computer languages developed thus far precisely in this respect, it may offer us reasons to revisit the whole debate (which for now seems to have reached a stalemate).

But over at Slate, David Auerbach is not convinced:

In short, the users of the language “know about” the world, not the language itself. Baking particular sorts of data into the language does not help it “learn” or “understand.” It is not able to generalize or deal with exceptions to its rules. Wolfram has shown how the language can grab the flags of different countries and treat them as a dataset that can be manipulated for geographical or aesthetic analysis—but that’s only because the code for Wolfram has specific handling for associating specific visual images of flags with specific country names. But if the nation of Davidstan decides to change its flag to an animated, rotating sphere with ultraviolet paint on it, the Wolfram Language won’t be able to handle that without modification.
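
To make Auerbach’s point concrete: the ‘knowledge’ in question amounts to a lookup over data curated in advance by the language’s developers. Here is a minimal, purely illustrative sketch of that kind of hard-coded association – written in Python rather than the Wolfram Language, with invented country and flag entries – showing why an unanticipated case like Davidstan requires a developer to edit the data, rather than any ‘learning’ on the system’s part:

    # A purely illustrative sketch (Python, not the Wolfram Language); the country
    # and flag entries below are invented. The "knowledge" is a fixed table
    # supplied in advance by developers.
    FLAG_DATA = {
        "France": {"colors": ["blue", "white", "red"], "layout": "vertical tricolour"},
        "Japan": {"colors": ["white", "red"], "layout": "red disc on white field"},
    }

    def flag_colors(country):
        """Return the flag colors the system 'knows': a dictionary lookup, nothing more."""
        if country not in FLAG_DATA:
            # Anything absent from the baked-in table is simply unknown;
            # nothing here generalizes or handles exceptions to its rules.
            raise KeyError(f"No flag data for {country!r}")
        return FLAG_DATA[country]["colors"]

    print(flag_colors("France"))    # ['blue', 'white', 'red']
    # flag_colors("Davidstan")      # fails until a developer edits FLAG_DATA by hand

The Wolfram Language’s actual flag handling is of course far more sophisticated than this toy table, but the underlying point is the same: it is whoever curates the data, not the language, that does the ‘knowing’.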

At first sight, this may seem to be yet another version of a Chinese Room-like argument. Yet the real question seems to be not whether the Wolfram Language itself knows about the world, but whether the agents using it – computers – will ‘know about the world’. Will that be a phenomenon sufficiently similar to the way competent speakers of English ‘know about the world’ when using English – or any other vernacular language they master – unlike the man in the Chinese Room, who does not speak Chinese? (Putting aside the fact that Searle’s characterization of computation as the manipulation of meaningless symbols is somewhat simplistic.) To claim that a language, rather than the users of a language, ‘knows about the world’ is arguably a category mistake, and Auerbach is right to call Wolfram out on his bloated claims. (“The intellectual dishonesty in the presentation of the Wolfram Language, whether intentional or unintentional, disturbs me, as I’m sure it does many other computer science professionals.”)

But the general question of whether non-human agents using the Wolfram Language, or any other computer language, ‘know about the world’ in some suitable sense – that question still stands.

11 responses to “The Chinese Room all over again?”

  1. Frode Bjørdal

    This seems like a useful software program. I don’t think the claim that the language knows about the world should be taken in anything but a metaphorical sense; at least the link does not support a philosophical claim connected with strong AI.

  2. GFA

    Interesting — I wonder if Wolfram’s language is also perhaps relevant to the debates about analytic truth. You’re right that saying a language “knows about the world” may be a category mistake. But perhaps Wolfram was speaking loosely, and we could re-phrase that idea as something like: the language “encodes information about the world” (maybe there’s a better formulation; ‘information’ is a fraught word).
    But if a language really does encode information about the world, then it seems like we might have an instance of the (unpublished) criticism Gödel raised against Carnap. An analytic truth in L, according to Carnap, both (i) is true in virtue of the rules that define/constitute L, and (ii) makes no empirical/factual claims. Gödel’s challenge was: what guarantee do we have that every sentence meeting (i) also meets (ii)? If Wolfram’s language really does encode information, then it seems we would have a clear, unequivocal case in which (i) is met but (ii) is not.
    (Of course, the actual situation is more complicated. Commentators [Goldfarb & Ricketts, Friedman… and Carnap himself in places] have responded on Carnap’s behalf that if e.g. ‘Snow is white’ is taken as one of the rules defining your language, then it really does not make an empirical claim. Rather, that sentence is just homonymous with an English sentence that does make an empirical claim, but which is not true in virtue of the semantic rules that constitute English. I guess what I’m wondering is whether Wolfram’s language might make this line of Carnapian response seem more strained and implausible…)

  3. Patrick S. O'Donnell

    I understand knowledge, with Raymond Tallis, as fundamentally a mode of explicitness, of explicit-making consciousness. To elaborate a bit: after Grice, and in the words of Raymond Tallis, “linguistic meaning in the real world does not reside in the behavior of the symbols or expressions of which languages are composed—they are not located in ‘the system of symbols’ or its component terms—but in people who use languages to mean things, and the worlds they live in. This is because the specification of linguistic meanings requires that they are meant (by someone). What is more, in order that I should be able to determine what you mean, I have to intuit what you mean to mean. This involves, as Searle shows, getting a listener to recognize my intention to communicate just those things I intended to say in the act of communication.” One cannot ignore the speaking subject: “Our utterances are invested with, and exploit, an ‘implicature’ in virtue of which we can always imply more than we say. Verbal meaning, in short, resides in acts performed by human beings who draw upon their knowledge of the world and make presuppositions about the knowledge possessed by their interlocutors.”
    If one believes, as I do and again with Tallis (among others), and yet again after Grice (or Searle for that matter) that “[m]eaning cannot be separated from the psyche of the one who emits meaning, or from the psyche of the one who receives it,” and that our concept of knowledge is intimately tied to the various forms of memory (e.g., factual, experiential, and objectual), to emotions, thoughts, beliefs, and imagination, “the general question of whether non-human agents using the Wolfram Language, or any other computer language, ‘know about the world’ in some suitable sense” lacks any standing whatsoever. The question makes sense only if one thinks of meaning (which is, as Tallis says, ‘a quintessential feature of human consciousness’) “in purely linguistic terms and language being primarily a system of symbols.” One, it seems, has to have a (or something like a) “computational theory of mind” to imagine a computer language might exemplify having knowledge about the world (the relevant ‘knowledge’ here can only be metaphorical or secondary and derivative, parasitic in meaning on the knowledge possessed by those who program the computers, etc.). In short, knowledge requires “an enworlded self.” More explicitly:
    “Knowledge begins with the sense of there being something beyond how things appear to us: it begins with the concept of an object that is other than the self who entertains the notion of an object. Implicit in the idea of the object is the intuition of the subject contrasted with the object; more precisely, the Existential Intuition ‘That I am this…’ [the nature and origin of which are discussed in Tallis’s 2004 volume, I Am: A Philosophical Inquiry into First-Person Being]. Object knowledge [even Kleinian ‘internal objects’!] is also permeated [as ‘Wittgensteinians’ remind us] by a sense of publicness—of a shared world—that is not available to asocial sentience or asocial neural activities [or an electronic device that performs high-speed arithmetical and logical operations].”
    Intentionality is a feature of perceptions, of propositional attitudes such as beliefs and desires, and of utterances such as assertions. This necessarily implicates consciousness, consciousness of something…. Computers are without minds, the most conspicuous feature of which is consciousness. And consciousness cannot be reduced to material or biological or neurological properties: in other words, materialism cannot account for the “indexicality of human consciousness” in the sense of being “here” and “now” as Tallis says, similar to the Da-sein Heidegger identifies as the essence of the human being (Tallis provides compelling arguments against attempts to neurologize ‘here’ and indexicality in general). Computers by definition can’t have first-person experience: a “narrative center of gravity” requires the higher-order activity of a self….

  4. Patrick S. O'Donnell

    Errata: (first para.) “[….] What is more, in order that I should be able to determine what you mean, I have to intuit what you mean to mean.”
    (third para.) “…is also permeated [as ‘Wittgensteinians’ remind us] by a sense of publicness—of a shared world—that is not available to asocial sentience….”

  5. David Auerbach

    Thanks for linking. Space and audience concerns prevented me from even touching on the philosophical implications, though I guess my affiliations were pretty clear. From a technical standpoint, I don’t think Wolfram offers any new ability to test the Chinese Room argument. And in general I do not think purely symbolic GOFAI approaches will ever reach a Chinese Room-like level of “understanding” (or pseudo-understanding).
    But subsymbolic approaches do raise some interesting questions. A paper, Building High-level Features Using Large Scale Unsupervised Learning, seems to hint at an enormous machine learning network being able to functionally deploy the visual concept of “face.” This is extraordinarily primitive next to what animals can do, but I think it’s at least possible to argue that the system has some legitimate understanding of “face,” whereas I don’t think Wolfram’s language can be claimed to understand anything.
    (Because it’s relevant, I’ll link to another of my pieces that touches much more on the AI issues from a cognitive science standpoint: here. It was not written to philosophical standards of rigor so please be gentle!)

  6. Tony

    The model sounds like Berkeley or some of the 19th century Idealists like Royce: if I (a program) know about the world, it’s by way of a connection to “the mind of god”, that is, Wolfram’s cloud servers. Except that they are, of course, Wolfram’s cloud servers, and not the mind of god.

  7. Catarina Dutilh Novaes

    My feeling is that Wolfram’s language would not have counted as analytic by Carnap’s lights, i.e. (i) might not be met. Because there is a lot of information encoded in it, it’s not clear to me that it operates purely on the basis of what Carnap would be happy to recognize as syntax. But I would have to know more about Wolfram’s language to have a more educated opinion on this; so far I have only been following Auerbach’s article.

  8. Catarina Dutilh Novaes

    But to stipulate from the start that intentionality must belong exclusively to humans is to beg the question on precisely what is at stake, i.e. can non-human agents instantiate phenomena that are relevantly similar to human cognition? That’s one of the points eloquently made by M. Boden in the paper I linked to above.

  9. Catarina Dutilh Novaes

    Hi, thanks for stopping by! I enjoyed reading your article, and other articles too if my memory does not fail me 🙂 I understand that you write to a non-philosophical audience in these pieces, but your ‘affiliations’ were definitely clear! Thanks for the additional links. There was another recent piece I read on bloated claims made by researchers working on artificial intelligence, this one:
    http://www.newyorker.com/online/blogs/elements/2014/01/the-new-york-times-artificial-intelligence-hype-machine.html
    It seems to me it makes a point similar to yours: as long as some computer scientists use this bad rhetoric to hype their results, it’s going to reflect badly on the rest of the community.

  10. Catarina Dutilh Novaes

    Yes, something like the mind of God indeed. Scary, no? 🙂

  11. Patrick S. O'Donnell

    Perhaps I’m obtuse, but I fail to see where Boden “eloquently makes that point.” A computer can only instantiate phenomena that are relevantly similar to human cognition to the extent that it is human beings who program computers, and “similar” is then only used rather loosely if not figuratively: For instance, we sometimes hear it said that computers “follow rules,” but computers
    “cannot correctly be described as following rules any more than planets can correctly be described as complying with laws. The orbital motion of the planets is described by the Keplerian laws, but the planets do not comply with the laws. Computers were not built to ‘engage in rule-governed manipulation of symbols,’ they were built to produce results that will coincide with rule-governed, correct manipulation of symbols. For computers can no more follow a rule than a mechanical calculator can. A machine can execute operations that accord with a rule, provided all the causal links built into it function as designed and assuming that the design ensures the regularity in accordance with the chosen rule or rules. But for something to constitute following a rule, the mere production of a regularity in accordance with a rule, is not sufficient. A being can be said to be following a rule only in the context of a complex practice involving actual and potential activities of justifying, noticing mistakes and correcting them by reference to the rule, criticizing deviations from the rule, and if called upon, explaining an action as being in accordance with the rule and teaching others what counts as following a rule. The determination of an act as being correct, in accordance with the rule, is not a causal determination but a logical one. Otherwise we should have to surrender to what results our computers produce.” (Bennett and Hacker)
    The use of language that suggests, for instance, that computers instantiate phenomena “relevantly similar to human cognition” is fairly harmless until it is taken literally, leading us to suppose that it is a fact, or simply possible, that “computers really think, better and faster than we do, that they truly remember, and, unlike us, never forget, that they interpret [or understand] what we type in, and sometimes misinterpret [or misunderstand] it, taking what we wrote to mean something other than we meant. Then the [computer] engineers’ [or scientists’] otherwise harmless style of speech ceases to be an amusing shorthand and becomes a potentially pernicious conceptual confusion,” as is, I think, the case here.
    Dennett would have us speaking of Deep Blue as “playing” chess, just like Kasparov, but the computer only “‘plays’ chess in the sense that the microwave ‘cooks’ soup, though the programming is vastly more complicated” (Daniel Robinson). What’s “stipulative” is the “intentional stance,” fashioned, in part, so as to make it appear plausible that machines (among other things) are, like us, “intelligent systems.” With Daniel Robinson, “[c]onsider the broad, various, cultural, and dispositional factors that [need] to be recruited in order to qualify an activity as ‘play,’ and then array these against whatever ‘process’ gets Deep Blue to have the Bishop move to QP3.” Further, and relatedly, we might ask, “If Spassky and Kasparov are doubtful as to whether computers are ‘playing’ chess, is it not Dennett who must rethink the matter?”
    It’s on the order of a category mistake to think intentionality applies to non-human agents (although it applies in some degree to at least some non-human animals), Dennett’s “intentional stance” and nonsense about the fictional character of folk psychology notwithstanding: the ascription of psychological attributes is not about an interpretative stance, heuristic overlays or theoretical posits (it’s not surprising that Boden uncritically cites Dennett on this score). One does not merely adopt an “intentional stance” in the use of psychological predicates.* But my principal point concerns consciousness (intentionality being one feature or property of consciousness) in the first instance and not intentionality, at least insofar as some mental phenomena are not obviously intentional in any conventional sense (e.g., moods or sensations). In any case, it would be more precise to say, after Bennett and Hacker, that what is intentional is “the psychological attribute that has an intentional object.” Therefore,
    “[o]ne cannot intelligibly ascribe ‘intentionality’ to molecules, cells, parts of the brain, thermostats or computers. Not only is it a subclass of psychological attributes that are the appropriate bearers of intentionality and not animals or things, but, further, only animals, and fairly sophisticated animals at that, and not parts of animals, let alone molecules, thermostats or computers, are the subjects of such attributes. …[I]t makes no sense to ascribe belief, fear, hope, suspicion, etc. to molecules, [contra Searle] the brain or its parts, thermostats or computers.”
    * For the full critique of Dennett on this score, see the first appendix to M. R. Bennett and P.M.S. Hacker’s Philosophical Foundations of Neuroscience (2003). I agree with Tallis who writes, “It is difficult to know why this argument has been taken seriously.” See too the debate in Maxwell Bennett, Daniel Dennett, Peter Hacker, and John Searle (with Daniel Robinson), Neuroscience and Philosophy: Brain, Mind, and Language (Columbia University Press, 2007).
