• If you’ve gotten this far, you’ve no doubt heard that Typepad is shutting down on Sept. 30. I’ve moved NewAPPS to WordPress, which is what you’re reading here.

    In the coming days and weeks I hope to get nearly all of the content from the original site ported over, get the URLs working correctly, and then resume posting on my ordinary semi-regular schedule.

    For now, I’ve imported posts back to late 2013. Back then, NewAPPS was very much a group endeavor. Unfortunately, I’ve not (yet) figured out how to preserve the original authors on imported posts. Everything will appear as me, even when it’s not.

    Thanks to all of NewAPPS’ readers over the years!

  • By Gordon Hull

    For quite a while, I’ve been exploring Derridean concerns about language and the politics behind theories of language (and text), and how to think about those concerns in the context of large language models (part one, two, three, four, five, six).  Last time, I talked about subjectivity and the question of whether a speaking subject necessarily subtends language production, and how that might play out in the context of language models.  Here I want to return the conversation to Plato, since Derrida’s discussion of Plato’s critique of writing in the Phaedrus is central to all of these thoughts.  One thing that emerges in Plato is that he is acutely aware of the political stakes involved, and the Phaedrus deploys various strategies with myths and stories to communicate those stakes.  Indeed, Plato does this basically all the time.

    Most obviously, in the Republic, when he infamously banishes the poets, it is not all poets he banishes, but the Homeric ones.  The problem is that Homeric poetry teaches the wrong thing.  Socrates broaches the subject as follows:

    “We must begin, then, it seems, by a censorship over our storymakers, and what they do well we must pass and what not, reject.  And the stories on the accepted list we will induce nurses and mothers to tell to the children and so shape their souls by these stories far rather than their bodies by their hands.  But most of the stories they now tell we must reject” (377c).

    He then immediately cites the example of the Uranus-Kronos story told by Hesiod, which he avers shouldn’t be told to young people even if true (I’ll return to this point, as I think it’s important).  Instead, for stories like that, “the best way would be to bury them in silence” (378a).  This is because “the young are not able to distinguish what is and what is not allegory, but whatever opinions are taken into the mind at that age are wont to prove indelible and unalterable” (378d).  As Penelope Murray comments on this passage, Plato “is not concerned with the factual veracity of history here, but with the ethical truth that should be expressed through myth” (252).  The problem is that the wrong myths have been told, and numerous examples of allowed and disallowed myths follow.  For example, we must say that the Gods do not deceive, because words are a “copy of the affection in the soul” (382b) and “essential falsehood … is hated not only by gods but by men.”

    (more…)

  • The announcement is here: https://everything.typepad.com/blog/2025/08/typepad-is-shutting-down.html

    NewAPPS has its own URL, but it's hosted on Typepad.  I assume this means that Typepad blog content will disappear from everywhere other than the Internet Archive, which will hopefully capture a lot of it (I can also download a file, so I can explore migration options).  I have limited experience with the Internet Archive, however.

    I don't know what this means for producing content going forward.  I know NewAPPS is not the only philosophy blog with this sudden problem (Leiter Reports is on Typepad, according to the URL; I'm not sure about Daily Nous).


  • By Gordon Hull

    Over what’s become a lengthy series of posts (one, two, three, four, five), I’ve been exploring a Derridean response to language models.  Initially prompted by a pair of articles by Lydia Liu on the Wittgensteinian influence on the development of language models, and some comments Liu makes about Derrida, I’ve been looking at the implications of Derrida’s critique of Platonism in the context of language models, and in particular the need to avoid making ontological pronouncements about them when we should be seeing them politically.  At the end of last time, I suggested that one possible Platonism concerns the unity of a speaking subject: when I say “the cat is on the mat,” what kind of subjectivity subtends my speech?

    That is, the Platonism question secondarily points to the question of the unity of a speaking subject (or at least the desirability of positing that speech emanates from a unified subject), which the Platonic priority of voice over text then enables one to associate with the production of language.  Language models produce speech but there is no unified subject behind them, only statistical prediction.  This predictive model treats meaning as a matter primarily of association and distribution across a language system. Anybody who’s versed in 20c “continental” thought will not be surprised by this, since one of the main endeavors of that thought from Heidegger (or even Nietzsche or Marx) onward has been to dismantle projects that posit such a unified subject.  As Henry Somers-Hall has argued, there has been a particular effort in French thought to move past constructions that rely on a broadly Kantian understanding of thinking as judgment (x is y) which are themselves subtended by an understanding of thought as representative.  Indeed, there’s a rich history of the developments in cybernetics making their way into France.
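    To make the “only statistical prediction” point concrete, here is a minimal sketch (my own illustration, not anything drawn from the sources discussed in these posts): a toy bigram model that generates text purely from co-occurrence counts over an invented corpus.  Current LLMs use learned vector representations and transformer architectures rather than raw counts, but the structural point is the same: nothing in the procedure requires, or produces, a unified speaking subject.

    ```python
    # A toy bigram "language model": the corpus and all names here are
    # illustrative only, not anyone's actual system.
    import random
    from collections import defaultdict, Counter

    corpus = "the cat is on the mat . the cat sat on the mat .".split()

    # Count which word follows which: "meaning" here is nothing but distribution.
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word):
        """Sample the next word in proportion to how often it followed `word`."""
        counts = following[word]
        words, weights = zip(*counts.items())
        return random.choices(words, weights=weights)[0]

    # Generate a short string: no speaker, no intention, only conditional frequencies.
    token = "the"
    output = [token]
    for _ in range(6):
        token = predict_next(token)
        output.append(token)
    print(" ".join(output))
    ```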

    There are several implications for language models.

    (more…)

  • I’ve been exploring some Derridean implications of the distributional understanding of meaning in language models (one, two, three, four), following a couple of papers by Lydia Liu that situate an important strand of LLM development in Wittgenstein.  From there, I’ve argued that a good Derridean contribution is in seeing the politics behind the non-Wittgensteinian view – that “Platonism” is the project of assigning metaphysical labels to a preference for voice over writing, even as that preference is a political decision that cannot be metaphysically justified.  Thus for Derrida the relevant Platonic move – using the preference for voice as a representation of the eidos over writing as a bad pharmakon – is really a distinction between two forms of writing and a preference for the former.  Here I’ll say some more about what I take Derrida to be doing and then get back to language models.

    As Derrida explains it, the distinction between speech as good writing and writing as bad writing (one can see Plato’s problem!) amounts to a distinction between dialectics and grammar, which I’m sorry to report needs to be quoted at length:

    “What distinguishes dialectics from grammar appears twofold: on the one hand, the linguistic units it is concerned with are larger than the word (Cratylus, 385a-393d); on the other, dialectics is always guided by an intention of truth. It can only be satisfied by the presence of the eidos, which is here both the signified and the referent: the thing itself. The distinction between grammar and dialectics can thus only in all rigor be established at the point where truth is fully present and fills the logos. But what the parricide in the Sophist establishes is not only that any full, absolute presence of what is (of the being-present that most truly ‘is’: the good or the sun that can’t be looked in the face) is impossible; not only that any full intuition of truth, any truth-filled intuition, is impossible; but that the very condition of discourse – true or false – is the diacritical principle of the sumplokē. If truth is the presence of the eidos, it must always, on pain of mortal blinding by the sun’s fires, come to terms with relation, nonpresence, and thus nontruth. It then follows that the absolute precondition for a rigorous difference between grammar and dialectics (or ontology) cannot in principle be fulfilled. Or at least, it can perhaps be fulfilled at the root of the principle, at the point of arche-being or arche-truth, but that point has been crossed out by the necessity of parricide. Which means, by the very necessity of logos. And that is the difference that prevents there being in fact any difference between grammar and ontology” (Dissemination, 166).

    Again, a few comments to help bring out what I think Derrida is getting at:

    (more…)

  • I want to take a break from Derrida and language models this week to explore an emerging policy issue.  As is impossible to miss, “AI” is everywhere.  Not everything that claims to be “AI” really is, but it’s getting hard to avoid things that call themselves “AI” as the AI companies look to make the technology profitable.  This is happening despite the decidedly lukewarm public attitude toward AI.  Current Pew research, for example, shows that AI experts are very enthusiastic about it, while the public isn’t: only 17% of all the adults surveyed thought AI was going to have a positive effect on the US over the next 20 years.  Concern is growing.

    This has generated at least three industry responses.  One is to push for deregulation of AI at the federal level.  Industry advocates nearly snuck a total ban on state regulation of AI into Trump’s spending bill; it was excised at the last minute by the Senate on a 99-1 vote.  Industry has simultaneously tried to get the executive branch to push (mostly unregulated) AI as vital to national economic competitiveness and security.  Trump has obliged repeatedly, starting with an executive order all the way back in January.  Trump is all about this AI narrative, but it has been the consistent U.S. approach to, and story about, AI for quite a while.

    The second and third approaches are to try to (for lack of a better term) engineer stronger public support.  The second takes the form of PR campaigns about the inevitability and magnificence of AI and the need for it to be shepherded by the incumbent AI companies.  Those who aren’t as fully on board the train – women, for example – are chastised and presented as doing damage to their careers; their concerns are frequently ignored.  The third is related: the all-out push to get AI into education at every level.  Ohio State and Florida have mandated that AI be incorporated across the curriculum (what does this mean, other than as a branding exercise? Nobody knows).  OpenAI is doing everything it can to make itself ubiquitous on college campuses.  Microsoft is dropping a cool $4 billion on AI education in K-12, and OpenAI and Microsoft are sponsoring teacher training.  A couple of weeks ago, Trump dropped an executive order promoting AI in education.

    (more…)

  • By Gordon Hull

    As part of thinking through the implications of Lydia Liu’s papers (here and here) demonstrating a Wittgensteinian influence on the development of large language models, I’ve made a detour into Derrida’s critique of writing (my earlier parts: one, two, three).  My initial suggestion last time was that Derrida’s discussion is designed to show that “Platonism” is a political move (not a metaphysical one).  For Derrida the Platonic priority of voice over writing disguises the fact that both are (in his own terms) repetitions of the eidos, and so the claim that writing is bad is the claim that it’s the wrong kind of repetition.  I suggested that for Platonism as read by Derrida, one could easily imagine a hierarchy of writing systems, based on their proximity to voice/speech.  Chinese ideography – which Liu argues is central to Masterman’s breakthroughs in computer language modeling – would be at the very bottom of a Platonic hierarchy.  But because this is Derrida, we can neither proceed quickly nor proceed without talking about Hegel.  So I closed last time with a long passage from Hegel in which he denigrates Chinese for being insufficiently spiritual and too hard to learn.  Today I’ll start with why it’s relevant.

    (1) First, Derrida takes up Hegel’s understanding of language in “The Pit and the Pyramid,” first delivered in 1968 and thus almost exactly contemporaneous with “Plato’s Pharmacy.”  There, working from the other end of metaphysics (Hegel, not Plato), Derrida describes such a hierarchy:

    (more…)

  • By Gordon Hull

    I’ve been looking (part 1, part 2) at a couple of articles by Lydia Liu (here and here) demonstrating a Wittgensteinian influence on the development of large language models.  Specifically, Wittgenstein’s emphasis on the meaning of words as determined by their contexts and their placement relative to other words gets picked up by Margaret Masterman’s lab at Cambridge and then becomes integrated into the vector semantics models that underlie current LLMs.  Along the way, Liu argues that the Masterman approach to language, which learns a lot from Chinese ideographs, in this sense goes farther than the Derridean critique of logocentrism.  Here I want to transition to Derrida’s critique, not to criticize Liu’s account, but to see an additional point in Wittgenstein, one that Derrida takes further.

    To recall, Liu notes that one effect of the Wittgenstein-Masterman approach is to overturn the logocentrism in Western writing, but that Masterman is doing something different from Derrida, who remains in the space of alphabetic writing:

    “For Masterman, to overcome Western logocentrism means opening up the ideographic imagination beyond what is possible by the measure of alphabetical writing. This is important, as it follows that the scientist’s and philosopher’s reliance on conceptual categories derived from alphabetical writing in their commitment to logical precision and systematization as well as their deconstruction must likewise be subjected to post-Wittgensteinian critique” (Witt., 437).

    As she adds a few pages later, “I am fully convinced that Masterman is the first modern philosopher to push the critique of Western metaphysics beyond what is possible by the measure of alphabetical writing, and, unlike deconstruction, her translingual philosophical innovation refuses to stay within the bounds of self-critique” (Witt., 444).

    (more…)

  • Last time, I started a look at the work of the early AI researcher Margaret Masterman of the Cambridge Language Research Unit (CLRU).  As demonstrated by Lydia Liu in a pair of articles (here and here), Masterman proceeded from Wittgenstein to a thorough deconstruction of traditional ideas of word meaning, moving instead to treating meaning as a function of a word’s associations, as we might find in a thesaurus.  This approach is a clear forerunner to the distributional view of language applied in current LLMs.  Here I’ll outline the basics of the Masterman approach and show how it applies to LLMs.

    Masterman’s starting point is a Wittgensteinian point about the distinction between a word and a pattern.  Counting with words would be “one, two, three.”  Counting with patterns would be “-, --, ---.”  But what if we counted “one, one one, one one one”?  Can words function as patterns?  Masterman applies the thought to the classical Chinese character “zi” (字, which I’ll write here as “zi”), the meaning of which depends on its context and placement in a given text.  Thus, “for Masterman, the zi is what makes the general and abstract category of the written sign possible, for not only does the zi override the Wittgensteinian distinction of word and pattern, but it also renders the distinction of word and nonword superfluous” (Witt., 442).
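    As a rough computational gloss on “meaning as a function of a word’s associations” (my own toy construction, not Masterman’s actual CLRU procedure; the mini-thesaurus entries below are invented), one can represent a word by the set of thesaurus heads it falls under and compare words by the overlap of those sets.  The vector semantics in current LLMs replaces such hand-built sets with learned, continuous vectors, but the underlying idea of meaning as position within a system of associations is the same.

    ```python
    # Illustrative only: a tiny hand-built "thesaurus" mapping words to the
    # heads (association classes) they fall under.
    thesaurus = {
        "bank":    {"money", "finance", "lending", "river"},
        "finance": {"money", "lending", "commerce"},
        "shore":   {"river", "coast", "edge"},
    }

    def similarity(w1: str, w2: str) -> float:
        """Jaccard overlap of two words' association sets."""
        a, b = thesaurus[w1], thesaurus[w2]
        return len(a & b) / len(a | b)

    print(similarity("bank", "finance"))  # 0.4: shared money/lending associations
    print(similarity("bank", "shore"))    # ~0.17: only the river sense overlaps
    ```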

    (more…)

  • I’ve been loosely tracking the AI and copyright cases, most notably the Thaler litigation, where Thaler keeps losing the argument that work produced solely by an AI should get copyright protection.  To summarize: everybody who has ruled on the question has said that only work involving humans can get copyright protection.  As I said at the time, I think a good policy reason in support of this position is that if purely AI-generated work could get copyright, an AI could produce millions of copyrighted images in almost zero time.  That’s got nothing to do with incentivizing human creation.  It was easy to miss given the deluge of atrocious Supreme Court decisions, but last week, a pair of district court judges ruled on a different (but not unrelated, in terms of markets) AI copyright question – whether scraping online text for training data is fair use.  Both cases are in the Northern District of California, so we can expect the 9th Circuit to have the first appellate decision on this topic.

    By way of background: fair use is an affirmative defense against copyright infringement.  That means that if you accuse me of infringement, I can defend myself as having engaged in “fair use,” which basically means “use that the copyright owner doesn’t like, but that we as a society think should be allowed for policy reasons.”  It could also mean “use that everybody thinks is ok, but for which licensing would be so inefficient that a licensing market would never emerge.”  Fair use is supposed to be decided case-by-case.  It depends on four factors: the “purpose and character” of the (allegedly infringing) use, the nature of the copyrighted work, the amount of it used, and the market effects of the infringing use.  The middle two factors tend not to matter much.  The first factor is usually decided by determining whether the use in question is “transformative.”  For example, consider parody; the Supreme Court ruled back in 1994 that a 2 Live Crew parody of Roy Orbison’s “Pretty Woman” was fair use.  The most closely analogous case I know of to the training-data question was an appellate decision about Google thumbnails.

    (more…)