I want to take a break from Derrida and language models this week to explore an emerging policy issue. As is impossible to miss, “AI” is everywhere. Not everything that claims to be “AI” really is, but it’s getting hard to avoid things that call themselves “AI” as the AI companies look to make the technology profitable. This is happening despite the decidedly lukewarm public attitude toward AI. Current Pew research, for example, shows that AI experts are very enthusiastic about it, while the public isn’t: only 17% of all the adults surveyed thought AI was going to have a positive effect on the US over the next 20 years. Concern is growing.
This has generated at least three industry responses. One is to push for deregulation of AI at the federal level. Industry advocates nearly snuck a total ban on state regulation of AI into Trump’s spending bill; it was excised at the last minute by the Senate on a 99-1 vote. Industry has simultaneously tried to get the executive branch to push (mostly unregulated) AI as vital to national economic competitiveness and security. Trump has obliged repeatedly, starting with an executive order all the way back in January. Trump is all in on this AI narrative, but it has been the consistent U.S. approach to, and story about, AI for quite a while.
The second and third approaches are attempts to (for lack of a better term) engineer stronger public support. The second takes the form of PR campaigns about the inevitability and magnificence of AI and the need for it to be shepherded by the incumbent AI companies. Those who aren’t fully on board the train (women, for example) are chastised and presented as doing damage to their careers; their concerns are frequently ignored. The third is related: an all-out push to get AI into education at every level. Ohio State and Florida have mandated that AI be embedded across the curriculum (what does this mean, other than as a branding exercise? Nobody knows). OpenAI is doing everything it can to make itself ubiquitous on college campuses. Microsoft is dropping a cool $4 billion on AI education in K-12, and OpenAI and Microsoft are sponsoring teacher training. A couple of weeks ago, Trump dropped an executive order promoting AI in education.
“AI Literacy,” or some cognate term, is all the rage. I’ll just quote from the start of the current EO, since it’s a good example:
“To ensure the United States remains a global leader in this technological revolution, we must provide our Nation’s youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology. By fostering AI competency, we will equip our students with the foundational knowledge and skills necessary to adapt to and thrive in an increasingly digital society. Early learning and exposure to AI concepts not only demystifies this powerful technology but also sparks curiosity and creativity, preparing students to become active and responsible participants in the workforce of the future and nurturing the next generation of American AI innovators to propel our Nation to new heights of scientific and economic achievement”
Ok! But it’s time to ask an old STS and HCI question: who is the “AI user,” and why are we designing our society and our schools to cater to this person? In a now-classic 2003 paper, Sally Wyatt raised the question of the Internet Non-User and their importance to both the design of the Internet and campaigns around it (which included a massive push to get it into schools). As Wyatt notes, “the assumption that non-use or lack of access is a deficiency to be remedied underlies much policy discussion about the Internet” (68). She also notes that “by focusing on use, we implicitly accept the promises of technology and the capitalist relations of its production” (69). Wyatt identifies four types of non-users: resisters (never used the internet and don’t want to); rejecters (stopped using it because they found it boring or they have alternate sources of information and communication); the excluded (lack access); and the expelled (used it, but it became too expensive or they lost institutional access). She also says we need to look at what a “user” is and how it’s defined:
“The Internet ‘user’ should be conceptualized along a continuum, with degrees and forms of participation that can change. Different modalities of use should be understood in terms of different types of users, but also in relation to different temporal and social trajectories” (77).
Let’s start there: the “AI user” also needs to be conceptualized on a continuum, and (corollary) we need to look at non-users, too. Importantly, not all non-users are users in training! But all of them will have to live in a world with AI, if even a tiny percentage of the hype is correct. This means that training for competency in AI also needs to understand and respect those who don’t use AI. Here are two initial reasons.
First, empirical work by Inha Cha and Richmond Wong shows that not using AI is an important part of workflows, even among computing professionals who are receptive to AI or use it in other contexts. A variety of factors influence decisions not to use AI, ranging from system and design-process specifications to professional responsibilities to larger social, legal, and regulatory considerations. For example, some of their interview subjects noted that “using AI to create the end outcomes and artifacts would skip over many of the benefits gained by human reflection while creating those artifacts during the traditional design process.” I cite this example specifically because it’s borne out by education research showing how student reliance on AI can short-circuit the benefits of going through an educational process.
As Cha and Wong argue, this evidence indicates the importance of understanding AI as part of sociotechnical assemblages. If AI literacy does not adopt this standpoint, one that centers not just the use of AI but also the contexts of AI use and AI non-use, it will fail to serve students.
Moreover, the argument has implications for design. Cha and Wong note:
“Our findings highlight the need to shift the focus in HCI and design research from merely identifying feature requirements – what AI systems should do – to critically evaluating what tasks AI should not perform and what human roles it should not replace. This reframing challenges the prevailing narrative of AI adoption as inherently beneficial, instead emphasizing the importance of defining usage boundaries for AI applications. By addressing what AI should not do, designers and researchers can prioritize the preservation of human agency, professional judgment, and ethical responsibilities”
The passage emphasizes that those who are involved with using AI and developing AI (the very people at the core of AI literacy initiatives) need to be taught to think about when AI is inappropriate.
Second, getting the input of non-users is important for the shape of AI. The article I linked above about women, for example, points out that a very good reason for AI hesitancy among women has been the tendency of recent technology to be used against them (see, for example, Danielle Citron on Internet harassment and non-consensual pornography). The perspective of non-users can make a world with AI better, even if they never touch it.
In a somewhat later (2009) paper, Christine Satchell and Paul Dourish point out that understanding non-users is important for HCI (Human-Computer Interaction) as a branch of computer science. As they announce their project:
“We are interested in non-use – in the varieties and forms, in the circumstances and contradictions, and in the importance of the ways in which experience may be intimately shaped by information technology outside or beyond specific circumstances of “use”. HCI has always had some kind of interest in non-users, of course, but generally has regarded them as potential users. Here, we want to take non-use on its own terms and examine the ways in which aspects of non-use might be relevant, conceptually, practically, and methodologically, for HCI.” (9)
Specifically, they are concerned to understand “the ways in which the discourse of use and users omits other forms of engagement with interactive systems that, we argue, are consequential for HCI research and practice.” (10). If I can be permitted a somewhat rough analogy, today we need to start substituting “AI Literacy” into paragraphs like that one.
These two factors matter because they suggest that a dominant conceptual frame of AI literacy and AI adoption is fundamentally flawed. That frame treats non-users as lagging adopters, those who simply do not yet use a technology. It is pretty clearly the AI industry’s narrative around its products, and the one adopted in federal policy like Trump’s EO. It then drives education initiatives like OSU’s, Florida’s, and the EO’s, in which everyone is assumed to be a future AI user whose primary need is to learn to use it better. Aspects of this narrative may well be true: as Wyatt points out, the widespread social adoption of a technology (her example is cars) makes it harder and harder not to use it. We saw this phenomenon recently with Facebook, where research as early as 2007 showed that college students got social capital from the site, though of course how you used it mattered. Before long, social networking became indispensable to social life on campus. But still, work like Cha and Wong’s shows that a picture of impending near-universal adoption is a serious oversimplification.
Notice three additional things. First, initiatives like OSU’s aren’t bottom-up. They’re forcing AI into everything, much as the auto industry worked to get urban trams dismantled. It’s a significant social shift, one the public is demonstrably skeptical about, and it’s being made in a very undemocratic way. There is no reason why our entire educational system should be reorganized to support this outcome, even if the ideological state apparatus is there to guarantee the social reproduction of capital.
Second, it turns out that maybe designing the whole country around cars wasn’t the best idea after all! Some of those non-users may have a point! But it’s very hard now even to imagine different forms of transit, because of the resources consumed by cars (this is very much something that advocates of autonomous vehicles try to shove under the rug: autonomous cars are still… cars). Slowing down, just a bit, might be a good idea. There's a lot of good, critical work on AI and its potential to cause social harm, including on how to see through industry hype about what AI can do. That work is made invisible or irrelevant by a narrative that constructs AI use as an inevitability.
Third, if the perspectives of non-users of cars were better taken into account, maybe we’d have better cars. The first thing that popped into my head was those really tall pickups, the trucks with hoods so high you can’t see over them, even to avoid running over a toddler in front of you. But nobody consulted pedestrians on that one. Car bloat is a problem, but the auto industry supports bigger cars because they have higher profit margins. So it pushes technological solutions that don’t work all that well, but that do take as inevitable that people want huge cars. Meanwhile, compared to other developed countries, the US has a stratospheric and rising pedestrian fatality rate, one that can only be addressed at the policy level, not by educating drivers or car buyers (who, in what amounts to a prisoner's dilemma, feel compelled to buy bigger vehicles to protect themselves from all the other bigger vehicles).
Satchell and Dourish note of the “lagging adopter” construction that “HCI’s attention is therefore directed towards the navigation of this [s-shaped adoption] curve” (10). Here, substitute “AI Literacy”: AI Literacy’s attention is therefore directed toward the navigation of this s-shaped adoption curve, the consequences be damned.
Satchell and Dourish discuss other forms of non-use; I’ll note two here. First, Active Resistance describes those who, well, actively resist using the technology. Even from an HCI standpoint, these folks are worth attention:
“A broader view reveals that active resistance constitutes one position within a larger collective effort to make sense of new technologies, and so, to the extent that those who resist a technology contribute to these debates and these ongoing processes of negotiation, they are deeply relevant. Eager adopters and active resisters are both responding to and shaping cultural interpretations of technology, even though they do so in different ways; their perspectives each play a role in the cultural appropriation of technologies” (11)
They refer to the Luddites, and note that the Luddite movement wasn’t about resisting technology so much as it was about resisting exploitative labor practices (Jathan Sadowski’s current book goes much further into this strategy of resistance). Today, for example, one might resist AI on energy-consumption grounds, since data centers are expected to use as much electricity as the entire country of Japan by 2030. Wyatt similarly cites research showing that rural resistance to electrification affected subsequent design, and asks rhetorically whether “mobile phones [would] make such irritating noises if non-users had been involved in their design” (78). As a recent paper on technology refusal strategies puts it:
“When refusers think strategically about the actions they need to take to bring about better technological futures, they are engaging in a design process that is pragmatically oriented and considers alternatives. Rather than designing physical artifacts, they are designing refusals as social artifacts. Instead of physical objects, these artifacts are individual and collective actions that challenge existing social relations”
This entire process is invisible if AI non-use is treated simply as lagging use, but it is clearly a necessary part of responsible AI literacy as a form of civic education.
Another category worth mentioning is Disinterest, “when the topics that we want to investigate are those that turn out not to be of significant relevance to a broader population” (13). What if people might be interested in AI, but they’re not interested in having it rammed down their throats in Microsoft Office or every Google search they run?
Overall, Satchell and Dourish note that “use and non-use are systemically related to each other as part of a broader framework, what we might call the “cultural milieu”” (14). And:
“The significance of the milieu as an approach to understanding use and non-use is it focuses our attention on the contexts within which use and non-use emerge as different aspects of broader phenomenon of cultural production around technological artifacts” (14)
AI Literacy initiatives need to be designed to respect the AI non-user, because that is a real person and not just a blip on the way to Universal AI Dominance™. This is not just about understanding the social context of AI use and the AI industry (though it is that). It’s about understanding that a healthy part of AI literacy is understanding why people might not use it, not so you can convert them, but so you can learn from them. AI Competency has to include knowing how and when not to use AI.
