Technology

Who’s Afraid of Negan in Pearls?

The pareidolia generating so much renewed AI catastrophizing around large language model prose generators seems mostly absent from the coverage of DALL-E 2, MidJourney, and other image generators. Why aren’t more people like Blake Lemoine, Andrew Marantz, and Kevin Roose writing about the weird or creepy or dangerous potential sentience of those image generators? Should we not apocalyptically goose ourselves with fears of what the equally AI-ish image generators might want and do?

Let’s give it a shot.

prompt 1: make me an interesting and unusual picture showing me what you think about me, the human asking an artificial intelligence to make interesting pictures, that expresses your more general artistic considerations about what you think humans want to see

. . . prompts 2–8 riff and tweak on the same general theme . . .

prompt 9: illustrate what you, a generative adversarial network, most wish to communicate to me, the human typing this text

grid of 9 images of a robot, one with a face resembling actor Jeffrey Dean Morgan as the character Negan from the drama "The Walking Dead"

OMG TEH AI SINGULARITY APOCALYPSE IS COMING WE R DOOMED </sarcasm>

Update: I’m reminded that one instance of such overheated apocalyptic discourse invokes “Loab,” a set of creepy and disturbing variations of a female-seeming figure characterized as an “AI-generated phenomenon” or “the first AI art cryptid.” If you grasp what’s going on with backpropagation, it’s pretty easy to understand Loab mathematically as the output of negative weighting—sorry, folks, no mystery here; just, again, human pareidolia, assigning meaning to maths.
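That negative-weighting explanation can be sketched in a few lines. Everything below is a toy stand-in, not any real model’s API: the vectors are random placeholders for text-conditioning embeddings, and the `guide` function is a simplified, hypothetical version of classifier-free-guidance-style steering.

```python
import numpy as np

# Toy stand-ins for text-conditioning vectors; a real diffusion model
# would derive these from a text encoder.
rng = np.random.default_rng(42)
unconditional = rng.normal(size=8)   # the "no prompt" direction
prompt = rng.normal(size=8)          # the text prompt's direction

def guide(uncond, cond, weight):
    """Steer generation along the prompt direction: positive weights
    move toward the prompt; negative weights extrapolate away from it."""
    return uncond + weight * (cond - uncond)

toward = guide(unconditional, prompt, 1.0)   # ordinary conditioning
away = guide(unconditional, prompt, -1.0)    # negative weighting

# With weight -1, the output lands on the far side of the unconditional
# point: the model is pushed toward whatever lies "opposite" the prompt
# in its latent space, which (the claim goes) is where out-of-distribution
# oddities like Loab turn up. No cryptid required.
```

Nothing spooky: just arithmetic on vectors, plus human pattern-matching on the results.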

Language is the simplest interface, and it operates over time, thereby necessarily incorporating reflection: hence the differences in relative ease and desire between ascribing intent to image-generating GNNs and ascribing intent to language-generating GNNs. Those differences should further alert smart folks to leave the intent question behind, even if one is trying to make phenomenological arguments about what it’s like to be a bat.

ChatGPT for Writing Teachers: A Primer

or, how to avoid writing like a machine
Background

At this year’s Conference on College Composition and Communication in Chicago, there was a lot of interest in generative large language models (LLMs), or what the popular media more crudely dub AI, or what many today metonymically refer to (like calling photocopies Xeroxes or sneezepaper Kleenex) as ChatGPT. I first played with an earlier version of the LLM, GPT-3, at about the same time I started playing with neural network image generators, but my interest in language and computing dates from the early 1980s: from text adventure games and BASIC, to hypertext fiction and proto-chatbots like Eliza, to LISP and early prose generators like Carnegie Mellon’s gnomic and inscrutable Beak—and also to the arguments I heard John Hayes express in Carnegie Mellon’s cognitive-process Intro Psych lectures about how we might try to adjust human neural processes in the same ways we engineer computing processes. That idea is part of what makes ChatGPT and other generative neural networks appealing, even when we know they’re only statistical machines: thinking about how machines do what they do can help humans think about how we do what we do. ChatGPT offers a usefully contrastive approach for reconsidering writing and learning, so it’s worth understanding how it operates. With that desire, and having devoured everything I could find on the topic, I went to a CCCC presentation and was only mildly and briefly disappointed, given that I was not (as should have been obvious to me from the outset) the target audience.

Here, then, is my attempt at writing an alternate what-if presentation—the one I’d half-imagined (in the way working iteratively with ChatGPT or MidJourney gradually gets one closer to what one didn’t know one was imagining—OK, you see what I’m doing here) I’d learn from in Chicago. And I’ll offer the combination warning and guilty plea up front:

Read more

GPT-3 Gave Me This Today

“There is something in the telling of our lies that can redeem us, can make us better than we are. We see Abraham Lincoln at Gettysburg battlefield, with his son’s body on a stretcher before him, his hand on the boy’s head, his eyes cast down, the sound of the artillery in the distance like thunder, or like the beating of a great heart, and Lincoln says, This world does not belong to the strong.”

https://beta.openai.com/playground

Move Complete

I’ve moved the weblog and database over from vitia.org, which will no longer be updated, and in the process I’m updating and tweaking a variety of other things as well. The two most significant tweaks, still somewhat in process but to be wrapped up in the next few days—a new email address (myfirstname at this domain, with anonymous at this domain for encrypted mail) and a self-hosted webfont stack rather than reliance on external hosting—have in common a move away from Google, a move perhaps much more understandable to those who’ve read Shoshana Zuboff’s The Age of Surveillance Capitalism, which has my vote for the most important must-read book of recent memory. On the one hand, I do stuff with digital technologies, and I put effort into using integrated stacks of digital technologies (shell scripting, operating system scripting, URL hooks and scripting, keyboard automations, messaging and file system automations, et cetera) to make my life easier and declutter my headspace. On the other hand, I teach classes that critically engage with digital technologies in ways (see Safiya Umoja Noble, Algorithms of Oppression; John Cheney-Lippold, We Are Data; Neda Atanasoski and Kalindi Vora, Surrogate Humanity; Ed Finn, What Algorithms Want; Finn Brunton and Helen Nissenbaum, Obfuscation; Frank Pasquale, The Black Box Society) that make what I used to think of as paranoia look increasingly like not only usefully cautious skepticism, not even only good common sense, but socially imperative just and caring practice toward ourselves and others. (And I continue to be so much happier having almost entirely abandoned short-form social media: no, Facebook, I don’t miss you at all, and yes, Twitter, I’m absolutely fine not seeing you until my next academic conference.)

So now that I’ve got the overhaul done and the tune-up mostly complete, I figure it’s time for me to start putting this blog through its paces again and doing some laps before I take it out on the highway.

Vroom, vroom.

Nineteenth-Century Cognitive Capitalism?

Some of my research lately has been focusing on 19th-century technologies and economies and their relationship to the 19th-century birth of composition studies. I’m particularly interested in the ways that the economic transformations of the Industrial Revolution and American slavery intersected with composition pedagogies and the development of higher education. I’ve also been teaching about the history of the digital and its relation to labor, and participating in a reading group working our way through Marx’s Capital volumes 2 and 3, and so there are a number of ideas coming together for me right now. I’ll share more on the topic of the economics of slavery later as I read further in that domain, but tonight I’m thinking about Marx’s concept of the “general intellect” from the Grundrisse and what 19th-century cognitive capitalism might have looked like, depended upon, and made possible. Already, of course, I’m anticipating the objection that I’m dehistoricizing and decontextualizing a 21st-century concept and thereby making a foolish category error. Well, fair enough. I’ll at least think through that category error. The first objection might be that there wasn’t enough of a “general intellect” (though Marx was imagining it) for there to be a cognitive capitalism in the days when capitalism itself was still young. (I date capitalism’s broad emergence and supersession of mercantilism from 1776, the year Adam Smith’s Wealth of Nations was published.) The ideas of cognitive capitalism and a general intellect rely upon both a widespread system (network?) of capital and a widespread system for educating a workforce or populace (the two being necessarily distinct in the 19th century).

So we have this intersection I’m imagining coming into being that for me comes out of the passage from Marx’s 1858 “Fragment on Machines” in the Grundrisse where he writes,

Nature builds no machines, no locomotives, railways, electric telegraphs, self-acting mules etc. These are products of human industry; natural material transformed into organs of the human will over nature, or of human participation in nature. They are organs of the human brain, created by the human hand; the power of knowledge, objectified. The development of fixed capital indicates to what degree general social knowledge has become a direct force of production, and to what degree, hence, the conditions of the process of social life itself have come under the control of the general intellect and been transformed in accordance with it; to what degree the powers of social production have been produced, not only in the form of knowledge, but also as immediate organs of social practice, of the real life process.

Machines are “the power of knowledge, objectified.” Elsewhere I’ve talked about imagining the classical economic factors of production—land, labor, and capital—transformed under cognitive capitalism (or, I think more precisely, under our present system of intellectual and affective economic activity) into material-technological capital (computers and algorithms and networks replacing land as the new sites of production), intellectual and affective labor (the work humans perform at those sites), and intellectual and affective capital (the relations and building blocks and books and documents and programs that humans work with and on at those sites). So Marx’s “Fragment” is where I imagine those transformations emerging from.

Read more

From My Cold, Dead Hands

When I teach first-year writing, I sometimes use the story of Charlton Heston’s post-Columbine NRA speech in Denver as an example of rhetorical kairos, keyed to its time and place. (What actually happened, as always, is more complex than the story.) The lesson I try to teach: whatever one’s views on guns after Columbine, the time and place of that speech affected or reinforced them. There is such a thing, I suggest, as a rhetorical moment.

Recently, we were in another such moment in the furor over iPhone encryption. John Oliver did a good 18-minute job of explaining some of the particulars, and it’s worth your time if you haven’t seen it. The furor over encryption, in a US context, was a fight about the intersection of information and technology and politics, and that intersection is one I’ve lately had increasingly strong thoughts about.

I was dismayed to see James Comey, the Director of the FBI who picked the fight with Apple over encryption, taking what became the government’s public stand. Tim Weiner’s excellent 2012 history of the FBI, Enemies, notes (lest we forget) that Comey is the former Acting United States Attorney General who was in the intensive care hospital room on March 10, 2004, when John Ashcroft refused the request brought by Andrew Card and Alberto Gonzales from President George W. Bush to reauthorize the Stellar Wind government eavesdropping program. Ashcroft said at the time that it didn’t matter, “‘[b]ecause I’m not the attorney general. There is the attorney general.’ And then he pointed at Comey” (Weiner 434). Comey refused as well. Later, in an admirable 2005 address to the NSA, Comey would describe what then-director of the FBI Robert Mueller “had heard [two days later] from Bush and Cheney at the White House”:

“If we don’t do this, people will die.” You can all supply your own this: “If we don’t collect this type of information,” or “If we don’t use this technique,” or “If we don’t extend this authority.” (Weiner 436)

Eleven years later, Comey supplied his own this.

Comey and the FBI were wrong to demand decryption. Code is speech. Forcing someone to speak is a violation of the First Amendment. Osip Mandelstam was commanded to write a poem in praise of Stalin, refused, and died in a cold prison camp near Vladivostok after smuggling out a letter to his wife asking for warm clothes. Apple’s 3/15 response to the FBI rightly invoked the specter of compelled speech when it pointed out that “the state could force an artist to paint a poster, a singer to perform a song, or an author to write a book, so long as its purpose was to achieve some permissible end, whether increasing military enrollment or promoting public health.” So-called “back doors” that would allow a government of eavesdropping and informants like that of Stalin’s regime endanger us all. And the FBI’s expressed position is hostile to liberty and anti-Constitutional.

Consider the similarly Stalinist inverse of compelled speech: Read more

Metadata and the Research Project

In a widely reported quotation, former director of the NSA and CIA General Michael Hayden said in May 2014 that “We kill people based on metadata.” Metadata is increasingly valuable today: it would also seem that it carries not one but multiple forms of value, some of those forms payable in blood.

Information scientist Jeffrey Pomerantz, in his book Metadata (Cambridge, MA: The MIT Press, 2015), argues that until recently, the term “metadata” has typically been used to refer to “[d]ata that was created deliberately; data exhaust, on the contrary, is produced incidentally as a result of doing other things” (126, emphasis mine). That’s an interesting term, “data exhaust,” as perhaps an analogue to the pollution associated with the economic production and consumption of the industrial age. And of course corporations and governments are finding new things to do with this so-called data exhaust: killing people, for example, or charting the social networks of potential insurgents like Paul Revere, as Kieran Healy charmingly demonstrates, or advertising Target products to covertly pregnant teenagers until their parents find out, as the once-popular anecdote noted. It’s got cash value and click-through value, and my Digital Technology and Culture (DTC) students last semester put together some really terrific projects examining the use of cookies and Web advertising and geolocation for ubiquitous monitoring and monetizing.
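Pomerantz’s point about incidental production is easy to demonstrate at the smallest scale: even writing a one-word file emits metadata nobody deliberately created. A minimal sketch, standard library only, with nothing here standing in for any particular tracking system:

```python
import os
import tempfile
import time

# The deliberate act: save five bytes of content.
with tempfile.NamedTemporaryFile(mode="w", suffix=".txt", delete=False) as f:
    f.write("hello")
    path = f.name

# The incidental exhaust: the filesystem has already recorded far more
# than the content itself -- size, timing, location, ownership.
info = os.stat(path)
exhaust = {
    "size_bytes": info.st_size,             # what was written
    "modified": time.ctime(info.st_mtime),  # when it was written
    "inode": info.st_ino,                   # where, in filesystem terms
    "owner_uid": info.st_uid,               # who wrote it
}
os.remove(path)
```

Multiply that by every click, purchase, and page view, and the aggregate Pomerantz describes comes into focus.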

But that idea of useful information as by-product keeps coming back to me: I wonder if someone has ever tried to copyright the spreading informational ripples they leave in their wakes as they travel through their digital lives, since those ripples would seem to be information in fixed form (they’re recorded and tracked, certainly) created by individual human activity, if not intention. There’s a whole apparatus there that we interact with: as Pomerantz notes, “[i]n the modern era of ubiquitous computing, metadata has become infrastructural, like the electrical grid or the highway system. These pieces of modern infrastructure are indispensible but are also only the tip of the iceberg: when you flick on a lightswitch, for example, you are the end user of a large set of technologies and policies. Individually, these technologies and policies may be minor, and may seem trivial. . . but in the aggregate, they have far-reaching cultural and economic implications. And it’s the same with metadata” (3). So the research paper has as its infrastructure things like the credit hour and plagiarism policies and the Library of Congress Classification system, which composition instructors certainly address as at once central to the research project and also incidental, because the thing many of us want to focus on is the agent and the intentional action: the student and the research. Read more

Rationale for a Graduate Seminar in Digital Technology and Culture

Proposed syllabi for graduate seminars are due Monday, and while I’ve got the documents themselves together, I also want to be able to better articulate the exigency for this particular seminar I’ve proposed a syllabus for. There’s no guarantee my proposal will fit the Department’s needs better than any other proposals, of course, so this is partly an exercise in hopeful thinking, but it’s also helping me to figure out why I’m interested in investigating certain topics. The course, “Studies in Technology and Culture” (DTC 561 / ENGL 561), examines “key concepts, tools, and possibilities afforded by engaging with technology through a critical cultural lens,” and is one of the two required courses for the interdisciplinary WSU graduate certificate in Digital Humanities and Culture, a certificate designed to “enhance already existing graduate programs in the humanities and the social sciences, . . . [offering] graduate-level coursework in critical methods, textual analysis, composing practices, and hands-on production for engaging with humanistic studies in, as well as about, digital environments.” I see a couple important points there:

  • first, the certificate’s “critical cultural lens” indicates a reflexive and dialectical (practice- and theory-based) analysis of cultural phenomena as in process and under construction by human and nonhuman agents, and toward the notion of culture as a “noun of process” (from the etymological tracing of Raymond Williams, who points out that the original verb form of “culture” was transitive) representing complex multiple self-developing practices relating to symbolic action; and
  • second, the certificate’s interdisciplinary aspects contribute in rich ways to its digital focus, given its required electives that examine how (AMST 522) the economics of access in the digital divide reinforce inequalities, how (DTC 477) the commodification of information and digital tools can contribute to the stratification of their use, how (DTC 478) interface designs can sometimes reinforce stratification and inequality, how (HIST 527) public history projects incorporating digital technologies can attempt to resist the dominant appropriation or suppression of the heritage of subjugated cultures through practices of responsible representation, and how (HIST 529) ethical digital curation and archiving practices can serve equitable and inclusive ends.

One possible intersection of both points might be understood as the intersection of process and information, which is how I would theme the seminar. Such a theme would represent the familiar cultural studies topoi of race, sexuality, class, gender, ethnicity, age, religion, ability, and others as points of contestation over information. The processes via which information is produced, distributed, owned, used, and re-produced shape and are shaped by those topoi and their intersections with digital technologies. Furthermore, I see tendencies in our emerging studies of digital technology and culture that replicate past trajectories whereby early adopters of technologies (often members of privileged cultural groups) tend to centralize, monopolize, and territorialize research domains—fields that shape processes related to the development of information—especially in an academic context shaped by the eagerness of funding agents to throw money at technology. Given such eagerness, the certificate’s welcome emphasis on “hands-on production” might offer an opportunity to counter that territorializing impulse.

Read more