Author: preterite

Flowers

Part of what excites me about generative AI, both LLMs and GANs, is the back-and-forth between language and representation. I’ve lately been playing more with MidJourney, Deep Dream Generator, and a local install of Stable Diffusion, using them to extend my art-hobby tinkering with Adobe Photoshop and Corel Painter. I enjoy abstract and non-figurative art, and I find that thinking through the links among language, representation, and abstraction—how to represent abstraction in words—scratches a pleasurable itch. And I like making pretty stuff. The images below started from a composited photo I’d made from two visits to Patrick Dougherty’s stickworks installations, in Northampton, MA in the 2000s and in Washington, DC in the 2010s.

twisted bundles of vines and sticks
1600×1200 version

I’d originally used the above image for texture, to apply its look of bundled and twisted vines and twigs to other visual elements. The fun surprises came from running it through some of the technologies above and asking the machine to describe the image in language, editing the language, and then asking the machine to visualize what was represented in language. Thinking of this cycle as taking place over time opens up possibilities for metaphor and irony: I’ll sometimes apply a description from later in the process to an earlier image, or apply an earlier description to an image rendered later—so there’s a purposeful mismatch between linguistic and visual representation.
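For anyone who wants to try that describe-edit-revisualize cycle outside the commercial tools, here’s a minimal sketch assuming freely available models: BLIP for the describe-in-language step and Stable Diffusion’s img2img pipeline for the visualize-the-language step. The model names, file names, and parameters are illustrative assumptions, not my actual workflow; the hand edit in the middle is where the interesting mismatches come from.

```python
# Minimal sketch of the image -> language -> edited language -> image cycle.
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Ask the machine to describe the image in language.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)
source = Image.open("stickworks-composite.jpg").convert("RGB").resize((768, 576))
inputs = processor(source, return_tensors="pt").to(device)
caption = processor.decode(captioner.generate(**inputs)[0], skip_special_tokens=True)

# 2. Edit the language by hand (the human turn in the cycle).
edited_caption = caption + ", reimagined as an abstract floral bouquet"

# 3. Ask the machine to visualize what the edited language represents,
#    keeping some of the original image's structure (strength < 1).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)
result = pipe(prompt=edited_caption, image=source, strength=0.6).images[0]
result.save("revisualized.png")
```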

artistic image of a floral bouquet, oriented left
3600×2700 3 MB version

Rendering those mismatches as a series of individual layers, tracing them in Corel Painter’s various representations of natural media, and then compositing them in Photoshop got me to the above image. After producing that image, I took each of the individual layers, flipped them horizontally, re-ran them through Deep Dream Generator and Stable Diffusion, re-rendered them in Painter, and composited them again in Photoshop.

artistic image of a floral bouquet, oriented right
3600×2700 3 MB version

War and Higher Ed

The Washington Post has an excellent story by Mary Ilyushina today about the Ukraine war’s ideological effects on Russian universities. I doubt the many opinioneers attacking recent American campus activism will acknowledge any parallels. This snippet struck me as particularly relevant:

Programs specializing in the liberal arts and sciences are primary targets because they are viewed as breeding grounds for dissent. Major universities have cut the hours spent studying Western governments, human rights and international law, and even the English language. “We were destroyed,” said Denis Skopin, a philosophy professor at Smolny College who was fired for criticizing the war. “Because the last thing people who run universities need are unreliable actors who do the ‘wrong’ thing, think in a different way, and teach their students to do the same.”

As a military veteran who sometimes comments on the relative effectiveness of anti-imperial rhetorical forms, I’ve had my politics occasionally and unfortunately mistaken for those of the commentators above. Yet it’s quite clear to many of us working in higher ed that Ilyushina’s description in the Post, stripped of its Ukraine war context, applies equally well to what’s happening in American higher education, if through different processes; as Clausewitz famously suggested, “War is a mere continuation of policy by other means.”

In the advanced rhetoric courses I teach, students with left-leaning politics often most want to connect with stories of principled resistance to imperial power: Cicero’s Philippics against Mark Antony are more enjoyable than the nuanced rhetorical compromises of the Pro Roscio Amerino. Such compromises, however, can offer more relevant insights into the actual workings of imperial power. But nuance tends to be the first casualty in debates over higher education: consider, for example, the well-deserved mockery the NYPD received for holding up textbooks on terrorism as evidence of “outside agitators” at Columbia, when Columbia publishes one of the most respected series of scholarly books on terrorism. One is reminded of Raymond Williams being stopped by the police and questioned about the dangerously subversive copy of Matthew Arnold’s 1869 Culture and Anarchy he was carrying.

The Campus Martius

Charles Homans recently published an excellent New York Times Magazine article on campaign rhetoric. I’m thinking of sharing it with my History of Global Rhetorics seminar in the fall, to connect it with Tacitus’s Dialogus de Oratoribus and the questions that dialogue raises about writing under imperial power. This passage is what caught my eye:

How do you think about a politician who openly veers into fascist tropes but, in four years in office, did not generally govern like one? On one level, the answer hinged on how the people — his people — heard what he said. His long pattern of self-contradiction and denial, of jokes that might or might not be jokes, meant that “he can talk in different layers to different people,” [New School Professor of History and author of The Wannabe Fascists Federico] Finchelstein said. “There are people who take what he says literally. There are people who don’t take it literally. And people who ignore it as rhetoric. He’s talking to all these people.” The question was what they heard.

Those “different layers” are what University of Chicago Helen A. Regenstein Professor Shadi Bartsch characterizes as Roman imperial “doublespeak,” and also the competing “esoteric” and “exoteric” readings that far-right philosopher Leo Strauss ascribed to the rhetoric of Plato and Maimonides. With the alarming turns that American public political rhetoric has recently taken, I want to look again at one of the philosophers idolized by the neoconservatives who laid (razed?) the foundation for our contemporary political situation.

So reading the other team’s playbook means assigning excerpts from On Tyranny and Persecution and the Art of Writing. I was lucky to talk some with Nicholas Xenos when I was a graduate student at UMass in the early 2000s, and his argument that “Strauss was somebody who wanted to go back to a previous, pre-liberal, pre-bourgeois era of blood and guts, of imperial domination, of authoritarian rule, of pure fascism” hit hard, coming as it did in a political moment much milder than today’s.

Seminar in the History of Global Rhetorics

We’ve revised our graduate seminar in the history of rhetoric away from its focus on classical rhetoric. (Here’s a version from ten years ago.) I’m very happy with the revisions: it’s now a course very different from what my generational cohort would have recognized as a history of rhetoric graduate seminar. The driving tension throughout runs between rhetoric’s reach toward engaging alterity (Wayne Booth, Kenneth Burke, Krista Ratcliffe) and the complex alterity-denying move toward coercive agreement (Shadi Bartsch, Achille Mbembe, Tacitus).1

Read more

Independence Day 2023

I put out the flag and read some poetry this morning: Rowan Ricardo Phillips, Dean Young, Claudia Rankine, Terrance Hayes. As battered and damaged as we are, I have to think there’s some slight hope for democratic ideals.

And I let Malcolm stay up late last night to watch the neighborhood’s homemade fireworks displays. Literal squeals of delight.

American flag against a clear blue sky

Here’s some Tony Hoagland that maybe captures a bit of the feeling.

That one night in the middle of the summer
when people move their chairs outside
and put their TVs on the porch
so the dark is full of murmuring blue lights.

We were drinking beer with the sound off,
watching the figures on the screen—
the bony blondes, the lean-jawed guys
who decorate the perfume and the cars—

the pretty ones
the merchandise is wearing this year.

The poem then takes a swerve into gun violence—and there’s more than enough of that today without reduplicating it in verse. Happy 4th.

Stuck Writing, Gone Arting

MacBook, iPad, Apple Pencil, Adobe Photoshop, Corel Painter, Deep Dream Generator; about 60 quicksaved versions, with multiple iterations each: Generative Neural Networks (GNNs) as prototypers or zero-draft engines help efficiently automate iterative discovery. “Annotated Redaction” seems like an appropriate title, though I suppose cheekier ones are possible.

semi-abstract painting of a heavily annotated and redacted print book
Annotated Redaction 1 (5 MB large, 2 MB medium)
semi-abstract painting of a heavily annotated and redacted print book
Annotated Redaction 2 (5 MB large, 2 MB medium)
semi-abstract painting of a heavily annotated and redacted print book
Annotated Redaction 3 (5 MB large, 2 MB medium)

I’m kinda proud of these—if you’d like a lossless full-resolution (~20 MB) version of any, drop me a line.

Who’s Afraid of Negan in Pearls?

The pareidolia generating such renewed AI catastrophizing around large language model prose generators seems mostly absent from the coverage of DALL-E 2, MidJourney, and other image generators. Why aren’t more people like Blake Lemoine, Andrew Marantz, and Kevin Roose writing about the weird or creepy or dangerous potential sentience of those image generators? Should we not apocalyptically goose ourselves with fears of what the equally AI-ish image generators might want and do?

Let’s give it a shot.

prompt 1: make me an interesting and unusual picture showing me what you think about me, the human asking an artificial intelligence to make interesting pictures, that expresses your more general artistic considerations about what you think humans want to see

. . . prompts 2–8 riff on and tweak the same general theme . . .

prompt 9: illustrate what you, a generative adversarial network, most wish to communicate to me, the human typing this text

grid of 9 images of a robot, one with a face resembling actor Jeffrey Dean Morgan as the character Negan from the drama "The Walking Dead"

OMG TEH AI SINGULARITY APOCALYPSE IS COMING WE R DOOMED </sarcasm>

Update: I’m reminded that one instance of such overheated apocalyptic discourse invokes “Loab,” a set of creepy and disturbing variations on a female-seeming figure characterized as an “AI-generated phenomenon” or “the first AI art cryptid.” If you grasp how these models weight prompts during generation, it’s pretty easy to understand Loab mathematically as the output of negative weighting—sorry, folks, no mystery here; just, again, human pareidolia, assigning meaning to maths.
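For the curious, here’s a minimal sketch of what negatively weighted prompting looks like in practice, using the Hugging Face diffusers library rather than whatever tool Loab’s discoverer used; the model name, prompts, and settings are illustrative assumptions, not a recipe for the original images.

```python
# Minimal sketch of negative prompt weighting with Stable Diffusion.
# In diffusers, classifier-free guidance computes each denoising step as
#   noise = noise(negative) + scale * (noise(prompt) - noise(negative)),
# so an empty prompt paired with a concrete negative prompt pushes the
# sampler away from that concept, into low-probability regions of the
# model's learned distribution. That's the whole "cryptid."
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="",                                            # nothing to steer toward
    negative_prompt="a cheerful watercolor of flowers",   # steer away from this
    guidance_scale=9.0,
    num_inference_steps=50,
).images[0]
image.save("negatively-weighted.png")
```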

Language is the simplest interface, and it operates over time, thereby necessarily incorporating reflection: hence the differences in relative ease and desire between ascribing intent to image-generating GNNs and ascribing intent to language-generating GNNs. Those differences should further alert smart folks to leave the intent question behind, even if one is trying to make phenomenological arguments about what it’s like to be a bat.

ChatGPT for Writing Teachers: A Primer

or, how to avoid writing like a machine
Background

At this year’s Conference on College Composition and Communication in Chicago, there was a lot of interest in generative large language models (LLMs), or what the popular media more crudely dub AI, or what many today metonymically refer to (like calling photocopies Xeroxes or sneezepaper Kleenex) as ChatGPT. I first played with an earlier OpenAI model, GPT-3, at about the same time I started playing with neural network image generators, but my interest in language and computing dates from the early 1980s: from text adventure games and BASIC, to hypertext fiction and proto-chatbots like Eliza, to LISP and early prose generators like Carnegie Mellon’s gnomic and inscrutable Beak—and also to the arguments I heard John Hayes express in Carnegie Mellon’s cognitive-process Intro Psych lectures about how we might try to adjust human neural processes in the same ways we engineer computing processes.

That idea is part of what makes ChatGPT and other generative neural networks appealing, even when we know they’re only statistical machines: thinking about how machines do what they do can help humans think about how we do what we do. ChatGPT offers a usefully contrastive approach for reconsidering writing and learning, so it’s worth understanding how it operates. With that desire, and having read (devoured, lectitaveram) everything I could find on the topic, I went to a CCCC presentation and was only mildly and briefly disappointed, given that I was not (as should have been obvious to me from the outset) the target audience.
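To make “statistical machine” concrete, here’s a minimal sketch using the freely downloadable GPT-2, a much smaller ancestor of the models behind ChatGPT, standing in here as an assumption rather than anything OpenAI ships: everything the model “knows” is a probability distribution over the next token, given the tokens so far.

```python
# Minimal sketch: a language model assigns probabilities to possible next tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The purpose of a first-year writing course is to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits[0, -1]   # scores for every possible next token
probs = torch.softmax(logits, dim=-1)

# The five most probable continuations, with their probabilities:
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r:>12}  {p.item():.3f}")
```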

Here, then, is my attempt at writing an alternate what-if presentation—the one I’d half-imagined I’d learn from in Chicago (in the way that working iteratively with ChatGPT or MidJourney gradually gets one closer to what one didn’t know one was imagining—OK, you see what I’m doing here). And I’ll offer the combination warning and guilty plea up front:

Read more

More from ChatGPT

Second in what will probably become a series. I recently came back from the Conference on College Composition and Communication (CCCC, or 4Cs) in Chicago, where the organizers put together a panel on ChatGPT that indicated our institutional memory is better than I’d feared—panelists remembered their Cindy Selfe, though unfortunately not their Doug Hesse. Short version: I was probably the wrong audience for the panel, and I think they did a solid job, though I would have liked more depth. It was helpful to me in that I made some connections after the Q&A, and the panel helped me imagine the presentation I’d hoped to see, so I’ve been working on a long-read, semi-technical ChatGPT explainer with implications for composition instructors that I’ll post here in the next few days.

The strongest parts of the panel were those dealing with direct pedagogical applications of ChatGPT. I wonder, though, what Peter Elbow might say about ChatGPT and “closing my eyes as I speak,” since ChatGPT effectively removes one element (the rhetor or writer) from the rhetorical triangle, productively isolating the other two elements (audience and message) for analysis of how they interact. What sorts of rhetorical experiments might we perform that would benefit from reducing the number of variables to analyze by entirely dismissing the possibility of authorship and rhetorical purpose?

Hat tip, by the way, to Clancy Ratliff for proposing the Intellectual Property Caucus resolution on Large Language Model (LLM) AI prose generators like ChatGPT at the CCCC business meeting: seconded by me, and passed by an overwhelmingly affirmative vote. The statement: “The Intellectual Property Standing Group moves that teachers and administrators work with students to help them understand how to use generative language models (such as ChatGPT) ethically in different contexts, and work with educational institutions to develop guidelines for using generative language models, without resorting to taking a defensive stance.”

Read more