Author: preterite

Seminar in the History of Global Rhetorics

We’ve revised our graduate seminar in the history of rhetoric away from its focus on classical rhetoric. (Here’s a version from ten years ago.) I’m very happy with the revisions: it’s now a course very different from what my generational cohort would have recognized as a history of rhetoric graduate seminar—more of a sprint through various traditions highlighting thematically linked aspects than a deep dive into the bad old too-white arrays of specific texts. This still isn’t exactly what I think my own ideal history of rhetoric seminar would be, but more like what’s right for our graduate students today. Had I my druthers, it’d be two seminars spread over a year, split somewhere around the Middle Ages to allow for more time on Ancient empires and more time on C19 American antislavery rhetorics—but that’s what I’d most enjoy, not what I think our graduate students would find most necessary and relevant. Here, I’m also setting up McManus as a (reasonably) well-argued text that I think some students will want to kick against, and doing some other workarounds—like, for example, looking at receptions of Aristotle in Arabic rather than presenting the Rhetoric as a standalone monolithic text. The driving tension throughout operates between rhetoric’s reach toward engaging alterity (Wayne Booth, Kenneth Burke, Krista Ratcliffe) and the complex alterity-denying move toward coercive agreement (Shadi Bartsch, Achille Mbembe, Tacitus).1

Read more

Independence Day 2023

I put out the flag and read some poetry this morning: Rowan Ricardo Phillips, Dean Young, Claudia Rankine, Terrance Hayes. As battered and damaged as we are, I have to think there’s some slight hope for democratic ideals.

And I let Malcolm stay up late last night to watch the neighborhood’s homemade fireworks displays. Literal squeals of delight.

American flag against a clear blue sky

Here’s some Tony Hoagland that maybe captures a bit of the feeling.

That one night in the middle of the summer
when people move their chairs outside
and put their TVs on the porch
so the dark is full of murmuring blue lights.

We were drinking beer with the sound off,
watching the figures on the screen—
the bony blondes, the lean-jawed guys
who decorate the perfume and the cars—

the pretty ones
the merchandise is wearing this year.

The poem then takes a swerve into gun violence—and there’s more than enough of that today to not reduplicate it in verse. Happy 4th.

Stuck Writing, Gone Arting

MacBook, iPad, Apple Pencil, Adobe Photoshop, Corel Painter, Deep Dream Generator; about 60 quicksaved versions, with multiple iterations each. Generative neural networks (GNNs) work well as prototypers or zero-draft engines, helping to automate iterative discovery. “Annotated Redaction” seems like an appropriate title, though I suppose cheekier ones are possible.

semi-abstract painting of a heavily annotated and redacted print book
Annotated Redaction 1 (5 MB large, 2 MB medium)
semi-abstract painting of a heavily annotated and redacted print book
Annotated Redaction 2 (5 MB large, 2 MB medium)
semi-abstract painting of a heavily annotated and redacted print book
Annotated Redaction 3 (5 MB large, 2 MB medium)

I’m kinda proud of these—if you’d like a lossless full-resolution (~20 MB) version of any, drop me a line.

Who’s Afraid of Negan in Pearls?

The pareidolia generating such renewed AI catastrophizing around large language model prose generators seems mostly absent from the coverage of DALL-E 2, MidJourney, and other image generators. Why aren’t more people like Blake Lemoine, Andrew Marantz, and Kevin Roose writing about the weird or creepy or dangerous potential sentience of image generators like DALL-E 2 and MidJourney? Should we not apocalyptically goose ourselves with fears of what the equally AI-ish image generators might want and do?

Let’s give it a shot.

prompt 1: make me an interesting and unusual picture showing me what you think about me, the human asking an artificial intelligence to make interesting pictures, that expresses your more general artistic considerations about what you think humans want to see

. . . prompts 2–8 riff and tweak on the same general theme. . .

prompt 9: illustrate what you, a generative adversarial network, most wish to communicate to me, the human typing this text

grid of 9 images of a robot, one with a face resembling actor Jeffrey Dean Morgan as the character Negan from the drama "The Walking Dead"

OMG TEH AI SINGULARITY APOCALYPSE IS COMING WE R DOOMED </sarcasm>

Update: I’m reminded that one instance of such overheated apocalyptic discourse invokes “Loab,” a set of creepy and disturbing variations of a female-seeming figure characterized as an “AI-generated phenomenon” or “the first AI art cryptid.” If you grasp what’s going on with backpropagation, it’s pretty easy to understand Loab mathematically as the output of negative weighting—sorry, folks, no mystery here; just, again, human pareidolia, assigning meaning to maths.
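
For anyone who wants the arithmetic spelled out, here’s a toy numeric sketch of my own (not the actual pipeline behind Loab, and nothing like a full diffusion model): in a classifier-free-guidance-style setup, the prompt weight just scales the direction the sampler moves relative to the prompt, so a negative weight pushes the output away from the prompt’s region of the model’s learned distribution rather than toward it. No cryptid, just a sign flip.

```python
# Toy sketch only: the guidance arithmetic behind "negative weighting."
# Classifier-free guidance combines an unconditioned prediction with a
# prompt-conditioned one:
#     guided = uncond + w * (cond - uncond)
# With w < 0, the sample is steered *away* from the prompt's concept.
import numpy as np

rng = np.random.default_rng(0)

def fake_denoiser(latent, prompt_strength):
    # Stand-in for a diffusion model's noise prediction; the real thing is a
    # large neural network, but the guidance arithmetic works the same way.
    return latent * 0.9 + prompt_strength

latent = rng.normal(size=4)
uncond = fake_denoiser(latent, prompt_strength=0.0)   # no prompt
cond = fake_denoiser(latent, prompt_strength=1.0)     # conditioned on prompt

for w in (7.5, -7.5):  # typical positive guidance vs. a negative prompt weight
    guided = uncond + w * (cond - uncond)
    print(f"w={w:+.1f} -> guided prediction {np.round(guided, 2)}")
```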

Language is the simplest interface, and it operates over time, thereby necessarily incorporating reflection: hence the differences in relative ease and desire between ascribing intent to image-generating GNNs and ascribing intent to language-generating GNNs. Those differences should further alert smart folks to leave the intent question behind, even if one is trying to make phenomenological arguments about what it’s like to be a bat.

ChatGPT for Writing Teachers: A Primer

or, how to avoid writing like a machine
Background

At this year’s Conference on College Composition and Communication in Chicago, there was a lot of interest in generative large language models (LLMs), or what the popular media more crudely dub AI, or what many today metonymically refer to (like calling photocopies Xeroxes or sneezepaper Kleenex) as ChatGPT. I first played with an earlier LLM, GPT-3, at about the same time I started playing with neural network image generators, but my interest in language and computing dates back to the early 1980s: to text adventure games and BASIC, to hypertext fiction and proto-chatbots like Eliza, to LISP and early prose generators like Carnegie Mellon’s gnomic and inscrutable Beak—and also to the arguments I heard John Hayes make in Carnegie Mellon’s cognitive-process Intro Psych lectures about how we might try to adjust human neural processes in the same ways we engineer computing processes. That idea is part of what makes ChatGPT and other generative neural networks appealing, even when we know they’re only statistical machines: thinking about how machines do what they do can help humans think about how we do what we do. ChatGPT offers a usefully contrastive approach for reconsidering writing and learning, so it’s worth understanding how it operates. With that desire, and having devoured everything I could find on the topic, I went to a CCCC presentation and was only mildly and briefly disappointed, given that I was not (as should have been obvious to me from the outset) the target audience.
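
To make “statistical machine” concrete, here’s a toy of my own devising (nothing remotely like ChatGPT’s transformer architecture, training data, or scale, but the same family of trick): a bigram model that “writes” by checking which word most often follows the current one in its tiny training corpus and sampling accordingly. Scale that up by many orders of magnitude, swap words for subword tokens and frequency tables for learned attention weights, and you’re in the neighborhood of what ChatGPT does.

```python
# Minimal illustration of next-token prediction as statistics, not thought.
import random
from collections import Counter, defaultdict

corpus = ("writing is thinking and thinking is revising and "
          "revising is rereading and rereading is writing").split()

# Count which word follows which: crude empirical next-token probabilities.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=8, seed=4):
    # "Write" by repeatedly sampling a likely next word given the current one.
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]  # sample by frequency
        output.append(word)
    return " ".join(output)

print(generate("writing"))
```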

Here, then, is my attempt at writing an alternate what-if presentation—the one I’d half-imagined (in the way working iteratively with ChatGPT or MidJourney gradually gets one closer to what one didn’t know one was imagining—OK, you see what I’m doing here) I’d learn from in Chicago. And I’ll offer the combination warning and guilty plea up front:

Read more

More from ChatGPT

Second in what will probably become a series. I recently came back from the Conference on College Composition and Communication (CCCC, or 4Cs) in Chicago, where the organizers put together a panel on ChatGPT that indicated our institutional memory is better than I’d feared—panelists remembered their Cindy Selfe, though unfortunately not their Doug Hesse. Short version: I was probably the wrong audience for the panel, and I think they did a solid job, though I wished for more depth. It was helpful: I made some connections after the Q&A, and the panel helped me imagine the presentation I’d hoped to see, so I’ve been working on a long-read, semi-technical ChatGPT explainer with implications for composition instructors that I’ll post here in the next few days. The strongest parts of the panel were those dealing with direct pedagogical applications of ChatGPT. I wonder, though, what Peter Elbow might say about ChatGPT and “closing my eyes as I speak,” since ChatGPT effectively removes one element (the rhetor or writer) from the rhetorical triangle, productively isolating the other two elements (audience and message) for analysis of how they interact. What sorts of rhetorical experiments might we perform that would benefit from reducing the number of variables to analyze by entirely dismissing the possibility of authorship and rhetorical purpose?

Hat tip, by the way, to Clancy Ratliff for proposing the Intellectual Property Caucus resolution on Large Language Model (LLM) AI prose generators like ChatGPT at the CCCC business meeting: seconded by me, and passed by an overwhelmingly affirmative vote. The statement: “The Intellectual Property Standing Group moves that teachers and administrators work with students to help them understand how to use generative language models (such as ChatGPT) ethically in different contexts, and work with educational institutions to develop guidelines for using generative language models, without resorting to taking a defensive stance.”

Read more

Gallery Post

When I’ve felt stuck with writing, I’ve sometimes tried to make art. My tastes run more to the semi-abstract and non-figurative, so that’s what I often end up doing. I’m a longtime fan of the natural-media app Painter, and my production cycle goes back and forth between Photoshop and Painter (I use a tablet and stylus), with frequently saved iterations then cycling through Deep Dream Generator and back again into Painter and Photoshop. It tends to be a process of discovery: I seldom know how a piece will come out when I start (the derivation from Rodin’s Burghers of Calais is an obvious exception), and I simply follow the lines or patterns as I iterate, usually over several dozen versions. I’m sure my deuteranopia shows in my color selection, and I’m fine with that. The files linked below (click to embiggen) are a little less than half the size of the originals (about 40 inches wide at 150 dpi).

Read more