We’ve revised our graduate seminar in the history of rhetoric away from its focus on classical rhetoric. (Here’s a version from ten years ago.) I’m very happy with the revisions: it’s now a course very different from what my generational cohort would have recognized as a history of rhetoric graduate seminar. The driving tension throughout operates between rhetoric’s reach toward engaging alterity (Wayne Booth, Kenneth Burke, Krista Ratcliffe) and the complex alterity-denying move toward coercive agreement (Shadi Bartsch, Achille Mbembe, Tacitus).1
I put out the flag and read some poetry this morning: Rowan Ricardo Phillips, Dean Young, Claudia Rankine, Terrance Hayes. As battered and damaged as we are, I have to think there’s some slight hope for democratic ideals.
And I let Malcolm stay up late last night to watch the neighborhood’s homemade fireworks displays. Literal squeals of delight.
Here’s some Tony Hoagland that maybe captures a bit of the feeling.
That one night in the middle of the summer when people move their chairs outside and put their TVs on the porch so the dark is full of murmuring blue lights.
We were drinking beer with the sound off, watching the figures on the screen— the bony blondes, the lean-jawed guys who decorate the perfume and the cars—
the pretty ones the merchandise is wearing this year.
The poem then takes a swerve into gun violence—and there’s more than enough of that today without reduplicating it in verse. Happy 4th.
MacBook, iPad, Apple Pencil, Adobe Photoshop, Corel Painter, Deep Dream Generator; about 60 quicksaved versions, with multiple iterations each: Generative Neural Networks (GNNs) as prototypers or zero-draft engines help efficiently automate iterative discovery. “Annotated Redaction” seems like an appropriate title, though I suppose more cheeky ones are possible.
I’m kinda proud of these—if you’d like a lossless full-resolution (~20 MB) version of any, drop me a line.
The pareidolia generating such renewed AI catastrophizing around large language model prose generators seems mostly absent from the coverage of DALL-E 2, MidJourney, and other image generators. Why aren’t more people like Blake Lemoine, Andrew Marantz, and Kevin Roose writing about the weird or creepy or dangerous potential sentience of image generators like DALL-E 2 and MidJourney? Should we not apocalyptically goose ourselves with fears of what the equally AI-ish image generators might want and do?
Let’s give it a shot.
prompt 1: make me an interesting and unusual picture showing me what you think about me, the human asking an artificial intelligence to make interesting pictures, that expresses your more general artistic considerations about what you think humans want to see
. . . prompts 2–8 riff and tweak on the same general theme. . .
prompt 9: illustrate what you, a generative adversarial network, most wish to communicate to me, the human typing this text
OMG TEH AI SINGULARITY APOCALYPSE IS COMING WE R DOOMED </sarcasm>
Language is the simplest interface, and it operates over time, thereby necessarily incorporating reflection: hence the differences in relative ease and desire between ascribing intent to image-generating GNNs and ascribing intent to language-generating GNNs. Those differences should further alert smart folks to leave the intent question behind, even if one is trying to make phenomenological arguments about what it’s like to be a bat.
At this year’s Conference on College Composition and Communication in Chicago, there was a lot of interest in generative large language models (LLMs), or what the popular media more crudely dub AI, or what many today metonymically refer to (like calling photocopies Xeroxes or sneezepaper Kleenex) as ChatGPT. I first played with an earlier version of the LLM, GPT-3, at about the same time I started playing with neural network image generators, but my interest in language and computing dates from the early 1980s and text adventure games and BASIC, to hypertext fiction and proto-chatbots like Eliza, and to LISP and early prose generators like Carnegie Mellon’s gnomic and inscrutable Beak—and also to the arguments I heard John Hayes express in Carnegie Mellon’s cognitive process Intro Psych lectures about how we might try to adjust human neural processes in the same ways we engineer computing processes. That idea is part of what makes ChatGPT and other generative neural networks appealing, even when we know they’re only statistical machines: thinking about how machines do what they do can help humans think about how we do what we do. ChatGPT offers a usefully contrastive approach for reconsidering writing and learning, so it’s worth understanding how it operates. With that desire, and having read/devoured/lectitaveram everything I could find on the topic, I went to a CCCC presentation and was only mildly and briefly disappointed, given that I was not (as should have been obvious to me from the outset) the target audience.
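To make “statistical machine” a little more concrete, here’s a deliberately tiny sketch of the underlying idea—my own toy illustration, nothing like the scale or architecture of an actual LLM: count which character tends to follow which in a corpus, then generate text by sampling successors in proportion to those counts. Real models predict over tokens with billions of learned parameters, but the generate-by-conditional-probability loop is the same family of move.

```python
# Toy character-level bigram model: a hypothetical, minimal sketch of
# generation as statistics, NOT how GPT-class models are implemented.
import random
from collections import defaultdict

def train_bigrams(text):
    """Count character-to-next-character transitions in the text."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length=20, seed=0):
    """Generate text by repeatedly sampling a successor character,
    weighted by how often it followed the current one in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this character never had a successor
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the art of rhetoric is the art of persuasion"
model = train_bigrams(corpus)
print(generate(model, "t"))
```

Even this crude version shows the contrastive point: the output can look word-ish without any rhetor behind it—pattern without purpose.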
Here, then, is my attempt at writing an alternate what-if presentation—the one I’d half-imagined (in the way working iteratively with ChatGPT or MidJourney gradually gets one closer to what one didn’t know one was imagining—OK, you see what I’m doing here) I’d learn from in Chicago. And I’ll offer the combination warning and guilty plea up front:
Second in what will probably become a series. I recently came back from the Conference on College Composition and Communication (CCCC, or 4Cs) in Chicago, where the organizers put together a panel on ChatGPT that indicated that our institutional memory is better than I’d feared—panelists remembered their Cindy Selfe, though unfortunately not their Doug Hesse. Short version: I was probably the wrong audience for the panel, and I think they did a solid job, though I would have wished for more depth. It was helpful to me in that I made some connections after the Q&A, and the panel also helped me imagine the panel presentation I’d hoped to see, so I’ve been working on a long-read semi-technical ChatGPT explainer with implications for composition instructors that I’ll post here in the next few days. The strongest parts of the panel were those dealing with direct pedagogical applications of ChatGPT. I wonder, though, what Peter Elbow might say about ChatGPT and “closing my eyes as I speak,” since ChatGPT effectively removes one element (the rhetor or writer) from the rhetorical triangle, productively isolating the other two elements (audience and message) for analysis of how they interact. What sorts of rhetorical experiments might we perform that would benefit from reducing the number of variables to analyze by entirely dismissing the possibility of authorship and rhetorical purpose?
Hat tip, by the way, to Clancy Ratliff for proposing the Intellectual Property Caucus resolution on Large Language Model (LLM) AI prose generators like ChatGPT at the CCCC business meeting: seconded by me, and passed by overwhelmingly affirmative vote. The statement: The Intellectual Property Standing Group moves that teachers and administrators work with students to help them understand how to use generative language models (such as ChatGPT) ethically in different contexts, and work with educational institutions to develop guidelines for using generative language models, without resorting to taking a defensive stance.
When I’ve felt stuck with writing, I’ve sometimes tried to make art. My tastes run more to the semi-abstract and non-figurative, so that’s what I often end up doing. I’m a longtime fan of the natural media app Painter, and my production cycle goes back and forth between Photoshop and Painter (I use a tablet and stylus), with frequently saved iterations then cycling through Deep Dream Generator and back again into Painter and Photoshop. It tends to be a process of discovery: I seldom know what it’s going to come out as when I start (the derivation from Rodin’s Burghers of Calais is an obvious exception), and simply follow the lines or patterns as I iterate, usually over several dozen versions. I’m sure my deuteranopia shows in my color selection, and I’m fine with that. The files linked below (click to embiggen) are a little less than half the size of the originals (about 40 inches wide at 150 dpi).
I’m Mike Edwards. I write here about rhetoric, composition, economics, and technology. I like cats.
Contact
I work at Washington State University, where you can find my English Department page and email my mike.edwards address. For other communications, please see my contact page.