This semester I’m excited to be teaching a 300-level elective cross-listed in the English and Digital Technology and Culture majors as “Electronic Research and the Rhetoric of Information.” I’m thrilled with the material, and it’s let me do some things in the classroom that I haven’t done before. We’ve been reading selections from James Gleick and elsewhere about Claude Shannon and information theory (which fit together in interesting and provocative ways with Lessig’s thoughts on piracy in Free Culture on the one hand, and with Michael Joyce’s hypertexts and Jorge Luis Borges’s “The Library of Babel” on the other), and grappling with Shannon’s idea that information and meaning are separable. That grappling led me to put together a lesson plan that (1) used some technologies in the classroom that I’d never taught with before and (2) was highly multimodal in its incorporation of graphics, sound, and interactivity.
I started by posing a question, telling students: I’ve got two songs in mind. One is an old song by a band that I grew up listening to and liked a lot, and brings back memories of hanging out in my friend’s attic room. The other is a newer song with a nice beat that’s at once quirky and catchy — maybe an information theorist would argue that those terms imply each other. Which song is better? (Yes, I acknowledged it was a rhetorical question, meant to highlight the subject of the day’s work.)
I then showed the two songs, this time in graphical form,
and asked: Which one is better? Can you tell how they might be different? Do these images carry more or less meaning than my descriptions? Would I be illegally pirating music by sharing the spectrograms of their waveforms at sufficiently high resolution? What would the RIAA say?
That got some discussion going. The next step was to play the songs: I had both an iPhone and an iPad with me, one for playback and one for listening with Soundhound, a song-recognition app similar to Shazam. It works roughly the way Claude Shannon’s framework suggests: it measures patterns (moments of peak frequency and amplitude) against an axis of time or frequency, then compares them to a hash table linked to a sufficiently large database. The props worked, of course, identifying the songs in a few seconds each. (YouTube videos are linked from the above images: yes, I got to play “Gangnam Style” as part of a lesson.)
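For the curious, the peak-measuring-plus-hash-table idea can be sketched in a few dozen lines of Python. This is a toy illustration under heavy simplifying assumptions, not the real Soundhound or Shazam algorithm: it finds the single peak frequency bin in each fixed-size window of audio and hashes that sequence of peaks, where real systems hash robust pairs of spectrogram peaks. All function names and the two “songs” (pure tones) are hypothetical.

```python
# Toy audio fingerprinting: per-window spectral peaks, hashed for lookup.
# A classroom-style sketch, not a production fingerprinter.
import cmath
import hashlib
import math

def dft_magnitudes(window):
    """Naive DFT; returns the magnitude of each frequency bin (first half)."""
    n = len(window)
    return [abs(sum(window[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def fingerprint(samples, window_size=64):
    """Hash the sequence of per-window peak-frequency bins."""
    peaks = []
    for start in range(0, len(samples) - window_size + 1, window_size):
        mags = dft_magnitudes(samples[start:start + window_size])
        peaks.append(mags.index(max(mags)))  # bin with the peak amplitude
    return hashlib.sha256(bytes(peaks)).hexdigest()

def tone(freq_bin, window_size=64, windows=4):
    """A pure sine tone whose energy lands exactly in one DFT bin."""
    n = window_size * windows
    return [math.sin(2 * math.pi * freq_bin * t / window_size)
            for t in range(n)]

# Toy "database": fingerprints of two pure tones standing in for songs.
database = {
    fingerprint(tone(5)): "Song A",
    fingerprint(tone(12)): "Song B",
}

query = tone(12)
print(database.get(fingerprint(query), "unknown"))  # prints "Song B"
```

The key design point is the one Shannon’s framework makes vivid: the lookup never consults anything about what the song means, only a pattern of measured peaks reduced to a key in a table.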
In addition to those two songs, which obviously have meanings beyond their meaning to me or beyond their waveforms, I then pulled out a ringer: Girl Talk’s “Oh No,” which Soundhound could only identify as either Black Sabbath or Ludacris. The point I was trying to demonstrate from Shannon concerned the profound difference between information and meaning: some songs (or texts, broadly construed) carry more meaning than others, and that surplus can interfere with analyzing them as information. I also made the point that, by such a definition, when one does the “electronic research” of the course title, one is not looking for meaning, because one cannot do so a priori: instead, we look for information, which we then convert into meaning.
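Shannon’s separation of information from meaning can be made concrete in a few lines of Python (a sketch for illustration here, not something we ran in class): the entropy of a string depends only on its symbol frequencies, so a meaningful phrase and a scrambled anagram of it carry exactly the same Shannon information.

```python
# Shannon entropy depends on symbol statistics, not on meaning.
import math
import random
from collections import Counter

def entropy_bits_per_char(s):
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

meaningful = "the library of babel"
chars = list(meaningful)
random.shuffle(chars)
scrambled = "".join(chars)  # same letters, no meaning to a reader

# Identical information content: entropy sees frequencies, never meaning.
assert abs(entropy_bits_per_char(meaningful)
           - entropy_bits_per_char(scrambled)) < 1e-9
```

That is the classroom point in miniature: the measure of information is indifferent to whether anyone can read the text, which is exactly why electronic research retrieves information first and leaves the meaning-making to us.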
That was as good a job as I’ve done this semester of stirring the pot and provoking discussion, and it turned into a really good, energizing lesson. Now to figure out how to do more stuff like that.