By Dave Bostwick
Vice Chair and Teaching Associate Professor
For those who prefer to gain academic enlightenment by listening to podcast banter instead of reading research text, Google’s NotebookLM is now publicly available as a study tool and more.
Users upload source material, such as book chapters or research articles. NotebookLM then generates study guides with practice questions or briefings with a summary and key themes. The WOW factor, however, comes with NotebookLM’s option to create conversational podcasts that offer listeners an informal overview of the source text.
The audio downloads as a WAV file, so users must handle their own MP3 conversion.
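That conversion step is easy to script. Here is a minimal sketch in Python, assuming the free `ffmpeg` command-line tool is installed on the system; the filename is hypothetical:

```python
import subprocess
from pathlib import Path

def mp3_command(wav_path: str, bitrate: str = "192k") -> list[str]:
    """Build the ffmpeg command that converts a WAV file to MP3."""
    mp3_path = str(Path(wav_path).with_suffix(".mp3"))
    # -y overwrites an existing output file; -b:a sets the audio bitrate
    return ["ffmpeg", "-y", "-i", wav_path, "-b:a", bitrate, mp3_path]

def convert(wav_path: str) -> None:
    # Requires ffmpeg on the system PATH; raises if the conversion fails
    subprocess.run(mp3_command(wav_path), check=True)

# Example with a hypothetical downloaded file:
# convert("notebooklm_podcast.wav")
```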
Podcast Example 1
Because it seemed ethically awkward to create an AI-generated podcast about someone else’s work, I tested the tool on one of my own chapter publications. Here’s a podcast summary for Chapter 7 of UNESCO’s Teaching Journalism Online: A Handbook for Journalism Educators. My chapter is titled “Fostering Community and Engagement in Online Classrooms.”
Podcast Example 2
For a segue to ethical considerations, here’s an AI-generated podcast summary of my Open Educational Resource chapter titled “AI-Generated Content Meets the SPJ Code of Ethics.”
In the second podcast, the irony is overwhelming in a few places. For example, one host discusses the ethical challenges when “pictures, video and audio getting thrown into the mix” of AI-generated content, and the other host responds, “That’s kind of freaky.”
Also, at one point, a host reiterates one of my key points: “It’s about that human connection.” Except it’s an AI-generated voice emphasizing the importance of human connection.
Anyone who posts one of these podcasts for public consumption would need to be transparent about how it was created. The SPJ Code of Ethics says that media professionals should “explain ethical choices and processes to audiences.”
As with much AI-generated content, there are yet-to-be-answered legal questions. For example, what if someone used NotebookLM to create and share a podcast that included an egregious error? Is the user or the platform responsible for potential damages?
I played these two podcasts for several faculty colleagues in the School of Journalism and Strategic Media. Most were impressed that the podcasts sounded close to real. Chair Bret Schulte said the process seemed “unsettling,” especially given the school’s commitment to improving its podcast studio space and equipment.
On the other hand, Professor of Practice Ninette Sosa, a former correspondent and producer with CNN, immediately complained about the mechanical interaction between the two hosts. She said the conversation didn’t sound natural.
When I initially listened to my first downloaded podcast, my jaw dropped. However, I eventually noticed a few glitches and gaps. Also, one narrator started so many comments with “exactly” that it became annoying (which then made me ponder the psychological weirdness of being annoyed by an inanimate voice).
NotebookLM is currently labeled as an experiment, and it is branded as “a tool that helps users organize and analyze research materials.” College students could benefit from the platform’s range of options.
For future AI-generated podcast platforms, one can envision content creators paying for additional voice options, possibly including their own sampled voices. There could even be an option to download a podcast script, edit the rough spots, and then re-render the audio file.