ViewPoint
Richard Koci Hernandez on the potential of AI
Richard Koci Hernandez is a Senior Multimedia Producer and former Faculty Affiliate at the Possibility Lab. Koci was previously Associate Professor of Multimedia and Bloomberg Chair of Technology at UC Berkeley’s Graduate School of Journalism, and a pioneering innovator in journalism and multimedia. His work for the San Jose Mercury News covering the Latino diaspora and the California youth prison system earned him two Pulitzer Prize nominations, four Emmy nominations, and a National Emmy Award for the multimedia project Uprooted.
Koci has spent his 20-year journalism career reporting in California, and his work has appeared in Time, Wired, The New York Times, a National Geographic book, and international magazines. He has taught multimedia workshops for Stanford University, the National Press Photographers Association, The Southern Short Course, and the National Association for Black and Hispanic Journalists, among many others.
Lightly edited for clarity and length.
Let’s be serious: you can’t be a creator, you can’t be a journalist, you can’t be anybody who’s seeking knowledge and see a bright, shiny thing that everybody’s talking about and not want to play with it.
First and foremost, AI has the potential to be a handy tool in our field, but it needs to be handled with care. We must prioritize human oversight when using AI to create illustrations, brainstorm ideas, or generate shot lists. AI can assist with efficiency, but it should never replace human intuition, creativity, or responsibility—especially when working with nonfiction visual storytelling.
Regarding accuracy and authenticity, AI tools can help verify the truth behind visuals and ensure that images or videos have not been manipulated. This added layer of verification strengthens the credibility of our work. But we also need to be transparent about where AI has played a role in the storytelling process—whether in generating visual elements or analyzing data—and make it clear to our audience when something has been artificially created or enhanced.
AI can also support diversity by helping to detect patterns of underrepresentation in our storytelling. These tools can identify biases in our existing content and suggest diverse story angles, ensuring we include voices and perspectives that might otherwise be overlooked. However, we must be cautious that the AI itself isn't perpetuating existing biases and ensure that human editors regularly review its outputs.
Regarding transparency, AI can improve our engagement with our audience by providing traceable processes. Whether through interactive data visualizations or by explaining how AI contributed to a particular project, we can give the public insight into how visual stories are crafted, fostering trust and accountability.
While AI can help generate efficient shot lists or enhance visual creation, it’s essential to preserve the core values of journalism—truth, fairness, and clarity. We must use AI to enhance, not replace, the very human aspects of storytelling. AI should remain a tool to assist us in delivering stories that are not only compelling but deeply authentic, diverse, and responsibly told.
In short, AI is a powerful asset when used ethically. It can help uphold truth by verifying content, promote diversity by addressing underrepresentation, and increase transparency by making our processes clearer to our audience. But it must be handled with the same scrutiny and integrity we bring to every other aspect of visual journalism.