Using AI to Discover New Creative Frontiers with Holly Willis

  • Writer: Malia Lee
  • Apr 4, 2025
  • 5 min read

Holly Willis is the Chair of the Media Arts + Practice Division in USC’s School of Cinematic Arts and Co-Director of AI for Media & Storytelling (AIMS), a practice-based studio under the auspices of USC's new Center for Generative AI and Society. She is the author of Fast Forward: The Future(s) of the Cinematic Arts and New Digital Cinema: Reinventing the Moving Image. At USC, she teaches classes on AI for Media & Storytelling and the intersection of AI and creativity.


Q: You teach several classes about the intersection of generative AI and creativity. How do the students in those classes perceive AI in the creative space? Does their perception evolve while taking your class?

I taught a class called AI and Creativity with about 25 students from all of the different divisions within the School of Cinematic Arts, and there was a great deal of skepticism and worry. People weren't even telling their friends that they were in the class because there's such a negative connotation to AI. One of the first questions on the first day of class was, “Why should I pay for this film school education if now people, without coming here, can generate these images super easily and not have to pay anything but a monthly subscription to AI software?” But by the end of the class, the students realized you actually can't make anything good unless you really understand what storytelling is. There are different genres, ways of really capturing attention, and techniques for telling a great story that they felt they were learning here. So I thought that was an interesting insight.


Q: What is a recurring topic that students bring up in your classes?

We're teaching a class right now called AI and Storytelling, and it's a cross between the School of Cinematic Arts and the Annenberg School for Communication. About half of the students are from the PR program in Annenberg, and the other half are from Cinema. We've been talking a lot about language models and how they work. I think the issues that come up most have to do with what truth is. Should you be able to use generative AI to go along with a news story? Or is that not okay, since the news needs to fully represent reality? How do we maintain an understanding of what's real and what's truthful at a moment when we can create anything? I think that's a really important topic to students.


Q: What is something you emphasize in your classes?

When people are crafting something, they long for the kind of friction that comes with the challenge of making something. And I think that isn’t present in image and video generation because you can just churn out these images without that much care. With that said, I know that a lot of our students are taking ceramics. In ceramics, you have to really figure out how to, with your hands, build something, and then build a sense of skill over time. That's important to people, and I think especially for Gen Z, there's this craving for materiality, for touch. So bringing it back to AI, we explore ways in which we can create workflows or forms of making that allow that sense of craft and dedication so that you feel like you're actually making something and not just churning out junk.


Q: What are some innovative ways you’ve seen students use GenAI to elevate their project or an inspiring use case that really caught your eye?

One of the assignments in that first class was to create a self-portrait using image-generation tools. One of the students made a project that started with a photograph, and then it went through these different iterations. And as you know, when working with these tools, things kind of morph and change. The AI generated a video from the photograph that shifted from male to female over the course of the iteration, and the student said, yeah, this actually says something about how I feel sometimes that I didn't really recognize. I thought it was an amazing moment where a tool, in its uncanniness, actually spoke back to the student in a way. I love that aspect of these tools: the uncanniness, the strangeness, and when they do something unexpected, it can be really revelatory.


Q: What is something you would love to incorporate into your curriculum in the future?

I was just listening to a webinar, and the conversation was about a tool the team had created that reveals how chatbots view us whenever we're interacting with them. It showed how the chatbot infers your education, your financial status, and your gender, and then gives you feedback based on what it assumes from your answers. If it reads you as working class, it will give you very limited ideas and very basic language versus someone it reads as more upper class. It's trying to attend to who you are, but there's all of this bias and a kind of shaping of available information, which is super troubling and provocative. What the team had created was a kind of dashboard where you could change your status. So you could change your gender or change your education and then see the different responses. After seeing this, you immediately begin to say, “oh, I should be questioning every time I interact with one of these.” I wish we were doing more of that kind of analysis in the classroom as we think about how we're interacting with some of these tools. What are the assumptions being made by them that we're just overlooking because we're moving really quickly and using them?


Q: What kinds of ethical or cultural concerns have emerged from your students or colleagues when it comes to integrating AI into creative practice?

Within the School of Cinematic Arts, and also within Media Arts + Practice, people are very skeptical. I would say 90% of this community is opposed to certain aspects of AI, because it really radically reimagines the craft of filmmaking in a way that doesn't feel organic or good. People are worried about data sets and biased data sets, and about IP and copyright. They're worried about job replacement, and they're worried about deepfakes and the liability for artists and actors. For instance, how do you own your own likeness if it can now be replicated and used? For writers, there's the fact that they could be very easily replaced in a corporate environment that's looking to cut costs. Our Media Arts + Practice students, where we have the combination of the critical and creative, bring to the conversation about AI a sensibility that asks about power and bias, and that's really important to them. So: who has access to these tools? How are these data sets created? What about environmental issues? All of those impacts remain kind of invisible when you're just cheerfully making AI-generated content, but they are extremely important.


Q: What advice do you give to students to alleviate the concerns mentioned above?

We really emphasize data set literacy, meaning students are encouraged to ask: what is your tool? Where does the data set come from? How was it created? I think there's a bulldozing effect where we feel we just need to get to the next best tool. I've been talking to my group of students about creating something like a personal manifesto for interacting with these tools at USC and SCA. Essentially, what do you need to be thinking about as you sit down to use these tools, and how do you keep yourself aligned with your values?
