Posted Aug 20, 2025 in MADE Podcast
In the 13th episode of MADE for U of T, we hear from artist and new media educator Jon Ippolito about the evolving role of generative AI in teaching, creativity, and the future of media.
Prefer to read rather than listen to the podcast? Below is a transcript of the interview. It has been condensed and edited for clarity.
Inga Breede (IB): Thank you for joining us today, Jon. My first question to you is: what is one of the major shifts that you’ve seen in the use of AI, both by students and instructors in higher education?
Jon Ippolito (JI): Well, there have been plenty, but if I had to pick one, it would be that fewer and fewer people are sticking their heads in the sand and completely ignoring it. There’s been a gradual shift along a spectrum: from hiding from the technology, to the next stage, which is “okay, we’re going to use detectors to just stop people from using it,” to figuring out ways you can use it here and there in the classroom, then to a more strategic approach, where you’re very deliberate about where you allow it and where you don’t, and finally to the far end, maybe, of encouraging its use in the classroom. In my experience it’s like sand cascading down a series of hourglasses: everybody started in the denial part, and gradually things are shifting from one stage to the next, so that more and more people are either abandoning detectors or getting on board with the idea of “Well, we might as well teach these things in the classroom.”
When I think of denial, I think of people who are ignoring the headlines and the exhortations of their fellow faculty to pay attention to the way this technology is disrupting normal classroom behavior. Ironically, this is not always in the areas you’d think. For example, I find that the community of writing teachers has gotten on this topic pretty quickly. They’ve been really innovative and open to sharing approaches, ideas, and debate online. Whereas, in my experience, computer scientists, who you’d think would be really open to these ideas, have been some of the most in denial, because it’s now very easy to generate boilerplate computer code, and that’s the basis of many computer science assignments.
So, it varies by discipline, it varies by geography, and it mostly varies by individual. If you’ve been paying attention, you might have found yourself further along that spectrum from denial to strategic acceptance. But people are at all stages of it. I just think it’s a slope where people are gradually moving away from the banishment approach and toward a cautious acceptance.
IB: In your introduction, I mentioned that you created the Learning with AI toolkit. What prompted you to develop this toolkit and what are some examples of how you’ve seen it used by educators and students?
JI: This field is constantly churning: new information and new ideas, as well as technological and political developments, keep coming down the pipeline, and it’s super hard to keep track of it all. So, to help people find the needle in the haystack, especially given the lack of transparency at a lot of these AI companies, my first approach was, “Okay, as an educator, I’m just going to make a Google Doc and list all the resources I’ve found.” That’s helpful up to a point, especially early on, but eventually it becomes really hard to find what you’re looking for. Just this morning, in a Google group I’m part of, someone asked, “Are there resources for law professors teaching students how to write about law, and how adjudication will happen in an age where AI can assist or hurt that goal?” Fortunately, I was able to go to the toolkit and quickly filter by keyword, say, “law” and “AI,” and get down to about eight sources. So, the goal of the toolkit was to let people filter by discipline, assignment, stage, ethics, whatever the topic might be. For example, I could search for syllabus plus guidelines plus plagiarism and get a bunch of policy statements about plagiarism. Or something more creative, like prompt and image and classroom and assignment, and get an assignment where students practice generating images with AI.
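The keyword filtering Jon describes can be sketched in a few lines of Python. This is a toy illustration only: the resource list, titles, and tag fields below are invented for the example and are not the Learning with AI toolkit’s actual data or implementation.

```python
# Hypothetical resource catalog; entries and tags are made up for illustration.
resources = [
    {"title": "AI and legal writing assignments",
     "tags": {"law", "ai", "assignment"}},
    {"title": "Plagiarism policy templates",
     "tags": {"syllabus", "guidelines", "plagiarism"}},
    {"title": "Image-prompting classroom exercise",
     "tags": {"prompt", "image", "classroom", "assignment"}},
]

def filter_by_keywords(items, keywords):
    """Return only the items whose tag set contains every requested keyword."""
    wanted = {k.lower() for k in keywords}
    return [item for item in items if wanted <= item["tags"]]

# "law" plus "ai" narrows the catalog down to the one matching resource.
hits = filter_by_keywords(resources, ["law", "AI"])
```

Combining keywords with an every-keyword-must-match rule (set subset, `<=`) is what makes a query like “syllabus + guidelines + plagiarism” narrow a long list down to a handful of relevant sources.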
IB: With regards to gen-AI, how has it empowered creators? And the follow-up question is, what is a common frustration that you (and your creative peers) have experienced around gen-AI?
JI: I think it’s empowered the everyday person in the sense that a car empowers someone to drive around. But a car also has hidden costs: environmental costs and social costs. So, the facile power a new technology brings isn’t necessarily helpful in the long run. Honestly, I think very few really creative people, artists who don’t just produce stereotypical images, are benefiting from generative AI. There is a long history of artists actually working with generative AI. Last year I did a teleconference with Christiane Paul, who’s probably the world’s leading digital curator, about the history of people creating works with AI, going back to the sixties, first paintings and drawings and then eventually digital works. So, artists have been exploring this technology for a long time, since well before DALL-E and Midjourney and Stable Diffusion came out. What I’m paying most attention to now is artists who deliberately misuse the technology to open up, subvert, or reveal some aspect of it.
IB: Can you give an example?
JI: Yeah. Some of them find ways to use it that you might not have thought of. A lot of people think, “Well, I could just generate a song right now using Suno or Udio or one of these, and use that as a soundtrack,” so it’s a way of not paying a musician, right? Or making a magazine illustration without paying an illustrator. But there are folks like Holly Herndon, who created a kind of version of herself she calls “Holly+,” where you can essentially have her sing along with you. So it’s no longer replacing you as a vocalist; it’s more like augmenting you, because now you have a backup singer. Or the artist Eryk Salvaggio, who’s been investigating the role of noise in these technologies: the diffusion models that create these images all start from noise, and they work backwards to find images in the noise. So he’s interested in, for example, prompting for images that represent noise, which, it turns out, is really hard for them. I’m very interested in artists who are, again, misusing the technology to reveal something that maybe the commercial companies don’t want you to see.
That said, as a creator myself, I have quite a few frustrations about AI’s reinforcement of stereotypical ideas of what art is: that art is a picture, as opposed to some kind of intervention that may or may not involve a picture. Another challenge for me is navigating the knee-jerk reactions from “AI cheerleaders” on one side and traditional artists on the other about whether this is stealing. There are the AI apologists, I should say, who claim that training Midjourney by digesting all this uncredited human creativity, all the paintings and photographs that have been posted online, is just like traditional artistic inspiration: “Artists are always drawing from each other.” And that’s BS. Yes, artists do draw from each other, but they don’t go and chop up every single pixel.
IB: And call it their own work.
JI: Yeah. So, that’s a misunderstanding of artistic inspiration. But on the other hand, I’m also frustrated by the claims coming from the copyright industries, and these are typically not individual artists but the organizations that represent them, like music labels, that the copyright regime that pre-existed AI was somehow a great way to reward human creativity. It wasn’t. It was predatory and rewarded only a very privileged few. Dealing with those two extreme positions is frustrating for me.
IB: We’re having this conversation in 2025. What are one or two things a person in the education space (whether that person is an instructor, an instructional designer, or a media developer) should be doing to level up their skills and offerings?
JI: I think there are a couple of things. One is, if you’re already a creator in the sense of making media, then you have a big advantage over people who are just using text, because the interface for text, chatbots like ChatGPT, is like an oracle, right? You ask the oracle a question, and it gives you something that looks very good, like some godlike emanation from a superintelligence, and that hides a lot of the gears behind the technology. But when you go to generate an image with a sophisticated image generator, not right in Bing or ChatGPT, but something with a lot of features, like Leonardo.ai, or running a model on Hugging Face, then you see all the bells and whistles that really constrain and parameterize the results.
A setting like temperature controls how much creative deviation there is. Choosing different models, you see that this model makes it look more like anime, and that one more like 2D design, and so forth. Those peel away the curtain and show you some of the ways these things are actually probabilistic. And you typically get four or more results, which also shows you that there isn’t one godlike answer; you’re actually getting a lot of different answers.
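The effect of temperature that Jon points to can be illustrated with a generic softmax sketch. This is not any particular generator’s implementation; the function name and the example scores are made up, but the mechanism, dividing a model’s raw scores by the temperature before normalizing them into probabilities, is the standard one.

```python
import math

def sample_weights(logits, temperature):
    """Convert raw model scores (logits) into probabilities, scaled by temperature.

    Lower temperature sharpens the distribution (the top choice dominates,
    so output is more predictable); higher temperature flattens it
    (more creative deviation across alternatives)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                              # subtract max for numeric stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                 # hypothetical scores for three options
cold = sample_weights(scores, 0.5)       # sharp: first option dominates
hot = sample_weights(scores, 2.0)        # flat: alternatives stay in play
```

At low temperature the probability mass piles onto the single best-scoring option, which is why the same prompt can feel deterministic; at high temperature the alternatives keep meaningful probability, which is why you get four different-looking results instead of one “godlike answer.”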
So, I like to recommend that people who work with these technologies try image generators, sound generators, and video generators, which are still incipient but now available. You’ll learn quickly that you don’t get a good result right away; you have to do incremental prompting. There’s a neat feature in Leonardo.ai called Flow State where you can zoom through a bunch of examples, click on one and say “More like this,” and steer the model. That’s very different from what people are used to with a chatbot.
Another thing I’d recommend to up your game, and again, as a media maker you have an advantage here, is to use traditional terms from old, pre-AI, even pre-digital media: an establishing shot, a rack focus, a zoom, a low angle, a panorama.
These are terms from traditional photography and videography that can help you steer how a photograph looks, its composition. You can even use a term like f/3.5, giving it an f-stop or an aperture setting. The reason the model knows these things is that photos posted to Flickr or Reddit often include EXIF data, which records the lens used, the particular film, and so on. Using those terms in your prompt can actually recreate the look of those kinds of photographs.
And the last thing I’d say, as a general rule: Anna Mills has a great phrase, which is to use generative AI for input rather than output. Instead of just thinking “I’m going to generate things,” which is, of course, what the term generative means, think of it as something you can also use for feedback. If you create a painting of your own, or a PowerPoint, or a paragraph, feed it in and ask for criticism.
That can be at the level of “How’s my grammar? Is the sentence flow okay?” But it can also be conceptual: Am I taking into account different perspectives? Is there inherent sexism or racism, or some other bias? What points of view am I leaving out? And you can do the same for other kinds of media. So it’s nice to think of it that way, and I don’t think enough people use AI for input rather than just output.
IB: Exactly! I wish we could continue this conversation, Jon, but we are all out of time. Thank you again for joining the MADE community today to speak with us!
JI: It’s always helpful for me to connect with other people because we all have a little different piece of this puzzle.
How do I join M.A.D.E.?
If you're interested in joining the conversation, please fill out the Request to Join M.A.D.E. form and you'll be added as soon as possible. Can't wait to see you there!