
MADE for U of T | Ep. 09 | Dr. Philippa Hardman

Related people and/or projects: Introducing M.A.D.E. for U of T

In the 9th episode of MADE for U of T (see all episodes), we hear from Dr. Philippa Hardman, who discusses the role of AI in learning design and delivery and what educators should consider if they want to use it in their teaching.  

Listen to the podcast: The role of AI in learning design and delivery

 

Or read the transcript:

Prefer to read rather than listen to the podcast? Below is a transcript of the interview.  It has been condensed and edited for clarity.

Inga Breede (IB): Hello! Welcome to another episode of MADE for U of T. I'm your host, Inga Breede, and in this episode we hear from Dr. Philippa Hardman, an affiliate scholar at the University of Cambridge, a learning scientist, and the creator of DOMS™️, an evidence-based learning design process. Phil has spent 20+ years researching learning science and how to design optimal in-person, online, and hybrid learning experiences. In this episode, Phil discusses the role of AI in learning design and delivery and what educators should consider if they want to use it in their teaching. Let's hear more from Phil.

Dr. Philippa Hardman (PH): Well, thank you so much, everyone, for being here. And thanks, Inga, for the invitation. I'm excited to talk to you and to learn from you. So first of all, I've been interested in AI as part of a broader interest in education technology for quite a few years now. And hopefully it's reassuring to you that AI is not new; we've had AI in education for over 60 years now. Over the last 20 years or so that I've been researching this area, learning science and its intersection with technology, I've been really interested in the question: why hasn't AI been more disruptive until now? AI, and technology more broadly, has disrupted many different industries, in fact most industries, in the last 10, 15, 20 years. And yet education has remained largely untouched by this technology. So I've been really interested in thinking about why that is. Why has AI specifically, and technology in general, not been as disruptive in education as it has everywhere else? I mean, think about the way that we order food, the way that we watch TV, the way that we listen to music, the way that we buy anything; it's all been disrupted by technology. And yet education (and I'm very open to being challenged on this) has, in my opinion, been very little touched by technology. So that's the context in which I've been interested in AI: where is it, and why hasn't it been more disruptive?

IB: And then January 2023 happens (laughs).

PH: (Laughs) It was Christmas 2022. And yeah, something happened. And this is important: we didn't necessarily see the emergence of a new technology, although there are things that are different about generative AI, as we now call it, and we can maybe talk about that. But I think what changed most at the end of last year and early this year was our awareness of AI and our ability to interact with it. I think until then you had to really be a computer scientist or a coder for this to touch your life and be something you could play with, experiment with, understand. Ever since then, I've been asking essentially the same research question: if AI hasn't disrupted education until now, and it could have, will it disrupt what we do now? Or will we just see more of the same, a continuation of the way that technology in general has impacted education, which really is just to speed up what we already do rather than disrupt it from the ground up? It's been quite a ride, and I am very optimistic about the impact that AI could have in education. But I also think we face a lot of risks, and for me the biggest risk of AI in education is that it makes us much more efficient, much more effective, at really ineffective practices. That's the thing I'm most interested in talking about and exploring with educators like you.

IB: That leads me to my next question. In May of this year, 2023, you gave a TEDx talk on AI and education, and you wrote a reflection article afterwards in which you said, "for every AI-powered piece of ed tech that pushes us towards more effective instruction, there are 10 examples which push us in the opposite direction." Our audience today is mostly learning designers and educational media designers. What should we be looking out for?

PH: Yeah, it's a good question, and a really important one. I've been exploring this, thinking about what a rubric for learning designers and educators would look like when we're trying to select AI technologies. And I think, again, it's the same as with any other technology, and I'm sure you're much more capable and ready to go on this than you may feel you are, but the main thing is pedagogical quality. No technology, and I'm sure you know this already, is built pedagogically neutral.

Everything we build is a bit like any teaching experience: we make decisions about how we're teaching something every moment of every day in our jobs. And the same is true when we build AI tools. For the majority of AI tools that I've seen built specifically in the last 6 months, in this "chapter two" of AI, pedagogical quality is where the biggest question mark is. Overwhelmingly, and I'm sure you've seen them, we're seeing tools built to target educators and learning designers that are basically tools for content creation: more slides, more videos, more images, text to video.

IB: I saw a lot of that at a recent conference here in Toronto.

PH: The pedagogy we're seeing there is what we might refer to as a "knowledge transfer" pedagogy: I give you some content, and then you give it back to me. And the reason we build that sort of technology is because it fits within what we already do. The pedagogy that we see, you name it, K-12 to higher ed, learning design, everything in between, defaults to this idea that there is an expert who produces content, and then we consume the content and we learn the thing. But what we know from 30-odd years of learning science research (even more than that, actually, more like 40-50 years) is that that's a broken pedagogy, or at least it's not optimized for learning outcomes. My research suggests it's under-optimized for two things: learner motivation and the achievement of outcomes. We only learn if we actually do something, rather than just sit and passively absorb.

The thing to look out for is: what is the pedagogy that's baked into the tool? Does it serve your learners? Does it enable you to design and deliver active learning experiences? Formative and summative feedback? Really powerful, authentic assessment? Or is it more content-focused? Admittedly, the content generation stuff is important, and it's powerful because it buys us more time to be more robust on our pedagogy. But most of the technologies we've seen emerge so far have just made us faster at what we already do, rather than making the learning experience more optimal. I would also recommend that we always look at things like data privacy and transparency. This information has to be provided.

IB: And is that information easily accessible?

PH: Yes, yes! So this has to be provided. It's usually in the small print, but look for it, seek it out, and if it's not there, ask. If the creator of the tool is not transparent about the data they've trained their models on, or about what they're going to do with your data, then for me that's a red flag. That's concerning. And I know this idea of trust and data privacy is massive, not just in the education sector but more broadly. With OpenAI, for example, we have some idea of what's happening there, but we don't have full clarity on how those models have been built and where our inputs are going. So that's really, really important. The other thing I always look out for is accessibility and inclusion. AI is as flawed as humans; it's only as clever and as reliable as the data it's been trained on. So I'm always very interested to know how diverse the data is that a model has been trained on. To give you a concrete example: if a tool claims to be able to design a learning experience, what kind of experience is it building? What kind of case studies, for example, is it presenting back? If those case studies don't reflect the experience of all types of students, from all types of backgrounds and identities, then they're not fair, and they're not giving all of those students an equal chance. So there's something about pedagogical quality for sure, data privacy and transparency, and then, going a little bit deeper, the diversity and robustness of the data that's been used as well.

IB: There's a lot of mixed messaging I'm seeing online and at recent webinars and conferences I've attended: it's either about embracing AI tools or being concerned and cautious about the unknown. We've been hearing nothing but AI for the last 6 months, but for someone who is still brand new to this world and its impact on education, where would you suggest we begin that journey of discovery?

PH: My experience of these last 6 months is that in higher ed we've seen a general shift from an initial kind of panic and repression of AI to more embracing of and experimentation with AI in the classroom, and there are lots of reasons for that, which we can maybe talk about if that's something you're interested in exploring. But we've now had enough time to start to learn from that experimentation: enough time to experiment in robust ways and to publish peer-reviewed work on it. There is information out there already from the last 6 months about how different educators have used these tools, where they work, and where they don't. So the first thing I would say is: get into the research. Pleasingly, there is an AI tool called Elicit, and what it's really good at is enabling us to ask very clear research questions and get summaries of the research, so you don't have to go deep into it. I would encourage you to use these kinds of research summary tools to get to grips with what we already know: Where are the risks? Where are the opportunities? Where can we safely start to experiment? And why might we hold back for now? Really, the two people talking about this are me (sorry, shameless self-promotion) and a guy called Ethan Mollick; you might know him, he's a professor in the US, and he's doing a lot of on-the-ground experimentation with students and sharing back his experiences. We're seeing more and more of that, although I should say we're also still seeing a lot of repression: banning, and the introduction of policies which make the use of AI count as plagiarism. I have lots of thoughts here, but it is happening, so be aware that there are these two competing camps, if you like: one is team "ban it" and one is team "embrace it." It's interesting to compare the perspectives of those different groups as well.

I would also recommend, and I was very reluctant to do this because I'm not a technical person, getting to know the technology, so that things like assessing tools in the ways we've just been talking about become much easier. Have a look at the courses on DeepLearning.AI, which is a new course site; they're totally free. There are lots of courses there designed by a guy called Andrew Ng, who is and always has been one of the leading professors on AI; he's got the world's most popular MOOC on AI on Coursera. He and his team have designed a series of short courses for non-technical people on just understanding: What is AI? How the hell does it work? What's machine learning? What am I looking for? What makes a data set more or less reliable? All of those things, in a really accessible way. I think that's incredibly powerful, just to get our heads around it. And be reassured that the fact this isn't new means there is a lot of research, not just from the last 6 months but from the last 10 years, I would say, on the use of AI in education. One great example is over at Georgia Tech, where there's a team who have been building intelligent tutors, basically AI-powered tutoring and teaching assistants, since about 2013, I think. They've done a hell of a lot of experimentation and learned a lot; there's loads we can learn there. So get into the research, understand it, actively experiment with it if you can, and also become part of these communities and share your experiences. I think we're all learning together, and the more experimentation we do, the more clarity we have.

IB: So, Phil, any new, exciting projects or talks that are coming up for you in the next few months in the world of AI and education that we should be aware of?

PH: I'm continuing my experimentation with, I guess, two projects. Basically, I'm interested less in how AI might disrupt technology and more in how it might disrupt pedagogy, so I'm continuing to experiment with that research question. Really, my guiding aim over the last 25 years has been to try to close the gap between the people who design learning experiences and learning science research. One thing I'm researching is: can we use AI to synthesize pedagogy research specifically, and then use that to build what we would call a large language model, but an education-specific model, based only on peer-reviewed research into how humans learn? So that when you ask questions about how to design something, or maybe even how to deliver it, we can use AI to design in an evidence-based way. That's one of my big projects.

And then at the other end, there's a lot of research to suggest that the impact of a great learning experience, and I'm sure this might resonate with some of you, isn't just the quality of the content. It's the pedagogy that we use, it's the quality of the experience, but it's also the quality of the professor or the tutor, or whoever the teacher is. We now live in a world where you can take a module at Harvard for zero pounds through a MOOC, or you can pay, I'm not sure how much, let's say $30,000 a year, to go in person, and the differentiating factor is the people, the support that you get through that content. So the other end of our research, which I'm exploring with my co-founder Gianluca Mauro, who is an AI expert from Harvard, is: is it possible to reproduce what really is the value, which is the support that sits around the learning experience? So focusing less on content and more on experience, on user experience, on human-machine interaction, and also on our ability to basically turn real professors into bots: not to create a kind of general AI, but to scale human intelligence in the form of professors and coaches. And what's interesting here is that my co-founder and I have a principle that we would only really experiment with AI if it's going to bring more value than it not existing. We've found that sometimes the people who are the subject matter experts are not the best teachers. What we're experimenting with is the ability to bring together the power of the professor's or expert's mind, but use AI to translate that into something that is consumable and supported in the way the world's best coach might support you. We're looking at what it looks like to combine certain skill sets and try to automate them, and whether humans even want that. One of the big things we're researching, and we're hoping to partner with the University of Copenhagen to explore, is: do we even want AI teachers?

IB: And I'm glad you're asking about the human reaction to it!

PH: It's kind of exciting when you're just turning, you know, images of cats into cats on surfboards. But there's a lot more research to be done on people's appetite for this kind of thing, and in what contexts. If you pay for your education, it might be that this is not something you want, but it could be a very powerful way to realize the dream of the MOOC, which was always to reach people who don't go to university and which, in fact, never achieved that impact. So that's the other area we're focusing on.

IB: Well, thank you, Phil, for meeting with us today and answering my questions and the questions from the MADE community. This was great!

PH: Yeah, it's been wonderful. Thanks so much, everyone. I've got lots to think about too, and if you have more questions, just fire them at me virtually; I'd be very happy to continue talking!


How do I join MADE?

If you're interested in joining the conversation, please fill out the Request to Join M.A.D.E. form and you'll be added as soon as possible.  Can't wait to see you there!

Article Category: M.A.D.E. Podcast