How should AI be used in our universities?


Thank you for listening to the Ask AI Podcast. To get our latest episodes, AI news, discount codes for events, and links to free resources, sign up for our monthly newsletter by visiting askai.org and clicking the subscribe link. That's askai.org.

On this episode of the Ask AI Podcast, we're going to the East Coast, to Dalhousie University, to talk to Dr. Christian Blouin, the AI lead for Dalhousie University. We are a pan-Canadian podcast, so it's wonderful to talk to folks in all parts of the country, and this is an awesome conversation. Somehow, beyond academia, we talk about the CRA, the printing press, and who AI really belongs to. Stay tuned. This episode is sponsored by Cinchy. Cinchy is the enterprise data collaboration platform that enables people, systems, and AI to co-produce intelligent, reusable data products in real time. By eliminating data integration from IT delivery, Cinchy also makes it virtually impossible to violate data protection controls as you leverage AI technology for your business.

If you are an enterprise IT leader looking to de-risk and accelerate your AI journey, this is what you've been looking for. Visit Cinchy.com to learn more. Wonderful to have you here, and so excited to have this conversation. You know, you teach in computer science and have a long history in that field. Why don't you tell our viewers, starting really basic: what is a computer scientist?

I wish you'd started with an easier question. It's a good question, because computer science, unlike most academic disciplines, is actually very young. A lot of my older colleagues were not traditional computer scientists, because the field didn't exist then, and we are changing very quickly. The nature of computer science, fundamentally, is that we have this gadget.

We have this machine that has incredible properties, and we kind of blend mathematics with whatever this machine is capable of doing in an attempt to solve people's problems. So to me, a computer scientist is the person who tries to find a way to use that gadget, the computer, to solve important problems for humans.

Wonderful. And I love how, right out of the gate, you're talking about computers as assistive technology, computers and humans working together. I can't wait to dive into that more. But you mentioned something else that was really interesting: how computer science is changing, evolving. And certainly, I feel like folks are going at breakneck speed. Everyone feels it's changing so fast these days.

So why don't you talk a little bit about how the role of a computer scientist is changing, particularly in this AI revolution?

Well, within the AI revolution, what I think computer scientists have to learn is to actually give the front stage to everybody else. Computer scientists may have developed the foundations of machine learning and generative AI, but the applications themselves really belong to a much broader sample of the professional population. So I believe that's what a computer scientist is, and I think that, with AI entering our classrooms and entering our laboratories, we have to ask ourselves really difficult questions, such as: what is computer science going to be in seven years, or what is it going to be in 50? I think a lot of elements that we believe to be computer science will just become standard knowledge in the general population.

Wow.

That's really interesting, particularly around computers assisting humans with these big problems. Can you give some examples of big problems that only AI can solve, and that are not going to be led by computer scientists at all, but by folks from other disciplines?

Sure. Well, what machine learning in general is very good at is pattern detection, and it has a much better attention span than humans.

We have used computers and data sets to help us detect patterns, but when it comes to capturing trends where we are not quite sure what we need to look at, and we just throw every possible kind of data at the problem at the same time, I think this is where AI has the potential of breaking a barrier and actually transcending what we could hope to do simply as humans. I think it does not belong to computer scientists, because the depth of understanding of the problems has to come from expertise that we do not have. Now, I started my career as an academic both as a biochemist and a computer scientist, and in fact I am, formally speaking, a biochemist.

And what was interesting to me at the time was to be able to leverage these gadgets, these computers, to do things in a biochemistry environment that were literally impossible for humans, to crunch through that amount of numbers, for example.

You must have been reading my mind, because that's exactly where I was going next: talking more about your experience and expertise in biochemistry. You know, I'm looking here at some of your papers, like biological event extraction using subgraph matching. A lot of these are bioinformatics, all sorts of really deep science pieces. So I'd love to hear more about you as a scientist in the biochemical field, with protein evolution, molecular modeling, and those areas of expertise. Tell us more about that.

Yeah, so the PhD that I did had very little to do with computer science.

I was really interested in a very specific area of protein engineering. What I wanted to do was this: if we can build high-fidelity theoretical models of proteins, then we can use computers to design new ones to perform functions that have not evolved through natural means. And at the end of my PhD, I kind of realized: I run these experiments, I get all sorts of interesting results.

And a lot of them I cannot explain, because we don't have the instruments to measure them. So I made this shift: what if we use the entire evolutionary history as our sample? Let's look at DNA sequences for proteins that have been doing the same thing over millions of years, or hundreds of millions of years, and try to pull out of the data the kind of meaningful information that we don't have the instruments to measure. That is how I ended up buying my teach-yourself-programming book and slowly, slowly started to turn over to the dark side.

I love it. That shift.

Just like what you said at the beginning: I've got a problem to solve, and I'm going to use computers as a tool to solve this problem. So I really love seeing you put your ethos into practice. Let's talk about disruption. You know, everyone's feeling AI, and feeling it as such a huge disruptive event. Everyone's like, whoa, when did this happen? What's happening? People feel like it's just been the last two years. And, you know, certainly I've been in the field for a long time, and I'm sitting here thinking: the journal Artificial Intelligence has been around since the 70s.

You know, really, a lot of the early models are very old. We just had computers catch up with what the scientists were conceiving of as possible, right? And then they became technically possible. And so, you know, you have this article here about Dalhousie's AI lead, which is you, aiming to spark conversation and connection on a rapidly evolving information future. In this article, you mention that the pace of disruption is increasing, and even if it stays constant, we don't want to find ourselves in a place where we're criminalizing everything and defining ourselves by what's not allowed. We also need to be clear on what we are trying to achieve as a university. And this was in regards to academic integrity in the age of AI, and certainly you're working with the provost's office on academics. Tell us more about this disruption.

Tell us more about how it relates to academic integrity.

Yes, and I'd like to divert the conversation away from academic integrity as fast as possible, but it's important for this question, because this is where it begins. The notion of academic integrity, I think, started to shift when the internet came. The university went from being the monopoly center of information and data to a much more democratic environment, where the information is out there, whether it's of good or bad quality, and we adapted over time. You know, at first we were like, well, don't use Wikipedia, it's terrible because it's not edited by professionals. Then we slowly matured, and we had a good 20 years to do this. With AI, what happened is that all of a sudden, things that we believed to be inherently human are trivially done by an application that runs in a browser. And a lot of my colleagues wrote their syllabi in January not knowing what generative AI was.

And by the end of the term, they had a lot of term paper submissions that almost started with "Certainly, I can help you in writing..." So the rules had shifted so much, and a lot of my colleagues took the attitude that if I didn't think about something, it's necessarily not honest, because it gets around what I thought students would be doing in the first place. I think that in a world where we are graduating professionals for the next few decades, these tools are going to be increasingly pervasive, embedded in operating systems and other applications, and we would be doing a real disservice in telling students, this is bad and you have to stay away from it, because it is inconvenient for me to assess how much you know. So it is with this frame of mind that I became more interested. Okay.

Let's try to help people learn about AI so they can form their own opinions, because there's a lot of hype online right now. And that's why I volunteered to be the institutional lead: because we need to figure out how we communicate with people who are not computer scientists, who do not understand the mathematics underneath, but really want to be able to form their own opinions. And my fear is that, with the pace at which AI applications are being deployed by startups and embedded in existing applications, at some point university professors, who are supposed to be training professionals for the future, will just feel demoralized at constantly trying to catch up, trying to climb a mountain when they don't know how high it goes.

Interesting, and fair. You know, not everyone wants to talk about this academic integrity piece, and I really appreciate it when you say you raised your hand to be a part of helping the institution do this.

In this very same article, you even say, you know, asking faculty to figure this out on their own isn't realistic or fair, and you mention that again at the end of your answer here: you're just constantly playing catch-up. I saw something online recently that I thought was a really excellent way of explaining it. They said that this revolution in AI is similar to what calculators did for math and arithmetic: you can do a lot more, you can do it faster, and now you're focusing on different areas of math beyond just what you would do on a calculator. If maybe that's a triggering idea, if that makes you think of something in AI, we'd love to hear any parallels that you're thinking of now.

But I thought it was a great example.

I like the calculator analogy, because in many ways it's helpful, purely from a functional perspective. When I was young, I remember people telling me, well, you need to be able to do long additions and multiplications, because in a world where there is no calculator, how would you do it yourself? And I think that this argument has proven to be pretty pointless in the end. The main...

Even I'm laughing, not to cut you off, but I'm like, I can't imagine a world without calculators. When did I ever have that?

Well, we never know when the zombie invasion will all of a sudden get rid of all the calculators, but we'll still need to do long divisions, and we were cautioned quite sternly about this when I was in grade 4. I think where the analogy breaks down between a calculator and AI is that you could pull a calculator apart, open the hood, look at every component, and explain in a highly predictable manner how it works.

There is no emergent property to a calculator. But AI is a very different beast, just by the nature of how it's structured. We train it. There are billions of parameters, and the only thing that we ask from the system is to give us the best accuracy, the best recall. We're not able to open the hood, look at a neural network, and say, yeah, I can see knowledge here. And so we are dealing with a calculator that has emergent behavior, and we do not know where the edges of it are. For this reason, I think it's a little bit more wild to work with AI than to work with a calculator.

I love the way you explained how the analogy is interesting but where it breaks down. That was really fantastic.

We're coming up on the latter portion of this interview and our time together, and I know we've kept it fairly high-level. We've been talking about what it means to be a computer scientist, and how people who are not computer scientists will be engaging with artificial intelligence as a tool to solve their various problems. But I'd love to open it up: what's one thing right now that you'd like to double-click into a little further? I've got some ideas, but I'd love to open it to you, something that you'd like to share with our listeners.

Well, something that's important to me would be to separate the conversation about AI from the technology itself and look more at democratizing the technology for everybody. An example that comes to mind is how the CRA, the Canada Revenue Agency, decided to approach AI with the principle that everybody working at the CRA should be sufficiently informed to be part of a meaningful conversation on what AI is, where its limitations are, and what we should and should not do with it. And that, to me, is the most important aspect of my role as an institutional lead: I am really interested in figuring out how we get classics professors, how we get nursing professors, how we get university staff to learn what they need to know and to remain current, so they can be involved in the discussion and are not dictated to about what is right and what is wrong. I think we have an opportunity right now, in the next couple of years, to very much define how we are going to be making decisions for quite a while, and we should not leave this only in the hands of people who understand the mathematics.

Such a great point. I love that. I never would have thought the CRA, the Canada Revenue Agency, would be ahead of the curve with AI, getting people to be AI literate, but those are very important points about nursing and classics professors being involved. To me, it feels like this.

You know, with the printing press, they needed to get the masses to read and write. With AI and all these tools, we need to get the masses to be able to understand and use them. To me, it feels a lot like that revolution, and, admittedly, I don't know the history of the time horizon from the invention of the printing press to when reading and writing were actually taught to the populace. So I'm not sure what the timeline is there, but it feels like we're behind, not just from an AI perspective, but even just in a basic understanding of coding and computers. A lot of the people that I talk to in my hometown of Sault Ste. Marie are like, I can't figure out my Facebook privacy settings, I can't figure out my phone settings, much less how to engage and interact with these tools, which maybe sometimes have a somewhat user-friendly interface. But what we've already seen is that prompts are everything.

So what is your take? If we look at the printing press, we had to teach the populace to read and write. Now we have all these AI tools, and we need to teach the populace how to use them and leverage them, and really democratize that, to your point earlier. To me, it feels like we're behind. Or are you seeing advances that maybe I'm not?

No, you're completely right. So I think that we still teach mathematics in junior high and high school, pre-university, as if everybody was going to become an engineer.

We are teaching one kind of mathematics, and it's very dogmatic. Most people, particularly women and people from underrepresented groups, don't see themselves in that very rigid framework, and kind of exclude themselves from having the confidence to say, I can sit down and I can figure this out. I think that going forward, though, we can more or less shut the lid on the machine and say, you know what? We are no longer programming. We are speaking. The programming language has become more and more natural language, and everybody has natural language abilities. It also means it's not just about writing, and it's not just about speaking. It's also about reading. It's about understanding. And I think that if we want to build a next generation of people who will adapt very well to an AI world, we have to change a little bit what we call mathematics in P to 12. Maybe do more discrete math and sets, talk about databases, talk about data instead of talking about polynomials.

But at the same time, I think we need to drive the humanities and language arts, which, again, we're teaching in a very superficial manner. If these skills were driven better into the teenage years, we would have people who feel much more confident interacting with new systems, interrogating them, understanding how they react to certain prompts, and how to actually work with them to get better work out of them.

That's great.

I would happily trade polynomials for understanding databases. That would have been much more useful. And I love this bent towards the humanities and language skills, and how programming languages are really becoming natural language more and more. The last thing I'd love to touch on with you is the next disruptions. Like, what's coming next? And I know from some of our pre-notes...

I love predicting the future, because I think we're really good at getting it wrong, going back to the flying cars of a hundred years ago. But in the shorter term, I'd point to the advent of functional artificial general intelligence systems: systems that work towards a more abstract goal, break it down into steps, and can actually perform more meaningful tasks than simply writing text. I think this is getting fairly close. If anyone tries Agent GPT, for example, you'll realize it spins its wheels and doesn't do anything useful.

But if you Google Agent GPT, you get all these results like "this is a game changer" and "we'll never use GPT again," and that's ridiculous. People don't have the skills to actually engage with these claims. So artificial general intelligence, which is essentially one layer above the chatbots, is one thing, and I'm also really intrigued by brain-computer interfaces, which I think are going to hit the consumer market in, I don't know, maybe 10 years.

Well, general intelligence sounds a little bit intense, and I don't know if I'm bought into the human-machine brain connectivity. I used to think it was really cool; now I'm definitely a little nervous about it. But thank you so much for being with us and having a really simple conversation that everyone can understand about the implications, who should be involved, and why we need more people involved, and for just spending time with me today.

Thank you, Dr. Christian Blouin.

Thank you for the conversation and for the invitation.

Absolutely. And keep working with Dalhousie University to get more classics professors and nursing professors using these AI tools; that is truly what we need in the future. And build out that humanities department, even though that's not your area, but I love championing other departments. I think that's wonderful.

And they're well on their way. It's amazing how fast people adapt.

Love it. Thank you. And that's another episode of the Ask AI Podcast.

I loved how he said that AI doesn't belong to computer scientists. What a novel hot take that I'm very into. And who would have thought the CRA would be leading the way in teaching its people how to interact with AI? I think that's incredible. Just like how, with the printing press, we needed to teach people how to read and write, with AI we need to teach people how to interact with these technologies.

And that's what this episode with Dr. Christian Blouin was all about. Thank you for joining us.

Thanks for listening to this Ask AI podcast. This episode was edited by James Fajardo. Original music was provided by Mike Letourneau. The series producer was Chris McLellan.

To view the episode transcript, stream the video version, and get helpful links, check out the episode post in the Ask AI blog.
