Artificial Intelligence in Education: Looking Ahead


Yeah, so welcome everyone to the panel on artificial intelligence in education. Our panel today will be really looking ahead at what they are seeing, what they're paying attention to, and what they see in the future of AI, especially for education. So thank you very much for joining us, panelists.

I would like each of you to introduce yourselves and also provide a little bit about how you think AI will influence the future of higher education. So, Ann, I see you at the top of my screen first if you want to get started. >> Sure, happy to. I'm Ann Dzuranin, and I am an accounting faculty member. And of the panelists, I'm the rookie here on AI. I was recently involved in a very large crowdsourced research project, which in and of itself was fun: 328 co-authors on a paper testing the use of AI to answer accounting questions.

So I teach data analytics in accounting. And one of the things that we focus on in the analytics class is critical thinking. And I think that the use of AI, or I should say students' access to AI, really emphasizes how important it is to teach students to think critically.

Because not everything that they're going to get out of AI is going to be accurate, and they really have to become educated users of the technology. >> Thank you. David, why don't you go next, and then you can call on whoever you think should go afterwards. Now you're on mute. [LAUGHTER]

>> Sorry about that. So I'm David Gunkel, Professor of Media Studies in the Department of Communication. On the research side, I mainly concentrate on the ethics and the law of emerging technology, and I've written six books on AI and ethics over the last decade or so. On the education side, interacting with students, I'm really working to bring AI into the humanities and the social sciences curriculum, specifically in our discipline of communication. And I have done a number of books on making artificial intelligence accessible to students who are in non-STEM fields and trying to get them engaging with the technology.

I think we are looking at a time right now when we have to integrate AI across the curriculum. There is no field that will not be touched by these innovations, and developing strategies for bringing this technology into our various disciplines and engaging our students critically is, I think, really our job as educators. So I will turn it over to my colleague, Andrea Guzman, because she comes after me alphabetically. >> Thank you, David. I'm Andrea Guzman. I'm an Associate Professor in the Department of Communication; I'm over on the journalism side.

My area of research focuses on human-machine communication, and how people conceptualize artificial intelligence, or what we call communicative AI, such as Siri, Alexa, and now ChatGPT, as a communicative other. I also study and teach about the integration of artificial intelligence into media industries. I've been teaching about AI to students since before I came to NIU.

And so I've been teaching it here at NIU for about eight years, and also teaching other educators, through professional organizations, about how to talk with our students and approach AI in the classroom. I am the lead editor of the forthcoming Human-Machine Communication Handbook, which looks at how AI and robotics are affecting multiple fields, everything from health care to spaceflight, to engineering, to commerce, to business. I also serve on a media advisory board for the Institute for Public Relations, which is one of the largest professional institutions for public relations.

In that capacity, I work with and hear from practitioners about how they are integrating these technologies into their spaces. As my colleagues have also indicated, teaching about AI is necessary. The approaches to how we teach about AI, though, will of course vary across disciplines.

Although I think there are some central tenets that we'll talk about today that we need to be thinking about. But I also want to talk about the fact that teaching about AI does not have to be scary, and there are really good ways to positively approach it, even if we're just learning about it ourselves.

So thank you. Next, we'll go on to Maoyuan Sun. And I know he does some excellent research on trust and AI.

>> Thanks, Andrea. Hello everyone. I'm Maoyuan Sun, and I'm an Assistant Professor in the Department of Computer Science. My research focus is human-computer interaction and visual analytics.

I teach human-computer interaction and visual analytics, and I also teach programming language courses in the department. And I think AI is a very important part of this, so my courses definitely touch the fundamental principles of AI. But the tricky thing is, since the rise of ChatGPT, suddenly one day it challenged all the computer science homework assignments, because students can simply post the questions into ChatGPT.

It can write a different version of the code, and we cannot even recognize whether the thing was actually written by the student or somehow generated by the tool. But unfortunately, that's what happens now, and I think that will be the challenge over the next few years or even longer. So instead of trying to block it, and I also think it is almost impossible for us to block students from using it, we have to think about a better way to help students actually use it smartly.

And that may also revolutionize education as a whole. This is my current thinking, and thanks. >> Thank you. We also have another panelist, Mona Rahimi, but I don't see them in the participants panel yet. So that's okay. We'll hope to have them join us soon,

but I want to just go over a little bit about how the panel will work. Basically, there are some questions that I have that I can pepper in, but please feel free to add your own questions. We do have quite a few people in the session, so at least for now, I'm going to ask you to add your questions to the text chat area, and I will let the panelists know.

If there's an opportunity and it makes sense, you can certainly turn your microphone on as well. So just to get things going, what I'm interested in is this new term I'm hearing, AI literacy. It's a new form, maybe new to me, of digital literacy education.

Do you think that students will need to understand this form of digital literacy, such as the mechanical nature of it, the biases, and of course the error patterns that might come into play? >> I think I'll start, again just from my perspective. I'm a faculty member in the Accounting Department in the College of Business, and we've been working with our students, particularly in accountancy, on data literacy and digital literacy, because quite honestly it's being used so much in the accounting profession that they really have to understand it. Piggybacking on what I said in my introduction, I really do think it's important that students in all areas understand what artificial intelligence is and what it can and cannot do, so that they can critically evaluate the information that comes out of it.

For so many people, it's this black box, and it's a really dangerous thing if they don't understand the limitations of it. So I would say they should especially understand the limitations, even if they don't understand the computer programming behind it. >> I will expand on what Ann said here, so yes.

I think it's important to think about the fact that we're at the beginning of a new technological phase, and I liken it to the introduction of the Internet and the World Wide Web, in terms of some of the historical patterns we're seeing in how people react, as well as how these technologies will fundamentally change, and in some instances have already fundamentally changed, how industries and learning work. So when we're asking that question, the way I hear it is that it would be the equivalent of asking, back in 1997 when I was first in college, should we teach students the Internet? So, yes.

We do need to be talking about artificial intelligence. As for what that literacy will look like, I think we can identify some central tenets, as Ann talked about and as you've talked about: ethics and bias, understanding what it is, although that can be a difficult question to answer sometimes, and its limitations. And then, more importantly, when we're doing this, and I can expand upon this later, we need to think about what your students already know about AI, or think they know about AI, because there are a lot of gaps there as well.

So yes, this is definitely a form of technology they need to understand, but there has to be some thought put into how we approach it. Next? >> I will just echo everything that has been said so far and say that when I came to NIU 25 years ago, I was hired to teach digital literacy. I pivoted to teaching AI literacy somewhere around 2010 or so. And I will say that when students encounter technology that they don't understand, the technological object is like magic.

They type in an incantation and it does something, and they don't know why it does it. And our job as educators is to demystify the technology, that is, pop the hood on the black box and let them see inside, so that they understand enough of the technology to understand how it impacts their world, the discipline that they are studying, and their future possible careers. >> I agree, but I think the tricky part about AI is the application side. A lot of tech companies want to somehow wrap up the technical things by adding a usable layer on top, something easy to use, and they always claim that they want to benefit a diverse group of people, even people who don't have a heavy math or statistics background. But I think it's very important for the [inaudible] at least to understand that technology is not everything.

It does have its disadvantages. It can potentially have biases [inaudible]. Depending on the domain or the background of the student's major, computer science students and engineering students may want to, or may have to, touch the more technical side. They must understand the details: the training process, the whole computational side. But for the other majors, just starting to learn how to use it is very important. And beyond just learning how to use it, it's important for them, or at least for the instructor teaching them, to introduce the idea that the black box is not always 100% correct.

You cannot simply say, I purely trust it, [inaudible] skills away. I think the analogy would be very similar to the situation when the calculator showed up. So do students still need to learn the math to do the calculation? I think the answer is definitely yes. They need the capability of doing those kinds of things, even though the tool is there. If they have learned the skills and want to use the tool, then yeah, sure, go ahead.

But without the tool, we'd better make sure the student can still do it. >> Thank you. Excellent thoughts on that. And one of the things it makes me think about is that this seems like something students from all disciplines need to understand. I think it was Andrea who said, maybe not all the mechanics, but they need to understand how it affects them and how they can use it in the manner that's appropriate to their discipline. So do you think that AI is going to be part of students' future careers? I guess the answer to that may be yes, but how is it going to be important to their future careers? >> Yeah, I'll start with that.

And it's not their future careers, it's their careers now. Again, this is discipline-dependent, but it's been part of journalism careers for 5-10 years now, depending on where you're working. And I think the same can be said for various industries.

And I want to add something here: it's not just about career preparation. This is about helping students understand the world and society. So I taught an honors class on AI and media across the university, and the reason that's important is because the news that a lot of people are consuming now is shaped in some way by artificial intelligence. Some reports people are receiving are written by artificial intelligence.

A big concern around ChatGPT is how it can possibly ramp up misinformation and disinformation. I think there are several ways to think about this, going back to the literacy question. There are the discipline- and industry-specific questions about AI that we need to address, but then there are also the social, political, and democratic components of artificial intelligence. We've seen this with social media: people use social media maybe in their careers, or to build their careers, like through LinkedIn, but we've also seen how it has had implications for the political sphere, and how people, in their personal lives and as part of a democratic system, need to understand this.

We need to think about artificial intelligence not only within our disciplines and our careers but at a larger societal level. >> The way that I explain this to my students is that we are not looking at a future robot invasion. We are living through the robot invasion right now.

And if we expect it to look like science fiction, with this dramatic uprising or the robots descending from the heavens with ray guns and everything else, we're looking in the wrong place. It's going to be more like the fall of Rome, where we invite these things into our world to take over various roles in our social reality. And slowly but surely, more and more AI will be embedded in our everyday activities; it already is. And if we aren't keeping our eye on how this incursion is taking place, where the changes are happening, and what it all means for us, we're not educating our students to be resilient as they graduate, step into their careers, and take on their first jobs. And I think our job is to really help them be attentive, to help them see where these changes are happening and what they can do now to begin preparing themselves to understand not only the reality that they live in but also the future possibilities that they're going to face after graduation. >> Go ahead, Maoyuan.

>> I totally agree. I think that AI, and the more innovative tools showing up today, potentially revolutionize the way we do work. For example, I think Microsoft just last week released Microsoft 365 Copilot, and the whole feature set is changing the workflow for people writing articles and creating media.

Originally, people started everything from scratch and did the work themselves. Now, with embedded ChatGPT-type tools, all the content can suddenly be automated and generated by the computational side, so we have to rethink the roles that people play in the workflow. At least in the Microsoft advertisement videos, the whole process is still steered by people: people initiate the conversation and the content is generated. But it could be the other way around, where the computationally generated content impacts people first, and the whole workflow shifts toward being initiated by the AI. Then how might students think about these changes, and would they adapt to them? I think that's important to think about. >> I really see this as being an equity issue.

So for populations that are possibly already vulnerable, do you see an equity gap widening? >> So I've studied this extensively. I started with the digital divide back in the initial stages of the Internet and the web, and there has always been a have and have-not divide between the individuals who not only have access to the technology, but have the skills and the opportunity to use it, and those who don't. I think the AI divide, and the robot divide if you want to call it that, is only going to exacerbate already existing social inequities.

And part of our role as public educators working in a public institution is to try to level the playing field, at least with regard to the students that we serve and the populations that Northern Illinois University has contact with. But obviously it's a global problem, and getting a handle on it is going to require a lot of initiatives locally, but also thinking globally in the process. >> Does anyone else have any comments on that? I just saw some nodding heads. [LAUGHTER]

>> Absolutely. One of the things that sparked a thought for me was when David was talking about robots taking over everything. As far as the equity issue, I think that in business we definitely see that a lot of the jobs being automated, not only by robotics but also by AI, are the lower-level jobs. And so I think that does exacerbate the haves and have-nots.

Although I don't know what to do about that, because we can certainly prepare our students at NIU to be the people who don't have their jobs replaced, as a societal issue, I don't know how to solve that problem. >> So to everyone's point, and to Ann's point, it is a bigger societal issue.

And we're seeing a lot of analogies with what happened in what was called the first automation crisis in the '60s. As someone who grew up around the Rust Belt, I constantly draw connections to what happened in working-class areas with industrial automation, when robots came onto industrial lines. There can be a lot of lessons we can learn there as well. I think it's important to keep in mind several things here. First, on the question of automation and what's going to be taken over: there's automation, and there's augmentation. There are some ways in which these technologies definitely are very helpful and are worked into human workflows, where they assist with productivity.

To Ann's point, yes, we're seeing some possible job loss, though we actually haven't seen much of it yet in journalism. Currently, we're seeing more jobs opening, because you have things like AI editors that you didn't have before. But it would be naive to deny that some jobs may go away, and some new ones will appear as well. There's also this question about internships for students: some of those lower-level jobs that can now be given to AI may have been entry-level positions for students, so that also gets wrapped up in this. And going back to David's point about the haves and have-nots, we also want to think about that in our teaching of AI, and think about the populations we serve here at NIU, because our students already are impacted differently by AI and may already have different feelings and thoughts around AI, particularly if they're from a community or a group that's already highly surveilled through something like facial recognition. When teaching about these technologies, students will have very different reactions to them, and that's really something we need to think about as educators: there are going to be differences in how students process and understand these technologies. And I want to be clear.

Some students are scared of these technologies. I've had students outright refuse to interact with a robot in my class. And so we can talk more about that as well. But some of those things do get wound up in these larger socioeconomic issues. >> I want to ask Cindy's question from the text chat, but that's an interesting path. I guess I never really thought about students being nervous about this, but absolutely.

So Cindy said: I'm teaching a production course this summer on using AI and instructional technology, so how to use AI tools to design and develop. Are there resources you suggest I provide students, including things like ChatGPT, DALL·E, AIVA, and Clipchamp? I see you've added some into the text chat. Go ahead. >> I am right now, in my AI, Robots, and Communication course, doing a big unit on computational creativity, which is really addressing these questions. And so we are engaging students in using DALL·E, Midjourney, ChatGPT, the GPT Playground, and some of the music-generating algorithms that are out there to create original music. Frankly, right now you can make an original film using just AI-generated images, an AI-generated script, AI-generated voices, and AI-generated music.

And knowing what this means for those who teach production, and for students who are engaged in production activities, I think is going to be crucial for getting our students to really understand these tools and what opportunities and challenges there are for the creative industries as they move forward. >> Can I ask a quick clarifying question of Cindy? Are you looking for more resources in terms of, these are the AI technologies I should be teaching? Or are you looking more for resources in terms of a larger, more holistic way to teach about AI, like certain resources I should be including about biases in AI? And you may have those already. Is this just more of a production question, or is this more of a how-do-I-broaden-the-course question? >> I'd like to broaden the course.

It's a production course, so technically it's about them developing something, and I will work with them on the tools, but I have trouble letting it be only a production course. [LAUGHTER] >> There's been a lot of great work done about artificial intelligence and machine learning in terms of the ethical implications. There are some institutes: the AI Now Institute has been looking at the integration of artificial intelligence into everyday practices for some time. They put out annual reports that talk about things like biases, which I use in my classes and which are fairly easy for students to understand.

And I can send these along later on. There are some newsletters I've subscribed to from thought leaders in the AI bias camp that constantly provide updated resources on artificial intelligence bias, or on how to think critically about these technologies, that are, again, accessible to people without technical knowledge. And I think sometimes educators, and I certainly felt this way when I was getting into this, if you don't have a computer science background, you're like, well, how do I even cut through this and find this information? But there are a lot of people out there who are actually working to make this information more easily accessible, and so I can definitely share some resources.

I know David has a lot of these things in his class, and I'm sure other people do as well, that are accessible to people who don't have a computer science background but can help them teach, think about, and share these topics with our students. >> Great. Thank you so much, and thank you everyone for keeping the questions coming. A lot of interesting thoughts here. Reva says: I'm interested to hear from the panelists what is actually being automated today in their discipline. One reads about shady things in the media, but I'm more interested in what is actually happening versus cute prototypes. That's question one.

So let's start there. >> I guess I'll start with accounting; that's the area in which I have the most knowledge of how AI is being used. First of all, there's AI, there's cognitive technology, there's robotic process automation, and there's machine learning. And the experts on this panel can distinguish them better than I can, but my understanding is they're very different things.

So it depends on who you talk to. A lot of people in the accounting profession like to lump it all together as AI, but it's not necessarily AI. That being said, you see large accounting firms using it in audit, particularly using AI to help identify risk. They're able to give the technology the background of the company and have the AI identify the risk areas. That being said, you still need the accounting professionals to determine whether or not those are accurate. So we definitely see it in that.

Right now, we probably see more of what I would refer to as cognitive technology. In other words, the decision rules are built into it. It helps with decision-making, but it's standard decision-making: if this happens, do that; if that happens, do this. But we also see AI being used in analyzing, for example, leases. There are different ways that you have to account for leases, and instead of a person having to read the lease, you put it through artificial intelligence to do the analysis. Those are just a few examples.
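As a concrete illustration of the "if this happens, do that" style of cognitive technology described above, here is a minimal sketch of a scripted lease-classification rule. The 75%-of-economic-life and 90%-of-fair-value thresholds are the classic bright-line guidelines, simplified here for illustration; the data fields and logic are hypothetical, not a model of any firm's actual audit tooling.

```python
# Illustrative rule-based "cognitive technology": a scripted lease
# classification check in the "if this happens, do that" style.
# Thresholds are simplified bright-line guidelines, for illustration only.
from dataclasses import dataclass

@dataclass
class Lease:
    term_years: float          # length of the lease
    asset_life_years: float    # economic life of the leased asset
    pv_of_payments: float      # present value of lease payments
    asset_fair_value: float    # fair value of the asset
    ownership_transfers: bool  # does ownership transfer at end of term?

def classify_lease(lease: Lease) -> str:
    """Return 'finance' if any scripted rule fires, else 'operating'."""
    if lease.ownership_transfers:
        return "finance"  # ownership transfers to the lessee
    if lease.term_years >= 0.75 * lease.asset_life_years:
        return "finance"  # term covers a major part of the asset's life
    if lease.pv_of_payments >= 0.90 * lease.asset_fair_value:
        return "finance"  # payments cover substantially all of fair value
    return "operating"

# Example: a 6-year lease on an asset with a 10-year life.
print(classify_lease(Lease(6, 10, 80_000, 100_000, False)))  # "operating"
```

A human professional would still review the output, as the panelist notes; the point is only that each rule is a fixed, scripted condition rather than learned behavior.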

>> I'll go next. So in the field of communication, we see it all over the place. Already we are replacing film, book, and music critics with recommendation algorithms. Most of us now choose our media content not based on reading a review but based on a recommendation we get through one of the streaming services, and that's all AI-generated.

Script writing, in both television and film: a lot of scripts are partially written by AI. We see it with the replacement of focus groups: we can feed a script into an AI that will evaluate the script and calculate what the box office receipts will be if the film is made, and studios actually use this to decide which films get made and which don't.

With interpersonal communication, we see it on the customer relations side, where we are now chatting all the time with chatbots rather than actually talking to human beings, and in call centers, where we're replacing call center personnel with chatbot facilities. There's the generation of new music: a lot of music is now either co-written or in some cases entirely written by artificial intelligence, and there are a number of algorithms that help on both the composition side and the production side.

And there's visual imaging, with DALL·E, [inaudible], Midjourney, all now providing original imagery that can be utilized for visual communication and that is replacing the role of human artists in a lot of online publications. So it is really exploding in the field of communication and information. >> There's a second question too, and certainly jump in. Maoyuan, do you have something to add? >> Just a simple comment from computer science: AI is everywhere. >> It is everything, right? [OVERLAPPING]

>> Yeah. [LAUGHTER] >> All the tools that tech companies build, most of them have [inaudible] AI in the background. And even in the learning process, I think AI is the most common thing.

Like, the most common is recommendation: a whole bunch of online materials and learning resources always recommend that you go here or there. And in computer science, for a whole bunch of things you have to touch AI, because we are working on it, we're creating those tools. So the students have to understand the technical details to work with these things.

>> Yeah. And maybe you can address the second question. One of the things I've seen in my field is that earlier tools haven't gotten rid of programmers; they're just enabling existing programmers to do more.

Microsoft Word did both: it got rid of a lot of clerical help, and it also raised the standards for what the rest of us need to do to produce a paper. Have you seen more of that doubling, maybe with different tools addressing different needs? >> Yeah, I'll add to that. In journalism,

that's more of what we're seeing. In the journalism process, there are multiple parts that are being increasingly automated. And I do want to bring up a good point, because I think this is also what Reva was talking about or trying to get at: how much of this is marketing? Because we hear a lot of hyperbole in the press, including about their own use of these technologies. But then also, are these things "really AI"? Within journalism, Nick Diakopoulos is a great researcher, and he talks about how, if you squint hard enough, it could be AI: some processes are artificial intelligence, some are machine learning, and some are just more simplistic automation, but because they come in and stand in for a human in some way, they get thrown into the AI camp.

That might be something that people want to look out for in their disciplines or in these workspaces: not necessarily whether it is or is not AI, which is important, but also automation that may occur through things like algorithms yet doesn't quite get into the AI camp. In journalism right now, we're seeing more of the augmentation, what Reva was getting at: MS Word got rid of clerical staff, but it did all these other things. We're seeing AI integrated into processes where it can help and can bolster productivity.

In journalism it's not just content creation. Yes, there are technologies that write, but there are also, for example, social media listening technologies that alert journalists to trends on Twitter or on social media about something that's happening. We use machine learning in investigative reporting, which is extremely helpful given the amount of paperwork investigative journalists have to go through. The Panama Papers, which was a big global investigation, could not have happened without AI. And then there's distribution. So it's opening up new avenues, for sure. One of the questions then becomes, when we ask whether AI or machine learning is going to do this or that, it's important to keep in mind there's a difference between the technologies and the people who make decisions in business. In journalism, for example, we could possibly say, well, these technologies aren't great, but they work well enough.

And if you're a media company that is owned by investors looking for return on investment, they may say, well, that's good enough then, and we'll get rid of people. So these are the types of things we're seeing now. It's not just a question of the technology; it's also a question of the economic systems and larger decisions by higher-ups. And those are the harder things for us to forecast, to tell our students about: what decisions will people who are far removed from their positions make in the future? >> I'm learning so much from all of you.

Thank you so much. And it's really sticking in my head that, even given the title, looking ahead, the future is now, I guess. [LAUGHTER] So how do we look ahead? It sounds like it's still a little bit murky and can be difficult to know; as with probably any technology, it rapidly changes and changes our world. So did anyone else have any comments on that future? What do we see even a year from now? >> I'll just say, given what we've experienced over the last week and a half with the release of GPT-4 and then Google Bard just a few days ago, I'm really hesitant to make any predictions very far out, because the rapid rate at which these things are coming at us has exceeded everyone's expectations.

And [NOISE] a year from now, who knows where we'll be. But certainly it will be more of the same, as we see these things advance and even have other players enter the market. >> Yeah, that smile on your face when I said that [LAUGHTER] was telling.

I do want to get to the questions in the text chat. I'm a language instructor; I do not have a computer science background. I want to develop oral exercises, like with Siri, where students can have a conversation, using open resources for text-to-speech or speech-to-text, but I do not know how to craft them together. Is there a template that I can use? So I guess one of my clarifying questions here: text-to-speech, what category of artificial intelligence would that fall under? Or does it seem like a tool? >> Yeah, a student can use it as a tool. You can type a text and it turns into audio, and audio can go back to text.

David already answered my question; he has some ideas about that, and thank you, David. But maybe other panelists have ideas, because I want to create something like Siri for my students. I can see going from text to audio, but I want to craft both of them into one complete exercise. I think of it as broader assessment or practice for my students. And I've seen a couple of options.

I've tried to read about them, but it just seems very complicated, and I wonder if there's some easy template that I can use? >> So I'll just follow up, because I answered already in the chat. Really, what we're talking about are two distinct technologies that are very interrelated. There's the chatbot, which is the text-based thing where you type on your keyboard and then read the responses on your screen.

And then there's the spoken dialogue system or digital assistant, like a Siri or Alexa, which just jams a speech-to-text recognizer on the front end and a text-to-speech synthesis module on the back end. It is much easier to just deal with the chatbot in terms of engaging students with the technology, because the other pieces of the chain of events make things more complicated in the process of teaching this material. So I have my students build a chatbot using the Pandorabots platform, which is what the Kuki chatbot uses, a five-time Loebner Prize-winning chatbot.
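To make that two-part architecture concrete, here is a minimal sketch of the spoken dialogue chain described above: a speech-to-text recognizer on the front end, a trivial keyword chatbot in the middle, and text-to-speech synthesis on the back end. It assumes the third-party Python packages SpeechRecognition, pyttsx3, and PyAudio are installed, and the scripted responses are hypothetical; this is not the Pandorabots workflow itself, just an illustration of the pipeline.

```python
# Minimal spoken dialogue loop: speech-to-text -> scripted chatbot -> text-to-speech.
# Assumes: pip install SpeechRecognition pyttsx3 pyaudio  (all third-party packages)
import speech_recognition as sr
import pyttsx3

# Hypothetical keyword-based responses; a real exercise might swap in a
# Pandorabots bot or another service for this "brain" in the middle.
RESPONSES = {
    "hello": "Hi there! What would you like to practice today?",
    "goodbye": "Goodbye! Nice talking with you.",
}

def chatbot_reply(text: str) -> str:
    """Map recognized text to a scripted reply (the chatbot step)."""
    for keyword, reply in RESPONSES.items():
        if keyword in text.lower():
            return reply
    return "Sorry, I didn't catch that. Could you rephrase?"

def main() -> None:
    recognizer = sr.Recognizer()  # speech-to-text front end
    tts = pyttsx3.init()          # text-to-speech back end
    while True:
        with sr.Microphone() as source:                # record one utterance
            print("Listening...")
            audio = recognizer.listen(source)
        try:
            text = recognizer.recognize_google(audio)  # cloud STT service
        except (sr.UnknownValueError, sr.RequestError):
            continue  # nothing recognized (or no network); listen again
        print("You said:", text)
        reply = chatbot_reply(text)
        print("Bot:", reply)
        tts.say(reply)      # queue the spoken reply
        tts.runAndWait()    # play it aloud
        if "goodbye" in text.lower():
            break

if __name__ == "__main__":
    main()
```

Replacing chatbot_reply with calls to a hosted bot would give the full Siri-style loop the questioner asked about, which is exactly the "two extra modules around a chatbot" design being described.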

But Pandorabots is really fast and easy. It takes very little programming knowledge on the student side, and it's more about scripting responses and developing a chatbot that you build a personality for. And then you can share it with each other; we run a quasi-Loebner Prize competition in class, where we talk to each other's chatbots and try to see which one performs best. But there's a lot you can do with the available tools that are already free to us through various vendors, without having to get too complicated or involved with the more technological components that you might want to have in play.

So if you can deal with a chatbot, I can show you Pandorabots and get you started with that. If you really want to build a Siri or Alexa, that's going to be a little more of an undertaking, because you have to worry about those two extra pieces, the input and output modules, that feed into the chatbot algorithm. >> Thank you for that clarification and the other information. There are so many tools and resources that you're throwing out there; it's just mind-boggling.

[LAUGHTER] It really is. So another question we have in the chat area, and even the phrase 'chat area' now seems so different to me: the recent AI art generator episode, with images of Trump being arrested, scared a lot of people regarding the potential disruptive power of AI in politics. Do you see a regulatory mechanism catching up soon? Should there be a regulatory scheme for these? >> So I just have to laugh here, and then I'm going to send it to David, because this is David's area of expertise, but we haven't caught up on the Internet [LAUGHTER] or social media. And if you think researchers are flat-footed on this, for politicians it's even more difficult. From the journalist's perspective, this is extremely disconcerting.

It's been going on for a while, but that is actually the number one concern among journalists: not necessarily whether or not AI is going to take their jobs, because, again, we've had a little bit longer to get used to this. ChatGPT is different, but AI has been around in journalism now for 10 years, so we've had a little more time to get used to that concept. The bigger concern is what automated technologies can do, because one thing they also do is make it easier for people to create content. And that can be extremely helpful, but it can also ramp up disinformation and misinformation in these environments.

And I know David has been working on the regulatory stuff. >> So I've been involved with a number of meetings for the Bipartisan Policy Center in Washington, DC, which is a joint congressional policy think tank. And there is desire on the side of regulators to do something.

But there hasn't been a lot of practical traction in getting anything through either the Senate or the House with regard to any new regulation for AI. The Biden administration put out a roadmap not too long ago, but that really is about protecting innovation in industry and less about protecting consumers and various vulnerable populations as we roll these things out. The EU is far ahead, obviously, because their legal system is much more statutorily organized than ours is.

And so they have been the leaders in developing a regulatory framework and some initiatives. But even there, they lag behind the innovation and the technology, and they recognize that keeping up with these changes is a real task in itself. A colleague of mine once said that technology moves at light speed while law and policy move at pen-and-paper speed, and that's what we're trying to negotiate here; that's the complication every government is looking at.

And because these things are done nationally by different national governments, every part of the world is going to have a different version of AI policy. Reconciling these across international boundaries is going to be a second initiative that will have to take place through the UN or some other international cooperation. >> Thank you. I'm just paging through the text chat here, and I guess, as we get to the end of our time today, this is a question that I had as well. It really is about privacy. What should we be talking to our students about as far as privacy when they're creating these ChatGPT accounts and using Google?

There are algorithms tracing their search histories. Again, what should we be telling our students about their privacy? >> I'm laughing because I do a terms-of-service exercise with my students. I ask them to read the terms of service for their favorite social media platform, and they come back horrified as a result of actually having read that thing that they never read and just click agree on. I think one of the places that gets students really focused on the privacy concerns is to actually invite them, and help them, to read the terms of service.

Because once you read the terms of service from OpenAI, you get to see exactly what data they're scraping in the process of creating your user profile. And if you just click agree because you want to use the tool, and don't pause to think about the privacy implications of signing up for one of these beta tests or one of these widely available services, I think we're missing an opportunity to engage students in a conversation about platforms, platform capitalism, and privacy. >> Anyone else have any thoughts about privacy? >> Thanks. The thing is a bit tricky. With a lot of tools, if you as a user, or the student as a user, want to use them, they kind of force you to click the agree button. Without clicking that button, there's no way to proceed and start testing the functionality.

So I think it's a bit tricky. We can bring all these privacy issues to the students and let them know that everything they use may collect things from them. But again, the decision still has to be left to the individual, who will choose to use it or not. I know this is pretty tricky with almost all of the apps people use today.

They have to go through that, and I would imagine 99% of users just click agree without reading, yeah. >> I think this issue of privacy... So everyone's like, I want to take David's class. Yes, I'm a former student.

Most people don't know that I'm actually an NIU grad and I had several classes with David, so yes, take David's class; maybe David needs to teach a masterclass here. But also, to Maoyuan's point about the fact that you either agree or you don't get to use these technologies: there's something to be reflective about when we're teaching about these technologies, which is what technologies we ourselves are adopting to teach about AI. Anytime I talk about technologies in my class, I try to clarify: these are the technologies we're using, and this is what they mean for privacy. But I also want to bring up another point here, and this actually comes out of my work with industry. A lot of people are "playing around with ChatGPT" and putting things into it.

And there's a really big debate right now in industry about proprietary information. I also want to bring this to people's attention as researchers. I know sometimes researchers are playing around with the technology: I have this really long paragraph, I want to turn it into an abstract, I'm going to throw it into ChatGPT and see if it can do it.

With ChatGPT, if you look at the privacy terms, you're turning that information over. Right now OpenAI says it's not using that information to then form new responses, but there is this concern. For example, Boeing, I know for a fact, has completely blocked OpenAI's ChatGPT because it's a defense contractor.

There are these much larger questions around privacy when it has to do with proprietary information and sensitive information. And again, this is not necessarily the way ChatGPT is working now, but we have to think about other applications, how they may function in the future, and the way that they grow and gain knowledge. I mean, we could spend an entire hour talking about this.

>> I know. [LAUGHTER] There's a lot more we can talk about. But as we do wrap up, do any of the other panelists have any final thoughts? What very important key takeaway did we maybe miss in this conversation that you would like to share with the audience? Did we cover everything? [LAUGHTER] >> I'll make one quick comment, thinking about what David and Andrea were saying. One of the things that I have done in my class, because you always want to think about ways to bring an ethics discussion into your class, and about these privacy policies and signing away all of your information: in my accounting information systems class, I had students do debates on that, because there's a trade-off between the convenience of using the website or the app and the privacy concerns.

And it's made for some really rich discussion among the students to set that up as a debate. So I'm just throwing that out there, as it might be a really good way to engage your students in this conversation. >> Yeah, I'm definitely taking notes on these assignments, which I think are a really positive way to use these tools and have these conversations.

So I do want to keep our ears open as we wrap up the session, but I also want to make sure that I put in some time to thank all of our panelists. Thank you so much. Really engaging, really interesting conversations, and so much more to unpack. So thank you so much for your time and your expertise.

Yes, there are lots of thank-yous in the chat area. And thank you again so much for being part of this and spending some time with us on a Friday morning the week after spring break, in many cases the week that we just ramped up for the second half of the semester. And thank you to all the participants for joining us and being so engaged in the conversation.

>> Will this recording or a transcript be made available? Some people are asking. >> Yes, thank you for pointing that out to me. The recording will be transcribed and available to everyone who attended. It may also go on our website.

>> Okay. >> But I'm going to stop the recording now, so if anybody has any maybe sensitive questions...
