How do people feel about AI?


Hello everyone, and thanks so much for joining us on this wonderful Tuesday afternoon; we are very excited to welcome you to this webinar event. Artificial intelligence technologies have increasingly impacted many aspects of our lives over the last few years, from the kinds of information we see online, to the way we apply for jobs, to the ways we interact with public services. These last few years have seen the UK government begin to develop a roadmap for regulating AI systems, and these last few months have seen public interest in AI spike following the launch of powerful systems like ChatGPT. As these technologies become ever more essential to our day-to-day lives, it's all the more important to understand the expectations and concerns the British public have about them. In November of last year, the Ada Lovelace Institute and the Alan Turing Institute conducted a nationally representative survey of over 4,000 adults in Britain to understand how the public currently experience and understand AI, and we're very excited to share the results of that survey with you today.

I'm delighted to have a few fantastic guests who will be speaking with us about these results. They are Dr Helen Margetts, a Turing Fellow and Director of the Public Policy Programme at the Alan Turing Institute and Professor of Society and the Internet at the University of Oxford; Helen is also a Professorial Fellow at Mansfield College. We also have Dr Shannon Vallor, the Baillie Gifford Professor in the Ethics of Data and Artificial Intelligence at the University of Edinburgh and Director of the Centre for Technomoral Futures in the Edinburgh Futures Institute. Joining us as well are Dr Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy at the University of Cambridge, and Ben Lyons, Head of External Affairs and Innovation at the Centre for Data Ethics and Innovation (CDEI).
We have quite a lot of results to get through today. We're going to start with a presentation from two of our researchers about the results of the project at a high level: Roshni Modhvadia is a researcher in public participation at the Ada Lovelace Institute, and Dr Florence Enock is a research associate in the online safety team within the Alan Turing Institute's public policy programme. After this presentation, we'll move into a moderated discussion with all of the panellists, including what these findings may mean for policy and regulatory practice going forward. If you have questions you'd like to ask, we very much welcome them; there's a Q&A function at the bottom of your screen, and if you need captions, there's also a 'show captions' function you can use. I'm very excited to welcome you all here today, and I'll hand over to Roshni and Florence to lead us off with the opening presentation.

Thanks, Andrew. I'm just going to provide a little bit of background to the survey before we show you the main findings. So firstly, why did we conduct this survey, and why now? People are coming across artificial intelligence technologies more and more in day-to-day life, and these technologies can impact many aspects of people's lives. There's currently a national focus on AI and surrounding policy, and we wanted to understand how the British public are currently experiencing AI; this is part of a key goal we have at Turing to drive an informed public conversation around AI. Some work on public attitudes to AI already exists, but it's been a really difficult topic to ask people about, because AI as a term is hard to define and often poorly understood. For this reason, we wanted to ask people about a range of specific AI uses that people might come across, in different contexts and for a variety of purposes, rather than asking about AI as a whole.
As part of this, we first wanted to find out how aware people are of the wide range of different uses of AI that are in place, and how much experience they report having with them. Existing public attitudes work also tends to ask people how they feel about AI technologies on a single positive-to-negative spectrum, and we felt it was really important to allow people to express positive and negative sentiment at the same time: to express how beneficial and how concerning they see various AI technologies to be, recognising that people might see benefits and concerns simultaneously. We also wanted to understand in greater detail the specific ways in which people think uses of AI can be beneficial, and the key concerns that people have. More generally, at this time of national conversation around AI, we also saw the importance of finding out about the public's preferences for governance and regulation, and how explainable the public would like or expect AI decisions to be. Given the fast-moving nature of this conversation, we wanted to ask an up-to-date sample of adults living in Britain, and we wanted a nationally representative sample to ensure inclusivity and to allow us to make comparisons across key demographic features.

To answer our research questions, we designed a survey which asked people about their attitudes towards, and experiences with, 17 specific uses of AI. For each use, we asked how beneficial and how concerning they perceived it to be, and about the key risks and benefits they associated with it. The 17 uses were chosen based on emerging policy priorities and increased use in public life, and you can see the uses we asked about here: they were clustered into seven overarching technologies, including facial recognition, risk and eligibility assessments, targeted online advertising, virtual assistants, robotics, and simulations.
We had different examples for each one, and for most of these uses, respondents were randomly allocated to one or two of them. Following these questions, people were also asked about their preferences for governance, regulation, and explainability. Our sample was drawn through Kantar's Public Voice panel, which is a probability panel, meaning that members of the population are randomly selected and effort is made to ensure that each member of the population has an equal chance of being selected. This ensures a true cross-section of the population, which limits the risk of sampling bias, and is considered the gold standard of sampling in survey research. We of course made sure the sample was nationally representative across key features, and it included an offline sample of harder-to-reach participants who were interviewed by telephone rather than completing the survey online. Our final sample included over 4,000 people, nationally representative of Britain. We provide descriptive statistics for the proportions of overall responses for each question in the survey, and we use chi-square significance testing to test for meaningful differences between groups, based for example on demographic features like age or gender, and also on attitudinal features. In the report, we also have some further multivariate regression analyses, which allow us to understand predictors of the extent to which people believe the benefits of some of these technologies outweigh the concerns. I'll now pass over to Roshni, who will talk us through some of the key findings.

Thank you, Florence. So, as Florence mentioned, we asked overall how beneficial people felt each technology to be, and separately how concerning, and we found that for most of the technologies we asked about, people had broadly positive views.
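The chi-square significance testing Florence describes can be sketched in a few lines. This is an illustrative example only: the table of counts below (younger versus older respondents who do or don't rate a technology as beneficial) is hypothetical, not a figure from the survey.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table of counts."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column variables
            expected = row_totals[i] * col_totals[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: rows = age group, columns = beneficial / not beneficial
table = [[320, 180],   # younger respondents
         [260, 240]]   # older respondents
stat = chi_square_stat(table)  # ~14.78, well above the 3.84 critical value at df=1
```

In practice, the survey team would use a standard implementation such as `scipy.stats.chi2_contingency`, which also returns the p-value and degrees of freedom; the hand-rolled function above just shows the idea.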
You can see from this figure, where green bars show the percentage of people who think a technology is either somewhat or very beneficial, that 88% of people felt that the use of AI for predicting cancer risk was beneficial, whereas this figure was only 33% for targeted political advertising. We found that, generally, some types of uses of AI were viewed more favourably than others: in particular, uses related to science and education, and uses related to security. So, for instance, facial recognition at border control was viewed as more beneficial, whilst applications in some types of robotics, in advertising, and even some public sector uses of AI were viewed as least beneficial. Generally, this pattern was followed in overall levels of concern as well, with over 70% of people feeling either somewhat or very concerned about the use of autonomous weapons, and similarly for driverless cars. On top of this, around 60% of the public were concerned about targeted online advertising, as well as the use of AI in recruitment. And although uses in science, such as determining the risk of cancer, were viewed favourably, around half of the public expressed concerns about the use of AI in healthcare: we found that 55% of people felt virtual healthcare assistants, such as healthcare chatbots, were concerning, and 48% felt the same about robotic care assistants. When we looked at overall benefit scores for each technology alongside overall concern, we found that for 11 of the 17 technologies we asked about, perceived benefit actually outweighed concern, suggesting broad positivity towards some uses of AI.

However, this broad positivity is not the complete picture, and there's a lot of nuance in attitudes when you look more closely at the data. Regardless of how beneficial or concerning they perceived each use of AI to be overall, all participants were able to select any specific benefits they thought related to that technology, and similarly any specific concerns. This was from a pre-selected list of potential benefits and concerns, and they could choose as many options as they thought applied.
They could also give an alternative open-ended response, say they don't know, or even select 'none of these'. We found that there are some very specific advantages and disadvantages people see in relation to all of the AI technologies we asked about, regardless of how concerning or beneficial the technology was perceived to be overall. So here we have the examples of AI for detecting cancer risk and facial recognition at border control, the two technologies perceived to be most beneficial of all the ones we asked about. Even still, we can see that half of the public said they were concerned about relying too heavily on this technology over the judgment of professionals for cancer risk prediction, and 47% were concerned about what this technology meant for accountability for mistakes: where would responsibility lie if it made an incorrect judgment? Similarly, for facial recognition at border control, people were again concerned about accountability, but on top of this, concerns around job cuts and the reliability of these tools also came up. With driverless cars and autonomous weapons, the two technologies people were most concerned about, we still found that, interestingly, 63% of people felt a benefit of driverless cars could be improved accessibility for disabled people or those who may find driving difficult, and for autonomous weapons, over half said the preservation of soldiers' lives could be a benefit. More generally, when we looked at the range of benefits and concerns commonly chosen across all technologies, we found that people often felt speed, efficiency, and making things fairer and more accessible were key benefits of many AI applications, whilst the loss of human judgment, accountability for mistakes, and transparency in decision-making were key concerns.
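The comparison of overall benefit against overall concern that Roshni describes amounts to a simple "net benefit" calculation per technology. As a rough sketch: the 88% and 33% benefit figures below are from the talk, but every concern figure and the driverless-cars row are invented placeholders for illustration.

```python
# (percent seeing the use as beneficial, percent seeing it as concerning);
# only the two benefit figures from the talk are real, the rest are placeholders
scores = {
    "AI for predicting cancer risk": (88, 50),
    "targeted political advertising": (33, 60),
    "driverless cars": (45, 70),
}

# Net benefit per use: positive means perceived benefit outweighs concern,
# as it did for 11 of the 17 uses in the survey
net_benefit = {use: benefit - concern for use, (benefit, concern) in scores.items()}
ranked = sorted(net_benefit, key=net_benefit.get, reverse=True)
```

With these placeholder numbers, cancer risk prediction would rank first with a net score of +38, while the two advertising and transport uses would score negatively.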
We ended our survey with some more general questions about regulation and governance of AI, as well as opinions on whether an explanation should accompany AI-made decisions, and we found overall that the public want regulation and that they value these explanations. When asked what would make them more comfortable with the use of AI, 62% of people said laws and regulation, and 59% said clear procedures in place to appeal AI decisions. In terms of governance, and who should regulate, an independent regulator was the most popular option selected. Both these findings together suggest a desire for avenues to appeal decisions made by AI, and broadly for regulation of these tools. Finally, when presented with various statements that made trade-offs between having an explanation of how an AI system reached a decision, or having an accurate decision from that system, we found that the public value explanations. Most people believed humans should ultimately make decisions and be able to explain them, but this was followed by AI either sometimes or always providing an explanation, even if it reduces the accuracy of that decision; only 10% of people said accuracy is more important than providing an explanation. This preference for an explanation aligns with some of the specific concerns we found earlier in the survey, around transparency of decision-making and accountability for mistakes, highlighting a general reluctance to blindly defer to AI-made decisions.

To conclude, we'd like to acknowledge the colleagues who contributed to this project. It was a huge collaborative effort between the Alan Turing Institute and the Ada Lovelace Institute, as well as LSE's Methodology Department, and we're very thankful to our funders, the Alan Turing Institute and the Arts and Humanities Research Council, for making this project possible. Taken together, we feel the findings from this project have implications for policymakers and developers alike, and we really encourage you to read the reports. With that, I'll hand back to Andrew.

Thanks so much, Roshni.
There were lots of exciting and fascinating findings there for us to discuss, and I'm really excited to welcome in our panel: Helen, Gina, Shannon, and Ben. Each panellist is going to give about three minutes of initial thoughts in response to these findings. Helen, I'm curious if you want to start us off: what do you think the big, surprising, most interesting findings were from this survey?

Thanks, Andrew. I guess there are three things that I think are really notable here. One, as Florence mentioned at the beginning, we wanted not to ask about AI in the general sense, because we feel that's a very difficult thing to either define or understand; we wanted to ask how people experience these technologies in the daily life with which they are already intertwined. The point about AI is that it's a kind of horizontal technology, it gets into everything, but the implications are also vertical: it has different implications and different effects in different contexts, with different uses, and in different sectors, and I think this survey has shown really clearly the value of asking about it that way. Two, the findings show that people have very nuanced views. I wasn't surprised by this, because you always find this out when you do public engagement work, but finding that people have nuanced and sophisticated views about these technologies matters: for the majority of the uses we asked about, people were positive, but they were able to hold those positive views while maintaining a variety of concerns, and I think that's something really important to come out of this. And three, of course, the point about regulation is important as well; this is where the national conversation is at the moment.
It is really important to understand what people want out of regulation, and I think those findings will be very important going forward.

Thanks so much, Helen. Some really good points there about the nuance of these findings, and about the challenge of moving beyond a general definition of AI. Shannon, I'll hand to you now for your initial thoughts, and any interesting or immediately surprising findings that jumped out at you. Oh, sorry, Shannon, you might be muted. You would think, three years in, we would have figured out how to do this.

Thanks, Andrew. I'm really excited to dig into this in the conversation, and I'll start by mentioning a couple of things that I found really interesting and that raised further questions, as these surveys often do: we get a piece of information, and that leads us to an even deeper question we want to ask. One is: where are the public learning about AI? We're understanding the level of familiarity they think they have, and the extent to which it's based in personal experience, but if it's not from personal experience, who are they listening to? This is particularly urgent considering the latest hype cycle around ChatGPT, and how much distortion of the reality of the technology has been present in some media outlets and in some social media discourses. So one question is: how well-informed, how authoritative, are the perspectives on AI that the public are getting? A second question is the difference between younger and older generations, especially in their answers to the question of who is, or should be, responsible for governing these technologies. I wonder if that reflects a difference in younger generations' faith in institutions, and whether we might then have to grapple with that.
But the thing I want to talk about most is what might seem a surprising result: the favouring of explanations over accuracy. Actually, it lines up quite well with some of the intuitions many of us have, and with work I've been doing on answerability as a way of thinking about responsibility and trustworthy autonomous systems; we have a project working on that. One of the really interesting things here is that if you see this result as irrational, you might be missing something. That is, if you think it's irrational to care more about the explanation of a less accurate system, and to favour it over a system that's more accurate but can't explain itself, I think what you're missing is the importance of the trust relation when we are vulnerable to something that has us in its power, that has the power to harm us or help us. There's a clue, I think, in the results, where there was great concern about the loss of professional judgment. We might naturally focus on the word 'judgment', but actually I think it's the word 'professional' that we need to focus on. What do professionals do? The word is rooted in 'to profess', to profess a vow to others, and so professionals are people who profess vows or duties to those they serve and help: vows to use their knowledge and power responsibly, in ways that vulnerable others can trust. A professional is someone you can trust to slice into your brain if you have a tumour; a professional is someone you can trust with your money if they're a professional financial advisor. I think one of the things we need to understand is that the reason for this favouring of explanation, even when accuracy is not as high, is that we want to know there's someone on the other side who understands our vulnerability to this system.
We want to know that they are willing to assume certain responsibilities and profess certain duties to us in this interaction, and I think that's what we're missing so far with AI and autonomous systems: we don't have that accountability, that answerability, for these systems and their power.

Very good points about the emphasis in that word 'professional', which I think comes across quite clearly in a lot of the answers about the technologies we asked about. Thank you so much, Shannon. Ben, I'll hand over to you now as someone who's done public attitudes research yourself at the CDEI; I would love to hear your thoughts on the results of the survey.

Thanks, Andrew. It's a really important and timely piece of research. I was asked yesterday by someone, 'What do the public think about AI?', and the frustrating but honest answer is that it depends. I think the focus in this research on use cases, really diving deep into different use cases, is really beneficial and important. In particular, I thought some of the interesting nuances that came out of this research were the suggestion that the public may see particular benefit in situations where AI enhances human decision-making, as opposed to replacing human decision-making. I also think the idea of the net benefit score is a helpful way of looking at the extent to which there are meaningful trade-offs, which can sometimes be lost in overall positive-versus-negative type questions. It was striking to see, for example, that using AI to assess the risk of cancer was something people saw as broadly positive, with lower levels of concern, but all the same with concerns about where things could go wrong.
I'd also suggest that concern is closely related to where there is easily identifiable potential benefit or harm to individuals, where things can go wrong, and where we have limited human oversight, challenge, or ability to understand the role of AI in decision-making processes; there, concern can be significant. In our research, what sometimes comes through is a focus on the role of the actor using the AI, as a bit of a proxy for 'what is the intent, and do I trust the system the AI is being used within?'. One of the things that's really helpful to see come through here is that the concerns are very different across different use cases: striking to see that on targeted advertising, for example, privacy is a top concern; on driverless cars, reliability and safety are big concerns; and in decisions about individuals, it's things like accountability and contestability. So it's really helpful to have that nuance: while there are a number of principles which are important across the piece in terms of how we think about AI governance, the importance of different principles varies on a contextual basis, and what they mean in practice varies too. Going forward, it would be really interesting to see how these attitudes change based on personal experience: targeted advertising is something people see in their day-to-day lives, whereas driverless cars are something people read about in the media but don't necessarily use themselves. At the CDEI, working within the Department for Science, Innovation and Technology, we'll be looking really closely at these findings to inform our work on AI assurance, and on how we build a strong platform of tools to build trust in AI.
We will be focusing on some of the use cases which come through in this work, including connected and automated vehicles, and HR and recruitment, and we'll be sharing more on that very soon.

Thanks so much, Ben. Some really fantastic points there. I think that notion of who is behind a technology is very much an important factor covered in this survey as well, and that point about making this a longitudinal study, to see how these attitudes may change over time, is an excellent one. I can see some questions from the audience about how these technologies were chosen: one of the main reasons was policy windows and the media focus and attention on these technologies, so we very much hope this will be something we can look at longitudinally to see how attitudes change over time. Gina is our last panellist to give initial thoughts before we move to the Q&A; I can see those in the audience have already started to add to the Q&A list, but please continue to do so and I will work them in. Gina, I'm curious for your thoughts on the major interesting findings that stuck out to you in reading the report.

Well, first, compliments to the entire team for the hard work that went into this great report. It's incredibly useful, the timing is apt, and it gives us something to base the conversations on. For me, one of the things that's very striking is the very large gap between the risks around AI as perceived by the public in a survey like this, by experts, and in the news conversation we are having. One thing that I think will be incredibly useful for this report to do is help point to where we are in each of those different conversations: where experts are worried about particular concerns from machine learning and artificial intelligence technologies.
Then there is what trickles into public conversation and public understanding, and how news conversations may be changing or shaping that. Keeping those three in mind helps us design better ways to raise awareness around the different sets of concerns we care about. The best way to point to that, and it's in the report, is the gap between today's fact and science fiction. We have a lot of talk about driverless cars in the news, but we don't have so much of a conversation about, say, facial recognition technologies being used by the London Metropolitan Police. We have a lot of conversation in the news about some of these cases, but the kind of everyday machine learning algorithms that people are using aren't necessarily ones they're worried or concerned about, even though experts have enormous concerns around questions of bias, fairness, accountability, and accuracy. So this report really is to be commended for disentangling some of those, and it points to something I find really interesting: how perceptions around, say, driverless cars are dominating the perceptions people have. I'll point to one of the questions, about virtual reality in education, for example: data are already being processed in schools through the use of third-party apps that are commonplace in British schools. That's a concern that would take a long time to explain in a survey, and it's something many parents would have experience with, but seeing it as AI is not necessarily something people do, because they're not thinking of the whole smartphone, tablet, and PC array as being driven by these technologies. And third, importantly, this survey marks a moment in time before the conversation shifted to ChatGPT.
What I find interesting about the national conversation we're now having about generative AI is that it's exactly what Shannon just said about the professional: the middle-class and professional worries around what tasks will be replaced have really driven the news agenda in the last six weeks, and it's something that is certainly concerning to government, to policymakers, and to civil society organisations. What we have here is a great capture of how people were imagining what AI will be used for before generative AI became a commonplace understanding in the news cycle. That perception, I think, is useful to hold on to, and it will be very useful to go back to in 6, 12, or 18 months' time to see where people are. Overall, I would say this report is to be commended. It is incredibly useful to have this understanding of where people are worried and where they are not. It also shows the work we have to do around responsible artificial intelligence technologies, and how we create national conversations about what risks we're willing to bear and what regulations, in what use cases and what scenarios, we need to get in place. That is going into the conversations that will be happening over the next week through London Tech Week, and all the AI events and announcements that will be made then; it's incredibly important to show that we've got a strong factual basis for those conversations.

Thank you so much, Gina, some really fascinating points. One thing that I'm hearing a couple of panellists, and quite a lot of the questions from the audience, touching on is: what, if anything, has changed since ChatGPT was launched, and what is shaping public opinion? How might a survey done in November of 2022 differ from a survey done in November of 2023?
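Statistically, comparing a November 2022 wave with a hypothetical later wave comes down to testing whether a proportion has shifted between two independent samples. A minimal sketch of a two-proportion z-test, with entirely invented counts (how many of 1,000 respondents in each wave rated a given use beneficial):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    # Pooled proportion under the null hypothesis of no change between waves
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 880/1000 rated a use beneficial in wave 1, 840/1000 in wave 2
z = two_proportion_z(880, 1000, 840, 1000)  # ~2.58; |z| > 1.96 suggests a real shift
```

A real wave-on-wave comparison would also need to account for the survey weights used to make each sample nationally representative; this unweighted sketch just shows the basic logic.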
Helen and Shannon, I wonder if I could start with you on this question. Helen, could you start us off: what do you think might be different in six months' time if we were to run this survey again, in light of the last six months of general AI madness, so to speak?

Thanks, and I liked what Gina said there, because in some ways at first it seemed, 'oh no, we missed ChatGPT', but it wouldn't have been a good time to ask about it in this really peaking hype cycle. I really think we have to wait a bit until people have domesticated it, because the uses we asked about are in some sense domesticated, even if people don't know they're AI. Gina made a good point there: a lot of AI, Google Translate for example, is very sophisticated today, but people don't necessarily recognise it as AI, so it's good that we weren't just asking people about GPT. I think your question, though, really does reinforce the importance of keeping the conversation going. A survey is great, and I'm so happy we have this baseline, and I should say to everyone on the call that we're releasing the data, which will be a complete treasure trove for people to delve into some of these issues. I noticed somebody asked about facial recognition technologies and why people weren't so worried about them; well, it may be that they don't know very much about them, they don't know what the concerns might be, and so on, and that might be a really important indicator for the public conversation. So there's lots to do, and some of that can be done by analysing the data. But in six months' time I hope we would be doing more: for instance, I know that all the organisations represented on this call, and I'm sure many organisations in the audience, I can see the ICO there for example, are involved in all sorts of citizen engagement work.
The science of citizen involvement has really moved on, so there are things like citizens' juries, and I think we will be using ChatGPT itself to ask people what they think about the latest generation of AI technologies; there are all sorts of ways in which it will be really helpful. So I think we've got to keep at it, as it were: this has to be a continual stream of engagement rather than a one-off.

Thanks so much, Helen. Shannon, I'm curious for your views in responding to that as well. It does seem like quite a lot has changed; how might you anticipate a future version of the survey reflecting that?

I think that's a great question, but I agree with Helen: I actually think it's ideal that the survey was taken before this latest round of AI hype and confusion, because I think that would be reflected in what you saw if you did the survey right now. That is, you would see fear, you would see uncertainty, you would see confusion. Public attitudes are most valuable when they're relatively informed, relatively based, as Helen has suggested, on a version of the technology that has begun to settle into the fabric of daily life, so that we have a sense of how it might affect or change us, or organisations, or institutions. The current environment is just not conducive to that kind of informed perspective; it's conducive to confusion and uncertainty. I do think it will be interesting, in perhaps a year's time, to see what has happened, particularly around two issues with ChatGPT that we know about early on. One, of course, is the perpetuation of false or erroneous content that leaks its way into authoritative settings; there's a lot of worry about the pollution of trusted sources by ChatGPT.
that very quickly, I think a year from now public attitudes around AI are going to be profoundly affected by it. The whole point of AI, if it has any point, is to provide a kind of additional cognitive strength to the human social environment, and if it's draining our ability to track truth, if it's muddying the waters between expert perspectives and fiction, then it doesn't fulfil the function implied by its name. So I think we have a critical moment here to figure out whether we're going to address those risks. The other thing that will be interesting is people's attitudes about copyright issues and the ownership of creative labour, the compensation of human creative labour, and the fair treatment of that labour, because I think right now there's a lot of anger brewing in many parts of the community about that. Again, it remains to be seen whether this is an issue that, as Helen says, we can domesticate in some way, one that is going to return to an equilibrium broadly acceptable to the wider communities who are impacted by this, or whether we're going to fail in that domestication. Public attitudes will be very different depending on which turns out to be the case. So yes, I think in a year's time we'll want to know; I'm hoping for a more positive sort of outcome by then.

Very excellent points, particularly about the expectations that are currently underway. Ben, I want to turn to you and ask a question, drawing again from some of the questions the audience are asking, about how these survey results compare to previous and other surveys that you've done at the CDEI. I think a particular question around trust comes up in this survey, and I'm aware it's something your team has also looked at: this
notion of trust in these kinds of systems. How do you think these findings challenge or change what previous surveys have shown, and a bit more broadly, how is your team constructing or understanding public trust in these systems?

I think, firstly, one of the things that is really helpful is the delineation between headline findings towards the term AI and the nuances of the different views that come through when you're looking at different use cases. For example, we run a regular tracker survey into public attitudes towards data and AI, and we are about to start work on the third wave of that survey; I would encourage people on the call who want to feed in thoughts and questions to get in touch. One of the questions we ask is a big 'use one word to describe what you think about AI', and 'scary' is a word that comes up quite a lot, and quite prominently. When people look at the use cases for AI, both actual and potential, we tend to find, as you've seen in this research, higher levels of support for a number of mainstream or likely use cases, and on balance a sense that, when people consider a number of use cases, the perceived benefits might tend to outweigh the risks. So I think drawing out that distinction between the term AI and its applications is really helpful here. One of the areas where perhaps the jury might be out is on the explainability versus accuracy point. I've seen a range of research which points more on the side of accuracy
being important, some of which looks across different sectors and some with a particular focus on healthcare. So I think there's a range of evidence at the moment which points in slightly different directions on that, and other research I've seen tends to suggest there is quite a close relationship, putting context in again, between where accuracy is most highly valued and where explainability is important to people. One other point I'd make is that it's striking, as others have mentioned, that this is perhaps one of the last big pieces of research before AI was a widely understood, widely recognised term in the UK. Often when we've done public engagement, people have heard the term, but they might only know it's something to do with computers, or they might link it to cookies, or perhaps to slightly more science-fiction scenarios; people don't necessarily feel able to define AI. I think what we're going to see over the coming year is people defining AI much more closely based on their experience of ChatGPT, and that sort of chatbot LLM is clearly one significant application of AI, but it is just one application. So I think it will be interesting in future research to ensure that people can consider the variety of ways in which AI can be used, and to be aware that perceptions in future are going to be very highly framed both by consumer experience with ChatGPT and, as we see in the adoption of technologies, by which sectors adopt AI most quickly and most effectively, and by perceptions of the impacts of
that adoption of AI as it stops being something purely theoretical and becomes something actually experienced by people. I think that's a really important thing.

Thanks so much, Ben. It's interesting, that point about accuracy; any machine learning researcher in the room might be wringing their hands about accuracy being the most important metric instead of precision or recall, but it's a fascinating point, and I think one for more qualitative study in future research.

Gina, I want to turn to you. There are a lot of questions in the chat proposing different governance approaches, and one of the findings from the survey was this notion that people really do want regulation, with some interesting demographic differences, as Shannon pointed out, in what people seem to want. But there are also so many governance and regulatory proposals being announced right now, from moratoriums, to a focus on intellectual property rights, to a Bill of Rights like the US is proposing, to the context- and sector-specific approach that the UK's white paper on AI regulation is taking, to the risk-based approach in the EU. I'm curious: from these findings, is there any kind of insight you might have about what they might mean for a policymaker who finds themselves in this position of trying to figure out how to govern these types of technologies?

I think the strongest takeaway from this report is that we have to think about AI in specific, concrete use cases. The idea of using this poorly constructed term, AI, as a blanket over a lot of different technologies, a lot of different methods underlying a lot of different activities and situations, is a difficult one to get your head around anyway, and to do that for regulation is not likely to be successful. So what this
report really shows is that people have different perceptions of what is useful to them and what is risky to them, and those perceptions are, I think, very smartly captured in this report through those use cases. Now, that said, there are big use cases that weren't in this report, I'm sure for really good reasons; you can't ask about everything. For example, people are already experiencing AI in entertainment: recommendations for what they watch on streaming services are already driven by algorithmic decision-making. That's a case that I suspect most people would, on the whole, not find particularly risky; if you get a bad Netflix recommendation or a bad Amazon recommendation, it's not going to harm you. But asking whether someone approves of facial recognition technologies being used by the police might differ depending on whether people perceive those technologies as helping to make policing fairer or making policing more biased, and that will, I would assume, change depending on people's attitudes toward the police. So I think there's a lot of subtlety we still need around how we have conversations about regulation. A policymaker today picking up this report should first be grateful that there's some sense cutting through the whole lot of hype that is out there right now. And let me again make one more plug: the report focuses on these real, everyday interactions that people have (Helen and Shannon have used the word 'domesticated') rather than the very far-out, far-fetched science-fiction kinds of concerns we're hearing around the moratorium letters, which are like their own hype cycle. We have a convenient cover happening now of future, far-out risks, rather than the conversations we really need to be having about the everyday, right-here-right-now risks of artificial intelligence systems. So what
this report is to be applauded for is that it helps us anchor those conversations about governance in something that's real and here, rather than something that's imagined and out there.

Really brilliant points. Helen, I'd love to hear your thoughts on the same question of lessons to be learned here for policymakers and those working on AI regulation right now.

Absolutely, and I very much agree with Gina. Of course the UK has gone down this sector-specific route for regulation, which in many ways is exciting; there's a real chance here to show it can work. As the report shows very clearly, and as we've all said, AI has different implications in different sectors, so there are different implications for different regulators. One of the things I would like to see is this kind of way of measuring public attitudes, along with the other approaches we've talked about, actually being used by regulators. You've got a clear message here about what people think would make them feel more comfortable with certain technologies. We have seen, for example, the Turing, with the ICO, produce Project ExplAIn; the ICO's explainability guidance was co-badged guidance. That's why I really want to see insights from reports like this, and of course other reports, actually feeding into the design, the development, the deployment, and the regulation of these technologies. I think that's what we would all hope for.

Excellent points. Shannon, I want to twist this question slightly. Right now we're seeing a lot of calls for industry self-regulation; it's almost like we're living through 2012 all over again, with calls for moratoriums and calls for voluntary governance practices. As you mentioned in your opening remarks, it's fascinating to
see this kind of generational difference in what people are expecting. I'm curious whether the survey suggests any findings that might shape or influence practitioners who are developing or deploying AI systems going forward.

Yeah, that's a great question. I'll come back to this question of accountability and human judgment, and professional judgment certainly, but broadly speaking, I think one of the things it tells practitioners is that people really don't want their minds to be replaced or made obsolete. The nature of human judgment is arguably a value in and of itself, not just because of the accurate results it reaches in solving certain kinds of problems or puzzles; human judgment is a capability that, in and of itself, we have a reason to want to exercise, cultivate, retain, and strengthen. Often, among people in the responsible AI space, I see a very clear focus on issues like fairness, transparency, and safety, and those are all quite important, but often there's a sort of techno-solutionist turn where the assumption is that if the machine can do a thing, then it should do that thing instead of a human. That loses the question of what the things are that humans have an intrinsic need, or right, to be doing, and where we can make sure that new technologies support those capabilities and enhance our capacity to exercise them, as opposed to coming in and changing the incentives in the economy or the political system such that we can no longer freely or broadly exercise those capabilities. That's what we worry about, for example, in the creative sector, with the incentives for creative labour going away as a result of an unplanned intrusion of technologies into a sector: technology that was not set up specifically to enhance and enrich human creative labour, but which in many cases is
already being used to automate and replace it. We don't have to use it that way, but it's not going to automatically be used in a way that doesn't replace humans; that has to be a choice. That has to be a deliberate question for policymakers, but also for designers, in terms of what they have in mind when they build a tool: whether they're making it easier to substitute it for a human, or to empower a human with the tool. And the final thing I'll say, because you mentioned it, Andrew: this is not our first rodeo. For those of us who've been working in the tech ethics field, this is maybe our third or fourth rodeo, if not with AI then with social media or smartphones. We've run this hype, regulation, harm, domestication cycle many times, and I think what regulators and policymakers really need to learn from it is that incentives are the most important lever they have for shaping a responsible AI ecosystem. Talking about the goals the ecosystem is aiming for is all fine and good, and current AI regulation is identifying the right high-level principles, but the question is whether you have the incentives in the ecosystem properly arranged to reinforce the organisations and actors in that ecosystem to achieve that goal of responsible AI. We didn't have those incentives in place for social media platforms; we had counter-incentives, where in fact companies who wanted to do the right thing had commercial and legal barriers to doing so. They might get sued by their shareholders if they made a certain choice that favoured the public interest over their shareholders' interests. We need to make sure that the incentives are different this time, and I hope that policymakers can look at the desire for accountability in this survey and say: we need to deliver that this time. We can't just make the same mistakes over and over again;
this time, let's lead with accountability, and let's ensure that the incentives for everyone are there, because in the end that benefits innovation. Innovation gets better, people are more ready to adopt it, there's less fear, there's less uncertainty, there's more trust. But you can't get that for free; you have to get it through the kind of smart policy and regulation that strengthens innovation by strengthening accountability.

Brilliant points, thank you so much, Shannon. We have about two minutes of time left, so I'm going to ask each panellist to end with one question they would like to see in a future survey on public attitudes to AI. Ben, I'll start with you, and if we can keep these answers quite short, because we do have a hard stop at the hour.

I think understanding where people believe they've experienced AI, and understanding not just attitudes towards potential examples but the differences between perceived use cases and the use cases people have actually experienced, along with their features and people's behaviours and attitudes towards them, will be a really important area for future research.

Excellent point. Gina?

Like a qualitative researcher, I'm going to ask three, though I'm thinking like a survey researcher, because they can't be in depth: Do you think AI will impact your job? What tasks at your job do you think AI will impact? And how likely do you think that is to happen?

Excellent questions, all very job-oriented, and an exciting area. Shannon, I'll hand to you next.

Yeah, I think it would be really interesting to learn more about what respondents mean when they say they want explanations or they want accountability, because we know those mean many different things in different contexts, and often people might say they want an explanation if that's the choice they're given, but what they might
be asking for is not an explanation but a justification, or some other kind of promise or answer. That's one thing we might be able to use a survey to learn more about. Another is to have people think bigger than just what's being presented to them, and ask: what kinds of problems would you like AI to help you or your community solve, or if not solve, at least tackle more effectively? Instead of saying 'here's a technology, find a good use for it', I hope we can get back to this question of what the fundamental human needs are, what the things are that we're lacking, and how new technologies can help us be fuller and flourish more broadly. I think people are pretty good at figuring out which problems in their lives we don't seem to have the resources to solve, or where there might be some way that technology could assist us, and I'd like to see more information about that be gathered.

Brilliant. Helen, I will leave you with the final word.

Thank you very much. Well, I'm going to be really boring and say that we must ask these questions again, so that we have a point of comparison. I think we should do that, but we have to be careful, because surveys can become very stultified by that kind of approach. I agree with you, Shannon, really; I think we should really delve into some of the questions, even some of the ones we haven't talked about here, like who produces these technologies and whether that matters to you, the developers and producers of these technologies. And perhaps, going back to something I mentioned earlier, we should actually use some of these technologies to delve in, in a sort of free-form way, which is something that I think large language models are going to be really exciting for, with appropriate guardrails of course.

With appropriate guardrails indeed. Well, I want to thank everyone so much
for joining us today. I'm afraid that is all the time we have. A massive thanks to all of our panellists and to our research team; special thanks to Patrick Sturgis, Oriol Bosch, and Katya Kostadintcheva at the London School of Economics. Thanks to the Arts and Humanities Research Council and the Alan Turing Institute for funding to support this project, and to Kantar for their incredible work delivering it. And thank you, all of you, for joining us for this discussion; we hope it is the start of many more discussions on these attitudes to come. Have a lovely rest of the day, and take care, everyone.

2023-06-13 07:49

