Prototyping AI ethics futures: Building capacity


Everybody, thank you very much for joining us, and welcome to the first of this series of events organized by the Ada Lovelace Institute, which we'll be running every day this week under the heading Prototyping AI Ethics Futures. I'm Edward Harcourt, director of research, strategy and innovation at the Arts and Humanities Research Council and, when I have time to spare, also a philosopher. The program is being recorded and will be available after it ends on the Ada Lovelace Institute website; closed captions are not available. We would love you to pose questions: see the Q&A function at the bottom of your control panel. You're also very welcome to engage in the conversation on Twitter, via the Ada Lovelace Institute account and the AI ethics futures hashtag.

It's a pleasure to be here. I think everybody will be aware of how many pressing issues of an ethical and social nature are thrown up by AI and data science, and so the AHRC is pleased to have been funding what we see as some cutting-edge work in this area through our partnership with the Ada Lovelace Institute and the JUST AI network; you'll hear more about that today from one of our speakers, Alison Powell, a little later. Whatever area you're doing research in, there are major challenges now because of the continuing COVID pandemic, but there are challenges that are peculiar to the AI ethics space. One could pose this in terms of skills. When trying to think about the ethics of AI, many people, simply on the basis of reading an article in the New Yorker or in Prospect, can reel off a list of the problem areas, like transparency, responsibility and diversity, which bedevil the field. But how do you grow a community who can take the discussion of those issues to the next level, who can deepen our understanding of those issues and help us to make progress with them? That's not an easy task, because we are often faced with disciplinary communities with disjoint skill sets: some people who are expert in tech development, others who are expert in ethical and social-science questions. Growing an interdisciplinary community where both sides of that skills divide can contribute in a symmetrical way, and creating a community which is no longer divided along those skills lines, is what I see as the major challenge of this field, and one which we as a research funder at the AHRC are doing our best to solve.

So without more ado, I'm going to introduce the first of our speakers, Dr Alison Powell. Alison is a member of the Department of Media and Communications at the LSE, and she is currently the PI on the JUST AI network, a co-fund between the Ada Lovelace Institute and the Arts and Humanities Research Council. Her specialization has been in the various manifestations of the way in which our values interact with the design of technology. So, Alison, over to you.

Thank you so much, Edward, it's really a pleasure to be here. Today I'm going to take the opportunity of this amazing meeting of the minds to share a little about how the JUST AI network has approached the really challenging features that Edward has just outlined, so I'm going to share my screen. What we've been doing over the last 14 months has been exploring capacity building on data and AI from a humanities perspective, and today I'm going to be talking about the network's work.
This presentation was produced by the team at LSE that has been working most closely on this: myself, Louise Hickman and Imre Bard. As you know, we're spending the whole week working through and investigating some of the different aspects of our practice over this year, and we're also trying to use this week's events as a way of building out our community and helping more people to understand how to engage with these pressing questions. In our work we've been exploring how to think about networking from the perspective of an ethics of care, and I really want to thank both the AHRC and the Ada Lovelace Institute for their support of this project, which has been prototyping new ways of building capacity by using different metaphorical and philosophical ideas about networking. Of course, as with everyone who's been working over this past year, we've been confronted with a massive shift, not only in what we expected to do in this project but in our resources: our emotional resources, but also our shift toward using technological resources to do this work of networking. What was expected to be a year featuring a series of in-person workshops, co-located with major conferences across the technical, social and creative fields of data and AI, has become a multi-disciplinary process of experimentation, iteration and reflection on the idea of a network and the ways that humanities-led work can build capacity on issues of extreme social significance like data and AI.

I want to highlight and appreciate the entire team that's been working on this. It's a team with a huge amount of diversity, and with wide-ranging humanities-based expertise: from the core team, myself, Louise and Imre, who have backgrounds in English and American studies, communication, and philosophy respectively; to Octavia Reeve, our core partner within the Ada Lovelace Institute, who has a background in cultural studies; our research assistants, Paulina Garcia Corral and Imogen Fairbairn, who are here at the LSE in methodology and in media and communications; and our visiting postdoctoral researcher, Paula Crutchlow, who has been a practicing artist for 20 years and now has a PhD in geography. About six months into the project, this team was lucky enough to be joined by the JUST AI fellows, who have also been supported by the AHRC to pursue primarily creative and research-led investigations of racial justice in relation to data and AI ethics issues, and who bring their own wide-ranging interests and approaches: Yasmine Boudiaf is a creative technologist; Irene Fubara-Manuel is working at the University of Sussex in media practice; Sarah Chander comes from policy; and Squirrel Nation, who are Erinma Ochu and Caroline Ward, have backgrounds respectively in neuroscience and art research. So we really want to highlight that this project has benefited from this diversity as well as from this deep engagement with the humanities.

Today I'm going to talk through some of the different ways we've been thinking about the problems that Edward sketched out, particularly from the perspective of networks and networking. As we know from joining things like LinkedIn, networking has become instrumentalized, so in our project we tried to use a wider range of metaphors and understandings of networking to open out different potential approaches,
using not only social-scientific practices of network analysis but also notions of networks in relation to trees, ecologies and relational engagements, like the ones I talk about in some of my previous work. This provides us with metaphors that are more dynamic and can help to create different sorts of opportunities for capacity building. We also think of times and spaces as really important in building capacity, and particularly in generating dynamic, potentially very controversial and contested conversations around the key questions of data and AI ethics. Our project has been a short one, but we have been reflecting on the different times and spaces of the recent past, the emerging present and the speculative future.

Finally, our work has engaged really strongly with the concept of a prototype, or the process of prototyping. We can imagine a prototype as an object or practice that connects ideas, skills, expertise and critical reflections. A prototype can also be a translation concept, because prototyping is a common practice in technical fields, where an idea first becomes functional as a proof of concept through a prototype. Prototypes are moments in a longer span of innovation; they create, and depend on, feedback loops that are responsive to critical engagement. So perhaps prototyping might be an ethics in practice: a way of thinking about the generous exchange of skills that may be required for the future of data and AI ethics research.

We had a series of research questions as we began. We wanted to know: what is this data and AI ethics research? Who conducts it? How can people find out about it, and, more importantly, how can people working in this area find out what others are doing, in order to create the kinds of conversations that will be required to advance the field and address some of the more pressing issues? Finally, we really wanted to figure out how to create inclusive dynamics and foreground marginal voices and emergent practices, to prevent a further siloing of the ways that work in this field gets done.

We began with what we call the recent past, using methods of literature mapping and network analysis to identify who had been publishing, on what, where, with whom, and how they were describing it. This proved quite challenging, because an enormous number of papers has been published recently on AI and data ethics, and this number has been increasing substantially over the past decade. So we decided to look at this entire decade and to think about what had been published and how we could represent the relationships between the publications and the people writing them. Imre Bard led a process of literature mapping using large databases of publications, the Web of Science, the ACM digital library, SSRN, the IEEE and SpringerLink, and did some bibliometric analysis based on search terms related to data science, big data, artificial intelligence, machine learning, robotics, autonomous systems and automated decisions, in relation to ethics, virtues, morals or data ethics. What we produced were a number of different visualizations of people, places and ideas. I don't have time today to talk specifically about all of the wonderful things we managed to find through the analysis of all of this data, but I will say that our data is openly available on GitHub, at justai-net.github.io/mapping.
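To make the kind of corpus filtering and keyword mapping described here concrete, here is a minimal sketch of one way such a pipeline might look. This is an illustration under assumptions, not the JUST AI team's actual code: the file "papers.csv" and its fields are hypothetical stand-ins for an export from a source such as Web of Science.

```python
# Minimal sketch of bibliometric filtering: keep a record when a technology
# term co-occurs with an ethics term, then pair up each paper's keywords as
# the raw edges of a keyword co-occurrence network. All names are hypothetical.
import csv
import itertools
from collections import Counter

TECH_TERMS = {"data science", "big data", "artificial intelligence",
              "machine learning", "robotics", "autonomous systems",
              "automated decision"}
ETHICS_TERMS = {"ethics", "ethical", "virtue", "moral"}

def in_scope(text):
    """A record is in scope if it pairs a technology term with an ethics term."""
    t = text.lower()
    return any(k in t for k in TECH_TERMS) and any(k in t for k in ETHICS_TERMS)

corpus = []
with open("papers.csv", newline="", encoding="utf-8") as f:  # hypothetical export
    for row in csv.DictReader(f):
        if in_scope(row.get("title", "") + " " + row.get("abstract", "")):
            corpus.append(row)

# Keyword pairs within each paper: the raw material for a keyword network
# like the ones visualized in the talk.
edges = Counter()
for row in corpus:
    kws = sorted({k.strip().lower()
                  for k in row.get("keywords", "").split(";") if k.strip()})
    edges.update(itertools.combinations(kws, 2))

print(len(corpus), "papers in scope")
print(edges.most_common(10))
```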
In this particular visualization we show where the authors of papers are from. This is what's called an institutional collaboration network, drawn from a global data set, and what it shows is something that perhaps shouldn't surprise us: the UK is a very prominent contributor to this field of data and AI ethics research. We can see that some well-known and well-funded universities are central to this network, but also that a range of other institutions have many different collaborators internationally; people are working with Canadian, American and Australian universities as well as universities across Europe. Importantly, we also see many connections between different types of universities: large public universities, but also technical universities, places where different kinds of skills are likely to be developed. So I think it's very interesting to look not just at who is writing but also, if we have the ability to look in greater detail, at what kind of thing people are writing about.

This is a much more detailed network visualization, based on the keywords included in the abstracts of papers. Here we've mapped two different clusters: concepts of justice within the broad set of articles written about data and AI ethics, on the right-hand side, and data justice, the smaller fan-shaped cluster on the left-hand side. We have both authors and their papers, so this visualization allows us to see which people have been writing together on these issues, and whether any particular papers are strongly connected into these networks: justice, on the one hand, within a more general discussion, and data justice more specifically. What's really interesting to me, particularly from the broader perspective that Edward has set out for us, is who connects these different conversations: justice as a philosophical concept, and data justice as an operational research practice that often connects specific user groups, such as local authorities, with people writing on data issues. There are a few published papers cited in both of these groups: one of them is by Jo Bates, another is by Linnet Taylor. Linnet and Jo, if you're watching, well done: you have illustrated that there are ways we can connect different kinds of conversations on these topics through our published writing, but also through different means of creating conversations. What this image makes me think is: what if we invited Linnet and Jo, and a number of people from those two clusters, to have a conversation about their work, their connections, the disputes, the different possibilities that might arise if they began to speak with each other?
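The question of who bridges the justice and data-justice clusters can be read directly off a graph like the one just described. The sketch below is a toy illustration, not the project's data: the graph, the labels and the cluster assignments are all invented, and in a real analysis the clusters would come from a community-detection step over the citation data.

```python
# Toy illustration of surfacing "bridging" work between two clusters.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("paper-1", "kw:justice"), ("paper-1", "kw:data justice"),
    ("paper-2", "kw:justice"), ("paper-2", "kw:data justice"),
    ("paper-3", "kw:justice"), ("paper-4", "kw:data justice"),
])
clusters = {
    "justice": {"kw:justice", "paper-3"},
    "data justice": {"kw:data justice", "paper-4"},
}

def bridging_nodes(G, clusters, a, b):
    """Nodes with neighbours in both clusters: candidates for cross-cluster conversation."""
    return [n for n in G
            if set(G[n]) & clusters[a] and set(G[n]) & clusters[b]]

print(bridging_nodes(G, clusters, "justice", "data justice"))
# Betweenness centrality gives a complementary, cluster-free view of connectors.
top = sorted(nx.betweenness_centrality(G).items(), key=lambda kv: -kv[1])
print(top[:3])
```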
Which brings us to the next phase of our work, in the emerging present. Here we've been investigating a lab model, from a humanities perspective, as a way of building capacity around data and AI ethics. This work has been led by Louise Hickman, who is extremely experienced in creating these kinds of process-led spaces for interdisciplinary discussion over time, which I think in this pandemic year have provided a very powerful alternative to conferences. In our project we've experimented with these kinds of lab spaces in four different ways. The fellows have hosted a lab on racial justice, sustained by the advisory group for the fellowship program, a group of more senior and experienced scholars also working on racial justice issues; they meet regularly and have broad conversations. We have also struck three working groups that allow us to develop collaborations with our co-travelers in different universities and research settings: with the private sector in technology development, through the ethics and practice working group; with campaign organizations interested in access, as well as scholars working on rights, access and refusal; and on data and environment. And of course you have seen already that the rest of this week focuses on the topics and discussions that have been generated within these spaces.

So we've made relationships through these explorations of lab-based work, both by building our fellowship program in a way that involves many people who have been working on racial justice issues, as well as people who can advise on other aspects of the work, and by building collaborations through our other working groups and through the reflections on our lab-based model. I want to highlight, regionally, that we've been really strongly connected to the west of England and the southwest: we were inspired by the South West Creative Technology Network's fellowship program when we put together our own, and Teresa Dillon, who was involved with that, is also working with us in one of our working groups. We've given talks in many different places, and what I really wanted to illustrate with this graphic is that our ideas and relationships, in the kind of space we've created in this emerging present, are likely to grow over time in ways we can't anticipate. We're just so grateful to everyone who has started to have these interesting conversations with us, has hosted us for talks, or has come to any of our open labs, our near-future AI fiction workshops, or indeed to this week.

Finally, we arrive, as always, at the speculative future, where we think about how prototypes, these attempts to provide a proof of concept for a particular idea, might create opportunities for reflection and connection. Here we decided to use some features of technical development to allow us to speculate on how we could grow these conversations over time and into the future. Our questions were: how could we build and support the kind of network we'd already put into practice with an online tool? How could we develop new conversations and new modes of engagement? And could we use online platforms to perform a kind of networking with care? So we decided to build a functioning technical prototype that would allow people to see how they had come to the field of data and AI ethics. Partly this was a response to what we learned through our literature mapping: we could only work with what was already published, which meant that the conversations happening now couldn't really be represented in that data set. We could see whose work might have connected in the past, based on citations, but we couldn't see what might make a difference in the future. And given that the issues we're talking about are so pressing and significant, we really wanted to think about whether we could do this in some way. So we decided to produce a prototype drawing on a kind of reflective survey,
a survey tool that would ask a participant to think about what motivated them to start working in this area, how they define their own work, how they think about their relationships with other people, and whether there are people or ideas that have been really significant and generative for them. We also wanted to know who is working in this field, how long they have been working, and what different kinds of backgrounds people have. We thought this might be a really interesting way to generate conversations between people with similar interests and different backgrounds. We have just about finished this prototype, and today I'm really excited to walk you through how it works; later this week we will invite you to join our community of testers and help us finalize its design.

When you finish our survey, which asks you all of these interesting questions about the kind of work you've done, you visit this web page. It shows you a fingerprint: a visualization of your answers to our questions about how, and in which ways, you've been working on data and AI ethics. We wanted this visualization to be provocative and visually interesting, so we decided to use a branching schema to show different ways that people's work can be represented in different areas. Here I'm mousing over whether this person thinks of themselves as an ethicist, what area of education they've been working in, and, lower down, how their answers connect with the entire community that's been working on data and AI ethics. This is a very dense visualization, and we think it's going to be very interesting to work through in workshop settings, as an alternative to in-person, paper-based workshops. Each of these circles is a particular question, and each of the bars is an answer to the question, so if you mouse over the circles you can see where your response is and how your response compares to others'. This really provides an opening for discussions. For example, in this response the person was not identifying themselves as an ethicist, nor were their colleagues identifying them as an ethicist, and you can see the comparison between this person's response and all of the other responses to the survey. This helps to generate conversations about difference: how is my perspective different from your perspective? These are some of the critical skills that the humanities bring to questions around data and AI ethics, and we think these sorts of tools are ways for those conversations to progress. For example, the person who responded to this survey has a background in computer science, but other respondents have backgrounds in engineering and technology, in history, or in media. All of the different question arrays can be visited: you can turn the labels on, so you can interpret where the questions came from, or you can turn the labels off and appreciate a kind of aesthetic invitation to discussion. When you look more specifically at this individual fingerprint, it also gives a kind of biography in an image. For example, this fingerprint has different color densities for different kinds of outputs, whether those are policy reports, academic writing, convenings or workshops, and it shows whether people have recently arrived in this field or have very long experience.
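As a rough illustration of the comparison logic behind the fingerprint (each circle a question, each bar an answer, one respondent highlighted against the community), here is a minimal polar bar chart. The question, the options and the counts are invented for illustration; the real tool's branching layout is considerably richer.

```python
# Toy sketch of one "circle" of the fingerprint: a survey question drawn as a
# ring of bars, with the respondent's own answer highlighted against everyone
# else's. Question text, options and counts are invented.
import numpy as np
import matplotlib.pyplot as plt

question = "Do you describe yourself as an ethicist?"
options = ["yes", "no", "sometimes", "colleagues say so"]
community_counts = [12, 30, 18, 9]   # hypothetical aggregate of all responses
my_answer = "no"                     # this respondent's answer

theta = np.linspace(0, 2 * np.pi, len(options), endpoint=False)
colors = ["tab:orange" if o == my_answer else "tab:gray" for o in options]

ax = plt.subplot(projection="polar")
ax.bar(theta, community_counts, width=2 * np.pi / len(options) * 0.8,
       color=colors, edgecolor="white")
ax.set_xticks(theta)
ax.set_xticklabels(options)          # turn off for the purely "aesthetic" view
ax.set_title(question)
plt.savefig("fingerprint_question.png", bbox_inches="tight")
```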
Many people who have been working in this area for a long time may not define themselves as ethicists, but may have other related interests and educational backgrounds that are relevant. So, to sum up on our prototype: what we wanted to generate here is a conversation about connectedness, about how our own biographies show our different kinds of skills and perspectives, and how these biographies might also show, in comparison and contrast, places where people might wish to collaborate or exchange, whether on their own identity in terms of this work or on the kinds of themes they enjoy working on. As I said, we are launching this prototype this week: please make sure you're signed up to our mailing list, or visit the Ada Lovelace web page later in the week, where we'll be launching a button that takes you straight through to join the prototype. Also make sure that you follow us on Twitter, where we will announce when we go live.

I'm coming to the end of my few moments of talking about how we've engaged with these questions around interdisciplinarity and data and AI ethics, so I just want to close by re-inviting all of you to the rest of this week's events. Unfortunately we have had to reschedule our networking-with-care in-person workshop, where we will be working through the paper versions of these prototypes, to Wednesday, when it will be sunny. That workshop and the data walk will both meet at the LSE campus in central London; if you're able to join us there, that would be lovely. Otherwise there's an enormous range of other discussions occurring across the week online, and there is also another off-site event in Exeter on Wednesday afternoon, if you're down in the southwest of England. Finally, big thanks to everyone who has traveled with us, and to everyone here who will be a fellow traveler with us. Our website is just-ai.net, which will push you through to the Ada Lovelace website where all of our content is hosted; we've been blogging consistently about all the different bits and pieces of our project. We are also on Twitter, and please do keep up with us as we continue this work. Thank you so much.

Thank you very much, Alison, you've given us a great deal to think about. Let me now introduce our second speaker, Hetan Shah. Hetan is chief executive of the British Academy and also deputy chair of the Ada Lovelace Institute. Hetan, over to you.

Thanks very much, Edward, and Alison, thanks very much for putting this event together; delighted to be here. Many of you will know the British Academy: we are the national academy for the humanities and social sciences. What I wanted to do today was, in a sense, speak at a higher level. Alison has been showing us how networking on the ground might be thought about and reflected on from the perspective of capacity building; I wanted to come in at the level of policy and consider what the policy environment looks like at the moment for the kinds of initiatives we're discussing here today: where are the opportunities, what are the threats, and so on. When thinking about the opportunities, the first thing perhaps to say is that today the prime minister has announced that he will be chairing a new ministerial council for science and technology, which is obviously very much the kind of thing that pertains to the work of JUST AI and the Ada Lovelace Institute. That new ministerial council will be supported by a unit in the Cabinet Office,
and Sir Patrick Vallance, whom all of you know as the government chief scientific adviser in the UK, has also now taken on a new national technology adviser role. The government has also recommitted to increasing research and development expenditure to the OECD average target of 2.4%, I think by 2027. So all of that is a series of opportunities for the kind of agenda we're discussing here today. More generally, this might be framed by the integrated review of security and international policy, where again science and research were very central to the government's ambition to become more globally minded. And post pandemic, or not quite post pandemic, past some of the worst of the pandemic perhaps, I have noticed something of a change in the way that government and research operate together. There is more porousness within the policy-making community toward research; I think we're past that "had enough of experts" phase of the conversation. And there's much more interest in data sharing, which happened with particular speed during the worst moments of the pandemic, and there's an important question as to how far we can retain some of the best of what happened there into future phases.

Having laid out some opportunities in the policy environment, there are also a series of threats. The first obvious one is that a lot of the conversation is happening in the language of science and technology, and the question is: when is science broadly defined, to include social science and humanities, and when is it narrowly defined? There's no single answer to that; it depends on the context. For example, it was noteworthy to us that Patrick Vallance asked the British Academy, as the national academy, to do a report on the societal impacts of the pandemic: he very much saw us, and our disciplines, as a conduit to casting light, and very much within his remit as government chief scientific adviser. Will this new council see its remit in the same way? When you consider bodies like SAGE, for example: out of the 80 or so people who have been listed on its membership, there's only one person from the humanities community. So there's a question of our disciplines being fully represented, which needs thinking about. There's also a pipeline issue: wider noise is coming out of certain bits of the policy community, in particular the Department for Education, around the value of arts, humanities and social sciences, some language around "dead-end courses" and so on, which I think is pretty unhelpful, and which I suspect will have a long-term consequence on take-up and the pipeline of humanities scholars. And the third issue which needs thinking about, with the announcement of this new structure today, is the balance between what one might call a strategic approach within government and the room for bottom-up, researcher-led discovery and blue-skies research. It remains to be seen whether today's announcement will change that balance at all. My hope is that it will not, and that the government will retain its commitment to discovery research, which is very, very important.
Because had you gone back ten years and tried to be strategic about what research should be focused on, would you have picked a pandemic? It's all very easy to be strategic after the fact, but it's very important to invest across a wide range of portfolios when the future is uncertain.

Let me now touch on a few ways forward, given the landscape I've outlined. The first is recognizing that, as the discourse around subjects has perhaps narrowed in a way that may not always be helpful to the humanities and social sciences, we need to take control of the narrative a bit more. STEM has been a very successful way of describing the sciences, and the British Academy, working with others, has developed the notion of SHAPE, social sciences, humanities and the arts for people and the economy, as an alternative way of talking about ourselves: the subjects which deal with culture, humanity, people, behavior and society. That is now starting to be picked up in a variety of places; the term STEM took a decade before it was widely in use, and it may well be that SHAPE is the same, but we hope it is a helpful intervention in describing the role of our subjects, and the way that SHAPE and STEM work best together. The second thing I'd point to is the importance of a diverse funding landscape. Having talked about the worries you might have if everything becomes strategic and dictated by certain kinds of government priorities, I'd point in this context to the role of a body like the British Academy, which gives out a lot of discovery research funding, allowing cutting-edge researchers to set their own priorities and go where the research takes them. A few examples of things we're funding at the moment: a new Oxford handbook of the philosophy of AI; refugee-led social protection and the impact of digital technologies; and work on data labeling and gender. Those are just a few of the projects we're supporting, and I think we work really well in partnership with our counterparts at the AHRC; the different kinds of funding pots we have really complement one another. The third thing I'd point to is the importance of supporting early-career researchers, who have been massively impacted by the pandemic, but who even before that were in an increasingly competitive, and sometimes precarious, space in terms of their research funding. Again, I think national academies have a role to play here. The British Academy has just launched a new early-career researcher network, funded by the Wolfson Foundation; we're piloting it in the Midlands in the coming year, but we really want to bring early-career researchers together to support one another, and also to be able to convene them and bring them in touch with businesses and policy makers in their regions. That convening function is worth saying a bit more about, not only at the level of early-career researchers but also, for example, in the British Academy's role in bringing people together around policy questions, such as the work we're doing with UCL at the moment on AI and the future of work, bringing together business, trade unions, researchers, policymakers and so on. So that convening and networking, capacity building at multiple levels, is very, very important. And then let me end on a final reflection, which is
that much of the work that happens around science and technology, and in particular in academia, tends to be incremental in nature. But one of the powers of the humanities and the arts is as a space for social imagination, the chance to reconceptualize, in a more radical way than is perhaps on the table, an alternative vision or future. Although most research isn't going to be like that, we need to keep a slice of our time and space for more radical, perhaps utopian, visions which paint alternative futures, and those can themselves then go on to change the way that technologies, and ethics, operate. I'll stop there, thank you.

Thank you very much, Hetan, that's great. I'll now introduce our third speaker, Professor Gina Neff. Gina, thank you very much for joining us. Gina is professor of technology and society at the Oxford Internet Institute and a member of the Department of Sociology at Oxford University; her speciality is the effects of the adoption of AI across a wide variety of different contexts. Gina, over to you.

Thank you very much, and I just want to make sure I can be heard. First I want to thank Dr Powell and the JUST AI team for this extraordinary amount of work that we are celebrating today and this week, and, in light of that, also the AHRC for supporting this and Ada Lovelace for helping to foster and build this network. I want to make two points in my comments, and I think they dovetail quite nicely with the previous comments as well. The first is that the work we're talking about today really shows that AI can no longer be thought of as a domain of science and technology; we have to stop thinking about AI as science and technology. What this project, the JUST AI network, has done is help make the problems around a new technology sensible, make them available to sense-making across multiple kinds of disciplines and multiple kinds of communities. They've moved from theorizing and thinking about the philosophical or ethical implications, how should we think about AI, to: what should we do, what should the practices be? And in creating a tool that helps us visualize the communities of practice around this incredible and urgent problem, how to build fair, safe, effective and just artificial intelligence systems, the JUST AI network has really brought into focus the different kinds of disciplines that are involved, necessary and needed. Now what do I mean by this? Well, first, at the Oxford Internet Institute, along with a group called the Women's Forum for the Economy and Society, we have done a series of focus groups with Fortune 500 leaders around what they think about AI ethics. Not surprisingly, surveys we've done show that business leaders know AI ethics is a serious problem they should be thinking about; but when we bring them into focus groups and ask what they are doing, no one can give us a clear answer. It's one thing when the largest technology companies have devoted teams thinking about ethical AI, and we can argue the debates around those. But move one step down, into the companies that run our financial systems, our legal systems, our social infrastructure and our retail systems, that are working in mortgage lending, in providing professional services, in creating the future of work: one step down from those companies, and they're at a loss for
what to do in practice. So the kinds of work that the JUST AI network has taken on this year really help us see how we might close this urgent implementation gap. This brings me to my second point: why is this problem so urgent? What is urgent now? I would put forward to you that when we're talking about AI, I would love to be talking only about a set of statistical tools for helping to parse large datasets. As the co-author of a forthcoming book on human-centered data science, I can tell you that the ways we talk about fairness, ethics and justice in the social sciences and humanities are quite different from how the same concepts get talked about by our colleagues in data science and machine learning. So my second point is that those of us who are concerned not with advancing the statistical models, but with how power works in society and how people's daily lives are affected and impacted, need to be thinking about AI and machine-learning technologies as inequality engines: inequality engines in post-pandemic, unjust societies. What do I mean by this? Well, because JUST AI has expressly articulated the work around AI ethics in a framework of data justice, and in the framework of the racial justice conversations we've been having over the past 12 months, they've allowed us to connect some dots. The decisions being made about the infrastructures being built right now are happening in certain kinds of conversations that many communities are not invited into. What the JUST AI network has done is begin to think about a way you could have community participation in really complex technologies, to help bring more people into the urgency of the problems we're facing. So, two things. We can no longer talk about AI and machine-learning technologies as just the purview of science and technology: they're becoming pervasive in many more societal realms; there are many more kinds of leaders, community leaders, business leaders, and many more kinds of disciplines where the decisions are being made that impact many lives, and so we have to start thinking about how we bring our sense-making into those conversations, from multiple kinds of perspectives. And second, we can no longer think about these technologies as simply neutral ways of parsing data: they become part and parcel of building infrastructures that feed into inequalities, with the potential to amplify those inequalities. So what can be done? Well, I want to give a nod to the initiative the British Academy has put forward in thinking about SHAPE: how do we have a set of disciplines that understand we can no longer separate the science and technology disciplines from the arts, humanities and social sciences? How can we start to think integratively about the kinds of knowledges we need to bring, not just to this problem, but to many of society's biggest problems? That is one. Two: how are we going to train up the people who do the work? By that I mean: where are we building the community capacity, be it trade union capacity, professional organization capacity, business leader capacity or community activist capacity? Where, and who, is going to be involved in ensuring that we can expand the number of people able to
enter into these networks and make powerful interventions from their own perspectives, their own situated knowledges, their own abilities to move and motivate within their communities? So with that, I want to again thank Alison and her team for the amazing work they've done this year. I was watching the conversation go by, and many people here on this call are very excited about this visualization tool, and very excited to make sense of their own work in AI ethics. My call to us would be to think: who else can we be inviting into these conversations, and how can we be expanding the work that must be done, urgently? Thanks.

Thank you so much, Gina, and thanks to all our speakers, that was great. We've got about half an hour left, and a number of questions are appearing in the chat; there are so many questions that I'd like to ask myself. There's a question from Trisha Griffin, and this is a question for Alison: do the questions incorporate the ideas and histories of decolonial ethicists? What might a decolonial AI approach look like in the context of your networks?

That's a wonderful question, Trisha, thank you for asking it. The visualization tool, as you can see from my walkthrough, presents in its branching and color-coded manner only a certain number of the questions we ask in this survey. The other questions we ask are actually really about what Gina has pointed to: the larger conversation that needs to happen around how ethics is understood, and indeed practiced, in different communities. The survey tool asks, for example, how you identify your own work; whether you're paid for it; whether you're working on something in line with what you've worked on in the past, or whether you've changed your perspective. It asks you to describe the keywords you think are important, because when we were doing the literature mapping we realized that we were of course limited to the kinds of keywords in the articles that were already published, and, in particular, in order to do these kinds of large-scale digital data visualizations, we were really constrained by the infrastructure of access to those publications. When we did the first round we only drew from Web of Science, partly because their data is better structured, and we couldn't get access to as many humanities publications, not even talking about decolonial ethicists, or Black computing, or any of these other areas where these discussions are unfolding, because many of those publications are not journal articles; they are books. So already in our literature mapping we could see different kinds of gaps emerging, and we were trying to think of different ways to address those gaps. In part, the racial justice fellowship and the lab-based work we've been doing around racial justice and foregrounding these divergent ideas has actually allowed us to work quite a lot on the concept of decolonial ethics. I'm going to give a shout-out to Mustafa Ali, who is based at the Open University, because Mustafa has made a huge contribution to JUST AI, particularly in this area, both by supporting the racial justice fellows and by contributing to another project we undertook: the commissioning of science fiction writing on particular areas of data and AI ethics, which we worked through in a series of public salons.
The publication that comes out of that commissioning, which is two new short stories and a brand-new essay from Adam Marek, Tania Hershman and Irenosen Okojie, is going to be out with Meatspace Press in the autumn, and it includes the community's commentary from the salons. So that whole thing is a dialogue about what AI ethics might look like in the near future, and of course about this huge debate over what that ethics is to begin with. So I guess the short answer is: it's really hard to represent all of these divergent perspectives on a single plane, and only by shifting your perspective to be able to see them do you actually get to some of the major ethical issues that we're contending with. So thank you so much for the question.

Thanks very much, Alison. The next question is from Graham Bukovsky. I won't read the whole thing, but he asks: what does the panel think are the key issues associated with AI and data ethics that need exploring with the public? In short, what are the urgent opportunities for public engagement in AI and data ethics? Hetan, would you like to contribute to that first?

Yeah, thanks so much; there's so much there, in a way. AI, we know, is a general-purpose technology, and so it actually cuts across everything, which I think makes public deliberation quite tricky. There's a mix of what you might call nearer-term deployment questions and longer-term ones, and they're not just about AI; we mustn't always skip over the data bit, because there are data things which are non-AI things. To give an example, wastewater testing for the pandemic raises a bunch of data ethics questions, on both sides. Often people think ethics is about saying no, but it's often not that; sometimes ethics demands that you act. But it's about thinking through those sorts of questions conceptually. That's a near-term one; longer-term ones might be self-driving cars, or all sorts of things. The issue with public deliberation is that it's quite hard to have these conversations at a very high level of abstraction, so I think it's a mixture of focusing in on a specific question or a specific area and asking what this means, and then occasionally having something wider. The Ada Lovelace Institute has been doing some brilliant work looking at biometric technology and public perception of it, but it is very hard to think about these things at that level of abstraction, so it's quite important to drill down and ask what that means in a particular area. And it's worth finally saying that it is really important for this public deliberation to be a kind of balancing of the scale against the commercial interests which often drive technology. That's not to say commercial interests are bad; it's just to say that they have a set of interests and views, and, in particular when thinking about the regulatory space, public perceptions and views are really, really important in driving where we prioritize.

Great, thank you very much. Alison, did you want to comment on that?

From the perspective that we've developed, I agree with Hetan that this is really challenging, because AI could be absolutely everywhere. So I would actually take the question in a different way, and I would say that what we need to be really attentive to is who is in the room when
discussing a specific application. Really looking at specific applications also allows you to get to that diversity of knowledge, and allows you to ask those really significant justice-based questions. Just to give an example of one of the things we've been thinking about through the year, and this is a topic that hasn't really disappeared: the concept of vulnerability is now one that is defined in data and processed through different kinds of AI systems. Being defined as a vulnerable person during the pandemic meant that you could have access to different kinds of services, but vulnerability itself became a definition constructed from commercial data and then filtered through a policy lens. Vulnerability, whether or not you could book a grocery delivery slot, for example, became something to ethically contend with. So if we take this example, taking the perspective of "nothing about us without us", you can focus on any of those specific issues, where you're no longer talking about AI as a sort of general, all-purpose, scary something that's doing everything, but starting to look at what's happening with the provision of social services in my local area: how are we going to manage data about service provision, or your access to food, or indeed the quality of the water and the presence or absence of COVID in the water?

Thank you very much. I'm going to return to the Q&A in a second, but I want to pick up on something Hetan just said and ask Gina a question. Gina, toward the end of your remarks you asked how we can think integratively about AI, data science and ethics, and how we can build capacity so that people involved in the discussion have the requisite skill sets, and I quite agree with you that that's an absolutely fundamental challenge. But my question to you is this: when we're dealing with the public sector, public providers are supposed to be just in their distribution of goods; that's their duty. The private sector has no particular duty to worry about inequalities; they just sell to the people who have the power to buy. How do you incentivize the people who are at the cutting edge of technology to think in that integrative way? You said technologies amplify inequality: how do you bring ethics onto the radar of people who are developing new technologies, in such a way that these infrastructures don't simply go on amplifying inequalities?

That's a great question, and I have to preface my answer by saying, first, I am no lawyer. But we have in place regulatory frameworks and structures for making sure companies don't deceive the public, and yet we know there's a kind of AI snake oil happening at the moment. On government provision of services: yes, absolutely very important, but as Alison just pointed out, there are enormous and deep ties between how governments around the world purchase technology services and data from private companies, decisions that we often don't have access to and that often are not transparent to public scrutiny. Fortunately, the UK is one of the global leaders in helping to train and think about how we do the provision of AI for the public sector. There's wonderful work happening here, which is one of the reasons why we see a lot of activity and excitement
happening in this country; it really can be a leader in how we go forward in thinking about AI for the public good. But what does that mean, and what are the levers we have to work with private-sector organizations? I think there are a few. The first is around their customers. Whether we're talking about companies that sell to other companies or about end users, we have to get really clear and transparent about what is being bought and what is being sold. In a report I did last year with Maggie McGrath and Lyanna Prakesh, we talked about the kinds of AI failures we see in news accounts around the world, and one thing that's really apparent from many news stories at the cutting edge of how the press talks about AI is that many companies are selling goods and services as AI, but when you lift the hood, what you see is either outsourced engineering happening in another part of the world, outsourced human labor happening in long global supply chains of work, far from sets of accountabilities, or something that is just gussied up and sold. So we have to have some accountability and responsibility when it comes to this. Second, in the work we've done on the implementation gap of AI, we actually see one sector that's incredibly thoughtful about how they make decisions around automated decision-making in their supply chain and business flow: financial services. Why? Because financial services are hugely regulated in most Western countries, and they have to be able to explain their decisions. Having a set of regulatory practices in place has led to a set of organizational practices, company and firm decisions about how things will work through workflows, that perhaps constrain the runaway speed with which the technology can develop, but at the same time allow them to know they have fail-safes in place to make sure they're making just and fair decisions. So there are models out there; we don't have to reinvent from scratch. There are models for how industries have put in place smarter ways to think about responsible technology adoption. What we know is that they're not going to do it simply by relying on their data science teams: their data science and data engineering teams cannot be the only place these policies come through. It has to come through thinking very clearly with the communities where they work, thinking about their stakeholders, very broadly defined, thinking about the unintended uses and abuses of their technologies and tools when they go out and are released, and then thinking about how they're going to be responsive to problems when they do arise: are they going to be able to deliver fair and just decisions when they have a problem, or are they going to try to sweep it under the rug?

Fantastic, thank you. I know there's a lot more to be said on that, but let's return to the Q&A. Here's a question from Gabrielle Berman, and I guess this is going back to an earlier strand of discussion, Alison, about how you use your work on networks to get from what is to what ought to be. The question is: how do you move from focusing on English-language publications and networks toward thinking more globally, into a broader network, and critiquing high-income, frequently Western versions of ethics, and what are the implications of
that for justice? Again, an excellent question; I had a peek at the Q&A and I really want to thank everyone for their generosity and curiosity. This question about language, power and money is a really central one for these conceptions of justice, and for the provocative and difficult conversations I think we have to have. Our mapping, as I said, was very limited: not only were we limited to English-language sources, because these are the best indexed, we were also tasked by our funders with looking at networking and capacity building for the UK in relation to these topics, which meant that not all of our mapping looks at global data. Some of the mapping, including the first slide I showed in my talk today, does look at global co-authorship networks, and you do see universities outside the global North in that mapping; you do not see them as central or dominant. This is of course not a surprise, because we understand how power and influence, and particularly money, move. But you can actually use the data we've collected to create visualizations and look at their edges. This is in fact what we did when we were first looking to see which kinds of topics we might want to strike working groups around. The working group on rights, access and responsibility was also driven by the question around vulnerability that emerged through the pandemic, but the other two are based on analysis of the networks, and of places where things get a little thin, where the connections between nodes are represented by only a few people. So you can look at the data and look not at what is central but at what is marginal or poorly connected, and I think that does provide an opportunity to shift conversations and to introduce new and different perspectives. As I said before, we were really limited in looking at the extensive literature on decolonial AI and racial justice issues in AI, but we identified that this was missing from our data set, and that was part of the reason we advocated for support for fellowships specifically focused on racial justice. So I guess network methods are often very interested in where things are dense and well connected, but I think they can also be valuable for looking at where things are less connected. And the last thing on that point: one thing we really wanted to do, and weren't able to because we didn't have enough human capacity or time, was to look at where the money came from for the different research institutes that were central in the network. I think this is a really wonderful project, which I'm hoping to do as soon as I have time: to look not only at where research council funding has gone (thank you, AHRC, for providing the public data on grants given in the area of data and AI ethics, which we were able to map, and thank you to any university colleagues whose departments actually did respond to my request for who funded your data and AI ethics research), because there's a lot more to do in that area. That will also provide another layer of discussion, and perhaps another opportunity to see how you can redress some of those injustices.
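Reading a network at its margins rather than its core, as described here, is straightforward to sketch. The toy example below uses a built-in networkx sample graph as a stand-in for a co-authorship network; the thresholds are arbitrary, and none of this is the project's actual analysis.

```python
# Sketch of looking for the marginal and thinly connected parts of a network
# rather than its well-connected core.
import networkx as nx

G = nx.les_miserables_graph()  # sample graph standing in for co-authorship data

centrality = nx.degree_centrality(G)
# The least-connected tenth of the network: where marginal voices may sit.
marginal = sorted(centrality, key=centrality.get)[: max(1, len(G) // 10)]
print("poorly connected nodes:", marginal)

# Bridges are edges whose removal disconnects part of the graph: places where
# an entire conversation is held together by a single connection.
print("thin connections:", list(nx.bridges(G)))
```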
Thanks very much, that's great. The next question is from Helen Pallett, who asks: the EPSRC is mostly framing its AI-ethics-related funding in the language of responsible innovation; is this compatible with the humanities' focus on justice? Now, as somebody who has spent quite a long time talking to the EPSRC about this, my answer to that question is yes, it is, because, first of all, I don't think either the subject of responsible innovation or the subject of justice is the preserve of any particular discipline or set of disciplines; they are universal concerns of all human beings. And I think it's the road to ruin to think that because a question is an ethical question, it needs to be hived off and dealt with in a kind of humanities-only space. One of the things the AHRC is trying to do in its collaborations with the Ada Lovelace Institute is to foster interdisciplinarity and an integrated capacity, so that the data scientists and the ethicists can deal with this stuff together rather than in separate bubbles. But maybe that's not your answer; maybe I've got it all wrong, tell me.

No, I just wanted to build on that, and to note that the theme of ethics in practice has been really significant for us, and it has come out of exactly that identification, Edward: these are always ethical issues, it's just that they are not always described as such. I can see in the chat and in the Q&A a lot of questions about how you deal with the fact that some people get to define what counts as an ethical question, while other people have their actual moral responses to things classified as not really ethical, because "the ethics is happening over there". This is a huge issue, and our focus has really been on trying to think about ethics as a practice, and about our project as enacting a set of practices that can potentially create different spaces for other people to reflect on their ethics as practice. Our working group on ethics and practice builds on some of my previous work on technology designers and their actual day-to-day work. That was the VIRT-EU project: we looked at startups making Internet of Things technologies, and we spent a lot of time hanging around, running workshops with people who worked in small companies, trying to figure out where the moments were that what we might call ethical-hat-wearing people would describe as an ethical conversation, and other people think of as just part of their job. Identifying those locations is something the ethics and practice working group is still thinking about. They have an absolutely amazing panel from across industry, with the Ada Lovelace Institute, on Friday, so if this is something you're interested in, please make sure you attend that one as well.

Thank you. Gina, did you want to come in on this question?

I'm a little bit of an optimist on this question. Change has to come in two directions. First, we've seen a real sea change in how we talk about responsible technology within the technology, engineering and science sector. For example, there is a large multi-research-council investment in trustworthy autonomous systems; they've just invited me to chair the international scientific committee that will help them think through how they frame a program of research around making better trusted autonomous technologies, and how
these intersect with society. So as a social scientist I see some courage in that. The second is that NeurIPS, the large AI conference, has just introduced ethical review for this round of submissions: not on every paper, but a set of reviews in which papers go through an ethics process, in a way that asks some serious questions and ensures that people are raising and reflecting on them. We want to see this kind of ethics-in-practice and these social considerations happening in practice, not as an ethicist-for-hire, so to speak, brought on to research teams. But I think we also need to remember that fundamental social science and humanities research still needs to happen, and research about how these technologies are becoming everyday infrastructures absolutely needs to ask different kinds of questions: not just how do we build better technologies, but should we even be building these technologies?

Thank you very much, and I think this is such an important question. I'm going to say another couple of brief things. First, just to underscore the importance of what Alison said a minute ago: quite a lot, as I understand it, of what the JUST AI network is trying to do is a sort of consciousness raising, because what is ethics is not necessarily what carries the label "ethics". There is a great deal of stuff that affects human lives for good or ill, in other words, that is ethics, which bears some quite other label, and we won't be able to think in an integrated way about how to take things forward until we can see beyond those labels. The other thing I wanted to say was about the notion of humanities-led investigation. You might say: well, look, the three of us, and Hetan when he was still with us, have been banging on about the need to integrate the tech and the ethics, in other words, to break down silos between the humanities and the sciences, so what is humanities-led here? And I think we need to distinguish, and Alison, again, correct me if I'm misrepresenting you, between the content…
