Algorithmic bias with Dr. Peaks Krafft – UAL Tech for All Conference

Hi, I'm Dr Peaks Krafft, Senior Lecturer and course leader of the new master's programme in Internet Equalities at University of the Arts London's Creative Computing Institute. Today I'll be talking about algorithmic bias, and about how to think about algorithmic bias not just as a technological problem but also as an organisational problem. This talk contains some sensitive content, so if discussion of racism, bullying and harassment, or state violence might be triggering for you, I'd suggest pausing here and perhaps coming back when it feels safe for you to do so.

I'll begin by discussing what algorithmic bias is, for those of you who might be less familiar with this research area. Then I'll talk about some of the approaches that researchers and technologists have tried in order to mitigate the harms and negative social impacts of algorithmic bias, why I think those approaches don't go quite far enough, and what I personally would like to do about it.

So let me begin by describing what algorithmic bias is. You might be familiar with the work of Joy Buolamwini, who famously, while experimenting with the facial recognition system in her office at the MIT Media Lab, found that the system was unable to recognise her face as a dark-skinned person until she wore a white mask. Joy, together with the researcher Timnit Gebru, systematised these results, finding that many facial recognition systems are consistently less accurate for some groups, especially Black women. These results are important not just because they mean that cameras or camera-based apps will work less well for Black women; there have also been examples of police arresting the wrong suspects because of errors made by facial recognition systems.

You might also have heard of the work of Safiya Noble, who in her book Algorithms of Oppression describes how search engines like Google reproduce stereotypes and biases that are present in society at large. On the cover of Algorithms of Oppression is an example of a user having typed "why are black women so" into a Google search bar, which has then produced a list of offensive suggestions. This kind of effect matters not just because Google is producing offensive content, but because search engines and the information we find online shape how we think about the world and how we think about each other.

Another example, from last year, is the Ofqual A-level grading algorithm in the UK, which was systematically biased along class lines. The Ofqual algorithm led to massive nationwide protests against the use of the system, and subsequently to the government's U-turn on its deployment.

But algorithmic bias doesn't occur only when technologies make mistakes. Technologies can also demonstrate bias in how they're used. For instance, mass digital surveillance disproportionately impacts Black and brown people, who have historically been over-surveilled relative to white people. This is a case of a technology working as intended, where the intention itself is biased. Beyond algorithmic bias, we can also talk about algorithmic injustice more broadly. Shoshana Zuboff, in her book The Age of Surveillance Capitalism, discusses how a small group of extraordinarily wealthy technologists is exploiting billions of people and their personal data in order to build profitable advertising enterprises such as Google and Facebook.

I've just reviewed some different forms of algorithmic bias. Now let me say a little about how researchers and technologists have tried to mitigate their harms and negative impacts.

One notable approach comes from researchers who try to make algorithms more "fair". In the case of facial recognition, what this looks like is such systems having relatively even error rates across different demographic groups. A closely related technique is called model auditing: in model auditing, algorithms or AI systems are investigated not just for whether they are fair, but also for what kind of data they use and for other impacts the systems might have.
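To make the idea of "even error rates" concrete, here is a minimal Python sketch of the kind of per-group error analysis a fairness audit might include. This is an illustration only: the data, the group labels, and the idea of flagging a disparity threshold are all hypothetical assumptions, and real audits (like the facial recognition studies mentioned above) involve far more care in data collection and in choosing which fairness criterion to measure.

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Compute false positive and false negative rates for each demographic group.

    Large gaps between groups on these rates are one common (and contested)
    way of operationalising algorithmic bias, related to "equalised odds".
    """
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        fpr = np.mean(p[t == 0] == 1) if np.any(t == 0) else float("nan")
        fnr = np.mean(p[t == 1] == 0) if np.any(t == 1) else float("nan")
        rates[g] = {"fpr": fpr, "fnr": fnr}
    return rates

# Hypothetical audit data: ground-truth labels, model predictions, group membership.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = group_error_rates(y_true, y_pred, groups)
gaps = {m: max(r[m] for r in rates.values()) - min(r[m] for r in rates.values())
        for m in ("fpr", "fnr")}
print(rates)
print("disparity gaps:", gaps)  # an auditor might flag gaps above some threshold
```

Note what a computation like this can and cannot tell you: it can flag uneven error rates, but it says nothing about whether the system should exist at all or how it is deployed, which is exactly the limitation this talk goes on to discuss.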
At the design stage, approaches such as participatory machine learning have been developed, which aim to be more inclusive about who is involved in specifying the goals of the algorithms used in machine learning or AI systems. Another approach, taken after a technology has been procured or deployed, is policy oversight. The city of Seattle in the United States, for example, through its Seattle Surveillance Ordinance, mandates that municipal departments disclose the surveillance technologies they are using, and subjects those technologies to city council oversight. Many tech companies also have internal ethics or oversight boards; Facebook notably passed recent questions about moderating the content of former president Donald Trump to its Oversight Board.

Those are some of the approaches that have been taken to mitigating the harms of algorithmic bias. Let me now say a little about why I think these approaches don't go quite far enough. I want to argue that these kinds of AI ethics interventions are performative in the sense that they are theatrical, while also being non-performative in the sense that they do not achieve their intended effects. Researchers and technologists are awash with solutions for algorithmic bias, but only within a paradigm that treats the technologies themselves as inevitable.

Non-performativity is a concept introduced by Sara Ahmed; I must also give a hat tip here to the philosopher Anna Lauren Hoffmann for first applying this concept to AI ethics. Ahmed builds on the work of Judith Butler, who wrote about the performative aspects of gender: Butler argues that gender is performative in the sense that performing gender roles actually creates gender. Ahmed flips this concept on its head in talking about diversity work at universities, arguing that such work is often non-performative in the sense that it does not accomplish what it intends to accomplish. It can be even worse: diversity work at universities can also prevent other people in those institutions from having the resources to make more effective interventions. I'll argue that AI ethics is non-performative in this sense, because the approaches typically taken to mitigating algorithmic bias do not achieve the fundamental goal of making just systems, predominantly because such interventions do not address more fundamental issues: the intended function of those technologies, and the interests of the companies and people creating them.

Let me take an extreme example to illustrate the point. Consider the Pentagon, the US Department of Defense, which has its own AI ethics principles. The Department of Defense considers questions like what kinds of policies should be in place for the deployment of lethal autonomous weapons, things like autonomous drones. In applying the AI ethics principles the Pentagon has developed, they might ask: how can drone strikes be distributed equitably?
This hypothetical example illustrates that asking about the equitability of a technology like an autonomous drone neglects the broader issue of the injustice of US military intervention itself.

Another company, Axon, the developer of police technologies like Tasers and body cameras, has its own AI ethics board, which has considered questions like whether and how to deploy facial recognition software in its body cameras. Body cameras are a technology that research has shown to be ineffective at reducing police violence, and that therefore function mainly as another tool of mass digital surveillance. In considering whether to include facial recognition in body cameras, on the grounds of its inaccuracy across demographic groups, Axon neglects the broader issue that body cameras, as they are currently deployed, contribute to unjust surveillance by the police.

Google had its own AI ethics board for, in fact, fewer than ten days, after facing protests both from within the company and from the public at large over its inclusion of the leader of a major conservative think tank who is a notorious transphobe. You might think, from the initial formation of this high-profile AI ethics board, that Google is quite committed to thinking through questions of AI ethics. Yet consider the protests that workers at Google made against Google's participation in the military-industrial complex and its contracts with the Pentagon for software for autonomous drones, and Google's response to this and to other protests, like the mass Google walkouts, when workers protested Google's lack of action on sexual harassment by company leadership. Rather than rewarding the employees who raised these important ethical issues, Google retaliated against those who organised these initiatives.

I've just highlighted some of the contradictions around companies and states that are participating in forms of AI ethics intervention. Now let me say more about how, when these actors treat algorithmic bias as a technological problem, one that can be solved by making facial recognition more fair or by ensuring that software performs evenly across different demographic groups, they neglect the problems of algorithmic bias that are organisational.

Another example from Google is that of Timnit Gebru, whose name you might remember from her work with Joy Buolamwini on facial recognition and its biases. Timnit recently wrote a paper about the biases in large language models, as well as their climate impacts. Timnit, a worker at Google and in fact one of the managers of Google's AI ethics research, was subsequently fired by the company over this paper. What this example illustrates, much like the retaliation against the organisers behind the Google walkouts and the protests over Project Maven, Google's contracts with the Pentagon, is that those who point out issues, even technological ones, are still punished as a matter of organisational practice. The perpetuation of algorithmic biases such as those highlighted by Timnit is treated as secondary to the application of systems like large language models that are core to Google's business, such as serving search results or ads.

These problems of an organisational nature are not unique to the tech industry or to big tech. We can also look at the academic research communities that are thinking about AI ethics. The ACM Conference on Fairness, Accountability, and Transparency (FAccT) is one notable conference that has now existed for a few years.
Initially, FAccT was a place where researchers published work that was largely about issues like how to define algorithmic fairness mathematically. Over the years, social science, philosophy, and even art dealing with the social issues around AI increased in prevalence in this community. Those studying the mathematics of algorithms then formed their own conference, the Symposium on Foundations of Responsible Computing (FORC). This split illustrates a cultural divide, within the tech industry and among those interested in technology, between those who take seriously and centre their work on the social impacts of technology, and those who are primarily concerned with technological development. Here we see an organisational problem in the organisation of the field of AI ethics itself. Not without some irony, both of these areas, the more critical side of AI ethics and the more quantitative side, are funded substantially by big tech: many of the organisers and participants at both FAccT and FORC are researchers or ethicists at Google or at other companies.

The participation of big tech in this space stands alongside the systematic exclusion of marginalised groups from the area. White men are substantially over-represented in the tech industry, and this over-representation is reinforced by poor accountability for issues like bullying and harassment, which disproportionately affect gender and racial minorities in the field. This creates an unwelcoming environment and skewed demographics that then reproduce the very cultures preventing issues like algorithmic bias from being taken seriously, whether by researchers focused on the quantitative side or by tech companies.

We can also see how these organisational problems are supported by other sources of funding. Stephen Schwarzman is CEO and founder of the Blackstone Group, which became the largest rental real estate owner in the world following the financial crash about a decade ago, when Blackstone bought up repossessed houses and then rented them out, in many cases to the same people who had previously been living in them. Schwarzman recently gave a historically large gift to the University of Oxford to form its new centre for the humanities, within which he insisted there be an AI ethics institute. Stephen Schwarzman is a notable confidant of Donald Trump, and remained so throughout Trump's presidency, across the events in Charlottesville, more recently the Capitol attack, as well as Trump's racist immigration policies and many offensive remarks. What makes Stephen Schwarzman interested in AI ethics, or rather, what kinds of AI ethics is Stephen Schwarzman interested in? Oxford is not the only place where Schwarzman has invested in AI ethics: he also gave a large sum to fund the MIT College of Computing, as well as the Schwarzman Scholars programme in China. At Oxford, Schwarzman insisted that the AI ethics centre be purely philosophical.

Looking more closely at MIT, we can see that MIT, along with many other elite institutions and research universities, is one of the main actors in the production of the research behind facial recognition systems, some of which have been directly provided to, or formulated in partnership with, companies in China or the Chinese state, for use in the systematic oppression of a Muslim minority in China: the Uyghur people of the Xinjiang, or East Turkestan, region.
MIT, therefore, along with many other institutions, is involved in what is for them the profitable business of building surveillance technologies that are being used, in this case by state actors, for the oppressive enterprise those states require to maintain their own power. This example connects to a point made by Simone Browne in Dark Matters, where Browne points out that surveillance not only disproportionately impacts Black and brown people, but in part constructs what Blackness is. Blackness as a construct, which in the United States was used to justify slavery and, more recently, the profitable enterprise of prison labour, and which here in the UK has played a role in the Windrush scandal and the abuse of the labour of Black people and other people of colour, is therefore reinforced by these digital surveillance systems. Companies engaged in the research and development of mass digital surveillance technologies, whether for state or commercial interests, are thereby reproducing the kinds of systematic inequalities that have historically been forces for economic development in places like the United States and the UK. Putting these pieces together, we can view tech as just one component of organised exploitation along racialised and gendered lines.

I've just given an overview of my argument that the approaches being taken to mitigating algorithmic bias don't go quite far enough, because they don't engage with the organisational problems of AI and information technologies. Let me now talk about what I personally am hoping to do about this situation. There are three things I'm currently focused on.

One is taking personal responsibility. In many conversations about AI ethics, the finger is pointed at someone else; I've been guilty of this myself in this talk, pointing the finger at MIT, pointing the finger at the University of Oxford. Taking personal responsibility means recognising personal complicity: I received my doctoral degree from the MIT computer science department, and I was until recently a faculty member at the University of Oxford. I spend a lot of time thinking about what I can do both to dismantle my own privilege and to dismantle broader systems of white supremacy and heteronormative patriarchy. I'm guided in this work by inspiring recent efforts such as design justice, data feminism, and Sara Ahmed's Living a Feminist Life.

The second thing I've been thinking about recently is engaging in a different kind of theory of knowledge production. What I mean by this is that the tech sector and academic technologists must move beyond thinking close-mindedly about technology as purely computational or quantitative, and instead engage with an understanding of knowledge production that recognises different kinds of knowledge and the situatedness of all knowledge; that is, valuing what all people, by nature of being people, bring to understanding the impacts and development of, in this case, AI and information technology, as well as considering the material impacts of these technologies, both on the environment and on people's material conditions.

Finally, and perhaps most importantly, I'm trying to focus on work that begins not with the specification of a problem, and not with a research question, but instead with a theory of social change. My own theory of social change emphasises the necessity of collective action and the importance of building coalitions.
I take inspiration for this kind of work from authors like adrienne maree brown, in brown's Emergent Strategy, as well as from groups like No Tech for Tyrants, the student-led grassroots organisation in the UK that is casting light on the human rights abuses of companies like Palantir in their collaborations with universities and governments, and from the UK's first tech workers' union, United Tech and Allied Workers.

Finally, I hope to be able to explore all of these ideas and more, including radical re-envisionings of alternative ways of producing and conceptualising technology and its role in the world, through the Creative Computing Institute's new master's programme in Internet Equalities. I'd love to talk about any of these topics with you, or with anyone you know who might be interested, whether in this programme or in any of the areas of research I've been describing. I've included on the slide here the email contact for the Creative Computing Institute, as well as my Twitter handle. I hope you have a peaceful day.
