SERC Symposium 2023: Maximilian Kasy

Okay, thank you so much for organizing this workshop and for having me. I'm going to pick up on some of the topics that we saw in the first panel today. As we heard, there have been a lot of different debates about the social impact and the ethics of AI over the last few years, concerning different aspects: questions of fairness, discrimination, and inequality; questions of privacy and data property rights; questions of value alignment between machines and humans, or between different humans; the concerns about a possible robot apocalypse; questions about explainability and accountability of algorithms and algorithmic decisions; and questions of automation and the resulting inequality in the labor market. These debates can be fairly confusing, or at least they were quite confusing to me, and corresponding to these debates there are all kinds of efforts to potentially regulate AI, as we've heard. What I'm going to try today, and also in the paper that I put up here, is to give a bit of a systematic framework, from an economics perspective, for how we can think about all these questions.

Here are the key arguments I want to make and use to structure these discussions. The first argument is that if we look at what AI algorithms, and machine learning algorithms more specifically, do at the end of the day, 95 percent of them boil down to the maximization of some measurable objective, a single objective. However, if we look at what's happening in society, and this is in some sense core to what economics is about, there are different people, and different people want different things and have different resources. That's an even bigger issue in a very unequal society, and in a society where there's a lot of heterogeneity of values. So if we want an informative society-level assessment of things like the impact of AI, or the impact of regulations, then we have to think about the winners and losers that are generated by technologies and by regulations, and we have to think about how we trade off these gains and losses and aggregate them in some way. And finally, if we want to think about how we create, broadly speaking, socially beneficial AI, we have to think about how we align the objectives that the algorithms are maximizing with some notion of social welfare, or society-level well-being, that's based on these trade-offs.

What I'm going to do now is elaborate on these points a little bit, as background; much of this might be familiar. Then I'm going to use them to talk briefly about some of these debates on the ethics of AI.

To start with, my claim was that AI is primarily about maximizing measurable objectives. Here is a quote from one of the standard textbooks on AI arguing that AI is about the construction of rational agents, and rational agents are designed to maximize some, again, measurable performance measure. The leading approach in AI these days is machine learning, where machine learning essentially does this maximization based on data and statistical inference, broadly speaking. And that's true for all the different sub-branches of machine learning: supervised learning being the biggest and most well-known one in some sense, but it's also true for fields like targeted treatment assignment, multi-armed bandits in their many versions, reinforcement learning, and so on. It's even true for the language models and transformers and that type of approach.

So just to elaborate a bit on that, what are the objectives? That's what I really want to drive home in this talk: what are the objectives, and how do we align them with what society wants? In supervised learning, it's typically some form of prediction error loss: you have a prediction g(x) of an outcome y that you're trying to predict, and some notion of loss measuring how far you're away from that. In targeted treatment assignment, which is actually the setting that a lot of the debates about fairness and bias touch on, it's about assigning some treatment to individuals, typically where the treatment is consequential for them; think about getting a job, being in prison, getting credit, getting a medical treatment, things like that. That treatment assignment is again done based on individual-level features, and we might evaluate the treatment assignment based on the realized outcomes for the treated. Similarly in multi-armed bandits, where this is taking place over time, you're trying to learn which treatments are effective, and you're trying to maximize a stream of, again, measurable outcomes y. And again in reinforcement learning, where the same thing happens dynamically, but past actions also affect future states. Throughout these slides I put the machine learning objectives in blue.

Now, switching gears, how do we think about social welfare? A natural summary of how economists, and I think also many political philosophers in the theory-of-justice tradition, would think about it is this: a lot of different normative notions of what social welfare is, coming from different political directions if you want, argue that ultimately social welfare is about some notion of aggregating individual welfare. Formally, you could think of it this way: there's a set of individuals in society; we have some notion of how we measure these individuals' welfare, which I denote v_i here; and then we have some notion of how to aggregate it into a society-level assessment, F of the v_i. F would be called the social welfare function in economics.

Now, there are a ton of questions that this type of framework opens up, and I'm not going to address them, just put them out there. The first question is who even counts: who is part of the set of individuals that we're thinking about? Is it just the citizens of a country? All the residents? What about all humans on Earth, people in other countries? Not just current people, but what about future generations? What about animals, and so on? Then there are the questions of how we measure individual welfare, and that's a big debate: do we care about opportunities, or do we care about outcomes? Economists would often think about utilities, so measuring welfare in terms of what you are revealed to prefer through your own actions; Rawls, in A Theory of Justice, would argue for resources or primary goods as the measure of welfare; Sen would argue for capabilities; and so on. And lastly, there's the big question of how we aggregate or trade off across different people. Again, that's what I want to emphasize here: we don't just have one objective, we have all these different notions of welfare v_i, and we have to somehow trade them off across people. So we have to ask: how much do we care about millionaires versus homeless people, sick people versus healthy people? Does it matter that some groups were victims of historic injustices, and so on, in terms of how we assess and trade off welfare?

So, lots of questions, but the big point again is this: we have this notion of social welfare, F of the different v's, and that is, in some sense, normatively what we care about. Then we can ask how we align that with what it is that the algorithms are maximizing in AI. And that raises the last background question I want to put up here, which is: who are the agents of change? Who has the capacity to align the objectives that the algorithms optimize with what we care about as a society, with what we care about normatively? A lot of the discussions in the ethics of AI effectively boil down to appeals to the ethics of corporate managers and corporate engineers, arguing there should be some self-restrictions, or that maybe we should care about the curriculum and make engineers aware of ethics. In economics, we would typically think that private corporations are, in the first instance, maybe not perfectly but to a large extent, profit-maximizing entities. So if there is some conflict between what we care about normatively as a society and profit maximization, the latter is going to win out. Correspondingly, there are a lot of different potential actors in society that could be addressed by these debates about the ethics and social impact of AI. We don't have to focus just on corporations: there are consumer advocates, there are unions, there are courts and judges, there are policymakers in the public sector, there are social movements. There are many people we could address and could think of as potential agents of change here. And I guess the big point I would argue for is that, ultimately, the way to align what the algorithms are maximizing with what we should be maximizing as a society is to give effective democratic control to the people who are affected by algorithmic decisions over the objective functions that are being maximized.

So let me now take my remaining time to talk about some of these debates on the ethics of AI and think about how this perspective, or this framework, would allow us to rethink them. One big discussion that we heard about in the morning is about fairness and discrimination by algorithms. What I want to do here, and this is maybe a bit of a caricature, is contrast a standard view on fairness and bias with a view that I would advocate, based on the type of social welfare framework I put up. When people talk about fairness or bias, there are many different notions that have been proposed, but I would argue that 90 percent of them ultimately boil down to the same thing, which is arguing that somehow we should treat people of the same merit, whatever merit means, similarly, independent of group membership. What that's formalizing is something that's been around in economics for a long time: the notion of absence of taste-based discrimination. In the context of corporate decision making, that essentially says: if you're profit maximizing, it's okay, and we only call it bias if you're deviating from profit maximization, if you somehow care about group membership in addition to profits. But that's a very limited notion of bias. In some sense it's baked into the notion of bias, because bias is just deviating from the objective, and it's baked into the single-agent optimization framework of machine learning, that we think about it that way. What it implies is that we only call deviations from profit maximization unfair, which is different from how we would think about the impact of, say, an automated decision-making system on social welfare, including on inequality, if social welfare cares about inequality. That impact is about the counterfactual consequences of algorithmic decisions for the welfare of the people who are being affected, possibly differentially for different people.

So the welfare perspective will often have different implications from the fairness perspective. For instance, think about what happens as we collect more data and prediction algorithms get better: generally, that's going to lead to a more dispersed treatment assignment. The more information we have, the higher the variance of the predictions gets, leading to more unequal treatments. Typically that's going to increase inequality, and therefore be bad for social welfare if we care about inequality. But it's also going to be good for fairness, because it's going to decrease the difference between predictions and the underlying merit that's being predicted. Another set of contexts where there's a contrast between the two perspectives: if we think about any form of affirmative action, or redistribution, or compensation for pre-existing inequalities, that would be called biased or unfair by all the standard definitions of fairness that are floating around. So it would be bad for fairness, but good for equality, and typically good for welfare.

All right, another debate we heard about earlier is on privacy, data property rights, and data governance. One well-developed framework in computer science is the framework of differential privacy. I'm simplifying a bit, but essentially it says that it should make no difference to you whether your data are included in a data set or not: in a probabilistic sense, there's almost no difference in the probability distribution of observed decisions or reports whether or not your data are included in the data set. One remarkable feature of these definitions is that machine learning performance is essentially unaffected by imposing differential privacy, at least in large samples. So you can maintain differential privacy, such that it makes no difference whether your individual data are included or not, without affecting at all what's happening downstream in terms of automated decisions in any machine-learning-based system. Closely related to that is the notion of individual data property rights, which has been proposed as one remedy here: the idea that you should have a say over whether your data are collected and how they are processed and passed on. But in some sense, if differential privacy is maintained, then at least on an individual level you have no economic incentive to care whether your data are included or not, because it's not going to make a difference for you, while at the same time all the downstream consequences remain.
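The differential privacy guarantee the speaker invokes can be illustrated with the classic Laplace mechanism for a count query. This is a minimal sketch under made-up data, not any particular production library:

```python
import math
import random

def laplace_noise(scale):
    # The difference of two independent exponential draws is
    # Laplace(0, scale)-distributed.
    e1 = -scale * math.log(1.0 - random.random())
    e2 = -scale * math.log(1.0 - random.random())
    return e1 - e2

def private_count(data, predicate, epsilon):
    """Release a count query under epsilon-differential privacy.

    Adding or removing any one person's record changes the true count
    by at most 1 (sensitivity 1), so Laplace noise with scale 1/epsilon
    makes the released value's distribution nearly identical whether or
    not your record is included.
    """
    true_count = sum(1 for row in data if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)

# With many records the noise is negligible relative to the signal:
# downstream inferences are essentially unaffected, even though no
# single individual's inclusion is detectable from the output.
data = [{"clicked": i % 3 == 0} for i in range(100_000)]
estimate = private_count(data, lambda r: r["clicked"], epsilon=0.5)
```

This is the point made above in miniature: the aggregate pattern (roughly a third of users click) survives intact, so the downstream consequences of learning it are untouched by the individual-level privacy guarantee.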
So, expressing this in economic terms, we would say there's an externality of sharing your data. Pretty much all of machine learning and AI is about learning these patterns and making these inferences, which is not about learning your individual data entry; it's about learning the relationships between different things in the data. So privacy in the sense of differential privacy, and individual-level data property rights, can do nothing to address any of these downstream consequences, whether positive or negative: neither the potential harms of automated decision-making systems, nor the social benefits that come out of them. So again, I would argue that we need some form of collective democratic governance of how data are collected and used; individual-level property rights will not be able to address these issues.

The last debate I want to briefly address is the discussion about explainability and accountability of AI systems. Typically, the question is asked about some kind of individual-level recourse, from a more legalistic perspective. Some automated decision-making system has made a decision, say deciding whether I'm admitted to some university, or get a job, or get credit at a bank, and maybe I'm unhappy about the decision: do I have any recourse, can I ask for the decision to be explained? Often that's framed in terms of saying that the algorithm has to be simple in some sense. Whatever "simple" means is often a moving target depending on your background: if you have some statistics training, a linear prediction model might be simple; if you're not familiar with those, it might not be; and so on. Related to this question of explanation, there's the question of who's responsible for algorithmic decisions, but this is again about individual-level recourse.

What I would argue is that we should think about explainability not so much at the level of individual decisions, or the question of why a decision was made, which in any case is not really a well-posed causal question: causal questions are typically about how the decision would change if something changed, not why something happened. Instead, I would argue, we need transparency and public debate about objectives, constraints, and the space of actions that algorithms can choose from. A lot of algorithms might be complicated in terms of what's going on in the background: take any type of deep learning model, the optimization algorithms that go into it, the hardware, the data sets. Maybe it's hard to have a broad-based debate about those. But as we heard earlier, at the end of the day these are reasonably simple systems, in the sense that there's a simple, well-defined measure that's being optimized, and we can have a public discussion about this optimization. That's not hard to explain and debate. Who knows what's going into Facebook's algorithm for showing us its social media feed and its ads? But discussing the fact that what they're doing is maximizing the probability that you click on an ad, that is something you can do on a broad basis. So if you think about explainability and accountability at the level of the system rather than at the level of individual decisions, this is again something that we can have a broad democratic debate about, and that debate has to be the starting point for democratic control of what's being optimized.

Let me summarize my key takeaways. There are a lot of different issues being raised by AI and automated decision-making systems more generally, including questions of fairness, privacy, value alignment, accountability, automation, and others that I haven't listed. Resolving these issues, I would argue, requires democratic control of algorithms' objectives and of the resources that go into maximizing these objectives, like data and computational infrastructure. And in order to get this democratic control, we need public debate and new collective decision making at various levels, in order to, again, align what algorithms are optimizing with what we care about as a society. Thank you.

[Applause]

We have the first question, and we can have a second question; there's another mic on the other side.

Max, thanks for an excellent presentation. I loved your framing and taxonomy of all the issues from different perspectives. I think I get that the algorithms have an objective that's aligned with profit maximization; that's very clear to me in the context of predictive AI, where we've been thinking about recommendation algorithms, ranking, or social media. My question is, when I put it in the context of these generative models, I'm not sure the picture is that clear, with a single objective, because these models have been developed, I guess, for autocompletion or summarization or whatever, but they now do all kinds of things, and it's not very clear to me what the objective function is there in terms of how they're developed. I wish Simon was here still, so I don't know if asking him would help, but they've been developed for one thing, and they do all kinds of things that we do not understand: they give medical advice, they give all kinds of outputs that would fall into very different realms. So how do we think about generative AI within your taxonomy?

It may be a hard question, and I'm maybe not the best person to talk about it, but my understanding is that these systems are trained in two steps, at a basic level. The first one is basically a prediction, the self-supervised learning, which is essentially autocomplete: you predict what would be the next word on the internet in this text.
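The self-supervised "autocomplete" objective described here is, concretely, a cross-entropy loss on next-token prediction. A minimal sketch with toy scores, no real model assumed:

```python
import math

def next_token_loss(logits, target_index):
    """Cross-entropy loss for a single next-token prediction.

    logits: one unnormalized score per vocabulary item, from the model.
    target_index: the token that actually came next in the training text.
    Returns -log p(target), where p = softmax(logits).
    """
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target_index]

# A model that is maximally unsure over a 4-token vocabulary pays log(4);
# raising the correct token's score lowers the loss.
uniform = next_token_loss([0.0, 0.0, 0.0, 0.0], 2)
confident = next_token_loss([0.0, 0.0, 5.0, 0.0], 2)
```

Summed over a training corpus, minimizing this loss is the "predict the next word on the internet" objective; the reinforcement learning step the speaker turns to next layers a learned reward on top of it.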
And then, my understanding is that for something like ChatGPT there's an additional reinforcement learning step, where human trainers evaluate the output in terms of, I don't know, truthfulness or things like that, and that's essentially defining the reward that's then being maximized, which is, at the end of the day, some weighted combination of what would appear next on the internet and what would be evaluated as good by the human trainers of the model. How do we think about what the right objective there is? Again, I think that's an important debate, but I would stand by the claim that even for these complicated systems, there is a numerical measure of performance that they are maximizing, and we can discuss what this numerical measure of performance should be.

Next person.

Thanks so much; this was a really fascinating overview. Last week a statistician from Berkeley, Michael Jordan, was here giving a talk, and he was talking about how in AI a lot of us think about trying to create mimicry, basically something that's intelligent and something that's autonomous, so very much an individualized view of what AI should look like, or how we should think about this. He was pushing for this idea of the collective, and thinking about this more in the context of collective rights and contribution, rather than this basic mimicry. I wonder how this relates to the general argument you were making about moving away from individual property rights to the more democratic governance aspect. Do you think there is a general view moving towards this collective aspect?

I mean, if I understand correctly what he was saying, I guess there is a bit of demystifying going on: intelligence, what is intelligence, who knows; optimization is something we have a pretty good idea about, and that is what's happening under the hood in pretty much all these systems. My understanding of what Michael Jordan is interested in is also the point that there's not just one of these autonomous systems out there; there are many of them, potentially interacting, which raises its own questions, because they might not all be aligned among themselves in what they're maximizing. And I think that again relates to the fact that, in the same way there's not just one human out there, there are many humans, and they create many technical systems which interact, and the distributional conflicts that happen at the human level then carry forward into the potential conflicts between these technical systems. I don't know if that answers your question.

Yeah, no, this is great, thanks: a more systemic view of the whole thing. Thank you.

[Applause]
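The social welfare framework at the center of the talk, aggregating individual welfare levels v_i into a society-level assessment F(v_1, ..., v_n), can be made concrete with a few standard aggregators. This is an illustrative sketch; the function names and the Atkinson-style parameterization are textbook conventions, not anything from the talk's slides:

```python
import math

def social_welfare(v, aggregator="utilitarian", inequality_aversion=2.0):
    """Aggregate individual welfare levels v into F(v).

    - "utilitarian": the plain sum; indifferent to inequality.
    - "rawlsian": the welfare of the worst-off individual.
    - "atkinson": inequality-averse; applies a concave transform to each
      v_i before summing (requires v_i > 0; aversion grows with the
      parameter).
    """
    if aggregator == "utilitarian":
        return sum(v)
    if aggregator == "rawlsian":
        return min(v)
    if aggregator == "atkinson":
        e = inequality_aversion
        if e == 1.0:
            return sum(math.log(x) for x in v)  # limit case: sum of logs
        return sum(x ** (1.0 - e) / (1.0 - e) for x in v)
    raise ValueError(f"unknown aggregator: {aggregator}")

# A transfer from a better-off to a worse-off person leaves the
# utilitarian total unchanged but raises Rawlsian and Atkinson welfare.
before = [1.0, 9.0]
after = [2.0, 8.0]
```

Which aggregator to use is exactly the normative question the talk leaves open: the choice of F encodes how much weight millionaires get relative to homeless people, and it is that choice, not the machine learning objective alone, that the talk argues should be under democratic control.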

2023-05-23 18:09
