Starr Forum: From Principles to Implementation: The Challenge of AI Policy Around the World



Greetings. I'm Michelle English, and on behalf of the Center for International Studies and the MIT-Mexico Program, welcome to today's Starr Forum. Before we get started, I'd like to remind everyone that we have many more events planned for the semester, including one this Friday at noon on the philosophy of human rights. Details for this event and others are available in the foyer, and for those who haven't already, we also have a sign-up list so you can receive email notices on all of our events. In typical format, today's event will conclude with a question and answer session. For the Q&A, I'd like to ask everyone to please be mindful of time and to ask only one question. We will be using the microphones during Q&A, so please identify yourself and your affiliation prior to asking your question.

It's truly an honor to have with us today Luis Videgaray to discuss the challenge of AI policy around the world. Dr. Videgaray is director of the MIT AI Policy for the World Project and a senior lecturer at the Sloan School of Management. It is also an honor to have with us Kenneth Oye, who is joining the conversation. Professor Oye is a professor both of political science in the School of Humanities, Arts, and Social Sciences and of data, systems, and society in the School of Engineering. He is also director of the Program on Emerging Technologies at the Center for International Studies. At this time I'd like to invite Professor Oye to the podium to provide introductory comments and to formally introduce our esteemed speaker, Dr. Videgaray.

Thank you, Michelle. It is truly a pleasure to be offering an introduction for Luis. By way of background: the gentleman you see sitting here in the front row has a long, distinguished, and sometimes sordid history. The distinguished part is an undergraduate degree from ITAM; the sordid part is a doctorate from MIT. Luis also served as foreign minister and finance minister of Mexico. I have to tell you a little story by way of introduction. Luis spoke to a group of 85 senior American officials, and after his presentation a three-star general came over and said, "Professor, is Luis available to become Secretary of State?" I indicated that that might be difficult to arrange. His duties have included working on the successor to NAFTA, the USMCA, and a range of other issues: putting out financial crises and handling problems and disputes on everything from immigration to conventional investment and trade policy. Luis will be speaking today on AI and development — a topic that takes something near and dear to the hearts of all of us at MIT, AI, and opens a serious discussion of the implications and effects of the technologies being developed here and elsewhere for development prospects around the world. The format will be a presentation by Luis; we'll have a little bit of a conversation up front, and then we'll turn to you for questions and answers. As Michelle indicated, please identify yourself, and we'll have a microphone for you to speak. Luis, thank you so much for joining us today.

Thank you so much, Ken. I'm very proud to be here, and very thankful to the Center for International Studies, and to Ken in particular, for inviting me to be at the Starr Forum today. Thank you all for being here. I see many friendly faces — thank you to the students from Sloan right here.
And I want to single out somebody who is here who to me is very special. Thirty-five years ago, actually... I'm going to mention two people, because I just saw one come in, but I want to start with my thesis advisor. It was Jim: my thesis was completed 22 years ago, and I had the privilege of going through MIT as a student with the guidance of Jim Poterba, one of the best teachers not only of economics but about a lot of things in life. It's a real honor to have you here, Jim — thank you so much. And by the way, your new office looks a lot better than the old one; the building is new, so it's great. I also want to mention and acknowledge the presence here of the Consul General of Mexico. Thank you, Consul General — Alberto, my friend — for being here; it's an honor to have the highest representative of the Mexican government here with us today. Ken was very kind, very nice, in his introduction.

And believe me, I have a lot of — let's call them war stories — about my career in government, about the US-Mexico relationship, and many other things. We're not going to talk about that today; of course we can always do it — please feel free to approach me and we can talk about any of those issues. But today we're going to talk about artificial intelligence, and in particular about artificial intelligence policy: how it is evolving around the world, why it is important, and whether we are getting somewhere with it. So let's get started.

What do we mean when we say artificial intelligence policy? That's the first question, because there are a lot of concerns about AI. I'll try to provide here not a definition but a collection of themes that, to me, would constitute a coherent, comprehensive AI policy. As you'll see, we'll start with a blank page, and it will quickly fill up with lots of themes and lots of complex issues.

First of all, one key topic of AI policy is the use of AI in the delivery of government services and public goods. This is gaining a lot of traction around the world — with perhaps a little less visibility, but it's extremely important. In a lot of places, healthcare and education are core services provided by the government; in many countries, like my own, the government runs anti-poverty programs. And core services like tax collection can be aided with, and delivered with, the support of AI. So the use of AI to have a better government is an important first theme. By the way, this is how I got involved with machine learning. Five years ago I was the finance minister of Mexico, and as such I oversaw the governance of our tax collection authority. We were trying to do things a little bit better, and after several frustrating conversations with consultants I ended up talking to computer scientists and realized that this new — well, not so new — methodology called machine learning could actually help do things a little bit better. It's a complex implementation, but there's a lot of potential for better delivery of services. By the way, there's interesting research here at MIT, particularly at the Media Lab, on how to deliver anti-poverty programs better with the aid of AI — how to make them more effective and more efficient.

Second, a key question of AI policy is how governments should invest, or use public resources, in AI.

Is this something that should be left completely to the market, or should government step in and support both the development and the rollout — the use — of AI in society? Of course the basic question is research and development, and that question looks a little different in a country like the US than in a developing country. But it goes beyond that. You have the question of whether the government should support investment vehicles for startups — venture capital funds with public resources. This is a model that is very much used in Asia, particularly in China: you see a lot of venture capital coming from the government into industry. So that's a second question. Education: should the government be doing more to fund education in computing? And what about tax breaks and other incentives for the use of AI in the economy?

Talking about the economy: the third big theme is AI in the economy. A lot of the anxiety, a lot of the conversation about AI, is about job displacement and the inequality it can create, and it's a very valid topic of discussion. Here MIT has a very strong task force — the Work of the Future task force — that is preparing a report; there's already a partial delivery of it, and we'll talk a little more about it in a second. It asks how AI interacts with the job market: does it create new jobs, or will it displace human beings? Those questions are central to any AI policy. But the economic questions go beyond that: there's also market power, economic concentration, antitrust policy. Today we're seeing news from Europe on data policy — just a few hours ago the European Union issued a new communication on how they're going to deal with internet platforms — and even things like algorithmic collusion in financial markets, which is a new topic, but one where there are already indications that these things might happen. So that's a whole block of issues on the economy.

Then we have the social responsibility issues of AI — things like privacy. The type of AI that has exploded is of the statistical type — machine learning, and in particular deep learning — and it consumes lots of information. So how do you reconcile that model of learning with the right to privacy and to a private life? That's a big discussion. There's also the issue of fairness and bias and the possibility of discrimination: we know that algorithms can be biased, for several reasons. Explainability: some types of algorithms, particularly deep learning algorithms, are hard to interpret — hard to understand how they arrive at the predictions they make — and this can be quite frustrating and quite important in some settings. Think of a judge basing a decision on a recommendation by an algorithm: if the judge cannot understand why that recommendation is being made, that's a problem. You can think of the same in the medical context, or in the job market. Robustness: that is, the consistency of algorithms — how resistant they are to random variations in the world, but also to adversarial, intentional attacks on their predictions. And then the question of accountability.

Then we have a fifth block, which is AI and democracy:
the use of these sophisticated techniques — learning from very granular information — to manipulate the minds of consumers, but also of voters, and to influence politics in democracies: to influence opinion polls and, of course, elections. We have the questions of surveillance: how machine learning can power new tools of surveillance, starting with face recognition but going well beyond it, and the emergence of surveillance — some people talk about surveillance capitalism, some talk about a surveillance state — but this is clearly an issue. And of course there is AI enabling authoritarian regimes — techno-authoritarianism — which we see in some places, some very important places, around the world.

And finally, the geopolitics of AI. This is a reality; it's a little bit the elephant in the room. Not everybody likes to talk about the geopolitical dimension of technology today, but there are clearly rival models around the world: we see one model of technology deployment in China, a different model in Europe, and an emerging model in the US. These models differ according to culture and history, and countries around the world realize it. There's a lot of talk about a technological decoupling happening as we speak.

Just go to the general press: almost every day you can read about a coming AI cold war, about national security concerns, and about this division of the world into separate camps. By the way, for a country that is neither China nor the US, nor part of the European Union, this is a problem, because it raises the question: where do we stand? Think for a moment of a Latin American country, an African country, a country in Southeast Asia — is this about choosing sides?

So you have these six blocks of themes. We could have a lecture on each block; we could have a lecture on each line within each block. This is a very complex problem, and it's not something that will be solved in a single report. This has got to be a collective conversation, and it's going to take a while — it's going to be years in the making. There's a seventh topic that I've added in the middle, which is sustainability. We should keep in mind that machine learning is a computationally intensive technology. It needs a lot of electricity, which means there's a relevant — and increasingly relevant — carbon footprint associated with machine learning. We need to keep that in mind, so I would add that as the seventh topic.

OK. So how are we doing in terms of developing consistent, effective AI policy around the world? As you may imagine, this is a process that is just starting. The first fact I want to point out — and these are all stylized facts — is that the world of policymakers on one hand and the world of computer scientists on the other are very different and far, far apart. This means, first of all, that there's an information lag: things that concern computer scientists today might become concerns of policymakers only a few years into the future. Think about the question of privacy: privacy has been a very important issue for computer scientists for a long, long time, and it became a policy issue much more recently. On almost every topic you can see this lag. But it's not just the lag. These are technologies that are quite complex. If you really want to understand deep learning, you have to understand a lot of math. Computer scientists will tell you it's not that complicated — well, if you're not a computer scientist or an MIT scientist, it's hard.

So there needs to be some translation, and whoever does the translation also introduces noise. Who does the translation? You have the general press — sometimes it's good, sometimes it's not that good. I strongly recommend MIT News, which covers a lot of what happens here at MIT, and also MIT Technology Review, which gives very good, balanced coverage of what's happening in the world of machine learning. The multilaterals do a lot of this work: you'd be amazed how many countries approach the World Bank or the OECD — even non-members approach these organizations — seeking advice. We at MIT know that, because the World Bank and the OECD then come to MIT to ask questions — not of me, but of the people who really know about this stuff — and those are good translations. Then we have the think tanks: some are good, some are very partisan, so there's a bit of a mix. Then there are the consultants, and consultants are jumping at this opportunity because there's a need for knowledge and consultants are everywhere — and they're making some very strong claims; I'll show you one in a minute. And then, of course, one of the biggest sources of this translation of knowledge is the tech companies, which is good because the tech companies are very strong in their knowledge, but the problem is that they are not unbiased: they have an interest — they may have a conflict of interest — in trying to influence actual policies through the way they spread knowledge.

And finally, there's a language gap. There's a lot of hype, a lot of buzzwords — a lot of things that make sense in their correct context, in the original papers or essays, but that taken out of context just don't make a lot of sense. You get used to reading everywhere about ecosystems of innovation, or leapfrogging opportunities (a favorite of Ken's and mine), and you see a lot of technical terms used incorrectly; there's a tendency to adopt them very rapidly. I'll give you just a couple of examples. I don't know if you're familiar with this book, by two authors from MIT's Sloan School, Erik Brynjolfsson and Andrew McAfee. It's a very good book, written back in 2014: The Second Machine Age. It's a brilliant book — actually, today, six years after being published, it's still a very good read. And as you can see from the title, they were announcing a second machine age. Does anybody today talk about a second machine age? Not really — not in the policy world. What people talk about is the Fourth Industrial Revolution.

By the way, I know Klaus Schwab very well — we're friends — and his is not a bad book. But quite frankly, this one is a much better book. The platform on which the other was not only published but publicized, however, was much more powerful: the World Economic Forum is, of course, a loud voice around the world with a lot of convening power. So nobody talks today about a second machine age, and a lot of people talk about the Fourth Industrial Revolution. Just to ask the question: what about the previous three? Nobody knows — but people talk about the Fourth Industrial Revolution anyway.

I'll give you another example. MIT, as I mentioned, is working through the Work of the Future task force. In the fall, in October, they released this report, and it's a very good report; if you have a chance to read it, I strongly recommend it. It's not final — there's still a lot of work pending. Here, for example, if you're interested in what's going to happen to truck drivers with autonomous vehicles, particularly in long-haul trucking, you'll find a very balanced approach, and the evidence is clearly nowhere near supporting the claim that all jobs in long-distance truck driving are going to be lost. That is not the case. At the same time — literally the same month that report was published — PBS released a documentary on AI that has an entire 40-minute section full of anxiety, essentially announcing that truck driving is going to end in a very short period of time. That is not science-based; it's a lot of hype. Just ask which of these two materials has been more influential — more watched or read. Obviously the documentary, on your right.

Let me skip to another favorite. I'm not going to name the consultant — this is from a very large consulting firm, and I've added these two blocks on purpose to hide its name, because I'm not going to say something nice about them. They make this kind of claim — to me, a phenomenal claim: these consultants have a "responsible AI toolkit" that "enables organizations" — that means governments, that means companies — "to build high-quality, transparent, explainable, and ethical AI applications that generate trust and inspire confidence." OK, we're done — we just need to go to them! Quite frankly, it's not that easy. Just spend a morning around CSAIL, or go to the Media Lab, and you'll see that these are extremely difficult issues to deal with. This is not simple.

So let's think about what a policymaker can already learn. I want to share with you three things that a policymaker can actually learn today about the state of the framework for AI policy around the world. First of all, it's well established that there's a need for policy guardrails. If you go back to the 90s and the emergence of the internet — some of you might be familiar with Section 230 — looking backwards, we can now call the thinking of that time a little bit naive:
it was thought that just the ability to share more information, and to consume information, was going to be good for everybody. Well, it turns out it was very good — but it also had problems; it was not all good. And we see those problems very clearly: we see them in politics, we see them in market concentration; there are many problems. So, ranging from the CEO of Alphabet, the parent company of Google, to the president of MIT, to the European Commission, to the White House, everybody agrees that some policy guardrails should be there to ensure socially responsible AI.

Number two — and this, I think, is consensus — this will not be solved just by the people who know computing, by the scientists in the field. This has got to be a very interdisciplinary and very inclusive conversation, where the voices and thoughts of people from different backgrounds — not just different disciplines, but also different cultures — come together to define policy. You cannot just go to the computer lab and say, "OK, get me some AI policy." This has got to be a collective conversation.

Already In the last three years a. Good. Number of AAA, principles. Or. Declarations. On principles. For AI have, been published, it depends on who you consult. But. Clearly. There. Are at, least 80 in our account. You. Know in the project that I lead we are identified, more than 80. And, there's already a literature, on the. Documents, on principles, so you can even read papers, criticizing. Comparing, since. It I saying the one I recommend the most is comes, from the Berkman Kling, Center at, Harvard Law they. Just published last month a very. Good a comprehensive. Review of these, principles and, all. These doc not all these documents are the same but, clearly consensus, is emerging so, I think, that right now we, can. We're still working on principals has, very very, small marginal, returns we, need to go to the next phase so, the key question is what comes next after principals. I. Don't. I'm not saying the principal's are not important I'm saying that we've. Got that. We've. Got a little progress there need to make this the next step what, is the problem with principals what, principles are not off. Using. Economist, parlance. Principals. Are a necessary condition, but not a sufficient condition for for, policy why, is that because. Policy, is about making. Hard. Choices by. The way in uncertain conditions and. Some. Of these principles are. You. Can create a. Tension. Between them so. You want a I to be very you and your algorithms, to be very accurate so. That when, an a a machine. Learning algorithm is predicting, cancer in in an x-ray you want to be very accurate but. You also wanted to be explainable. And you want it to be fair. Without bias you, want the information to be you. Want the information to be secured, so there is no risk to privacy you, want the jobs of the jobs. Of the radiology safe so there are no radiologists. Are gonna go away there, are many objectives that might be conflicting, with each other this. Is all about the. Trainers and policy. Maker policy, making I'm not I'm not a scientist so I I talk. To scientists now but. I come from the world of policymaking and I can tell you policymaking. In a t-score is about, understanding. The trade trade-offs, and making, tough decisions. So. What's. Next after. Establishing. The, principles to. Me the key question is about the. Trade-offs what, are the trade-offs involved. So. Let. Me before going into into some of the trade-offs and explain. What I'm talking about we. Just say it's. AI it's. Completely, unfair and absolutely. Inaccurate to. Say that computer scientists, don't care about.

That's completely wrong. Computer scientists have been working on these issues for years and have developed some quite sophisticated tools to address them. A few examples. On privacy: differential privacy has been around for years — it's a well-established notion in computer science — and you have edge computing and distributed learning, including federated learning and split learning; there are many other ways of handling the data in the algorithm-training process to protect privacy. On bias: there's a whole literature on constrained optimization — imposing requirements such as equality of false negatives, things that would prevent certain groups from being discriminated against — and on the diversity of the training data. On explainability: there's a whole set of techniques for post hoc explanations — analyses done after the training of an algorithm — and there are also methodologies for making algorithms more interpretable, more explainable, such as transparent design. These are just examples; we could go on like this for robustness and for accountability. I'm being very unfair by putting on a single slide what is really a literature of many, many people working for years on these problems.
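To make one of these tools concrete, here is a minimal sketch of the classic Laplace mechanism from differential privacy — a toy illustration with invented data and epsilon values, not any particular production system — showing how noise calibrated to a query's sensitivity buys a privacy guarantee at some cost in accuracy:

```python
import numpy as np

def laplace_count(data, predicate, epsilon):
    """Differentially private count query.

    A count changes by at most 1 when any single record is added or
    removed, so its sensitivity is 1, and Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy sensitive records: ages of individuals.
ages = [34, 29, 41, 52, 38, 27, 45, 60, 33, 48]

for eps in (0.1, 1.0, 10.0):
    answer = laplace_count(ages, lambda age: age > 40, epsilon=eps)
    print(f"epsilon={eps:5.1f}: noisy count of ages > 40 = {answer:6.1f}")
```

Smaller epsilon means stronger privacy and noisier answers; larger epsilon means the reverse. The point is that the tool hands back a dial, not a yes/no switch — which is exactly the frontier picture that comes next.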

So I'm going to show you what these tools look like. If today you go and talk to people who really know artificial intelligence and say, "I'm concerned about privacy," or "I'm concerned about fairness and bias — how can you help me?", they won't give you a box and say, "Here's an algorithm that is unbiased," or "Here's an algorithm that protects privacy." They will give you something that looks like this.

What are these? These are Pareto frontiers — combinations that are efficient. This one is from the thesis of a recently graduated PhD student from the Media Lab — proudly Mexican — Alejandro Campero, and it shows the trade-off between the utility, or accuracy, of a machine learning algorithm and the degree of privacy protection. There's a trade-off: if you want more utility in terms of accuracy, you lose privacy, and vice versa. This one comes from a book I really recommend, The Ethical Algorithm, by Kearns and Roth, two researchers from UPenn, just published last year. Here they show a Pareto frontier between the unfairness of the algorithm and its accuracy, as expressed by its error. You see that making the algorithm more fair results in losing accuracy. For some applications, losing accuracy is not much of a problem; for others it is life-critical, or a really important decision hinges on it. These are perhaps not the answers people expect from the tools.

Let me give you an interpretation of this. What I'm saying is that the technical tools won't give you a straight answer; they will give you a menu of options — a menu of options that are efficient. Clearly — just going back here — you don't want to position yourself with a policy that puts you under the frontier. So the tools will give you a menu of options, but they do not offer policy decisions by themselves. In other words, these techniques answer a question with a better question, and AI policy is about how we answer the question we get back. We say, "Give me an algorithm that is fair," or "Give me an algorithm that is explainable," and we get back a sense of a trade-off. The key question to me, in defining policy, is how to set up a framework that allows you, first of all, to ask a good question. And this comes back a little bit to what I learned from Jim while studying economics: you need to understand the nature of a trade-off.

I'm using here two concepts transplanted from microeconomics. One is the elasticity of a trade-off. What do I mean? If you have a frontier that is close to vertical, then you can gain a lot of privacy protection, or gain a lot in fairness, without losing much in accuracy. If the curve is flatter, it means the trade-off is real. A theoretical trade-off is not enough to present a problem; the actual problems come from the slope of the curve. So a policymaker who gets these kinds of answers should first ask: what's the slope? It's kind of a boring question, an odd question, but it's an important one — what is the slope of the trade-off? — if you really want to understand your options. The other concept is the curvature. Is it a convex frontier, where you see diminishing returns? If so, the trade-off clearly points toward an optimum that is not a corner solution: you're going to have to balance. So if you have a frontier that has a slope and is curved, you're most likely going to need a compromise; it's not going to be very effective to sit at the extremes.
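What such a frontier and its slope look like numerically can be sketched in a few lines — an illustrative toy with synthetic data, not the figures from the thesis or the book mentioned above: sweeping a fairness weight over per-group decision thresholds traces out an accuracy-unfairness frontier, and finite differences estimate the slope a policymaker should ask about:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups with shifted score distributions: the classic source of
# unequal false-negative rates under a single decision threshold.
def make_group(n, shift):
    y = rng.integers(0, 2, n)                 # true label (0/1)
    return y + rng.normal(shift, 0.8, n), y   # noisy score, label

score_a, y_a = make_group(500, shift=0.0)
score_b, y_b = make_group(500, shift=-0.5)

def evaluate(t_a, t_b):
    """Accuracy and unfairness (false-negative-rate gap) when each
    group is classified with its own threshold."""
    pred_a, pred_b = score_a > t_a, score_b > t_b
    acc = np.concatenate([pred_a == y_a, pred_b == y_b]).mean()
    fnr_a = (~pred_a & (y_a == 1)).mean() / (y_a == 1).mean()
    fnr_b = (~pred_b & (y_b == 1)).mean() / (y_b == 1).mean()
    return acc, abs(fnr_a - fnr_b)

# Sweep a fairness weight lam: each lam picks the threshold pair that
# minimizes error + lam * unfairness, yielding one frontier point.
grid = np.linspace(-1.0, 2.0, 31)
candidates = [evaluate(t_a, t_b) for t_a in grid for t_b in grid]
frontier = set()
for lam in np.linspace(0.0, 4.0, 21):
    acc, unf = min(candidates, key=lambda au: (1 - au[0]) + lam * au[1])
    frontier.add((round(unf, 4), round(acc, 4)))

# Finite differences: accuracy gained per unit of unfairness tolerated.
pts = sorted(frontier)
for (u1, a1), (u2, a2) in zip(pts, pts[1:]):
    if u2 > u1:
        print(f"unfairness {u1:.3f} -> {u2:.3f}: "
              f"slope {(a2 - a1) / (u2 - u1):+.3f}")
```

A steep segment means fairness is nearly free; a flat one means the trade-off is real. A slope that shrinks as more unfairness is tolerated is the curvature — the diminishing returns that point toward an interior compromise rather than a corner solution.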
More important than that — and this is a key concept; if I wanted you to take one thing away from this conversation, I would mention this one — the key challenge is actually not pinpointing the curves. It is to create an institutional design that allows for democratic decision-making about the trade-offs. Why do I underscore the word democratic? Because technology is not just about technology and technologists. It affects us all, and we want a setup where we decide what is more important based on the opinion of society as a whole — not just the technocrats or the scientists.

There needs to be a process of institutional design to get there.

One more slide on the trade-offs. The trade-offs are not just about accuracy; they're about other things too. What about innovation? What about international leadership? A lot of people are concerned that imposing restrictions on machine learning — things like privacy, fairness, and explainability requirements — will slow down innovation, and that if you slow down innovation, you might lose leadership in the international arena. You actually read that a lot — if you read the general press, you read it a lot — and if you go to Washington, as I have, to Capitol Hill and to the White House, you'll hear it too. Yet this is a very understudied question. I haven't seen any paper showing what the empirical relationship between these is, or what the theoretical relationship is. This is all based on assumptions and, quite frankly, sometimes on emotions.

So let me try to land this in five final, more specific thoughts on regulation. First, regulation is better built from existing frameworks than from scratch. It's important to assess existing frameworks — say, consumer protection — and build from there by revising, rather than trying to write a completely new legal instrument about algorithms. Second, most regulation makes more sense if it's sector-specific. Having just an "AI act" is probably not going to be very useful; you need to work through the sectors. It's better to look at healthcare, at consumer finance, at mobility and transportation — though some common rules, of course, might be beneficial. Third, it's very important to acknowledge that there are many questions we don't know the answer to, so this might not be the best time to make hard commitments to certain types of regulation. The use of temporary frameworks — so-called sandboxes for regulatory experimentation, or preemption periods instead of outright bans of a technology — makes sense; this is relevant, for instance, for face recognition. Some states in the US are being criticized for punting, for kicking the can down the road. That might not be such a bad idea today: the state of New York is doing it, and the state of Vermont actually has a pretty good framework for studying the issues before imposing regulation. That is not necessarily a bad idea. Fourth, pre-market testing makes sense. We do it for drugs, with clinical trials; we do it for cars. Why shouldn't we pre-market test algorithms used for meaningful decisions — decisions that are life-critical, or where legal consequences or public resources are involved? For important decisions, why not establish pre-market testing just as we do clinical trials for drugs? Fifth — a bit of a critique, and I'm going to make some people upset — on the European model's excessive reliance on individual rights: I think it's not only about individual rights but also about empowering the individual through technology, and about establishing restrictions on the behavior of corporations. Accountability is not just a challenge; it's actually a policy tool, and well-defined accountability helps
a lot toward addressing many of these challenges. Defining accountability is actually a cross-sectoral activity.

And then, the last two slides. Beware of regulatory fragmentation. What does this mean? If you look at Europe — this may be controversial, but I think they're doing something remarkable: they're going through a very cohesive, consistent process of establishing regulation, first on privacy and now moving into actual algorithmic decision-making; they just made announcements about that today. If you look at China, China has a single, very clear national policy. The priorities might not be the same as in Europe or the US, but they do have a consistent policy.

What about the US? The US does not have a well-established policy — certainly not legislation. There are some drafts in Congress, on Capitol Hill, but it's the states that are taking steps toward establishing regulation. So regulation in the US may emerge quite fragmented: California is running away with regulation, and other states are moving in different directions. What will the map of states look like? Like that. So, is this a problem for Facebook? Is it a problem for Google? I don't think so: they lawyer up; they have enough resources to navigate this complexity. But what about startups? What about students from MIT, or Cornell, or Stanford, who are trying to start something and would have to deal with, at the extreme, 50 kinds of legislation on privacy and fairness and explainability? This is not the right approach, and I'm concerned that the US is moving in this direction.

And my final slide. I think we have a huge problem with trust, and trust is probably the most important problem in defining AI policy. I'm going to show you two dimensions of lack of trust. One, here on the vertical axis, is trust in technology companies. I think that's pretty low today — and it wasn't like that just a few years ago. Google was an admired company; people wanted to work at Facebook; Amazon was cool. Today these companies are widely vilified, and there's certainly not a lot of trust toward them. On the other axis, the horizontal axis, I have geopolitical trust — trust between leading countries. How is the relationship, how is the trust, between the two leading countries, the US and China? It's pretty low. So if this prevails, we'll end up defining policy in this place — we can call it the corner of fear. This is where emotions of mistrust and fear dominate, so we'll have policies dominated by the imposition of restrictions: on the use of technology, restricting the companies, but also restricting the flow of knowledge and the cooperation between nations. You see that a lot already, and if you ask me, this is where we are converging, very rapidly.

I'm not saying we should be up here — that's probably very naive; there are reasons to have trust issues on both axes. Tech companies have given us many reasons not to trust them completely, and the geopolitical dimension is real too. But we should probably be somewhere around here, and right now we are here. So to me, a key challenge of AI policy is how we build the frameworks, the institutions, and the collective decision-making mechanisms that move us from here to here. Thank you very much.

OK. Can people in the back see us well, seated? Good. What I'd like to do is just chat a little bit for five minutes and then turn to this ferocious audience, who I'm sure have questions they wish to pose. It's also, frankly, the most diverse audience I've seen at MIT, with a mixture of CSAIL geeks, Center for International Studies policy wonks, and more than one or two nerds — so this is going to be a great group. And Sloan School students as well. (Yep — those are the ones who are better dressed.)

So, Luis, I want to push you a little bit on specifics. You said on the one hand that we need to avoid fragmentation, but you also took note of the need for sector-specific approaches, noting that the problems will be somewhat varied as you go from medical, to consumer protection for consumer goods, to finance. Let me ask if there's a tension between those two, in that sector specificity may require regulations that are quite different as you move from one area to another, while fragmentation — which you defined largely with reference to geography — could also mean a degree of incoherence across sectors. So the question I'd like to pose is really about both. First, on sector specificity: could you give us the sector that you think poses the most acute, most difficult trade-offs between utilization of the data and methods on one side and protection of privacy and equity on the other? My second question will be on fragmentation and its effects.

I'm going to limit my answer to the US, because presented internationally this is a broader question. In the US, some of the key sectors in play for rapid AI influence — healthcare, finance —

already have federal regulation, and consumer protection is federal, so this is certainly doable. By the way, it's interesting to note how federal legislation on finance emerged — it wasn't originally that way. Given its federalist nature, the US in the 19th and early 20th centuries had very fragmented financial regulation, and that didn't go very well: the US didn't have a central bank. Eventually the Federal Reserve emerged, and in the first part of the 20th century the federal Congress at some point preempted the states from doing more financial regulation; in the end, the regulation we have in finance is federal. I think the same can happen with the platforms, and you have that for consumer protection more broadly. If you think about the FDA, the F is for federal — you already have the institutions. If you think about antitrust policy, you have the FTC; the F is for federal. So, perhaps a little in the opposite direction from where your question was going, the sectoral approach favors national, wider approaches, because the institutions are already there. What I'm more concerned about is states coming up with generic AI regulation. We're seeing that in privacy: California already has its own CCPA, which is inspired by — but not the same as — Europe's GDPR, and that cuts across every sector. And we're seeing draft legislation on algorithmic decision-making — on the use of algorithms to make decisions or predictions across the whole economy. That's the kind of fragmentation, a layer that comes from below, that can be very problematic and very hard for smaller companies and innovators to navigate.

I appreciate the point about small companies having difficulty finding the staff to understand 50 regulations that might bear on privacy, consent, and data utilization — understood. But one of the defenses of a federal approach — of a fragmented, federal approach — is that in an area characterized by significant uncertainty, complexity, and controversy, there can be benefits to experimentation: having different models pursued in different places, to learn and see which works best or worst.

I think that has historically been the case, and you see it in transportation: the regulation of cars, the safety standards for cars, were for many years local, not federal, and states' other regulations, including speed limits, are obviously local. So yes, there's a learning process. But I'm a little concerned about two things. First, the US is not operating in isolation. A lot is happening in the world: you see Europe pushing toward regulation that has extraterritorial consequences and is very influential, and you have China moving ahead with a different framework. So yes, I understand the value of experimentation and learning by diversity, but I'm not sure that at this particular historical moment the US has the luxury of time to experiment in that way. I think that kind of fragmentation can be costly.

So within a US context, you could have a federal government preempting local or state experimentation; internationally, that becomes much more problematic.
To what extent do you believe that experimentation — national differences, which some would argue reflect legitimate differences in culture and values, with different countries weighing privacy versus utility in different ways — means we're likely to face a world of increasing diversity in the policies governing AI? And if so, what are the implications of that diversity?

I think diversity is inevitable, and to some degree very much desirable. It reflects values and culture and history, and that is important. What we should be more concerned about is incompatibility — adversarial models. I think that's the big international question. I was talking the other day with Yasheng Huang, who happens to be my office neighbor at Sloan, and he was telling me that the concept of privacy didn't exist in China as such — the word "privacy" didn't exist in China 20 years ago. It's something that was brought in by this worldwide debate and by the emergence of this technology. So yes, it's natural that beyond the political differences — which are a conversation of their own — culture will play a role.

But I think what is more concerning is to have frameworks that are clearly incompatible — where we have conflicting values in countries that pull apart, and we become a world of technological silos, or a world of technological conflict. Here there's a lot of room for diplomacy, and I'm very happy we're doing this as part of CIS for exactly that reason. Regulation, again with a sectoral approach, should be part of a conversation in a global platform like the UN, in bilateral negotiations like the US-China talks, or even in like-minded frameworks like the OECD.

I'll ask a couple more questions, noting that we'll be turning to the floor in just a couple of minutes. Looking very broadly at development — North-South, poor countries, rich countries — and at a technology that is diffusing very rapidly, with significant effects on the organization of economic, political, and medical activity: to what extent — and this is a fairly broad question, I want to preface it as such — do you think AI will have the effect of providing opportunities for equalization, limiting inequalities, and more rapidly diffusing useful information? Or to what extent are we really talking about technologies that will be owned, captured, and controlled by the relatively wealthy and by the largest countries and companies, contributing to a further concentration of economic and political power? Net effects — or is this too broad a question to be worth answering?

It's definitely an extremely broad question, but it's not an unfair one. I can see both effects playing out, and policy — countries acting individually and collectively, through diplomatic means — should focus on both possibilities. I see tremendous opportunities for equalization in services and the delivery of public goods. The question of AI in healthcare is very different in Boston, Massachusetts than in Bolivia or Ghana. Here it's a question of quality, maybe of cost — who pays for it, how we ensure the diagnostics are good. In other countries it's a question of access: the issue is not whether this is better than the existing doctor — there are no doctors, no specialists, in many places around the world. So technology can bring healthcare to places where it does not exist. The same happens in education, and it can have a profound effect on agricultural productivity and energy efficiency. So let's not forget that AI — or the type of AI that is blooming right now — has a lot of potential for good things, and those things should be encouraged and enabled.

But we are also seeing already, as in previous waves of technological innovation, a trend toward job displacement and the concentration of wealth toward capital and the owners of capital. I'm not trying to be Marxist here, but it's not a new phenomenon: we saw it in the nineteenth century, we saw it in the early twentieth century, and we're seeing it again now. The key question is how societies are going to respond. In the twentieth century, only after two very costly wars and a huge recession were institutions changed, and arrangements emerged that balanced across people and across groups. Right now it seems we're moving in the opposite direction, so I think it's a very real policy question.

Let me ask one last question, noting that Michelle should probably get ready for harvesting questions from the floor. To move from that very broad question to a very specific one: take a cell phone — which had better not go off at this moment — and look at AI, data, and, let's call it, multiplying the effectiveness of medicine. Some folks are working on a variety of very interesting cell phone applications using data that would take underserved populations — be they in southern Texas or in the developing world — and provide the kind of concierge medical advice people get at MIT Medical but don't necessarily get elsewhere. The AI is being used in conjunction with personal measurements and records to offer advice and to check whether people are adhering to treatment, while also gathering information that feeds back in. That would be a beneficial use of AI, serving populations that are underserved. On the other hand, those very same applications are gathering data that is potentially being sold to all kinds of folks — researchers doing good work, or pharmaceutical companies seeking advertising, maybe not so good — and the issues of privacy and consent we were talking about bear on both the beneficial and the adverse uses of that information. Then you turn to governments and surveillance: that very same data in the hands of governments could be used, again, for good or evil. It could be used for good — look at the Wuhan situation now, where the Chinese government is using prescription orders and medical records to try to track people in order to contain the epidemic — but it could obviously also be used for political control in ways that would be adverse. I chose this one example of a $175 cell phone and its associated information because embodied in that one example is so much of what you were talking about, and my head spins because I don't even know how to answer the question with reference to that one example — and we're talking about a technology with implications that go far beyond it. The question, very simply put, is not so much to answer the cell phone and medical data question, but: how could MIT people engage more effectively with the very difficult values issues that are raised? How could we work to improve the terms of the trade-offs? What duties and responsibilities do we as technologists have to address these issues?

I think those are like six questions — that's unfair! Let me start with the phone, and with using the phone as a delivery technology for health diagnostics
and treatment. First of all, it's a phenomenal opportunity, and in many places around the world, including my home country, Mexico, it's extremely appealing. You're already seeing successful cases — say, the detection of retinopathy, the eye disease associated with diabetes, which is now being detected through the cell phone: a picture taken with a

cell phone. You see that already, and those patients are then referred to the appropriate care, which creates a much better care opportunity for people. So that is there. But going back to the US context — and this applies whenever we talk about medicine — we should think of these tools in the same way we think of any other medical technology or drug: they don't go unregulated. There's a reason there are prescription drugs and over-the-counter drugs, and algorithmic tools for the delivery of medicine should be regulated in the same way. There are some things a patient should not be deciding just because the phone said so — "there's a very powerful algorithm that everybody says works, and because of that I'm going to take this treatment." No: you need to go to the doctor. That kind of algorithm, that kind of promise, should be regulated just as prescription drugs are.

On the other question, about information: I think we're going to see more and more privacy-enabling technology coexisting with this type of delivery system. A lot of the opportunities, particularly on mobile phones — a lot of the distributed-learning processes — help protect the data of the patients or individuals that lives on your phone. And again, if you are the FDA, it might be a good guideline to approve for public use only those technologies that have these privacy protections — tools like distributed learning rather than centralized learning. The difference is that in distributed learning, the training data from your phone never goes to a central server, or set of servers, where it could be exploited; it remains on your phone.
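As a rough sketch of that distributed-learning idea — a minimal federated-averaging loop with invented data, much simplified relative to any real on-device framework — each simulated phone computes a model update on its own records and ships only the weights, never the data, to the server:

```python
import numpy as np

rng = np.random.default_rng(1)

# Each "phone" holds private data that never leaves the device.
# Toy task: learn y = w . x from data scattered across 10 phones.
true_w = np.array([2.0, -1.0])
phones = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    phones.append((X, y))

def local_update(w, X, y, lr=0.1, steps=5):
    """Gradient steps computed on-device; only the updated weights
    (never X or y) are returned to the server."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Federated averaging: the server sees weights, not raw records.
w_global = np.zeros(2)
for _ in range(20):
    local_weights = [local_update(w_global, X, y) for X, y in phones]
    w_global = np.mean(local_weights, axis=0)

print("learned:", np.round(w_global, 3), " true:", true_w)
```

The server ends up with roughly the model it would have learned centrally, which is why "weights leave the phone, records do not" is the kind of property a regulator could plausibly check for.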

I think the larger question, though, is not about the technology; it's not even about the companies. It's about government and democratic institutions, and that's where I think we should be most concerned, because this technology creates risks of eroding democracy — through excessive surveillance and manipulation — and of enabling dictatorship. Unfortunately we're seeing that around the world, and we're seeing some of this technology being exported: not just algorithms, but social control.

Luis, thank you. We'll turn to the floor for questions. I'll cover this side; Michelle will cover the other side. If you could hold up your hands, we'll try to get to you. All right — could you stand, and please identify yourself?

My name is Bill Weinstein; I'm an MIT alum. You talked about the need for policymakers and technologists to develop an understanding in order to develop policy, and you also pointed out that one would like to have a democratic consensus about how this moves. So now you've got everybody out there who is not well versed in any of the technologies and who, on top of that, is burdened by a plethora of cognitive biases that completely distort their ability to understand the meaning of what's going on. How do you reckon with that?

Well, I firmly believe in the democratic control of technology; I don't think we should live in a world where technology goes rogue. But I don't believe in the opposite either, which would be technology control through referendum. You see an example — and I'm going to be a little critical of California here — in the origin of the CCPA. This is a very complex piece of legislation that didn't go through the standard congressional process of hearings, drafting, and consultation of constituencies. A guy with lots of merit drafted a bill and gathered signatures, and suddenly the California Senate realized it was going to pass, probably with 85% of the vote according to the polls — so they immediately grabbed it and adopted it in a day. That's not necessarily what we should aim for. I think the key question for policymaking is how you create democratic institutions from which the appropriate policies can emerge. I don't think this should go unchecked, and I don't think that just because this is complex, the people who don't know linear algebra should be out of the conversation. But you cannot do it by referendum, particularly in the context of the polarization and disinformation we live in, which is very much fueled — enabled — by this very technology. This has to go through the workings of representative democracy, which has delivered one of the most successful political models in history: the existing democracies of North America and Europe.

OK, next question — and again, we ask you to give your name, your social security number, birth date, your first pet, and your high school mascot.

I was trying to link some of the themes you're talking about. It seems to me that your point about the differential impacts of a patchwork of policy on big tech versus startups is really important, and that policymaking needs to take into account the knock-on effects of locking in too early or locking in at the wrong scale — so that you have a policy that differs between California and Oregon, or that prematurely precludes or enables certain choices. So what are the new policy tools that factor in these increasing returns to scale and this path dependence, using business models as well, looking at how this plays out vis-à-vis business

So what are the new policy tools that factor in these increasing returns to scale, the path dependence, the business models, looking at how this plays out vis-à-vis business gaining power or changing its role in society, given different policy options? The traditional policy methods might benefit from simulation modeling, a kind of what-if: here's what happens if we do this quickly, here's what happens if we wait and see, here's what happens if we do this temporarily. It seems like a meta layer, another layer of complexity, in thinking about policy formulation. I did a little study a few years ago with an MIT student where we modeled malaria policy, and the naive solution, which is to invest in both prevention and treatment, was the worst policy. It was better to go all in on one or the other rather than some kind of middle ground, and it made me realize how complex policymaking is when you look at it this way (a toy version of this kind of what-if simulation is sketched after this exchange).

That's a great question, Ontario, thank you for being here. Well, I certainly don't think that fragmentation is a good idea, but I also mentioned in the presentation that I strongly believe in temporal experimentation and temporal regulation as a way to learn. I mentioned the state of Vermont, I mentioned the state of New York. Last semester we had one of the members of the legislature in New York who drafted that; they're going through a study process. So I think that, first of all, there's got to be awareness among legislators and policymakers, and there's got to be an understanding of what this is. To me the greatest concern is jumping in too early and locking in, establishing path dependence, as you described very well, much better than I actually did. But I think that's the reason for tools like preemption. I understand why people are very concerned about face recognition, particularly its use by police and government, but I don't think banning it forever is the optimal solution; perhaps a moratorium is much better, until we understand it better and the technology evolves and is mature enough, and you go through an FDA type of process. There are regulatory sandboxes, with pros and cons. In Mexico I led the FinTech law, and we went through the process of calibrating a regulatory sandbox. It is not easy, but it allows you to learn. There's a lot of learning to be had before committing to something hard. I think the worst-case scenario is where you commit too early and have fragmented commitments, and unfortunately that is not a scenario that can be discarded right now.
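As a toy illustration of the what-if simulation the questioner describes, here is a deliberately crude Python model of splitting a fixed budget between prevention and treatment. The dynamics and every number in it are invented; it is not the malaria model referenced above. The squared-spending terms encode increasing returns to scale in each intervention, one mechanism that can make a split budget the worst option.

```python
def simulate(prevention_share, steps=120):
    """Return total disease burden under a given budget split.
    prevention_share is the fraction of the budget spent on prevention;
    the remainder goes to treatment."""
    prevent, treat = prevention_share, 1.0 - prevention_share
    beta = 0.30 * (1 - 0.8 * prevent ** 2)  # transmission, cut by prevention
    gamma = 0.05 + 0.25 * treat ** 2        # recovery, raised by treatment
    s, i, burden = 0.99, 0.01, 0.0          # susceptible, infected shares
    for _ in range(steps):
        new_infections = beta * s * i
        s -= new_infections
        i += new_infections - gamma * i
        burden += i                          # accumulate infected person-steps
    return burden

# Compare corner allocations against middle-ground splits.
for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"prevention share {share:.2f} -> burden {simulate(share):6.2f}")
```

Under these assumptions the intermediate splits produce the largest epidemics, echoing the questioner's anecdote; change the exponents to diminishing returns and the middle ground wins instead, which is exactly the kind of sensitivity such what-if modeling is meant to expose.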
Hi, my name is Daniela. I'm a freshman here, part of CSAIL. Thank you so much for your talk. My question is: when it comes to mitigating bias and manipulation through AI, do you think there are systemic institutional issues we need to solve in governments first, such as inequality, or corruption from corporations, before we can entrust these governments with creating responsible regulation for AI?

Thank you, and thank you for being here and for the question. You mentioned two topics that are quite important, but analytically they're not exactly the same: manipulation and bias. Both are really, really bad, things we should be concerned about, and any country should have policies about them. I think the problem of bias in algorithms has been a little bit of a discovery; it was not obvious in the beginning that this was going to be the case. Probably, if econometricians had been consulted about the problems in dealing with datasets, and the bias introduced by the datasets, that problem would have been identified earlier. It's the same problem you deal with in identification in econometrics; a lot of it looks the same. But I think the discipline was not prepared for that, and you find some truly horror stories on that. I think the understanding of the problem is much better now. There's no one single fix for this, there's no magic bullet. I think that having more diverse teams, both in the policymaking and in the algorithm design and training, actually helps, and this is not just soft policy, because it raises awareness; but that is not enough. I think the quality of the data and the representativeness of the data is key. Relying on fixes based on constrained optimization is always going to be problematic, because you lose accuracy, you lose power.
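A hypothetical sketch of the constrained-optimization fix being discussed: a logistic regression trained with a penalty that narrows the gap between two groups' average predicted scores (a demographic-parity-style constraint). The data, the penalty form, and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)             # sensitive attribute
x = rng.normal(size=n) + 1.0 * group      # feature correlated with group
y = (x + rng.normal(scale=0.5, size=n) > 0.5).astype(float)
X = np.column_stack([x, np.ones(n)])      # feature + intercept

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(lam, lr=0.5, epochs=500):
    """Logistic regression; lam weights a squared group-gap penalty."""
    w = np.zeros(2)
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n          # logistic-loss gradient
        gap = p[group == 1].mean() - p[group == 0].mean()
        dp = p * (1 - p)                  # derivative of sigmoid
        g1 = (X[group == 1] * dp[group == 1, None]).mean(axis=0)
        g0 = (X[group == 0] * dp[group == 0, None]).mean(axis=0)
        grad += lam * 2 * gap * (g1 - g0) # gradient of lam * gap**2
        w -= lr * grad
    return w

for lam in (0.0, 5.0):
    w = train(lam)
    p = sigmoid(X @ w)
    acc = ((p > 0.5) == y).mean()
    gap = abs(p[group == 1].mean() - p[group == 0].mean())
    print(f"penalty {lam:>4.1f}: accuracy {acc:.3f}, group gap {gap:.3f}")
```

Raising the penalty weight typically shrinks the group gap while lowering accuracy, which is the trade-off described above: the constraint fights the data rather than fixing it.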

So the true fix is actually in the data, and the true tension, the trade-off, is with privacy. The trade-off between privacy and bias is real. And something we don't talk about enough: is bias intentional? Are there companies or governments intentionally creating bias? Maybe, but I don't think that's the general rule. Manipulation is a very different thing. Manipulation, almost by definition, is intentional. A lot of people have discovered that these tools allow a computer system to know a person even better than the person knows herself, and that is a problem, because when you can spread very targeted information and abuse cognitive biases, you have an opportunity to truly manipulate markets and the political system. To me, that's the key. And, in order to be constructive rather than just gloomy here: I think technologists have an opportunity in developing tools toward empowering the individual, in detecting manipulation and manipulative intent, and in raising awareness when a technology is trying to exploit a particular cognitive bias. We don't see that enough. There are some examples from MIT, both at CSAIL and the Media Lab, but we need a lot more of that. And at the end of the day, we need democratic control of government, which is the very essence of democracy.

Okay, let me try to work my way across here, and also, you know, give the make and model of your first car.

Hi, I'm Giovanni. Do you think Facebook should be broken up?

I don't... I don't... I don't know that anybody could just say yes or no. I think it's...
