Diane Coyle, Andrea Renda, & Charlotte Stix – Socioeconomics of Disruptive Tech #003


Koen Smeets: Welcome back to the interview series on the socioeconomic consequences of disruptive technologies by Rethinking Economics NL. Today we will be focusing on the sociopolitical and economic environment of Europe, and especially on how this environment influences how these technologies are shaped and regulated. For that, three world-class experts have been so kind as to make time for us today.

Firstly with us today is Diane Coyle. She is Bennett Professor of Public Policy at the University of Cambridge, where she co-directs the Bennett Institute for Public Policy, and was previously Professor of Economics at the University of Manchester. She specialises in the economics of new technologies and globalisation, particularly the measurement of the digital economy and competition in digital markets. Previously she was the Vice Chair of the BBC Trust, a member of the

Competition Commission, the Migration Advisory Committee and the Natural Capital Committee. She is the author of a number of books on economics, including "GDP: A Brief but Affectionate History" and "The Weightless World: Strategies for Managing the Digital Economy". Secondly with us today is Andrea Renda. He is an Italian social scientist whose research lies at the crossroads between economics, law, technology and public policy. He is Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy at the Centre for European Policy Studies. From September 2017,

he has held the Chair for Digital Innovation at the College of Europe in Bruges, Belgium. He is also a non-resident fellow at Duke University's Kenan Institute for Ethics, a member of the High-Level Expert Group on AI, and a member of the International Advisory Board of the European Parliament's Science and Technology Options Assessment. Lastly with us today is Charlotte Stix. She is a technology policy expert specializing in AI governance. Her PhD research at the Eindhoven University of Technology critically examines ethical, governance and regulatory considerations around artificial intelligence. She is also a Fellow of the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and an Expert to the World Economic Forum's Global Future Council on Neurotechnologies.

Most recently, Charlotte was the Coordinator of the European Commission's High-Level Expert Group on AI. She was also named to Forbes' 2020 30 Under 30 Europe list and a Young Global Shaper by the World Economic Forum, and in her spare time she runs the bi-monthly EuropeanAI newsletter with >1900 subscribers. And with that, I would like to move to the first question, for Professor Coyle. Could you tell us more about the digital economy, especially its main divergences, from an economic perspective, from the "traditional", non-digital economy?

Diane Coyle: The first thing to say about digital technology is that it's what economists call a "general purpose technology", which means it has wide uses throughout the economy and affects many different activities and sectors. And of course we've experienced that particularly since 2007,

with the arrival of 3G and smartphones. And now, particularly this year with the pandemic, we understand how much digital has transformed our daily lives; we use it all the time. In terms of its economic characteristics, there are some differences. One is that, like some previous technologies, there are very high upfront costs and then very low marginal costs of using lots of digital technologies, and so there are increasing returns to scale. This has been true of other industries in the past, but it is one of the important economic characteristics. Another is that there are what economists call network effects, which means that the more people use the technology, the more you benefit from it yourself. This was true of telephony, but it's also true of lots of digital applications. And together

these create quite a unique kind of market dynamic: winner-takes-all dynamics. You've got to cover the upfront costs, so companies that enter the market are going to make losses for a long period, and they need that funding, they need a large market, to help them cover their initial costs. And at some point they reach a critical mass, when the market

tips towards them, and so in many digital markets you find one or two dominant players. And this is something that has obviously come to the fore recently, with lots of countries, and the European Union just this week, publishing new regulatory and competition approaches to digital markets, because I think we have steadily realized their importance in our lives and the fact that we are very dependent on these markets. And I think there was a great example of this dependence, and therefore vulnerability, with some of the hacking that occurred just recently of US government websites. And this has nothing to do with the underlying technology, it wasn't to do

with internet security, it was to do with economic concentration. So on the same day Google went down and the US government got hacked, and the vulnerability was about the market concentration rather than anything to do with the technology. And so we're at a point now where a lot of governments and authorities are starting to think very hard about what the implications of these technologies have been. On the one hand, like any big technological advance, you've got the potential for improvements in productivity and in people's quality of life. And on the other hand, because of the special characteristics, quite distinctive vulnerabilities in our economies and societies as well.

Koen Smeets: That's very, very fascinating,

I'm sure we'll move on to the regulation aspect of this. But before we do that, I was wondering if you could tell us: technology is often framed as something inevitable that just happens to us. But I've heard you argue that it is actually shaped by its sociopolitical environment, and that we can therefore influence how it develops. Could you expand on that?

Diane Coyle: Sure, let me give some examples. I talked about the need for digital companies to raise enough funding to cover their losses initially. This is one of the reasons why the giant digital companies are US or Chinese: they've got very large domestic markets that they can expand into, but they've also had the funding to cover those losses because of the structure of their financial sectors. And that's been harder in Europe, so Europe doesn't have the same dominance in any of these markets. Another example

is about data. There's a lot of debate now, particularly about personal data, and about understanding the implications of big digital companies collecting a lot of data about individuals, their activities and their transactions, and using that to sell new services or to target advertising. And that's possible because there's been a presumption that data is something that can and should be owned, and that it's owned by the companies that are collecting and structuring it.

And that's a presumption that has sort of come about by chance really, because of the structure of legal ownership, but it's something that could be legislated. We could legislate for an entirely different data economy that was not about ownership and transactions in the market, but about terms of access and who has permission to access and use different types of data, which is much more linked to Elinor Ostrom's view about how you would organize the collective use and distribution of resources. If you think back to previous episodes of major technological change, they've had very different outcomes. There's a lot of debate now in economics

and policy communities about whether automation will accelerate and whether it will lead to a lot of job losses and inequality. If you look back at previous waves of technological change and compare and contrast them, the Industrial Revolution and the 1950s and 60s both saw a lot of automation, a lot of change in the labour market and in the kinds of jobs that people did, but the 50s and 60s saw increasing employment and reducing income inequality. And that was shaped by the social and economic context of the time, and by the fact that it was imperative, after the sacrifices populations had made during the Second World War, to ensure that those technologies delivered wide benefits for society.
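The winner-takes-all tipping dynamic Coyle described earlier, where identical platforms diverge because the larger one becomes more attractive to each new user, can be sketched in a toy simulation. This is purely illustrative and not from the interview; the two-platform setup, the `network_benefit` weight and the taste-noise scale are all invented parameters:

```python
import random

def simulate(n_users=5000, network_benefit=2.0, noise=1.0, seed=1):
    """Sequentially arriving users choose between two platforms of equal
    intrinsic quality. A user's utility from a platform is its network
    benefit (proportional to current user share) plus an idiosyncratic
    taste shock. Returns platform 0's final market share."""
    random.seed(seed)
    users = [1, 1]  # seed each platform with one user
    for _ in range(n_users):
        total = users[0] + users[1]
        # Each arriving user compares utilities and joins the better platform.
        u0 = network_benefit * users[0] / total + random.uniform(0, noise)
        u1 = network_benefit * users[1] / total + random.uniform(0, noise)
        users[0 if u0 > u1 else 1] += 1
    return users[0] / (users[0] + users[1])

share0 = simulate()
winner_share = max(share0, 1 - share0)
print(f"winner's final market share: {winner_share:.2f}")
```

Because the platforms are identical in quality, which one wins is decided by small random early leads that the network effect then compounds until the market tips, echoing the point that surviving the early loss-making phase, rather than product superiority, can determine the dominant player.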

Koen Smeets: So, seeing the importance of such a sociopolitical environment, Ms Stix, could you tell us more about what this environment looks like within the European Union?

Charlotte Stix: Sure. I think for the European Union it's really important to mention that recently, in the Digital Markets Act, the European Commission has actually called for fines of up to 10 per cent of annual global turnover for online gatekeepers. So those are the really big tech giants, you know, the ones that do have a lot of the data, that do manage a lot of the access and what can be done with our data. I think what's also important to note, in terms of future and emerging technologies, is that Europe has had quite a leading position with the GDPR in terms of how we think about these things, and how internationally different actors look towards Europe and sort of build on the work that Europe has done in this field. You know, it is lacking sometimes in,

you know, industrial terms, how to say that, in startups and SMEs; that is a criticism that is valid and that is being tackled. But Europe has really had a leadership position in terms of ensuring that technology is used and deployed for citizens and for the individual, and in empowering and enabling those individuals to really look at what happens with their data and with the technologies they use, and to ensure that they are protected. I think it is also important to mention that you often have people saying that Europe is regulating itself into irrelevance, which is a valid criticism depending on where you come from. But with the recent white paper on AI, you see that Europe has really tried to set the bar very high again for consumer protection, for ensuring that individuals have access to so-called trustworthy, human-centric artificial intelligence, which is a key element in the future and hopefully puts Europe in a leadership position.

Koen Smeets: And could you expand more on the environment, and how they're tackling the weaknesses, through CLAIRE and ELLIS?

Charlotte Stix: For sure. CLAIRE is one of the sort of thriving ecosystems, networks, academic networks in Europe, and they have done a lot of work, also proposing their lighthouse centre on artificial intelligence, which would be a sort of CERN for AI. Part of the idea is that Europe has great research institutions, that's a fact. Unfortunately our retention rate isn't as good as it could be, so having a really ambitious project, a sort of moonshot project, would potentially attract not only researchers that have been trained in Europe and are European, but also those from other countries, and it would also enable and empower industry players to potentially harness European researchers from that center.
But you also have a couple of other networks  and tools. So you have the public-private  

partnerships, you have the Coordinated Plan on AI, which operates sort of at a member state level, where various countries collaborate to counteract fragmentation in Europe. I mean, we have to remember that the European Union isn't one country, so the idea is to ensure that everything is really pulled together and that there are strategies across countries' borders. You have the Digital Innovation Hub networks, where again there is a really big effort and a push to ensure that industry is working with governments, with academics and researchers, to think about how to increase access to this technology, to increase funding, but also to enable the public and other actors to really engage with these technologies in the future. And that's something that doesn't quite exist in this form in other comparable countries.

Koen Smeets: And what I was quite curious about is that Europe is often portrayed as far less of a world player in AI than the United States or China. But I do think that, if I for instance look at what you've written, it does show that Europe is actually quite a player. Could you expand on Europe's position in AI, and how do you expect this to develop?

Charlotte Stix: Sure. As I said before, I think it really depends on your angle, and on how you weigh what you think is important. So if you weigh the number of startups and the funding for startups as really important and key to AI progress, then that might give a completely different result than if you weigh, as I've said before, efforts towards AI regulation, standardisation, certification, and consumer protection.

Now, one of those things is quite sexy; the other one might not be as sexy. That doesn't mean that it isn't really important, doesn't mean that it isn't vital, and doesn't mean that it doesn't give other countries and players a sort of direction that is being followed internationally, when you look, for example, at what the United States is currently looking at. So it might not be the most ideal, in the shape and form that the regulation, or regulatory framework, is in right now, but it is still a guiding structure for other countries on what they actually want to do or not to do. And I think that, in and of itself, is a leadership position. It might not be the leadership position, but that really does depend on how you weigh what is important to you. Now, you could also say that the European Union is putting in a lot of effort on ensuring an ecosystem of excellence, which looks at technical capabilities, which looks at funding for artificial intelligence, because of course if you regulate, you need an equivalent technological pool to regulate, and it's important to enable that pool and to really ensure that the industry is thriving as well. So I think Europe has a really good chance here to have a leadership position in ethical, in trustworthy artificial intelligence, which I'm sure Andrea will expand on, and it is important not to undervalue and underplay that just because it is maybe not as sexy or cool, and I say this on purpose, as having really hotshot startups and really big industry players.

Koen Smeets: I think that's very, very interesting, and with that I actually also want to move toward Professor Renda. I was really curious: how exactly are we regulating these technologies, and how do you see this?

Andrea Renda: Well, digital technologies, of course, cover a big spectrum of solutions, incorporating hardware, software and various other solutions, applied in different sectors. So there's no single regulatory framework in Europe, obviously, that encompasses all of them. And indeed, for a long time the regulatory framework has been very light-handed in Europe,

just as it has been in many other parts of the world. We have to remember that, when the internet started permeating our lives, in the early 90s in particular, concrete, intentional choices were made not to regulate the internet, and to let the baby grow up, let's say, maybe spoil the baby a little bit with the lack of regulation, in the belief that this would lead to so-called "Permissionless Innovation" and a very ecumenical development of the internet, where there would be some value for each and every player, each and every user, from this fantastic ecosystem, nurtured by the network effects and the economic phenomena that Diane was mentioning before: network externalities, potential sequences of one-generation monopolists. So high market contestability; a very turbulent, but also very dynamic and creative environment. Now obviously, we've seen that things have not exactly gone in that direction. After a first, if you wish, very creative season, in which being a one-generation monopolist did not necessarily guarantee remaining the one that rides the wave in the subsequent generation, a sort of early big bang, if you wish, a dynamic environment, the situation is very

much crystallized, along centripetal forces that have generated four, five, six large players that accumulate and capture most of the value being generated on the internet. And this is something that happens in unregulated environments in most cases, because the overabundance of information on the internet, and the modularity of the products that are largely consumed and distributed over it, have generated, to go back to the early days of economics and behavioural science, what Herbert Simon would have described as a situation in which a wealth of information creates a poverty of attention, a situation in which a few players that have captured the attention of the end users can monetize that attention in various ways. So we know where this has led: in many respects, to inequality in many markets. Look at what has happened in the United States this year [2020] in terms of the economic security of jobs. From 300,000 people on

unemployment subsidies at the beginning of the pandemic, to 30 million three weeks after that. Just to give you an idea: what some economists would describe as the marvelous flexibility of the US labour market, I'd rather define as a lack of economic security, and potentially, over time, a lack of social cohesion. And that also brings me to a more general comment with respect to the questions that you have formulated so far and the answers you've gotten, which I very much agree with, of course: the fact that technology is a means, and as such it is not regulated at EU level. Regulation tends to be technology-neutral, meaning what you regulate is the products and the services that use different types of technologies over time, with some more recent exceptions that I'm about to spell out, where we already start building more technology-specific regulation. Technology is a means, so we should treat it as a means. If we think that artificial intelligence is an end, and here I come to my experience in the High-Level Expert Group on AI: on the first day the European Commission told me and the other 51 colleagues there, we want to become more competitive in AI. And let me be a little provocative here: who cares, right? I care about meeting the Sustainable Development Goals. If I

can do this with AI, let there be AI. If I can do this without AI, well, then let's see what AI adds. But if really aiming at becoming competitive in AI means having a perhaps exaggerated approach towards filling factories with robots, without creating a meaningful complementarity between artificial intelligence and human beings, and maybe it means that we will not meet our Sustainable Development Goal of achieving full and decent employment for everybody, well, that is I think a very distorted way of looking at public policy. So I'd rather have a different optic towards regulating technology: I base my regulation on principles, which are, in the case of the EU, very much nested in the treaties and in what we call European Union values, although we would need another interview about what those are in particular. We are also clear about the goals, the vision that we want to realise in the medium term, and then we deploy our technology policy in that respect. Okay.

Now, just let me wrap up on this. I think in the High-Level Expert Group we have at least partly taken that direction of trying to treat technology as a means: not sacrificing our medium-term prosperity, and sustainable development, on the altar of digital technologies. And then over time, I think, to close the loop that I opened in my answer: the largely unregulated environment that started with, I don't know, the WIPO treaty in 1996, the Telecommunications Act or the Communications Decency Act in the United States, and the Information Society Directive and the E-commerce Directive at the European Union level in the early 2000s, in 2000 and 2001. That loop is now being closed, because the European Union, maybe before the United States, has realised, maybe through competition cases first and then later through regulatory attempts, that the time was right to start rebalancing the power that has emerged through those centripetal forces I was describing before. It started to experiment first with antitrust laws, then realised over time that maybe antitrust is not enough, and then started to look into what member states have had for a long time, rules on superior bargaining power and rules on abuse of economic dependency, bringing them back into EU-level competition policy and related regulatory interventions, to build what today we call the Digital Services Act and the Digital Markets Act, and before then the Platform to Business Regulation. In all this, and sorry Koen, I know you want to ask another question so I'll stop in 30 seconds. In all this, the one-off attempt that has been a sort of game-changer,

albeit imperfect, in this process, is certainly the GDPR: an attempt to introduce, in this largely unregulated environment, with social norms and self-regulated behaviours very different from what was happening in the real world, a non-negotiable set of rules. Maybe not fully complied with, maybe not having the impact one was expecting at the very beginning, but still an assertive decision to make something non-negotiable, or rigidly protected, namely personal data. That has been a game changer in terms of turning the tide towards understanding, and this is something that is shared in the US, in the EU, and in most other parts of the world, think about Japan or other countries, that the internet has to be regulated and digital technology has to be regulated.

Koen Smeets: I think that gives a beautiful overview of many topics, including the environment and how this has been historically shaped in the European Union. Thank you for that,

and I was also very curious whether you could shortly expand on what exactly the recommendations of the High-Level Expert Group on AI have been. Could you tell us more about the Ethics Guidelines for Trustworthy AI and the Investment Recommendations for Trustworthy AI?

Andrea Renda: Yeah, so the High-Level Expert Group was basically given two mandates, two products to develop. One was the Ethics Guidelines for Trustworthy AI (well, Ethics Guidelines on Artificial Intelligence actually, in the original mandate). And that has been, I think, the most fruitful elaboration of the High-Level Expert Group, because we did two things, cutting a very long story short. A subset of the group, in particular the academics in the group, gave a precise direction to the work of the High-Level Expert Group, without replicating the dozens upon dozens of ethical principles that are already available, from bioethics to artificial intelligence, everywhere around the world, in the private sector, public sector, international organizations, governments and so on. We decided to define ethically

aligned AI in a broader way, as trustworthy AI, but in a way that would have concrete reference to the legal system. So initially, the first thing that we said is that trustworthy AI has to be legally compliant, even before we start talking about ethics. Because laws are there, and we know how fluid and difficult to grasp the digital subject matter is; it is far from established that all the digital technologies out there easily comply with all the legal rules that we have in place. So: legally compliant, from the GDPR onwards obviously; then ethically aligned, and here we defined four key principles of ethical and responsible development of AI, which go from the protection of human autonomy and agency, to the prevention of harm, fairness, and explicability. And then we added a third pillar, which is the robustness of AI. So trustworthy AI is also AI that has gone through some process, an ongoing governance, not just a one-off process, an ongoing governance that guarantees that best efforts are made to make the AI product resilient and robust against external attacks. Within the ethical

pillar we have done something that, in my opinion, is a little bit of a game changer compared to the proliferation of ethical principles out there. We have tried to convert those principles into requirements, and the requirements into concrete questions, so as to guide AI designers, developers and deployers through a process that would enable them at least to self-assess, and perhaps in the future to be assessed, as regards their alignment with good practices in trustworthy AI. Now this is the basis for the white paper on

AI that the European Commission presented in February, for what concerns the ecosystem of trust, and it will be the basis for the forthcoming regulation on AI that will be presented in the first quarter of next year [2021], although there are a number of problems there that we can elaborate on, obviously only if we have time. But still, the underlying DNA and overall texture of that regulatory intervention is in that initial input. The second product is a set of recommendations for policy, which are oriented towards data, towards the role of the public sector, towards skills, towards infrastructure. They are very broad; I think they have been diluted a little bit in the process of trying to get agreement within such a diverse group. Overall, I have mixed feelings with respect to the experience of the High-Level Expert Group.

I would single out, as a flagship, the Ethics Guidelines for Trustworthy AI: something that has at least made a small step forward in the direction of something that is a huge castle, the regulatory framework for AI that we still largely need to build.

Koen Smeets: I fully agree with you. And I was wondering, going back to Professor Coyle: what is your perspective on the regulation of disruptive technologies? Especially in the context of your earlier comments on the digital economy, and also the comments by Ms Stix and Professor Renda. Could you tell us more in particular about the difficulties for competition policy in a digital economy? What are its unique difficulties?

Diane Coyle: Well, there are several things I'd want to say about that. First of all,

traditionally competition policy hasn't had to think as much about the dynamics of markets as it does in these digital markets. And that's because, as Andrea was saying, the context now is that you're trying to make sure there's competition for the market, so new entrants can get in, even if they then become dominant for a while, in the way that Facebook overtook Myspace, or other browsers overtook Internet Explorer. So there's a need to kind of reshape competition policy to think in that way, complicated by the fact that a lot of the big companies that we might worry about operate in many different markets. In a normal market inquiry or merger inquiry, you'd define a particular market that you're looking at. That's much harder when you're thinking about one of these very large companies that has lots of different activities in a very complex ecosystem, as it's called, of people supplying to the platform, people using the platform, and the company moving into different markets with its userbase. The hard thing, though, I think, is thinking forward.

There's a lot of debate now about whether Facebook should ever have been allowed to buy Instagram; at the time nobody was at all concerned about it, because Instagram was very small. Was there anything competition authorities could have looked at at the time to give them a clue? Well, one possibility is that you would look at the price that Facebook was willing to pay for Instagram. I can't remember exactly what it was, but it was a very large sum of money for a tiny company, and that was a clue. And the other is that you need to look much more closely at the board documents and the strategy documents that competition authorities have access to. So it changes the way that you think about competition policy and apply it. But then the other thing is this whole question about regulation that Charlotte and Andrea have been talking about, and I'd like to build on that. We've had this debate for a long time, framed by

business actually, that says regulation is bad, it clogs up the economy, it stops companies being as productive as they might be. Sure, there has to be some, but it's generally a bad thing and we want to avoid it. And there was a regulatory free-for-all, really, for digital companies right back in the 1990s. But you can also think about regulations and standards, and an example might be setting the voltage for electricity, which was about safety but also about setting a standard which created a level playing field, gave all the businesses in the market a clear set of standards that they could all work to, and grew the market. Another example

would be GSM in mobile technology, a standard set by Europe which you could also see as a burden on companies that were not originally using that technology. So regulation sets standards, shapes level playing fields in markets, and makes it possible for them to grow. And there's also now a lot of discussion about what kinds of regulation we need in these digital domains, which will range from mandatory codes of conduct, to particular technological standards and interoperability of data, and so on. And I would point out that we wouldn't be having this debate about standards and ethics if companies had been behaving differently, and so part of what's happening now is a response to those behaviours. I don't think it's all about ethics, though. Ethics are really important, but

being an economist, I would say incentives are important, too. It's not that a lot of engineers and data scientists are bad people, they're not evil people, but they're operating in a system with very powerful incentives that shapes what they do. So public service procurement is an area I think is going to be interesting to think about. When governments are buying technologies in public-private partnerships, or just to deliver public services, they need to think really carefully about what they put in those contracts. So, at a minimum, data access: the data does not belong to the company that's providing the technology through a contract, it belongs to the public. I think it would also be really interesting to think about public service AI, and the public sector itself developing applications which, because they have different business models than the kinds that operate in the private sector, will lead to very different kinds of behaviour, and I think it would be very healthy to have those comparisons. So an example might be smart cities, or transportation, or health,

where if public authorities can use their data, respecting people's privacy and data security, obviously, but use it to deliver benefit for people in general, then that is a form of competition with the private sector that will change the behaviour of the private sector.

The final point I'll make about this is that we tend to have quite generalized debates about AI and about data; actually you need to get much more specific, because it's different in different sectors. A lot of attention focuses on the advertising-driven models, the big tech giants, and our personal data. There are a lot of other types of data as well, and we need to think about how the value from those gets distributed. One of my favourite examples is John Deere, the tractor manufacturer, which has provided lots of sophisticated IT equipment and software in the cabins of tractors, and farmers get a lot of useful information from that about the soil, the weather conditions, and so on. But John Deere encloses that data for itself and is creating new

software and new services to sell, which carry higher profit margins than selling tractors. And so it's capturing that kind of value rather than sharing it with the farmers, and in that case is even trying to use US courts to forbid farmers from mending their own tractors, on the grounds that it holds copyright over the software in the cabins. And so that's another kind of area that's not about personal data, and we need to think much more about the Internet of Things, particularly in Europe, where there's an industrial advantage in that kind of technology.

Koen Smeets: So I think that's a very, very interesting, comprehensive answer on the difficulties related to this, and I was also very curious, because you've also been quite critical of how this relates to the way we teach economics. Because economics, especially in its basic models, is quite negative on regulation; its assumptions say markets are perfectly competitive and work, and such. So I was wondering, and you have also talked about how these assumptions influence policymakers and economic experts in society, could you expand on those subjects?

Diane Coyle: I've been very involved in trying to change the economics curriculum over the past few years. And it's driven by an experience with policymakers who learned their economics some decades ago, when it was very much shaped by this perfect-market benchmark, and so always starting from the position that generally markets are the best way to organise things, but you might think about exceptions, you might think about market failures. And although I think that's a really useful intellectual framework for testing how you might make more efficient use of resources, I don't think it's the right place to start in the modern economy, where things like increasing returns to scale, market power, and structures in the labour market are such obviously important empirical phenomena that you ought to be starting there. And of course you can then use the underlying economic theory as a kind of thought experiment, or to test your ideas, but you should be starting in a different place. And so I hope, I think, that's changing. There's huge student interest

in anything about digital economics, for the obvious reason that it's really affecting our lives in a big way. And so I do detect change. I wish there was more economic research going on in this area, I mean obviously there are some fantastic academics, but it's been a bit slow.

I started writing about digital technologies in the 1990s, we've had the internet widely available since at least the mid-1990s, and it's really only just now that you're seeing a huge growth in the amount of economic research being done on competition in digital markets, and I regret that it's been so slow. We should be further ahead

than we are in terms of both the analysis and the data, the empirical understanding.

Koen Smeets: I saw Professor Renda nod quite a bit, could you also expand on how you see this?

Andrea Renda: Well, I was nodding, first of all, at Diane's example of John Deere, because I think it's very telling of what is happening in a number of markets, and what has already happened in a number of markets and economic sectors, in particular in the United States, where farmers indeed need to purchase access to data coming from their own land. This is something that turns them into slaves of those players that are able to capture the value from the real economy activity that they perform. Indeed, this is exactly the concern that has

led, the way I interpret it at least, the European Commission to launch this data strategy based on two main pillars.

One is a sort of foresight, a vision of the upcoming evolution of digital technologies, in particular from the centrality of the cloud as the place where we store data, to more distributed, even ultimately decentralized architectures, where not only do we store the data more locally and avoid sharing the data widely, we also apply artificial intelligence at a more local level. One example: autonomous vehicles cannot afford shipping the information to the cloud whenever a decision has to be made and then receiving it back after the cloud-based artificial intelligence has processed that information. This creates latency, it creates connectivity costs, it creates some security problems, because everything that travels long distances is potentially exposed to attacks. In principle, ideally, we would have a big brain inside the autonomous car, but with current technology, a big brain able to fully process all the information in an autonomous car means a half-hour battery duration, right. So, technology advances; currently we are at the situation in which a lot of artificial intelligence and data storage can be put in what we call "the edge". It's an intermediate layer between the things, the

connected things, and the cloud, and edge-cloud architectures are much more, let's say, amenable to data management by real economy players, car manufacturers, farmers, energy companies, and if you create a governance and a legal framework that is conducive to such sharing of data between the players and the producers that populate those sectors, you might create the preconditions for their stronger bargaining power vis-a-vis the tech giants, and a bit of a rebalancing of this value that has been captured by just a handful of players so far. So that is a very acrobatic attempt, but the Data Governance Act, the Data Act next year [2021], the potential scaling up of GAIA-X as a pan-European project with a federated cloud environment, is resting on this idea.

And a second idea, which I think is very interesting for economists and people that study social sciences and decision sciences more generally: the idea that maybe it's finally Larry Lessig's time. Meaning, Larry Lessig, in the mid and late 90s, wrote about the prevalence of code, rather than law, as the determinant of what is possible in the internet environment. Well, the attempts through GAIA-X and the data spaces of the

European Commission can be interpreted as a way to translate legal codes into software codes. Meaning, being part of GAIA-X means, in principle, we'll see the realisation in practice, committing to compliance with GDPR by design, committing to some forms of data interoperability by design, and perhaps implementing protocols for use and control over data by design. So we're actually thinking, you know, about the normative power of Europe that we've been discussing, and that Charlotte was mentioning before. Now, the idea that there's a Brussels effect, or that Europe can be a standard setter for digital technologies around the world as well, in my opinion chiefly depends, at this moment, on Europe's ability to create an environment in which it governs technology also by technology, not only by legal rules, and courts, and regulatory agencies. And that, I think, is an enormous area for economists, for interdisciplinary social scientists, to study at the moment, and it really increases and strengthens, if you wish, the muscles of the economist, if the economist is willing to venture into this, into alternative forms of governance, which come from a long tradition in economics, obviously, and their interaction with technology.

Koen Smeets: It's very, very fascinating. Ms Stix, could you expand on that, also from

the technical perspective, and perhaps if you also have comments on the economic perspective.

Charlotte Stix: I mean, sure. So the sort of products that we produce and deploy in Europe, and our competitiveness, or the competitiveness of our industry, is intrinsically linked to ethical and technical considerations, as you said. And I mean, this has been mentioned before by Diane, I think: ethics is not, you know, the be-all and end-all, but ideally ethics and technical considerations should merge and align, right. And in Europe they do that, and that is a really important direction to point out. And then there is the white paper, which follows the ethics guidelines, with the seven key requirements from the High-Level Expert Group that Andrea mentioned.

In the white paper, those have been translated into technical obligations, or legislative obligations, for high-risk AI systems, equally grounded in technical requirements: you have requirements for training data, to ensure that reasonable measures have been taken so that outcomes don't lead to prohibited discrimination; data record keeping; documentation of programming and training; information provision; robustness and accuracy, to ensure that a system can adequately deal with errors over its life cycle or under attack; and human oversight. And I think those sorts of mixtures really do put Europe in a unique position in comparison to other governments, internationally speaking. And it could also empower European industry. So yes, there is a Brussels effect, but it could actually also lead to novel innovation, quite frankly.

So the testing and experimentation facilities that will eventually need to be built, in order to ensure adherence to these ethical and technical obligations, to ensure that you adhere to the relevant legal framework, will set completely new structures as to what products that come onto the European market will look like, and that can encourage innovation, because a lot of these considerations are actually forward-looking and they address both technical and societal problems.

So if you think about the long-term effect of AI systems on the climate: yes, AI is often touted as, you know, being able to tackle climate change, but it is also a massive contributor to worsening climate change. So if you think about, for example, putting this in as one of the technical obligations to address topics such as these, you could encourage European industry to focus on energy-efficient learning, which might put Europe into a different position on the global scale. And as Andrea has mentioned, you know, Europe does have the data strategy, does think about edge computing, GAIA-X is an initiative across different member states, and there is value to capture there. And as Diane said, you know, with the example of the farmers,

that is a really big problem that Europe is also addressing and trying to pre-emptively tackle. So it is suggesting opening up these forms of environmental data, in order to harness them for individuals and for the public sector, so that they are not, you know, resold for purposes, or to groups of people, that should have access to these things in the first place. So I think Europe is really going in a lot of different directions here, trying to mix and merge competitiveness with ethics and with technical considerations, under the umbrella of pushing the ecosystem that we have from an industry perspective and the ecosystem we're creating from a legislative perspective.

Koen Smeets: Fascinating. Professor Coyle, would you care to expand on Professor Renda's and Ms Stix's comments? I think you're still on mute.

Diane Coyle: Sorry. It's the phrase of 2020, "you're still on mute". The thing that struck me actually, listening to Andrea and Charlotte, is, I

completely agree that there is a real opportunity for Europe here, not just to shape outcomes in Europe but to shape them globally, and to have a leadership role in setting standards and providing models. The thing that struck me, though, is the need for an interdisciplinary approach. And this applies to academics working on these things as much as it does to policymakers. You need computer scientists, obviously, because you need great technical know-how to set standards, regulate effectively, and deliver value for people. You need economists and lawyers, deeply involved in competition policy and in writing regulations. You also need to involve politics, because these changes in our society that are coming about need to have legitimacy. And we know we're in a context of great polarization in lots of countries, with inequalities being exposed and broadened by the pandemic and the economic crisis. And it's going to be really important, because the

technology will drive significant social change, to have political legitimacy and accountability. And then you also need social psychologists, behavioural psychologists, because this is all about how people behave.

But if you think back to the Industrial Revolution, we tend to talk about something like railways as changing transportation, which obviously it did, but it also drove urbanization, because food could be grown outside cities and brought into them, and so the population that could be sustained in a modern urban centre was much bigger. And that has been a huge change in social, political, and economic life through the 20th century. And that's the kind of scale of eventual impact on our societies that we're talking about with any general purpose technology like digital and AI.

And so fundamentally, although I think what we've been talking about is important, the really important thing is the legitimacy of the changes, and ensuring that these technologies deliver benefits for everybody, not just making a few people very rich indeed.

Koen Smeets: And focusing on the economics curriculum, you mentioned the importance of interdisciplinarity, and I think we see this not only in this interview but also in the other interviews in this series, and I was wondering: what exactly should we then change in the economics curriculum? And also, should we include more pluralism, should we include more real-world perspectives, should we include more interdisciplinarity, how do you see this?

Diane Coyle: Well, the example I point to is the CORE economics textbook, The Economy, which I was one of the co-authors of, and I think it does take this much more empirically-founded approach and incorporates things like power dynamics, and inequality, and distribution right at the heart of the curriculum. I think it is important to understand older debates in economics, but I wouldn't go so far as to say that, like some of the humanities, economics is always about contesting sets of views and values; it's both. Obviously, we all bring our values to these questions, and it has been a mistake in economics to say that you can separate the normative and the positive, the values and choices, and the empirical evidence. I don't think you can separate them, but at the same time I think it's really important that as economists we try to be as impartial as possible, looking at evidence and bringing empirical evidence to bear on these social problems. So it's

an uncomfortable middle position, but I'm not an advocate of complete pluralism in the curriculum.

Koen Smeets: That's interesting. Ms Stix, what do you think we should include in the economics curricula, and what would be most important from a policy perspective, and perhaps also from an economic perspective?

Charlotte Stix: I mean, I think I can just echo what has been said so far. I think in all fields,

not just exclusively economics, if you start talking about emerging technologies, it is really important to include a lot of different fields working on this. And coming back to the earlier point, particularly technical researchers: it's important to understand what you are looking at and the actual capabilities of this technology, not as it is now, when it's already on the market, but what it can do in two or three years. You really do need to speak to those researchers doing their PhDs right now on these topics, in order to know what the cutting edge is. Because these technologies shape economic markets and governments' decisions so much, and so quickly, you almost need to have an anticipatory view. And you can only have this anticipatory view if you don't just react but sort of already understand what the next steps and the timelines for technological development will be. Now, that doesn't mean you need to become a technical expert, by no means. It might be possible, but I don't think it is required,

but it does mean that you need to engage with those people that are working on these technologies right now. Because otherwise the cycles become too quick, you can lose track really quickly, and it does shape your research or your proposals in your economic work.

Koen Smeets: And then, as a closing question before we move to final statements, Professor Renda, how do you see this in light of Professor Coyle's and Ms Stix's comments?

Andrea Renda: Well, I agree with both of them. I'll bring in a little bit

of personal experience as well, in trying to encourage students of economics today to really listen to what their teachers have to say, but at the same time to develop their own intellectual path, independently and in a multidisciplinary way, as much as possible. I studied economics; I specialised initially in a subject called "economic analysis of law", or "law and economics", which was heavily dominated by the Chicago School and neoclassical economics, with heavy use of very standard cost-benefit analysis, and the translation of this into an approach to competition law which I would say was almost minimalistic. And I have navigated through those waters by trying to stick to my own understanding and my own beliefs; I was very sceptical of many of those principles. And I kept applying this, and I still apply it, when today, for example, we apply economics in public policy in a way that still uses very standard tools like cost-benefit analysis that are in most cases unrelated to, and in some cases disrespectful of, governance and distributional impacts, and stick to something that in my opinion is one of the key problems, but has been one of the key distinctive traits of economics over the years: what we call methodological individualism.

Economics, largely, still analyses people's utility or happiness in a way that is completely unrelated to what happens in one's surroundings or to what others have, the relative dimension of it. And this, I think, has made economics a science that princes and policymakers looked at as rocket science, which it obviously is not, and so it has determined part of the popularity of economics over time. But at the same time, it's a huge limitation of this social science, and something that calls for contamination, in the positive sense, with many, many other social sciences today.

So, in principle, I would ask students today, and I would ask the ones that develop curricula in economics, to try and depart at least partially from what we normally have, especially in microeconomics textbooks, such as indifference curves or utility functions, and from what we start learning very soon in Economics 101, which is the idea that people's happiness and utility can easily be proxied by income. And that is something that in my opinion has created disasters in the

application of economic policies, in developed and developing countries around the world.

Koen Smeets: I'd have loved to expand on that, but we're already quite a bit over time. So for closing statements, Professor Coyle, perhaps also in light of what Professor Renda just said: if there's one thing you could say to students in economics watching today, related to the topics we discussed, what would that be?

Diane Coyle: My one piece of advice would be about the kinds of questions that you pursue in your studies and your subsequent career, wherever that is. There are very strong incentives in life to stick to small problems, you know, fix a particular detailed policy issue, or research something that will get you a paper published in one of the economics journals. There are lots of really clever people spending all their energy on small questions. We've got some really big questions facing us at the moment, and so my advice would be: obviously students are really interested in those big issues, so have the courage to pursue those big issues, because it needs the younger generation to be working on them.

Koen Smeets: I think that's a beautiful answer. The same question, Ms Stix,

if there's one thing you could say to students, especially in economics, watching, related to the topics we discussed, what would that be?

Charlotte Stix: Sure. I mean, first of all I would pretty much agree with what has been said, and come back to what I've said before about working with various different experts. You can't work in silos anymore in the world that we're in, not with this technology either. And you cannot come up with completely novel ideas focusing only on your, you know, narrow, specific question that you're looking at; it's really important to broaden your horizon and to engage with all of the knowledge that is out there. And, you know, the knowledge from, frankly,

various different fields, and I think that's where fruitful connections, and new ideas, and also approaches to tackle specific problems can be drawn from, and that's really important.

Koen Smeets: Yes, I fully agree, and I hope that this interview series can also contribute to that. Professor Renda, for you the same question, as the very last closing question. Do you have any last tips, recommendations, advice?

Andrea Renda: Well, I think I can maybe relate my answer back to what has been said throughout the hour, and to digital technology in particular. I think the emergence of digital technology

and interconnected environments, such as the internet, has given economists, and social scientists more generally, a unique opportunity to study the evolution of an ecosystem that, at least initially, could be seen as a standalone one. Still today, I think, we have the possibility of studying the evolution of the digital economy as a living system, if you wish, where external needs, the need to perform certain functions, and technological evolution determine a different morphology of that living system over time. We've seen some basic foundational elements, which Diane summarised in her first answer; we've seen the first evolution, which largely was due to the way in which the internet was structured, you know, code determines what's possible. Today, from the central nervous system of the internet, the cloud, we are building out the peripheral nervous system. The potential that we will have in using the Internet of Things for regulation, for public policy, for social life, for the economy is enormous, and we need creative minds that are grounded in social science, including economics, to help us understand and anticipate what this will mean in terms of governance, regulation, and public policy. So be applied,

be creative, be broad-minded, and, importantly, be inspired by the public good, which I think is very important and sometimes difficult, especially for economists, who have a very rich market in front of them if they specialise in something other than the public good: to remain really concentrated on what economists can still do today that is a great contribution to the evolution of our public policies, and overall to the way in which we govern our economy and society.

Koen Smeets: Thank you, I think that was a beautiful answer and a great closing statement. And I want to thank each of the panellists for taking the time today,

and we hope to see the viewers again  at the next episode. Thank you!

2021-04-27 03:15
