Stanford ECON295/CS323 | 2024 | A World Without Work, Daniel Susskind
Good afternoon. I'm delighted to have Daniel Susskind with us today. Daniel Susskind is a research professor of economics at King's College London and a senior research associate at the Institute for Ethics in AI at Oxford University. He's the author of one of my favorite books, if you haven't read it yet, A World Without Work. He also has a new book. What's the new book? It's Growth: A Reckoning, which he talked about this afternoon. He's a former Kennedy Scholar at Harvard University, and he's held numerous posts in the British government, including the Prime Minister's Strategy Unit, the Cabinet Office, and the Policy Unit at 10 Downing Street. So he's going to give a talk, a bit of an overview of some of the things he's been working on, go for about 40 minutes, and then we'll have a little fireside chat and do questions with all of you. So thank you very much, welcome Daniel, thanks for coming all the way over from London. I hope the jet lag's not too bad; we'll keep you adrenalized with hard questions.

Right, fantastic. Well, thank you very much for having me. It's a great pleasure to be with you all to talk about a world without work. I want to begin with a story: the great manure crisis of the 1890s. And it should have come as no surprise. For some time, in big cities like London and New York, the most popular forms of transport had relied upon horses, hundreds of thousands of them, to heave cabs, carts, wagons, and a whole variety of vehicles through the streets. And with these horses came manure, and lots of it. One enthusiastic health officer working in Rochester, New York, calculated that the horses in his city alone produced enough to cover an acre of land to a height of about 175 feet, almost as tall as the Leaning Tower of Pisa. And, apocryphally, people at the time extrapolated from these calculations to a sort of inescapably manure-filled future: a New York commentator predicted that piles of the stuff would soon reach the height of third-story windows; a London reporter said that by the middle of the 20th century the streets would be buried under nine feet of the stuff. And it's said that policymakers didn't know what to do; they couldn't simply ban horses from the street, they were far too economically important. But the twist in the tale is that in the end policymakers didn't need to worry. In the 1870s the first internal combustion engine was built, in the 1880s it was installed in the first automobile, and only a few decades later Henry Ford brought the Model T to the mass market. Cars were now a mass thing: by 1912 New York had more cars than horses, and five years after that the last horse-drawn tram was decommissioned in the city. The great manure crisis was over.

So this parable of horse manure, as Elizabeth Kolbert called it in the New Yorker, has been told many times over the years, and in most tellings of the story the decline of horses is cast in an optimistic light, as a tale of technological triumph. But for this man, Wassily Leontief, the Russian-American economist who would win the Nobel Prize for his work in 1973, the same event suggested a far more unsettling conclusion. What he saw was how a new technology, the combustion engine, had taken an animal that for millennia had played a central role in economic life and had banished it to the sidelines. And then, in a set of articles in the early 1980s, he made one of the most infamous
claims in modern economic thought: what technological progress had done to horses, he said, it would eventually do to human beings as well, drive us out of work. What cars and tractors were to them, robots and computers would be to us.

Now, today the world is gripped again by Leontief's fear. Here in the US about 30% of workers believe their jobs are likely to be replaced by robots and computers in their lifetime; in the UK the same proportion think it will happen in the next 25 years. And what I want to do today is explain why I think we need to take these sorts of fears seriously, not always their substance, as we shall see, but certain aspects of their spirit. Will there be enough well-paid work for everyone to do in the 21st century? I think this is one of the great questions of our time, and what I want to do today is explain why I think we need to take this threat of technological unemployment more seriously. In particular I want to do six different things. The first is to set out a little of the history of automation anxiety: this isn't the first time that people have worried about the impact of technology on work. The second is to say a little about technology in general. The third is to say a bit more about one technology in particular, artificial intelligence, something that has really captured people's imaginations over the last year or so. I think there has been some hype and some over-excitement, but beneath that something interesting has happened, and I want to spend a bit of time exploring exactly what that is and why it matters so much for thinking about the future of work. I want to explain why I think we need to take these technological challenges in the world of work seriously. I also want to set out a theme that has become increasingly popular, particularly in the last few months: that what we ought to do is try to redirect technology, not try to slow it down, but instead pursue the sort of technology that protects good work rather than undermines it, and I want to share a few thoughts on that. And then finally I want to close with what I think are the three big problems that we'll face in a world with less work, and explain how I think we ought to respond to them.

So first, the economic history. Economic growth is a very recent phenomenon. In fact, for most of the 300,000 years that human beings have been around, economic life has been relatively stagnant. But over the last few hundred years that stagnation came, as you can see, to an explosive end: the amount each person produced increased about 13-fold, and world output rocketed nearly 300-fold. Imagine that the sum of human existence was an hour long; most of this action happened in the last half second or so, in the literal blinking of an eye. And it was Britain that led the economic charge, thundering ahead of others in what's now known as the Industrial Revolution, from around the 1760s. Over the following decades new machines were invented and put to use that greatly improved the way goods were produced, and these technologies allowed manufacturers to operate far more productively than ever before, in short to make far more with far less. And it's here, at the beginning of modern economic growth, that we can also detect the origins of automation anxiety: people started to worry that using these machines to make more things would also lead to less demand for their own work.
The anxiety that automation would destroy jobs soon spilled out into protest and dissent. During the Industrial Revolution back home, technological vandalism by the so-called Luddites was widespread; in 1812 the British Parliament felt forced to pass the so-called Destruction of Stocking Frames Act, so destroying machines became a crime punishable by death, and several people were soon charged, and in fact executed, in Britain for destroying machines. Importantly, though, this automation anxiety wasn't confined to the 18th and 19th centuries; it continues right up until the present day. In the last few years there's been a sort of frenzy of books and articles and reports on the threat of automation, yet even as early as the 1940s the debate about technological unemployment was so commonplace that the New York Times felt comfortable calling it an old argument. In fact, in almost every decade since 1920 it's possible to find a piece in the New York Times engaging in some way with this threat of technological unemployment.

And yet, and this is the key point, most of those anxieties about the economic harm caused by these new technologies have turned out to be misplaced. Looking back over the last few hundred years, there's really little evidence to support that primary fear that technological progress would lead to large pools of permanently displaced people. It's true that workers have been displaced by new technologies, but eventually most have found new work to do. So the really interesting question is why: why, in the past, despite the fears of so many people, did technological progress not lead to mass unemployment? The answer, when we look back on what actually happened over the last few hundred years, is that the harmful effect of technological progress on work, the one that really preoccupied our anxious ancestors, is only one half of the story. Yes, machines took the place of human beings in performing certain tasks and activities, but they didn't just substitute for people; they also complemented them at other tasks and activities, the ones that hadn't yet been automated, increasing the demand for human beings to do those unautomated tasks instead. It's this helpful force, which is so often forgotten about, and which works in a variety of different ways, that increases the demand for displaced human beings to do work that hasn't yet been automated. This complementing force works in various different ways, as I said, and for those who want more detail on how it works, do take a look at the book A World Without Work; I explore lots of different ways in which this complementing force helps increase the demand for displaced workers. But the key point is this: distinguishing clearly between that harmful substituting force and that helpful complementing force helps to explain why those past anxieties about automation were so often misplaced. In the clash between these two fundamental forces, our ancestors just tended to pick the wrong winner. Time and again they either neglected that helpful complementing force altogether, or they imagined that it would somehow be overwhelmed by that harmful substituting force, and that is why they repeatedly underestimated the demand for the work of human beings that would remain. There has always been, by and large, enough to keep human beings in employment. So that, I think,
is the context, and it's important context, for thinking about the impact of technology on work today. And with that background I want now to turn to think a little about technology. Every day it feels like we hear stories of systems and machines taking on tasks and activities which, until recently, we thought only human beings could ever do: making medical diagnoses and driving cars, drafting legal contracts and designing buildings, composing music and writing news reports. Given the time constraints this afternoon, I just want to focus on the general way in which I think about the technological progress that's taking place at the moment. Although machines can clearly do more than they could in the past, they still cannot do everything; there are still clearly limits to that harmful substituting force. The problem is that the boundaries between what these machines can and cannot do are unclear and always changing, and so what you see are lots of books and articles and reports that try to work out the new limits of machine capabilities, and they use lots of different approaches. One is to try to identify which particular faculties are hard to automate. A popular finding, for instance, is that new technologies struggle to perform tasks that require social intelligence: face-to-face interaction, empathetic support. And if you look at the data, from 1980 to 2012 jobs that require a high level of human interaction grew by about 12% as a share of the US workforce. A very different tack to identifying faculties is to look at the tasks themselves, and ask whether particular tasks and activities have features that make it easier or harder for a machine to handle them. If you come across a task where it's easy to define the goal, straightforward to tell whether or not that goal has been achieved, and there's lots of data for a system to learn from, then that's the sort of task that can probably be automated. Identifying cats is a good example: the goal is simple, just answer the question "is this a cat?"; it's easy to tell whether the system has succeeded, yes, that is indeed a cat; and there are lots of cats out there on the internet, a surprisingly disturbing number of cats, apparently 6.5 billion photos of cats on the internet.
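A minimal sketch of those three criteria, a well-defined goal, an easy check of success, and lots of labelled examples, expressed as a supervised-learning toy. The data here is synthetic and the setup invented for illustration; a real "is this a cat?" classifier would learn from labelled photos, not random feature vectors.

```python
# Toy illustration of the "is this a cat?" criteria: a crisp goal (a yes/no label),
# an easy success check (held-out accuracy), and plenty of labelled examples.
# All data below is synthetic; a real system would learn from labelled photos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 5000                                    # pretend we have 5,000 labelled "photos"
X = rng.normal(size=(n, 8))                 # each summarised by 8 numeric features
true_w = rng.normal(size=8)
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = "cat"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)          # learn the goal from the labels
print("held-out accuracy:", clf.score(X_test, y_test))    # easy to tell if it succeeded
```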
But the obvious problem with trying to mark out the limits of machines in either way, identifying faculties which are hard to automate or particular tasks which are hard to automate, is that any conclusion you reach is going to become outdated pretty quickly. Those who try to identify these boundaries are like the proverbial painters of the Forth Rail Bridge in Scotland, a bridge so long that they supposedly had to start repainting it as soon as they got to the end, because by that point the paint would already have begun to peel. Spend too much time trying to come up with a sensible account of what machines can do today, and the chances are that by the time you finish you're going to have to start again and readjust. So I think a better way to think about technology, and this is what I do in my work, is to stop trying to identify particular limits, to repress that desire to taxonomize, and to look instead at the deeper trends. And when you do that, what you see is that although it's very difficult to say exactly what machines will be capable of doing in the future, it's pretty certain that they're going to be able to do more than they currently do today. Over time machines are gradually but relentlessly going to take on more and more of the tasks and activities that, at the moment, only human beings can do. Take any technology today, look at your laptop, open your smartphone: that's the worst it's ever going to be. I call this general trend task encroachment. And when you look at the three main capabilities that human beings draw on when they do their work, whether it's manual capabilities, those that involve the physical world, cognitive capabilities, which draw on our capacity for thinking and reasoning, or affective capabilities, our capacity for feelings and emotion, what you see is machines gradually but relentlessly encroaching on more and more tasks and activities that require each of these capabilities. If you have a look at my work you'll see hundreds of examples of this process of task encroachment underway. It's important, though, to remember that the examples I give in my work, and that some of you will have read this week on the reading, aren't meant to be exhaustive; some impressive ones are missing, and others, in years to come, will no doubt look pretty tired. I also think it's quite important that the claims companies make are not taken as gospel either; at times it can be quite hard to distinguish serious corporate ambitions and achievements from provocations drawn up by marketeers whose job it is to exaggerate things. The icing on the cake for me was when someone asked at Christmas if I would like an artificially intelligent toothbrush; I'm not quite sure how intelligent you need to be to brush your teeth. But the general point is that if you dwell for too long on any particular omission or exaggeration, you're going to miss the bigger picture, which is that these machines are gradually but relentlessly encroaching on more and more tasks and activities that in the past required quite a rich range of human capabilities. Economists are pretty wary of labelling any empirical regularity as a law or a rule, but I think this process of task encroachment has proven to be almost as lawlike as any historical phenomenon can be: barring catastrophe, it seems pretty likely to continue. And this, I think, is the most valuable way to think about technology in general.

What I want to do now, though, is focus on one technology in particular, artificial intelligence, because I do think something really interesting has happened in the field, and I want to spend a bit of time setting out exactly what it is; that will help me explain why I think we need to take these challenges of automation in the world of work more seriously. The story I want to tell begins in what I call the first wave of artificial intelligence, which took place in the 1980s. The 1980s were the time when my dad, who was my co-author on the first book that I wrote, The Future of the Professions, was doing his doctorate in artificial intelligence and the law at Oxford. So almost 40 years ago he was already trying to build systems that could solve legal problems, and something really
interesting happened in the UK in 1986, which was that a very difficult piece of law was passed, called the Latent Damage Act. It turned out that the leading expert in the world at the time on this very particular piece of law was a man called Phillip Capper, and Phillip happened to be the dean of the law school at Oxford where my dad was doing his doctorate. Phillip came to my dad and said, look, this is just absurd: any time anyone wants to understand whether this law applies to them, they have to come to me. So what he said was, why don't we work together, join forces, and build a system based on my expertise that people could use instead of having to come to me. And that's exactly what they did. From 1986 to 1988 they worked on the development of the world's first commercially available AI system in the law. This was the home screen design for the system they built; my dad assures me that this was a cool screen design in the 1980s, I've never been entirely convinced of that. Just to give you a flavor of what they were up against, here's an extract from the law: "section two of this act shall not apply to an action to which the section applies", hardly a more readily understandable piece of law. I love this, though: they published it in the form of floppy disks, at a time when floppy disks genuinely were still floppy. And essentially what they did together was build an absolutely gigantic decision tree, where you answered yes or no questions and navigated through a tree that had quite literally millions of branches, which my dad and his colleagues had manually, painstakingly crafted together in the computer science lab.

Now, what's interesting is that they weren't just doing this in law; this was the general approach in AI back then. In the beginning, most people like my dad, most AI researchers, believed that building a machine to perform a given task meant observing how a human being performed that task and then copying them. Some people tried to replicate the actual structure of the human brain; others took a more psychological approach and tried to replicate the sort of thinking and reasoning processes that a human being appeared to be engaged in; a third approach was to try to draw out the rules that human beings seem to follow. But in all these different efforts, human beings provided the template for machine behavior in one way or another. What's interesting is that ultimately this approach of building machines in the image of human beings didn't really succeed, and despite that initial burst of optimism and enthusiasm in the 1980s, there wasn't a lot of serious progress in artificial intelligence. As many of you will know, as the 1980s came to an end and the 1990s began, research interest, funding, and basically progress in artificial intelligence dried up, and a period known as the AI winter began, in which not a lot of progress happened in the field at all. That first wave of artificial intelligence, which had raised so many hopes, ended, really, in failure.
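A tiny sketch of what that first-wave, hand-crafted approach looks like in code: expert knowledge encoded as a yes/no decision tree that the user walks through question by question. The questions and outcomes below are invented for illustration; they are not taken from the Latent Damage System, which had millions of branches.

```python
# Invented, miniature example of a first-wave "expert system": the expertise lives
# entirely in hand-written rules, not in anything learned from data.
# Each node is either a dict with a question and "yes"/"no" branches, or a leaf string.
tree = {
    "question": "Was the damage hidden when it first occurred?",
    "yes": {
        "question": "Have more than three years passed since it could have been discovered?",
        "yes": "The claim may be time-barred; seek specialist advice.",
        "no": "The claim may still be brought; seek specialist advice.",
    },
    "no": "Ordinary limitation rules are likely to apply.",
}

def consult(node):
    """Walk the tree by asking yes/no questions until a leaf conclusion is reached."""
    while isinstance(node, dict):
        answer = input(node["question"] + " (yes/no): ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node

if __name__ == "__main__":
    print(consult(tree))
```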
The turning point comes in 1997, and it's a moment that I think lots of you will be familiar with. This, of course, is when Garry Kasparov, who at the time was the world chess champion, was beaten by a system called Deep Blue, owned by IBM. Now, what's really interesting about this is that if you had gone back to the 1980s and said to my dad and his colleagues, do you think something like this will ever be possible, and remember they were some of the most open-minded, some of the most ambitious, progressive people thinking about AI back then, they would have said to you, emphatically, no, we will never do it. And the reason they would have said no is very important. The reason they would have said no is that at the time they were of that first-wave mindset: they thought the only way to build a system to outperform a human expert was to sit down with a human expert, get that expert to explain how they solve whatever problem it is you're trying to solve, and then try to capture that human explanation in a set of instructions or rules for a system to follow. But here's the problem, and Garry Kasparov is such a good example of it: if you sit down with Kasparov and say, tell me how it is you're so good at chess, he might be able to give you a few clever opening moves or closing plays, and chess fans in the room will know that that's what he does in his books and online tutorials, but in the end he'd struggle. He'd say things like: it requires gut reaction, instinct, intuition, judgment, creativity, and so on. In short, he would say, I cannot articulate to you exactly how it is that I'm so good at chess. And that was why my dad and his colleagues thought something like this could never be automated: if a human being cannot articulate how they perform a task, where on earth do we begin, they worried, in writing a set of instructions for a machine to follow?

Now, what they hadn't banked on, the mistake that they made, was not anticipating the extraordinary growth in processing power that would happen in the decades to come. I'm using here an index of computations per second as a measure of processing power: not a lot happening between 1850 and 1950, and only around 1950 do you get what looks like a relatively linear increase in processing power. Of course, though, this is not a linear increase: the y-axis there is a logarithmic axis, each step up a tenfold increase in computational power; what you're looking at is an explosive growth in processing power. And this is what my dad and his colleagues just did not anticipate back in the 1980s. By the time Garry Kasparov actually sits down with Deep Blue, and remember this is more than 25 years ago, Deep Blue is already calculating up to 330 million moves a second; Kasparov, at best, could juggle 110 moves in his head on any one turn. He was blown out of the water by brute-force processing power and a huge amount of data storage capability. It didn't matter, and this is the key point, it did not matter that Garry Kasparov couldn't articulate how he was so good at chess; the system was able to perform the task in a fundamentally different way. It didn't need to replicate his thinking process or mimic his reasoning. So this Deep Blue result was a practical victory, but it was also an ideological triumph. We can think of most AI researchers up until that moment as purists: closely observing human beings acting intelligently and trying to build machines like them. But that was not how Deep Blue was designed. Its creators didn't set out to copy the anatomy of human chess players, the reasoning they engaged in, or the particular strategies they followed. Rather, they were pragmatists: they took a task that required intelligence when performed by a human being and built a machine to perform it in a fundamentally different way.
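To make the contrast with the first-wave approach concrete, here is a generic brute-force game-tree search (negamax with memoisation) on a toy take-away game. This is emphatically not how Deep Blue was built, which used specialised hardware and far more sophisticated search and evaluation; the point is only that exhaustive search needs no account of how a human expert thinks.

```python
# Brute-force search of a toy game: a pile of counters, each turn a player removes
# 1, 2 or 3, and whoever takes the last counter wins. No human "intuition" is encoded;
# the program simply examines every continuation.
from functools import lru_cache

MOVES = (1, 2, 3)

@lru_cache(maxsize=None)
def value(counters):
    """Value for the player to move: +1 if they can force a win, -1 otherwise."""
    if counters == 0:
        return -1  # the opponent just took the last counter, so the player to move has lost
    return max(-value(counters - m) for m in MOVES if m <= counters)

def best_move(counters):
    """Search every legal move and pick the one with the best guaranteed outcome."""
    legal = [m for m in MOVES if m <= counters]
    return max(legal, key=lambda m: -value(counters - m))

if __name__ == "__main__":
    print(best_move(10))  # prints 2: leaving the opponent a multiple of 4 is a forced win
```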
And that is what, in my view, brought artificial intelligence out of its winter. It's what I call the pragmatist revolution, and a generation of systems has now been built in this pragmatist spirit, crafted to function very differently from human beings, judged not by how they perform a task but by how well they perform it. Advances in machine translation, as many of you will know, have not come from developing a system that mimics a talented translator, but from companies scanning millions of human-translated pieces of text to figure out correspondences and patterns on their own. Likewise, machines have learned to classify images not by mimicking human vision, but by reviewing millions of previously labeled photos, hunting for similarities between those and the particular photo in question. And this is interesting from a practical point of view and from an economic point of view, because one of the things I'm really interested in is how the economic literature has, in my view, systematically underestimated the capabilities of machines. So what do the tasks of driving a car, making a medical diagnosis, or identifying a bird at a fleeting glimpse have in common? These are all tasks that at one point, and until very recently, leading economists said could not readily be automated, and yet today all of them can be: almost all major car manufacturers have driverless car programs, countless systems can diagnose medical problems, and there's even an app developed by the Cornell Laboratory of Ornithology that can tell you what a bird is at a glimpse. So what went wrong? These economists were purists, believing that machines had to copy the way human beings think and reason in order to outperform them. When they were trying to determine which tasks machines could not do, they imagined that the only way to get a machine to perform a task was to sit down with a human being, get him or her to explain how they perform that task, and then write a set of instructions based on that human explanation for machines to follow. And because human beings find it so difficult to explain how they do these things, the thought was that these sorts of things couldn't be automated either. This leads to one of the most important ideas in my work, and one of the most important ideas I want to share today, which is what I call the artificial intelligence fallacy. You see it in lots of popular commentary on the future of work, but also in more expert analysis too, and it's the mistaken assumption that the only way to develop systems that perform tasks at the level of human beings or higher is to somehow copy the way that human beings perform that task. That might have been true 40 years ago, in that first wave of artificial intelligence, but it's simply no longer the case. So let me give you a practical example. If I were talking to an audience of professionals, doctors, lawyers, accountants, architects, they'd say, look Daniel, that's all very interesting, but you don't understand: what I do in my work requires judgment, and judgment is the sort of thing a machine can never do. And I see this question, can a machine ever exercise judgment, as the wrong question to be asking in light of the technological developments I've just described. In fact there are two more important questions. The first is this: to what problem is judgment the
solution? Why do people go to doctors or lawyers or accountants or architects, or whoever it might be, and say, look, I need your judgment, give me your judgment? And the answer to that question, it seems to me, is uncertainty: when the facts are unclear, when the information is ambiguous, when people don't know what to do, they go to their fellow human beings and say, I need your judgment, perhaps based on your experience, to help me make sense of this uncertainty. So really the question we should be asking, the interesting question, isn't can a machine ever exercise judgment, but can a machine deal with uncertainty better than a human being can? And the answer, as many of you will know, is of course it can. That is precisely what these systems are so good at: they can handle far larger bodies of data than us and make sense of it in ways that we, acting alone, simply couldn't perceive. A good example of this are the medical diagnostic systems that have been developed in the last few years. This is the one developed at Stanford that can tell you whether or not a freckle is cancerous as accurately as leading dermatologists. How does it work? It's not trying to copy the judgment of a human doctor; it knows, it understands, absolutely nothing about medicine at all. Instead it's got a database of 140,000-odd past cases, and it's running a pattern-recognition algorithm through those cases, hunting for similarities between them and the particular photo of the troubling freckle in question. It's performing, once again, the task in an unhuman way, and it doesn't matter, and this is the key point, that a human doctor might struggle to articulate how they make a medical diagnosis. That inability of human beings to explain how they do very tricky, subtle, complex things turns out to be far less of a bottleneck on automation than many people thought in the past. Can machines think? I love that question from a philosophical point of view, but from a more practical point of view I just don't think it's a particularly useful question, and to see why, I want you to think of another system owned by IBM, this one called Watson. Its claim to fame, of course, was that it went on the US quiz show Jeopardy! in 2011 and beat the two human champions at the game. And I love this, but what I particularly like about it is that the day after Watson won on Jeopardy!, the Wall Street Journal ran a terrific piece by the great philosopher John Searle with the title "Watson Doesn't Know It Won on Jeopardy!". And it's completely right, completely true: Watson didn't let out a cry of excitement, it didn't call up its parents to say what a good job it had done, it didn't want to go down to the proverbial pub for a drink. The system wasn't trying to copy the way those human contestants thought, or the way they reasoned, but it didn't matter: it still outperformed them. It's what I call in my work an increasingly capable non-thinking machine. And I think that is what the second wave of artificial intelligence, the one we're in at the moment, is about, very broadly: systems and machines that are using remarkable advances in processing power, in data storage capability, and in algorithm design to perform tasks that might require very subtle faculties when performed by human beings, but perform them in fundamentally different ways.
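The system the talk describes at Stanford was a trained neural network, so the sketch below is not a reproduction of it; it is just a toy version of the "hunt for similar past cases" idea on invented feature vectors, using a simple nearest-neighbour vote. All numbers and labels are synthetic.

```python
# Toy "find the most similar past cases" diagnosis: no medical knowledge is encoded,
# only a database of past examples and a similarity measure. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(1)

past_features = rng.normal(size=(140_000, 16))   # stand-in for 140,000-odd past cases
past_labels = rng.integers(0, 2, size=140_000)   # 1 = "worrying", 0 = "benign" (invented)

def diagnose(new_case, k=25):
    """Label a new case by majority vote among its k most similar past cases."""
    distances = np.linalg.norm(past_features - new_case, axis=1)
    nearest = np.argsort(distances)[:k]
    return int(past_labels[nearest].mean() > 0.5)

print(diagnose(rng.normal(size=16)))
```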
And the consequence of this is that, in practice, a whole realm of activity that we thought was out of reach, because it draws on subtle faculties like judgment or creativity or even empathy, turns out not to be out of reach after all; more and more of these activities are within reach. I think this has quite important implications for thinking about work, and I want to spend a bit of time on what those implications might be. The challenge for now, in my view, over the next five to ten years, is not that there aren't enough jobs to be done, full stop, because of technological progress; the challenge is that there is work to be done, but for various reasons people aren't able to do that work. That seems to me a useful way of thinking about the challenge in the labor market due to technological progress at the moment and in the medium term: a sort of frictional technological unemployment. If we think again in terms of those two forces I mentioned before, I think for the next decade or so, in almost all economies, that harmful substituting force that displaces workers will be overwhelmed by that helpful complementing force that raises the demand for their work elsewhere. But there are three reasons why I think that increasingly in-demand work might sit out of reach of more and more workers, and I want very briefly to explain what those are. The first is what I call a skills mismatch, where displaced workers simply don't have the skills to do the new work that has to be done, the new work created by technological progress. This is the most familiar reason for frictional technological unemployment, and I won't say much more on it. The second is what I call the place mismatch, where displaced workers just don't happen to live in the particular place where work has been created. It's worth remembering that in the early days of the internet there was a moment when it seemed like these worries about place would no longer matter: people spoke about the death of distance and how the world is flat. But actually, in looking for work today, the place where you live matters more than ever. The third mismatch, and I think this is the one we spend the least time thinking about, is the identity mismatch; despite the fact that we don't spend much time thinking about it, I nevertheless think it's important. This is where displaced workers have an identity rooted in a particular sort of work, and they're willing to stay out of work in order to protect that identity. Think of adult men here in the US displaced from traditional manufacturing roles by new technologies. There are some people who would say that these men would rather not work at all than take up, and it's an unfortunate term, so-called pink-collar work, a term designed to capture the fact that many of the jobs that are hardest to automate, and many of the jobs in which we anticipate growth in the future, are disproportionately done by women: 97.7% of preschool and kindergarten teachers in the US are women, 92.2% of nurses, 82.5% of social workers. So those mismatches, it seems to me, in thinking about the impact of technology on work in the medium term, are the more pressing challenge. But if you look further into the 21st century, I think we might see the
emergence of a second type of technological unemployment, one where there's simply not enough work to be done, full stop. And I think this is a less comfortable idea: not frictional technological unemployment, but structural technological unemployment. So can this idea be right? What about the fact that after three centuries of radical technological change there's still enough work for human beings to do; doesn't that tell us there's always going to be sufficient demand for the work of human beings? The argument that I make in the book that's on the reading list this week is no, and the fundamental reason is that process of task encroachment. If you think of those two forces I mentioned before, I think there can be little doubt that as task encroachment continues, that harmful substituting force is only going to grow stronger: workers are going to be displaced from a wider range of tasks and activities than many people thought was possible in the past. Why, though, can we not simply rely upon that helpful complementing force, as we have done for the last 300-odd years? The answer, in my view, is that the process of task encroachment also has a second, pretty pernicious effect: over time it could wear down that helpful complementing force as well, not only strengthening the substituting force but also eroding the complementing force. Again, for those interested in the intricacies of that argument, do take a look at the book; for now I just want to take the premise as given, that structural technological unemployment is a threat, and explore what it might mean and what we should do about it.

So there is an increasingly popular response at the moment, which is that we ought to try to redirect technological change. The argument is that our current path of technological progress might be leading us to a world of structural technological unemployment, where there isn't enough good work for people to do, but we can avoid that path if we want to and choose a different technological path. I think the clearest articulation of this is in the work of Daron Acemoglu, captured nicely in his recent book. He accepts that what he calls excessive automation is underway, but he argues that we can avoid that path and take a different one to the one we're currently on, if we want to. And I think the key idea here is a really valuable one, a really important one. Often when politicians and policymakers talk about technological progress, it's as if we're on a train: we can push forward on the throttle and speed up, getting more technological progress; we can pull back on the throttle and slow down, getting less; but the direction of travel is predetermined, fixed by the tracks, and all we've got to do as a society is trundle along those tracks. The spirit of the argument Acemoglu makes, the spirit of this idea of directed technological progress, is that this is wrong: a much better metaphor is not the train metaphor but a nautical one. Now the picture is that policymakers are bobbing about on the open water: they can raise their sails to speed up progress, they can lower them to slow down, but they can also steer the boat wherever they please on the sea. And it's a far more liberating
metaphor. We're not confined to the narrow task of just deciding whether we want more technological progress or less; we can change the character of technological progress, the type, and we can chart a very different course if we want to. So the question is: what direction should we be heading in? If we can redirect technological progress, what direction should we want to go in? In the simple story, technology can either substitute for workers or complement them, so there are two paths. Either we go down a path where we encourage the development of technologies that substitute for workers, and that is a path many people worry we are on, where people develop technologies that replace workers; or there's another path we can go down, and this is the path we ought to be going down, where we develop technologies that help workers, that complement them. So the big question is, how do we choose that path? If we want to steer away from a problem of structural technological unemployment, how do we encourage the development of technologies in society that complement human beings rather than substitute for them? The answer is: through taxes and subsidies, through laws and regulations, through social norms and customs. These are the tools we can use to redirect technological progress; by tweaking and changing them we can change the incentives that people face in society, and encourage them not only to develop more technologies, but to develop technologies that complement workers rather than substitute for them. Just to see one example of this: in the US, every single year since 1981, the effective tax rate that business leaders have to pay to hire workers has been far higher than the effective tax rate on using machines. The result is a very strong incentive to develop technologies that substitute for workers rather than complement them, and the argument is that this is the sort of thing we can change if we want to: we can change the incentives that exist in society and encourage the development of technologies that complement workers rather than substitute for them.
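A back-of-the-envelope illustration of how such a gap in effective tax rates can tilt the decision toward machines. The costs and rates below are invented placeholders, not the actual US figures the talk refers to.

```python
# Hypothetical numbers only: how different effective tax rates on labour and machines
# can make the more expensive (pre-tax) option the cheaper one after tax.
def after_tax_cost(pre_tax_cost, effective_tax_rate):
    """Total cost to the firm once the effective tax on that input is included."""
    return pre_tax_cost * (1 + effective_tax_rate)

worker = after_tax_cost(50_000, 0.25)    # e.g. wage plus payroll-style taxes (invented)
machine = after_tax_cost(52_000, 0.05)   # e.g. equipment lightly taxed or subsidised (invented)

print(f"worker: {worker:,.0f}  machine: {machine:,.0f}")
# worker: 62,500  machine: 54,600 -> the machine wins after tax, an incentive to substitute
```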
Now, in principle I'm pretty sympathetic with that idea; indeed my new book argues that there are broader applications for it, that we ought to think not only about redirecting AI away from harming workers and harming politics, but about how we can redirect technological progress more generally, to protect other things we might care about, whether it's the environment, the level of inequality in society, or the health of places and communities. So I start from the point of view that this idea of redirecting technological progress is quite a useful one, but I also think it has limits, and to see those limits it's worth thinking about the environmental case. Here you have, as with automation, technologies that can be harmful, in the sense that we have technologies which are dirty, which pollute, which emit carbon dioxide, but there's another path where we have technologies which are not dirty but clean, green technologies, ones that don't do as much damage to the environment. And what we have done over the last 30 or 40 years, through taxes and subsidies, through laws and regulations, through social norms and customs, is try to change the sorts of technologies that get developed in society, to encourage people to develop clean technologies rather than dirty ones. So rather than substitute and complement, here we've got dirty and clean, and we've tried to encourage people to develop clean technologies. To some extent we've had some success, but I don't think anyone could argue that we have been successful enough. If you think about the rise in global temperatures, the last eight years have been the hottest eight years on record in Earth's history, despite the fact that we have known what we need to do to redirect technological progress with respect to the climate, despite the fact that we have known what we needed to do to encourage people to develop clean technologies rather than dirty ones. Something like a carbon tax of about $100 per metric ton of carbon by 2030 is thought to get us on a path to keeping temperature rises under control, and we have failed to do so. So we now have a strategy with respect to the climate which is in part mitigation, encouraging the redirection of technological progress, encouraging the development of technologies that are clean rather than dirty, but we've also had to accept that our strategy has to involve adaptation too: there are limits to our capacity to redirect technological progress, and we're going to have to live on a warmer planet as well. I think a similar lesson applies to thinking about AI and this challenge of structural technological unemployment. Yes, we can try to shape the direction of technological progress, and I think that's a good thing to do, encouraging the development of technologies that complement rather than substitute for human workers, but I think we have to accept that there are immense political and technical difficulties in doing that, and as with what's happened with the climate over the last 40 years, we have to accept that our strategy with respect to the impact of AI on work has to be not only mitigation but also adaptation. So that's just another reason why I think we need to take the challenges of a world potentially with less work more seriously: our capacity to choose a different path, while I think it's something we should certainly try to exercise, is constrained.

So let me share some thoughts, in the last few minutes, on the problems we might face in a world with less work. I think there are three big problems we need to think through if we take seriously the prospect of a world with less work. The first is the economic problem, which is the problem of inequality. Today the labor market is the main way we share out income in society; for most people their job is their main, if not their only, source of income. So how do we share out material prosperity in society when the traditional way of doing so, paying people for the work that they do, is less effective than it might have been in the past? In my work I argue that the only way we can do this is through the state, that we need a big state. The important point here is that what I have in mind is not a big state of production; it's not teams of people sitting in central government offices trying to command
and control economic affairs from a distance; we tried that in the 20th century and it didn't work very well. What I have in mind is a big state of distribution: a state that takes a larger role in sharing out income in society, if our traditional way of doing so, paying people for the work that they do, is less effective than it was in the past. The second problem actually has little to do with economics, and it's the problem of power. In the future I think our lives are likely to become dominated by a small number of large technology companies who are responsible for developing these technologies. What's interesting about the power these companies have is that I think its nature has changed over time. In the 20th century our main preoccupation was with the economic power of large companies: we worried about things like market concentration and predatory pricing, and we had tools for identifying concentrations of economic power and intervening, where appropriate, to break them up. What I think is important about the 21st century is that we are going to be far less worried about the economic power of large companies and far more concerned with their political power: the impact they have on things like liberty and democracy and social justice, and whether those things are under threat. Just to think about those a little more: today technologies increasingly constrain our liberty. A seller can be blocked from advertising certain goods in an online market; a cryptocurrency holder can forget their wallet key and lose their fortune to the blockchain; an electric bike rider can be prevented from going above a certain speed, even if there's an emergency at hand. Technologies determine questions of social justice: algorithms decide which applicants get a job, which citizens get social housing, which borrowers receive loans, which prisoners are released on parole. And technologies also shape our democracy: search engines sort and shape what information we receive online, media platforms sift and select which conversations we take part in, social networks determine who is amplified and who is muted. I think these issues around the political power of large technology companies, in particular the impacts they have on liberty, social justice, and democracy, are going to dominate our concerns about power in the 21st century, in the same way that concerns about economic power dominated our worries about large companies in the 20th.

The third and final challenge is the challenge of meaning and purpose. There's a joke I like about a Jewish mother and her son. They're at the beach, well, the son is swimming, and he's out at sea and clearly struggling, going underwater, and the mom standing on the shore says, "help, my son the doctor is drowning." I like that because it captures quite nicely the idea that work, for many people, is not simply a source of income but also a source of meaning and direction and fulfillment and purpose. And if that's right, then the challenge of automation, the challenge of this idea of structural technological unemployment, isn't just that it might hollow out the labor market, leaving some people without an income; it might also hollow out that sense of meaning and purpose and fulfillment that people have in life too. And this
leads, I think, to some of the most radical things that I've written about, which are questions like: does the state have a role in shaping how people spend their spare time? We have designed a huge variety of interventions for shaping people's working lives, labor market policies; do we need to think in the 21st century about leisure policies, about how we shape, for good or bad, how people spend their spare time as well?

So let me finish now on a note of optimism, and I do remain optimistic; optimism is a running theme in my work. The reason is simple: in decades to come, technological progress is likely to solve the economic problem that has dominated humanity until now. If we think of the economy as a pie, the traditional challenge has been how to make that pie large enough for everyone to live on. At the turn of the first century AD, if the global economic pie had been divided up into equal slices for everyone in the world, most people would have got just a few hundred dollars; almost everyone lived on or around the poverty line. If you rolled forward a thousand years, roughly the same would have been true. But over the last 300 years, as we saw right at the start, economic growth has soared, driven by technological progress, and economic pies around the world have exploded in size. Global GDP per head today, the value of those individual slices, is already about $11,000 or $12,000. We have come very close to solving that economic problem that has plagued humankind for centuries, and this idea of technological unemployment, in a strange way, will be a symptom of that success: in the 21st century this technological progress will solve one problem, how to make the pie large enough for everyone to live on, but it will replace it with three others. The problem of inequality, how we share out this income if we can't rely upon our traditional way of doing so; the problem of power, in particular the political power of large technology companies; and third, this issue of meaning and purpose. Clearly there's going to be immense disagreement about how we should meet these challenges, about how to share out prosperity, how to constrain the political power of big tech, how to provide meaning in a world with less paid work. But these, it seems to me, in the final analysis, are far more attractive problems to have to grapple with and try to solve than the one that haunted our ancestors for centuries, which was how to make the pie large enough in the first place. So I will finish there. Thank you very much for your attention; I look forward to questions.

Thanks. Right, so that was terrific, thanks for that overview. Let me take a few minutes to talk about those last three problems you raised, and then we'll open it up. So the first one, about inequality: some work can be a pain, but it's also the way most people get income, so if you don't have that, then, well, you talked about a big state for distribution. One of the other readings was Sam Altman's "Moore's Law for Everything", I'm sure you've read it at some point, and he didn't imagine a big state; he imagined Worldcoin or some UBI that got that distributed. Maybe you can say a little bit more about how inequality might be addressed with or without a big state, and
whether or not, what your thoughts are about, a universal basic income.

Yeah. So the challenge of a world with less work is that on the one hand these technologies, as I was saying, will make us collectively more prosperous than ever before, but at the same time they undermine the traditional way in which we've shared out that prosperity, through paid work. And so the high-level challenge is: how do you move from a world where we have traditionally taxed what is valuable, the work that people do, and that's how we raised the money and then redistributed it? If we move from a world in which human capital is less valuable to one in which other types of capital are more valuable, it seems to me that we need to find a way either to tax those increasingly valuable other types of capital, whether it's the particular technologies we're talking about, or to give other people ownership in those technologies. So either we find a way to tax those valuable types of capital and redistribute it through some kind of basic income, as you say, or we give people direct ownership of that valuable capital. And I can't see how you can do either of those things without the state in some way taking a larger role in both of those tasks, and that's what leads me to this idea of the big state. It's not a big state of production, it's a big state of distribution.

A big state in terms of power, but it doesn't necessarily need rows of people like you were showing?

No, that's right.

So Social Security doesn't require much of, uh...

Exactly, exactly, that's right. On the issue of basic income, I write quite a bit about it. I can see the appeal, which is that it solves this distributional problem: how do you share out income if you can't rely upon the labor market to do it? Well, a basic income gives everybody a slice of the pie without being conditional on their status in the labor market, so I can see the appeal. But I have various problems with it. One is that it's not obvious it needs to be in the form of an income, and Sam Altman, in that piece, is a proponent of the idea that it ought not to be an income; it ought to be some kind of basic equity in these technologies. I also think there are really difficult questions around the "basic": how basic is basic? Both the left and the right, at various moments in history, have agreed in principle with the merits of a basic income, but if you look at what they mean by basic, it's very different. Those on the right who have supported it have done so because it promises a simplification of the tax system, and what they mean by basic is very little, a floor beneath which no one can fall, whereas those on the left have supported something far more ambitious, a basic that allows people to really flourish in life. So I think there are difficult conversations about "basic". My bigger issue is about the universality, this idea that it comes with no strings attached, and for many people it's the sort of defining feature of the basic income
but I think it's the most problematic feature, and the reason is this, and we were talking about it earlier: in my view, social solidarity today comes from a feeling that everybody is pulling their economic weight through the work that they do and the taxes that they pay, and if people aren't in work, there's an expectation that they ought to look for work if they're able to. The problem with a basic income is that it offends that sense of social solidarity: it means some people are taking from the collective pot without giving anything in return. So, in other words, I think a basic income does a very good job of solving these issues of distributive justice