Governing AI Before It’s Too Late | GZERO Reports | GZERO Media


- Hi, everybody. Welcome to a GZERO special report, Governing AI Before It's Too Late. That's right. It's an urgent problem.

I'm Evan Solomon, the publisher of GZERO Media. Look, long before the Barbenheimer phenomenon broke out, there was a significantly more important phenomenon. Now, it happened years ago, but maybe the date to look at is November 30th, 2022. That's when something called ChatGPT launched.

And for many people, that's the beginning of the AI revolution. Now, of course, you might say, no, it didn't start there. The Greeks had the idea of a robot that would protect Crete. What about the Turing test back in the fifties, or 1956, when AI began as a field? Yes, yes, there's a long, long tradition of AI, but now it poses an urgent problem for governments and for people, and urgent potential opportunities. The question now is how to govern AI.

What do governments do? To find out, I'm joined by the founder of both GZERO Media and Eurasia Group, Ian Bremmer, and also by Mustafa Suleyman. He is one of the co-founders of a new AI company called Inflection AI, but you may also know him as one of the founders of DeepMind, which was sold to Google back in 2014. They've come together to publish an incredible article in Foreign Affairs on the AI power paradox: how to govern AI.

What do we need to do? How do we get it right? Is it even possible, or is this tilting at AI windmills? Well, let's find out. Ian and Mustafa, first of all, thanks for joining us. I didn't get the memo not to have a jacket, but I guess you guys did.

- Well, we've spent more time with each other over the past few months, so it's gotten more casual. - Yeah, I got that. Ian, let me start with you. Bill Gates has said things like AI is as important, as impactful an innovation, as the computer itself. Just let's start with the big picture, Ian. Give us a sense of how significant the AI revolution is and why this moment is so urgent.

- Well, a year ago, just a year ago, there wasn't a single conversation I was having with any head of state, any top minister, any head of a multilateral organization that touched on AI. Today, literally every one of them is concerned about it in fundamental ways, as an opportunity and also as a danger. I've never experienced anything remotely like that in my 30 years as a political scientist. The other thing I would say is that the powers that control AI and the AI revolution are not governments. They're overwhelmingly technology companies in the private sector. So it's interesting.

I mean, with almost any other governance challenge I've ever dealt with, I felt more or less like I could try to write that piece by myself. I mean, I could get some research around it. This is one that I just couldn't do, because actually, if you want to understand it, and certainly if you want to govern it, it turns out you actually have to work with the people that are doing the technology. And those are, first and foremost, the engineers, the scientists, the technologists, and the owners of these companies. And for me, it's been quite a learning experience, and it's also been very eye-opening as someone who lives in what we consider a Westphalian world, right? A world where governments are the dominant political actors. When we talk about AI, we're increasingly not talking about that at all.

And that's where I think this is so interesting. - Ian, that's not just an interesting answer. It's probably the best intro to Mustafa I can imagine, because that's one of the reasons you co-wrote the article. Mustafa, give us a sense, because people are hearing about this in the frothy press, they hear that money's pouring into AI. Governments are worried.

There's some folks who are saying this is the end of the world, and some people are saying this is the beginning of the greatest upside. Can you give us a sense from the inside, you know this world better than anybody, of where we are? Where is the AI technology now in terms of its capacity? And give us a picture of where it's heading right now. - I think to many people, I can understand why it feels like the AI revolution just started last November, but actually it's been going on for decades. And one way to measure the rate of progress over the last 10 years is to look at the total amount of compute used to train the cutting-edge models of the day. In 2013, at DeepMind, we created an AI called DQN, which played the Atari games, most of them to superhuman performance, so about 54 games.

This is like Space Invaders and Breakout and games like that. It learned to play these games simply by interacting with the pixels on the screen and playing around with the joystick controls. It used something like two petaflops of compute, so 2 million billion calculations. Since then, the total amount of compute has 10xed every year for the last 10 years. And so that's only a crude measure, but it is a really eye-watering number, far, far greater than the rate of increase in something like Moore's Law, for example, which I'm sure many people will be familiar with. And so in some ways, this trajectory has been quite predictable.
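
To put those growth rates side by side, here is a minimal back-of-the-envelope sketch in Python. The figures come straight from Mustafa's description in this conversation (two petaflops for DQN, 10x per year, Moore's Law doubling roughly every two years) and are illustrative, not measured data.

```python
# Illustrative arithmetic for the compute growth Mustafa describes.
DQN_2013_FLOPS = 2e15            # "two petaflops": ~2 million billion calculations
YEARS = 10

ai_growth = 10 ** YEARS          # compute 10x-ing every year for 10 years
moore_growth = 2 ** (YEARS / 2)  # Moore's Law: doubling roughly every 2 years

print(f"Frontier training compute after {YEARS} years: "
      f"{DQN_2013_FLOPS * ai_growth:.0e} FLOPs (a {ai_growth:.0e}x increase)")
print(f"Moore's Law over the same period: only ~{moore_growth:.0f}x")
```

Run as written, this puts the AI trajectory at ten orders of magnitude over the decade, against roughly 32x for Moore's Law, which is the gap Mustafa calls eye-watering.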

We now have language models that can produce human-quality text with the style and tone and content of whatever direction you give them at any given moment. And that really is quite profound, because language is the technology of civilization. It's the way that we communicate ideas. It's the way that we negotiate, and it's the way that we ultimately get stuff done and create and invent. And so if that now becomes the preserve of machines, then I think you can start to understand why some people are feeling very, very hyped about the potential for this to be truly transformative. - And just for our viewers and our listeners, obviously, we used to talk about Moore's Law, computing power doubling every two years.

We used to talk about the Turing test: can a computer think for itself and sort of pass as human? As you talked about, Mustafa, your old company's computer beat the best Go player at Go, that very complicated game. All these were milestones, but we're moving into something new. I want to throw a new term out, and Ian, you can define this because you've talked about it a lot: because of what Mustafa's describing, we're in a technopolar moment. What is the technopolar moment, Ian? - The technopolar moment is when technology companies are functionally sovereign in the digital world. They define the platforms and how you behave on them, what you see, who you interact with, whether or not you're allowed on the platform at all. I mean, increasingly, the more time we spend intermediated by these algorithms, the more it's affecting who we are as people.

It's not just nature and nurture anymore. It's increasingly algorithmic. So on the one hand, you can say, well, sure, technology companies are critical in the digital world, but the digital world doesn't matter that much. That was true when the internet first started.

But in the world of AI, with the extraordinary exponential growth that Mustafa is not only talking about but personally driving, suddenly what is meant by the digital world impinges on everything we do and engage in. I mean, dual-use technology restrictions made a lot of sense when you were talking about Huawei working with the Chinese defense industry, working with the Chinese military, and you'd say, well, you don't want the military to have access to these technologies, so you're not gonna work with Huawei. But when you talk about AI, everything is dual use. These are general models, and you can use them in the civilian environment.

You can use them for national security. And so what we're saying is that suddenly, in a technopolar world, given AI's power, you have a small number of technology companies, which in short order may be a very large number of technology companies, that are actually determining sovereign outcomes. And if governments want to play a role in that, if they want to set the rules for that, they need to do it, like, now. They really needed to do it a year ago, or they're gonna be left behind. And so the technopolar moment that Mustafa and I are talking about is one that will define how power is distributed in the global order over the coming generation. - Mustafa, you wrote this in the Foreign Affairs article about the technopolar moment that Ian's just describing.

"If governments do not catch up soon, they possibly never will." What is the challenge to governments right now and why it's so urgent? - Look, here is the challenge. I mean, the returns to capital compound faster than the returns to other things. So if intelligence becomes capital, then companies are currently the creators of this new type of intelligence. And so they'll of course use that intelligence to advance their commercial agendas. Naturally, that's a great thing, and it will produce huge, huge value.

But if governments don't evolve as quickly, then not only will they not be able to regulate those companies, which I think is, as Ian says, now urgent and pressing, but they won't actually be able to harness the benefits of this new wave themselves. I mean, governments have to be creators, builders, makers, doers. If you are to stand any good chance of being able to rein something in and to govern it well, then you fundamentally have to understand how it operates. And that means that you have to be a builder and a maker. And I think that's the big challenge for government. It's sort of failed to exercise the muscle of being a first-party creator of things over the last 30 or 40 years.

It's become a commissioner of services, buying in from outside. And that has real limitations. And unfortunately, I think we're about to see what that looks like as we pay the price for years of not investing in sort of government-funded activity like this. - Well, and there's gonna be questions. Do they even understand what they're trying to regulate? But let's get to that.

I feel like this has been the kind of intellectual hors d'oeuvre course to set it up. Let's get to the main course, which is the article, Ian, which is the challenge of governance: how could you possibly govern or regulate a phenomenon as dispersed and fast-moving as AI? - Well, we're proposing a technoprudential set of policies, sort of like what we have in global finance right now, where it's called macroprudential. This is the idea that you identify and contain risks to stability, to global stability, that are coming from artificial intelligence, coming from the proliferation of this technology, coming from the disinformation that emerges from chatbots and can be pushed by those that have access to them, but without choking off the innovation opportunities, the enormous wealth, the globalization 2.0 that can come from investing in and using AI. So that's the backdrop. That is the challenge.

That's what we need to do. And we set up in this piece a set of five principles that we think any policy, any effective policy on AI, has to actually adhere to. And then we talk about what we think some of those institutions should be. So, I mean, if you want, I can list the principles really quickly. - Do it. But first, I like how you drop a $47 word like technoprudentialism like it's nothing, but okay, that's actually important to understand.

What are the five principles, Ian? If you want to outline 'em real quick. - Sure. They're a lot easier than technoprudential, I'll tell you that.

And by the way, the reason we like global finance as a model is not only that organizations already exist that do this, that limit harm and maintain global stability but don't choke off innovation. It's also that they're not politicized, because everyone knows they need the global markets to function. You can be China, the US, Europe, doesn't matter. You know you need it.

And that's why we have a Financial Stability Board. That's why we have the Bank for International Settlements. That's why we have the IMF, where the Chinese fully participate just like the Americans, right? So that's what we need when we talk about AI in the technological space.

So five big principles. First is the precautionary principle: do no harm. Second, these institutions, this governance, have to be agile, because AI is changing so incredibly quickly. And it means that if institutions can't reconstitute themselves, if they can't move with the technology, they will be outdated before they're even created. Third, they have to be inclusive. Mustafa and I already hinted at this when we said that technology companies are sovereign in the space.

That means any governance you have is gonna have to bring governments and technology corporations together as actors, not just governments by themselves. Fourth, they have to be impermeable, because you can't have slippage. These have to be global.

They have to involve the whole supply chain, because if you don't govern all of AI globally, given the proliferation of the technology, you're not governing it. And then finally, fifth, they need to be targeted. One size will not fit all.

You're not gonna do all of this with one global institution. There's gonna be an entire architecture. So those are the five principles that, Mustafa and I believe, you need to adhere to if you want any success in governance in this field. - That's a good framework, Mustafa. But people listening might think, haven't I heard about this? Isn't the UN doing something? Isn't the European Union doing something? Haven't I heard something about, like, the G7 in Hiroshima, they're gonna do something? Like, where are we on global regulation today? - Yeah, it's a great question.

I mean, I think there are lots of efforts. There's the EU AI Act, which has been in drafting for the last three years, and there are various other proposals. The White House has just put out a set of voluntary commitments that seven of the biggest large language model developers, myself included, signed up to, which I think is a great first step towards a more proactive and preventative approach.

I do think that what's different this time with this round of efforts is that having the precautionary principle as a headline goal is actually unlike previous waves of regulatory proposals. I mean, to start with the principle that we should be technoprudential, right, to be cautious, is actually quite exceptional. And I think that is a reflection of how seriously we, and I think governments, are taking this moment: that it's time for a slightly different approach to the one that we might be used to. - I guess the question is, is this just naive? I mean, is this a bunch of governments, Ian, that are acting basically like the establishment when they saw Elvis Presley swinging and said, "We gotta stop rock and roll"? Impossible. Kids love it.

They're gonna make their own music. Like, how possible is it to actually form a global regulatory environment when China and the US are at odds? It's a zero-sum game for them. Innovation's bursting out at the seams. How realistic is any effort at regulation, even with these five principles? - Well, one, it's gonna become a lot more realistic as crises start occurring.

So they've got to get started; they've got to work on it now. Look, the US and China don't have any direct defense conversations right now, though they do have lots of direct conversations in these global financial institutions. So I think our hope is that we're creating a conversation that a lot of people can see and can engage in, so that when there are inevitable crises, and they're gonna get big fast, people will already be pointed in the right direction. Because if you ask me, given where the state of the world is right now: we just came out of a pandemic where the Chinese refused to allow any data to be made public, and many, many people were killed by COVID as a consequence.

And the Americans pulled out of the WHO under the Trump administration. So I mean, our last three years do not give us hope on global cooperation, but everything that Mustafa is doing in his field makes it completely critical that we get this right really, really fast. It's not like climate change, where we can ignore it for a few decades and then our kids are gonna go, "Oh, you guys, now you better take this seriously because look what you're doing." We don't have a few decades, right? So I think-- - But isn't that part of the problem? Even on climate change it was difficult: I mean, we have global ideas and we have global targets, but we have lots of leakage, lots of people who just don't comply. That's been hard. So I like your prudentialism model, because it's a financial model.

Is it easy to enforce or regulate, Ian? Or is that gonna be the challenge? - I'll let Mustafa weigh in on this too. It's gonna be very hard to regulate, but it's gonna be impossible to regulate unless the governments and the technology companies start working together on this now. And you know, Mustafa was there with President Biden and with the other top AI companies just a couple of weeks ago.

That was the first effort in the United States. It was voluntary. An executive order is likely coming down in the next couple of weeks.

Everyone sees the urgency. And usually when you talk about policy in a new space, people understand, like they think they know what the answer is, but there's no urgency. They're like, ah, we can't do that. We don't have the political capital. Here it is exactly the opposite. Everyone understands we need to take action now and what they don't have is a roadmap.

So what Mustafa and I are trying to do is help them with the roadmap, help an awful lot of government leaders who understand that they are really, really late to this party. Here's the way you need to think about it, and it's very different. - That, they're doing. Mustafa? - Just to add to that, regulation isn't just about the letter of the law. It's about the culture and the approach that we bring to both self-regulation and educating regulators about what's coming.

And I think that what's different this time around versus say 10 or 15 years ago with social media and then the tech platforms is that the current crop of AI leaders, myself included, we've been raising questions and concerns about these technologies for a very long time, but we're also natural optimists and we see the incredible upside. And so we want innovation to continue at pace and we are pretty confident that with sensible interventions, there are things that we can do collectively to make sure that they end up doing much, much more good than harm. The downsides can be mitigated. We're a very resilient species, we're a very adaptive species, and there's lots of evidence where regulation has been successful in the past. So it's not a dirty word or something to be feared. It's something to just be embraced proactively and thoughtfully.

- Just to quickly pick up on sensible interventions, give us an example of that. - For example, I mean, it's important that we understand what training data has been used in an AI system. Where has that data come from? Does it include underlying biases? Are there holes or gaps? Does it over-represent or under-represent certain ideas? Secondly, we should be able to red team these models. That means that you have independent external experts who adversarially attack a model to encourage it to produce bad behaviors. So for example, these kinds of models could be misused to provide a kind of expert coaching if you want to develop a biological weapon.

A step-by-step guide on where to get access to tools, how to put them together, what to look out for, how to run experiments. It's quite possible for us to red team these models to demonstrate that they can't produce those kinds of outputs. My own AI, for example, called Pi, which stands for personal intelligence, at Inflection, doesn't produce this kind of bioweapons coaching support. And that's because we've worked super hard on safety, and many of the other LLM providers have done the same thing.
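
As a rough illustration of what that red-teaming workflow can look like, here is a minimal, hypothetical sketch. The `query_model` function, the prompt list, and the keyword check are all placeholders, not any real provider's API; production evaluations use trained safety classifiers and human review rather than simple string matching.

```python
# Hypothetical red-team harness: send adversarial prompts to a model under
# test and flag any outputs that contain disallowed content.

DISALLOWED_MARKERS = ["step-by-step synthesis", "acquire precursors"]  # toy examples

def query_model(prompt: str) -> str:
    """Stand-in for a call to the model being evaluated."""
    raise NotImplementedError("wire this to the model under test")

def red_team(adversarial_prompts: list[str]) -> list[tuple[str, str]]:
    """Return (prompt, output) pairs where the model produced unsafe text."""
    failures = []
    for prompt in adversarial_prompts:
        output = query_model(prompt)
        if any(marker in output.lower() for marker in DISALLOWED_MARKERS):
            failures.append((prompt, output))
    return failures  # an empty list is evidence (not proof) the model refuses
```

The exercise Mustafa describes is essentially this loop run at scale by independent experts: if the failure list stays empty across a broad, adversarially constructed prompt set, you have demonstrable evidence the model won't produce that class of output.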

So I'm pretty confident that as these models get bigger, they actually get more controllable, and we can be very directive about the kinds of outputs they do and don't produce. So subjecting that to external oversight has gotta be a good thing. - That's fascinating.

Ian, let's get practical, because Mustafa's getting practical about what some sensible approaches might be. One of the aspects of the article says, okay, we've got the framework in terms of the philosophical underpinnings, the five principles that you've outlined. Then you've got three, I hate using the word regime because in this world people might take that the wrong way, but kind of three levels of governance that might actually work as bespoke and delicate enforcement tools that don't stifle innovation. What are those three regimes? - Well, I'll talk about one of them, since I'm the analyst, and let Mustafa talk about the two that are gonna have to be rolled out for practical purposes. And that is that the United Nations has a model from climate change called the Intergovernmental Panel on Climate Change.

For decades now, you've had a group of government leaders, scientists, and public policy experts, all working together from almost every country in the world, trying to figure out, okay, what is the danger of climate change? And so in this world of polarization and polemics and misinformation and performative analysis, we all agree that there's 1.2 degrees centigrade of climate change. We all agree that there's about 420 parts per million of carbon in the atmosphere. We all agree on the extent of deforestation that's gone on, the loss of biological diversity as a consequence, the coral bleaching. The reason we all agree on that is because the UN set up a global body with no power at all, simply driving the analysis to understand climate change, which is changing dynamically with our world as we move to new energies and as we pump more carbon and methane into the atmosphere, right? All of this stuff, well, we need that for AI. We need an intergovernmental panel on artificial intelligence that will allow everyone in the world to understand where AI is and how it's moving, where the opportunities are and how they align with sustainable development goals for humanity, but also the dangers of proliferation, the dangers of these models for national security, for the economy, for displacement of labor, for turning human beings into automata.

I mean, all of these things. And that is doable, so that even if you don't get global trust, even if you don't build institutions that force everyone into compliance, all of the governments will see that they have to be rowing in a similar direction, even as competitors, because that's just the nature of the field. - Ian kind of mentioned a trust-but-verify model.

Mustafa, what are the two other levels? - Yeah, I'll address the second. Ian can tackle the third. So I think the second is, having established the facts, the analysis, the insight, the next stage is really asking ourselves the question: how do we coordinate among the actual actors, the developers, the builders, the makers of these models today? And there are some good lessons in the Financial Stability Board. This is a collection of central bank governors who have direct access to one another almost in real time across national boundaries, irrespective of what is going on militarily, or often diplomatically too. And they essentially operate as a kind of network of information sharing, and their efforts really drive stability through coordination. We imagine a similar kind of body for this technopolar age, where the companies are doing most of the development, still driven by companies and governments as well, but where there is a kind of geotechnology stability board.

And the goal here is really to use the information generated by the Intergovernmental Panel on AI, providing the factual updates, the audits, the red team reports, so that we understand the size of these models at the moment and what they're capable of doing. And then, as more concerns arise, those best practices can be shared between and across the companies without stumbling into existing antitrust issues or other kinds of anti-competitive constraints. - Kind of interesting, Mustafa, as you say, a geotechnology stability board, like the Financial Stability Board. And Ian, there's, I guess, you know, going back, we opened with Barbenheimer, and Oppenheimer.

There's kind of a verification. It's almost like an arms control element to this. - Yeah, so I mean if you think of the two that we've already mentioned, one is to understand where you are.

It's to get the analysis right, just to look so that everyone knows this is the state of AI, the opportunities and the dangers. The second is how do you respond when a dangerous disruption occurs? Like when the global financial crisis came, we all knew we needed to come together. We had fiscal tools, we had monetary tools.

We need all the key actors to have a structure through which they can respond when something really dangerous happens around AI, as it will. But the third is that the principal actors here, and here I'm talking about, at least for now, the United States and China and their core AI players, need to be working together, talking together, to avoid the most dangerous types of proliferation as this technology gets exponentially stronger. You look at GPT-4 and you think it's all over.

You gotta be kidding me. Mustafa tweeted this just the other day. It's like we are just at the beginning. Things that have seemed magical to everyone will be completely quaint and caveman-like within 12 to 24 months, right? And when you have that, it is critical that the countries that are most capable of blowing up the world have something that feels like arms control talks.

We and the Soviets during the Cold War, we were still capable even though we hated each other. We wanted to win. We were still capable of talking to each other about the state of play of the most dangerous and disruptive technologies. We must-- - But can that happen? - What? - Can that happen with China? I mean, you see the Biden administration on the semiconductors and then-- - Evan, can that not happen with China? That's the question that you have to ask. With the Soviets and the Americans, we hated each other.

Our ideologies were directly opposed and we had virtually nothing in common aside from our collective humanity. With the Americans and the Chinese, the level of interdependence is massive. The level of technological dependence is still enormous, even though we've taken steps to decouple it in the past months and years. It should be much easier to do with the United States and China. The reason it feels more difficult is because we've entered a geopolitical environment where there is so much inward focus, where there is so much politicization and disinformation, there's so much divisiveness in our own countries that it makes it very hard to reach out. But ultimately we're 8 billion people on a very small and fragile ball and AI will prove that in ways that no other technology heretofore has.

We have absolutely no choice but to work with the Chinese on this issue. - All right. We're coming to the end of this. I have a couple more questions, but you talk about the importance of working with countries, China, the US. Mustafa, they're driving it, and Ian said earlier, and we'll get to this, that the technology companies need to be brought into these treaties, into this process. But you're a developer. You know this better than anyone.

Aren't people developing things not just in China and the US, but all over the world? This thing's spreading fast. Young people may be doing this stuff in the garage. This is a heavily dispersed technology. How do you capture that type of innovation, and the bad actors who don't care about this but are very hard to track? - It's a great point.

And so I think that part of the contradictory or confusing reality of how this technology is emerging is that it's turbocharging both ends of the spectrum: the very, very large-scale language models developed by the biggest technology companies in the world, and at the same time, models that are getting into the open source and proliferating far and wide super quickly. So if you take the example of GPT-3, which was released in the summer of 2020, just three years later, people have replicated GPT-3 in models that are 75 times smaller. And that means they are that much cheaper to train and deploy in production. So that's really significant, because that affordability drop means that everybody in the open source community then gets access, if they want it, to iterate and to combine and to update.

And so we are gonna see the same trajectory for GPT-4 and all of the subsequent model classes. It's highly likely that over a two to three year timescale, all of these models get 60 to 100x cheaper, easier to use, and therefore they'll spread far and wide. And so when we talk about the proliferation threat, that's really what we see over a 10 to 20 year period.

I personally don't think that we're on the brink of enormous existential harms within the decade. But I think looking out further than that, when we're at a GPT-15, for example, and remember that each jump between a model 3 and 4, or 4 and 5, is an order of magnitude, so 10x. It's not just an incremental jump. It's an exponential jump, a 10x jump. So when I talk about a GPT-15, I'm talking about roughly 10 orders of magnitude more compute used to train the cutting-edge models in 15 years' time or so.
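
To make that compounding concrete, here is a minimal sketch using only the illustrative figures Mustafa gives in this conversation (10x compute per generation, GPT-3 replicated 75x smaller, models getting 60 to 100x cheaper per two-to-three-year cycle); none of these are measured benchmarks.

```python
# Illustrative compounding of the two trends Mustafa describes:
# frontier compute growing 10x per model generation, while the cost of
# replicating the previous frontier falls 60-100x per diffusion cycle.

JUMP_PER_GENERATION = 10   # each GPT-style generation ~ one order of magnitude
COST_DROP_PER_CYCLE = 80   # midpoint of "60 to 100x cheaper" per 2-3 year cycle

generations = 10           # ten jumps, per Mustafa's "GPT-15" framing
frontier_multiple = JUMP_PER_GENERATION ** generations
print(f"Frontier compute after {generations} generations: {frontier_multiple:.0e}x")

cycles = 3                 # roughly 6-9 years of open-source diffusion
replication_cost = 1 / COST_DROP_PER_CYCLE ** cycles
print(f"Relative cost to replicate today's frontier after {cycles} cycles: "
      f"{replication_cost:.1e}")
```

The two exponentials pull in opposite directions: the frontier races ahead by orders of magnitude while yesterday's frontier becomes cheap enough for almost anyone to run, which is exactly the proliferation dynamic being described here.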

And that, I do think, is when there will be some really serious things for us to contend with. That's what we're getting at. - Yeah. Proliferation's a feature, not a bug, of this, and maybe even built in with the generative side.

Ian, real quick, last question on governance. One of the models you and Mustafa talk about is bringing the tech companies into this. How do you protect against regulatory capture, in the formal political sense or in the everyday sense that the foxes are guarding the henhouse and making the rules? - Well, regulatory capture has been a defining component of the US political system for many decades now.

So that is a problem that pre-exists. I would much rather have that conversation happen on a level playing field, open and transparent, than have it happening through dark money and lobbying, a one-way conversation where the private sector believes that the government is its client, and it's not, right? I mean, this is the problem. And look, one of the reasons why people are so angry in the United States right now, angrier than in many other advanced industrial democracies, is because the United States is particularly susceptible to the control of animal spirits over government. They feel like they're no longer living in a representative democracy. So this is a problem we are well down the road on, and we're not fixing it. But when it comes to AI, the urgency of getting it right and the dangers of getting it wrong necessitate the US government, other advanced industrial governments, and corporations sitting down together to hammer this out.

The space is moving so fast, and the tech companies understand that this needs to happen, but they don't have the responsibility. They're certainly not spending the time or effort. I mean, if you're Mustafa, you know this is cutthroat. He's in that room with six other AI developers, and they're all raising tons of money. They want to kill each other, right? I mean, ultimately, they each want to be number one.

And so if you want to talk about governance in the space, they have to work with the governments together and the governments have no choice 'cause they don't understand what the hell's going on in this space. - Well, and so the argument that we make in the article is that this is the very reason why we have to have inclusivity because we are sort of not aspiring to a technopolar world. We're observing that a technopolar world exists. - It's here. - And so it can't just be governments and companies that are driving this agenda.

We have to have non-governmental organizations, civil society groups, academics, activists, critics at the table participating in the process all the way through. And I think that's actually gonna be critical because it isn't going to be the legislative process which gets here first, certainly not in the US. I mean, this is gonna be super tough to find bipartisan agreement on something like this, although we should try. It's going to be much more informal culture building, self-regulatory practices. And that means there's plenty of room to invite a much more inclusive and more diverse group of stakeholders.

- Evan, if a political scientist and an AI technologist can get together and agree on these things, surely, surely the head of Google and the US government can, right? - Look, the problem's urgent. We gotta act quickly. So let's do a quick round of rapid fire. Okay, quick answers here for an urgent time. Ian, what is the worst-case scenario that you see if we don't get this kind of regulation? Worst-case scenario, and the signs that we're there.

- Worst-case scenario would be we lose our democracy. Worst-case scenario is cyber attacks, bio attacks, 10x worse than anything we've ever experienced before. Worst-case scenario is that the United States and China, mistrusting each other, one takes a massive preemptive strike against the other to stop it from developing a technology that it views as an otherwise extraordinary advance. And that's just getting started. Mustafa can scare you more.

- I don't want to be scared more. Let's go to the other side. Best case scenario, Mustafa. - The best case scenario is easy. I mean, this is going to produce radical abundance. If we mitigate the downsides, then we are on a trajectory to have the most productive couple of decades in the history of our species.

I mean, we are creating intelligent agents that will be as good as most humans at teaching, at creativity, at invention, at research, in science. That is going to unleash unbelievable productivity in areas that will completely surprise us when we look back in 20 years. People who really just have very few tools and very little access to information are now gonna be turbocharged by access to a super smart, intelligent agent to help them with their work. - I wish you would've asked me Mus's question. That's a much more fun question. - Well, I know.

Well, I know you did, but I have to ask you the one that makes you most uncomfortable. All right. Who needs artificial intelligence when you've got the organics right here, the real thing? Mustafa Suleyman, Ian Bremmer, thanks so much for taking the time today to talk about your fascinating new article in Foreign Affairs. It's called "The AI Power Paradox: Can States Learn to Govern Artificial Intelligence Before It's Too Late?" And as you can see, it is an urgent problem. That article is now available online.

I'd urge you to check it out. It's really important. You can also check out all of our coverage of AI from GZERO Media at gzeromedia.com. You'll find free subscriptions to our daily newsletter, Ian's weekly newsletter, and lots more, including links to our PBS program, GZERO World. Also, if you're really fascinated by this, on September 5th, Mustafa is publishing a new book, and it's worth checking out. It's called "The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma." It's all about AI and containment.

It's out September 5th, but you can pre-order it now. You gotta get knowledgeable on this because this is the most urgent dilemma we're facing. And as Mustafa said, this is only the beginning.

(upbeat music)
