Will artificial intelligence save us or kill us? | Us & Them | DW Documentary

There is a significant risk of human extinction from advanced AI systems. Japan now faces a very severe aging problem. I would like to solve these problems. We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them.

I love these technologies. I would like to create these kinds of technologies to contribute to humans and human society. Even the brain's nerve systems can be connected to cyberspace. This is a rapidly evolving technology. People do not know how it currently works, much less how future systems will work. We don't really have ways yet to make sure that what's being developed is going to be safe.

These AIs recognize that human beings are also one of the important living things. This very small group of people is developing really powerful technologies we know very little about. People's concerns about generative AI wiping out humanity stem from a fear that, if left unchecked, AI could potentially develop advanced capabilities and make decisions that are harmful to humans. As the world grapples with the implications of this rapidly evolving field, one thing is certain: the impact of AI on humanity will be profound. With new AI technologies you can realize a fusion between the human side and the technology side. This is one of the world's first wearable cyborgs.

Cyberdyne is trying to create very innovative Cybernics technologies, especially focusing on the medical and health care fields, for humans and human society. My name is Yoshiyuki Sankai. I'm a professor at the University of Tsukuba in Japan and also the CEO of Cyberdyne. Let's create bright futures for humans and human societies with these kinds of AI systems. I personally want to have an impact on making the world better, and working on AI safety certainly seems like one of the best ways to do that right now. Many public intellectuals, professors, and scientists across industry and academia recognize that there is a significant risk of human extinction from advanced AI systems.

In recent years we've seen rapid advancements in making AI systems more powerful, bigger, more generally competent, and able to do complex reasoning, and yet we don't have comparable progress in safety guardrails, monitoring, evaluations, or ways to know that these powerful systems are going to be safe. My name is Gabriel Mukobi; you can call me Gabe. I'm a grad student at Stanford, I do AI safety research, and I lead Stanford AI Alignment.

This is our student group and research community focused on mitigating the risks of advanced AI systems. Well, like mitigating weapons of mass destruction, you know? That's a good thing. Everyone can get behind that. These more catastrophic risks unfortunately do seem pretty likely. Many leading scientists tend to put single- or sometimes double-digit chances on existential risk from advanced AI. Other possible worst cases could include not extinction events but other very bad outcomes, like locking in totalitarian states or disempowering many people, concentrating power so that many people do not get a say in how AI will shape and potentially transform our society.

AI has become such a divisive topic. There are a lot of valid concerns. Some believe it could lead to job losses, increased inequality, and even unethical uses of AI. However, AI also has tremendous potential to benefit humanity. It could help us tackle some of the world's biggest problems, such as climate change, disease, and poverty. HAL detects the human's very important intention signals on their way from the brain to the periphery.

If the human wishes to move, the brain generates intention signals. These intention signals are transmitted through the spinal cord and motor nerves to the muscle. Then finally, we can move. HAL systems and humans always work together. Twenty countries now use these devices as medical devices. I think there are definitely great ways AI technology is used in medicine.

For example, there's cancer detection that's possible because of image recognition systems using AI, which allows for, you know, detection without invasive tests, which is really fantastic, and early detection as well. No technology is inherently good or evil. It's only humans who are doing this. Of course, we should be thinking about long-term impact in terms of the direction in which we're taking the technology.

But at the same time, we also need to think about it less in a technical sense and more in terms of it impacting real-life humans today. Japan, I think, is quite optimistic about AI technology. There's a lot of hype at the moment. It's like a shiny new toy that everybody wants to play with. Whenever I go to the U.S. or Australia or EU countries, there's far more of a knee-jerk kind of fear or concern.

I was quite surprised, to be honest. Our meetings are every Wednesday, so there's usually some guest we bring in or some other SAIA researcher who presents. Then we have boba tea afterwards. Yeah, it's a good deal. Kind of like a research lab. Do you happen to have an HDMI-to-USB-C adapter? Something to plug in? Oh, you did plug in! Never mind... sorry, I'm hallucinating.

And I'll pass it off to our speaker, Dan Hendrycks. The Wednesday meetings are really good for inviting new people to. It's nice to meet some new students and talk about why you're interested in AI safety or not. So if you're wanting to synthesize smallpox, or a chemical agent like mustard gas, you can do that. Access is already high and it will just keep increasing over time.

But there's still an issue of needing skills. So basically you need something like a, you know, top PhD in virology to create a new pandemic that could take down civilization. There ARE some sequences online, which I won't disclose, that could kill millions of people and that are actually more dangerous. Yes? So with the access thing, a lot of people bring up labs: oh, maybe you don't just need to be a top PhD, you also need some kind of bio lab to do experiments.

Is that still a thing? So this would, it depends on, like, how good the cookbook is, for instance. Excuse me. Certainly there are people who come in with disagreements; they're like, oh, powerful AI isn't coming for a long time, or it doesn't seem important to work on these things, let's just build an accelerator or whatever. There's a large potential, especially for people doing engineered pandemics, to cause a wide range of harm in the coming years. Now there are other instances of catastrophic misuse that people are expecting, too.

One is with cyber attacks. We might have AI systems in the coming years that are really good at programming but also really good at exploiting zero-day vulnerabilities, exploiting software vulnerabilities in secure systems. Maybe the top use case of AI will be making money. You might see a lot of people being defrauded of money, you might see a lot of attacks on public infrastructure, threats against individuals in order to extort them.

It could be a Wild West of digital cyber attacks in the coming years. Beyond that, though, there is a pretty big risk that AI systems could actually get out of the control of their developers. We don't currently know how to steer these systems, how to make sure that they robustly understand human values in the first place, or even follow them once they do understand them. They might learn to value things that are not exactly aligned with what we want as humans, like keeping Earth suitable for life or making people happy.

I was fortunate to have a very supportive family. Especially a few years ago, AI safety was a lot less mainstream, so there was always some uncertainty of, hey, is this actually going to be something that's helpful in the first place? Are you going to have a stable job? Things like this. But as time has gone on, as we've seen a lot more capabilities advancements and a lot more people raising the alarm about AI safety and AI risk, it tends to be that every few days my mom will send me something like, hey, have you seen this new thing? Unfortunately, a lot of experts think there's a pretty significant chance of some of the worst-case risks.

Many scientists put single- or double-digit chances on existential risk from advanced AI. There's a recent interview where the US FTC chair said that she's an optimist, so she puts a 15% chance on AI killing everyone. Hmmm... My vision is a little bit different. We have created these AI systems.

This is one of the newly created species, I think. Generative AI systems are different from simple programming systems; they have functions for growing up. These AIs recognize that human beings are also one of the important living things, like one of the animals. And because humans are also living things, they recognize the importance of humans. They try to preserve our societies, our cultures, and our circumstances. We human beings have some problems: aging, disease, accidents.

AI systems, or technologies built with AI, will support some of these functions. Japan now faces a very severe aging problem. The average age of workers in the agricultural fields is now almost 70 years old. Wow! Two, three. I would like to solve this aging society's problems. In my childhood, my mother bought me a microscope and some electrical parts.

Every day I spent a lot of time on those kinds of experiments and challenges. I loved to read science fiction books, like "I, Robot," written by Isaac Asimov. If you've heard about AI in the last couple of years, chances are the technology you heard about was developed here. The breakthroughs behind it happened here.

The money behind it came from here. The people behind it live here. It's really all been centered here in the Bay Area. A lot of the startups that are at the leading edge of AI, so that's OpenAI, Anthropic, Inflection, names you might not yet be familiar with, are backed by some of the big companies you already know that are at the top of the stock market: Microsoft, Amazon, Meta, Google. And, you know, these companies are based here, many of them in the Bay Area.

So for all of the discussion that we've seen about AI policy, there's actually very little that tech companies have to do. A lot of it is just voluntary. So what we are really depending on as guardrails is the benevolence of the companies themselves. Gabe, I think, is an example of a lot of the young people who are coming to the movement now who are not ideological, who are really interested in the technology, who are aware of its potential harms and see this as the most important thing that they could do with their time, their opportunity to work on what many of them call, like, the Manhattan Project of their generation.

You have to realize that unlike some other very general technologies that have been developed in the past, AI, especially the frontier systems, is mostly being pushed by a small group of scientists in San Francisco. And this very small group of people is developing really powerful technologies we know very little about. Some of this maybe comes from a lot of historical techno-optimism, especially in the startup landscape in the Bay Area.

A lot of people are kind of used to this "move fast and break things" paradigm that sometimes ends up making things go well. But if you're developing a technology that affects society, you don't want to move so fast that you actually break society. PauseAI wants a global and indefinite pause on the development of frontier artificial general intelligence. So we're putting up posters so that people can get more information. You know, the AI issue is complicated.

A lot of the public does not understand it. A lot of the government does not understand it. You know, it's really hard to keep up with the developments.

Another interesting thing is that most of us working on this have no experience in activism. What we have mostly is, like, technical knowledge and familiarity with AI that makes us concerned. AI safety is still very much in the minority. And then actually a lot of the biggest AI safety names are working at AI labs, you know.

I think some of them do great work but they're still much more under the influence of the broader, you know, corporation that's driving toward development. I think that's a problem. I think that somebody from the outside ought to be telling them what they need to do.

And unfortunately, the case with AI now is that, like, there aren't external regulatory bodies that are really up to the task of regulating AIs. From the same mouth, you're hearing "this thing could kill us all" and "I am going to keep building it." I think part of the reason you have so much resistance to the AI safety movement is because of the dissonance between people who talk about their genuine fear of the consequences and the risks to humanity if they build this AI god. So much of the debate around here has these really religious undertones.

That's part of why they say that it can't be stopped and shouldn't be stopped. It really feels like, you know, and they talk about it in that way, like "I'm building a god," and they're building it in their own image, right? I love humans and human society, and I love science fiction. I would like to create these kinds of technologies to contribute to humans and human society. I love to read science fiction books, and I also love to watch science fiction movies.

The "Terminator" movies are among them, yes. But unfortunately, in some movies from the US or Europe, in most cases the technologies always attack the humans. In the real world, technologies should work for humans and human society. In the movie "The Terminator," a classic movie, Cyberdyne is a fictional tech company that created the software for the Skynet system, the AI system that becomes self-aware and goes rogue. Cyberdyne's role in the story is to represent the dangers of AI getting out of control and to serve as a cautionary tale for the real world.

Is Cyberdyne named after the firm in Terminator? No. In the Terminator stories, that company's name is Cyberdyne Systems. Obviously, at some literal level, maybe you can unplug some advanced AI systems. And there are definitely a lot of hopes; people are actively trying to make that possible.

Some of the regulation now is focused on making sure that data centers have good off switches, because currently a lot of them don't. In general, this might be tougher than people realize in the future. We might be in a state in the future where we have pretty advanced AI systems widely distributed throughout the economy, throughout people's livelihoods. Many people might even be in relationships with AI systems, and it could be really hard to convince people that it's okay to unplug some widely distributed system like that.

There are also risks of having a military arms race around developing autonomous AI systems where we might have many large nations developing wide stockpiles of autonomous weapons. And if things go bad, just like in the nuclear case where you could have this really big flash war that destroys a lot of the world, you might have a bad case where very large stockpiles of autonomous weapons suddenly end up killing a lot of people from very small triggers. Probably a lot of catastrophic misuse will involve humans in the loop in the coming years. They could involve using very persuasive AI systems to convince people to do things that they otherwise would not do.

They could involve extortion or cyber crimes or other ways of compelling people to do work. Unfortunately, probably a lot of the current ways that people are able to manipulate other people into doing bad things might also work with people using AI, or with AI itself manipulating people to do bad things. Like blackmail? Like blackmail. Yeah. Another important thing is: Homo sapiens turned the fearsome wolf into the pretty dog.

Homo sapiens of course has a similarly excellent brain, and technologies, and partners. Now we are here... so, what's next? We human beings, Homo sapiens, obtain new brains. Okay? Additionally: the original brain, plus brains in cyberspace.

Also, we fortunately have new partners: AI friends and robots and so on, okay? Robotic dogs also! What worries me a little bit more about this whole scenario is that AI technology doesn't necessarily need to be a tool for global capitalism, but it is. That's the only way in which it's being developed. And so in that model, of course we're going to be repeating all the kinds of things that we've already done in terms of empire-building and people being exploited and natural resources being extracted.

All these things are going to repeat themselves because AI is only another kind of thing to exploit. I think we need to think of ourselves not just as humans who are inefficient, humans who are unpredictable, humans who are unreliable, but find beauty or find value in the fact that we are unpredictable, that we are unreliable. So probably, like most emerging technologies, there will be disproportionate impacts on different kinds of people. A lot of the Global South, for example, hasn't had as much say in how AI is being shaped and steered. At the same time, though, some of these risks are pretty global. Especially when we talk about catastrophic risks, these could literally affect everyone.

If everyone dies, then everyone is kind of a stakeholder here; everyone is potentially a victim. Do you still plan to just keep doing research? I know there was the whole PhD-versus-grad-school question. I am somewhat uncertain about grad school and things.

I think I could be successful, but also, with AI timelines and other considerations, trying to cash out impact in other ways might be more worth it. Median OpenAI salary: supposedly 900,000 U.S. dollars, which is quite a lot. So, yeah, it definitely seems the industry people have a lot of resources.

And fortunately, all the top AGI labs that are pushing for capabilities also hire safety people. I think a reasonable world, where people are making sure that emerging technologies are safe, is necessarily going to have to have a lot of safeguards and monitoring. Even if there's a small risk, it seems pretty good to try to mitigate that risk further to make people safer. The peaceful side and the military side are very near each other; I carefully consider how to treat that. When I was born, there were no AI systems and no computer systems.

But the current situation is that young people start their lives with AI and robots and so on. Some technologies with AI will support their growing-up processes. People have been pretty bad at predicting progress in AI; ten years in the future there might be even wilder paradigm shifts. People don't really know what's coming next. But David did beat Goliath.

There's still some chance. The vast majority of AI researchers are focused on building safe, beneficial AI systems that are aligned with human values and goals. While it's possible that AI could become superintelligent and pose an existential risk to humanity, many experts believe that this is highly unlikely, at least in the near future.
