TecHype Election Special: How Big Tech Destabilizes Democracy, feat. Robert Reich & Janet Napolitano
[BGM plays] (Brandie Nonnecke) Welcome to TecHype! I'm your host, Brandie Nonnecke. Today we're digging deep into an issue that affects us all: the role of emerging technologies, especially artificial intelligence, in reshaping the very foundation of our economy, our society, and even our democracy. Whether you're feeling excitement or a twinge of anxiety, one thing is true: AI and emerging technologies are increasingly going to influence the way you live, work, and connect with each other.
From the way you receive information online to the number of jobs that will be created and lost, AI is going to be integrated directly into your life. But how do we make sure these advancements actually benefit everyone? How do we ensure they do not destabilize, but reinforce, our democratic values? These are not questions only for the elite within tech circles; they affect you. We're honored to feature two guests today who have been on the front lines of these issues: Robert Reich, former US Secretary of Labor, and Janet Napolitano, former Secretary of Homeland Security. With rapid advances in AI, emerging technologies, and social media platforms, there is significant concern that these technologies could destabilize our democracy.
What do you think are the most pressing challenges that emerging technologies pose to the integrity of our democratic institutions? (Robert Reich) Well, the biggest problem right now is the weaponization of disinformation, and that would be a huge problem anyway. But when social media and technology provide all sorts of ways of fooling people into thinking that lies are the truth, and of weaponizing those lies, it's very, very difficult for the public to do anything about it. A democracy depends on an informed public. Our only two responses are, number one, making sure that every social media outlet and every technology being used has in it some corrective feature, some moderation feature, that makes sure the lies are not there or are somehow screened out. And secondly, that people are trained in critical thinking so that they're not vulnerable to these sorts of lies. (Nonnecke) Exactly.
But it's not only presenting a lie as truth; it can also be presenting truth as a lie. (Reich) Exactly right. (Nonnecke) And then we get to this problem of the liar's dividend, where there's no discernible truth. Everything is a lie. (Reich) And that's exactly what Orwell was warning us about in his novel 1984, which really was about the end of democracy, about authoritarianism.
How do authoritarians persuade the public? They take lies and turn them into truth. They take truth and turn it into lies. And here we are, 40 years later, seeing it happen right before us. It's fairly frightening, because I've been involved in education and public education for the last 45 years, and I've never seen anything like this in terms of a threat to democracy and a threat to public education broadly conceived. (Nonnecke) So what do you think we can do? Now, in the state of California, Governor Newsom recently signed a bill into law, the California AI Transparency Act, which will require developers of AI systems that can generate or modify content to include essentially a type of watermark that cannot be removed.
In doing so, the hope is that this will raise awareness among people that content has been manipulated by AI. Do you think interventions like that will be effective at helping the public better distinguish truth from fiction? (Reich) I think it could be helpful. Anything we do that signals to the public what the truth is, and that the truth might be manipulated, is an advance.
It helps critical thinking. I mean, if people want to be critically thoughtful, they do need markers.
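For readers curious about the mechanics, here is a minimal sketch in Python of what a verifiable provenance "marker" could look like: a manifest naming the generating system, bound to the content by a hash and a keyed signature. Everything here (the attach_marker and verify_marker functions, the demo key, the model name) is a hypothetical illustration, not the mechanism the California law actually specifies. Note, too, that a detachable manifest like this can simply be stripped from a file, which is why the law's "cannot be removed" goal points toward watermarks embedded in the content itself.

```python
import hashlib
import hmac
import json

# Illustrative only: a real deployment would use asymmetric signatures
# (e.g., C2PA-style signed manifests), not a shared demo key.
SIGNING_KEY = b"demo-key"

def attach_marker(media: bytes, generator: str) -> dict:
    """Build a provenance manifest bound to these exact media bytes."""
    manifest = {
        "generator": generator,  # which AI system made or modified this
        "content_sha256": hashlib.sha256(media).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_marker(media: bytes, manifest: dict) -> bool:
    """Check that the manifest is authentic and matches the media we have."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and claimed["content_sha256"] == hashlib.sha256(media).hexdigest())

image = b"...synthetic image bytes..."
marker = attach_marker(image, generator="example-model-v1")
print(verify_marker(image, marker))            # True: untouched content
print(verify_marker(image + b"edit", marker))  # False: hash no longer matches
```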
(Nonnecke) One thing you talk a lot about in your work is economic stability and social justice. I've been thinking a lot about the importance of third spaces. One of the reasons platforms are able to influence people is that Americans live very different lives, and we're no longer intermingling with each other. Our children go to vastly different schools. They participate in different recreational activities. Is there any hope in the United States that we'll get back to a point where our democracy can flourish, where we no longer have these factions? (Reich) Well, I'm very hopeful.
And my hope rests on the notion that we as a society fundamentally depend on people understanding what they owe each other as members of the same society. We talk a lot about rights, but we don't talk enough about duties. And the only way we know and understand our duties and responsibilities to each other is if we intermingle in these public spaces. As you're saying, technology could be helpful. I mean, Wikipedia, for example, is a public space. It's a public technology.
There's no reason that every technology, every piece of software, and every platform has to be privately owned. (Nonnecke) Right. And I think there are some ideas around decentralized types of platforms, where they're not controlled solely by one company; different groups can control them.
Now, you talked about our duties, and one of our duties in the United States is to vote. We're on the eve of a very, very important presidential election, and we see these emerging technologies playing a role in influencing and mobilizing voters.
How do we best ensure that the tech behemoths don't have an undue influence on our voters and, in turn, on our democracy? (Reich) They are already having an undue influence. I think one of the people distorting our democracy more than any other is Elon Musk. And he's got to be called out. I don't care if I'm the only person calling him out. He is doing things that are directly undermining our democracy because of his ownership of X and his willingness, actually his encouragement, of disinformation on that platform. (Nonnecke) Right. I mean, there's definitely an economic incentive. We have seen a lot of research showing these hyperbolic disinformation stories.
They tend to go viral, getting people clicking and looking. I also want to point out that when Elon Musk took over X, he bought up all the shares, so he's the sole owner. How do we balance, though, his ownership of a private company with his influence on our election?
He does have First Amendment rights. (Reich) Well, he may have First Amendment rights, but X, or any platform that has that much influence, should be treated as a public utility. And even beyond that, I find it anathema to democracy that somebody like Jeff Bezos owns The Washington Post and prevents The Washington Post from endorsing a candidate. That is an abuse of wealth and power in this country, and we cannot allow it. (Nonnecke) So do you think antitrust is the main way forward, where we break up some of these big companies or address the lateral ownership across different industries, where we're seeing these tech giants taking over large media platforms? (Reich) Yes.
I think antitrust is a major and very important initiative for dealing with these giant tech behemoths, as you call them. But we must also prevent any single wealthy person from controlling a very important vehicle through which the public understands what's happening. We can't have a democracy with that kind of centralization. As the great jurist Louis Brandeis said in the 1920s, we can either have great wealth in the hands of a few people, or we can have a democracy, but we can't have both. (Nonnecke) It's very true, and we are currently in that situation, aren't we? We have these tech giants owning, essentially, the public sphere and shaping it. (Reich) And I think a very aggressive response to that is needed to protect our democracy, not just our economy. Antitrust is not just about economics; it's also about protecting democracy. That aggressive antitrust approach is to be applauded.
I think what the Biden administration has been doing is good, but we need more of it. (Nonnecke) Right. Lina Khan at the Federal Trade Commission has been leading a lot of this antitrust work, which has been quite impactful.
(Reich) She's very good. Jonathan Kanter at the Antitrust Division of the Justice Department is also extremely good. They need more resources, and they need an administration that does not call into question whether they're going to be reappointed. They need more staff.
Antitrust has got to be a response to the structural problem we're having right now: an economy that is not supporting democracy. (Nonnecke) And we are on the eve of an election, so I will pose the question: if we have a Harris-Walz administration, do you think these issues will be taken up and those resources allocated appropriately so we can deal with this issue? (Reich) I think it's much more likely under a Harris-Walz administration than under a Trump administration.
But even a Harris-Walz administration is going to have to be pushed, because never underestimate the political power of big aggregations of wealth. We have come to the point, partly because of the Supreme Court and its Citizens United decision, at which big money is polluting our democracy. (Nonnecke) Right. Companies have free speech, First Amendment rights, according to Citizens United. (Reich) Yes.
And then according to the Supreme Court, money is speech. Well, I'm sorry, money is not speech. Corporations are not people.
And we need a Supreme Court that recognizes that. (Nonnecke) I want to talk about what we do in our society to make sure that emerging technologies can lift everyone. What do you think are the key policy strategies that need to be implemented to ensure that people from all ranks in our country are able to harness emerging technologies in ways that benefit them? (Reich) Three things. Number one, you need a universal basic income, so everybody knows that they're not going to be stranded and their family is not going to be pushed off a cliff in this economy.
Secondly, you need good education available to everyone, all the way through college, and public higher education that is free, so that all kids have an opportunity to make the most of their God-given talents. And thirdly, we need internet access for everyone that is, if not free, certainly practically free, because it is a vitally important piece of being a citizen and also of social learning.
(Nonnecke) Yeah. I'm glad you brought up digital inclusion and closing the digital divide. My PhD is in telecommunications, and I worked a lot on universal access and service policies. It's been a very significant challenge in the United States to close that gap.
Do you think the United States will actually be able to close the gap? And I will also say it's not just in rural areas; it's also in cities, where we see people who can't afford access. (Reich) We are able. The question is, are we willing? We are the richest country in the history of the world. We have the capacity to give the best education, including internet access, to every child and every family. If we don't do that, it's because we don't have the political will to do it.
If we don't have the political will to do it, it's because there's too much big money involved in our politics that stops us. (Nonnecke) Yeah. And I will say, if we don't do it, I think we will get knocked down from that top position to a lower one on the global stage. We need to invest in education and in AI. (Reich) I think so too, because many of our young people have extraordinary capacities and abilities if they're given the chance.
(Nonnecke) Yeah, I agree with you. Thank you so much for joining me today to talk about emerging technologies, AI, and economic and social stability. (Reich) Thank you so much for having me. My pleasure. (Nonnecke) One of the things that you and I are both very concerned about is the role of emerging technologies in national security and stability. What do you think are the most pressing challenges right now that emerging technologies pose to the integrity of our democracy? (Janet Napolitano) Well, one of the challenges is we don't know what we don't know.
The technology is evolving so quickly, much more quickly than policy can evolve. We don't really, I think, have a good handle on how to measure risk. We don't even know what to measure. In the AI world, do we look at the models? Do we look at the data? How do we decide between those things? So we're in this odd space where policymakers are really struggling.
I think there's a general sentiment that there need to be some guardrails this time, that we shouldn't let it run the way social media ran when it was first formed. But there is still a grasping at what the right kind of policy is, what the right kinds of guardrails are. (Nonnecke) Yeah, exactly. We're still at the stage where the technology is quite nascent, and we're unsure how to even assess these tools. And once you assess them, what type of technical or governance guardrails should you put in place? (Napolitano) Exactly. And the government is really interesting. In DC, everywhere you go, everybody has their AI committee or their task force, their AI this or their AI that.
So there's no real nucleus. I know that DHS, my former department, is doing a lot in the AI space, probably as much as almost any other federal department. I know the CIA has people working on it. I know the White House and the National Security Council do.
Those are just three places where AI work is going on. (Nonnecke) Definitely. There has been a sort of clamoring to get on top of this issue early.
Now, in the state of California we had quite a contentious bill, SB 1047, which would have put some guardrails on these significantly advanced frontier models. However, as we know, the bill was vetoed by Governor Newsom. Yet about two or three weeks later, we saw President Biden sign a memorandum on AI and national security. (Napolitano) The National Security Memorandum, yes.
(Nonnecke) It talks a little bit about how we best ensure that these frontier models do not enable bioweapons or other mass-casualty events. Do you feel that the memo is a proactive step toward truly addressing the risks of these technologies, rather than having states adopt laws? (Napolitano) I think the National Security Memorandum is more aspirational. It's not a strategy. It's not operational. It really says: we'd like these kinds of things not to happen. Which, you know, seems pretty obvious.
But how does that happen? Who controls it? What are the actual metrics that are going to be deployed? Those are still unanswered questions. (Nonnecke) Yeah. And as part of that memo, there is a call for more collaboration with industry, because industry often has a really unique perspective into the capabilities of the technology. (Napolitano) Well, AI is interesting, because when we've had emerging technologies before, oftentimes they emerged from within the government, like the internet itself. But AI has really emerged from the private sector.
And so that's a different dynamic, and in how you deal with it, clearly, government and the private sector are going to have to work together. (Nonnecke) Exactly.
And I know at the federal level there has been an increased focus on investing in infrastructure for public-sector access to AI and these frontier models, not only to develop them for use within government, which could be for national security, but also to use them as a resource for testing the capabilities of these models. I know NIST, the National Institute of Standards and Technology, is also involved.
Do you feel that we will be able to invest enough in the development of that infrastructure to allow the federal government to have compute power, data, and models available? (Napolitano) Open question. We need to; it's going to have to be part of our national security infrastructure, among other things. But it is hard to say.
How much, you know? When you get to an actual appropriation at an actual dollar level, you have to be able to say, this is what we need. And that means you need to understand: what is the risk you're trying to mitigate? What is the need to satisfy? (Nonnecke) Right.
(Napolitano) And then you have to be able to put dollar signs on it, and then you have to be able to get it through the House and the Senate, which is an art form in itself. So it takes quite a bit of time. I think perhaps one way to think about it is not to do everything all at once: to go with pilot projects, to go with phases, to think about this as something that will have to happen over time, as the technology develops and as the policy and the policymakers get more sophisticated. (Nonnecke) Yeah. And I think actually
a big part of this will be industry collaboration with the public sector. Now, we've seen with the National AI Research Resource some of this collaboration already emerging. However, my concern is that the collaboration isn't that strong. What are some ways we could incentivize the private sector to work more closely with government and the public sector, with a shared goal of ensuring we're not developing AI that harms our society?
(Napolitano) Well, it's interesting. In the cybersecurity world, when that was developing, of course, there were no mandatory requirements put on cyber networks. It was pretty much a voluntary regime. And many in the private sector were reluctant, for example, to quickly report when they'd had a cyber event, because they didn't want people to know; they didn't want their competitors to know; they didn't want their consumers to know. What it took to change that culture was the fact that more and more attacks occurred and more and more damages were experienced.
And they began to realize it was in their own self-interest to report. So now, in the cybersecurity realm, it is much more effectively collaborative. And that may be what happens with AI. (Nonnecke) Yeah, I can actually see that, because in the AI space we're seeing the development of these AI incident databases, essentially public name-and-shame for these companies.
And I imagine those companies would prefer to report incidents themselves than have a third party report them. (Napolitano) Right, because a third party could get some things wrong; they might not get everything right.
(Nonnecke) So definitely, I think you're spot on. We might see that example jump from cybersecurity over to AI safety. (Napolitano) Right. And if that happens, I think that wouldn't be a bad thing.
We certainly have cybersecurity problems. We certainly have network issues, and we certainly experience them, particularly where critical infrastructure is concerned. But it's much better than it was, and again, it has required almost a culture change within the private-sector owners and operators. (Nonnecke) Exactly.
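To make the self-reporting idea concrete, here is a minimal sketch of what a self-reported entry in such an incident database might contain, including a flag for whether the developer filed it first or a third party did. The schema, field names, and example values are hypothetical illustrations, not drawn from any actual incident database.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentReport:
    """Hypothetical record for a voluntary AI incident disclosure."""
    system_name: str           # which model or product was involved
    description: str           # what went wrong, in plain language
    harm_category: str         # e.g., "misinformation", "bias", "safety"
    self_reported: bool        # True if filed by the developer itself
    mitigations: list[str] = field(default_factory=list)
    reported_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A developer filing first, rather than waiting for a third party to report it.
report = AIIncidentReport(
    system_name="example-model-v1",
    description="Model generated a convincing fake news article when prompted.",
    harm_category="misinformation",
    self_reported=True,
    mitigations=["added refusal policy", "strengthened output filters"],
)
print(json.dumps(asdict(report), indent=2))
```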
(Nonnecke) And my last question for you: we are on the eve of a very important presidential election. What is your greatest concern about emerging technologies affecting our election? (Napolitano) Well, we've already seen it. We've already seen it in the sheer multiplicity of misinformation and deepfakes infecting the information environment. And every day there's more and more; we can expect even more.
But what I'm really watchful of is post-election, because it's likely we will not know the winner on election night. Some states, battleground states among them, are very slow to count. And as we know and experienced in 2020, all kinds of conspiracy theories arise. We know some nation-state adversaries are, in a way, licking their chops. They're not going to stop whatever misinformation they're spreading now on November 5th; they're going to continue it.
And that in and of itself could cause us security problems of a very major dimension. (Nonnecke) Well, let's hope we get more of a rein on this, that media outlets help quell some of these conspiracy theories, and that the platforms actually show up and try to label, remove, and suppress that content. (Napolitano) Yeah. Well, from your mouth to God's ears. But I'm somewhat skeptical.
(Nonnecke) Well, Janet, thank you so much for joining me today to talk about emerging technologies and their effect on the election and national security. (Napolitano) Thank you. (Nonnecke) Thank you for joining us on this episode of TecHype, where we got to sit down with Robert Reich and Janet Napolitano. We uncovered ways emerging technologies are influencing our society, our economy, and even our democracy.
Want to learn more about other emerging technologies and the laws and policies that shape them? Check out our other episodes at TecHype.org.