Dr. Amy Zegart - "The past, present, and future of tech in American espionage"


What I want to do is start with a picture and a story that happened to me as I was finishing this book. So this book has been in my head for a long time. It started off as an intelligence 101 kind of project with Princeton University Press. And as you'll see, it turned into intelligence 2.0.

So not just how does the intelligence community work, but how are emerging technologies challenging every facet of the US intelligence enterprise? And this moment, over July 4 weekend in 2020, really captured I think this emerging world for intelligence. Everyone can see the picture OK on the slide? Yep? Yep. OK. So what you're seeing is a picture of a damaged building. There was a fire that broke out in Tehran at about 2:00 in the morning local time.

And the flames of that fire were so bright they were detected by a weather satellite in space. Now, Iran's Atomic Energy Organization released the photo that you see. It was carried by The Associated Press.

And they initially downplayed this fire. They called the building that you see an "industrial shed," air quotes, that was under construction. And they also said the damage was very limited.

And you can see, from the photo, it looks like the damage is confined to just a small corner of that nondescript building. But two people, who many of you may know, David Albright and Fabian Hinz, quickly discovered a very different picture. And they quickly concluded that Tehran was lying.

Using just openly-available intelligence images released by Iran, commercial satellite imagery, and simple geolocation tools, they geolocate the building. They discover, of course, it's not an industrial shed under construction. It's a nuclear centrifuge assembly facility at Natanz, one of the main facilities in Iran's nuclear program. And they also conclude, from overhead imagery from satellites, that it wasn't, in fact, a very small or insignificant fire. It was a very large fire, most likely caused by an explosion and quite possibly the result of sabotage.
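To make that geolocation step a little more concrete, here is a minimal sketch, in Python, of the kind of sanity check an open-source analyst might run at the end of such a workflow: comparing a coordinate estimated from landmarks in a released photo against the published location of a suspected facility. The coordinates, the tolerance, and the helper function are illustrative assumptions on my part, not the actual tools or numbers Albright and Hinz used.

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two (lat, lon) points, in kilometers."""
        r = 6371.0  # mean Earth radius, km
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlam = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    # Illustrative values only: a location estimated from landmarks in the released
    # photo, checked against approximate public coordinates for the Natanz complex.
    estimated_site = (33.72, 51.73)    # hypothetical geolocation estimate
    natanz_complex = (33.724, 51.727)  # approximate, from public sources

    distance_km = haversine_km(*estimated_site, *natanz_complex)
    print(f"Estimated site lies {distance_km:.2f} km from the Natanz complex")
    if distance_km < 2.0:  # arbitrary tolerance for a facility of this size
        print("Consistent with the building being part of the Natanz site")

The real analytical work, of course, is in matching landmarks across images; this sketch only shows the final cross-check against known coordinates.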

Now, the fire breaks out at 2:00 in the morning local time. Within hours, David Albright and Fabian Hinz, who work at two different nonprofit organizations, both post their analyses on Twitter.

By mid-morning, The Associated Press is carrying their analysis. By afternoon, David Sanger at The New York Times has a major article. I could see some people smiling. David Sanger always has an article.

He's quite prolific, and he's very fast. I don't know how he does what he does, but he carries their analysis by early afternoon. By evening on the same day, Israeli Prime Minister Benjamin Netanyahu is asked whether Israel is responsible for an act of sabotage on Iran's nuclear program. Netanyahu replied in characteristic fashion, "I don't address these issues." We didn't get an answer from the prime minister. But the point is this entire episode transpired in the course of a single day.

It transpired driven by two men who don't hold security clearances, don't work in US intelligence agencies, and did all of their analysis based on openly-available or publicly-accessible intelligence. Right? This is the new world of intelligence in a nutshell. Now, it used to be that tracking dangers, including nuclear threats, was almost exclusively the province of governments and their high-powered spy agencies. But what I found in the book is that emerging technologies of all types, and we'll talk more about what I mean, have profoundly changed the intelligence business.

And they're creating a moment of reckoning for US intelligence agencies akin to 9/11, an adapt-or-fail moment driven by technology. Now, before I delve into the new world of intelligence driven by all sorts of emerging technologies, I want to spend just a little bit of time on the old world of intelligence and why it's so hard. And because I know folks come from a variety of different backgrounds, I want to do a little bit of level-setting about some basics of intelligence. So for those of you who are intelligence experts, just bear with me for a few minutes. OK.

Now, of course, we know that spying has been around a long time. It's as old as warfare and has always been a part of trade and diplomacy, statecraft and conflict. The first intelligence reports known to have survived were chiseled on clay tablets 3,000 years ago in the Amarna letters. Of course, we all know about Sun Tzu's The Art of War, written 2,000 years before the American Revolution. And the founding fathers: one of the more interesting parts of researching this book was finding out how crucial espionage was to the founding of the United States as well. We may often think about information warfare as an internet phenomenon driven in large part by the Russians.

But Benjamin Franklin in his day actually was quite adept at information warfare. He was a printer by trade. And he set up a printing shop in his Paris basement during the war where he literally cranked out fake news reports with fake ads and fake stories, all designed to win public and elite support for the rebels in Europe during the Revolutionary War. So we know that espionage has been around a long time, but most Americans know almost nothing about how our intelligence agencies work. Just to give you a couple of examples, in Congress, there are more powdered milk experts today than there are members of Congress who have ever served in an intelligence agency before.

The dairy caucus has more members than there are intelligence agency veterans in Congress. We also know that political scientists don't typically study intelligence either. I'll give you one data point. I counted up the number of articles over a 15-year period that appeared in the American Political Science Review; AJPS, the American Journal of Political Science; and the Journal of Politics. We can arm wrestle over whether these are the top three journals in the field, but they're well-regarded disciplinary journals. And over that 15-year period after 9/11, there were 2,780 articles published.

Only 5 of those nearly 3,000 articles dealt with any topic related to intelligence at all. So when you think about this period post-9/11, when intelligence is making headlines, whether it's failures with Iraq WMD or scandals and controversies with detention and interrogation programs, counterterrorism programs, domestic surveillance programs, political science journals are focused on just about everything else. And this leads to my favorite slide, which I'm going to use in my course at Stanford in the spring quarter. It's, of course, a leading question. Guess which U2 is taught in more top 25 universities: U2, the band, or U-2, the spy plane? Of course, the answer is U2, the band. So I took a look at the top 25 universities ranked by US News & World Report.

More of them offer courses on the history of rock and roll than on anything related to US intelligence, which gives students a better chance of learning about U2, the band. As lovely as that is, and I'm not saying I'm not a fan of Bono, they're going to learn about that more readily than they are going to learn about US intelligence. So there's a gap, an education gap, at the elite level, at universities, and in the general public about what intelligence is and how it operates. And we can talk about what the costs of that are, I think, for intelligence effectiveness and accountability in a democracy.

So what is intelligence? This is a question Jack Goldsmith asked me a couple of weeks ago, and he was quite right in asking it, because it's more complicated than it may seem. What is and what isn't intelligence? Just very briefly, intelligence is information that gives policymakers decision advantage, so that they understand threats and opportunities better and faster than their adversaries do. That's the simplest definition. But intelligence is not a crystal ball. That may sound cliche.

But think about how much we expect intelligence agencies to get it right with pinpoint accuracy, whether it's how long the administration in Kabul is going to last or when exactly it is going to fall. Or you can see this with academic work. One of the great articles that I included in my chapter for Vipin and Scott Sagan's book looks at trying to assess how well the intelligence community did with respect to nuclear threats. And there, the standard was also pinpoint accuracy.

If US intelligence agencies were three weeks too late, they got it wrong with respect to the timing of a nuclear test. So we know that intelligence isn't a crystal ball. And yet, both the public and academic work often hold it to a crystal-ball standard. And of course, we know it's not a crystal ball for a number of reasons. Intelligence is fragments.

It's not whole pictures. Even smoking guns are ambiguous. And of course, adversaries are doing everything they can to deceive and to hide.

Intelligence, of course, is also not policy. Right? So intelligence officers are not supposed to walk into the White House and say, Mr. President, this is what we think is going on with Russia and, therefore, you should do A, B, or C.

Now, I got into a discussion with Condi Rice about this, and we can talk about that, too. Intelligence may not be policy, but it often shapes policy in ways that intelligence officers may not fully appreciate, giving grist to the mill for some people instead of others. And as she pointed out when reading aloud what she didn't like about my book (we did a fireside chat together), intelligence officials, by their assumptions and by their agenda-setting of what they focus on, also influence policy. So there's supposed to be a bright line. But in reality, it's a lot blurrier than that. So I want to make sure I put the disagreements with my book front and center on the table as we go along. And then finally, intelligence is not just secrets.

Even during the Cold War, by public estimates roughly 80% of the information in a typical intelligence report came from publicly-available information, not clandestinely-acquired intelligence. So if that's all true and 80% typically comes from openly-available information, why are we spending more than $80 billion a year on 18 intelligence agencies in the United States? Or, as I put it in the title of my slide a little bit more snarkily, what makes intelligence more valuable than a Google search anyway? And the answer is really three things. The first is that, of course, policymakers need intelligence at specific deadlines to answer specific questions. They need intelligence that's tailored to their needs. What's going to happen next in Ukraine, for example. Or longer-term questions: why is China dramatically increasing its nuclear arsenal, or how is the cyber threat landscape changing and what does it mean for US policy? So it's that tailoring of information that is supposed to add real value.

The second key function is, of course, the old saying "speaking truth to power," which is intelligence agencies answering questions that policymakers may never have thought to ask or giving answers they may not want to hear. This is the spinach function of intelligence. Right? Giving people information that may be good for them even if they don't want it.

And then the third function is that synthesis of the publicly-available information with the clandestinely-acquired information, the intercepted phone communication abroad or the document or the human source revealing what's happening in Putin's inner circle. And it's that combination of public and secret that makes intelligence so valuable. So why is intelligence so hard? And here, I think Donald Rumsfeld got a bad rap. He deserved his bad rap in many ways for many other things. But in this particular instance, Donald Rumsfeld was on to something. So he actually said this, as you might recall, at a press conference in February of 2002, in the run-up to the Iraq war.

I normally don't read slides aloud, but I think it gives you a sense of the quote and the Rumsfeldian poetry if I do. So here's what Rumsfeld said; these are his verbatim remarks from the press conference. He said, "As we know, there are known knowns.

There are things we know we know. We also know there are known unknowns. That is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know."

Now, what on Earth was Donald Rumsfeld talking about? It turns out that he was really channeling one of the founding fathers of the CIA's analytic branch, a Yale history professor named Sherman Kent. Now, Kent wrote, in the 1960s, about three types of information that intelligence agencies have to deal with. The first, what Rumsfeld called the "known knowns," are indisputable facts. These are questions that have answers, and those answers are known to US intelligence agencies. So it's a knowable thing, and US intelligence agencies know it.

So an example of a known known is, does China have an aircraft carrier? That's a question with an answer, and US intelligence agencies know the answer. Yes, China has not one aircraft carrier, it has two, with a third under construction. So that's the known knowns. The second type of information intelligence agencies have to deal with are things that are knowable. These are questions that have answers, but they may not know the answer yet. So how do Chinese aircraft carriers perform under various conditions at sea? That is a knowable thing.

But to get that answer, US intelligence agencies may need to have humans on-board those carriers or they may need to have data from sensors over a long period of time. They may not know the answer even though it's a knowable thing. The third category of information is the most tricky for US intelligence agencies, and those are things that are not known to anyone at all.

So if Taylor were to go and ask Xi Jinping, for example, how long will the Chinese Communist Party stay in power? Even Xi Jinping wouldn't know the answer, because it's not a knowable thing. This third category is the most vexing of intelligence challenges because, number one, it gets at adversary intentions. And number two, it gets at long-term trends with lots of different variables that are very difficult to disaggregate and then attach probability estimates to.

So that's the old world of intelligence, which I hope we understand is hard enough. Let me spend more time on the new world of intelligence and how technology is challenging it. When I talk about emerging technology, I'm really talking about a convergence of a number of different things. Think about the internet and how much connectivity we have today, with more than half the world online. Think about social media connecting us in ways and helping us watch the war unfold in Ukraine in real time. But it also includes things like artificial intelligence, the commercial satellite revolution with thousands of commercial satellites offering capabilities for free or at low cost that only spy satellites could offer before, quantum computing, and more.

Never before have we had so many emerging technologies changing so much so fast, whether it's politics, civil society, or economics. So for the US intelligence community, what I argue in the book is that this convergence of emerging technologies is driving what I call "the five mores," and let me take each one of these in turn. The first and most obvious is more threats. Right? If a picture is worth a thousand words, the picture on the left of the slide is a Soviet missile being paraded through Red Square.

That was a CIA reference photograph during the Cuban Missile Crisis. In the Cold War, threat number one, two, and three was the Soviet Union, the Soviet Union, and the Soviet Union. Today, the threat landscape is dramatically more complex, driven in large part by technologies and the ability of actors to threaten in cyberspace. So for most of American history, of course, two things really protected us more than anything else, power and geography.

If we had a more powerful military, it meant we were more secure. And our geographic location, with two vast oceans, protected us from bad neighborhoods in the rest of the world. But those two factors, power and geography, do not protect us in cyberspace. The United States is simultaneously one of the most powerful actors in cyberspace and one of the most vulnerable actors in cyberspace, because we're so digitally connected and because we have such freedom of speech, which enables adversaries to influence our discourse and deceive at scale. And good neighborhoods and bad neighborhoods in cyberspace are all connected, so cyber is driving a fundamentally different and more complicated threat landscape.

So that's the first more, more threats. The second more is more speed. Intelligence has to move at the speed of relevance, and that speed of relevance is accelerating in dramatic ways. So the Cuban Missile Crisis, I always joke that you can't be a political scientist, I think, without mentioning the Cuban Missile Crisis at least once in a talk. So here's my nod to the Cuban Missile Crisis, although I have mentioned it already more than once.

So maybe I get double bonus points for that. If we think about the Cuban Missile Crisis, John F. Kennedy famously had 13 days from the time those U-2s snapped the incontrovertible evidence that the Soviets were building up nuclear missile installations in Cuba. He had 13 days to assess the intelligence and decide on a policy course of action. Fast forward to 9/11, President Bush had 13 hours from the time of the first attack that day to assess the intelligence before he announced to the nation that night what US policy would be.

1962, 13 days; 2001, 13 hours. Today, with cyber attacks, it's more like 13 minutes or 13 seconds, or maybe we're even too late, because we know that cyber attackers are often inside our systems for weeks or months before the breach is ever detected. And that's true even of recent cyber attacks like SolarWinds, carried out by the Russians and discovered in December of 2020. So speed has become a real challenge, particularly when policymakers can get all sorts of other information on their phones from Twitter in near real time. And when a group of us from Stanford went to STRATCOM in Omaha a couple of years ago, I went with my colleague Herb Lin, and we were doing work on related topics. And the first question we asked, when we got to the battle deck underground, was, do you have Twitter down here? And the answer was yes.

And in fact, they typically have a Twitter feed right alongside the classified intelligence feed at US Strategic Command. So think about the speed of relevance of intelligence and how much that's accelerating, thanks to technology. So more speed.

So more threats, more speed. The third more, driven by new technology for intelligence, is more data. I already mentioned that more than half the world is online; by the last estimate, I think it's 58% of the world today.

More people on Earth have cell phones today than running water. Right? That's an astounding level of connectivity. And here's one sort of data point that really caught my attention. This is from the World Economic Forum. In 2019-- so this is even dated.

It's more than that today. In 2019, internet users posted 500 million tweets, sent nearly 300 billion emails, and posted 350 million photographs to Facebook every single day. That's the daily production of information online.

By some estimates, the amount of data on Earth is expected to double every two years. So if you're an intelligence analyst, you are drowning in an abundance of data. And the challenge is how emerging technologies can help make sense of that data and generate insight faster and better than adversaries. So more data.
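As a rough illustration of what "doubling every two years" implies, here is a small Python sketch. The 2019 baseline figure is an assumption added for illustration, roughly in line with commonly cited industry estimates; it is not a number from the talk.

    # Toy projection of the "data doubles every two years" estimate cited above.
    # The 2019 baseline of ~41 zettabytes is an assumed figure for illustration only.
    BASELINE_ZB = 41.0
    DOUBLING_PERIOD_YEARS = 2.0

    def projected_zettabytes(years_ahead: float) -> float:
        """Exponential growth: baseline * 2 ** (t / doubling period)."""
        return BASELINE_ZB * 2 ** (years_ahead / DOUBLING_PERIOD_YEARS)

    for years in (0, 2, 4, 6, 10):
        print(f"+{years:>2} years: ~{projected_zettabytes(years):,.0f} ZB")
    # After a decade at that rate, the volume is 2 ** 5 = 32 times the baseline.

Whatever the exact baseline, the point of the exponent is the same: the analyst's haystack grows far faster than any analyst workforce can.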

The fourth more is more customers or decision makers who need intelligence to advance the national interest and who don't work inside government agencies. So this is a picture of a public service announcement to voters about foreign election interference. And it includes even General Nakasone, the commander of US Cyber Command and the director of the National Security Agency. When an agency like NSA is making public service announcements, you know times are changing.

So it's not just people with security clearances who need intelligence anymore. Voters need intelligence. Tech leaders need intelligence about cyber threats to and through their platforms. And critical infrastructure leaders in financial services and power and water and other sectors also need intelligence. What this means is that US intelligence agencies increasingly have to publish for the open world, not just the classified world. This, too, driven by new technology.

Then there's the fifth more, which I think probably, from an academic perspective, may be the most interesting development in this world. And that is more intelligence competitors. So intelligence collection, production, analysis isn't just for governments anymore. And so I spent a couple of years, dragged into this by Vipin and Scott, happily I will say, looking in particular at the open-source intelligence world related to nuclear threats.

And I liken this world to the Star Wars cantina. It's got people from everywhere in it. And fortunately, my friends who are in this ecosystem say they actually quite like being called the "Star Wars cantina," so I'm glad I'm not offending any of them. But what you get in this ecosystem is a wide range of capabilities and motives drawing people into the open-source intelligence world.

And I want to just give you a couple of examples to give you a sense of how wide-ranging this world is. So in this slide, many of you I'm sure know Sieg Hecker. He's the guy on the left with the Stanford Cardinal red shirt, standing on the Stanford campus. Sieg was the science co-director of CISAC. He's a physicist by training, nuclear physicist. And he's the director emeritus of Los Alamos National Lab.

He's a part of this open-source ecosystem. He says he doesn't want a security clearance anymore, because he's tracking North Korea and other nuclear proliferators using only publicly-available information. And we'll get into, in a minute, why Sieg has decided he wants to do that without a clearance anymore. So you have an expert, on one hand, in this ecosystem.

And the guy on the right is a hobbyist. He's a coin dealer by trade. He lives in Tennessee.

His name is Jacob Bogle, and he makes really great maps of North Korea as a hobby. And I took the language on this slide right from his website, which I love because he has his big three at the bottom. Hit him up if you want to know anything more about North Korea, coins, or political consulting. This gives you a sense of the range of this ecosystem. But taking a step back, I think there are some systematic differences between the emerging open-source ecosystem of individuals and organizations working on nuclear threats and the traditional US intelligence community, and we need to better understand these dynamics. The primary difference, if we go down the left-hand part of the slide where it says "OSINT": the open-source intelligence community has a wide range of motives, hobbyists and activists, people who are just interested, people who want to make money, and they come with a wide range of capabilities as well, because anybody can join.

All you need is an internet connection. There's no vetting of who gets to play in this Wild West of the open-source world. They bring a wide array of analyst backgrounds. And, in part as a result of the speed with which this ecosystem moves, quality control is really much looser. Right? It's voluntary. It's ad hoc.

It's informal. And so anything can get out. Right? At any moment. And we saw this with David Albright and Fabian Hinz: it was just a matter of hours before their analysis was put into the online ecosystem. Compare that to the US bureaucracy. Well, there the organizational objectives are more narrowly tailored. It's helping policymakers gain decision advantage.

It's not wide open. It's not the Wild West. It's hard to join. Right? It takes two years right now to get a security clearance and get a job at the CIA, for example. So there are real hiring rules. Membership is not open to everyone.

As a result, in part, the backgrounds analysts bring in the door are narrower. And quality control for products is more formalized. Some of the benefits of red tape are standardization and mandatory peer review. That's not to say intelligence is always right. We know it isn't.

But there are more rigorous quality control measures in place. And the government moves more slowly, at the speed of bureaucracy, rather than at the pace of the open-source world, which moves much more quickly. Now, there are pluses and minuses to this open-source ecosystem. The benefits, the primary benefits, are, number one, there are more people who can put their minds to understanding and collecting information about nuclear threats, or whatever the threats may be. The second benefit, and this is something that Sieg really focused on with me, is that information is shareable because it's not secret.

It's shareable within the US government, between different agencies. And it's shareable with other countries. And finally, open-source intelligence players bring more diverse perspectives to bear on intelligence challenges. And since it's MIT, I can't resist an engineering joke or two.

So you know the optimist sees a glass as half full. And of course, the pessimist sees the glass as half empty. But to the engineer, the glass is twice as large as it needs to be. The benefit of different perspectives.

But there are also real risks of this emerging ecosystem. The most obvious is that errors can go viral. And when they do, the opportunity cost for policymakers and intelligence officers can be large. Because when errors go viral, intelligence agencies have to be the debunkers of last resort. And policymakers have to expend the most valuable thing they have, time, in understanding whether something is right or wrong and what they need to do about it.

There's also the risk of deliberate deception being injected into this open-source ecosystem. And perhaps most disconcerting, crises could become much harder to manage in a world where open source plays a greater role. If we think about third-party transparency in the midst of a crisis, it could actually make it harder for two sides to come to any kind of an agreement or a ceasefire.

And just as a thought experiment, I put up, of course, a Cuban Missile Crisis for modern times. Imagine the Cuban Missile Crisis were unfolding today. And there are open-source intelligence organizations revealing what's going on in real time, sort of fact-checking what's going on, like we're seeing in Ukraine today. And imagine there's a tweet like the one I put on the slide saying, "Just in!" Right? "Satellite images show nuclear missile sites in Cuba. #crossingredlines." We know that, in the real Cuban Missile Crisis, there were at least two critical factors that averted nuclear war.

One was time to think, and the second was secrecy to compromise. Kennedy, had he made the decision on the first day of the crisis, would have launched an air strike, which would have been much more likely to lead to nuclear war. Time to think proved pivotal.

And secrecy to compromise: that nuclear swap, the removal of the missiles from Cuba in exchange for the removal of US nuclear missiles from Turkey, a compromise so secret nobody knew about it for two decades. This open-source world doesn't provide secrecy and time to think. It provides transparency and speed, exactly the opposite of what we may need in a crisis. So crisis management could become much more difficult.

Let me end just with, what are the implications for IR and what are the implications for policy? I'm so glad that Erik Lin-Greenberg is here with us today. I think he does great work in this realm. Secrecy and openness aren't just out there; this is a growth industry, understanding the causes, the consequences, and the strategic uses of secrecy and openness. And I think Erik is part of a really interesting new set of work looking at this in a range of different areas.

Keren Yarhi-Milo and Mindy Haas are looking at secrecy and openness in allied communications, not just with adversaries. So I think there's a really exciting vein of research in IR theory today that gets into some of these issues. And for policy, I think there are some real implications about how much the US intelligence community needs to be radically reimagined for a world where open-source information, and not just secrets, is going to play a greater role. Let me stop there and stop my share so I can see all of you a lot better and open it up to whatever you want to talk about. OK. Thanks, Amy.

That was fabulous. I'm going to use the position of the chair and ask two questions. And everybody who has questions, please raise your hands. I'll keep the list.

So the first question is sort of, how much do we see governments using OSINT strategically to launder intelligence? And it's related to the second question, which is about the signal-to-noise ratio as there's been a proliferation of OSINT tools and people. Just on net, we see, pun intended, a proliferation of people, a proliferation of tools, a proliferation of platforms. What's your sense of the net effect: is it more signal, more noise, or is the ratio the same? Less signal, more noise, so it's actually harder to sift out the truth in this ecosystem? Or is the signal getting stronger because of OSINT tools, both at the government level and in open-source intelligence? So why don't you start with those? And then I'll keep a list. Yeah.

Thanks, Vipin, for the softball initial questions. Appreciate that. So great questions.

I think, yes, we are seeing indications, from what we can tell, that the-- I like the imagery of laundering sort of clandestinely-acquired intelligence through the OSINT world. And I think the poster child for this is the "discovery" of Chinese nuclear missile silos over the summer that was reported in The Times through open-source intelligence. It's hard to pin down. But of course, you hear whispers that it was sort of look here, look here, look here from the US government to members of the open-source community.

So I think that's likely the case, that this was already known by US intelligence agencies but was laundered through the open-source community so that it could be more shared without revealing sources and methods. So I certainly think we're going to see more of that dynamic in the days to come. Signals to noise, yes. Right? Yes, more signals.

Yes, more noise. How does that play out? I think depends very much on the players. So one of the things that I have been arguing to policymakers and intelligence leaders in Washington is that the framing of OSINT is wrong. OSINT or open-source intelligence isn't just stuff. It's not just information that US intelligence agencies are bringing in.

OSINT is an ecosystem. It's an organizational, living ecosystem with its own dynamics. And so that question, what's the ratio of signals to noise, depends on the ecosystem, and that ecosystem is not fixed. It can be shaped. And so I think the work needs to happen now, at a time when this open-source ecosystem is the most benign it's going to be, when it's mostly dominated by responsible American and allied players in nuclear threats, for example. The challenge is to shape the ecosystem with norms, with standards, with nodes of engagement from responsible players in the ecosystem directly to US and allied intelligence agencies, so that you're, in some ways, outsourcing the training and the validation of some of this information and it's not so overwhelming for intelligence agencies.

So I think it could be. It depends on the topic. And it depends on the moment whether you're going to be overwhelmed with noise because of OSINT or whether you're really going to get much more value in terms of signals. Thanks, Amy.

The Chinese silo case was interesting. It went through several rinse-and-spin cycles, because the initial leak or laundering revealed one field. And then I think somebody said, there are more. Go find them.

And then there was another one. They're like, there are more. Go find them. And it was very clearly-- it appeared to be laundered to some degree. OK.

Because you got a shout out, I'm going to give the LG the first real question. Thanks so much for the talk, Amy. I was actually going to use Taylor's absence to ask two questions today, but Vipin asked my first question about laundering. So I'll just ask my second. So maybe this goes back to some of your earlier work, but I was wondering if you could speak a little bit more about the relationship between the intelligence community and policymakers now, in this world of increased transparency.

Right? When the IC goes in to brief a principal, how does the availability of all this OSINT essentially shape the kind of work that the IC is doing and the type of messaging they're using when they meet with policymakers? So I think we're seeing a new model playing out in real time in Ukraine, and I've been thinking about this. And I just wrote a piece about this in The Atlantic, so I'll give you my sort of 30-second take on it, which is that you see with Ukraine the Biden administration's deliberate strategy to release unbelievable amounts of classified intelligence, almost in real time. So yes, we've had declassification before, but not so much, not so fast, not so granular. Right? So you think about the false flag operations that the Biden administration and the Brits revealed about what Putin was planning. Troop movements. Right? Even when Zelensky said, you guys are hyping the threat.

Stop roiling my economy. Right? He said, you're panicking. The Russians aren't going to invade. Zelensky said this to the United States. The revelations kept coming.

So I think this is a deliberate strategy, and I think it's designed for probably three purposes. And we'll see how effective these are. I think purpose number one is inoculation. So we think about information warfare. The Russians have often seized the advantage, because they get the lie out before the truth.

And what we know, of course, from psychology research is, once we have fixed beliefs, it's very hard to shake them, no matter how much information we receive. The more you say it's wrong, it's wrong, it's wrong, the more we cling to those prior beliefs. So the first-mover advantage in information warfare appears to be very important. And in this case, the Biden administration wanted to use the intelligence to say you're going to get conned by the Kremlin. Watch out for the con. Here's the truth, and the truth got out first.

And that inoculation function, we're likely to see-- we don't know yet, but we're likely to see that it actually played a pretty important role in generating support, not just among the public but among NATO allies. So there's a rally-the-ally function there. So that's number one. Number two, I do think one of the goals was friction, creating friction for Putin. This feels to me-- I'd be curious, Erik, to know what you think.

It feels to me like a page out of Cyber Command's playbook. The defend-forward strategy is the more you force an adversary to focus on their own defense, the less effective they will be at offense. So in this case, Putin we know, at least from what the Biden administration was saying, was really sort of put off his front foot. Because suddenly, he couldn't control the timing in the way that he wanted and he had to worry about, how did the Allies know all this intelligence? Who can I trust, and who can't I trust? What systems can I trust? What can't I trust? And the more he's stewing in his juices about that, the less effective he can be in other aspects of the war.

So there's a friction component to this strategy as well. And then I do think there may be something to this question of-- I'm calling it sort of covert action logic in reverse. So it's hard to hide behind a fig leaf of the Russian narrative when there's this constant declassification of intelligence saying what Putin's really up to.

And so I think this sort of removing the fig leaf may explain in part why we see China being a little more muted than they otherwise would be about the response to the invasion. Can't hide behind the fig leaf. I think it helps explain why we see the Swiss taking a side. Can't hide behind the fig leaf. The Swiss love to bank with anybody. Right? They weren't so picky before.

So it's the inoculation, it's the friction, and the fig leaf. And so I think that it's this sort of melding of open and secret that we're seeing and it's new. And we'll see how it plays out. But I don't know if you have a different perspective based on what you've seen. Yeah. Thanks so much.

That was really helpful. Next, I've got Nina Miller, who's writing a fabulous paper on misattribution, which is close to this topic. Yeah.

Thank you so much for this talk. I'm Nina. I'm a second year in the department. So my question is about connecting emerging technology trends with crisis management.

And specifically, whether speed of intelligence is the same thing as speed of decision-- I guess, said another way, do you think that the rapid sharing of OSINT will always pressure leaders to immediately take action and make a decision? Or are those two actually sort of separate considerations? It's interesting. I'd want to think about that more, to disaggregate where speed comes into play. But we know that policymakers, of course, have a bias for action. Right? So they want to do something. The status quo is rarely what presidents or their National Security Council staffs want to maintain, because they're judged by doing something different, not keeping things the same.

But of course, that's not true of other elements of the bureaucracy, so I think it would depend. So think about the State Department, which, by orientation, wants to take a longer view. So I'd want to disaggregate that a little bit more and understand, who's the policymaker we're talking about? I hadn't thought about that before. But I do think, in general, the speed of decision making is being accelerated.

Because think about policymakers on the ground-- and I've heard many anecdotal stories of this in my interviews for the book. They go outside. They get on their phones. They see all this stuff happening on Twitter. And they say, well, how come no one from the intelligence community has told me anything about this yet? And so there's an inevitable pressure, driven by the open-source world and the news cycle, to say, why is it going to take two days to get the hand-carried document from the intelligence community when I'm getting all this other stuff in almost real time? But let me think more. And I'd be curious to know what you think about the different key policymakers and the extent to which they can be a brake, as opposed to an accelerator, on decisions in the use of intelligence.

I hadn't thought about that before. Good? All right. Thanks for a very interesting talk.

I'm a third year in the program. Perhaps more germane to the discussion, I used to be part of the OSINT ecosystem by way of CNS. Probably contributed more to noise than signal. [LAUGHING] Certainly more to snark.

[LAUGHING] That's a different conversation. So my question actually gets-- I think I'm asking a similar question to Nina but a different way. Right? Having contributed to the noise as part of the OSINT ecosystem, like the part of-- OSINT gets it wrong a lot of the time. Right? And with the Ukrainian crisis right now, I vividly remember there was a back-and-forth about, oh, there is this rumor that Ukraine has downed two Russian jets. And then there was this reversal from the OSINT community saying, oh, actually, no.

It was this tailfin. These were Ukrainian jets that were downed. And then I forgot what the final conclusion was, and all of that went viral. So I guess my question is, given how often the OSINT community is wrong, what do you think should be the mitigation from a government? Or should there be a regulatory response? Should we rely on norms? What is the role of the intelligence community in response to the OSINT community getting it wrong and things going viral? So I guess I would differentiate between getting it wrong in the fog of war and getting it wrong outside the fog of war. Because in the fog of war, I think it's more understood that a lot of things are going to be wrong. Even official government assessments are going to be wrong, because it's the fog of war.

And outside the fog of war, then you have more time for quality control, for vetting things before they go out. In terms of what's the role of the intelligence community, if the intelligence community has to fact check, be the fact-checkers of the world for everything, they won't be able to do their primary mission. Right? So that's not a sustainable model. So the question is, on what things should the intelligence community most usefully weigh in to sort of verify and where do you let things go? And so I think we're seeing that sort of triage playing out right now.

So that's sort of point one, is they can't do it all, so you've got to pick. What are we going to really care about debunking what the OSINT community is doing? Point number two is, on the issues of greatest importance, there are responsible players in the OSINT community. Give them more-- and even if you may have been contributing to a lot of noise, I'm sure you were one of the responsible players in the OSINT community. But give them more tools. [LAUGHING] You're saying no? I'm sorry. I may have led to more trouble for you.

Bellingcat, for example, does a really good job of validating information, and it's almost all volunteers around the world. So leverage that, so you have an ecosystem of verifiers within the OSINT community. Because I think, most of the time, we're assuming that it's an inadvertent mistake. It's not deliberate deception.

That's a different category of problems. But let's say inadvertent deception or mistakes, the community itself can do more to self-police. Right? Deception, deliberate deception is where I think the intelligence community can play a bigger role.

It's going to be harder to untangle. Right? And usually more important to figure out. Mina. Hi. Thank you for the talk. I was wondering if you could talk-- I don't know about this topic very much, so more background-- about the motives of the open-source intelligence community itself. It is a point of pride.

They want to get it right, and they get personal satisfaction. There's an altruism, that they can help policymakers. They would be in government if they could get the security clearance, or there are various other professional reasons. And what I thought was interesting was you mentioned the benefit of collaboration with allies, because information is shareable. So is there ever that being a motive, that you want to be working with allies, with outside governments, to kind of push policy in that direction? And if there are these sorts of personal, parochial motives, to what extent should policymakers take that into account when interpreting the analysis provided by the open-source community? Well, I want to thank you for that question, because this is a chance for me to plug: I have a deeper dive into this question of motives and who's who in the zoo in the nuclear threat world in the chapter that I have in Vipin and Scott's forthcoming book with Cornell Press.

So it is a wide range of motives. And when you think about organizational dynamics, it's not just the motives that people have. It's how they're incentivized. And so when I think about organizations, including the really responsible ones, they're naturally incentivized to focus on some areas more than others. So when I did interviews of folks in this ecosystem, for example, where are they going to get funding? Well, there's more funding to focus on North Korea and Iran than maybe some other nuclear threats.

Right? What types of threats lend themselves more to open-source capabilities? Maybe it's harder to get open-source intelligence about India and Pakistan and their doctrines, for example, as opposed to where the nuclear silos the Chinese are building are located. And so what does that mean? It means that the ecosystem, by definition, inescapably tilts the collection and analysis of intelligence it does toward some issues more than others and toward some specific parts of nuclear threats more than others. And so from an intelligence perspective, I think there is a cautionary tale that needs to be told here, which is that the whole system is going to be skewed, and the intelligence community needs to understand how open source is likely to be skewed, not because people have bad motives but because they're going to focus on the areas where they get funding and on the areas where their tools are most useful. And that doesn't mean that those are the most important threats or the most serious challenges.

It's just what they can do, and maybe they can do it better than intelligence agencies. And so what's the value proposition of intelligence agencies? What are the questions only they can answer that open-source players cannot? So I think we need to think about this from a business strategy perspective. What does a competitive landscape look like, and where is it that secret intelligence agencies can add distinctive value that open-source players cannot? So I do think this sort of tilting or skewing of the whole ecosystem is something we haven't thought enough about. But read the chapter if you want to get a sort of full listing of who's in this space and why they do what they do.

Need your copy edits, by the way. Oh. I shouldn't have mentioned that chapter.

Sorry, Vipin. [LAUGHING] Won't see the light of day without the copy edits. Oh, no.

[LAUGHING] Thanks. I'm writing a note down to myself right now. Taylor was really great putting me in this chair.

[LAUGHING] I'm going to go to the Zoom, and we've got Rich Nielsen first. Hi. Thanks so much for this talk. Rich Nielsen. I'm an associate professor.

And I'm one of those people who needed the leveling-up intro, so I'm going to ask a more navel-gazing question. What is your pitch to a PhD student who wants to study intelligence or self-reflectively here to their potential advisor who is trying to see whether it is viable for them to complete a thesis or something on this topic? Like I get the why from your talk. And I'm seeing a vision of like, oh, I can imagine now how this turn towards, for example, like open-source intelligence could be an interesting and viable thing for an open-source political scientist to study. But I mean, I study Middle East politics mostly.

And I feel like we don't study intelligence in political science as much as we should, for the same reason we don't study like the inner workings of the Saudi monarchy. It's just that they don't want us to know, so like you either have to have exceptional connection or you can't be sure whether you're making stuff up and other people can't be sure whether you're making stuff up either. And so I'm curious like, what is your how? What is your pitch to someone who wants to do this research? How are they going to do it in a way that results in answers rather than speculation? And then I guess, if we do more of that research, isn't that going to necessarily create more of this open-source intelligence? I think that cat's out of the bag anyway. But like I'm just thinking, if I teach an intelligence class-- I couldn't, but Erik might be able to or some other folks here at MIT do.

Teach an intelligence class at MIT, it's just going to create another set of MIT students who have a lot of hacker skills and go out on Google Earth like hunting for this stuff. So I'd be curious for your thoughts on both of these two questions, more about how academia can engage with some of the themes of your talk. Yeah.

So I've written about this, too, why more academics don't do work on intelligence. So my first caution would be: don't do anything that relies on FOIA requests for very sensitive information. Data is the lifeblood of what you're going to do, especially as you're writing your PhD, and you don't want to outsource that in the hopes that intelligence agencies are going to declassify the material you absolutely have to have.

So pick a topic where there's lots of information that's already been declassified. And by the way, there's a lot. So I've written a lot about the intelligence community. You'd be amazed at what's actually in the public domain, so take a look at what's already in the public domain.

It doesn't have to be from 50 years ago. There's a surprising amount of data. So I wrote a book about intelligence failures leading up to 9/11. And the Congressional Joint Inquiry had declassified a tremendous amount of information. The Justice Department Inspector General had a lot of reports. And honestly, if you read the footnotes, there's gold in those footnotes.

So if you're relying just on these agencies to declassify information for a research project, I think that's a very risky strategy. But start with something that maybe is a little bit historical, not absolutely current. And look at what's already in the public domain.

And there's a lot more of it than you might think. I think GAO reports, Inspector General reports, congressional reports, their footnotes, and their dissenting views can be treasure troves of information for researchers that aren't often mined enough for what they can provide. So there's a lot out there.

When I was doing my dissertation, I said I was going to go to Washington to interview a number of people. And one of my professors, who shall remain unnamed, said, what on Earth for? What could you learn from talking to people? I think that's not the dominant view today. But I think we are entering a moment where former senior officials in this world are much more willing to talk openly, so that's a resource as well.

Now, everyone always has their point of view. Right? They have an agenda, so you have to be really careful when you're interviewing people, particularly in intelligence because they're good at recruiting people, to triangulate what you're hearing. But I think asking people inside the community, there's really a move toward talking more about what these agencies do. So human sources, especially if you develop a trusted relationship with them, can be incredibly valuable. And so it takes time, but I did it as a doctoral student.

So I think it's absolutely possible to do. And then I talked to people right after 9/11. And I think one of the reasons why I got so much access was they couldn't talk to people on the inside. They wanted some of the failings of these agencies to see the light of day, and they didn't have recourse inside. So there are some incentives that work to your favor as an outsider for people to want to share with you things that they might not share with other people.

So I would recommend those two strategies. But don't rely on Freedom of Information Act requests. You'll be waiting forever, and you need to get your PhD done. Amy, do we think this is a kind of a two finger-- which, again, I'm using this chair. Should we be teaching OSINT methods and skills in political science PhD programs? I mean, when I was in graduate school, ArcGIS started becoming sort of a method that students were teaching themselves and working with faculty on.

Some of these techniques are certainly plausibly developable in graduate school. I will also sort of ask relatedly, there are classes within the OSINT community in the sense that, while governments have access to their proprietary information, the barriers to entry for good OSINT data are still pretty high. I mean, not everyone can afford a Planet or a Maxar subscription or have the deal that CNS seems to have with them. And there are still barriers to entry where you have sort of the Twitterati, who are just random guys in their basements who are sort of piecing things together, and then those with access to resources where the real sort of value is the access to the resources.

And could anybody with the same skills and access to resources do those things? So how do we think about PhD programs sort of incorporating this? And I think Raymond has a two-finger on this one. Three fingers? No. Just a quick plug. MIT actually has institutional access to Planet. Does it really? So everybody has access to Planet. OK.

So maybe PhD students could do that. [CHUCKLING] Well, I think there should be PhD courses with OSINT capabilities. And I think a lot of it, though, Vipin, is good analytic techniques.

Right? So a lot of the skills are transferable, regardless of what the source is. Right? So are you checking your assumptions? Are you thinking about what could disconfirm your hypotheses? How do you know this data is what you think it is? Are there other ways to get at this data? And teaching basic creativity, how do you know it when you see it? Right? And is that OSINT-specific? No. In any good research project, you have to ask yourself, well, what data could I collect that would show whether this is true or not? And that could be applied to OSINT just as well as it could be applied to any other type of data, so I don't think there's some sort of magic OSINT world that has to be taught in and of itself, with one-- well, with a couple of exceptions. I think we have to be really careful about using satellite imagery. I think there's an assumption that anybody can look at pictures, and it actually is a skill that requires more expertise. And I think a lot of people get it wrong, so that is something that requires more training and more sort of validation.

And I spent some time with some former imagery analysts that have sort of walked me through some case studies of sort of this looks like this, but this was wrong. So I think that is not so much for amateurs. That takes some more skill. But I love the idea of PhD OSINT classes where-- universities should be at the cutting edge of understanding new technologies, new data opportunities, and data tools and how scholars can use them in responsible ways to shed insight and shed light on really important questions.
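To make the "what could disconfirm my hypothesis" habit mentioned above a bit more concrete, here is a toy Bayesian sketch in Python. The hypothesis, the probabilities, and the numbers are all invented for illustration; this is not a method from the talk or from any intelligence-community tradecraft.

    # Toy Bayesian check of the "what would disconfirm this?" habit described above.
    # All numbers are invented for illustration only.

    def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
        """P(H | E) via Bayes' rule for a binary hypothesis."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1 - prior))

    prior = 0.30  # initial credence that a site is, say, a centrifuge facility

    # Evidence that fits the hypothesis but also fits many alternatives
    # (e.g., heavy security) barely moves the needle...
    weak = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.6)

    # ...while evidence that would be very unlikely if the hypothesis were false
    # is what actually does the work, and its absence cuts the other way.
    strong = posterior(prior, p_e_given_h=0.8, p_e_given_not_h=0.05)

    print(f"after weak evidence:   {weak:.2f}")    # ~0.39
    print(f"after strong evidence: {strong:.2f}")  # ~0.87

The point is simply that "checking your assumptions" can be operationalized: ask how likely the evidence would be if your hypothesis were wrong, not just how well it fits the story you already believe.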

So I'm glad MIT is now going to do this, Vipin, and you're going to teach the class. Raymond will teach it. I mean, this is where CNS had a huge advantage, because they're so good at training people in imagery analysis. Yeah. OK. Next, I've got Suzanne Freeman on Zoom.

I'll stick to the Zoom. Thank you. My question is sort of, how would you compare the differences between the US intelligence community, which is, of course, a community in an advanced democracy that has lots of laws about intelligence, and an intelligence community in an authoritarian state? And do you think that authoritarian states are affected by your five mores in the same way as democratic states? Do authoritarian states benefit from OSINT in the same way, or do they sort of face this type of change in a different way? Thank you so much for a really great talk. Oh, it's such a great question. So I'm going to give you my caveat, which is I have not done comparative studies of intelligence agencies.

That's the next book project. I'm doing an edited volume with Calder Walton, down the road from you guys at Harvard, and Christopher Andrew. So I wish I had a better answer for you about a systematic comparison of US intelligence compared to other countries, but I'll take a stab at the question.

Because it's a broader question about whether there is an authoritarian advantage or disadvantage and how they think about or deal with this open-source world. In some ways, we're in the worst of all worlds from a US perspective, because the internet is free and open for adversaries to collect whatever data they want. But it's not free and open for us and other democracies to collect intelligence on the streets of Beijing or on the streets of Moscow. And so OSINT doesn't affect authoritarian regimes in the same way, because they collect everything, because they can, because there aren't laws preventing them from doing it. So your social credit score in China is very much a part of what the government knows, and facial recognition is very much a part of what the Chinese Communist Party is using. So there is no such thing as private data in the same way that there is outside.

Does this mean, however, that they are not as good at using commercial satellite imagery or other types of OSINT? I don't know. That would be an interesting thing. They're so used to having free access to data, either through theft abroad or through surveillance technology at home. Are they as creative and adept at using new sources of information? I don't know.

I would posit, and we may be seeing this playing out in Russia right now, that there is an authoritarian disadvantage, and the disadvantage is the speaking-truth-to-power function. So one of the great things about OSINT in particular is you can have alternative hypotheses. Right? And it helps fuel diverse perspectives. And I think that what you see in authoritarian regimes is you don't really want to have diverse perspectives if you might get killed for providing an alternative point of view.
