The Time is Now: Redefining Security in the Age of AI

>> ANNOUNCER: Please welcome Executive Vice President and General Manager, Security and Collaboration, Cisco, Jeetu Patel. >> JEETU PATEL: All right, how is everyone doing? That's great. Well, I'm excited to be here. This is my third time at RSA with Cisco, and I couldn't be more delighted, because I think we have seen the importance of cybersecurity compound at a pretty exponential rate even over the course of the past three years. So, delighted to be here. I think the cybersecurity industry is about to see a pretty seismic change in the way it operates, and I'm excited to talk to you about that.

But let's start with what hasn't really changed over the course of our lifetimes, over thousands of years: we as humans have lived in a world of scarcity. What I mean by that, in practical terms, is that all of us in the IT industry have a certain contained budget every year, and each one of us is expected by our management to do a little bit more with just a little bit less. That's been the state of affairs for a long time. This is the first time in the history of humanity that you can actually start to see us entering a state of abundance that becomes a reality. What I mean by that specifically is that our ability to augment human capacity is going to be so profound, and grow at such a different scale and proportion than anything we have seen before, that if you had, say, twenty developers on your team, expanding that capacity to a hundred through digital workers is not going to be hard to do; it is going to be very plausible. If you had forty people in customer service, you could expand that capacity to maybe 250.

And this isn't just going to be within IT. Each one of us will have this even when we join companies: we might have an employee benefits package, and that package is going to allow us to have probably eight or ten assistants in the way we operate on a daily basis. You might have a personal assistant, an HR assistant, a coach, some kind of healthcare assistant, a financial planner of some sort, and all of these are going to be very plausible kinds of augmentation. What this is going to do is make this world of 8 billion people feel like it has the capacity and throughput of 80 billion people. And the question you might ask is, where is this additional capacity of digital workers going to reside? They are going to reside in digital cities that we call data centers, both public and private.

And what you will start to see here is a tremendous difference in scale and proportion in how these data centers will need to accommodate this increasing volume. In fact, the data centers themselves will need to be fundamentally reimagined to accommodate these additional AI workloads and digital workers. But it's not just the data centers that will need to be reimagined. You will also have to reimagine the underlying security infrastructure that powers these data centers, right? And the way to think about these data centers is that they are changing in two very fundamental ways: the applications are changing and the infrastructure is changing. How is the application changing? Well, we used to be in a three-tiered architecture.

You had the web tier, the application tier, and the data tier, and each sat in its own dedicated piece of hardware. You are now in a hyper-distributed environment where you have thousands of microservices that run on hundreds of pieces of hardware; it's a completely distributed architecture. So, the first thing that's changing is that the application topology itself is getting to be very different from what it used to be. And the second thing that's changing is the infrastructure that's powering the data centers.

The computational subsystems we used were largely for general-purpose computing, right? The way you would do this is with sequential processing on CPUs, and they did a pretty good job. Then all of a sudden came GPUs and DPUs, and now you have parallel processing and vector math and matrix math that can be done at a very different scale and proportion; the kinds of workloads that can now be managed are a thousand times, ten thousand times, more than what you could do in the past. But the reality is, as you see this application and infrastructure change, a couple of things still remain very hard: securing these applications is pretty hard, and securing the infrastructure is even harder. And for all of us practitioners who have been in this industry for a while, there are three specific areas of tactical concern that we have not been able to overcome.

The first one is the fact that segmentation is really hard. What does this mean? If you assume for a moment that the attacker is in your environment, then the name of the game is to contain the attacker from spreading the attack through lateral movement, and how you isolate that attacker and segment the environment is actually really hard. It used to be relatively simple in the three-tier architecture, because your application tier was tied to a piece of hardware and you could write segmentation rules against that; it was a pretty simple thing to do. As you get into this very distributed environment, where thousands of microservices run in Kubernetes containers and on VMs and all need to talk to each other, getting them segmented in an effective manner gets to be a pretty hard thing to do.

So, that's the first problem we have in the industry: segmentation is really hard, and containing the attacker from spreading through lateral movement is pretty hard. The second thing that's really hard is patching.

It takes a long time to patch. And in fact, here is an interesting statistic. The time from when a vulnerability is announced to when an exploit happens is now down to single-digit days.

It's soon going to be hours and minutes. But the amount of time it takes to patch a vulnerability is anywhere between twenty and forty-five days. So, you've got this critical window of exposure for an organization, from when the vulnerability is announced to when it actually gets patched, and it's a long period of time. The exploits are getting compressed, while the time to patch is not any faster today than it was five years ago. In fact, it's getting more and more elongated because of the number of vulnerabilities you have.

Now, it's one thing to go out and patch the infrastructure in your data center. But what if you have things that you need to patch that aren't even in your data center and weren't even designed to be patched? What happens if you had to patch infrastructure in an oil rig? What happens if you had to patch a drone? What happens if you had to patch an MRI machine? What happens if you had to patch a robot welder? These tend to be extremely difficult problems to solve and it's a hard thing to go out as a security practitioner and get a handle on this. The third thing that's really hard is updates to critical infrastructure and to dated infrastructure.

If you think about one of the largest exposures that we as a community have right now, it's that the infrastructure that powers our critical infrastructure is actually very dated. And it's really hard to update this infrastructure because there are two change control windows a year, one probably at the end of the year, one in the middle of the year. If you miss that window, you have to wait another six months.

And you tend to see a huge overhead in updating that infrastructure. So even though the software manufacturer might have issued an update that would have solved the problem, that update doesn't actually make it into the hands of customers and onto the infrastructure for a long, long time, right? All of these are pretty acute problems we have faced as an industry, and frankly, these problems weren't solvable until now. But there are three key technological shifts occurring that are going to fundamentally change how we are able to solve them.

The first one is AI. The second is kernel level visibility. And the third is hardware acceleration.

I want to take a minute and talk about each one of these. So, when you think about AI, as AI gets weaponized by adversaries, the only way to stop those attacks is to make sure you can use AI natively in your defenses, right? It's extremely hard to do that if AI is thought of as a bolt-on. The operative word here is native: is AI being used natively in your core infrastructure? So, one of the things we wanted to make sure of is that from the time you conceive of an idea for a defense, you have thought about how AI can change it. The second big area is kernel-level visibility. And the reason kernel-level visibility is really important is that if an endpoint is compromised and your traffic is encrypted end to end, the only way you know whether what's flowing through is out of the ordinary is if you can actually see what's happening, because you can't protect what you don't have visibility into.

And that's why I think eBPF is going to be a very, very critical technology. It allows you to look into the heart of the server and the operating system and see what is happening without actually being inside the operating system: you can be sitting in user space but see what is happening in the kernel. We will hear more about that in a moment.
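To make that concrete, here is a minimal sketch of the idea, assuming the bcc Python bindings and a Linux host with eBPF enabled (my own illustration, not anything shown in the talk): a tiny probe runs inside the kernel and reports new TCP connections to a script that sits entirely in user space.

```python
# Minimal sketch of kernel-level visibility with eBPF. Assumes the bcc
# Python bindings and a Linux kernel with eBPF support; illustration only.
from bcc import BPF

program = r"""
#include <uapi/linux/ptrace.h>

// Fires whenever the kernel finishes accepting a new TCP connection.
int trace_accept(struct pt_regs *ctx) {
    bpf_trace_printk("inbound TCP connection accepted\n");
    return 0;
}
"""

b = BPF(text=program)
# inet_csk_accept is the kernel function a listening socket uses to accept
# a peer; a kretprobe lets us observe it without modifying the kernel.
b.attach_kretprobe(event="inet_csk_accept", fn_name="trace_accept")

print("Watching kernel accept events from user space (Ctrl-C to stop)...")
b.trace_print()
```

The probe itself is verified and run by the kernel, while the policy and analysis logic stays in user space, which is the split Jeetu is describing.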

And the third one is hardware acceleration. We talked about hardware acceleration with GPUs; think also about DPUs, data processing units. What happens with data processing units? You can get a massive acceleration of throughput for security operations and I/O operations, with things like connection management and encryption done a thousand times faster than what you could do before. So these three core technologies really allow us to fundamentally reimagine security in the age of AI. And the way I think about this is, you know, if Amazon had been launched in 1475, it would have been a wildly failed company, because you didn't have the PC revolution.

You didn't have the internet revolution. You didn't have the logistics infrastructure to be able to do it effectively. You needed all these building blocks so that you could start imagining things that you couldn't have imagined before.

And this isn't just about building the next version of something that already exists. It's about building the first version of something completely new: a completely reimagined architecture for hyper-distributed security, where you can take security to the workload. And to talk more about that, I want to bring my colleague, Tom Gillis, up on stage so he can tell you about the possibilities of what we can do with this completely reimagined architecture. Tom, come on up. >> TOM GILLIS: Well, it's great to be back at RSA and see so many friendly faces.

You know, I was thinking about it. It was fourteen years ago when I stood up on this stage for the first time and talked about some of the trends that were happening in the industry. And what I can tell you is that in that fourteen-year period, we have not seen changes like the changes that are afoot right now. And it's those three building blocks that you talked about.

So, AI, everybody knows about that, but we are coupling the power of AI with new software technologies like eBPF and advanced hardware accelerators like DPUs and GPUs. When we put these building blocks together, the industry can transform, I mean fundamentally transform, the tools that you have been using in your environment for decades. So, I'm going to preview for you what some of these changes might look like.

So, I'm going to start with one of my favorite topics: network-based controls. Network segmentation is a foundational capability of every security stack. And if we need a refresher, think back to the Struts vulnerability at a high-profile credit rating agency: attackers found a single unpatched Apache server, and from that vantage point they made forty-eight lateral moves over the course of nine months. So, we want to stop that problem, right? We want to stop the obviously bad traffic. Segmentation is a basic capability; why is it hard? Well, if you look at an application and observe its behavior, it can look kind of random, right? Now, applications aren't random, but they are asynchronous. So, in the industry, we have been using time as a basis to understand an application.

Let's look at it for ninety days; that's got to be enough. But if you're thinking about an app that, say, schedules delivery of sheet metal, and you watch it for ninety days, you would say, okay, great, I understand it. But if it turns out the factory runs out of sheet metal on day ninety-one, that app is going to behave very differently. So, with the combination of AI agents and tools like eBPF, we can understand the application much more intimately, and we can create a more dynamic, continuous-learning approach to segmentation that really emulates what a human would do, except there are no humans involved.

So, the system can be put together in a way where the first thing it does is separate the obvious stuff: dev from prod, apps that never talk to each other. And then, as it gets confidence in each individual policy, it can make successively tighter segmentation policies.

But because it understands the application, if it sees a change, an update to the software, something moves, it relaxes those policies, recalibrates, relearns, and tightens them back up again. So, this is an ongoing process that emulates human behavior with no human intervention involved. This alone will be transformative in how we think about managing and controlling the network infrastructure, but that's just step one.
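To make that loop concrete, here is a hypothetical sketch of a continuous-learning segmentation engine (my own illustration with invented thresholds, not Cisco's implementation): observed flows build confidence toward tighter allow rules, and a change to a service relaxes its rules so the system can relearn.

```python
# Hypothetical sketch of continuous-learning segmentation (illustration only).
# Observed flows build confidence; a change to a service relaxes its rules.
from collections import defaultdict

CONFIDENCE_THRESHOLD = 1000   # flows observed before a rule is enforced (assumed)

class SegmentationLearner:
    def __init__(self):
        self.flow_counts = defaultdict(int)   # (src_service, dst_service) -> count
        self.enforced = set()                 # pairs explicitly allowed by policy

    def observe_flow(self, src, dst):
        """Record an observed flow between two services."""
        self.flow_counts[(src, dst)] += 1
        if self.flow_counts[(src, dst)] >= CONFIDENCE_THRESHOLD:
            self.enforced.add((src, dst))     # tighten: explicitly allow this edge

    def is_allowed(self, src, dst):
        """Deny-by-default once a destination has any enforced policy."""
        has_policy = any(d == dst for _, d in self.enforced)
        return (src, dst) in self.enforced or not has_policy

    def on_app_change(self, service):
        """Software update or redeploy: relax, recalibrate, relearn."""
        self.enforced = {(s, d) for (s, d) in self.enforced
                         if s != service and d != service}
        for pair in list(self.flow_counts):
            if service in pair:
                self.flow_counts[pair] = 0
```

In a real system the observations would come from something like eBPF flow telemetry rather than explicit calls, but the tighten-then-relax cycle is the same idea.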

So, there is good news and bad news here. The good news is we can do this segmentation in a very powerful and automatic way. The bad news is attackers assume that you have segmentation in place. Think about the recent attack we saw against a remote desktop application, where there was a URL that, if you appended a certain string to it, gave you access to the box, right? A very high-profile attack.

Segmentation would not stop that. And in fact, if you look into the details of that attack, the vulnerability was announced on February 13th privately and on February 19th publicly. On the 22nd, the attack happened. So, the time from when a vulnerability is announced to when attackers are on it is shrinking rapidly. Now, in our environment, you already have a mature set of tools called vulnerability scanners. These things are very, very good at assessing all of your applications and finding all of the holes.

In fact, they are almost too good. So, your vulnerability scanners could easily produce five hundred, a thousand, maybe even two thousand CVEs every single week. Imagine if we applied these same tools to the output of the vulnerability scanners and asked some basic questions. The first question I would ask is: for the vulnerability in question, is it running in memory? eBPF gives us that insight, so if it's running in memory, we know we want to pay more careful attention. Second question: is this vulnerability actively being exploited in the wild? Here AI is amazing, because AI bots can be reading chat boards, the dark web, Git.

It's constantly looking: is someone talking about this? Are they using it, or are they about to use this vulnerability? So, this allows us to create a report that we can take across your entire infrastructure. The infrastructure team can hand this report to their colleagues on the app side and say, look, we've got 3,000 Apache servers in our fleet, but there's a hundred of these Apache servers that have a new vulnerability I'm going to call Log 5J, okay? Don't worry, this is a theoretical vulnerability.

Log 5J. And if Log 4J taught us anything, it's that patching is hard. Patching takes time.

And so, App Team, while you go qualify a patch for Log 5J, don't worry, we will apply a compensating control somewhere in the infrastructure. And when that vulnerability is closed, we will automatically remove that compensating control. So, lifecycle management is built in. Now, this is an amazingly powerful capability for protecting against known vulnerabilities, but the same building blocks, the same toolset, can be trained on the tactics and techniques of attackers to protect against unknown vulnerabilities, right? These are things that I know a bad guy would do, so I can protect against both known and unknown vulnerabilities in a highly automated fashion that will transform the way we do vulnerability management. That's the second area of transformation.
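As a rough illustration of that triage and lifecycle, here is a hypothetical Python sketch (the field names, the "Log 5J" identifier, and the control format are all invented for the example): CVEs are kept only if the vulnerable package is loaded in memory and the vulnerability is being exploited in the wild, and a compensating control is tracked until the patch lands.

```python
# Hypothetical sketch of AI-assisted vulnerability triage and compensating
# controls (illustration only; all identifiers and fields are invented).

def prioritize(cves, loaded_in_memory, exploited_in_wild):
    """Keep only CVEs whose vulnerable code is actually loaded in memory
    (a signal eBPF can provide) and that attackers are discussing or using
    (a signal AI agents can mine from chat boards and the dark web)."""
    return [cve for cve in cves
            if cve["package"] in loaded_in_memory and cve["id"] in exploited_in_wild]

compensating_controls = {}

def apply_compensating_control(cve_id, rule):
    """While the app team qualifies a patch, block the attack path in the network."""
    compensating_controls[cve_id] = rule
    print(f"compensating control applied for {cve_id}: {rule}")

def on_patched(cve_id):
    """Lifecycle management: remove the control once the vulnerability is closed."""
    rule = compensating_controls.pop(cve_id, None)
    if rule:
        print(f"{cve_id} patched; removing compensating control: {rule}")

# Example with the hypothetical "Log 5J" vulnerability from the talk:
cves = [{"id": "LOG-5J", "package": "apache-logging"},
        {"id": "CVE-0000-1111", "package": "libfoo"}]
urgent = prioritize(cves, loaded_in_memory={"apache-logging"},
                    exploited_in_wild={"LOG-5J"})
for cve in urgent:
    apply_compensating_control(cve["id"], rule="drop requests matching exploit signature")
on_patched("LOG-5J")
```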

The third is the infrastructure itself. So, in a bit of an irony, firewalls are software products, and they are being targeted by nation-state actors. I say this with great confidence because I run a very large firewall business. And so, firewalls have bugs. Sometimes those bugs can be pretty severe. And the issue is not that those bugs can't be fixed.

The issue is that firewalls are high-performance, inline devices. And so, for many customers, you've got two change control windows a year to upgrade your firewalls, the 4th of July and Christmas, and if your upgrade fails on the 4th of July, well, I'll see you next year. Let's compare and contrast that with the cloud model. When was the last time you had to update Amazon, right? When was the last time you had to update Google? You don't. It's constantly updating itself.

This is where things get interesting. Imagine if we take those building blocks that I talked about, imagine if we take this distributed system where we are putting little tiny baby network enforcement points everywhere in your infrastructure, and we don't put one, we put two. There is a digital twin. Every instance, not in a lab, I'm talking about right up next to your Kubernetes cluster, right next to that analyzer, we are running a digital twin. And so, alongside version 2.0 of the software, when version 2.1 becomes available, we also load version 2.1. And there is a local AI engine that's comparing 2.1 and 2.0, and it's looking at memory, latency, and jitter. And after some period of time, say three days, five days, it says, you know what? These things are exactly the same.

Now, we cluster between those data paths and we can, without opening a ticket or dropping a packet, move the traffic from 2.0 to 2.1. We run the same algorithms again with the local AI engine to check: are these still the same? I'm not talking about a lab; this is live production traffic. We qualify the release, we say these are still the same, and we load 2.2.
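Here is a toy sketch of that qualification loop (purely illustrative; the metric names, tolerance, and observation window are assumptions on my part): the twin sees the same traffic as production, and traffic only moves when its memory, latency, and jitter track the current release.

```python
# Toy sketch of digital-twin release qualification (illustration only; the
# metric names, thresholds, and observation window are assumptions).
import statistics

def metrics_match(prod_samples, twin_samples, tolerance=0.05):
    """Compare memory, latency, and jitter between production and its twin.
    Each argument is a dict of metric name -> list of samples."""
    for metric in ("memory", "latency", "jitter"):
        prod = statistics.mean(prod_samples[metric])
        twin = statistics.mean(twin_samples[metric])
        if prod == 0 and twin == 0:
            continue
        if abs(twin - prod) / max(prod, twin) > tolerance:
            return False
    return True

def qualify_and_promote(prod_version, twin_version, prod_samples, twin_samples):
    """If the twin behaves like production over the observation window,
    move traffic to it without opening a ticket or dropping a packet."""
    if metrics_match(prod_samples, twin_samples):
        print(f"promoting {twin_version}; next release will load on the twin")
        return twin_version
    print(f"{twin_version} diverged from {prod_version}; keeping production as-is")
    return prod_version

# Example: samples gathered over, say, three to five days of live traffic.
prod = {"memory": [512, 515, 510], "latency": [1.2, 1.3, 1.2], "jitter": [0.05, 0.06, 0.05]}
twin = {"memory": [514, 516, 511], "latency": [1.2, 1.2, 1.3], "jitter": [0.05, 0.05, 0.06]}
active = qualify_and_promote("2.0", "2.1", prod, twin)
```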

So, what we are talking about is a network security device that upgrades itself. The same is true with policy verification. When you go to push that compensating control, you push it into the shadow data path first. This is all done on the machines. We will run tests and make sure: is this going to have some weird, unintended consequence? If the system qualifies it, yes, this policy is doing what we thought it was going to do, it automatically promotes it into production.
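The same shadow-path idea can be sketched for policy changes (again a hypothetical illustration, with invented flow records and rule shapes): a candidate rule is replayed against mirrored live traffic, and it is only promoted if it blocks what it was meant to block and nothing else.

```python
# Hypothetical sketch of shadow-data-path policy verification (illustration
# only; the flow records and rule shapes are invented for the example).

def shadow_test(candidate_rule, mirrored_flows, known_bad):
    """Replay mirrored production traffic through the candidate rule.
    Qualify it only if it blocks the known-bad flows and nothing legitimate."""
    blocked_good = [f for f in mirrored_flows if candidate_rule(f) and not known_bad(f)]
    missed_bad = [f for f in mirrored_flows if known_bad(f) and not candidate_rule(f)]
    return not blocked_good and not missed_bad

# Candidate compensating control: drop requests carrying a known exploit string.
candidate = lambda flow: "exploit-string" in flow["url"]
known_bad = lambda flow: flow.get("label") == "exploit"

mirrored_flows = [
    {"url": "/login", "label": "legit"},
    {"url": "/remote?exploit-string", "label": "exploit"},
    {"url": "/health", "label": "legit"},
]

if shadow_test(candidate, mirrored_flows, known_bad):
    print("policy does what we thought; promoting it into production")
else:
    print("policy had an unintended consequence; keeping it in the shadow path")
```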

So, what we are talking about is a network security system that writes its own rules, tests its own rules, qualifies its own rules, lifecycle-manages its own rules, and upgrades itself. This is a transformative capability. Now, I was struck by seeing Matthew Broderick up here, and I was remembering Ferris Bueller's Day Off. When I was in high school, I saw that movie, and also when I was in high school, I used to read Life magazine. Remember Life magazine? I remember reading this thing about jet packs, and it was NASA scientists who had pioneered these jet packs. And I'm thinking, oh my God, this is coming in ten years.

I'm going to take a jet pack to school and it's going to be so cool. You know that feeling? I feel like I was promised a jet pack and I never really got it, so I'm a little banged up over that. This is not a jet pack. This is real technology, and when you walk around on the show floor, you're going to see demos of this today. This is happening in months, not years.

So, the changes that are coming are really powerful, and this is just the beginning. I'm going to turn it back over to Jeetu with some closing thoughts. Thanks very much. >> JEETU PATEL: It kind of pissed me off when I saw Matthew Broderick, because he still has a full head of hair, and it kind of sucks being bald. But what did you think of what Tom said? Yeah? Folks, the reality is this is just the beginning, and the beauty of this is that all of these technological building blocks now available to us allow us to imagine things that we just weren't able to imagine before.

And so, it's not just about imagining how those three problems are going to be solved; the way a SOC operation works, the way security analytics works, the way networking works is going to be foundationally different from the way they used to be. Like I said before, this is not the next version of an architecture that already exists. It's going to be the first version of something completely new. Thank you all. You have been a great audience.

Take care.
