Dell Tech Forums: John Roese Keynote, Dallas General Session


Our Global Chief Technology Officer. Please welcome to the stage Mr. John Roese. Watch the stairs, John. There you go. Don't trip. Thank you.

Thank you. And the instructions were: don't trip. That's because of this little fog over there.

The stairs just disappear. So welcome. Good to see you all. Good morning, Dallas.

Haven't done this venue in a while, but it's a pretty impressive space. Again, my name is John Roese. I'm the Global Chief Technology Officer of Dell. This morning I want to accomplish two things with you. The first is to share, let's call it, our vision: where do we think the world should be going, and what's going to happen in technology? But the second thing I want to share with you is our execution.

You know, there's a saying: a vision without execution is just a dream. And right now in our industry we don't really have time for just dreams. We have a lot going on, and quite frankly, the act of taking these ideas about what AI could be or how we could harness multicloud and actually making them real is more urgent than ever.

And so this morning I'll spend about half an hour with you. Hopefully you'll end with a feeling that we have a good picture of where the world's going, the problems that need to be solved and what that means. And the second piece is that you will be on the journey with us and see that execution happening. The technology is materializing; the ability to act on these ideas is actually becoming real. So let's start with a theme that's been consistent all morning, which is that there are a set of unsolved challenges, or technology areas, that we have to navigate to be successful in our industry, whatever that industry is.

You know, there might be more than these, but over the last several years these five have materialized. If we get them right, very good things happen. If we get them wrong, we may fall behind, we may end up with excess cost or complexity, and we may be at a disadvantage. In no particular order: artificial intelligence. The challenge we have is that the current generation of AI is giving us a capability to rethink how work is done, not just mechanical work but thinking work, cognitive work. So what AI is all about right now is a fundamental rethinking of the distribution of work between people and machines. That is a gigantic challenge that we have to get right. The second is multicloud.

For a decade we have been in the cloud era. We have been unapologetic about the fact that cloud is an operating model, not a place. Five years ago people thought we were a little nutty on this. Today it's very clear that modern IT is implemented using cloud operations principles, whether that's at the edge, in a private environment, a public cloud, a colocation facility or a SaaS service. The challenge is that we have created many clouds, and today most of those clouds don't really work well with each other.

They create silos and complexity, and the goal is not to have multiple clouds; it's to have a collection of cloud infrastructures operating like a system that can become the platform of modern IT. Pretty clear vision, hard to do. Number three is edge. You don't know you have an edge problem yet, but you have an edge problem. The problem is that today most of the processing in the world is still happening in data centers. It is happening in aggregation points because, to be fair, for the last decade, maybe two decades, most of us have moved IT out of the real world and centralized it.

Remember the days when you had an email server sitting under your desk? That's gone. And the reality is that as we moved to this centralization of IT, we did it because the kind of data problems we were dealing with were large-scale, centralized data problems: ERP, CRM. You could do it there. The future doesn't look anything like that. The data will be distributed. The data will happen in the real world.

And so we have to rebuild a distribution system to enable processing to happen where the data is. We'll talk about that. The fourth is the future of work. If we do all of this technical work around building AI systems and multicloud, and all of the underlying plumbing works, but we leave people behind, or the kind of user experience our team members have is based on the previous generation of how technology worked, we will fail. And so in addition to making our infrastructures modern and making our systems intelligent, we have to rethink what the job of a person is in this environment. What is the experience of a human being in the AI world with copilots surrounding them? And then the last is security. Let me do a quick poll. Is anybody in this room completely happy with their security posture and goes to bed at night completely comfortable? Anybody?

OK, we've got one person. You're probably retired. OK, there we go. I got it.

Right. That's usually the case. Our security industry, and I'm not blaming them, I was part of it, is irreparably broken, and it's not because of any product or company. It's because we've created an environment where for every security problem there is a product, and we are perpetually reactive.

And if we don't do something different, then even with all of the other advances that we make, we will end up in a place where, even if we have AI and multicloud and edges and everything's working great, we are still behind the curve in terms of our ability to manage risk and to sleep well at night. And so fundamentally, security is one of those grand challenges that probably is foundational to every other issue we're dealing with. Now, what I just described probably sounds like a lot of work, maybe doom and gloom, but let's flip it around. I want to share with you our vision, because these are not academic subjects; these are what Dell is working on, fixing these problems, solving these problems.

So imagine you take an optimist's view and ask the question: what if we get all of these right? What if we make the multicloud world operate like a system? What if it is coherent, and data can be in the right place, and applications can run in the right place, and it can be efficient? What if we use that multicloud platform to power the fundamental rebalancing of work between people and machines, and we end up with the biggest productivity increases we've ever seen in almost every industry? What if we solve that problem not just in the data centers, but AI can happen where the data is and extend efficiently out into the edge? What if on that journey we don't just transform our technology, but we rethink how our people use that technology, we redefine work and the tools that they use and the user experience? And then what if at the end of that journey we've actually made progress on security, and our security posture starts to get better, and we can sleep better at night? That's the vision of what we're doing.

The reason we're building products, the reason we're developing technology, is because we would like that vision to be the end state for all of us. Hopefully you agree that's a pretty nice place to be, and if we can get there, that's a good thing and it's worth doing. So that's the vision; for the rest of the discussion I want to talk about execution. I'll go into each of these areas and share with you a little more about how we think about them, but more importantly what we're doing, what's emerging, how you can consume it, and how the technology will help you solve these problems.

So from an AI perspective, one of the fundamental things that happened last November was the emergence into the mainstream of generative AI systems. Now, generative AI is not entirely new; it was happening before that, and we have been working with large language models for many years. But in November of last year, a system called ChatGPT popped into existence, and people started to see it.

And I will tell you about the first time I used it. I know exactly what it is. I know exactly how it is built and what it does. But the very first time I interacted with it, I had to pause and reflect, because I realized that it had changed my assumptions about AI. I had always believed that we were a long way away from passing the Turing Test, that the interfaces were clunky, and that it was still the province of technologists. And after using that particular first generation of generative AI and seeing how it interacted with me, the aha moment was that it wasn't necessarily about a new piece of data or even a new technology.

It was that the approach of generative AI, making AI systems generate content and engage with human beings using human language, had suddenly democratized AI. The day before that, there were 200,000 people in the world who could build an AI system or use one. The day after, there were 7 billion. That is the catalyst for what GenAI is. It is not purely the technology.

It's the fact that it has become accessible to every CEO and every board of directors, every business line leader and every human being. And so the creativity about what we could do with it is accelerating exponentially and that's driving the technology innovation cycle. And it gives us tremendous confidence that as we move from just predicting outcomes with traditional AI to creating entirely new content or new experiences at a human level, we are going to be able to transform almost every industry.

And we are fundamentally going to be able to rebalance work in a way that we've never been able to do before. And that is a big deal. It has lots of challenges with it. How do we secure it? How do we know the data's accurate? How do we deal with hallucinations? All of these things have to be worked through, but we will. None of them are unsolvable and in fact, many of them are starting to be solved.

And so we're very bullish that this is not an "if it happens"; it is already happening. And clearly, winners and losers in every industry will be defined by who fully takes advantage of these technologies in their industry versus who does not.

Now, from a Dell perspective, what is our role in GenAI? Well, the first thing we're not going to do: it's very unlikely we're going to build a bunch of original foundation models at Dell. There's plenty of them out there. One of the nice things we've seen is that in the early days of GenAI, there was a path that wouldn't have been good: the core models would be proprietary and controlled, and only a few people would use them. We have seen tremendous progress in the opening of the ecosystem.

In fact, last week we announced a partnership with Meta. Meta did a really interesting thing with Llama 2: they just made it open. That was a surprise to everybody. It's a good thing.

We have things like Falcon 40B and Falcon 180B; we have all kinds. If you go on Hugging Face, the number of tools available to you is just growing exponentially. So we don't need to do that.
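As an illustration of how accessible those open models have become, here is a minimal sketch of pulling an openly licensed model from Hugging Face with the transformers library. The model id and generation settings are illustrative choices, not a Dell-validated configuration.

```python
# Minimal sketch: pull an open model from Hugging Face and generate text.
# The model id and settings are illustrative, not a recommended or validated setup.
from transformers import pipeline

# Download an openly licensed, instruction-tuned model and wrap it in a
# text-generation pipeline (recent transformers versions support Falcon natively).
generator = pipeline("text-generation", model="tiiuae/falcon-7b-instruct")

# Ask a simple question; max_new_tokens bounds the length of the reply.
result = generator(
    "Explain in two sentences why open foundation models matter for enterprises.",
    max_new_tokens=80,
)
print(result[0]["generated_text"])
```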

The problem is that to make a GenAI system work, it takes more than just a large language model. You're building a system, and the system includes data and processing and networking and data protection and tool chains. And so what we realized is that our value is twofold in this space. The first is very core to Dell.

We have to deliver, let's call it, the performance layer of AI: the servers, the accelerators, the storage systems that allow us to run these new workloads. And we've done that; I'll talk about some of those products. But the second thing we can do, the thing that makes us somewhat unique, is that we can actually assemble the system. We can pull the ecosystem together and, quite frankly, make it much easier for you to consume it, instead of you having to assemble the parts and hope they work together. Our ecosystem, which today includes Meta, NVIDIA, AMD, Intel, Hugging Face, IBM and on and on, gives us the ability to play that integration role, that organizer of the AI system, and to build this as a validated design, a thing you can consume. Now, the biggest move we made this year was with NVIDIA.

That was the first move, basically because NVIDIA is the dominant performance layer on the accelerator side. We realized that if you just buy a GPU, then buy a server, buy a storage system, buy a tool chain and try to assemble it all, you will spend a few years doing that before you do anything productive. And so we said, well, what if we just do all that work for you? And so we announced Project Helix, which was really an effort for the two companies to organize a first generation of GenAI platform and architecture. And the result of that is pretty interesting.

It allows the customer to start with Project Helix and, on day two, start building a chatbot or automating a process. You will see more of that from us. You will see more products. The XE9680 is great, but there are other products we've built below it. In fact, I think today we announced a high-performance flash storage system based on ObjectScale; that's the corollary to this on the storage side. You will continue to see that expansion of the portfolio and the ecosystem, but you will also see us play a more central role in organizing the ecosystem so it can be consumed.

The key with generative AI and the AI era is not that it exists; it's how you move fast, how you move efficiently, how you make it enterprise grade. Those are the problems we're solving. Moving to multicloud: as I said before, we've always believed cloud is an operating model. There are many ways to instantiate it, and quite frankly the best outcome is if all those clouds look like a platform, an underlay for your enterprise, and they work together. In order to solve that problem, we have done two things. The first: you cannot claim to be able to organize the multicloud if you don't work with all the cloud players. And so, as you may have noticed, we have partnerships with all of them.

You saw Satya up earlier. I mean, Google, Microsoft, Amazon, Red Hat, VMware, Alibaba, go through the laundry list. We reached out and said, we don't want to compete with you; we want to organize you, we want to enable you. And the result is that today we have by far the largest cloud ecosystem of partners, partners that are kind of frenemies with each other, maybe even competitors. But with us, they're all partners, and that's a very powerful starting point.

But it's not enough, because what we know is that even if you have a path to talk to all of the cloud providers and cloud stacks, the real challenge isn't having access to them; it's making them work together. And we see statistics showing that cloud silos, lack of interworking and different operating models are actually a significant impediment to people adopting a multicloud environment. And so the second part of our activity was: how can we build technology that makes it work like a system? We've been very busy. I won't drain all of this, but some examples: we announced Dell APEX Storage for Public Cloud, the ability across Amazon, Azure and on-prem to have a storage layer that is consistent.

Now why would you want this? There's going to be storage that's purely in a public cloud, and there's storage that's purely in a private environment. But there are certain things you actually don't want to constrain to one cloud: for instance, your systems of record. If you have a customer database or a core system of record, and you're using multiple clouds, but you put that system of record in one cloud and that's the only one that can use it, you either have to replicate it, duplicate it or fragment it. This approach allows you to land it on a neutral host, maybe in a colocation facility, maybe in one of those clouds, but under your control and accessible to any of the clouds that you use. It's a different class of storage, but it solves a unique problem that the clouds themselves will not solve.

The second is Dell APEX Protection Storage. This was our first entree into multicloud; we've been doing it for quite a while. Today we have 17 exabytes of customer data stored in public clouds as data protection, cyber recovery and cyber vaults, and we have many more exabytes sitting in private environments. But over the last year we've expanded the portfolio so that now, whether it's Amazon, Google, Microsoft, Alibaba or on-prem, you can build an independent data protection layer across your multicloud.

Why do you want to do that? Well, let me give you some advice: don't protect a cloud with itself. It makes a lot of sense to use a different infrastructure to protect the infrastructure you're trying to protect. That's very hard to do within a single cloud environment.

This enables it. Number three: we are big believers in hybrid. In fact, I think everybody has now figured out that not everything will live in a public cloud data center. You need stuff on-prem. You need stuff in other data centers. And so we have become, very rapidly, the vehicle to hybridize everybody. Whether you're hybridizing Azure Stack or Red Hat or VMware, you are probably doing it on a Dell Technologies component.

And we announced a number of additions to this, including the Azure announcements and work with Red Hat, and we'll have more. But one of our jobs and values is that any cloud and any cloud application should be able to be hybridized, and the vehicle to do that is almost always going to be Dell, which is a very useful place to be, and it makes it easy for hybridization to occur. Fourth is Dell APEX Navigator. Storage in the multicloud world is non-trivial. If you try to organize storage by just creating more storage pools, that's not sufficient, especially for the developer. What Navigator does is say: what if we had a single pane of glass, not only to know where all of our storage is, but also to organize how it presents itself to the developer? So a developer can access modern cloud-native storage through a cloud storage interface in Kubernetes, without having to care whether it's in Google or Microsoft or Amazon, sitting on our cloud storage layers, or sitting on-prem.
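To make that developer experience concrete, here is a hedged sketch using the standard Kubernetes Python client: the developer requests a volume against a storage class and never references a specific cloud. The storage class name "apex-block" is hypothetical and used only for illustration.

```python
# Sketch: a developer requests storage through Kubernetes without naming a cloud.
# The storage class "apex-block" is a hypothetical name used for illustration.
from kubernetes import client, config

config.load_kube_config()  # use the developer's current kubeconfig context
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="orders-db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        # Where this lands (public cloud, colo, on-prem) is the operator's concern.
        storage_class_name="apex-block",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("Claim submitted; the storage layer decides placement behind the scenes.")
```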

The advantage of having that single pane of glass isn't obvious until you realize that multicloud storage as a system needs something to orchestrate it, which is what Navigator is. We've been busy in other areas too. We've been doing PC as a Service for a while, but this year we rolled our PC as a Service offering into APEX. Why does that matter? Well, it means that if you start, say, using an APEX storage service in the cloud, and you've set up your account and you're working with us, and tomorrow you decide you really don't want to manage those PCs for your people, you're already in the system. You can quite frankly just order a service, which results in PCs being deployed to your people, us managing them, and you not having to worry about them.

It's a very powerful user experience when we think about it. And the last one to share with you: we announced something called Dell APEX Compute. It turns out that when you are doing things in IT, there are things that are very repeatable and there are things that are bespoke, and no matter what you do, the one thing you will need is compute. And what we realized is that it may be very useful to deliver that lowest level, that lowest common denominator, the compute layer, as a service through APEX. Now think about this: let's say you have some really bespoke, proprietary thing you're trying to do.

Not bad proprietary, just unique to you. And you look at who's going to do the work, because it's a snowflake; you're the only one in the world doing it. You would think you'd have to do everything. This layer says, well, no. If you don't want to worry about the compute and you want to just consume it as a service, we can deliver that compute infrastructure, and then you can start your work above it. It saves you a tremendous amount of time. It gives you elasticity. It gives you access to the latest technologies.

It shifts the burden to Dell. And by the way, it's a fundamental underpinning for every workload you're going to cloudify, every workload you're going to deploy. Again, another APEX offering. The result: it's a long list, but over the last two years we've rapidly built the broadest portfolio for as-a-service consumption of IT across our entire ecosystem. Shifting to edge: in the edge world, the problem you will have is twofold. One, you don't have a modern edge yet.

If someone says, "I want to run AI in a factory by deploying containers onto a platform," you don't have one of those. You might think you do, but you really don't yet, and we're going to have to build them. The second problem you have is edge proliferation, because today everybody who instantiates a service in a cloud or in a data center looks at that service and, if it needs to be in the real world, builds an edge. The problem is you could very rapidly have five or six or ten different edges in something like a factory or a retail environment. We have retail customers that had five different edge boxes sitting in a store.

One for HVAC management, one for point-of-sale management, one for video surveillance, one for the general-purpose applications. I even found one that was doing digitized time cards; it was a computer just sitting out there. That is unacceptable if we allow it to happen.

If every workload that needs to be in the real world requires its own piece of hardware and its own bespoke infrastructure, that does not work. And so what Dell realized is that the complexity that's really in the way of people building edges needs to go away. And what we have done over the last four years, starting with a research project, moving to something called Project Frontier, and now Dell NativeEdge, is build what we think is the right architecture for a modern multicloud edge. Very simply put, we have separated the physical capacity pool of the edge, the platform, from the logical workloads.

And the idea is that no matter which edge you want to instantiate (Anthos, Arc, VMware, Red Hat, IoT Edge), if it's built in a way that it's containerized software or can run in a VM, you ought to be able to deploy it onto the same system, the same infrastructure. Now, that sounds really good. It's hard to do; it's taken a few years, and we've done some acquisitions, but the product now exists. As for the idea behind the product, there were big problems we had to solve. First, if you're going to put it out in the real world, the real world is not the data center.

And so there were two really important problems to solve. The first was that you had to implement all of what I'll call the zeros at the edge: zero trust, zero touch, zero IT, because you don't have many people out there and it's a highly insecure environment.

So we architected NativeEdge so that when we ship the boxes, you plug them in, they phone home and they self-configure. More importantly, when they self-configure, their default operating state is least-privileged access, zero trust. You can't run anything on that box. Even with physical access to it, nothing will run on that system until the overall orchestration layer decides it has permission to run there. Sounds extreme, but at the edge that's important, because these boxes sit in critical areas without a lot of physical security.
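To illustrate the principle (not the actual NativeEdge API, whose interfaces are not reproduced here), this is a minimal Python sketch of a default-deny edge node: nothing runs unless the orchestration layer has explicitly authorized it. All class and method names are hypothetical.

```python
# Minimal sketch of the default-deny idea described above.
# Class and method names are hypothetical, not the NativeEdge API.
from dataclasses import dataclass, field


@dataclass
class EdgeNode:
    """An edge box whose default operating state is least privilege: run nothing."""
    node_id: str
    authorized: set = field(default_factory=set)  # populated only by the orchestrator

    def authorize(self, workload: str, signed_by_orchestrator: bool) -> None:
        # Only the central orchestration layer may grant permission to run a workload.
        if not signed_by_orchestrator:
            raise PermissionError("authorization must come from the orchestration layer")
        self.authorized.add(workload)

    def run(self, workload: str) -> str:
        # Default deny: even with physical access, an unauthorized workload will not start.
        if workload not in self.authorized:
            return f"DENIED: {workload} is not authorized on {self.node_id}"
        return f"STARTED: {workload} on {self.node_id}"


node = EdgeNode("store-0042")
print(node.run("video-analytics"))                              # denied by default
node.authorize("video-analytics", signed_by_orchestrator=True)  # orchestrator grants permission
print(node.run("video-analytics"))                              # now allowed
```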

The second problem we had to solve: if you're going to be a multicloud edge, how do you make it easy for someone writing code in Amazon or Google, or in a VMware or Red Hat environment, to actually decide that some of that code should live on this platform? To solve that problem, earlier this year we bought a little company called Cloudify. If you're familiar with Cloudify, they're probably the best multicloud orchestration framework, both closed and open source, in the world. Their model was to orchestrate between clouds.

It turns out that if you can orchestrate between all the clouds, you can add one more cloud, the edge platform, and orchestrate any of those workloads to the edge. And so we've solved these two problems, and the result is that we now have a highly simplified environment. If you want to deploy a new video surveillance package out to 10,000 stores, you create a blueprint and you automate the process; quite frankly, there's no manual intervention. If you want to remove an edge service, it is a software activity.
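As a rough sketch of what blueprint-driven rollout means in practice: one declarative definition, applied programmatically across the whole fleet. The blueprint fields and the deploy helper below are hypothetical, not the Cloudify or NativeEdge interfaces.

```python
# Hypothetical sketch of blueprint-driven rollout; fields and the deploy() helper
# are illustrative, not an actual orchestrator API.
blueprint = {
    "name": "video-surveillance",
    "image": "registry.example.com/surveillance:2.4",  # placeholder image reference
    "resources": {"cpu": "4", "memory": "8Gi"},
    "target": {"site_type": "retail-store"},
}

stores = [f"store-{i:05d}" for i in range(10_000)]  # the fleet to roll out to


def deploy(site: str, bp: dict) -> str:
    # In a real orchestrator this would be an API call; here we just record the intent.
    return f"{bp['name']} scheduled on {site}"


results = [deploy(site, blueprint) for site in stores]
print(results[0], "...", f"{len(results)} sites total, zero manual steps per site")
```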

But more importantly, the platform acts as a platform. It is not tied tightly to any one application, so it can support what you do today, but it can also be optimized and delivered to support what you need in the future: AI workloads, other types of workloads. We think this makes tremendous sense. We have had innumerable conversations with customers around the world asking how to do edge right, and the conclusion is that you need a platform onto which any of the clouds in the world and any software package can be deployed, in a DevOps-style, fully automated environment, with all of the principles of zero touch, zero trust and zero IT, so that it can actually be operationally efficient.

So we built it. Our challenge, and I need your help on this, is that no one else did. We're the only one that does it this way.

Maybe MEC in telco looks a little bit like this, but most edges are mono-edges: you do Amazon via Outposts, you do Google via Anthos; by the way, we will enable those too. But we need you to start thinking about your edge strategy. Is it just cloud extension, or do you view edge as a strategic platform that you build and then use across whatever cloud services you choose? That's a decision every customer is going to need to make. The good news is, imitation is the best form of flattery, I guess: once we announced this, at least four different companies announced a similar intention. So maybe we are on to something here.

OK, the future of work. I mentioned people, but let's talk about the PC for a second. First of all, we are very proud of the PC industry.

We kind of created it. We're very good at it. You know, the PC gets accused of dying many times, and then it comes back to life and, as COVID showed, actually becomes the most important tool we could imagine. But in order to be good in this space, you have to be good at innovation along multiple vectors.

The first is features. You should expect from us a continuous stream of innovative features flowing into our products. For instance, the Latitude 9440 has a mini-LED backlit keyboard. That doesn't sound important, but it dramatically changes the battery life of the system, and we're the first ones to do it.

We have haptic collaboration touchpads, where we realized, hey, instead of having to leave the touchpad to mute somebody or deal with a collaboration tool, why don't we just put those controls right there? It turns out that's a really useful feature, and you should expect that every time we iterate our products, more of those useful features show up. We're good at feature innovation, and we're good at sustainability innovation. Over there you'll see Project Luna, a concept exploring how we can go completely closed loop, how we can have a robot disassemble and recycle our products. We've been leading the curve there, and even the newer products are now 75% recycled aluminum.

That's actually hard to do, because our goal is to get to a closed-loop model, to really have zero environmental footprint through the life cycle of that system. From a security perspective, we continue to innovate. Hopefully you feel that our products are extremely secure; we would argue they are some of the most, if not the most, secure PCs in the world. But the thing to remember is that we don't just do things in the PC, below the OS, in SafeBIOS; we also do things in our supply chain.

Did you know that when you buy a product from Dell, if three years from now you think somebody has physically modified the product, changed a chip, adulterated it in some way, you can come to us and we'll give you an off-box digital signature and fingerprint of what was assembled, the actual components in that system, and you can compare them to determine whether somebody has adulterated it? Now, most of you are saying, "I don't have that problem. I'm not an intelligence agency or the Department of Defense." But it turns out these attacks are becoming more and more likely, and in some regulated industries the ability to guarantee integrity is important. Even if you don't use that feature, I use it as an illustrative example, because our security isn't just features on a box; it's embedded into the entire company. We have some very advanced end-to-end security approaches, because we do sell to intelligence agencies and defense departments and large banks, and you should feel comfortable that if your security needs change, we're probably already ahead of you, with capabilities that can help.
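The underlying idea is simple to sketch, even though Dell's actual verification service is more involved and relies on vendor-signed data: capture a canonical inventory of components at assembly time, keep its fingerprint off the box, and compare later. The manifest fields below are made up for illustration.

```python
# Sketch of off-box component verification; the manifest format is hypothetical,
# and a real service would sign the digest rather than just store it.
import hashlib
import json


def manifest_digest(components: dict) -> str:
    """Canonicalize the component inventory and return its SHA-256 fingerprint."""
    canonical = json.dumps(components, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()


# Inventory recorded at the factory (illustrative fields and serial numbers).
as_built = {"nic": "serial-A123", "ssd": "serial-B456", "bios": "1.14.0"}
factory_digest = manifest_digest(as_built)  # kept off the box from day one

# Inventory read from the same machine three years later.
as_found = {"nic": "serial-A123", "ssd": "serial-ZZZ9", "bios": "1.14.0"}  # SSD was swapped
print("intact" if manifest_digest(as_found) == factory_digest else "component mismatch detected")
```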

And then lastly, on AI: the PC is going to play a significant role in AI in two dimensions; one is very short term and the other is longer term. The short-term one, here's a statistic for you: every company in the world that does generative AI, wherever they do it, will create a new organization. Some human beings have to do that work. At Dell, we created a Chief AI Officer; Jeff Boudreau has just been appointed to that role.

He's building an organization, and that organization is not just marketing people and salespeople; it is actually technologists, because there is technical work necessary to do the data conditioning, to build the models, to experiment, to understand these systems. So the first role we play, which we've played for a very long time, is that we are really good at building systems that engineers use, Precision workstations for instance. And it turns out you have to ask yourself: does your AI team have the right tools? Do they have the right peripherals? Do they have the right processing? I've been through this. When I was playing around with Stable Diffusion, I tried to do it in a cloud service. It just took forever.

And then I realized I have a Precision workstation. I just downloaded it, ran it locally, and I could run iterations much faster, orders of magnitude faster. I could get much more comfortable with the technology because I had the right kind of tool.
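For anyone curious what "ran it locally" can look like, here is a minimal sketch using the open-source Hugging Face diffusers library on a local GPU; the model id and settings are illustrative and not tied to any particular workstation configuration.

```python
# Minimal sketch of running Stable Diffusion locally; model id and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

# Load the weights once onto the local GPU (a workstation-class card is plenty).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Every iteration runs on local hardware: no queue, no data leaving the machine.
image = pipe(
    "a photo of a server rack in a modern data center",
    num_inference_steps=30,
).images[0]
image.save("test_render.png")
```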

But the bigger thing coming at us is not the development of AI but the use of it. Here's a prediction for you: by next year, we will see a proliferation of copilots. We're already seeing them from Microsoft and others. What that means is that there will be AIs that are very tightly coupled with you. They are there to assist you, as an individual, in doing things. And those AI copilots are incredibly resource intensive.

We all know that it is impossible to do all of that work in a cloud. Now, I'll give you the math. We expect that a typical copilot deployment, at the low end, is 20 teraflops of compute capacity per user. Now, that sounds like a big number, but you can do that on a PC, no problem.

But if you multiply that by a billion people, there is no data center in the world big enough to do it. So we know that if you want copilots to work, you have to distribute them. They have to actually be close to the user.
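To put that math in one place, here is a back-of-the-envelope check using the figures from the talk (20 teraflops per user, a billion users); the exascale comparison point is an assumption of roughly one exaflop for today's largest supercomputers.

```python
# Back-of-the-envelope check of the copilot math above; figures are the talk's
# assumptions, not measurements.
per_user_flops = 20e12   # ~20 teraflops of copilot compute per user (low end)
users = 1e9              # a billion users

total_flops = per_user_flops * users
print(f"total: {total_flops:.0e} FLOPS")  # 2e+22 FLOPS, i.e. 20 zettaflops

exascale_system = 1e18   # roughly one exaflop, the scale of the largest supercomputers
print(f"equivalent to ~{total_flops / exascale_system:,.0f} exascale systems")  # ~20,000
```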

The other reason to distribute them is privacy. If the copilot is on your PC and under your control, you will trust it more. And so the second thing that's going to happen in the PC world is that the PC is going to evolve from being a platform for you, your applications and your data, to being a platform for you, your applications, your data, and all of the AI copilots that help you do your job. And that will be incredibly transformational for what the PC actually is and what performance it needs. That journey will start probably by early next year, and it will play out over the next PC cycle.

The last thing I'll leave you with is security. I mentioned that we have an irreparably broken security model. I was part of creating that; I apologize. It is what it is, and it's what we have to work with today.

Don't stop doing what you're doing today. But it isn't good enough. And so about two years ago, I led an effort to figure out what Dell ought to do. What should our point of view be here? What we concluded is that the only path forward was a radical architectural shift. You could not go and find a magic product that would make all the pain go away.

It didn't exist. You couldn't make a firewall better and solve this problem. We had to change the way security worked.

And so we went out and talked to everybody, the Department of Defense, banks, customers everywhere, anybody who had an opinion. And it turns out that the architectural shift has been sitting in front of us for a long time. It's something called zero trust.

Now, most of you probably just had a visceral reaction to the words "zero trust," because for 10 years we've been marketing it incorrectly. We've been lying about it. We've been implying it exists. I will tell you right now, there's exactly one place in the world that operates in a full zero trust state across its infrastructure: the National Security Agency of the United States. Nowhere else.

No matter what you think you're doing, you are not doing zero trust end to end across your enterprise today. However, we think it's achievable, because there are really only three shifts that have to happen to go from traditional security to zero trust. The first: you cease allowing unauthorized, unknown entities to exist in your environment. A core principle of zero trust is that every device, every person, every piece of data and every application is authenticated and continuously authorized. Very hard to do, but if you do it, things get very different from a security perspective.

The second is that you change your approach to policy. In the traditional world of security, there are only three things: the known good, the known bad, and the unknown. Today we obsess about preventing the known bad and finding more known bad in the unknown.

That's why we're reactive. Zero trust defines the known good. It defines acceptable behavior, enforces it with policy, and prevents everything else.

And lastly, from a threat management and detection perspective: instead of observing our networks from the outside and trying to find the needle in a haystack, in a zero trust environment, where everything is a known entity and all behavior is defined, anything else is an immediate threat you have to act on. And so threat detection becomes deeply embedded.
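Here is a tiny sketch of that "define the known good, deny everything else" shift, assuming a hypothetical allow-list of (identity, action, resource) tuples; real zero trust policy engines are far richer, but the default-deny logic is the same.

```python
# Sketch of known-good policy enforcement; the allow-list entries are hypothetical.
ALLOWED = {
    ("payroll-service", "read",  "hr-database"),
    ("payroll-service", "write", "payroll-ledger"),
    ("backup-agent",    "read",  "payroll-ledger"),
}


def evaluate(identity: str, action: str, resource: str) -> str:
    """Known-good behavior is allowed; anything else is treated as an immediate threat."""
    if (identity, action, resource) in ALLOWED:
        return "allow"
    # There is no 'unknown' bucket: unlisted behavior is denied and escalated.
    return "deny-and-alert"


print(evaluate("payroll-service", "read", "hr-database"))   # allow
print(evaluate("payroll-service", "read", "crm-database"))  # deny-and-alert
```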

Now, those are very important shifts, and they make tremendous sense. They're just incredibly difficult to do. And so our role in the world, and our strategy today around zero trust, is to do everything in our power to make that easier. The biggest project we've announced is Project Fort Zero. We're working with the US government.

Over three years of work, they had built a zero trust reference architecture. Well, that's interesting: a zero trust reference architecture for a 1,000-user cloud, 151 controls, the whole stack. I had never seen one of those before, until I walked into their data center and put my hand on seven racks of gear doing real zero trust.

It turns out 70% of the components in that system come from Dell. And so I asked the DoD what we could do, and they said, "You can industrialize this and sell it back to us." We said, "Can we sell it to other people?" "Sure, sell it to everybody." Now, where we are in the journey: it's a project. I was there two days ago in DC doing a program review.

We're about 81% complete. We'll go into certification in the next two months. And the result will be that, for the first time, we will have a commercially viable zero trust data center architecture for some pretty demanding customers, in this case the Department of Defense and a few other departments of defense around the world. It's probably not something most of you can use right now, because it's too much. But what it does is prove that zero trust isn't a theory.

It exists. And then the rest of our journey is, OK, how do we incrementally get you there? How do we make decisions about products and components that move you toward zero trust? What I will tell you is that you have to decide today what your security strategy and North Star are going to be. Your North Star right now, and I'm just speaking for you, I might be wrong, is to not be on the front page of the New York Times and to not go to jail.

That is literally your goal in security. And I tell people that is not a goal. That is an expression of grief. That is a bad thing.

And so what you need is a better direction. What we realized is that if zero trust architecture can become real, and there's a path to get there, then let's start moving in that direction. Let's make decisions that incorporate more and more of the zero trust principles and architectures, so that we start bending the curve, because we know that when you get there, you have transformed security. So let me wrap up and go back to the beginning. We talked about these five things.

I told you up front that vision without execution is a dream. We're not in the dream business; we're in the delivery business. Hopefully what you've seen is a vision, and our vision is very simple. Wouldn't it be great if, in the future, the multicloud world acted like a platform and became a system that enabled you to efficiently use it as infrastructure for everything? Wouldn't it be great if that powered the AI revolution that's going to literally define every one of our businesses? Wouldn't it be great if we didn't just do that in data centers, but did it anywhere the data exists, with edge, and did edge as a platform, not edge as a point product? Wouldn't it be great if we rethought how our people use these systems, created the AI PC experience and made copilots work well?

Wouldn't it be great if we enabled our developers, and then, on that journey, as we start to bend the curve toward zero trust, our security posture improved over the next decade instead of going the wrong way, as it has for the last decade? That's the vision of Dell. Hopefully you believe that we're not just dreaming, because there's a lot of execution going on. And like I said, just yesterday we announced a new storage system to complement the XE-series servers. You will see more of that execution from Dell as we go forward. So I'm glad to share that with you.

Hopefully it was useful. Thank you for your attention and we're going to transition. So thank you very much.
