Choosing the Right GCE Instance Type for Your Workload (Cloud Next '19 UK)

Hello — thanks for being patient; we gave everybody a couple more minutes — and thank you for finding this place. My name is Aaron, I'm a product manager on Google Compute Engine, and I'm joined today by Patrick Hansel, a software engineer from Improbable. We're here to talk to you about choosing the right GCE instance for your workload.

First off: the survey will open 20 minutes into the session, and your feedback is always appreciated. Each year we try to bring the most relevant content to you, at the right level, to make sure you get the most out of these events, because we know they're not easy — they take your time, travel, et cetera. But I would ask you to be gentle, since both Patrick and I are substitute speakers: this was not our original session, and we were pulled off the bench for it. So if you're going to give us a survey, please be gentle.

Jumping in, we're going to start with a question: what does your workload look like? If I asked everyone in the room what their workloads look like, I would probably get hundreds, perhaps thousands, of answers. They are different applications, they run on different infrastructure, and they have potentially different processing architectures, like CPU or GPU. Some of you might be focused on running high-performance web applications; others might be looking for a way to run ERP. Some might be looking for an easy way to burst into the cloud to free up availability on-prem; some might be trying to move completely from on-prem into the cloud. GCE can address each of those concerns, and when we get to Patrick, he will talk about his primary concern, which is multiplayer gaming.

With all these different use cases for the cloud, I'd like to ask another question: how do you find the process of matching your needs to the capabilities of the cloud? Is it easy to look at the different offerings from every provider and identify which VM families fit your workload? Is it easy to understand where the availability you need is? Is it easy to understand whether or not it will fit within your budget? I'd bet that if you look at all the variety of offerings out there today, the answer is not always easy. You can typically find yourself looking through catalogs of instance types, trying to compare different performance ratios, capacity, or capabilities — not to mention the different price options that are out there. And what happens when your needs change? Do you have to repeat the whole process again?

The GCP team believes it doesn't have to be this way, and that is why we have always taken a very different approach, from day one. We understand that you need simplicity to make your decision easy — you don't want to spend hours studying a product catalog to figure out what you need. We know that you need flexibility, because your needs and your environment change from day to day, or perhaps over the years. And finally, you've always wanted to prioritize efficiency, because in the cloud you shouldn't be forced to take more resources than you need — you go to the cloud so you can say, "I'm only going to buy as much as I need, and have it be available whenever I need it."

And of course, you should be able to leverage the intelligence of the platform to make sure you stay within your budget. GCP has always had the most flexible and most cost-effective of the cloud architectures, because of course it was built on Google.

This year we introduced workload-specific families, so that we could optimize and provide more choice to customers for things like compute-optimized or memory-optimized. You have the general-purpose families, with N1 and N2, which we introduced this year. These families are simple, flexible, and the best fit for most workloads. They are available either predefined — you just pick one of the shapes we've decided on, with a set number of processors and a set amount of memory — or you can choose any size you want: the ratio of CPUs to memory doesn't have to be something that we decide, it can be something you have decided on your own that best fits your workload. You can also use that to cost-optimize for licensing concerns if you wish.

Then, I think just a few months ago, we introduced compute-optimized C2. These are the best fit for compute-intensive workloads, including AAA gaming — that's what we're here to hear about from Patrick — electronic design automation, HPC, those kinds of things. They come with a 3.8 GHz all-core turbo and offer 40% higher performance compared to standard GCE general-purpose families.

And finally we'll talk about our memory-optimized M2 family, which is the fit for memory-intensive workloads like SAP HANA databases, in-memory analytics, those kinds of things, and goes from roughly 15:1 up to 25:1 ratios of memory per core. They can be configured up to 12 terabytes today, but we have much larger sizes on the near horizon for anybody who's looking for that.

So now that you know what the family lineup looks like, how do you pick the right shape for your application? And remember — it's supposed to be simple.
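As a purely illustrative sketch of that decision — not an official tool or API, with invented thresholds but real family names — a first-pass mapping might look like this in Python:

```python
# Illustrative only: a toy mapping from workload profile to GCE machine family.
# The family names are real; the thresholds below are invented for the example.

def suggest_family(cpu_bound: bool, memory_gb_per_vcpu: float) -> str:
    """Very rough first guess at a machine family for a workload."""
    if memory_gb_per_vcpu >= 14:
        # SAP HANA-style, very memory-heavy workloads
        return "m2 (memory-optimized)"
    if cpu_bound:
        # game servers, HPC, EDA, other per-core-performance-sensitive work
        return "c2 (compute-optimized)"
    # everything else: start general purpose, use custom machine types to fit
    return "n1 / n2 (general purpose)"

print(suggest_family(cpu_bound=True, memory_gb_per_vcpu=4))    # c2 (compute-optimized)
print(suggest_family(cpu_bound=False, memory_gb_per_vcpu=24))  # m2 (memory-optimized)
```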

So first, let's review what is by far the majority of the workloads we see customers using — and our data shows this fits most of the applications in your environment. These VMs are designed to be simple, flexible, and to fit the majority of your operational tasks. In most cases they shouldn't be something you have to spend a lot of time thinking about. They have a wide range of memory-to-CPU ratios — up to 8 gigabytes per vCPU — and of course they have custom machine types, which I mentioned earlier, where you can scale things however you want, including considerably more memory; like I said, you might want a lot of memory against a very small number of cores to optimize for licensing.

With custom machine types you can get the exact fit for your VM's capabilities. Put simply, they give you the power to create customized instances. You're not restricted to what you would normally see for a particular type of balance, because of the infrastructure that Google runs and that GCE utilizes. You can take custom machine types and extend the memory for workloads where you say, "I really want to make sure I have a lot of memory." Normally, on-prem — in an on-prem virtual or on-prem bare-metal environment — you would be restricted to a particular set of ratios, but with custom VMs we give you the freedom to go as far as you would like, farther than people are sometimes comfortable with: for a given number of CPUs you can really scale that memory up if you want to.
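For the custom machine types just described, the shape is encoded in the machine type name itself (the documented `custom-VCPUS-MEMORY_MB` convention, with an `-ext` suffix for extended memory). A minimal sketch, with made-up shapes and only a simplified granularity check:

```python
def custom_machine_type(vcpus: int, memory_gb: float, extended_memory: bool = False) -> str:
    """Build a Compute Engine custom machine type name, e.g. 'custom-4-65536-ext'.

    Memory appears in the name in MB; this sketch only checks 256 MB granularity
    and ignores the per-family minimum/maximum sizing rules.
    """
    memory_mb = int(memory_gb * 1024)
    if memory_mb % 256 != 0:
        raise ValueError("memory must be a multiple of 256 MB")
    suffix = "-ext" if extended_memory else ""
    return f"custom-{vcpus}-{memory_mb}{suffix}"

# A licensing-friendly shape: few cores, lots of memory via extended memory.
print(custom_machine_type(4, 64, extended_memory=True))  # custom-4-65536-ext
print(custom_machine_type(8, 32))                         # custom-8-32768
```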

Another feature that we like to rave about — and it's actually one of my personal favorites — is rightsizing. Rightsizing helps customers pay for only as much as they need. What happens is that we take eight days of your usage — the information we get from Stackdriver — and we make a recommendation to you about what the size of your VM should look like. Because we scale and charge per core and per unit of memory, if you're using too much or too little we can say, "We're not seeing the best fit for you here; we think you could save money on this particular VM." This happens all the time — we are actually trying to make it so that you pay us less on a regular basis, because, like I said, we believe you should be able to take advantage of the intelligence built into the platform.

General-purpose VMs also adapt as your workload changes. Let's say that for a while you need to take advantage of high I/O — say you want to attach a local SSD to get very fast performance for some period of time — or maybe you decide, "Right now we're just doing backups, so the performance I'm getting out of persistent disk is okay." Feel free: you can change either way on the fly, and it will recalculate your costs for you.

We also believe that decisions about where you spend your time should, where possible, be made for you, and an example of this is flexibility as your workload changes over time. What I mean by that: let's say you had an application running on Haswell — an Intel architecture from a while ago — but you would like to run Skylake, which is fairly new, and you're trying to get the best out of it. We allow you to choose the minimum CPU platform that your VM runs on, and as our environment grows over time and we add new architectures, you can go in and change that, or we can automatically change it for you. Or you might say, "Maybe I want to run on the lower-priced family, because I've tested it and I get better performance out of Skylake compared to Cascade Lake," or vice versa. You can put that information into your VM and we will automatically adjust it for you — no reboots, nothing; we will just move it around with the magic of the cloud.

You can also do this with GPUs, which is nice: if you select a particular minimum GPU family, you can take advantage of the same thing as we introduce new families — because, for instance, you might have optimized for a particular driver or something else, and you need your application to run on a K80 instead of a P4.
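The minimum CPU platform described above is just a field on the instance resource. As a hedged sketch — the field names follow the Compute Engine REST API, while the VM name, zone, and machine type are placeholders — the relevant part of an instance definition looks roughly like this:

```python
# Sketch of the relevant fields only -- disks, network interfaces, etc. omitted.
instance_body = {
    "name": "my-skylake-vm",                                           # placeholder name
    "machineType": "zones/europe-west2-a/machineTypes/n1-standard-8",  # placeholder shape
    # Ask the scheduler for Skylake or newer; the VM can later be moved to
    # newer platforms without changing the application.
    "minCpuPlatform": "Intel Skylake",
}
```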
This year we also introduced N2, which is something Patrick will talk about today, and it is the next generation of our general-purpose family. The N2 types offer a little more flexible sizing and can go up to 80 vCPUs. They also run on Cascade Lake, with a much higher base frequency than our N1 family. They let you attach GPUs, memory, and storage and give you the same flexibility you had on N1, just with better performance — in many cases what we've seen is twenty to thirty percent. So if you are more price-sensitive, or you're looking at the potential of preemptible instances (which I'll get into in a minute), N1 might be a good fit for you, because performance isn't your main concern and you're just looking to optimize for cost; but if you're looking for a general-purpose family optimized for performance, N2 might be a better fit.

And then, in alpha right now, we have N2D — I don't know if you've noticed, but cloud vendors are very creative when they come up with their instance names. N2D is our AMD family, and we think it is going to be an excellent offering, even for people who want to go up to memory-sensitive workloads, because the memory bandwidth on these processors is exceptional. We think it's a good fit for customers who are looking for performance consistency, and we think the TCO on N2D will be a good fit as well. It has an excellent balance of both compute and memory for web-server applications, databases, back-end applications, things like that — everything from media streaming to financial simulations has seen benefits and success on our new N2Ds. If you are interested in the alpha, you can come up to me after the session, or feel free to go to our blog, which will point you to a form where you can sign up. And if you can't get into the alpha, we'll be moving into beta shortly.

So, to summarize: we've covered our general-purpose VMs. You have a wide range of options with a great deal of flexibility.

They support a variety of workloads, from databases to web apps and dev/test, and a variety of GPUs. We've built intelligence into them, but once you have them up and running, there are other things you might want to do.

One of those things is to scale — to scale with Google. One of the things about Google Cloud is our scalability: you get the scalability that Google has. But you also have the ability to change your applications, to burst and so on, so we'll go into some of the other special things we hinted at earlier.

For example, bursty workloads. Many customers have looked to Google and said, "Cloud applications run at varying times, all over the world — you have peak times here and peak times there. Is there any way I can take advantage of this business cycle within the capacity in your cloud?" And we thought to ourselves, yes, why don't we do something about that? So we came up with preemptible VMs. They're made for batch, checkpointing, and similar workloads — things that might run in really large sets, where if you lose a couple of the VMs it's not that big a deal. They're set up to run for up to 24 hours, and they're meant to run in large groups, but the advantage is that you can save up to 80% of the cost of a typical general workload. So if your apps are fault-tolerant, the possibility of instance preemption isn't something you really need to worry about, and you have these large batch jobs, you can save an incredible amount of money. We've seen success with this in media, in financial services, and in healthcare — things that people want to batch and run regularly, and that maybe can run off-hours, can be very good candidates for preemptibility.

Okay. The next one we're going to get into is the compute-optimized family, and for these intensive workloads it's my pleasure to introduce someone to talk to you directly about it, with real-world experience with all the families we've gone over so far: Patrick Hansel from Improbable. Thank you.

So, yeah — my name is Patrick, I'm a solutions engineer at Improbable here in London, and I'm going to give you a brief insight today into how our platform takes advantage of the options that GCE gives us, as Aaron was talking about.

First of all, who are we and what do we do? Improbable was founded in London back in 2012, originally as a game studio, with the aim of building the first of a totally new era of online games. But it actually wasn't long before we realized that the technology we'd built was relevant to more than just ourselves — to game developers building very many types of games — and since then we've worked on giving those studios the option to work with SpatialOS, which is our core technology platform.

So what do we actually do on a technical level? Well, we're working on technology and practices to usher in a new era of game development.
And that's really through our core tech, SpatialOS, and through our first-party studios around the world. We now have offices in Europe — in London — and also in the US, Canada, and China. The tech we've built is a whole suite of software that helps at each stage of the game development cycle, with a range of core systems such as game client distribution, logging, metrics, and user analytics infrastructure. But for this presentation I really want to focus on our game server hosting solution and our multi-server networking stack.

Our hosting platform has been built from the ground up using Google Cloud, from day one, and it uses a whole bunch of GCP services. At the core, though, it really boils down to two: Google Kubernetes Engine as a control plane for orchestrating and managing our services, and on top of that, Google Compute Engine as the data plane that runs the actual games and simulations built on our platform, by ourselves and also by our partners.

The second part of our core services I want to talk about is our multi-server networking stack.

SpatialOS provides a way to have multiple game servers collaborate in real time to simulate a virtual world that is larger, more complex, and richer than could be handled by any single server instance. To do that, we provide an SDK that integrates with any game or simulation engine, and we also provide off-the-shelf plugins and SDKs for popular game engines like Unity and Unreal.

Let's talk a bit more about exactly what game servers are and how they interact with GCE on our platform. First of all, what is a game server? You can really think of them as the central coordination point of most modern online games — they're essentially the arbiters of truth about what's actually going on in a game world. On a technical level, they're responsible for, first of all, taking input requests over the network from clients — for example, that might be you sitting at home playing on your PC or console — and then they use this to simulate the evolution of the world over a small time step. Once they've worked out the new state of the world, they figure out what's changed and update all the clients, who then come up with their own approximation of the state of the world and present that back to the players. As you can imagine, this loop has to happen incredibly fast, and that's really important: games are very sensitive to latency, and it really doesn't take much to ruin this illusion of a seamless shared world inhabited by multiple players.

So the performance of a game server is clearly very important for online games. But what do we mean by performance, and what's a good metric for it? Generally we talk about frame rate as one of the key performance metrics — the number of simulation steps that can be processed in a second — and a high frame rate, as well as it being stable and consistently high, is an incredibly important factor in building a game that feels good for players, where the inputs they make on their controller feel like they're instantaneously playing out, both to them and to other players, wherever they may be.
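As a rough illustration of the loop just described — with `world` and `network` as stand-ins for an engine's simulation and transport layers, not any real SpatialOS or Unreal API, and an invented 30 Hz tick rate — a fixed-timestep server loop looks something like this in Python:

```python
import time

TICK_RATE_HZ = 30                  # example tick rate, not a recommendation
TICK_SECONDS = 1.0 / TICK_RATE_HZ  # ~33 ms of budget per simulation step

def server_loop(world, network):
    """Toy version of the loop: gather input, simulate, diff, replicate."""
    while True:
        tick_start = time.monotonic()

        inputs = network.poll_client_inputs()   # 1. take input requests over the network
        world.step(inputs, dt=TICK_SECONDS)     # 2. simulate the world over a small time step
        delta = world.compute_state_delta()     # 3. work out what changed
        network.broadcast(delta)                # 4. update all the clients

        # If simulation plus replication blows the budget, the frame rate drops
        # and players feel it; otherwise sleep off the remainder of the tick.
        elapsed = time.monotonic() - tick_start
        if elapsed < TICK_SECONDS:
            time.sleep(TICK_SECONDS - elapsed)
```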
So let's talk a bit more about how GCP actually gives us access to the right machine types for our workloads, and how that choice affects our business and those of our customers, by looking at one of the games currently in development on SpatialOS.

I'm going to be talking about Scavengers, which is a massively impressive and exciting new game currently being built on our platform using Unreal Engine 4. It's in active development by Midwinter Entertainment, a team of around 35 developers based in Seattle, a lot of whom came from working on the Halo franchise of games at 343 Industries. So they're a massively talented bunch with a lot of experience developing this kind of AAA online experience, and when you play it, that's immediately obvious. Unfortunately this GIF doesn't work very well on a huge screen, as we found out earlier. Scavengers is a survival "co-opetition" third-person shooter set in a dystopian future where the planet's climate has collapsed — which is maybe a bit too close to home for some. Players fly down to the planet's freezing surface from an orbiting space station inhabited by other survivors, dropping in in small teams to explore a huge, rich world and to fight to survive against the elements of the planet. They have to defend themselves against a variety of passive and hostile AI while collecting the weapons and equipment they need to survive.

So what is it that's important to a game development team like Midwinter? Writing software for games is inherently very different from other types of engineering: it's ultimately a creative process, and the main objective at the end of the day is to build a fun experience. Because of that, being able to iterate quickly on new designs and features is incredibly important, which means that investing in the right infrastructure and the right tooling from day one is really critical. In the case of Scavengers, Midwinter knew they wanted to create a large world with its own inhabitants and ecosystems, and in order to bring this to life they needed to be able to maximize the number of concurrent players that could be connected while sustaining the same large AI population.

That is why, for them, using SpatialOS was an obvious choice. With technologies like our offloading — which essentially allows you to split simulation of your game across different instances of Unreal Engine — that's really the only way they could reach the sort of scale and complexity they had originally envisioned.

I want to go into a little more detail about how this fits in with Google Cloud and what underlying compute resources we're using to develop and operate games like this. First of all, Google Cloud is directly helping us increase the rate at which developers can iterate on their games. For example, our continuous integration infrastructure, which we use for building Unreal projects specifically, makes heavy use of the N1 machine types Aaron was talking about earlier. They're particularly relevant to us because of the really high core counts you can get with that machine type — for our Unreal codebase we actually use a fleet of 96-core, CPU-optimized N1 instances, and that allows us to keep build times very low and lets developers focus on refining their systems.

And for the SpatialOS Runtime itself — which you can think of as the piece of magic responsible for actually routing all that information back down to game clients — we make heavy use of the new N2 instances, which have given us some real performance improvements over N1s. The Runtime is designed to be distributed, so it can efficiently make use of all the cores on a single machine or across multiple instances. Importantly, we're able to give this choice of hardware directly to our customers, so that they can make the best decision based on their knowledge of their own game and what it requires to run. I should also add that we are currently looking into migrating from N1s to N2s for our CI infrastructure, because even though we don't have access to such high core counts, we've still seen some small performance improvements at roughly the same price.
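That migration decision boils down to simple price-performance arithmetic. With invented numbers (not Improbable's measurements), the comparison looks roughly like this:

```python
def perf_per_dollar(relative_perf: float, hourly_price: float) -> float:
    """Higher is better: how much benchmark performance each dollar buys."""
    return relative_perf / hourly_price

# Toy numbers: a small per-core performance gain at roughly the same price
# still tilts the comparison toward the newer machine type.
baseline = perf_per_dollar(relative_perf=1.00, hourly_price=1.00)
candidate = perf_per_dollar(relative_perf=1.05, hourly_price=1.00)
print(f"candidate is {100 * (candidate / baseline - 1):.0f}% better per dollar")
```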

Moving on to focus on the actual machines we use to run instances of Unreal: we decided to investigate the specific profile of Unreal Engine in order to understand how we could optimize for the underlying hardware. What we found is that Unreal is particularly sensitive to single-core performance, and that's really because of the predominantly single-threaded architecture it uses, particularly for the servers. With that knowledge, we ran some benchmarks against a variety of GCP instances, and we found that by switching from N1 and N2 to C2s we were able to get some really huge performance improvements for game servers. As you can see here, when using the CPU-optimized C2 nodes we're actually able to see performance improvements of around 60 percent, which is pretty massive.

So that's great — there were some huge performance improvements — but what's the actual impact of that at the game level? What does it actually translate into? Well, it really means that developers on our platform, on SpatialOS, are able to push the boundaries further than ever before with their creations. The power of the underlying hardware here has a direct impact on the fidelity and the complexity of those game worlds, and that freedom to build new types of experiences helps them differentiate their game in what can often be a very crowded marketplace.

A good example of this is what we were able to do with the game's AI bots because of this performance boost. The graph on the right here shows a trace of the server frame time over a time period, matched up with a trace below it of the number of AI and players in the world. What it's really showing is that Midwinter were able to increase the number of bots in their world by 50% while still hitting the same performance targets. That's a really big change that fundamentally alters the experience of playing Scavengers as a game. And AI count isn't the only thing they're able to experiment with: they're also able to make their world feel much more dynamic and realistic by increasing the number of items that can be interacted with — things like more materials, more resources that can be collected, and more wildlife and creatures roaming around the environment — which really helps bring it to life.

So, in summary: choosing the right type of machine for what you're doing and what type of workload you're running can have a really huge impact on you and your customers' businesses, and that's something we've certainly found at Improbable. Ultimately, the wide range of cloud machines that we have access to through Google Cloud is really helping shape the next generation of more complex, engaging, and ultimately more impactful online games. And with that, thank you very much — back to you, Aaron.

Thank you, Patrick. Patrick and I only met very recently, but I was really excited when we got to do this presentation, because it actually turns out that — well, I live in Seattle, and one of my friends is developing the game that he was showing off today. So I was like, oh, look at that. These things just kind of work themselves out that way.
Okay — so, compute-optimized: a little bit more about it, now that you've heard some real-life experience and you don't just have to listen to me prattle on about how great these particular VM families are. They are especially useful, like I said, for HPC, gaming, search — performance-sensitive workloads. The C2 VMs offer more than 40% higher performance per core for applications that are single-threaded — at least in what we measure, but as Patrick pointed out, your mileage may vary, and in many cases we've seen people get even better than that. They're built on the latest CPU architecture, with second-generation scalable processors and the highest clock speed on GCP: 3.8 GHz sustained all-core turbo. Very powerful machines. And if you can stand up here and say "3.8 gigahertz with sustained all-core turbo on a scalable architecture" very quickly, you can come do this next time, because it takes some practice. These are, by the way, the latest, on Cascade Lake — I should point that out — so again, we're starting to introduce new architectures into GCP.

They have a lower core count but much higher performance per core, scalable from 4 to 60 vCPUs. We are planning to introduce larger sizes later in 2020, and you can currently attach up to 3 terabytes of local SSD and run at 32 gigabits per second of networking, with, I believe, 100 gigabits in beta today. Again, if you are interested in any of the betas or any of these things, you can see me afterwards.

So with compute-optimized you get near-real-time performance. We also optimized the software and the GCE stack so that you have full visibility and transparency into the underlying hardware — explicit NUMA visibility to make sure you're getting the best out of your memory-to-core ratios — and of course we manage the C-states and the processor as well, to make sure you get the extra boost when you're going into turbo. HPC, EDA, gaming — like we heard from Patrick — are all very good candidates for this type of machine, as well as certain types of financial-services transactions.

Now that we've talked about compute-optimized VMs, let's move over to another part of our workload-optimized families: memory-optimized. Enterprises and small businesses alike rely on databases, ERP, and applications that demand much larger memory-to-compute ratios and are much more sensitive to memory than they are to compute. Memory-optimized VMs are designed for these high-end, critical business applications: they serve things like SAP HANA, real-time analytics, and persistent caches. We've seen success with business warehousing, genomic analytics, SQL analysis, and so on. For those of you who have been working in infrastructure for a while, or working in the cloud, I'm sure you're aware that if you have applications that are sensitive to memory, these things can become quite challenging. So we are scaling these up to meet your needs — 15:1 and 25:1 memory-to-core ratios today — and they're also priced in a particular way, optimized (as opposed to N1) for these types of high memory ratios. Like I said earlier, we support up to 12 terabytes today, and we are in the process of scaling that, so in 2020 you will see larger sizes coming to market.
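As a quick worked example of what those memory-to-core ratios mean for sizing — the vCPU counts below are just examples for the arithmetic, not a statement of actual M2 shapes:

```python
# At roughly 25 GB of memory per vCPU, total memory scales with core count.
for vcpus in (96, 208, 416):                 # example core counts for illustration
    memory_gb = vcpus * 25                   # the ~25:1 ratio mentioned in the talk
    print(f"{vcpus} vCPUs -> about {memory_gb / 1024:.1f} TB of memory")
```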

That's specifically for things like SAP: we really want to make sure SAP customers get the value from the infrastructure we've invested in, and that we pass that investment on to them. Customers can buy committed use discounts, or utilize committed use discounts over time: when they know what the size and shape of their workload is going to be, they can scale it up and down across the shapes and sizes they need to work with SAP or other database architectures as they grow with their business.

So, in review: application needs vary by use case, across customers, and over time, so finding the right VM for every workload is important. Our general-purpose VMs are the best performance-per-dollar offering, with a wide range of sizing, pricing, and shape options. Our compute-optimized VMs have the highest per-core performance, for real-time workloads like gaming, HPC, scientific computing, and perhaps high-performance search and websites. Memory-intensive workloads get the lowest dollar per gigabyte of memory, for high-end databases and real-time analytics. And of course there's preemptibility, the most economical option, saving up to 80% for bursty workloads that are more fault-tolerant.

So, wrapping up: we went through a lot of content, but I want to re-emphasize — simple, flexible, and efficient. We've really gone over just three different areas: general purpose, compute-optimized, and memory-optimized. We are not planning to make things complicated. We'd like to make things simple and make them available — we believe they should be available around the world, to meet whatever applications you need. We're designing architectures with the customer in mind, providing offerings with flexibility with the customer in mind, and pricing them in a way that is the best price-performance in the market. We will continue to scale these offerings over time — compute, memory, and general performance — introducing things like AMD, introducing things like Intel Optane (we were the first to introduce that into the market, so you'll see memory-optimized evolve in that particular direction over time), and of course scaling all of these up and down to meet our customers' workload needs. Easy decisions, simple steps: general purpose, memory-optimized, compute-optimized — tune it from there.

With that, I'd like to thank you for your attention. Patrick and I will be here for the next ten minutes or so to answer questions. And of course, like I said earlier, please complete the survey and let us know what you thought and whether you found this session interesting.

We always appreciate your feedback and try to make things a little bit better. In the app you can find the Dory, so you can put your questions up there and we can address them — or we can take them offline if we're not able to answer them immediately. Thank you very much.
