Bringing You the Future of Cloud (Cloud Next '18)


Hi, good afternoon everybody. I'm Chen Goldberg, and I'm Aparna Sinha, and thank you for joining us today. It was super exciting for us to see Urs presenting the Cloud Services Platform in his keynote today. We started working on this vision about four years ago, when we took our years of experience in developing applications and managing them reliably and brought them to the industry through Kubernetes and Kubernetes Engine. We invited community users, partners, and customers to work with us to ensure Kubernetes solves real problems for all users. The result is that Kubernetes today is the industry standard for managing containerized applications. It is used by enterprises of all sizes in their journey to modernization, increasing team productivity and leapfrogging the competition. Last year we open sourced Istio, decoupling service development from service management, and it has already gotten significant traction from the industry, including first users in production. Today we are bringing those technologies and a few new ones together, building an integrated stack that gives you application modernization wherever you are, in the cloud and on-premises.

Our new solution was built with four principles in mind: consistent experience, centralized control, agility with reliability, and flexibility. We'll be using them to introduce our products. They will help you understand why we have built Cloud Services Platform the way we have and what to expect next. If they sound familiar to you, that is because they are already rooted in Kubernetes. They are also demonstrated in many other Google Cloud Platform products and solutions today. You will hear more of them throughout the next three days, and we want you to know that these are not just principles for technical design. They are principles that enable a customer journey that starts with you, where you are, and allows you to transform IT at your own pace.

We all know that enterprise IT is complex. Every customer that I work with has a mix of custom-developed software and off-the-shelf software, and a variety of on-prem hardware from many vendors. Across all of that, they have to ensure security, regulatory compliance, and that all of their policies work seamlessly. This complexity can slow businesses down. In the middle of this environment, cloud offers an opportunity for speed and competitive advantage. But without the right software, cloud can sometimes add another layer of heterogeneity to an already complex environment.

Cloud Services Platform is targeting the 80% of enterprises that are looking to enable hybrid cloud today. They need a way to homogenize public cloud infrastructure with what they have on-prem, connecting and managing it all together. I see users marrying the modern with the existing so that they can enable use cases like these. They want to have CI/CD, continuous integration and continuous deployment, across public and private. They want to be able to manage compute at the edge, where they can have low latency, for regulatory reasons or just for business reasons, and yet have centralized intelligence in the cloud. And lastly, they want to be able to consume best-of-breed services regardless of where they're running, whether in their on-prem data center, in GCP, or in any cloud.

Today, accomplishing these use cases requires a lot of fragile, do-it-yourself integration. Or it can result in a compromise: between consistency and choice, between agility and reliability. Cloud Services Platform is a different kind of hybrid cloud software that overcomes some of these trade-offs, so you don't have to make those compromises. At its core, Cloud Services Platform is powered by Kubernetes and Istio, which are open source, enabling that consistency across environments. On top of that, this stack is custom configured, enterprise hardened, tested, and managed by Google, so you have that centralized control.

It is also deeply integrated with GCP services to deliver the benefits of Google Cloud on-prem. This integration enables a wide variety of solutions, such as CI/CD across clouds, machine learning, serverless, and a variety of others. Today Chen and I are going to demo for you exactly how we break some of the traditional trade-offs with these design principles.

The first principle of Cloud Services Platform is consistent experience. We have seen with Kubernetes how much consistency matters. The idea of write once, run anywhere pulled users and partners into its community. No lock-in to a specific platform really excites users, and it simplifies things for developers and operations teams when they know there is a certain set of primitives they can rely on.

Cloud Services Platform takes consistency further. Portability isn't enough; users are looking for a consistent user experience, using the same tools everywhere their workloads run. Their operations teams would like to have the same way to manage clusters on-premises and in the cloud. Developers would like to develop, build, and deploy their applications the same way regardless of where the workloads will run. The SRE team would like to have the same tools to troubleshoot an issue regardless of the environment.

Our customers tell us they want a consistent experience of Kubernetes Engine, GKE. GKE has been GA for almost three years now, and users like Kubernetes Engine's automation, upstream currency, and availability. It is Google-configured, managed, and supported Kubernetes. They've been asking us how they can get the same capabilities in their own on-prem data centers. And we're giving our customers exactly that.

Introducing GKE On-Prem. It is Google-configured Kubernetes with automated provisioning, managed upgrades, and security, in your on-prem environment. When talking with users, this is when their eyes light up. The idea that they don't have to train their teams once or twice to learn different tools really excites them; this is where they're spending most of their time. For operations teams, having a single playbook to manage all of their environments is really a game changer. Less time spent on training and integration leaves more time for innovation and increases team productivity. What's even more awesome about GKE On-Prem is that it connects seamlessly to Google Cloud Platform and helps you manage a multi-cluster environment: wherever your clusters are, all of them look and behave exactly the same.

Let's move to the demo and see consistency in action. OK, so earlier this morning my team provisioned a GKE On-Prem cluster on this pizza-box server running vSphere. Well, this is a live demo. Now we'll show you how to integrate this cluster into your GKE environment and get that consistent experience. From the GKE UI, I can go and register an external cluster. I will call it the innovative name "West", and then I will download this GKE Connect manifest. Let's look inside it; it's what will create the connection to GCP. Inside, it has a namespace, and in addition there is a service account and role bindings we will use in the UI in order to access and view the workloads in the cluster. We also have a Deployment of the GKE Connect agent, which is what we use to create a secure connection from this cluster into GCP. Let's apply it. You can also see here in the UI that GKE is waiting for the GKE Connect connection with Google, and once it's done we will see this change into a checkmark. Yay. And we'll continue to the next step. Here I have to provide some more details; I also want to make sure that we are talking with the right cluster from the GCP side, so I will use a certificate, provide the IP, and continue to authenticate. For this part of the demo I will just skip authentication for now, but I could also use a user token or leverage an identity provider to do basic authentication.
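For readers following along, the GKE Connect manifest described above bundles a handful of standard Kubernetes objects. The following is only a rough sketch of that shape, with made-up names and a placeholder image; it is not the actual file the console generates.

```yaml
# Sketch of the pieces the transcript describes: a namespace, a service account
# with a read-only binding used by the UI, and a Deployment for the Connect agent.
apiVersion: v1
kind: Namespace
metadata:
  name: gke-connect                 # assumed name
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: connect-agent-sa            # assumed name
  namespace: gke-connect
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: connect-agent-view          # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                        # read-only access for listing workloads in the UI
subjects:
- kind: ServiceAccount
  name: connect-agent-sa
  namespace: gke-connect
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gke-connect-agent           # assumed name
  namespace: gke-connect
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gke-connect-agent
  template:
    metadata:
      labels:
        app: gke-connect-agent
    spec:
      serviceAccountName: connect-agent-sa
      containers:
      - name: agent
        image: gcr.io/example/gke-connect-agent:latest   # placeholder image
```

Applying it with kubectl apply, as done in the demo, creates the outbound connection from the on-prem cluster back to GCP.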

And we are finished. What I can see right now in my GKE environment is all my clusters, including this on-prem cluster, and it is already registered. But you can also see that I don't have any details about it yet. Everybody needs to enter their own user credentials, and this is why there is this lock icon here. This is good hygiene practice in general, and it helps us maintain the integrity of the environment and make sure that we have the right audit logging and RBAC policies. I will copy the credentials from the environment, and now I actually get access into the cluster, and you can see the cluster size, the number of CPUs, and more. From this point on, everything behaves exactly the same as the GKE clusters you are used to managing in the GKE UI. Yes, first live demo, yay.

We see the master version, for example, which is very important: another attribute that we keep consistent with your GKE environment. We heard from our customers that this is really important for them, for the integrations they have to GKE and to Kubernetes. We can see the nodes that are there, and like every other GKE cluster, I can go into a node and see really useful information such as the pods running and the resources available on that node. So this is just looking at a single cluster, but GKE actually creates this multi-cluster single pane of glass where I can see, for example, all the workloads running in my environment regardless of which cluster they are running on, and this is true for services as well.

So this is really nice, getting that visibility into my environment. But what's even more awesome, and again something I'm doing for the first time on stage, is that I will also deploy from GCP into a cluster which is actually not a GCP cluster and can run anywhere. We'll go into deploy, and we really want to make it as simple as possible: all I need to do is choose the right cluster. I will click deploy. And what do you think? Yay. While it may look simple, it's actually not simple to do. We heard from our customers that they really appreciate the management capabilities they have on GKE, and we want to provide that wherever customers are running their workloads.

Back to the slides. This is a quote I really like, since it quickly captures what excites me the most about the technology we are building. You know, I'm an infrastructure person, but what excites me is that we are allowing developers to really free their time and focus on innovation.

Awesome. Consistency is the building block for the next principle of Cloud Services Platform, which is centralized control. We all know that control is important, but what is different about Cloud Services Platform is two things. Number one, it provides control not just across clusters but across environments, so across GCP and other environments. Number two, it works out of the box with your existing authentication and other services, so it doesn't create a trade-off between control and agility.

As a complement to Chen's announcement about GKE On-Prem, we're also really excited to announce GKE Policy Management, which is in alpha now. This is a piece of software that allows admins to define tenant namespaces as well as policies that can be applied across environments. These are defined in a single source of truth, which could be GCP IAM or a git repository. The software then automatically updates all participating clusters at once, regardless of where they are; they could be in GCP, a different cloud, or on-prem. So it synchronizes those policies everywhere. Our first version supports synchronizing namespaces, role-based access control (RBAC) policies, quota policies, and more. We're going to show you a demo of that now, actually using the cluster that Chen provisioned.
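As a rough illustration, the kinds of resources the demo below synchronizes across clusters are a namespace, an RBAC role and binding scoped to it, and a resource quota. The following is a minimal sketch of such a set; the names, group, and limits are assumptions made for the sketch, not the actual demo files.

```yaml
# A hypothetical "dev" tenant namespace with developer pod-creation rights and a quota.
apiVersion: v1
kind: Namespace
metadata:
  name: orders-dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: orders-dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-creator-devs
  namespace: orders-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: devs@example.com            # placeholder group
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: orders-dev
spec:
  hard:
    requests.cpu: "4"               # assumed limits, mirroring the demo's CPU/memory quota
    requests.memory: 8Gi
```

In the demo these files live in a git repository, and the policy management software keeps every registered cluster in sync with that source of truth.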

So let's move to the demo screen. All right, first I want to show you which clusters I have access to. I've got four clusters here, and I'm going to be using three of them in this demo: us-central, us-east, and this puppy over here, the West cluster. I've also set up watches on these three clusters, so here's a watch on east, here's a watch on central, and here's a watch on the namespaces in West, and as you can see there are really just system namespaces here today.

As an admin, what I want to do across all of my environments is set a set of policies that apply to all of them, and I've created some policies that I'm going to copy into this directory. Let me show you what they are. I want each cluster to have three namespaces: a dev namespace, a prod namespace, and a staging namespace. For my SREs, I want to give them pod-creation access across all three of these namespaces, so I've created a pod-creator SRE role binding that, in this hierarchical tree, applies to all three of the namespaces. But the dev namespace is the only place where I want to allow developers to create pods; I don't want them to be able to create pods in production or in staging. So I have created a role and role binding for pod creation just for the devs, just in the dev namespace. I also want to make sure that my test costs don't balloon out of control, so I'm going to set a quota policy, which is encapsulated here in quota.yaml, also for the dev namespace.

All right, this is cool. Now what I'm going to do is add that to an empty git repository, which is going to be my single source of truth for all of my policies, and then I will go ahead and commit these files. As I push this change, you will start to see those three namespaces come up in the three watches, so we should see an orders-dev, orders-prod, and orders-staging namespace come up in all three clusters. Fingers crossed... and OK, us-central has got orders-dev, orders-prod, and orders-staging, nice, and us-east does as well. And here is our pet cluster, here on-prem and on the stage, which has the GKE Connect agent that Chen configured earlier, and there too we have the three namespaces. So you see here our centralized policy management synchronized across multiple environments.

One last thing I'm going to show you is that quota I created. I want to make sure that it's also actually enforced right here in this cluster, so I've changed context to the West cluster, and I'm going to see if there's a quota file. Let's go ahead and open it, and indeed there is a resource quota file, and it sets both a CPU and a memory quota limit for what my devs can do. And that's it; that concludes the demo. In summary, Cloud Services Platform's policy management software allows you to apply policies across clouds from one source of truth. You can think of it as security and policy portability.

That was an awesome demo, one of my highlights. The thing I love the most about this announcement is that the combination of consistent experience and centralized control, across on-premises and the cloud, is powerful. But all of this is ultimately in the service of your business success, which depends directly on developer productivity and on your apps running reliably. Often these two things are at odds: developers want to move fast, but operations teams would like to make sure the environments are enterprise ready, and they would like to have fewer changes. That brings us to the third principle of Cloud Services Platform: agility with reliability. Cloud Services Platform lifts this trade-off with Istio.
Istio is an open platform to connect, secure, and manage services. It takes care of traffic management, monitoring, and security. It does this by raising the level of abstraction from VMs, containers, and pods to services. This also means that Istio can work with non-containerized workloads together with containerized ones. This week we are announcing Istio 1.0 and its readiness for production environments.

This has two big benefits. One, it puts more intelligence in the hands of operators, who can now systematically monitor, secure, and control services. Two, it relieves developers from writing the common capabilities that every app requires, like monitoring, authentication, logging, and more. By decoupling development from operations, Istio gives coders their time back without compromising availability, and gives operations high-quality common services, enabling greater visibility, security, and reliability.

Now we'll go to the last demo for today. All right, so as a developer I want to focus on writing the applications that actually add value to my business. I don't really want to write a whole bunch of common services for auth or monitoring or logging or other things, and those things need to be done well in their own right.

So Istio enables us to decouple that work from developers, freeing us to focus on what's important, and I'm going to demo that for you. Specifically, this demo is going to show you three things. Number one, it's an example of how Istio frees developers from having to implement auth in each of their services. Number two, it's going to show you how, as a security operator, I can apply policy incrementally to new services in the face of existing services without breaking them. And number three, I'm going to demo how Istio enables a secure-by-default enterprise.

To do that, I'm going to be using this application called the Bookinfo demo app. It consists of four services, and as you can see these four services are written in four different languages, also referred to as a polyglot application. Each of these languages can have its own way of implementing monitoring or auth or any of these common services. But what we've done here is we've deployed Istio, and we've enabled Istio as an auto-injected sidecar proxy in each service; you can see that with the blue line. This required zero code changes from any of the developers, and out of the box it comes with best-in-class authentication, intelligent routing, and telemetry. For demo purposes we also have two clients. There's a modern client in the bookinfo namespace, shown in green, which is going to get all of the changes that we make through Istio. And there's a red client, which is a legacy client; it could be a legacy service that you already have, and it's not going to be Istio-enabled. What we're going to show here is incremental enablement of mutual TLS (mTLS), first in permissive mode and then in strict mode.

All right, so we can move over to my screen and start the demo. OK, for this demo I'm going to use the Midwest cluster, and we've switched contexts to it. Just to show you what's in this cluster, these are the namespaces: Istio is in fact deployed in the Midwest cluster, and we also have the Bookinfo application, which we were just showing you on the screen. Let's take a look at which deployments are part of Bookinfo. Here you see there's a client, the Bookinfo client, which is going to be generating load. There are the four services: the details service, the product page service, the ratings service, and three versions of the reviews service, so we can do canary releases. The Bookinfo client is going to be hitting the product page. But remember, I told you there's also a legacy client, so let's take a look at where that is. If we just do kubectl get deployments, there's this legacy client that's not part of the istio or bookinfo namespaces, and it's also going to be hitting the product page.

Let's first check to make sure there are no policies installed. Let's see if there are any policies... and there are none. Let's also bring up a Grafana dashboard so that we can take a look at the traffic hitting the product page. This is the IP address for the Grafana dashboard, and I've already gone ahead and set that up. So here we are, looking at the Istio service dashboard in Grafana, at what's hitting the product page. At the moment there are two QPS of traffic, and it's all unauthenticated, so there's no client-side authentication; anybody that hits the product page will get a response, and it's all 200s.
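As an aside, the sidecar auto-injection mentioned earlier is typically switched on per namespace with a label. A minimal sketch, assuming the application namespace is simply called bookinfo and the automatic sidecar injector is installed:

```yaml
# Labeling a namespace so Istio's injection webhook adds the Envoy sidecar to new pods.
# Equivalent to: kubectl label namespace bookinfo istio-injection=enabled
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    istio-injection: enabled   # picked up by Istio's automatic sidecar injector
```

This is what lets the proxies appear in every service with zero application code changes.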

Now what we're going to do is turn on permissive mTLS in the bookinfo namespace. That means I'm going to set up this policy and then apply it. Let's take a look at the policy: it applies to every service in the bookinfo namespace, and the mode is permissive, with mutual TLS. Permissive mode means that whether the calling client has mutual TLS enabled or not, the product page will go ahead and accept the traffic. What we should see in Grafana, though, is that some of the traffic is mTLS and some of it is not, because we've turned on mTLS for all services in the bookinfo namespace.

OK, so let's go ahead and apply this policy. It looks like the policy is created and the destination rule is created, and now when we look at Grafana we should see one QPS of mTLS traffic from that load generator, the Bookinfo client that's in the bookinfo namespace and has Istio, and one QPS of traffic from the legacy client that doesn't have mTLS. So let's see what happened. And indeed, you see that the non-mTLS traffic has gone down to one QPS, so that's just the traffic from the legacy client, and in addition we've now got this additional one QPS of traffic that is mTLS. That's the mTLS traffic from the Bookinfo client in the same namespace. So we've gone from two QPS to one QPS each of mTLS and non-mTLS. This is permissive mode, and that's great, because the benefit here is that we didn't break the legacy service. The legacy service is still hitting the product page, and that's fine, and an SRE can then talk to the developers and say, hey, we've got to stop with this legacy service, please implement client-side auth in it, because we want everything to be authenticated; this product page has sensitive information.

So now what we're going to do is shut out the legacy service; it turns out it's actually rogue, and we want to enable mTLS-only authentication. We only want client services that have client-side authentication to communicate with our product page. This is what the YAML looks like. It's very similar; it again applies to the bookinfo namespace, but now the mode is strict, so we will strictly accept traffic only from client services that understand mTLS. Let's go ahead and apply that, and here we go: the policy has been applied. What we should see now is only one QPS of traffic, only from the service in the bookinfo namespace that has mTLS enabled, and the traffic from the rogue client should go away. So let's see whether that happened.
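For reference, the two authentication policies described above might look roughly like the following, using the Istio 1.0-era authentication API; the exact names and fields shown on stage may have differed.

```yaml
# Step 1 - permissive mode: services in bookinfo accept both plaintext and mTLS traffic.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: bookinfo
spec:
  peers:
  - mtls:
      mode: PERMISSIVE
---
# Client side: tell mesh sidecars to use Istio mutual TLS when calling bookinfo services.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: default
  namespace: bookinfo
spec:
  host: "*.bookinfo.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
# Step 2 - strict mode (replaces the permissive policy above): only mTLS clients are
# accepted, so plaintext traffic from the legacy client is rejected.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: default
  namespace: bookinfo
spec:
  peers:
  - mtls:
      mode: STRICT
```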

And it looks like it did happen. You see that the blue line, the traffic from the unauthenticated, non-mTLS client, has gone down to zero QPS, and now we only have the one QPS of traffic from the mTLS-enabled service. If you want, we can see the history of this: let's go back 15 minutes, and you'll see we had two QPS, then we went to one and one, and now we just have one. So that's nice. In summary, without any code changes we have enabled client-side authentication across every service in the bookinfo namespace, and, by the way, we've also turned on security by default, so that any other services created in bookinfo in the future will be mTLS enabled as well.

Those are not the only benefits of Istio, and we don't really have time to demo all of them today. So instead I've invited a customer to speak to you about how they are using Cloud Services Platform to solve real-world challenges. It's my pleasure, so please join me in welcoming Jeff White, platform architect at eBay, to the stage.

Thank you. Hi Jeff, thank you, welcome. Thank you for joining us at GCP Next. Can you tell us a little bit more about your group and your role at eBay?

Hi, I'm excited to be here and share eBay's story. My group is focused on building products to expand eBay's reach to new markets and market segments. Last year we released a new buying experience for the Chinese market. We also released eBay on Google Assistant, and we've been developing AI technology to enhance these products. My role is to help build our services platform. Our team is responsible for all aspects of development and operations: everything from choosing application frameworks and continuous integration and continuous delivery platforms, to setting standards for logging, monitoring, and alerting. One of our goals is to build a services platform that is easy to evolve as our architecture grows and matures, and that places minimal burden on individual developers.

A core strategy for delivering on our product and technology goals was our decision to expand to the public cloud two and a half years ago. There were many benefits to expanding to the public cloud; the key aspects that were important for us were global infrastructure, autoscaling, and managed services. These have helped us quickly build new products, scale our infrastructure with our user traffic, and provide an optimized experience to our international customers.

That's great. And if I understand correctly, you now have more than 30 GCP services that you're using?

Yes, pretty much. We take advantage of GCP services as much as we can, and we try to place all our applications in Kubernetes.

Wow, that sounds like an amazing charter, Jeff. So how's your journey so far?

Fantastic. We've learned a tremendous amount about building robust, cloud-native services. Kubernetes and Helm, along with our internal tools and standards, have made it very easy to deploy services. And we've scaled our architecture across multiple dimensions: we have on the order of 100 applications, running a variety of languages and frameworks, and these services are deployed across multiple clusters running around the world. And of course we've scaled our infrastructure horizontally to match our traffic and workloads.

But scaling across these dimensions presents new monitoring challenges. A microservices architecture shifts the emphasis from monitoring individual services in isolation to monitoring the entire system as a whole. When you get paged, the problem is never your fault, right? It's always that you can't connect to one of your dependencies, or some error way upstream has propagated all the way down to you, and now everyone is up trying to figure out the root cause. Standardizing monitoring signals allows us to monitor across the system. It lets you do things like correlate logs by trace ID and use distributed tracing, so you can inspect the network call stack. But these monitoring concepts require services to collaborate on consistent standards, and they become part of the platform itself. Adding this logic to our applications and libraries makes it difficult to migrate, especially in a polyglot environment, and maintaining a services platform becomes exponentially more difficult as the architecture scales.

I see, I understand. So how are you dealing with this complexity in a polyglot services environment?

So we looked at service mesh technology, and we saw the value of moving these common networking concerns from our application processes into an independent component. We particularly like how Istio makes this transparent to services, with its strong Kubernetes integration and sidecar proxy approach. We started experimenting with Istio earlier this year, and it was a natural decision to initially focus on the observability features: metrics, traces, and logs. These provided us with the biggest initial benefit while minimizing risk and learning curve. And perhaps most importantly, we would be able to use the very metrics provided by Istio to help us evaluate and fine-tune as we incorporate more advanced Istio features. As part of our testing we gradually increased the size of our mesh using an opt-in policy, and today we are running Istio in production.

That's awesome.

Our mesh contains about ten services so far, and we are progressively adding more. We now have a uniform and transparent mechanism for collecting metrics, traces, and logs. The Prometheus and Grafana dashboards provide us with the key metrics: we can get an aggregated view of the overall health of the entire service mesh, showing traffic, latency, and errors, three of the four golden monitoring signals, and we can dive into an individual application and see those same metrics. We also have Stackdriver integration working, and we're continuing to explore those capabilities. These tools complement our existing metrics and monitoring, allow us to continue refining our service level objectives, and assist us in troubleshooting live-site issues. We're now in a great position to start introducing some of the more advanced Istio features, and we look forward to seeing how much Istio simplifies our monitoring and improves our system performance.

Wow, this is awesome to hear. Thank you, Jeff. It has been such a great time working with you and the team over the past few months. As we continue to work on Istio, is there any feedback or any requests for us to take back to the team?

Sure. The Kubernetes integration is excellent. The Helm charts make the installation a simple one-liner. The automatic sidecar injection provides a transparent way to attach the client proxy, and the default Grafana dashboards provide a great starting point for monitoring the critical signals of the system. In terms of areas of improvement, we look forward to getting access to more granular data; for example, we would love to be able to slice and dice our metrics by URL. We would also like to see Istio expose more sophisticated retry and timeout configuration settings. And of course, we look forward to a managed Istio service.

Thank you very much, thank you. Luckily we have all of that recorded, so we won't forget it.

So we started with the first principle, consistent experience. Consistent experience is often traded off against flexibility: when you lock things down to make them consistent, you limit your options. As an example, you cannot take a best-of-breed approach when you decide to standardize. We also heard Jeff talk about the need for a platform that evolves as requirements continue to change. Cloud Services Platform breaks this trade-off with a fourth design principle: flexibility. Cloud Services Platform is built as extensible, open software. It comes with an ecosystem and a pluggable architecture that gives you flexibility, a choice of solutions, and the ability to integrate them without introducing friction in the user experience. If something is missing, you can fill the gap yourself without compromising consistency. The platform allows you to evolve as your needs and requirements change. We are leveraging Kubernetes' and Istio's extensibility and rich ecosystems so you can customize, evolve, and integrate Cloud Services Platform to better fit your needs.

With that, I'm super excited to share that we are the first major cloud provider to offer production-ready commercial Kubernetes applications in the GCP Marketplace. They are developed by third-party partners, and our commercial Kubernetes apps support usage-based billing, simplified licensing, and easy deployment to Kubernetes Engine on GCP and everywhere else. My favorite part about this announcement is that all Kubernetes apps listed in the marketplace are Google vetted, which includes vulnerability scanning and partner agreements for maintenance and support. This brings the open Cloud Services Platform ecosystem to you, wherever you are.

Cloud Services Platform is a unique hybrid offering that addresses today's enterprise challenges where you are, in the cloud and on-premises. In the session today we focused on the core platform services. We introduced the new GKE On-Prem and how it is consistent with Kubernetes Engine. The new GKE Policy Management, together with Stackdriver and the Marketplace, gives you the tools to manage this multi-cluster environment. The key principles of consistent experience, centralized control, agility with reliability, and flexibility have stood the test of time at Google, and now, with Cloud Services Platform, we are making sure they support you through your journey without the need to compromise between them. We will continue to bring more solutions and products into Cloud Services Platform, based on our years of experience and following the same principles. Google Cloud is the only public cloud provider bringing the consistency, flexibility, control, and speed of the cloud to your local data centers, along with the freedom of workload portability to the environment of your choice. We are very happy to bring Google's cloud to you.

Thank you very much for joining us today, and thank you so much, Jeff, for talking about your use case and your environment. We hope we've managed to pique your interest in Cloud Services Platform. There's a lot more; we didn't have time to cover everything. There are some deep dives tomorrow and on Thursday that we would highly recommend you visit, for a deep dive on Istio, on GKE On-Prem, and of course hybrid use cases. Thank you, thank you.

