Build a Software Delivery Platform with Anthos


So, over the course of the day you've seen a lot of technologies presented, and ways in which you can optimize for operators and developers. What we want to do in this session is show you something you could build end to end with these technologies, and how you would use them; basically give you a use case for how these pieces might be stitched together. My name is Vic Iglesias, I'm a cloud solutions architect. The other two are Amir and Steve, and they are the brave folks who are going to do the live demo portion of what I'll be explaining, so I get the easy part here and they're the real heroes. I'll first talk about why you should build a platform, then we'll talk about the platform that we're demoing today and what that looks like. Amir will demo the onboarding of an application onto the platform, and then Steve will talk about how we roll out to production.

So, why should you build a platform? We'll start this conversation by talking about what a platform is, and really it's this idea of an abstraction between your developers and operators; that's the simplest way I can put it. You want your developers to have one single way that they know how to iterate on their services, and to be really good at it. But as operators, you also want to be able to iterate on the underlying infrastructure. You don't want to be stuck on whatever decision you made five years ago for infrastructure; you want to be able to iterate on those pieces while maintaining your developers' productivity, and that's a huge task. So what we do is build platforms: we build these abstractions that allow developers to do their thing and stay consistent. You might have a hundred times more developers than operators, and you want to make sure those folks are able to iterate. How many folks in here work at a company that ships software and then makes money from it?

Yeah, a lot of us, right? You have to do that. How many folks have to have software to run your business? The rest of you, right? We need to maintain developer productivity and keep high reliability as we work through these challenges.

When you look under the covers, things get pretty complicated a lot of the time. You started in your data center and things were fine; you maybe had just dev and prod, and people were just SSHing into machines and throwing JARs everywhere at the very beginning. That got a little cagey, so you added a staging logical environment so you could test out your changes. From there you added a cloud provider, maybe it was GCP, who knows, and you started to complicate things: now you have two physical locations where things run, plus all of those logical environments. So things get complex under the hood for operators, and we want to make that easier. Some of the complications you end up with are inconsistent ways that people deploy, or think about CI, or run their builds, and all of that means you're not sharing your best practices with each other; the innovation you're seeing in one part of your company is not being leveraged in the others. Another thing you start to see is that those logical environments don't actually mean what you thought they meant. You thought you had a staging environment that was staged to be like production, but at the end of the day it's so far from it that you're not even learning the things that would hit production, and you're seeing your bugs hit prod anyway. Obviously you can't test for everything, but that's the goal of having those logical environments. The next thing is, you start figuring out that it's hard to even get people the right access to the right things so they can get their job done. Now you want a standard way to do those kinds of things, including configuring which apps can connect to each other.

Dan showed a great demo of Istio and service mesh, the things we can actually use to connect services in a secure and reliable way. The last one is mostly for the operators here: each team does ops differently, and so when an outage hits you have to figure out what their best practices for logging and monitoring are, where their systems live, how they do their actual logs, and whether they're JSON or not. You start to see a lot of this when you're not building platforms that give developers and operators an abstraction.

So then you say, we've had platforms, right? We've had things like App Engine, things like Cloud Foundry: hosted platforms, platforms as a service. In 2008 Google launched App Engine, and it helped a lot of companies not think about all the operational aspects; they got scale to millions of requests per second essentially for free with just a little bit of code. It turns out this seemed very opinionated to enterprises; it was too hard for them to make the leap into the platform-as-a-service world. So we started to see infrastructure as a service, like Compute Engine, which launched in 2012 and gave you the most flexibility: just run a VM and put software on it, it's going to be easy. Turns out that was kind of hard as well. In 2015 we launched Kubernetes Engine, and that was a happy medium for folks: enough of an infrastructure abstraction, and it turns out it can run pretty much anywhere now. This is the new building block that folks are building their kind of bespoke platforms on top of. What you get here is a nice infrastructure abstraction, plus the ability to add all the pieces you need in between. There may be some bespoke choices in your company, like a particular cloud provider, identity provider, or secrets management tool that you need to integrate, and Kubernetes gives you a lot of flexibility to make those choices.

How many folks know this gentleman, Kelsey Hightower? He's awesome. We pray to him, for Amir and Steve's sake, that the demo goes well. Kelsey said that Kubernetes is a platform for building platforms: it's not your end game, it's a better place to start. And that's kind of what we're going to do throughout this talk and the demo.

So how do you build a platform on Anthos? The first thing to look at is the layers of your stack. When you're talking about applications, you have infrastructure layers, which are your networking, compute, storage, things like that, and Anthos provides a way to abstract that away. Above that you create platform tools, or sets of tools, that make up your platform. You might have a source code management system, a CI system, and a continuous delivery system that you build on top, in order to give your developers that "git commit and the right thing happens" experience.

So why would you build on top of Anthos? You get centrally managed infrastructure; Anthos Config Management is an example of that. You get a single API that you can use to deploy workloads; you have an interface, which is the Docker image, and from there you can go to any of your locations or environments and deploy. It allows you to share those methodologies across your teams, or centralize them and then extend them. And as I said previously, you can run Kubernetes and GKE anywhere now, whether on-prem or in the cloud.

As we're building out platforms, we're thinking about three main personas: the developers; the security and policy folks, who are trying to figure out what can and can't be done in the platform; and the operators, who are just trying to keep the lights on and make sure things are running smoothly. In order to build a platform, we think you need one of each of these components. I talked about a lot of these, and Dan talked about app observability, which we can introduce with Istio and Anthos Service Mesh. We need a way to manage our source code, a way to manage our configuration, CI/CD methodologies, a way to manage our policy, and then ways to run our containers.

So in this talk and the demo we'll go through what it's like to build this out, choosing one of each of these components, stitching it all together, and showing it to you. As we've gone through this, we've seen that there are really three interfaces these components use to talk to each other. We have Git repositories, so we can push and pull source code; sometimes that source code is a Kubernetes manifest. We can push and pull Docker images from a repository; that's our deployable artifact. And then we configure that deployable artifact using Kubernetes manifests that we deploy to clusters.

The way we're using Anthos Config Management in this example is to sync policy and the structure of namespaces across our clusters. We store all of that configuration in a centralized Git repo, and then it gets applied to all of our clusters. What we end up with, in each of the namespaces we create for our applications, is one service account that the application uses to deploy within that cluster. It has a CI runner; in our case we're using GitLab, so we have a GitLab CI runner running in that namespace, just for that application. That runner then deploys, with that service account, all the resources the application requires, whether that's Deployments or Services or Ingresses or anything like that. These are all replicated across each of our clusters, so as we add a cluster to an environment it gets the same exact setup as the other clusters in there. It's a nice way to centralize that.
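To make that concrete, here is a minimal sketch of the kind of per-application config such a central repo might sync to every cluster: a Namespace, a ServiceAccount for the in-namespace runner to deploy with, and a RoleBinding that scopes its permissions to that namespace. The names (hello-kubecon-2019, app-deployer) are illustrative assumptions, not taken from the actual demo repo.

```yaml
# Hypothetical config synced to every cluster for one application namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: hello-kubecon-2019
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer              # used by the in-namespace GitLab runner
  namespace: hello-kubecon-2019
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-deployer-edit
  namespace: hello-kubecon-2019
subjects:
  - kind: ServiceAccount
    name: app-deployer
    namespace: hello-kubecon-2019
roleRef:
  kind: ClusterRole
  name: edit                      # built-in role, limited here to this namespace
  apiGroup: rbac.authorization.k8s.io
```

Because the same files land on every managed cluster, adding a new cluster to an environment automatically gives it the same namespace, service account, and scoped permissions.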

The other thing we're going to be doing is deployment with GitOps. This is an idea that's been around for a while and now has a name, the GitOps methodology. What that means is that our application configuration lives inside an app repo, along with our Kubernetes config, and then we hydrate that manifest: we render it into a fully fledged Kubernetes manifest that can be applied to a cluster. But before doing that, we commit it to a Git repository, and that gives us a way to track what was deployed, when, and what the diffs were between those commits. In this example, that environment repo has two branches: a staging branch, which maps to what's in the staging clusters, and a master branch, which maps to what's in production.

As far as managing our application configuration, we're going to be using Kustomize, which is a way to manage your Kubernetes manifests without templating them. You create a base set of manifests and then apply patches to them for various use cases. Another thing Kustomize can do is apply labels and annotations across a set of resources, or add a name prefix. A big use case for the prefix is prefixing everything with the environment name: you have a base that just says what your resources are, and then in Kustomize you apply "prod-" or "staging-" to all the resources, so when you see them in your dashboards and in your Kubernetes CLI you can tell which environment you're in really easily, without having to go into each of those resources and change it yourself.

One of the key methodologies Kustomize uses is having these patches on top of a base. The other place we see this is in Docker images: in a Dockerfile you have FROM ubuntu:16.04 or 18.04, that's the base image, and then you run commands on top of it and make little tweaks to the file system. We have a very similar approach with Kustomize: you have a set of base manifests, and from there you add overlays, or patches, on top that sprinkle in the changes you want without changing that original base. That base might be used for a completely different use case somewhere else, and you just do what you need in order to transform it for yours. One concrete way to see this: I have a base set of Kubernetes manifests that I call my application, and then for each of my environments I have environment-specific patches that I apply before I finally hydrate those manifests and sync them into my clusters. That's a very basic use case for Kustomize.
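As a rough sketch of that layout (directory names, resource names, and the prefix are illustrative, not the exact ones from the demo), a base and a staging overlay might look like this; running `kustomize build overlays/staging` would render the hydrated manifest:

```yaml
# base/kustomization.yaml: shared, environment-agnostic manifests.
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/staging/kustomization.yaml: staging-specific transformations.
namePrefix: staging-              # prefix every resource name, as described above
commonLabels:
  env: staging
resources:
  - ../../base                    # pull in the shared base
patchesStrategicMerge:
  - deployment-patch.yaml         # e.g. image tag and env vars for staging
```

The point of the split is that operators own the base while each environment (or team) only carries the small patch it needs.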

So now, on to the demo. We'll talk through a little bit of the cloud architecture. We have five clusters in this demo. One is for dev; that's where developers can iterate, and they have some namespaces that they're locked into on that cluster. They iterate with something like Skaffold or Cloud Run on their own, and then they commit some code. (Oops, we need the slides back and powered up here, it looks like.) So, you have a tools cluster in there; our CI/CD tools, all those platform pieces I talked about in that layered infrastructure diagram, run in that tools cluster. We're using GitLab for this, so the developers are going to commit and merge in GitLab, and then that's going to make its way to staging and finally to the production clusters. Here we have one staging cluster and two production clusters.

I'll talk a little bit about how the code repos work. Again, we have two main repos: the code repo and the environment repo. When a developer commits to a feature branch, they're iterating in their local environment; eventually they make a merge request to the master branch of the code repo. From there, the result is automatically committed to the staging branch of the environment repo and added to the staging environment, so everything that's in master of the developer's repo is ready to go in staging. If we want to get a change from staging to production, we merge the environment repo from staging to master, and that runs a CD job that pushes directly into the production clusters.

So with that, we'll take it over to Amir, who will take us through it. Sounds good, can you guys hear me? All right, everybody clear on what Vic just said? We need to run this demo and we're running on about six or seven percent battery life, so I'm going to go pretty fast. Yeah, we just found out it's not charging. So, my name is Amir, I'm also a Solutions Architect at Google, and I will be your operator for the rest of the ten minutes or so that we have. Let's all put on our operators' hats and assume that our developer just walked up to us and said, hey, I have this new Go app I'd like to deploy. (Hi, Miles, how are you? Awesome.)

So what do we do? Like any other platform as a service, we start with the CLI, which is what you see up here. We have this handy-dandy CLI tool that we're going to use to add a new application called hello-kubecon into our CI/CD platform. All right, let me just make sure I'm in the right folder... check... good... one more... all right. A CLI is a good way of adding an application: it's nice and consistent, everybody gets to do it the same way, and you avoid human mistakes. In this particular case we're using the Anthos platform CLI. The name of our application is going to be hello-kubecon-2019; we give it our CI/CD host name, which in this case is GitLab, the password, and then the template. Again, the template gives you that boilerplate code, already made up for you. What's happening in the background, I'll quickly explain, is that in GitLab it's creating a group, and a group is nothing more than a collection of repositories, just like an organization in GitHub.

In this case it's creating those two repositories. One is your application code repository; that's what you hand off to your developers, and that's where they iterate on their application. The other one is your operators' repository, or environment repository; that's where all of the Kubernetes resource manifests reside for all environments, so dev, stage, and prod. Keeping those two separate keeps the CI of the developer separated from the CD of the operator; it's a nice way to do that. Another thing Vic mentioned is that every application gets deployed into its own namespace on every single cluster, and as part of that namespace a GitLab runner, which is just a Deployment, is deployed inside that namespace.

Now, there are a couple of schools of thought on how we push these manifests down to the clusters. We can either give the Git repos the credentials for every single Kubernetes cluster and push the configs down, so the credentials remain outside the clusters, or, as in this case, we give those credentials as a service account to the GitLab runner and then give the runner the Git credentials so it can pull the manifests itself. In order to do that, we have to give this GitLab runner access to pull images. So I'm going to click on this link and create some credentials. This is very similar to creating a Docker login, and in this case the only thing it really needs is read access to the registry so it can pull images. I'll create a deploy token called git-reader (it doesn't matter what we call it), give it read_registry access, and create the token; it gives me a username and password. We'll copy the password, go back to our command line, give it the username, which is git-reader, paste the password, and let it finish the bootstrap.

So let's take a look at what actually got created in GitLab. I'll go into GitLab, go to Groups, and go to Your Groups so we can see all of them. The first thing you notice is that we have this new group called hello-kubecon-2019. We go into that group and we have the two repositories: the bottom one is your application repository, the one for your developers; the top one is your env, or environment, repository, the one for your Kubernetes resource manifests. We'll inspect those shortly. I'm going to go back into Groups and look at this group called platform-admins. This is an operators' group, and there are quite a few repositories in here, so I'll highlight a few of them. The first ones are the templated repositories, the golang template and the golang template env. Like I said, they give our developers a boilerplate, a starting point for something they can code, so you don't have to do this every single time; you can have other templates for Java, Ruby, Go, whatever you like. We heard about Anthos Config Management this morning, and this is the platform repo that sets up the foundation: the namespaces, the registries, and things like that. This is, by the way, what the folder structure looks like inside that repository: there's a namespaces folder, so we'll go into the namespaces folder and then the managed-apps folder. You can see that, as part of the bootstrap we did with the CLI command, we now have a hello-kubecon-2019 folder. We'll go into that folder and you'll notice that, like I said, it creates the namespace for the application, and it creates that registry secret, the Docker login we just created, and feeds it to the GitLab runners in each namespace on each cluster. So you see the three runners: two for the production clusters, prod-central and prod-east, and one for staging. Now each cluster has permission to pull images from that registry.
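The registry secret that gets synced into each namespace is essentially a standard Kubernetes image pull secret built from that deploy token. A minimal sketch, with placeholder names and an elided payload (nothing here is copied from the demo), might look like this:

```yaml
# Hypothetical image pull secret created from the GitLab deploy token.
# The .dockerconfigjson payload would encode the registry URL plus the
# git-reader username and token; the value below is a placeholder.
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-registry
  namespace: hello-kubecon-2019
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: PGJhc2U2NC1wbGFjZWhvbGRlcj4=   # placeholder, not real credentials
# Workloads in the namespace can then reference it with:
#   spec:
#     imagePullSecrets:
#       - name: gitlab-registry
```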

If we want to see what that looks like inside the cluster: I'm in the Cloud Console, looking at the Kubernetes Engine workloads page, and I've already put a filter for that namespace on there. If we hit refresh, you'll notice that we have the three GitLab runners. We also have something else deployed, which we'll talk about shortly, but I really want you to focus on the three runners that are scoped to that namespace. They're running within the hello-kubecon-2019 namespace in each of the clusters, the two production clusters and the one staging cluster, and their permissions are scoped to that namespace; they can't put anything outside of it.

So let's go back to our platform-admins group. Another thing I wanted to show you is the shared CI/CD repository. This is where the CI/CD for every language sits. In the CI folder, for example, you have the files for the different CI pipelines; we have one for Go here as an example, and you can see it's fairly simple: it does a unit test stage, which is just go get and go test, and then a build stage. Again, this could be as deep as you'd like it to be.
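A shared Go CI template along those lines, pulled into each application repo with GitLab's include mechanism, might look roughly like this sketch. The stage names, image tags, and project path are assumptions for illustration, not the demo's actual files; the CI variables used are GitLab's standard predefined ones.

```yaml
# Hypothetical shared template, e.g. ci/go.gitlab-ci.yml in the shared CI/CD repo.
stages:
  - unit-test
  - build

unit-test:
  stage: unit-test
  image: golang:1.13
  script:
    - go get ./...
    - go test ./...

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
```

An application repo could then pull this in with GitLab's `include:` keyword, which is how a centralized team can change the shared steps once and have every app pick them up on the next run:

```yaml
include:
  - project: 'platform-admins/shared-ci-cd'   # assumed path
    ref: master
    file: '/ci/go.gitlab-ci.yml'
```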
The last thing I want to mention here are the shared Kustomize bases. These are the shared bases, like Vic talked about, and for every language we have a folder. We can go into the golang folder, and in this demo we're just doing a Deployment and a Service, but you could have other Kubernetes manifests. They don't look scary at all; they look just like a vanilla Kubernetes Deployment. The things we change are the environment they're going into, dev, stage, or prod, and the image name, because the majority of the time you're not changing the entire YAML, you're just changing a few things, and that's where the patch comes in.

To show you that, let's go inspect our actual application group. I'll go back into the hello-kubecon-2019 group, and we'll start with the application repo. Again, this is the boilerplate, the repository that gets handed over to the developers. The first thing is the boilerplate code itself; this is just a simple hello-world app the developers can start iterating on. In the same repository we have the Dockerfile, which is what builds the artifact; we have Skaffold, so they can do some development and local testing before they even push to this repo; and we have the .gitlab-ci.yml, which is the CI/CD file for this repo. That pipeline gets kicked off any time somebody makes a commit to the master branch of this application repo. I already showed you the test and build stages, and Steve is going to show you how the deployment piece works.

What's interesting is that, as part of the application repo, we also have the Kubernetes manifests, so the developers not only get to see what actually gets deployed, they also have influence over how it's deployed. Inside the k8s folder we have three folders for the three environments. For example, if we go to the staging folder, the first thing you see is that it pulls the shared Kustomize base from the repo I showed you earlier. So as an operator I can make a change to that shared base one time, say if I want to change the logging level or add resource limits; I change it once, and on the next rollout it gets applied across the board. And then here's the patch, the deployment patch that gets applied for this particular environment. If we look at it very quickly, all we're doing is giving it the environment variable and the name of the image, which Steve will show you shortly. So as a developer I can go in and change this patch, say if I want more CPU or if I want to push a different image to staging; I can do that here, so I still have control over staging.

Now, I mentioned that every time a commit is made to the master branch, a pipeline is kicked off. Here on the left-hand side I'll go to CI/CD, Pipelines, and you can see that one pipeline was kicked off by the initial bootstrapping. Let's click on it: it's just a simple four-stage CI/CD pipeline, and again, this could be as complex as you'd like. The first two stages are pretty self-explanatory, and I already showed you those: the unit test, and then the building of the image. Once that image is built and in the registry, somehow we need to deploy it, and for that I'm going to hand it off to Steve, who will talk more about how that gets deployed.
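A deployment patch like the one Amir describes, which only overrides the environment variable and the container image on top of the shared base, might look something like the following sketch (resource names and the image path are illustrative assumptions). Because it is a strategic merge patch, the container is matched by name and only the listed fields are overridden.

```yaml
# Hypothetical k8s/staging/deployment-patch.yaml in the app repo.
# Everything not listed here is inherited from the shared Kustomize base.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubecon
spec:
  template:
    spec:
      containers:
        - name: hello-kubecon
          image: registry.example.com/hello-kubecon-2019/app:abc123   # tag set by CI
          env:
            - name: ENVIRONMENT
              value: staging
```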

Yeah, all right, cool. I'm Steve McGhee, I'm a former SRE, so I get the honor slash duty of pushing this thing into production. Great, okay. So the first thing we do is still part of the CI step; it's kind of the beginning of CD, which is hydrating these manifests. These are the YAML files we talked about before: before we had the base and the patches, which you can think of as variables and templates, and this is all of it expanded out. We end up with two files, one called kustomized-staging and the other kustomized-prod, and that way, if we have any trouble down the line, we know exactly what got pushed to production. The first step just builds those, and we can click through here and see it: it's very simple, it runs a kustomize command for the staging manifest and then does it again for production. After that we have these two files; what do we do with them? We just push them over to another repo, so now we're moving from the code repo to the environment repo. I won't really show you this part because it's just a copy; it's not very interesting.
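That hydrate-and-push step could be expressed as a CI job roughly like the sketch below. The job name, image, paths, and the way the commit to the environment repo is authenticated are all assumptions, not the demo's actual pipeline; ENV_REPO_TOKEN stands in for whatever credential the runner would use.

```yaml
# Hypothetical "hydrate" job in the application repo's pipeline.
hydrate-manifests:
  stage: hydrate
  image: example.com/tools/kustomize-git:latest    # placeholder image with kustomize + git
  script:
    # Render the fully expanded manifests for each environment.
    - kustomize build k8s/staging > kustomized-staging.yaml
    - kustomize build k8s/prod > kustomized-prod.yaml
    # Commit the hydrated files to the staging branch of the environment repo.
    - git clone -b staging "https://oauth2:${ENV_REPO_TOKEN}@gitlab.example.com/hello-kubecon-2019/env.git" env
    - cp kustomized-*.yaml env/
    - cd env
    - git config user.email "ci@example.com"
    - git config user.name "CI"
    - git add .
    - git commit -m "Hydrate manifests for ${CI_COMMIT_SHORT_SHA}"
    - git push origin staging
```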
So let's back out into the group and look at the env repo, the environment repo, and see where those files end up. I go here and, oh my gosh, where are the files? Well, we forgot: we actually pushed them to a branch, the staging branch. As soon as we move over there, here we go, we've got our two files. We can look at one, let's look at staging, and it shouldn't be scary in any way; it's pretty basic. We have our environment label, staging, just like we expect, and it has the name hello-kubecon, so all those things that looked vanilla before have been filled in with the right values.

Okay, so we saw before that there were three runners running in Kubernetes, and there was also a job running: the actual staging app. It's this guy right here; it already got out there. How the heck did that happen? The reason is that when we made the first commit, we pushed this file all the way to the environment repo, into the staging branch, and our friend the pipeline ran again. So let's look at this new pipeline; it's a different pipeline, and it did just one thing: it deployed to staging. We can click in and see the job that ran. It runs a binary called gke-deploy; think of it as kubectl apply, except it does a couple of other little fancy things, like waiting for the rollout to succeed, and once it does it tells you, here, go look at your job, it's running, it's great. And we can see that, yes, indeed, it's running. So at this point the developer says, great, my job is what I expected, the feature is working, let's push it to prod. And then the SRE shows up and goes, you got it, let's do it, the schedule is correct, we're ready to go.

If you recall from that interesting flow chart, the way we do this is by promoting the environment YAML from the staging branch to the master branch. We do that in GitLab using what's called a merge request, which you probably already know. We'll make a new merge request and give it a very, very descriptive name like... (oh no, the computer stopped computering)... okay, "prod". Okay, so we're going to prod, so we want to make sure we're actually going from staging to master; that looks pretty good, and we'll submit the merge request. Some other person might come in here; you might want another set of eyes to say, yeah, this looks fine. They might look at the changes to make sure they're exactly what we expect, and you might also have other checks in your pipeline, like making sure you have the right amount of resources in place, et cetera. This one is really boring because it's basically a blank entry. Let's say it looks good; we merge it, let's go. So now what happens? Yet another pipeline runs, of course it does, that's how this works. In this case there's one step for each cluster, so for each production cluster it's exactly the same as the staging one, except this time it's pushing to production. We have two clusters here: one is the central one, and look, it already finished, and the next one is the east one. You might have another step in here, like a wait, or a canary check, or something, but we did something very simple.
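The per-cluster production jobs in the environment repo's pipeline might look roughly like the sketch below. gke-deploy is the tool Steve mentions (a wrapper that applies manifests and waits for them to become ready), but the exact flags, cluster names, regions, and the master-only trigger are assumptions for illustration.

```yaml
# Hypothetical deploy jobs in the environment repo's .gitlab-ci.yml.
# One job per production cluster; each applies the same hydrated manifest.
deploy-prod-central:
  stage: deploy
  only:
    refs:
      - master                    # runs only after the staging -> master merge
  script:
    - gke-deploy run --filename=kustomized-prod.yaml --cluster=prod-central --location=us-central1

deploy-prod-east:
  stage: deploy
  only:
    refs:
      - master
  script:
    - gke-deploy run --filename=kustomized-prod.yaml --cluster=prod-east --location=us-east1
```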

So let's take a look and make sure it's working; it should be ready. Oh, it's working! We have one running in the central cluster, and the east one is coming up as well. Let's take a look at the logs viewer and see if it is really running. For each cluster we should be able to see the health checks come by, and ideally we'll see three sets of apps all running, slightly skewed from each other. Stream... stream... this is it, guys, this is going to be it. This is where we pray to Kelsey. Yeah! So we have health checks in staging, we have them in prod-central, and eventually we'll have ones from prod-east as well. But that's it: our code is deployed, and it's in many places. Another thing to point out: prod-east and prod-central could be two different types of Kubernetes on two different platforms. That's the amazing thing about Anthos; it doesn't care, it's just Kubernetes. So that's it.

All right, if we could go back to slides for a little bit. That was pretty good; I will continue to bring these two along for future Anthos days, because I don't want to do the demos. All right, let's do a quick review. We saw a lot of things, a lot of terms flew around, code and arrows and all kinds of stuff. We had developers committing to an application repo; that kicked off CI, which built an artifact, our Docker image, and stored it in our container registry. From there it was added to our application configuration. The operators were able to take their base configuration and combine it with the application configuration and that image, and then that was committed to our environment repo, where the CD process started. The other thing we saw is that policy and security engineers were able to create the landing zone for our applications, those namespaces with the right policies, in a Git repo as well, and have those propagate to all our clusters. So that's the shape of what a platform might look like if it's built on top of Anthos.

The last thing I want to review is how we propagated our best practices as operators through this platform. With GitLab CI includes, we can have a centralized repo holding most of our CI steps, a library that developers can pick from, so they have a good starting point and can iterate from there. We can do the same thing for our continuous delivery processes. And for application config, we have our shared Kustomize bases, which we store in Git and reference from our application repos, so we're pulling in our best practices, and as we iterate on those as operators, the changes automatically propagate to our developers on the next run. We also have the ability to store our environment configuration in a repository and have it reviewed by SRE and security and whoever else before any changes are merged into production.

So with that, I'd ask you to please come talk to us at either the Google Cloud booth or here (I think we have a break coming up right now) and ask about the Anthos platform demo. Thank you folks so much.

2019-12-06 22:36


Comments:

How do you get Anthos? As a partner, we have not been notified how to get it or use it.
