Where Should I Run My Code? Serverless, Containers, VMs and More (Cloud Next '18)

Hello, everybody. Thank you so much for coming. This is going to be "Where Should I Run My Code?" Let's go — we've got a lot to cover. Quick question: how many people here write code on a semi-regular basis? Okay, lots of the room, maybe 60 or 70 percent. How many people here keep code running — keep servers running, keep infrastructure running, all those pieces? Around 40 or 50 percent. And how many people are coordinators of some kind — managers, PMs, program managers? Another 20 or 30 percent. So everybody does a lot of different things at the same time. Awesome — I think this will be useful for all of us.

So: where should I run my code? Sorry — it depends. Hopefully, over the next 40 minutes or so, we'll talk through a whole bunch of the trade-offs and options.

First, this is Google Cloud Next, so we have blue hexagons. If you happen to know what these are already, you're all set. These are the proper names for the various pieces, and I put them in this order on purpose: there's a stack of abstractions, getting more abstract toward the top and more concrete down at the bottom. We've got Compute Engine, our VMs; hosted Kubernetes, clusters running collections of containers; App Engine, which is a platform as a service; and Cloud Functions, which is functions as a service — input in, output out.

The takeaway, if you want to get to lots of other talks at the same time, is this: I'm going to make the case that the majority of the time, if your work looks like one of these things, you should strongly consider starting here, and you may branch out from there — and we'll talk about why. Basically, if you've already got code that exists and is running well, put it into a VM and make it work well.

If you're primarily working in containers, hosted Kubernetes — or Kubernetes somewhere — is going to be a really great choice. If you're primarily working with code that talks to the internet and maybe needs to scale up and down very quickly over the web, App Engine is a great choice. And if you're primarily reacting to things happening — events — and you need to do something in response, Cloud Functions are a really good choice. So that's the quick version. Deep breath — no worries — and I'll try to convince you of that by the end: we can move around between these abstractions. We're going to go through all of these bottom-up, then I'm going to actually answer the question of where to start running my code, and then give you some ideas of where to go from there.

Compute Engine: these are virtual machines. They are, logically, computers — the same kind you'd plug in and use other places — but running in Google Cloud. You can have really quite large computers these days: lots of CPUs, lots of memory, lots of disk attached, plus GPUs and Tensor Processing Units if you're doing machine learning. One of the things I think is really interesting about our VMs is that all of these pieces are individually configurable. So if you have a workload that is very memory-intensive but doesn't do much compute, or the opposite, or needs a huge amount of storage, you can actually make a machine that's exactly that size and save yourself a bunch of money versus just making a huge computer. They start quickly for virtual machines — around tens of seconds, so 10 to 20 seconds per machine. And these are computers, so they're running a whole operating system: we have images built for various Linux operating systems and also for Windows, and you can create your own image if you'd like to run something very specific, perhaps a company image.

Diving into disk a little bit, because I think it's an interesting example — these properties hold throughout our compute, but this is where they really shine. Our default disks are all network-attached storage: they're collections of pieces of thousands of disks across a whole data center, so when you make one larger, your disk actually gets faster and faster as you use more and more of the data center. It also allows you to take snapshots live, copy those to another machine, and make a new computer out of them, maybe in another part of the world.
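For reference, here's a minimal sketch of what that looks like from the gcloud command line; the disk, snapshot, and zone names are made up for illustration, and exact flags may vary a bit by gcloud version.

    # Snapshot a persistent disk while it stays attached and in use.
    gcloud compute disks snapshot my-data-disk \
        --zone=us-west1-a --snapshot-names=my-data-snap

    # Create a new disk from that snapshot in a different zone (even a
    # different region), ready to attach to a new VM there.
    gcloud compute disks create my-data-disk-copy \
        --source-snapshot=my-data-snap --zone=europe-west1-b

    # Disks can also be grown in place, without rebooting the VM; the
    # filesystem inside the guest still needs to be expanded afterwards
    # (e.g. with resize2fs on ext4).
    gcloud compute disks resize my-data-disk --size=500GB --zone=us-west1-a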

Easy backups, that sort of thing. As you're thinking about these: virtual machines feel like computers, but every aspect of them is at data-center scale — your disks, your networking, your load balancers, all of it. So try to keep an open mind as you're evaluating the different pieces, because you can make exactly the size of computer you want, make it as fast as you need, and change these on the fly — including changing the disk size without rebooting the machine, which is handy if anybody's ever run out of disk before, which I've done.

And it's more than just virtual machines. If you make a machine that you really like and you want sibling machines that do exactly the same work, you can basically take that image, make a managed instance group, and say: okay, I'd like to have five of those — now I'd like ten — back down to three — and Compute Engine will just scale up and down. You can even do that automatically based on load.

And just a moment to talk about our load balancer. If you want to dig into the details, there's a paper on it called Maglev — that's the internal name. The load balancers you're using from Compute Engine, and from all the rest of the products I'm going to talk about today, are actually the data-center-scale load balancers that we use to run all of Google. So when you're making a configuration change here, it's the same load-balancing infrastructure, which means it scales up and down incredibly quickly: you can make a new load balancer and, within about a minute, hit it with a million requests per second of traffic, then drop down to nothing, and it works.
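To make the managed instance group idea concrete, here's a hedged sketch of the gcloud commands involved; the template and group names, machine type, image family, and zone are examples, not from the talk.

    # An instance template describes the machine you want copies of.
    gcloud compute instance-templates create web-template \
        --machine-type=n1-standard-2 \
        --image-family=debian-9 --image-project=debian-cloud

    # A managed instance group keeps N identical siblings running from it.
    gcloud compute instance-groups managed create web-group \
        --template=web-template --size=5 --zone=us-west1-a

    # Scale by hand ("now I'd like ten, back down to three")...
    gcloud compute instance-groups managed resize web-group \
        --size=10 --zone=us-west1-a

    # ...or scale automatically based on load.
    gcloud compute instance-groups managed set-autoscaling web-group \
        --min-num-replicas=3 --max-num-replicas=10 \
        --target-cpu-utilization=0.6 --zone=us-west1-a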

As a developer, a project manager, or a team, when you're using virtual machines you're thinking about your software, the shape of the computer, the operating system, and what the networking connections are. It's very similar to working with on-premises hardware in a way — just be careful, because it's all software — but you're responsible for all of those same concerns.

This is a really good fit for lift-and-shift scenarios, where you have existing software you want to bring over. A non-obvious one: if you just want to run a single containerized process on a computer of arbitrary size, that's really easy to do with Compute Engine — you can point it at an image, boot it up, and have it go. If you need a really particular version of a Linux kernel, or you have very specific licensing requirements, this can be a good — or perhaps the only — fit. It's great for running databases and the like, and you can use essentially all of the network protocols; broadcast we don't do. On the constraint side, it takes about 10 to 20 seconds to start one machine, and you can start thousands in around a minute. That's pretty fast, but some workloads need a faster response time, so that might be a constraint. And it's completely up to you to manage the whole software stack, which is important to recognize — how you update your software, and the like.

So what does it feel like? I've been talking for too long already — let's go. Okay, this is the web interface for Google Cloud Platform. You've probably seen demos of it before, but there's one thing that is critical to understanding this interface, and it's this button: the hamburger button. All of the real functionality is in there, for switching between different parts. Click on the hamburger button, go find the thing. So we'll take a look at Compute Engine, come over here, and create an instance. We can give it a name — like "awesome". We're going to make it on the west coast, because we are on the west coast right now — if I can read — and I guess we should do it in Los Angeles. We can choose from a whole bunch of pre-configured machine sizes — that region doesn't have all the biggest ones — but this is where we get to the customization we were talking about: if you want a machine that is exactly 20 cores and not that much memory, or more memory, you can move these sliders around and create exactly the shape of machine you'd like. Different regions have different chipsets and CPU families available, so if you're doing rendering or other processing that needs very specific CPU instructions, you can say "I need this generation of CPU or newer," and that does vary a bit by zone. We can also choose what disk image to boot from: any of the pre-built ones — various Linuxes, different versions of Windows Server, and the like — or your own. There's all kinds of other fun stuff, but I'm going to go ahead, and right before I hit Create, look down here at the bottom. When you're learning something new you want to be exploring, and the web UI is really good for that; once you figure out what you want, you can click down here and get either the exact JSON payload to pass to our API, or the command line that would create exactly the same instance, so you can then run it from gcloud and get the same thing. So I'll hit the Create button and start a mental timer.
Right now we've actually allocated the resources to run this machine, and we're copying the image out, booting the machine up, and waiting to be able to actually run your code. Once that checkbox goes green, it's handed off to a startup script that you can define, to run whatever your software is on the computer. And this is my favorite button: since I'm logged in to Google Cloud with OAuth, it knows who I am, so it's actually pushing an SSH key to this new instance and then connecting me, via the web, over SSH into the machine so I can actually do work on it. So we can take a look, and we can have some scrolling text — oh, I have to do that as, you know — see, it's real, I got an error message — and so we have scrolling text, proving that it's a real computer. So that's a quick taste of Compute Engine. There's a whole lot there — please take a look.
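The equivalent gcloud command the console offers would look roughly like this — a sketch using the shape from the demo (a custom 20-core machine in Los Angeles, named "awesome"); the memory size and image family are assumptions for illustration.

    # Create a VM in Los Angeles (us-west2) with a custom machine shape.
    gcloud compute instances create awesome \
        --zone=us-west2-a \
        --custom-cpu=20 --custom-memory=64GB \
        --image-family=debian-9 --image-project=debian-cloud

    # SSH in once it's up; like the console button, gcloud handles the key for you.
    gcloud compute ssh awesome --zone=us-west2-a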

So why might you choose Compute Engine? It is very consistent: if you do your testing on ten VMs in a given zone and later you want to use those same ten VMs somewhere else, you should get very similar — almost identical — behavior, wherever and whenever you run them. There are the custom sizes and live disk resize. Other talks cover this in detail, but basically, if you use the same shape of instance for a long time, it gets cheaper over the course of a month — sustained use discounts, applied automatically if you leave it running. If you have batch jobs, we have something called preemptible VMs: VMs that are guaranteed not to live more than 24 hours and might shut off at any time, but which are dramatically less expensive to run. Behind the scenes, if we're planning to do any work on the host machines, we actually move the VMs to another computer without dropping network connections, usually within milliseconds. And it's naturally a good fit for almost all the software that's out there today — it's running on computers with networks between them, that sort of thing. By the way, we run services on top of GCE ourselves: Kubernetes Engine, our Cloud SQL instances, and other things are actually running on top of Compute Engine, so a lot of what we're going to talk about later inherits all of these qualities we just discussed. Head to the docs — there's all sorts of information about Compute Engine there — and there are quite a lot of talks, but here are a few selected ones to check out for more details. I'll leave that up for a moment. Okay.

Kubernetes Engine, as you might guess, is Kubernetes hosted on Google Cloud. Has anybody heard of containers? Thank you for the laughs. How many people have heard of Kubernetes, though — just kind of heard the word? And how many are using Kubernetes in some fashion, anywhere? Okay, quite a lot, maybe 10 or 20 percent — that's awesome, thank you. The quick version of "why containers," in my mind: if you've ever heard somebody say, "well, it works on my machine" — containers basically solve that problem. It actually works in all the places, because it's the exact same binary and the same file system that you used to test and validate, and you bring that to production and to the other machines. In short, that's my personal feeling there. Kubernetes comes into play when you have lots of containers and you want to think about multiple computers and large numbers of containers as one system, and reason about it in that way. It was inspired by systems built previously at Google, and built in the open — released publicly and improved dramatically because we worked with real use cases and real users. There have been lots of contributions from a lot of different companies and individuals — it just crossed 20,000 contributors. The Cloud Native Computing Foundation owns all the IP, it is one hundred percent open source, and at this point it runs pretty much everywhere. So it's worth looking into, in my opinion. Conceptually, you're thinking more about applications than computers — logical infrastructure.
The short version is: when you get together with a colleague and you're trying to explain how something works, you immediately start drawing things on the whiteboard. Kubernetes is not far from writing config files that look like those lines and boxes we've all seen, just in YAML text. It's more fiddly, but conceptually you're connecting the application to the database to the back-end service it depends on, and you're actually defining those connections in config and telling Kubernetes to be responsible for making them continue to work — which is awesome. Kubernetes Engine, then, is hosted Kubernetes. There is some effort involved in running a cluster and keeping it working, and even in getting one started for the first time just to evaluate whether you care about Kubernetes at all. So if you're doing an evaluation, it really makes a lot of sense, in my opinion, to go straight to Kubernetes Engine: you can get a cluster running in about five minutes, see if it works for you and the projects you have, and go from there.
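If you'd rather do that evaluation from the command line than the console, a minimal sketch might look like this; the cluster name, zone, and node count are placeholders.

    # Create a small hosted Kubernetes cluster; this typically takes a few minutes.
    gcloud container clusters create eval-cluster \
        --zone=us-west1-a --num-nodes=3

    # Fetch credentials so kubectl talks to the new cluster.
    gcloud container clusters get-credentials eval-cluster --zone=us-west1-a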

Basically, it keeps the individual compute nodes up to date, it updates Kubernetes itself, it helps you with autoscaling, and a whole bunch more.

This is a really good fit if you're running multiple environments of any kind and you need to insulate yourself from the differences between them. That can be multiple clouds; it can be your own machines alongside a cloud; it can be different dev, test, and prod environments that you want to behave the same but keep isolated from each other. It does need — again, in my opinion — good team communication, because we're mixing some responsibilities that have traditionally been separated: the security folks, the people who own the computers, the people responsible for the software running on them. All of those pieces need good communication. You're probably much better off if you have source control for your config as well as your application code, and everything goes through some sort of build process, so that everyone who makes changes follows the same path into a test environment and later into production — it works much better that way. One big constraint is that Kubernetes only runs containers, so you do need to get your software wrapped up in a container. There can be some fiddly bits around licensing — some software is licensed per physical CPU it ever runs on, so you need to pay extra attention not to run it on too many computers, things like that. And maybe some architectural details: if you've got an app with really particular needs about how its pieces communicate with each other, it might not be a perfect fit.

Here's what this looks like in practice. Over here in Kubernetes Engine — cooking-show style — I have a cluster already deployed, and I've set up a service that right now is listening to... nothing. You click on it and you get nothing back. This is the Kubernetes Engine UI for managing things in a cluster — you can also do all of this from the command line — and, same story, you can get the raw config file for whatever we do in the UI, so you can do it programmatically later. I'm going to create a new set of pods — which are collections of processes doing work — and I'm going to use an image that's already built called nginx. It's a web server that is very, very popular and runs very well, so this will be a default deployment of a web server. I'll leave all the other settings at their defaults and click Deploy. It's making some decisions for me, like configuring an autoscaler and a default count: we start with three copies of this running somewhere in our cluster, and then, depending on CPU load, it will deploy more or fewer pods over time. That adjusts pretty rapidly depending on how much usage it's actually getting, and it can be based on CPU or a lot of other signals. That scales the pods up and down inside the cluster that's already running; separately, the cluster size itself can also be scaled. Now, if we come back to Services and click on that same thing, we have — ta-da — a brand-new nginx page. It's not amazing, but it works.
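The same demo from the command line would look something like this sketch; the deployment name and scaling numbers mirror the console defaults described above, but are otherwise arbitrary.

    # Run the stock nginx image as a Deployment and scale it to three replicas.
    kubectl create deployment nginx --image=nginx
    kubectl scale deployment nginx --replicas=3

    # Add a horizontal pod autoscaler, roughly what the console set up for us.
    kubectl autoscale deployment nginx --min=3 --max=10 --cpu-percent=80

    # Expose it to the internet through a cloud load balancer.
    kubectl expose deployment nginx --type=LoadBalancer --port=80

    # Watch for the external IP to appear, then hit it in a browser.
    kubectl get service nginx --watch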
That's a really quick intro, and I think the main takeaway is that once you have the cluster up, you tell Kubernetes what state you want the world to be in, and it's Kubernetes' job to keep it that way. If one of these processes dies, Kubernetes will restart it; if one of the nodes the cluster is made up of goes away, the work will be restarted elsewhere. Basically, that lets us all sleep better at night — that's the main thing. So let's go back to the talk.

You might choose Kubernetes because, like I said before, it insulates you from the differences between environments. It gives you a higher level of abstraction to reason about, but still lets you do almost all the things you want to do in software. One of the main reasons Google invested in this kind of technology is actually to save money: it enables you to pack a lot more software together on the same computers without the pieces messing each other up, because you have the isolation that containers provide. You can specify how much CPU and memory you want each of them to use, and they scale up and down within the set of machines. So instead of having computers sitting aside just in case traffic shows up, you can have multiple different applications running in the same space and scaling up and down as needed.

And you save a bunch of money, especially if you're running in a cloud where you can also change the number of nodes you're paying for at the same time. This abstraction level — the whiteboard level — is a real sweet spot for a lot of communication and a lot of reasoning about your systems. You might choose Kubernetes Engine specifically because, like I said, it's managed: it lets you focus a bit more, you get a very quick start, automatic updates — good stuff. Again, check out the docs first, and there are multiple talks here — this is actually only a small fraction; I think we have something like 20 or 30 Kubernetes-related talks at the event, and some of the ones from yesterday are already online. Check those out and send them to friends, colleagues, and the like. Moving on.

App Engine. This is the transition from infrastructure into developer-focused code. How long can I avoid saying "serverless"? Let's see — oh, not very long. That actually wasn't super intentional. App Engine primarily is made — and was made — to help your developers, which would be you, focus on code: just push that up and let us handle the rest. Its sweet spot is things that scale up and down really rapidly; it's perfectly fine running consistent workloads, but it adapts very, very quickly based on actual activity. Each of the processes actually runs inside a container, so they spin up on our infrastructure very quickly. It's actually been around quite a while — it started in 2008. At the time, and up until fairly recently, there were quite a few constraints around the kind of code you could run on App Engine: it had to be certain runtimes, and only certain libraries within those languages were allowed. Last year we announced the flexible environment, which is basically App Engine managing VMs and running containers in those VMs, which enables you to run a wide variety of languages, completely custom Docker images, and the like. And just recently — yesterday, I think — we announced several new second-generation runtimes. These are in the App Engine standard style, but they're the same language environments you would get by downloading Python, Node.js, or PHP. We're starting with three of them, and this is coming very soon — in the next week or so.

Why all of a sudden? The old runtimes used to take a lot of effort to secure; we had to make special versions and the like. The key difference is gVisor. This is a recently open-sourced project that allows you to run untrusted code safely without a VM in between. It's basically a syscall proxy for applications on Linux: it lets safe calls through to the kernel and blocks other calls, and it runs in userland, so it's very robust in terms of being independent from attacks on the kernel. gVisor, and the pieces around it, make us confident that we can run essentially arbitrary code safely across our fleet, and it allows us to move much, much faster in updating App Engine.

Which leads us to a slightly more complicated space to reason about. Hopefully this is a useful breakdown of the three different environments and some of the trade-offs. This talk is a choice with another choice inside of it.
Basically, the chart doesn't make complete sense on its own, but: the two environments on the left, the standard ones, are sandboxed individual processes running on a distributed set of machines. Flexible is actually managing virtual machines in your project, and you can run arbitrary code and resize those to whatever size machine you'd like, that sort of thing. Second generation and flexible both use the Google Cloud APIs directly. Many of the original standard runtimes existed before the rest of Cloud existed, so they incorporated things that are now normally part of Cloud — a blob store, for example; and we just recently announced Cloud Scheduler, which is cron as a service, and things like that. All of those are now available to all of our execution environments, so the new pieces use them directly. The big deal for developers, in many ways, is that you can use any Python module, any kind of Node.js extension — you just write the code the same way you would write it in other environments, and it should just run. When you're working in App Engine, you're thinking about the code you're writing, HTTP requests and responses, and versions, which I'll demo in a second.
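Deploying is essentially a one-liner — a hedged sketch, assuming a directory with your code plus an app.yaml that names a runtime (for example, runtime: python37 for one of the new second-generation runtimes); the version name is just an example.

    # Deploy the current directory as a new App Engine version.
    gcloud app deploy --version=lemon

    # Open the running app in a browser.
    gcloud app browse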

Basically, you can have one version of your code and then a whole other version of the application deployed at the same time, which is kind of cool — so let's take a look at that. Oh, before we do, the fit and the constraints: it's a really good fit for anything that's HTTP request-and-response based. Stateless applications are particularly easy to scale. And, as I talked about earlier, if you have applications that scale very rapidly — perhaps from almost zero to millions of requests per minute or per second, and then back down — that's a really good fit. I'll refer back to this slide for the constraints. Now, a demo.

Let's take a look. The App Engine dashboard looks a bit like this, and I have one app deployed. If we take a look at it, all it does is return an image of lemons. This is an amazing application — thank you for laughing. However, I have gotten some feedback that not everyone likes lemons, even though I love them. So we've actually created some different versions of the application, and they're all deployed simultaneously, and we can poke at the serving URLs for them directly — this one is cherry-dot-this-site — so you can test, look at, have people evaluate, or run scripts against different versions of the application at the same time, while the main URL keeps serving the previous version. That's pretty cool. And then, when you decide — actually, let's check here, since we have options. The lemons are the current state. How many people like cherries? Some cherries. How about apples? More apples than cherries. And durian? Oh, we have some durian lovers. Just because, we're going to go with durian. So what we can do is say we're going to ignore popular opinion, and I'm going to move the site over to being one hundred percent durian. When we come back to our main site, new users, on their next request, are going to get this version of the site. That's a straight cutover, but the really cool thing is that we can cut back just as easily. I can come here and say, you know, actually, let's go — confusingly — one hundred percent lemon, and go back (maybe I'll leave this open next time), and when we refresh, we get all lemons from there on. Or, if you want to send only a percentage of your traffic — you want to test out your new features without committing to all of your users seeing them — you can use the same tool.
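From the command line, each of those traffic moves is a single command — a sketch, assuming the versions from the demo live in the default service.

    # Straight cutover: send 100% of traffic to the durian version...
    gcloud app services set-traffic default --splits=durian=1

    # ...and cut back just as easily.
    gcloud app services set-traffic default --splits=lemon=1

    # Or split traffic, say 70% lemon / 30% apple, assigned randomly per request.
    gcloud app services set-traffic default \
        --splits=lemon=0.7,apple=0.3 --split-by=random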

Say we want to go with just 30 percent apples. Hit Save there, and then as we refresh this — refresh, refresh, refresh — oh, you know what, I need to use Shift, because caching. Maybe. Let's try closing it all the way... see how smart I am about web stuff. Did I not save? Now we've got it saved. Probably. Sorry. So you get to extend your demo on the fly: some of the users are going to get lemons, and some of them are going to get apples. Thank you.

So, App Engine lets developers focus on code; we handle the rest. It's great for web-focused workloads and great for variable load. We're serving well over a hundred billion requests a day on App Engine, for many, many users. Please check out the docs and the various talks — and we'll take a moment for photos... done.

Cloud Functions. Okay, so this is functions as a service. Functions, in the abstract sense, are a thing you give some input, and it gives some output or has a side effect. In some ways this is the purest form of code: you just focus on the specific thing you want to do — it has some inputs, it does something, you get some output. It was a little non-obvious to me at first, but one way to reason about this is that these are functions that get invoked by something happening in the world — an event happens. We can take events from a lot of places, including our Pub/Sub tools, Cloud Storage changes, HTTP requests that come in from somewhere, and, if you're using Firebase, various hooks in Firebase. And it is serverless. The definition of serverless I'm using — App Engine fits this as well, actually — is anything where you're paying for a unit of work, not just reserving a thing for a certain amount of time; you pay for some amount of work that happens. That's roughly the definition. It has been Node.js and JavaScript for the last year-plus; we just announced that very soon Python 3.7 support will be added, and you can expect a continued release of new environments over time. Also upcoming — something that's worth checking out and will be on the blog over time — is the ability to run containers on top of Cloud Functions, so you can wrap up whatever code you have and have that work. That's going to be a whitelisted, sign-up, early-access program, so take a look at that in the future. As the developer, you're thinking about events, the function definition, and what comes in and out of the function — and that's basically it. Oops — shall we edit slides on the fly?

Wait — that updated. So: it's a really good fit for anything that's event-driven, where you really don't want to think about the runtime environment and you want things to happen automatically. Something that may be a little non-obvious: a lot of data-transformation flows really look like this — you feed them some changes and they update other systems based on those changes — so the ETL workloads a lot of people have. In my mind it's basically the glue for the cloud: you want to connect this API to that API and do some other thing as a side effect, and it's a really good fit for that. One of the constraints is that you pretty much have to interact via events — that can be an HTTP call, but it's still via events. And your code lives at function-level granularity, so as you get more functions you have to reason about more of them, the system gets more complicated, and you have to figure out how you're going to do logging and monitoring and things like that.

And the quick demo — there we go. I have cleverly deployed a function already. It is a very amazing function that basically takes the request in, looks in the body of the request for something called "message", and then returns that. Inside Functions here you can actually update the code, and you can get the URL you'd use to trigger it and create an event over HTTP. If I click on that right now, I'm not actually sending any data, so I get the "no message defined" response. If I want to send some data, I can copy the example here — there's a little in-browser testing you can do — and basically say "hello lovely Next folks," hit Test, and we get a hello back and forth. So really, there's no special dev environment — it's straight JavaScript. If you want, you can also do it from the command line using your regular tools, but you can do it all through the web very straightforwardly, and then connect these pieces together. And there we go: you don't have to think about servers — that's where the "serverless" comes from — you pay only for what you use, and it's a very straightforward developer experience. Talks to check out — and photos: three, two, one, gone.
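For reference, deploying and invoking a function like that from the command line might look like this sketch; the function name and source layout are hypothetical, and the runtime flag depends on which runtimes are available when you run it.

    # Deploy an HTTP-triggered function from the current directory
    # (assumes an index.js exporting a function named helloMessage).
    gcloud functions deploy helloMessage --runtime=nodejs8 --trigger-http

    # Invoke it with a JSON body; the function echoes back the "message" field.
    curl -H "Content-Type: application/json" \
        -d '{"message": "hello lovely Next folks"}' \
        https://REGION-PROJECT_ID.cloudfunctions.net/helloMessage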

Okay. So, where should I run my code? Let's get to the actual question. It still depends — but on what? Maybe, listening to the things I've talked about, you've already thought, "I need to check this out" or "I need to look at that" — awesome. Beyond that, it depends on several other things that I want to go into in a little more detail: what abstraction level you want or need to be working at; what technical requirements you have — the language you're using, the way you talk to it over the network, whether it has licensing or kernel limitations, things like that; and then your team, your organization, your colleagues — what skills and interests you bring to writing software.

In terms of what you think about, this is a summary of what I talked about across the different sections: at each level of abstraction, the way you think about the code is actually quite different. You may decide that you want to spend most of your time focusing on code, or most of your time focusing on how the pieces connect to each other, and that might be a reason to start in a certain place. There are certain technical requirements that will pull you down the stack, if you will: if you need a very specific kernel version, you're going to go all the way to Compute Engine. If you want your code to run pretty much the same across really diverse environments, that will pull you at least to Kubernetes, maybe to virtual machines. If you need to run very specific languages, that might put you on App Engine. And there are several other constraints — as you get into the details, research, and docs — that might push you one way or the other. In some ways this is the serverless break, if you will: if your team is primarily developers who want to focus on the code, you're probably going to be in the App Engine and Cloud Functions kind of space. If you have good team integration — you've got folks who are engaged in how the systems work together, you're all-in on containers, and there are knobs and parts you really need or want to control for your business — Kubernetes is going to be a great fit. And if you're coming over from existing systems, or you want to work your way into cloud step by step, very deliberately, starting with VMs is a really good place to go — so, Compute Engine.

If you're running in lots of different environments, that also factors in. This slide is a little funky, but: there are virtual machines in almost every environment — you probably have them at your own offices, maybe on your laptop, and all the cloud providers have them — so if your stuff runs in virtual machines, you can run it in lots of places. Kubernetes essentially runs everywhere right now and gives you a lot of insulation from the things that do change between environments, so it's a really good choice for that. The work that's ongoing in the community around Envoy, Istio, Knative, and others is beginning to make a really good start at the platform-as-a-service level — focusing on the app and the code — while inheriting the insulation you get from Kubernetes, so that you can run those anywhere. That's worth thinking about.
And if you've got functions, which run in JavaScript or other languages, you can probably move them very easily between different functions environments. So those are the things to think about — equivalency. If you're already in one of these places, or you want to make sure you're able to run in different scenarios, look at these pieces. I mentioned Knative here — what is that? It was in the announcements yesterday, but it's basically an open source project to help build additional higher-level primitives. Kubernetes is very much an infrastructure project — high-level infrastructure, but infrastructure — and the Knative, Istio, and Envoy pieces are helping us reason at the application level, the developer-focused-code level, all the way up to almost doing functions. So please take a look at that.

Back to this, and here's the crux of the thesis — why is this "no worries"? I think the key thing is that if you get code into containers, you can run it in all the different environments. And real systems — everyone in this room, right — most systems end up running multiple services and multiple different runtimes in the end. They may have some piece of third-party code that only runs in a particular environment — maybe it needs Windows — and it talks to other stuff.

Then you have another part that's in functions, because it makes sense from a data-processing standpoint — that sort of thing. It's very common to run multiple of these in the same overall system. And if you are doing that, please take a look at both Pub/Sub and Cloud Storage. If you look at architecture diagrams, or talk to the cloud solutions folks, almost every system uses Cloud Storage and/or Pub/Sub, because we don't really have local file systems anymore: as we scale these things up and down, you need good coordination between the machines and places to put data and get it back out. You'll use a lot of databases, and you'll use Cloud Storage and Pub/Sub a lot, so take a look at those. They're not the flashiest bits, but they are everywhere and super, super useful.

If you do get your code into containers, you can then move it relatively easily between the different abstraction levels. In particular, for example, if you write an app that you deploy to App Engine flexible, as a side effect it has actually created a Docker image, built it, and written it to a registry. So you've got this image sitting there, and you can take that URL, point a Kubernetes cluster at it, and say "run that," and it will run. You can take that same URL, put it into the VM creation UI, and run that same image from there. Working down the abstraction levels goes quite smoothly; going up a level, you may need to do a little manual work in some places, but it should be quite compatible. So if you're already in containers, that helps you move up and down between these abstractions if you decide you need to later.

Start with these — this is the too-long-didn't-read version — start with these, and then move as necessary. That's why I believe "no worries" on that. We're winding down, so I'll release folks. I'll be hanging out to the side and then moving out into the hallway pretty quickly — super happy to hang out and talk. Thank you all for coming.
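As a footnote to the container-portability point above, here's a hedged sketch of pointing two different abstraction levels at the same container image; the image URL and names are made-up examples.

    # Run the image as a Deployment on a Kubernetes cluster...
    kubectl create deployment my-app --image=gcr.io/my-project/my-app:latest

    # ...or boot a Compute Engine VM that runs the same container image directly.
    gcloud compute instances create-with-container my-app-vm \
        --container-image=gcr.io/my-project/my-app:latest \
        --zone=us-west1-a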

2018-08-03 00:31

Comments:

I've watched Brian's Next updates for a few years now. It's where I start after each year's Next videos are posted. I've found that when I start here, I can then build on it into the many other areas of Google Cloud. Thanks Brian, great job!

Great talk, super informative, with a good balance of relevant information without going too deep.
