Approaches For Mission-Critical Network Reliability (Cloud Next '19)

Welcome to NET204, "Connecting Your Data Center to Cloud: Approaches for Mission-Critical Network Reliability." My name is Nick de Costa Faro, and I'm a customer engineer here at Google, specializing in Google Cloud networking.

First, let's talk about Google's network. When we talk about Google's network, we're talking about a massive global platform: 134 points of presence and massive investments in subsea cables connecting the world, as you can see behind me, all within a private backbone network that we own or have private connectivity for. The map shows the regions we have today, the future regions we're planning to launch, and the many points of presence for connecting your data center to the cloud.

So let's talk about how you connect to Google Cloud — connecting your place to our place. We actually have five ways of connecting to Google Cloud, and we divide them into two sections: public addressing and private addressing. At the top is the public addressing method, which is connecting to Google via the various means of peering. Direct peering is one of them: it offers the ability to connect to Google via standard peering, as you would at an internet exchange, essentially connecting your public prefixes — the public address blocks you own — to Google Cloud via standard BGP peering. On the top right-hand side is carrier peering. Carrier peering is essentially the same means of connecting to Google via public addressing, but through a carrier that provides the connectivity — for example, if you don't have the connectivity to meet us directly at our peering facilities, you can do it through a carrier. On the bottom is private addressing. This is really interesting for enterprise environments where you want to extend connectivity from your on-premise environment using private RFC 1918 addressing directly to Google Cloud — so your on-prem workloads can reach Google Cloud directly — via two means of connectivity. And then we have the VPN option, which sits as a hybrid in the middle: it provides private addressing, extending your on-prem environment to the cloud, across public internet links.

For private addressing, we have two methods. One is Dedicated Interconnect, which is connecting directly to Google Cloud at one of our various points of presence: your equipment is co-located in a point of presence, or you have connectivity to one, and you establish a dedicated link to us. The Partner Interconnect offering also provides private connectivity, but through a partner — a service provider able to extend the last-mile connectivity from your on-premise environment to Google Cloud. If you don't have connectivity at one of our points of presence, you can use Partner Interconnect to get there. And as a hybrid model, as I mentioned, VPN is a way to connect via public internet links; enterprises often start that way before moving on to a dedicated or partner interconnect offering, and I'll show you later how seamless the transition is from VPN connectivity to a partner or dedicated interconnect link.

Let's talk a bit about Cloud Interconnect. First, I'm pleased to announce 100 Gbps connections at our PoPs — so you can now connect not only with 10 Gbps links but with 100 Gbps connections at our points of presence. Cloud Interconnect, as I mentioned, is a method to provide access via private RFC 1918 addressing from your on-prem workloads directly to the cloud. It enables a hybrid cloud connectivity model and doesn't require managing hardware VPN devices, because you connect your on-premise router directly to us at the various PoP locations. On the left-hand side is Dedicated Interconnect: as I mentioned, we can do 10 Gbps and now 100 Gbps, and we can provide an LACP bundle — a link aggregation bundle — of up to 8 x 10 Gbps or 2 x 100 Gbps transport circuits directly into our cloud environment from your on-premise data center. Partner Interconnect provides dedicated bandwidth at sub-circuit rates, starting at 50 Mbps all the way up to 10 Gbps, delivered through a service provider network.
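
To make the dedicated option concrete, here is a minimal sketch of what the GCP side of a Dedicated Interconnect might look like when driven through the gcloud CLI from Python. The project, region, network, router, and interconnect names are placeholders I've invented for illustration, and it assumes the physical interconnect circuit has already been provisioned and accepted at the point of presence.

```python
"""Sketch: attach a Cloud Router and a VLAN attachment to an existing
Dedicated Interconnect. All resource names and the region are hypothetical."""
import subprocess

PROJECT = "my-project"            # placeholder project ID
REGION = "us-east4"               # region closest to the colocation facility
NETWORK = "corp-vpc"              # existing VPC network
INTERCONNECT = "dc1-dedicated"    # dedicated interconnect already provisioned

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Cloud Router: the BGP speaker that will exchange routes with on-prem.
run([
    "gcloud", "compute", "routers", "create", "dc1-router",
    "--project", PROJECT, "--region", REGION,
    "--network", NETWORK, "--asn", "65010",   # private ASN for the cloud side
])

# 2) VLAN attachment riding on the physical interconnect circuit.
run([
    "gcloud", "compute", "interconnects", "attachments", "dedicated", "create",
    "dc1-attachment-1",
    "--project", PROJECT, "--region", REGION,
    "--interconnect", INTERCONNECT, "--router", "dc1-router",
])
```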

So here's what a dedicated interconnect deployment looks like. There's a colocation facility in the middle — that's where you meet us at one of our points of presence. You may already have a cage established in one of those facilities (the Equinix facilities are a very common example), or you can extend connectivity there through any telco provider that can get you there. From the colocation facility you extend the connection to your own data center, and we provide the connectivity into our cloud network.

Here's the list of our interconnect locations. As you can see, we're present all around the world, on every continent except, of course, Antarctica. The dark green circles are low-latency sites: if you need very low latency between your workloads and Google Cloud, these sites offer sub-5-millisecond access on average to the regions associated with those points of presence.

Looking at the Partner Interconnect offering, this is a model where the partner provides the connection to Google. Partners have already established a relationship and connectivity with us, and they provide the last-mile connectivity to their clients. If you're not looking for 10 Gbps or higher throughput, you'd look at a partner, because they can provide sub-circuit rates from 50 Mbps all the way up to 10 Gbps. One thing some partners can also offer is minimizing the router requirements — you may not need a router at the colocation facility at all — and for multi-cloud scenarios, some partners offer the ability to connect clouds together via a virtual router offering.

So let's look at the distinction between Partner and Dedicated Interconnect — why would you choose one or the other? I talk to a lot of customers, and depending on the use case it's usually very clear. Partner is for when you don't have access to our points of presence — you're not physically co-located in one — or when you don't need 10 Gbps, which is the minimum we offer with Dedicated Interconnect. A partner can provide a sub-circuit rate from 50 Mbps up to 10 Gbps, delivered as VLAN attachments on our end, and you pay only for what you need. The SLA requires at least two VLAN attachments terminating in two different availability zones within a metro area for a 99.9% SLA; for 99.99%, you establish that across two different metropolitan areas. With Partner Interconnect you also don't need to stay within one organization: the VLAN attachments can land in different organizations within GCP, so you can extend connectivity to multiple orgs.

Dedicated Interconnect is often used by customers that require high throughput — a minimum of 10 Gbps. The customer meets us at one of our PoPs, or extends connectivity to one of our PoPs through a service provider. All the VLANs ride on the same physical link, so when you establish VLAN attachments through the cloud console, they're created on the same physical circuit established with us. The SLA again requires at least two connections to one metro area across two availability zones. And all VLANs need to land in the same organization in GCP: you can establish a project in GCP within one org and that project can share its resources, but it's still bound by that organization — which is one of the advantages of Partner Interconnect, where you can span multiple organizations if you need to.
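
As a rough illustration of the 99.9% partner topology described above, the sketch below creates two partner VLAN attachments in the two edge availability domains of one metro and prints the pairing keys you would hand to the service provider. All names, the project, and the region are placeholders, and it assumes a Cloud Router already exists in that region; treat the flags as a sketch to adapt rather than a definitive recipe.

```python
"""Sketch: two Partner Interconnect VLAN attachments in one metro,
one per edge availability domain, for a 99.9% SLA topology.
Names, project, and region are illustrative placeholders."""
import subprocess

PROJECT, REGION, ROUTER = "my-project", "us-east4", "dc1-router"

def gcloud(*args):
    out = subprocess.run(["gcloud", *args, "--project", PROJECT],
                         check=True, capture_output=True, text=True)
    return out.stdout.strip()

for domain in ("availability-domain-1", "availability-domain-2"):
    name = f"partner-attach-{domain[-1]}"
    gcloud("compute", "interconnects", "attachments", "partner", "create", name,
           "--region", REGION, "--router", ROUTER,
           "--edge-availability-domain", domain)
    # The pairing key is what you give the partner so they can complete
    # their side of the attachment.
    key = gcloud("compute", "interconnects", "attachments", "describe", name,
                 "--region", REGION, "--format", "value(pairingKey)")
    print(f"{name}: pairing key = {key}")
```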

So let's talk about a few different ways you can use interconnect, and some of the use cases our customers are running today.

Here's an example of the private routing you can establish between your on-premise infrastructure and our cloud environment. You establish a BGP peering via our Cloud Router offering, which is deployed in a cloud region and peers with your on-premise router, exchanging prefixes. In this example, a headquarters with an external data center has a subnet of 192.168.55.0/24 and advertises it to the cloud region, which receives that prefix over BGP; the cloud region has 192.168.49.0/24 and advertises it back. So you establish that connectivity through the Cloud Router — our virtual BGP speaker in the cloud — and your on-premise BGP speaker.

Another use case builds on the concept of global routing and our global network. Because we can deploy multiple regions within one VPC, a single Cloud Router deployed in one region can natively reach other regions within GCP. That means we can advertise prefixes that exist in a different region than the one where the Cloud Router is deployed, and you ride Google's backbone to connect between those regions. You don't need to duplicate the effort of building separate Cloud Router instances in different regions; this is done natively through our global backbone network. It's a flag called global routing that you enable when you set this up, and it lets traffic seamlessly transit our network and reach remote regions from the Cloud Router's region.

DNS is another very popular solution. Customers already have an on-premise DNS server resolving hostnames for their private on-prem resources, and they also want to deploy resources in the cloud, so they use Cloud DNS in the cloud and create A records for the VMs they spin up there. They also want their on-premise environment to resolve hostnames in the cloud, and they do that via DNS forwarding. We have both inbound and outbound DNS forwarding, which lets you resolve hostnames from on-premise workloads to the cloud and from the cloud to on-premise workloads, and it works over Dedicated or Partner Interconnect or over VPN.
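
Below is a minimal sketch of the two pieces just described, driven through gcloud from Python: switching the VPC to global dynamic routing, an outbound forwarding zone that sends corporate lookups to an on-prem resolver, and an inbound forwarding policy so on-prem can resolve Cloud DNS names. The network name, domain, and resolver IP are invented for illustration.

```python
"""Sketch: global dynamic routing plus inbound/outbound DNS forwarding.
Network name, zone/domain names, and the on-prem resolver IP are placeholders."""
import subprocess

PROJECT, NETWORK = "my-project", "corp-vpc"
ONPREM_DNS = "192.168.55.53"            # hypothetical on-prem resolver

def run(cmd):
    subprocess.run(cmd + ["--project", PROJECT], check=True)

# One VPC, many regions: let a single Cloud Router learn/advertise routes
# for subnets in every region by switching the VPC to global dynamic routing.
run(["gcloud", "compute", "networks", "update", NETWORK,
     "--bgp-routing-mode", "global"])

# Outbound forwarding: cloud workloads resolve on-prem names by forwarding
# corp.example.com queries to the on-prem DNS server over the interconnect/VPN.
run(["gcloud", "dns", "managed-zones", "create", "onprem-forwarding",
     "--description", "Forward corp lookups to on-prem DNS",
     "--dns-name", "corp.example.com.",
     "--visibility", "private", "--networks", NETWORK,
     "--forwarding-targets", ONPREM_DNS])

# Inbound forwarding: expose Cloud DNS to on-prem resolvers; on-prem then
# conditionally forwards cloud domains to the inbound forwarder IP GCP allocates.
run(["gcloud", "dns", "policies", "create", "inbound-from-onprem",
     "--description", "Allow on-prem resolvers to query Cloud DNS",
     "--networks", NETWORK, "--enable-inbound-forwarding"])
```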

Another great offering is the shared VPC model. Within Google Cloud you can deploy a shared VPC, which is structured around a host project — the anchor for all your networking resources in the cloud. Service projects don't need to worry about networking; they just consume resources. They can be development teams that simply want to deploy applications, and they don't need to know about the underlying network infrastructure — the routing, the subnets, the firewall policies. That's all handled by a centralized networking project, generally managed by a networking team within your organization. That host project deploys a Cloud Router within a region and shares the interconnect resources with the individual service projects, which consume networking resources such as the subnets you grant them access to and can natively route, using whatever routing is available in the host project, back to your on-premise environment. It really simplifies the model: you don't need to duplicate interconnections in every project. You centralize this with a network administration team that manages networking for your environment, and the teams that run development or other service projects don't need to worry about networking — they're connected back to your on-premise environment through the policies you set.

Another use case is connecting privately to Google services. Google has many APIs that we expose — Cloud Storage or BigQuery, for example — that you may want to use from on-premise, and since you have a dedicated link to Google, it's a real benefit to use that link, with low latency and high throughput, to store data in, let's say, a Cloud Storage bucket, or to load data into BigQuery. You can advertise a single VIP range back to your on-premise environment for the API endpoint, and then make calls to the Google Cloud API services available through it. This supports Google API endpoints — there's a subset of endpoints currently supported — and on-premise you make a DNS change to rewrite the Google API domains to a specific VIP range, which lets you reach those prefixes through the interconnect link. You advertise that prefix — in this example 199.36.153.4/30 — on the Cloud Router back to your on-premise environment; the Cloud Router lets you program which routes you want to advertise. By advertising this range and making the necessary DNS changes, you can reach Google API services natively from on-prem.
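
Here's a small sketch of the Cloud Router side of that: switching the router to custom advertisements so it announces the restricted Google API range (199.36.153.4/30, i.e. restricted.googleapis.com) alongside the VPC subnets. The router, region, and project names are placeholders; the matching on-prem change — pointing *.googleapis.com at that range in your DNS — happens outside GCP.

```python
"""Sketch: advertise the restricted Google API VIP range to on-prem so API
calls (Cloud Storage, BigQuery, ...) ride the interconnect instead of the
internet. Router/region/project names are placeholders."""
import subprocess

PROJECT, REGION, ROUTER = "my-project", "us-east4", "dc1-router"
RESTRICTED_APIS = "199.36.153.4/30"   # restricted.googleapis.com VIP range

subprocess.run([
    "gcloud", "compute", "routers", "update", ROUTER,
    "--project", PROJECT, "--region", REGION,
    "--advertisement-mode", "CUSTOM",
    # Keep advertising the VPC subnets, and add the API VIP range on top.
    "--set-advertisement-groups", "ALL_SUBNETS",
    "--set-advertisement-ranges", RESTRICTED_APIS,
], check=True)

# On-prem side (not shown): configure your DNS so *.googleapis.com resolves
# to 199.36.153.4-7, so API traffic is routed over the advertised range.
```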

Let's talk a bit about high-availability topologies. One circuit, as I mentioned earlier, does not give you any kind of SLA; to have an SLA you need two circuits for redundancy. In this case we have a colocation facility located in one metropolitan area, and that metropolitan area has two availability zones. To have a three-nines (99.9%) SLA with us, you need to connect across two of those availability zones within the single metro. With BGP graceful restart and Cloud Router, we can provide high availability across those two zones.

For customers that require very high availability, you duplicate that across two metropolitan areas — say one metropolitan area on the east coast and one on the west coast — again connected across two availability zones within each metro. That gives you a four-nines (99.99%) SLA to connect back to GCP. Within GCP you deploy a Cloud Router in one region and a Cloud Router in the other region, generally as close as possible to where you're co-located geographically: on the west coast you pick a west-coast region, and on the east coast an east-coast region. With global routing, as I mentioned earlier, we can natively transit our network in case of failure — if we were to lose an entire region, we can route via the other region to connect back to your on-premise environment with this deployment model.

And now I'd like to introduce my colleague Sasha on stage to talk about high-availability Cloud VPN.

Hi, I'm Sasha, one of the engineers on a European team, and if there is one thing I've learned in my years working with VPN, it's that reliability is a critical feature of VPN — because if your connectivity from your on-prem location to GCP is down, you'll feel it across the whole organization.

The protocol we use to run VPN is the industry standard called IPsec. It's pretty good when you consider its encryption capabilities, but its ability to detect and route around failures is not so good: if anything fails on the way from GCP to your location, your tunnel will be toast. We want to build a more reliable connection, and the way to do that is redundancy. You can set up several tunnels running in parallel between the two locations and hope that if one is down, the other will still be there. That gives you some value, but it's not enough, because if those tunnels share any component on the way from GCP to your location and there's a fault in that component, again your whole connection will be down. So not only do you need a redundant connection, you need to isolate failures by making sure that at no stage is there a single point of failure shared by your tunnels. Now, this being cloud, it's all virtualized hardware — so where do you even start?

To do that, we developed a new VPN product called High Availability VPN, and for it we have a new kind of Cloud VPN gateway, slightly different from the old one. It provides you with two interfaces — you can think of them as two different network cards, similar to the physical network cards you'd put in your computer, so that if one of them burns out, the other keeps carrying your traffic. But this is virtual, so if you attach two tunnels to the two interfaces, we make sure they share nothing: no line cards, no network connections, no machines, no switches. In fact, being cloud, we're pretty sure they won't even share a data center. They're pretty much separated, and if one of them goes down, the other is very likely to still be working.

If that failure scenario does happen, then a short time after, the system discovers that one tunnel went unhealthy and switches all the traffic over to the healthy tunnel. How much exactly does that give you in terms of reliability? That's a good question — it's a function of how good the isolation is and how fast the process of switching from the unhealthy tunnel to the healthy one is. We did this analysis for our system, and I'm happy to announce that we're going to provide four nines — 99.99% — regional availability. That means four and a half minutes of maximum monthly downtime, or 53 minutes yearly.

Just a few notes. This is a regional product: you do not need to go multi-regional to get this level of availability — one region is enough. Just as with the classic VPN we had, you'll be able to connect GCP to GCP, to your on-prem locations, or even to another cloud. The only requirement that is now mandatory, and was optional with classic VPN, is that the VPN device on the other side must support dynamic routing. If you think about it, that makes total sense: you want the system to be able to fail over, to switch traffic from one path to another.

One last thing to note: when you provision this redundant connection, you have twice as many tunnels as you're assured of having if there is a failure. So the capacity you observe in normal operation is twice the bandwidth you're actually assured of, and if there is a failure, your available bandwidth drops by a factor of two — and that's not the moment you want to discover that you actually needed more tunnels than you configured. So to have a consistent experience, with your bandwidth staying constant all the time, what we recommend is setting up the VPN connection in an active/passive configuration: one tunnel has priority and carries all the traffic unless it fails, at which point all the traffic switches over to the other tunnel. With dynamic routing that's easy to set up — you just use different MED values, and that's it.

So, to recap: we're going to provide four nines of availability. The system is currently in alpha and is going to launch next month. Here's to more traffic running more smoothly. Nick, back to you.

Thank you. Thank you, thank you.
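
Below is a rough sketch of provisioning an HA VPN gateway and one tunnel per interface with gcloud from Python. The gateway, router, network, shared secret, and on-prem peer addresses are placeholders, the flag names reflect the HA VPN setup flow as I recall it (so treat them as an assumption to verify), and the per-tunnel Cloud Router BGP sessions — where the MED values Sasha mentions are set — are left as a comment.

```python
"""Sketch: HA VPN gateway with one tunnel per interface.
All names, IPs, and the shared secret are placeholders."""
import subprocess

PROJECT, REGION = "my-project", "us-east4"
NETWORK, ROUTER = "corp-vpc", "dc1-router"
ONPREM_IPS = ["203.0.113.10", "203.0.113.11"]   # documentation-range addresses

def run(cmd):
    subprocess.run(cmd + ["--project", PROJECT], check=True)

# HA VPN gateway: two interfaces, engineered to share no failure domain.
run(["gcloud", "compute", "vpn-gateways", "create", "ha-gw",
     "--network", NETWORK, "--region", REGION])

# Describe the on-prem (peer) gateway and its two public interfaces.
run(["gcloud", "compute", "external-vpn-gateways", "create", "onprem-gw",
     "--interfaces", f"0={ONPREM_IPS[0]},1={ONPREM_IPS[1]}"])

# One tunnel per HA VPN interface; dynamic routing (BGP) is mandatory.
for i in (0, 1):
    run(["gcloud", "compute", "vpn-tunnels", "create", f"ha-tunnel-{i}",
         "--region", REGION, "--vpn-gateway", "ha-gw",
         "--interface", str(i),
         "--peer-external-gateway", "onprem-gw",
         "--peer-external-gateway-interface", str(i),
         "--router", ROUTER, "--ike-version", "2",
         "--shared-secret", "replace-with-a-real-secret"])

# Not shown: a Cloud Router interface and BGP peer per tunnel; the
# advertised-route-priority (MED) on each peer sets active vs. passive.
```
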
Okay. So, putting all of that into perspective, I'll now go through a demo that looks at a common enterprise use case, where an enterprise starts with a VPN tunnel. Here, for example, I have a customer project with an IPsec tunnel connected back to a customer router on-prem. As Sasha was describing, I set the MED values, which are important for the traffic engineering we do with the interconnect model — in this case the VPN has a standard MED value of 100. Then I want to set up another connection, a dedicated or partner interconnect, and I don't want traffic to switch over to it yet, so I create it with a lower priority — a MED value of 1000 — and establish the link. Once that link is established, traffic does not move over yet, because the MED priorities still prefer the VPN tunnel over the interconnect connection.

Now, to do a traffic switchover, it's very simple: we just swap the MED values between the VPN connection and the interconnect connection, and the traffic seamlessly moves over. Then I have a VPN connection that can stay online as a backup in case I need it, but my traffic is actually routed across the dedicated or partner interconnect link.

So here's the topology for this demo. I have an HA VPN gateway deployed with two interfaces, running active/passive with MED values of 100 and 200, and I have my interconnect links with higher MED values of 1000 and 1200. They're connected back to an on-premise environment, and I'm running a ping test between an on-prem workload and a VM in GCP.
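
As a sketch of the cloud-side half of that swap (the on-prem router needs the mirror-image change so traffic stays symmetric), the snippet below flips the advertised route priorities on the Cloud Router BGP peers with gcloud. The router and peer names are invented for illustration.

```python
"""Sketch: swap BGP advertised route priorities (MED) so the interconnect
VLAN attachments become preferred and the VPN tunnels become backup.
Router and peer names are placeholders; the on-prem router needs the
mirror-image change for symmetric routing."""
import subprocess

PROJECT, REGION, ROUTER = "my-project", "us-east4", "dc1-router"

# peer name -> new advertised route priority (lower value = preferred)
NEW_PRIORITIES = {
    "interconnect-peer-1": 100,   # was 1000
    "interconnect-peer-2": 200,   # was 1200
    "vpn-peer-1": 1000,           # was 100
    "vpn-peer-2": 1200,           # was 200
}

for peer, priority in NEW_PRIORITIES.items():
    subprocess.run([
        "gcloud", "compute", "routers", "update-bgp-peer", ROUTER,
        "--project", PROJECT, "--region", REGION,
        "--peer-name", peer,
        "--advertised-route-priority", str(priority),
    ], check=True)
```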

In the cloud console for this project, you can see the topology I just showed: the interconnect sessions and the VPN sessions. The interconnect attachments show 1000 and 1200 as the advertised route priority, and the VPN tunnels show 100 and 200.

Here I'm running a ping test between the on-prem workload and a VM in GCP. You can see fairly sporadic pings, between 20 and 30 milliseconds — this is running across the internet over a VPN tunnel, so it's not predictable due to the path nature of the internet. I can't rely on a consistent latency; it's flipping up and down. I now want to switch traffic over to the dedicated interconnect link to get higher throughput and lower latency.

So here I run a script that swaps the priorities between the interconnect and the VPN tunnels. What I'm really doing is switching the priorities on both sides — the VPN and interconnect sides in GCP, and also on my on-prem router — to make sure both sides are updated so the traffic is symmetrical between the peers, which is of course important.

Once that completes, the latency is reduced quite significantly, because we're now going across the dedicated interconnect link with much lower latency and higher throughput. As you can see, it has dropped; once it stabilizes it should be around eight or nine milliseconds. With traffic running across the interconnect it sits around nine milliseconds, and the traffic seamlessly moved over to the interconnect link instead of the VPN tunnel — no packet loss, easy to do. If we look at the cloud console and refresh, we now see the priorities have been swapped: the interconnect sessions now have 100 and 200 — the preferred active/passive path — and the VPN tunnels have 1000 and 1200. They're still there as a backup, so if there were a failure we could move traffic back. So it's that simple to migrate traffic between VPN and interconnect seamlessly between your enterprise and GCP.
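
If you want to script the same verification done in the console above, a small sketch like the following reads the advertised priorities back from the Cloud Router; the router, region, and project names are the same placeholders as before.

```python
"""Sketch: verify which path is preferred after the swap by reading the
advertised route priorities (MED) back from the Cloud Router.
Router/region/project names are placeholders."""
import json
import subprocess

PROJECT, REGION, ROUTER = "my-project", "us-east4", "dc1-router"

out = subprocess.run(
    ["gcloud", "compute", "routers", "describe", ROUTER,
     "--project", PROJECT, "--region", REGION, "--format", "json"],
    check=True, capture_output=True, text=True)

router = json.loads(out.stdout)
for peer in router.get("bgpPeers", []):
    # Lower advertised priority (MED) wins, so after the swap the
    # interconnect peers should show 100/200 and the VPN peers 1000/1200.
    print(f'{peer["name"]}: advertised priority {peer.get("advertisedRoutePriority")}')
```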

And with that, that's the end of my demo. I'd like to introduce Steve Olsen from The Home Depot to talk about his interconnect experience. Thank you.

Hey, my name is Steve, but everyone calls me Steve-O. You'll have to excuse me — my voice is a little under the weather today, I caught something — but we will push forward. I've been with The Home Depot for 12, 13 years, which means I'm incredibly tired and I've seen lots of different things. We like to say we've seen more changes at Home Depot in the last five years than in the past 10 or 15, and that really speaks to our digital transformation. We've really put IT first. It wasn't always the case — when I first started, IT was kind of in the back seat; now IT is a front-seat driver. Our strategy is called interconnected retail: it's an omnichannel strategy that lets our customers buy online and pick up in store, buy online and ship to their home, or buy in the store — all of that merges together to give the customer the choice. And one of the big strategies we played with first was the cloud, GCP. We chose to do e-commerce first, and we have a journey of how we used the tools the guys have been talking about and how we connected to GCP.

But first, some base facts about Home Depot — I won't read everything on this slide. Home Depot was founded in 1978. Last year we did a hundred and eight billion dollars in sales. Dot-com was up 22 percent in sales — there are some dot-com folks here — and I think in the last five or six years we've increased dot-com sales by almost a billion dollars every single year. A lot of that growth has been in our cloud space; we completely replatformed the dot-com site, and I'll talk about that in just a second. We have over 150 supply chain facilities, over 2,000 stores spread across Canada, the US, and Mexico, and of course 400,000 associates. What a lot of people don't realize is that we have a big IT presence: over 3,000 IT associates, and we hired just under a thousand last year and are still going. We have offices all across the country from the various brands and companies that make up Home Depot. With that, I'll start with our actual strategy and how we got started with the cloud.

What led up to a distributed model for us was really just starting with e-commerce first, around 2014. We completely replatformed the website, and it was decided that we would go from an on-prem model to the cloud. Some of the reasons are the classic ones you'd hear: we were building on-premise infrastructure for sales peaks around Thanksgiving, Black Friday, or Father's Day — you build that infrastructure to the point where you use it only for those days, and the rest of the year it sits stagnant. We really wanted a more on-demand infrastructure that could scale, and of course the cloud came into the hat. This is going to be a condensed timeline over multiple years, but I'll explain exactly how we got there. One of the first things is data movement: how do we actually get data from our on-prem environment to GCP? Before you get going with lots of projects, the first thing is just basic internet, so we had basic proxies and internet access — and that doesn't scale incredibly well. What if you need private services, just like they talked about in the slides before — maybe VPNs or interconnects?

We first started using VPNs, and I've kind of got a love-hate relationship with VPNs — some people like them, I hate them — and I'll explain why. I mean, VPNs suck, let's face it, but if that's all you can do, that's all you can do. We actually used static routes — there was no Cloud Router at this point, this was around 2015 — so we were stuck with static routes and no redundancy. And it worked: it terminated on-prem here, it terminated inside the cloud, and it basically did what it needed to do, but we needed something more robust and we needed to scale higher.

So the problem with VPN, which I said we'd talk about, is MTU. How many of you have ever dealt with frame-size issues and MTU problems with VPNs? Everyone's hand goes up. MTU is a terrible thing to deal with — it's not Google's fault, but you have to have fragmentation before encapsulation, and if you've dealt with firewalls and packet captures and wire captures trying to work out why something doesn't work: does your application even support fragmentation? It doesn't know it's going to be fragmented.
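
As a rough illustration of why this hurts, the arithmetic below estimates the usable TCP MSS once IPsec encapsulation overhead is subtracted. The overhead figure is a representative assumption, not an exact value for any particular VPN device or cipher suite.

```python
"""Sketch: why MTU matters for IPsec VPNs. The overhead numbers are
representative assumptions (ESP tunnel-mode overhead varies by cipher
and whether NAT-T/UDP encapsulation is in play), not vendor-exact values."""

LINK_MTU = 1500            # typical Ethernet MTU on the underlay
IPSEC_OVERHEAD = 73        # assumed ESP tunnel-mode + UDP(NAT-T) overhead, bytes
IP_HEADER, TCP_HEADER = 20, 20

inner_mtu = LINK_MTU - IPSEC_OVERHEAD          # payload room inside the tunnel
tcp_mss = inner_mtu - IP_HEADER - TCP_HEADER   # what the app's TCP segment can carry

print(f"inner packet must fit in {inner_mtu} bytes; clamp TCP MSS to <= {tcp_mss}")
# If the inner packet is larger than inner_mtu and has DF set, it must be
# fragmented *before* encapsulation (or dropped), which is exactly the
# operational pain described above.
```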

I just need an Advil talking about it. We dealt with that for so long that we needed to get rid of it. Speed and latency is another big thing. One of the things I've found about speed and latency is that if you talk to your customers, sometimes they don't know the difference between the two. They might say "I need something quick" — but does quick mean "I need to load a terabyte of data in a day" or "I need a connection to a database in five milliseconds"? Sometimes you have to sit with that conversation to see exactly what they care about. SLAs and the internet in general are a big thing as well: you come in one day and your connection across the internet is 30 milliseconds, and the next day it's 45 — we've all dealt with that. We actually made a diagram showing cold-potato versus hot-potato routing to explain to our application developers and consumers that yes, we have internet connections to the cloud, but we don't go straight there. I don't own the infrastructure in the middle, and we had to get their heads wrapped around the fact that traffic may leave here, but I don't know what some tier-one provider does with it after that. That's how we explained it to them.

So let's see to the VPN issues first. We really wanted a way to make the VPN better. We couldn't fix the MTU immediately, but we could fix the speed and latency problems, and this is where we started discussing peering facilities. We figured that if we could get the VPNs closer to Google, we could get better performance: speed, latency, reliability, SLA. So we set up peering centers on the east coast and on the west coast. Those were the first good choices because a lot of the infrastructure for the things we use is there. We have well-defined latencies now: instead of using the internet, we own the private circuits between some of our data centers and these peering centers, so I can tell my application developers and all my teams exactly what they're going to get from a latency perspective — I know this one is 15 milliseconds, I know this one is 40 — and we don't have to worry about the internet.

So, revisiting the first slide, what we really did was move the VPN connections all the way out to the colo in the peering centers, with the VPN still terminating inside the project — so the internet isn't used in the middle of the path. Reliability went way up, and speed got even better because we now have direct connections: we're running the VPN basically over the last mile, not across the country hoping it gets there, because our own private circuits get us there. Real redundancy still wasn't there yet, though. We started doing multiple VPNs, but without Cloud Router, with static routes, and trying to scale that — yeah, that didn't work either, so we needed something even better. Still, it worked well enough that we kept adding: people liked it so much that we added a couple more peering centers, so now we have seven milliseconds and three, and we actually have vendors and application owners that test this and can get connections straight to GCP within 10 milliseconds.

Once we started doing that, the interconnect beta started, and we thought: if we can get interconnect in, we can get rid of the VPNs and finally fix the VPN issue — which was still the MTU problem. So the interconnect beta comes, Cloud Router comes, and now we have interconnect plus direct peering: at each of the peering facilities we have four different cables, two for public access and two for dedicated interconnect. Depending on what a project needs, it can use private access or public access over our own infrastructure to get there — the internet isn't involved — and of course with dynamic routing the redundancy is much more seamless and failover is much easier.

With that we grew even more. We added another facility this year, and all of these sites back each other up. They were talking about mission-critical reliability: you build this in such a way that when we have circuit issues and some of these links go down, our application owners don't even notice, because they're multi-region, attached to different interconnects, using Cloud Routers, and it automatically fails over. That's where we are today, but we're adding even more. And really it's not just GCP either: the journey of getting connected directly to GCP led us to other things. We're using it for other providers as well, for other vendors and partners of ours, and even for the internet itself. Just the concept of getting to GCP led us to this distributed, regionalized network, and combined with new technologies like SDN, we can have our remote sites — our stores, our distribution centers — all come to the closest peering facility and use cloud services, the internet, or whatever they might need. They don't need to come back to the main offices, and we can control all of that in a centralized, regionalized manner. With that being said, what we're doing next is investigating what they talked about in the last few slides — private access to Google services via these channels, meaning using our private address space to reach BigQuery and things like that.
We're really interested in shared VPC, we're going to continue deploying more peering centers, and we'll keep using the multi-region model with Cloud Routers and interconnects. We'll probably even expand to more peering centers in Canada and Mexico this year. This model has proven itself extremely well, our customers like it, and we will continue with it. With that being said, thank you.
