ASP.NET Community Standup - ASP.NET Architecture Series: AOT in .NET 8


[MUSIC] >> All right. Wow. That music always gets me hyped up.

I'm quite excited. Welcome to the ASP.NET Community Standup. Damian, you reminded me we just hit nine years. >> Yeah. I had stuck it in our calendar a couple years back when I realized we kept missing the anniversary. It's so funny because it feels like I just did the five-year celebration where I went out and bought balloons and stuff and decorated the studio.

But of course, that was all like pre-pandemic times and everything. It just feels like it wasn't that long ago, and now we're staring down the barrel of 10 years. We're in our 10th year as of right now. >> It's ridiculous. >> Insane.

>> It is insane. >> We're so fancy. We got music, outro, fancy cameras. >> Animation there. >> I love the production value.

I really do. The epitome that misses the junk of the old [inaudible] . >> Sure. >> Well, I know we have limitations on time. I want to get right to sharing the community links, because Damian. >> John loves community.

Here we go. Let me share this and here it is, and boom. Here's what we got for the community links. I will share them in the chat. Just a few this week, just five, if my counting is correct. Let's go. We got, first of all, this one is exciting to me.

I've been using this HTTP client feature in Visual Studio quite a bit lately. I love it. >> Awesome. >> There's some new stuff in here, because before it was just like a hard-coded file, and now with this you can actually use a bunch of different variables. They've got the whole thing listed out lower down below, but you can template in different things. Host address is one that they've got there, but there's a ton more of them.

Still scrolling. User secrets is huge, so you can store a token or something. There's [inaudible]. Still scrolling. Here's some. You can pull from the JSON file,

you can pull from Key Vault, all sorts of DPAPI. Tons of different stuff there. That really opens that up. >> Has support for DPAPI as a first class source, that is something else. >> Isn't that great? Then here, like a random generator. I've actually been using this thing right here. The Urlist is a site that Burke Holland and Cecil Phillip built five years ago on Vue, Azure Storage, Azure Front End, Azure Functions, but it was all cobbled together, and Burke and I have been rewriting that in Blazor.

We've been using this HTTP file for just testing some of the API endpoints. This random integer is great for me. Being able to just generate a new item in the thing without having to change my HTTP file every time. >> Very cool. >> A lot of great stuff in here.
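For reference, a variable-driven request in a .http file looks something like this. This sketch follows the popular VS Code REST Client conventions; the exact Visual Studio variable names (including the user-secrets and random-value sources mentioned above) may differ:

```http
@hostAddress = https://localhost:5001

### Create an item with a randomized id, so re-running doesn't collide
POST {{hostAddress}}/todos
Content-Type: application/json

{
  "id": {{$randomInt 1 10000}},
  "title": "Created at {{$datetime iso8601}}"
}
```

The `@hostAddress` line defines a file-scoped variable, and the `{{$...}}` tokens are dynamic values resolved each time the request is sent.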

I love seeing them build this out. This is, I think, built on top of a loose standard that is out there for HTTP files. >> Yeah. My understanding is, it's not a standard in the traditional sense, it's one of these de facto things; there's an extremely popular VS Code extension.

Then I think Rider has some support for this file format as well, and then VS started adding their support, but there's some deviations in places. I think the team is working to try and make sure it's aligned as best as it can be. One of the challenges with these things is that they come up with great feature ideas that you can't really express using the current standard, and so there's always that tension.

But like you said, the feature itself is pretty cool, and I can't wait for it to support more and more stuff. I'd love to get some more constructs in there so we can use this for pseudo testing and pseudo perf testing stuff. I've got lots of interesting things that I'm going to do with it. >> Cool. Next thing up here. There's continued stuff going on with the .NET Upgrade Assistant.

I absolutely love all the stuff they're doing in here. One of the things that they've done recently is upgrading project features individually. Before, the .NET Upgrade Assistant was very kind of,

hey, welcome, we're going to upgrade your entire application. Sometimes you want to do something as simple as just upgrading your project type, moving to the SDK-style project type. This is just a built-in feature now. It's adaptive, so in this case, this one doesn't require updating to .NET Core, or whatever, I don't know. But depending on your project type, I think it will recommend the different things that are available.

Big fan of this, I've been doing a bunch of demos and stuff lately. I talked to a team that has 28 million lines of Windows Forms code, and I was like, hello. Yeah. Get on that. I'm showing off my two community blog posts today. Mostly because both of these folks are doing a series.

John Hilton's doing a series on Blazor changes in .NET 8. This one is on capturing user input. Especially with Blazor server-side rendering, it's great to be able to use the Blazor model with EditForm, but then you've got to actually make it interactive, and since it's server-side rendered, you've got to hook that up. He walks through this. The two big things that I took away from this: you need to be sure to set your form name, and with EditForm you have a model that it maps to.

Here, the important things are you've got to set this form name to something unique for the form. Here he's got checkout. You can have multiple forms, of course, in an HTML document and also multiple edit forms.

You've got to name that with something unique, and then here, this is the key attribute: [SupplyParameterFromForm]. That's just basically saying that from the form post it'll populate that and it'll repopulate this DTO, and then it's able to just map the stuff back across. Other than that, pretty straightforward, and really nice to see that you get the benefits of SSR with Blazor while also being able to have that rich interactivity.
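In code, the pattern being described looks roughly like this. A sketch only; the CheckoutModel type and its property are illustrative, not taken from the post:

```razor
<EditForm Model="Model" FormName="checkout" OnValidSubmit="Save" method="post">
    <InputText @bind-Value="Model!.CustomerName" />
    <button type="submit">Submit</button>
</EditForm>

@code {
    // Populated from the form post on submit, matched up via the unique FormName.
    [SupplyParameterFromForm]
    public CheckoutModel? Model { get; set; }

    // Make sure the model exists on the initial (GET) render.
    protected override void OnInitialized() => Model ??= new();

    private void Save()
    {
        // persist Model here
    }

    public class CheckoutModel
    {
        public string? CustomerName { get; set; }
    }
}
```

The FormName is what lets the framework route the POST back to the right EditForm when several forms render on the same page.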

Anyway, he just walks through doing this using that [SupplyParameterFromForm], and then posting and making the updates. But the big thing also that I want to point out, down at the end of this, he's got a whole series going through the different things: server-side rendering, interactive components on server [inaudible], etc. Thanks, John. Then of course, the ever popular Andrew Lock series.

Here he's continuing, he's on part 8 of ASP.NET Core in .NET 8. Here he's looking at identity endpoints. I just love the way he writes his posts.

First of all, he talks about what he's doing, gives the foundational stuff. Talks about what we've offered to date. Shows this amazing but also overwhelming dialogue that we've offered for a while.

Here's how you can add in your different forms. Of course, this doesn't work that great if you're using Blazor or a SPA, or an API. The big work here in .NET 8 is these identity API endpoints. He just walks through here.
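Wiring these endpoints up looks roughly like this, using the .NET 8 AddIdentityApiEndpoints and MapIdentityApi APIs; the AppDbContext type and the SQLite connection string are illustrative assumptions, not from the post:

```csharp
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAuthorization();
builder.Services.AddDbContext<AppDbContext>(
    options => options.UseSqlite("Data Source=app.db"));

// Adds the Identity core services plus cookie and bearer-token authentication.
builder.Services.AddIdentityApiEndpoints<IdentityUser>()
    .AddEntityFrameworkStores<AppDbContext>();

var app = builder.Build();

// Maps endpoints such as /register, /login, /refresh, /manage/info.
app.MapIdentityApi<IdentityUser>();

app.Run();

public class AppDbContext(DbContextOptions<AppDbContext> options)
    : IdentityDbContext<IdentityUser>(options)
{
}
```

AddEntityFrameworkStores is what plugs EF Core in as the persistence layer he demonstrates.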

I'm not going to spend a whole bunch of time on this, but just basically showing how you can set up these API endpoints. Here he's using the EF Core storage for persistence. Then authenticating with an API, getting a refresh token, etc. Great stuff. >> Before we move off this one, I just want to say scroll back up to that somewhat comical dialogue, because I was the [inaudible] in that dialogue. [inaudible]. >> Ideas for a much better user experience with this dialogue, but as is always the case, it's complicated. >> What are some of the other ideas that you thought of? >> Well, for example, this dialogue, this whole feature with Identity shipped in, what, ASP.NET Core 3, I think it was, when we did this.

This dialogue more than anything else shows you how feature-rich and how complicated the built-in identity features of ASP.NET Core are. This is literally all the pages that you get by default when you add ASP.NET Core Identity to an app. It's not just register user, sign in, and be able to manage your e-mail address, no. Every single one of these is a separate Razor page. This dialogue is asking you which one of the built-in Razor pages you want to override so you can customize it. What it does is it drops that page into your project in a special path that will allow it to override and circumvent the in-framework page that listens on the same route.

We had to build features in ASP.NET Core and MVC to make that work. Before that, we didn't have the concept of a Razor class library that had a page in it that listened on routes in your app when you added it, in a way that you could then override just by dropping a page at the same path in your app. That wasn't a thing. But just saying that out loud, I hope people can see the utility in that: you can prepackage components or modules of UI, put them in a NuGet package, reference them from a project, have one line in your Program.cs to wire them up. Then if you want to override stuff, you just add a page at the same path and it takes precedence.
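Concretely, the override works by path. The default Identity UI listens on routes under /Identity, and a page you drop at the matching path in your own project wins. The layout below shows the standard convention; Register is just an example page:

```text
YourApp/
└── Areas/
    └── Identity/
        └── Pages/
            └── Account/
                └── Register.cshtml   <- overrides the Register page shipped in the Identity UI class library
```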

You can still share layout pages, you can still share views and shared Razor views and stuff, because the Razor lookup engine in MVC still does its view search directory stuff, which is very different to how Blazor works, which is all strongly typed and all the rest of it. It's an extremely different philosophy. This was actually a really cool set of work that we did to enable this, in my point of view. But this dialogue, unfortunately; what I had hoped was that we would be able to build a UX in Visual Studio.

Such that when you added the identity stuff, you would get virtual nodes in the solution explorer, so that all these pages would show up in your solution, but colorized a certain way or with a specific icon so that you could see they weren't actually files in your project, they were pages being projected into your project by virtue of a Razor class library. Then if you wanted to add them to your project to override, you could just right mouse click them and say, add this to my project, and then you would get the code. That would be the experience for doing that. We never were able to prioritize that type of experience, so this is what you get. You get a checkbox area and this thing checkboxes like crazy.

But I'll never say never, maybe one day we'll get the feature that I wanted. >> I loved it and I feel like it's very useful, and it's also a little overwhelming for somebody new to it. >> It is overwhelming. >> It's like there's a complete out-of-the-box experience; you do nothing, and they're all just built in behind the scenes and it just totally works, and you can override each individual one. You can use this dialogue or you can just do it by name. It's amazing, but it's also a little bit like.

Then I guess the big thing too is it only works for the Razor Pages, like MVC, scenario; it's not going to work for you otherwise. >> Victor in the chat has just said maybe it's time for a new Identity UI in Blazor SSR. Guess what, that's upcoming in RC2.

That's literally being worked on right now by the Identity/Blazor team. My understanding is it won't be as full an end-to-end experience as what you're seeing right here, but it will be basically a set of Razor components that do everything you see on the screen right now. That will be an option in the template so that you can say, hey, I want to create a new Blazor .NET 8 app and I want it to include Identity, and it will have Blazor UI for doing all the stuff that the existing ASP.NET Core Identity UI stuff does. I just don't think it's going to have the scaffolding stuff on release; that will take a little bit longer for us to get to, and the dialogue will still look like this, this is my guess. But it'll look a little different just because we don't have the same overriding thing in Blazor, we have to do it a different way.

They're still figuring out how they're going to do that. The only thing in Blazor that does that right now is the router. The router is a component that basically takes a string, which is the route, and then dynamically renders a page based on that. It's not strongly typed, whereas the rest of Blazor is strongly typed; you reference a component by its type name.

That's probably what they'll end up doing, but I'll let that team speak more to that. >> Cool, so anyhow, just wrapping this one up, he goes through, does all the stuff. We did also do last week's ASP.NET Standup, I wasn't on, but Stephan walked through some of that too, so that's pretty cool. >> The thing with this feature is, if you look at the comments on both the video from last week's standup and Nick's video on this feature, people want to customize everything, and again it gives you a glimpse into why this is so hard to design. Because they want a custom database, they want a custom schema, they want custom pages, they want custom.

>> Not the same people; different people want different customizations. >> People want all the hard things to be done by the framework, but they want to sprinkle all these custom things around it, which is why it's really hard to design one thing that works for everyone. >> Well, and then other people will say, why is this so complicated? >> Exactly. I feel like we had to get something out to get into people's hands and then see concretely what they think they want. What do they think that they need for these features? Do we have to expose these endpoints, is it useful for them? The UI is super tricky. When we shipped this version, people raged because they wanted an MVC view version of the same pages.

We did one version of them, and they wanted one for Blazor, for MVC, and it was like, you have to do every combination. >> Yeah, that was my call. >> We can't build everything. At the end of the day our resources are somewhat finite, and we have to make product choices about what we build, and we'll always fall back on: at the end of the day, people can build whatever they like. If there's a strong enough community demand to build an alternate version of something that's in the box, I would encourage folks to go and do that.

Just like I built a very rudimentary Blazor set of identity components last year as part of the exploration for this work. That's up in my public GitHub repo and I've had a few people like, can you expand it to do this and this, and I'm like, I just did it in a couple of days to explore the idea. At the end of the day we can't build everything.

To Fowler's point, everyone wants to customize things in slightly different ways. It's extremely difficult if you take those 30-something pages and then think, let me support all the various ways that we've seen people concretely ask with regards to how they want to customize it. It's just so much easier to say, how about we just give you the code? Let's build a UX that dumps the code directly into your project and then you can customize it to your heart's content.

That seems to be a better approach than building a higher-level API that has all of these extensibility points that you then have to learn about, that we have to document and version and all the rest of it. >> I mean, it is a balance, but I do feel like with what you built, at least what was shipped, you can plug in on all these things.

It's a bit of work, but you're not locked off from anything. As opposed to a really pretty, elegant, very simple thing that doesn't let you change stuff. I feel like what we have so far is nice, but now having these API endpoints is going to open stuff up even more. >> Yeah, I think so. >> Cool. >> I've just got one more link before I turn it over to you, and this is this foundational C# certification with freeCodeCamp.

This is really cool. Katie on my team has been working on this. It's integrated with freeCodeCamp and Microsoft Learn, a 35-hour C# training course you can go through, and you can get it. Then this gives you a badge through freeCodeCamp that you can put up on your LinkedIn or whatever. Wonderful stuff. David, if you're looking for a promotion, I'm just saying, you could get this certification.

Once you get the cert, people will actually believe that you know C#, and then you can see what avenues that opens up for you. >> New career goal for you, Fowler, you need to get your autograph on this. >> I'm done talking. I'm ready to turn it over to you folks.

>> Cool. Well, I think we wanted to talk a little more about native AOT and the work that we've been doing in ASP.NET Core 8 and .NET 8 to support native AOT for a subset of ASP.NET Core. I can't remember how much we've talked about it already in the architecture series. >> It was a while back.

>> It was a while back. >> Maybe you start by showing a template and then we could go to the templates. The template shows the differences. >> Let me do that. Let me share my screen. >> Is this really? >> Let me just create a screen that is sharable and then I will share my screen. >> Close Teams, close.

>> Oh my God. >> [inaudible] While you're doing that, there's a new thing in PowerToys that allows you to share a region of a screen as basically like a window. >> I know, I've been wanting that feature forever. >> You know why that is important now though, because of the curved screens. People have giant curved screens, so this lets you screen share.

>> Mr. Galloway, direct demand, thank you very much. >> Welcome. >> We have changed a few things. In the first previews, we tried to combine the AOT template into the ASP.NET Core API template.

It turned out that that wasn't the best idea, and so instead what we have now is one called, I got to find it, here it is, ASP.NET Web API native AOT. I'm going to go ahead and do that. I'm going to put that here.

Actually I'm going to put it in RC2, because I've got a folder that's set up already to deal with our RC2 projects. I'm using a daily build. Anyone can grab a daily build from GitHub.

But that's why I have to do something a little weird here just to do that. If I go ahead and create this project, this is now a separate template. If you want to create a native AOT compatible ASP.NET Core API, you have to use this separate template.

The reason is, that's the only part of ASP.NET Core, in terms of the higher-level app model, that we've made compatible with native AOT in ASP.NET Core 8, because we had to start somewhere and that was the most logical place for us to start. If I look in the Program.cs here, you can see we've got a minimal API application. Again, in ASP.NET Core,

minimal APIs is what we focused on for making native AOT compatible with ASP.NET Core. This is a different sample to what we've seen in some of our other APIs, the weather forecast, because I'm sick of the weather forecast. I chose to do the also very popular todo app which you see out on the Internet everywhere. There's a few things that are different about this project straight off the bat.

One of them is you'll see it's using this new. >> [inaudible] Can you zoom just a little bit more? >> Bit more zoom? >> Yeah, there you go. That's about right. >> There's this new CreateSlimBuilder API up here instead of CreateBuilder. The big difference between this and the usual CreateBuilder is there's just different defaults, and basically fewer defaults.

The CreateBuilder API is one that gives you a whole bunch of opinionated defaults with regards to what gets plugged into our very extensible core: what configuration providers, what logging providers, what gets enabled during development. Things like the developer exception page get enabled in the middleware pipeline. If you set up an authentication handler in your DI container, it will automatically add authentication and authorization middleware.

There's a whole bunch of smart defaults, I won't use the word magic because magic is different to everybody, but smart defaults that are set up. But they come with a cost. Not really a start up cost, maybe a little bit, but mostly to do with app size, because you're pulling in more code.

One of the goals of native AOT, and indeed one of the defaults of native AOT, is that the app is trimmed by default. That is, we tree shake the application to remove all parts of your app that are not used, where we can statically determine at build time what code is being used. Why is that important? Well, it turns out when you native AOT compile, apps get bigger, it's just the reality.

One of the advantages of the .NET IL format is that it's actually quite terse with regards to what could be projected at run time via the JIT, the just-in-time compiler, in the .NET runtime itself. After your application starts, it can basically expand and create more code at runtime, and that's basically what the JIT does, it's a just-in-time compiler.

When you do all that upfront, when you publish the application, the application, if you do nothing else, will get bigger; you'll generally find it's about three times bigger. That's the general rule of thumb, and so it's very important. Generally, it's important that you trim the application, that you just get rid of all the code that you don't need in order to run the application. That brings along some new restrictions, or some new compatibility requirements. We have this new CreateSlimBuilder.
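Trimming is implied when you publish native AOT, but the same tree shaking can be opted into on its own for a JIT-based deployment. A minimal csproj sketch; property names are the standard SDK ones:

```xml
<PropertyGroup>
  <!-- PublishAot implies trimming; PublishTrimmed enables it for regular JIT deployments too. -->
  <PublishTrimmed>true</PublishTrimmed>
  <!-- Show every trim-analysis warning rather than one rollup per assembly. -->
  <TrimmerSingleWarn>false</TrimmerSingleWarn>
</PropertyGroup>
```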

If you swapped this out with CreateBuilder, I don't think you get a warning anymore, Fowler. I don't think we actually have any AOT warnings in that path, but I can't remember. Maybe I'll try it later, but it'll be bigger. We also have an empty builder now. I think it's [inaudible]. >> It's letty. >> It's slim empty. Wouldn't that be funny?

That's the layer in between empty. CreateEmptyBuilder, which is new in .NET 8, is what it says. It's a truly empty builder, and so some folks have been asking for this for a while. We used to be able to do this all the way back in ASP.NET Core 1, through to like ASP.NET Core 5 or 6, when we introduced the WebApplication type, the pattern. But this is not for the faint of heart, you get nothing.

You don't even get a server, you get no logging, you get no configuration providers, you don't get an HTTP server. If you want Kestrel you have to add it. It is truly empty, but if you want that, if you have a use case for it, it's there now. So the first difference is that we're using this SlimBuilder.
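A minimal sketch of what "truly empty" means in practice: with CreateEmptyBuilder, even the server has to be added by hand. This assumes the .NET 8 CreateEmptyBuilder and UseKestrelCore APIs; the listening port is an illustrative choice:

```csharp
var builder = WebApplication.CreateEmptyBuilder(new WebApplicationOptions { Args = args });

// Nothing is preconfigured: no server, no logging, no configuration providers.
builder.WebHost.UseKestrelCore();                              // opt back in to the slim Kestrel server
builder.WebHost.ConfigureKestrel(o => o.ListenLocalhost(5000)); // and tell it where to listen

var app = builder.Build();

// No routing is registered either, so use terminal middleware directly.
app.Run(async context => await context.Response.WriteAsync("Hello from an empty builder"));

app.Run();
```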

It's a set of defaults that is pared down from the usual set of defaults, more optimized for application size and for this type of application. The second thing is you see a couple of bits of code to do with JSON. We've got this section here about configuring our HTTP JSON options, with this rather unfortunately verbose line to wire up some JSON stuff, and I'll talk a little bit about that in a moment; that's referencing this AppJsonSerializerContext type, which is defined down here at the bottom of my Program.cs.

I'll get through talking through this Program.cs and then we can start addressing some questions in the chat. This is using the System.Text.Json source generator, which is a requirement if you're using native AOT and doing JSON. By default, System.Text.Json will use reflection in order to discover parts of the types to serialize and deserialize, and then generate code at run time to do so. You can't do any of that in native AOT, or at least you can't do it the same way, and so you have to use the source generator in order to do all that upfront work at compile time. But it also changes the pattern. It changes what your code looks like. You have to have a type like this that inherits from this type, and you have to decorate it with these attributes to say that you are going to be serializing these types.

In this case I'm serializing a Todo array, and then for ASP.NET Core, you need to wire up that generated type. AppJsonSerializerContext.Default is a member that points to a default instance of that generated type, and you have to wire it up into DI, because we use DI; this was statically generated, it's just a static member.

You need to wire them together so that we can find them at run time, and then all of your APIs will be able to use this JSON serialization context when they go to serialize or deserialize input and output data from your APIs, and so you have to wire it into this type info resolver chain thing. It's a little unfortunate. I wish we were able to make this a little smoother, and I think we have ideas for the future on how we might even be able to get rid of this step or make it easier to wire up. We may never be able to get rid of it completely, but this is what we have in .NET 8.

Then down here are just the APIs. We've got a get that returns a bunch of sample todos, which is this array here, and we've got a get by ID, which, as you can imagine, just looks up one from this array here. That's it. That's all this really does. If I run it, it runs like you would normally run during the inner loop in Visual Studio. But when I publish, this is when the magic happens. When I publish this application, I get the natively compiled version of the application.
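Putting the pieces being described together, the Program.cs looks essentially like this (paraphrased from the shape of the .NET 8 native AOT API template; the sample todo values are illustrative):

```csharp
using System.Text.Json.Serialization;

var builder = WebApplication.CreateSlimBuilder(args);

// Wire the source-generated JSON metadata into ASP.NET Core's resolver chain.
builder.Services.ConfigureHttpJsonOptions(options =>
{
    options.SerializerOptions.TypeInfoResolverChain.Insert(0, AppJsonSerializerContext.Default);
});

var app = builder.Build();

Todo[] sampleTodos =
{
    new(1, "Walk the dog"),
    new(2, "Do the dishes"),
};

var todosApi = app.MapGroup("/todos");
todosApi.MapGet("/", () => sampleTodos);
todosApi.MapGet("/{id}", (int id) =>
    sampleTodos.FirstOrDefault(t => t.Id == id) is { } todo
        ? Results.Ok(todo)
        : Results.NotFound());

app.Run();

public record Todo(int Id, string? Title);

// The source generator fills this partial class in at compile time with
// serialization metadata for every type listed in the attributes.
[JsonSerializable(typeof(Todo[]))]
internal partial class AppJsonSerializerContext : JsonSerializerContext
{
}
```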

I think for this it's about eight or nine megabytes currently for this project to be native AOT compiled. With that, questions? Before I go too much further with that, let me answer some questions. >> I'll go backwards, I guess. Why not have the source generator attribute on the class and generate the serializer behind the scenes? >> You could. Today, the way the source generator works, there are two common cases it tries to handle; one where you own the type, so you declare the Todo yourself, you own the definition and you can change it.

The other one is where you don't own the type and you can't change that definition. In both cases we need to have you declare which types you want to serialize, to generate the code to make this work at compile time. This model, where you have an attribute that points at types, the ones that you own and the ones that you don't own, works in both cases.

But we are discussing a model where, if you do own the type, so in this case where you have a Todo type, we could just code-spit more stuff onto your type that can be found by the call to serialize as that's happening. If you think about how it has to work, the serializer has to know how to reflect on your object without doing reflection. What happens is, when you say you want to serialize this Todo array, you want to have that be understood, we have to look at that type at compile time, find the members, and then basically emit, at compile time, the reflection information that is used at run time to actually turn that type into JSON. >> Can you show the attribute? >> The attribute basically says, I can point at this Todo array, and now I can code-spit into this context the code to serialize the array, the Todo, strings, anything that is a property defined by Todo or the array; it's code-spit onto the context. The context is a surrogate type, essentially, that stores all the information required to serialize a world of types.

That could be distributed onto individual types, but we haven't built that model yet. But I think in the future that is a reasonable thing to do. >> Let me see. Does native consume less memory or more than interpreted, and by how much? >> That all depends. It's actually less to do with native versus IL, because ultimately the JIT will compile your IL to native code at run time.

That's what it does, that's why it's a just-in-time compiler, and then that will use memory based on what you write, the code you write and the code that we wrote in the framework. It's more to do with the GC, if I ignore the code. The other thing that we did in .NET 8,

was that we invested in a new GC mode, which is effectively an extension of the current server GC mode. We have set these projects up; if you use this project template, it gets set up by default behind the scenes here. You can see PublishAot is set to true, and then in the web SDK, we default projects that have PublishAot set to true into this new GC mode. The GC mode is called DATAS, D-A-T-A-S.
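For reference, the relevant project settings look something like this. PublishAot is what the template sets; the explicit DATAS property shown below is the documented .NET 8 opt-in, included here as a sketch in case you want it without PublishAot:

```xml
<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- DATAS is defaulted on for PublishAot web projects, but can also be enabled explicitly: -->
  <GarbageCollectionAdaptationMode>1</GarbageCollectionAdaptationMode>
</PropertyGroup>
```

The same switch is also exposed at run time as the DOTNET_GCDynamicAdaptationMode=1 environment variable.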

I think that's the right acronym. >> The acronym is, it dynamically adapts to application sizes? >> Yes, Dynamic Adaptation To Application Sizes. What it basically does is it's a server GC mode that has the characteristics of .NET server GC today, which loosely speaking means it's multi-threaded, which the normal GC, the workstation GC, is not. It's background GC and it's multiple heaps.

You get a heap, effectively, per CPU core. There are rules around when that's not true and all the rest of it, but for most cases that's true. Then the heap size is determined by how busy your application is: how much memory is your application asking to use? That's the big difference. Server GC does scale.

The app will start up with a certain amount and then it will scale and grow to use more, but it's more aggressive about how much memory it'll allocate to the heaps on each of the cores, and it's less aggressive about how much it deallocates if your app gets quiet again. Whereas DATAS is basically more aggressive about not allocating memory until you ask for it, having a smaller budget to start with, and then aggressively deallocating that memory when your application looks like it doesn't need it anymore. What that means for an application like this is that the memory use should scale up and down much more in line with how much memory the app is asking for. The number one factor is still how much memory you allocate, and the number one factor of that is what code you write and what APIs you call in the framework. That's still true, but this new GC mode is intended for the actual physical memory use of the process to be more in line with what your application is doing at any given point in time. Because this stuff is never straightforward; ideally we would just want to use only the memory we need and not a byte more.

But the truth is, if you want that you have to go C++, you have to go and do it yourself. You have to go and manage the memory yourself. This is a memory managed environment. Even in native AOT, it's still a managed run time. That's something else that I always try and make sure I address when we talk about native AOT is that often the words managed and native are used to mean opposites.

That's actually not a fair or an accurate portrayal. It really depends on what aspect of the run time you're talking about. Go is a managed run time even though it emits native code.

It doesn't have a JIT compiler, but it is memory managed. It has a runtime component; there's code in your Go application that you can't get rid of that manages memory allocation and deallocation, just like .NET. They have a garbage collector and we have a garbage collector. Native AOT .NET apps are native applications,

there's no intermediate layer. It is a native executable for the platform that you compiled it for, but it is still memory managed, and so you have to keep that in mind. >> What's strange or interesting about this whole new paradigm for building apps is, people have a lot of assumptions from C and languages that started off native. They're faster, they're smaller, they use less memory. I think those assumptions carry over when they hear we're doing AOT for C#. They think they can just make an app and it's going to be super tiny and small and the GC isn't there.

There are different concerns, like the GC being there: having a GC doesn't mean you aren't native. Go has a GC, but it is native. They're just different facets that typically come together in different languages. I think people think of native AOT and C# like C and Rust.

It's more like Go, which is, there's a whole GC still running. It's still having a heap per core and it's still using that memory. Why is that? I thought native apps were small, tiny, fast, and efficient. And I said, well, it can start faster because the JIT isn't there, but then you still have a full GC, etc. There are differences in the runtime as well; the thread pool is different, for example. There's a lot of things that are different in the runtime.

The runtime in AOT is fully managed and, which is funny, the runtime for CoreCLR, the JIT, is not managed. It uses less memory at runtime because there's no JIT that has to generate code at runtime; at the same time we have to emit more code ahead of time because we can't JIT on the [inaudible], generic instantiations, etc. But in our measurements those differences turn out to be a wash, I guess. They don't show up as big changes between JIT and non-JIT. The biggest factors we've seen have been trying to reduce the size of the libraries.

That's been a big thing in .NET 8; there's been a lot of back and forth, like trying to optimize libraries for speed versus size has been a big factor. You'll see some changes in the runtime trying to remove generics, for example, so you don't have generic instantiation explosion at AOT time, to make it smaller on AOT but bigger for the JIT-based frameworks. A lot of those changes went in, and then things like generic math blow up the size of the AOT framework, so they have to find other ways to reduce the size, to make the size even out. But we haven't seen any significant, oh my gosh, because you're AOT you are now super tiny.

>> Yeah, it's all tradeoffs. The default console app in .NET 8, you can tick a box during project creation in .NET 8 to say I want a console app that is native AOT published. If you just publish that Hello World app, it'll be about a megabyte on Windows, which is fantastic; it fits on a floppy disk, which doesn't matter.

It's all just about vanity at this point, but it's cool that we have a default app that is that small. But in order to hit that requires either making choices about what runtime behavior and functionality you get by default that you don't have to opt into or opt out of, or other aspects of the development experience have to be thought about. Or we have to re-implement something in the runtime to do it in a more size-compact manner. Often when we do that, we have to make a choice: do we prefer it to be faster at runtime or use less memory, or do we prefer it to have less code? You cannot just say, I want less code, I want to use less memory, and I want it to be the fastest. It just doesn't work that way, because physics. Anyone who has done algorithms, which I haven't, knows that when choosing an algorithm for a computer science problem you make tradeoffs: this one will use more memory, but it'll be much faster, I can calculate it before the heat death of the universe, but I need to have more memory in order to do so. Whereas this other approach uses a lot less memory, but it's going to take me 100 times more compute. That's just the nature of math; that's how these things work.
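For reference, that tick-a-box option boils down to a single property in the project file. A minimal sketch, assuming the .NET 8 SDK (the runtime identifier below is just an example):

```xml
<!-- Console app opted into native AOT publishing (.NET 8+) -->
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net8.0</TargetFramework>
  <PublishAot>true</PublishAot>
</PropertyGroup>
```

Publishing with something like `dotnet publish -c Release -r win-x64` then produces the fully native, self-contained executable being described.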

Even when you scale all the way up to building a full application, you still have to consider those types of things. To the point about comparing with things like Go, what we're finding in our labs when we benchmark these apps is that, yeah, Hello World of a console app in Go and a console app in .NET are actually fairly comparable now. Then the curve as you add more code really depends on what that code is doing.

If you use Go with something like Gin, which is a very popular HTTP API framework for Go, and you use .NET with ASP.NET Core minimal APIs on the other side, they're comparable when it comes to app size. In some cases we actually come out lower. If you use gRPC, again, what we're finding is that we actually are a lot faster in a lot of cases, because we allocate more memory upfront even with the new GC mode and because we generate perhaps more efficient code.

But the Go app might be smaller on disk because philosophically they've made that tradeoff the other way round. .NET is very much a batteries-included framework, whereas with something like Node or Go, you get very little with the core platform and then you're required to bring in more code to make these higher-level application frameworks. Whereas ASP.NET Core obviously is basically part of the platform, you get all those opinions and all that functionality included in the box, and it's seen as odd to use something alternative to ASP.NET Core in our ecosystem.

I have no problem with people using something alternative to ASP.NET Core at all, and I strongly encourage it if that's what people want to do. But it's just not the norm inside the .NET ecosystem, because the product includes, and we fund and produce, a framework with a lot of these features built in. That's just different to what it is in other ecosystems and other stacks, and so the tradeoffs look a little bit different. >> It's also funny, I saw a question about compilation, Steve. It's abysmal if you're used to fast compiles for .NET assemblies.

>> It's got some graphs. >> It gets pretty slow compared to the current build times. >> I'm going to go to our Public Benchmarks dashboard and I'm going to go down to the Native AOT page so anyone can see this stuff. Let me just pull this over a bit so that people can see it even with our lovely faces pinned to the right, so I'll do it about there. These are all the different variations on the Native AOT page. All the benchmarks we're running.

Stage 1 is representative of a Hello World API. It's effectively that template that I showed you before. If I compare Stage1 and Stage1Aot: Stage1 is CoreCLR; no native AOT.

Just that API, but not published native AOT, and then Stage1Aot is that with AOT. You can see the requests per second. We actually saw a huge gain in the non-AOT version during development of .NET 8, and that was due to improvements and product design choices with regards to the JIT, effectively, in CoreCLR, which we cannot really emulate over on the AOT side. Things like profile-guided optimization. >> Wait until you see Stephen Toub's blog post [inaudible]. It's epic, it's coming super soon.

It's epic. It describes everything. >> But if you ignore that, they're mostly the same, other than these CoreCLR-specific optimizations that we can make. These are still incredibly fast: like 715,000 requests per second to return a list of to-dos in memory. That's JSON serialized, on these machines.

These are six, seven-year-old machines. There are very few people in the world running these types of APIs without caching, without any other type of optimization, that are required to get these types of throughput. These are benchmarks, so these numbers are ludicrous. What's more important is that this demonstrates some of the tradeoffs. Startup time down here, the green is AOT, so the CoreCLR version is starting up in 140 milliseconds on this hardware, whereas the AOT version is starting up between 30 and 40 milliseconds depending on the particular run.

Massive difference. Our goal was to get under 50 milliseconds for this particular aspect of the test and so we hit that goal which is great. Similarly, time to first response. Now in ASP.NET Core, you may not know, we defer a bunch of the ASP.NET Core logic.

Things like routing, endpoint table construction, metadata evaluation, etc. We defer a bunch of that stuff until the first request comes into the application. We do it for good reasons, because we don't know all the information until the first request. Technically there's probably an earlier time, but it would be hard for us to hook that in the framework. The first request is the most convenient time for us to hook that, and it's later than the last possible time that that data could be updated. So we hook the first request.

That's why we see, even though the app has started up after, whatever it was, 140 milliseconds, when the first request comes in, that request takes 140 milliseconds as well, because we do a bunch of extra framework-level stuff that only happens once, on the first request. For native AOT we basically do a little bit, but it's just nowhere near as much, and so it's much faster on that first request, on top of that startup time as well. Memory use. This is what people like to see. Max working set. The normal CoreCLR one, you can see its working set is about 100 megabytes. The AOT version is under 50 megabytes.

Less than half, for this Hello World JSON serialization to-do application. That's fantastic. CPU is about the same, which means they're both effectively using the resources on the machine from a compute point of view. The second one here is working set. Rather than looking at the max working set, we're looking at P90 during the performance run. Again, it's fairly similar, because during a benchmark you don't have quiet points, and so the new GC mode doesn't have a chance to go, the app's not busy anymore, let me scale down the memory use.

No, during the run we're basically hammering it with as much load as we can, so we don't see much variation between the max and the P90 here, but you can still see the difference between native AOT and non-native AOT. Then the last one is application size. So if you take the application that's CoreCLR and you self-contained publish it, it's 95 megabytes, and if you take the one that is native AOT, it's under 10 megabytes. This is Linux in this case.

That's because it's been tree-shaken to remove all the parts of the application, and the framework as well, that aren't being used, and much more. If we go and compare something like a gRPC AOT app with the gRPC Go app, we can see some of those other tradeoffs start to emerge that we talked about before.

Here we've got gRPC Go, in yellow, and its RPS seems to be pretty much limited at 400,000 requests per second, whereas the ASP.NET Core gRPC AOT is nearly 1 million requests per second. Now, gRPC testing is nuanced, because gRPC is multi-channel and multi-stream, it's multiplexed, and so it really depends on the variables, and we have other benchmark pages you can look at that look at different variants of gRPC, different permutations of, like, is it one channel with 50 streams, or is it 50 channels with one stream, and all that type of stuff. But in this particular Hello World gRPC example, this is what we see. When it comes to startup time, you can see again, the Go app is actually much slower to start up, even though it's also native in this application, whereas the ASP.NET Core app is much faster to start up.

Then in the working set we're a little higher, and this is what we talked about with regard to those tradeoffs before. Our RPS is double and our memory use is not quite double. The Go application seems to be using about 59 megabytes of max working set and the ASP.NET Core one is about 121.

Then application size, again all the way down, we're actually smaller. So in this application the Go app, let's see if I can get the right bullet point, is 20 meg, and the ASP.NET Core application that's gRPC is 12 meg. It all really depends on what you bring into your app with regards to what the final on-disk size of the application is. Incidentally, we did a bit of customer research on this whole, how do customers think about the tradeoffs between throughput speed, memory use, startup time, and disk size.

Overwhelmingly, disk size was the lowest importance for customers, given the choices that we gave them in the surveys. Obviously, if you tell someone the app is going to be a gigabyte, they might care, but if you're giving them a choice between 20 meg and 50 meg, they don't care. They care much more about what the memory use is, what the startup time is, and the throughput. Throughput was the thing they cared about the most.

But again, that was based on the choices that we gave them. In reality, I think throughput usually matters most with regards to, am I getting adequate throughput for the application needs that I have, for the load I have, commensurate to the cost: how much am I paying for the compute to get that throughput? Obviously, things are a lot more nuanced and a lot more complicated in real life, but when we talk to customers about these tradeoffs, we are starting to see some patterns, which is interesting. I hope that gives people a little bit more insight into how these variables of app size, memory use, startup time, and throughput trade off against each other as we explore this native AOT world with ASP.NET Core applications.

>> Cool. We've got tons of questions coming in. Feel free to just say, I want to show more stuff or whatever. Otherwise I'll just keep taking questions. >> The questions are great. You're going to ask them. >> If we have more than 1 million users in a website application, is AOT publish recommended? >> I would say that those two things are just not related that way. Whether AOT publish is recommended is going to depend much more on whether you will get the benefits that AOT publishing provides. Namely, the vastly reduced startup time, the potentially lower memory use at runtime, and the probably lower disk size: do those things provide a benefit to you? There are lots of compatibility requirements for native AOT; we touched on them briefly.

The fact there is no JIT imposes a whole bunch of compatibility requirements, which you could also just call restrictions, which make it hard for a lot of libraries that a lot of .NET customers are used to using to work. >> You can't dynamically generate code after the app has started, that's what we mean by at runtime, and then load that code. Anything that relies on reflection to then do any type of IL emit or expression generation into code for performance reasons, those things don't work. You can't dynamically load code at runtime.

If you're relying on a module-based system where the app starts, it scans a folder of assemblies, and then loads the ones that it wants based on configuration, you can't do that in native AOT, because the app was fully compiled upfront. You have to do that type of thing at compilation time, not at runtime.
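For illustration, the assembly-scanning pattern being described looks roughly like this sketch (the `IModule` interface and the `plugins` folder are hypothetical, purely to show the shape):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

// Hypothetical plugin contract, for illustration only.
public interface IModule { void Register(); }

public static class ModuleLoader
{
    // Scan a folder and load whichever assemblies happen to be there at startup.
    // This relies on dynamic assembly loading, which native AOT cannot do:
    // everything must be known when the app is compiled.
    public static List<IModule> LoadAll(string folder)
    {
        var modules = new List<IModule>();
        foreach (var path in Directory.GetFiles(folder, "*.dll"))
        {
            var assembly = Assembly.LoadFrom(path); // not possible under native AOT
            foreach (var type in assembly.GetTypes())
            {
                if (typeof(IModule).IsAssignableFrom(type) && !type.IsAbstract)
                {
                    modules.Add((IModule)Activator.CreateInstance(type)!);
                }
            }
        }
        return modules;
    }
}
```

Under native AOT, the equivalent has to be done at build time, for example by referencing the modules directly or generating the registrations with a source generator.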

It's all about tradeoffs. If you need to run more than a million users on a website, I'll tell you, the biggest Microsoft sites in the world run much more than a million users and they're not using native AOT yet. Also, it's not as simple as saying my website equals one app.

The reality is, in most deployments these days, a logical website from the user's point of view is made up of many different applications in the back end, probably going through multiple layers of routing and caching, and it might be that for some parts of those you will get the benefit of native AOT and it's worth making that investment there, but for other parts it doesn't matter as much. It really depends on the use case. >> One of the big changes is the fact that libraries aren't compatible, as Damian said.

We assume that, as part of this journey, you will try out File > New Project for AOT, add your libraries, and either it will work and your app will get huge, or it will break at runtime in some strange way, or you'll get a warning. The idea is that this is a multi-year journey, so the whole world has to catch up with it. We're going to end up having to teach people how to build libraries that are compatible with this stuff, for example.

>> As an example of a warning: I tried to use a part of ASP.NET Core that has been marked explicitly as not compatible with trimming or native AOT.

Right down the bottom here, it says, I can't move my mouse and have it show up. You'll need to see, I can point at the screen, right down the bottom of this big gray square, it says IL2026, and you'll see in the bottom of my screen there's now a warning showing up. It says, yeah, AddRazorPages, which has this attribute, can break functionality when trimming app code. Razor Pages does not currently support trimming or native AOT. If the library has been updated, or, we sometimes call it, annotated, for trimming and native AOT, you'll get a warning in the editor like this.

If it hasn't, you won't get the warning until you publish. Then when you publish, we'll do a full analysis of the entire application as part of the publish, and you'll get this warning instead in the publish output. >> Let me see. Gosh, so many.
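The annotation that drives a warning like IL2026 is, roughly, an attribute on the incompatible API. A minimal sketch (the extension method and message here are hypothetical, not the real AddRazorPages source):

```csharp
using System.Diagnostics.CodeAnalysis;
using Microsoft.Extensions.DependencyInjection;

public static class MyFeatureExtensions
{
    // Any caller of this method gets an IL2026 warning from the trimming/AOT
    // analysis, like the AddRazorPages warning shown on screen.
    [RequiresUnreferencedCode("MyFeature uses reflection over types the trimmer cannot see.")]
    public static IServiceCollection AddMyFeature(this IServiceCollection services)
    {
        // ...reflection-based registration would live here...
        return services;
    }
}
```

Libraries that haven't added annotations like this produce no editor warning, which is why the full analysis at publish time is the backstop.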

Here's just, is it faster than Node? You were comparing to Go. >> The reason I'm smiling is it's not even close. ASP.NET is so much faster than Node, it's not even funny, in most cases. If I go to the benchmarks page and I bring up the Json benchmark and I compare to Node here.

>> Do JsonMapAction. >> Yeah, let me do JsonMapAction. JsonMapAction is minimal APIs, so the red one is minimal APIs.

We get 974,000 requests per second. Node on the same hardware, in a clustered configuration running a Node process per CPU, is getting 500,000, so we're double the speed. >> Question here on native AOT Web API support for Azure Functions. >> Good question. >> Yes, the Functions team will need to make changes to their development experience to support native AOT.

>> They're very aware of it. They actually are working on changes to make it compatible. But we still have the issue with the ecosystem. We spoke to the Azure SDK team, since we're in Microsoft, we spoke to all of the library teams that are very popular within the company.

Things like the Azure SDK, we've been talking to them about support for AOT there; the IdentityModel, so the JWT-based libraries for Azure AD, all those libraries should also be compatible. I think we actually got it compatible in the latest release. That was a lot of work, to move that team from Json.NET to System.Text.Json.

That actually made the perf really good; that happened, I think, in the last month or so. There's a lot of slow-crawling library dependencies that all have to be changed to be friendly. For example, Swashbuckle. This doesn't have Swagger, if you notice.

That's because it uses reflection to discover endpoints, and that doesn't work anymore. >> A big thing too that jumped out at me: it's not HTTPS by default. Again, because [inaudible]. >> Yeah, that's a really good topic too. >> It's a good observation.

>> Yeah. >> That's mostly for size. You can use HTTPS with Kestrel with native AOT.

Our assumption, and again it's very hard for us to make global assumptions that always apply, but we made one here, is that folks using this are most likely going to be deploying into a multiservice, distributed application architecture, a microservices type of thing. It's very common in those scenarios to have HTTPS offloading, where you terminate HTTPS at the reverse proxy layer, but then your individual services might not. That's not always true, to be totally fair. But we also looked at other stacks.

Things like Go don't do HTTPS by default, or Node. You have to configure those things manually. But us including that by default in the template hurts our output size even if you never use it, because it's a lot of code to bring in crypto support if you're not using it.

By default, the CreateSlimBuilder does not support HTTPS for all of those reasons, and you can call a single method to add it back. It will add a bunch of code back to your project and that will add more app size to your output, but yeah. >> I want to add one thing about what we learned doing this work. I think the biggest learning for us was the fact that you basically need highly loosely coupled software to optimize for a pay-as-you-go mindset. You want someone to be able to pay for the features that they use and not pay for everything just in case; your software has to be such that you've designed highly decoupled components that can be plugged in at startup, and then you only pay for the things you call, essentially.
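The slim builder is exactly that pay-for-play shape. A sketch of what it looks like in .NET 8; the add-back call here is `UseKestrelHttpsConfiguration` as I recall from the .NET 8 docs, so treat the exact method name as an assumption:

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

// Slim builder: trimmed-down defaults for native AOT; HTTPS support is
// left out by default to keep the published app small.
var builder = WebApplication.CreateSlimBuilder(args);

// Opt back into HTTPS configuration for Kestrel. This pulls the crypto-related
// code, and its size, back into the published output.
builder.WebHost.UseKestrelHttpsConfiguration();

var app = builder.Build();
app.MapGet("/", () => "Hello, native AOT!");
app.Run();
```

Deleting that one line is what keeps the crypto support out of the trimmed app; nothing is toggled by a runtime setting.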

Only the things you actually call get pulled in, don't get trimmed, etc., and that was the issue with the whole CreateBuilder, and there's a class of API styles that don't work well for trimming. The moment you have a function that has 10 if statements turning features on and off based on whether values are set, that pattern is fundamentally broken with trimming. The trimmer can't pick branches based on runtime values; it can't figure things out. We had to think back to all of our designs where we had one method doing five things; the options have to be left to the user. >> In code, the thing that really hit home for me was that you cannot have configuration-based defaults.

Because a configuration-based default, configuration meaning a Json file or an environment variable, our configuration system: if the framework is using configuration values to enable or disable features, configuration-based defaults, then that code has to exist in the application so that you can change the configuration value after the app is running, or after you've deployed it, after you've compiled it. A lot of the stuff in here was things like that, like what logging providers do we enable by default that you can go into configuration and adjust the granularity of, the verbosity of. That stuff doesn't work. It's an absolute tradeoff.

Fowler is talking about how it changes the APIs. People who have been with us since ASP.NET Core 1 and ASP.NET Core 1.1 will remember the first APIs that we shipped: to get this >> working, you had like 17 lines of code, because you had to

add the server and you had to add the config providers and you had to add the logging, and you had to use all this code to get all those defaults. That was perfectly native AOT compatible. It was trimmable, because if you delete the line of code, the code behind it doesn't come in.
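From memory, the ASP.NET Core 1.x-era setup being described looked roughly like this (names approximate and the `Startup` class assumed; treat the exact calls as an illustration, not the historical source):

```csharp
using System.IO;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

// Every default is an explicit call, so deleting a line trims the feature:
// perfectly pay-for-play, but a lot of ceremony just to get going.
var host = new WebHostBuilder()
    .UseKestrel()                                     // add the server
    .UseContentRoot(Directory.GetCurrentDirectory())
    .ConfigureAppConfiguration(config =>
        config.AddJsonFile("appsettings.json", optional: true))  // add config providers
    .ConfigureLogging(logging => logging.AddConsole())           // add logging
    .UseStartup<Startup>()                            // assumes a Startup class
    .Build();

host.Run();
```

Every feature is a line of code rather than a configuration value, which is exactly what makes it friendly to trimming.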

But it's just unwieldy; it's very hard for people to get up and going with a good set of defaults in that world, so we have to make tradeoffs, and it's all about tradeoffs. >> I think you've answered this question, but I just want to bring it up for context: with all these restrictions on native AOT, why invest a year, and how many programmers will find this useful? I feel like you're saying it answers a specific thing. The cases where developers want to build something that's dynamic, where it's cool that we're discovering things at runtime and we've got reflection and all that, a lot of those cases are solved. But then for the cases where you want to compete with really small, tight-startup microservices, where you pay for play on everything, this is what you're building here.

>> Yeah. I'll say from a product point of view, there's a reason we've done this work in .NET 8: it's because we got very strong customer feedback that this was an aspect that was preventing them from being able to compete in those types of scenarios that you talked about, John. It wasn't that we thought this was a good idea.

Sometimes we do stuff because we think it's a good idea, and that's fine, that's a perfectly valid way of doing aspects of product development. In this particular case, we had specific, strong customer signal. Folks were like, "Hey, I tried to do this thing," comparing this to this, trying to do this type of app, it looks terrible for this. The coding model was great, I enjoyed building and writing the code, but after I published it and stuck it in a container and measured these aspects, which are important to me for these reasons, it just didn't compete. We looked at the technologies available to us, that we had in our satchel already, as it were, and asked which one of these would best address this feedback. Native AOT was the one; it was the obvious choice. That required us to go into a whole bunch of thinking and working on how do we make ASP.NET Core work with the restrictions, or compatibility requirements, of native AOT, and this is where we've landed it.

Now we wait for the broader customer feedback after .NET 8 to see how many people actually think that this is something that will help them. >> But whenever we build these features, there are always the secondary effects that happen as a result of investing in things like this. DATAS, the new GC mode. The forcing function was that customer e-mail that said, when I run your app in a container and I compare it to Go, the memory is 10X.

It was like they didn't go the extra mile to learn how to configure it and make it smaller and configure limits, they were just trying to, what do we call it, kick the tires? >> Tire kickers? >> They were like, why is it 10X the size? It was faster, but it's huge and it's bloated; why is it so big? We landed on that new GC mode, and that's going to be, we're hoping, the new default for everyone. All these secondary effects from designing new APIs and making Kestrel more configurable, or having more APIs to turn things on and off, those always accrue to things that we didn't foresee, or that we did foresee, in the future. It may seem like we're investing in this super niche thing, but in my experience there have always been these other additional benefits from having these new designs that are foundational. For example, when we make the SignalR client AOT friendly, that will work for games. We've been getting feedback to run the C# client in environments that don't have a JIT for ages and it's never been a p

2023-09-13 21:18
