Intel and AI PC: Igniting a Software Ecosystem | Technology Launch | Innovation Selects


What I'm here to talk to you about is how we work with the hundreds of ISVs that utilize our leading silicon engines, so we can unlock the new and enhanced experiences that truly power the software ecosystem. Yesterday I had the opportunity to talk to some of you at our press dinner, and someone mentioned to me that software is where it's at. I tend to agree. Experiences that allow users to better create, connect, play, and learn are what we aim to achieve.

For 45 years, since launching the 8086 (many of you may remember it), Intel has worked hand in hand with OS and software developers to ensure software runs best on Intel. That is what has made us the scale platform of choice and helped us build an unmatched software ecosystem. In the first 25 years of our journey, we spent a lot of our time optimizing operating systems and creating tools, libraries, and frameworks.

What we wanted to do was simplify software development on Intel. In the last decade, we launched OpenVINO; many of you are familiar with it. It's a framework for high-performance inferencing that uses our CPUs, GPUs, and now NPUs to accelerate client workloads, making it seamless for developers to utilize our multiple engines. And now, in this era of AI, we continue to lead, with the creation of the AI PC Developer Program, a program specifically for developers focusing on AI.
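The multi-engine idea described here is essentially a fallback chain: route each workload to its preferred engine and degrade gracefully toward the CPU, which is always present. As a rough illustration (the workload names and preference table below are hypothetical, not OpenVINO's actual API), the pattern looks like:

```python
# Hypothetical sketch of the XPU-fallback pattern a runtime like OpenVINO
# implements under the hood: pick the best available engine for a workload,
# falling back toward the CPU, which every platform has.

PREFERENCE = {
    # Illustrative workload-to-engine preferences, best engine first.
    "sustained_inference": ["NPU", "GPU", "CPU"],  # low-power, always-on AI
    "media_effects": ["GPU", "CPU"],               # throughput-heavy pixels
    "control_logic": ["CPU"],                      # latency-sensitive glue
}

def pick_engine(workload: str, available: set[str]) -> str:
    """Return the most-preferred engine for `workload` that is present."""
    for engine in PREFERENCE.get(workload, ["CPU"]):
        if engine in available:
            return engine
    return "CPU"  # the CPU is the universal fallback
```

On a machine without an NPU, for example, `pick_engine("sustained_inference", {"CPU", "GPU"})` falls back to the GPU; an unknown workload lands on the CPU.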

We launched this back in March; many of you were there in Taipei. It's a one-stop shop for developers. We also expanded our support for AI frameworks to now include DirectML, PyTorch, and WebNN. And last year, many of you may recall, we committed to enabling over 100 ISVs and over 300 AI features. I'm happy to share that we've actually exceeded that goal ahead of schedule. Simply put, software enablement is in our DNA, because only Intel can drive scale at this magnitude.

We got here by learning a lot of lessons over four decades of software optimization, enabling hundreds (and I mean hundreds) of apps and partnering with thousands of developers. Our commitment to the software ecosystem always starts with leadership silicon. How do we do this? We conduct early disclosures of our platform technologies to give our ISV partners time, plenty of time, to align their use cases and start making decisions: they start mapping the right workloads to the right XPU.

We also receive feedback: we have architecture review boards, so we bring the voice of our ISV partners back to our product teams so that the next generation of products is even better for our developers. Breadth and depth are required to reach scale.

We work closely not only with our leading ISVs but also with hundreds of medium and small partners ranging across segments, anywhere from content creation to security and everything in between. Specifically, we provide dedicated engineering support so we can optimize, test, code, and validate these apps so that they deliver great experiences that run on Intel. And actually, many of our software engineering team members are here in Germany. We don't stop there.

We also provide support to thousands of developers on the Intel Developer Zone, where they get access to the latest SDKs, frameworks, tools, and the optimized models that Robert mentioned. We know developers like choice, so we like offering Lunar Lake dev kits, because developers like to get their hands on hardware. Actually, we have some hardware right here.

Today I'm proud to share that our OEM partners have decided to build their own developer kits. We have dev kits here from ASUS, Samsung, and HP, as well as the dev kit that we showed at Computex. Yay! Additionally, developers can access Lunar Lake devices through our Intel Tiber Developer Cloud later this month. What does that mean? It means developers can remotely access, develop, and test their AI applications free of charge, thanks to Intel. You've got to have choice. Last, our go-to-market engine is real value to our ISV partners.

We've learned a lot in 40 years of software enablement, and this is one of the things our partners have really learned to appreciate from Intel. It helps us help them drive market awareness, so that we can differentiate their apps through marketing campaigns, software bundles, and industry-leading retail point-of-sale support. What do I mean by this? I mean we help train 48,000 retail sales professionals globally on key enabled apps. The effort I just spoke about is real co-engineering. This work is unmatched, and it results in an unmatched software ecosystem that really is more than just logos on a slide.

This is real co-engineering work. With Lunar Lake, we've enabled more new and enhanced AI experiences. This is possible because we bring together our optimization efforts and couple them with Lunar Lake's fastest and most efficient CPU cores, highest-performance graphics, and unmatched AI.

That allows ISVs like Zoom, Microsoft with Copilot+ PC, and Adobe to provide instant creativity, saving time and simplifying complex tasks. So let's talk about Adobe; I think most folks have heard of Adobe. We've actually been partnering with Adobe for decades, and we've helped enable many of their popular applications like Photoshop, Illustrator, and Substance. And then there's Premiere Pro, a crowd favorite among video editors; I'm sure many of you have used it.

We've now incorporated new and optimized transcription and caption features on our GPU, and we've paired that with our software enablement. And look at that: it runs 86% faster than the competition. Now let's take a look at Lightroom.

It's another favorite among photographers. We worked to offload and optimize Denoise, also on the integrated GPU. And guess what happens? You got it: it runs 145% faster than the competition.

Guess what I'm going to say next? Finally, Adobe After Effects, with the Roto Brush feature also optimized on our integrated GPU. Guess what? It doesn't even run on the competition, but we see a 54% improvement gen on gen. Again, real engineering work. I want to give a big thanks to Adobe, and to Maria Yap from Adobe specifically, for their years of incredible partnership. And I'm really excited to see all the creativity that Adobe users will unlock with our next-generation iGPU with XMX.

All right, more ISVs to talk about. Next, I'd like to introduce three ISVs that we've been partnering with, again, for many, many years: Canvid, Magix, and Trend Micro. We've partnered to ensure that their critical workloads are optimized on the right XPU. Let's start with Canvid.

It's a new AI app from the same folks behind XSplit; many of you are familiar with XSplit. Canvid combines stunning screen recordings with powerful AI tools, allowing the average user to avoid retakes and dreaded filler words like ums and ahs, removing them so that the user's voice actually matches their lip movements. Next up is Magix.

They're actually based right here in Berlin. We've been working together with Magix for 20 years, and their Vegas Pro app has 15 million activations to date. Vegas uses AI-powered text prompts to simplify video editing. And last in our lineup is Trend Micro.

Trend Micro is a cybersecurity ISV. They protect, listen to this, 500,000 organizations and a whopping 250 million individuals.

Trend Micro runs critical AI-based security scans locally, which provides benefits from privacy to performance. With that, I'd like to invite leaders from each of them up on stage. Starting off, from Canvid we have Henrik Levring. A big round of applause for Henrik.

Henrik, nice to see you. From Magix we have Hagen Hirche; nice to see you. And from Trend Micro, Eric Schulze. Welcome, gentlemen. So we have the pleasure of having three ISVs that we've been partnering with over the years here on stage.

We have a couple of quick questions for them, so that you can hear directly from them. How are you guys doing? Great. All right, guys. First and foremost, can you say a little bit about how your work with Intel has gone over the years? Obviously we've worked together for many years. Let's start with you, Henrik.

Sure. So we've worked with Intel for more than ten years. First in my capacity as CTO of XSplit, which some of you hopefully know from the streaming scene, and now also as the CEO of a sister company that is developing Canvid. I think one of the quite unique things here is that it's actually the same two-man team that we've been working with throughout the whole period, with one member representing the business side and one representing the technical side.

And you know what's extra amazing here? They actually work together. For us, that translates into a single point of contact with whom we have monthly catch-ups, and we're also connected in real-time chat. That's a really effective way for us to work with Intel, and it allows us to move quite fast. Contrast that with working with a lot of other large companies: you may have a technical contact and also a commercial contact, but they'd be completely separate. They may change from year to year, or even from month to month, and quite often you get handed off to other stakeholders in the organization who might hand you off again, with the original contact getting disconnected altogether.

So yeah, I think Intel's model for working with ISVs works really well for us, and it makes us move super fast. That's great to hear. Awesome. So, Hagen, how about you? How has it been working with Intel over the years? Yes, we also have a very long-standing partnership with Intel, for over 20 years.

By now it's a very good partnership; Intel is actually our most important hardware platform partner. We get all the tools on the platform that we need to analyze the performance of our software, as well as to optimize it. Just to give you two examples:

We use Quick Sync through Intel's video processing libraries, which got mentioned before, to accelerate our media pipelines. And we use tools like the OpenVINO toolkit to accelerate our AI workloads, like the aforementioned assistant feature, which is new in Vegas Pro. And that's only the tool side. We are also happy to be connected really well to the Intel engineering team, so we have world-class engineers who help us throughout the whole development process. We attend architecture review board meetings, where we get early information about future development.

There we can give feedback and make an impact. And later in the development process, we get early access to hardware and guidance through development. That's all really, really important for us, so I appreciate this good partnership, Carla. That's great to hear. Yeah.

Some of the things I mentioned, I wanted to make sure got highlighted as well, so thank you for mentioning that. And Eric: our partnership with Trend Micro goes back a while. How has it been going? Good. Yeah, we've been partners, just like some of the others, for a couple of decades now.

One of the things we've always valued with Intel is the openness of the ecosystem. The reality is, in the security space, we have to support all hardware vendors, because we don't get to choose what our customers run; we have to protect them where they are.

So having an open ecosystem that others adopt and that is widely used actually makes our life easier, lets us run faster, and lets us create newer use cases, as does the developer support. Carla, I think you had the pleasure of meeting some of my R&D leaders in Taipei earlier this year. And then there's the support we have as a global company, right? We have large R&D offices in the US as well as in Taiwan, so having that regional support when we need it has been critical. Awesome. Great. Well, thank you, gentlemen. Now let's talk about the Lunar Lake platform, which is why we're all here today.

We all heard from Robert about the incredible features we're bringing to bear with Lunar Lake. So specifically for Lunar Lake, with all the leadership we've got across CPU, GPU, and NPU, from your perspective, for what you're trying to optimize, what were you able to leverage? Let's start with you, Hagen. Yeah. So, you laid out that developers like us have a choice between a very powerful GPU, an NPU, and the CPU. For us, the GPU is actually most important, because we have some really, really demanding workloads.

Let me just give you two examples. In our flagship video editing software, Vegas Pro, we have complex video effects, like Z-Depth. This enables creators to apply effects to certain layers in a video, to the foreground or the background only, or to put text, titles, images, and videos in between layers of a 2D video.

And it's pretty simple to use, thanks to AI, and thanks also to Intel research, who developed the model used for this effect. As I said before, we really need a powerful GPU to run workloads like this, and that's why we opt for the GPU many times.

I'd also like to highlight another example. You talked about this AI assistant before; that's a new feature, and I'm excited to share information about it with all of you here today. We are going to introduce some very new, unique user experiences which help new customers and new users get started with video editing more quickly.

More easily, too. And how are we going to do this? We're introducing an assistant based on a language model, exposed through a chat interface, which you will all know. As you can imagine, that's a pretty big workload, and usually it runs in the cloud. But thanks to Lunar Lake, thanks to its advancements, we were able to bring it onto the local PC, to run it on a nice thin device in the end. And that's a huge step for creators. Awesome.

Thank you. So, Eric, let's go to you. Yeah. In the security space, performance and power are key, because if we use too much of either, unfortunately we tend to get removed. That's just the nature of the security world.

So the NPU specifically has enabled us to do things we just couldn't do efficiently before on the CPU. Some of the OCR we do, some of the data discovery we do for sensitive-data detection, really requires that capability, because the only way to do it at that scale before was out in the cloud. And from a privacy perspective, that just didn't fly, so we didn't even try to do it, because no one would want us to do it anyway.

Now, with the NPU, we're able to move all of that locally, process it locally, and really get some new use cases that we just couldn't do before without that power. And even compared to Meteor Lake, Lunar Lake has enabled us to run it faster. As some of the quite impressive numbers shared earlier show, even less power has to be used for some of these use cases. So we're quite excited about that. And specifically for security, the privacy mandates in some countries are important. Do you want to say a little bit more about that?

Yeah. Specifically being here in Germany, one of the stricter countries in the world for privacy concerns, there were things we just couldn't do, right? We used to have to prompt the user with a data collection notice, because we'd have to send the data to the cloud.

Now that's not required anymore. It's a feature we initially introduced, or highlighted, at IFA, where we just couldn't do it before; I'll talk a little bit more about that in a bit. Awesome. Thanks, Eric. All right. So, Henrik, how about for you?

Yeah, maybe let me just give you a quick little background here. Aside from remote access, our first experience with the Lunar Lake platform was the engineering samples we got from Intel.

And usually, you know, engineering samples can be both a blessing and a curse. A blessing, of course, because you get to work with the latest and greatest, and a curse because sometimes the software needs a lot of alignment with the hardware before it gets joyful to work with, if you can say that. But in this case, I think the blessing has far outshined the curse. And I'll be one of the first in line for production Lunar Lake, so OEMs out there, just find me. Okay, so just like Eric, one of the things we are super excited about is the NPU.

It's incredibly powerful. Among other things, we are developing a model for face generation, or lip-sync generation. This model also runs on Meteor Lake, but it just runs incredibly much faster on the NPU of Lunar Lake.

Just for context: we can generate around 80 talking-head frames per second. If you think about that, it actually becomes more efficient to do it locally than in the cloud, because for the cloud you also have to count in the round trip and so forth. So yeah, we're totally excited about the platform, for sure. Awesome. Great.
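Henrik's point about local generation beating the cloud once you count the round trip can be put in rough numbers. The 80 frames-per-second figure is from the talk; the cloud-side compute time and network round trip below are illustrative assumptions, not measured values:

```python
# Back-of-envelope: effective throughput of local vs. cloud frame generation.
# Local pays only compute time; cloud pays compute time plus a network round
# trip per request. All cloud-side numbers here are illustrative assumptions.

def effective_fps(frame_time_s: float, round_trip_s: float = 0.0) -> float:
    """Frames per second when each frame request pays one round trip."""
    return 1.0 / (frame_time_s + round_trip_s)

local_fps = effective_fps(1 / 80)                      # 80 fps on-device, no network
cloud_fps = effective_fps(1 / 200, round_trip_s=0.08)  # faster server, 80 ms RTT

print(f"local: {local_fps:.0f} fps, cloud: {cloud_fps:.0f} fps")
```

Even with a server 2.5x faster per frame, the round trip dominates at this granularity. Batching many frames per request would shift the balance back toward the cloud, which is why the comparison depends on the workload.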

Okay, last question. We like to think about your customers when we're developing a lot of these things, and make sure we're providing value for the creator, the end user, who is going to benefit from the extra security scans and so on. So what are you all, and your customers, most excited about when we think about what we're bringing to market with our Lunar Lake based AI PCs? Let's start with you, Eric. Thanks, Carla.

From our users' perspective, wherever we can remove friction and increase security is always something we like to look at. And with the NPU, there were things we did in the cloud that actually had a high cost to Trend; by moving them to the NPU, we were able to actually remove user friction. As I mentioned earlier, it's very fitting being here in Berlin, in Germany, with the data protection laws.

We used to do some scanning in the cloud, and to do that we'd have to prompt the user with a data collection notice, which I'm sure you are all quite familiar with, saying, hey, this data is being collected. Now we don't even have to do that, because the data never leaves the device anymore. And when the data doesn't leave the device, there are much lower privacy concerns.

So this is one of those rare opportunities where we've been able to increase security, increase privacy, and decrease user friction. Normally that doesn't happen, especially in our world. That's awesome. So the fact that you don't even have to ask, the user benefits, because you're actually protecting that end user.

Exactly. We're expecting one of our features, which used to have a decent adoption rate, to now have pretty much a guaranteed adoption rate, because we don't have that friction step; that's where we saw the most significant drop-off in the process. That's awesome. That's great.

Okay, so, Henrik, how about for your Canvid users? Yeah. I mean, I think we're talking about the future here a bit, right? The entire roadmap for Canvid is actually built on the promise of the AI PC and platforms like Lunar Lake.

I don't think we'd be able to deliver what we think customers need and want without the advent of the AI PC. That's pretty powerful. It is, definitely.

And of course we could do it in the cloud; that's possible. But from our conversations with customers, it's not what they really want, and there are a bunch of reasons for that.

Probably at the top of the list is what Eric was also touching on before: privacy. But in general, yeah, we're very excited about the opportunity, and I think our users will get to experience that.

That's great. And, Hagen? Yeah, let me get back to the AI assistant example to explain what Lunar Lake and the new developments mean for our customers. First of all, they save a lot of time.

And time really matters in the creative process. As I elaborated, this assistant gets users started with video projects more quickly, and thanks to media workload acceleration, they can work on their projects more quickly.

So this saves them a lot of time. Secondly, they also save money in the end. We save a lot of costs for cloud services, and we can give that back to customers. They save money on subscriptions, because we can offer this AI assistant as part of perpetual licenses. That's a huge one.

And privacy is, of course, important for us as well. We want people to build trust in AI, because we believe it can be really helpful in the creative process. We want them to trust that we don't use their data for training, that we don't send anything to the cloud, but instead use their local PC to run all workloads. Actually, we have a new slogan at Magix, which is: AI means assistance with intention. We really want to help creators with AI, not replace them with AI.

Yeah, that's great. I think it's actually quite interesting, what you were saying there, Hagen: if you're depending on the cloud, there is no such thing as a perpetual license, right? That just doesn't exist. And with Canvid, for example, you do get a perpetual license. So I think that's a very good point.

Awesome.

2024-10-16 12:23
