Meteor Lake Overview: In-depth with Intel Architects and Engineers | Talking Tech | Intel Technology


- Hi and welcome to "Talking Tech." I'm your host, Alejandro Hoyos, and I am very excited as we're gonna be talking about Intel's latest processor, Meteor Lake. (inspirational music) We have gone from a monolithic die to a completely disaggregated multi-tile design, which is Meteor Lake. We're gonna be interviewing different architects and engineers that are gonna take us through the different tiles of Meteor Lake, which are the I/O tile, Graphics tile, Compute tile, and the SoC tile. We also had the opportunity to travel all the way to Malaysia to learn more about Meteor Lake's packaging and tour the different facilities that cover assembly and testing. (inspirational music continues) So now we have this new architecture, the new name that we're calling it: a 3D performance hybrid architecture.

Can you tell us a little bit more about what this is? - We have had P-cores and E-cores, right? We have had that since the Alder Lake timeframe. We still have that in the Meteor Lake timeframe as well. So our Compute tile on Meteor Lake has P-cores, which are designed for high performance in single-threaded and limited-threading cases such as gaming and content creation; we deliver on that. Then we have our E-cores on the Compute tile, which are basically there to deliver multi-threaded performance under a given power envelope.

And then with Meteor Lake, we added this SoC or the SoC tile that we have. And on the SoC tile we added E-cores as well. And those E-cores, really their objective is to provide energy efficiency at that design point.

- And what we're doing there is having the ability to have separate E-cores on the SoC tile, in this thing we call the Low Power Island. And the Low Power Island is not just E-cores but consists of key IPs like the media, the camera, the display, and other IPs to allow us to do a wide range of workloads in low power mode. And what's great about this is while the Low Power Island's on, we can turn off the main Compute tile. So as part of our disaggregation architecture that we did with Meteor Lake, we were able to actually break those tiles up and have the ability to turn some of the tiles off.

- So let's say if you have, you know, low usage activity going on that is not scaling, it doesn't need as many cores or anything, we can run it on the SoC tile and keep the Compute tile powered down. And this helps us get power benefits. But when we need that high demand, high performance work, we have our compute complex, we run things on that and we deliver on performance as well. So we have the Low Power Island, the cores that are in the SoC. - Mm-hmm. Then we have the E-cores and the P-cores that are on the Compute tile. - Yes.

- And so all three of those are what we call the 3D performance architecture. - 3D performance architecture. - So for Meteor Lake, there's a few things that we really focused on there. The first is the DLVRs. We integrated DLVRs into Meteor Lake in order to allow fine power control of each of the IPs; it allows us to control individual tiles, or allows us to go to the IP level itself to control the performance of the IP and set the right optimization point based on the workload itself.

The other thing we did for power management, we optimized the fabrics themselves. We wanted the fabrics to be very dynamic based on the workload and the performance. With the disaggregation, we increased the size of our fabrics to have better performance. We have some additional traffic going between the tiles themselves, but we also remained focused on the power management to do this. The best way to think of this is if a workload demands lots of bandwidth between one or two tiles or certain things today, we can optimize the fabrics between those tiles while the other tiles can be in a lower power mode.

And so this allows us to have that flexibility to adjust our fabrics to address the bandwidth and the speeds and the performance we need, based on the workloads that are there. All this work is now tied back to the software, so Intel Thread Director has the ability to schedule the right thread and the right workload on the right core at the right time, and it allows us to turn the other tiles off and back on. Tying that into the overall system software has been very important, and Thread Director allows us to give hints to the OS to actually move the threads back and forth between the different cores and hit the optimization points that are there. - Okay, let's talk about the improvements that have been made to Thread Director.

- Yeah, definitely. So you know, this is kind of our continuous attempt to improve the hints that we provide to the operating system, assisting it in coming up with the right placement decisions on our hybrid architecture. In Meteor Lake we have our SoC tile, right? That's new. - Hmm? - So there are SoC-tile-specific algorithms that are part of our enhanced power management that got incorporated into Thread Director to kind of reflect, "Hey, now is the time to keep the work on the SoC tile," or "Now is the time to move the work from the SoC tile," right? So I mean just to give an example on that, right? Like let's say that I'm running some CPU-intensive task on my Meteor Lake system, right? I have four threads and I keep my P-cores active because I really need that performance. Now if some audio playback or video playback starts, since I have my compute complex up, there's no point in waking the SoC complex, even though the work could have fit there, so we recommend to the OS that it use the E-cores on the Compute tile to do that low-usage work, right? But once my foreground high-usage work, say a game or content creation, is finished and I only have that low-usage work, then Thread Director reflects back to the OS, saying, "Hey, the efficiency ordering has changed." The SoC tile is more efficient, so the work can move there and we can power down the Compute tile.

So all of this gets communicated to the OS from hardware in terms of this Thread Director table that we provide (a toy sketch of that placement policy follows this exchange). - That's pretty cool. That's pretty amazing, and it's all focused on, you know, trying to save that power- - Exactly! - and get that extra three hours of work on the plane. - You get extra time and you also don't sacrifice on performance, right? - Yeah. - 'Cause that's one of the other objectives we have.
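As a purely illustrative aside, the placement behavior described above can be sketched as a tiny heuristic. This is not Intel's actual Thread Director table format or the Windows scheduler logic; it only mirrors the policy the interview describes.

def preferred_core_class(foreground_demand_high, compute_tile_awake):
    """Where should a low-intensity background thread land? (toy policy)"""
    if foreground_demand_high or compute_tile_awake:
        # The Compute tile is already powered: park background work on its
        # E-cores rather than shifting it anywhere else.
        return "Compute-tile E-cores"
    # Nothing demanding is running: the SoC-tile E-cores are now the most
    # efficient choice, and the Compute tile can be powered down.
    return "SoC-tile (Low Power Island) E-cores"

print(preferred_core_class(foreground_demand_high=True, compute_tile_awake=True))
print(preferred_core_class(foreground_demand_high=False, compute_tile_awake=False))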

- We've really been focused on having, you know, from the very beginning, the most power-efficient SoC ever with Meteor Lake. And we really start to achieve that. It's a culmination of the IPs that we did, the disaggregation, but also Thread Director.

It comes together there, and the partnerships with the OS. Microsoft's been a great partner. We've been working with 'em very closely in order to have this really work and really be seamless, and we're really excited about this, having this full range of great performance and great battery life. - I mean, think about it as a first step in our journey of disaggregation, right? We are going to double down on this.

We have more products and architectures coming down the pipe, and then as we put more Meteor Lakes out there, we get feedback and use that to improve our next disaggregation step that we have coming up. - You know, you don't have to choose anymore. In the past, maybe we would focus highly on performance, or you'd get battery life modes.

With Meteor Lake you get the best of both worlds. You get great performance when you need it, dynamically based on the application and workloads, but at the same time you get great battery life, and it's all done under the hood. The software and the optimization is done dynamically to give you that great... So it's no longer a trade-off; you get the best of both worlds. (inspirational music) - It's a multi-tile processor? - Right. - How's that different from previous processors? - Great question.

So the way I would say this is that, you know, if we take a step back and look at what we have done for Tiger Lake or any other processors we have, it was all monolithic, meaning just one single die. - Meteor Lake is actually a combination of multiple dies, right? Not just the SoC, right? - So Meteor Lake, as we know, was the first time that we did die-level disaggregation, right? So we had multiple tiles: the SoC tile, the Compute tile, the Graphics tile, and the I/O Expander or IOE tile. - Of course we also have a dedicated tile, the base die. So these are the tiles that we have for the overall Meteor Lake SoC in general. So, the concept of having multiple dies within a single package isn't new, but Meteor Lake is revolutionary in the sense that we're going with a Foveros 3D construction. - So what happens is that we have all these tiles, or dies, on the top, and there's a base die sitting underneath; that is where you have the connection.

And what will happen is that, you know, let's say Compute wants to talk to the SoC: Compute will go down, go through this base die, come up, and talk to the SoC. It's the first time we are doing this multi-tile design, and to me it sets the groundwork for our future-generation products to be able to mix and match and get the best out of process technology as well as performance when we need them. - The big thing here is when we disaggregated, the idea was you get the best design matched to the best process, and hence you can keep moving things around.

- If Compute is very important for you, you go replace the Compute with the next-generation compute on a next-generation process, and you're only changing the component that is really important for you. - Usually when you have all these various IPs revving, including a new IP like the NPU for the AI accelerator, you would count on the fabric being pretty stable. But this time around, in the previous generation, which was Tiger Lake, right, we had maxed out the bandwidth capabilities of the previous fabric, which was PSF. - This fabric is responsible for getting traffic from the memory in and out, right into these I/Os as well. - We needed to create a new high-performance fabric. So part of the Meteor Lake architecture is the creation of a new network-on-chip fabric, the NOC fabric.

We have our NOC, and it's a cache coherent fabric. And then for the devices that are attached to that, we have our two tiles, the Compute tile and our Graphics tile. Those are attached to it.

And then, inside the SoC, we have media, display, imaging, our new NPU, and then our low-power SoC E-cores. Those are the main devices that are on it. And then attached to that we have the memory controller, we have the power management controller, and then we also have a bridge to our IO fabric. So that's our NOC fabric. We have two main fabrics: our NOC fabric and then an IO fabric. - In the Tiger Lake architecture, okay, think about it like we have one highway, okay, and that highway was getting shared with, like, the IPU and the VPU and the display.

So if there was a jam, then the other IPs, or the agent IPs, actually had to wait. In our case, the way I would say this is that Meteor Lake is a true SoC that actually has multiple highways for different agents. Meaning that, for example, if an IP wants to talk to the memory, you don't necessarily have to wait for others. You actually have your own channel to go to the memory. If you have display, which needs a higher prioritization, it will go to its memory link, you know, directly. - Is it a one-way street or is it a two-way street? - It's both ways.

- Both ways? - And within each way there are multiple data types that have dedicated resources. And then on the fly you can have prioritization (a toy sketch of that idea appears below). - It was very challenging, right? I mean, a lot of work, a lot of architecture innovation went in; the power management was revamped. Every IP had its own power management agent, which talked to the P-unit. - The intent had always been, "How can we be the most power efficient? How can we bring the most low-power integrated platform features, right, while not taking power from the CPU?"
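To make the "multiple highways with dedicated resources and on-the-fly prioritization" idea from a moment ago concrete, here is a toy model. It is not Intel's NOC design; the agents, queues, and priority ordering are made up for illustration.

from collections import deque

channels = {
    "display": deque(["frame_0", "frame_1"]),  # latency-sensitive agent
    "npu":     deque(["tensor_a"]),
    "media":   deque(["slice_0"]),
}
priority = ["display", "npu", "media"]  # hypothetical ordering, adjustable on the fly

def next_request():
    # Each agent keeps its own dedicated queue, so one agent's traffic never
    # sits behind another agent's backlog; priority only decides who is served
    # first at the shared memory port.
    for agent in priority:
        if channels[agent]:
            return agent, channels[agent].popleft()
    return None

while (req := next_request()) is not None:
    print(req)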

- We are giving a tremendous amount of flexibility to the IPs so that they can talk to memory directly. - So you can have a certain power-hungry, high-performance die on a really advanced technology, and basically you can mix and match to what that product eventually needs. - Strategically, what we figured out is that not all IPs in the entire Meteor Lake need to be on the latest cutting-edge process. There is an optimization that can be done across power, performance, and cost vectors, and that gives us the ability to split the die into multiple tiles and tailor each IP for the process node that's best suited for it.

- The thing we were looking for with Meteor Lake was not a patched solution; we were looking for a solution that is scalable. Meaning that today I have, let's say, three agent IPs, but tomorrow if I have, like, three more, will I have a design that can support this, or do I have to redesign it? Keeping that definition in mind, we have architected it in such a way that our fabric is scalable. Meaning that if you want to add, like, two more agent IPs, I don't have to re-spin this; I already have the mechanism to just do a plug and play and go. - Meteor Lake gave us a unique opportunity for some of the dies, like IOE in our case, right, to be able to start from scratch, right? New development, new infrastructure, new validation collateral, a new way of doing things.

So Meteor Lake was a combination of picking and choosing the right combination of, you know, which ones we want to keep, which ones we want to throw away, right? We needed to be intelligent about it: not just doing everything from scratch, but being more conscious and more strategic about, you know, which pieces we want to redo versus build from scratch and/or fix, things like that. - So when it comes to clock domains, you know, clocks can sometimes be finicky. How do you guys overcome that issue of jitter and clock trees and delays and... - For Foveros to go across die-to-die, we use a forwarded clock architecture.

- Forwarded? Okay, that makes sense. - It's a forwarded clock architecture: for those in our audience who understand what a forwarded clock architecture is, and for those who don't, I'll give you just a little primer. - Yes, please. - A little primer. And that is: the clock is sent along with the data across the die-to-die link.

- That high-frequency and design verification, clock design, right? Replication of clocks from one die to the other, handshaking of signals, right? And FIFO design and all that stuff, right, was very, very interesting. Very challenging. And one of the, I think, very rewarding elements of Meteor Lake for me at least, which I worked on. - What have been some of the greatest challenges you've run into? - In Meteor Lake? - Yeah, in Meteor Lake.

- Okay. So as we started the Meteor Lake journey, we got hit by this pandemic, okay? No one knew how we'd respond to this pandemic. While the pandemic was happening, we were going full speed ahead with the Meteor Lake execution. But guess what, at the end of the day, this team rose to the occasion and we delivered, and we delivered in style. - Many new collaborations, new teams coming together, and we had to figure out a way to work together as a team, collaborate, and just make it happen.

- Since I spent a lot of time in physical design, the scale of the challenge is just enormous. We had great collaboration with PESG, we had great collaboration with the industry, EDA vendors, you know, being able to work with them on a daily basis to solve, you know, the humongous challenges because you know, taking a leap of faith and improving productivity by 5x in a generation is extremely challenging. And when you accomplish it, it's extremely rewarding. - And I think it was a learning experience for everybody, right? The use cases were different.

Like the NPU, I mean, now it's all "AI, AI," but even now we don't understand all the use cases well, right? (scoffs) So three years back it was this thing called the NPU: does it really need to have this real estate devoted to it? What does it mean for traffic patterns? Who should get priority over what, you know, who gets the higher metal resources, if you will? We have an awesome team, and that, you know, really came through in Meteor Lake. - Meteor Lake is truly big. The disaggregation has changed and is going to change the way we are delivering. We've learned a lot in this.

So we've become more agile, we've become more adaptive. - I think Meteor Lake paves the way for the new architecture, right? A new foundation of how client SoCs are built, and it paves the way for us to be more agile, right? And hopefully our execution cycles in the backend will improve from here because of the whole architecture which we have. - I'm super proud of Meteor Lake, I really am. It was phenomenal to see this new revolutionary architecture and then all of Intel gathering around to work together to build it.

- Simplistically, I would say, "Hey, go test it, get it, buy it, and enjoy it, right?" (inspirational music) - Right now we're in the middle of our journey of five nodes in four years, right? When Pat came back two and a half years ago, he put in place this plan for technology leadership catch-up, and that's the five nodes in four years. So Step 1 was Intel 7, Step 2 is Intel 4, and that's where we are today. So if you look at Intel 4, you know, our goals were, first, area scaling. So if you look at our standard cell library and you compare it to the Intel 7 node, we're able to achieve 50% area scaling, or 2x, however you want to describe it (a quick arithmetic check follows).
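A quick sanity check of why "50% area scaling" and "2x" are the same statement, using normalized numbers rather than actual library dimensions:

intel7_cell_area = 1.0                        # normalized Intel 7 standard-cell area
intel4_cell_area = 0.5 * intel7_cell_area     # 50% area scaling
density_ratio = intel7_cell_area / intel4_cell_area
print(density_ratio)                          # 2.0: twice as many cells fit in the same footprint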

That area scaling was, you know, the continuation of Moore's Law. The second goal of Intel 4 was power efficiency, right? We wanna deliver a lot of performance, but it needs to be efficient performance. Some of that we get from the scaling, and a lot of that we get from the transistor improvements that we implement as well. And our goal really is to take the most advantage of power efficiency. When we drop a gate contact, you know, prior to 10 nanometer, you would never put it on top of a gate. It was always off the gate.

And then you'd use the conductivity of the gate metal to get to the transistor. Starting on 10, we dropped it on top of the gate, and because we don't have to drop the contact so far away, we can achieve better scaling. - Yeah, so it's like you're reducing your area, right? - Yes. - Because now you don't have to put it outside, it's directly on top, so it's less area, so... - Yeah, exactly.

It's, you know, a contact was sitting out here, the gate contact's sitting out here, and now I can drop it right on top of it, so I can chop the cell at the edge now. We also were able to pitch-scale both the fins and the gates. EUV lithography was a big part of that. EUV lithography really allowed us to reduce some of the complexity that had grown over the years because we didn't have lithography capability at the resolution that we needed. We've been able to reduce complexity, and reducing complexity allowed us to achieve better yields.

It's allowing us to run the fab faster. The metal stack is 18 metal layers: we have a metal zero layer, then we have metal 1 through 15, and then we have these giant metal distribution layers up at the top. So it's 18 tall. You're also challenged by electromigration, right? Electromigration is when you're putting, you know, voltage or power on a wire and sometimes that metal wants to move, (chuckles) you know? - Yeah. - And metal moving is not a good thing, right? - No.

- So you know, we developed what we call enhanced copper. - By implementing this new type of wire connection, you have also reduced the RC, or the impedance, on it, so making it- - Yes, yes, yes! Resistivity on those lower metal layers. If you compare the resistivity, or the conductivity improvements, that we have in metal zero and metal one compared to Intel 7, it's a significant improvement. Five nodes in four years.

So Intel 4 is step number two, and we're in production today. Step number three is Intel 3, and Intel 3 will build upon Intel 4. It'll offer denser libraries for scaling. It'll provide additional performance opportunities. Intel 3 is a node that we will use for a very long time for both internal and Intel foundry products. So it's pretty exciting.

Yeah, yeah. Step, you know, 2 done, 3 to go. Foveros is what we call 3D advanced packaging, and by 3D we mean we're stacking die on top of die, or silicon on top of other pieces of silicon. I think one thing where Foveros is kind of a first, versus some of the other 3D packaging in the industry, is that we do have the capability to stack two active die on top of each other. And we can also stack an active die on top of what we call a passive die. On Meteor Lake, for example, we have multiple top die and we put them on a single base die.

And that base die serves mainly to interconnect all of those die. So instead of building them as, you know, one large die, we break them into smaller functions and then we connect them with packaging. You know, traditionally we do what we call organic flip-chip packaging, where we take a monolithic die and put it on a substrate, you know, and then on, like, Raptor Lake and Alder Lake, going all the way back to Haswell, we put the platform controller hub, or PCH, on the package and we connected those together with what we call On-Package I/O. And for example, those connections between the die and the substrate are like a hundred microns apart.

We call that the pitch, right? When we go to Foveros, you know, those connections between the die and the wafer are at 36 microns. Since we're talking area, it's about an 8x increase in the number of interconnects that we can get per square millimeter (the arithmetic is sketched below). So, you know, you need that amount of signaling between the two die. Also, Foveros gives you, you know, the characteristics in the base die: it allows those signals to be carried at very, very high bandwidth and very low power. And as we said earlier, very low latency.
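A rough check of that interconnect-density figure: connections tile an area, so density scales with the square of the pitch ratio. The pitches are the ones quoted in the conversation.

organic_pitch_um = 100   # on-package I/O connection pitch cited for older packages
foveros_pitch_um = 36    # Foveros die-to-die pitch cited above
density_gain = (organic_pitch_um / foveros_pitch_um) ** 2
print(f"~{density_gain:.1f}x more connections per unit area")   # ~7.7x, i.e. roughly 8x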

So with the technology you get the electrical characteristics you want, and that allows you to, you know, then partition into many different tiles. - So, Foveros, if I understood correctly, and you can correct me if I'm wrong, is that we have this base wafer on which other tiles come on top of it. So, okay, tell us a little bit more about that, and how is it assembled? - Sure, that's exactly right. We actually start with all of them in wafer form.

We have a base wafer, and then we have all the top tiles come to us in wafers. Some come from internal, like our Compute die on Intel 4, and then the other die come to us from an external foundry. We bring them in, all the top die, and take them through what we call die prep, or singulation.

So we'll cut up the die, singulate the die, and then we'll send them to sort. - My role as a factory manager: I take care of the entire factory operation, managing all the operational indicators to ensure operational excellence. At the same time, I ensure that we are meeting the cost targets as well as that all new product introductions are released on time. Penang assembly and test is the backend process that assembles the die into the packages as well as tests the units before they are finally shipped to customers. - We actually improve what we call "known good die." So you get, you know, something like a 3 to 4% improvement in known good die going into the package.

So if you think about it, when you're putting three or four die together in a package, if you know that, say for example, they're 99% known good die because you tested them so well, versus 95% known good die at wafer sort, which is still really good, but you just didn't have the same coverage, you know, that multiplies out, and by the time you get to the final class test there's, you know, a 10% yield difference (sketched roughly below). After we sort them, or singulated-die test them, that's where we move to kind of the first step of the Foveros operations. This is what we call wafer assembly.
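A back-of-the-envelope sketch of why known-good-die (KGD) quality compounds across a multi-die package. The percentages are the illustrative figures from the conversation, not measured data.

def package_yield(die_yield, dies_per_package):
    # If every die must be good for the package to be good, the yields multiply.
    return die_yield ** dies_per_package

for kgd in (0.99, 0.95):
    for n in (3, 4):
        print(f"{kgd:.0%} KGD x {n} dies -> {package_yield(kgd, n):.1%} package yield")
# 99% KGD over four dies gives roughly 96%; 95% gives roughly 81%, the kind of
# double-digit gap described at final class test.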

Wafer assembly was, you know, the major new capability that we had to put in. Obviously, we have to have tools, equipment, and processes to take the top die and stack them onto the bottom die. So we do that. The bottom die, or the base die, is in wafer form when we do that, and the wafer assembly flow is kind of unique in that it's a combination of both what we call classic assembly processes and classic fab processes. - So particularly for Meteor Lake, which is a Foveros product, DMO, which is the advanced packaging fab, will be doing the wafer-level assembly in terms of connecting, kind of attaching the four dies on top of the base die. Then at ATM we'll receive the Foveros die and we will do assembly, which is die attach or chip attach: attaching the Foveros die to the substrate and the packages.

- We stack all the die onto the wafer and then take it through the flow. We then singulate, or chop up, that particular die into what we call the die stacks, or the Foveros stacks. After that, kind of circling back to that singulated die test, we do have the option to test them again. Since we are able to test at the die level, we are able to test at the die-stack level. You know, I think this is a big advantage; you know, sometimes there is a cost for adding a test step, so we have to use it judiciously. On Meteor Lake we've been using it through development and sampling to make sure we have a healthy process, and then we'll evaluate how we use it in HVM, or high-volume manufacturing.

But it's still good, as a quality monitor, to make sure that the line is running well before we send the die to package assembly. - Once we do the attach, we'll go through all the assembly processes, whether it is the epoxy as well as the stiffener attach. Then we will go to test to complete testing before going through the finish operation to do a final check and marking before shipping to customers. - I think the one thing about, you know, test, not necessarily just for Foveros: again, we have world-class backend test capabilities. All of our test platforms have been developed in-house over the past, you know, decade or longer. And we can do this in a very, you know, manufacturable way where we're not adding a lot of test cost and test time, but we're still able to achieve that world-class DPM, you know, and from there pack it and ship it. (both chuckling) So, and that's what we're doing now.

I mean, Meteor Lake, we finished all the qual samples, they went out the door this summer, production material is in the line, and the line's running pretty healthy. We're at our yield targets, you know, and on reliability and manufacturability targets, you know, we're there or really, really close. So yeah, we're ready.

- We are very proud to be part of this journey, to be able to enable the Meteor Lake Foveros packages and the Intel 4 node, right. So, yeah, we're all super proud to be able to be part of this journey. (inspirational music) - So the NPU, or neural processing unit, is completely new for Meteor Lake, and I am very eager to learn all about it, 'cause this is completely new for me, so please tell us a little bit more about it.

- Sure, yeah. I guess I can use this diagram here to explain it. Well, first off, just to describe the motivation: the reason we have this is, you know, we see the number of use cases for AI exploding, right? And a lot of these use cases we want to bring to the client, but these algorithms are actually running on the CPU, and you're kind of limited by the amount of, you know, efficient compute you have; you can actually make the algorithm better, but then it's gonna burn too much power. So when we bring in the NPU, it's basically kind of a power-efficient way to do AI.

So you can take those algorithms, you can actually improve them, they'll take more compute. We basically have, you know, power-efficient compute. So that's kind of the motivation to bring the NPU to the client. I mean, the CPU is good for, you know, very small workloads where, let's say, the complexity of the workload isn't very high and you can actually execute it really fast.

The GPU is good if you have, let's say, very large batch sizes; you know, GPUs are good at kind of large-batch-size type workloads, and maybe it doesn't run very long, so you're not worried about thermal constraints. The NPU is great for continuous workloads where you really want power efficiency. Also batch-size-one work that has relatively higher complexity, you know, that's really where it shines. Essentially this is the NPU section and then here's kind of the rest of the system. So we basically have kind of two components.

One is, let's say, the host interface and kind of the control for the NPU. And then the rest is kind of the compute portion. So the host interface portion communicates with the host, which does all the kind of scheduling and memory management. The host interface also controls the scheduling on the device, power management, those kinds of things. And then everything below here is kind of compute. Really the heart of the NPU is our fixed-function compute block.

We're calling it the Inference Pipeline. When you have a neural network, basically the majority of the compute kind of boils down to matrix multiplication. So we have what we're calling the MAC Array in the inference pipeline block that does all that, you know, matrix multiplication (a small sketch of that operation follows below). Actually, there are a lot of neural network operations. If you look at OpenVINO, for example, there are 160-plus kinds of operators we have to support. Not all of them are matrix multiplications or activation functions.
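To ground what the MAC array is accelerating, here is matrix multiplication written as explicit multiply-accumulate (MAC) operations. The code and shapes are illustrative only; they say nothing about the NPU's actual dataflow.

def matmul_mac(a, b):
    """C = A @ B expressed as explicit multiply-accumulates."""
    rows, inner, cols = len(a), len(a[0]), len(b[0])
    c = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0.0
            for k in range(inner):
                acc += a[i][k] * b[k][j]   # one MAC per iteration
            c[i][j] = acc
    return c

print(matmul_mac([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19.0, 22.0], [43.0, 50.0]]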

So we have a DSP. So this DSP is basically fully programmable. It can essentially support everything. But you know, if you did matrix multiplication on it, it's just not nearly as fast as our fixed function hardware here.

But if we have a lower compute operation or we have some operation that just doesn't happen very often, we would run it on the DSP. You know, the key things for neural network execution or neural network power are really how many times you read and write data and then also how efficient your kind of matrix multiplication is. So through this we can get a lot of, you know, data reuse.

We have kind of internal register files, let's say inside our MAC array, where we can get a lot of data reuse that can help reduce the power consumption. - Okay, so we have covered the hardware side, but what about the drivers and software side? - So there are kind of two factors in the software. The first is: what is the driver model for the device? And that basically means, you know, how do we do the power management, how do we do the memory management? How does the security work? So we have a driver model called MCDM, which is the Microsoft Compute Driver Model.

Then the other part is, you know, how does the developer program the device: what's the programming interface, what's the programming API? And there we've tried to have kind of a common API across the different hardware we support, whether it's CPU or NPU; you know, we're all supporting DirectML, WinML, OpenVINO, and ONNX Runtime (see the small sketch after this exchange). So we're trying to ease the developer experience. I mean, that's really the key for adoption. - For you as an architect, as a design engineer, how do you, per se, predict the future? Do you have a little crystal ball, (both chuckling) if you're trying to know what's coming up next? - Yeah, I mean, the exciting thing about the field is everything moves so fast.
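Circling back to the programming-API point for a moment, here is a minimal sketch of targeting the NPU through OpenVINO's Python API. It assumes a recent OpenVINO release with the NPU plugin and driver installed, and a hypothetical model file "model.xml"; it is not code from the interview.

import numpy as np
from openvino.runtime import Core

core = Core()
print(core.available_devices)    # e.g. ['CPU', 'GPU', 'NPU'] on a Meteor Lake system

model = core.read_model("model.xml")                     # hypothetical IR model
compiled = core.compile_model(model, device_name="NPU")  # ask for the NPU plugin

request = compiled.create_infer_request()
dummy = np.zeros(tuple(compiled.input(0).shape), dtype=np.float32)
results = request.infer({compiled.input(0): dummy})      # run one inference

Swapping "NPU" for "CPU" or "GPU" in compile_model is the whole point of the common-API approach described above.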

So, you know, and this is all incremental. I think we have the right kind of base architecture, so we're making kind of, you know, incremental tweaks. So let's say a new paper comes out and there's a new kind of network architecture: we'll take that and then we kind of analyze it. We do a simulation to see what our performance is, and then we look to see, you know, what's the bottleneck? You know, what's taking the most amount of time, or maybe are we not getting good enough efficiency? So then we go look to see, you know, should we add some new fixed-function hardware in the inference pipeline, or should we tweak, let's say, some of the way that we're processing data in the inference pipeline, or should we add new instructions in the SHAVE DSP? I mean, we immediately added a few new instructions based on new activation functions.

There were new activation functions coming out, so we're like, okay, well, we wanna make those faster. In our analysis we showed that was a bottleneck, so we added new, like, vector instructions to the DSP, for example. (inspirational music) - As you can see, Meteor Lake, from an architecture point of view and also from a product point of view, has a lot to offer. From new AI, new graphics, new process technology, to a completely new type of architecture where we've gone from monolithic to disaggregated, it brings a lot of new features. Thank you for watching with us, and please stay tuned for more videos that will be coming your way.

(inspirational music) (inspirational music continues) (Intel jingle rings)
