NETINT Technologies about why YouTube and Meta encode with ASICs


Host: Hello Dylan, thank you for joining me today. Can you please start by introducing yourself and your role at SemiAnalysis?

Dylan: I am the chief analyst at SemiAnalysis. SemiAnalysis is a semiconductor market, supply chain research, and consulting firm. We cover everything from manufacturing process technology through to design, IP, and ASICs, as well as strategy, all surrounding the semiconductor industry.

Host: Today we are going to discuss the development of ASIC products at large technology companies, and the driving factors behind that massive ASIC investment. Dylan, you came onto our radar when you wrote a well-known article about Google's Argos chip. Why did that technology intrigue you, and can you give us a brief summary of your findings?

Dylan: Google has created a few pieces of custom silicon, ranging from the Tensor Processing Unit for AI, which is very famous and everyone knows, to one of the lesser-known pieces of silicon they created for their own in-house needs: Argos. It's a completely new kind of application-specific integrated circuit called a video coding unit, or VCU. The main idea behind it is that Google has very unique needs, or at least a unique scale, with regard to how much video and how many photos they ingest and serve. They get video in all sorts of formats and resolutions from consumers, uploaded to properties like YouTube, Google Photos, and Google Drive, and they have to stream it back out to those users. You don't just store the raw file and send the raw file back, because one person wants the highest resolution while someone else has limited bandwidth or a limited data plan, so you have to store it in many different formats. At the same time, storing all this data, across the billions of hours of video on YouTube, would be incredibly expensive on a cost basis. Storage is expensive, streaming is expensive, networking is expensive. So they created a new ASIC, the Argos VCU, and its whole purpose is to encode video.

They encoded video before as well, but they did it with x86 Intel CPUs: Intel Skylake and, before that, prior generations of Intel CPUs. The problem is that CPUs are much less efficient, especially when you move to more advanced video codecs such as VP9 and AV1, which save you on storage and bandwidth but involve a lot more processing to encode the video. The CPUs start to fall behind on performance and you need so many more of them. It's actually a delta of millions of CPUs that Google would need if they encoded everything with CPUs alone, which is incredibly costly.

Host: There's also a point about innovation. It's not only about cost, it's also about how they serve customers with new experiences and new features. Stadia is already shutting down, of course, but without the VCU I believe they couldn't even have started it. That's the key point.

Dylan: Yes, exactly.

Host: You also gathered some very interesting data in your article comparing different solutions and the cost of each one. Could you give us a little more insight into those findings?
Host: There's a very interesting saying that software is eating the world, but for the media industry I think it's not only eating the world, it's eating the CPU too. Maybe you can talk a little more about that part.

Dylan: There's some interesting data we can point to about why the CPU is being eaten and what needs to be done. When you look at the throughput figures of a Google Argos VCU versus a Skylake CPU, the CPU is five times slower and uses significantly more power to encode VP9, which has been Google's video codec for the entirety of YouTube for many years. Using their quoted performance, even if you assume Google's servers run at 100% utilization all the time, which is very hard to do, no one gets 100% utilization, and if you assume all the video across YouTube, Google Photos, and Google Drive is 1080p at 30 fps encoded in H.264, an encoding technology that is a decade old or older, that's 900,000 Skylake CPUs. That's incredibly costly. Now, if you switch to VP9, which saves you a lot of capacity and bandwidth when streaming, all of a sudden you're at 4.2 million Skylake CPUs. When you consider that each Skylake server cost something like $15,000 to $40,000, it starts to add up to billions of dollars. And that's just at 1080p 30 fps. Most people's phones can shoot 4K at 60 fps, and a lot of people record at even higher resolutions. If you use 4K 60 fps as the assumption, then H.264 requires 7.2 million CPUs and VP9 requires 33.4 million CPUs. This is getting to the point where it's literally impossible to get that many. In 2022, about 30 million server CPUs shipped in total, so we're talking about the entire capacity of the whole world just for YouTube encoding. Not serving the video, not search, recommendations, comments, or likes; just encoding the video would require the entire world's capacity of CPUs. The situation is very dire, and that's why Google made their VCU.

As we look forward, video volume just continues to grow. More people are uploading video than ever before with the advent of short-form video in the form of TikTok, Instagram Reels, YouTube Shorts, and so on, and more people are live-streaming on platforms like Twitch. With all of this rising in popularity, storing it becomes even more costly, so you need to get the file size down. The industry has rallied around AV1, a new video codec that dramatically reduces file size while maintaining video quality. The issue is that it's very costly to compute. The numbers I mentioned, 7 million and 33 million, would more than double. It's hard to estimate precisely, because current-generation CPUs are far more efficient than the quite old Skylake, but even with the most current generation of CPUs, YouTube on AV1 would still require something on the order of 70 million servers. That's an incredible amount of compute, more than the world can even manufacture.
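Dylan's back-of-the-envelope CPU counts can be reproduced from the round figures quoted in the conversation. This is a sketch, not a benchmark: the 900,000-CPU baseline and the VP9 multiplier are the talk's round numbers, and the 8x factor for 4K60 simply follows from pixels per second.

```python
# Back-of-the-envelope CPU-count estimate using the round figures
# quoted in the conversation (illustrative assumptions, not benchmarks).

# Relative encoding cost per codec, normalized to H.264 = 1.0.
# The VP9 multiplier is implied by the 0.9M -> 4.2M jump quoted above.
CODEC_COST = {"h264": 1.0, "vp9": 4.2 / 0.9}

def servers_needed(baseline_servers: float, codec: str, pixel_factor: float = 1.0) -> float:
    """Scale a baseline server count by codec cost and pixel rate.

    baseline_servers: servers for the 1080p30 H.264 case (~0.9M in the talk).
    pixel_factor: 4K60 has ~8x the pixels per second of 1080p30.
    """
    return baseline_servers * CODEC_COST[codec] * pixel_factor

BASELINE = 0.9e6  # ~900,000 Skylake CPUs for 1080p30 H.264, per the talk

print(f"1080p30 VP9: {servers_needed(BASELINE, 'vp9'):.2e}")
print(f"4K60 H.264:  {servers_needed(BASELINE, 'h264', 8):.2e}")
print(f"4K60 VP9:    {servers_needed(BASELINE, 'vp9', 8):.2e}")
```

The 4K60 VP9 figure lands at roughly 33.6 million, matching the ~33.4 million Dylan cites to within rounding.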
Dylan: So if YouTube wants to move to VP9, which they've already done with Argos, or move to AV1 with the second generation of that chip, they have to use their in-house expertise to design custom silicon for the purpose.

Host: That's truly an amazing number, and not feasible at all, so they have to find new solutions. In fact, that's also what we hear every day from our customers: if they adopted the new codecs on their existing infrastructure, they couldn't run their business the way they want, or need, to. It's literally economically impossible. We've been talking about Google and YouTube; let me add one point about AV1 and new codecs. There's a famous statistic that YouTube receives 500 hours of video uploads per minute. That's from a few years ago, and it only covers newly ingested video, not the back catalog they already have. Once a new codec is adopted, every existing video needs to be re-encoded as well. Think about that.

Dylan: That's an important point. The numbers I was quoting cover only what's uploaded today, each year, but there are 15 years of history. You're going to want to keep that video and crunch down on that storage so you don't have to buy more, because storage isn't really improving much in cost. That's a great point.

Host: Exactly. And since you also mentioned Twitch earlier: last September you were highly critical of Twitch cutting the revenue split it pays producers, compared to YouTube. Could you provide more insight?

Dylan: Last September, Twitch made a very controversial move, if you will, by changing the revenue split with their partners. In late September or October, they cut the revenue split from 70/30 to 50/50.
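The back-catalog point above can be made concrete with rough math. The 500-hours-per-minute figure is the one quoted in the conversation; the average stored bitrate across all renditions and the codec-savings fraction are illustrative assumptions, not YouTube's actual numbers.

```python
# Rough back-catalog storage math from the figures quoted above.
# All numbers are illustrative assumptions, not YouTube's real totals.

HOURS_PER_MINUTE = 500   # uploads per wall-clock minute, per the talk
AVG_MBPS_STORED = 10     # assumption: average stored bitrate across renditions

def catalog_exabytes(years: float, codec_savings: float = 0.0) -> float:
    """Estimated exabytes stored for `years` of uploads.

    codec_savings: fractional bitrate reduction at equal quality
    (e.g. ~0.5 is often cited for AV1 vs H.264).
    """
    hours = HOURS_PER_MINUTE * 60 * 24 * 365 * years
    gb = hours * AVG_MBPS_STORED * 3600 / 8 / 1e3  # Mbit/s -> GB per hour
    return gb * (1 - codec_savings) / 1e9          # GB -> EB

print(f"15y catalog, legacy codec: {catalog_exabytes(15):.1f} EB")
print(f"15y catalog, re-encoded:   {catalog_exabytes(15, 0.5):.1f} EB")
```

Even with invented bitrates, the shape of the result shows why re-encoding 15 years of history is worth millions of CPUs' worth of compute: the savings are measured in exabytes.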
That's significantly less than YouTube, which has a 70/30 split, and the cuts targeted their larger content creators. Twitch was in a bad place: they have all this video uploaded to them and they have to distribute it, but they couldn't make enough money to support it. Why? Because their infrastructure is behind Google's. Google has superior infrastructure due to their Argos chip, which enables them to give content creators 70% of the revenue they generate rather than 50%. At the same time, YouTube also provides higher-quality video: higher resolutions, higher bitrates, HDR, and so on, even on live video. Twitch does not offer that, because they can't with their CPU architecture. Twitch needs to move to an ASIC, but they don't have the in-house design capability, whereas Google has been designing their own ASICs for this problem for a handful of years. That had some really big impacts on Twitch. Sure, they were able to cut the revenue split from 70/30 to 50/50, but some of their biggest content creators switched to streaming on YouTube, and while not all of their viewers followed, many did switch over to YouTube Live. So Amazon and Twitch faced a big financial problem: they had to either cut the revenue splits or lose streamers, and in the end they did both. Either way it was a lose-lose situation, because of their lack of ASICs and their inferior hardware and server infrastructure.

Host: And think about this: if they want to build more unique or new experiences for customers on that base, it would be very challenging with their current infrastructure, right? More interactive content, higher-quality video, new formats of service: all really challenging. So far we've mainly talked about x86 and ASICs, but in the industry there are other solutions, like GPUs and FPGAs. Do you have any insights about the other hardware approaches we should discuss here?

Dylan: There are some other hardware approaches out there in the industry. There are some Xilinx FPGAs that target this market a little bit, some Intel FPGAs as well, and there are GPUs from Nvidia, AMD, and now Intel too that are sold into this market. They all claim they can do video encoding, and yes, they do it a bit more efficiently than CPUs, but there are major limitations to integrating them into your infrastructure. There are difficulties with the software: you can't just drop them in and expect them to work right away, because your users send you all sorts of video, vertical or horizontal, different resolutions, different bitrates, different frame rates. These solutions are typically more stringent in what they can accept, or they take a lot of software work to handle such varied workloads and use cases. So when you look at a Xilinx FPGA or an Nvidia GPU, you might get better throughput than a CPU, but you still have a lot of software work. Furthermore, look at an Nvidia GPU: how much die area is dedicated to video encoding? Less than 10%. Most of the area is dedicated to other forms of compute: the general-purpose graphics and render pipeline, AI and ML. Something similar occurs with the FPGA, which has no area dedicated specifically to video encoding and is a less flexible architecture on top of that. What this ends up meaning is: yes, you get some improvements, and you may end up with a somewhat more efficient infrastructure in some capacities, but you give something away in software, and you're still not bending the cost curve by an order of magnitude. You're still spending a ton of money on encoding video. And furthermore, the cost of each GPU or FPGA is significantly higher than a CPU. Intel's average selling price for a data center CPU is somewhere in the $700 range, and AMD's is around $1,000; that's last year's data. Nvidia's GPUs are significantly more expensive, and Xilinx FPGAs, oh gosh, they're expensive: $10,000 is a more reasonable number for a high-end Nvidia GPU or Xilinx FPGA, not $1,000. So you get much better throughput per chip, but you end up paying more per chip, and you have this inflexibility. There are some problems on that front.

Host: Ultimately, when you evaluate hardware solutions for the video industry, you have to consider different factors, and cost per stream or cost per customer is extremely important. To my knowledge, the ASIC is the best way to drive that cost down. That's one thing. Another is that many people say video is interesting and attractive, but industry people sometimes also call video an ugly animal, because so many things can go wrong, especially with live content. It is a very focused area, and to improve quality, improve features, and serve customers better, you have to make video your first priority; otherwise it remains a side effort, not suitable for the demanding end of the industry. That's why I think focus is needed, and I don't see it in the GPU or FPGA companies. Video is still a small piece for them, and there's no focus.

Dylan: Yes, the lack of focus is important, because the main market for data center GPUs is not video, it's AI and machine learning. For FPGAs there isn't really a single main market, but video is certainly not anywhere near the top of that list.
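The cost-per-stream framing the host raises can be sketched with the rough unit prices quoted above. The streams-per-unit throughputs here are assumptions for illustration only, not vendor benchmarks, and the ASIC row uses hypothetical figures.

```python
# Illustrative cost-per-concurrent-stream comparison. Unit prices are the
# rough ASPs mentioned in the conversation; throughput figures and the
# ASIC row are assumptions, not measured vendor data.
from dataclasses import dataclass

@dataclass
class Encoder:
    name: str
    unit_price_usd: float    # rough ASP quoted in the talk
    streams_per_unit: float  # assumed concurrent live 1080p encodes

    @property
    def cost_per_stream(self) -> float:
        return self.unit_price_usd / self.streams_per_unit

options = [
    Encoder("x86 server CPU", 700, 2),          # assumption: ~2 live streams
    Encoder("high-end GPU/FPGA", 10_000, 20),   # assumption: ~20 streams
    Encoder("dedicated VPU ASIC", 1_500, 60),   # hypothetical ASIC figures
]

for e in sorted(options, key=lambda x: x.cost_per_stream):
    print(f"{e.name:20s} ${e.cost_per_stream:,.0f} per concurrent stream")
```

The point of the exercise is the shape of the comparison, not the exact dollars: a chip can cost more per unit yet far less per stream, which is the metric the host argues actually matters.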
When you look at what the vendors are adapting their next-generation FPGAs for, it's 5G signal processing, AI, or networking, not video encoding. So these products are going to make compromises. They're better than a general-purpose CPU, but as we discussed earlier, they still don't change the cost curve in a significant way.

Host: We use FPGAs a lot in our company, for design work. The FPGA is perfect for small-volume, very unique solutions where you need to adapt: not quickly, exactly, but you still have the flexibility to change the hardware structure, study new features, and run simulations. That's what it's good for. But once you want scale, or want it to make economic sense for real customers, it's not possible; that's not what it's for at all. So, Dylan, you're also deep in the changes happening in the silicon industry. What insight do you have about the different strategies companies are pursuing for custom, purpose-built silicon?

Dylan: In this industry, there are projects. In video encoding, Meta is known to have a project working on this; they're not anywhere near as far along as Google is with their Argos chip. ByteDance, the owner of TikTok, also has a project in this space. It's something of a black box; it's believed they don't have functional silicon yet, but we're not quite sure, and they're certainly working on it. And when you look around the industry at the many other major companies that aren't necessarily semiconductor companies, everyone is making their own chips: Apple, Google, Amazon, and Microsoft is working on some. All of these companies are working on it, but in the video encoding market only Google has really brought it to bear successfully. You would think Amazon, with some of the best custom silicon in the world, Graviton and the Nitro DPU, and a server infrastructure that is really efficient because of those products, would be there too. But in the video encoding world they haven't deployed anything that enables them, which is why Twitch still has such stringent limits that make it unattractive to some content creators and pushed some of them to YouTube. Twitch deletes your past broadcasts after a certain amount of time because Amazon can't afford to store them; they aren't encoded to high quality at a small file size. You can't stream at very high resolution because, again, they can't afford to encode it in real time at high quality and low cost, whereas YouTube can. Google has been very successful here, which has kept them the leader in most video content even today. They're even gaining share versus TikTok: over the last six months, TikTok has been effectively flat in terms of watch time, while YouTube Shorts, though still smaller, is growing significantly. Why? Yes, YouTube has a lot of users, but YouTube is also paying its content creators on Shorts, and paying them significantly more than TikTok does. And why is that? Because Google has more efficient infrastructure, and it all comes down to these custom-built chips Google is developing. Their infrastructure is simply more efficient, enabling them to do more with less. It's a strategy that has worked really well for them, one that others may want to emulate but haven't been able to yet.

Host: Very interesting. So what you're saying is that the efficiency of their infrastructure also enables a better business model in the layers above it.

Dylan: It feeds through. What a lot of people don't realize is that the business always depends on the infrastructure below it. This is why TikTok has almost no monetization for its creators; it's believed TikTok isn't even making much money at all, despite YouTube being a very profitable enterprise. Meta has said on earnings calls that Reels, their short-form video on Instagram and Facebook, doesn't make money yet either, which is significant. Facebook is working on a specific video-encoding ASIC, but they don't have it today. So between monetization and the cost of every video that's uploaded and served, they're carrying inefficiencies on both sides. Meta has specifically said they hope Reels will be profitable next year, and that coincidentally lines up with rumors about their ASIC being ready later this year, assuming the ASIC works properly on the first attempt. There's a real chance an ASIC doesn't work on the first attempt.
There are always problems in the semiconductor industry. So one could correlate their ASIC being ready later this year with their statement that Reels will be profitable next year, as direct evidence that their platform and profitability are gated by their lack of in-house semiconductor expertise and solutions.

Host: I cannot share the customer's name yet, but I can say this seems set to change this year. One of the biggest short-form-video social media companies is also adopting our solutions, and they have cut 80 percent of their operating costs as a result. So things will change. And talking about the big companies that can't afford to quickly change their infrastructure: is it the same for Twitter? It's really interesting; they don't have anything for this yet, but Elon Musk is talking about building everything into one app, and about 4K live streaming and HDR on Twitter as well. Do you think it makes sense for them to have an ASIC solution too? It seems obvious, right?

Dylan: This is an interesting one. If you look back at the history of short-form video, it wasn't TikTok that made it popular, it was Vine. Twitter bought Vine all those years ago and then shut it down. Why? Because, infrastructure-wise, it just wasn't profitable to serve that video. Now Elon Musk has bought Twitter, he's floated the idea of bringing back Vine, and in fact they've been testing a tab with short-form video. So the question is: what are they going to do for hardware infrastructure? They currently use a lot of on-premises infrastructure for Twitter, but the problem is that it's adapted for serving text as efficiently as possible. How are they going to move to video, which is orders of magnitude more data by volume? How are they going to solve that? They just bought the company, and silicon development timelines take multiple years to come to fruition. Even if they wanted to launch short-form video content, they may not be able to do it at any reasonable cost until three or four years from now if they develop their own in-house solution. So there needs to be a solution in the merchant market for them.

And furthermore, think about it: Meta, ByteDance, Amazon, and every other company serving a lot of short-form or long-form video, plus a few more in China like Tencent, plus Google, plus Twitter. That's five companies already serving tons of video today. If all five develop their own ASIC solution, that is at least $100 million of non-recurring engineering expense at each company, so $500 million across the industry. This is why the economics of the semiconductor industry matter. People always hype that everybody's going to make their own silicon. Yes, they'll make their own silicon where they have the volume to support it. But what if you don't have the volume on day one? Look across the industry: five players, $500 million. Divide each company's $100 million across the units they need. Maybe they only need a million units; that's $100 per unit spent on non-recurring engineering alone, before you even talk about the cost of the chip, the memory, the implementation, the software. It becomes too much for each individual company to build. This is why you need a merchant silicon solution, one that can say: we'll do that development once, and we'll actually develop it better, because we know your needs, and your needs, and your needs. We're uniquely suited to each company's needs because we communicate with all of them, so we're more flexible, with more robust software that can take more forms of video. Maybe today Meta's, Twitter's, or Tencent's goal is long-form video shot only horizontally at certain resolutions, but what if all of a sudden they want to do vertical video at a different resolution, or add a feature they didn't have before? With in-house silicon they'd have to go back to the beginning of the silicon timeline, implement it, and wait three years for the chip to come out before they could do it efficiently; otherwise they'd be doing it in a very expensive way. This is where flexible merchant silicon comes in: it takes that $500 million and brings it down, because we're the only ones spending it, and now we can sell a million units to you, and you, and you. All of a sudden it makes more economic sense.
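The amortization argument above is simple arithmetic, sketched here with the round numbers from the conversation. The merchant figure simply assumes the same million-unit volume at each of the five customers.

```python
# NRE amortization sketch using the round numbers from the conversation:
# ~$100M of non-recurring engineering per project, ~1M units per company.

def nre_per_unit(nre_usd: float, units: float) -> float:
    """Dollars of NRE baked into each chip shipped."""
    return nre_usd / units

# Five companies each funding their own project:
in_house = nre_per_unit(100e6, 1e6)      # $100/unit, as quoted in the talk
# One merchant vendor spreading a single project over all five customers:
merchant = nre_per_unit(100e6, 5 * 1e6)  # $20/unit under the same volumes

print(f"in-house: ${in_house:.0f}/unit, merchant: ${merchant:.0f}/unit")
```

And that is before the per-unit cost of the chip, memory, integration, and software, which the in-house route also duplicates five times over.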
company they face uh they have the needs right and they already discounts that but they always need to make the decision made worldwide right so right now it seems like it's a good solution right Google candidate for that but let's figure out so so uh so we talk about a lot of what have existing here and work for Dylan so you have seen so many companies so many Technologies uh Solutions right so from your perspective what's the next customer silicon that for the video processing from your perspective of course you know this is a very loaded question because of course there's only one company that's making a merchant silicon uh for for video A6 and that's that's negative but but you know as far as the custom silicon market right like okay fine I'm uploading video but you know and I'm encoding it into a new format but that's like one you know one use case uh but it turns out you know if I'm on YouTube um you know it's great because I can I can look at the captions so how am I generating these captions um it's great because you know I don't just let let them people upload videos of you know war or you know uh you know other other bad items right you know like uh like things that you wouldn't want to show kids um you don't just let people upload that stuff right they prevent that um and so how do they do that right a lot of this is AI algorithms right um you know how do you do captioning well you run you run a you run a a a voice to you know you know you run a model that that can can convert voice to text um and then you'll run maybe another model that converts that text and adds the correct punctuation and capitalization and so on and so forth um and then you'll also you know to make sure it's safe you know for for everyone or make sure people aren't uploading illegal content and sharing it on your website because now you're illegally at fault you know if they do that you you have to scan every single video well of course there's so much video no one can look at it so 
again you're utilizing AI uh you're and you're doing a detection like hey are there guns shooting in this video um you know are people dying is there a lot of blood here right you know are there you know illegal acts happening are there drugs you know all these sorts of things okay if those things are happening we'll review the video further um and maybe we can do a quick pass uh you know immediately as soon as it's uploaded and if it gets flagged then we can do a more in-depth review maybe it involves humans or maybe it just involves a larger AI uh model um you know but we can do that you know after later but you know every single video needs to be scanned it needs to be captured it needs to be hey what what's it what content is in the video uh you know sure they put search title up you know they put a title but there's a lot of other content right well what if someone says you know uh they want you know whatever the video you know they're they're looking for videos of of I don't know you know whatever it is they're looking for a video of it um maybe it's a machine maybe it's a uh maybe it's a tutorial on math but the title doesn't have that word but it still shows up in your search why because when the video is uploaded they're actually running an algorithm that pulls out metadata it sees oh what are the main topics of the video oh this is about you know a car and it's not just about a car it's about Toyota and it's about the fuel economy and it's about the reliability of the car and blah blah blah so now when I search reliable cars you know my my the video that says you know the review of Toyota uh Camry now shows up right uh that that's the beauty of you know YouTube and and some of these other video platforms you know making content discoverable or hey this this content you know pulls out all this metadata you know you were watching this video by the way you know why don't we suggest this video to you well how am I generating that metadata about every single 
These are operations I'm doing on every video, on top of the video encoding. Do I encode the video and then run these models on a CPU, or on a GPU? Do I have to do multiple passes? This is very costly, especially when you think about memory and networking costs: reading and writing the data multiple times. The innovations Google is adding in its next-generation Argos, and I'm sure the same is true of some of the other custom silicon projects from Meta and ByteDance, are things like adding some AI processing on the chip, plus a small amount of general-purpose CPU, so you can do some of these operations at the same time as you encode the video. You're not reading and writing data multiple times, you're not wasting money on networking; all of those costs are saved because everything runs on the video encoding ASIC. So now it's not just a video encoding ASIC, it's really a video processing ASIC. And video processing involves a lot more than encoding: detecting illegal content, deciding what advertising can run with the video, finding related topics, surfacing what's in the video that isn't in the title, generating captions. All of that also has to be processed, and that's what the next generation of video processing and encoding ASICs will do.

Yes, that's an interesting point. Our first-generation product was a video transcoder; the second generation we branded as the VPU, the Video Processing Unit. It really answers a lot of the questions and
interesting points you raised in that conversation. We already have many of those features, but it's up to the customer to find out how to use them. There are so many things we can do with the video: identifying the content, using the content, even working more broadly with the AI part. There's a very broad range of things we can do now.

What was the feedback from customers on your first generation? As you designed the second generation, what feedback made you decide to add all these features? It seems obvious today, but this was a few years ago, when you had this input and made the decision.

The first generation was in fact a partial solution. Compared to a CPU it's quite efficient, but compared to the ideal, the VPU still had room to improve. The customer feedback fell into a few areas. First, they wanted even higher density and higher performance, and they wanted new codecs: so we added AV1, and to answer the performance request we raised peak performance from 4Kp60 to 8Kp60. They also wanted more scaling features. For an ABR ladder they need one resolution in, split and scaled down to several output resolutions, so we added very powerful scalers. They also wanted to understand the content in the video, just as you said: to know what happens in it, to fully utilize its value, and to prevent some of those issues on the fly, so we added AI capability into the encoder. And there are parts not fully utilized yet, like the audio processing.
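The ABR-ladder scaling just mentioned (one resolution in, several out) can be sketched as a simple calculation. The rung heights below are a common illustrative ladder, not NETINT's actual defaults, and the function only computes target geometries; the real scaling happens in the VPU's hardware scalers.

```python
# Minimal sketch of an ABR (adaptive bitrate) ladder: one source
# resolution is scaled down to a ladder of renditions. Rung heights
# are illustrative, not a vendor default.

LADDER_HEIGHTS = [2160, 1440, 1080, 720, 480, 360]

def abr_ladder(src_w, src_h):
    """Return (width, height) renditions no larger than the source,
    preserving aspect ratio, with widths rounded to even values."""
    renditions = []
    for h in LADDER_HEIGHTS:
        if h > src_h:
            continue  # never upscale above the source
        w = round(src_w * h / src_h / 2) * 2  # even width for 4:2:0 chroma
        renditions.append((w, h))
    return renditions

print(abr_ladder(3840, 2160))
# each rung can then be encoded from a single decode pass on the device
```

A 4K source yields six rungs down to 640x360; a 720p source yields only the three rungs at or below its own height.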
We also have a relatively powerful 2D engine, and the DSP can do a lot of programming as well, giving more flexibility to add new features on the fly and serve new requirements. That's the feedback we got from the first generation and put into the second. We're now in the feasibility phase for the third generation, and we're open to ideas and suggestions. That's why I'm asking whether there's anything your customers are expecting to see in the future, so we can continuously improve. Video is a really focused area for us.

That's super interesting. Your first-generation solution encoded video, but it was fairly stringent in what it could encode: only certain target resolutions, which was fine at that stage. As you went to the second generation, customers demanded flexibility: yes, this is an ASIC, but give us flexibility. Furthermore, you needed to add functions like a bit of AI processing, so you could caption a video or detect illegal content, those sorts of things. So you added that in your second generation and you're improving it in your third. You're making the chip much more flexible: from one video you can output multiple formats, and you can ingest it in almost any format and put it out in almost any format. You added support for AV1, which is still not deployed heavily yet, but it's going to be; everyone's adopting it. Netflix has said they're going to adopt it, I believe YouTube has said they're going to adopt it, and a lot of other firms have said the same. There's AV1 decode support in a lot of devices: every new Intel CPU, every new AMD CPU, my
new Qualcomm phone I just got; they all support AV1 decode. But encoding is another story. Even where there is encoding support, on the newest Nvidia GPUs or Intel GPUs, it is very limited in what level it can support and how much throughput it delivers, because this is such an intensive operation, and the main purpose of those chips is not encoding. So you've added support for that as well, and you've raised the throughput, resolutions, and flexibility. Can you talk a bit about the software implementation side of this? What are some pain points? I've heard a lot of pain points in the industry about implementing ASICs into workflows, into a distributed system where video comes in from everywhere and goes out everywhere.

There are different layers to the software question, for a total video solution or for ASIC solutions generally. The first layer, which most people don't notice, is how the device works with the host system: different operating systems, different kernels. It's a really painful process for an ASIC, in fact for any hardware, to work with different operating systems. They keep upgrading; the kernel always has new versions. You can spend hundreds of man-months developing a driver, and then there's an upgrade to a new version and you have to redo it all over again. That's why, from the very beginning, we designed around a computational storage architecture: our VPU sits on top of the existing NVMe driver. Whatever the operating system or kernel, as long as it supports NVMe SSDs, it can support us, and adapting to a new operating system or kernel version takes us only a
few days of work, most of it for testing: we want to make sure everything is right. We have a server farm of a few hundred servers running 24x7 to make sure it's mature on each system. For us it's really easy; you can literally plug the card into the system, it gets recognized, and you start using it. That's the fundamental layer that's different compared to all the other solutions.

Pretty much everyone uses NVMe server SSDs, right? NVMe is the protocol for solid-state drives, the dominant one for at least the last five to ten years. So you're piggybacking off that infrastructure. You're saying: our ASIC is just an NVMe device; you write to us and you read from us, and it just so happens that when you write to us, you send the unencoded video, or video that's poorly encoded, and we output the properly encoded video. Almost every x86 and even Arm CPU supports NVMe. My laptop has NVMe. Actually, people don't know this, but the iPhone's NAND communicates with the SoC over NVMe. Of course you're not going to get a card into an iPhone, but NVMe is everywhere. It's an industry standard, it's in every host system, and that makes the ease of use a lot better.

Yes, we support x86, we support Arm servers, and we also support IBM's POWER series of CPUs, across Linux, Windows, macOS, and Android. Whatever you have, as long as it can take an NVMe SSD, it can support us. That's the easy part.
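The "it's just an NVMe device" contract described here can be modeled in a few lines. The class below is a fake that only simulates the host-side view (write source bytes in, read encoded bytes out); the real device, driver, and bitstream formats are NETINT's, and the bracketed codec tag is purely an illustration.

```python
# Conceptual model of the NVMe-based flow described above: the host
# writes source video to what looks like a storage device and reads
# back the encoded result. This class only simulates that contract.

class FakeNvmeTranscoder:
    """Stand-in for a transcoding ASIC exposed via the NVMe protocol."""

    def __init__(self, target_codec):
        self.target_codec = target_codec
        self._out = None

    def write(self, payload: bytes):
        # the device encodes whatever the host writes ...
        self._out = b"[" + self.target_codec.encode() + b"]" + payload[:8]

    def read(self) -> bytes:
        # ... and the host reads the encoded stream back
        return self._out

dev = FakeNvmeTranscoder("av1")
dev.write(b"raw-frames")
print(dev.read())
```

Because the host only ever sees standard NVMe reads and writes, any OS with an NVMe driver can use the card, which is the portability argument being made in the interview.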
At the software layer, we use the widely adopted open-source frameworks, FFmpeg and GStreamer, and we integrate with them fully and seamlessly. In fact, most customers use these frameworks for their video workflows, so whatever they've built in software using FFmpeg, GStreamer, or similar solutions can easily work with us: just recompile FFmpeg with our library, and when you run your code you change the encoder from the software one to ours, and that's it. You keep your current workflow as it is. It's just that easy. Of course, there are some challenges we can see with FFmpeg: it was designed for single-threaded use, not for very extensive, massively parallel processing. So the bigger players, like the top cloud companies you already mentioned, skip the framework layer and call our API directly; then they can truly utilize the full potential of the hardware. But that's the advanced way to use it.

FFmpeg is sort of the industry standard, right? If I wanted to encode some video, I'd use FFmpeg on my Windows machine. So this isn't only for applications like the cloud. It could also proliferate down to security applications, where I have dozens or hundreds of cameras in a building. Instead of having to encode them on so many CPUs, I could use maybe one or two NETINT ASICs and encode all of those videos at once. And it would work even on a Windows system, so you don't have to upgrade a lot of your infrastructure. You plug in an ASIC, and again, almost every CPU, even desktop CPUs going back a decade, supports NVMe, so it's not a big difficulty to get this up and running.
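The "change the encoder, keep the workflow" point above can be made concrete with an FFmpeg command line. The encoder name "h264_asic" below is a placeholder, not NETINT's actual FFmpeg encoder name, and the commands are only built here, not executed.

```python
# Illustrates swapping a software encoder for a hardware one in an
# otherwise unchanged FFmpeg invocation. "h264_asic" is a placeholder
# encoder name; commands are constructed but not run.

def ffmpeg_cmd(src, dst, encoder):
    return ["ffmpeg", "-i", src, "-c:v", encoder, dst]

software = ffmpeg_cmd("in.mp4", "out.mp4", "libx264")     # CPU encode
hardware = ffmpeg_cmd("in.mp4", "out.mp4", "h264_asic")   # placeholder ASIC encode

# everything except the encoder argument is identical
assert [a for a in software if a != "libx264"] == [a for a in hardware if a != "h264_asic"]
print(hardware)
```

This is why existing FFmpeg-based pipelines migrate so easily: the command shape, filters, and outputs stay the same, and only the `-c:v` argument changes once FFmpeg is built against the vendor library.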
Yes, exactly. That's also one of the use cases our customers are starting to adopt. Whatever cameras they have, old or new, they can aggregate the video streams to a server with our cards, transcode them, and analyze them at the same time. Then they can compress, understand, and store the video locally, or stream it out to a cloud data center for further analysis, or for first responders to watch online. That's a very typical use case.

Your second generation has some AI processing, and the third generation has even more. As you mentioned, you could take the video from the cameras and encode it, or you could also run inference: there's somebody on screen, or the scene has changed, so we'll store this video and not the rest; or somebody on screen looks suspicious, so we'll alert the authorities or our reviewer automatically. Rather than waiting and asking, oh no, what was stolen, let's look back at the video, we can preempt it.

Yes. And there are some features people haven't had the chance to use, or haven't realized the value of yet. We talked about the NVMe part: by designing it that way, we can also have a whole box of our cards connect to the system through NVMe over Fabrics. That creates a pool of resources that can be shared not just by one host but by the whole data center; hosts can access the remote resources as if they were local. That further improves the efficiency of resource utilization, which I think is a pretty big thing for the hyperscalers.
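The event-gated recording idea just described (run inference per segment, keep only segments where something happened) can be sketched as follows. The detect() function is a stub standing in for the on-chip AI model; the threshold and segment format are made up for illustration.

```python
# Sketch of event-gated recording: inference runs on each video
# segment, and only flagged segments are kept. detect() is a stub
# for an on-device person/motion detection model.

def detect(segment) -> bool:
    # stub: a real system runs a detection model on the segment
    return segment.get("motion", 0.0) > 0.5  # illustrative threshold

def segments_to_keep(segments):
    """Return indices of segments worth storing or alerting on."""
    return [i for i, seg in enumerate(segments) if detect(seg)]

stream = [{"motion": 0.1}, {"motion": 0.9}, {"motion": 0.0}, {"motion": 0.7}]
print(segments_to_keep(stream))  # only the flagged segments are stored
```

Running this gating on the same chip that encodes the stream is the efficiency argument made throughout the interview: the frames are decoded once, and both encode and inference consume them in place.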
And it's very valuable to the customer: you have the video encoding and decoding there, and you also have the AI part, shared at data-center scale. I think it's a hidden jewel that nobody has really had the chance to use yet.

So the flexibility is there, and there are some unique use cases, especially in retail and manufacturing, where you record a lot of video and want to take actions based on what's in it. And look at smart cities. We talked a lot earlier in the show about content as a consumer: what happens with YouTube, with Instagram Reels and TikTok, with Twitch and all these forms of video streaming. But we didn't talk about the use cases beyond that, in the smart city, in manufacturing, in industrial settings. Every traffic light could potentially have cameras on it in the future, and instead of encoding at every camera, you could stream it all and have maybe a single encoding chip serve a whole store or a whole block.

Yes, and there are other applications that most people may not notice, like virtual desktop infrastructure. Many people aren't using their own local machine anymore; they're using servers a few miles, or hundreds of miles, away. The video stream is streamed to your device, so you can work from a hotel, from home, or anywhere, with the full performance of a server in the data center. What do you need on your desk? Just a mouse, a keyboard, and a bigger display.

Okay, I think our time is up. Thank you, Dylan, it was great to have you here and to hear your valuable insights about the
industry. Thank you very much.

Thank you as well, Alex. We look forward to chatting with you again.

2023-02-21 15:37

