Tech Radar Volume 28 — Eastern Preview
Welcome to the sneak peek of the Thoughtworks Technology Radar. This is a preview session for the upcoming volume of the Thoughtworks Technology Radar, where some of our technology leaders will go through some of the interesting highlights from the upcoming radar.
And thanks so much for taking your time to attend this session. Let me quickly do an introduction of myself and the technology leaders that we have today. My name is Kumar. I work as an engineering director in Thoughtworks India. Birgitta, would you want to introduce yourself? Yeah. Hi, everybody.
My name is Birgitta Bockeler. I'm with Thoughtworks Berlin in Germany, and I'm a technical principal, which means that, in my case, I spend about 60% to 70% of my time with clients. And then I also have some internal responsibilities, for example, curating the Technology Radar.
Thank you, Birgitta. Over to you, Shangqi, for your introduction. Hi there.
I'm Shangqi. I'm based in Hong Kong, currently running the Hong Kong and Macau market unit here. And my previous role was head of blockchain in China. So basically, I'm still a very pure technologist, always happy to share my thoughts and experience on technology.
Thank you, Shangqi. So let me give you a quick introduction to the topic, the Technology Radar, before we move into the details of what we're going to talk about today. The Thoughtworks Technology Radar is, in fact, one of the pillars of the Thoughtworks technology strategy.
It is an artifact which we have produced over the last 14 years or so. I believe we are at the 28th volume at this point. We create two artifacts a year, two volumes of the Thoughtworks Technology Radar. And the way that the Radar is constructed is a very intense exercise.
It's a purely ground-up exercise. We have around 12,000-plus Thoughtworkers working from 18-plus regions, on a diverse set of technologies and a diverse set of problem statements with different clients all over the world. And the Technology Radar is essentially a distillation of all that experience, ground up, and it goes through several levels of iteration. Eventually, we come up with 100-odd technology items, which we refer to as blips, and that's what you get to see when the artifact is produced and out for people to read and consume. And we feel extremely passionate about producing this Radar, and the very experience of creating it is a very rewarding one.
And as you see, the individual items on the Thoughtworks Technology Radar are techniques or technologies. These are what we refer to as blips. And there are four rings in which we place these blips, based on our confidence levels in them. So the first ring is what we call the Adopt ring. For the blips that we have put in the Adopt ring, we have had good experience with them, and we strongly recommend that you use them. Then there is the Trial ring. We are super passionate about those as well. We have tried them out on some projects and have production experience with them. So we do recommend that, based on your risk appetite, you go ahead and try them out. We've had sufficiently good experience with them.
Yeah, the criterion for the Trial ring is always that we need to have seen it in production at least once, ideally maybe two or three times, in teams. So that's always one of the bars for a blip to go into Trial. Thank you, Birgitta, yes. That is a useful distinction. In fact, I was about to come to the Assess ring, wherein the [INAUDIBLE] tie breaker happens.
We are passionate about both of these rings. But if something has not been productionized yet-- it's emerging, and we see some promise-- we might put it in the Assess ring. Right. And finally, you have the Hold ring, which is where we suggest that you proceed with extreme caution. We have not had great experience with some of those techniques and technologies.
And that's the Hold ring. Anything else that you would like to add, Birgitta, Shangqi, on this particular thing? No, I think that's a good short intro. I just went through the blips that we're going to cover this time, and I just realized that we don't have any blip specifically in Adopt this time.
That's a surprise to me. In this webinar, yeah. I think we do have some Adopt blips in the Radar itself.
Oh, sure. Yep. All right.
A few comments on the logistics bit. So as I was mentioning, I would be your host. And essentially, Shangqi and Birgitta would be talking through some of the themes and the blips. I will be asking them-- or I would just be handing it off to them at the opportune times. We would also love to engage with all of you.
So Q&A-- you will see the Q&A button on Zoom, and that's where you can post your questions as and when we talk about particular blips or technologies. We will try to take those questions alongside, once Birgitta and Shangqi are done with their narrative, and answer them. So please, please use the Q&A option, which is where we'll consolidate the questions, and I will moderate them.
One thing to call out: we do have one hour. So just in case we get a flood of questions, we might defer some of them to the end of the session and try to take them at that point.
Chat is also an option, so you can continue engaging there. If you have some artifacts or anecdotes to share, please feel free to use the chat option as well. But for questions, kindly put them in the Q&A, so that we take notice of them and can answer them alongside. I think we are pretty good with this. So let me go to the next.
So we are going to discuss two themes initially, and then we'll talk about a bunch of blips. So the first theme is around accessible accessibility. It's a nice play on words. And Birgitta, would you like to shed some light on this for our audience? Yeah. So Shangqi and I picked a few blips as a preview for you, things that we find interesting. And the idea of a theme is that, usually after the discussions, we sometimes see, yeah, are there any commonalities between some things, a theme that is coming up? And so the Radar is always a snapshot in time of what we are currently discussing a lot in Thoughtworks, what we're using a lot.
And at the moment, there is a lot of activity in Thoughtworks around accessibility, and also a lot of passion around this. And it was interesting when-- the blips are all crowdsourced from people all around the world in Thoughtworks. And we had a lot of suggestions this time to put things in the accessibility space on here. And people are actually-- Thoughtworkers are actually quite sensitive also to not saying, oh, this is now a special thing, but actually saying, no, no, this has been around for a long time, and there have been tools that have been around for a long time, and this should be a cross-functional requirement like all the others. But there's a big push to raise even more awareness that, actually, building, making your applications more accessible is becoming easier and easier, and the barrier to do that is becoming lower and lower because the tools are supporting it so much better.
And that's this play on words, accessible accessibility. As a developer, as a designer, as a software delivery team as a whole, you don't have to be nervous about all of those standards and how you do this, how you support assistive technology, screen readers, and so on, because there are so many tools helping you in the software delivery process. And if you go to the next slide, Kumar, there's a first example of one early on in the process.
Accessibility annotations in design. So there are multiple plugins you can use for design tools like Figma, where the team can, already in the process of creating a design document, tag and annotate things, so that you don't have to discuss them anymore when you're doing the development. Things like, in which order do you want items to be accessed by assistive technology-- for example, for tabs-- or in which order should the screen reader read the text, and stuff like that.
So you can already put them in there, so they're not an afterthought later, where you then have to go back and forth again while you're already building the story. And then later in the delivery process-- if you go to the next slide, Kumar-- tools like axe Linter. Some of you in the frontend space may be aware of axe. It's a huge suite of accessibility tools by a company called Deque, I think. And they have a lot of these really cool tools to help you here.
And axe Linter is a VS Code extension that lints your code while you're writing it. So while you're writing the code, it can already give you pointers towards more accessible code. And then the next one, also during the coding phase-- this is a bit of a mouthful: accessibility-aware component test design.
So this is basically the idea that, when you write your component tests for a web component, the way you approach writing those tests can already build in some principles that automatically give you, again, more awareness of building in accessibility. For example, usually in these component tests, you have to have some way to look up the elements that you want to test-- to look up a button and test whether it's active or not active, and stuff like that. And if you have a principle in your team that you look up those elements via things like semantic tags, or ARIA roles and groups, and stuff like that--
then you already make sure that you have to have those things in there. So you're trying to do the lookup the same way that assistive technology would do it. Another principle would be to always think, OK, if I have a click interaction somewhere, should I also have a keyboard interaction, or some other kind of interaction, for somebody who doesn't have a mouse or who cannot see the screen? And then finally, intelligent guided accessibility tests. Again, this is at a later stage in the process.
So once you actually have a testable version of your site, there are browser extensions that help you do what they call guided tests. So it's not a fully automated test; it asks you questions as a developer, or as a quality analyst, a tester. It walks you through, asking you questions about your images, your headings, the things that should be read by a screen reader or not. And through those questions, you don't actually have to know that much about accessibility in the first place, because these tests are guiding you through them.
And then you can also record that, and then later see again if you've improved. And of course, by going through these tests, you also learn more about it, and maybe even next time you can already take those things into account early in the process. Yeah, so those were, I think, four of the five blips from the accessibility space that we will have on the radar this time. Thank you, Birgitta.
And it's a very relevant theme. And unsurprisingly, we do have five blips around this very important cross-cutting concern. So before we go to the next theme, I just want to make sure-- yeah, there was someone who had raised their hand.
This is a meta question on the Technology Radar itself. Maybe it's OK to answer this now. Can you brief us-- what is the differentiating factor for categorizing a technology into the Assess ring versus the Trial ring? Yeah, so let's take that question first. And I believe we just answered that some time back.
Basically, the production deployment aspect is what differentiates blips between the Assess and Trial rings. Trial is where we have had some experience deploying those technologies into production, whereas Assess is something we are still excited about, but we haven't yet had a reasonable backing of production deployments to warrant our confidence in them.
The other quest-- I think, for example, for these accessibility blips that I just talked about, I think three out of the four were in assess. But I think they are very close to trial. I think we just weren't quite sure-- I think all of them are actually used. So I'm actually wondering about that as well. But assess can also be more obscure things, where we're actually not sure how it's going to turn out. If it's a good idea or a bad idea, then we usually write that in the text, so on the website, what kind of assess it is, yeah.
Thank you. The follow-up question is: what is the significance of representing a few blips with a full circle and a few with a half circle? The full circle is when it's a new blip, and the half circle is when we've moved it-- so it moved across the rings, and the half circle shows the direction, I think. We don't always have all of the 100 blips totally new. Sometimes they show up again from the last edition, or we move them from Assess to Trial. We've also moved things from Assess to Hold, or from Trial to Hold.
So that's when the blip was on before, but we moved it. Thank you, Birgitta. All right. So let's go to the next theme. And Shangqi, would you like to take it over from here? Yeah, sure.
So I would say the elephant in the room during our Radar discussion was clearly ChatGPT, because, [INAUDIBLE] as a group of senior technologists, we tend to ignore the hype and noise in the market. For example, we prefer to advise more specifically on machine learning and data engineering rather than just talking about the very general buzzword AI. However, this time, the change brought by ChatGPT and large language models is definitely much closer to artificial intelligence than ever before.
So in this issue of the Radar, we did introduce a lot of AI-related blips, such as GitHub Copilot and Prompt Engineering. Actually, our chief scientist Martin Fowler recently published an article coauthored with my colleague Xu Hao, the head of tech in China, on how we can use ChatGPT to do AI-assisted test-driven development. In this webinar, I would like to introduce more blips about how to get large language models into production. This space moves very fast, and there is a lot of innovation happening. And to be honest, our team has been very busy assessing such new technologies. So if we go to the next page, Kumar.
The first blip I want to talk about is domain-specific large language models. Although we have featured large language models like BERT and ERNIE in the Radar before, and we even tried to include an open [INAUDIBLE] last issue, but we [INAUDIBLE]. I believe, for most people, large language models really became known to the public through ChatGPT and the OpenAI GPT-3.5 model behind it. Fine-tuning general-purpose large language models with domain-specific data can tailor them to various tasks, including information retrieval, customer service augmentation, and content generation. We have seen many use cases like GitHub Copilot for software development, or [INAUDIBLE] for image narration, you name it.
We have seen more promising use cases that target specific industry applications, like finance and law. In this Radar, we explicitly mention legal document analysis. However, there are challenges and pitfalls to consider. First of all, it's very well known that large language models like ChatGPT can be confidently wrong.
And also, using third-party large language model APIs like the ChatGPT API may come with token limits. And using [INAUDIBLE] large language models may risk your data being retained and reused. So that's why we need to assess how to self-host large language models. So we can turn to the next page. So yes, self-hosting and training a large language model is very expensive. It requires significant GPU infrastructure to operate, and it used to be [INAUDIBLE] reserved for a few tech giants like Google, Microsoft, and Meta.
But now some open source options make it possible. For example, we listed one library called LLaMA.cpp, which is a port of Meta's LLaMA model, demonstrating how to run large language models on different devices, including your laptop, commodity servers, and even Raspberry Pis. There are also other open source examples, like GPT-J and GPT-JT in the Hugging Face community, that show how to run these on your local laptop.
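To give a feel for what "running a model on your laptop" looks like in practice, here is a minimal sketch of driving the llama.cpp CLI from Python. The binary name (`./main`) and the `-m`/`-p`/`-n` flags match the 2023-era llama.cpp build, and the model path is a made-up example; check `--help` on your own build before relying on them.

```python
import subprocess

def llama_cpp_cmd(model_path: str, prompt: str, n_predict: int = 128) -> list[str]:
    """Build a llama.cpp CLI invocation.

    Flag names (-m, -p, -n) are from the 2023-era llama.cpp `main` binary;
    they may differ in your build.
    """
    return [
        "./main",
        "-m", model_path,      # path to the quantized model weights
        "-p", prompt,          # the prompt to complete
        "-n", str(n_predict),  # number of tokens to generate
    ]

cmd = llama_cpp_cmd("models/7B/ggml-model-q4_0.bin", "Q: What is a blip?\nA:")
print(cmd)
# To actually run it (requires the compiled binary and downloaded weights):
# subprocess.run(cmd, check=True)
```

The point is only that the whole inference stack is a single local process plus a weights file, which is what makes the Raspberry Pi demos possible.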
And we also know that self-hosting a large language model is still quite expensive. And we are not naive enough to expect that a model we train ourselves could rival ChatGPT for general purposes. However, for some of our clients-- really large organizations-- if you do want better control, fine-tuning for a specific use case, and you want to improve security and privacy as well as offline access, I do think self-hosted models are an option. So if we go to the next page. Furthermore, using large language models in isolation may not be enough. We may need to think about how to combine them with our differentiated assets to build an impactful product.
So LangChain can meet this need. LangChain is a framework for building applications with LLMs. Imagine you are an application developer and you want to develop a product based on a large language model. You may want to choose an existing model rather than train your own. LangChain provides an interface to integrate different models-- OpenAI's GPT-3 and GPT-3.5 models, as well as other open source models on Hugging Face.
You may also need to manage your own prompts, and LangChain provides prompt templates for that. And you may need to chain prompts into different tasks for processing; LangChain provides a rich set of agents to determine which actions to take and in which order. So if you want to build your own applications on top of large language models, I recommend assessing this framework, LangChain.
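To make the template/model/chain idea concrete, here is a small offline sketch of the pattern LangChain packages up. The class names mirror LangChain's concepts (a prompt template, a chain around a swappable model), but `FakeLLM` and all of this code are stand-ins of mine so the example runs without any API key-- this is not LangChain's actual API.

```python
class FakeLLM:
    """Stand-in model: echoes the prompt so the flow is visible."""
    def complete(self, prompt: str) -> str:
        return f"[model answer to: {prompt!r}]"

class PromptTemplate:
    """A reusable prompt with named slots, like LangChain's templates."""
    def __init__(self, template: str):
        self.template = template
    def format(self, **kwargs: str) -> str:
        return self.template.format(**kwargs)

class Chain:
    """Pipe a formatted prompt into whichever model you configured."""
    def __init__(self, llm, prompt: PromptTemplate):
        self.llm, self.prompt = llm, prompt
    def run(self, **kwargs: str) -> str:
        return self.llm.complete(self.prompt.format(**kwargs))

template = PromptTemplate("Summarize our policy on {topic} in one sentence.")
chain = Chain(FakeLLM(), template)
print(chain.run(topic="data retention"))
```

The design point is the seam between `Chain` and the model: swapping `FakeLLM` for an OpenAI-backed or Hugging Face-backed implementation changes nothing else in the application.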
So if you don't want to build your own products, and you just want to learn and understand how GPT works, I recommend taking a look at nanoGPT, which is on the next page. The author of nanoGPT is a former AI director at Tesla. He referenced the two famous papers in this space-- "Attention Is All You Need" and OpenAI's GPT-3 paper-- to build GPT from scratch using PyTorch 2.0. The architecture of nanoGPT is quite clear. Each file is less than 1,000 lines of code, and the comments are very clear.
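The heart of those papers, and of nanoGPT's model file, is scaled dot-product attention. As a rough illustration (plain Python here, rather than the PyTorch that nanoGPT actually uses): each query vector scores every key, the scores become softmax weights, and the output is a weighted average of the value vectors.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        # Similarity of this query with every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

Q = [[1.0, 0.0]]                      # one query, aligned with the first key
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))             # leans toward the first value vector
```

Everything else in a GPT-style model (multiple heads, causal masking, stacked blocks) is layered around this one operation, which is why such a small code base can cover the whole architecture.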
So it's really a great [INAUDIBLE] to study. There are actually more blips around this AI space in this issue of the Radar, and you can see more details once the Radar is released. I will hand it over to Birgitta [INAUDIBLE] we can share more blips that we find interesting. Sorry-- Shangqi, just a question before we move on to the other blips with Birgitta.
So there's one question around LLMs: are there any free and open source LLMs for us to use? Short answer, yes. For example, although GPT-4 and GPT-3.5 from OpenAI are not open source, you can still try to use OpenAI to [INAUDIBLE], to build your own models.
And also, as I mentioned, there are alternatives like LLaMA and GPT-J, and there are a lot of alternatives in the Hugging Face community. You can basically find them there. And I posted-- I posted the ones that will be in the blip text once we publish the Radar.
There will be a few links in there, and I posted them in the chat. Thank you, Birgitta. So that is indeed a very fascinating theme, and a lot of interesting blips coming out there on large language models and [INAUDIBLE] and using nanoGPT. Any other questions? Let me quickly take a scan.
Yes, there was one question on the reference to papers. Yes, I believe that is answered live. Sorry.
So I think we are pretty much done with the Q&A here. Just a reminder for folks: please do continue using the Q&A section, where you can post your questions, and we'll try to take them alongside as we discuss blips. And now that we have discussed the elephant in the room, we can move to the individual blips. So-- yes.
So we have Passkeys at this point of time. Birgitta, is that for you? Yeah. So the first sentence of the blip text is, the end of passwords might be near, finally. [LAUGHTER] So some of you might have already heard about it. I think it was also a topic at some of the big mobile developer conferences last year.
So Passkeys is basically, yeah, this potential future of passwords, where you don't have passwords anymore, but keys. And of course, the concern that always immediately pops into my head is, oh, but how can a layperson, a non-tech-savvy person, deal with keys? And this is all about that. And it was driven by an alliance called FIDO.
FIDO stands for Fast Identity Online, and this is backed mainly by Google, Microsoft, and Apple. So it's nice to see those three come together with their experience to drive something like this forward. FIDO is actually driving two different standards for this new approach to authentication. And the idea is that-- let's say in the case of a browser login-- your browser has a bunch of APIs that are, I think, even today supported by most browsers. With these APIs, you can give the browser responsibility for dealing with the server, with the authentication, using keys from your device. And then there's an extra factor, of course, with you as a human-- what they call a gesture, I think-- or also biometrics, like Face ID on your Apple device, or a fingerprint.
And then, in combination, all of those things do the authentication. And it's a lot safer as well. Phishing cannot be done with this anymore. If somebody tries to capture an authentication on a fake website, it cannot actually be reused for the real website. You don't actually send the key to the server.
It actually stays on your device. So there are all kinds of security improvements there. And the usability also sounds quite promising. People often compare it to the way it works for a lot of people who today use the Apple keychain: you have your passwords in there, and you just use your biometric authentication with the device.
I don't quite understand all of the implications yet personally, in terms of, what about backups, what if I lose my device? But I'm sure there are ideas for that as well. And it's, of course, not just your phone that can do this; it also works with hardware keys. And I think password managers today can also manage this for you. So yeah, it's quite interesting. And there's a website, passkeys.io,
where you can play around with this, and there are a bunch of good videos explaining the technical background. And I think, especially if you work in mobile development, as I said, you've probably already heard about it, but maybe start experimenting with it. So it's not quite there yet, but it's getting a lot closer, and it actually does work now. So yeah, it's very interesting for me to see. Awesome.
It seems to be hitting a sweet spot between security and usability. I think this is definitely a good step in that direction. We do have a question, Birgitta, from Jan.
And Jan asks: how long until we see Passkeys implemented in real life on different websites and portals, in your opinion? So please speculate. Yeah. I actually don't know enough about the space to make a good judgment on that, but I'm guessing it won't be really widespread soon. And actually, Shangqi is going to talk about a blip next that is about still implementing login with passwords and all of that. And I would expect that to still go on in parallel for a long time, because not every user will have support for this.
So I was about to say rest in peace, passwords, but we're not quite there yet. So perhaps we can move on to the next blip, which-- yes, is the perfect follow-up to the previous blip, [INAUDIBLE]. Shangqi, all yours.
Yes. This is actually another security-related blip I would like to introduce. First of all, I would say we have reinvented the identity management system so many times.
A lot of developers want to try to learn the different standards, or to build two-factor authentication and registration features themselves, for learning. We have seen so many efforts to do this. And most of the time, we saw very poor implementations with very obvious security issues. That's why we recommend [INAUDIBLE] this time. And first of all, I would say our team is quite passionate about Ory, because they provide a bunch of open source products focused on identity and digital authentication, this space. For this specific blip, Ory Kratos, they call themselves a next-generation identity management system-- just think about Okta.
Compared to that, Kratos is more developer friendly, and you can customize it. It is basically an API-first identity and user management system, and it is easy to customize. It already provides a lot of the common features that we want in an identity management system, such as self-service login, registration, multi-factor authentication, two-factor authentication, account verification through email or SMS, and account recovery.
And to mention, Kratos is headless-- it's API only. So you need to build the UI yourself. But that also gives you a lot of flexibility to integrate it with your system.
Because most of the time, we are not trying to build yet another Okta. We just want to build some [INAUDIBLE] application that requires a user identity management system. So I would say our teams have a lot of good, positive feedback on this. And I hope to have more experience with it.
And maybe next time, we can move it to Trial on the Radar. But if you really need to build a user management or identity management system on your own, I highly recommend giving Kratos a try. So that's basically this blip.
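As a sketch of what "API-first" means here: a non-browser client logs in against Kratos in two steps-- create a login flow, then submit credentials to the flow's action URL. The endpoint path and payload fields follow the Ory Kratos self-service docs as best I recall them, and the base URL is Kratos's default public port; verify both against your Kratos version before relying on this.

```python
import json
import urllib.request

KRATOS = "http://localhost:4433"  # Kratos's default public API port; an assumption

def login_flow_url(base: str) -> str:
    # Step 1: create a login flow for a non-browser (API) client.
    return f"{base}/self-service/login/api"

def password_payload(identifier: str, password: str) -> dict:
    # Step 2: the body submitted to the flow's action URL.
    return {"method": "password", "identifier": identifier, "password": password}

def login(base: str, identifier: str, password: str) -> dict:
    with urllib.request.urlopen(login_flow_url(base)) as resp:
        flow = json.load(resp)
    body = json.dumps(password_payload(identifier, password)).encode()
    req = urllib.request.Request(
        flow["ui"]["action"], data=body,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # contains a session token on success

# Requires a running Kratos instance:
# session = login(KRATOS, "user@example.com", "correct horse battery staple")
```

Your own login screen, whatever its look and feel, just drives these two calls; Kratos owns the flow state, credential checks, and session issuance.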
And usually you want to build the login screen yourself, right? Because you need to adjust it to your look and feel and all of that, right? And it comes with a simple UI as well. And I would expect, actually-- the team behind Ory, some people in Thoughtworks actually know them and have worked with them-- I would also expect them to be on top of including things like passkeys. Nice. So that seems to be a cheaper alternative to, say, [INAUDIBLE] or Okta. And it looks like a headless offering where you can customize a lot of your UI. Very promising.
And we cannot count the number of projects where we have seen authentication built over and over again. And this seems to be a good tool. All right.
So let's move on to the next blip. We do not have any questions as of now. Just, again, to call out: please feel free to use the Q&A box. So coming to a platform.
So Spin as a platform. Birgitta? Yeah, so I chose this blip because I found it to be a representation of something. Because this is about WebAssembly, and I'm always intrigued by WebAssembly. Every other Radar edition, some blip comes up that mentions WebAssembly. And I'm always like, oh.
It's creeping into more and more areas. So Spin is basically a framework that helps you build services or microservices in WebAssembly. And I mean, as the name says, WebAssembly, we usually know it or it came up as a use case in the context of browsers, right? So that you have-- when you need more native performance in the browser, for example, for 3D games or stuff like that, that you can actually, in the browser runtime, you can run things faster and more natively.
And so WebAssembly is basically a very low-level, Assembly-like, as people say, language-- or a compilation target-- that can then run in that environment. And you can actually use that compilation target with multiple languages as well: with Rust, with Go, and with Kotlin now, I think, starting to be supported. So you have the flexibility of using different languages. And then you can run that in the browser, right? But then, I think a few editions ago, Shangqi, right, we had this Kafka alternative in Assess on the Radar, Redpanda. I don't know if we have actually gained production experience with it in the meantime. But this event bus, Redpanda, was supporting WebAssembly as an environment to run your scripts on the bus. Because they said, OK, with Kafka, people need to know things that run on the JVM.
But if you have different language skills, maybe it's easier to use different languages, so we support WebAssembly now. And I was like, oh, that's weird-- things popping up in infrastructure. And now Spin is coming from a company that-- let me just look up the big term, again, that they're using-- is talking about cloud-side WebAssembly.
So they're basically saying, OK, we used to have VMs and then containers are more lightweight. And then, using WebAssembly, it's even more lightweight. It's even smaller binary. It's even better cross architecture support.
And it has even faster load times. So they're also pitching this in the context of functions as a service-- serverless functions-- because of this very fast load time. And so, yeah, I find this really interesting. This company, Fermyon, has also built an application platform using Nomad, where you can then deploy those WebAssembly binaries.
So this is an assess, so something that we're watching and monitoring and find interesting but we don't quite know where it's going to go. Thank you, Birgitta. So microservices in WebAssembly seems to be very interesting.
And there's a question related to the previous theme, perhaps, which Shangqi was talking about-- the practical AI bit. Jan asks: coming back to AI, do you have experience in using GitHub Copilot in day-to-day development? Which ring would it fall into? Yeah, I would say, in some ways, we had this debate. Because with the previous version of GitHub Copilot, we did have privacy concerns, because most of the time we work in project code that we deliver to the client. So we don't want to-- although GitHub Copilot claims that the inference happens only on your machine
and they won't reuse and reshare the code data for training, we still had that concern. Until very recently-- I remember it was last December-- they announced this commercial version of GitHub Copilot for [INAUDIBLE], and they released it in February. So we got real experience using it on some of our projects, making sure that our clients' IP is protected. So I would say you can still use it lightly, as an enhanced Stack Overflow-- basically to write some code, or even to write a comment to generate code, and then to judge and [INAUDIBLE].
And I remember people saying that GitHub Copilot doesn't generate new things-- it's just reminding you of things that you already know. And as I mentioned, one of our colleagues in China, Xu Hao, has a lot of experience not just with GitHub Copilot, but also with using ChatGPT in chat mode, interactive mode, to do software development. And he has a lot of thoughts on how to use chain of thought and prompt optimization to better improve software development.
And so definitely, there is a lot of things happening here. So maybe once the Radar releases, we can definitely find more direct link in the text of Radar file. Yeah. So Copilot is going to be assessed in this edition. But it's already been proposed by Thoughtworkers in the last two editions. And I think we are still cautious and not putting it on.
And just from personal experience-- leaving aside concerns about how the model was trained, with potentially GPL-licensed code and all of that inside, and leaving the code privacy concerns aside-- I think it's basically autocomplete on exponential steroids, right? And in the little prototype code base that I've tried it with, I've been really impressed by it, actually. And it's really strange, because you don't know how it actually works, right? It does things where I'm like, how can it do that? And I roughly understand how these models work. So it's really fascinating. I think it's definitely a productivity booster, even if it just works for smaller units.
But it saves you trips to Stack Overflow and stuff like that, right? So I think it can definitely be a big productivity booster. Will not replace coders, though, in my opinion. [LAUGHTER] Yeah, coders can perhaps do some higher order thinking and maybe something else. So just one other question before we move on to the next blip. Umesh asks-- Birgitta, maybe you could take this-- WebAssembly-based implementation, is it used in production by Thoughtworks? Any specific scenario where it is preferred? I don't know, Shangqi, if you do-- I personally don't know of WebAssembly in production from the group. I actually couldn't tell you.
So right now, Spin, as I said, was in Assess, right? So we've definitely not seen it in production. I'm pretty sure that we, probably, at some point, have done work with WebAssembly in the browser. Server-side WebAssembly, I'm not sure. Yeah. Honestly, I don't remember any specific use case directly using WebAssembly for developing applications. But our teams are quite passionate about using tools like Spin and others.
For example, in the blockchain space, we do have [INAUDIBLE], which is used to build smart contracts on top of a WebAssembly-based, Ethereum-flavored virtual machine. People find the package highly efficient and easier to transfer workloads to. But for the direct use of WebAssembly in applications, I don't think we have experience. Thank you.
All right. So we'll come back to the other two questions towards the end of the session. Let me move to the next blip, which is Dapr.
And that's for Shangqi. Yeah, Dapr is another microservices one, I guess. Dapr is actually short for Distributed Application Runtime, and it aims to help developers build resilient, stable microservices that run in the cloud. And actually, if you search and look at its architecture, it's quite similar to a service mesh, because they both use the sidecar architecture, running a separate process alongside the application. However, Dapr is more application oriented, and more focused on how to encapsulate the fault tolerance and connectivity required for building distributed applications. Although there are a lot of overlapping features with service mesh solutions, like Istio and other service meshes.
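The sidecar split Shangqi describes can be illustrated with Dapr's state-management building block. The HTTP paths in the comments are Dapr's documented state endpoints; the in-memory struct is a stand-in so the sketch runs without a sidecar installed:

```rust
use std::collections::HashMap;

// Sketch of Dapr's state-management building block. In a real deployment
// the application talks to its local sidecar over HTTP:
//   POST http://localhost:3500/v1.0/state/<store>        (save state)
//   GET  http://localhost:3500/v1.0/state/<store>/<key>  (get state)
// Here an in-memory map stands in for the sidecar; the point is the
// application-facing shape, not a working Dapr client.
struct StateStore {
    data: HashMap<String, String>,
}

impl StateStore {
    fn new() -> Self {
        StateStore { data: HashMap::new() }
    }

    // With Dapr, saving state is one HTTP call to the sidecar; retries
    // and the concrete backing store (Redis, DynamoDB, ...) are the
    // sidecar's job, not the application's.
    fn save(&mut self, key: &str, value: &str) {
        self.data.insert(key.to_string(), value.to_string());
    }

    fn get(&self, key: &str) -> Option<&String> {
        self.data.get(key)
    }
}

fn main() {
    let mut store = StateStore::new();
    store.save("order-1", "pending");
    println!("order-1 = {:?}", store.get("order-1"));
}
```

Because every language only needs to speak HTTP (or gRPC) to its sidecar, different tech stacks get the same fault tolerance and connectivity behavior, which is the application-oriented angle that distinguishes Dapr from a pure service mesh.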
That said, actually, we talked about it, I remember, at least two times in the Radar before. And it didn't make it, because a lot of us didn't know what exactly Dapr is. But the outside community is passionate about this. And we are seeing this Dapr initiative-- by, I think, that was the Microsoft community.
At least it [INAUDIBLE] caught on very early, and then expanded into viable options in multiple languages, like Python and C++, to allow different tech stacks to run on the same runtime. And I would say some interesting things are happening in this platform, and we want to keep track of this on the Radar to see what happens. So that's almost the [INAUDIBLE]. Yeah, I think it's a good example. Shangqi, you were talking about how-- we talked about it at least twice; I think I tried to kill it in the end.
And this is a good example for everybody to understand the background of where this information from the Radar is coming from, right? We're a group of about 20 people curating this and going through all the-- I think this time we had 350 proposals from across Thoughtworks. And we go through all of them and then we vote. So we discuss-- sometimes shortly, sometimes long-- and then we vote. And sometimes the majority is very close, actually, right? So that's why we call it an opinionated guide-- an opinionated snapshot of what's going on.
Yeah. Yeah. Sorry, so the 350 might make it seem easier than perhaps it is because in fact, from every region, we have hundreds of blips.
And of course, if you count all the blips coming from all the regions and getting culled at different levels, I'm sure it would exceed several thousand, not to exaggerate. For sure it would-- Actually, Kumar, I think India is the only region that actually proposes hundreds of blips and has to curate them down. The other regions are not quite as prolific.
Thank you. All right, so in the light of the meta discussion that we are having on the Thoughtworks Technology Radar, I would just like to take Luke's question before we move on to the next blip. Luke asks, I noticed that the Radar tends to just include new things and past adopt items fall off the Radar, so to speak.
Is that-- so yeah. So that's the first question. So maybe I can take a stab at answering that for a change. It's just that the Thoughtworks Technology Radar has limited real estate-- you can put only around 100 blips in every volume or so. So it doesn't mean that blips that were adopted in the past have just fallen off, or that they are no longer in the Adopt state.
It is just that we do not have a fundamentally different thing to say about those blips, which is perhaps why we favor other blips in place of the ones which had been adopted previously. So the narrative around them has not changed a lot, or they continue to exist in the Adopt phase, and we just have limited real estate. Anything else to add on this? Birgitta or Shangqi?
BIRGITTA: No. Although-- so you can search the history of the Radar, but I think we don't have a feature that says, just show me all the past Adopt blips. And also, something to be really careful of is that we do not have the bandwidth to go through all the blips and update them. So if you have blips that were in Adopt three, four, or five years ago, maybe today that's not our opinion anymore, because better alternatives have come up, or because it's kind of outdated.
So that's also something to be aware of. I think we have a big disclaimer box on each of the blips that were on older Radars. But yeah, we don't have a feature where you can just filter by adopt blips at the moment.
Thank you. Thank you, Birgitta. So we'll move on to the next blip, which is Ferrocene, and that is for Birgitta. Yeah.
So we've had Rust-- the language, Rust on the Radar before. I think it even stayed on for multiple volumes. And the space where in Thoughtworks we've actually been most excited about Rust is embedded development.
And also in Germany, for example, where I'm from, we work for a few automotive companies. And especially there, we're excited about it. And this is the context of Ferrocene. So Ferrocene will be a version of Rust, along with some tools around it, that will hopefully be ISO 26262 compliant-- so basically certified in the safety space, so that you can use Rust to build applications for safety-relevant things. Like, for example, in the car, for things like the steering or stuff like that.
Right now those applications are primarily developed in C and C++. And those are also the safety-certified ones, so you basically cannot use an alternative, because you have to fulfill all those regulations. But C and C++ are prone to memory leaks, and there are concurrency issues, right? And also, some of the regulations in terms of the coding standards that you have to follow, in combination with the abilities of C++, for example, actually lead to code that is less readable, less maintainable, less modularized.
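The memory-safety contrast with C and C++ can be shown in a few lines of Rust, where ownership turns a use-after-free into a compile-time error rather than a runtime hazard (the string content here is just an illustrative value):

```rust
// Ownership demo: C and C++ allow using memory after it has been freed;
// Rust makes the equivalent mistake a compile-time error. `take_ownership`
// consumes the String, and the compiler knows exactly where it is dropped.
fn take_ownership(s: String) -> usize {
    s.len()
    // `s` is freed here, deterministically, with no garbage collector
}

fn main() {
    let message = String::from("steering angle: 12.5");
    let n = take_ownership(message);
    // println!("{}", message); // would not compile: value was moved above
    println!("length = {}", n);
}
```

For safety-critical code this matters twice over: the class of bugs disappears, and it disappears without a runtime or garbage collector, which embedded and automotive targets typically cannot afford.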
So a modern language like Rust-- and also a much safer language like Rust-- could actually make this space a lot better. But to be able, again, to use it in these types of applications, it has to be safety certified, if that's a word. And so there's a specific version of Rust that will be the one that is forked once this certification is through. It's version 1.68.
So that's why we already put it on because you can already start using it today if you're also betting on this certification to go through. And there are two companies behind this-- AdaCore and Ferrous Systems-- that are driving this forward. And I'll also put a link in the chat if you want to read up more about that. And yeah, so we're excited for Rust in embedded safety critical applications.
Thank you, Birgitta. And let's move to the last blip that we have for today, which breaks the monotony of Assess. So this is a Hold, for a change: planning for full utilization. Shangqi? Exactly. This is-- we don't have just one Hold blip on the Radar, but in this session we'll just share this one. This one is about capacity management, and why we think it is very important to put planning for full utilization on Hold.
The practice of creating extra capacity in the planning process should be common sense in product management, at least we think so. But we still see many teams trying to plan for full capacity utilization of team members. I think there is already a lot of experience and research in our industry showing that reserving extra capacity during sprint planning generally leads to better quality and better resiliency. It allows the team to [INAUDIBLE] to unexpected events, like illness, production issues, and unpredicted production support requests; it leaves space to deal with tech debt; and it leaves the team space to do team building and the ideation which leads to product innovation. And our experience is that a fully utilized team leads to a collapse in throughput as well, just like a fully utilized highway creates slowdowns and traffic jams.
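The highway analogy has a simple queueing-theory backing: in an M/M/1 model, the average number of items in the system is ρ/(1-ρ), which explodes as utilization ρ approaches 100%. A tiny sketch:

```rust
// M/M/1 queueing sketch: with utilization rho, the average number of
// items in the system is rho / (1 - rho). Wait times degrade
// non-linearly, which is the "fully utilized highway" effect: a team
// (or road) near 100% utilization spends most of its time queueing.
fn avg_in_system(rho: f64) -> f64 {
    rho / (1.0 - rho)
}

fn main() {
    for rho in [0.5, 0.8, 0.9, 0.95, 0.99] {
        println!(
            "utilization {:.0}% -> avg items in system {:.1}",
            rho * 100.0,
            avg_in_system(rho)
        );
    }
}
```

At 50% utilization there is on average one item in the system; at 99% there are about ninety-nine, which is the quantitative version of "fully utilized teams collapse in throughput".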
So during our internal discussion, we also asked, should we address this matter in a more positive way-- like using techniques to limit work in progress? There are a bunch of practical techniques in product management that highlight this. But still, I think the macro environment behind this is the current economic recession and the layoffs in the tech industry, and we see many managers regard improving employee utilization as a higher priority than pursuing business value and product quality. So I do think we should point this out, and putting this practice on Hold is trying to address our concerns, and is more likely to attract people's attention. So that's why we put planning for full utilization on Hold in this volume of the Radar. So I think the positive version of this would have been to build slack in, right? Like, not Slack as the tool, but slack in the team that-- you know.
Yeah, exactly. I think that's what we would like to share for this time. Again, you can check more details in the text of the blips once the Radar is released. I believe that would be in May, right? So I will hand over to Kumar. Thank you so much, Shangqi.
So we are almost on time, but I believe we can answer this question from John. I feel it's a super relevant question, bordering on an existential question at times. How do you think the advent of language models like ChatGPT and Copilot will affect the recruitment process, if coding tasks can be completed by AI? Anyone wants to take a stab at it? Birgitta or Shangqi? I can take it-- but, Shangqi, if you want to, or-- I don't know the answer.
So [INAUDIBLE]. So yeah, I think-- I mean, actually in Thoughtworks, a while ago, we used to have this part of our recruitment process where we asked people to solve an exercise in code. Send us the code, then we would do a review, and then invite people or not invite them.
But we actually scratched this from our process a while ago, and now we do pair programming when the candidate comes in-- with an existing code base, and then just building a new feature into it or something. One of the reasons was also-- I mean, we all know the recruitment market is tight, right? Maybe things are changing a little bit right now, but it was just that not everybody has time to prepare these things, right? So we've already moved away from that for other reasons. And if a company still wants to do this type of thing-- I mean, even before, there were solutions for our recruiting exercises available on the internet, shared by people. But if you still want to do something like this, I guess the exercise would actually have to become bigger, right? So that you actually have to think more about the design that you want to do. And-- I don't know. I mean, in the end, it's also about how you use that AI, right? Even if the AI can do that, if you send in a code base that just blindly used all the suggestions from Copilot, but it's actually not very readable code and it doesn't have tests or something, then it's not a good code base, right? So-- Hey, Birgitta? Sorry, we have one last question.
And after that, I think we can perhaps conclude. Mehta asks, any blip update in the area of sustainability? Good question. Did we have anything this time, Shangqi? I'm not sure.
It's probably better to check once the Radar is formally released next week. We can look forward to that. Yeah. All right, so please keep a watch out for the upcoming Tech Radar, which will be released next week, I believe. And so I think it's time to conclude.
We are just on time-- on the dot. And I just wanted to thank the organizers-- [INAUDIBLE], our speakers-- Birgitta and Shangqi, and the wonderful audience that we've had. And thanks for all of those brilliant questions, and thanks for the level of engagement that you all showed. And thanks for turning up for this preview session.
And as I was mentioning, please continue to stay plugged in to our updates on LinkedIn and Twitter. And we will be releasing the Thoughtworks Tech Radar next week, and I hope you will find it a very fascinating read, just as I do. Thank you so much. Thanks, everyone.