Artificial Intelligence, Real Quality: Exploring AI Testing Technology's Pace and Impact
Good morning or afternoon, everyone, from wherever you might be joining us today. We're excited to present today's webinar on Artificial Intelligence, Real Quality: exploring AI testing technology's pace and impact in 2024. My name is Christian Bey, and I'm the director of marketing here at TestRail; I've been on board since January. My background is primarily in other B2B SaaS industries, sales tech and transportation technology, and industry benchmark and survey reports like this one are something I've done quite a lot of in the past and am very passionate about, so I'm excited to share the results of what we've gathered here today.

Hi everyone, nice to meet you. I'm Judy Bossy, VP of product management here at Idera. As part of this role I oversee multiple brands, especially in software testing and security, and TestRail is personally one of my favorite brands and favorite teams to work with, so I'm very excited to be here. My background is in software testing tools companies, where I've led product management and product marketing teams in both startup and mid-to-large-size environments. I'm really excited about what we'll be covering in this webinar.

Speaking of that, Judy, I'll let you go ahead and talk through the agenda a bit as well.

Thanks. Our topic today is centered on AI: exploring AI testing technology's pace and impact in 2024. We're going to review the findings from a large survey we recently ran. Our marketing team is fabulous, and what I love about working with marketing here at TestRail is that we don't only use it to broadcast the message we have to share from a product development and release standpoint; we also use the marketing team very strategically to conduct market research and customer research, so that we can better inform product roadmap decisions. This survey is just one of many examples of product and marketing working together on that research, and we're excited to share the findings, the key takeaways, and exactly what this means for TestRail. I don't want to give away too many details, but if you stick around until the end of the webinar, we'll cover what AI means for TestRail, what it means for test management at large, and what we plan to do about it in the product roadmap. With the agenda down, I'd love to dig into the survey and let Christian lead the way.

Excellent, thank you Judy. I know everyone's going to be excited to hear what's coming at the end, so I'll try to make my part quick and efficient; you've got to eat your vegetables before you get to the good parts, right? I think you'll enjoy this as much as I have. As Judy mentioned, for those who may have seen or downloaded our annual Software Testing and Quality Report, at the end of last year we asked a very interesting question there: are teams currently using AI, and if so, how are they implementing it in their processes? From that one question we internally had all sorts of other questions we wanted answered, and that spawned the need for this survey: a deep dive on AI in QA. What do people think? What are people doing with it? How likely is it that someone in the marketplace is already using it? There really wasn't a source for this information available at the time we did the research, and last I checked there still isn't, so we took on those questions ourselves.

The very first question: are you currently leveraging AI in your QA process? 65% of the over one thousand people we asked said yes, they are currently using AI.
That's very interesting, because based on the previous results we collected last December we assumed the number was going to be lower; this is much higher, 20% higher in fact, than we initially expected based on that earlier data point. Clearly in 2024 you're seeing a massive surge, maybe unsurprising to some of you but interesting nonetheless, of people rushing to start using AI as part of their QA process. We'll qualify this as we go: what does that actually mean, what does it look like, which tools are they using, and so on.

For the 35% of teams and professionals who said they're not using AI, we asked what's preventing them. We expected data privacy and security at the very top of the list, but interestingly the number one answer was actually uncertainty about the benefits. We're not here to say you have to use AI or to advocate one position versus another, but it is here, and we do want to share some of the lessons other QA professionals reported in this survey about the benefits they've seen; maybe the content we share today can help address that first concern. Number two, right below it, was data privacy and security issues. TestRail works with a lot of industries that are very compliance-focused and have many regulations to follow, finance, energy, defense, and healthcare for instance, just to name a few off the top of my head, and there are really big concerns there: what data are you sharing with these AI tools, how are they consuming it, and what's private versus not private? Those questions do need to be addressed. They've been part of the conversation since AI first started being discussed in this industry, and they'll continue to be until software providers and industry experts reach some conclusion about what standardization looks like. The other answers sit below these; they're significant and interesting, but far below the top two.

Christian, I find this piece especially interesting when it comes to our product management and development teams planning exactly how we're going to implement AI, because one major goal is to address these data privacy and security issues in a way that allows for as much adoption of AI features as possible. Of course, different companies may have different requirements, especially in the regulated industries you speak of. So some of the decisions we're looking to make: how do we make sure these features are optional, configurable, able to be turned on or off if customers want? We also want to make sure each customer's data is kept separate from other customers' data. And lastly, for customers who have TestRail installed on-premise, or who may not even be able to connect TestRail to the outside internet, we're planning some way for them to bring their own AI and connect it to enable the AI features inside TestRail. Those are some of the things we're planning for.

Yes, and another big part of our plans is alluded to here: the uncertainty about the benefits is something marketing and product are going to be partnering on at a lot of companies, and especially here at TestRail, trying to get the word out.
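To make the configurability Judy describes concrete, here is a minimal sketch of how AI features might ship as opt-in, per-tenant-isolated settings with a bring-your-own-model option for on-premise installs. Every key name here is hypothetical; none of these are actual TestRail configuration options.

```python
# Hypothetical settings sketch: these keys are NOT real TestRail
# configuration. They only illustrate the design goals described above:
# AI off by default, customer data isolation, and a self-hosted model
# endpoint for installations without outside internet access.
DEFAULT_AI_SETTINGS = {
    "ai_features_enabled": False,   # off until an admin opts in
    "tenant_data_isolation": True,  # never mix customers' data
    "provider": "none",             # e.g. "openai" or "self_hosted"
    "self_hosted_endpoint": None,   # URL of a customer-run model
}

def resolve_ai_settings(overrides):
    """Merge an admin's overrides onto safe defaults, rejecting unknown keys."""
    unknown = set(overrides) - set(DEFAULT_AI_SETTINGS)
    if unknown:
        raise ValueError(f"unknown settings: {sorted(unknown)}")
    return {**DEFAULT_AI_SETTINGS, **overrides}
```

The point of the sketch is the default posture: nothing is enabled until an administrator deliberately turns it on.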
We'll be sharing how AI is being used and what it's being used for, some of the pros and cons, and the processes and best practices around it; that will be a major focus for TestRail, and I'm sure for other providers out there, over the course of the next year or two.

So we know a lot of people are using AI, and we know why some people aren't. For the folks who are using AI in their process, we wanted to find out: is it actually helping? Are you seeing it impact productivity? This one's sort of a no-brainer, though I didn't necessarily expect the number to be this high: 94% of all respondents said AI is significantly or somewhat improving their productivity and coverage. That becomes more significant later on, when we see how it's being used and which tools are involved. As a little sneak peek: this is still very nascent technology in our space. It still has a lot of growing to do, and the fact that this many people are saying it's helping them, and in a significant way, is important; it definitely speaks to the potential for others moving forward.

How are they using AI? That's another important question, and it runs the gamut; there's a whole slew of things you can apply AI toward. We took some of the most prevalent or most talked-about use cases we found during our research and asked people in a multiple-choice, select-all-that-apply question. Right at the top of the list: writing test cases or scenarios, and creating test automation scripts. You see a couple of different themes in this data that are worth calling attention to. There's AI focused on the individual productivity of a QA professional, trying to get more out of a single workday for one person doing QA and testing. And then there are things happening at more of a team level: enhancing collaboration, or improving visibility and test coverage across a wide range of tests and testers, an entire testing department. So you're seeing a combination of use cases that apply to different sets of responsibilities and scopes. At the bottom there, self-healing test automation: not something that's actually out there very much yet; we put it on the list just to see how people would react, and we got a really low response. These are the kinds of things that may be interesting to watch as AI continues to mature in the space.

As part of that, we asked where people see AI being implemented in the future and where it would be most helpful, and this largely corroborates where people are using it today, which is interesting in itself for the consistency of the answers. Test automation and script generation is number one by a significant margin. Right behind it was repository integration and maintenance. Then, tied for third: automation productivity, that's the individual productivity enhancements, and data analysis, meaning how we aggregate and analyze data for improved insights and decision-making.

It's also interesting to note here, Christian, that these responses came in as open text, free-text responses. I thought it was fascinating that, in preparation for this webinar, Christian used ChatGPT to summarize the responses and group them into these buckets. So if someone said they were using it for test case generation, we asked them to give examples of how they do that, and we used ChatGPT to summarize it all; it was an interesting use case for ourselves.
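The team used ChatGPT for this grouping step; as a rough illustration of the same idea without an API call, free-text answers can be sorted into named buckets by keyword matching. The bucket names and keywords below are invented for the example and are not the survey's actual categories.

```python
from collections import defaultdict

# Illustrative buckets only; the webinar's real categories came from
# prompting ChatGPT over the raw survey responses.
BUCKETS = {
    "test case generation": ["test case", "scenario", "write test"],
    "automation scripting": ["script", "automation", "selenium"],
    "data analysis": ["analy", "report", "insight"],
}

def bucket_responses(responses):
    """Group free-text survey answers into named buckets by keyword match."""
    grouped = defaultdict(list)
    for text in responses:
        lowered = text.lower()
        for bucket, keywords in BUCKETS.items():
            if any(k in lowered for k in keywords):
                grouped[bucket].append(text)
                break  # first matching bucket wins
        else:
            grouped["other"].append(text)
    return dict(grouped)
```

An LLM does this far more flexibly (it handles synonyms and phrasing the keyword list misses), which is why it was the pragmatic choice for a thousand free-text answers.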
When we publish the report version of this, and I didn't mention before that we are going to publish a downloadable report with a lot more detail, we'll give some actual examples of the insights people shared on how they're doing this, so hopefully you can take that and implement it in your day-to-day process.

All right, this is probably one of my favorite slides in the deck. We know AI is being used, but what tools are actually being used by the 65% of people using AI today? Number one, by a huge margin, is ChatGPT, and well below that are all these point solutions, all these AI-embedded features and capabilities across other parts of the DevOps tech stack. There are two interesting points I want to make about this slide. One, the fact that ChatGPT has such a significant lead, if you want to call it that, suggests that even though AI is being used frequently in our space, it's still a very homegrown process. People are doing it themselves, going to ChatGPT, maybe with or without their leadership's support or knowledge; I'm not going to editorialize that, but you see people taking it upon themselves because technology providers are still rolling out features, and there isn't really one standard that works for every single team, person, or use case. Right below that you have a huge list of other providers, and what's interesting is that this is just a cross-section of all the providers out there developing AI. Virtually every software company on the planet right now is building AI features, and if I put every single one on this slide it wouldn't fit. That leads to some questions about this proliferation of narrow-use point solutions: how is it all going to fit together in the future, and how do we standardize across all these different providers? Judy, I'm sorry, I think I cut you off.

No worries. I think it's fascinating that ChatGPT is so widely used. It's also interesting to see several test automation solutions mentioned here, notably Katalon, which has an integration into TestRail if you want it. It's also great to see my former employer Functionize mentioned; Functionize is an AI-powered low-code test automation solution. So it's great to see so many different tools being adopted, several of which can already be integrated with TestRail.

All right, we're on the downhill slope. What types of testing is AI being used for? The results here are actually pretty widely dispersed. Regression testing is at the top of the list, then exploratory testing; maybe those are obvious to everyone, but I expected to see certain types of tests using AI significantly more. What the results actually show is that people are dabbling with AI across the entire range of tests an organization would be expected to run. Some of the differences between answers are too small to be statistically significant, which suggests AI is being used and applied everywhere possible by the teams currently using it.

One of the final slides: we asked the audience what needs to happen to make AI adoption more widespread across the entire industry. The first answer was training and education: people need to know how and where to use AI, and we need to develop standards, best practices, and processes that everyone can adopt to ensure consistency of results. Next was integration of AI into existing workflows. Right now it's one thing to have a ChatGPT that sits outside your solution or your tech stack, but that's probably not going to be allowed, or at least not going to be the best way forward, for a lot of companies and industries, so we need to figure out how to incorporate AI responsibly and consistently into existing workflows. Then accuracy and reliability: I use ChatGPT, and I know it takes a lot of work to get the answers to be effective for me, and I'm sure with a very technical use case it's even more so. The reality is the accuracy and reliability just aren't there today. It takes less time to get an answer, but it can take more time to evaluate that answer's accuracy, to comb through it and make sure it's working correctly, than it would to just do the work from scratch, so there need to be some significant strides there. The last one is customization and adaptability. Generally speaking, AI solutions are very good at getting you about 80% of the way there, but that last 20%, the part specific to your situation or use case, is very difficult for them unless you've put in significant time training them to do what you need.

All right, key takeaways, and then I'll hand it back over to Judy; I'm sure you're all tired of hearing from the marketing guy. AI in QA is still very homegrown and decentralized. There isn't a lot of structure or many operating principles, and these are the things that, as an industry, both the technology providers and the consumers are going to need to solve in the next couple of years if AI is going to reach mainstream adoption. There are still a lot of significant barriers to overcome; Judy mentioned that already. For certain industries AI is simply a no, it's just not possible, and until we figure out ways of implementing AI that allow these more compliance-focused or regulated industries to use it, we're never going to see full widespread use. At the top of that list are security and privacy concerns, and rightfully so; I even saw a question as we were talking about how you can do this without sharing proprietary information. It's a very big concern, and it's a showstopper for a lot of companies and a lot of people. Education needs to be among the number one or two priorities for the industry over the next couple of years. And finally, AI is going to continue to enhance the role of human testers; this is our position, and this is my hand-off to Judy. AI is going to enhance the role of human testers rather than replace them. AI is not going to replace QA, but people who don't use AI are going to be at a significant disadvantage in the future, so we want to find ways of incorporating it to make people more productive and more competitive in the market. I'll hand it off from here.

Thanks, Christian. Before we talk about TestRail for AI, I just wanted to say that seeing the results from this survey was really eye-opening for me and the rest of the product management team. Whenever we do a survey like this we have to take it with a grain of salt due to sample size, but having over a thousand responses from software testing professionals, and we did add several qualifiers to make sure the people whose responses we counted are actual practitioners in the software testing space, the main takeaway is that the demand for AI is very strong, and testers are already starting to leverage AI even outside of their testing tool. That's really promising to see. So what this means for TestRail is that we are very busy making sure we are planning to implement AI very soon.
I know there's a lot of excitement and buzz around this. Before we talk about the different use cases for test management and what this means for the roadmap, I want to start with a bit of an outlook on what we believe this means for test management as a whole. How does AI affect how testers manage their tests? How will test management need to evolve to meet these AI demands from the market? Those are some of the questions I'm looking to answer in the next slide.

All right: the impact of AI on testing. Taking a step back, AI presents both a challenge and an opportunity for software testing. Think about all the code being generated through AI-assisted development: there's a lot more code being written now, at a faster pace, and when there's more code being generated, it requires more testing at a faster pace. What's the easiest way to test faster? Test automation is often seen as the obvious answer, but the need to automate tests results in even more tests that need to be reviewed and maintained. All of this culminates in a huge need for testers to synthesize large amounts of data, understand where things need to be improved as they look through all of these tests, find ways to test more in less time, and, in the midst of all of this, make business decisions. So we think that with AI, test strategy skills are going to be even more difficult to hone with traditional test management software, and this is where we see the opportunity for test management.

If you look at where test management is evolving, we see it evolving to meet this demand for strategic quality, because let's face it: features like test authoring, test case versioning, approvals, even integrations have been around for a decade-plus in test management tools. With testers now in the midst of AI creating so much test case and test execution data, there's a lot of noise to sift through, particularly for test leaders, those who manage teams of testers. Think about it from a test leader's perspective. They want an easy way to see the progress of quality-impacting initiatives. They want to be able to make decisions quickly. They want to know how their teams are performing without taking the time to micromanage every single tester. They want to know whether they're getting the most out of their test automation decisions, and whether they need to rip and replace what they have or invest in maintaining their existing suite. They want to know whether today's testing headcount is enough to meet their release goals or whether they need to supplement their teams with additional testers. With all of these decisions to be made, we want to empower QA leaders to visualize quality, so they can make informed decisions to either release faster and help generate more revenue for the company, or cut waste and optimize costs, whatever the overarching company strategy may be.

This area of reporting, analytics, and insights, what we're calling strategic quality, is where we want to take TestRail even further and extend our capabilities with AI. And while we can also explore new features and enhancements that help with things like test authoring, collaboration, or planning, if we take a step back and look at TestRail over the next three-to-five-year horizon and into the long-term future, features designed to help QA leaders are where we think the bigger opportunity and the bigger impact will be, especially for enterprise test management. There are a lot of use cases to apply to test management, and seeing from the survey data that demand for these AI features is so strong, we're really taking a holistic view: not just building analytics and insights features for test leaders, but also looking at individual productivity features like test authoring and collaboration.
Of the AI use cases we commonly see requested from customers, I'd love to go over six with you today. Looking at the Q&A panel, I see a lot of questions from testers asking what AI use cases there are for manual testing and what AI features could be applied inside TestRail. Beyond this survey, we're also having a lot of conversations with key customers to understand how they want to apply AI, what ideas they have, and what adoption issues they might face, and these six are the top use cases we think would be beneficial for TestRail.

The first is generating a manual test case based on a requirement. This uses LLM tools like ChatGPT or Claude to generate a test case automatically by feeding in the existing data in a requirement format. That might mean analyzing the acceptance criteria on a requirement and translating them into a proper test case format, with preconditions, a description, steps to reproduce, expected results, and so on. This is a very popular request, I think because several tools on the market already let you do this today, so it would be low-hanging fruit for us to consider.

The next item is similar in that it also uses generative AI, but here it's about improving an existing test case for easier test maintenance. You might point at a specific test case that already exists and ask the LLM to improve it and make it easier to maintain, incorporating the features inside TestRail that make maintenance easier: things like reusable steps, parameterization, and configurations.

The third use case uses machine learning to detect duplicates that exist in your test repository. A very common story I hear from our customers is: "I just joined this team, or I just joined this company; we've been using TestRail for many years, we have thousands upon thousands of test cases, and to be honest I'm not sure whether we have redundant tests, but I'm afraid to delete them and do the cleanup because we might need to keep all of this around." The risk of these redundant test cases is not only that things are harder to find, poking into each individual folder to see whether this or that is the test case you're looking for, but also that they can significantly extend release testing cycles. If you only need to execute one test but end up executing the same test five different times because of redundancy, you're multiplying your testing time by 5x. So the use case here is giving testers the ability to identify duplicate test cases, even if they're not worded exactly the same, just somewhat similar, and the option to archive, delete, or merge them.
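As a minimal sketch of how near-duplicate detection can work, test-case titles can be compared by token overlap (Jaccard similarity) to surface merge or archive candidates. Real duplicate detection, as described for this use case, would use machine learning or text embeddings rather than this simple heuristic; the threshold here is an arbitrary illustration.

```python
import itertools
import re

def _tokens(text):
    """Lowercase a title and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def find_near_duplicates(cases, threshold=0.6):
    """Return (title_a, title_b, similarity) for pairs of test-case titles
    whose Jaccard token similarity meets the threshold."""
    pairs = []
    for a, b in itertools.combinations(cases, 2):
        ta, tb = _tokens(a), _tokens(b)
        if not ta or not tb:
            continue
        sim = len(ta & tb) / len(ta | tb)
        if sim >= threshold:
            pairs.append((a, b, round(sim, 2)))
    return pairs
```

Even this crude version catches the "worded slightly differently" case the transcript describes, which is exactly why the feature is appealing: the flagged pairs become a short review list instead of a repository-wide cleanup.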
The next set of use cases is more related to automation. The first is generating an automated test case using plain-text English. I think one of the barriers to test automation is needing a technical resource who understands how to code, and who understands your testing framework, in order to take a manual test and quickly turn it into an automated test. Nowadays there are some pretty innovative tools out there that provide a codeless test automation experience: you can simply feed in a set of manual tests written out step by step in plain-text English and use that to automatically generate test scripts. You don't see the scripts in the way the test is written, but because it is an automated tool, behind the scenes it is still generating and running a script.
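To illustrate the translation step such codeless tools perform, here is a toy sketch that maps plain-English test steps onto structured action tuples. The step grammar, pattern names, and action names are all invented for this example; real tools use far richer natural-language processing than regular expressions.

```python
import re

# Toy grammar mapping plain-English step phrasings to actions.
# Patterns and action names are illustrative only.
STEP_PATTERNS = [
    (re.compile(r'click (?:the )?"(?P<target>[^"]+)"'), "click"),
    (re.compile(r'type "(?P<text>[^"]+)" into (?:the )?"(?P<target>[^"]+)"'), "type"),
    (re.compile(r'open (?P<target>\S+)'), "open"),
]

def compile_steps(plain_steps):
    """Translate plain-English test steps into (action, details) tuples;
    steps no rule understands are flagged for manual handling."""
    compiled = []
    for step in plain_steps:
        for pattern, action in STEP_PATTERNS:
            match = pattern.search(step.lower())
            if match:
                compiled.append((action, match.groupdict()))
                break
        else:
            compiled.append(("manual", {"note": step}))
    return compiled
```

The tester writes and maintains the English steps; the action tuples are what an execution engine would actually run, which matches the transcript's point that a script still exists behind the scenes even though you never see it.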
Being able to do that automatically, this natural-language-processing capability, is really innovative and would of course cut down the time to convert a manual test into automation significantly.

The second use case you see here is using test impact analysis to prioritize test plans. This is all based on the premise that you can't possibly run every single test against every single change in every release. You want to do as much regression testing as you need, but just enough that you're balancing both risk and time; if we ran every test that ever existed, it would take forever to release. So this feature would let you make those test scope decisions more intelligently: based on the changes detected either in your application's source code management tool, or in the requirements attached to the release in Jira, it would highlight which existing tests in TestRail need to be executed, giving you the ability to cut your testing time to just enough to cover the changes or the impacted areas in a given release.

The last use case you see here is classifying defects and the root cause of failed tests. This again relates to test automation. We know from talking to a lot of customers that when they run their regression suite of automated tests, a lot of the time what comes back after execution is just a mass of failures, and these failures take a lot of time to dig through and analyze: is this an actual bug in my application, or a test that needs to be refactored, or maybe an environment issue that led to the failure? Diagnosing these issues takes a lot of time, and it takes somebody who's skilled with automation.
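Before any machine learning, this kind of failure triage can be approximated with simple rules over the failure logs; the sketch below is that rule-based baseline. The categories and log patterns are invented for illustration and are not from any real tool.

```python
# Heuristic rules as a stand-in for an ML failure classifier.
# Patterns here are illustrative, not exhaustive.
FAILURE_RULES = [
    ("environment", ["connection refused", "timeout", "dns", "503"]),
    ("test needs refactoring", ["stale element", "locator", "no such element"]),
]

def classify_failure(log_text):
    """Bucket an automated-test failure log into a probable root cause,
    defaulting to a possible product bug when nothing infrastructural matches."""
    lowered = log_text.lower()
    for label, patterns in FAILURE_RULES:
        if any(p in lowered for p in patterns):
            return label
    return "possible product bug"
```

A learned classifier generalizes past a fixed pattern list, but even a baseline like this shows the payoff: failures that are clearly environmental or test-maintenance issues get filtered out before a human looks at the remainder.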
So this feature would use machine learning to point out anomalies, so that you can more quickly identify the root cause of these failures and do something about them quickly; that's the value-add.

So what we just covered are six use cases, the ones we find to be the most in demand and the most impactful when it comes to what features we should add to the TestRail roadmap. As I mentioned earlier, whenever we work with the marketing team and interface with customers, we want to use every opportunity to learn as much as possible about our customers and the market. I know there are a lot of attendees in this webinar, so I'd love to use this opportunity to run a quick poll. Based on the six use cases I described for test management, which of these AI features do you think would be most helpful to add to TestRail? This is a multi-select, multiple-choice question, so feel free to pick up to three of the six use cases, and in a couple of minutes we'll share the results.

All right, let's see the results of this poll. It looks like we have pretty significant results pointing to the most popular option: generating a manual test case based on a requirement. The second is generating automated tests using plain-text English, and the third is improving test cases for easier maintenance. It's interesting to see that so many of the popular responses relate to test case generation; it seems that creating new test cases is the biggest problem area, or the biggest opportunity, that our attendees see in applying AI.

All right, I know we're coming up on the end of the webinar; feel free to post any questions, as we're leaving time for them to be answered. What I wanted to leave you with today is our view into what we're planning on the roadmap. There's a lot of buzz right now specifically around generative AI, especially with the rise of ChatGPT, Claude, and other similar LLM tools.
We think that, because of this, generative AI might be the lowest-hanging fruit: the fastest way to implement and deliver these AI features for our customers. As I mentioned, being able to generate a test case based on a requirement is a prime example of generative AI. For predictive AI, though, there's a lot of promise when it comes to analyzing large amounts of data, recognizing patterns, and making predictions or forecasts about future events and outcomes. Given where test management is evolving, and the strategic testing skills that will be in higher demand as a result of AI, I think there's a lot of promise on the predictive AI front, especially for test management and especially for test leaders.

So, without being too specific or giving away too many details, because a lot of this is still in the works, here is the TestRail AI road map: how we're looking at the next year of implementation. First, we're targeting generative AI features. What we saw in today's poll is that there's a lot of demand for generative AI features, particularly creating a test case from a requirement; I think that's use case number one to implement as part of this road map. Looking further out, we believe that serving our customers' need to develop strategic quality skills means providing more predictive AI and insights-and-analytics features, to help test leaders pinpoint areas of concern and surface the data points that really support actionable decision making. That's where we're headed next: surfacing test data, whether that's test executions, test creations, or test failures, to identify areas that are improving and areas that need to improve. The insights and analytics features are what I'm most excited about, because they will be truly innovative. They're going to be a bit more complex to build, of course.
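As a toy illustration of that predictive side, assuming nothing about TestRail's eventual implementation: even a least-squares trend line fitted to per-release failure rates can flag whether quality is drifting and give a rough forecast for the next release.

```python
# Fit y = slope * x + intercept by ordinary least squares over x = 0..n-1,
# then extrapolate one step ahead. Pure stdlib on purpose; a real feature
# would use better models and far richer data.

def fit_trend(values: list[float]) -> tuple[float, float]:
    n = len(values)
    mean_x = (n - 1) / 2                      # mean of 0..n-1
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    slope = cov / var
    return slope, mean_y - slope * mean_x

def forecast_next(values: list[float]) -> float:
    slope, intercept = fit_trend(values)
    return slope * len(values) + intercept    # extrapolate to x = n

# Failure rate (%) over the last five releases: a clear upward drift.
rates = [2.0, 2.5, 3.1, 3.4, 4.0]
print(round(forecast_next(rates), 2))
```

On this sample series the forecast comes out around 4.5%, which is the kind of early signal a test leader could act on before a release ships.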
But it's great to share the learnings from this survey with today's attendees. I want to thank Christian and the rest of the marketing team for their support in launching the survey, presenting the results, and especially helping our product management teams use the survey data to guide and inform product road map decisions. Thank you, everybody. I hope this was a helpful look into the TestRail AI road map and an informative session on how, across a survey of over a thousand respondents, other testers in the industry are looking at AI and adopting it already.

All right, we'll transition over to questions, but before we do: where we are in the AI implementation process is that we would love to run a beta for interested customers to provide feedback before we launch to the general public. If you are interested in joining the TestRail AI beta, please answer yes here. If you are not interested, there is no pressure; I'd prefer that you simply not respond, or just answer no. That way, the list of respondents we have are truly interested parties, because with this beta program we'd love to reach out to you, possibly schedule some calls to show you what the features look like, and get some in-depth feedback. This will require a small investment of your time, but we'd love to hear from anybody who's interested.

All right, do you want to jump over to the questions while we let that roll? Yeah, sounds good. All right, let me jump in here. We have a lot of questions about the details of TestRail's future, so Judy, maybe you could speak in a bit more detail about what generative AI could look like in TestRail, and what the predictive AI insights and analytics could be, since there are a number of questions around
that. Sure. And this is all, obviously, possibility, since it does not yet exist; hence the beta, right? We are still in the process of planning out this implementation. Based on the survey and poll results from today, our plans are very much in line with the top requested features. As a reminder, those were being able to generate a test case from a requirement, and being able to improve a test case so that it's easier to maintain. Those are two of the generative AI use cases we are planning to implement in TestRail in the near future. As far as the predictive AI and insights and analytics features go, we are in the middle of buy, build, or partner decisions, so I don't want to give away too much at this point.

That's fair, thank you. Next question: someone still doesn't see how they can get AI to write test cases without revealing all of their proprietary information. Their ISO certification requires that they not take those kinds of risks. What recommendations might you have for them?

Yes. First, we want to provide these AI features in a way where both TestRail Cloud and TestRail Server customers can use them. For TestRail Server customers, ideally we would provide the ability to connect your own AI, which would let you ensure that your data never interacts with a third-party LLM API. Some companies have their own LLM that they prefer to use, and this would be a good opportunity to do that.
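The "connect your own AI" idea boils down to a small provider seam. This sketch is purely hypothetical (the names are not a TestRail API): application code depends only on a tiny interface, so a server customer can plug in an in-house model and keep all data on premises.

```python
# Hypothetical provider seam: the app talks to `complete()`, never to a
# specific vendor, so swapping in a self-hosted model is a one-line change.
from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class InHouseProvider:
    """Stand-in for a customer's own on-premises model endpoint."""
    def complete(self, prompt: str) -> str:
        return "[in-house model] " + prompt

def summarize_failures(failures: list[str], provider: LLMProvider) -> str:
    prompt = "Summarize these test failures: " + "; ".join(failures)
    return provider.complete(prompt)          # data never leaves this provider

print(summarize_failures(["login timeout", "checkout 500"], InHouseProvider()))
```

The design point is that the test-management code never names a vendor: the same call path serves a cloud-hosted model or a fully local one, which is what makes the ISO-style data-residency requirement satisfiable.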
Thank you. Next question: Garrett is asking, outside of using a web recorder, how is AI used for script generation? For script generation, there are a lot of cloud-based tools out there that are truly on the bleeding edge of innovation, a lot of AI-powered test automation tools; Functionize, for example, is a previous company where I've worked. With these tools, you not only have the ability to use a web recorder to scrape metadata from the different elements of a UI, you also get really innovative features like natural language processing. What that means is you can simply write out your test script in plain-text English, say: navigate to google.com, search for puppies, click on the "search" button. Through natural language processing, the tool understands your test intent, so it goes to google.com and puts "puppies" in the search bar automatically, without you having to write those scripts, even though code is technically being produced and run behind the scenes. If anybody hasn't heard of Functionize yet, I think it's a great tool and a good example of this.

Perfect. Next up: is it mandatory to use AI with automation testing, or is there any scope for manual testing as well? I think this question came in earlier, before we went over the use cases, but the three use cases at the beginning in particular apply to manual testing. Generating a test case based on a requirement is a manual use case; improving a test case for easier maintenance is another; and detecting duplicates in your tests can cover manual test cases as well. The other example, from the second slide, was test impact analysis: analyzing your existing manual test case repository to suggest which tests should be cherry-picked for execution in the next release, based on impacted changes in your source code as developers write code, or based on the requirements planned for a given release in Jira. Those would be two more examples.
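The plain-English scripting described above can be caricatured in a few lines. Real NLP-driven tools infer intent from arbitrary phrasing; this sketch only pattern-matches three verbs, purely to show the mapping from free text to structured browser actions (the action names are made up).

```python
# Toy "intent" parser: map plain-English steps to (action, argument) pairs.
# Real tools use NLP models; this version just pattern-matches a few verbs.
import re

PATTERNS = [
    (re.compile(r"navigate to (\S+)", re.I), "goto"),
    (re.compile(r"search for (.+)", re.I), "type"),
    (re.compile(r"click on (?:the button )?(.+)", re.I), "click"),
]

def parse_step(step: str):
    """Return an (action, argument) pair, or None if no pattern matches."""
    for pattern, action in PATTERNS:
        match = pattern.search(step)
        if match:
            return action, match.group(1).strip()
    return None

script = ["navigate to google.com", "search for puppies",
          "click on the button search"]
print([parse_step(s) for s in script])
```

A driver layer would then translate each pair into actual browser automation calls; the value of the NLP front end is that testers never touch that layer.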
Great. And then, on the subject of information security, because that's very critical nowadays: would a local environment, with specific AI models installed and properly trained, be the right or the only way to go to avoid data leaks? Or what thoughts do you have on that? I think that's likely the safest way, especially for customers who want to keep their data hosted on premises; that's the main approach I would recommend. But as we implement this, we want to make sure that every customer's data is kept separate and independent from everyone else's. That is how TestRail data is stored already today, so it's a promise we want to keep for TestRail Cloud as well, since, as Christian mentioned, we have a lot of customers in highly regulated environments that use TestRail Cloud today.

Great. I know we have just a few minutes left, so I think we'll take one more question and then wrap up. How would you differentiate between AI and test automation? This is an interesting question. I think what the industry has suffered from over the last five years is a lot of tools claiming to have AI when it's really not AI. AI, by definition, is something beyond a normal algorithm used in automation; it is not simply code telling an application how to behave. AI is something that learns from data and gets smarter over time. It uses techniques like machine learning to analyze large amounts of data and to provide responses or make predictions the way a human would, by understanding intent. There's a lot of promise out there, but as I mentioned, the industry has suffered a little from software providers saying they have AI when it's really just software algorithms, or plain automation, behind the scenes. I'm hopeful that in the next couple of years, as these AI technologies truly mature and become more widely adopted, it will be easier for users and customers to identify which providers are using real AI and which are just claiming to.
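The distinction drawn here, a fixed algorithm versus something that learns from data, fits in a toy example. This is only a caricature (a running mean is not machine learning), but it shows the behavioral difference: the adaptive check changes its answer as it sees more data, while the hard-coded one never does.

```python
# Fixed-rule automation vs. a check that adapts to observed data.

class FixedRule:
    def is_slow(self, duration: float) -> bool:
        return duration > 5.0          # hard-coded threshold, never changes

class LearningRule:
    def __init__(self):
        self.samples: list[float] = []
    def observe(self, duration: float) -> None:
        self.samples.append(duration)
    def is_slow(self, duration: float) -> bool:
        mean = sum(self.samples) / len(self.samples)
        return duration > 2 * mean     # threshold tracks observed behavior

learner = LearningRule()
for d in [1.0, 1.2, 0.8]:              # a consistently fast test suite
    learner.observe(d)
print(learner.is_slow(4.0))            # unusual for THIS suite: flagged
print(FixedRule().is_slow(4.0))        # static rule misses it
```

Scale the same idea up from a running mean to models over large execution histories and you get the "learns and gets smarter over time" behavior that separates real AI features from relabeled automation.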
I think that we, as part of this education, are starting to become a lot more informed and better able to differentiate the two.
2024-08-25 12:55