Testing Peers
Testing Peers is a community-driven initiative built by testers, for testers. We are a not-for-profit collective focused on supporting each other across software testing, quality, leadership, and engineering. This group is peer-led, values-driven, and passionate about shaping a more thoughtful, collaborative testing culture.
The Testing Peers podcast is now expanding beyond its original four hosts, David Maynard, Chris Armstrong, Russell Craxford and Simon Prior, striving to represent the voices of a diverse and thriving community.
Our inaugural in-person conference, #PeersCon, launched in Nottingham in March 2024, returning for #PeersCon25, with #PeersCon26 already scheduled - further solidifying Testing Peers as a not-for-profit, by testers, for testers initiative.
Testing in the Age of AI
Welcome to episode 135 of the Testing Peers Podcast. This time join Chris, Russell, David and the returning Jon Robinson as we talk all things AI in testing.
Artificial intelligence is already reshaping how we approach testing, QA and quality. But what happens when the pace of change accelerates? In this episode, the Peers explore what AI might mean for our profession, our skills and the value we bring.
We discuss:
- The future of QA roles in the next 3–5 years and which functions may fade or evolve.
- Why exploratory testing could become more important as AI takes on repetitive checking.
- The risk of over-reliance on AI tools, from hallucinations to “blind hope” in outputs.
- New skills testers may need: data literacy, training and validating models, and critical thinking about outputs.
- Shifts in business mindsets: AI as a cost-cutting tool vs AI as an enabler.
- The challenges of scalability, maintainability and sustainability when using AI.
- Ethical and legal concerns around data sharing, intellectual property and training models.
- How diverse skills and perspectives remain crucial for testing in an AI-driven future.
Along the way, we touch on topics from token debt and environmental impact, to healthcare examples, to why trust and human experience still matter.
The #PeersCon26 Call for Collaborations is open for written and video submissions until September 30th 2025, and we're also looking for VolunPeers. More on all this can be found here.
Tickets for the event are now live too, starting at £15 until December, when they go up to £30.
And as always, we are looking for sponsors to make this event the success it has been for the last two years; get in touch if you're interested.
Twitter (https://twitter.com/testingpeers)
LinkedIn (https://www.linkedin.com/company/testing-peers)
Instagram (https://www.instagram.com/testingpeers/)
Facebook (https://www.facebook.com/TestingPeers)
We’re also now on GoodPods; check it out via the mobile app stores.
If you like what we do and are able to, please visit our Patreon to explore how you could support us going forwards: https://www.patreon.com/testingpeers
Thanks to our sponsors – nFocus Testing.
nFocus are a UK based software testing company. They’ve been supporting businesses for 24 years by providing services that include burst resource, accelerated test automation, performance testing and fully managed testing services. In 2021, they launched a Test Automation Academy to create amazing testers and they’ve now created jobs for 48 people in our industry in just under three years!
nFocus were a big part of PeersCon in 2024 and 2025; we're really grateful for all they do to support the Testing Peers.
www.nfocus.co.uk and info@nfocus.co.uk for anyone wanting to get in touch.
0:00: Brighton.
0:02: Hello and welcome to today's episode of the Testing Peers.
0:06: This week we have a special guest with us, so I'll introduce him first: welcome, Jon Robinson.
0:11: Hello.
0:12: Thank you and we have some of our usual suspects, our dodgy, suspicious suspects.
0:17: I'll start with the main pretender, er, Chris.
0:21: Dodgy, hello.
0:22: And then our friend David.
0:24: And a dodgy hello from me.
0:26: There we go.
0:26: And then Russell.
0:27: So that's the 4 of us on today's podcast.
0:29: Today we're gonna talk a little bit about AI and its impact.
0:33: But before we do, here's a little message from our sponsors, nFocus.
0:37: So nFocus are a UK based software testing company, and they've been supporting businesses for 24-odd years now, providing different services: burst resource, accelerated test automation, performance testing, and fully managed testing services.
0:55: Back in about 2021, they launched a Test Automation Academy to create amazing testers, where they teach them and train them, and they've created jobs for about 48 people.
1:05: Which is pretty good going in the sort of three years they've been doing this.
1:09: If you'd like to hear more about what nFocus can do, please reach out at info@nfocus.co.uk or visit the website www.nfocus.co.uk if you want to get in touch.
1:23: Very grateful for them helping us put on the show.
1:25: But let's move on to some more fun and frolics.
1:29: So Jon, I believe you've got some banter for us today.
1:33: I've got a good question for y'all.
1:35: I'm, I'm very curious what your best dad joke is.
1:40: I've been practising it.
1:41: I've got a great one for you.
1:44: I've only got one and it's not good in any sense, but it's a dad joke.
1:48: I mean they're all good, go on.
1:50: A man walks into a bar.
1:52: Ouch.
1:55: That's bad.
1:57: That's... you weren't wrong, you set it up very nicely.
2:01: Yeah.
2:02: Low expectations.
2:04: What, what do you got, Chris?
2:05: I think the one I'm gonna go for would be, how do you get a Pikachu on a bus?
2:13: Mm.
2:14: Pokemon.
2:16: Mm mm.
2:18: Nice.
2:19: Mine is my wife apologised to me for the first time ever in our marriage the other day.
2:26: She said she's sorry she ever married me.
2:29: Yeah, I've heard that one before, unfortunately.
2:34: Both literally and figuratively.
2:37: All right, so I'll give you mine.
2:40: Why are there two Ls in cancelled in the UK but only one in the US?
2:49: We gave you that L back in 1776.
2:52: So, in honour of Independence Day and, you know, the holidays, I had to pull that one out, and I was just looking for an excuse.
3:02: So you know that noise, the tumbleweed? I had the perfect audience for it, so, you know, I figured I would go ahead and just drop that bomb.
3:12: America's independent.
3:13: When did that happen?
3:16: There's Flash.
3:17: Right then, I guess let's talk a little bit about AI and what we think its impact might be on us in the testing community, the QA community, the quality community and so on.
3:28: Anyone wanna start us off?
3:31: I'll ask the question that I've been thinking about for a while: what do the various roles and functions of the QA profession, the testing profession if you will, look like in 3 to 5 years?
3:44: With the rise of AI tools, so much is changing rapidly.
3:48: Everybody's adding AI to their tool of some sort, but even just general usage of AI, you know, ChatGPT, Claude, Gemini, etc., is changing how we do our jobs.
4:04: It feels like there are certain functions within the QA space that are just... well, I'm curious if they're gonna be around in 3 to 5 years.
4:14: I'd love to hear y'all's thoughts and, you know, ideas on that.
4:18: Well, in 5 years' time we'll get to the point where we'll see if our look through the glass at 2030, which we recorded 5 years ago, was correct.
4:28: We did have Russell mention AI in that episode, I will point out.
4:33: That's one of the early episodes, listeners.
4:35: However, I do think our understanding of what AI was going to do in our industry in 2020 was somewhat different to what we have ended up seeing in 2025.
4:49: You know, the rise of MCP, which for the likes of Playwright has only been on a public GitHub for about 4 months at this moment in time, being utilised by a lot of places has meant that many tool vendors are using it as a way of translating user stories into test cases and what have you.
5:10: I don't believe I had anticipated that AI would respond with as much confidence as it does.
5:20: And so as a result of that, and looking at some of the outputs that I've seen, I think one of the most important roles that testers are going to have is in training and validating the quality of these learning models.
5:39: One real positive: it's gonna actually force better standards to be written and better data to be input into things.
5:48: Because these days, these things need explicit instruction for a lot of that work.
5:54: And so that kind of interface, explicit standards, explicitly well-written stories and templates, and the data that's getting input into this, is gonna be really important, as will the validation and the training of the models as we go.
6:08: Those things are gonna be huge.
6:10: And I think one of my positive outlooks on these things is that, while that's all going on, if we lean into that sort of testing-versus-checking world, a lot of that is the checking kind of stuff.
6:23: I think we may well see the rise of the exploratory tester.
6:26: Which could be very exciting.
6:28: Which would be really exciting because I don't think we do enough of it these days at all.
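For listeners who want to see what that user-story-to-test-case step can look like in practice, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording and user story are our own illustrative assumptions, not anything the Peers use or endorse, and the output is a draft for a human to validate, exactly as discussed above.

```python
# Illustrative sketch: asking an LLM to draft test cases from a user story.
# Model name, prompt and story are assumptions made for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_story = (
    "As a registered user, I want to reset my password via an emailed "
    "link so that I can regain access to my account."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are a software tester. Draft concise test cases "
                "(title, steps, expected result) for the given user story, "
                "including negative and edge cases."
            ),
        },
        {"role": "user", "content": user_story},
    ],
)

# A human still reviews every generated case before it enters the suite.
print(response.choices[0].message.content)
```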
6:34: No, I think the one thing that's gonna be interesting to me is, I was at a company that was building an AI-based automation tool, and just how fast the game has changed, even just in the past year.
6:53: And how much better these things are getting: you give it a little bit of instruction and it does a lot of output.
7:01: I'm really curious to see if that rate of progression continues over the next 3 to 5 years.
7:07: What is that gonna look like?
7:09: How much are we gonna have to hold its hand? Because at a certain point, maybe it gets way better than we expected.
7:16: So it's gonna be interesting to see what that does to people whose jobs today rely on writing test cases.
7:24: That's a huge chunk of someone's job.
7:26: If that goes away, hopefully they've got, you know, good skills and upbringing, but yeah.
7:33: Yeah, it will be interesting to see how it all evolves, cos it is a bit unpredictable, isn't it, as you say. If it keeps going at the rate at which it's going, part of me thinks the human race will be extinct by 2030, never mind anything else.
7:44: Yeah.
7:45: It goes through peaks and troughs and makes leaps forward, doesn't it? I think that's the best way I can describe it: it gets over obstacles and that actually makes it jump forward.
7:53: And it will change definitively what we do and how we do it. The best way I use it is as heuristics: it's giving us a shortcut to an outcome, be it test case generation or something else. It's enabling us to get to different outcomes, sometimes better, sometimes not, but it's helping us get to those outcomes a lot faster, which therefore frees us up, or just changes what we focus on.
8:21: To Chris's point, or I think it was yours, Jon, about exploratory testing.
8:25: It'll be interesting, because then the question comes: how much do they value it? Cos at the moment it's not that valued, that's my worry. If that's all you're gonna do, then unless you can persuade them of the value of the AI doing some of these things with this on top, we're gonna see the usual push and pull, which is job reduction.
8:43: Oh shit, we've lost too many people, pardon my French.
8:46: Right, hire some more people back again because they overcompensate or they react too quickly.
8:51: They think this will save us money, it's doing all the work.
8:54: Then they realise it's hallucinating; actually it's doing the most simplistic things, or the average things, but not the abstract stuff well.
9:02: So yeah, it'll be interesting how it evolves, it has to be said.
9:05: Yeah, I was going to say something similar.
9:06: I think that testers may well be part of the training of the models because, as Russell hinted there with the hallucinations, there are reports out there that the quality of software is actually decreasing, because it goes for the sunny day scenario or, you know, the middle of the bell curve; it doesn't pick out the extremes. It only tends to look at the easy options, as it were, rather than those extreme things. That needs to be considered as part of this.
9:40: So especially in the short term, I think the confidence in the AI needs to be proved.
9:45: I would also go along with Russell saying, you know, the sort of peaks and troughs.
9:50: I think there are certain companies out there that will see AI as the golden bullet and absolutely think that the quality way forward is to get rid of all the testers and use AI, and then potentially the quality will decrease and therefore they will then employ certain people back.
10:08: As to the nature of those particular people, again, it depends on the nature of the company and what they're actually working on. But I do agree with Chris, it's the confidence checking: the exploratory testers ensure those edge cases are covered, which, if the reports are correct, seems to be what's missing. Having people to look at those edge cases is possibly the way forward, and those are the roles that will stay.
10:32: I think one of the things that we've seen, both in terms of prospects and talking to friends that are in the recruiting business, is that a lot of the lower-level QA functions, right, your entry-level QA roles, are getting handed off to other functions within the business, whether that's product owners or BAs; in the Salesforce world, it's Salesforce admins. Those people are now starting to take on functions that have historically been QA functions, rather than having a bespoke QA person, and I think that's an interesting outcome, because now they have the ability to do things that previously required a specific skill set.
11:21: So I've got a feeling there's gonna be a shift in the skills people need, like, you know, the additional skills testers need. For the last 10 years it's been automation, automation, automation; that's been the thing: you need to be able to think outside the box, be abstract, question things, challenge things and, ideally, in most people's view, be able to write automation code and so on. I think the AI shift is actually gonna move a lot more towards the kind of big bang that data engineering has been over the last 10 years or so.
11:49: If you imagine, we've talked about data engineering, and certainly IT has talked about data, big data, for a long time now; it feels like 10 years, probably about 20, as the next big thing.
11:59: But actually, what's happening is that to train these AI models, to make them do the right things, to test them, data is becoming the most valuable asset.
12:08: So understanding data, how to use it, how to generate it with AI tools or to train AI tools, is actually going to become the secondary skill set of the people who are successful in 5 years' time. If they can't see how they can use data to check the validity of things, or to build a model that's more accurate, or to question things, they're going to actually struggle. And that is a specialist subject in many ways, and there's different language: testers use a certain language, and so do data scientists.
12:42: And actually there's gonna be a lot of learning of that and bleeding of those two domains into each other.
12:49: So you might find more of a QA role in the data world than traditionally there has been.
12:53: If you actually go and speak to data scientists, the concept of test engineer or something is often considered odd because they build it, they validate what they did, that's what science is.
13:02: They build a hypothesis, they test it.
13:04: On the software engineering side, the developer builds the hypothesis, and QA historically came in and validated it.
13:11: So I do think there'll be much more of a data science thread, I think that's what I'll call it.
13:16: Coding skills are gonna be important, but actually it will be less about automation per se, more about big data: generating data, training on data, using data, analysing data and those sorts of aspects of it.
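As a tiny illustration of the data-generation skill being described, here is a sketch that builds synthetic records with the Faker library. The library choice and the record shape are assumptions made for illustration, not tools mentioned in the episode.

```python
# A tiny sketch of generating synthetic test data: records you can use to
# exercise or train a model without touching real personal data.
# Faker is our choice for illustration; nothing in the episode names a tool.
from faker import Faker

fake = Faker()
Faker.seed(0)  # reproducible data sets matter when validating models

# Generate synthetic patient-style records containing no real PII.
records = [
    {
        "name": fake.name(),
        "email": fake.email(),
        "date_of_birth": fake.date_of_birth(minimum_age=18, maximum_age=90),
    }
    for _ in range(5)
]

for record in records:
    print(record)
```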
13:28: But again, it depends whether you're building AI systems or if you're using AI to help you build systems.
13:34: Cos there's a shift there, isn't there?
13:37: Yeah, I mean, I think that's an interesting question, right? How much are humans gonna be in the loop on the process?
13:45: Or are you gonna be using AI to test AI-created content? Cause that seems like a bad idea.
13:52: Already happens though.
13:53: It does.
13:54: It 100% happens, and I've seen this term 'vibe testing' going around lately.
14:00: It's the idea that, oh, we're gonna use AI to do the testing, and I'm like, yeah, but if the thing you're testing was generated by AI and it's all kind of using the same data set under the covers, isn't that a concern?
14:15: Like isn't that the whole reason we won't let devs test their own stuff?
14:19: So it's very true.
14:21: There's a lot of dangers, if dangers is the right word, but we're good catastrophisers as testers, aren't we?
14:29: There's a lot of things that we are aware of and we can see potentially on the horizon that could be problems.
14:37: Maybe our gut instinct is to say you can't possibly do this without a human tester who's got all these expert skills. But I think by this stage we're long enough in the tooth to know that these things can happen; even if we don't think it's the sane or sensible thing to do, business goes where business goes.
14:56: The entry level for testers, the barrier to entry for testers and anyone in computer science, is totally different to when we joined the industry way back when, listener. Not that we're that old, but it wasn't last week, let's put it that way.
15:10: It wasn't last week.
15:11: No, I think the gulf between different testing tasks that will be performed will be huge.
15:19: We've already seen there's a bit of a, you know, we know there's that sort of regulated versus much more unregulated space.
15:27: There's enterprise versus sort of startups and scale-ups.
15:31: There's waterfall versus your agile DevOps type stuff.
15:35: There's test teams who are in isolation.
15:38: There's offshore consultancy models.
15:40: There's people who exclusively perform manual test cases and write those things.
15:47: There are, of course, automated tests.
15:49: I think that there's already so many different things in the toolkit that exist.
15:53: That's quite broad, and I think one of the things I remember mentioning when we did the testing in 2030 episode was that I felt we would get to a point where there would be many more specialisms.
16:05: And while we may have a lot of those core attributes, we may need to revisit what some of those will be in the future, but those core attributes of sort of curiosity, critical thinking, communication, those will still be important.
16:16: But what we do on a day to day basis may well look very different.
16:21: How that looks and what our backgrounds are that qualify for those things may well be quite different.
16:26: At the EuroSTAR conference this June, they had an AI-themed conference.
16:31: John, you were there as well.
16:33: Mhm.
16:34: Yes.
16:35: One of the keynotes was by Isabel Evans, and her keynote was about breaking testing stereotypes: who's testing and why it matters.
16:44: And one of her closing points was about what we need in testing. Back to the point of saying, like, we need people who've got those hard skills, those computer science ones, and we need people who've got those attributes we've got. But what she said was that to represent the many different consumers who are going to be taking up and consuming software in the future, we actually need that smorgasbord of representation among the people who are testing.
17:12: She was saying we need more technologists, engineers, artists, craftspeople, scientists, communicators, critical thinkers, and maybe even human centred designers.
17:23: And I liked the last bit, which was the muddy-booted pragmatists.
17:27: She was saying, really, that it's gonna take a greater breadth of persona than even what we thought already was a pretty broad persona that testers brought to the table.
17:38: It may well be that we have even a greater breadth of that, but what we do and what we perform as testers is gonna be very different as well.
17:48: Going back a little bit, I agree that there may well be a greater breadth of people.
17:53: But do you think, in the short term, companies might take a short-term view and say that actually we need fewer skilled people, that we don't need that breadth initially, because of AI doing everything?
18:07: I don't just think that, I think they're already doing that.
18:11: Like we've already started seeing that.
18:13: Yeah, blind ignorance; everyone believes in the hope. Like, we as testers can often be the negatives or the pragmatists or the fear factor or whatever you want to call it, but there's a lot of people on the opposite end of that spectrum, which is why, how many times have we heard, 'I'll just ship it'?
18:28: Do you not want to know if it'll actually work if you ship it?
18:30: No, no, the devs did it, of course it'll work.
18:33: OK, blind hope.
18:34: Well, next time they'll be saying, oh, we'll put it through the AI, you know, it's worked, it's worked through that.
18:40: AI wrote it, how can it be wrong?
18:43: If you look at it right, the general trend right now is how do we cut costs, how do we cut headcount, how do we save money, and this unfortunately leans into it.
18:54: The number of people that would come through the prospect lists who were interested specifically in AI-assisted testing tools because they were slimming down their team: they didn't want to lose the bandwidth, but they didn't want, or couldn't afford, the same size of staff. That's already happening, and it's only gonna get worse in the next few years, yeah.
19:24: And once they've invested in the tool, why do they need to have both? You don't double your investment, do you; you invest in one, yeah.
19:31: No, and the worst part is there are a lot of companies out there that are leaning into that idea: our tool can help you eliminate people. They may not use it in their messaging or in their marketing on their website, but I can guarantee you it comes up when they're having the conversations, because we did. We would tell people, if that was something they were interested in: absolutely, our tool can help you eliminate roles.
20:00: Yeah, it's that statement that sometimes you'll see, you know, like 'saves 10 man-hours a day', one person, sort of thing. That's what that equates to. It may not be using those words, but what it often means is you can do other things with that time or, as most people read it, you can save that money.
20:23: And that's what the whole agentic motion that's going on right now is doing: equating each agent to a person.
20:35: Why would you have one person when you can have 50 agents for the same cost?
20:41: And, and also they can be working 24/7.
20:44: Correct, exactly, and you can programme it to do exactly what you want, and you can point it in a direction: you focus on this, and you focus on this, and you just do that over and over and over again.
20:55: That mindset is where I worry about what things look like in 3 to 5 years.
21:00: How do we safeguard the importance of QA as a function, and what we do?
21:09: On the whole, what is our value add? And I guess we've got to learn and adapt, because if our value add is something that AI can replace, quite frankly, it will, the same way the looms and so on were replaced. So we've got to continuously adapt and learn things, because if something is cheaper, more effective, more efficient, then the role goes; that's kind of capitalism 101 really, isn't it?
21:35: And you don't pay for something you don't need.
21:37: So therefore testers, we've got to up our game or change our game.
21:42: But I'm still finding it interesting what AI is actually replacing. I've heard of lots of good experiments going on, like generating test cases, and I've heard lots of negatives of AI too: analysing code bases, trying to make decisions, trying to rewrite things, and being absolutely useless at some of these things. The information I've been hearing is very hit and miss.
22:02: I think it's growing and getting better every month to a degree.
22:07: But the question is at what pace, to the point where it actually goes from being gimmicky, useful but not earth-shattering, to actually being that earth-shattering thing. Cos as soon as it is, the rate of change goes from 'we've got 5 years to adapt' to 'we've got 5 weeks', because technologists and companies adapt very fast.
22:30: I agree with you, and this is where, when the agentic stuff started coming out, it became more of a concern.
22:38: We had a working version in our product that could look at a video and build you a complete test case of what you were doing in that video. So a product owner records a video, says this is the thing that needs to be tested, shows themselves doing it, and then you get a test case out of that which is also automatically automated.
23:01: That's somebody's job.
23:02: Like today, somebody is doing that.
23:05: If the computer can do it, even if it's only 80% comprehensive...
23:10: Yeah, well, that's still... you only need one person versus 5.
23:14: Well, exactly, and that scenario just shows that the skill that's been prevalent for the last 10 years, this automation coding that's been such a big thing, is gone in the scenario you just described, because all you need to do is show it to something, be it a tester doing it or a product manager doing it, it doesn't matter who. And then the question about the outcome would be: is it maintainable, is it reliable? Like, automating things has been possible for years with record-and-playback type tools, but is it reliable, is it maintainable, is it scalable? Those are the questions people are often scared of. But if it is, then the skill of transforming an idea or actions into some written code isn't necessary anymore, per se, or would be a lot less valuable, maybe that's fairer.
24:06: Here's something that I don't think a lot of people think about: how does most automation these days work?
24:12: It's all based around interacting with the DOM to some degree, right?
24:16: You're going to the web page, you're looking for an object based on the DOM, etc.
24:20: What happens when you don't have to do that anymore? Because that's what some of these agent-based interactions are able to do.
24:28: They go to the web page and interact with it just like a human being would.
24:32: They're not using automated tools or hooks to do that.
24:37: They're interacting with it in the same way that anybody else would.
24:41: And now, everybody that's built these tools based around working with the DOM and, you know, what is XPath and all these things...
24:50: That's no longer important.
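For anyone who hasn't written this style of automation, here is a minimal sketch of the DOM-coupled approach being described, using Playwright's Python API; the URL and selectors are invented for illustration. The point is that every locator below depends on page structure, which is exactly the dependency the agent-based approach removes.

```python
# A minimal sketch of DOM-based automation: the test only works because it
# can find elements via selectors such as XPath and CSS.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com/login")  # hypothetical page

    # Classic DOM hooks, tied directly to the page's structure.
    page.locator("xpath=//input[@id='username']").fill("tester")
    page.locator("#password").fill("secret")
    page.get_by_role("button", name="Log in").click()

    # If the DOM changes, these selectors break; an agent that "looks" at
    # the rendered page the way a human does has no such dependency.
    assert page.url.endswith("/dashboard")
    browser.close()
```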
24:52: And yeah, then it'll come down to the usual factors, beyond just how easy is it to do and how fast is it to do.
24:59: Like, if you've got 1000 of these things and you go through a UI directly, is it faster than going, you know, through XPath or anything else?
25:06: And then it'll go into the path of the last 10 years, again, I keep saying 10 years, but a few years, which has been to move testing away from UIs as much as possible, into the API layers, into the lower layers, into the unit tests, and actually test things as low down as possible.
25:26: It'll be interesting to see how all these things evolve because the one thing that seems to be important is being able to run these tests fast, reliably, consistently.
25:35: And I don't think anyone questions whether AI can generate an idea or come up with some things now, or write code now.
25:44: But I'll be interested to see how it copes with scalability and performance, overall maintainability, and some of these 'ilities', which humans have not been great at.
25:55: So if humans haven't been great at it, how easy or how well are we gonna train it?
26:00: Yeah, will AI be good at training?
26:01: It's that training thing I keep coming back to, yeah.
26:04: The quality of the things it's learning from is gonna be poor.
26:07: Anyway, yeah, cos I was gonna highlight, as you say, the maintainability. It's all very well writing it, but if you show the same video again, does it just rewrite a new test case, which may well miss out some of the functionality that was there before? Or, if it's been trained on a shortcut, does it take a shortcut?
26:25: It comes down to the prompts: what have you trained it to do?
26:27: What are your expectations?
26:29: And I think that's really where the QA function's gonna go: defining those expectations.
26:38: Context, there you go.
26:41: Yeah, and it's gonna be about training these things to be more aware, in the sense of how to make this efficient, and optimising things rather than just simply writing things.
26:52: But once you've got something to write, it's probably easier for an AI tool to optimise it.
26:56: Here's a thousand tests that you've generated: go in and make these more efficient and reuse parts of them, blah blah blah.
27:03: And that's where actually you could do it.
27:05: But again, you've got to have someone who's a prompt engineer then in order to actually put the right things in to get to the right outcomes.
27:11: The really cool thing is when you start looking at it and saying: OK, I've got an existing test case repository, manual or automated, doesn't matter, but I've got a repository that's got X test cases in it, and this is my application that I'm testing.
27:27: What's missing?
27:28: What am I not getting?
27:30: Like using it as a tool to help you start to close those gaps and figure out what you're missing.
27:35: Learn those skills now.
27:37: Gap analysis type.
27:38: That's a really good use case for a lot of stuff beyond just automation and test cases, right? Because one thing that we have seen AI doing incredibly quickly is consuming data and providing feedback or output: analysing patterns, identifying gaps, duplications, things like that. It can be very good at that.
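As one small worked example of that pattern-spotting idea, here is a sketch that flags near-duplicate test case titles. It deliberately uses plain Python's difflib rather than an AI service, and the titles and similarity threshold are invented for illustration.

```python
# A minimal non-AI sketch of the duplication-spotting side of the idea:
# flag test cases whose titles are suspiciously similar.
from difflib import SequenceMatcher
from itertools import combinations

titles = [
    "Login with valid credentials",
    "Log in with valid credentials",
    "Password reset email is sent",
    "Login with invalid password",
]

for a, b in combinations(titles, 2):
    similarity = SequenceMatcher(None, a.lower(), b.lower()).ratio()
    if similarity > 0.85:  # threshold chosen arbitrarily for illustration
        print(f"Possible duplicates ({similarity:.0%}): {a!r} vs {b!r}")
```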
28:02: There's a whole wealth of things that we do day to day as testers that isn't just around writing test cases: there's appraising, there's understanding, there's seeing that bigger picture, interfacing with things, understanding customers, risk storming, and we've now got a whole AI piece in keeping up to date with that world. A lot of that stuff is ongoing, and it's very interesting how AI is going to work beyond the automation world.
28:29: One thing I just wanted to drop in there as well: we've talked a lot about the debt that can be created with a lot of tests and things, and Gen AI can do a lot of these things, but there's also, I assume, a huge token debt and environment debt. I've been reading about how much these things cost in terms of global warming as well. Those are things that aren't really taken into account so much at the moment: the impact of AI itself, and its sustainability or lack of credentials there.
28:59: The cost of scalability stuff.
29:00: Yeah.
29:01: I think one of the most interesting things to me, if we look at where the value adds are, is how much it can enable the people that are not automation testers to be more efficient and to do more things. Maybe they add a little bit of automation to their toolkits, but now they can do a lot more. We're not all going to become automation engineers, but now I can do a little bit more.
29:28: I can look at this and create more effective test cases.
29:31: I can see where I've got duplication in my test cases.
29:34: Those are all great things.
29:36: One thing I was gonna point out, I think we touched on it before, is the data. What we, and companies in general, need to be careful of is what we share with these AI tools, you know, with all this training.
29:45: I was gonna bring that up; actually, if you have IP, you can end up giving free access to it by sharing it in the wrong place.
29:53: And you need to be absolutely at the top of your game in order to either bring it in-house or give enough control in order to make sure that you don't share the crown jewels in whatever you're doing.
30:04: And that's very easy to say for a big company, but when you're a startup, you need to be very, very careful as to exactly what you're sharing and where.
30:13: Because AI has got better at its controls over what it's gonna do with your data when you use it, in terms of what the contracts and the legal statements and things say.
30:23: I'm not sure how far they've been tested yet in courts and other things, but I know that for the big GPTs of this world, or their equivalents, there's a fair few court cases going on about their consumption of data that may, or may not, be copyright protected and so on and so forth.
30:39: Yeah, there was Meta's model, wasn't it, consuming prose that had been published on a free-to-access and not fully legitimate website, in order to train their model nice and quick and fast.
30:53: Yeah, well, and then you've got the Studio Ghibli stuff, where they took a style, a very unique style, and replicated it and made it a commodity.
31:03: Like, it just didn't have any value anymore in the same way.
31:07: But that's the same: you put an artist's bit of work in there and then you can generate something very similar to the artist, like Picasso. Then you start devaluing Picasso's work to a degree, so anyone can do a drawing like Picasso and so on.
31:20: These are interesting areas, you know, and if that happens in the testing space, it'll be interesting too: people getting ideas and stealing them and so on. But again, some of our roles may end up being about testing AI; we may end up having more of a field of testing of AI than actually testing using AI, if you see what I mean. These models that have to be generated, these tools that have to be generated, that may be the shift: fewer providers of those services, and more people involved in actually validating them, complicated as they are, because actually their edge cases are extreme.
31:56: Not that you would test every scenario, you can't; you're doing mass data testing, you're doing ranges, you're looking for anomalies. But it's shifting the methodology of testing from 'A equals B', a simple hypothesis, to more of 'in 1000 cases, did it happen once?' You know, you're looking at data and statistical anomalies and things like that, which again is where data science comes into it.
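That shift from 'A equals B' to rates over many cases can be sketched in a few lines; the model stub, data and tolerance below are entirely made up for illustration.

```python
# A toy sketch of statistical-style testing: instead of asserting one
# output equals one expectation, run the model under test many times and
# assert the failure rate stays below a tolerance.
import random

def classify(sample: float) -> bool:
    """Hypothetical model under test: returns True for 'anomaly'."""
    return sample > 0.99 or random.random() < 0.002  # rare false positives

random.seed(42)
runs = 10_000
# Feed only normal-range inputs, so every True result is a false positive.
false_positives = sum(classify(random.random() * 0.9) for _ in range(runs))

# The assertion is about a rate across thousands of cases, not A == B.
failure_rate = false_positives / runs
assert failure_rate < 0.01, f"anomaly rate too high: {failure_rate:.2%}"
print(f"false-positive rate over {runs} runs: {failure_rate:.2%}")
```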
32:19: You saying that makes me think about when I worked at a healthcare place where we dealt with blood data, which meant, you know, the amount of testing that we had to go through was insane.
32:31: It's gonna be interesting to see.
32:33: Are there gonna be certain fields that just can't go into AI and use it, because of concerns like that?
32:41: I know that where I work, there are lots of concerns about the use of data: where it goes, how it's done, training on it and so on, and confidentiality, PII, that's personally identifiable information, and so on.
32:55: And making sure you strip data out of it before you use it for this, that and the other.
32:59: And there's lots of things causing barriers, I think that's the best way of saying it, to using these things effectively.
33:04: But there's also some good cases of where we actually are using it in order to simplify data, to get information to people faster, to sift through bureaucracy, for want of a better word, in a more effective way, which is really helpful.
33:18: But it might be that in some places, medicine, there are some really good use cases of AI.
33:23: It's been using AI for a while to diagnose people.
33:26: Yeah, and it can be quite good.
33:28: Though it's not just AI that can solve some of these problems; there's some weird things in this world. And this is completely abstract, just to kind of wrap us up maybe, or go towards our end.
33:37: But there was a study I was reading on pigeons, and pigeons could spot cancer.
33:41: They were training them on slides of cells and saying, you know, whether it was cancerous or not.
33:46: And one pigeon was about 60-70% accurate.
33:50: With a few pigeons together, they were 99% accurate in diagnosing whether it was cancer or not.
33:54: These are things it would take like 5 or 6 years to train a pathologist to do.
34:00: So my point is, you can train a lot more things than just AI to do good things, but in medicine, you know, you wouldn't trust a pigeon to diagnose your cancer.
34:08: Are we at the point where we trust AI a little bit more now? It'll be interesting to see how it evolves.
34:13: Certain devices, if you said they were tested by AI, would you trust them?
34:16: Cause I wouldn't trust my cancer diagnosis from a pigeon, even if the science said it was 99% accurate.
34:22: Even if they were 99.999% accurate.
34:25: You wait... a pigeon diagnosed me?
34:27: No.
34:28: I'd still rather have a human do it, that's the point.
34:30: Some of this might come down to... a lot of this is down to the human experience, which I think is a lot of what we've been talking about.
34:38: How we experience sorts of things. And as we wrap up, because we could carry on talking about this probably for the next 3 hours, and that probably wouldn't be very interesting for anyone else but ourselves...
34:47: But on that human experience point, we do have, and we do love, in-person conferences, and that's something that we struggled with over the pandemic times with online conferences.
34:59: They're wonderful, but that in-person networking and experiencing of events is something that's been really close to our hearts.
35:06: And as you will have seen, friends, Testing Peers Conference 2026 is live at this moment in time as we are speaking, and you will no doubt be able to check out all the information on the PeersCon website.
35:20: The call for papers is open right now.
35:23: Thank you, Jon, for joining us today.
35:26: Folks, we want your feedback.
35:28: We want to know how you're humanly experiencing this podcast.
35:31: Please get in touch.
35:32: Contact us at testingpeers.com.
35:35: Find us on your favourite socials, maybe even your least favourite socials, and we will see you next time.
35:41: Many thanks to nFocus.
35:44: For now, It's goodbye from the testing peers.
35:48: Goodbye.