Scaling API Testing in the Age of AI: A Conversation with Dave Westerveld
Practical advice on evolving workflows, reusable scripts, and what AI really means for testers today.
From schema validation to shift-left orchestration, the API testing landscape has changed dramatically in recent years. In this conversation, we speak with Dave Westerveld—author of API Testing and Development with Postman, Second Edition—about what has remained constant, what teams still get wrong, and how Postman and the broader tooling ecosystem are adapting to modern testing needs.
Dave is a developer with many years of testing experience across both mature systems and early-stage initiatives. He excels at solving automation problems in team environments and has worked extensively on integrating third-party APIs, scaling quality practices, and bridging the gap between testing and development. With hands-on experience throughout the software development lifecycle, he brings an intimate understanding of what it takes to create high-value, maintainable software—and how automation can help teams do it more efficiently.
You can watch the full interview below—or read on for the complete transcript.
1. The second edition of your book, API Testing and Development with Postman, was published in June 2024. What core ideas or techniques in the book remain just as relevant today despite how things have evolved of late?
Dave Westerveld: Yeah, it really is a changing world out there. Software has always evolved rapidly, but over the last year or two, it feels like we’ve launched onto an entirely new trajectory. But the reality is that in this changing world, while some things change, others are timeless.
For example, there are testing principles that were valid in the '80s, before the consumer internet was even a thing. They were valid in the world of desktop and internet computing, and they’re still valid today in the world of AI and APIs.
An example of this is systems thinking, that is, the ability to understand how the piece you’re testing fits into the entire ecosystem—the entire problem you’re trying to solve—seeing where it fits into that application, understanding how it intersects with customer needs, and being able to take that system view. It’s the ability to zoom out and see the entire forest first, then come back in to a single tree, understand how it fits into the larger picture, and work out how to approach thinking about and testing it.
That's one example of a timeless truth that I’ve tried to capture in my book. And I think things like how to approach and structure your testing are also kind of timeless when it comes to REST APIs. They haven’t fundamentally changed in the last 20 years—neither should the way you think about testing them.
So, when you think about things like endpoint testing versus workflow testing, it's important to understand the differences between them—and how to structure your tests accordingly. You also need to consider when to do exploratory testing, when to automate, and when to run regression tests, along with the trade-offs involved in each scenario and how to choose the right approach.
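To make that distinction concrete, here is a minimal sketch of an endpoint-level Postman test script. The /users endpoint and its response fields are hypothetical examples, not anything from the book:

```javascript
// Endpoint test: verifies a single endpoint in isolation.
// The /users endpoint and its response fields are hypothetical examples.
pm.test("GET /users returns 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Each user has an id and a name", function () {
    const users = pm.response.json();
    pm.expect(users).to.be.an("array");
    users.forEach(function (user) {
        pm.expect(user).to.have.property("id");
        pm.expect(user).to.have.property("name");
    });
});
```

A workflow test, by contrast, would chain several requests together, passing data between them with variables, to exercise a complete user journey rather than a single endpoint.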
Of course, some things have changed as well. Postman continues to evolve and introduce new features—and the elephant in the room in any tech conversation right now is AI. Postman is actively integrating more AI capabilities into the tool, and I think that’s something worth exploring beyond the scope of the book. The book provides a solid foundation, but from there, you can move on to some of the newer AI-powered tools they’re offering.
2. Postman’s AI assistant (Postbot) now generates tests and scripts. Do you see AI becoming a reliable partner in real-world API testing, or does testing in the enterprise context still need strong human oversight?
Dave Westerveld: I actually think it’s a bit of both. AI presents some different opportunities, and I’d even frame this in terms of skilled practitioners—those with a lot of experience who've been testing for a long time—versus newer entrants to the field, more junior testers who are earlier in their careers.
It’s important to think about how to use AI effectively in those different situations. For a skilled tester, someone with a lot of experience, these AI tools can help you move more quickly through tasks you already know how to do. Often, when you reach that level, you’ve done a lot of testing—you can look at something and say, “OK, this is what I need to do here.” But it can get repetitive to implement some scripts or write things out again and again.
AI can help you move faster. You can look at what it generates, review it, tweak it. You have the experience to evaluate whether it’s doing a good job—and that’s really important. We need experienced practitioners who can work with AI systems to make sure they’re solving the real problem and doing so in a way that’s safe, especially in business and corporate contexts.
On the flip side, for more junior people, there’s a temptation to use AI to auto-generate scripts without fully understanding what those scripts are doing. I think that’s the wrong approach early in your career, because once the AI gets stuck, you won’t know how to move forward.
In my experience, AI doesn’t produce maintainable scripts or setups. It just generates some code—and it’s a little bit, let’s say, random. Later on, when you’re trying to maintain it, run it multiple times, or make changes, you can get really bogged down.
So if you’re earlier in your career—in the first couple of years—use AI to help you learn. Let it generate code, then ask: “What does this mean? Why are you doing it this way? Can you explain this?” Use it more as a learning tool than an autocomplete tool.
So yes, AI can be a reliable partner, but it needs evaluation. It can be a great tool—but it has to be used well. And that’s where the balance—and the “both”—comes into my answer.
3. Postman now supports gRPC, WebSockets, and GraphQL. How should testers adapt their approach when working with these protocols compared to REST?
Dave Westerveld: Again, just to go back—there are some fundamental things that stay the same. We talked about systems thinking. Some things apply no matter what kind of system you’re testing, and those general testing principles are always relevant.
But when you get into different protocols, there are definitely differences in the nitty-gritty details of how you go about testing compared to REST.
For example, with gRPC—you tend to get lower-level access. It’s remote procedure calls; you’re calling a method directly on another computer. So it gives you access to the code at a lower level of abstraction than a REST endpoint. That increases the kinds of things you can do, the ways you can combine them, and even the impact you can have on the underlying system.
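As a rough illustration of that lower level of abstraction, here is what a direct gRPC call might look like from Node.js using the @grpc/grpc-js and @grpc/proto-loader packages. The users.proto file, the "users" package, UserService, and its getUser method are all hypothetical stand-ins:

```javascript
// Hypothetical gRPC call from Node.js: a direct method invocation on a
// remote service, rather than a request to a REST endpoint.
const grpc = require("@grpc/grpc-js");
const protoLoader = require("@grpc/proto-loader");

// users.proto, the "users" package, and UserService are assumed examples.
const definition = protoLoader.loadSync("users.proto");
const proto = grpc.loadPackageDefinition(definition);

const client = new proto.users.UserService(
    "localhost:50051",
    grpc.credentials.createInsecure()
);

// Calling getUser here is calling a method on another machine directly.
client.getUser({ id: 42 }, function (err, user) {
    if (err) {
        console.error("Call failed:", err.message);
        return;
    }
    console.log("Got user:", user);
});
```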
One area where you really see a difference with these newer protocols is in how you think about test coverage. The way you structure and approach coverage will be different from how you’d handle a REST API.
That said, there are still similar challenges. For instance, if you’re using gRPC and you’ve got a protobuf or some kind of contract, it’s easier to test—just like with REST, if you have an OpenAPI specification.
So advocating for contracts stays the same regardless of API type. But with GraphQL or gRPC, you need more understanding of the underlying code to test them adequately. With REST, you can usually just look at what the API provides and get a good sense of how to test it.
But with GraphQL, there are a lot of possible query combinations. That even shows up in the documentation. A REST API usually has simple, straightforward docs—“here are the endpoints, here’s what they do”—maybe a page or two. With GraphQL, the documentation is often dynamically generated and feels more like autocomplete. You almost have to explore the graph to understand what’s available. It’s harder to get comprehensive documentation.
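A quick sketch of that combinatorial flexibility: a single GraphQL endpoint accepts whatever field selection the client asks for. The endpoint URL and the fields below are hypothetical; the request is sent from a Postman script with pm.sendRequest:

```javascript
// Hypothetical GraphQL query; the client chooses which fields to request,
// so two consumers of the same endpoint may exercise very different paths.
const query = `
  query {
    user(id: 42) {
      name
      orders { id total }
    }
  }
`;

pm.sendRequest({
    url: "https://api.example.com/graphql",
    method: "POST",
    header: "Content-Type: application/json",
    body: { mode: "raw", raw: JSON.stringify({ query: query }) }
}, function (err, res) {
    if (err) { console.error(err); return; }
    console.log(res.json());
});
```

Every different field selection is, in effect, a different request shape, which is why coverage is harder to reason about than with a fixed set of REST endpoints.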
And that really points to the challenges of testing these systems compared to RESTful APIs.
4. Reusable scripting and Package Libraries were introduced to reduce boilerplate. How do you recommend teams maintain reusable, scalable test logic as their API ecosystem grows?
Dave Westerveld: Yeah, I think reusability is a bit of a double-edged sword when it comes to test automation.
On one hand, I’m really glad Postman is supporting this. In my book, I share some ways to make scripting reusable and explain the workarounds we had to rely on before Postman offered native support. It used to be complicated—not because we wanted it to be, but because the tooling wasn’t there yet. So it’s great to see them now building in native solutions. There are definitely situations where you want to share code across different scripts or collections.
The flip side is that we need to be intentional about when we share. As Postman makes it easier, that consideration becomes even more important—you don’t want to overuse the feature.
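For reference, Postman’s Package Library exposes shared scripts through pm.require. A minimal sketch, assuming a hypothetical package named "@my-team/assertions" with a helper called assertCommonHeaders:

```javascript
// Importing a shared script from the team's Package Library.
// "@my-team/assertions" and assertCommonHeaders are hypothetical names.
const helpers = pm.require("@my-team/assertions");

pm.test("Response carries the standard headers", function () {
    helpers.assertCommonHeaders(pm.response);
});
```

The convenience is exactly why intentionality matters: every collection that imports the package inherits any breakage in it.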
There are a couple of key things to keep in mind. First, you generally want automated tests to be independent. That’s not always the case—sometimes you're doing workflow testing—but in most cases, you should be able to run a single test without it depending on others or being affected by shared environment state.
So when you share code across tests, make sure you’re still supporting test independence.
Another issue I’ve seen is sharing too much. If something in the shared code breaks—or if a dependency the shared code relies on fails—you can end up with all your tests failing. Dozens or even hundreds, depending on your scale. And sometimes it’s just one small issue in a shared module. So you have to be careful that a single point of failure doesn’t take everything down.
In cases like that, it’s worth asking: “Do we really need this to be a shared script, or can we mock this instead?” For example, if you're repeatedly calling an authentication endpoint that you're not explicitly testing, maybe you could insert credentials directly instead. That might be a cleaner and faster solution.
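As a hedged sketch of that idea, a pre-request script can inject a token that was provisioned once, instead of calling the auth endpoint before every test. The authToken variable name is a hypothetical example:

```javascript
// Pre-request sketch: reuse a token provisioned once per run instead of
// calling the auth endpoint before every test. "authToken" is hypothetical.
const token = pm.environment.get("authToken");

if (token) {
    pm.request.headers.add({ key: "Authorization", value: "Bearer " + token });
} else {
    console.warn("authToken is not set; provision it once before the run.");
}
```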
This also ties into performance. Shared code that’s slow can slow down your entire test suite. That’s another risk to factor in.
There's one more angle I think is really important—using tests as documentation. A well-written test tells you what the system is supposed to do. It shows real examples of expected behaviour, even if it's not a production scenario. You can read it and understand the intent.
But when you extract too much into shared libraries, that clarity goes away. Now, instead of reading one script, you’re bouncing between multiple files trying to figure out how things work. That hurts readability and reduces the test's value as living documentation.
So yes, reusable scripting can save time—but you have to weigh the trade-offs carefully. Use it effectively and efficiently, in ways that support long-term maintainability and clarity.
5. Contract testing and schema validation have gained traction. What practical advice do you have for integrating these into CI workflows without slowing down development?
Dave Westerveld: To some extent, I’d say the point is to slow development down—and I say that in air quotes.
If someone’s building a new endpoint or feature and it doesn’t conform to the spec, that’s a problem. The whole point of having a specification is that it defines the contract we’re all agreeing to—whether that’s between frontend and backend teams, or with external consumers.
So if you're violating that contract, the right response is to stop. This isn’t what we agreed on—you need to go back and fix it. So yes, in that sense, we want development to “slow down” when there’s a spec violation. But in the long run, this actually speeds things up by improving quality. You’re building on a solid foundation.
My practical advice is to integrate these checks as early as possible in the workflow. Get them into CI right next to where the development is happening—ideally as part of the developer’s own merge or pre-merge CI.
Why? Because if you find a contract issue after someone’s built additional layers on top, it's much harder to fix. You might need structural changes or endpoint redesigns. But if you catch it an hour or two after the initial commit, it’s far easier to resolve.
So catching it early reduces delays. And high quality, early on, always leads to faster development over time.
The good news is that these checks—schema validation, contract testing—can run quickly when configured well. You don’t need to wait for a nightly build. You can run them directly in CI and get immediate feedback. That’s the best way to enforce contracts without creating unnecessary delays.
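For illustration, a schema check of this kind can be a few lines in a Postman test script. The schema below is a placeholder, not a real contract:

```javascript
// Minimal contract check; this schema is a placeholder example, not a
// real specification.
const userSchema = {
    type: "object",
    required: ["id", "name"],
    properties: {
        id: { type: "integer" },
        name: { type: "string" }
    }
};

pm.test("Response matches the agreed contract", function () {
    pm.response.to.have.jsonSchema(userSchema);
});
```

Ideally the schema would be derived from the OpenAPI specification itself, so the test and the contract cannot drift apart.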
6. Performance testing within Postman has become easier. At what stage do you believe API performance testing should be introduced in the software development lifecycle—and how should teams balance it with functional testing?
Dave Westerveld: I include a few tips in my book on what to consider from a performance testing perspective, and how to use Postman’s built-in performance tools.
I think there should be a baseline level of performance testing—just like there’s a baseline for security or other specialised types of testing. It’s something every tester should be thinking about. Ideally, some form of performance evaluation should be built directly into your functional tests.
For example, when I’m testing an API in Postman, I pay attention to how long the request takes. Postman shows you that. If a request consistently seems slow, I dig deeper. Postman offers tools that help you start diagnosing where that slowness might be coming from.
That kind of early performance observation should happen right alongside your functional tests. Hopefully, you’ll catch the obvious issues early on.
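A minimal sketch of that kind of baseline check, with an arbitrary 500 ms budget standing in for whatever threshold makes sense for your API:

```javascript
// Baseline performance check alongside the functional assertions.
// The 500 ms budget is an arbitrary example threshold.
pm.test("Response arrives within the performance budget", function () {
    pm.expect(pm.response.responseTime).to.be.below(500);
});
```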
That said, performance testing is a specialised area for a reason. It can be tricky—especially in today’s world of autoscaling.
Performance outcomes depend heavily on the hardware you're running on. If your app is deployed to the cloud, it’s likely on scalable infrastructure. You might be able to add more CPUs or memory, or the system might autoscale based on traffic. That makes it hard to get accurate, consistent results in such environments.
If you want to do high-scale load testing—to pinpoint bottlenecks or evaluate how the app behaves when scaled beyond a certain threshold—that usually requires a dedicated team. Those types of tests involve tools and expertise well beyond what Postman offers.
So the balance looks like this: start early. Integrate basic performance checks directly into your functional tests using Postman. But when it comes to deeper performance validation or scale simulation, bring in specialists and use dedicated tools.
7. Shift-left testing and orchestration are trending. How do you see API test orchestration evolving, especially in CI/CD pipelines at scale?
Dave Westerveld: Shift-left and orchestration have been trending for quite a while. As an industry, we’ve been investing in these ideas for years—and we’re still seeing those trends grow. We’re pushing testing closer to where the code is written, which is great. At the same time, we’re seeing more thorough and complete API testing, which is another great development.
As a quick aside—in the world of AI, APIs are even more critical. They’re the primary way AI agents interact with your system. So robust API testing is essential for modern applications.
Now, the thing about shift-left and orchestration is that they can sometimes be at odds. Shift-left means running tests as early as possible, close to the code. The goal is quick feedback. But orchestration often involves more complexity—more setup, broader coverage—and that takes longer to run.
So those two trends can pull in different directions: speed versus depth.
But I think we’re seeing a good trend overall. We’re pushing testing left and improving the speed of execution. That’s happening through more efficient test design, better hardware, and—importantly—parallelisation.
Parallelisation is key. If we want fast feedback loops and shift-left execution, we need to run tests in parallel. For that to work, tests must be independent. That ties back to an earlier point I made—test independence isn’t just a nice-to-have. It’s essential for scalable orchestration.
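As a sketch of what that can look like in practice, the newman npm library can run independent collections concurrently from a Node.js script. The collection file names here are hypothetical:

```javascript
// Sketch: running independent collections in parallel with the newman
// npm library. The collection file names are hypothetical.
const newman = require("newman");

function runCollection(file) {
    return new Promise(function (resolve, reject) {
        newman.run({ collection: file, reporters: "cli" }, function (err, summary) {
            if (err || summary.run.failures.length > 0) {
                reject(err || new Error(file + " had failing tests"));
            } else {
                resolve(summary);
            }
        });
    });
}

// Because the suites share no state, they can run concurrently.
Promise.all([
    runCollection("users.postman_collection.json"),
    runCollection("orders.postman_collection.json")
]).then(function () {
    console.log("All suites passed");
}).catch(function (err) {
    console.error("A suite failed:", err.message);
    process.exit(1);
});
```

Because neither collection depends on the other’s state, the runs can overlap safely; shared state is what forces test suites back into slow, serial execution.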
So I think test orchestration is evolving in a healthy direction. We’re getting both faster and broader at the same time. And that’s making CI/CD pipelines more scalable and effective overall.
8. With tools like Hoppscotch and Insomnia maturing, how should teams evaluate whether to stick with Postman or diversify tooling across workflows?
Dave Westerveld: I like playing around with new tools—but in general, my advice is: don’t jump ship for the latest and greatest too quickly.
It’s fun, right? There’s a new tool, you hear the buzz, it has some cool new features… go ahead and try it out. See if it fits your needs. But remember: as an early adopter, you're going to face feature limitations. Tools like Hoppscotch, Insomnia, Bruno—they bring fresh ideas, but they often lack the depth and robustness that come from years of development.
Postman has been around a long time. They’ve got a large team and a broad feature set. So when evaluating a new tool, look closely at your needs. Does this tool support the workflows and features your teams depend on?
This becomes especially important in large organisations. The more people using a tool, the more diverse the use cases—and the more likely it is that teams rely on a wide range of features. So it’s not that you should never switch tools, but if you do, make sure you’re not giving up something essential.
For example, I recently tried Bruno. I liked it—I thought their approach to change management was really well designed. But it didn’t support some of the features I rely on. I experimented with it on a small project, but in the end, I decided I still needed Postman for my main workflows.
That said, I still open Bruno now and then. It’s useful, simple, and interesting—but we’re not ready to adopt it team-wide.
One other thing to keep in mind: existing tools often absorb the best ideas from new ones. If you find a feature you love, go ahead and submit a request to Postman. They’ve got a GitHub repo, and you might find someone has already suggested it. You can chime in and help signal demand.
So yes, explore new tools, stay aware of what’s out there—but don’t rush to switch unless you’ve carefully evaluated what you’d be giving up and whether the change is worth it.
9. Enterprise testing involves governance, observability, and collaboration. In your experience, what’s one common mistake teams make when scaling API testing across squads—and how can they fix it?
Dave Westerveld: One thing I’ve often seen is teams reinventing the wheel when it comes to CI.
You’ll have different teams, and each one creates its own CI process. Maybe they write their own wrapper around Newman—the command-line runner for Postman—and integrate it into their pipeline in a way that’s similar to what others are doing, but not quite the same. That leads to a lot of duplicated effort.
There’s usually plenty of shared discussion around how to structure tests, how to write assertions, and how to think about testing logic. But the execution side—how we actually run those tests—doesn’t always get the same attention.
How do we lay out our CI workflows? What schedules should we use? Where in the pipeline do tests belong? What are our best practices for integration? Those are the kinds of conversations that often get missed.
Depending on your company structure, there are different ways to fix this. If testers are embedded in development squads, you might set up a community of practice—some shared forum where testers can align on CI best practices. If you have separate testing teams, find ways for them to collaborate across boundaries and share tooling strategies.
The exact structure will vary from company to company. But the goal is the same: build shared CI practices—not just shared testing strategies. Be deliberate about how you run tests, not just how you write them. That’s the piece that often gets overlooked.
To explore the topics covered in this conversation in greater depth—including practical guidance on testing strategies, automation patterns, and real-world Postman workflows—check out Dave Westerveld’s book, API Testing and Development with Postman, Second Edition, available from Packt.