Educational · September 2024 · 9 min read

API Testing for People Who Don't Do It Enough

A practical guide to contract testing, schema validation, negative testing, and the API bugs that your UI test suite misses entirely.

Ask most QA teams whether they do API testing and they'll say yes. Ask them what that means in practice and the answer is usually: "we have some Postman collections that we run manually before releases." This is better than nothing. It's also nowhere near the coverage that well-designed API testing provides.

API testing is probably the highest-leverage investment in your testing strategy, and the one most teams under-execute. This article is about closing that gap — not comprehensively, but enough to get you from "Postman collections we run sometimes" to "an API test suite that actually catches bugs."

Why API testing gets neglected

The conventional testing lifecycle — write a feature, test it in the UI — creates a natural focus on the front end. APIs are invisible to users. When an API bug surfaces, it usually surfaces as a UI bug: something displays wrong, something doesn't load. The tendency is to test at the layer where the symptom appears rather than at the layer where the cause lives.

This is backwards. As the ISTQB Advanced Technical Test Analyst syllabus emphasises, testing at a lower level — closer to where the logic executes — produces faster, more stable, more diagnostic tests. A UI test that fails because an API returned a 422 tells you that something went wrong. An API test that fails on the same endpoint tells you exactly what went wrong, with the response body, headers, and timing right there.

What UI tests miss by design

UI tests operate through the presentation layer. They validate that a button exists, that a form submits, that a success message appears. They do not natively validate:

  • The HTTP status code returned by the underlying API call
  • The structure and data types of the response body
  • The behaviour of the API when called with unusual or malformed inputs
  • Response time and throughput under load
  • Whether the API contract has changed between versions

None of these are exotic edge cases. All of them are real defect categories that occur in production. Every one of them is faster and cheaper to catch at the API layer than to infer from UI test failures.

Contract testing: the most important type you're probably skipping

Contract testing verifies that an API honours its published contract — the specification of what endpoints exist, what they accept, what they return. It's distinct from functional testing (which verifies that the API does the right thing) and from integration testing (which verifies that two services work together correctly).

Contract testing is critically important in microservices architectures and in any system where a consumer (front-end application, mobile app, third-party integration) depends on a provider (backend API). When the provider changes its contract — even a well-intentioned change, like renaming a field — consumers break. Contract tests catch this before deployment.

Tools like Pact implement consumer-driven contract testing: the consumer defines the contract it relies on, and the provider's test suite verifies that the provider conforms to it. This is more powerful than provider-side contract testing alone, because it encodes the actual dependency.

A concrete example

Your mobile app expects a user object with {"name": "string"}. A backend developer renames the field to firstName for a new feature. No unit tests fail. No UI E2E tests fail immediately (the rename hasn't been deployed yet). A consumer-driven contract test fails immediately, at the point of the backend change, before it can reach the consumer.

Schema validation

Schema validation is the simpler sibling of contract testing. Rather than a bidirectional contract, you're asserting that every response from an endpoint conforms to a defined JSON schema: correct field names, correct data types, no unexpected nulls, required fields present.

Schema validation catches a class of bugs that functional tests miss entirely: the API returns a 200 and the functional test passes, but a field has silently changed type from number to string, and your consumer fails when it tries to do arithmetic on it. Schema validation would have caught this. A functional assertion of "did the request succeed?" wouldn't.

Postman supports JSON Schema validation natively in test scripts. REST Assured supports it with the JSON Schema Validator library. Neither requires significant setup. The payoff is disproportionate to the effort.

Negative testing: where the real bugs are

The ISTQB Foundation Level syllabus defines negative testing as testing the system with inputs or conditions it isn't expected to handle gracefully — inputs outside the valid range, missing required fields, invalid formats, boundary values. This is where a surprising number of API bugs live, for a simple reason: developers test their happy path and deploy.

Practical negative test categories for APIs:

  • Missing required fields: does the API return a clear 400 with a useful error message, or a 500 with a stack trace?
  • Invalid data types: string where integer expected, negative number where positive required
  • Boundary values: exactly at the boundary (max string length, min/max numeric range) and just outside it
  • Oversized payloads: what happens when someone sends a 10MB JSON body?
  • Special characters: SQL injection patterns, Unicode edge cases, emoji in text fields
  • Authentication edge cases: expired tokens, tokens from the wrong environment, tokens with insufficient permissions

The goal isn't to find SQL injection vulnerabilities (though you might). The goal is to verify that your API handles invalid input gracefully — returning appropriate error codes, not leaking implementation details, not crashing the server.

Tools and where to start

The most accessible starting point is Postman, which most teams already have. A Postman collection with test scripts — not just request definitions, but pm.test() assertions on status codes, response bodies, and response times — is already significantly better than manual inspection of responses.

Newman (Postman's CLI runner) lets you execute collections in CI/CD pipelines without the GUI. This is the step from "Postman collections we run manually" to "API tests that run on every build." It's one configuration line in most CI systems.
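A CI step for this can look like the sketch below (GitHub Actions syntax; the collection and environment file paths are assumptions — substitute your own).

```yaml
# Sketch of a pipeline step running a Postman collection with Newman.
- name: API tests
  run: |
    npm install -g newman
    newman run tests/api-collection.json \
      --environment tests/staging.postman_environment.json \
      --reporters cli,junit \
      --reporter-junit-export results/newman.xml
```

The JUnit reporter output lets most CI systems surface individual failing requests in their test report UI, rather than a single pass/fail for the whole collection.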

For teams that want code-based tests with more flexibility, REST Assured (Java) is the established choice for JVM stacks. For JavaScript/TypeScript stacks, Playwright's API testing features (using request context) allow you to write API tests alongside your UI tests in a single framework. k6 is excellent for API tests that also need performance characteristics measured simultaneously.

In CI/CD

API tests should run in CI on every pull request against a test environment. They're fast — a comprehensive API suite can run in under two minutes — and they catch regressions at the point of change rather than at the point of deployment.

The setup discipline is straightforward: environment variables for base URLs and credentials, a test environment that can be seeded to a known state, and a Newman or k6 run command in your pipeline config. The hardest part is usually the seeded test data — building the habit of ensuring each test run starts from a predictable state. Do that once, and the CI integration is straightforward.

If you're currently running zero automated API tests, the highest-value starting point is your authentication endpoint and your highest-traffic data endpoints — three or four Postman tests with Newman in CI is more value than you might expect from the small investment required.


References: ISTQB Foundation Level Syllabus v4.0; ISTQB Advanced Technical Test Analyst Syllabus; Pact documentation; Postman learning center; Newman documentation.

Enjoyed this article?

Let's Talk About Your QA

Free 45-minute assessment. We'll give you an honest review of your testing coverage — no sales pitch.