Contract-based testing 5: The best approach

Two rams locking horns
Photo by jean wimmerlin on Unsplash

In parts 3 and 4, I explained how the two approaches to contract-based testing work: the provider-driven approach in part 3 and the consumer-driven approach in part 4.

In this part, I want to compare the two approaches. While doing so, I’ll provide some more detail on each of them.


Contracts

In previous parts, I talked about the contents of each approach’s contracts. The approaches are different, which results in different types of contracts.


A provider-driven contract containing multiple endpoint definitions
Image by Vilas Pultoo

The provider writes a provider-driven contract, which contains the definition and documentation of the interface. It describes all the technical details: endpoints, response body structure, and so on. Because of this, a provider-driven contract feels a lot like technical documentation. It specifies as many details as possible, leaving as little room for assumptions as possible.
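As a rough sketch, a provider-driven contract and a check against it could look like this in Python. The dict layout is invented for illustration (loosely OpenAPI-flavoured), not a real specification format:

```python
# A minimal sketch of a provider-driven contract: the provider defines
# every endpoint and the exact shape of its response.
provider_contract = {
    "endpoints": {
        "GET /users/{id}": {
            "response": {
                "status": 200,
                "body": {"id": "integer", "name": "string", "email": "string"},
            },
        },
        "DELETE /users/{id}": {
            "response": {"status": 204, "body": {}},
        },
    }
}

def matches_contract(endpoint: str, status: int, body: dict) -> bool:
    """Validate an actual response against the contract's definition."""
    spec = provider_contract["endpoints"].get(endpoint)
    if spec is None or status != spec["response"]["status"]:
        return False
    expected = spec["response"]["body"]
    type_map = {"integer": int, "string": str}
    return set(body) == set(expected) and all(
        isinstance(body[key], type_map[expected[key]]) for key in body
    )

print(matches_contract("GET /users/{id}", 200,
                       {"id": 7, "name": "Ada", "email": "ada@example.org"}))  # True
```

Note how the contract says nothing about which consumer uses which field; it only pins down the full shape of the interface.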


A consumer-driven contract containing multiple examples for the same endpoint
Image by Vilas Pultoo

Every consumer writes a consumer-driven contract. Each contract contains request-response pairs for the interface. These pairs are examples that show how the interface should behave. The contract captures all requirements the consumer has of the interface while containing as few details as possible. Omitting details lets the provider see which parts of their interface are actually used by the author of the contract.
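A consumer-driven contract could be sketched like this. The structure is in the spirit of tools like Pact, but the format and field names here are invented for illustration:

```python
# A minimal sketch of a consumer-driven contract: request-response
# example pairs, listing only the fields this consumer actually uses.
consumer_contract = [
    {
        "request": {"method": "GET", "path": "/users/7"},
        "response": {"status": 200, "body": {"name": "Ada"}},
    },
    {
        "request": {"method": "GET", "path": "/users/999"},
        "response": {"status": 404},
    },
]

def used_fields(contract):
    """The provider can derive which response fields this consumer depends on."""
    fields = set()
    for pair in contract:
        fields |= set(pair["response"].get("body", {}))
    return fields

print(sorted(used_fields(consumer_contract)))  # ['name']
```

Because the examples omit everything the consumer doesn’t need, the provider learns that only `name` is relied upon here, even if the real response carries more fields.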

Summary table comparing contracts in the two approaches


Coupling

In part 2, I talked about why coupling during testing matters. Coupling during testing can be the difference between headaches and happiness after a long day of work. That is only slightly exaggerated. However, we can’t get rid of coupling entirely. The tightness of coupling during testing is a balancing act.

Contract-based testing is great because it allows us to test with loose coupling. However, the two approaches do not have the same level of coupling.


Provider coupled to the consumer by only specifications
Image by Vilas Pultoo

The provider-driven approach is the looser of the two. In this approach, the provider does not depend on the consumers at all. The consumers depend on the provider for only one thing: the correct definition and documentation of the interface.

There is only one thing coupling the two sides during testing. There is enough coupling to prevent stub drift, but not enough for things like interface usage feedback.


Consumer coupled to the provider by tests. The provider coupled to the consumer by test execution.
Image by Vilas Pultoo

The consumer-driven approach is the tighter of the two. Here, the provider depends on the consumers to send them correct and realistic tests. These tests double as the interface’s specification. The consumer then depends on the provider to run their tests.

Two things couple the sides during testing. There is enough coupling for interface usage feedback, but not enough for things like end-to-end tests.

Summary table comparing coupling in the two approaches

Test types

Multiple test types: Technical correctness, Functional correctness, Security, and Performance
Image by Vilas Pultoo

When testing, we want to ensure quality in multiple aspects of our applications. To that end, we use multiple types of tests. Some examples of test types are technical correctness, functional correctness, security, and performance. Contract-based testing is an approach to testing correctness. We never use it for anything other than testing the correctness of an integration.

Please challenge my last statement! It’s an interesting thought experiment: Can we apply the contract-based paradigm to other types of tests as well? Though, this is very much out of scope for this article.

The provider-driven approach is great at ensuring technical correctness. However, it fails spectacularly when testing functional correctness across applications. In this approach, no data flows from one application to the other, so there is nothing to base such tests on.

The consumer-driven approach is also great at ensuring technical correctness, and it can do a bit more too. In this approach, data flows from the consumer to the provider. Because of this coupling, the consumer-driven approach can test a limited amount of functional correctness across applications. Note that some consider this to be bad practice.

Summary table comparing test types in the two approaches

Development process

Contract-based testing calls for two things to be made for every integration: the integration and its contracts. When we start by writing the contracts, we call it contract-first. If we start by building the integration instead, it’s called implementation-first.


Contract followed by code
Image by Vilas Pultoo

When doing contract-based testing, it’s always better to work contract-first. It allows us to use the contracts as specification documents during development. After implementation, we can use the same contracts during testing. This goes for both approaches.

In the provider-driven approach, the benefits of working contract-first are felt most during provider-side implementation. The contract is a specification document written with minimal room for assumptions. This makes the contract very convenient for everyone involved during development and testing. The provider can even share the contract with their consumers before they finish implementing the interface.

The consumer-driven approach has more benefits when working contract-first. In the contract, the consumer will specify how the interface must work. Using the contract, the provider will implement its interface with a TDD-like development cycle. The biggest difference from TDD is that the provider does not write the tests themselves. Instead, their consumers do it for them. The results of working contract-first with the consumer-driven approach are:

  1. The consumer has an interface available that does exactly what they need, guaranteed.
  2. With minimal design work, the provider has made an interface with no bloat. There is not a single piece of unused data. There is not a single piece of missing data.
  3. The provider can see who is using their interface and in which way. This allows them to make data-driven decisions when evolving and maintaining their interface.
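The TDD-like cycle described above can be sketched roughly as follows. Everything here is invented for illustration: the example pairs, the `USERS` data, and `handle` as a stand-in for the provider’s real request handling:

```python
# The provider replays each consumer example against its implementation
# until every pair passes -- a TDD-like loop where the consumer wrote the tests.
consumer_examples = [
    ({"method": "GET", "path": "/users/7"},
     {"status": 200, "body": {"name": "Ada"}}),
    ({"method": "GET", "path": "/users/999"},
     {"status": 404}),
]

USERS = {7: {"name": "Ada"}}  # provider-side data, seeded to satisfy the examples

def handle(request):
    """Stand-in for the provider's request handling."""
    user_id = int(request["path"].rsplit("/", 1)[1])
    if user_id not in USERS:
        return {"status": 404}
    return {"status": 200, "body": USERS[user_id]}

# Red-green: implement until no example fails.
failures = [req for req, expected in consumer_examples if handle(req) != expected]
print(len(failures))  # 0 once the implementation satisfies every example
```

When the failure count reaches zero, the provider has built exactly what this consumer asked for, no more and no less.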


Code followed by contract
Image by Vilas Pultoo

Some situations force us to work implementation-first instead, for example when working with existing interfaces. In these situations, you can still use both contract-based testing approaches. However, you will lose out on most process advantages that come with working contract-first.

When faced with this, it’s easier to start with the provider-driven approach. In most cases, there is documentation you can easily refactor into a provider-driven contract. Going consumer-driven in an implementation-first situation is generally a lot more work with minimal benefits.

Summary table comparing the development process in the two approaches


A network of integrations with contracts in between.
Image by Vilas Pultoo

Scaling

A single well-tested interface is great, but big companies have hundreds of interfaces. It’s important to consider how well things will work when applied on such a large scale. When scaling up to hundreds of interfaces, we have to:

  • Make more interfaces
    In both approaches, making a new interface requires writing contracts. However, in the provider-driven approach, this will always be only one contract per interface. The consumer-driven approach requires one for every consumer.
  • Write more tests
    Every test comes with some amount of work, though in the provider-driven approach it’s barely any. To upgrade a test to a provider-driven contract-based test, you add a single assertion: the request and response match the contract. It’s a low-effort, high-reward situation.
    In the consumer-driven approach, the consumer adds new tests to their contract. After that, the provider ensures that all contract tests pass. This generally requires the provider to set up test dependencies like data, stubs, other contracts, etc.
  • Add more consumers
    When our interfaces get more popular, we’ll have to deal with more consumers. In the provider-driven approach, we only have to ensure that new consumers can find the contract.
    In the consumer-driven approach, however, the provider has to implement a test suite for every new consumer. These new tests generally require setting up new test dependencies. All of this can be a lot of work for the provider.
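The single-assertion upgrade mentioned above can be sketched like this. The contract shape and the `call_api` stub are invented for illustration; only the last two assertions are the actual upgrade:

```python
# Upgrading an existing test into a provider-driven contract-based test:
# keep the original assertion, add a check that the response matches the contract.
contract = {"GET /status": {"status": 200, "body_keys": {"version", "healthy"}}}

def call_api(endpoint):
    """Stand-in for the HTTP call the existing test already makes."""
    return {"status": 200, "body": {"version": "1.4.0", "healthy": True}}

def test_status_endpoint():
    response = call_api("GET /status")
    assert response["body"]["healthy"] is True         # the original test's assertion
    spec = contract["GET /status"]                     # the low-effort upgrade:
    assert response["status"] == spec["status"]        # status matches the contract
    assert set(response["body"]) == spec["body_keys"]  # body shape matches the contract

test_status_endpoint()
print("ok")
```

The existing test keeps doing its job; the contract check piggybacks on the request and response it already produces.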

Scaling is one of the reasons coupling during testing is important. The provider-driven approach is coupled less tightly, which makes it scale better.

Summary table comparing scaling in the two approaches

Evolving the interface

Schematic view of an interface changing over time
Image by Vilas Pultoo

Contract-based testing can help tremendously as a service evolution pattern. Or, in other words, it can help us when changing our interfaces. We regularly have to update, add to, or delete things from an interface. When we do this, we want to be sure that we’re not introducing a breaking change.

By using contracts, we can catch a lot of breaking changes in our CI pipeline. Ultimately, this boils down to ensuring that the provider does not change things too much when someone else depends on them. How much is too much depends on your communication protocol. For example, a RESTful API is more flexible than a SOAP service. Read up on what is considered a breaking change for your communication protocol before you change anything.

Usage feedback

When changing an interface, it’s useful to know how the interface is used by the consumers. The more specific, the better. Breaking things that no one depends on is fine.

In the provider-driven approach, there is no way for the consumer to talk to the provider during testing. They can’t tell the provider how they’re using the interface. Because of this, the provider can’t prove that parts of the interface aren’t used. They must err on the safe side and assume that someone depends on every part of the interface. This can make changing the interface a hassle.

In the consumer-driven approach, we have coupling from the consumers to the provider. This coupling allows the consumers to tell the provider exactly how they’re using the interface. The provider uses the examples in the contract to find out how the interface is used. The provider can use this data to make decisions about the interface with confidence.

Detecting breaking changes

Breaking changes can be hard to spot. However, contracts can make it a lot easier.

Most changes will also require a contract change. In the provider-driven approach, we can use this contract change to detect most breaking changes. Any example generated using the old contract must pass validation with the new contract. If the validation fails, we’ve introduced a breaking change. When a breaking change is detected, the provider must either revert the change or introduce a new major version of the interface.
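The check described above can be sketched as follows. The contract format (required and optional field sets) is invented for illustration:

```python
# Provider-driven breaking-change detection: every example generated from the
# old contract must still validate against the candidate new contract.
old_examples = [{"id": 7, "name": "Ada"}]  # generated from the old contract

def is_breaking(examples, contract):
    """An old example that no longer validates signals a breaking change."""
    allowed = contract["required"] | contract["optional"]
    return any(
        not contract["required"] <= set(example)  # a required field is missing
        or not set(example) <= allowed            # or an unknown field appears
        for example in examples
    )

# Adding an optional field is safe:
print(is_breaking(old_examples,
                  {"required": {"id", "name"}, "optional": {"email"}}))  # False
# Renaming a required field breaks old examples:
print(is_breaking(old_examples,
                  {"required": {"id", "full_name"}, "optional": set()}))  # True
```

This is why additive, optional changes usually pass CI, while renames and removals trip the check and force a revert or a new major version.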

Using the consumer-driven approach, it’s very easy to detect a breaking change. We simply run our contract tests. A breaking change will cause a test failure. When a breaking change is detected, the provider must either revert the change or ask the consumers to change their contracts. When neither is an option, the provider can introduce a new major version as the last resort.

Summary table comparing evolving our interface in the two approaches


Conclusion

The best approach does not exist. In the world of software, there is no best way of doing things that applies to every situation. It’s the reason why I continue to have a job. If you were looking for a straightforward answer, I’m sorry. I don’t have one.

I’ve tried to highlight the most important differences between the two approaches. Doing that allowed me to go a bit deeper into the approaches as well. There is a lot of overlap, but both approaches have their strengths and weaknesses.

In the next part, I want to look at combining the two approaches. Do they complement each other or do they simply break down?

Combined comparison table. Combines all previous tables.