
API Update Failure: Or, Beware Of The Leopard

One of the most common sources of API update failure is mishandling the situations that inevitably arise whenever an API has to change.

Updating an API is a perfectly natural thing to do. In fact, you often have to update an API for essential reasons – security changes, new features and more. But if you’re coming from an enterprise background, where APIs are generally only for internal use, it’s very easy to forget that there are others dependent on them.

So when considering potential API update failure modes, you’re looking at processes and strategies that you need in place for the following:

  • Updating developer docs – Depending on what you use for this, it could be automatic, but it might not be so it’s important to check.
  • Communicating with all your developers – This was a major issue for Twitter when it moved from Basic Auth to OAuth authentication.
  • Allowing for a transition – Not all users will be able to roll out changes instantly, so you may need to run API versions in parallel. Alternatively, instead of removing an endpoint and leaving a generic error, return a specific error explaining the change that has happened.
  • Being aware of where you have the API listed – This could be partner sites, documentation sites, regulators and more, so keep a list of every location so updates stay consistent.
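As a sketch of the transition point above: rather than deleting a retired endpoint outright, a deprecated route can keep responding with HTTP 410 Gone and a structured body that points callers at the replacement. This is a minimal illustration in Python – the paths, field names and docs URL are assumptions for the example, not a prescribed format:

```python
import json

def deprecation_response(old_path: str, new_path: str, sunset_date: str):
    """Build a 410 Gone response explaining where callers should go.

    Field names and the docs URL below are illustrative assumptions.
    """
    body = {
        "error": "endpoint_retired",
        "message": f"{old_path} was retired on {sunset_date}; use {new_path} instead.",
        "replacement": new_path,
        "docs": "https://example.com/docs/migration",  # placeholder docs link
    }
    headers = {
        "Content-Type": "application/json",
        # RFC 8594 defines a Sunset header for announcing retirement dates.
        "Sunset": sunset_date,
    }
    return 410, headers, json.dumps(body)

# Example: a retired v1 endpoint pointing callers at its v2 replacement.
status, headers, payload = deprecation_response(
    "/v1/users", "/v2/users", "Sat, 01 Jun 2024 00:00:00 GMT"
)
```

A structured error like this turns a mysterious 404 into an actionable migration message, and keeps automated clients from retrying an endpoint that will never come back.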

Monitor what you document – not what you know works.

This is where active, external, end-to-end monitoring has the edge over passive monitoring, which relies only on gateway and server logs. Passive monitoring can only tell you about the traffic people are already sending to your APIs. That traffic typically consists of calls users know will work, so it’s easy to fall into the trap of only testing what already works.

For instance, let’s say you’ve rolled out a new version of an API. It might be a while before users start calling it, especially if the change hasn’t been communicated to them as well as it might have been.

With active monitoring, you know exactly how an API is really behaving from the user’s perspective. You don’t have to rely on users adopting the API of their own volition.
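A toy illustration of that difference, assuming a hypothetical documented endpoint list and a set of paths observed in gateway logs: passive monitoring can only cover the endpoints users already call, while an active monitor probes everything the documentation promises.

```python
# Hypothetical documented API surface vs. traffic actually seen in logs.
# The endpoint names here are illustrative assumptions.
documented = {"/v1/users", "/v1/orders", "/v1/invoices", "/v2/users"}
observed_in_logs = {"/v1/users", "/v1/orders"}  # what passive monitoring sees

# Passive monitoring coverage: only the paths users already exercise.
passive_coverage = documented & observed_in_logs

# Active monitoring probes every documented path, traffic or not.
active_coverage = set(documented)

# Endpoints passive monitoring would never notice breaking.
blind_spots = documented - observed_in_logs
```

In this sketch the new `/v2/users` endpoint sits in the blind spot: no user has adopted it yet, so logs stay silent even if it shipped broken.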

Active monitoring also pairs well with negative testing (checking that an API fails when it’s supposed to) and fuzzing (checking how an API copes with unexpected input).
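As a sketch of those two ideas, using a hypothetical validator for a `limit` query parameter standing in for a real endpoint: negative tests assert that invalid input is cleanly rejected, while a crude fuzz loop throws random input at the same code to check it fails in a controlled way rather than crashing unexpectedly.

```python
import random
import string

def parse_limit(raw: str) -> int:
    """Hypothetical validator for a 'limit' query parameter (1-100)."""
    if not raw.isdigit():
        raise ValueError("limit must be a positive integer")
    value = int(raw)
    if not 1 <= value <= 100:
        raise ValueError("limit must be between 1 and 100")
    return value

# Negative testing: inputs that *should* be rejected with a controlled error.
rejected = []
for bad in ["0", "-5", "101", "ten", "", "1e3"]:
    try:
        parse_limit(bad)
    except ValueError:
        rejected.append(bad)

# Crude fuzzing: random strings must never escape as an unexpected crash.
random.seed(0)
survived_fuzz = True
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_limit(junk)
    except ValueError:
        pass  # a controlled rejection is the expected outcome
    except Exception:
        survived_fuzz = False  # anything else is a fuzzing find
```

Real fuzzing tools generate far smarter inputs than this loop, but the principle is the same: the contract under test is “reject bad input predictably”, not just “accept good input”.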

At APIContext, we don’t just help with ongoing monitoring. With our managed services, we can set up active monitoring based on the way your API is documented – not just on the calls your internal operations teams know work. Then, when things are updated and changed, if an update breaks the API in a way that could impact users, you’ll know precisely what happened.

“But the plans were on display…”
“On display? I eventually had to go down to the cellar to find them.”
“That’s the display department.”
“With a flashlight.”
“Ah, well, the lights had probably gone.”
“So had the stairs.”
“But look, you found the notice, didn’t you?”
“Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”

Douglas Adams – The Hitchhiker’s Guide to the Galaxy



Ready To Start Monitoring?

Want to learn more? Check out our technical knowledge base, our sector-by-sector data, or our starter’s guide to the API economy. Or sign up now – no credit card required – and be running your first API call in minutes.

