
APIs Have Only Been Getting Faster: A Report from APIContext

API Quality Report 2016-2017

APIContext has published a report: Impact of Location and Cloud Service Host on API Performance 2016-2017.

Each day, APIContext software agents in more than 60 locations around the world regularly exercise a wide variety of API endpoints. APIContext and its customers use the results of these calls to measure the performance and quality of business-critical APIs. On average, over 600,000 API calls are made by our agents every day.
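
As a rough illustration of the kind of timing data such an agent collects, here is a minimal Python sketch that measures the DNS lookup, TCP connect, and total duration of a single HTTPS call. The endpoint and path are hypothetical placeholders, and this is only a sketch of the general technique, not APIContext’s actual agent code.

    # Minimal sketch: time the phases of one HTTPS API call.
    # HOST and PATH are hypothetical; a real monitoring agent adds retries,
    # response validation, scheduling, and result reporting.
    import socket
    import ssl
    import time

    HOST = "api.example.com"   # hypothetical endpoint
    PORT = 443
    PATH = "/v1/status"        # hypothetical path

    t_start = time.perf_counter()

    # DNS resolution time
    sockaddr = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)[0][4]
    dns_ms = (time.perf_counter() - t_start) * 1000

    # TCP connect time (to the already-resolved address)
    t_conn = time.perf_counter()
    raw_sock = socket.create_connection((sockaddr[0], sockaddr[1]), timeout=10)
    connect_ms = (time.perf_counter() - t_conn) * 1000

    # TLS handshake plus request/response round trip
    tls_sock = ssl.create_default_context().wrap_socket(raw_sock, server_hostname=HOST)
    request = f"GET {PATH} HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
    tls_sock.sendall(request.encode("ascii"))
    while tls_sock.recv(4096):      # drain the response until the server closes
        pass
    tls_sock.close()

    total_ms = (time.perf_counter() - t_start) * 1000
    print(f"DNS {dns_ms:.1f} ms | TCP connect {connect_ms:.1f} ms | "
          f"total {total_ms:.1f} ms | total less DNS {total_ms - dns_ms:.1f} ms")

Repeating a measurement like this from many locations, many times a day, is what produces the per-phase metrics (such as total call time less DNS, and TCP connect time) discussed below.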

Because the APIs that are probed are real-world APIs, APIContext has been able to build an unrivaled historical data set over the past 18 months. It represents the true state of the global API ecosystem.

In our report, we present the conclusions of the largest analysis of general API performance by cloud service and location performed to date. Because of the large sample size, we are confident that we can make statistically valid comparisons between clouds and regions.

Our report was created from an analysis of more than 285,000,000 API tests to more than 5,600 API endpoints, which generated 5.7 billion separate data points in the 18-month period from January 2016 to June 2017.

Take a look at our observations with these key graphics:

API calls are getting faster

We are so used to Moore’s law these days that it’s sometimes hard to remember there was a time, not that long ago, when we didn’t expect things to get exponentially better every year. More memory in our phones, faster processors in our computers: we take these for granted.

But what about the speed of the web? A couple of decades ago, we might have thought that a 14.4 kbit/s modem was doing well enough. Now, if we’re lucky, we have access to ultrafast fiber.

But, of course, it is not just the raw speed of our last mile connection that makes a difference to our experience. It’s how fast we get back what we’ve requested from the server that matters. And that depends on quite a few different factors such as the amount of server-side processing power, the speed of network switches and the bandwidth of transoceanic cables.

And APIContext has found that the speed of the web really has increased, pretty steadily on the whole, over the last 18 months. Moore’s law still applies to bandwidth.

Google is typically faster for API calls than other cloud services:

Average total API call time (less DNS) fell for all four cloud services we study between January 2016 and June 2017. Google was the fastest by a country mile in 2016, and though it suffered a bit of a blip in April 2017 (as did Azure), it was the fastest again in May and June.

It is AWS, though, that gets the special award for most improved, having increased its average speed by 57% through June. That is fairly impressive, even if it is still slower than Google.

With such a big dataset, we are confident that our data reflect what users of APIs are likely to have experienced in the real world over the past 18 months. We don’t throw away any data at APIContext, so our unrivaled set of API performance data lets you understand how the API ecosystem is evolving from month to month and quarter to quarter. And as we can see, there’s an arms race. Just like the Red Queen in Through the Looking-Glass, and in evolutionary theory, cloud providers have to keep running faster just to stay in the same place.

Connect Time and API Call Time do not correlate:

So what’s going on? There are several factors at play. The servers at each cloud provider’s location are going to get more powerful in accordance with the iron diktat of Moore’s law. But, for most APIs, processing time is only part of what determines total call time. Equally, if not more, important are the speed of the network within the data center, the speed of the backbone connections between the cloud location and the end user’s ISP, and the speed of the “last mile” from the ISP to the end user.

All of these are affected by the physical nature of the connection (for instance, the amount and type of fiber) and by the various network devices (such as routers and switches) through which data pass. Network devices are, of course, also affected by Moore’s law, just like servers. Hooking up to a fiber connection will give an immediate boost over copper and installing more fiber on transcontinental routes will also help eliminate bottlenecks.

The speed of light is finite, and though we are still a long way from hitting the limits of raw computing power, there are only so many milliseconds we can cut from the duration of a transoceanic leg. It is not surprising that APIs are getting quicker, but we might expect the rate at which they improve to slow. It will be interesting to see what our numbers show in the next report.

We see that the speed increases apply to the TCP Connect Time as well as the Total Call Time. It is fascinating that we can see the month-to-month and quarter-to-quarter improvements, for instance, for AWS’s Brazilian location.
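
If you collect paired measurements of your own, a quick way to sanity-check an observation like “Connect Time and API Call Time do not correlate” is to compute the Pearson correlation over the pairs. The sketch below uses made-up sample values purely for illustration; it is not APIContext’s methodology.

    # Pearson correlation between TCP connect time and total call time,
    # computed from paired samples (the values below are invented).
    from math import sqrt

    connect_ms = [32.0, 30.5, 41.2, 29.8, 35.1, 33.4, 30.9, 44.0]
    total_ms   = [210.0, 580.3, 190.7, 820.9, 240.2, 460.5, 305.8, 215.6]

    def pearson(xs, ys):
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        var_x = sum((x - mean_x) ** 2 for x in xs)
        var_y = sum((y - mean_y) ** 2 for y in ys)
        return cov / sqrt(var_x * var_y)

    r = pearson(connect_ms, total_ms)
    print(f"Pearson r = {r:.2f}")  # values near 0 suggest little linear relationship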

The ‘Great Firewall’ of China adds a significant latency and performance overhead:

It’s important, though, to take into account not just which cloud service provider you are using, but also the location. There’s a reason why cloud providers keep adding server capacity and buying more bandwidth: there is constant demand for more server capacity and for quicker APIs. And sometimes demand exceeds capacity, which is why we do sometimes see Total Call Time increase from one month to the next, whether for a cloud service globally or for a particular location. It is also worth noting that the Great Firewall of China adds about 500 ms to calls to servers in China compared to other East Asian locations.
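
To illustrate how a location comparison like this can be made from raw measurements, here is a hypothetical sketch that groups total call times by probe location and compares medians. The location names and numbers are invented for the example, not taken from the report.

    # Hypothetical sketch: compare median total call time by probe location.
    # The samples below are toy data; the report's figures come from the
    # full APIContext data set.
    from statistics import median
    from collections import defaultdict

    samples = [                      # (location, total_call_ms)
        ("Beijing", 980), ("Beijing", 1040), ("Beijing", 910),
        ("Tokyo", 450), ("Tokyo", 480), ("Tokyo", 430),
        ("Singapore", 470), ("Singapore", 500), ("Singapore", 460),
    ]

    by_location = defaultdict(list)
    for location, total_ms in samples:
        by_location[location].append(total_ms)

    medians = {loc: median(vals) for loc, vals in by_location.items()}
    # Baseline: median over the non-China East Asian locations in this toy set.
    baseline = median(v for loc, vals in by_location.items() if loc != "Beijing" for v in vals)

    for loc, med in sorted(medians.items()):
        print(f"{loc}: median {med:.0f} ms ({med - baseline:+.0f} ms vs non-China East Asia)")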

API performance varies, often dramatically, between clouds and locations. This means it is essential, when planning where to host services, to know who your customers are, where they are, and what they will be using to access your services. Monitoring the performance and quality of the APIs you care about – from the end-user perspective – is vital to understanding how they really behave.

APIContext can help you monitor and gain insights about your APIs – for instance, where to host services to maximize customer satisfaction.

For a free copy of our report, just fill in the form below. And while you’re at it, why not sign up for a demo or a trial of the product by clicking that red button down there?
