As part of our API Directory operations, in 2023 APIContext made more than 650 million API calls to more than 10,000 different API endpoints from more than 100 geographically diverse cloud data center locations across major public cloud service providers including AWS, Azure, Google, and IBM. This proprietary dataset gives us unique insights into the cloud API landscape.
In this report, we analyze the API quality data generated by multiple APIContext services, including the APImetrics platform, our API Directory, and our Supplier Index. Now in its sixth year, this analysis provides an unbiased, industry-wide baseline for API quality scoring.
As the volume of API calls across the internet continues to grow, our data continues to get both broader and deeper. To create this analysis, we leverage aggregated, anonymized data from leading API services, including those from infrastructure providers, financial services institutions, social networks, search engines, and other key services.
This is the sixth year of the APIContext Cloud API Performance Report. With years of historical data, we can now start to identify longer term trends in addition to our annual analysis.
As cloud services become ever more important to our personal and professional lives, and now comprise the vast majority of all internet traffic, we see cloud providers struggling to maintain speed and quality in all circumstances.
Baseline performance continues to be good, but corner cases and edge cases with reduced quality are more prevalent. This is at a time when user expectations of reliability and availability have never been higher. Here are some key conclusions.
We can now say definitively that the 2020s have seen an increase in latency. This is likely due to two trends that accelerated during the global pandemic, and continued across 2023: Remote Work and APIs Eating Software.
Remote work requirements have increased cloud loads, and have forced cloud providers to expand edge infrastructure. At the same time, both public and internal APIs have continued to expand, running more applications than ever.
Although the main cloud service providers have increased capital expenditure year on year (an average of ~30% per year over 2017–22), demand for cloud computing is outstripping supply, in large part because of the phenomenal growth of artificial intelligence and machine learning in 2023 and its associated requirements for immense amounts of compute. This has put pressure on cloud services and may lead to some degradation of network quality.
AWS’s continuing success in improving DNS Time indicates that the other clouds still have some catching up to do in optimizing the performance of different latency components.
Most services are rated as excellent (a CASC score of 9.00+), with no observed services showing issues of concern over quality. Overall quality is similar to 2020, 2021, and 2022, which suggests that improvements in performance may have reached a plateau. There is no excuse for not having a highly stable and consistently performant API. You can’t blame your infrastructure provider for everything; only for network performance. Once that has been optimized with the most suitable cloud for your API service, the ball is in your court to offer the fastest and most reliable service possible.
Five 9s is a tough target, but we believe that 99.99% (Four 9s) is a goal that should be achievable for most APIs. Only 7% of the services we studied managed to reach this level, down from 18% in 2022 and up from 6% in 2021.
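These availability percentages translate into concrete annual downtime budgets. A quick sketch of the per-nines arithmetic (illustrative code, not part of the APIContext methodology):

```python
# Downtime budget implied by an availability target, over one (non-leap) year.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 s

def downtime_budget(availability_pct: float) -> float:
    """Seconds of allowed downtime per year at the given availability level."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for nines, pct in [("Three 9s", 99.9), ("Four 9s", 99.99), ("Five 9s", 99.999)]:
    print(f"{nines} ({pct}%): {downtime_budget(pct) / 60:.1f} minutes/year")
```

At Four 9s the budget is roughly 53 minutes of downtime per year; at Five 9s it shrinks to just over five minutes, which is why so few services reach it.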
PagerDuty was again the API service with the highest availability and the only one to achieve Four 9s in both 2022 and 2023. No API had previously managed to reach the 99.99% mark two years in a row, so this was an exceptional performance by PagerDuty. This is clearly an area of future focus for quality improvements across API services.
We see significant differences in performance between clouds. In 2023, Azure was consistently >75 ms slower globally than AWS. In an API-first economy where every millisecond increasingly counts, can you and your customers afford to be using a slow cloud? A typical phone app operation, such as checking in for a flight, might use 6–9 APIs or more, which means >400 ms of added latency owing to cloud choice, creating friction for the user.
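The >400 ms figure follows from multiplying the per-call gap by the number of sequential API calls in a workflow. A minimal illustration, using the 75 ms Azure-vs-AWS delta and the 6–9 call counts cited above (the workflow itself is hypothetical):

```python
# How a per-call cloud latency gap compounds across a multi-call app workflow.
PENALTY_MS = 75  # observed per-call Azure-vs-AWS gap cited in the report

for calls in (6, 9):
    added = calls * PENALTY_MS
    print(f"{calls} sequential API calls: +{added} ms from cloud choice alone")
```

Even at the low end of six calls, the cumulative penalty is 450 ms before any backend processing time is counted.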
But the crucial thing is to check performance constantly from the cloud and locations you use and, if possible, cross-check with other cloud locations to determine which choice is best for your service and users.
DNS resolution times slowed in two of the four clouds (Google and IBM) and across all regions in 2023 compared to 2022.
If AWS can have a median DNS Time of 2 ms, so can the other clouds. Individual services can have hundreds of milliseconds of extra latency friction created through non-optimized DNS. In 2024, we want to see DNS Times getting better again across all clouds and regions – and services.
The contribution of DNS to Total Time is particularly important when every millisecond of additional latency counts.
All regions were slower in terms of both DNS Time and Connect Time in 2023 compared to 2022. In the case of Connect Time, South America was slower by 3500%! North America has improved by only 1 ms for DNS Time since 2018 and is no longer the fastest region for that latency component. We recommend that many services look to diversify their hosting locations.
For DNS Time, Europe and South America now tie for first at 8 ms, and South Asia (11 ms) and East Asia (9 ms) are also faster than North America.
In 2023, API performance was good across a wide range of popular services. In contrast to 2022, no APIs in the study rated as being of concern. But the problem of the API Supply Chain, as Founder and CEO of ProgrammableWeb John Musser calls it, remains significant.
There are meaningful geographic differences, such as physical distances across oceans and continents; and cloud performance variations, such as the amount of bandwidth available through fiber optic cables and the capacity of network equipment. DNS lookup times, which have always been a problem, seem to be getting worse, as do Connect Times.
Using an API means relying on more than a black box. The API you provide or use exists in a universe of components, including its own cloud service, a CDN provider, probably a gateway of its own, a backend server architecture, and potentially a security and identity service.
Each of those components has its own configuration and cloud dependencies – and a failure could end up costing $200,000 per incident.
DNS Lookup Time increased across two clouds and all regions, suggesting there are issues with network infrastructure and configuration for Google and IBM Cloud, particularly in Oceania.
Overall, the average rose to 19 ms in 2023 (from 10 ms in 2022), with the best DNS Time of 3.4 ms from Box.
In contrast, DocuSign had a DNS Time of 224 ms (up from 199 ms in 2022) and Capital One had a DNS Time of 420 ms (up from 339 ms in 2022).
Such large differences in DNS Time cost money and add unacceptable friction to the user experience. They can be avoided by optimizing network and cloud configuration and adopting industry best practices. AWS is faster than the other three clouds for DNS and has steadily improved its performance over the past few years. You need to ensure that your DNS is properly configured and optimized; choosing Azure, Google, or IBM could contribute to ongoing reduced DNS performance for your service, with the increased support costs that entails, as well as a degraded experience for your users.
In all of 2023, PagerDuty had 30.0 minutes of measurable downtime on their APIs. To put this performance in perspective, the worst-performing API had more than 8.9 days of downtime.
At an average rate of 50 calls/second, that worst performer’s downtime would mean nearly 39 million attempted calls were lost.
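The lost-call estimate is straightforward to reproduce: downtime in seconds multiplied by the call rate. A quick check using the figures above (the 50 calls/second rate is the report’s illustrative assumption):

```python
# Estimated attempted calls lost during an outage, at a given traffic rate.
def lost_calls(downtime_days: float, calls_per_second: float) -> int:
    seconds = downtime_days * 24 * 3600
    return int(round(seconds * calls_per_second))

# Worst-performing API: 8.9 days of downtime at 50 calls/second.
print(f"{lost_calls(8.9, 50):,} attempted calls lost")  # ~38.4 million
```

By contrast, PagerDuty’s 30 minutes of downtime at the same rate would cost only 90,000 calls.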
PagerDuty was also first in 2022 with 21.9 minutes of downtime, so while it increased a bit, this was another superb performance. Congratulations to the DevOps team and everyone at PagerDuty involved. They’ve been setting the standard for industry best practices in terms of API availability over the last few years. The DNS host and service host for PagerDuty is AWS, which is consistent with our overall finding that AWS is the most performant cloud.
High-level data for 2023 is provided for free at the APIContext Directory: https://apicontext.com/api-directory. If you’d like to dive deeper into the details, please contact us for licensing access.
API calls were made from 82 data centers around the world using APIContext observer agents running on application servers provided by the cloud computing services of Amazon (AWS), Google, IBM, and Microsoft (Azure). New locations come online and old ones are retired; the 82 cloud locations used here are those that have been in continuous use since we began reporting on the state of the cloud in 2020. For the 2025 report, the number will increase, as a significant number of new locations have recently been added.
The sample sizes for each API are roughly the same and are equivalent to a call from each cloud location made to each endpoint every five minutes throughout the year.
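That cadence implies a substantial per-endpoint sample size over a year, which can be sketched as follows (the 82-location and five-minute figures come from the methodology above; the code itself is illustrative):

```python
# Annual sample size implied by the sampling cadence: one call per endpoint
# from each of the 82 cloud locations every five minutes.
LOCATIONS = 82
MINUTES_PER_YEAR = 365 * 24 * 60            # 525,600 minutes

calls_per_location = MINUTES_PER_YEAR // 5  # 105,120 calls/year per location
calls_per_endpoint = LOCATIONS * calls_per_location
print(f"{calls_per_endpoint:,} calls per endpoint per year")
```

At roughly 8.6 million calls per endpoint per year, even rare failure modes show up in the data.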
We logged the data using the APImetrics platform. Latency, pass rates, and quality scores were recorded in the same way for all APIs. For most APIs, data is available for the whole period.
Pass Rates
In calculating the pass rate, we define failures to include the following:
We ignored call-specific application errors such as issues with the returned content, and client-side HTTP status code 4xx warnings caused by authentication problems such as tokens that have expired.
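A minimal sketch of how such a pass rate can be computed. The failure criteria below (network-level errors and HTTP 5xx responses) are illustrative assumptions, not APIContext’s exact definition; auth-related 4xx responses such as expired tokens are excluded from the count, per the methodology above:

```python
# Illustrative pass-rate calculation. Failure criteria here are assumptions:
# network errors and HTTP 5xx count as failures; auth-related 4xx are ignored.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallResult:
    status: Optional[int]       # HTTP status code, or None if no response arrived
    error: Optional[str] = None # e.g. "timeout", "connection_reset"
    auth_related: bool = False  # 4xx caused by an expired or invalid token

def is_failure(r: CallResult) -> bool:
    if r.error is not None:     # network-level failure: no usable response
        return True
    return r.status is not None and 500 <= r.status < 600  # server error

def pass_rate(results) -> float:
    # Drop auth-related warnings entirely, then count passes among the rest.
    counted = [r for r in results if not r.auth_related]
    return sum(not is_failure(r) for r in counted) / len(counted)

results = [
    CallResult(200),                           # pass
    CallResult(503),                           # fail: server error
    CallResult(401, auth_related=True),        # ignored: expired token
    CallResult(None, error="timeout"),         # fail: no response
]
print(f"Pass rate: {pass_rate(results):.1%}")  # 1 pass out of 3 counted calls
```

The key design point is that the auth-related call is removed from the denominator entirely, rather than counted as either a pass or a failure.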
If an API call fails, an immediate retry may succeed when the outage is transitory. Even so, our methodology gives a general indication of availability issues.
The traditional telecommunications standard for service availability is five 9s – at least 99.999% uptime, or just over five minutes of downtime in a year.
APIContext uses CASC, our patented quality scoring system, to compare the quality of different APIs. CASC (Cloud API Service Consistency) blends multiple factors to give a “credit rating” for an API, benchmarked against our unmatched historical dataset of API test call records.
It is important to note that CASC scores do not fall on a normal curve. The scores are absolute, and we see no engineering reasons why prominent APIs should not consistently reach a CASC score of 8.00+.
Some calls will be faster than others because of backend processing, so total call duration, even over a sample size of tens of millions of calls, can only give a partial view of API behavior.
All regions had slower DNS in 2023; Oceania was twice as slow (16 ms, compared with 8 ms in 2022). Given that faster performance has been achieved in the past, this is an area of focus for all cloud network engineering teams, who should monitor DNS Times on an ongoing basis and ensure that performance from all regions and locations is not becoming slower or less stable.