One thing we are clear about is that APIContext is not a load or stress testing platform. We're focused on providing a close indication of the likely quality of the developer experience on REST-based APIs. Consequently, we don't test all that often; the default for our public tests is once every 15 minutes. One of the reasons for that is the behavior of some APIs that cache data for app queries: testing every 15 minutes in that case yields misleadingly good performance data.
However, this isn't true for all APIs. We're also testing the Intel Cloud Services Platform APIs and finding almost the inverse. Testing every 15 minutes yields some errors, but testing every 6 minutes yields a lot more. By increasing the test frequency, we've found that the API's reliability over the course of a typical day is a full 2 percentage points lower than the 15-minute test interval suggests.
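To make the idea concrete, here is a minimal sketch of how you might compare error rates at two polling cadences yourself. The endpoint URL is hypothetical and stands in for whatever API you are monitoring; in practice the two cadences would run concurrently on separate schedules rather than back to back as shown here.

```python
import time
import requests

# Hypothetical endpoint standing in for any REST API under test.
API_URL = "https://api.example.com/v1/status"

def measure_error_rate(interval_seconds, duration_seconds):
    """Poll the API at a fixed interval and return the fraction of failed calls."""
    calls, errors = 0, 0
    end = time.time() + duration_seconds
    while time.time() < end:
        calls += 1
        try:
            resp = requests.get(API_URL, timeout=10)
            if resp.status_code >= 400:
                errors += 1
        except requests.RequestException:
            errors += 1
        time.sleep(interval_seconds)
    return errors / calls if calls else 0.0

# Compare a 15-minute cadence with a 6-minute cadence over a full day each.
rate_15min = measure_error_rate(15 * 60, 24 * 60 * 60)
rate_6min = measure_error_rate(6 * 60, 24 * 60 * 60)
print(f"Error rate @15min: {rate_15min:.2%}, @6min: {rate_6min:.2%}")
```

If the two rates diverge the way we saw with the Intel platform, the sparser schedule is under-reporting the failures your developers will actually hit.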
Our proposal is to take a mixed approach when you set up your tests. First, run separate trials at different frequencies and see whether you can spot the impact of result caching. This typically shows up on the latency graphs as a 'saw-tooth' pattern: regular increases in latency as the cache clears. Secondly, if you own the API platform being tested, set up several app accounts, each with its own ClientID and Secret, run the same test under the different credentials, and use the test 'offset' feature to stagger their run times, as in the sketch below.
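Here is a rough sketch of that staggered, multi-credential setup as a standalone script. The endpoint, the credential values, and the header-based way of passing ClientID and Secret are all placeholders for illustration (a real platform would typically use OAuth or its own auth scheme); the point is the offsets and the per-credential latency log you would inspect for a saw-tooth signature.

```python
import threading
import time
import requests

API_URL = "https://api.example.com/v1/resource"  # hypothetical endpoint
TEST_INTERVAL = 15 * 60                          # one full test cycle, in seconds

# Hypothetical app accounts; each has its own ClientID/Secret and start offset.
CREDENTIALS = [
    {"client_id": "app-one",   "client_secret": "secret-one",   "offset": 0},
    {"client_id": "app-two",   "client_secret": "secret-two",   "offset": 5 * 60},
    {"client_id": "app-three", "client_secret": "secret-three", "offset": 10 * 60},
]

def run_staggered_test(cred):
    """Wait out the offset, then call the API every interval and log latency."""
    time.sleep(cred["offset"])
    while True:
        start = time.monotonic()
        try:
            resp = requests.get(
                API_URL,
                headers={"X-Client-Id": cred["client_id"],
                         "X-Client-Secret": cred["client_secret"]},
                timeout=30,
            )
            status = resp.status_code
        except requests.RequestException:
            status = "error"
        latency_ms = (time.monotonic() - start) * 1000
        # A recurring latency spike, once per cache lifetime, is the 'saw-tooth' signature.
        print(f"{cred['client_id']}: status={status} latency={latency_ms:.0f}ms")
        time.sleep(TEST_INTERVAL)

for cred in CREDENTIALS:
    threading.Thread(target=run_staggered_test, args=(cred,), daemon=True).start()

time.sleep(60 * 60)  # let the staggered tests run for an hour before exiting
```

Because the three credentials hit the API at different points in the cache's lifetime, comparing their latency traces makes cache-driven behavior much easier to separate from genuine performance changes.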
The essence of what we're finding is a fascinating range of behavior across standard REST-based APIs, and a need for providers and developers alike to pay close attention to their test regime and act on the data.