Hot Fuzz: The Fuzzing Form of Negative Testing

The hot area du jour in testing is fuzzing. I hadn’t heard of it until the other day, but the concept has been around since the days of punched cards. In those days, people would shuffle a deck and then see what a program did when the shuffled deck was used as its input. Two practical examples not involving 1980s technology are Donald E. Knuth’s trip and trap tests for “torture” testing ports of, respectively, TeX and METAFONT. In essence, then, fuzzing is a form of negative testing.

Generally, we are concerned with whether a system does what we want it to do. In API monitoring, our particular concern at APIContext, this might mean that if we send a request for a list of hotel rooms in Portland, we hope to get back a 200 HTTP status code and a list of hotel rooms in Portland. If we don’t get those back, the test has failed.
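
As a concrete illustration, a minimal positive check might look like the following Python sketch. The endpoint URL and response shape here are made-up stand-ins, not a real APIContext target:

    import requests

    # Hypothetical endpoint for illustration; substitute your own API.
    resp = requests.get(
        "https://api.example.com/hotels",
        params={"city": "Portland"},
        timeout=10,
    )

    # Positive test: we expect success and a list of rooms.
    assert resp.status_code == 200
    rooms = resp.json()
    assert isinstance(rooms, list)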

In negative testing, we want the test to fail. If a call requires the correct authentication and authorization and those are missing, we expect a 4xx HTTP Client Error status code (usually a 401 or a 403) to be returned with some kind of error message. If a 200 is returned, that means either there is a serious security issue or there’s been some overzealous trapping of exceptions (the code blocks the unauthenticated transaction, but in doing so ends up returning a 200 through a programming oversight).
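
A minimal sketch of that negative test, again against a hypothetical endpoint, is just the same call with the credentials left off:

    import requests

    # Hypothetical endpoint; the point is the missing Authorization header.
    resp = requests.get("https://api.example.com/hotels", timeout=10)

    # Negative test: the call *should* be rejected.
    assert resp.status_code in (401, 403), (
        f"Expected 401/403 for an unauthenticated call, got {resp.status_code}"
    )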

What is fuzzing?

In fuzzing, we provide some random (for certain values of random) input to an interface and see what happens. This might, these days, be a load of emojis instead of plaintext. The interface might have been designed to handle ASCII, and the backend database might cope with Unicode of some kind, but what happens with the emojis? This might be a completely unanticipated input case, so there’s no guarantee that things will break gracefully. Depending on the exact nature of the input, you might expect to get 4xx or 5xx (Server Error) status codes back, but what if you do get a 200? Many serious security exploits have been found through fuzzing.
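
As a rough sketch of that emoji case (hypothetical endpoint, deliberately crude generator):

    import random
    import requests

    # One block of emoji code points (U+1F600 to U+1F64F).
    EMOJI = [chr(cp) for cp in range(0x1F600, 0x1F650)]

    def emoji_noise(n=8):
        return "".join(random.choice(EMOJI) for _ in range(n))

    resp = requests.get(
        "https://api.example.com/hotels",
        params={"city": emoji_noise()},
        timeout=10,
    )

    # A graceful failure is a 4xx; a 5xx hints at unhandled input,
    # and a 200 deserves a very close look.
    print(resp.status_code)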

How, then, do you use fuzzing with APIContext?

The most straightforward scenario is to have a set of tests with pre-defined inputs, much as Knuth’s trip and trap are pre-defined. That would be useful if you don’t really trust the stack to be stable. Just because it worked this morning doesn’t mean it will work this afternoon.
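
Such a pre-defined set can be as simple as a hand-picked list of awkward values looped over the endpoint. A sketch, with the endpoint again hypothetical:

    import requests

    # A small Knuth-style "torture" list of pre-defined inputs.
    CANNED_INPUTS = [
        "",                                  # empty string
        " " * 10_000,                        # oversized whitespace
        "Portland'; DROP TABLE hotels;--",   # injection-shaped text
        "\x00\x01\x02",                      # control characters
        "🔥" * 100,                          # repeated emoji
    ]

    for payload in CANNED_INPUTS:
        resp = requests.get(
            "https://api.example.com/hotels",
            params={"city": payload},
            timeout=10,
        )
        # Anything other than a 4xx is worth flagging for review.
        if not 400 <= resp.status_code < 500:
            print(f"Unexpected {resp.status_code} for payload {payload!r}")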

A more complicated case would be to use a workflow in which the first call obtains some gobbledygook via an API and the second uses the gobbledygook as input. You might be able to use something like the artillery-plugin-fuzzer to generate the gobbledygook.
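
Sketched in Python, with a crude bit-flipping mutator standing in for a real fuzzer plugin, such a two-call workflow might look like this (both endpoints hypothetical):

    import random
    import requests

    # Step 1: obtain some value (the "gobbledygook") from the first API.
    seed = requests.get("https://api.example.com/token", timeout=10).text

    # Step 2: mutate it slightly and feed it to the second call.
    mutated = "".join(
        chr(ord(c) ^ 0x01) if random.random() < 0.2 else c for c in seed
    )
    resp = requests.get(
        "https://api.example.com/hotels",
        params={"session": mutated},
        timeout=10,
    )
    print(resp.status_code)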

Of course, in some cases you might want to send complete nonsense at the API because you don’t have a clue a priori what might break it, and in others you might have some specific cases to test, perhaps involving location data.
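
For the complete-nonsense end of the spectrum, a handful of random bytes will do (requests percent-encodes them for the query string):

    import os
    import requests

    # Arbitrary bytes; latin-1 maps every byte, so decoding never fails.
    noise = os.urandom(32).decode("latin-1")

    resp = requests.get(
        "https://api.example.com/hotels",
        params={"city": noise},
        timeout=10,
    )
    print(resp.status_code)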

If you don’t want to use a workflow, you can always write a program that uses our API to set up a call on the fly, perhaps just a one-off call (but it will be recorded in APIContext and the result stored permanently), using input generated by your own fuzzer. So, there’s everything you need to use fuzzing with APIContext, and if the demand is there, we’d be happy to build a fuzzer directly into our core functionality.
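
To be explicit about what’s hypothetical below: the endpoint, payload shape, and field names are illustrative placeholders, not the actual APIContext API (the real call shapes live in our API documentation). The pattern is simply: generate input with your fuzzer, create a call, and let the result be recorded:

    import requests

    API_KEY = "your-apicontext-key"   # placeholder credential
    fuzzed_input = "🦆🦆🦆"            # output of your own fuzzer

    # Hypothetical endpoint and payload, for illustration only.
    requests.post(
        "https://api.example.com/apicontext/calls",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "target": "https://api.example.com/hotels",
            "params": {"city": fuzzed_input},
        },
        timeout=10,
    )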

Photo courtesy of hobvias sudoneighm