Introduction to Anti-Fuzzing: A Defence in Depth Aid


Anti-Fuzzing is a set of concepts and techniques designed to slow down and frustrate threat actors looking to fuzz test software products by deliberately misbehaving, misdirecting, misinforming and otherwise hindering their efforts. The goal is to drive down the return on investment seen in fuzzing today by making it more expensive, in terms of time and effort, for malicious aggressors.

History of Anti-Fuzzing

Some of the original concepts that sit behind this post were conceived and developed by Aaron Adams and myself whilst at Research In Motion (BlackBerry) circa 2010.

The history of Anti-Fuzzing is one of those fortunate accidents that sometimes occur. Whilst at BlackBerry we were looking to do some fuzzing of the legacy USB stack. For whatever reason the developers had added code so that when the device encountered an unexpected value at a particular location in the USB protocol, it would deliberately and catastrophically fail (catfail in RIM vernacular). This catfail would look to the uninitiated like the device had crashed, and thus you would likely be inclined to investigate further to understand why. Ultimately you'd realise it was deliberate and come to the conclusion that you had wasted time debugging the issue. After realising that wasting cycles in this manner could be an effective and demoralising defensive technique to frustrate and hinder aggressors, the concept of Anti-Fuzzing was born.

Over the following years I fielded questions from at least three researchers who believed they may have found a security issue in the product’s USB stack when in fact they had simply tripped over the same intended behaviour.

There is prior art in this space. Two industry luminaries in the guise of Haroon Meer and Roelof Temmingh explored related ideas in their seminal 2004 paper When the Tables Turn. In January 2013 a blog post titled Advanced Persistent Trolling by Francesco Manzoni discussed an Anti-Fuzzing concept specifically designed to frustrate penetration testers during web application assessments. This is obviously not something I condone 🙂 but it introduced some similar techniques and concepts in the context of web applications.

Anti-Tamper: an Introduction

Before we get onto Anti-Fuzzing it's first worth understanding what Anti-Tamper is, as it heavily influenced the early formation of the idea. In short, Anti-Tamper is a US Department of Defense concept that is summarised (overview presentation) as follows:

Anti-Tamper (AT) encompasses the systems engineering activities intended to prevent and/or delay exploitation of critical technologies in U.S. weapon systems. These activities involve the entire life-cycle of systems acquisition, including research, design, development, implementation, and testing of AT measures.

Properly employed, AT will add longevity to a critical technology by deterring efforts to reverse-engineer, exploit, or develop countermeasures against a system or system component.

AT is not intended to completely defeat such hostile attempts, but it should discourage exploitation or reverse-engineering or make such efforts so time-consuming, difficult, and expensive that even if successful, a critical technology will have been replaced by its next-generation version.

These goals can equally apply to fuzzing.

Anti-Fuzzing: a Summary

If we take the Anti-Tamper mission statement and adjust the language for Anti-Fuzzing we arrive at something akin to:

Anti-Fuzzing (AF) encompasses the systems engineering activities intended to prevent and/or delay fuzzing of software.

Properly employed, AF will add longevity to the security of a technology by deterring efforts to fuzz and thus find vulnerabilities via this method against a system or system component.

AF is not intended to completely defeat such hostile attempts, but it should discourage fuzzing or make such efforts so time-consuming, difficult, and expensive that even if successful, a critical technology will have been replaced by its next-generation version with improved mitigations.

Now these are lofty goals for sure, but as you'll see we can go some way towards meeting them using a variety of different approaches.

As with Anti-Tamper, Anti-Fuzzing is intended to:

  • Deter: threat actor’s willingness or ability to fuzz effectively (i.e. have the aggressor pick an easier target).
  • Detect: fuzzing and respond accordingly in a defensive manner.
  • Prevent or degrade: the threat actor’s ability to succeed in their fuzzing mission.

Anti-Fuzzing: Execution Design Patterns

Depending on the software or interface that you're trying to protect you'll have different execution design patterns for Anti-Fuzzing. An execution design pattern defines where the Anti-Fuzzing logic is instantiated from when an interaction occurs within the system that triggers it.

The following table provides some sample execution design patterns:

Application Type           Anti-Fuzzing Execution Design Pattern
Operating System Drivers   IOCTLs
Android Applications       Intents
Web Applications/WAF       ODBC exception handlers
Web Applications – MVC     Active record parsing
Generic                    Exception handlers
Generic                    Conditional statements (e.g. default case statements)

If we take the example of the USB protocol stack and compare its behaviour against the above table, it would have fallen under the 'conditional statements' pattern. Any complex file format or network protocol parser could equally employ the same pattern.

Anti-Fuzzing: Detecting Fuzzing versus Normal Operation

Detecting and classifying fuzzing versus normal operation may depend on a number of factors, which would be considered or monitored within the logic that is instantiated. Examples of factors that may be weighed before deciding how, or whether, to respond include:

  • Environment that the software is executing within i.e. emulation or virtualisation.
  • Source of data.
  • Rate of data submission versus expected.
  • Data processing error rate versus typical (i.e. exception handler calls).
  • Severity and type of error condition being experienced.
  • Code paths / hot paths versus typical.

By way of example, when processing complex file formats or network traffic we may decide to temporarily register an exception handler that catches all unhandled exceptions that would lead to a crash condition and, as a last line of defence, assume they were caused by fuzzing. Or we may instead add logic within the handler to understand the source of the data, the previous parser error volume and the differences in the data processed before concluding that fuzzing was the cause.

Anti-Fuzzing: Defensive Behaviours

So what to do when suspected fuzzing is detected? This decision will be based on a trade-off between security, obscurity and usability.

The following table provides some example responses that may be employed depending on the target and level of desire to deter, degrade or prevent the adversary.

Application Type           Anti-Fuzzing Execution Design Patterns                  Possible Defensive Behaviours
Operating System Drivers   IOCTLs                                                  Fake crashes; performance degradation in IOCTL processing
Android Applications       Intents                                                 Fake crashes; misinformation (e.g. appearing vulnerable); shut down
Web Applications/WAF       ODBC exception handlers                                 Misinformation (e.g. fake yet apparently valid SQL errors)
Web Applications – MVC     Active record parsing                                   Performance degradation of client processing; business logic redirection
Generic                    Exception handlers                                      Fake crashes
Generic                    Conditional statements (e.g. default case statements)   Shut down; misinformation; performance degradation; fake crashes; anomalous behaviour

If we take our network traffic parser example, where we temporarily register an exception handler that catches all unhandled exceptions that would lead to a crash condition, we could re-write the EXCEPTION_RECORD and CONTEXT_RECORD (which are pointed to by the EXCEPTION_POINTERS structure) to make the crash appear to be something it's not. For example, we might make all crashes look like null pointer dereferences or otherwise uninteresting !exploitable cases to influence the bucketing and triaging process. Or we might make it look like an EIP overwrite has been obtained off the bat, thus guaranteeing that time will be spent further analysing it.

Anti-Fuzzing: a Sample Implementation

The following is a very basic example for Microsoft Windows to show how you can modify an exception that would otherwise occur to look more interesting or otherwise misinform in a crash dump. Interestingly this example will even show the misinformation when a debugger is attached.

// AntiFuzz simple example for Windows (x86 build, hence Eip)
// Released as open source by NCC Group Plc -
// Developed by Ollie Whitehouse, ollie dot whitehouse at nccgroup dot com
// Released under AGPL
#include "stdafx.h"
#include <stdio.h>
#include <Windows.h>

LONG WINAPI AntiFuzzHandler(EXCEPTION_POINTERS *ExceptionInfo)
{
 // Now this is where we could do all manner of other activities
 // and decide if it was as a result of fuzzing

 // Make it look super exciting: fake an access violation with
 // an apparent EIP overwrite
 // but this could obviously be non-deterministic
 ExceptionInfo->ExceptionRecord->ExceptionCode = EXCEPTION_ACCESS_VIOLATION;
 ExceptionInfo->ContextRecord->Eip = 0x41414141;

 // tell it to continue with the other exception handlers
 return EXCEPTION_CONTINUE_SEARCH;
}

int _tmain(int argc, _TCHAR* argv[])
{
 // we register an exception handler
 // say we want to go first
 PVOID hHandler = AddVectoredExceptionHandler(1, AntiFuzzHandler);
 fprintf(stdout,"[!] Registered handler\n");

 // we do our 'parsing' (this will generate an exception)
 int *ptr = NULL;
 *ptr = 1;

 // we unregister our exception handler
 RemoveVectoredExceptionHandler(hHandler);
 fprintf(stdout,"[!] Unregistered handler\n");

 return 0;
}

If we wanted to do something similar on Linux or other POSIX-compatible operating systems, we could achieve this outcome via the use of sigaction to register a handler to catch SIGSEGV and adjust the output accordingly. However, this example would be extremely trivial to patch around if discovered, and on its own would likely not withstand casual reverse engineering. As a result we believe such techniques should be combined with Anti-Tamper to ensure longevity.

Anti-Fuzzing: Not a Panacea or without Risk

Now we recognise Anti-Fuzzing is not a panacea, nor is it without risk. For example, in practical terms it can likely only be applied to closed source products, as with open source its presence could be discovered trivially. If its presence is discovered it can be worked around in both closed and open source products. If it misbehaves it can have negative consequences for legitimate users, and its presence can frustrate and annoy testing and development teams. The following table attempts to summarise the risks associated with implementing an Anti-Fuzzing strategy, their impacts and possible mitigations.

  • Risk: Aggressor discovers Anti-Fuzzing is present in the product. Impact: They reverse engineer the Anti-Fuzzing logic and work around it. Mitigation: Anti-Reversing, coupled with the fact that the required reverse engineering means extra effort has been spent; revision of the Anti-Fuzzing implementation in each version.
  • Risk: Aggressor employs binary instrumentation or static disassembly augmented fuzzing combined with solvers. Impact: Anti-Fuzzing logic is discovered through program traces; Anti-Fuzzing effectiveness is degraded due to less reliance on crash dumps for vulnerability identification. Mitigation: Comprehensive regression and functional testing; risk-averse Anti-Fuzzing implementation.
  • Risk: Anti-Fuzzing leads to undesirable behaviour within the product. Impact: Poor user experience, denial of service or similar. Mitigation: Comprehensive regression and functional testing; risk-averse Anti-Fuzzing implementation.
  • Risk: Anti-Fuzzing complicates internal fuzzing efforts. Impact: Degraded internal security initiatives. Mitigation: Remove Anti-Fuzzing from internal builds.
  • Risk: Anti-Fuzzing complicates field crash debugging. Impact: Degraded ability to analyse and resolve customer-reported crashes. Mitigation: Encrypted exception records that custom diagnostic tools can retrieve for transmission back to the vendor; risk-averse Anti-Fuzzing implementation.
  • Risk: Anti-Fuzzing introduces or facilitates successful exploitation. Impact: Undermines other software defensive mechanisms. Mitigation: Careful implementation review.
  • Risk: Anti-Fuzzing technology results in increased public bug reports. Impact: Whilst false, the reports cause vendor or product reputation damage. Mitigation: Risk-averse Anti-Fuzzing implementation.
  • Risk: Anti-Fuzzing technology development and maintenance diverts resources. Impact: Increased development costs; reduced development and testing resources for legitimate bug fixing or feature addition. Mitigation: Increased investment in development.
  • Risk: Cost versus benefit analysis is difficult. Impact: Likely to deter public fuzzing efforts (which may report success or failure) and concentrate efforts in private or industrial fuzzing operations that would not report any progress before exploit development. Mitigation: Fuzzing telemetry, although ineffective if disabled or otherwise blocked; public reports of 0-day exploitation as an effectiveness gauge.


This post has briefly introduced our thoughts on the potential applications, benefits and risks of Anti-Fuzzing technologies. While still in its infancy, its potential value in protecting high-value systems should not, in our opinion, be discounted. As Anti-Tamper only calls for you to protect Critical Technology, selective deployments of Anti-Fuzzing may be similarly beneficial. However, only with increased deployment and measurement can its true value, if any, be understood, although as noted cost versus benefit analysis will be difficult for private sector organisations to gauge accurately due to a number of factors.

We believe this topic more generally has other potential applications not covered in this initial post. These include, for example, the use of telemetry not to frustrate fuzzing but to inform vendors of what is being fuzzed within their products, although the privacy implications would need to be carefully managed.

Some final thoughts: we say clearly that we know this only hides bugs rather than fixing them, but we believe the approach has its place and adds value in some situations. We also believe there is good precedent in anti-reversing tools, which attempt to slow down attacks knowing full well that they won't win forever. Finally, it is worth noting that this isn't the opposite of making fuzzing easier for in-house teams, i.e. Google and Tavis Ormandy's presentation titled Making Software Dumber, where he makes the case for making fuzzing easier. Instead we suggest that the ideal might be: easy to fuzz in debug builds, hard in release builds.

Finally thanks to Haroon and John for providing feedback on this post whilst in draft.

Published date:  02 January 2014

Written by:  Ollie Whitehouse