Authorization vulnerabilities continue to be one of the largest and most difficult-to-remediate classes of vulnerabilities affecting web applications. Unlike other vulnerability classes such as XSS or SQL injection, there are no frameworks or design patterns that prevent authorization flaws at a fundamental level (although this is an area of active research). Insecure Direct Object References (IDOR) are one specific type of authorization flaw that does have a higher-level mitigation strategy. Specifically, many applications use unguessable, random identifiers (like UUIDs) to refer to objects in order to protect them. By using a UUID instead of a simpler, incrementing or guessable ID, IDORs become much more difficult to exploit: even if there is an authorization vuln, an attacker cannot simply iterate through a large number of IDs to dump other users’ data.
This strategy has been considered effective enough that, for some applications, knowledge of a UUID is the only authorization check implemented for access to an object. Rather than validating the user’s session and confirming that the user has an appropriate relationship with the object, these applications assume that – because the UUID is unguessable and known only to the user – no further authorization checks are needed. There have been plenty of blog posts written on why UUIDs shouldn’t be used this way – but they all focus on the fact that there are actually multiple algorithms for generating a UUID, some of which produce guessable UUIDs. As long as the application uses UUIDv4 for cryptographically random IDs, and doesn’t give attackers a way to see IDs belonging to other users, it should be safe, right? Unfortunately, there are big gaps in this logic, which ultimately lead me to conclude that unguessable IDs are not safe to use for traditional object-based access control.
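To make the version distinction concrete, here is a small sketch using Python’s standard `uuid` module. UUIDv1 embeds the host’s node identifier and a timestamp, so consecutive IDs share visible structure; UUIDv4 is drawn from a CSPRNG and is unguessable in isolation.

```python
import uuid

# UUIDv1 embeds a node identifier (often the MAC address) and a
# timestamp -- two IDs generated back-to-back share most of their bits,
# so an attacker who sees one can narrow down the search for others.
a = uuid.uuid1()
b = uuid.uuid1()
print(a, b)  # note the shared trailing node segment

# UUIDv4 draws 122 random bits from the OS CSPRNG, so each ID is
# independent and unguessable.
c = uuid.uuid4()
print(c, c.version)
```

Even so, as the rest of this post argues, picking the right UUID version only addresses guessability – it does nothing about IDs that leak.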
Missing Authorization Checks Clash with Functional Changes
When authorization checks aren’t consistently applied across all application routes, changing or adding functionality can suddenly become an extremely dangerous game. Functional changes in far-off parts of the application can easily end up revealing IDs to an attacker, allowing them to access sensitive data. Engineers can’t and shouldn’t be expected to consider how a new route might affect older application functionality that hasn’t changed in years. Instead, they should be able to code safely without having to keep the entire application in mind. For example, if a new route can be used to return a list of valid UUIDs (without the sensitive data for the objects), the engineering and security teams might both think this is OK – but two separate routes might be combined to leak IDs and then read the data from those IDs. If engineers need to be constantly aware of every route in the application in order to avoid creating new authorization flaws, that is a recipe for disaster.
I have seen a number of real-world examples of this, so I’ll use a toy example that is similar to a number of real applications. Consider an application which allows users to create shopping lists, each of which has a cryptographically random UUID. Some of those shopping lists might be sensitive, so it shouldn’t be possible for a user to see the items on another user’s shopping list. The route to retrieve the items on a shopping list requires knowing the UUID; so as long as an attacker can’t read those IDs, no authorization check should be needed. Instead, the authorization check is implemented on the user object: each user can only see their own user data, which includes the list of all UUIDs corresponding to their shopping lists.
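The toy app’s access logic can be sketched in a few lines. Everything here is hypothetical (in-memory dicts standing in for a database and session handling), but it captures the flaw: the items route has no authorization check at all – knowing the UUID is treated as proof of access.

```python
import uuid

# Hypothetical in-memory data model for the toy shopping-list app.
USERS = {"alice": {"list_ids": []}, "bob": {"list_ids": []}}
LISTS = {}  # list_id -> {"owner": ..., "items": [...]}

def create_list(username, items):
    list_id = str(uuid.uuid4())  # cryptographically random ID
    LISTS[list_id] = {"owner": username, "items": items}
    USERS[username]["list_ids"].append(list_id)
    return list_id

def get_user(session_user):
    # The only authorization check in the app lives here: a session can
    # only read its own user object, which holds that user's list IDs.
    return USERS[session_user]

def get_list_items(list_id):
    # No authorization check at all: possession of the UUID is assumed
    # to imply the right to read the list.
    return LISTS[list_id]["items"]
```

Note that `get_list_items` never asks *who* is calling – which is exactly the assumption that later functional changes end up violating.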
As long as this app stays super simple, there might not be a major issue with this logic. But when new functionality is implemented, there’s a risk that a change is introduced which violates the core assumption (users cannot see the UUIDs for other users’ shopping lists). In the real world, I have seen this happen when an application was changing its backend from a REST API to a GraphQL API. In that case, the REST API did not return the UUIDs for the sensitive object (e.g. shopping list), so it didn’t “need” an authorization check when reading the list. The new GraphQL API, on the other hand, did have an authorization check for reading the shopping list, but did not have an authorization check for reading the list of UUIDs for a user – these IDs were not seen to be sensitive. Because of the differences in context, it was possible to list UUIDs via the (new) GraphQL API, then read the sensitive data via the (old) REST API. In my experience, this type of issue is extremely common for any moderately-complex application which is relying on IDs for authorization.
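The REST/GraphQL mismatch described above can be sketched as follows. This is a self-contained, hypothetical reconstruction (plain functions standing in for real route handlers and resolvers), showing how two routes that each look acceptable in isolation combine into a full data leak.

```python
import uuid

# Hypothetical data model shared by both API surfaces.
USERS = {}  # username -> [list_id, ...]
LISTS = {}  # list_id -> {"owner": ..., "items": [...]}

def seed(username, items):
    list_id = str(uuid.uuid4())
    LISTS[list_id] = {"owner": username, "items": items}
    USERS.setdefault(username, []).append(list_id)
    return list_id

# Old REST route: no authorization check when reading items, because
# the UUID was assumed to be secret.
def rest_get_list_items(list_id):
    return LISTS[list_id]["items"]

# New GraphQL resolver: does check authorization when resolving a
# list's items...
def graphql_list_items(session_user, list_id):
    if LISTS[list_id]["owner"] != session_user:
        raise PermissionError("not your list")
    return LISTS[list_id]["items"]

# ...but not when resolving the list-ID field on a user, because bare
# IDs "aren't sensitive".
def graphql_user_list_ids(session_user, target_user):
    return list(USERS.get(target_user, []))

def exploit(attacker, victim):
    # Chain the two APIs: enumerate IDs via GraphQL, then read the
    # sensitive data via the old REST route.
    stolen = []
    for list_id in graphql_user_list_ids(attacker, victim):
        stolen.extend(rest_get_list_items(list_id))
    return stolen
```

Neither route is “the bug” on its own; the vulnerability only exists in their combination, which is why this class of flaw survives code review so often.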
Authorization Based on IDs Turns Logs into Toxic Waste
One area of access control and user privacy which has come to the forefront recently is the leakage of sensitive data via application logs. Modern companies have come to realize that logs can and do contain highly-sensitive data; but the controls around access to logs are often much looser than for other systems (e.g. a customer support interface). As a result, most companies are attempting both to implement stronger controls for log storage and access, and to remove sensitive user data from logs in the first place. It is obvious that some data (such as users’ passwords) is critically sensitive and should be kept out of logs. IDs are generally not seen as sensitive – at least, it is completely non-obvious to most engineering or security teams that an ID should be considered sensitive. However, if knowledge of an ID is the only thing gating access to sensitive data, suddenly these IDs have become a massive source of toxic data in the application’s logs.
This problem is even more significant than might be initially realized, especially compared to other methods of authorization (e.g. checking against a session value from the user’s cookies). First, IDs are very commonly included in URLs or application routes, or in query and body parameters – compared to a session token, which is almost always in a cookie or Authorization header. Second, there is little-to-no cultural awareness that a UUID could be sensitive, so most logging frameworks or engineers would not expect to strip them from the logs. Finally, IDs typically never “expire” unless the user actively deletes the object referred to by the ID (if this is even possible), meaning they are toxic forever.
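If IDs gate access, they have to be scrubbed from logs like any other credential. One possible mitigation – a sketch, not a complete solution – is to redact anything UUID-shaped before a log line is written out; the function name and replacement token here are illustrative.

```python
import re

# Matches the canonical 8-4-4-4-12 hex UUID format, any version.
UUID_RE = re.compile(
    r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"
)

def redact_ids(log_line: str) -> str:
    # Replace every UUID-shaped token so access-granting IDs never
    # reach log storage.
    return UUID_RE.sub("[REDACTED-ID]", log_line)

print(redact_ids(
    "GET /lists/1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed/items 200"
))
# GET /lists/[REDACTED-ID]/items 200
```

Of course, this only treats the symptom: redaction has to be applied at every log sink, forever, whereas a proper authorization check removes the toxicity of the IDs in the first place.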
Users Lose Control Over Their Data
The last major area where ID-based authorization can create problems is for applications that allow users to share access to objects. Returning to the example of a shopping-list-maker app, we might want to give users the ability to share and collaborate on shopping lists. If any user can access any shopping list simply by going to the correct URL which contains the ID, then we’ve already solved the problem of sharing. In this case, it seems like using UUIDs for access is actually a major feature – any user can just share the URL to allow another user to access a shopping list.
There are multiple problems with this simple solution. IDs don’t expire, so any user with the ID can access the list indefinitely, even if the original user would like to prevent them from doing so. For example, if a user ends up sharing a grocery shopping list, but later reuses it for holiday presents, the other viewers might see what their gift is. To work around this issue, the user would need to create a totally new list each time they wanted to share access. Another scenario to consider is what happens if a user’s account is compromised: even if they recover access to their account, any existing or future data may still be accessible to the attacker.
These issues don’t only affect users, but create problems for the application itself. Without consistent authorization checking, it becomes much more difficult to implement any future access controls, such as granular control over who can only read a shopping list versus who can also edit it. Because the existing model doesn’t have any concept of who can access the list, implementing new access control checks breaks all the existing access. It’s not difficult to come up with similar examples, such as a social media site that started with all users having public profiles, but now wants to implement a private profile feature – do users switching to private profiles now block their existing followers from access?
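The alternative is to make sharing explicit: record who has access to each list, with what role, instead of implying access from knowledge of the ID. Here is a minimal sketch under hypothetical names – a grant table as the single source of truth, which makes revocation and read-vs-edit roles straightforward to add later.

```python
import uuid

LISTS = {}   # list_id -> {"owner": ..., "items": [...]}
GRANTS = {}  # list_id -> {username: "read" or "edit"}

def create_list(owner, items):
    list_id = str(uuid.uuid4())
    LISTS[list_id] = {"owner": owner, "items": items}
    GRANTS[list_id] = {owner: "edit"}  # owner always has full access
    return list_id

def share(session_user, list_id, other_user, role="read"):
    if LISTS[list_id]["owner"] != session_user:
        raise PermissionError("only the owner can share")
    GRANTS[list_id][other_user] = role

def revoke(session_user, list_id, other_user):
    if LISTS[list_id]["owner"] != session_user:
        raise PermissionError("only the owner can revoke")
    GRANTS[list_id].pop(other_user, None)

def get_items(session_user, list_id):
    # Access requires an explicit grant -- knowing the UUID is not enough.
    if session_user not in GRANTS.get(list_id, {}):
        raise PermissionError("no grant for this list")
    return LISTS[list_id]["items"]
```

Because every access runs through the grant table, the scenarios above become solvable: the grocery-list owner can revoke viewers before reusing the list, and a compromised account’s future data isn’t permanently exposed.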
Using random IDs for objects, instead of guessable or incrementing IDs, is a change that makes a lot of sense from an engineering perspective. As a defense-in-depth feature that aims to mitigate authorization vulnerabilities, it falls far short of its promise. If you’re using UUIDs as object identifiers, making sure those IDs are unguessable and cryptographically random is a good first step, but it’s not sufficient to prevent authorization flaws. When using this pattern in an application, there should always be a corresponding authorization check that the current user has a valid reason to access the object. This will create a much safer environment in both the short and long term, by reducing the risk of leaking IDs via logs or other application functionality, and making it possible to implement future features safely.
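One way to keep that corresponding check consistent across routes – sketched here with hypothetical names, not as a prescription – is to centralize it in a decorator, so any handler that takes an object ID has to go through the same validation:

```python
import functools
import uuid

LISTS = {}  # list_id -> {"owner": ..., "items": [...]}

def require_list_access(handler):
    # Wraps a route handler so the object is fetched and the caller's
    # relationship to it is verified before any handler code runs.
    @functools.wraps(handler)
    def wrapper(session_user, list_id, *args, **kwargs):
        obj = LISTS.get(list_id)
        if obj is None or obj["owner"] != session_user:
            # Returning the same error for "missing" and "forbidden"
            # avoids confirming which UUIDs exist.
            raise PermissionError("not found")
        return handler(session_user, list_id, *args, **kwargs)
    return wrapper

@require_list_access
def get_list_items(session_user, list_id):
    return LISTS[list_id]["items"]
```

With this shape, the random UUID remains a useful defense-in-depth layer against enumeration, while leaked or logged IDs no longer grant access on their own.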