Eran's latest article raises a number of specific security threats by way of arguing that bearer tokens are irredeemably insecure. In this article I examine the attacks Eran calls out and demonstrate that they are already addressed by OAuth 2.0. Eran's article does bring up an interesting question: do we need defense in depth for the tamper resistance and confidentiality provided by SSL/TLS?
1 The security threats Eran raises
Below I list the specific security threats I found Eran raising in his article:
Developers turning off error checking in SSL: Some sites do not have up-to-date certificates, and a few present certificates that aren't signed by a certificate authority trusted by the operating system. To deal with these errors client authors sometimes just turn off SSL cert error checking. Done in a blanket way this invalidates SSL security, since any man in the middle can insert itself, send a bad cert, and the client will accept it.
Bearer tokens can be replayed: Eran claims that if SSL is screwed up (as above) then an attacker can take the bearer token and use it to make any request they want.
Typos: Eran claims that if the client developer makes a typo (say http://foo.com instead of https://foo.com, or https://boo.com instead of https://foo.com) then SSL's security guarantees are compromised.
Client developers don't do security: Eran believes that client developers can't get security right and therefore must have libraries to protect them from themselves.
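The first threat is worth seeing concretely. A minimal sketch in Python using the standard library's ssl module shows the difference between the blanket "turn everything off" shortcut and a context that keeps verification on (the specific settings below are illustrative, not a recommendation):

```python
import ssl

# The dangerous shortcut: disable certificate verification entirely.
# Any man in the middle can now present a self-signed cert and the
# client will happily accept it.
insecure = ssl.create_default_context()
insecure.check_hostname = False          # must be disabled first
insecure.verify_mode = ssl.CERT_NONE     # accept any certificate

# What the default context does: verify the chain against the OS
# trust store and check that the hostname matches the certificate.
secure = ssl.create_default_context()
```

If a site's certificate is signed by an unknown CA, the right fix is to add that one CA to the trust store (e.g. via `load_verify_locations`), not to switch off verification for every connection.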
The first and last threats obviously contradict each other. Eran's attack on SSL is about developers using libraries incorrectly, yet he then suggests that libraries can fix security. The reality is this: developers who do not read the security considerations sections and understand the threats they are under will get it wrong. Either we educate developers or security will fail.
That bearer tokens can be replayed is absolutely true, but equally true is that properly designed bearer tokens significantly reduce the damage done. First, they are short lived. Second, they carry an audience restriction, which in most interesting cases (as I discussed previously) kills replay attacks before they can even get started.
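To make the two mitigations concrete, here is a minimal sketch of what a server-side check might look like. The token structure (a parsed payload with `aud` and `exp` claims) and the function name are assumptions for illustration, not any particular library's API:

```python
import time

def validate_bearer_token(token: dict, expected_audience: str) -> bool:
    """Hypothetical check on an already-parsed token payload.
    The audience claim stops a token stolen from one service being
    replayed at another; the expiry keeps the replay window short."""
    if token.get("aud") != expected_audience:
        return False  # issued for a different service
    if token.get("exp", 0) <= time.time():
        return False  # token has expired
    return True

# A token minted for foo.com is useless if replayed at bar.com:
stolen = {"aud": "https://foo.com", "exp": time.time() + 300}
```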
The typo attack sounds the most scary. If a developer mistypes a single character the entire security of the system might be forfeit. But is this threat realistic? In OAuth 2.0 clients are required to go through a two-step process. In the first step they present their credentials to a token endpoint, which issues an access token. In the second step they present the access token to an application endpoint to actually do something.
Let's say the developer put in the wrong token endpoint but the right application endpoint. In that case the access token produced by the wrong token endpoint won't work on the right application endpoint, and the client will fail.
Let's say the developer put in the right token endpoint but the wrong application endpoint. A properly designed OAuth token endpoint request includes the URL of the application endpoint the token will be used against, which allows the token endpoint to validate that it is a supported application endpoint. Typically the systems I'm involved with handle this by putting the base URL for requests into the scope field. So long as the token endpoint checks the application endpoint URL and refuses unsupported endpoints, no access token will be issued and no damage is done.
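The token endpoint check described above can be sketched in a few lines. The endpoint registry, function name, and error types here are all hypothetical; the point is simply that a mistyped application endpoint dies at token issuance, before any credential reaches the attacker:

```python
# Hypothetical registry of application endpoints this token
# endpoint is willing to mint tokens for.
SUPPORTED_ENDPOINTS = {"https://foo.com/api"}

def issue_access_token(client_credentials_ok: bool, scope: str) -> dict:
    """Token endpoint sketch: validate the application endpoint URL
    carried in the scope field before issuing anything."""
    if not client_credentials_ok:
        raise PermissionError("bad client credentials")
    if scope not in SUPPORTED_ENDPOINTS:
        raise ValueError("unsupported application endpoint: " + scope)
    return {"access_token": "opaque-token", "aud": scope}
```

A request scoped to a typo like https://boo.com/api simply gets no token, so there is nothing for an attacker at boo.com to replay.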
2 Security is insurance, please don't buy more than you need
OAuth 2.0 depends on SSL/TLS to provide two key features: message tampering protection and confidentiality. If SSL/TLS is broken then one or both of those features will be lost. As explored in the appendix below, mechanisms like OAuth 1.0's signature protocol don't really provide much defense in depth against SSL/TLS failures. So if we are going to get additional protection, it will really only come from re-inventing SSL/TLS-like capabilities somewhere else in the stack, most likely by introducing a generic mechanism to sign/encrypt HTTP messages.
But inventing such a mechanism is a non-trivial endeavor. Just look at the complexity of SSL/TLS itself to get some idea of how hard getting an HTTP-level message signing/encrypting mechanism right will be. So if we are to invent such a mechanism we need one heck of a good use case. I haven't seen one, but if someone has, I'd love to see it because I have some ideas on how to implement HTTP message signing/encrypting.
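To give a flavor of what HTTP-level message signing might look like (and why it is only the easy part of the problem), here is a naive HMAC sketch. Everything here, the function, the choice of signed fields, the canonicalization, is an assumption; a real mechanism would also need key distribution, header canonicalization, and replay defense, which is much of what SSL/TLS already provides:

```python
import hashlib
import hmac

def sign_http_message(key: bytes, method: str, url: str, body: bytes) -> str:
    """Naive sketch: HMAC over the method, URL, and a hash of the
    body. Tampering with any signed part changes the signature."""
    base = b"\n".join([method.encode(), url.encode(),
                       hashlib.sha256(body).digest()])
    return hmac.new(key, base, hashlib.sha256).hexdigest()
```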
A Appendix - A quick look at what OAuth 1.0 signatures buy in terms of security
Let's say that a developer has, as Eran realistically describes in his article, turned off SSL cert error checking. Let's further assume that the developer is using OAuth 1.0 signatures. Finally let's assume that a man-in-the-middle (MITM) attack is underway. As soon as the client tries to connect to the server the MITM will redirect the request to their own machine and present a bad cert which will be accepted because cert checking is off. Now let's examine what the attacker can do even though OAuth 1.0 signatures are being used.
OAuth 1.0's security considerations section already points out two things the attacker can do. In section 4.2 it points out that the attacker can silently pass on requests and responses, thus allowing them to eavesdrop. But even more fun is section 4.3, which points out that since responses aren't signed the attacker can change the response to be anything they want, opening up a Pandora's box of security threats. Trying to check on the status of your web service via its OAuth-protected management interface? The attacker can make it look like everything is fine with your service even as the attacker is taking it down. Looking for the storage location to upload your secret document? The attacker can re-write the response to your query for the directory location to point at a URL they control.
Or check out the warning in section 3.4.1: the request body is only protected by the signature if it is form-encoded. In other words, if the request body is JSON, XML, etc. then the attacker can not only change the response, they can change the request too without any fear of detection.
In addition, as a practical matter, an attacker can repeat the same request message multiple times if they want to. OAuth 1.0 tries to prevent replays by using nonces. These are unique values generated by the client that the server is supposed to record for each and every request received from each and every client (at least until the time stamp in the message has passed). The idea is that before processing a request the server checks the nonce, and if it has been seen before, rejects the request.
In reality distributed systems do no such thing. Keeping a database of nonces for every single request received from every client is so expensive to implement and so hard to keep consistent (we run right into the CAP theorem) that in practice scalable systems just won't do it. Instead they check the time stamp and that is it. So as a practical matter attackers can in fact replay requests.
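The nonce scheme described above can be sketched as follows; the names and the skew window are hypothetical. The catch is in the comment: the nonce store must be shared and consistent across every front-end node for the replay check to mean anything, and that is exactly the part scalable systems skip:

```python
import time

SEEN_NONCES = {}   # (client_id, nonce) -> timestamp; PER-NODE, not shared
MAX_SKEW = 300     # seconds of clock skew tolerated (illustrative value)

def accept_request(client_id: str, nonce: str, timestamp: float) -> bool:
    """What OAuth 1.0 asks servers to do. A replayed nonce is only
    caught if it lands on the same node that recorded it; in a
    distributed deployment only the timestamp check is realistic."""
    now = time.time()
    if abs(now - timestamp) > MAX_SKEW:
        return False  # stale request, outside the accepted window
    if (client_id, nonce) in SEEN_NONCES:
        return False  # replay detected (on this node only!)
    SEEN_NONCES[(client_id, nonce)] = timestamp
    return True
```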