Using OAuth WRAP and Finger for ad-hoc user authentication

The OpenID community has worked long and hard to make ad-hoc logins possible on the web. Part of that process has been experiments with a number of different technologies and approaches. Below I make my own proposal for how to handle ad-hoc logins on the Internet using OAuth WRAP and my own spin on Finger. I offer this up as food for thought.


1 Disclaimer

My employer has nothing to do with anything in this article. They didn’t review it, authorize it or influence it. It’s my ideas and my ideas alone. So blame me.

I have a deep interest in identity issues mostly as part of my fervent desire to live in an Open Web and that web doesn’t exist today. So I ask the reader to please take these ideas in the spirit that they are offered, as a fellow traveler trying to find a successful path to a web where users decide who they want to interact with.

2 Logging into the Foo service - a scenario

Joe wants to use the Foo service. To do this Joe needs to log into the Foo service. Joe’s identity provider is example.com, where Joe is known as joe@example.com. The Foo service and example.com have no previous relationship. Joe tells the Foo service that his email is joe@example.com (note: the scenario works just as well with only Joe’s domain, so there is no need to expose one’s identity). The Foo service then uses finger (my thoughts on which I have explored here) to obtain example.com’s symmetric key negotiation service and login service endpoints. The Foo service uses the key negotiation service to negotiate a symmetric key with example.com. Then the Foo service uses a profile of OAuth WRAP to forward Joe to example.com’s login service, asking for proof that the user really is joe@example.com. example.com validates Joe’s identity and then forwards Joe’s web browser back to the Foo service with a security token attesting to Joe’s identity.

3 My thoughts on requirements

3.1 No federation, no registration, it’s truly ad-hoc

We need an approach to authenticating users across services that doesn’t require the services to have any pre-existing relationship. The services must not need to register with each other, federate, or perform any other ’offline’ magic in order to successfully authenticate users to each other.

3.2 No public key encryption beyond SSL/TLS, at least not initially

The use of public key encryption has obvious applications to this scenario. But I believe we have to start simple, with solutions that do not require any form of PKI beyond SSL/TLS, which I believe should be a mandatory requirement of the protocol. Yes, in the future we definitely should extend to PKI because it has some very nice advantages, but the base functionality shouldn’t use anything more than SSL/TLS with some HMAC thrown in for validation of security tokens. Note, however, that support for SSL/TLS is mandatory and a critical component of the security of the key negotiation algorithm defined below.

4 An example

When Joe first navigates to the Foo service he is prompted to log in. Ideally Joe would just type in his identity provider’s domain name (example.com), but realistically most users won’t ’get’ that (at least not initially) and will probably just type in their e-mail addresses, in this case joe@example.com.

The Foo service has never had an interaction with example.com before, so the first thing it does is a lookup against example.com’s finger service to find the location of example.com’s key negotiation endpoint. This process consists of sending a POST request to example.com’s finger endpoint with a scope (a la OAuth WRAP) of URN:SomeStandardsOrgId:KeyNegotiationService. The response would contain a URL such as https://example.com/key-negotiation.
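A minimal sketch of the discovery request might look like the following; the scope URN is the one from this article, but the `scope` form field name and the idea of a single finger endpoint per domain are assumptions of mine, not from any published spec:

```python
from urllib.parse import urlencode

# The scope URN comes from the article; the "scope" form field name
# is an illustrative assumption about how the finger POST is encoded.
KEY_NEG_SCOPE = "URN:SomeStandardsOrgId:KeyNegotiationService"

def finger_request_body(scope: str) -> str:
    """Build the form-encoded body for a finger discovery POST."""
    return urlencode({"scope": scope})

body = finger_request_body(KEY_NEG_SCOPE)
# The finger service would answer with the endpoint URL for that scope.
```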

The Foo service would then send a request to establish a symmetric key to https://example.com/key-negotiation. The request would contain Foo’s own key negotiation service URL (https://foo.com/key-negotiation), a request ID and a cryptographically secure randomly generated number that I’ll call proof1. example.com would then double check the key negotiation service location by using a finger request on foo.com to find foo.com’s key negotiation endpoint. Once it had validated that the URL it received in the key negotiation request matched the value retrieved from finger, example.com would issue a POST to that URL with the request ID, proof1 and a symmetric key value that is to be used to sign security tokens between example.com and the Foo service.
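To make the two messages concrete, here is a sketch with both parties simulated in-process; the field names and URLs are placeholders of my own, not from any spec, and the real exchange would of course travel over TLS:

```python
import secrets

def foo_initiate() -> dict:
    """Foo service's key negotiation request: its own key negotiation URL,
    a request ID, and the unguessable proof1 value."""
    return {
        # Placeholder URL; in reality this is whatever foo.com's finger
        # service advertises for the key negotiation scope.
        "key_negotiation_url": "https://foo.com/key-negotiation",
        "request_id": secrets.token_urlsafe(16),
        "proof1": secrets.token_urlsafe(32),  # cryptographically secure random value
    }

def idp_respond(msg: dict, fingered_url: str) -> dict:
    """The identity provider's reply: verify the claimed URL against its own
    finger lookup of foo.com, then echo the request ID and proof1 along with
    a freshly generated symmetric key."""
    if msg["key_negotiation_url"] != fingered_url:
        raise ValueError("key negotiation URL does not match finger lookup")
    return {
        "request_id": msg["request_id"],
        "proof1": msg["proof1"],
        "key": secrets.token_bytes(32),  # the shared symmetric key
    }
```

A caller that pairs the echoed proof1 against the value it sent can be confident the reply came from a party that saw the original TLS-protected request.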

At this point there is now a symmetric key that both example.com and the Foo service (represented by the domain foo.com) have agreed to use with each other.

Now the Foo service will perform a second finger lookup, this time for URN:SomeStandardsOrgId:LoginService. This time the Foo service wants to know where to send Joe in order to log him in.

The Foo service will then forward Joe’s browser to the URI returned in the previous finger request along with an OAuth WRAP style permission request of ”Please log this person in and tell me who the heck they are.” example.com will then log Joe in and ask Joe if he wants to tell the Foo service who he is. Joe will say yes and example.com will redirect Joe back to the Foo service with a security token, HMAC’d with the shared key, asserting Joe’s identity.

And now we’re done. Joe has logged into the Foo service using his identity provider even though the Foo service and example.com had never had any previous interaction with each other.

Figure 1: An example of ad hoc authentication

Only the colored links involve standardization. The blue links (3/3R, 5/5R and 7/7R) use finger whose design I have discussed elsewhere. The green links (4/4R and 6/6R) are the symmetric key negotiation algorithm I introduce in this article. The red links (2R, 8, 9R and 10) are a minor profile of OAuth WRAP that I discuss in this article.

4.1 What if Joe’s identity provider isn’t the same as his domain name?

This is actually not an uncommon situation. For example, users establish Facebook accounts with e-mail addresses that Facebook doesn’t own, yet Facebook acts very much as an identity provider. Some services, like Google and Live, are identity providers where users can either choose a domain for their e-mail controlled by Google or Live (e.g. gmail.com or live.com) or use an existing e-mail address. This can sometimes create confusion. If Joe has a Live account he created with the e-mail address joe@example.com and Joe wants to log into the Foo service, then if he says his e-mail is joe@example.com the Foo service will go to example.com instead of Live, who is Joe’s identity provider.

This is a really sticky problem and I’m pretty convinced that it’s not interesting to solve. The reason identity providers like Live or Google let users use existing e-mail addresses from domains not owned by the identity provider was convenience. That convenience is proving more and more troublesome, to the point where the message needs to go out - log in using your identity provider. So if Joe wants to be known as joe@example.com then example.com had better be his identity provider. Otherwise he needs to get another e-mail address.

I know many people don’t find this a satisfying answer, but I believe that bending over backwards to solve a weird pointer problem just isn’t worth the effort.

4.2 Why do we need proof1 in messages 4 and 6? Shouldn’t the request ID be enough?

As long as the request ID contains a cryptographically secure random number of sufficient length then yes it is enough. But I felt like calling out the requirement for the secure value as its own value in the exchange in order to hammer home how important it is from a security perspective. The security of the key establishment hinges on the use of TLS/SSL and the exchange of an unguessable secret. If either is compromised then the key establishment protocol is not secure.

4.3 How did example.com know what security token format to use to send proof of Joe’s identity to the Foo service, or what claims to use?

My assumption is that we will define profiles for common situations like this. So there will be a profile called ’login’ and that profile will specify things like what security token format to use and what claims can be placed in that security token.

4.4 Wouldn’t it be easier for the Foo service to just make one directory lookup request against example.com instead of two?

This is really more of an issue for the finger server article, but my opinion is that with HTTP/1.1 pipelining (and in this case the POST’s semantics are idempotent, so pipelining is fine) I don’t see any reason to over-optimize. Besides, these requests should be reasonably rare.

4.5 Why is example.com using finger in 5/5R to find Foo’s key negotiation service URL when that value was already provided in message 4?

In theory any request that can be validated against a domain over TLS/SSL should be ’trusted’ as having properly come from that domain. But in practice paranoia is a healthy thing. Let’s say an attacker has managed to take over some small part of foo.com and is using that to launch key attacks. By checking the key negotiation service location in message 4 against the value returned by the finger service in 5R, example.com gives itself an extra level of protection.

4.6 How do keys expire and get replaced?

As part of the key exchange one expects that an expiration date will be associated with the key. To keep things running smoothly it will be necessary to be able to roll keys over without a ’gap’ where no key is in place. This means that at a minimum the two parties to a key negotiation need to handle having at least two keys active at the same time.

For example, let’s say a key has been negotiated and will expire in a few days. One of the parties may decide it is time to create a new key so that it can be established before the old key expires. For this scheme to work at least two keys have to be active at the same time, the old key that is about to expire and the newly established key.

In general a new key is established any time the key negotiation exchange is made. So in theory an unbounded number of keys could be put into play. In practice, however, each side is likely to have some upper limit on how many keys it wants in play at any one time. This limit can be enforced either by refusing to add new keys if there already exists a maximum number of unexpired keys, or by dropping an existing key (e.g. no longer honoring it) once the maximum number of supported keys has been reached. Which approach is used isn’t as important as making sure both ends of the conversation understand what has happened. My own preference is for each side to just support a maximum number of keys and to refuse to create new ones if that maximum is filled with unexpired keys. I think the failure scenarios for that situation are easier for each side to understand.
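My preferred policy can be sketched as a per-peer key store that prunes expired keys and refuses new ones once the maximum is reached; the class and field names here are my own illustration, not part of any proposed wire format:

```python
import time

class KeyStore:
    """Holds up to max_keys unexpired symmetric keys for one peer domain.
    Refuses new keys once full rather than silently dropping old ones."""

    def __init__(self, max_keys: int = 10):
        self.max_keys = max_keys
        self.keys = []  # list of (key_bytes, expires_at) tuples

    def prune(self, now: float = None) -> None:
        """Drop keys whose expiration time has passed."""
        now = time.time() if now is None else now
        self.keys = [(k, exp) for k, exp in self.keys if exp > now]

    def add(self, key: bytes, expires_at: float) -> None:
        """Record a newly negotiated key; raise if the store is full."""
        self.prune()
        if len(self.keys) >= self.max_keys:
            raise RuntimeError("maximum number of unexpired keys reached")
        self.keys.append((key, expires_at))
```

Because `add` prunes first, a rollover key negotiated shortly before an old key expires coexists with it, and the slot frees up automatically once the old key lapses.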

4.7 What happens if a key gets compromised or lost?

My own thinking is that the key negotiation protocol should have a message exchange type of ”Delete all keys we have negotiated” along with a human description of what the heck happened and some contact information since a follow up is probably necessary. This exchange would occur using the same two step pattern as establishing a key so as to prevent attacks.

4.8 Aren’t there race conditions between two services trying to establish keys?

You betcha. Imagine a case where a user is logging into one of example.com’s services using a Foo service identity and vice versa. This could easily result in two different keys getting established between the same services. But as long as we mandate that each side be able to handle a reasonable number of keys (say 10, just to pick an integer) then it really shouldn’t matter.

4.9 Does the Foo service repeat this entire process with everyone from example.com who wants to log in?

Heck no. The negotiated key is good for all communications between example.com and foo.com.

4.10 Why do we need to discover the location of the key negotiation or login services? Why not just hard code their locations under /.well-known?

Somewhere I can hear Mark Nottingham groaning. Strictly speaking, hard coding the locations under /.well-known is at least logically consistent. But in general hard coding makes for fragile systems. It limits implementers’ flexibility around where services can be hosted. So it seems like a good idea to hard code as few things as possible and let everything else be dynamic. Hence the desire to redirect through a finger server.

7 thoughts on “Using OAuth WRAP and Finger for ad-hoc user authentication”

  1. Every time /.well-known is used, a kitten dies.

    Seriously, it’s only for things that really, really need to use it. Much like a gun under the pillow*, I’m totally OK with it not being used.

    * trying to relate to you US residents here…

    1. If you have a better way to solve the initial discovery problem I’m all ears.
      (And what fool keeps a gun under his pillow? Your head will just hurt all night. You sleep with your gun in your hand of course.)

  2. Why do you think services would accept identity from a random identity provider? Some would, I’m sure, like maybe in cases where you log in to comment on a blog post. But your claim that Google or Live would just use example.com to log the user in instead of using joe@example.com as the username might not hold water. Google and Live (and other sites in general) have a vested interest in keeping high up-time, but now they would have to depend on another company being up and running, otherwise that user would not be able to log in.

    1. As you point out, it completely depends on context. For low value interactions like blog posts (which was actually where, if memory serves, OpenID came from in the first place) services like Google and Live are fine with taking arbitrary identities. For higher value scenarios (such as logging into one’s Live account) one suspects they will be choosier. But their choice list won’t be just themselves. They will accept services like Facebook, Hi5, etc. because of their dominance in their local markets.

      But my real interest has more to do with the other scenarios I’m talking about in regards to granting and receiving permissions. In those scenarios a user wants to say “give joe@example.com access” and the service had better do it, and that will in some cases require logging Joe in in order to prove the requester is Joe, which means having to talk to Joe’s identity provider regardless of who it is. That is my motivating scenario.

  3. That scenario makes sense.

    I also wondered whether key exchange flow is necessary. Identity Providers can sign their tokens with their SSL certs, no? What do you think?

    1. There are several reasons why IPs can’t use their SSL/TLS certs to sign/encrypt assertions.

      1. Public key support in most languages/platforms is awful, so most implementers will fail. The only reason SSL/TLS works so well is that it’s provided as a complete service, typically by the OS or by mature software packages. So this isn’t a technical reason, just a realistic one.

      2. X.509 certs have a keyUsage field in them that restricts what they can be used for. Most SSL certs are restricted to just being server side TLS certs and nothing else. If you ask nicely, sometimes the CA will also mark them as being o.k. for client side TLS certs. But that’s about it. They aren’t allowed to be used for anything else. I suppose we could tell everyone to just ignore the keyUsage field but that probably doesn’t lead anywhere we want to go.

      3. I suspect the crypto folks would flip if a key was used for two completely different purposes (e.g. both TLS key exchange and for signing/encrypting assertions). This is the sort of thing that leads to crypto-analysis attacks.

      So if IdPs are going to use public keys for signing/encrypting tokens/assertions they are going to have to get different keys than their TLS keys.

      That all having been said I actually support using public keys for signing/encrypting assertions/tokens. I just wanted to provide a low pain alternative for the majority of people on platforms that have lousy public key support.
