Thoughts on web development, tech, and life.

Category: Open Social Web

Implementing OAuth is still too hard… but it doesn’t have to be

I recently helped Dave Winer debug his OAuth Consumer code, and the process was more painful than it should have been. (He was trying to write a Twitter app using their beta OAuth support, and since he has his own scripting environment for his OPML editor, there wasn’t an existing library he could just drop in.) Now I’m a big fan of OAuth–it’s a key piece of the Open Stack, and it really does work well once you get it working. I’ve written both OAuth Provider and Consumer code in multiple languages and integrated with OAuth-protected APIs on over half a dozen sites. I’ve also helped a lot of developers and companies debug their OAuth implementations and libraries. And the plain truth is this: it’s still way too painful for first-time OAuth developers to get their code working, and despite the fact that OAuth is a standard, the empirical “it-just-works” rate is way too low.

We in the Open Web community should all be concerned about this, since OAuth is the “gateway” to most of the open APIs we’re building, and quite often this first hurdle is the hardest one in the entire process. That’s not the “smooth on-ramp” we should be striving for here. We can and should do better, and I have a number of suggestions for how to do just that.

To some extent, OAuth will always be “hard” in the sense that it’s crypto–if you get one little bit wrong, the whole thing doesn’t work. The theory is, “yeah it might be hard the first time, but at least you only have to suffer that pain once, and then you can use it everywhere”. But even that promise falls short because most OAuth libraries and most OAuth providers have (or have had) bugs in them, and there aren’t good enough debugging, validating, and interop tools available to raise the quality bar without a lot of trial-and-error testing and back-and-forth debugging. I’m fortunate in that a) I’ve written and debugged OAuth code so many times that I’m really good at it now, and b) I personally know developers at most of the companies shipping OAuth APIs, but clearly most developers don’t have those luxuries, nor should they have to.

After I helped Dave get his code working, he said “you know, what you manually did for me was perfect. But there should have been a software tool to do that for me automatically”. He’s totally right, and I think with a little focused effort, the experience of implementing and debugging OAuth could be a ton better. So here are my suggestions for how to help make implementing OAuth easier. I hope to work on some or all of these in my copious spare time, and I encourage everyone that cares about OAuth and the Open Stack to pitch in if you can!

  • Write more recipe-style tutorials that take developers step-by-step through the process of building a simple OAuth Consumer that works with a known API in a bare-bones, no-fluff fashion. There are some good tutorials out there, but they tend to be longer on theory and shorter on “do this, now do this, now you should see that, …”, which is what developers need most to get up and running fast. I’ve written a couple such recipes so far–one for becoming an OpenID relying party, and one for using Netflix’s API–and I’ve gotten tremendous positive feedback on both, so I think we just need more like that.
  • Build a “transparent OAuth Provider” that shows the consumer exactly what signature base string, signature key, and signature it was expecting for each request. One of the most vexing aspects of OAuth is that if you make a mistake, you just get a 401 Unauthorized response with little or no debugging help. Clearly in a production system, you can’t expect the provider to dump out all the secrets they were expecting, but there should be a neutral dummy-API/server where you can test and debug your basic OAuth library or code with full transparency on both sides. In addition, if you’re accessing your own user-data on a provider’s site via OAuth, and you’ve logged in via username and password, there should be a special mode where you can see all the secrets, base strings, etc. that they’re expecting when you make an OAuth-signed request. (I plan to add this to Plaxo, since right now I achieve it by grepping through our logs and then IMing with the developers who are having problems, and this is, uhh, not scalable.)
  • Build an OAuth validator for providers and consumers that simulates the “other half” of the library (i.e. a provider to test consumer-code and a consumer to test provider-code) and that takes the code through a bunch of different scenarios, with detailed feedback at each step. For instance, does the API support GET? POST? Authorization headers? Does it handle empty secrets properly? Does it properly encode special characters? Does it compute the signature properly for known values? Does it properly append parameters to the oauth_callback URL when redirecting? And so on. I think the main reason that libraries and providers so far have all had bugs is that they really didn’t have a good way to thoroughly test their code. As a rule, if you can’t test your code, it will have bugs. So if we just encoded the common mistakes we’ve all seen in the field so far and put those in a validator, future implementations could be confident that they’ve nailed all the basics before being released in the wild. (And I’m sure we’d uncover more bugs in our existing libraries and providers in the process!)
  • Standardize the terms we use both in our tutorials and libraries. The spec itself is pretty consistent, but it’s already confusing enough to have a “Consumer Token”, “Request Token”, and “Access Token”, each of which consists of a “Token Key” and “Token Secret”, and it’s even more confusing when these terms aren’t used with exact specificity. It’s too easy to just say “token” to mean “request token” or “token” to mean “token key”–I do it all the time myself, but we really need to keep ourselves sharp when trying to get developers to do the right thing. Worse still, all the existing libraries use different naming conventions for the functions and steps involved, so it’s hard to write tutorials that work with multiple libraries. We should do a better job of using specific and standard terms in our tutorials and code, and clean up the stuff that’s already out there.
  • Consolidate the best libraries and other resources so developers have an easier time finding out what the current state-of-the-art is. Questions that should have obvious and easily findable answers include: is there an OAuth library in my language? If so, what’s the best one to use? How much has it been tested? Are there known bugs? Who should I contact if I run into problems using it? What are the current best tutorials, validators, etc. for me to use? Which companies have OAuth APIs currently? What known issues exist for each of those providers? Where is the forum/mailing-list/etc for each of those APIs? Which e-mail list(s) should I send general OAuth questions to? Should I feel confident that emails sent to those lists will receive prompt replies? Should I expect that bug reports or patches I submit there will quickly find their way to the right place? And so on.
  • Share more war stories of what we’ve tried, what hasn’t worked, and what we had to do to make it work. I applauded Dave for suffering his developer pain in public via his blog, and I did the same when working with Netflix’s API, but if we all did more of that, our collective knowledge of bugs, patterns, tricks, and solutions would be out there for others to find and benefit from. I should do more of that myself, and if you’ve ever tried to use OAuth, write an OAuth library, or build your own provider, you should too! So to get things started: In Dave’s case, the ultimate problem turned out to be that he was using his Request Secret instead of his Access Secret when signing API requests. Of course this worked when hitting the OAuth endpoint to get his Access token in the first place, and it’s a subtle difference (esp. if you don’t fully grok what all these different tokens are for, which most people don’t), but it didn’t work when hitting a protected API, and there’s no error message on any provider that says “you used the wrong secret when signing your request” since the secrets are never transmitted directly. The way I helped him debug it was to literally watch our debugging logs (which spit out all the guts of the OAuth signing process, including Base String, Signature Key, and final Signature), and then I sent him all that info and asked him to print out the same info on his end and compare the two. Once he did that, it was easy to spot and fix the mistake. But I hope you can see how all of the suggestions above would have helped make this process a lot quicker and less painful. (For a concrete picture of the debugging output in question, see the sketch right after this list.)
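
To make that debugging flow concrete, here’s a rough Python sketch of the three values our logs spit out, using only the standard library (the URL, keys, and secrets are all dummy placeholders). If the consumer and the provider each print these lines for the same request and diff them, a mistake like Dave’s jumps right out:

    import base64, hashlib, hmac, urllib.parse

    def enc(s):
        # OAuth percent-encoding: escape everything except unreserved characters
        return urllib.parse.quote(str(s), safe="~")

    def base_string(method, url, params):
        pairs = "&".join("%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
        return "&".join([method.upper(), enc(url), enc(pairs)])

    def sign(base, consumer_secret, token_secret):
        key = enc(consumer_secret) + "&" + enc(token_secret)
        digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    # Dummy values standing in for a real signed API request:
    params = {"oauth_consumer_key": "dummy-key", "oauth_token": "access-token",
              "oauth_nonce": "13917031", "oauth_timestamp": "1234567890",
              "oauth_signature_method": "HMAC-SHA1", "oauth_version": "1.0"}
    base = base_string("GET", "http://api.example.com/v1/contacts", params)

    print("Base String:   " + base)
    print("Signature Key: " + enc("consumer-secret") + "&" + enc("access-secret"))
    print("Signature:     " + sign(base, "consumer-secret", "access-secret"))
    # Dave's bug, reproduced: signing with the Request Secret instead yields a
    # different signature, and all the provider can say is 401 Unauthorized.
    print("Wrong secret:  " + sign(base, "consumer-secret", "request-secret"))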

What else can we as a community do to make OAuth easier for developers? Add your thoughts here or in your own blog post. As Dave remarked to me, “the number of OAuth developers out there is about to skyrocket” now that Google, Yahoo, MySpace, Twitter, Netflix, TripIt, and more are providing OAuth-protected APIs. So this is definitely the time to put in some extra effort to make sure OAuth can really achieve its full potential!

Test-Driving the New Hybrid

The quest to open up the Social Web is quickly shifting from a vision of the future to a vision of the present. Last week we reached an important milestone in delivering concrete benefits to mainstream users from the Open Stack. Together with Google, we released a new way to join Plaxo–without having to create yet-another-password or give away your existing password to import an address book. We’re using a newly developed “hybrid protocol” that blends OpenID and OAuth so Gmail users (or any users of a service supporting these open standards) can, in a single act of consent, create a Plaxo account (using OpenID) and grant access to the data they wish to share with Plaxo (using OAuth).

We’re testing this new flow on a subset of Plaxo invites sent to @gmail.com users, which means we can send those users through this flow without having to show them a long list of possible Identity Provider choices, and without them having to know their own OpenID URL. The result is a seamless and intuitive experience for users (“Hey Plaxo, I already use Gmail, use that and don’t make me start from scratch”) and an opportunity for both Plaxo and Google to make our services more interoperable while reducing friction and increasing security. I’m particularly excited about this release because it’s a great example of “putting the pieces together” (combining multiple Open Stack technologies in such a way that the whole is greater than the sum of the parts), and it enables an experience that makes a lot of sense for mainstream users, who tend to think of “using an existing account” as a combination of identity (“I already have a gmail password”) and data (“I already have a gmail address book”). And, of course, because this integration is based on open standards, it will be easy for both Google and Plaxo to turn around and do similar integrations with other sites, not to mention that the lessons we learn from these experiments will be helpful to any sites that want to build a similar experience.

To learn more about this integration, you can read Plaxo’s blog post and the coverage on TechCrunch, VentureBeat, or ReadWriteWeb (which got syndicated to The New York Times, neat!), and of course TheSocialWeb.tv. But I thought I’d take a minute to explain a bit more about how this integration works “under the hood”, and also share a bit of the backstory on how it came to be.

Under the hood

For those interested in the details of how the OpenID+OAuth hybrid works, and how we’re using it at Plaxo, here’s the meat: it’s technically an “OAuth Extension” for OpenID (using the standard OpenID extension mechanism already used by simple registration and attribute exchange) where the Relying Party asks the Identity Provider for an OAuth Request Token (optionally limited to a specific scope, e.g. “your address book but not your calendar data”) as part of the OpenID login process. The OP recognizes this extension and informs the user of the data the RP is requesting as part of the OpenID consent page. If the user consents, the OP sends back a pre-authorized OAuth Request Token in the OpenID response, which the RP can then exchange for a long-lived Access Token following the normal OAuth mechanism.
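
To make the mechanics concrete, here’s a rough Python sketch of the extra parameters involved. The endpoint, realm, return_to URL, and scope below are illustrative stand-ins, and the extension parameter names follow the draft spec as of this writing, so treat this as a sketch rather than gospel:

    import urllib.parse

    OP_ENDPOINT = "https://www.google.com/accounts/o8/ud"  # assumed OP endpoint

    params = {
        # The usual OpenID 2.0 login parameters (directed identity style)...
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.return_to": "https://www.plaxo.com/openid/return",
        "openid.realm": "https://www.plaxo.com/",
        # ...plus the OAuth extension: declare the namespace, name the
        # consumer, and optionally limit the scope of the requested token.
        "openid.ns.oauth": "http://specs.openid.net/extensions/oauth/1.0",
        "openid.oauth.consumer": "www.plaxo.com",
        "openid.oauth.scope": "http://www.google.com/m8/feeds/",
    }
    redirect_url = OP_ENDPOINT + "?" + urllib.parse.urlencode(params)

    # If the user consents, the positive assertion comes back with an extra
    # openid.oauth.request_token parameter: a pre-authorized Request Token
    # that the RP exchanges for an Access Token over the normal OAuth channel.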

Note that RPs still need to obtain an OAuth Consumer Key and Secret offline beforehand (we’ve worked on ways to support unregistered consumers, but they didn’t make it into the spec yet), but they *don’t* have to get an Unauthorized Request Token before initiating OpenID login. The point of obtaining a Request Token separately is mainly to enable desktop and mobile OAuth flows, where popping open a web browser and receiving a response isn’t feasible. But since OpenID login is always happening in a web browser anyway, it makes sense for the OP to generate and pre-authorize the Request Token and return it via OpenID. This also frees the RP from the burden of having to deal with fetching and storing request tokens. Especially given the rise in prominence of “directed identity” logins with OpenID (e.g. the RP just shows a “sign in with your Yahoo! account” button, which sends the OpenID URL “yahoo.com” and relies on the OP to figure out which user is logging in and return a user-specific OpenID URL in the response), the RP often can’t tell in advance which user is trying to log in and whether they’ve logged in before, and thus in the worst case it might otherwise have to generate a Request Token before every OpenID login, even though the majority of such logins won’t end up doing anything with that token.

Furthermore, the OP can feel confident that it’s not inadvertently giving away access to the user’s private data to an attacker, because a) it sends the request token back to the openid.return_to URL, which has to match the openid.realm that is displayed to the user (e.g. if the OP says “Do you trust plaxo.com to see your data?”, it knows it will only send the token back to a plaxo.com URL), and b) it only sends the Request Token in the “front channel” of the web browser, and the RP still has to exchange it for an Access Token on the “back channel” of direct server-to-server communication, which also requires signing with a Consumer Secret. In summary, the hybrid protocol is an elegant blend of OpenID and OAuth that is relatively efficient on the wire and at least as secure as each protocol on its own, if not more so.

Once a user signs up for Plaxo using the hybrid protocol, we can create an account for them that’s tied to their OpenID (using the standard recipe) and then attach the OAuth Access Token/Secret to the new user’s account. Then instead of having to ask the user to choose from a list of webmail providers to import their address book from, we see that they already have a valid Gmail OAuth token and we can initiate an automatic import for them–no passwords required! (We’re currently using Google’s GData Contacts API for the import, but as I demoed in December at the Open Stack Meetup, soon we will be able to use Portable Contacts instead, completing a pure Open Stack implementation.) Finally, when the user has finished setting up their Plaxo account, we show them a one-time “education page” that tells them to click “Sign in with your Google account” next time they return to Plaxo, rather than typing in a Plaxo-specific email and password (since they don’t have one).

However, because Google’s OP supports checkid_immediate, and because Plaxo sets a cookie when a user logs in via OpenID, in most cases we can invisibly and automatically keep the user logged into Plaxo as long as they’re still logged into Gmail. Specifically, if the user is not currently logged into Plaxo, but they previously logged in via OpenID, we attempt a checkid_immediate login (meaning we redirect to the OP and ask them if the user is currently logged in, and the OP immediately redirects back to us and tells us one way or the other). If we get a positive response, we log the user into Plaxo again, and as far as the user can tell, they were never signed out. If we get a negative response, we set a second cookie to remember that checkid_immediate failed, so we don’t try it again until the user successfully signs in. But the net result is that even though the concept of logging into Plaxo using your Google account may take some getting used to for mainstream users, most users will just stay automatically logged into Plaxo (as long as they stay logged into Gmail, which for most Gmail users is nearly always).
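
Sketched in Python, the silent re-login boils down to a little cookie bookkeeping plus a checkid_immediate redirect (the cookie names and URLs here are made up for illustration; handling the OP’s response works just like a normal OpenID login):

    import urllib.parse

    OP_ENDPOINT = "https://www.google.com/accounts/o8/ud"  # assumed OP endpoint

    def silent_login_url(return_to):
        # Same parameters as a normal login, but mode=checkid_immediate:
        # the OP answers instantly with id_res or setup_needed, with no UI.
        params = {
            "openid.ns": "http://specs.openid.net/auth/2.0",
            "openid.mode": "checkid_immediate",
            "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
            "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
            "openid.return_to": return_to,
            "openid.realm": "https://www.plaxo.com/",
        }
        return OP_ENDPOINT + "?" + urllib.parse.urlencode(params)

    def maybe_silent_login(session, cookies):
        """Return a redirect URL if we should attempt an invisible re-login."""
        if session.get("user_id"):
            return None  # already signed in
        if not cookies.get("logged_in_via_openid"):
            return None  # this browser never logged in via OpenID
        if cookies.get("checkid_immediate_failed"):
            return None  # failed last time; wait for an explicit sign-in
        return silent_login_url("https://www.plaxo.com/openid/return")

On the return_to handler, a positive assertion logs the user back in; a setup_needed response sets the “failed” cookie so we don’t keep retrying.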

The backstory

The concept of combining OpenID and OAuth has been around for over a year. After all, they share a similar user flow (bounce over to provider, consent, bounce back to consumer with data), and they’re both technologies for empowering users to connect the websites they use (providing the complementary capabilities of Authentication and Authorization, respectively). David Recordon and I took a first stab at specifying an OpenID OAuth Extension many months ago, but the problem was there were no OpenID Providers that also supported OAuth-protected APIs yet, so it wasn’t clear who could implement the spec and help work out the details. (After all, OAuth itself was only finalized as a spec in December ’07!) But then Google started supporting OAuth for its GData APIs, and they subsequently became an OpenID provider. Yahoo! also became hybrid-eligible (actually, they became an OpenID provider before Google, and added OAuth support later as part of Y!OS), and MySpace adopted OAuth for its APIs and shared their plans to become an OpenID provider as part of their MySpaceID initiative. Suddenly, there was renewed interest in finishing up the hybrid spec, and this time it was from people in a position to get the details right and then ship it.

The Google engineers had a bunch of ideas about clever ways to squeeze out extra efficiency when combining the two protocols (e.g. piggybacking the OAuth Request Token call on the OpenID associate call, or piggybacking the OAuth Access Token call on the OpenID check_authentication call). They also pointed out that given their geographically distributed set of data centers, their “front channel” and “back channel” servers might be on separate continents, so they couldn’t just assume that data could instantly be passed between them (e.g. a request token generated in one data center and then immediately shown on an authorization page served from another); the solution is to use a deterministically encrypted version of the token as its own secret, rather than storing that in a database or distributed cache. As we considered these various proposals, the tension was always between optimizing for efficiency vs. “composability”–there were already a decent number of standalone OpenID and OAuth implementations in the wild, and ideally combining them shouldn’t require drastic modifications to either one. In practice, that meant giving up on a few extra optimizations to decrease overall complexity and increase the ease of adoption–a theme that’s guided many of the Open Stack technologies. As a proof of the progress we made on that front, the hybrid implementation we just rolled out used our existing OpenID implementation as is, and our existing OAuth implementation as is (the same one we used for our recent Netflix integration), with no modifications required to either library. All we did was add the new OAuth extension to the OpenID login, as well as some simple logic to determine when to ask for an OAuth token and when to attach the token to the newly created user. Hurrah!
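
That stateless-token trick is worth spelling out, since it applies to any OAuth provider running in multiple data centers. A minimal sketch of the idea (the master key and token size here are arbitrary): because the secret is a keyed hash of the token itself, any server holding the master key can recompute it on demand, so nothing has to be stored or replicated:

    import base64, hashlib, hmac, os

    MASTER_KEY = b"shared-across-all-data-centers"  # illustrative only

    def token_secret(token):
        # Deterministic: every data center derives the same secret from
        # the token alone, with no database or cache lookup required.
        return hmac.new(MASTER_KEY, token.encode(), hashlib.sha256).hexdigest()

    def new_request_token():
        token = base64.urlsafe_b64encode(os.urandom(12)).decode().rstrip("=")
        return token, token_secret(token)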

A few more drafts of the hybrid spec were floated around for a couple months, but there were always a few nagging issues that kept us from feeling that we’d nailed it. Then came the Internet Identity Workshop in November, where we held a session on the state of the hybrid protocol to get feedback from the larger community. There was consensus that we were on the right track, and that this was indeed worth pursuing, but the nagging issues remained. Until that night, when as is IIW tradition we all went to the nearby Monte Carlo bar and restaurant for dinner and drinks. Somehow I ended up at a booth with the OpenID guys from Google, Yahoo, and Microsoft, and we started rehashing those remaining issues and thinking out loud together about what to do. Somehow everything started falling into place, and one by one we started finding great solutions to our problems, in a cascade that kept re-energizing us to keep working and keep pushing. Before I knew it, it was after midnight and I’d forgotten to ever eat any dinner, but by George we’d done it! I drove home and frantically wrote up as many notes from the evening as I could remember. I wasn’t sure what to fear more–that I would forget the breakthroughs that we’d made that night, or that I would wake up the next morning and realize that what we’d come up with in our late night frenzy was in fact totally broken. 🙂 Thankfully, neither of those things happened, and we ended up with the spec we’ve got today (plus a few extra juicy insights that have yet to materialize).

It just goes to show that there’s still no substitute for locking a bunch of people in a room for hours at a time to focus on a problem. (Though in this case, we weren’t so much locked in as enticed to stay with additional drink tickets, heh.) And it also shows the power of collaborating across company lines by developing open community standards that everyone can benefit from (and thus everyone is incentivized to contribute to). It was one of those amazing nights that makes me so proud and grateful to work in this community of passionate folks trying to put users in control of their data and truly open up the Social Web.

What’s next?

Now that we’ve released this first hybrid experiment, it’s time to analyze the results and iterate to perfection (or as close as we can get). While it’s too early to report on our findings thus far, let me just say that I’m *very* encouraged by the early results we’re seeing. 😉 Stay tuned, because we’ll eagerly share what we’ve learned as soon as we’ve had time to do a careful analysis. I foresee this type of onboarding becoming the norm soon–not just for Plaxo, but for a large number of sites that want to streamline their signup process. And of course the best part of doing this all with open standards is that everyone can learn along with us and benefit from each other’s progress. Things are really heating up and I couldn’t be more excited to keep forging ahead here!

Portable Contacts: The (Half) Year in Review

I’m excited and humbled by the amazing progress we’ve made this year on Portable Contacts, which started out as little more than a few conversations and an aspirational PowerPoint deck this summer. We’ve now got a great community engaged around solving this problem (from companies large and small as well as from the grass-roots), we had a successful Portable Contacts Summit together, we’ve got a draft spec that’s getting pretty solid, we’ve got several implementations in the wild (with many more in the works), we’ve achieved wire-alignment with OpenSocial’s RESTful people API, and we’ve seen how Portable Contacts when combined with other “open building blocks” like OpenID, OAuth, and XRD creates a compelling “Open Stack” that is more than the sum of its parts.

At the recent Open Stack Meetup hosted by Digg, I gave a presentation on the state of Portable Contacts, along with several demos of Portable Contacts in action (and our crew from thesocialweb.tv was on hand to film the entire set of talks). In addition to showing Portable Contacts working with Plaxo, MySpace, OpenSocial, and Twitter (via hCard->vCard->PoCo transformers), I was thrilled to be able to give the first public demo of Portable Contacts working live with Gmail. Better still, I was able to demo Google’s hybrid OpenID+OAuth onboarding plus OAuth-protected Portable Contacts Gmail API. In other words, in one fell swoop I was able to sign up for a Plaxo account using my existing Google account, and I was able to bring over my Google credentials, my pre-validated gmail.com e-mail address, and my Gmail address book–all at once, and all in an open, secure and vendor-neutral way. Now that’s progress worth celebrating!
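
For the curious, part of what makes Portable Contacts easy to adopt is how simple the wire format is. A response is shaped roughly like this (a hand-written illustration with made-up data, showing just a few of the fields the spec defines):

    {
      "startIndex": 0,
      "itemsPerPage": 1,
      "totalResults": 1,
      "entry": [{
        "id": "123",
        "displayName": "Joseph Smarr",
        "emails": [{ "value": "jsmarr@example.com", "type": "work" }]
      }]
    }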

I have no doubt that we’re on the cusp of what will become the default way to interact with most new websites going forward. The idea that you had to re-create an account, password, profile, and friends-list on every site that you wanted to check out, and that none of that data or activity flowed with you across the tools you used, will soon seem archaic and quaint. And if you think we came a long way in 2008, you ain’t seen nothing yet! There has never been more momentum, more understanding, and more consolidated effort behind opening up the social web, and the critical pieces–yes, including Portable Contacts–are all but in place. 2009 is going to be a very exciting year indeed!

So let me close out this amazing year by saying Thank You to everyone who’s contributed to this movement. Your passion is infectious and your efforts are all having a major and positive impact on the web. I feel incredibly fortunate to participate in this movement, and I know our best days are still ahead of us. Happy New Year!

A New Open Stack: Greater than the Sum of its Parts (Internet Identity Workshop 2008b)

A New Open Stack: Greater than the Sum of its Parts
Internet Identity Workshop 2008b
Mountain View, CA
November 10, 2008

Download PPT (5.5MB)

I was asked to give one of the opening overview talks at the Internet Identity Workshop about how the “Open Stack” is getting mainstream sites interested in supporting OpenID, OAuth, and Portable Contacts, because the combined value these technologies offer together is greater than the sum of their parts. Having learned so much myself at previous IIWs, it was both an honor and a unique challenge to address this crowd and do them justice–the audience is a mix of super-savvy veterans and new people just getting interested in the space, and I wanted to please everybody. So I put together a new talk with a new core message: the Open Stack is greater than the sum of its parts, and together these building blocks are delivering enough value to make the proposition a win-win-win for developers, users, and site owners to adopt and embrace.

The talk was well received, and it led to a lively discussion afterwards in the break and at dinner. I can’t wait to see what sessions people will call over the next two days to discuss these issues in more depth. It was certainly a joy to be able to demo running code on Yahoo, Google, and MySpace as part of my talk–this is no longer a theoretical exercise when it comes to talking about putting these standards to work! I was even able to show off a newly developed Android app that uses OAuth and Portable Contacts to allow import into your cell phone from an arbitrary address book. I just found out about the app this morning–now that’s the Open Stack in action!

As usual, John McCrea covered the event and provides a great write-up with pictures.

Update: MySpace’s Max Engel captured a good portion of my talk on video.

The Widgets Shall Inherit the Web (Widget Summit 2008)

The Widgets Shall Inherit the Web
Widget Summit 2008
San Francisco, CA
November 4, 2008

Download PPT (7.1MB)

For the second year in a row, I gave a talk at Niall Kennedy‘s Widget Summit in San Francisco. My my, what a difference a year makes! Last year, I was still talking about high-performance JavaScript, and while I’d started working on opening up the social web, the world was a very different place: no OpenSocial, no OAuth, no Portable Contacts, and OpenID was still at version 1.1, with very little mainstream support. Certainly, these technologies were not top-of-mind at a conference about developing web widgets.

But this year, the Open Stack was on everybody’s mind–starting with Cody Simms’s keynote on Yahoo’s Open Strategy, and following with talks from Google, hi5, and MySpace, all about how they’ve opened up their platforms using OpenSocial, OAuth, and the rest of the Open Stack. My talk was called “The Widgets Shall Inherit the Web”, and it explained how these open building blocks will greatly expand the abilities of widget developers to add value not just inside existing social networks, but across the entire web. John McCrea live-blogged my talk, as well as the follow-on talk from Max Engel of MySpace.

Most of the slides themselves came from my recent talk at Web 2.0 Expo NY, but when adapting my speech to this audience, something struck me: widget developers have actually been ahead of their time, and they’re in the best position of anyone to quickly take advantage of the opening up of the social web. After all, widgets assume that someone else is taking care of signing up users, getting them to fill out profiles and find their friends, and sharing activity with one another. Widgets live on top of that existing ecosystem and add value by doing something new and unique. And importantly, it’s a symbiotic relationship–the widget developers can focus on their unique value-add (instead of having to build everything from scratch), and the container sites get additional rich functionality they didn’t have to build themselves.

This is exactly the virtuous cycle that the Open Stack will deliver for the social web, so to this audience, it was music to their ears.

PS: Yes, I still voted on the same day I gave this talk. I went to the polls first thing in the morning, but I waited in line for over 90 minutes (!), so I missed some of the opening talks. Luckily my talk wasn’t until the afternoon. And of course, it was well worth the wait! 🙂

Using Netflix’s New API: A step-by-step guide

As a longtime avid Netflix fan, I was excited to see that they finally released an official API today. As an avid fan of the Open Web, I was even more excited to see that this API gives users full access to their ratings, reviews, and queue, and it does so using a familiar REST interface, with output available in XML, JSON, and ATOM. It even uses OAuth to grant access to protected user data, meaning you can pick up an existing OAuth library and dive in (well, almost, see below). Netflix has done a great job here, and deserves a lot of kudos!

Naturally, I couldn’t wait to get my hands on the API and try it out for real. After a bit of tinkering, I’ve now got it working so it gives me my own list of ratings, reviews, and recently returned movies, including as an ATOM feed that can be embedded as-is into a feed reader or aggregator. It was pretty straightforward, but I noticed a couple of non-standard things and gotchas along the way, so I thought it would be useful to share my findings. Hopefully this will help you get started with Netflix’s API even faster than I did!

So here’s how to get started with the Netflix API and end up with an ATOM feed of your recently returned movies:

  1. Sign up for Mashery (which hosts Netflix’s API) at http://developer.netflix.com/member/register (you have to fill out some basic profile info and respond to an email round-trip)
  2. Register for an application key at http://developer.netflix.com/apps/register (you say a bit about what your app does and it gives you a key and secret). When you submit the registration, it will give you a result like this:
    Netflix API: k5mds6sfn594x4drvtw96n37   Shared Secret: srKNVRubKX

    The first string is your OAuth Consumer Key and the second one is your OAuth Consumer Secret. I’ve changed the secret above so you don’t add weird movies to my account, but this gives you an idea of what it looks like. 🙂

  3. Get an OAuth request token. If you’re not ready to start writing code, you can use an OAuth test client like http://term.ie/oauth/example/client.php. It’s not the most user-friendly UI, but it will get the job done. Use HMAC-SHA1 as your signature method, and use http://api.netflix.com/oauth/request_token as the endpoint. Put your newly issued consumer key and secret in the corresponding fields, and click the “request_token” button. If it works, you’ll get a page with output like this:
    oauth_token=bpn8ycnma7hzuwec5dmt8f2j&oauth_token_secret=DArhPYzsUCkz&application_name=JosephSmarrTestApp&login_url=https%3A%2F%2Fapi-user.netflix.com%2Foauth%2Flogin%3Foauth_token%3Dbpn8ycnma7hzuwec5dmt8f2j

    Your OAuth library should parse this for you, but if you’re playing along in the test client, you’ll have to pull out the OAuth Request Token (in this case, bpn8ycnma7hzuwec5dmt8f2j) and OAuth Request Secret (in this case, DArhPYzsUCkz). Note it also tells you the application_name you registered (in this case, JosephSmarrTestApp), which you’ll need for the next step (this is not a standard part of OAuth, and I’m not sure why they require you to pass it along). They also give you a login_url, which is also non-standard, and doesn’t actually work, since you need to append additional parameters to it.

  4. Ask the user to authorize your request token. Here the OAuth test client will fail you because Netflix requires you to append additional query parameters to the login URL, and the test client isn’t smart about merging query parameters on the endpoint URL with the OAuth parameters it adds. The base login URL is https://api-user.netflix.com/oauth/login and as usual you have to append your Request Token as oauth_token=bpn8ycnma7hzuwec5dmt8f2j and provide an optional callback URL to redirect the user to upon success. But it also makes you append your OAuth Consumer Key and application name, so the final URL you need to redirect your user to looks like this:

    https://api-user.netflix.com/oauth/login?oauth_token=bpn8ycnma7hzuwec5dmt8f2j&oauth_callback=YOUR_CALLBACK_URL&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&application_name=JosephSmarrTestApp

    This is not standard behavior, and it will probably cause unnecessary friction for developers, but now you know. BTW if you’re getting HTTP 400 errors on this step, try curl-ing the URL on the command line, and it will provide a descriptive error message that may not show up in your web browser. For instance, if you leave out the application name, e.g.

    curl 'https://api-user.netflix.com/oauth/login?oauth_token=bpn8ycnma7hzuwec5dmt8f2j&oauth_callback=YOUR_CALLBACK_URL&oauth_consumer_key=k5mds6sfn594x4drvtw96n37'

    You’ll get the following XML response:

    <status>
      <status_code>400</status_code>
      <message>application_name is missing</message>
    </status>

    If your login URL is successfully constructed, it will take the user to an authorization page that looks like this:
    Netflix OAuth authorization page

    If the user approves, they’ll be redirected back to your oauth_callback URL (if supplied), and your request token has now been authorized.

  5. Exchange your authorized request token for an access token. You can use the OAuth test client again for this, and it’s basically just like getting the request token, except the endpoint is http://api.netflix.com/oauth/access_token and you need to fill out both your consumer key and secret as well as your request token and secret. Then click the access_token button, and you should get a page with output like this:
    oauth_token=T1lVQLSlIW38NDgeumjnyypbxc6yHD0xkaD21d8DpLVaIs3d2T1Aq_yeOor9PCIW2Bz5ksIPr7aXBKvTTg599m9Q–&user_id=T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-&oauth_token_secret=AKeGYam8NJ4X

    (Once again I’ve altered my secret to protect the innocent.) In addition to providing an OAuth Access Token and OAuth Access Secret (via the oauth_token and oauth_token_secret parameters, respectively), you are also given the user_id for the authorized user, which you need to use when constructing the full URL for REST API calls. This is non-standard for OAuth, and you may need to modify your OAuth library to return this additional parameter, but that’s where you get it. (It would be nice if you could use an implicit userID in API URLs like @me, and it could be interpreted as “the user that granted this access token”, so you could skip this step of having to extract and use an explicit userID; that’s how Portable Contacts and OpenSocial get around this problem. Feature request, anyone?)

  6. Use your access token to fetch the user’s list of protected feeds. Having now successfully gone through the OAuth dance, you’re now ready to make your first protected API call! You can browse the list of available API calls at http://developer.netflix.com/docs/REST_API_Reference and in each case, the URL starts out as http://api.netflix.com/ and you append the path, substituting the user_id value you got back with your access token wherever the path calls for userID. So for instance, to get the list of protected ATOM feeds for the user, the REST URL is http://api.netflix.com/users/userID/feeds, or in this case http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-/feeds.

    Here’s where the OAuth test client is a bit confusing: you need to put that feeds URL as the endpoint, fill out the consumer key and secret as normal, and fill out your *access* token and secret under the “request token / secret” fields, then click the “access_token” button to submit the OAuth-signed API request. If it works, you’ll get an XML response with a bunch of links to different protected feeds available for this user.

    Each link contains an href attribute pointing to the actual feed URL, as well as a rel attribute describing the type of data available for that link, and a human-readable title attribute. In our case, we want the “Titles Returned Recently” feed, which is available at http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-/rental_history/returned?feed_token=T1ksEAR97Ki14sIyQX2pfnGH0Llom4eaIDMwNWlUOmRZ0duD2YDbp_5PPUKBcedH51XSxPTnUOI5rCLz9feBXx9A–&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&output=atom (note that XML escapes the &s in URLs as entities, so you have to un-escape them to get the actual URL). As you can see, this feed URL looks like a normal API request, including my userID on the path, but with an extra feed_token parameter, which is different for each available user feed. This way, the ATOM feed can be fetched without having to do any OAuth signing, so you can drop it in your feed reader or aggregator of choice and it should just work. And giving access to one feed won’t let anyone access your other feeds, since they’re each protected with their own feed_token values.

  7. Fetch the feed of recently returned movies. Now you can just fetch the feed URL you found in the previous step (in my case, http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-/rental_history/returned?feed_token=T1ksEAR97Ki14sIyQX2pfnGH0Llom4eaIDMwNWlUOmRZ0duD2YDbp_5PPUKBcedH51XSxPTnUOI5rCLz9feBXx9A–&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&output=atom), and you’ll get nicely formatted “blog posts” back for each movie the user recently returned. Here’s a sample of how the formatted ATOM entries look:
    Netflix rental returns as a feed
    Of course, if you want to format the results differently, you can make a REST API call for the same data, e.g. http://api.netflix.com/users/userID/rental_history/returned, OAuth-sign it like you did in step 6, and you’ll get all the meta-data for each movie returned as XML, including various sizes of movie poster images.
  8. Profit! Now you’ve got a way to let your users provide access to their Netflix data, which you can use in a variety of ways to enhance your site. If this is the first time you’ve used OAuth, it might have seemed a little complex, but the good news is it’s the same process for all other OAuth-protected APIs you may want to use in the future. (To tie it all together, a condensed code sketch of the whole dance follows right after this list.)
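
For reference, here’s the whole dance condensed into one rough Python sketch using only the standard library. The keys are the dummy values from step 2, the callback URL is a placeholder, error handling is omitted, and the non-standard application_name and user_id wrinkles called out above are marked in comments:

    import base64, hashlib, hmac, time, urllib.parse, urllib.request, uuid

    CONSUMER_KEY = "k5mds6sfn594x4drvtw96n37"
    CONSUMER_SECRET = "srKNVRubKX"
    APP_NAME = "JosephSmarrTestApp"

    def enc(s):
        return urllib.parse.quote(str(s), safe="~")

    def signed_url(url, token="", token_secret=""):
        # Assemble the OAuth parameters, compute the HMAC-SHA1 signature,
        # and return the fully signed URL (Netflix accepts query-string auth).
        params = {"oauth_consumer_key": CONSUMER_KEY,
                  "oauth_nonce": uuid.uuid4().hex,
                  "oauth_signature_method": "HMAC-SHA1",
                  "oauth_timestamp": str(int(time.time())),
                  "oauth_version": "1.0"}
        if token:
            params["oauth_token"] = token
        pairs = "&".join("%s=%s" % (enc(k), enc(v)) for k, v in sorted(params.items()))
        base = "&".join(["GET", enc(url), enc(pairs)])
        key = (enc(CONSUMER_SECRET) + "&" + enc(token_secret)).encode()
        sig = base64.b64encode(hmac.new(key, base.encode(), hashlib.sha1).digest()).decode()
        return url + "?" + pairs + "&oauth_signature=" + enc(sig)

    def fetch_params(url):
        return dict(urllib.parse.parse_qsl(urllib.request.urlopen(url).read().decode()))

    # Step 3: get a request token
    rt = fetch_params(signed_url("http://api.netflix.com/oauth/request_token"))

    # Step 4: send the user off to authorize it (note Netflix's non-standard extras)
    print("Authorize here: https://api-user.netflix.com/oauth/login?" +
          urllib.parse.urlencode({"oauth_token": rt["oauth_token"],
                                  "oauth_callback": "http://example.com/callback",
                                  "oauth_consumer_key": CONSUMER_KEY,
                                  "application_name": APP_NAME}))
    input("Hit Enter once you have approved the app... ")

    # Step 5: exchange the authorized request token for an access token
    at = fetch_params(signed_url("http://api.netflix.com/oauth/access_token",
                                 rt["oauth_token"], rt["oauth_token_secret"]))

    # Step 6: make an OAuth-signed API call, using Netflix's extra user_id
    feeds_url = "http://api.netflix.com/users/%s/feeds" % at["user_id"]
    print(urllib.request.urlopen(signed_url(feeds_url, at["oauth_token"],
                                            at["oauth_token_secret"])).read().decode())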

I hope you found this helpful. If anything is confusing, or if I made any mistakes in my write-up, please leave a comment so I can make it better. Otherwise, let me know when you’ve got your Netflix integration up and running!

Performance Challenges for the Open Web (Stanford CS193H)

Performance Challenges for the Open Web
Stanford CS193H: High Performance Web Sites
Stanford, CA
September 29, 2008

Download PPT (6.8 MB)

Web site performance guru Steve Souders is teaching a class at Stanford this fall on High Performance Web Sites (CS193H). He invited me to give a guest lecture to his class on the new performance challenges emerging from our work to open up the social web. As a recent Stanford alum (SSP ’02, co-term ’03), it was a thrill to get to teach a class at my alma mater, especially in the basement of the Gates building, where I’ve taken many classes myself.

I originally met Steve at OSCON 07 when I was working on high-performance JavaScript, and we were giving back-to-back talks. We immediately hit it off and have remained in good touch since. Over the last year or so, however, my focus has shifted to opening up the social web. So when Steve asked me to speak at his class, my first reaction was “I’m not sure I could tell your students anything new that isn’t already in your book”.

But upon reflection, I realized that a lot of the key challenges in creating a truly social web are directly related to performance, and the set of performance challenges in this space are quite different than in optimizing a single web site. In essence, the challenge is getting multiple sites to work together and share content in a way that’s open and flexible but also tightly integrated and high-performance. Thus my new talk was born.

I provided the students with an overview of the emerging social web ecosystem, and some of the key open building blocks making it possible (OpenID, OAuth, OpenSocial, XRDS-Simple, microformats, etc.). I then gave some concrete examples of how these building blocks can play together, and that led naturally into a discussion of the performance challenges involved.

I broke the challenges into four primary categories:

  • minimizing round trips (the challenge is combining steps to optimize vs. keeping the pieces flexible and simple),
  • caching (storing copies of user data for efficiency vs. always having a fresh copy; see the conditional-GET sketch after this list),
  • pull vs. push (the difficulty of scaling mass-polling and the opportunities presented by XMPP and Gnip to decrease both latency and load), and
  • integrating third-party content (proxying vs. client-side fetching, iframes vs. inline integration, etc.).
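
To make the caching trade-off concrete, here’s a small Python sketch of HTTP conditional GET against a hypothetical contacts endpoint: the aggregator keeps a cached copy plus its ETag, and the server only re-sends the data when it has actually changed:

    import urllib.error, urllib.request

    def fetch_if_changed(url, etag=None):
        # Returns (body, etag); body is None when the cached copy is still fresh.
        req = urllib.request.Request(url)
        if etag:
            req.add_header("If-None-Match", etag)
        try:
            resp = urllib.request.urlopen(req)
            return resp.read(), resp.headers.get("ETag")
        except urllib.error.HTTPError as e:
            if e.code == 304:  # Not Modified: keep serving the cached copy
                return None, etag
            raise

    # body, etag = fetch_if_changed("http://api.example.com/people/123/@all", etag)

This saves bandwidth and parsing on every poll where nothing changed, though the consumer still pays the latency of the round trip; that remaining latency is exactly the gap that push approaches like XMPP aim to close.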

In each of these cases, there are fundamental trade-offs to make, so there’s no “easy, right answer”. But by understanding the issues involved, you can make trade-offs that are tailored to the situation at hand. Some of the students in that class will probably be writing the next generation of social apps, so I’m glad they can start thinking about these important issues today.

Web 2.0/Web 3.0 Mashup (EmTech08)

Web 2.0/Web 3.0 Mashup
Emerging Technologies Conference at MIT (EmTech08)
Boston, MA
September 24, 2008

I was invited to speak on a panel at EmTech, the annual conference on emerging technologies put on by MIT’s TechnologyReview Magazine, on the future of the web. The conference spans many disciplines (alternative energy, cloud computing, biotech, mobile, etc.) and we were the representatives of the consumer internet, which was quite a humbling task! Robert Scoble moderated the panel, which featured me, David Recordon, Dave Morin, and Nova Spivack.

It was a loose and lively back-and-forth discussion of the major trends we see on the web today: it’s going social, it’s going open, it’s going real-time, and it’s going ubiquitous. These trends are all working together: it’s now common (at least in Silicon Valley) to use your iPhone on the go to see what articles/restaurants/etc your friends have recommended from a variety of distributed tools, aggregated via FriendFeed, Plaxo Pulse, or Facebook. A lot of the vision behind the Semantic Web (structured data enabling machine-to-machine communication on a user’s behalf) is now happening, but it’s doing so bottom-up, with open standards that let users easily create content online and share it with people they know. As the audience could clearly tell from our passionate and rapid-fire remarks, this is an exciting and important time for the web.

We got lots of positive feedback on our panel from attendees (and also via twitter, of course), as well as from the TR staff. We even received the distinct honor of attracting snarky posts from both Valleywag and Fake Steve Jobs (if you don’t know the valley, trust me: that’s a good thing). You can watch a video of the entire panel on TechnologyReview’s website.

I must say I’m quite impressed with TechnologyReview and EmTech. They do a good job of pulling together interesting people and research from a variety of technical frontiers and making it generally accessible but not dumbed-down. The piece they wrote recently on opening up the social web (which featured a full-page photo of yours truly diving into a large bean bag) was perhaps the most insightful mainstream coverage to date of our space. They gave me a free one-year subscription to TR for speaking at EmTech, and I’ll definitely enjoy reading it. Here’s looking forward to EmTech09!

Tying it All Together: Implementing the Open Web (Web 2.0 Expo New York)

Tying it All Together: Implementing the Open Web
Web 2.0 Expo New York
New York, NY
September 19, 2008

Download PPT (7.2 MB)

I gave the latest rev of my talk on how the social web is opening up and how the various building blocks (OpenID, OAuth, OpenSocial, PortableContacts, XRDS-Simple, Microformats, etc.) fit together to create a new social web ecosystem. Thanks to Kris Jordan, Mark Scrimshire, and Steve Kuhn for writing up detailed notes of what I said. Even though my talk was scheduled for the last time slot on the last day of the conference, it was well attended and the audience was enthusiastic and engaged, which I always take as a good sign.

I think the reason that people are reacting so positively to this message (besides the fact that I’m getting better with practice at explaining these often complex technologies in a coherent way!) is that it’s becoming more real and more important every day. It’s amazing to me how much has happened in this space even since my last talk on this subject at Google I/O in May (I know because I had to update my slides considerably since then!). Yahoo has staked its future on going radically open with Y!OS, and it’s using the “open stack” to do it. MySpace hosted our Portable Contacts Summit (an important new building block), and is using OpenID, OAuth, and OpenSocial for its “data availability” platform. Google now uses OAuth for all of its GData APIs. These are three of the biggest, most mainstream consumer web businesses around, and they’re all going social and open in a big way.

At the same time, the proliferation of new socially-enabled services continues unabated. This is why users and developers are increasingly receptive to an Open Web in which the need to constantly re-create and maintain accounts, profiles, friends-lists, and activity streams is reduced. And even though some large sites like Facebook continue to push a proprietary stack, they too see the value of letting their users take their data with them across the social web (which is precisely what Facebook Connect does). Thus all the major players are aligned in their view of the emerging “social web ecosystem” in which Identity Providers, Social Graph Providers, and Content Aggregators will help users interact with the myriad social tools we all want to use.

So basically: everyone agrees on the architecture, most also agree on the open building blocks, and nothing prevents the holdouts from going open if/when they decide it’s beneficial or inevitable. This is why I’m so optimistic and excited to be a part of this movement, and it’s why audiences are so glad to hear the good news.

PS: Another positive development since my last talk is that we’re making great progress on actually implementing the “open stack” end-to-end. One of the most compelling demos I’ve seen is by Brian Ellin of JanRain, which shows how a user can sign up for a new site and provide access to their private address book, all in a seamless and vendor-neutral way!

OpenSocial, OpenID, and OAuth! Oh, My! (Google I/O)

OpenSocial, OpenID, and OAuth! Oh, My!
Google I/O
San Francisco, CA
May 29, 2008

Download PPT (7.3 MB)

Update: Google has posted a full-length video of my talk, along with a web-friendly copy of my slides.

I was one of only a few non-Google employees invited to give a talk at Google’s big developer conference, Google I/O, in San Francisco. This was a huge event, and Google clearly went all-out on design and production. Not only were there a ton of talks and an amazing reception party, the open spaces were filled with colorful balls, beanbags, drink and snack stations (including made-to-order giant pretzels with salt), pool tables, demo areas, and more. This definitely felt like being inside the Googleplex.

Most of the talks focused on a particular Google API, product, or service, and they were organized into tracks like “Maps & Geo”, “Mobile”, and of course “Social”, where my talk lived. Not surprisingly, most of the Social talks focused on OpenSocial, and originally I was asked to present as an OpenSocial container (on behalf of Plaxo). When I suggested that I could probably add even more value by talking about all the other building blocks of the open social web and how they complement OpenSocial, they were enthusiastic, and so my talk was born. I got to do a first version of a talk on this theme at Web 2.0 Expo in April, but enough had changed in the world since then that I had to do quite a bit of revising and adding to that talk for Google I/O (a sign of how quickly things are moving in this space!).

I gave my talk on Thursday morning and the room was literally packed to the walls. Several people came up to me afterwards and lamented that they’d tried to get in but were turned away because the room was already over capacity. Wow, I guess people really do want to understand how the social web is opening up! I was very pleased with how the talk went, judging both by the positive feedback I received (in person and in tweets) and by the long and engaged Q&A session that followed for more than half an hour after the talk officially ended. Interestingly, 100% of the questions were about the details of how these technologies work and how to best apply them, rather than whether opening up the social web is a good idea in the first place or whether it’s feasible. Granted, this was a developer conference, but it’s still a strong indication to me of the momentum that our movement has generated, and the increasing extent to which people view it as both inevitable and good. We’re definitely making progress, and I couldn’t be more excited to keep pushing forward!

Update: My partner-in-crime John McCrea has coverage of my talk, including photos and a video clip he shot towards the end of my talk.

