Joseph Smarr

Thoughts on web development, tech, and life.

Sources of inspiration for 2010

Despite all the serious challenges the world is facing these days, I’m also seeing more and more that inspires me–cases where great things are happening to great people doing great work and improving the world in the process. Specifically, the following recent examples come to mind:

  • Movies: Avatar
  • Music: Lady Gaga
  • TV: Netflix HD streaming to Blu-Ray/TiVo/Xbox/etc.
  • Mobile: Palm Pre & Nexus One
  • Desktop: Chrome OS
  • Social: Foursquare

What do all of these have in common? They’re all cases of insanely talented outsiders changing the world by just working really hard and doing great stuff.

Everyone said James Cameron was crazy to make Avatar, just like they said he was crazy to make Titanic. They’re both such impossibly grand, expensive, and difficult visions to capture. But he did it anyway, the work is brilliant, and the only things that got broken were the previous box-office records.

Same thing with Lady Gaga: just two years ago, no one had heard of her, and she was just playing little clubs in New York. But using her incredible talent in songwriting and performance art, and a willingness to work insanely hard every day, she unleashed her strange and unique vision of music/fashion/art/performance and took the world by storm, becoming the first artist ever to score four #1 hits off her debut album, and at age 23 no less. If you haven’t paid close attention and think she’s just another made-to-order corporate pop starlet, take a closer look; you’ll be surprised (as I was).

Netflix is certainly an outsider when it comes to watching TV in your living room (it’s neither a cable provider nor a set-top box manufacturer), yet now that I can watch entire seasons of Lost in HD on my TV whenever I want–thanks to its Watch Instantly streaming and integrations with existing set-top boxes and gaming consoles–I find myself rarely watching “real TV” any more. And Netflix’s user experience is far superior–it knows which episodes I’ve already seen, so I can just pick up where I left off whenever I have a free moment. And if I’m up at Tahoe for the weekend, I can watch it there too on my laptop, and my episode history is kept in sync because it’s stored in the cloud. Brilliant.

Both Palm Pre and Android (hard to pick a favorite yet!) are up against fierce competition from Apple and the old-world mobile establishment, and neither company (Palm or Google) is an established player in this space, but they’re both producing excellent devices that simultaneously improve the quality of the experience for users while opening up more flexibility and power to developers. And they’re also making web development more of a “first-class citizen” for mobile apps. It’s hard to think of a more ambitious challenge than building your own mobile platform–hardware, OS, software stack, apps, and distribution in physical stores–but that’s not stopping these guys from having a major impact, and the game is just beginning.

Chrome OS isn’t even out yet really, but it’s already clear that the desktop will soon evolve toward Google’s vision, where all important data lives in the cloud, and it will no longer matter if your computer dies or if you want to use multiple computers in different places–a pain that I’ve experienced many times, as I’m sure you have. Windows is one of the most well-established monopolies there is, so again it’s crazy in some sense to try and compete there, let alone with a radical new vision that does a lot *less* than the status quo, and instead re-imagines the problem in a new way. And yet people buy new computers all the time, so it’s not hard to believe that Google could establish considerable market share in a short number of years, while forcing radical change from its competitors at the same time.

And perhaps closest to my own area of work on the Social Web, I think it’s noteworthy that the company that has had the biggest positive impact on how I connect and share with my friends in the last year is not any of the big established players, but a tiny startup that’s building itself up from scratch by making it easier and more rewarding to share where you are and what you’re doing: Foursquare. While I cringe at the amount of work they have to do to integrate with each separate social network and build apps for each separate mobile device (that’s why we need more common standards, of course!), they’re still able to deliver an awesome product with a tiny team, and their service is taking off like a rocket.

Why are these examples so inspiring to me? They provide reassurance that in 2010, two basic things are still true–perhaps more true than ever before:

  1. You *can* win by being excellent and working hard to build a better product
  2. You *can* win even if you’re an outsider in a field of powerful incumbents

It’s hard to believe in transformational innovation if you can’t believe in those two points, and it’s often easy to get discouraged, since these are such difficult challenges. But if those guys can all do it, so can we. In fact, it’s not hard to believe that it’s actually getting easier to succeed in these ways. After all, barriers to entry keep getting lowered, and the spread of information keeps getting faster and more efficient, so good work should get discovered and bubble to the top faster than ever before. If that’s true, then the new “hard problem” should be doing great work in the first place, and that’s the problem I want to be tackling!

What examples are inspiring you right now?

Joseph Smarr has new work info…

High on my to-do list for 2010 will be to update my contact info in Plaxo, because I’ll be starting a new job in late January. After nearly 8 amazing years at Plaxo, I’m joining Google to help drive a new company-wide focus on the future of the Social Web. I’m incredibly excited about this unique opportunity to turbo-charge my passionate pursuit of a Social Web that is more open, interoperable, decentralized, and firmly in the control of users.

I’ve worked closely with Google as a partner in opening up the social web for several years, and they’ve always impressed me with their speed and quality of execution, and more importantly, their unwavering commitment to do what’s right for users and for the health of the web at large. Google has made a habit of investing heavily and openly in areas important to the evolution of the web—think Chrome, Android, HTML5, SPDY, Public DNS, etc. Getting the future of the Social Web right—including identity, privacy, data portability, messaging, real-time data, and a distributed social graph—is just as important, and the industry is at a critical phase where the next few years may well determine the platform we live with for decades to come. So when Google approached me recently to help coordinate and accelerate their innovation in this area, I could tell by their ideas and enthusiasm that this was an opportunity I couldn’t afford to pass up.

Now, anyone who knows me should immediately realize two things about this decision—first, it in no way reflects a lack of love or confidence from me in Plaxo, and second, I wouldn’t have taken this position if I hadn’t convinced myself that I could have the greatest possible impact at Google. For those that don’t know me as well personally, let me briefly elaborate on both points:

I joined Plaxo back in March of 2002 as their first non-founder employee, before they had even raised their first round of investment. I hadn’t yet finished my Bachelor’s Degree at Stanford, and I’d already been accepted into a research-intensive co-terminal Master’s program there, but I was captivated by Plaxo’s founders and their ideas, and I knew I wanted to be a part of their core team. So I spent the first 15 months doing essentially two more-than-full-time jobs simultaneously (and pretty much nothing else). Since that time, I’ve done a lot of different things for Plaxo—from web development to natural language processing to stats collection and analysis to platform architecture, and most recently, serving as Plaxo’s Chief Technology Officer. Along the way, I’ve had to deal with hiring, firing, growth, lack of growth, good press, bad press, partnerships with companies large and small, acquisitions—both as the acquirer and the acquiree—and rapidly changing market conditions (think about it: we started Plaxo before users had ever heard of flickr, LinkedIn, friendster, Gmail, Facebook, Xobni, Twitter, the iPhone, or any number of other companies, services, and products that radically altered what it means to “stay in touch with the people you know and care about across all the tools and services that you and they use”). When I joined Plaxo, there were four of us. Now we have over 60 employees, and that’s not counting our many alumni. All of this is to make the following plain: Plaxo has been my life, my identity, my passion, and my family for longer than I’ve known my wife, longer than I was at Stanford, and longer than I’ve done just about anything before. Even at a year-and-a-half since our acquisition by Comcast, Plaxo has the same magic and mojo that’s made it a joy and an honor to work for all these years. And with our current team and strategic focus, 2010 promises to be one of the best years yet. So I hope this makes it clear that I was not looking to leave Plaxo anytime soon, and that the decision to do so is one that I did not make lightly.

Of all the things I’ve done at Plaxo over the years, my focus on opening up the Social Web over the past 3+ years is the work I’m proudest of, and the work that I think has had the biggest positive impact—both for Plaxo and the web itself. Actually, it really started way back in 2004, when I first read about FOAF and wrote a paper about its challenges from Plaxo’s perspective, for which I was then selected to speak at my first industry conference, the FOAF Workshop in Galway, Ireland. From that time on, I realized what a special community of people cared about these issues in a web-wide way, and I tried to participate on the side and in my free time whenever possible. After leading Plaxo’s web development team to build a rich and complex new AJAX address book and calendar (something that also reinforced to me the value of community participation and public speaking, albeit on the topic of high-performance JavaScript), I knew I wanted to work on the Social Web full-time, and luckily it coincided perfectly with Plaxo’s realization that fulfilling our mission required focusing on more than just Outlook, webmail, and IM as important sources of “people data”. So we crafted a new role for me as Chief Platform Architect, and off I went, turning Plaxo into the first large-scale OpenID Relying Party, the first live OpenSocial container, co-creator of the Portable Contacts spec, co-creator and first successful deployment of hybrid onboarding combining OpenID and OAuth, and so on. Along the way I co-authored the Bill of Rights for Users of the Social Web, coined the term Open Stack, was elected to the Boards of both the OpenID Foundation and OpenSocial Foundation, and worked closely with members of the grass-roots community as well as with people at Google, Yahoo, Microsoft, AOL, Facebook, MySpace, Twitter, LinkedIn, Netflix, The New York Times, and others, often as a launch partner or early adopter of their respective forays into supporting these same open standards. And collectively, I think it’s fair to say that our efforts greatly accelerated the arrival, quality, and ubiquity of a Social Web ecosystem that has the potential to be open, decentralized, and interoperable, and that may define the next wave of innovation in this space, much as the birth of the web itself did nearly 20 years ago.

But we’re not done yet. Not by a long shot. And the future is never certain.

At the recent OpenID Summit hosted by Yahoo!, I gave a talk in which I outlined the current technical and user-experience challenges standing in the way of OpenID becoming truly successful and a “no-brainer” for any service large or small to implement. Despite all the progress that we’ve made over the past few years, and that I’ve proudly contributed to myself, there is no shortage of important challenges left to meet before we can reach our aspirations for the Social Web. There is also no shortage of people committed to “fighting the good fight”, but as with any investment for the future with a return that will be widely shared, most people and companies are forced to make tough trade-offs about whether to focus on what already works today or what may work better tomorrow. There are a lot of good people in a lot of places working on the future of the Social Web, and we need them all and more. But in my experience, Google is unmatched in its commitment to doing what’s right for the future of the web and its willingness to think long-term. One need only look at the current crop of Social Web “building blocks” being actively worked on and deployed by Google—including OpenID, OAuth, Portable Contacts, OpenSocial, PubSubHubbub, Webfinger, Salmon, and more—to see how serious they are. And yet they came to me because they want to turn up the intensity and focus and coordination and boldness even more.

I talked to a lot of Googlers before deciding to join, and from the top to the bottom they really impressed me with how genuinely they believe in this cause that I’m so passionate about, and how strong a mandate I felt throughout the company to do something great here. I also heard over and over how surprisingly easy it still is to get things built and shipped — both new products, tools, and specs, as well as integrating functionality into Google’s existing services. And, of course, there are so many brilliant and talented people at Google, and so much infrastructure to build on, that I know I’ll have more opportunity to learn and have an impact than I could ever hope for anywhere else. So while there are other companies large and small (or perhaps not yet in existence) where I could also have some form of positive impact on the future of the Social Web, after looking closely at my options and doing some serious soul searching, I feel confident that Google is the right place for me, and now is the right time.

Let me end by sincerely thanking everyone that has supported me and worked with me not just during this transition process but throughout my career. I consider myself incredibly fortunate to be surrounded by so many amazing people that genuinely want to have a positive impact on the world and want to empower me to do the best that I can to contribute, even if it means doing so from inside (or outside) a different company. It’s never easy to make big decisions involving lots of factors and rapidly changing conditions, let alone one with such deep personal and professional relationships at its core. Yet everyone has treated me with such respect, honesty, and good faith, that it fills me with a deep sense of gratitude, and reminds me why I so love living and working in Silicon Valley.

2010 will be an exciting and tumultuous year for the Social Web, and so will it be for me personally. Wish us both luck, and here’s to the great opportunities that lie ahead!

What an RP Wants, Part 2 (OpenID Summit 2009)

What an RP Wants, Part 2
OpenID Summit 2009 (Hosted by Yahoo!)
Mountain View, CA
November 2, 2009

Download PPT (2.1 MB)

I was invited to give a talk at the OpenID Summit as a follow-up to my talk “What an RP Wants”, which I gave in February at the OpenID Design Summit. In both cases, I shared my experiences from Plaxo’s perspective as a web site that is trying to succeed at letting users sign up using accounts they already have on Google, Yahoo, and other OpenID Provider sites. This talk reviewed the progress we’ve made as a community since February, and laid out the major remaining challenges to making it a truly successful end-to-end experience to be an OpenID Relying Party (RP).

My basic message was this: we’ve made a lot of progress, but we’ve still got a lot left to do. So let’s re-double our efforts and commit ourselves once again to working together and solving these remaining problems. As much success as OpenID has had to date, its continued relevance is by no means guaranteed. But I remain optimistic because the same group of people that have brought us this far are still engaged, and none of the remaining challenges are beyond our collective ability to solve.

See more coverage of the OpenID Summit, including my talk, at The Real McCrea.

Full video of my Google I/O talk now available

The kind Google I/O folks have now posted a full video of my recent talk, “The Social Web: An Implementer’s Guide”. They did a great job editing back and forth between me and my slides. If you weren’t able to make it to Google I/O (or you thought the talk was so good you want to see it again), then please check it out! :)

The Social Web: An Implementer’s Guide (Google I/O 2009)

The Social Web: An Implementer’s Guide
Google I/O 2009
San Francisco, CA
May 28, 2009

Download PPT (7.3 MB)

Google invited me back for a second year in a row to speak at their developer conference about the state-of-the-art of opening up the social web. While my talk last year laid out the promise and vision of an interoperable social web ecosystem, this year I wanted to show all the concrete progress we’ve made as an industry in achieving that goal. So my talk was full of demos–signing up for Plaxo with an existing Gmail account in just two clicks, using MySpaceID to jump into a niche music site without a separate sign-up step, ending “re-friend madness” by honoring Facebook friend connections on Plaxo (via Facebook Connect), killing the “password anti-pattern” with user-friendly contact importers from a variety of large sites (demonstrated with FriendFeed), and sharing activity across sites using Google FriendConnect and Plaxo. Doing live demos is always a risky proposition, especially when they involve cross-site interop, but happily all the demos worked fine and the talk was a big success!

I began my talk by observing that the events of the last year have made it clear: The web is going social, and the social web is going open. By the end of my talk, having shown so many mainstream sites with deep, user-friendly interoperability, I decided to go a step further and declare: The web is now social, and the social web is now open. You don’t have to wait any longer to start reaping the benefits. It’s time to dive in.

Portable Contacts and vCardDAV (IETF 74)

Portable Contacts and vCardDAV
IETF 74
San Francisco, CA
March 25, 2009

Download PPT (81 KB) or PDF

You may remember the venerable IETF standards body from such foundational internet RFCs as HTTP (aka the web), SMTP (aka e-mail), and vCard (aka contact info). So I’ll be honest that I was a bit intimidated when they invited me to their IETF-wide conference to speak about my work on Portable Contacts.

In addition to being chartered with updating vCard itself, the IETF has a working group building a read-write standard for sharing address book data called CardDAV. (It’s a set of extensions to WebDAV for contact info, hence the name.) Since Portable Contacts is also trying to create a standard for accessing contact info online (albeit with a less ambitious scope and feature set), I was eager to share the design decisions we had made and the promising early adoption we’ve already seen.

My optimistic hope was that perhaps some of our insights might end up influencing the direction of CardDAV–or perhaps even vCard itself. But I was also a bit nervous that such an august and rigorous standards body might have little interest in the pontifications of a “scrappy Open Stack hacker” like me. Or that even if they liked what I said, it might be impossible to have an impact this late in the game. But I figured if nothing else, here’s a group of people that are as passionate about the gory details of contact info as we are, so at least we should meet one another and see where it leads.

Boy was I impressed and inspired by their positive reception of my remarks! Far from being a hostile or uninterested audience, everyone seemed genuinely excited by the work we’d done, especially since companies large and small are already shipping compliant implementations. The Q&A was passionate and detailed, and it spilled out into the hallway and continued long after the session officially ended.

Best of all, I then got to sit down with Simon Perreault, one of the primary authors of vCard 4.0 and vCardXML, and we went literally line-by-line through the entire Portable Contacts spec and wrote a list of all the ways it differs from the next proposed version of vCard. As you might imagine, there were some passionate arguments on both sides concerning the details, but actually there were really no “deal breakers” in there, and Simon sounded quite open (or even excited) about some of the “innovations” we’d made. It really does look like we might be able to get a common XML schema across PoCo and vCard / CardDAV, and some of the changes might well land in core vCard!

Of course, any official spec changes will happen through the normal IETF mailing lists and process. But as I’m sure you can tell, I think things went amazingly well today, and the future of standards for sharing contact info online has never looked brighter! Thanks again to Marc Blanchet, Cyrus Daboo, and the rest of the vCardDAV working group for their invitation and warm reception. Onward ho!

Social data sharing will change lives and business (DEMO 09)

Social data sharing will change lives and business
DEMO 09
Palm Desert, CA
March 3, 2009

Watch full video (via Brightcove)

I flew down to an oddly-lush oasis in the middle of the desert last week to sit on a panel at DEMO about the future of the social web. Max Engel from MySpace has a nice write-up of the event, and a full video of our panel is available on Brightcove. Eric Eldon of VentureBeat moderated the panel, which featured me, Max Engel, Dave Morin from Facebook, and Kevin Marks from Google. In addition to a lively discussion, we each demoed our “latest and greatest” efforts at opening up the social web. Max showed the first public demo of MySpace’s support for hybrid OpenID+OAuth login using a friendly popup, Kevin showed off how to add FriendConnect to any blog, and Dave showed off some new examples of Facebook Connect in the wild. I showed our new Google-optimized onboarding experiment with Plaxo, and revealed that it’s working so well that we’re now using it for 100% of new users invited to Plaxo via a gmail.com email address.

It’s just amazing and inspiring to me that these major mainstream internet sites are all now able to stand up and demo slick, user-friendly cross-site interoperability and data sharing using open APIs, and we’re all continuing to converge on common standards so developers don’t have to write separate code to interoperate with each site. You can really measure the speed of progress in this space by watching the quantity and quality of these Open Web demos continue to increase, and with SXSW, Web 2.0 Expo, Google I/O, and Internet Identity Workshop all still to come in the first half of 2009, I have a feeling that we all ain’t seen nuthin’ yet! :)

Missing the Oscars Finale: A Case Study in Technology Failure (and Opportunity)

Yesterday one of my wife’s friends came over to visit, and we decided on a lark to watch the Oscars (which we haven’t done most years). Even though we pay for Cable and are avid TiVo users, due to a variety of circumstances we missed both the beginning of the Oscars and–more importantly–the entire finale, from best female actress through best picture. My frustration and indignation led me to think systematically about the various ways that technology could and should have helped us avoid this problem. I decided to share my thoughts in the hope that better understanding technology’s current limitations will help inspire and illuminate the way to improving them. As usual, I welcome your feedback, comments, and additional thoughts on this topic.

The essence of the failure was this: the Oscars was content that we wanted to watch and were entitled to watch, but were ultimately unable to watch. Specifically, here’s what went wrong, and what could and should have worked better:

  • Nothing alerted me that the Oscars was even on that day, nor did it prompt me to record it. I happened to return home early that day from the Plaxo ski trip, but might well have otherwise missed it completely. This is ridiculous given that the Oscars is a big cultural event in America, and that lots of people were planning to watch and record it. That “wisdom of the crowds” should have prompted TiVo or someone to send me an email or otherwise ask, “Lots of people are planning to watch the Oscars–should I TiVo that for you?”
  • As a result of not having scheduled the Oscars to record in advance, when we turned on the TV it turned out that the red carpet pre-show had started 15 minutes earlier. Sadly, there was no way to go back and watch the 15 minutes we had missed. Normally TiVo buffers the last 30 minutes of live TV, but when you change channels, it wipes out the buffer, and in this case we were not already on the channel where the Oscars were being recorded. Yet clearly this content could and should be easily accessible, especially when it just happened–you could imagine a set of servers in the cloud buffering the last 30 minutes of each channel, and then providing a similar TiVo-like near-past rewind feature no matter which channel you happen to change to (this would be a lot easier than full on-demand since the last 30 minutes of all channels is a tiny subset of the total content on TV).
  • Once we started watching TV and looked at the schedule, we told TiVo to record the Oscars, but elected to skip the subsequent Barbara Walters interview or whatever was scheduled to follow. Part way through watching in near-real-time, my wife and her friend decided to take a break and do something else (a luxury normally afforded by TiVo). When they came back to finish watching, we discovered to our horror that the Oscars had run 30+ minutes longer than scheduled, and thus we had missed the entire finale. We hadn’t scheduled anything to record after the Oscars, so TiVo in theory could have easily recorded this extra material, but we hadn’t told it to do so, and it didn’t know the program had run long, and its subsequent 30-minute buffer had passed over the finale hours ago, so we were sunk. There are multiple failures at work here:
  1. TiVo didn’t know that the Oscars would run long or that it was running long. My intent as a user was “record the Oscars in its entirety” but what actually happens is TiVo looks at the (static and always-out-of-date) program guide data and says “ok, I’ll record channel 123 from 5:30pm until 8:30pm and hope that does the trick”. Ideally TiVo should get updated program guide data on-the-fly when a program runs long, or else it should be able to detect that the current program has not yet ended and adjust its recording time appropriately. In the absence of those capabilities, TiVo has a somewhat hackish solution of knowing which programs are “live broadcasts” and asking you “do you want to append extra recording time just in case?” when you go to record the show. We would have been saved if we’d chosen to do so, but that brings me to…
  2. We had no information that the Oscars was likely to run long. Actually, that’s not entirely true. Once we discovered our error, my wife’s friend remarked, “oh yeah, the Oscars always runs long”. Well, in that case, there should be ample historical data on the expected chance that a repeated live event like the Oscars should have extra time appended to the recording, and TiVo should be able to present that data to help its users make a more informed choice about whether to add additional recording time. If failure #1 was addressed, this whole issue would be moot, but in the interim, if TiVo is going to pass the buck to its users to decide when to add recording time, it should at least gather enough information to help the user make an informed choice.
  3. We weren’t able to go back and watch the TV we had missed, even though nothing else was being recorded during that time. Even though we hadn’t specifically told TiVo to record past the scheduled end of the Oscars, we also hadn’t told it to record anything else. So it was just sitting there, on the channel we wanted to record, doing nothing. Well, actually it was buffering more content, but only for 30 minutes, and only until it changed channels a few hours later to record some other pre-scheduled show. With hard drives as cheap as they are today, there’s no reason TiVo couldn’t have kept recording that same channel until it was asked to change channels. You could easily imagine an automatic overrun-prevention scheme where TiVo keeps recording say an extra hour after each scheduled show (unless it’s asked to change channels in the interim) and holds that in a separate, low-priority storage area (like the suggestions it records when extra space is free) that’s available to be retrieved at the end of a show (“The scheduled portion of this show has now ended, but would you like to keep watching?”), provided you watch that show soon after it was first recorded. In this case, it was only a few hours after the scheduled show had ended, so TiVo certainly would have had the room and ability to do this for us.
  • Dismayed at our failure to properly record the Oscars finale, we hoped that online content delivery had matured to the point where we could just watch the part we had missed online. After all, this is a major event on a broadcast channel whose main point is to draw attention to the movie industry, so if there were ever TV content whose owners should be unconflicted about maximizing viewership in any form, this should be it. But again, here we failed. First of all, there was no way to go online without seeing all the results, thus ruining the suspense we were hoping to experience. One could easily imagine first asking users if they had seen the Oscars, and having a separate experience for those wanting to watch it for the first time vs. those merely wanting a summary or re-cap. But even despite that setback, there was no way to watch the finale online in its entirety. The official Oscars website did have full video of the acceptance speeches, which was certainly better than nothing, but we still missed the introductions, the buildup, and so on. It blows my mind that you still can’t go online and just watch the raw feed, even of a major event on a broadcast channel like ABC, even when the event happened just a few hours ago. In this case it seems hard to believe that the hold-up would be a question of whether the viewer is entitled to view this content (compared to, say, some HBO special or a feature-length movie), but even if it were, my cable company knows that I pay to receive ABC, and presumably has this information available digitally somewhere. Clips are nice, but ABC must have thought the Oscars show was worth watching in its entirety (since it broadcast the whole thing), so there should be some way to watch it that way online, especially soon after it aired (again, this is a simpler problem than archiving all historical TV footage for on-demand viewing). Of course, there is one answer here: I’m sure I could have found the full Oscars recording on bittorrent and downloaded it. How sad (if not unexpected) that the “pirates” are the closest ones to delivering the user experience that the content owners themselves should be striving for!

Finally, aside from just enabling me to passively consume this content I wanted, I couldn’t help but notice a lot of missed opportunity to make watching the Oscars a more compelling, consuming, and social experience. For instance, I had very little context about the films and nominees–which were expected to win? Who had won or been nominated before? Which were my friends’ favorites? In some cases, I didn’t even know which actors had been in which films, or how well those films had done (both in the box office and with critics). An online companion to the Oscars could have provided all of this information, and thus drawn me in much more deeply. And with a social dimension (virtually watching along with my friends and seeing their predictions and reactions), it could have been very compelling indeed. If such information was available online, the broadcast certainly didn’t try to drive any attention there (not even a quick banner under the show saying “Find out more about all the nominees at Oscar.com”, at least not that I saw). And my guess is that whatever was online wasn’t nearly interactive enough or real-time enough for the kind of “follow along with the show and learn more as it unfolds” type of experience I’m imagining. And even if such an experience were built, my guess is it would only be real-time with the live broadcast. But there’s no reason it couldn’t prompt you to click when you’ve seen each award being presented (even if watching later on your TiVo) and only then reveal the details of how your pick compared to your friends and so on.

So in conclusion, I think there’s so much opportunity here to make TV content easier and more reliable to consume, and there’s even more opportunity to make it more interactive and social. I know I’m not the first person to realize this, but it still amazes me, when you really think in detail about the current state of the art, how many failures and opportunities there are right in front of us. As anyone who knows me is painfully aware, I’m a huge TiVo fan, but in this case TiVo let me down. It wasn’t entirely TiVo’s fault, of course, but the net result is the same. And in terms of making the TV experience more interactive and social, it seems like the first step is to get a bunch of smart Silicon Valley types embedded in the large cable companies, especially the ones that aren’t scared of the internet. Well, personally I’m feeling pretty good about that one. 😉

Implementing OAuth is still too hard… but it doesn’t have to be

I recently helped Dave Winer debug his OAuth Consumer code, and the process was more painful than it should have been. (He was trying to write a Twitter app using their beta OAuth support, and since he has his own scripting environment for his OPML editor, there wasn’t an existing library he could just drop in.) Now I’m a big fan of OAuth–it’s a key piece of the Open Stack, and it really does work well once you get it working. I’ve written both OAuth Provider and Consumer code in multiple languages and integrated with OAuth-protected APIs on over half a dozen sites. I’ve also helped a lot of developers and companies debug their OAuth implementations and libraries. And the plain truth is this: it’s still way too painful for first-time OAuth developers to get their code working, and despite the fact that OAuth is a standard, the empirical “it-just-works rate” is way too low.

We in the Open Web community should all be concerned about this, since OAuth is the “gateway” to most of the open APIs we’re building, and quite often this first hurdle is the hardest one in the entire process. That’s not the “smooth on-ramp” we should be striving for here. We can and should do better, and I have a number of suggestions for how to do just that.

To some extent, OAuth will always be “hard” in the sense that it’s crypto–if you get one little bit wrong, the whole thing doesn’t work. The theory is, “yeah it might be hard the first time, but at least you only have to suffer that pain once, and then you can use it everywhere”. But even that promise falls short because most OAuth libraries and most OAuth providers have (or have had) bugs in them, and there aren’t good enough debugging, validating, and interop tools available to raise the quality bar without a lot of trial-and-error testing and back-and-forth debugging. I’m fortunate in that a) I’ve written and debugged OAuth code so many times that I’m really good at it now, and b) I personally know developers at most of the companies shipping OAuth APIs, but clearly most developers don’t have those luxuries, nor should they have to.

After I helped Dave get his code working, he said “you know, what you manually did for me was perfect. But there should have been a software tool to do that for me automatically”. He’s totally right, and I think with a little focused effort, the experience of implementing and debugging OAuth could be a ton better. So here are my suggestions for how to help make implementing OAuth easier. I hope to work on some or all of these in my copious spare time, and I encourage everyone that cares about OAuth and the Open Stack to pitch in if you can!

  • Write more recipe-style tutorials that take developers step-by-step through the process of building a simple OAuth Consumer that works with a known API in a bare-bones, no-fluff fashion. There are some good tutorials out there, but they tend to be longer on theory and shorter on “do this, now do this, now you should see that, …”, which is what developers need most to get up and running fast. I’ve written a couple such recipes so far–one for becoming an OpenID relying party, and one for using Netflix’s API–and I’ve gotten tremendous positive feedback on both, so I think we just need more like that.
  • Build a “transparent OAuth Provider” that shows the consumer exactly what signature base string, signature key, and signature it was expecting for each request. One of the most vexing aspects of OAuth is that if you make a mistake, you just get a 401 Unauthorized response with little or no debugging help. Clearly in a production system, you can’t expect the provider to dump out all the secrets they were expecting, but there should be a neutral dummy-API/server where you can test and debug your basic OAuth library or code with full transparency on both sides. In addition, if you’re accessing your own user-data on a provider’s site via OAuth, and you’ve logged in via username and password, there should be a special mode where you can see all the secrets, base strings, etc. that they’re expecting when you make an OAuth-signed request. (I plan to add this to Plaxo, since right now I achieve it by grepping through our logs and then IMing with the developers who are having problems, and this is, uhh, not scalable.)
  • Build an OAuth validator for providers and consumers that simulates the “other half” of the library (i.e. a provider to test consumer-code and a consumer to test provider-code) and that takes the code through a bunch of different scenarios, with detailed feedback at each step. For instance, does the API support GET? POST? Authorization headers? Does it handle empty secrets properly? Does it properly encode special characters? Does it compute the signature properly for known values? Does it properly append parameters to the oauth_callback URL when redirecting? And so on. I think the main reason that libraries and providers so far have all had bugs is that they really didn’t have a good way to thoroughly test their code. As a rule, if you can’t test your code, it will have bugs. So if we just encoded the common mistakes we’ve all seen in the field so far and put those in a validator, future implementations could be confident that they’ve nailed all the basics before being released in the wild. (And I’m sure we’d uncover more bugs in our existing libraries and providers in the process!)
  • Standardize the terms we use both in our tutorials and libraries. The spec itself is pretty consistent, but it’s already confusing enough to have a “Consumer Token”, “Request Token”, and “Access Token”, each of which consist of a “Token Key” and “Token Secret”, and it’s even more confusing when these terms aren’t used with exact specificity. It’s too easy to just say “token” to mean “request token” or “token” to mean “token key”–I do it all the time myself, but we really need to keep ourselves sharp when trying to get developers to do the right thing. Worse still, all the existing libraries use different naming conventions for the functions and steps involved, so it’s hard to write tutorials that work with multiple libraries. We should do a better job of using specific and standard terms in our tutorials and code, and clean up the stuff that’s already out there.
  • Consolidate the best libraries and other resources so developers have an easier time finding out what the current state-of-the-art is. Questions that should have obvious and easily findable answers include: is there an OAuth library in my language? If so, what’s the best one to use? How much has it been tested? Are there known bugs? Who should I contact if I run into problems using it? What are the current best tutorials, validators, etc. for me to use? Which companies have OAuth APIs currently? What known issues exist for each of those providers? Where is the forum/mailing-list/etc for each of those APIs? Which e-mail list(s) should I send general OAuth questions to? Should I feel confident that emails sent to those lists will receive prompt replies? Should I expect that bug reports or patches I submit there will quickly find their way to the right place? And so on.
  • Share more war stories of what we’ve tried, what hasn’t worked, and what we had to do to make it work. I applauded Dave for suffering his developer pain in public via his blog, and I did the same when working with Netflix’s API, but if we all did more of that, our collective knowledge of bugs, patterns, tricks, and solutions would be out there for others to find and benefit from. I should do more of that myself, and if you’ve ever tried to use OAuth, write an OAuth library, or build your own provider, you should too! So to get things started: In Dave’s case, the ultimate problem turned out to be that he was using his Request Secret instead of his Access Secret when signing API requests. Of course this worked when hitting the OAuth endpoint to get his Access token in the first place, and it’s a subtle difference (esp. if you don’t fully grok what all these different tokens are for, which most people don’t), but it didn’t work when hitting a protected API, and there’s no error message on any provider that says “you used the wrong secret when signing your request” since the secrets are never transmitted directly. The way I helped him debug it was to literally watch our debugging logs (which spit out all the guts of the OAuth signing process, including Base String, Signature Key, and final Signature), and then I sent him all that info and asked him to print out the same info on his end and compare the two (along the lines of the sketch below). Once he did that, it was easy to spot and fix the mistake. But I hope you can see how all of the suggestions above would have helped make this process a lot quicker and less painful.
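
To make that base-string comparison concrete, here’s a minimal sketch of the HMAC-SHA1 signing step in Python. All the keys, tokens, and the endpoint URL are made-up placeholders, it skips edge cases like duplicate parameter names and POST-body parameters, and a real consumer should lean on a vetted OAuth library rather than hand-rolling this:

    import base64
    import hashlib
    import hmac
    import urllib.parse

    def percent_encode(value):
        # OAuth 1.0 mandates RFC 3986 encoding: only A-Z a-z 0-9 -._~ unescaped
        return urllib.parse.quote(str(value), safe="-._~")

    def sign_request(method, url, params, consumer_secret, token_secret):
        """Return (base_string, signature) for an HMAC-SHA1 signed request."""
        # 1. Normalize parameters: percent-encode, then sort by name and value
        pairs = sorted((percent_encode(k), percent_encode(v)) for k, v in params.items())
        normalized = "&".join(f"{k}={v}" for k, v in pairs)
        # 2. Base string: METHOD & encoded URL & encoded parameter string
        base_string = "&".join([method.upper(), percent_encode(url), percent_encode(normalized)])
        # 3. Signing key: encoded consumer secret + "&" + encoded token secret.
        #    For protected API calls, token_secret must be the *Access* secret;
        #    passing the Request secret (Dave's bug) just produces an opaque 401.
        key = percent_encode(consumer_secret) + "&" + percent_encode(token_secret)
        digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
        return base_string, base64.b64encode(digest).decode()

    base_string, signature = sign_request(
        "GET",
        "https://api.example.com/contacts",        # hypothetical protected API
        {
            "oauth_consumer_key": "my-consumer-key",
            "oauth_token": "my-access-token",      # the Access Token, not the Request Token
            "oauth_nonce": "abc123",
            "oauth_timestamp": "1234567890",
            "oauth_signature_method": "HMAC-SHA1",
            "oauth_version": "1.0",
        },
        consumer_secret="my-consumer-secret",
        token_secret="my-access-token-secret",
    )
    print(base_string)  # compare these two lines character-by-character
    print(signature)    # against what the provider says it expected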

What else can we as a community do to make OAuth easier for developers? Add your thoughts here or in your own blog post. As Dave remarked to me, “the number of OAuth developers out there is about to skyrocket” now that Google, Yahoo, MySpace, Twitter, Netflix, TripIt, and more are providing OAuth-protected APIs. So this is definitely the time to put in some extra effort to make sure OAuth can really achieve its full potential!

Test-Driving the New Hybrid

The quest to open up the Social Web is quickly shifting from a vision of the future to a vision of the present. Last week we reached an important milestone in delivering concrete benefits to mainstream users from the Open Stack. Together with Google, we released a new way to join Plaxo–without having to create yet-another-password or give away your existing password to import an address book. We’re using a newly developed “hybrid protocol” that blends OpenID and OAuth so Gmail users (or any users of a service supporting these open standards) can, in a single act of consent, create a Plaxo account (using OpenID) and grant access to the data they wish to share with Plaxo (using OAuth).

We’re testing this new flow on a subset of Plaxo invites sent to @gmail.com users, which means we can send those users through this flow without having to show them a long list of possible Identity Provider choices, and without them having to know their own OpenID URL. The result is a seamless and intuitive experience for users (“Hey Plaxo, I already use Gmail, use that and don’t make me start from scratch”) and an opportunity for both Plaxo and Google to make our services more interoperable while reducing friction and increasing security. I’m particularly excited about this release because it’s a great example of “putting the pieces together” (combining multiple Open Stack technologies in such a way that the whole is greater than the sum of the parts), and it enables an experience that makes a lot of sense for mainstream users, who tend to think of “using an existing account” as a combination of identity (“I already have a gmail password”) and data (“I already have a gmail address book”). And, of course, because this integration is based on open standards, it will be easy for both Google and Plaxo to turn around and do similar integrations with other sites, not to mention that the lessons we learn from these experiments will be helpful to any sites that want to build a similar experience.

To learn more about this integration, you can read Plaxo’s blog post and the coverage on TechCrunch, VentureBeat, or ReadWriteWeb (which got syndicated to The New York Times, neat!), and of course TheSocialWeb.tv. But I thought I’d take a minute to explain a bit more about how this integration works “under the hood”, and also share a bit of the backstory on how it came to be.

Under the hood

For those interested in the details of how the OpenID+OAuth hybrid works, and how we’re using it at Plaxo, here’s the meat: it’s technically an “OAuth Extension” for OpenID (using the standard OpenID extension mechanism already used by simple registration and attribute exchange) where the Relying Party asks the Identity Provider for an OAuth Request Token (optionally limited to a specific scope, e.g. “your address book but not your calendar data”) as part of the OpenID login process. The OP recognizes this extension and informs the user of the data the RP is requesting as part of the OpenID consent page. If the user consents, the OP sends back a pre-authorized OAuth Request Token in the OpenID response, which the RP can then exchange for a long-lived Access Token following the normal OAuth mechanism.
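
In rough Python terms, the request the RP redirects the user to looks something like the following. The openid.oauth.* parameter names come from the draft OAuth extension for OpenID; the OP endpoint, return_to/realm URLs, consumer key, and scope value shown here are illustrative placeholders rather than exact production values:

    import urllib.parse

    OP_ENDPOINT = "https://www.google.com/accounts/o8/ud"  # Google's OP endpoint

    auth_params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.return_to": "https://www.plaxo.com/openid/finish",  # must match the realm
        "openid.realm": "https://www.plaxo.com/",
        # The hybrid part: piggyback an OAuth request on the OpenID login,
        # scoped to just the data we need (here, the GData contacts feed)
        "openid.ns.oauth": "http://specs.openid.net/extensions/oauth/1.0",
        "openid.oauth.consumer": "plaxo.com",              # pre-registered consumer key
        "openid.oauth.scope": "http://www.google.com/m8/feeds/",
    }

    login_url = OP_ENDPOINT + "?" + urllib.parse.urlencode(auth_params)
    # If the user consents, the OP's positive assertion comes back with an
    # extra field like openid.oauth.request_token=<pre-authorized token>.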

Note that RPs still need to obtain an OAuth Consumer Key and Secret offline beforehand (we’ve worked on ways to support unregistered consumers, but they didn’t make it into the spec yet), but they *don’t* have to get an Unauthorized Request Token before initiating OpenID login. The point of obtaining a Request Token separately is mainly to enable desktop and mobile OAuth flows, where popping open a web browser and receiving a response isn’t feasible. But since OpenID login is always happening in a web browser anyway, it makes sense for the OP to generate and pre-authorize the Request Token and return it via OpenID. This also frees the RP from the burden of having to deal with fetching and storing request tokens–given especially the rise in prominence of “directed identity” logins with OpenID (e.g. the RP just shows a “sign in with your Yahoo! account” button, which sends the OpenID URL  “yahoo.com” and relies on the OP to figure out which user is logging in and return a user-specific OpenID URL in the response), the RP often can’t tell in advance which user is trying to log in and whether they’ve logged in before, and thus in the worst case they might otherwise have to generate a Request Token before every OpenID login, even though the majority of such logins won’t end up doing anything with that token. Furthermore, the OP can feel confident that they’re not inadvertently giving away access to the user’s private data to an attacker, because a) they’re sending the request token back to the openid.return_to URL, which has to match the openid.realm which is displayed to the user (e.g. if the OP says “Do you trust plaxo.com to see your data”, they know they’ll only send the token back to a plaxo.com URL), and b) they’re only sending the Request Token in the “front channel” of the web browser, and the RP still has to exchange it for an Access Token on the “back channel” of direct server-to-server communication, which also requires signing with a Consumer Secret. In summary, the hybrid protocol is an elegant blend of OpenID and OAuth that is relatively efficient on the wire and at least as secure as each protocol on their own, if not more so.

Once a user signs up for Plaxo using the hybrid protocol, we can create an account for them that’s tied to their OpenID (using the standard recipe) and then attach the OAuth Access Token/Secret to the new user’s account. Then instead of having to ask the user to choose from a list of webmail providers to import their address book from, we see that they already have a valid Gmail OAuth token and we can initiate an automatic import for them–no passwords required! (We’re currently using Google’s GData Contacts API for the import, but as I demoed in December at the Open Stack Meetup, soon we will be able to use Portable Contacts instead, completing a pure Open Stack implementation.) Finally, when the user has finished setting up their Plaxo account, we show them a one-time “education page” that tells them to click “Sign in with your Google account” next time they return to Plaxo, rather than typing in a Plaxo-specific email and password (since they don’t have one).
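
Here’s a sketch of that last server-to-server step, reusing sign_request from the signing example earlier on this page. The token endpoint shown is Google’s OAuth access-token URL of that era; as before, treat the exact plumbing as illustrative rather than production code:

    import secrets
    import time
    import urllib.parse
    import urllib.request

    ACCESS_TOKEN_URL = "https://www.google.com/accounts/OAuthGetAccessToken"

    def exchange_request_token(request_token, consumer_key, consumer_secret):
        """Swap a pre-authorized request token for a long-lived access token."""
        params = {
            "oauth_consumer_key": consumer_key,
            "oauth_token": request_token,
            "oauth_nonce": secrets.token_hex(8),
            "oauth_timestamp": str(int(time.time())),
            "oauth_signature_method": "HMAC-SHA1",
            "oauth_version": "1.0",
        }
        # Under the hybrid flow the request token arrives with an empty secret,
        # so the signing key is just consumer_secret + "&"
        _, params["oauth_signature"] = sign_request(
            "GET", ACCESS_TOKEN_URL, params, consumer_secret, token_secret=""
        )
        url = ACCESS_TOKEN_URL + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            # Body looks like: oauth_token=...&oauth_token_secret=...
            return dict(urllib.parse.parse_qsl(resp.read().decode()))

The returned token/secret pair is what gets attached to the newly created user’s account, so the address-book import can run immediately (and again on any future sync) without a password ever changing hands.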

However, because Google’s OP supports checkid_immediate, and because Plaxo sets a cookie when a user logs in via OpenID, in most cases we can invisibly and automatically keep the user logged into Plaxo as long as they’re still logged into Gmail. Specifically, if the user is not currently logged into Plaxo, but they previously logged in via OpenID, we attempt a checkid_immediate login (meaning we redirect to the OP and ask them if the user is currently logged in, and the OP immediately redirects back to us and tells us one way or the other). If we get a positive response, we log the user into Plaxo again, and as far as the user can tell, they were never signed out. If we get a negative response, we set a second cookie to remember that checkid_immediate failed, so we don’t try it again until the user successfully signs in. But the net result is that even though the concept of logging into Plaxo using your Google account may take some getting used to for mainstream users, most users will just stay automatically logged into Plaxo (as long as they stay logged into Gmail, which for most Gmail users is nearly always).
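
In pseudocode-flavored Python, that decision logic looks roughly like the following; the cookie names and return values are hypothetical, not Plaxo’s actual ones:

    def next_login_action(cookies):
        """Decide what to do with an unauthenticated page view."""
        if "plaxo_session" in cookies:
            return "already_logged_in"
        if "previously_used_openid" not in cookies:
            return "show_login_page"         # never signed in via OpenID here
        if "checkid_immediate_failed" in cookies:
            return "show_login_page"         # don't retry until the next explicit sign-in
        return "redirect_checkid_immediate"  # silently ask the OP

    def handle_op_response(mode, cookies):
        """Process the OP's answer to a checkid_immediate request."""
        if mode == "id_res":                 # positive assertion: log in again
            cookies.pop("checkid_immediate_failed", None)
            return "create_session"
        cookies["checkid_immediate_failed"] = "1"  # negative: remember and fall back
        return "show_login_page"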

The backstory

The concept of combining OpenID and OAuth has been around for over a year. After all, they share a similar user flow (bounce over to provider, consent, bounce back to consumer with data), and they’re both technologies for empowering users to connect the websites they use (providing the complementary capabilities of Authentication and Authorization, respectively). David Recordon and I took a first stab at specifying an OpenID OAuth Extension many months ago, but the problem was there were no OpenID Providers that also supported OAuth-protected APIs yet, so it wasn’t clear who could implement the spec and help work out the details. (After all, OAuth itself was only finalized as a spec in December 07!). But then Google started supporting OAuth for its GData APIs, and they subsequently became an OpenID provider. Yahoo! also became hybrid-eligible (actually, they became an OpenID provider before Google, and added OAuth support later as part of Y!OS), and MySpace adopted OAuth for its APIs and shared their plans to become an OpenID provider as part of their MySpaceID initiative. Suddenly, there was renewed interest in finishing up the hybrid spec, and this time it was from people in a position to get the details right and then ship it.

The Google engineers had a bunch of ideas about clever ways to squeeze out extra efficiency when combining the two protocols (e.g. piggybacking the OAuth Request Token call on the OpenID associate call, or piggybacking the OAuth Access Token call on the OpenID check_authentication call). They also pointed out that given their geographically distributed set of data centers, their “front channel” and “back channel” servers might be on separate continents, so assuming that data could instantly be passed between them (e.g. generating a request token in one data center and then immediately showing it on an authorization page from another data center) wouldn’t be trivial (the solution is to use a deterministically encrypted version of the token as its own secret, rather than storing that in a database or distributed cache). As we considered these various proposals, the tension was always between optimizing for efficiency vs. “composability”–there were already a decent number of standalone OpenID and OAuth implementations in the wild, and ideally combining them shouldn’t require drastic modifications to either one. In practice, that meant giving up on a few extra optimizations to decrease overall complexity and increase the ease of adoption–a theme that’s guided many of the Open Stack technologies. As a proof of the progress we made on that front, the hybrid implementation we just rolled out used our existing OpenID implementation as is, and our existing OAuth implementation as is (e.g. that we also used for our recent Netflix integration), with no modifications required to either library. All we did was add the new OAuth extension to the OpenID login, as well as some simple logic to determine when to ask for an OAuth token and when to attach the token to the newly created user. Hurrah!

A few more drafts of the hybrid spec were floated around for a couple months, but there were always a few nagging issues that kept us from feeling that we’d nailed it. Then came the Internet Identity Workshop in November, where we held a session on the state of the hybrid protocol to get feedback from the larger community. There was consensus that we were on the right track, and that this was indeed worth pursuing, but the nagging issues remained. Until that night, when as in IIW tradition we all went to the nearby Monte Carlo bar and restaurant for dinner and drinks. Somehow I ended up at a booth with the OpenID guys from Google, Yahoo, and Microsoft, and we started rehashing those remaining issues and thinking out loud together about what to do. Somehow everything started falling into place, and one by one we started finding great solutions to our problems, in a cascade that kept re-energizing us to keep working and keep pushing. Before I knew it, it was after midnight and I’d forgotten to ever eat any dinner, but by George we’d done it! I drove home and frantically wrote up as many notes from the evening as I could remember. I wasn’t sure what to fear more–that I would forget the breakthroughs that we’d made that night, or that I would wake up the next morning and realize that what we’d come up with in our late night frenzy was in fact totally broken. :) Thankfully, neither of those things happened, and we ended up with the spec we’ve got today (plus a few extra juicy insights that have yet to materialize).

It just goes to show that there’s still no substitute for locking a bunch of people in a room for hours at a time to focus on a problem. (Though in this case, we weren’t so much locked in as enticed to stay with additional drink tickets, heh.) And it also shows the power of collaborating across company lines by developing open community standards that everyone can benefit from (and thus everyone is incentivized to contribute to). It was one of those amazing nights that makes me so proud and grateful to work in this community of passionate folks trying to put users in control of their data and truly open up the Social Web.

What’s next?

Now that we’ve released this first hybrid experiment, it’s time to analyze the results and iterate to perfection (or as close as we can get). While it’s too early to report on our findings thus far, let me just say that I’m *very* encouraged by the early results we’re seeing. 😉 Stay tuned, because we’ll eagerly share what we’ve learned as soon as we’ve had time to do a careful analysis. I foresee this type of onboarding becoming the norm soon–not just for Plaxo, but for a large number of sites that want to streamline their signup process. And of course the best of part of doing this all with open standards is that everyone can learn along with us and benefit from each other’s progress. Things are really heating up and I couldn’t be more excited to keep forging ahead here!
