Thoughts on web development, tech, and life.

Category: Open Social Web (Page 1 of 4)

Winning Market Share in the Sharing Market

[Usual disclaimer: I currently work for Google on Google+ (which I think is awesome), but as always the thoughts on this blog are purely my personal opinion as a lifelong technology lover and builder.]

At the core of innovation in social networking is the competition to be a user’s tool of choice when they have something to share. Most people go to social networks because their friends are sharing there, and there is an inherent scarcity to compete for because most people aren’t willing to share something more than once (some may syndicate their content elsewhere, if permitted, but the original content is usually shared on a single service). Thus every time people are moved to share, they must choose which tool to use (if any)–a social network, email/SMS, etc.

Conventional wisdom holds that the main factor determining where a user will share is audience size (i.e. where they have the most friends/followers, which for most people today means Facebook or Twitter), but the successful rise of services like Foursquare, Instagram, Path, Tumblr, and Google+ shows that there’s more to the story. Despite the continued cries of “it’s game-over already for social networking” or “these services are all the same”, I don’t think that’s the case at all, and apparently I’m not alone.

These newer services are all getting people to share on them, even though in most cases those users are sharing to a smaller audience than they could reach on Facebook or Twitter. What’s going on here? It seems to me that these services have all realized that the competition for sharing is about more than audience size, and in particular includes the following additional two axes:

  • Ease of sharing to a more specific/targeted audience
  • Ease of sharing more tailored/beautiful content

Sharing to a more specific/targeted audience sounds like the opposite of “sharing where all of my friends/followers already are”, but not everything we say is meant to be heard by everyone. In fact, when I ask people why they don’t share more online, two of the most frequent responses I get have to do with having too much audience: privacy and relevance. Sometimes you don’t want to share something broadly because it’s too personal or sensitive, and sometimes you just don’t want to bug everyone. In both cases, services that help you share more narrowly and with more precision can “win your business” when you otherwise would have shared reluctantly or not at all.

Foursquare, Path, and Google+ (and email) are all good examples of this. Sharing your current location can be both sensitive and noisy, but on foursquare most people maintain a smaller and more highly trusted list of friends, and everyone there has also chosen to use the service for this purpose. Thus it wins on both privacy and relevance, whereas Facebook Places has largely failed to compete, in part because people have too many friends there. Similarly, Path encourages you to only share to your closest set of friends and family, and thus you feel free to share more, both because you trust who’s seeing your information, and because they’re less likely to feel you’re “spamming them” if they’re close to you. And by letting you share to different circles (or publicly) each time you post, Google+ lets you tune both the privacy and relevance of your audience each time you have something to say. Google+ users routinely post to several distinct audiences, showing that we all have more to say if we can easily control who’s listening.

Sharing more tailored/beautiful content is in theory orthogonal to audience size/control, but it’s another instance in which smaller services often have an advantage, because they can specialize in helping users quickly craft and share content that is richer and more visually delightful than they could easily do on a larger and more generic social network. This is an effective axis for competition in sharing because we all want to express ourselves in a way that’s compelling and likely to evoke delight and reaction from our friends/followers. So services that help you look good can again “win your business” when you otherwise would have shared something generic or not at all.

Instagram and Tumblr jump out from my list above as services that have attracted users because they help you “look good for less”, but actually all five services have succeeded here to some extent. A foursquare check-in has rich metadata about the place you’re visiting, so it’s a more meaningful way to share than simply saying “I’m eating at Cafe Gibraltar”. Instagram lets you apply cool filters to your photos, and more importantly IMO, constrains the sharing experience to only be a single photo with a short caption (thus further encouraging this type of sharing). Path just looks beautiful end-to-end, which entices you to live in their world, and they also add lovely touches like displaying the local weather next to your shared moments or letting you easily share when you went to bed and woke up (and how much sleep you got). Again, these are all things you could share elsewhere, but it just feels better and more natural on these more tailored services. Tumblr is “just another blogging tool” in some ways, but its beautiful templates and its focus on quickly reblogging other content along with a quick message has led to an explosion of people starting up their own “tumblelogs”. And on Google+, one of the reasons a thriving photography community quickly formed is just that photos are displayed in a more compelling and engaging way than elsewhere.

The future of social networking will be largely about making it easier to share richer content with a more relevant audience. Some already decry the current “information overload” on social networks, but I would argue that (a) many/most of the meaningful moments in the lives of people I care about are still going un-shared (mainly because it’s too hard), (b) the feeling of overload largely comes from the low “signal-to-noise ratio” of posts shared today (owing to the lack of easy audience targeting), and (c) the content shared today is largely generic and homogeneous, thus it’s harder to pick out the interesting bits at the times when it’s most relevant. But as long as entrepreneurs can think up new ways to make it easier to capture and share the moments of your life (and the things you find interesting) in beautiful, high fidelity, and make it easier to share those moments with just the right people at the right time, we’ll soon look back on the current state of social networking with the same wistful bemusement we cast today on brick-sized feature phones and black-and-white personal computers. Let’s get to it!

Fighting for the Future of the Social Web: Selling Out and Opening Up (OSCON 2011)

Fighting for the Future of the Social Web: Selling Out and Opening Up
O’Reilly Open Source Convention (OSCON) 2011
Portland, OR
July 27, 2011

(Note: some of the footer fonts are messed up on slideshare, sorry.)

Download PPT (12.5 MB)

A year and a half after joining Google, and a year after my last talk on the Social Web, I returned to OSCON (one of my favorite conferences, which I’ve been speaking at for over half a decade now!) to reflect on the progress we’ve collectively made (and haven’t made) to open up the social web. I covered the latest developments in OpenID, OAuth, Portable Contacts, and related open web standards, mused about the challenges we’re still facing to adoption and ease-of-use and what to do about them, and considered what changes we should expect going forward now that many of the formerly independent open social web enthusiasts (myself included) now work for larger companies.

Not to spoil the punchline, but if you know me at all it won’t surprise you to learn that I’m still optimistic about the future! 😉

Bridging the islands: Building fluid social experiences across websites (Google I/O 2010)

Bridging the islands: Building fluid social experiences across websites
Google I/O 2010
San Francisco, CA
May 19, 2010

View talk and download slides as PDF

My third year speaking at Google I/O, and my first as a Googler! I teamed up with fellow Googler John Panzer, and together we demonstrated how far open standards have come in allowing developers to build rich cross-site social integrations. From frictionless sign-up using OpenID, OAuth, and Webfinger to finding your friends with Portable Contacts and microformats to sharing rich activities and holding real-time distributed conversations with ActivityStrea.ms, PubSubHubbub, and Salmon, it really is remarkable how much progress we’ve made as a community. And it still feels like we’re just getting started, with the real payoff right around the corner!

We took a literal approach to our concept of “bridging the islands” by telling a story of two imaginary islanders, who meet while on vacation and fall in love. They struggle with all the same problems that users of today’s social web do–the pain of immigrating to a new place, the pain of being able to find your friends once they’ve moved, and the pain of being able to stay in touch with the people you care about, even when you don’t all live in the same place. Besides having fun stretching the metaphor and making pretty slides (special thanks to Chris Messina for his artistic inspiration and elbow grease!), the point is that these are all fundamental problems, and just as we created technology to solve them in the real world, so must we solve them on the Social Web.

Chris’s talk at I/O told this story at a high level and with additional color, while we dove more into the technology that makes it possible. Make sure to check out both talks, and I hope they will both inspire and inform you–whether as a developer or a user–to help us complete this important work as a community!



Implementing PubSubHubbub subscriber support: A step-by-step guide

One of the last things I did before leaving Plaxo was to implement PubSubHubbub (PuSH) subscriber support, so that any blogs which ping a PuSH hub will show up almost instantly in pulse after being published. It’s easy to do (you don’t even need a library!), and it significantly improves the user experience while simultaneously reducing server load on your site and the sites whose feeds you’re crawling. At the time, I couldn’t find any good tutorials for how to implement PuSH subscriber support (add a comment if you know of any), so here’s how I did it. (Note: depending on your needs, you might find it useful instead to use a third-party service like Gnip to do this.)

My assumption here is that you’ve already got a database of feeds you’re subscribing to, but that you’re currently just polling them all periodically to look for new content. This tutorial will help you “gracefully upgrade” to support PuSH-enabled blogs without rewriting your fundamental polling infrastructure. At the end, I’ll suggest a more radical approach that is probably better overall if you can afford a bigger rewrite of your crawling engine.

The steps to add PuSH subscriber support are as follows:

  1. Identify PuSH-enabled blogs and extract their hub and topic
  2. Lazily subscribe to PuSH-enabled blogs as you discover them
  3. Verify subscription requests from the hub as you make them
  4. Write an endpoint to receive pings from the hub as new content is published
  5. Get the latest content from updated blogs as you receive pings
  6. Unsubscribe from feeds when they’re deleted from your system

1. Identify PuSH-enabled blogs and extract their hub and topic

When crawling a feed normally, you can look for some extra metadata in the XML that tells you this blog is PuSH-enabled. Specifically, you want to look for two links: the “hub” (the URL of the hub that the blog pings every time it has new content, which you in turn communicate with to subscribe and receive pings when new content is published), and the “self” (the canonical URL of the feed you’re subscribing to, which is referred to as the “topic” you’re going to subscribe to from the hub).

A useful test blog to use while building PuSH subscriber support is http://pubsubhubbub-example-app.appspot.com/, since it lets anyone publish new content. If you view source on that page, you’ll notice the standard RSS auto-discovery tag that tells you where to find the blog’s feed:

<link title="PubSubHubbub example app" type="application/atom+xml" rel="alternate" />

And if you view source on http://pubsubhubbub-example-app.appspot.com/feed, you’ll see the two PuSH links advertised underneath the root feed tag:

<link type="application/atom+xml" title="PubSubHubbub example app" rel="self" />
<link rel="hub" href="http://pubsubhubbub.appspot.com/" />

You can see that the “self” link is the same as the URL of the feed that you’re already using, and the “hub” link is to the free hub being hosted on AppEngine at http://pubsubhubbub.appspot.com/. In both cases, you want to look for a link tag under the root feed tag, match the appropriate rel-value (keeping in mind that rel-attributes can have multiple, space-separated values, e.g. rel="self somethingelse", so split the rel-value on spaces and then look for the specific matching rel-value), and then extract the corresponding href-value from that link tag. Note that the example above is an ATOM feed; in RSS feeds, you generally have to look for atom:link tags under the channel tag under the root rss tag, but the rest is the same.
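
To make the discovery step concrete, here’s a minimal sketch in PHP (the language this post’s implementation notes reference) using SimpleXML. It assumes an Atom feed, and the function name is just illustrative; for RSS you’d walk the atom:link tags under channel instead, with the same matching logic.

<?php
// Minimal PuSH discovery sketch: pull the "hub" and "self" links out of an Atom feed.
define('ATOM_NS', 'http://www.w3.org/2005/Atom');

function discover_push_links($feedXml) {
    $feed = simplexml_load_string($feedXml);
    if ($feed === false) {
        return null; // not parseable XML
    }
    $hub = null;
    $topic = null;
    foreach ($feed->children(ATOM_NS)->link as $link) {
        // rel can hold multiple space-separated values, so split before matching
        $rels = preg_split('/\s+/', trim((string) $link['rel']));
        if (in_array('hub', $rels)) {
            $hub = (string) $link['href'];
        } elseif (in_array('self', $rels)) {
            $topic = (string) $link['href'];
        }
    }
    return array('hub' => $hub, 'topic' => $topic);
}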

Once you have the hub and self links for this blog (assuming the blog is PuSH-enabled), you’ll want to store the self-href (aka the “topic”) with that feed in your database so you’ll know whether you’ve subscribed to it, and, if so, whether the topic has changed since you last subscribed.

2. Lazily subscribe to PuSH-enabled blogs as you discover them

When you’re crawling a feed and you notice it’s PuSH-enabled, check your feed database to see if you’ve got a stored PuSH-topic for that feed, and if so, whether the current topic is the same as your stored value. If you don’t have any stored topic, or if the current topic is different, you’ll want to talk to that blog’s PuSH hub and initiate a subscription so that you can receive real-time updates when new content is published to that blog. By storing the PuSH-topic per-feed, you can effectively “lazily subscribe” to all PuSH-enabled blogs by continuing to regularly poll and crawl them as you currently do, and adding PuSH subscriptions as you find them. This means you don’t have to do any large one-time migration over to PuSH, and you can automatically keep up as more blogs become PuSH-enabled or change their topics over time. (Depending on your crawling infrastructure, you can either initiate subscriptions as soon as you find the relevant tags, or you can insert an asynchronous job to initiate the subscription so that some other part of your system can handle that later without slowing down your crawlers.)
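
As a rough sketch of that lazy-subscribe check (the helper and column names here are hypothetical stand-ins for your own crawler and database code):

<?php
// Run each time a feed is crawled; $feedRow is assumed to be the feed's database record.
function maybe_subscribe($feedRow, $hub, $topic) {
    if ($hub === null || $topic === null) {
        return; // not PuSH-enabled; keep polling as usual
    }
    if (!empty($feedRow['push_topic']) && $feedRow['push_topic'] === $topic) {
        return; // already subscribed to this topic; nothing to do
    }
    // New or changed topic: remember it, then subscribe inline or via an async job
    store_push_topic($feedRow['id'], $hub, $topic);          // assumed helper
    enqueue_subscription_job($feedRow['id'], $hub, $topic);  // assumed helper
}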

To subscribe to a PuSH-enabled blog, just send an HTTP POST to its hub URL and provide the following POST parameters:

  • hub.callback = [the URL of your endpoint for receiving pings, which we’ll build in step 4]
  • hub.mode = subscribe
  • hub.topic = [the self-link / topic of the feed you’re subscribing to, which you extracted in step 1]
  • hub.verify = async [means the hub will separately call you back to verify this subscription]
  • hub.verify_token = [a hard-to-guess token associated with this feed, which the hub will echo back to you to prove it’s a real subscription verification]

For the hub.callback URL, it’s probably best to include the internal database ID of the feed you’re subscribing to, so it’s easy to look up that feed when you receive future update pings. Depending on your setup, this might be something like http://yoursite.com/push/update?feed_id=123 or http://yoursite.com/push/update/123. Another advantage of this technique is that it makes it relatively hard to guess what the update URL is for an arbitrary blog, in case an evil site wanted to send you fake updates. If you want even more security, you could put some extra token in the URL that’s different per-feed, or you could use the hub.secret mechanism when subscribing, which will cause the hub to send you a signed verification header with every ping, but that’s beyond the scope of this tutorial.

For the hub.verify_token, the simplest thing would just be to pick a secret word (e.g. “MySekritVerifyToken”) and always use that, but an evil blog could use its own hub and quickly discover that secret. So a better idea is to do something like take the HMAC-SHA1 of the topic URL along with some secret salt you keep internally. This way, the hub.verify_token value is feed-specific, but it’s easy to recompute when you receive the verification.

If your subscription request is successful, the hub will respond with an HTTP 202 “Accepted” code, and will then proceed to send you a verification request for this subscription at your specified callback URL.
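
Here’s what that subscription request might look like as a sketch using PHP’s cURL extension. The callback URL pattern, secret salt, and function names are assumptions for illustration, and the same function can be reused for unsubscribing in step 6.

<?php
define('PUSH_SECRET_SALT', 'change-me'); // internal salt, as described above

// Feed-specific but easily recomputed verify token (HMAC-SHA1 of the topic URL)
function push_verify_token($topic) {
    return hash_hmac('sha1', $topic, PUSH_SECRET_SALT);
}

// $mode is 'subscribe' here; step 6 reuses this with 'unsubscribe'
function change_hub_subscription($feedId, $hub, $topic, $mode = 'subscribe') {
    $params = array(
        'hub.callback'     => 'http://yoursite.com/push/update?feed_id=' . urlencode($feedId),
        'hub.mode'         => $mode,
        'hub.topic'        => $topic,
        'hub.verify'       => 'async',
        'hub.verify_token' => push_verify_token($topic),
    );
    $ch = curl_init($hub);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($params));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return $status == 202; // 202 Accepted: the hub will verify asynchronously
}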

3. Verify subscription requests from the hub as you make them

Shortly after you send your subscription request to the hub, it will call you back at the hub.callback URL you specified with an HTTP GET request containing the following query parameters:

  • hub.mode = subscribe
  • hub.topic = [the self-link / topic of the URL you requested a subscription for]
  • hub.challenge = [a random string that you must echo back in the response body to acknowledge the verification]
  • hub.verify_token = [the value you sent in hub.verify_token during your subscription request]

Since the endpoint at which you receive this verification request is the same one where you’ll receive future update pings, your logic has to first look for hub.mode=subscribe, and if so, verify that the hub sent the proper hub.verify_token back to you, and then just dump out the hub.challenge value as the response body of your page (with a standard HTTP 200 response code). Now you’re officially subscribed to this feed, and will receive update pings when the blog publishes new content.

Note that hubs may periodically re-verify that you still want a subscription to this feed. So you should make sure that if the hub makes a similar verification request out-of-the-blue in the future, you respond the same way you did the first time, providing you indeed are still interested in that feed. A good way to do this is just to look up the feed every time you get a verification request (remember, you build the feed’s ID into your callback URL), and if you’ve since deleted or otherwise stopped caring about that feed, return an HTTP 404 response instead so the hub will know to stop pinging you with updates.
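
Here’s a sketch of the verification branch of that callback endpoint. One PHP-specific gotcha: dots in query-string parameter names become underscores, so hub.mode arrives as $_GET['hub_mode']. The URL pattern and the load_feed_by_id() helper are assumptions; push_verify_token() is from the step 2 sketch above. Input validation is omitted for brevity.

<?php
$feedId = isset($_GET['feed_id']) ? (int) $_GET['feed_id'] : 0;
$feed   = load_feed_by_id($feedId); // look the feed up in your database (assumed helper)

if (isset($_GET['hub_mode'])) { // a 'subscribe' or 'unsubscribe' verification request
    if (!$feed && $_GET['hub_mode'] === 'subscribe') {
        header('HTTP/1.1 404 Not Found'); // feed deleted: tell the hub to stop
        exit;
    }
    if ($_GET['hub_verify_token'] !== push_verify_token($_GET['hub_topic'])) {
        header('HTTP/1.1 404 Not Found'); // not a verification we asked for
        exit;
    }
    header('Content-Type: text/plain');
    echo $_GET['hub_challenge']; // echo the challenge with a 200 to acknowledge
    exit;
}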

4. Write an endpoint to receive pings from the hub as new content is published

Now you’re ready for the pay-out–magically receiving pings from the ether every time the blog you’ve subscribed to has new content! You’ll receive inbound requests to your specified callback URL without any additional query parameters added (i.e. you’ll know it’s a ping and not a verification because there won’t be any hub.mode parameter included). Instead, the new entries of the subscribed feed will be included directly in the POST body of the request, with a request Content-Type of application/atom+xml for ATOM feeds and application/rss+xml for RSS feeds. Depending on your programming language of choice, you’ll need to figure out how to extract the raw POST body contents. For instance, in PHP you would fopen the special filename php://input to read it.
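
Continuing the same hypothetical callback script from step 3, the ping branch just grabs the raw body and Content-Type and hands them off (handle_push_ping() is sketched in the next step):

<?php
if (!isset($_GET['hub_mode'])) { // no hub.mode parameter: this is an update ping
    $contentType = isset($_SERVER['CONTENT_TYPE']) ? $_SERVER['CONTENT_TYPE'] : '';
    $payload = file_get_contents('php://input'); // the raw POST body with the new entries
    handle_push_ping((int) $_GET['feed_id'], $contentType, $payload); // see step 5
    // a plain 200 response is all the hub needs
}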

5. Get the latest content from updated blogs as you receive pings

The ping is really telling you two things: 1) this blog has updated content, and 2) here it is. The advantage of providing the content directly in the ping (a so-called “fat ping”) is so that the subscriber doesn’t have to go re-crawl the feed to get the updated content. Not only is this a performance savings (especially when you consider that lots of subscribers may get pings for a new blog post at roughly the same time, and they might otherwise all crawl that blog at the same time for the new contents; the so-called “thundering herd” problem), it’s also a form of robustness since some blogging systems take a little while to update their feeds when a new post is published (especially for large blogging systems that have to propagate changes across multiple data-centers or update caching tiers), so it’s possible you’ll receive a ping before the content is available to crawl directly. For these reasons and more, it’s definitely a best practice to consume the fat ping directly, rather than just using it as a hint to go crawl the blog again (i.e. treating it as a “light ping”).

That being said, most crawling systems are designed just to poll URLs and look for new data, so it may be easier to start out by taking the “light ping” route. In other words, when you receive a PuSH ping, look up the feed ID from the URL of the request you’re handling, and assuming that feed is still valid, just schedule it to crawl ASAP. That way, you don’t have to change the rest of your crawling infrastructure; you just treat the ping as a hint to crawl now instead of waiting for the next regular polling interval. While sub-optimal, in my experience this works pretty well and is very easy to implement. (It’s certainly a major improvement over just polling with no PuSH support!) If you’re worried about crawling before the new content is in the feed, and you don’t mind giving up a bit of speed, you can schedule your crawler for “in N seconds” instead of ASAP, which in practice will allow a lot of slow-to-update feeds to catch up before you crawl them.

Once you’re ready to handle the fat pings directly, extract the updated feed entries from the POST body of the ping (the payload is essentially an exact version of the full feed you’d normally fetch, except it only contains entries for the new content), and ingest it however you normally ingest new blog content. In fact, you can go even further and make PuSH the default way to ingest blog content–change your polling code to act as a “fake PuSH proxy” and emit PuSH-style updates whenever it finds new entries. Then your core feed-ingesting code can just process all your updated entries in the same way, whether they came from a hub or your polling crawlers.
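
Here’s a sketch of consuming the fat ping directly, with a fallback to the light-ping route if the payload can’t be parsed. ingest_entry() and schedule_crawl_asap() are assumed stand-ins for however your system already stores new posts and schedules crawls.

<?php
function handle_push_ping($feedId, $contentType, $payload) {
    $doc = simplexml_load_string($payload);
    if ($doc === false) {
        schedule_crawl_asap($feedId); // can't parse: fall back to a normal crawl ASAP
        return;
    }
    if (strpos($contentType, 'rss') !== false) {
        $entries = $doc->channel->item; // RSS: new items under <channel>
    } else {
        $entries = $doc->children('http://www.w3.org/2005/Atom')->entry; // Atom: new entries
    }
    foreach ($entries as $entry) {
        ingest_entry($feedId, $entry); // assumed helper: however you normally ingest content
    }
}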

However you handle the pings, once you find that things are working reliably, you can change the polling interval for PuSH-enabled blogs to be much slower, or even turn it off completely, if you’re not worried about ever missing a ping. In practice, slow polling (e.g. once a day) is probably still a good hedge against the inevitable clogs in the internet’s tubes.

6. Unsubscribe from feeds when they’re deleted from your system

Sometimes users will delete their account on your system or unhook one of their feeds from their account. To be a good citizen, rather than just waiting for the next time the hub sends a subscription verification request to tell it you no longer care about this feed, you should send the hub an unsubscribe request when you know the feed is no longer important to you. The process is identical to subscribing to a feed (as described in steps 2 and 3), except you use “unsubscribe” instead of “subscribe” for the hub.mode values in all cases.
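
For example, reusing the hypothetical change_hub_subscription() function from the step 2 sketch:

<?php
// When a user deletes their account or unhooks a feed:
change_hub_subscription($feedId, $hub, $topic, 'unsubscribe');
// The hub then sends an unsubscribe verification, handled just like the one in step 3.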

Testing your implementation

Now that you know all the steps needed to implement PuSH subscriber support, it’s time to test your code in the wild. Probably the easiest way is to hook up that http://pubsubhubbub-example-app.appspot.com/ feed, since you can easily add content to it to test pings, and it’s known to have valid hub-discovery metadata. But you can also practice with any blog that is PuSH-enabled (perhaps your shiny new Google Buzz public posts feed?). In any case, schedule it to be crawled normally, and verify that it correctly extracts the hub-link and self-link and adds the self-link to your feed database.

The first time it finds these links, it should trigger a subscription request. (On subsequent crawls, it shouldn’t try to subscribe again, since the topic URL hasn’t changed.) Verify that you’re sending a request to the hub that includes all the necessary parameters, and verify that it’s sending you back a 202 response. If it’s not working, carefully check that you’re sending all the right parameters.

Next, verify that upon sending a subscription request, you’ll soon get an inbound verification request from the hub. Make sure you detect requests to your callback URL with hub.mode=subscribe, and that you are checking the hub.verify_token value against the value you sent in the subscription request, and then that you’re sending the hub.challenge value as your response body. Unfortunately, it’s usually not easy to inspect the hub directly to confirm that it has properly verified your subscription, but hopefully some hubs will start providing site-specific dashboards to make this process more transparent. In the meantime, the best way to verify that things worked properly is to try making test posts to the blog and looking for incoming pings.

So add a new post on the example blog, or write a real entry on your PuSH-enabled blog of choice, and look in your server logs to make sure a ping came in. Depending on the hub, the ping may come nearly instantaneously or after a few seconds. If you don’t see it after several seconds, something is probably wrong, but try a few posts to make sure you didn’t just miss it. Look at the specific URL that the hub is calling on your site, and verify that it has your feed ID in the URL, and that it does indeed match the feed that just published new content. If you’re using the “light ping” model, check that you scheduled your feed to crawl ASAP. If you’re using the “fat ping” model, check that you correctly ingested the new content that was in the POST body of the ping.

Once everything appears to be working, try un-hooking your test feed (and/or deleting your account) and verify that it triggers you to send an unsubscribe request to the hub, and that you properly handle the subsequent unsubscribe verification request from the hub.

If you’ve gotten this far, congratulations! You are now part of the real-time-web! Your users will thank you for making their content show up more quickly on your site, and the sites that publish those feeds will thank you for not crawling them as often, now that you can just sit back and wait for updates to be PuSH-ed to you. And I and the rest of the community will thank you for supporting open standards for a decentralized social web!

(Thanks to Brett Slatkin for providing feedback on a draft of this post!)

Joseph Smarr has new work info…

High on my to-do list for 2010 will be to update my contact info in Plaxo, because I’ll be starting a new job in late January. After nearly 8 amazing years at Plaxo, I’m joining Google to help drive a new company-wide focus on the future of the Social Web. I’m incredibly excited about this unique opportunity to turbo-charge my passionate pursuit of a Social Web that is more open, interoperable, decentralized, and firmly in the control of users.

I’ve worked closely with Google as a partner in opening up the social web for several years, and they’ve always impressed me with their speed and quality of execution, and more importantly, their unwavering commitment to do what’s right for users and for the health of the web at large. Google has made a habit of investing heavily and openly in areas important to the evolution of the web–think Chrome, Android, HTML5, SPDY, Public DNS, etc. Getting the future of the Social Web right–including identity, privacy, data portability, messaging, real-time data, and a distributed social graph–is just as important, and the industry is at a critical phase where the next few years may well determine the platform we live with for decades to come. So when Google approached me recently to help coordinate and accelerate their innovation in this area, I could tell by their ideas and enthusiasm that this was an opportunity I couldn’t afford to pass up.

Now, anyone who knows me should immediately realize two things about this decision–first, it in no way reflects a lack of love or confidence from me in Plaxo, and second, I wouldn’t have taken this position if I hadn’t convinced myself that I could have the greatest possible impact at Google. For those that don’t know me as well personally, let me briefly elaborate on both points:

I joined Plaxo back in March of 2002 as their first non-founder employee, before they had even raised their first round of investment. I hadn’t yet finished my Bachelor’s Degree at Stanford, and I’d already been accepted into a research-intensive co-terminal Masters program there, but I was captivated by Plaxo’s founders and their ideas, and I knew I wanted to be a part of their core team. So I spent the first 15 months doing essentially two more-than-full-time jobs simultaneously (and pretty much nothing else). Since that time, I’ve done a lot of different things for Plaxo–from web development to natural language processing to stats collection and analysis to platform architecture, and most recently, serving as Plaxo’s Chief Technology Officer. Along the way, I’ve had to deal with hiring, firing, growth, lack of growth, good press, bad press, partnerships with companies large and small, acquisitions–both as the acquirer and the acquiree–and rapidly changing market conditions (think about it: we started Plaxo before users had ever heard of flickr, LinkedIn, friendster, Gmail, Facebook, Xobni, Twitter, the iPhone, or any number of other companies, services, and products that radically altered what it means to “stay in touch with the people you know and care about across all the tools and services that you and they use”). When I joined Plaxo, there were four of us. Now we have over 60 employees, and that’s not counting our many alumni. All of this is to make the following plain: Plaxo has been my life, my identity, my passion, and my family for longer than I’ve known my wife, longer than I was at Stanford, and longer than I’ve done just about anything before. Even at a year-and-a-half since our acquisition by Comcast, Plaxo has the same magic and mojo that’s made it a joy and an honor to work for all these years. And with our current team and strategic focus, 2010 promises to be one of the best years yet. So I hope this makes it clear that I was not looking to leave Plaxo anytime soon, and that the decision to do so is one that I did not make lightly.

Of all the things I’ve done at Plaxo over the years, my focus on opening up the Social Web over the past 3+ years is the work I’m proudest of, and the work that I think has had the biggest positive impact–both for Plaxo and the web itself. Actually, it really started way back in 2004, when I first read about FOAF and wrote a paper about its challenges from Plaxo’s perspective, for which I was then selected to speak at my first industry conference, the FOAF Workshop in Galway, Ireland. Since that time, I realized what a special community of people there were that cared about these issues in a web-wide way, and I tried to participate on the side and in my free time whenever possible. After leading Plaxo’s web development team to build a rich and complex new AJAX address book and calendar (something that also reinforced to me the value of community participation and public speaking, albeit on the topic of high-performance JavaScript), I knew I wanted to work on the Social Web full-time, and luckily it coincided perfectly with Plaxo’s realization that fulfilling our mission required focusing on more than just Outlook, webmail, and IM as important sources of “people data”. So we crafted a new role for me as Chief Platform Architect, and off I went, turning Plaxo into the first large-scale OpenID Relying Party, the first live OpenSocial container, co-creator of the Portable Contacts spec, co-creator and first successful deployment of hybrid onboarding combining OpenID and OAuth, and so on. Along the way I co-authored the Bill of Rights for Users of the Social Web, coined the term Open Stack, was elected to the Boards of both the OpenID Foundation and OpenSocial Foundation, and worked closely with members of the grass-roots community as well as with people at Google, Yahoo, Microsoft, AOL, Facebook, MySpace, Twitter, LinkedIn, Netflix, The New York Times, and others, often as a launch partner or early adopter of their respective forays into supporting these same open standards. And collectively, I think it’s fair to say that our efforts greatly accelerated the arrival, quality, and ubiquity of a Social Web ecosystem that has the potential to be open, decentralized, and interoperable, and that may define the next wave of innovation in this space, much as the birth of the web itself did nearly 20 years ago.

But we’re not done yet. Not by a long shot. And the future is never certain.

At the recent OpenID Summit hosted by Yahoo!, I gave a talk in which I outlined the current technical and user-experience challenges standing in the way of OpenID becoming truly successful and a “no-brainer” for any service large or small to implement. Despite all the progress that we’ve made over the past few years, and that I’ve proudly contributed to myself, there is no shortage of important challenges left to meet before we can reach our aspirations for the Social Web. There is also no shortage of people committed to “fighting the good fight”, but as with any investment for the future with a return that will be widely shared, most people and companies are forced to make tough trade-offs about whether to focus on what already works today or what may work better tomorrow. There are a lot of good people in a lot of places working on the future of the Social Web, and we need them all and more. But in my experience, Google is unmatched in its commitment to doing what’s right for the future of the web and its willingness to think long-term. One need only look at the current crop of Social Web “building blocks” being actively worked on and deployed by Google–including OpenID, OAuth, Portable Contacts, OpenSocial, PubSubHubbub, Webfinger, Salmon, and more–to see how serious they are. And yet they came to me because they want to turn up the intensity and focus and coordination and boldness even more.

I talked to a lot of Googlers before deciding to join, and from the top to the bottom they really impressed me with how genuinely they believe in this cause that I’m so passionate about, and how strong a mandate I feel throughout the company to do something great here. I also heard over and over how surprisingly easy it still is to get things built and shipped — both new products, tools, and specs, as well as integrating functionality into Google’s existing services. And, of course, there are so many brilliant and talented people at Google, and so much infrastructure to build on, that I know I’ll have more opportunity to learn and have an impact than I could ever hope to do anywhere else. So while there are other companies large and small (or perhaps not yet in existence) where I could also have some form of positive impact on the future of the Social Web, after looking closely at my options and doing some serious soul searching, I feel confident that Google is the right place for me, and now is the right time.

Let me end by sincerely thanking everyone that has supported me and worked with me not just during this transition process but throughout my career. I consider myself incredibly fortunate to be surrounded by so many amazing people that genuinely want to have a positive impact on the world and want to empower me to do the best that I can to contribute, even if it means doing so from inside (or outside) a different company. It’s never easy to make big decisions involving lots of factors and rapidly changing conditions, let alone one with such deep personal and professional relationships at its core. Yet everyone has treated me with such respect, honesty, and good faith, that it fills me with a deep sense of gratitude, and reminds me why I so love living and working in Silicon Valley.

2010 will be an exciting and tumultuous year for the Social Web, and so will it be for me personally. Wish us both luck, and here’s to the great opportunities that lie ahead!

What an RP Wants, Part 2 (OpenID Summit 2009)

What an RP Wants, Part 2
OpenID Summit 2009 (Hosted by Yahoo!)
Mountain View, CA
November 2, 2009

Download PPT (2.1 MB)

I was invited to give a talk at the OpenID Summit as a follow-up to my talk “What an RP Wants”, which I gave in February at the OpenID Design Summit. In both cases, I shared my experiences from Plaxo’s perspective as a web site that is trying to succeed at letting users sign up using accounts they already have on Google, Yahoo, and other OpenID Provider sites. This talk reviewed the progress we’ve made as a community since February, and laid out the major remaining challenges to making it a truly successful end-to-end experience to be an OpenID Relying Party (RP).

My basic message was this: we’ve made a lot of progress, but we’ve still got a lot left to do. So let’s re-double our efforts and commit ourselves once again to working together and solving these remaining problems. As much success as OpenID has had to date, its continued relevance is by no means guaranteed. But I remain optimistic because the same group of people that have brought us this far are still engaged, and none of the remaining challenges are beyond our collective ability to solve.

See more coverage of the OpenID Summit, including my talk, at The Real McCrea.

And here are a couple of video excerpts from my talk:

The Social Web: An Implementer’s Guide (Google I/O 2009)

The Social Web: An Implementer’s Guide
Google I/O 2009
San Francisco, CA
May 28, 2009

Download PPT (7.3 MB)

Google invited me back for a second year in a row to speak at their developer conference about the state-of-the-art of opening up the social web. While my talk last year laid out the promise and vision of an interoperable social web ecosystem, this year I wanted to show all the concrete progress we’ve made as an industry in achieving that goal. So my talk was full of demos–signing up for Plaxo with an existing Gmail account in just two clicks, using MySpaceID to jump into a niche music site without a separate sign-up step, ending “re-friend madness” by honoring Facebook friend connections on Plaxo (via Facebook Connect), killing the “password anti-pattern” with user-friendly contact importers from a variety of large sites (demonstrated with FriendFeed), and sharing activity across sites using Google FriendConnect and Plaxo. Doing live demos is always a risky proposition, especially when they involve cross-site interop, but happily all the demos worked fine and the talk was a big success!

I began my talk by observing that the events of the last year have made it clear: The web is going social, and the social web is going open. By the end of my talk, having shown so many mainstream sites with deep, user-friendly interoperability, I decided to go a step further and declare: The web is now social, and the social web is now open. You don’t have to wait any longer to start reaping the benefits. It’s time to dive in.

Portable Contacts and vCardDAV (IETF 74)

Portable Contacts and vCardDAV
IETF 74
San Francisco, CA
March 25, 2009

Download PPT (81 KB) or PDF

You may remember the venerable IETF standards-body from such foundational internet RFCs as HTTP (aka the web), SMTP (aka e-mail), and vCard (aka contact info). So I’ll be honest that I was a bit intimidated when they invited me to their IETF-wide conference to speak about my work on Portable Contacts.

In addition to being chartered with updating vCard itself, the IETF has a working group building a read-write standard for sharing address book data called CardDAV. (It’s a set of extensions to WebDAV for contact info, hence the name.) Since Portable Contacts is also trying to create a standard for accessing contact info online (albeit with a less ambitious scope and feature set), I was eager to share the design decisions we had made and the promising early adoption we’ve already seen.

My optimistic hope was that perhaps some of our insights might end up influencing the direction of CardDAV–or perhaps even vCard itself. But I was also a bit nervous that such an august and rigorous standards body might have little interest in the pontifications of a “scrappy Open Stack hacker” like me. Or that even if they liked what I said, it might be impossible to have an impact this late in the game. But I figured if nothing else, here’s a group of people that are as passionate about the gory details of contact info as we are, so at least we should meet one another and see where it leads.

Boy was I impressed and inspired by their positive reception of my remarks! Far from being a hostile or uninterested audience, everyone seemed genuinely excited by the work we’d done, especially since companies large and small are already shipping compliant implementations. The Q&A was passionate and detailed, and it spilled out into the hallway and continued long after the session officially ended.

Best of all, I then got to sit down with Simon Perreault, one of the primary authors of vCard 4.0 and vCardXML, and we went literally line-by-line through the entire Portable Contacts spec and wrote a list of all the ways it differs from the next proposed version of vCard. As you might imagine, there were some passionate arguments on both sides concerning the details, but actually there were really no “deal breakers” in there, and Simon sounded quite open (or even excited) about some of the “innovations” we’d made. It really does look like we might be able to get a common XML schema across PoCo and vCard / CardDAV, and some of the changes might well land in core vCard!

Of course, any official spec changes will happen through the normal IETF mailing lists and process. But as I’m sure you can tell, I think things went amazingly well today, and the future of standards for sharing contact info online has never looked brighter! Thanks again to Marc Blanchet, Cyrus Daboo, and the rest of the vCardDAV working group for their invitation and warm reception. Onward ho!

Social data sharing will change lives and business (DEMO 09)

Social data sharing will change lives and business
DEMO 09
Palm Desert, CA
March 3, 2009

Watch full video (via Brightcove)

I flew down to an oddly-lush oasis in the middle of the desert last week to attend a panel at DEMO about the future of the social web. Max Engel from MySpace has a nice write-up of the event, and a full video of our panel is available on Brightcove. Eric Eldon of VentureBeat moderated the panel, which featured me, Max Engel, Dave Morin from Facebook, and Kevin Marks from Google. In addition to a lively discussion, we each demoed our “latest and greatest” efforts at opening up the social web. Max showed the first public demo of MySpace’s support for hybrid OpenID+OAuth login using a friendly popup, Kevin showed off how to add FriendConnect to any blog, and Dave showed off some new examples of Facebook Connect in the wild. I showed our new Google-optimized onboarding experiment with Plaxo, and revealed that it’s working so well that we’re now using it for 100% of new users invited to Plaxo via a gmail.com email address.

It’s just amazing and inspiring to me that these major mainstream internet sites are all now able to stand up and demo slick, user-friendly cross-site interoperability and data sharing using open APIs, and we’re all continuing to converge on common standards so developers don’t have to write separate code to interoperate with each site. You can really measure the speed of progress in this space by watching the quantity and quality of these Open Web demos continue to increase, and with SXSW, Web 2.0 Expo, Google I/O, and Internet Identity Workshop all still to come in the first half of 2009, I have a feeling that we all ain’t seen nuthin’ yet! 🙂
