
HipChat is consumer-meets-enterprise done right — check it out!

Three of Plaxo’s best engineers and designers left almost a year ago to start a new company (much as they’d done a few years ago with HipCal, which Plaxo acquired in 2006). After a brief private beta, today they are launching to the public.

Meet HipChat. It’s a new (and, IMO, very clever and promising) approach to group collaboration within companies and teams–essentially group chat plus file-sharing done with the simplicity and polish of a great consumer app, but targeted at the enterprise. And it’s meant to spread organically and bottom-up by attracting enthusiastic team members who really find it useful, rather than top-down through long sales cycles to CIOs–in other words, winning by actually being better for the people that use it every day. You’ll be able to tell this from the moment you start using it–it’s distinctly “un-enterprise-y” in all the right ways, yet every enterprise needs something like this to be more productive and organized.


[ More HipChat screenshots ]

I’m excited about HipChat for several reasons:

First, the founders (Pete Curley, Garret Heaton, and Chris Rivers) are all rockstar talents and super nice guys–the best of the young web 2.0 school of “bootstrap from nothing and build something genuinely good that grows because people are using and loving it,” an approach that has only recently become feasible. Whatever they work on, I know it’ll be well thought through and well executed, and it’ll keep getting better over time. These are good guys to know and watch, and they’re just getting started.

Second, group collaboration is a space that everyone knows is important, yet one that nothing solves well today. At Plaxo we’ve tried tons of wikis, internal blogs, mailing lists, document depots, dashboards, you name it. They’re always too complicated and cumbersome, and they never have streamlined workflows that work the way you need. One of my early surprises coming to Google is that for all their efforts and internal tools, the situation is ultimately not much better. Information is still spread everywhere across a variety of systems, is too hard to find and curate, and too often forces you to just ask the person next to you and hope for the best. Maybe new tools like Google Wave will make a difference here, but of course the more flexible and general-purpose a tool like that is, the greater the risk that it will do too many things and none of them just the way you want. HipChat may not be the magic solution to this complex problem either, but it’s refreshing to see the team apply a consumer-app eye and discipline to the problem–focusing on specific task arcs to really nail, and an end-to-end polish and friendliness that’s so clearly lacking from most other groupware tools.

This last point deserves its own slot: in my experience, the only way to really advance the state of technology making a real difference in the lives of real people is to subject it to the harsh Darwinian landscape of consumer software and devices, where if it doesn’t “just work” and provide a compelling and enjoyable experience, it doesn’t get used. This is the sharpening steel that’s honed all the best apps we have today, from Gmail to Facebook to the iPhone to Boxee, and so on. And if you think about it, it’s the missing piece that makes most enterprise software so terrible–your company buys it, and you’re stuck with it, like it or not. The typical enterprise “fitness function” yields a much slower and sloppier rate of evolution than the consumer one, and that I believe is the main reason the quality of the two classes of apps differs so much. So it’s great to see an increase in companies willing to try and swim upstream to gain corporate adoption with a consumer mindset, whether it’s Google Apps, box.net, Yammer, or now HipChat.

If you work on a team, if you’re dissatisfied with the state of collaboration tools, or if you just want to see a really well done new app, I encourage you to check out HipChat. We used several early betas inside Plaxo, and while any new communications tool faces an uphill battle to gain a critical mass of adoption and change old habits, enough of us had enough “eureka moments” using HipChat to see its strong potential and to wish that we could fast-forward time until more people are using it and it’s had even more love put into it. The next best thing we can do is to spread the good word and give our support to the team, so consider this post a down payment!

Sources of inspiration for 2010

Despite all the serious challenges the world is facing these days, I’m also seeing more and more that inspires me–cases where great things are happening to great people doing great work and improving the world in the process. Specifically, the following recent examples come to mind:

  • Movies: Avatar
  • Music: Lady Gaga
  • TV: Netflix HD streaming to Blu-Ray/TiVo/Xbox/etc.
  • Mobile: Palm Pre & Nexus One
  • Desktop: Chrome OS
  • Social: Foursquare

What do all of these have in common? They’re all cases of insanely talented outsiders changing the world by just working really hard and doing great stuff.

Everyone said James Cameron was crazy to make Avatar, just like they said he was crazy to make Titanic. They’re both such impossibly grand, expensive, and difficult visions to capture. But he did it anyway, and the work is brilliant, and the only things that got broken were the previous box-office records.

Same thing with Lady Gaga: just two years ago, no one had heard of her, and she was just playing little clubs in New York. But using her incredible talent in song-writing and performance art, and a willingness to work insanely hard every day, she unleashed her strange and unique vision of music/fashion/art/performance and took the world by storm, becoming the first artist ever to score four #1 hits off her debut album, and at age 23 no less. If you haven’t paid close attention and think she’s just another made-to-order corporate pop starlet, take a closer look–you’ll be surprised (as I was).

Netflix is certainly an outsider when it comes to watching TV in your living room (it’s neither a cable provider nor a set-top box manufacturer), yet now that I can watch entire seasons of Lost in HD on my TV whenever I want–thanks to its Watch Instantly streaming and integrations with existing set-top boxes and gaming consoles–I find myself rarely watching “real TV” any more. And Netflix’s user experience is far superior–it knows which episodes I’ve already seen, so I can just pick up where I left off whenever I have a free moment. And if I’m up at Tahoe for the weekend, I can watch it there too on my laptop, and my episode history is kept in sync because it’s stored in the cloud. Brilliant.

Both Palm Pre and Android (hard to pick a favorite yet!) are up against fierce competition from Apple and the old-world mobile establishment, and neither company (Palm or Google) is an established player in this space, but they’re both producing excellent devices that simultaneously improve the quality of the experience for users while opening up more flexibility and power to developers. And they’re also making web development more of a “first-class citizen” for mobile apps. It’s hard to think of a more ambitious challenge than building your own mobile platform–hardware, OS, software stack, apps, and distribution in physical stores–but that’s not stopping these guys from having a major impact, and the game is just beginning.

Chrome OS isn’t even really out yet, but it’s already clear that the desktop will soon evolve toward Google’s vision, in which all important data lives in the cloud, and it will no longer matter if your computer dies or if you want to use multiple computers in different places–a pain that I’ve experienced many times, as I’m sure you have. Windows is one of the most well-established monopolies there is, so again it’s crazy in some sense to try and compete there, let alone with a radical new vision that does a lot *less* than the status quo, and instead re-imagines the problem in a new way. And yet people buy new computers all the time, so it’s not hard to believe that Chrome OS could capture considerable market share within a few years, while forcing radical change from its competitors at the same time.

And perhaps closest to my own area of work on the Social Web, I think it’s noteworthy that the company that has had the biggest positive impact on how I connect and share with my friends in the last year is not any of the big established players, but a tiny startup that’s building itself up from scratch by making it easier and more rewarding to share where you are and what you’re doing: Foursquare. While I cringe at the amount of work they have to do to integrate with each separate social network and build apps for each separate mobile device (that’s why we need more common standards, of course!), they’re still able to deliver an awesome product with a tiny team, and their service is taking off like a rocket.

Why are these examples so inspiring to me? They provide reassurance that in 2010, two basic things are still true–perhaps more true than ever before:

  1. You *can* win by being excellent and working hard to build a better product
  2. You *can* win even if you’re an outsider in a field of powerful incumbents

It’s hard to believe in transformational innovation if you can’t believe in those two points, and it’s often easy to get discouraged, since these are such difficult challenges. But if those guys can all do it, so can we. In fact, it’s not hard to believe that it’s actually getting easier to succeed in these ways. After all, barriers to entry keep getting lowered, and the spread of information keeps getting faster and more efficient, so the good stuff should get discovered and bubble to the top faster than ever before. If that’s true, then the new “hard problem” should be doing great work in the first place, and that’s the problem I want to be tackling!

What examples are inspiring you right now?

Joseph Smarr has new work info…

High on my to-do list for 2010 will be to update my contact info in Plaxo, because I’ll be starting a new job in late January. After nearly 8 amazing years at Plaxo, I’m joining Google to help drive a new company-wide focus on the future of the Social Web. I’m incredibly excited about this unique opportunity to turbo-charge my passionate pursuit of a Social Web that is more open, interoperable, decentralized, and firmly in the control of users.

I’ve worked closely with Google as a partner in opening up the social web for several years, and they’ve always impressed me with their speed and quality of execution, and more importantly, their unwavering commitment to do what’s right for users and for the health of the web at large. Google has made a habit of investing heavily and openly in areas important to the evolution of the web–think Chrome, Android, HTML5, SPDY, Public DNS, etc. Getting the future of the Social Web right–including identity, privacy, data portability, messaging, real-time data, and a distributed social graph–is just as important, and the industry is at a critical phase where the next few years may well determine the platform we live with for decades to come. So when Google approached me recently to help coordinate and accelerate their innovation in this area, I could tell by their ideas and enthusiasm that this was an opportunity I couldn’t afford to pass up.

Now, anyone who knows me should immediately realize two things about this decision–first, it in no way reflects a lack of love or confidence from me in Plaxo, and second, I wouldn’t have taken this position if I hadn’t convinced myself that I could have the greatest possible impact at Google. For those that don’t know me as well personally, let me briefly elaborate on both points:

I joined Plaxo back in March of 2002 as their first non-founder employee, before they had even raised their first round of investment. I hadn’t yet finished my Bachelor’s Degree at Stanford, and I’d already been accepted into a research-intensive co-terminal Master’s program there, but I was captivated by Plaxo’s founders and their ideas, and I knew I wanted to be a part of their core team. So I spent the first 15 months doing essentially two more-than-full-time jobs simultaneously (and pretty much nothing else). Since that time, I’ve done a lot of different things for Plaxo–from web development to natural language processing to stats collection and analysis to platform architecture, and most recently, serving as Plaxo’s Chief Technology Officer. Along the way, I’ve had to deal with hiring, firing, growth, lack of growth, good press, bad press, partnerships with companies large and small, acquisitions–both as the acquirer and the acquiree–and rapidly changing market conditions (think about it: we started Plaxo before users had ever heard of flickr, LinkedIn, friendster, Gmail, Facebook, Xobni, Twitter, the iPhone, or any number of other companies, services, and products that radically altered what it means to “stay in touch with the people you know and care about across all the tools and services that you and they use”). When I joined Plaxo, there were four of us. Now we have over 60 employees, and that’s not counting our many alumni. All of this is to make the following plain: Plaxo has been my life, my identity, my passion, and my family for longer than I’ve known my wife, longer than I was at Stanford, and longer than I’ve done just about anything before. Even a year-and-a-half after our acquisition by Comcast, Plaxo has the same magic and mojo that’s made it a joy and an honor to work for all these years. And with our current team and strategic focus, 2010 promises to be one of the best years yet. So I hope this makes it clear that I was not looking to leave Plaxo anytime soon, and that the decision to do so is one that I did not make lightly.

Of all the things I’ve done at Plaxo over the years, my focus on opening up the Social Web over the past 3+ years is the work I’m proudest of, and the work that I think has had the biggest positive impact–both for Plaxo and the web itself. Actually, it really started way back in 2004, when I first read about FOAF and wrote a paper about its challenges from Plaxo’s perspective, for which I was then selected to speak at my first industry conference, the FOAF Workshop in Galway, Ireland. Since that time, I realized what a special community of people cared about these issues in a web-wide way, and I tried to participate on the side and in my free time whenever possible. After leading Plaxo’s web development team to build a rich and complex new AJAX address book and calendar (something that also reinforced to me the value of community participation and public speaking, albeit on the topic of high-performance JavaScript), I knew I wanted to work on the Social Web full-time, and luckily it coincided perfectly with Plaxo’s realization that fulfilling our mission required focusing on more than just Outlook, webmail, and IM as important sources of “people data”. So we crafted a new role for me as Chief Platform Architect, and off I went, turning Plaxo into the first large-scale OpenID Relying Party, the first live OpenSocial container, co-creator of the Portable Contacts spec, co-creator and first successful deployment of hybrid onboarding combining OpenID and OAuth, and so on. Along the way I co-authored the Bill of Rights for Users of the Social Web, coined the term Open Stack, was elected to the Boards of both the OpenID Foundation and OpenSocial Foundation, and worked closely with members of the grass-roots community as well as with people at Google, Yahoo, Microsoft, AOL, Facebook, MySpace, Twitter, LinkedIn, Netflix, The New York Times, and others, often as a launch partner or early adopter of their respective forays into supporting these same open standards. And collectively, I think it’s fair to say that our efforts greatly accelerated the arrival, quality, and ubiquity of a Social Web ecosystem that has the potential to be open, decentralized, and interoperable, and that may define the next wave of innovation in this space, much as the birth of the web itself did nearly 20 years ago.

But we’re not done yet. Not by a long shot. And the future is never certain.

At the recent OpenID Summit hosted by Yahoo!, I gave a talk in which I outlined the current technical and user-experience challenges standing in the way of OpenID becoming truly successful and a “no-brainer” for any service large or small to implement. Despite all the progress that we’ve made over the past few years, and that I’ve proudly contributed to myself, there is no shortage of important challenges left to meet before we can reach our aspirations for the Social Web. There is also no shortage of people committed to “fighting the good fight”, but as with any investment for the future with a return that will be widely shared, most people and companies are forced to make tough trade-offs about whether to focus on what already works today or what may work better tomorrow. There are a lot of good people in a lot of places working on the future of the Social Web, and we need them all and more. But in my experience, Google is unmatched in its commitment to doing what’s right for the future of the web and its willingness to think long-term. One need only look at the current crop of Social Web “building blocks” being actively worked on and deployed by Google–including OpenID, OAuth, Portable Contacts, OpenSocial, PubSubHubbub, Webfinger, Salmon, and more–to see how serious they are. And yet they came to me because they want to turn up the intensity and focus and coordination and boldness even more.

I talked to a lot of Googlers before deciding to join, and from top to bottom they really impressed me with how genuinely they believe in this cause that I’m so passionate about, and how strong a mandate I sensed throughout the company to do something great here. I also heard over and over how surprisingly easy it still is to get things built and shipped — whether new products, tools, and specs, or new functionality integrated into Google’s existing services. And, of course, there are so many brilliant and talented people at Google, and so much infrastructure to build on, that I know I’ll have more opportunity to learn and have an impact than I could ever hope to anywhere else. So while there are other companies large and small (or perhaps not yet in existence) where I could also have some form of positive impact on the future of the Social Web, after looking closely at my options and doing some serious soul searching, I feel confident that Google is the right place for me, and now is the right time.

Let me end by sincerely thanking everyone that has supported me and worked with me not just during this transition process but throughout my career. I consider myself incredibly fortunate to be surrounded by so many amazing people that genuinely want to have a positive impact on the world and want to empower me to do the best that I can to contribute, even if it means doing so from inside (or outside) a different company. It’s never easy to make big decisions involving lots of factors and rapidly changing conditions, let alone one with such deep personal and professional relationships at its core. Yet everyone has treated me with such respect, honesty, and good faith, that it fills me with a deep sense of gratitude, and reminds me why I so love living and working in Silicon Valley.

2010 will be an exciting and tumultuous year for the Social Web, and so will it be for me personally. Wish us both luck, and here’s to the great opportunities that lie ahead!

Missing the Oscars Finale: A Case Study in Technology Failure (and Opportunity)

Yesterday one of my wife’s friends came over to visit, and we decided on a lark to watch the Oscars (which we haven’t done most years). Even though we pay for cable and are avid TiVo users, due to a variety of circumstances we missed both the beginning of the Oscars and–more importantly–the entire finale, from best actress through best picture. My frustration and indignation led me to think systematically about the various ways that technology could and should have helped us avoid this problem. I decided to share my thoughts in the hope that better understanding technology’s current limitations will help inspire and illuminate the way to improving them. As usual, I welcome your feedback, comments, and additional thoughts on this topic.

The essence of the failure was this: the Oscars was content that we wanted to watch, were entitled to watch, but were ultimately unable to watch. Specifically, here’s what went wrong, and what could and should have gone better:

  • Nothing alerted me that the Oscars was even on that day, nor did anything prompt me to record it. I happened to return home early that day from the Plaxo ski trip, but might well have otherwise missed it completely. This is ridiculous given that the Oscars is a big cultural event in America, and that lots of people were planning to watch and record it. That “wisdom of the crowds” should have caused TiVo or someone to send me an email or otherwise prompt me, asking “Lots of people are planning to watch the Oscars–should I TiVo that for you?”
  • As a result of not having scheduled the Oscars to record in advance, when we turned on the TV it turned out that the red carpet pre-show had started 15 minutes earlier. Sadly, there was no way to go back and watch the 15 minutes we had missed. Normally TiVo buffers the last 30 minutes of live TV, but when you change channels, it wipes out the buffer, and in this case we were not already on the channel where the Oscars were airing. Yet clearly this content could and should be easily accessible, especially when it just happened–you could imagine a set of servers in the cloud buffering the last 30 minutes of each channel, and then providing a similar TiVo-like near-past rewind feature no matter which channel you happen to change to (see the sketch after this list; this would be a lot easier than full on-demand, since the last 30 minutes of all channels is a tiny subset of the total content on TV).
  • Once we started watching TV and looked at the schedule, we told TiVo to record the Oscars, but elected to skip the subsequent Barbara Walters interview or whatever was scheduled to follow. Partway through watching in near-real-time, my wife and her friend decided to take a break and do something else (a luxury normally afforded by TiVo). When they came back to finish watching, we discovered to our horror that the Oscars had run 30+ minutes longer than scheduled, and thus we had missed the entire finale. We hadn’t scheduled anything to record after the Oscars, so TiVo in theory could have easily recorded this extra material, but we hadn’t told it to do so, and it didn’t know the program had run long, and its rolling 30-minute buffer had passed over the finale hours earlier, so we were sunk. There are multiple failures at work here:
  1. TiVo didn’t know that the Oscars would run long or that it was running long. My intent as a user was “record the Oscars in its entirety” but what actually happens is TiVo looks at the (static and always-out-of-date) program guide data and says “ok, I’ll record channel 123 from 5:30pm until 8:30pm and hope that does the trick”. Ideally TiVo should get updated program guide data on-the-fly when a program runs long, or else it should be able to detect that the current program has not yet ended and adjust its recording time appropriately. In the absence of those capabilities, TiVo has a somewhat hackish solution of knowing which programs are “live broadcasts” and asking you “do you want to append extra recording time just in case?” when you go to record the show. We would have been saved if we’d chosen to do so, but that brings me to…
  2. We had no information that the Oscars was likely to run long. Actually, that’s not entirely true. Once we discovered our error, my wife’s friend remarked, “oh yeah, the Oscars always runs long”. Well, in that case, there should be ample historical data on how often a recurring live event like the Oscars runs long (and by how much), and TiVo should be able to present that data to help its users make a more informed choice about whether to add additional recording time. If failure #1 were addressed, this whole issue would be moot, but in the interim, if TiVo is going to pass the buck to its users to decide when to add recording time, it should at least gather enough information to help the user make an informed choice.
  3. We weren’t able to go back and watch the TV we had missed, even though nothing else was being recorded during that time. Even though we hadn’t specifically told TiVo to record past the scheduled end of the Oscars, we also hadn’t told it to record anything else. So it was just sitting there, on the channel we wanted to record, doing nothing. Well, actually it was buffering more content, but only for 30 minutes, and only until it changed channels a few hours later to record some other pre-scheduled show. With hard drives as cheap as they are today, there’s no reason TiVo couldn’t have kept recording that same channel until it was asked to change channels. You could easily imagine an automatic overrun-prevention scheme where TiVo keeps recording say an extra hour after each scheduled show (unless it’s asked to change channels in the interim) and holds that in a separate, low-priority storage area (like the suggestions it records when extra space is free) that’s available to be retrieved at the end of a show (“The scheduled portion of this show has now ended, but would you like to keep watching?”), provided you watch that show soon after it was first recorded. In this case, it was only a few hours after the scheduled show had ended, so TiVo certainly would have had the room and ability to do this for us.
  • Dismayed at our failure to properly record the Oscars finale, we hoped that online content delivery had matured to the point where we could just watch the part we had missed online. After all, this is a major event on a broadcast channel whose main point is to draw attention to the movie industry, so if there were ever TV content whose owners should be unconflicted about maximizing viewership in any form, this should be it. But again, here we failed. First of all, there was no way to go online without seeing all the results, thus ruining the suspense we were hoping to experience. One could easily imagine first asking users if they had seen the Oscars, and having a separate experience for those wanting to watch it for the first time vs. those merely wanting a summary or re-cap. But even despite that setback, there was no way to watch the finale online in its entirety. The official Oscars website did have full video of the acceptance speeches, which was certainly better than nothing, but we still missed the introductions, the buildup, and so on. It blows my mind that you still can’t go online and just watch the raw feed, even of a major event on a broadcast channel like ABC, even when the event happened just a few hours ago. In this case it seems hard to believe that the hold-up would be a question of whether the viewer is entitled to view this content (compared to, say, some HBO special or a feature-length movie), but even if it were, my cable company knows that I pay to receive ABC, and presumably has this information available digitally somewhere. Clips are nice, but ABC must have thought the Oscars show was worth watching in its entirety (since it broadcast the whole thing), so there should be some way to watch it that way online, especially soon after it aired (again, this is a simpler problem than archiving all historical TV footage for on-demand viewing). Of course, there is one answer here: I’m sure I could have found the full Oscars recording on bittorrent and downloaded it. How sad (if not unexpected) that the “pirates” are the closest ones to delivering the user experience that the content owners themselves should be striving for!
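
To make that near-past rewind idea concrete, here’s a minimal sketch of the per-channel rolling buffer in Python (ChannelBuffer, the segment format, and the channel numbering are all made up for illustration–this is just the data structure, not anyone’s actual DVR code). The same structure is what would let a box keep recording past a scheduled end time and offer a “keep watching?” prompt:

    import time
    from collections import deque

    BUFFER_SECONDS = 30 * 60  # keep the last 30 minutes of each channel

    class ChannelBuffer:
        """Rolling buffer of the most recent video segments for one channel."""

        def __init__(self):
            self.segments = deque()  # (unix_timestamp, segment) pairs, oldest first

        def append(self, ts, segment):
            self.segments.append((ts, segment))
            # Evict anything that has fallen out of the 30-minute window.
            while self.segments and self.segments[0][0] < ts - BUFFER_SECONDS:
                self.segments.popleft()

        def replay_from(self, ts):
            """Return the segments recorded at or after ts, e.g. the 15 minutes
            a viewer missed before tuning in."""
            return [seg for (t, seg) in self.segments if t >= ts]

    # One buffer per channel, fed continuously from each broadcast stream;
    # a viewer who tunes in to channel 7 fifteen minutes late then asks for:
    buffers = {channel: ChannelBuffer() for channel in range(1, 500)}
    missed = buffers[7].replay_from(time.time() - 15 * 60)

Buffering every channel this way stays cheap precisely because the window is bounded: 30 minutes times a few hundred channels is a small, fixed amount of storage compared to archiving everything for full on-demand.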

Finally, aside from just enabling me to passively consume this content I wanted, I couldn’t help but notice a lot of missed opportunity to make watching the Oscars a more compelling, consuming, and social experience. For instance, I had very little context about the films and nominees–which were expected to win? Who had won or been nominated before? Which were my friends’ favorites? In some cases, I didn’t even know which actors had been in which films, or how well those films had done (both at the box office and with critics). An online companion to the Oscars could have provided all of this information, and thus drawn me in much more deeply. And with a social dimension (virtually watching along with my friends and seeing their predictions and reactions), it could have been very compelling indeed. If such information was available online, the broadcast certainly didn’t try to drive any attention there (not even a quick banner under the show saying “Find out more about all the nominees at Oscar.com”, at least not that I saw). And my guess is that whatever was online wasn’t nearly interactive enough or real-time enough for the kind of “follow along with the show and learn more as it unfolds” type of experience I’m imagining. And even if such an experience were built, my guess is it would only be real-time with the live broadcast. But there’s no reason it couldn’t prompt you to click when you’ve seen each award being presented (even if watching later on your TiVo) and only then reveal the details of how your pick compared to your friends and so on.

So in conclusion, I think there’s so much opportunity here to make TV content easier and more reliable to consume, and there’s even more opportunity to make it more interactive and social. I know I’m not the first person to realize this, but when you really think in detail about the current state of the art, it still amazes me how many failures and opportunities there are right in front of us. As anyone who knows me is painfully aware, I’m a huge TiVo fan, but in this case TiVo let me down. It wasn’t entirely TiVo’s fault, of course, but the net result is the same. And in terms of making the TV experience more interactive and social, it seems like the first step is to get a bunch of smart Silicon Valley types embedded in the large cable companies, especially the ones that aren’t scared of the internet. Well, personally I’m feeling pretty good about that one. 😉

OpenIDDevCamp was hack-tastic!

I spent the weekend in SF at OpenIDDevCamp, hosted at SixApart’s offices. In the style of BarCamp and iPhoneDevCamp, the idea was just to get a lot of people together who were interested in OpenID, provide the space and amenities for them to work together, and let them loose. About 30-40 people showed up, including Brad Fitzpatrick, Scott Kveton, Tantek (with blinking wheels on his shoes), Chris Messina, David Recordon, Christopher Allen, John Bradley, Luke Shepard, and many more.

Over the course of the weekend, I got OpenID 2.0 relying party support deployed on Plaxo, and we found and fixed a bunch of little bugs along the way. You can now use directed identity (e.g. just type in “myopenid.com” as your OpenID and sign in on their side), and you can even use iNames (e.g. I can now sign in with =joseph.smarr). Thanks again to my hacker friend Michael Krelin, who did most of the hard work, and to John Bradley of ooTao for helping me figure out the subtleties of making iNames work properly. David Recordon and I also developed a firm spec for combining OpenID and OAuth into a single OP round-trip–it turns out it’s easier than we thought/feared; write-up to follow shortly. And Chris, David, and I came to a clear consensus on best practices for “remember me on this computer” behavior for OpenID RPs, which I’ll try to write up soon as well.

There was also a lot of great discussion about the future of OpenID, OAuth, microformats, and related technologies, as well as some lively debate on data portability (as you might expect). A personal highlight for me was when Christopher Allen (a co-inventor of SSL) regaled us with tales of how crazy and uncertain the process was to get Microsoft, Netscape, and the other big players at the time to all agree on a common set of principles that laid the groundwork for the development of SSL, which we now take for granted as an essential open web standard. It felt a lot like what’s going on right now in the social web, and the outcome there is an inspirational example to follow.

I’ve said it before and I’ll say it again–I love living and working in Silicon Valley with so many smart, energetic, passionate, and fundamentally nice and optimistic people. To wit: I just gave up a perfectly good weekend so I could stay up past midnight writing code and learning the finer points of XRI resolution, and it felt great! 🙂

PS: If you eat at Brickhouse cafe, I recommend the “half ass burger”–it’s just the right size. 😉

Tim Berners-Lee groks the social graph

I guess I missed it with the Thanksgiving break, but Tim Berners-Lee recently wrote a thoughtful and compelling essay on the Social Graph, and how it fits into the evolution of the net and the web so far. Definitely worth a read!

I was pleasantly surprised to see that he references the Bill of Rights for Users of the Social Web that I co-authored, as evidence of recent “cries from the heart for my friendship, that relationship to another person, to transcend documents and sites.” He echoes “the frustration that, when you join a photo site or a movie site or a travel site, you name it, you have to tell it who your friends are all over again,” and points out that “The separate web sites, separate documents, are in fact about the same thing–but the system doesn’t know it.”

I can’t think of a more eloquent way to describe the work I’m doing at Plaxo these days on opening up the social web and enabling true friends-list portability, and it’s certainly inspiring to hear it placed in the context of the larger vision for the Semantic Web by someone who’s already had such an impact on fundamentally improving the ways computers and people can interact online.

Onward, ho!

Bravo for No Such Thing–Go see it!

My good friend Chris’s wife Annamarie MacLeod is an actor, and she’s starring in a new play called No Such Thing that opens tonight in San Francisco. Michelle and I went to see the final dress rehearsal last night, and we were both very impressed.

The play is essentially a haunting, impressionistic sketch of a man gradually succumbing to the stresses of work, life, and romantic rivalry. It’s a minimalist production–nearly bare set, very little dialog–and so much of the story is told through the expressions on the actors’ faces and their movements. It also uses projected video and sound, as well as creative lighting, to intensify the mood and add to the narrative.

The way I saw the play described sounded very high-concept. But it never felt hokey to me, or arty-for-the-sake-of-being-arty. Quite to the contrary: I found the basic story it told to be very effective and compelling, and the minimalism draws you in and gets you to relate more personally, because your mind is filling in the pieces from your own experiences. I liken it to one of those pencil sketches of a female form by Matisse–it’s impressive because it speaks to you, and it’s even more impressive because it does so with only a few strokes.

The play is a 70-minute one-act with a cast of 5 or 6 in a small and intimate theater (which is why the actors can convey so much with subtle expressions and movements), produced by Naked Masks. It’s playing on Friday and Saturday for the next three weekends (except the Friday after Thanksgiving). I think you’ll find it’s time well spent.

My Ajax talk is now on YUI Theater

When I gave my talk on High-Performance JavaScript at OSCON in July, I found out that I was speaking right before “Chief Performance Yahoo!” Steve Souders. To be honest, I was a bit nervous–at Plaxo we read everything Steve writes, and he runs a whole group at Yahoo that does nothing but focus on web performance. But our talks turned out to be quite complementary, and we really hit it off as fellow evangelists to developers that “you CAN do something to save your users from slow web sites”.

When we got back to Silicon Valley, Steve said “let’s do lunch at Yahoo! some time”. So I went over on Monday and had lunch with him and JavaScript guru Doug Crockford (also at Yahoo!). Doug is actively working on how to enable secure cross-site mashups, something near to my heart, so we had a great discussion. When we were coordinating lunch plans, Steve had said “hey Joseph, as long as you’re coming over, why not give your talk at Yahoo!, and I’ll give mine again, and we can put them both up on YUI Theater”. And that’s just what we did!

It turns out that Yahoo! has a set of classrooms in one of its buildings where employees regularly come to hear various talks (both from fellow Yahoos and outsiders), so they had a great setup there, and the room was filled with several dozen fellow web hackers. Eric Miraglia, the engineering manager for YUI (which we use in Plaxo Online 3.0), personally videoed both talks, and we had a great discussion afterwards. He told me it would take “about a week” to get the video online, so imagine my delight when I saw it already posted this morning! (He must have heard about that whole “under-promise and over-deliver” strategy, heh).

I was honored to be invited to speak in front of a company like Yahoo! and to a group of people like Steve, Doug, and Eric who are absolutely at the forefront of web technology and are also true believers in sharing their knowledge with the web community. I’ve learned a lot from them all, and I think Yahoo’s recent work with YDN, JSON, and YUI is the best example of open and pragmatic involvement with developers I’ve seen at any big company in recent memory. After the talk, I asked Doug Crockford if I’d done right by him, and he said “that was really great–I only disagreed with one thing you said.” Wow–that’s good enough for me! 🙂

Robert Scoble interviews me on video

Alpha blogger and avant-garde digital media journalist Robert Scoble came over to Plaxo yesterday to talk with me and John McCrea about the Online Identity Consolidator I wrote, which Plaxo launched and open-sourced today. He posted a 30-minute video of the interview with his analysis on Scobleizer, and I’ve included the video below as well.

Scoble’s interview style is always a great mix of technical deep dives interspersed with questions that ask to “explain this in terms that anyone could understand”. He’s both passionate and skeptical of new technology, and it’s an effective way of teasing apart the hype and substance surrounding the announcements he covers. He also immerses himself in the technology he discusses, and thus develops deeper and more personal opinions about it (e.g. he’s an active Plaxo Pulse user), which in this age of sound bites and talking points is something we sorely need more of.

Anyway, enjoy the video, and I hope it helps get you as passionate about the open social web as I am!

My quiet twitter friends are getting lost

I like twitter, and I use it a lot (I even have a twitter widget on my web site). A lot of my friends use it too, some more regularly than others. I use Bloglines to keep up with the stream of status updates from my twitter friends so I can check in periodically and pick up where I left off.

But increasingly I’m feeling like it’s too easy to miss updates from my friends that don’t post constantly. They just get drowned out in the surging river of tweets from the “power users” I follow. It’s a shame, especially because the infrequent users are often my closer friends, whose messages I really don’t want to miss, whereas the chattier users have (almost by definition) a lower signal-to-noise ratio generally.

I’ve been heads-down at Plaxo this week working on some great open-social-web tools, so when I checked my twitter feed this morning I had 200 unread items (perhaps more, but Bloglines annoyingly caps you at 200 unread items per feed). I scrolled through the long list of updates knowing that probably I wouldn’t notice the messages I cared most about. Technology is not helping me here. But there must be a way to fix it.

Since I’m a self-confessed data-glutton, my first step was to quantify and visualize the problem. So I downloaded the HTML of my 200 unread tweets from Bloglines and pulled out the status messages with a quick

    grep '<h3' twitter.html | cut -d\> -f3 | cut -d\< -f1 | sort | cut -c1-131

and then counted the number of updates from each user by piping that through

    cut -d: -f1 | sort | uniq -c

(the unix pipe is a text-hacker’s best friend!). Here are the results:

      1 adam
      2 BarCamp
      1 BarCampBlock
      2 Blaine Cook
      4 Brian Suda
      1 Cal Henderson
      3 Dave McClure
     22 Dave Winer
      7 David Weinberger
      1 Frederik Hermann
      1 Garret Heaton
      1 Jajah
      3 Jeff Clavier
     52 Jeremiah
     12 Kevin Marks
     10 Lunch 2.0
     28 Mr Messina
      8 Scott Beale
      2 Silona
      5 Tantek Celik
     20 Tara
     10 Tariq KRIM
      4 Xeni Jardin

As expected, there were a bunch of users in there with only 1 or 2 status updates that I’d completely missed. And a few users generated the majority of the 200 tweets. I threw the data into Excel and spit out a pie chart, which illustrated my subjective experience perfectly:

[ Twitter status pie chart ]

The illegible crowd of names at the top is a wonderfully apt visual representation of the problem. They’re getting trampled in the stampede. And over half of the messages are coming from Jeremiah Owyang, Chris Messina, and Dave Winer (who I suspect will consider this a sign of accomplishment, heh). Now don’t get me wrong, I really want to know what Jeremiah, Chris, and Dave are doing and thinking about, I just don’t want it to be at the expense of that squished group of names at the top, who aren’t quite so loquacious.

But just by doing this experiment, an obvious solution suggests itself: allow a view that groups updates by username and shows only, say, 1-3 messages per user, with the option to expand and see the rest. This would ensure that you could get a quick view of “who’s said something since I last checked twitter” and it would put everyone on equal footing, regardless of how chatty they are. I could still drill down for the full content, but I wouldn’t feel like I have to wade through my prolific friends to find the muffled chirps of the light twitter users. While there’s clearly value in seeing a chronologically accurate timeline of all status updates, in general I use twitter as another way of keeping in touch with people I care about, so e.g. I think I’d rather know that Garret said something since I last checked in than exactly when he said it.

What do you think? Would this be a useful feature? If so, do we need to wait for Twitter or Bloglines to build it, or would it be easy to do as a mashup? The only hard part I can see is keeping track of the read/unread status, but maybe just keeping a last-read timestamp in a cookie/db and then pulling down all statuses since then and grouping them would be sufficient and quick enough? Now if only I had time for side projects… 🙂
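
For what it’s worth, here’s a minimal sketch of that grouped view in Python–assuming each status update is a dict with "user" and "ts" (unix timestamp) keys, and that last_read_ts comes from the cookie or database mentioned above (all the names here are hypothetical):

    from collections import defaultdict

    def group_unread(tweets, last_read_ts, per_user=3):
        """Group updates newer than last_read_ts by author, keeping at most
        per_user recent messages each, so quiet friends aren't drowned out."""
        shown = defaultdict(list)  # user -> up to per_user newest updates
        total = defaultdict(int)   # user -> total unread count
        for tweet in sorted(tweets, key=lambda t: t["ts"], reverse=True):
            if tweet["ts"] <= last_read_ts:
                continue
            user = tweet["user"]
            total[user] += 1
            if len(shown[user]) < per_user:
                shown[user].append(tweet)
        # Quietest users first; the chatty ones collapse to "...and N more".
        return sorted(
            ((user, msgs, total[user] - len(msgs)) for user, msgs in shown.items()),
            key=lambda row: total[row[0]],
        )

Everyone gets at most a few lines regardless of how chatty they are, and the leftover count per user preserves the option to drill down into the full chronological stream.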
