Joseph Smarr

Thoughts on web development, tech, and life.


Missing the Oscars Finale: A Case Study in Technology Failure (and Opportunity)

Yesterday one of my wife’s friends came over to visit, and we decided on a lark to watch the Oscars (which we haven’t done most years). Even though we pay for cable and are avid TiVo users, due to a variety of circumstances we missed both the beginning of the Oscars and–more importantly–the entire finale, from Best Actress through Best Picture. My frustration and indignation led me to think systematically about the various ways that technology could and should have helped us avoid this problem. I decided to share my thoughts in the hope that better understanding technology’s current limitations will help inspire and illuminate the way to improving them. As usual, I welcome your feedback, comments, and additional thoughts on this topic.

The essence of the failure was this: the Oscars was content that we wanted to watch and were entitled to watch, but were ultimately unable to watch. Specifically, here’s what went wrong and what could and should have been done better:

  • Nothing alerted me that the Oscars was even on that day, nor did it prompt me to record it. I happened to return home early that day from the Plaxo ski trip, but might well have otherwise missed it completely. This is ridiculous given that the Oscars is a big cultural event in America, and that lots of people were planning to watch and record it. That “wisdom of the crowds” should have caused TiVo or someone to send me an email or otherwise prompt me with “Lots of people are planning to watch the Oscars–should I TiVo that for you?”
  • As a result of not having scheduled the Oscars to record in advance, when we turned on the TV it turned out that the red carpet pre-show had started 15 minutes earlier. Sadly, there was no way to go back and watch the 15 minutes we had missed. Normally TiVo buffers the last 30 minutes of live TV, but when you change channels, it wipes out the buffer, and in this case we were not already on the channel where the Oscars were being recorded. Yet clearly this content could and should be easily accessible, especially when it just happened–you could imagine a set of servers in the cloud buffering the last 30 minutes of each channel, and then providing a similar TiVo-like near-past rewind feature no matter which channel you happen to change to (this would be a lot easier than full on-demand, since the last 30 minutes of all channels is a tiny subset of the total content on TV).
  • Once we started watching TV and looked at the schedule, we told TiVo to record the Oscars, but elected to skip the subsequent Barbara Walters interview or whatever was scheduled to follow. Partway through watching in near-real-time, my wife and her friend decided to take a break and do something else (a luxury normally afforded by TiVo). When they came back to finish watching, we discovered to our horror that the Oscars had run 30+ minutes longer than scheduled, and thus we had missed the entire finale. We hadn’t scheduled anything to record after the Oscars, so TiVo in theory could have easily recorded this extra material, but we hadn’t told it to do so, and it didn’t know the program had run long, and its subsequent 30-minute buffer had passed over the finale hours earlier, so we were sunk. There are multiple failures at work here:
  1. TiVo didn’t know that the Oscars would run long or that it was running long. My intent as a user was “record the Oscars in its entirety”, but what actually happens is TiVo looks at the (static and always-out-of-date) program guide data and says “ok, I’ll record channel 123 from 5:30pm until 8:30pm and hope that does the trick”. Ideally TiVo should get updated program guide data on-the-fly when a program runs long, or else it should be able to detect that the current program has not yet ended and adjust its recording time appropriately. In the absence of those capabilities, TiVo has a somewhat hackish solution of knowing which programs are “live broadcasts” and asking you “do you want to append extra recording time just in case?” when you go to record the show. We would have been saved if we’d chosen to do so, but that brings me to…
  2. We had no information that the Oscars was likely to run long. Actually, that’s not entirely true. Once we discovered our error, my wife’s friend remarked, “oh yeah, the Oscars always runs long”. Well, in that case, there should be ample historical data on how likely a recurring live event like the Oscars is to run long, and TiVo should be able to present that data to help its users make a more informed choice about whether to add additional recording time. If failure #1 was addressed, this whole issue would be moot, but in the interim, if TiVo is going to pass the buck to its users to decide when to add recording time, it should at least gather enough information to help the user make an informed choice.
  3. We weren’t able to go back and watch the TV we had missed, even though nothing else was being recorded during that time. Even though we hadn’t specifically told TiVo to record past the scheduled end of the Oscars, we also hadn’t told it to record anything else. So it was just sitting there, on the channel we wanted to record, doing nothing. Well, actually it was buffering more content, but only for 30 minutes, and only until it changed channels a few hours later to record some other pre-scheduled show. With hard drives as cheap as they are today, there’s no reason TiVo couldn’t have kept recording that same channel until it was asked to change channels. You could easily imagine an automatic overrun-prevention scheme (sketched in code after this list) where TiVo keeps recording say an extra hour after each scheduled show (unless it’s asked to change channels in the interim) and holds that in a separate, low-priority storage area (like the suggestions it records when extra space is free) that’s available to be retrieved at the end of a show (“The scheduled portion of this show has now ended, but would you like to keep watching?”), provided you watch that show soon after it was first recorded. In this case, it was only a few hours after the scheduled show had ended, so TiVo certainly would have had the room and ability to do this for us.
  • Dismayed at our failure to properly record the Oscars finale, we hoped that online content delivery had matured to the point where we could just watch the part we had missed online. After all, this is a major event on a broadcast channel whose main point is to draw attention to the movie industry, so if there were ever TV content whose owners should be unconflicted about maximizing viewership in any form, this should be it. But again, here we failed. First of all, there was no way to go online without seeing all the results, thus ruining the suspense we were hoping to experience. One could easily imagine first asking users if they had seen the Oscars, and having a separate experience for those wanting to watch it for the first time vs. those merely wanting a summary or recap. But even beyond that setback, there was no way to watch the finale online in its entirety. The official Oscars website did have full video of the acceptance speeches, which was certainly better than nothing, but we still missed the introductions, the buildup, and so on. It blows my mind that you still can’t go online and just watch the raw feed, even of a major event on a broadcast channel like ABC, even when the event happened just a few hours ago. In this case it seems hard to believe that the hold-up would be a question of whether the viewer is entitled to view this content (compared to, say, some HBO special or a feature-length movie), but even if it were, my cable company knows that I pay to receive ABC, and presumably has this information available digitally somewhere. Clips are nice, but ABC must have thought the Oscars show was worth watching in its entirety (since it broadcast the whole thing), so there should be some way to watch it that way online, especially soon after it aired (again, this is a simpler problem than archiving all historical TV footage for on-demand viewing). Of course, there is one answer here: I’m sure I could have found the full Oscars recording on BitTorrent and downloaded it. How sad (if not unexpected) that the “pirates” are the closest ones to delivering the user experience that the content owners themselves should be striving for!
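To make failure #3 concrete, here’s a minimal sketch of what such an overrun-prevention scheme might look like. Everything here is hypothetical–these are not TiVo’s actual internals, just the idea expressed in code:

from dataclasses import dataclass
from typing import List, Optional

OVERRUN_PADDING_MINUTES = 60  # keep capturing this long past the scheduled end

@dataclass
class Recording:
    channel: str
    title: str
    end_minute: int             # scheduled end time, in minutes since midnight
    low_priority: bool = False  # overrun footage: first to be reclaimed

class Dvr:
    def __init__(self):
        self.recordings: List[Recording] = []

    def finish_scheduled_show(self, rec: Recording, next_tune_minute: Optional[int]):
        """Called when a recording hits its guide-listed end time. Rather than
        stopping, keep capturing the same channel until the padding runs out
        or the tuner is needed for another scheduled recording."""
        padding_end = rec.end_minute + OVERRUN_PADDING_MINUTES
        if next_tune_minute is not None:
            padding_end = min(padding_end, next_tune_minute)
        if padding_end > rec.end_minute:
            self.recordings.append(Recording(
                channel=rec.channel,
                title=rec.title + " (overrun)",
                end_minute=padding_end,
                low_priority=True,  # like TiVo suggestions: deleted first when space is needed
            ))

Because the overrun clip lives in low-priority storage, it costs nothing when disk space is tight, and the “would you like to keep watching?” prompt at the end of a show is just a check for whether an overrun recording exists for that channel.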

Finally, aside from just enabling me to passively consume this content I wanted, I couldn’t help but notice a lot of missed opportunity to make watching the Oscars a more compelling, consuming, and social experience. For instance, I had very little context about the films and nominees–which were expected to win? Who had won or been nominated before? Which were my friends’ favorites? In some cases, I didn’t even know which actors had been in which films, or how well those films had done (both at the box office and with critics). An online companion to the Oscars could have provided all of this information, and thus drawn me in much more deeply. And with a social dimension (virtually watching along with my friends and seeing their predictions and reactions), it could have been very compelling indeed. If such information was available online, the broadcast certainly didn’t try to drive any attention there (not even a quick banner under the show saying “Find out more about all the nominees at Oscar.com”, at least not that I saw). And my guess is that whatever was online wasn’t nearly interactive enough or real-time enough for the kind of “follow along with the show and learn more as it unfolds” type of experience I’m imagining. And even if such an experience were built, my guess is it would only be real-time with the live broadcast. But there’s no reason it couldn’t prompt you to click when you’ve seen each award being presented (even if watching later on your TiVo) and only then reveal the details of how your pick compared to your friends’ and so on.

So in conclusion, I think there’s so much opportunity here to make TV content easier and more reliable to consume, and there’s even more opportunity to make it more interactive and social. I know I’m not the first person to realize this, but when you really think in detail about the current state of the art, it still amazes me how many failures and opportunities are right in front of us. As anyone who knows me is painfully aware, I’m a huge TiVo fan, but in this case TiVo let me down. It wasn’t entirely TiVo’s fault, of course, but the net result is the same. And in terms of making the TV experience more interactive and social, it seems like the first step is to get a bunch of smart Silicon Valley types embedded in the large cable companies, especially the ones that aren’t scared of the internet. Well, personally I’m feeling pretty good about that one. 😉

OpenIDDevCamp was hack-tastic!

I spent the weekend in SF at OpenIDDevCamp, hosted at SixApart’s offices. In the style of BarCamp and iPhoneDevCamp, the idea was just to get a lot of people together who were interested in OpenID, provide the space and amenities for them to work together, and let them loose. About 30-40 people showed up, including Brad Fitzpatrick, Scott Kveton, Tantek (with blinking wheels on his shoes), Chris Messina, David Recordon, Christopher Allen, John Bradley, Luke Shepard, and many more.

Over the course of the weekend, I got OpenID 2.0 relying party support deployed on Plaxo, and we found and fixed a bunch of little bugs along the way. You can now use directed identity (e.g. just type in “myopenid.com” as your OpenID and sign in on their side), and you can even use iNames (e.g. I can now sign in with =joseph.smarr). Thanks again to my hacker friend Michael Krelin, who did most of the hard work, and to John Bradley of ooTao for helping me figure out the subtleties of making iNames work properly. David Recordon and I also developed a firm spec for combining OpenID and OAuth into a single OP round-trip–it turns out it’s easier than we thought/feared; write-up to follow shortly. And Chris, David, and I came to a clear consensus on best practices for “remember me on this computer” behavior for OpenID RPs, which I’ll try to write up soon as well.
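For the curious, here’s roughly what directed identity looks like on the wire: since the user typed only an OP domain rather than a personal identifier URL, the relying party sends the spec-defined identifier_select value and lets the OP fill in who the user actually is. This is just a minimal sketch (not Plaxo’s actual code), and the endpoint and return URLs are made up for illustration–a real RP discovers the OP endpoint from its XRDS document:

from urllib.parse import urlencode

IDENTIFIER_SELECT = "http://specs.openid.net/auth/2.0/identifier_select"

def directed_identity_url(op_endpoint, return_to, realm):
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        # Both fields use identifier_select; the OP's response comes back
        # to return_to with the user's actual identifier filled in.
        "openid.claimed_id": IDENTIFIER_SELECT,
        "openid.identity": IDENTIFIER_SELECT,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    return op_endpoint + "?" + urlencode(params)

# Hypothetical endpoint and RP URLs, for illustration only:
print(directed_identity_url(
    "https://www.myopenid.com/server",
    "https://www.plaxo.com/openid/return",
    "https://www.plaxo.com/",
))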

There was also a lot of great discussion about the future of OpenID, OAuth, microformats, and related technologies, as well as some lively debate on data portability (as you might expect). A personal highlight for me was when Christopher Allen (a co-inventor of SSL) regaled us with tales of how crazy and uncertain the process was to get Microsoft, Netscape, and the other big players at the time to all agree on a common set of principles that laid the groundwork for the development of SSL, which we now take for granted as an essential open web standard. It felt a lot like what’s going on right now in the social web, and the outcome there is an inspirational example to follow.

I’ve said it before and I’ll say it again–I love living and working in Silicon Valley with so many smart, energetic, passionate, and fundamentally nice and optimistic people. To wit: I just gave up a perfectly good weekend so I could stay up past midnight writing code and learning the finer points of XRI resolution, and it felt great! 🙂

PS: If you eat at Brickhouse cafe, I recommend the “half ass burger”–it’s just the right size. 😉

Tim Berners-Lee groks the social graph

I guess I missed it with the Thanksgiving break, but Tim Berners-Lee recently wrote a thoughtful and compelling essay on the Social Graph, and how it fits into the evolution of the net and the web so far. Definitely worth a read!

I was pleasantly surprised to see that he references the Bill of Rights for Users of the Social Web that I co-authored, as evidence of recent “cries from the heart for my friendship, that relationship to another person, to transcend documents and sites.” He echoes “the frustration that, when you join a photo site or a movie site or a travel site, you name it, you have to tell it who your friends are all over again,” and points out that “The separate web sites, separate documents, are in fact about the same thing–but the system doesn’t know it.”

I can’t think of a more eloquent way to describe the work I’m doing at Plaxo these days on opening up the social web and enabling true friends-list portability, and it’s certainly inspiring to hear it placed in the context of the larger vision for the Semantic Web by someone who’s already had such an impact on fundamentally improving the ways computers and people can interact online.

Onward, ho!

Bravo for No Such Thing–Go see it!

My good friend Chris’s wife Annamarie MacLeod is an actor, and she’s starring in a new play called No Such Thing that opens tonight in San Francisco. Michelle and I went to see the final dress rehearsal last night, and we were both very impressed.

The play is essentially a haunting, impressionistic sketch of a man gradually succumbing to the stresses of work, life, and romantic rivalry. It’s a minimalist production–nearly bare set, very little dialog–and so much of the story is told through the expressions on the actors’ faces and their movements. It also uses projected video and sound, as well as creative lighting, to intensify the mood and add to the narrative.

The way I saw the play described sounded very high-concept. But it never felt hokey to me, or arty-for-the-sake-of-being-arty. Quite to the contrary: I found the basic story it told to be very effective and compelling, and the minimalism draws you in and gets you to relate more personally, because your mind is filling in the pieces from your own experiences. I liken it to one of those pencil sketches of a female form by Matisse–it’s impressive because it speaks to you, and it’s even more impressive because it does so with only a few strokes.

The play is a 70-minute one-act with a cast of 5 or 6 in a small and intimate theater (which is why the actors can convey so much with subtle expressions and movements), produced by Naked Masks. It’s playing on Friday and Saturday for the next three weekends (except the Friday after Thanksgiving). I think you’ll find it’s time well spent.

My Ajax talk is now on YUI theater

When I gave my talk on High-Performance JavaScript at OSCON in July, I found out that I was speaking right before “Chief Performance Yahoo!” Steve Souders. To be honest, I was a bit nervous–we read everything Steve writes at Plaxo, and he runs a whole group at Yahoo that does nothing but focus on web performance. But our talks turned out to be quite complementary, and we really hit it off as fellow evangelists to developers that “you CAN do something to save your users from slow web sites”.

When we got back to Silicon Valley, Steve said “let’s do lunch at Yahoo! some time”. So I went over on Monday and had lunch with him and JavaScript guru Doug Crockford (also at Yahoo!). Doug is actively working on how to enable secure cross-site mashups, something near to my heart, so we had a great discussion. When we were coordinating lunch plans, Steve had said “hey Joseph, as long as you’re coming over, why not give your talk at Yahoo!, and I’ll give mine again, and we can put them both up on YUI Theater”. And that’s just what we did!

It turns out that Yahoo! has a set of classrooms in one of its buildings where employees regularly come to hear various talks (both from fellow Yahoos and outsiders), so they had a great setup there, and the room was filled with several dozen fellow web hackers. Eric Miraglia, the engineering manager for YUI (which we use in Plaxo Online 3.0), personally videoed both talks, and we had a great discussion afterwards. He told me it would take “about a week” to get the video online, so imagine my delight when I saw it already posted this morning! (He must have heard about that whole “under-promise and over-deliver” strategy, heh).

I was honored to be invited to speak in front of a company like Yahoo! and to a group of people like Steve, Doug, and Eric who are absolutely at the forefront of web technology and are also true believers in sharing their knowledge with the web community. I’ve learned a lot from them all, and I think Yahoo’s recent work with YDN, JSON, and YUI is the best example of open and pragmatic involvement with developers I’ve seen at any big company in recent memory. After the talk, I asked Doug Crockford if I’d done right by him, and he said “that was really great–I only disagreed with one thing you said.” Wow–that’s good enough for me! 🙂

Robert Scoble interviews me on video

Alpha blogger and avant-garde digital media journalist Robert Scoble came over to Plaxo yesterday to talk with me and John McCrea about the Online Identity Consolidator I wrote that Plaxo launched today and open-sourced. He posted a 30-minute video of the interview with his analysis on Scobleizer, and I’ve included the video below as well.

Scoble’s interview style is always a great mix of technical deep dives interspersed with questions that ask to “explain this in terms that anyone could understand”. He’s both passionate about and skeptical of new technology, and it’s an effective way of teasing apart the hype and substance surrounding the announcements he covers. He also immerses himself in the technology he discusses, and thus develops deeper and more personal opinions about it (e.g. he’s an active Plaxo Pulse user), which in this age of sound bites and talking points is something we sorely need more of.

Anyway, enjoy the video, and I hope it helps get you as passionate about the open social web as I am!

My quiet twitter friends are getting lost

I like twitter, and I use it a lot (I even have a twitter widget on my web site). A lot of my friends use it too, some more regularly than others. I use Bloglines to keep up with the stream of status updates from my twitter friends so I can check in periodically and pick up where I left off.

But increasingly I’m feeling like it’s too easy to miss updates from my friends that don’t post constantly. They just get drowned out in the surging river of tweets from the “power users” I follow. It’s a shame, especially because the infrequent users are often my closer friends, whose messages I really don’t want to miss, whereas the chattier users have (almost by definition) a lower signal-to-noise ratio generally.

I’ve been heads-down at Plaxo this week working on some great open-social-web tools, so when I checked my twitter feed this morning I had 200 unread items (perhaps more, but Bloglines annoyingly caps you at 200 unread items per feed). I scrolled through the long list of updates knowing that I probably wouldn’t notice the messages I cared most about. Technology is not helping me here. But there must be a way to fix it.

Since I’m a self-confessed data-glutton, my first step was to quantify and visualize the problem. So I downloaded the HTML of my 200 unread tweets from Bloglines and pulled out the status messages with a quick grep '<h3' twitter.html | cut -d\> -f3 | cut -d\< -f1 | sort | cut -c1-131 and then counted the number of updates from each user by piping that through cut -d: -f1 | sort | uniq -c (the unix pipe is a text-hacker’s best friend!). Here are the results:

      1 adam
      2 BarCamp
      1 BarCampBlock
      2 Blaine Cook
      4 Brian Suda
      1 Cal Henderson
      3 Dave McClure
     22 Dave Winer
      7 David Weinberger
      1 Frederik Hermann
      1 Garret Heaton
      1 Jajah
      3 Jeff Clavier
     52 Jeremiah
     12 Kevin Marks
     10 Lunch 2.0
     28 Mr Messina
      8 Scott Beale
      2 Silona
      5 Tantek Celik
     20 Tara
     10 Tariq KRIM
      4 Xeni Jardin

As expected, there were a bunch of users in there with only 1 or 2 status updates that I’d completely missed. And a few users generated the majority of the 200 tweets. I threw the data into Excel and spit out a pie chart, which illustrated my subjective experience perfectly:

Twitter status pie chart

The illegible crowd of names at the top is a wonderfully apt visual representation of the problem. They’re getting trampled in the stampede. And over half of the messages are coming from Jeremiah Owyang, Chris Messina, and Dave Winer (who I suspect will consider this a sign of accomplishment, heh). Now don’t get me wrong, I really want to know what Jeremiah, Chris, and Dave are doing and thinking about, I just don’t want it to be at the expense of that squished group of names at the top, who aren’t quite so loquacious.

But just by doing this experiment, an obvious solution suggests itself. Allow a view that groups updates by username and only shows say 1-3 messages per user, with the option to expand and see the rest. This would ensure that you could get a quick view of “who’s said something since I last checked twitter”, and it would put everyone on equal footing, regardless of how chatty they are. I could still drill down for the full content, but I wouldn’t feel like I have to wade through my prolific friends to find the muffled chirps of the light twitter users. While there’s clearly value in seeing a chronologically accurate timeline of all status updates, in general I use twitter as another way of keeping in touch with people I care about, so e.g. I think I’d rather know that Garret said something since I last checked in than exactly when he said it.

What do you think? Would this be a useful feature? If so, do we need to wait for Twitter or Bloglines to build it, or would it be easy to do as a mashup? The only hard part I can see is keeping track of the read/unread status, but maybe just keeping a last-read timestamp in a cookie/db and then pulling down all statuses since then and grouping them would be sufficient and quick enough? Now if only I had time for side projects… 🙂
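In case someone beats me to it, here’s a rough sketch of the grouping logic I have in mind, with hypothetical data shapes (a real mashup would pull the unread tweets from the Twitter API using the stored last-read timestamp):

from collections import defaultdict

MAX_PER_USER = 3  # show at most this many tweets per friend up front

def grouped_view(unread_tweets):
    """unread_tweets: iterable of (user, timestamp, text) tuples."""
    by_user = defaultdict(list)
    for user, ts, text in unread_tweets:
        by_user[user].append((ts, text))
    # Quietest friends first, so they aren't drowned out by the power users.
    for user in sorted(by_user, key=lambda u: len(by_user[u])):
        msgs = sorted(by_user[user])  # chronological within each user
        shown, hidden = msgs[:MAX_PER_USER], msgs[MAX_PER_USER:]
        yield user, shown, len(hidden)

# Made-up sample data, mimicking the counts above:
for user, shown, hidden in grouped_view([
    ("Garret Heaton", 1, "shipped a new feature"),
    ("Dave Winer", 2, "new post"),
    ("Dave Winer", 3, "another post"),
    ("Dave Winer", 4, "yet another post"),
    ("Dave Winer", 5, "one more post"),
]):
    extra = " (+%d more)" % hidden if hidden else ""
    print("%s: %d shown%s" % (user, len(shown), extra))

This would print Garret first with his single update, then Dave with 3 shown and 2 collapsed–exactly the “everyone on equal footing” view I’m after.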

BarCampBlock exemplifies Silicon Valley

I love living here in Silicon Valley. I’m surrounded by smart, passionate people who don’t feel they need permission to make a difference.

Case in point was BarCampBlock this weekend–a spontaneous un-conference-style gathering of 900+ hackers and other valleyites sprawled across the streets of Palo Alto, as well as inside the offices of several host startups. The basic idea is that when we go to conferences and events, the major benefit is the chance to meet and talk with other like-minded people, so why do we need the conference at all? Just organize an open event where people will show up and figure out how to spend their time together.

It was organized by a few people (mainly Chris Messina, Tara Hunt, and Tantek Çelik) in a short amount of time, and with essentially no budget. It was promoted purely by word of mouth and blogging, and yet not only was there an amazing turnout, nearly 100 companies stepped up to help show their support and sponsor the event. Even Plaxo kicked in a sponsorship, which was a no-brainer since they cleverly set the max contribution at $300 to prevent the possibility of an arms race. And then, like magic, people showed up, organized, and we had a productive and fun weekend figuring out the future.

I just have to stop and reflect on how unusual and awesome it is that events like this can and do take place here with relative ease. It’s only possible because of the combination of (a) ambitious would-be organizers, (b) a community of people who care enough about what they’re doing to spend a perfectly good weekend networking and nerding with their cohort, and (c) a plethora of companies that care enough about being a part of the community to pool their resources and make events like this possible.

It also requires the flat, meritocratic, egalitarian cultural norms of the area. The important people show up and hang out like everyone else; they’re not hard to find. In my own sphere of opening up the social web, the big deal recently was Brad Fitzpatrick’s (founder of LiveJournal, creator of OpenID, now at Google) new manifesto on how to do an end-run around uncooperative companies and get the ball rolling now. It had already spurred a hot conversation, and yet the next morning there he was (down from SF, mind you), talking to whoever was interested.

We ended up hosting a session together on social network portability, and it was packed. It must have gone well, because the rest of the evening people kept coming up to me to express their shared passion for what we’re doing. In fact, enough people gave me their free drink tickets out of tribute that I couldn’t finish them all! Now that’s what I call “work hard, play hard”. 🙂

In a funny way, BarCamp shares the same spirit (and initial impetus) as Lunch 2.0–we’re all living here to be a part of this community, so let’s get together. The cost is small and readily obtainable, and the results of meeting up are never predictable but always valuable.

Anyway, congratz to the organizers, you did an amazing job! And congratz to us all for taking advantage of opportunities like this and not waiting to be told what to work on. As usual, there are plenty of photos from me and others.

Smashing Pumpkins blew me away again

As a longtime Smashing Pumpkins fan, I was thrilled to get to see them play live again last night at The Fillmore. If you’ve never seen a show there, the Fillmore is a tiny, intimate venue in SF–I was about 6 feet from the stage in the center, and the view and sound were amazing. It was this wonderfully raw feeling of just seeing some “normal guys on stage” playing music–who happened to be extremely talented. 🙂

And get this–they played for over 3 hours. I didn’t get out of the show until after 1am! They played for 2 1/2 hours straight without taking a break, and then did two encores (finishing with a 10+ minute improvised version of Silverfuck). And this is the 11th show in a row they’ve played here (the last show in the series is tomorrow). How do they have the stamina to do this every night?! Amazing. I wish I could have gotten tickets for more of their shows, but they sold out in literally about 90 seconds, so I was lucky to even get the pair of tickets I got.

Of the 3+ hours they played, I’d say >1 hour was new unreleased material they’d recently written, including a number of beautiful acoustic pieces. They also performed a 30-minute song called Gossamer that was originally supposed to be on Zeitgeist. I had the good fortune to be standing next to a serious pumpkin-head who had been to 10 of the shows and knew all the new stuff by heart already. I asked him how he knew the names of these unreleased songs and he said the sound board in the back displays the name of each song and fans post the info online. Crowd-sourcing at work!

Highlights for me included a hard-rocking electric rendition of Tonight, Tonight, the performance of To Sheila, in which the full band kicked in half way through, and a completely deconstructed new version of Heavy Metal Machine. Luckily someone’s already posted the set list from last night, and there are tons of photos and videos already online as well. Gotta love the internets!

Congrats to David Recordon!

Yesterday morning, I watched David Recordon lead an “OpenID Bootcamp” for OSCON attendees (including a handout for everyone of the implementation guide I wrote, wow!). Then last night he received a Google-O’Reilly Open Source Award for his contributions to the development and spread of OpenID. What a day!

David has been a great friend and mentor to me throughout my involvement with OpenID. Even when he was traveling all around the world (which he does a lot for his job), he always made time to help answer questions and debug issues (including once over Google Talk from an international airport while his flight was being boarded!).

I’m sure I’m not the only one he’s been so helpful to, and his passion and positive attitude were clearly not lost on Google and O’Reilly. Congrats David, your recognition is much deserved. And viva OpenID!!


© 2019 Joseph Smarr
