
Using Netflix’s New API: A step-by-step guide

Netflix announces an API

As a longtime avid Netflix fan, I was excited to see that they finally released an official API today. As an avid fan of the Open Web, I was even more excited to see that this API gives users full access to their ratings, reviews, and queue, and it does so using a familiar REST interface, with output available in XML, JSON, and ATOM. It even uses OAuth to grant access to protected user data, meaning you can pick up an existing OAuth library and dive in (well, almost, see below). Netflix has done a great job here, and deserves a lot of kudos!

Naturally, I couldn’t wait to get my hands on the API and try it out for real. After a bit of tinkering, I’ve now got it working so it gives me my own list of ratings, reviews, and recently returned movies, including as an ATOM feed that can be embedded as-is into a feed reader or aggregator. It was pretty straightforward, but I noticed a couple of non-standard things and gotchas along the way, so I thought it would be useful to share my findings. Hopefully this will help you get started with Netflix’s API even faster than I did!

So here’s how to get started with the Netflix API and end up with an ATOM feed of your recently returned movies:

  1. Sign up for Mashery (which hosts Netflix’s API) at http://developer.netflix.com/member/register (you have to fill out some basic profile info and complete an email confirmation round-trip)
  2. Register for an application key at http://developer.netflix.com/apps/register (you say a bit about what your app does and it gives you a key and secret). When you submit the registration, it will give you a result like this:
    Netflix API: k5mds6sfn594x4drvtw96n37   Shared Secret: srKNVRubKX

    The first string is your OAuth Consumer Key and the second one is your OAuth Consumer Secret. I’ve changed the secret above so you don’t add weird movies to my account, but this gives you an idea of what it looks like. 🙂

  3. Get an OAuth request token. If you’re not ready to start writing code, you can use an OAuth test client like http://term.ie/oauth/example/client.php. It’s not the most user-friendly UI, but it will get the job done. Use HMAC-SHA1 as your signature method, and use http://api.netflix.com/oauth/request_token as the endpoint. Enter your newly issued consumer key and secret in the corresponding fields, and click the “request_token” button. If it works, you’ll get a page with output like this:
    oauth_token=bpn8ycnma7hzuwec5dmt8f2j&oauth_token_secret=DArhPYzsUCkz&application_name=JosephSmarrTestApp&login_url=https%3A%2F%2Fapi-user.netflix.com%2Foauth%2Flogin%3Foauth_token%3Dbpn8ycnma7hzuwec5dmt8f2j

    Your OAuth library should parse this for you, but if you’re playing along in the test client, you’ll have to pull out the OAuth Request Token (in this case, bpn8ycnma7hzuwec5dmt8f2j) and OAuth Request Secret (DArhPYzsUCkz). Note that it also tells you the application_name you registered (in this case, JosephSmarrTestApp), which you’ll need for the next step (this is not a standard part of OAuth, and I’m not sure why they require you to pass it along). They also give you a login_url, which is likewise non-standard and doesn’t actually work as-is, since you need to append additional parameters to it.
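
    If you’d rather script this step than click around in the test client, here’s a rough sketch of what it looks like in Python using the requests-oauthlib library (my choice, not something Netflix provides; any OAuth library that can do HMAC-SHA1 signing works the same way). The key and secret are the illustrative values from step 2:

    from urllib.parse import parse_qs
    from requests_oauthlib import OAuth1Session

    CONSUMER_KEY = 'k5mds6sfn594x4drvtw96n37'   # your Netflix API key from step 2
    CONSUMER_SECRET = 'srKNVRubKX'              # your shared secret from step 2

    # OAuth1Session signs every request with HMAC-SHA1 by default
    session = OAuth1Session(CONSUMER_KEY, client_secret=CONSUMER_SECRET)
    resp = session.get('http://api.netflix.com/oauth/request_token')

    # The response body is a URL-encoded string, just like the test client shows
    token = {k: v[0] for k, v in parse_qs(resp.text).items()}
    print(token['oauth_token'], token['oauth_token_secret'], token['application_name'])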

  4. Ask the user to authorize your request token. Here the OAuth test client will fail you because Netflix requires you to append additional query parameters to the login URL, and the test client isn’t smart about merging query parameters on the endpoint URL with the OAuth parameters it adds. The base login URL is https://api-user.netflix.com/oauth/login and as usual you have to append your Request Token as oauth_token=bpn8ycnma7hzuwec5dmt8f2j and provide an optional callback URL to redirect the user to upon success. But it also makes you append your OAuth Consumer Key and application name, so the final URL you need to redirect your user to looks like this:

    https://api-user.netflix.com/oauth/login?oauth_token=bpn8ycnma7hzuwec5dmt8f2j&oauth_callback=YOUR_CALLBACK_URL&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&application_name=JosephSmarrTestApp

    This is not standard behavior, and it will probably cause unnecessary friction for developers, but now you know. BTW if you’re getting HTTP 400 errors on this step, try curl-ing the URL on the command line, and it will provide a descriptive error message that may not show up in your web browser. For instance, if you leave out the application name, e.g.

    curl 'https://api-user.netflix.com/oauth/login?oauth_token=bpn8ycnma7hzuwec5dmt8f2j&oauth_callback=YOUR_CALLBACK_URL&oauth_consumer_key=k5mds6sfn594x4drvtw96n37'

    You’ll get the following XML response:

    <status>
      <status_code>400</status_code>
      <message>application_name is missing</message>
    </status>

    If your login URL is successfully constructed, it will take the user to an authorization page that looks like this:
    Netflix OAuth authorization page

    If the user approves, they’ll be redirected back to your oauth_callback URL (if supplied), and your request token has now been authorized.
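
    To recap this step in code: building the login URL is just a matter of tacking Netflix’s extra parameters onto the standard ones. Here’s a rough Python sketch (YOUR_CALLBACK_URL is a placeholder; the other values are the illustrative ones from the earlier steps):

    from urllib.parse import urlencode

    login_url = 'https://api-user.netflix.com/oauth/login?' + urlencode({
        'oauth_token': 'bpn8ycnma7hzuwec5dmt8f2j',          # request token from step 3
        'oauth_callback': 'YOUR_CALLBACK_URL',              # optional redirect after approval
        'oauth_consumer_key': 'k5mds6sfn594x4drvtw96n37',   # non-standard, but Netflix requires it
        'application_name': 'JosephSmarrTestApp',           # non-standard, but Netflix requires it
    })
    # Send the user's browser to login_url and wait for them to approve access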

  5. Exchange your authorized request token for an access token. You can use the OAuth test client again for this, and it’s basically just like getting the request token, except the endpoint is http://api.netflix.com/oauth/access_token and you need to fill out both your consumer key and secret as well as your request token and secret. Then click the access_token button, and you should get a page with output like this:
    oauth_token=T1lVQLSlIW38NDgeumjnyypbxc6yHD0xkaD21d8DpLVaIs3d2T1Aq_yeOor9PCIW2Bz5ksIPr7aXBKvTTg599m9Q--&user_id=T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-&oauth_token_secret=AKeGYam8NJ4X

    (Once again I’ve altered my secret to protect the innocent.) In addition to providing an OAuth Access Token and OAuth Access Secret (via the oauth_token and oauth_token_secret parameters, respectively), you are also given the user_id for the authorized user, which you need to use when constructing the full URL for REST API calls. This is non-standard for OAuth, and you may need to modify your OAuth library to return this additional parameter, but that’s where you get it. (It would be nice if you could use an implicit userID in API URLs like @me, and it could be interpreted as “the user that granted this access token”, so you could skip this step of having to extract and use an explicit userID; that’s how Portable Contacts and OpenSocial get around this problem. Feature request, anyone?)
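
    In code, this exchange is just another signed GET, this time including the request token and secret alongside your consumer credentials. Here’s a rough Python sketch using the same requests-oauthlib approach as above (all values are the illustrative ones from this post):

    from urllib.parse import parse_qs
    from requests_oauthlib import OAuth1Session

    session = OAuth1Session('k5mds6sfn594x4drvtw96n37', client_secret='srKNVRubKX',
                            resource_owner_key='bpn8ycnma7hzuwec5dmt8f2j',   # authorized request token
                            resource_owner_secret='DArhPYzsUCkz')            # request token secret
    resp = session.get('http://api.netflix.com/oauth/access_token')

    access = {k: v[0] for k, v in parse_qs(resp.text).items()}
    access_token = access['oauth_token']
    access_secret = access['oauth_token_secret']
    user_id = access['user_id']   # the non-standard extra parameter described above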

  6. Use your access token to fetch the user’s list of protected feeds. Having successfully gone through the OAuth dance, you’re now ready to make your first protected API call! You can browse the list of available API calls at http://developer.netflix.com/docs/REST_API_Reference and in each case, the URL starts out as http://api.netflix.com/ and you append the path, substituting the user_id value you got back with your access token wherever the path calls for userID. So for instance, to get the list of protected ATOM feeds for the user, the REST URL is http://api.netflix.com/users/userID/feeds, or in this case http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-/feeds.

    Here’s where the OAuth test client is a bit confusing: you need to put that feeds URL as the endpoint, fill out the consumer key and secret as normal, and fill out your *access* token and secret under the “request token / secret” fields, then click the “access_token” button to submit the OAuth-signed API request. If it works, you’ll get an XML response containing a list of links to the different protected feeds available for this user.

    Each link contains an href attribute pointing to the actual feed URL, as well as a rel attribute describing the type of data available for that link, and a human-readable title attribute. In our case, we want the “Titles Returned Recently” feed, which is available at http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-/rental_history/returned?feed_token=T1ksEAR97Ki14sIyQX2pfnGH0Llom4eaIDMwNWlUOmRZ0duD2YDbp_5PPUKBcedH51XSxPTnUOI5rCLz9feBXx9A--&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&output=atom (note that the XML escapes the &s in URLs as &amp; entities, so you have to un-escape them to get the actual URL). As you can see, this feed URL looks like a normal API request, including my userID on the path, but with an extra feed_token parameter, which is different for each available user feed. This way, the ATOM feed can be fetched without any OAuth signing, so you can drop it into your feed reader or aggregator of choice and it should just work. And giving out access to one feed won’t let anyone access your other feeds, since each is protected with its own feed_token value.
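
    For completeness, here’s roughly what that signed /feeds call looks like in Python with requests-oauthlib (the consumer credentials, access token/secret, and userID are the illustrative values from this post):

    from requests_oauthlib import OAuth1Session

    USER_ID = 'T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-'   # user_id returned in step 5
    api = OAuth1Session('k5mds6sfn594x4drvtw96n37', client_secret='srKNVRubKX',
                        resource_owner_key='T1lVQLSlIW38NDgeumjnyypbxc6yHD0xkaD21d8DpLVaIs3d2T1Aq_yeOor9PCIW2Bz5ksIPr7aXBKvTTg599m9Q--',   # access token
                        resource_owner_secret='AKeGYam8NJ4X')                                                                              # access secret
    resp = api.get('http://api.netflix.com/users/%s/feeds' % USER_ID)
    print(resp.text)   # XML listing the protected feed URLs, each with its own feed_token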

  7. Fetch the feed of recently returned movies. Now you can just fetch the feed URL you found in the previous step (in my case, http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-/rental_history/returned?feed_token=T1ksEAR97Ki14sIyQX2pfnGH0Llom4eaIDMwNWlUOmRZ0duD2YDbp_5PPUKBcedH51XSxPTnUOI5rCLz9feBXx9A--&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&output=atom), and you’ll get nicely formatted “blog posts” back for each movie the user recently returned. Here’s a sample of how the formatted ATOM entries look:
    Netflix rental returns as a feed
    Of course, if you want to format the results differently, you can make a REST API call for the same data, e.g. http://api.netflix.com/users/userID/rental_history/returned. OAuth-sign it like you did in step 6, and you’ll get all the metadata for each returned movie as XML, including various sizes of movie poster image.
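
    To wrap up step 7 in code: since the feed URL carries its own feed_token, no OAuth signing is needed at all. A minimal Python sketch (with the long feed_token shortened to a placeholder):

    import requests

    feed_url = ('http://api.netflix.com/users/T1G.NK54IqxGkXi3RbkKgudF3ZFkmopPt3lR.dlOLC898-'
                '/rental_history/returned?feed_token=YOUR_FEED_TOKEN'   # from the /feeds response
                '&oauth_consumer_key=k5mds6sfn594x4drvtw96n37&output=atom')
    atom = requests.get(feed_url).text   # an ATOM document, ready for any feed reader or aggregator
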
  8. Profit! Now you’ve got a way to let your users provide access to their Netflix data, which you can use in a variety of ways to enhance your site. If this is the first time you’ve used OAuth, it might have seemed a little complex, but the good news is that it’s the same process for all other OAuth-protected APIs you may want to use in the future.

I hope you found this helpful. If anything is confusing, or if I made any mistakes in my write-up, please leave a comment so I can make it better. Otherwise, let me know when you’ve got your Netflix integration up and running!

Performance Challenges for the Open Web (Stanford CS193H)

Performance Challenges for the Open Web
Stanford CS193H: High Performance Web Sites
Stanford, CA
September 29, 2008

Download PPT (6.8 MB)

Open Web brings new performance challenges

Web site performance guru Steve Souders is teaching a class at Stanford this fall on High Performance Web Sites (CS193H). He invited me to give a guest lecture to his class on the new performance challenges emerging from our work to open up the social web. As a recent Stanford alum (SSP ’02, co-term ’03), it was a thrill to get to teach a class at my alma mater, especially in the basement of the Gates building, where I’ve taken many classes myself.

I originally met Steve at OSCON 07 when I was working on high-performance JavaScript, and we were giving back-to-back talks. We immediately hit it off and have kept in touch ever since. Over the last year or so, however, my focus has shifted to opening up the social web. So when Steve asked me to speak at his class, my first reaction was “I’m not sure I could tell your students anything new that isn’t already in your book”.

But upon reflection, I realized that a lot of the key challenges in creating a truly social web are directly related to performance, and that the performance challenges in this space are quite different from those of optimizing a single web site. In essence, the challenge is getting multiple sites to work together and share content in a way that’s open and flexible but also tightly integrated and high-performance. Thus my new talk was born.

Lots of open building blocks

I provided the students with an overview of the emerging social web ecosystem, and some of the key open building blocks making it possible (OpenID, OAuth, OpenSocial, XRDS-Simple, microformats, etc.). I then gave some concrete examples of how these building blocks can play together, and that led naturally into a discussion of the performance challenges involved.

I broke the challenges into four primary categories:

  • minimizing round trips (the challenge is combining steps to optimize vs. keeping the pieces flexible and simple),
  • caching (storing copies of user data for efficiency vs. always having a fresh copy),
  • pull vs. push (the difficulty of scaling mass-polling and the opportunities presented by XMPP and Gnip to decrease both latency and load), and
  • integrating third-party content (proxying vs. client-side fetching, iframes vs. inline integration, etc.).

In each of these cases, there are fundamental trade-offs to make, so there’s no “easy, right answer”. But by understanding the issues involved, you can make trade-offs that are tailored to the situation at hand. Some of the students in that class will probably be writing the next generation of social apps, so I’m glad they can start thinking about these important issues today.

Web 2.0/Web 3.0 Mashup (EmTech08)

Web 2.0/Web 3.0 Mashup
Emerging Technologies Conference at MIT (EmTech08)
Boston, MA
September 24, 2008

Attribution: Valleywag

I was invited to speak on a panel at EmTech, the annual conference on emerging technologies put on by MIT’s Technology Review magazine, on the future of the web. The conference spans many disciplines (alternative energy, cloud computing, biotech, mobile, etc.) and we were the representatives of the consumer internet, which was quite a humbling task! Robert Scoble moderated the panel, which featured me, David Recordon, Dave Morin, and Nova Spivack.

It was a loose and lively back-and-forth discussion of the major trends we see on the web today: it’s going social, it’s going open, it’s going real-time, and it’s going ubiquitous. These trends are all working together: it’s now common (at least in Silicon Valley) to use your iPhone on the go to see what articles/restaurants/etc. your friends have recommended from a variety of distributed tools, aggregated via FriendFeed, Plaxo Pulse, or Facebook. A lot of the vision behind the Semantic Web (structured data enabling machine-to-machine communication on a user’s behalf) is now happening, but it’s doing so bottom-up, with open standards that let users easily create content online and share it with people they know. As the audience could clearly tell from our passionate and rapid-fire remarks, this is an exciting and important time for the web.

We got lots of positive feedback on our panel from attendees (and also via twitter, of course), as well as from the TR staff. We even received the distinct honor of attracting snarky posts from both Valleywag and Fake Steve Jobs (if you don’t know the valley, trust me: that’s a good thing). You can watch a video of the entire panel on TechnologyReview’s website.

I must say I’m quite impressed with Technology Review and EmTech. They do a good job of pulling together interesting people and research from a variety of technical frontiers and making it generally accessible but not dumbed-down. The piece they wrote recently on opening up the social web (which featured a full-page photo of yours truly diving into a large bean bag) was perhaps the most insightful mainstream coverage to date of our space. They gave me a free one-year subscription to TR for speaking at EmTech, and I’ll definitely enjoy reading it. Here’s looking forward to EmTech09!

Tying it All Together: Implementing the Open Web (Web 2.0 Expo New York)

Tying it All Together: Implementing the Open Web
Web 2.0 Expo New York
New York, NY
September 19, 2008

Download PPT (7.2 MB)

I gave the latest rev of my talk on how the social web is opening up and how the various building blocks (OpenID, OAuth, OpenSocial, PortableContacts, XRDS-Simple, Microformats, etc.) fit together to create a new social web ecosystem. Thanks to Kris Jordan, Mark Scrimshire, and Steve Kuhn for writing up detailed notes of what I said. Given that my talk was scheduled for the last time slot on the last day of the conference, it was well attended and the audience was enthusiastic and engaged, which I always take as a good sign.

I think the reason that people are reacting so positively to this message (besides the fact that I’m getting better with practice at explaining these often complex technologies in a coherent way!) is that it’s becoming more real and more important every day. It’s amazing to me how much has happened in this space even since my last talk on this subject at Google I/O in May (I know because I had to update my slides considerably since then!). Yahoo has staked its future on going radically open with Y!OS, and it’s using the “open stack” to do it. MySpace hosted our Portable Contacts Summit (an important new building block), and is using OpenID, OAuth, and OpenSocial for its “data availability” platform. Google now uses OAuth for all of its GData APIs. These are three of the biggest, most mainstream consumer web businesses around, and they’re all going social and open in a big way.

At the same time, the proliferation of new socially-enabled services continues unabated. This is why users and developers are increasingly receptive to an Open Web in which the need to constantly re-create and maintain accounts, profiles, friends-lists, and activity streams is reduced. And even though some large sites like Facebook continue to push a proprietary stack, they too see the value of letting their users take their data with them across the social web (which is precisely what Facebook Connect does). Thus all the major players are aligned in their view of the emerging “social web ecosystem” in which Identity Providers, Social Graph Providers, and Content Aggregators will help users interact with the myriad social tools we all want to use.

So basically: everyone agrees on the architecture, most also agree on the open building blocks, and nothing prevents the holdouts from going open if/when they decide it’s beneficial or inevitable. This is why I’m so optimistic and excited to be a part of this movement, and it’s why audiences are so glad to hear the good news.

PS: Another positive development since my last talk is that we’re making great progress on actually implementing the “open stack” end-to-end. One of the most compelling demos I’ve seen is by Brian Ellin of JanRain, which shows how a user can sign up for a new site and provide access to their private address book, all in a seamless and vendor-neutral way!

Data Portability, Privacy, and the Emergence of the Social Web (Web 2.0 Expo)

Data Portability, Privacy, and the Emergence of the Social Web
Web 2.0 Expo
San Francisco, CA
April 23, 2008

Download PPT (5.3 MB)

Cover slide

I’ve been talking about opening up the social web for some time, but the world keeps changing around me, so I can never use an old talk for very long. Since Web 2.0 Expo is such a big venue (probably the biggest conference I’ve ever spoken at), and since at Plaxo we’ve recently come to a new degree of clarity on how we see the social web ecosystem emerging, I decided to make a totally fresh talk that answers “what is all this stuff going on right now, and where is it all headed?”. After doing a dry-run for Plaxo employees yesterday, it was suggested that the visual impact of my slides could use some “polishing” (hey, I’m an engineer!), so our creative director Michael jumped in and worked with me into the night to help pretty things up. He’s amazing, and this is easily the most beautiful set of slides I’ve ever had the privilege to deliver. 🙂

The room was packed, and I think the talk went very well. In fact, the Q&A was so lively and went on for so long that I actually got “played off the stage” with music to make room for the next speaker! And the huddle around the stage lasted considerably longer. So I guess at least I got people thinking and talking. 😉 I was also pleasantly surprised to see a torrent of positive real-time reviews in the twitter-sphere (archived screenshot). My talk was live-blogged by Andrew Mager and Mark Scrimshire (thanks, guys!), and John McCrea even shot some video.

SocialWebDiagram-5

It’s very exciting to be in the middle of such a transformative period in the Web. I firmly believe we’re on the cusp of the next major phase of the Web–the social web–and that a new layer of service providers is emerging to empower users to interact with the thousands of socially-enabled sites and services: identity providers, content aggregators, and social graph providers. There are examples of companies today that fulfill one or more of these roles, and Plaxo is certainly going to participate in all of them, but we’re all just getting started, and–as I find myself saying more and more–you ain’t seen nothing yet!

Open Social Web roadshow continues

I mentioned earlier that the opening up of the social web has become a hot topic that’s taking center stage at many recent conferences and community events–and it seems to keep getting hotter every day. As a passionate advocate and early adopter / implementor of many of the building-block technologies (OpenID, OAuth, OpenSocial, microformats, Social Graph API, friends-list portability, etc.) working for a startup that’s helping define the new consumer and business ecosystem that’s emerging (both inside Plaxo Pulse and by helping users connect up the different tools and services they use), I’ve been speaking and otherwise participating in a lot of these events. Here’s an updated list of events I’ll be at in the next few weeks (including events today, tomorrow, and next week, heh). If you’re around at one or more of them, I hope you’ll come find me and say hi! 🙂

ReMIX08: Mountain View, Apr 17
Panel: “The Future of Social Networking” (3-4pm)

Data Sharing Workshop: San Francisco, Apr 18-19
Opening speaker: “What’s the problem?” (9am)

Web 2.0 Expo: San Francisco, Apr 22-25
Talk: Data Portability, Privacy, and the Emergence of the Social Web (Apr 23, 10:50am)

Web 2.0 Expo: San Francisco, Apr 22-25
Panel: OpenID, OAuth, Data Portability, and the Enterprise (Apr 23, 2:40pm)

OAuth Hackathon: San Francisco, Apr 26 (2-8pm)
Trying to help increase adoption of OAuth

Internet Identity Workshop: Mountain View, May 12-14
User-generated conference; I’m sure I’ll be running a few sessions

Data Sharing Summit: Mountain View, May 15
Directly following IIW

Google I/O: San Francisco, May 28-29
Talk: OpenSocial, OpenID, and OAuth: Oh, My! (exact time TBD)

Whoa, that’s a lot of events, considering they’re all in the next 6 weeks or so. 🙂 What can I say? The next major phase of the web is being formed as we speak, and it seems like every day another piece of the puzzle is being added. And between the technical, privacy, business, and user experience issues to debate, there’s always plenty to talk about.

If you can only make it to one of these events, I recommend trying to attend the Internet Identity Workshop. Everyone you’d want to meet in this community will be there, it’s incredibly accessible (both in terms of price to attend and ease of talking with key people), and it’s a good mix of explaining where we’re at today and getting down to real work pushing the envelope of where things go next. I always learn a ton at every IIW, I always have a great time, and I always leave with a bunch of great new ideas I can’t wait to work on. I’m sure if you come, you’ll have the same experience.

Social Networks: Where are they taking us? (MIX 08)

Social Networks: Where are they taking us?
MIX 08 (panel)
Las Vegas, NV (Venetian)
March 6, 2008

Download audio (WMV 43.9 MB, MP4 38.6 MB)

My panel on social networks at MIX

Joshua Allen from Microsoft contacted me and asked if I’d like to be on a panel at MIX 08, Microsoft’s big web-focused conference, about the future of social networks. I’d never been to a Microsoft conference before (most of the events I go to are full of fellow valley startup people), so I was curious for the “anthropological value”, and when he told me the panel would be moderated by Guy Kawasaki and feature a cast of heavy hitters (Dave Morin from Facebook, Garrett Camp from StumbleUpon, Marc Canter, and John Richards from Microsoft Live Platform), I knew I couldn’t possibly pass up this chance. Good thing too, because it was a remarkable event and certainly quite memorable.

The panel itself went very well–it was right after the amazing, boisterous keynote conversation between Steve Ballmer and Guy Kawasaki, so the fact that Guy was also running our panel brought in an extra large crowd. The discussion was heated and productive: how quickly will/should social networks open up, when will OpenID be ready for mass adoption, what about privacy issues, and so on. Guy was his usual awesome self: light-hearted but pointed, and always cutting to the chase. There were lots of questions from the audience, and they came up to talk for quite a while after the panel ended, so they were clearly engaged and interested, which is the best thing I could hope for.

Another thrill for me was getting to spend a lot of time with the IE team. The first IE 8 beta had just been released, and it was clear the team was fired up to really make a leap forward in standards support, performance, and features. Along with Dojo creator Alex Russell, PPK of QuirksMode fame, JavaScript guru Doug Crockford, and a few others, the IE team invited us to a VIP party with them that started in TAO (a ridiculously large night club in the Venetian, complete with a roof-top beach) and ended up in the “Kingpin Suite” at the Palms, complete with in-room bowling alleys. Man, these guys know how to party! And they were genuinely interested in hearing our feedback about how to make IE better, how to provide better tools, and so on. As a long-time web developer, I normally assume I have no visibility into or control over the actual browser, how it works, or where it’s going, and that my job is just to work around its issues as I find them. So it’s an amazing feeling to actually know the people writing the code for the next version of IE, and to know that my feedback might actually have a real impact. That, coupled with the passion of the new IE team members, gives me great optimism that the web platform will indeed get a lot better soon.

Oh yeah, and they lost my suitcase :(

It was an odd feeling going to such a large conference where I knew so few people, and where there were so few startups represented (most of the developers seemed to be from large companies, IT organizations, and so on). But I learned a ton, had a great time, and even managed to shoot some photos in the process. The only downside was that upon leaving the hotel to go to the airport, the hotel realized they couldn’t find my suitcase, which I’d checked earlier that day. Turns out some bellhop put it in the trunk of another car by mistake, and it ended up with a family in LA. The hotel said they’d pay to have it shipped up to me, but I still don’t have it. Since I was leaving the next day for SXSW, I had to quickly scrounge together a fresh set of toiletries, clothes, and so on. Luckily nothing too irreplaceable was in my suitcase, and hopefully it will show up on my doorstep any day now, but yeesh, what a way to end a trip!

The Future of Social Networks (Future of Web Apps Miami)

The Future of Social Networks
Future of Web Apps Miami (with Tantek Çelik and Brian Oberkirch)
Miami, FL
February 29, 2008

View Slides (slideshare)
Download MP3 Audio (37.3 MB)

In addition to the half-day workshop I presented at FOWA Miami, I also gave a talk as part of the main event with Tantek and Brian Oberkirch (who also has a great write-up of our talk) on The Future of Social Networks. I summarized my remarks in my previous FOWA post, but I wanted to add a separate post for this talk so I could link to the slides and audio (and video should be available soon as well). FOWA was a great event, and I’m eager for the next one!

Tim Berners-Lee groks the social graph

I guess I missed it with the Thanksgiving break, but Tim Berners-Lee recently wrote a thoughtful and compelling essay on the Social Graph, and how it fits in to the evolution of the net and the web so far. Definitely worth a read!

I was pleasantly surprised to see that he references the Bill of Rights for Users of the Social Web that I co-authored, as evidence of recent “cries from the heart for my friendship, that relationship to another person, to transcend documents and sites.” He echoes “the frustration that, when you join a photo site or a movie site or a travel site, you name it, you have to tell it who your friends are all over again,” and points out that “The separate web sites, separate documents, are in fact about the same thing–but the system doesn’t know it.”

I can’t think of a more eloquent way to describe the work I’m doing at Plaxo these days on opening up the social web and enabling true friends-list portability, and it’s certainly inspiring to hear it placed in the context of the larger vision for the Semantic Web by someone who’s already had such an impact on fundamentally improving the ways computers and people can interact online.

Onward, ho!
