Thoughts on web development, tech, and life.


WordPress "XML-RPC server accepts POST requests only."

This morning I found I couldn’t publish to WordPress using Windows Live Writer any more. I would get errors like “Invalid Server Response – The response to the blogger.getUsersBlogs method received from the weblog server was invalid.” When I looked at the request in HTTPAnalyzer, WordPress’s xmlrpc.php was responding with “XML-RPC server accepts POST requests only.”, even though I WAS posting. Since WordPress’s WYSIWYG and HTML editors both horribly mangle any code samples you try to use, this was quite frustrating.

But after a bit of Googling, I found a quick solution that worked perfectly. Just add the following line of PHP to the top of your xmlrpc.php file (inside the <?php of course):

$HTTP_RAW_POST_DATA = file_get_contents("php://input");

Thanks to helpful bloggers like Will for reporting solutions to problems like this, and thanks to Google for helping other distressed hackers find them! I hope this increases the ease with which this particular solution can be found by others.

The hidden cost of meta-refresh tags

We just discovered at Plaxo that redirecting using meta-refresh tags has a surprising performance penalty as a side-effect: it causes all cached resources on the redirected-to page to be re-requested (as if the user had hit the “refresh” button). Even though most of them should return 304s (if you’re caching them properly), this still results in a ton of extra requests and round-trips. A good workaround is to replace the meta-refresh tags with JavaScript redirects.

A bit more detail

A standard technique for redirecting users from one web page to another is to put a “meta refresh” tag in the first page that says to immediately redirect to the other page, e.g.

<html>
<head>
<title>Redirecting...</title>
<meta http-equiv="refresh" content="0; url=http://new-page" />
...

When browsers see this, they go to the URL specified, and if the time specified before redirecting is 0 (the “0;” in the above example), they’re even smart enough to remove the redirecting page from the browser history, so when you press Back you don’t get re-redirected.

However, there is a side effect that none of us knew about or expected. We only discovered it while performance-tuning Plaxo Online 3.0 (coming real soon now!!) with HTTPAnalyzer. For some people, after the initial visit to the site, none of the images would be re-requested on return visits (we send long cache expiration times, so this is good behavior). But for others, they’d be requested each time they went to the page.

The difference turned out to be that some people were accessing the site by a slightly different URL that uses a meta-refresh tag to go to the actual site. Since “http-equiv=refresh” means “treat this as if I’d sent the equivalent HTTP header ‘refresh’”, the browser (especially IE) acts as if the user had hit the reload button and re-requests all cached images on the redirected-to page with If-Modified-Since and If-None-Match headers. If you’ve got the right cache headers, these requests will all return 304 (i.e. the images won’t actually be re-downloaded), but it still results in a big–and unnecessary/unintended–performance hit because you’re now sending a bunch of extra round-trip requests while loading the page.
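
To make the cost concrete, here’s roughly what each of those extra round-trips looks like on the wire–one conditional GET per cached image, even when the answer is “nothing changed” (an illustrative exchange; the URL and header values are made up):

GET /images/logo.gif HTTP/1.1
Host: www.example.com
If-Modified-Since: Sat, 05 May 2007 10:00:00 GMT
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"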

The ideal solution for redirecting is to send a 302 redirect response from the web server itself to avoid even loading the intermediate web page. However, there are times when this isn’t feasible, e.g. if you don’t have that level of control over the server or if your template system wants to only do this if certain variables are set. Another case is if you want to redirect from an HTTPS page to an HTTP page–if you try to do this on the server, you’ll get a browser warning about redirecting from a secure page to an insecure page, but if you do it with meta-refresh, it works fine (bravo, browser security dudes, heh). So in these cases you want to redirect client-side, but you don’t want to incur the side-effect of re-fetching all the cached resources.

A good solution is to use JavaScript (while some web developers like to degrade gracefully when JavaScript is disabled, we’re redirecting to a rich Ajax app, so this isn’t really an issue). Wherever you’d use a meta-refresh tag, instead insert the following script block:

<script type="text/javascript">
location.replace('http://real-page');
</script>

By using location.replace (instead of, say, location.href=), the browser will purge the redirecting page from the browser history, just like a meta-refresh tag with 0 wait time, which is a good thing. And you won’t get any bad caching side effects, which is also a good thing.

Thanks to the two Ryans for figuring this out! Now you know too. 🙂

I’m so impressed with f8

Facebook Platform Launch (photo by Ted "Dogster" Rheingold)

As f8 would have it, I was in San Francisco yesterday for Facebook’s platform launch and hackathon. What a day it was!

The event itself was quite a spectacle (they filled the SF Design Center with about 800 people, Mark gave a Jobs-esque keynote, and the hackathon was set up with tons of couches, tray-passed hors d’oeuvres, a DJ, and Facebook engineers aplenty to help out with the hacking).

But the platform itself was the real star–Facebook really wants developers to be able to build apps that are as powerful and as integrated as the ones Facebook could build themselves, and the Platform really delivers on that audacious goal. You can host your app pages inside Facebook’s chrome, add items to news feeds, send notifications, and basically hook into all the places that Facebook’s existing apps do.

The technology to make this work–and still be safe–is quite clever: basically they curl your page with enough query args to let you access their APIs, you construct the page results, and they display it inside Facebook. Instead of returning straight HTML, they have a modified version they call FBML which, in addition to stripping out JavaScript and sandboxing your CSS, lets you insert special facebook tags to easily do things like link to your friends, use Facebook-styled UI widgets, and even some basic AJAX (they proxy the call through Facebook and then give you a limited-but-still-quite-powerful set of options for what to do with the results, like injecting them into a given DOM node). And since they’re just calling your page via HTTP and it’s up to you to interact with Facebook’s REST API while constructing your result, there’s no need to even write your Facebook app in PHP (which is good news for me, since I’m no Terry, heh).
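
To give a flavor of it, here’s a trivial sketch of what a page returning FBML might contain (reconstructed from memory, so treat the tag details as approximate; the uid is made up):

<!-- fb:name renders the given user's name as a link to their profile -->
<p>Welcome back, <fb:name uid="12345" />!</p>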

But the most impressive thing to me about f8 is just how much Facebook “gets it”. They could have continued to be a walled garden–they were doing quite well at it!–but it’s clear from their words and their actions that they really believe they will be more successful by being an open platform and letting developers have real power to extend the experience and take advantage of the social graph they’ve built up. They’re pushing the limits of technology to enable deep integration, they’re providing prominence to third-party apps inside Facebook to help them spread, and they’re even letting the apps keep 100% of the ad money they generate. It’s of course quite defensive for Facebook inasmuch as it disincentivizes people from trying to build new social networks and gives them a multiplier on the features they can offer their users, but it still shows great vision and I couldn’t be happier or more impressed with what they’re doing!

See you at Internet Identity Workshop

IIW2007 Registration banner

I’ll be attending the Internet Identity Workshop (IIW2007a, to be precise) this Mon-Wed at the Computer History Museum in Mountain View, CA. I went to IIW2006b last year and was immediately excited to be a part of this community. The people involved are not only very smart, they’re pragmatic, hands-on, accessible, and motivated by all the right reasons.

The progress of OpenID has been stunning–developing the standard, building libraries, folding in related projects, and getting broad support–and I think we may well start to see its adoption hit the mainstream this year (we’re certainly playing with it at Plaxo these days).

Like other workshops and conferences that I go to, this will also be an opportunity to catch up with a lot of friends that I (for whatever reason) seem to only find time to see at events like these. So if you’re planning to attend, come say hi or give me a call (my latest contact info is linked to from my blog sidebar, thanks to Plaxo of course).

See you there, js

I’m speaking at OSCON in July

Now that the OSCON 07 site is up, I guess it’s official–for the second year in a row, I’ve been selected to give a talk at O’Reilly’s annual Open Source Convention (OSCON) in Portland, OR from July 23-27.

The title of my talk this year is “High-Performance JavaScript: Why Everything You’ve Been Taught is Wrong”. I’m basically going to share all the secrets we learned the hard way while building Plaxo Online 3.0 and trying to make it fast. It turns out that so many of the techniques and axioms used by the AJAX community have serious performance implications, so I’ll explain how we discovered and worked around them. I’ll also argue that a different design philosophy is necessary if performance is truly a goal–one in which every line of code you write has a cost that needs to be justified and in which you assume the platform (web browser) is fragile and brittle rather than infinite and omnipotent. In other words, you have to stop asking “what can I make the browser do?” and instead ask “what should I make it do?”.

In the last few years I’ve been going to a lot of conferences on behalf of Plaxo, and OSCON is easily my favorite of them all. (Thanks again Terry for turning me on to it!) I think it’s because there’s a higher signal-to-noise ratio of people attending who are really building and influencing things to those that are just attending to figure out what all the latest hype is about. It doesn’t feel corporate; it feels like a bunch of smart hackers trying to figure things out together and share what they’ve learned. It’s what many conferences aspire to but so rarely achieve these days. Plus Portland is a really fun town to spend a week in (last year I went to the rose garden, Japanese tea garden, a custom martini bar, and not one but two Dresden Dolls shows).

I can’t wait. See you there!

Has it really been five years already??

My Stanford class book page

I can’t believe it, but Stanford is already telling me to get ready for my five-year college reunion this fall. Five years–that’s as long as I was in college (including my Master’s degree) but this five years sure went by a lot faster than the previous five! Then again, I just passed my five-year anniversary at Plaxo (the math is a bit funny because I started working at Plaxo before I finished my MS, which btw is not advisable for one’s sanity).

Anyway, as part of the reunion they asked everyone to make a page for a “class book” that they’ll be distributing. It’s a one-pager where you share some of your Stanford memories and give an update on your life since graduating. I think they expected most people to draw their class book page by hand and snail-mail it in, or to use their web-based pseudo-WYSIWYG editor, but I wanted a bit more control. So I downloaded the template PDF and opened it in Adobe Illustrator, which converted it to line-art (wow–product compatibility, who knew?!). Then I was able to add the type and graphics in Illustrator and save the final copy back out to a PDF.

For me, life since Stanford meant three things: doing NLP research (this is the reunion for my undergrad class), working at Plaxo, and getting married. As scary as it is to consider that five years have gone by already, when I actually stop to think of all the wonderful things that have happened since then, I consider myself extremely fortunate. I couldn’t be happier. In fact, I could really use another five years like this one!

One quick technical note: Since I embedded lots of photos in my class book page at their original resolution (I just scaled them down in Illustrator so they would still print at high quality), the file ended up being almost 200MB. When I first exported it as a PDF, I kept all the default options, including “preserve Illustrator editing capabilities” and the resulting PDF was 140MB. Clearly I could not e-mail this to Stanford nor post it on my web site. So I tried again, unchecked the Illustrator option, and also went into the compression settings and told it to use JPEG for the color images (which of course the originals were, but the default PDF option is to use 8-bit ZIP). This made a huge difference and the PDF was only 3MB but still high resolution. I also tried the compression option “Average downsampling at 300 dpi” for color images, but that essentially took out all the resolution in the images, so as soon as you magnified the document at all, they were very pixelated (looked more like 72 dpi to me). Apparently just telling it to use JPEG with the original images is plenty.

Handling GeoCoding Errors From Yahoo Maps

One of the best features of Yahoo’s AJAX Maps API is its ability to geo-code full-text mailing addresses into lat/long on-the-fly, so you can say for instance “draw a map of 1300 Crittenden Lane, Mountain View, CA 94043“. (By now, Google and MSFT also offer geocoding, but Yahoo had it way earlier because they embraced JSON-P at a time when everyone else was still scratching their heads).

Yahoo does a pretty good job of geocoding addresses even if the format is a bit weird or some of the info is missing, but of course they can’t always figure it out, especially if the address is garbage to start with. Their API indicates (somewhat cryptically) that you can capture an event when geocoding completes, but they don’t tell you what data you get or how to deal with it. Since there doesn’t appear to be much discussion of this on the Internets, I thought I’d provide the answer here for the record.

When creating a YMap, you can register a callback function to get triggered every time geocoding completes, like so:

var map = new YMap( … );
YEvent.Capture(map, EventsList.onEndGeoCode, myCallback);
…
function myCallback(resultObj) { … }

Yahoo’s API claims that you can also pass an optional private context object, but as far as I can tell they never send it back to you. Of course you can always use a closure around your callback function to achieve the same thing.
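
For example (a minimal sketch–the context object and the second callback argument are my own invention, not part of Yahoo’s API):

var context = { label: 'office map' }; // hypothetical per-map context
YEvent.Capture(map, EventsList.onEndGeoCode, function(resultObj) {
  // the closure captures "context", so no API support is needed
  myCallback(resultObj, context);
});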

Now for the part they don’t tell you: your callback is called with a single resultObj argument. You can figure out the contents of this argument by just writing your callback function to take an argument and then writing console.dir(resultObj) to print out its full nested structure in the unbelievably useful Firebug (Joe, you’re my hero!). Here’s what you’ll see:

var resultObj = {
  success: 1, /* 1 for success, 0 for failure */
  /* Original address you tried to geo-code */
  Address: "1300 Crittenden Lane Mountain View, CA 94043",
  GeoPoint: {
    /* This is a YGeoPoint, which also has a bunch of functions you can call */
    Lat: 37.424663,
    Long: -122.07248
  },
  ThisMap: { /* reference to the YMap */ }
};

So in your callback function you just test for resultObj.success, and if the geocoding failed, you can show an appropriate error message.

One trick I found for showing an error message is that you can embed a hidden div with an error message inside the map-holder div you pass to the YMap constructor, and YMap won’t get rid of it. If you use absolute positioning and give it a z-index, you can then show it when geocoding fails and get a nice “Map not available” right where the map would normally be.
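
Putting those two pieces together, the callback might look like this (a sketch; the element id and markup are made up):

// Assumes markup like:
//   <div id="mapHolder">
//     <div id="mapError" style="display:none; position:absolute; z-index:10;">
//       Map not available
//     </div>
//   </div>
function myCallback(resultObj) {
  var errorDiv = document.getElementById('mapError');
  // show the error overlay only when geocoding failed
  errorDiv.style.display = resultObj.success ? 'none' : 'block';
}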

Here’s a working example of handling geocoding and showing an error message. Thanks Yahoo! for the great API, and hopefully some of this info will find its way into the next rev of your API docs. :)

PS: Special thanks to Mark Jen for finding me a decent code-writing plug-in for Windows Live Writer! Boy did I struggle with getting WordPress not to mangle my code in the eval post!

The origins of Lunch 2.0

In honor of the first officially-sanctioned Lunch 2.0 at Yahoo today, I thought I would finally write something on how and why we started this valley phenomenon:

Living in Silicon Valley is expensive, and the traffic on 101 sucks. So why not telecommute from, say, somewhere in the Midwest? What does living out here get you that working remotely doesn’t? Well, for one, all the other cool companies are out here. And, more importantly, the smart, innovative people behind those companies all live and work out here. But except for hiring employees, we rarely take advantage of that fact. We read about these companies in the blogs, and we use their products, and we’d probably all love to see how these companies and people live and work, but we don’t. Even though they’re like 5 minutes away from us, and they’re full of people just like us that would love to see how we live and work too!

And though many Silicon Valley companies are ostensibly at least somewhat in competition with one another, I think in most respects we’re all kindred spirits fighting the same fight–trying to transform the world through technology and build a successful, functioning organization in the process. We all face the same issues: prioritizing features, hiring, nurturing a happy and productive work environment, dealing with growth, dealing with meetings and process (how much is too much? How little is too little?), and so on. Yet we rarely talk about these things, mainly because we’re all so busy trying to figure them out on our own. While traditional conferences may fill this need to some degree, they’re usually too big, too expensive, too impersonal, and too infrequent to appeal to most working people in the valley. But lunch is a perfect venue to get together, “talk shop”, and see how everyone else is set up. Everyone has to eat, it’s an informal setting, and it tends to be a manageable size. And Silicon Valley is such a small, closely connected world that we know people at all the companies we care about within a degree or two of separation.

So initially, we just started doing this ourselves, e.g. “hey, you know so-and-so at Yahoo, can we go meet him for lunch next week?” or “my friend has this new startup and they just got an office, let’s go see them”. We thought others would be interested to see what we had seen, so we took photos and posted them online (in the process, coining “lunch 2.0” since we needed a name for the site, and it felt like a web-2.0 approach to the problem of meeting people). We also blogged upcoming events, but mainly just as an alternative for managing a mailing list of our few friends that wanted to come to these events. As we told our friends what we were doing, more and more wanted to come too, so we just pointed them to the blog, not thinking much of it.

The “we” in this case was initially me; Mark Jen (yes, that Mark–he joined Plaxo right after leaving Google); Terry Chay from Plaxo (now at Tagged); and Terry’s friend Dave at Yahoo. Mark and I started having more lunches out at friends’ companies, and Terry said he and Dave had been trying to do the same, so we quickly joined forces. Terry now tells people he was the “VC of lunch 2.0” because he plunked down the 5 bucks or so for the lunch20.com domain name. 🙂

The first company to realize that officially hosting lunch 2.0 would be a good thing was SimplyHired in early March ’06. Previously, we all just went to lunch with friends at Yahoo!, Google, and so on, but no one from the company officially “hosted” it, and certainly no one paid for us to eat. But Kay Kuo at SimplyHired wanted to get the word out about her company, so they ordered a bunch of food, gave us a tour of their office, demoed their site, and even gave us some free t-shirts! The event was a huge success, both for SimplyHired and for the people that came. Soon after, other companies started offering to host their own lunch 2.0 events. Mainly this was because someone from that company had attended a previous lunch 2.0 event, gotten excited, and gone back to tell their company they should do the same. Early lunch 2.0 hosts were Meebo, Plaxo, AOL, JotSpot, and Zvents.

Another big milestone was in May ’06, when some people from Microsoft’s Silicon Valley Center got permission to host a lunch 2.0 event at their campus. This was definitely the most prominent company to host lunch 2.0 so far, and they did an amazing job, including paying for our lunch at the MSFT cafeteria, providing a tour of their six-building campus, and bringing a lot of their own engineers to the event. By this point, lunch 2.0 had picked up enough of its own momentum that our role as stewards changed from finding and convincing new people to host events to just coordinating times and logistics for companies that came to us wanting to host. That trend has continued thus far and shows no signs of slowing yet.

Other important milestones in lunch 2.0 history:

  • When JotSpot hosted lunch 2.0, something like 45 people showed up. Previously the biggest event had around 20 people, so this was the first time we thought “whoa, this thing is really getting out there”.
  • Meebo hosted a lunch 2.0 early in the summer and invited all summer interns in the valley to come. They had about 6 employees at the time and were sub-leasing a small amount of office space from another startup. About 80 people showed up, completely filling the office and spilling out onto the street.
  • Zazzle hosted an outdoor BBQ at their office and attracted a record crowd of about 150 people. They also set up tables with umbrellas, a professional BBQ setup and buffet line, custom-printed posters and banners, and even custom-printed lunch 2.0 t-shirts for all attendees.
  • Jeremiah from Hitachi Data Systems organized a combination lunch 2.0 and “web expo” at their executive briefing center. There were about 300 attendees, and we picked 10 data-intensive startups to bring laptops and set up an informal web expo where they could demo their products and talk about how they dealt with large amounts of data.

Going forward, it’s great to see that some of these events have gotten so large, but we also want to make sure that smaller startups can host lunch 2.0 events without feeling like they have to handle a ton of people or spend a lot of money. There are still plenty of cool companies in the area that we’ve never been to yet, so we’re hoping to keep doing lunch 2.0 for the foreseeable future.

Returning to those initial observations about making the most of living in the valley, I think the best thing that’s come from lunch 2.0 is that we’ve met so many other great people in the area, seen how they work, and they’ve met us in return. I feel more connected to what we’re all doing here, and I feel that I’m taking better advantage of the time and space in which we’re all living.

Fixing eval() to use global scope in IE

[Note: This is the first in what I hope will become a series of technical articles on the lessons I’ve learned “from the trenches” of my web development work at Plaxo. Non-techy readers are invited to skip any articles categorized under “Web development”. :)]

Update: This article has been picked up by Ajaxian, and it’s sparked an interesting discussion there. 

At Plaxo I’ve been working on a new (soon to be released) version of Plaxo Online (our web-based address book, calendar, and more) that is very ambitious both technically and in terms of user experience. We’re currently deep into performance tuning and bug fixing, and we’ve already learned a lot of interesting things, most of which I hope to share on this blog. The first lesson is how to correctly eval() code in the global scope (e.g. so functions you define inside the eval’d code can be used outside).

When we built the first version of the new site, we combined all the JavaScript into one giant file as part of our deployment process. The total codebase was huge and it had the predictable effect that initial page-load time was terrible because the user’s CPU was solidly spiked for several seconds while the poor browser choked through the massive amount of code it had to parse. So we started loading a lot of our code on-demand (packaging it into several logical chunks of related files and using dojo’s package/loader system to pull in the code as needed).
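
The loading itself looked roughly like this (a sketch of dojo’s 0.4-era package API as I remember it; the module name is hypothetical):

// each logical chunk of related files declares itself as a package...
dojo.provide("plaxo.contacts.editor"); // hypothetical module name

// ...and callers pull it in on demand; dojo fetches and evals the file only once
dojo.require("plaxo.contacts.editor");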

All was well until we started defining global functions in the loaded JavaScript. (We did this mainly for event handler code so we didn’t have to spend time walking the DOM and finding all the clickable nodes after injecting innerHTML to hook them up to the right scoped functions.) In Firefox, everything kept working fine, but in IE, none of the global functions were callable outside of the module being loaded on-demand (you would get a typically cryptic IE error that in effect said those global functions weren’t defined). It seemed clear that when the code being loaded got eval’d, the functions weren’t making it into the global scope of the page in IE. What was unclear was how to fix this.

Here’s a simplified version of the situation we faced:

function loadMyFuncModule() {
  // imagine this was loaded via XHR/etc
  var code = 'function myFunc() { alert("myFunc"); }';
  return eval(code); // doesn't work in FF or IE
}

function runApp() {
  loadMyFuncModule(); // load extra code "on demand"
  myFunc(); // execute newly loaded code
}

The thing to note above is that just calling eval() doesn’t stick the code in global scope in either browser. Dojo’s loader code solves this in Firefox by creating a dj_global variable that points to the global scope and then calling eval on dj_global if possible:

function loadMyFuncModule() {
  // imagine this was loaded via XHR/etc
  var code = 'function myFunc() { alert("myFunc"); }';
  var dj_global = this; // global scope object
  return dj_global.eval ? dj_global.eval(code) : eval(code);
}

This works in Firefox but not in IE (eval is not an object method in IE). So what to do? The answer turns out to be that you can use a proprietary IE method window.execScript to eval code in the global scope (thanks to Ryan “Roger” Moore on our team for figuring this out). The only thing to note about execScript is that it does NOT return any value (unlike eval). However when we’re just loading code on-demand, we aren’t returning anything so this doesn’t matter.

The final working code looks like this:

function loadMyFuncModule() {
  // imagine this was loaded via XHR/etc
  var code = 'function myFunc() { alert("myFunc"); }';
  var dj_global = this; // global scope reference
  if (window.execScript) {
    window.execScript(code); // eval in global scope for IE
    return null; // execScript doesn't return anything
  }
  return dj_global.eval ? dj_global.eval(code) : eval(code);
}

function runApp() {
  loadMyFuncModule(); // load extra code "on demand"
  myFunc(); // execute newly loaded code
}

And once again all is well in the world. Hopefully this is the type of thing that will be hidden under the hood in future versions of dojo and similar frameworks, but for the time being it may well impact you if you’re loading code on demand. So may this article save you much time scratching your head and swearing at IE. 🙂

(PS: Having found the magic term execScript, I was then able to find some related articles on this topic by Dean Edwards and Jeff Watkins. However, many of the details are buried in the comments, so I hope this article will increase both the findability and conciseness of this information.)
