Web app development is trending towards running all user logic and interaction code on the client side, leaving the server to expose REST or RPC interfaces. Compilers are targeting JS as a platform, and the next versions of ECMAScript are being designed to take that into account. Client-side frameworks such as Backbone, Ember, and Require encourage the creation of feature-rich applications that not only have a lot of code, but also a lot of interactions between components, and between components and data.

This is all great and can lead to some excellent user experiences, but there's no question that it's harder to develop web applications than web pages.

The fundamental reason for this is that the web is a totally deficient code deployment platform: you serve your code and data over the internet, to run on some random browser, in JavaScript, a language with which you need to be extremely careful[1]. And it's not getting better very quickly. I feel like if Star Trek were real life, Captain Jean-Luc Picard would, every once in a while, not be able to fight Klingons because his dashboard was still loading.

I'd like to highlight 3 relatively common mistakes with easy solutions, and talk about some specific things we've encountered and learned at ReadyForZero[2].


1. Stripping cache-busting strings

You may serve static content through a CDN, which is of course desirable. If you pass requests through to your real server on a cache miss (eg: "custom-origin" on AWS points to your real website), you should be careful. You probably use a cache-busting string inside the name of the served file to serve new versions when they're deployed, so that your filenames look something like this (the names below are just illustrative):
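  /static/js/main.3f2a8c1d.js
  /static/css/site.9b7e4d02.css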

This isn't so hard to do: you can use any hashing algorithm to generate a signature for the file that will change whenever its contents change. When the new URL is referenced, it can't possibly be cached, so it will be retrieved anew from the server.

The common mistake happens here. There's a lot of advice online that recommends stripping the cache-busting string in nginx and always serving the most recent version of the file. This can lead to your site serving different files (eg: HTML and JavaScript) with inconsistent versions, but, more importantly, it can easily lead to your CDN caching an incorrect version if you're using multiple server processes (eg: on different machines). The error happens like this:

  • Initial state: both servers are serving HTML1 and JS1.
  • Server A restarts after a deploy and now serves HTML2 and JS2.
  • A client with HTML2 requests JS2 from the CDN, which, since the file is new, causes a cache miss.
  • The CDN passes the request through to your custom origin, and it happens to hit server B.
  • Server B hasn't restarted yet, so it strips the cache-busting string and serves the old version.
  • The CDN caches the old version under the new name.

This is pretty obvious when you think about it, but blindly following advice online can easily lead to this error. What's worse is that you may never know, since everything looks fine to you, while users in a different physical region, served by another CDN region, are having issues. The solution is not to strip cache-busting strings, and to store static assets in a place where all versions can be served concurrently.
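One way to set that up is to generate the hashed names at build time and keep every version side by side on disk (or in your asset bucket). A minimal sketch in Node, assuming a hypothetical static/ layout:

  // Build-step sketch: write a content-hashed copy of each asset so old and
  // new versions can be served concurrently; reference the returned name in
  // your HTML.
  var fs = require('fs');
  var path = require('path');
  var crypto = require('crypto');

  function versionAsset(srcPath) {
    var contents = fs.readFileSync(srcPath);
    // Any hash works, as long as it changes when the contents change.
    var hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 8);
    var ext = path.extname(srcPath);              // ".js"
    var base = srcPath.slice(0, -ext.length);     // "static/js/main"
    var versioned = base + '.' + hash + ext;      // "static/js/main.3f2a8c1d.js"
    fs.writeFileSync(versioned, contents);        // old copies are left in place
    return versioned;
  }

  console.log(versionAsset('static/js/main.js'));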

2. Serving one gigantic JS bomb

Everyone knows that you should minify and concatenate your JavaScript files together. But it's a mistake to do this blindly. If the concatenated file is large enough, it may be more efficient to split it up and parallelize the requests. In addition, if you're modifying parts of the file frequently, you'll trigger a lot of cache invalidations even though most of the file may not have changed.

If you separate out the files that change frequently, you'll have a handle on both issues. I'd recommend using something like require.js: it gives real dependency management to your JavaScript, is incredibly easy to set up when you're first starting out (and a big pain to add later), and helps you understand and manage your dependencies, with advanced options like asynchronous loading.
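For example (the module and element names here are hypothetical), each file declares what it depends on instead of relying on script-tag ordering:

  // js/app/dashboard.js -- dependencies are declared explicitly and loaded
  // by require.js, rather than implied by script-tag order.
  define(['jquery'], function ($) {
    return {
      render: function (el, balance) {
        $(el).text('Balance: $' + balance.toFixed(2));
      }
    };
  });

  // In the entry point (e.g. main.js):
  require(['app/dashboard'], function (dashboard) {
    dashboard.render('#dashboard', 1234.56);
  });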

One cautionary note: require.js gives up trying to load a resource after a certain amount of time. The timeout is specified in the waitSeconds option and defaults to 7 seconds, which, depending on where your users are (eg: mobile), may be quite short.
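If that's a concern, raise it explicitly in your config (the value here is arbitrary):

  require.config({
    baseUrl: '/static/js',
    // The default is 7 seconds; raise it for slow connections, or set it to 0
    // to disable the timeout entirely.
    waitSeconds: 30
  });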


3. Not aggregating error events

You can't just launch your JavaScript out into the world and not keep tabs on it. You can't possibly test every browser combination with every user account state, and differences in load times may cause weird states. So it's important to set up some sort of feedback mechanism to see whether your users are getting errors. You can do that fairly easily by specifying a global error handler that collects errors and sends them back to the server; a trivial example is sketched below. The tricky part is that there will always be some non-zero number of errors, because people have weird toolbars installed or whatever, so you need to track what the steady state is and check for deviations against that.
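A minimal version of such a handler might look like this (the /log/js-error endpoint and payload fields are illustrative, not a real API):

  // Report uncaught errors back to the server. Keep the handler defensive:
  // it must never throw, or the original error gets lost.
  window.onerror = function (message, url, lineNumber) {
    try {
      var payload = {
        message: message,
        script: url,
        line: lineNumber,
        page: window.location.href,
        userAgent: navigator.userAgent
      };
      var xhr = new XMLHttpRequest();
      xhr.open('POST', '/log/js-error', true);  // illustrative endpoint
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.send(JSON.stringify(payload));
    } catch (e) {
      // Swallow reporting failures; never break the page over logging.
    }
    return false; // let the browser's default error handling run as well
  };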

At ReadyForZero we trap onError events at the top level, send them back to the server, and get a daily report summarizing how many users got errors and what they were. We've found that the error messages alone are often insufficient, so we also pass back the last few events from our event system. Having the most recent Backbone or jQuery events that were triggered is often helpful in providing the user's exact context at the moment the error occurred.
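One way to capture that context (a sketch assuming Backbone and Underscore and a shared application event bus; the names and buffer size are arbitrary):

  // Keep a small ring buffer of the most recent application events so error
  // reports carry the user's context.
  var appEvents = _.extend({}, Backbone.Events); // shared event bus
  var recentEvents = [];
  var MAX_RECENT = 20;

  // Backbone's special "all" event fires for every trigger, with the event
  // name as its first argument.
  appEvents.on('all', function (eventName) {
    recentEvents.push({ name: eventName, at: new Date().getTime() });
    if (recentEvents.length > MAX_RECENT) {
      recentEvents.shift();
    }
  });

  // ...then include recentEvents in the payload sent from window.onerror.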


Low-hanging fruit

The frustrating part is that we shouldn't have to worry about any of this shit. Companies should be worrying about their products, and building them out quickly and correctly. Making sure that some of these easy wins are set up will let you focus on the big stuff.

People spend a lot of time obsessing over funnels, but just getting your app working properly can lead to gains just as big.


  1. Does your client-side code have any memory leaks? Are you sure? How do you know?
  2. We've got some really smart folk at ReadyForZero working on pushing the state of the art.


Discuss this on Hacker News.