Knockout.js 1.3 External Templates

{Cross posted on FreshBrewedCode}

Finally! I updated my Knockout.js-External-Templates plugin to support the new template architecture in Knockout.js 1.3. “So,” you ask, “why would I want to use this?” If you’re a developer using Knockout.js, perhaps you’ve run into the inevitable code bloat as your templates multiply and begin to crowd your document? If you’ve been using jQuery templates with Knockout.js, perhaps you’ve not only grown tired of keeping templates in SCRIPT elements, but you’ve also wanted to take advantage of real markup syntax highlighting in your favorite IDE (which should be WebStorm, by the way) – but alas, markup inside a SCRIPT element just can’t do that. Fret no more! This plugin will enable you to:

  • Separate your concerns by keeping templates in separate files, OR in the document, OR both (mix and match to your heart’s content).
  • Take advantage of syntax highlighting by storing your native or jQuery templates in their own HTML file.
  • Lazy load templates only as they are needed by your application.

Enough preamble already, let’s look at an example.

Here we have a simple page that displays a list of states.  Each state has a list of cities associated with it, and each city has an image and an accompanying list of statistics:

KoExternalTemplateExample App

Here’s a look at the JavaScript view model that contains the data bound to the view(s) on the page:

var viewModel = {
    states: [
        new State("Tennessee", "Southeast", [
            new City("Nashville", [
                new Statistic("Population", "749,935"),
                new Statistic("Mayor", "Karl Dean")
            ]),
            new City("Franklin", [
                new Statistic("Population", "62,487"),
                new Statistic("Mayor", "Ken Moore")
            ]),
            new City("Brentwood", [
                new Statistic("Population", "37,060"),
                new Statistic("Mayor", "Paul Webb")
            ]),
            new City("Murfreesboro", [
                new Statistic("Population", "108,755"),
                new Statistic("Mayor", "Tommy Bragg")
            ])
        ]),
        new State("Georgia", "Southeast", [
            new City("Atlanta", [
                new Statistic("Population", "3,500,000"),
                new Statistic("Mayor", "Kasim Reed")
            ]),
            new City("Snellville", [
                new Statistic("Population", "18,242"),
                new Statistic("Mayor", "Jerry Oberholtzer")
            ])
        ]),
        new State("Ohio", "Mid-West", [
            new City("Columbus", [
                new Statistic("Population", "1,100,000"),
                new Statistic("Mayor", "Michael B. Coleman")
            ])
        ])
    ]
};

To drive this page, we have a state template, a city template and a statistics template, all of which are stored separately from the index.html page. Here’s a screen capture of the project folder hierarchy:

File Structure

As you can see from above, the templates reside in the “templates” folder – relative to the index.html document. At the top of the main.js file, we tell infuser where to look for templates with this line: infuser.config.templateUrl = "templates";. You can easily change the template look-up location by altering the templateUrl.
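Putting those options together, a template id is turned into a fetch URL by simple concatenation. The helper below is a hypothetical illustration of that naming convention (buildTemplateUrl is my name, not part of infuser, and the library’s internals may differ):

```javascript
// Hypothetical sketch of how a template id maps to a URL under the
// templateUrl/templatePrefix/templateSuffix options (not infuser's source).
function buildTemplateUrl(config, templateId) {
    return config.templateUrl + "/" +
           (config.templatePrefix || "") +
           templateId +
           (config.templateSuffix || ".html");
}
```

With templateUrl set to "templates", asking for the “state” template would resolve to "templates/state.html".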


The state.html template:

<li class="state-container">
    <h3 data-bind="text: name"></h3>
    <ul data-bind="template: { name: 'city', foreach: cities }"></ul>
</li>

The city.html template:

<li class="city-container">
    <img class="city-img" data-bind="attr: { src: img, alt: name }">
    <div class="city-facts">
        <em data-bind="text: name"></em>
        <div data-bind="template: { name: 'stats' }"></div>
    </div>
    <span style="clear:both;"></span>
</li>

The stats.html template:

<ul data-bind="foreach: stats">
    <li class="stat-container" data-bind="ifnot: editing">
        <div class="stat stat-name" data-bind="text: name"></div>
        <div class="stat stat-value" data-bind="text: value, click: toggleEdit"></div>
    </li>
    <li class="stat-container" data-bind="if: editing">
        <span class="stat stat-name" data-bind="text: name"></span>
        <input class="stat stat-value" type="text" data-bind="value: value"><input type="button" value="save" data-bind="click: toggleEdit">
    </li>
</ul>

The main index.html file is very lean, holding only the script & css includes, plus the initial placeholder for the starting template:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
    <title>koExternalTemplateEngine Example</title>
    <link rel="stylesheet" href="css/style.css">
    <script type="text/javascript" src="js/jquery-1.5.2.js"></script>
    <script type="text/javascript" src="js/knockout-latest.debug.js"></script>
    <script type="text/javascript" src="js/koExternalTemplateEngine_all.js"></script>
    <script type="text/javascript" src="js/main.js"></script>
</head>
<body>
    <ul class="state-list" data-bind="template: { name: 'state', foreach: states }"></ul>
</body>
</html>

When you call ko.applyBindings(viewModel), Knockout will parse the template binding on line 13 (UL element) in the index.html file and ask the template engine to get the template content for the “state” template. The KoExternalTemplateEngine will check the DOM first (if you included the template in the page), and if it doesn’t find it locally, it tells infuser to pull the template down from the external endpoint you configured via the infuser.config options. (It’s worth noting that the KoExternalTemplateEngine will handle anonymous templates just like the native engine if the template being retrieved isn’t a DOM template or an external one.) As the “state” template is evaluated, the same steps will occur when Knockout asks for the “city” template, and again when it comes across the template binding to the “stats” template.
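That “DOM first, then external” fall-back can be sketched without any of the libraries involved (resolveTemplate and its arguments are stand-ins I’ve made up for illustration; this is not infuser’s or Knockout’s actual code):

```javascript
// Library-free sketch of the look-up order: a local (in-document) template
// wins; only a miss triggers the external fetch.
function resolveTemplate(templateId, localTemplates, fetchExternal) {
    // localTemplates stands in for a DOM look-up (e.g. document.getElementById)
    if (localTemplates[templateId] !== undefined) {
        return localTemplates[templateId];
    }
    return fetchExternal(templateId);
}
```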

Although this is a very simple example, I’ve included a click binding that swaps out the “view” template for a statistic with an “edit” version when you click on one of the statistic values. This is to demonstrate that all the normal “native KO template” functionality behaves as you would expect.

KoExternalTemplateEngine Example App 2

The KoExternalTemplateEngine also supports jQuery templates (special thanks to Ryan Niemeyer for fixing the last two bugs I had with jquery-tmpl support). Since it takes a dependency on infuser, you may want to look at the configuration options available in infuser (it will give you an idea of how you can control where templates are pulled from, etc.).

The sample app from this blog post is included as part of the KoExternalTemplateEngine repository on github (it’s in the example/native2 folder).


Infuser – a Template Loader

{Cross-posted on Fresh Brewed Code}

In my last post I discussed a simple jQuery plugin called “Traffic Cop” – which prevents duplicate AJAX requests for the same resource while an identical request is already running. The reason I wrote Traffic Cop was to support a project called “infuser”. Infuser was born out of frustrations I’ve had with where other template engines/binding frameworks expect templates to reside (quite often SCRIPT tags). Requiring template fragments to reside in SCRIPT tags is a recipe for epic fail when it comes to maintenance & readability (not to mention the lack of IDE syntax support when writing markup inside a SCRIPT tag). I was surprised to find that no one had written a “generic-ized” utility that could interface with a given template engine and handle the fetching of templates from a remote resource. So, borrowing some ideas from Dave Ward, I initially wrote an external template engine for Knockout.js. The Knockout 1.3 beta included significant changes to templating, so as part of my effort to update the plugin for 1.3, I began to separate out utilities that could be re-used on their own or with other frameworks.

So – what does infuser do?

  • Asynchronously (or synchronously, if you must) retrieves templates and stores them locally in the client once retrieved. Default storage options are in-memory (hash) or in SCRIPT tags (since some engines prefer that). You can write a different storage provider if necessary.
  • Provides hooks for a callback to be invoked after you “get” a template, OR if you use the “infuse” method, you have more extensive control about how the template is rendered (if it’s data-driven) and attached to the DOM – including pre- and post-attach options and a render override.
  • Provides a hook for telling infuser how your preferred template engine handles binding a model to a template (making it possible for infuser to work with several major template engines).

But wait, why is this useful?

  • First, you don’t have to put your template content in your main document (or duplicate it in multiple documents, God forbid)
  • If your template engine expects your templates to be in SCRIPT tags, you don’t have to lose syntax highlighting, etc. in your IDE – you can still place them in their own files with a valid markup extension
  • Dovetailing the above point, it can aid in re-usability and maintainability via separation of concerns
  • Infuser takes advantage not only of its own in-memory storage (once a template has been retrieved, it’s cached), but also of your browser’s cache (assuming it’s not disabled and the server is returning a 304 response)
  • It abstracts away infrastructure/ceremonial code involved in retrieving, binding and rendering templates to the DOM
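The caching point above boils down to a memoized getter. Here’s a minimal, library-free sketch of the idea (makeTemplateCache is a hypothetical name; infuser’s real storage providers are pluggable and more involved):

```javascript
// Minimal sketch of in-memory template caching: the first request for an id
// hits the fetch function, every later request is served from the local hash.
function makeTemplateCache(fetch) {
    var store = {};   // template id -> template content
    return function get(templateId) {
        if (!(templateId in store)) {
            store[templateId] = fetch(templateId);
        }
        return store[templateId];
    };
}
```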


Infuser provides two main ways to interact with templates. The first is the bare-bones “get” method. You provide it two arguments: a template name, and a callback to be invoked when the template is retrieved (the callback takes the retrieved template as its only argument). The second way is via the “infuse” method, which provides a much more sophisticated set of functionality. Let’s look at examples of both:

“get(templateId, callback)”

Let’s assume we have a static template that we want to load when a button is clicked. In the following example, we’re telling infuser to look for templates in a “templates” directory (relative to the current document) that have a prefix of “tmpl_” and a file extension of “.html”. Then, we’re binding the “#btnTemplate” button’s click event to a function that gets the “HelloWorld” template. In our callback, we’re hiding the original content and removing it, then fading in the new content:

    infuser.config.templateUrl = "./templates"; // look for templates in a "templates" directory (relative to this page)
    infuser.config.templateSuffix = ".html";    // look for templates with an ".html" suffix (this is a default value, just showing as an example)
    infuser.config.templatePrefix = "tmpl_";    // look for templates with a "tmpl_" prefix
    // Now - wire up a click event handler that asynchronously fetches a static html file and appends it to an element
    $("#btnTemplate").click(function() {
        infuser.get("HelloWorld", function(template) {
            var tgt = $("#target");
            tgt.hide().children().remove();
            tgt.append(template).fadeIn();
        });
    });

Very straightforward stuff. What about data-driven templates? In this example we’re retrieving a jQuery template, binding it and then rendering it:

// Pulling a jquery-tmpl
var model = { names: ["Ronald", "George", "William", "Richard"] };
infuser.config.templateUrl = "./templates";
$("#btnTemplate").click(function() {
    infuser.get("Example", function(template) {
        var tgt = $("#target");
        var div = $("<div/>");
        $.tmpl(template, model).appendTo(div);
        tgt.empty().append(div);
    });
});

So – while it’s straightforward, it’s in danger of turning into a lot of ceremonial code – especially if you have several templates to pull in for a given document. Plus, what if you always wanted to take the approach of “hiding, then removing” the target content, and then fading in the new template? Writing that each time would be overkill. That’s where “infuse()” comes into play:

“infuse(templateId, options)”

The “infuse” method takes two arguments: the template id/name, and an options hash that has the following members:

  • preRender: a method with a signature of (target, template), where “target” is a selector used to target n-number of DOM elements where the template is to be rendered, and “template” is the retrieved template content prior to any “binding” (if a template engine is involved).
  • render: method with a signature of (target, template), where “target” is a selector used to target n-number of DOM elements where the template is to be rendered, and “template” is the completed template (i.e.- if you’re using a template engine, this is after the model has been bound to it).
  • postRender: method with a signature of (target), where “target” is a selector used to target n-number of DOM elements where the template has been rendered.
  • target: can be a function or a string. If it’s a function, it takes the template id as its only argument and should return – based on whatever transformation logic you desire – a selector (string). (The default implementation is a function that takes the template id and returns it prefixed with “#”, making it a selector that targets a DOM element with an id matching the template name.) If it’s not a function, then the value provided is treated as a selector. Typical usage is to define a default function used for most cases, and then override it with a specific value in the options hash when needed.
  • loadingTemplate: an object containing a content member (the HTML to display while loading a template), plus transitionIn and transitionOut methods that can be used to handle the appearance of the loading template (i.e.- you can fade it in and fade it out, etc.).
  • bindingInstruction: a method with a signature of (template, model) where “template” is the content retrieved from the server and “model” is a JavaScript object that will be bound to the template. For example, to tell infuser you are using jQuery templates, you’d do this: infuser.defaults.bindingInstruction = function(template, model) { return $.tmpl(template, model); };. It defaults to simply returning the template content.
  • useLoadingTemplate: a boolean indicating if the loadingTemplate member should be used when loading templates.
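To make the bindingInstruction contract concrete without pulling in jquery-tmpl, here’s a toy stand-in engine of my own that just substitutes {{token}} placeholders – the only thing infuser cares about is the (template, model) -> markup shape:

```javascript
// Toy template "engine" used purely to illustrate the bindingInstruction
// signature; $.tmpl, _.template, etc. would slot in the same way.
var bindingInstruction = function(template, model) {
    return template.replace(/\{\{(\w+)\}\}/g, function(match, key) {
        return model[key];
    });
};
```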

Whoa! That’s a lot of options, you’re probably thinking. The good news is that basic defaults are provided for each, so you only have to provide what you’re overriding. Let’s take the jQuery template example from above and re-write it using “infuse”:

    infuser.config.templateUrl = "./templates";
    infuser.defaults = $.extend({}, infuser.defaults, {
        bindingInstruction: function(template, model) {
            return $.tmpl(template, model);
        },
        preRender: function(target, template) {
            $(target).hide().children().remove();
        },
        render: function(target, template) {
            $(target).append(template).fadeIn();
        }
    });

    $("#btnTemplate").click(function() {
        infuser.infuse("Example", { target: "#target", model: model });
    });

We could also easily make the click event handler even shorter if we renamed our target element’s id to “Example” instead of “target”:

    infuser.infuse("Example", { model: model });

It’s important to note that the defaults we set on the “infuser.defaults” object will now apply to any template rendering on the page, so we’ve reduced the “noise” significantly. You can override any of the defaults at any point by providing a different implementation of it in the options hash (second arg to “infuse”). For example, if your heart was really set on sliding a template down (as opposed to the default fadeIn() in the above example) then you could do the following:

    infuser.infuse("Example", {
        model: model,
        render: function(target, template) {
            $(target).append(template).slideDown();
        }
    });

Wrapping Things Up

Infuser provides a getSync call for synchronous retrieval, but the “infuse” call is always async (I have no intention of creating a synchronous version, and may remove the getSync call at some point, since synchronous AJAX is a bad idea all around). “Traffic Cop” is used internally by infuser to prevent multiple simultaneous requests for the same template. So far I’ve used infuser in conjunction with jQuery templates, Underscore templates & static content. In theory, infuser should work with any template engine that can be abstracted behind the “bindingInstruction(template, model)” call. If you want to see more usage demonstrations, please check out the examples included in the github repository. Your feedback and pull requests are welcome!


Traffic Cop

{Cross-posted on Fresh Brewed Code}

Recently I’ve been working on a project called ‘infuser‘ – it’s a JavaScript library for retrieving resources (i.e. – views/templates) asynchronously, and providing what is hopefully a nice API around rendering (it supports data-driven templates from multiple template engines, as well as static content), and attaching the completed content to the DOM.  I ran into a situation where multiple requests could be made for the same external resource simultaneously, and I wanted to prevent the unnecessary duplicate round trips.  Thus, “Traffic Cop” was born.  It’s roughly 30 lines of code that wraps the standard jQuery $.ajax() call with a custom $.trafficCop() function:

/*
    Author: Jim Cowart
    License: Dual licensed MIT ( & GPL (
    Version 0.1.0
*/
(function($, undefined) {

var inProgress = {};

$.trafficCop = function(url, options) {
    var reqOptions = url, key;
    if(arguments.length === 2) {
        reqOptions = $.extend(true, options, { url: url });
    }
    key = JSON.stringify(reqOptions);
    if(inProgress[key]) {
        inProgress[key].successCallbacks.push(reqOptions.success);
        inProgress[key].errorCallbacks.push(reqOptions.error);
        return;
    }

    var remove = function() {
            delete inProgress[key];
        },
        traffic = {
            successCallbacks: [reqOptions.success],
            errorCallbacks: [reqOptions.error],
            success: function() {
                var args = arguments;
                $.each($(inProgress[key].successCallbacks), function(idx, item) { item.apply(null, args); });
                remove();
            },
            error: function() {
                var args = arguments;
                $.each($(inProgress[key].errorCallbacks), function(idx, item) { item.apply(null, args); });
                remove();
            }
        };
    inProgress[key] = $.extend(true, {}, reqOptions, traffic);
    return $.ajax(inProgress[key]);
};

})(jQuery);


Breaking it Down:

  • Line 11 – This is not your typical jQuery plugin.  We’re adding to the jQuery object itself, as it is not intended to be used on DOM elements, and is instead an alternative to $.ajax() (it supports the same function signature as $.ajax()).
  • Line 16 – by the time we get here we have a full options hash for an ajax request, and we’ve stringified it to get a “poor man’s hash” key, since the metadata in the reqOptions object that can be serialized to JSON is also what uniquely identifies the request.
  • Line 17 – if this key already exists in our “inProgress” object, then we append the success/error callbacks for this reqOptions object to an existing array of success and error callbacks, and then return.
  • Lines 23-38 – this is where the real work happens.  If we’ve reached this point, this request is the first of its kind out of all requests currently processing.
    • We set aside a reference to a function that will remove this key from the inProgress object.
    • Then we create a “traffic” object.  This is basically an $.ajax() options hash that has an array of callbacks for success (successCallbacks), and an array of callbacks for error (errorCallbacks).  We take the original success and error callbacks and init these arrays with each as the starting member (respectively).
    • Next, we create a new success callback that will iterate over the successCallbacks array and invoke each one, passing in any relevant response data, followed by invoking our “remove()” function to remove this traffic object from the “inProgress” object.  We do the same for the error callback.
    • Then we extend the traffic object (which contains our modified callback approach) onto the reqOptions object, then extend the combined traffic/reqOptions object onto a new object.  Taking this approach means that we capture all the data provided to us in the reqOptions objects, while substituting the original success and error callbacks with the modified versions from the traffic object.  We add the resulting new reference to our inProgress object, using the “poor man’s hash” we created earlier as the key/member name.
    • Finally, we invoke jQuery’s $.ajax() function, passing in the modified “traffic/reqOptions” hash as the options for the call.  From this point, jQuery will process it like a normal request, invoking the success or error callback, based on the status of the response.

If any other calling code invokes $.trafficCop with a request identical to one already running, it will simply take the duplicate request’s success and error callbacks and push them into the successCallbacks and errorCallbacks array(s) of the currently running request.  This prevents a duplicate request, while still notifying the caller of success/error when the original request completes. When the request completes, it is removed from the inProgress object, so any subsequent requests with the same metadata will start the cycle over again. Obviously, subsequent requests to the same endpoint may or may not result in an actual request being made, since the response may be in the browser’s cache.
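Stripped of jQuery, the whole scheme fits in a few lines. The sketch below is my own reduction of the idea (dedupedRequest and doRequest are made-up names), not Traffic Cop’s actual source:

```javascript
// Library-free reduction of the Traffic Cop idea: identical in-flight
// requests share one callback list instead of triggering a second round trip.
var inFlight = {};

function dedupedRequest(options, doRequest) {
    // JSON.stringify skips function-valued members, so only the serializable
    // request metadata forms the "poor man's hash" key.
    var key = JSON.stringify(options);
    if (inFlight[key]) {
        inFlight[key].push(options.success);  // piggyback on the running request
        return;
    }
    inFlight[key] = [options.success];
    doRequest(options, function(response) {
        var callbacks = inFlight[key];
        delete inFlight[key];                 // future identical requests start fresh
        callbacks.forEach(function(cb) { cb(response); });
    });
}
```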

Thoughts & Future Improvements

It’s worth noting that while my main use of TrafficCop has been in the context of “infuser” (retrieving remote template/static content asynchronously), it can be used for any request that you would invoke via $.ajax().

I’m not aware of any JavaScript implementations that would cause the “trafficCop” function to yield (before returning) to a completed AJAX request, thus introducing a ‘race condition’ where the original request’s iterative “success” or “error” callbacks get invoked before a duplicate request’s success/error callbacks were pushed into the successCallbacks and errorCallbacks array(s).  However – if one were to insist on using web workers, then I assume it’s possible for such a condition to exist.  That being the case, one area of improvement would be to add some double-check logic that prevents removal of an original request (and the firing of the success/error callbacks) while a duplicate request is being grandfathered in.

Or I could just recommend that you avoid web workers. :-)


Microcosm vs Macrocosm

Just some quick random thoughts.

Like any developer, I constantly long for greenfield projects… and like most developers – and to my chagrin – I consistently find myself in brownfield (think “manure”) project-land, supporting apps of the worst order.  The vast majority of those brownfield situations have come about because a company made an investment into a proprietary framework (Web Forms, Silverlight, MS-SQL-as-an-application-platform-God-have-mercy-on-us-all), and ignored the need to refactor, re-engineer and re-write (when necessary).  At the bare minimum, there should have been serious thought and effort put into adapting legacy code bases to newer implementations (think “anti-corruption layer”, “adapter pattern” or whatever-pattern-du-jour seems to communicate the concept to your management).  I’ve heard the litany of excuses so many times (and uttered them myself in darker moments) that my brain conjures up the image of a pointy-haired-executive-from-hell any time a discussion hints in the direction of “sunk cost”.

The web is a classic example of how we – as a total community worldwide – are addressing the friction of legacy-meets-progress.  The list of contenders is long, and the casualty count high – but a pattern consistently emerges: leveraging the mindshare of open source, and embracing open standards vs proprietary prisons best positions you for the inevitable technological curve ball.  The drive for proprietary lock-in has brought us the often-spiteful differences in DOM implementations, and yet so has the push for innovation.  Companies would do well to learn from the lessons being played out before their eyes in the web.  While it’s not always practical or feasible for other browser vendors to implement the features of their competitors, having libraries like jQuery make it possible to navigate those differences with much less pain (example lesson: encapsulate what changes frequently or what’s beyond your control to change – at a system level).  As more browser vendors adopt a feature that a rival pioneered, those using libraries like jQuery were already able to (at best) emulate the feature in all browsers or (at worst), gracefully degrade with less overhead.  This sort of “change buffer” is the cartilage that should exist in all the joints of company systems.

Be warned, the longer you focus on polishing the brass of your proprietary cell bars (ahem…IE…Silverlight), the more your competitors can move past the mere differences in approach, and begin to differentiate themselves in more significant ways (Google with V8 and Chrome’s dev tools, for example).  At the very least, non-technical company leadership should be aware of the long term technical debt they are taking on.  Parting shot/example: if your preferred vendor encourages an architecture that tightly couples your services to your views’ implementations, then don’t complain when your architects tell you that the service layer has to be re-written to accommodate the shiny new market niche you’re salivating over.


Can Conventions Work Well in Dynamic Languages?

I’ve been reading up on FubuMVC lately.  Perhaps I’m jumping the gun, but after just scratching the surface of how FubuMVC works, I feel confident in saying that I would enjoy working in it over ASP.NET MVC (or any other “MVC-esque” offerings in the .NET world).  While my web work as of late has focused on the concepts of a client-side stack, using RESTful back-end services (i.e. – not server-side-framework-driven), I think FubuMVC is a compelling approach, as it addresses many of the areas where Microsoft has, in my opinion, fallen down on the job in offering a good server-side web application framework that encourages productivity and good design.

Part of the genius of FubuMVC is how it leverages the type system in C# to wire up conventions.  A LOT is done for you out-of-the-box – with flexibility just a line of configuration away.  Really, it’s impressive.

You sense there’s a “but” coming, don’t you?  Not really a “but”, more like an “also”.

In some of the material I’ve read, developers who’ve adopted FubuMVC have repeatedly emphasized how the .NET type system is crucial to making conventions “really work” – and the inference is that statically typed conventions are superior to what’s possible in a dynamic language.

It’s that “inference” that I wish to address.  So, let’s get some particulars out of the way before I begin:

  • First, I love FubuMVC – what I’ve seen of it so far.
  • Second, I love C#, and often conjure up images of hugging a personification of the type system – especially when I see interface segregation, IoC, and assembly scanning done well
  • Third, I love JavaScript, and often conjure up images of hugging a personification of the loosely-typed power of “Oh, you don’t have that property yet and it’s run-time?  BAM!  Now you do….let’s continue”
  • Fourth – and this is the crux of it – accomplishing conventions in static vs dynamic languages is, IMO, simply *different*, not better or worse.  I’m not sure I can agree with anyone who paints with as broad a brush as to claim one is always superior.
I believe that convention-based approaches in dynamic languages leverage intent, where static languages leverage type metadata (and sometimes intent as well).
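As a tiny illustration of what I mean by “intent” (entirely my own example, not from FubuMVC or any framework): in JavaScript you don’t scan type metadata for a convention, you just ask the object at runtime whether it implements it:

```javascript
// Duck-typed convention check: a module "supports" an HTTP verb simply by
// exporting a function with that name.
function canHandle(module, httpVerb) {
    return typeof module[httpVerb] === "function";
}
```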

Let’s look at a completely oversimplified approach using JavaScript and Node.js.  [This example has been trimmed down to focus on the possibilities of how to handle conventions in a language like JavaScript.  Real battle-tested frameworks exist for Node.js already (like Express).]

For my example, I wanted to create a web server that infers the following from the request:
  • module name
  • method on module to invoke
  • parameters to pass to target module.method
Below is my “index.js” – it’s loosely analogous to a routing module or front controller:
var http = require('http'),
    HamlEngine = require("./hamlViewEngine.js").HamlEngine,
    viewEngine = new HamlEngine("./views"),
    path = require('path'),
    qs = require('querystring');

var handler = function(req, res) {
        var body = "";

        if(req.url !== '/favicon.ico') {
            req.on("data", function(data) {
                body += data;
            });

            req.on("end", function() {
                router(req.method.toLowerCase(), req.url, body, res);
            });
        }
    },
    server = http.createServer(handler),
    router = function(method, url, body, res) {
        try {
            var segments = url.substring(1).split('/'),
                moduleName = segments[0],
                arguments = segments.slice(1).concat(qs.parse(body)),
                modulePath = path.join(path.resolve('./modules'), moduleName + ".js"),
                data = require(modulePath)[method].apply(null, arguments);
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write(viewEngine.renderView(moduleName, data));
            res.end();
        }
        catch(exception) {
            res.writeHead(404, {'Content-Type': 'text/plain'});
            res.write("AW SNAP, my demo is already sucking: " + JSON.stringify(exception));
            res.end();
        }
    };

server.listen(8000);

So – what’s going on here?
  • On lines 1-5 we’re importing the modules we’ll be using
  • On line 7 we declare a “handler” function that will be invoked whenever our web server receives a request.  This handler:
    • Ignores client requests for the favicon.
    • Concatenates the full body of the request as data events occur (I know, I know, stop it already!)
    • When the request has ended, it invokes a router function to dispatch the request to the correct handler
  • On line 20, we create a web server and tell it to use our handler function to process incoming requests
  • On line 21, we create a router function.  This is where the convention-magic happens:
    • it takes the HTTP method, the url, the request body and the response object as arguments
    • My example conventions assume that the target module will always be the first segment in the path, so a request to http://myserver/customer/1 would target a “customer” module.  We parse the module name from the url on line 24.
    • The remaining url segments are parsed out into an array, which also includes the parsed request body as an object.  *NOTE – this is a vast oversimplification (I warned you, didn’t I?), as it’s possible that these segments could really be key/value pairs, but that would invoke a REST debate, potentially, and I’d like to avoid it, thanks! :-)
    • On line 26, we take the module name and resolve its location on the file system, and then we attempt to invoke it on line 27.  Again, my simple example assumes that our module will be returning a value.  We’ll come back to line 27 in a moment.
  • On lines 28 and 29, we marry the results of our module call up with a template engine and write the rendered HTML to the response.
  • Obviously, the above operations are wrapped in a try/catch, so if an error occurs, I’m cheating and throwing a 404 back to the client
  • Finally, line 38 starts the process by telling the server to listen on port 8000.
So how does line 27 work?  In this example, the convention “rules” assume that the first segment of the path is a module name, that a module is expected to have a method implemented for each HTTP verb that could be called on it, and that those methods should be exported from the module at the top level.  In JavaScript, I can simply call “apply” on the module.method and pass in the remaining arguments – which we parsed off the url and from the request body – as an arguments array.
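That convention, minus the http/require plumbing, reduces to a one-line dispatch. Here’s a self-contained sketch (dispatch and the in-memory modules hash are stand-ins for the real file-system module loading):

```javascript
// First path segment -> module, HTTP verb -> method, remaining segments ->
// arguments. This mirrors the router's convention without touching the disk.
function dispatch(modules, method, url) {
    var segments = url.substring(1).split('/'),
        moduleName = segments[0],
        args = segments.slice(1);
    return modules[moduleName][method].apply(null, args);
}
```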
So let’s look quickly at yet another (highly oversimplified) module that follows these conventions.  Here we have a “customer” module, which implements a “get” and a “post” method (again, I’m cheating by using an in-memory cache of customers, whereas the real world would probably involve access to a data store, other service, etc.).  Notice that both the “get” and “post” methods expect a “custId” argument – which is pulled off the url (i.e. – /customer/1).  The “post” method takes an additional object to update the target resource (which turns out to be just the form/request body parsed into a JavaScript object).  Again, the example is simple, but the point comes across.
// fakin' some data
var customers = [
    {
        id: 1,
        name: "ACME, Inc.",
        address: {
            street: "1234 Canyon Rd.",
            city: "Albuquerque",
            state: "NM",
            zip: 87101
        }
    },
    {
        id: 2,
        name: "Bugs, LLC",
        address: {
            street: "Some Rabbit Hole",
            city: "Albuquerque",
            state: "NM",
            zip: 87101
        }
    },
    {
        id: 3,
        name: "Road Runner, Inc",
        address: {
            street: "1234 Speedy Rd.",
            city: "Phoenix",
            state: "AZ",
            zip: 85024
        }
    }
];

// super silly examples, but you get the point.
module.exports = {
    get: function(custId) {
        var customer = customers.filter(function(x) { return == custId; })[0];
        if(customer) {
            return customer;
        throw "Customer " + custId + " not found!";

    post : function(custId, data) {
        var customer = customers.filter(function(x) { return == custId; })[0];
        if(customer) {
            // I know, should have just used an "extend" here.  Laziness, FTL
   = ||;
            customer.address.street = data.street || customer.address.street;
   = ||;
            customer.address.state = data.state || customer.address.state;
   = ||;
            return customer;
        throw "Customer " + custId + " not found!";
And, for grins, here’s the over-simplified view engine:
var haml = require('haml'),
    fs = require('fs'),
    path = require('path');

var Engine = function(hamlDir) {
    var viewDir = path.resolve(hamlDir),
        getView = function(name) {
            return fs.readFileSync(path.join(viewDir, name + ".haml"), 'utf8');
        },
        master = getView('layout');

    this.renderView = function(viewName, data) {
        var viewData = {
            title: "Totally Over-simplified View Engine",
            contents: haml.render(getView(viewName), {locals: data})
        };
        return haml.render(master, {locals: viewData});
    };
};

exports.HamlEngine = Engine;
With these conventions in place, any request to /customer/2 will be routed to the customer.js module, which invokes the “get” method to retrieve the customer object from the collection; the result is passed to the view engine, married to a haml template, and written to the response.  As a side benefit of this kind of approach, you could create or update modules in place at any point, enabling the site to pick up a host of new functionality, or change existing logic, on the fly.
Lest our contrived example not have evidence to prove it works:

Customer 2 retrieved by conventions

So, let’s wrap this up.  I kept the example at the level of contrived simplicity so the focus could be the ability to wire up convention-based approaches in a dynamic language.  This Node.js application doesn’t have to know about the module in advance, and in fact won’t even load it unless it’s specifically targeted.  Obviously, real world scenarios would entail a more robust approach (better error handling, content negotiation, real HTTP status code management, etc.), but the point was to show that the conventions, in this case, followed the intent of the request, and that type metadata wasn’t necessary in order to wire them up.  [If you're super-observant, you'll notice where the loosely-typed nature of JavaScript could really bite the usual well-meaning JavaScript developer: I intentionally used "==" and not "===" in the customer module, since the ids are integers but come across the request as strings.] C# and JavaScript each offer a unique perspective on how to solve this kind of problem.  If anything, my point is to give credit to the FubuMVC guys for solving this problem with the strengths C# has to offer, rather than simply trying to port a dynamic-based framework over to .NET.  Bringing to bear the strengths of a given language as you target a problem is essential, as is the realization that most problems can be solved well by more than one language.  The mistake, IMO, is to think of the solution in the concrete details of *only one language*.

Update: Code samples for this can be found here.


Dreams and Intuition

A while back I listened to Rich Hickey’s “Hammock Driven Development” talk.  The entire problem solving process is an endless fascination for me, and I found Hickey’s thoughts compelling.  At one point he says that it’s important for us to “use our waking mind time to feed work to our background mind”.  When we take time to *really* focus on a problem, something interesting often happens.  It’s assigned, in essence, dedicated background threads in our brain that have higher priority.  My wife could probably tell you the number of times I’ve forgotten silly things (like my lunch, my keys…perhaps my brain) over the last 3 months.  Why did this happen?  I suspect it’s because my brain has dedicated quite a number of threads to solving some problems I want to see tackled in the web client space…and they often preempt my “common sense” threads.  I’m amazed at the role intuition can play in this process – though I think that requires some explanation.

Creative intuition is largely about connecting things.  Steve Jobs was quoted as saying “When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while.”  This is definitely my experience as a musician and arranger (my life before software).  But we have such false notions of creativity, intuition and breakthrough – most notably the idea that it “just comes out of nowhere”.  Hardly.  My best arrangements were the products of a long process of thinking, listening and meditating on each detail.  Sure, there were times in the studio where things simply came together – but those are the product of all the times before where I worked, labored, cried and toiled over my instrument, writing and arrangements.  The point?  Being intuitive is only useful if it is informed intuition.  The mind needs – must have – a wide array of prior history from which connections can be inferred, behavior extrapolated & solutions proposed.  There’s nothing more frustrating to me than a developer whose intuition comes quickly (more so than others around him/her) and yet they pride themselves on just hacking through by brute force until they reach a solution.  Their only skill is that they can hack through spaghetti faster than the average human.

I’m a work-in-progress, but I’m trying to widen that array of exposure so that my “background mind” can more productively find the ideas and patterns that elude me.  I completely agree with Hickey – taking time to step away from the keyboard and think through problems at length is a rewarding and fulfilling experience (not to mention productive!).  I’ve lost count of the number of times I’ve dreamed of my solutions recently – once or twice nearly having lucid dreams where I realized I was asleep, but consciously decided to stay that way and think through the problems I’m trying to solve.

Of course – there’s no guarantee that the solutions I’m dreaming of are good ones – some could very well be nightmares.  In the meantime, though, I plan to learn from the developers who are attacking the same front in producing things like Knockoutjs, Backbonejs, Angularjs and Batmanjs.  Hopefully my informed intuition will conspire with my skill set and help me produce a client-side binding framework that builds on the lessons learned by guys as sharp as them….


Code is *Language*, Fluency Requires Literacy

How often do you read through other people’s code?  For whatever reason, reading code has not been a natural habit for me – instead it’s a discipline I have to enforce on myself.  The payoff, however, has been huge.  A much publicized Slashdot comment recently compared development to writing.  I think that analogy allows us a wonderful opportunity to approach the idea of programming “languages” from a different angle and make some “bar-stool-observations” on general guidelines that can help us become better developers.

What’s one recommendation that teachers consistently offer to students to expand their vocabulary? READ books, and lots of them.  Sure, you can grow your vocabulary by subscribing to a dictionary site’s “word of the day”.  However, reading provides a better context (and if it’s a good book, a much more effective vehicle) to assimilate new words.  One could argue that the programming equivalent of “word of the day” would be if you only tried to expand your knowledge of a language by looking at the usually-not-production-worthy examples in most books, or by sticking to language/man docs alone.  The former is like trying to become a great adult novelist by reading children’s books, the latter like reading the dictionary.  Just like children, we all start somewhere, so I’m *not* advising developers to skip learning to swim before jumping into the deep end.  But why learn to swim if you’re going to stay in the 4-feet-deep section?  You learned to swim so that you can handle yourself in the deep end.  You learned to read to move past Goodnight Moon and onto classics like Hamlet and modern-should-be-classics like The Passage.  As developers, we should be reading classics like Erlang’s OTP and modern gems like Node.js, Knockout.js, Simple.Data & {insert awesome project here}.

Wait, reading code is difficult!  It’s easier to read our own code since we know what was going on in the author’s mind.  But try reading that code six months later.  Just as easy to read?  Nope.  And what do you do?  You read it until it all comes back.  You know your own style and idiosyncrasies – so re-familiarizing yourself with your own work comes easier.  Just as it’s possible to recognize the style of a favorite novelist, it’s possible for us to develop the same ‘in-tune-ness’ with other developers.  Read enough of their code, and it becomes easier to digest.  Read enough code from a wide array of developers and you begin to see common idioms emerge (not just design patterns per se, but more of a ‘zeitgeist’ on how the community views certain kinds of problems, and how they can be solved).  Read enough code from authors spanning multiple languages and you’ll find yourself borrowing and stealing from great masters, equally appreciating the strengths of your language-of-choice while lamenting what it lacks.

My elementary school teachers spent a lot of time with classmates who used their finger to keep track of where they were on a page.  “It slows you down”, they said.  Sure enough, as those students abandoned the crutch, they found that they read much faster.  The same goes for many development “habits”.  Crutches might help us walk, but they will keep us from running.  This might hurt….but crutches very often include things like copy/paste, drag and drop, wizards and re-typing code as opposed to reading it.  Crutches come in more complex forms as well.  My friend Chris Ammerman made a great point in saying “I know too many people who decide that because they can use the ServiceLocator in the Patterns & Practices libs, that means they are doing DI and IoC.  That’s a crutch I suppose… using a shrink-wrapped one-size-fits-all implementation of a pattern, rather than learning and understanding the pattern itself.” 

Don’t get me wrong, I *love* intellisense, Re-Sharper, syntax highlighting and all the trimmings.  I don’t think it’s bad that developers new to a pattern use a pre-packaged version of it.  We just need to understand that crutches can and will abstract away from us things we need to know.  Most of us can honestly assess when we’re relying on a crutch – so find out what you need to know in order to operate without it.

Code-hosting sites have made great code easily accessible, but fluency is not attained by reading code alone.  We have to write code as well.  Great novelists are well read, but they became great by honing their abilities through writing.  Chris Ammerman covers this very well in his post “Throwaway Projects” – I highly encourage you to read his thoughts as well.


Advantages and Disadvantages

In my last post, I explained that before I dove headlong into development, I was a professional musician.  I’ve been continually amazed at how the world of software development acts as the ultimate “field leveler”.  Someone like me, with no ‘CS or MIS degree’, can – within 5 years of initial development ‘dabblings’ – go from novice to a team lead, mentoring younger counterparts coming on board (who *do* actually have CS degrees).  [I feel the need to make it clear that if I, in fact, had a time machine, I would go back in a heartbeat and tell myself to double major between music and computer science.  Although I’d also be tempted to go back a little further and tell myself to never, ever, ever date someone named Angela – but that’s a story for my personal blog…] In pure “American Dream” style, my own story seems to smack of the quick rise from minimum-wage-plasma-donating-service-industry-employee to lead developer of a national company’s e-commerce team.  Ultimate leveler, indeed.

Or is it?

Being the quintessential geek that I am, one of my favorite ST:TNG episodes is the one where Picard is given the chance to go back in time and alter the events in which he was stabbed through the heart in a fight, resulting in his artificial heart.  The changes have disastrous consequences for him personally.  He’s no longer a bold, daring leader, with a life of recklessness-tempered-over-time-to-become-wisdom.  Instead, he’s a timid has-been.  Ultimately he’s given the chance to go back and return things to their original outcome – and as he’s stabbed from behind, he sees the blade protruding from his chest & he laughs.  That left an indelible impression on me.  The creative side of me is tragically melancholy – so I have a horrible habit of wishing I could change the very things that end up making me who I am.

But what does this have to do with development?  I started out as a Cold Fusion developer (back when Allaire still owned it!).  As I made friends with a talented VB.NET developer at the headquarters of my company, I grew more frustrated that I wasn’t coding ASP.NET/VB.NET (cue the “grass is greener” syndrome).  Then, the headquarters recruited me to come work for them.  Was this my “big chance” to shift into MS-focused web development?  Far from it.  Instead, I was going to be trained in Progress 4GL, WebSpeed and a proprietary BI platform called Brio.

On one hand – I had a job, and a good one considering the post-dot-com implosion still rippling through the industry at the time.  But I felt like someone had derailed me.  Wasn’t .NET “where it was at”?  And there I was writing in languages so obscure that even people at my own company hadn’t heard of them. 

It’s funny how those perceived disadvantages are, in retrospect, some of the best things that could have happened to me.  Don’t get me wrong – I wouldn’t go near Progress 4GL unless my life absolutely depended on it – but what I learned has proven to be essential for me today.  What were the big takeaways?  I interacted daily with our Unix admins – and learned what a large relational database installation was like outside the ‘safe’ world of Windows Server and SQL Server.  I worked with those guys to overcome integration obstacles, learned a ton about the price of transaction locks and isolation levels and worked with the vendor to overcome shortcomings of their ODBC driver – and while we arrived at a workable solution, I was experiencing the tremendous pain of integrating without a good API on either side of the boundary.  Then there was Brio.  A heavy, slow and overpriced BI system, in my opinion.  However – the viewers included a JavaScript runtime.  Any user interaction with what could be fairly-complex report UIs had to be scripted out using JavaScript.  I took over management of this platform from a Cobol developer (and quickly understood the joke that Cobol developers are the most verbose).  My first task was to refactor a report with over 5k lines of JavaScript.  I got it down to under 300.  As report requests came in, I found myself re-writing the same kinds of validation and helper functions – and wished for a way to have all the Brio reports share a common code base.  (Nothing allowed this out of the box – a huge fail on their part – and our solution involved a wireframe report template that pulled the code from an external store and eval-ed it as it loaded. Ugly!)

At the same time, I was learning C# (since the e-commerce team had standardized on C# instead of VB.NET).  While the C-style syntax was a welcome relief for me compared to the Progress 4GL code I had written, having coded in CF, T-SQL, Progress 4GL, JavaScript, VB.NET and C# all within close proximity began to give me a great appreciation for the differences in languages, and the effect they could have on developer productivity.  Writing JavaScript inside the Brio platform forever removed any confusion between the DOM and JavaScript as a language.  It also laid the foundation for later realizations that the underlying runtime could provide hooks to a language that it didn’t necessarily support natively (think proxies for current JavaScript in node.js, for example).

Brio was eventually replaced by the Microsoft BI stack, my JavaScript development was reduced to page enhancements again, and I moved more deeply into C#-middle-tier-complementing-data-warehousing-development in SQL Server.  But as I look over those years now, I am incredibly thankful that, while I was still involved in developing classic asp & ASP.NET Web Forms apps, that experience didn’t cause me to see the web only through Microsoft’s narrow version of it.  I’m also incredibly thankful for that JavaScript experience.  No, it doesn’t immediately translate into me being a DOM-manipulation-ninja – I’m still working on that.  It does mean, though, that I wrestle much less with a language, and more just in learning how someone implemented an object model.  I’m thankful for the training classes I had at the Progress campus – I saw both relational querying and web-form binding approaches that were very foreign to the Microsoft or Cold Fusion approach. 

So – Malcolm Gladwell would be proud.  I realize that those perceived disadvantages were anything but.  The same goes for the years I spent with my long time writing partner, getting finicky audio/MIDI hardware and software to work on Windows 3.1 and Windows 95.  (Let’s just say we learned a lot about memory addresses and IRQs.)  So while development *is* a great field leveler, it really does appear to work best for:

  • those not only with access to the right information at the right time, but with access to people writing and architecting systems better than they could ever hope to (surround yourself with those you want to be like)
  • those learning languages ahead of when they become the language-du-jour. (as Gladwell explains in Outliers – if you have your ten thousand hours in a skill, you’re ready when things break wide open demanding that skill)
  • those with the time, or with the discipline and willingness to lose some sleep here and there, to study and research not only information relevant to their current tasks, but bigger-picture knowledge relevant to growing their careers and bettering their understanding of what they do

So my story isn’t just a story of working hard, studying, reading and coding.  It’s a story of working with the right people, at critical turning points in my career, learning multiple languages – not just the popular ones – and doing what I can to increase the surface area of my life so that I can come into contact with those people, along with the right information, at those critical times.  If my employer in 2008 had been interested in relieving me of database development, so I could focus on C#, I wouldn’t have gone to work for Sommet.  Then I wouldn’t have met Alex, Elijah, Josh, Dan or Evan.  If Evan hadn’t worked at Sommet, Alex and I wouldn’t have been introduced to some of the messaging concepts for distributed architectures which we discussed.  From there we may not have discovered CouchDB or RabbitMQ, and the subsequent “what’s this thing called Erlang that they’re written in?”  Then we may not have pursued trying to implement highly scalable apps in .NET, only to hit some pains which drove us to try Erlang.  If Sommet hadn’t imploded, I wouldn’t have met Chris, nor would I have attended QCon in 2010 (critical moment for me!), nor would I have met Dru.  I could go on.  While many of these events were far beyond my control (or ability to predict), I made it possible to be a part of them by looking for the right team to work with.

What are the things you’ve viewed as setbacks and disadvantages?  In looking back, do any of them contribute to the kind of knowledge that ‘pulls back the vendor curtain’ and helps you see what particular vendor frameworks were hiding from you?  Do any of them include writing in languages you would have avoided otherwise?  If so, they may perhaps be your secret weapons – having given you the kind of knowledge that vendor-homogenous developers don’t even realize they’re missing.  What can you do to increase your career’s ‘surface area’ to better the odds of coming into contact with the right people & right information at those critical times?


Direction, Questions and Values

12 years ago I was a professional musician, travelling on the road with small bands, playing coffee houses and working on original material I barely had time for.

11 years ago I’d taken a short term job in hardware and logistics at a technology company – it was supposed to be a temporary thing until my next road gig worked out.  I quickly started down the hardware route – getting A+ certified, and was about to dive into studying for MCSE.  I wasn’t satisfied, though.  I was tired of building incredible machines for other people, and never getting to *do* something with them.  I quickly transitioned into writing Cold Fusion web applications.  Realizing I loved development, could make a decent living at it, and it would allow me to pursue original music on my own terms, I decided to not go back on the road.

8 years ago I was recruited to work for the headquarters of my company, and began working from home, travelling, and continued the most intense learning streak my life has ever experienced.  I inherited a proprietary BI system from a co-worker in which the only interactivity/scripting was a proprietary object model exposed through JavaScript.  I was deeply disappointed to leave web development behind for a time, and hated that my JavaScript skills were less focused on the DOM.  I had no idea how valuable this experience would be – learning JavaScript apart from the DOM.

3 years ago I was team lead, travelling more, spread across legacy asp, ASP.NET, SQL, Data Warehousing, windows services and middle tier.  On the surface, I had the ideal job.  Good pay (with a raise on the way), travel and the accompanying benefits, working from home, flexible schedule, a great boss and good teammates.  But I was still unsatisfied.  I’d been reading up on architecture, development methodologies, good C# practices – anything I could get my hands on.  Many at work seemed content to just coast.  Our SQL 2K databases were ancient at that point – we’d barely started migrating to 2005 – but it didn’t bother many.  I wanted to be on a team that challenged me – and not just be the challenger.  Soon after, I left the safest job on the planet, took a pay cut & added a commute to work on a team that seemed to hold all the promise of challenging me more than I’d ever been.

Ten months ago I was about to lose my job thanks to a low-life-embezzling-CEO who brought his company down around him in flames (and, in the process, breaking up the most talented team I have ever had the pleasure of working with).  In my short time there I’d learned more about distributed architecture, messaging, ORMs, WPF, NoSQL – and much much more thanks to guys like this – than I could have hoped for going into it.  Then the IRS and FBI raided our building (thanks a lot, Brian), and the process of scrambling to find new work kicked in.

These things are running through my mind tonight – as I sit outside a coffee shop on the banks of the Tennessee River in Chattanooga.  I would’ve never guessed I’d land here.  I miss my old town (Nashville), but more than that I miss the friends I made there.  I heard a quote recently: “If you only learn the skills related to your day job, then your skills are at the mercy of the one who pays you.”  Nine months ago I had slipped into the nasty habit of only focusing on the “here and now”.  When the rug was pulled out from under me, I swore that I would never allow myself to fall into those habits again.

This is the tough thing about our industry.  Unless you’re fortunate to work at a company like Google – where you can take 20% of your on-the-clock time to pursue personal ‘innovation’ projects – then learning and study happens in between work, family, kids, working out, sleep, doing the dishes, walking the dog, fixing your car and the myriad other responsibilities we all bear.  Tough choices have to be made.  I’m still a musician to the core – always will be – but time for recording projects is virtually nil at this point in life.  I think I’m beginning to learn how to better approach this, however.

At first I thought of it primarily as an exercise in time management.  Time doesn’t scale, though – after all, there are only 24 hours in the day, and you can only go without sleep for a finite amount of time before your effectiveness plummets.  To use a budget analogy – this is like a married couple trying to squeeze every penny out of their budget, while stuck between the large house note and medical insurance bill.  Those two items tend to be the most non-negotiable-large-sum items in the budget (and you can only tweak the smaller categories so far until you bump into Tesler’s Law).

A huge improvement to the situation is to find the right kind of company to work for.  This requires you to be on your toes – not only in your job search, but as you spend time at a company.  Ask yourself – are the values being expressed in action – not just words – the values you can get behind?  Are they open to new ideas?  Do they promote innovation, or do the curmudgeons rule the roost?  Does your management have credibility with you and your team?  Does your team have credibility with your customers?  Are you encouraged and supported in your career growth, or are you simply a warm body, code-it-this-way-who-gives-a-crap-what-you-think-and-shut-up-thank-you-very-much developer?  To continue the budget analogy, finding the right place to work is like finding the right place to live – with a mortgage payment you can afford, and which offers better flexibility to the other smaller-sum categories.

But what about the “medical insurance bill”?  How does one maintain and promote the good health of their career, in the midst of learning and finding the right place to work?  If we’ve tweaked the smaller categories, and found a more affordable mortgage, how do we lower the other non-negotiable cost?  While this isn’t the only aspect, I’ve come to strongly believe that the choice of technologies (and especially technology frameworks) drastically affects your career health more than you realize.  I’m not necessarily talking about languages – this isn’t a C# vs. Ruby vs. Erlang debate.  However – you need to ask yourself questions like:

  • “Am I getting locked into a vendor toolset?”  (If the answer is yes, then ask “Does this vendor support open standards?”  If the answer is no, run.)
  • “What is this framework abstracting from me that I might need to know about – even if the knowledge simply helps me understand (or appreciate) the framework more?”
  • “What is this framework abstracting from me that is hurting my knowledge of programming theory?”
  • “What is this framework abstracting from me that is hurting the performance, extensibility, usability or interoperability of my application?”

I won’t presume to know these answers for anyone other than myself.  I will say that, as I’ve asked myself those very questions, the answers have steered me away from things like Silverlight, SharePoint, ASP.NET Web Forms (notice a pattern here?) and towards open-standards-based approaches.  Asking those questions helped me see how opinionated certain frameworks can be in the wrong direction (Web Forms, for example).  It’s also given me renewed love for dynamic languages like JavaScript, and great js frameworks (that seem to abound these days).  I’m much more skeptical of vendor-laden architectural advice, and less satisfied with heavier server side frameworks for web apps.  I’m more interested in employers that value talent over vendor homogeneity.  I’m coming to believe that picking the right technologies – ones that amplify good habits, teach you as you use them, play well in the wild with others – is akin to being in top shape so that you can afford that higher deductible and lower those monthly costs.

Making time to study is vital, but equally so are picking the right things to study, and working in a company whose vision is one you can get behind.  The latter two make the former more effective than it can ever be on its own.


Making Knockout.js Support External Templates

The last month has been a blast for me!  Contributing to my “kid in candy store” syndrome was my discovery of Steve Sanderson’s Knockout.js framework.  In a nutshell – Knockout.js provides the basis for using an MVVM (Model View ViewModel) or, if you’re familiar with Martin Fowler’s “Patterns of Enterprise Application Architecture”, a “Presentation Model” approach to building web applications in JavaScript and HTML.  Beyond that, I’m going to assume you are either familiar with Knockout.js (if not, go here, read a bit, enjoy yourself and then come back and finish my post), OR you like throwing yourself to the wolves by skipping introductions and diving right into extending unfamiliar frameworks.

Knockout.js comes with an out-of-the-box template engine (using jQuery.tmpl).  Using this template engine requires you to place your templates inside script elements.  While I could probably guess at the reasons that led Steve Sanderson to set it up this way (all good reasons, by the way) – I think it’s less than ideal for anything but the smallest of projects.  So, I set out to tackle the following:

  • Allow for templates to be created and edited in separate files that don’t require script tags, with the added benefit of your IDE-of-choice being able to provide proper syntax highlighting (since many IDEs stop highlighting once you place HTML inside a script tag).
  • Allow for templates to be loaded from the server on an as-needed basis, and once they’ve been downloaded, cache them so that they don’t have to be downloaded again.

As I began to look over Steve Sanderson’s default jQuery template engine plugin for Knockout, I realized I could take advantage of his work and simply tweak a few things to allow for external templates.  The primary change relates to the behavior of the “getTemplateNode” function.  Let’s look at the original:

this.getTemplateNode = function (template) {
    var templateNode = document.getElementById(template);
    if (templateNode == null)
        throw new Error("Cannot find template with ID=" + template);
    return templateNode;
};

As you can see, it’s very straightforward. The function looks for an item in the body with an id that matches the template name passed to it. If it cannot find it, an error is thrown, otherwise, it returns the node. Now let’s look at my version:

this.getTemplateNode = function (templateId) {
    var self = this,
        node = document.getElementById(templateId);
    if(node == null) {
        var templatePath = this.getTemplatePath(templateId);
        var templateHtml = null;
        $.ajax({
            "url": templatePath,
            "async": false,
            "dataType": "html",
            "type": "GET",
            "timeout": this.timeout,
            "success": function(response) { templateHtml = response; },
            "error": function(exception) {
                if(self.useDefaultErrorTemplate)
                    templateHtml = self.defaultErrorTemplateHtml.replace('{STATUSCODE}', exception.status);
            }
        });

        if(templateHtml === null)
            throw new Error("Cannot find template with ID=" + templateId);

        node = $("<script/>", {
            "type": "text/html",
            "id": templateId
        }).html(templateHtml).appendTo(document.body)[0];
    }
    return node;
};

So what's different?

  • Like the original jQuery template engine plugin, my version checks to see if the template is already in the DOM, and if so, it simply returns it.  However, if the template is not found in the DOM, an ajax call is made retrieving the template from the server. 
  • You’ll notice we’re calling “this.getTemplatePath” – it’s here that we’re utilizing the configuration values that may have been provided to specify where and how to access the template on the server (more on that in a moment).
  • If the call to the server fails – and if the value of “useDefaultErrorTemplate” is true – then a default error template is provided instead. 
  • The ajax call is synchronous, and the result of the call is assigned to the templateHtml variable (you can set an optional timeout value to prevent the call from blocking the browser indefinitely!).  Once we have a template, we add a script element to the document, set the attributes appropriately, and set the contents equal to what we just retrieved from the server.  We’re now at a state identical to if we had simply included the template in the document from the start (i.e. – Knockout’s functionality is intact).
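The “check the DOM first, otherwise fetch and cache” flow described above can be sketched in isolation. This is a simplified illustration of the pattern, not the plugin’s actual code; `fetchFn` is a hypothetical stand-in for the synchronous ajax call:

```javascript
// Simplified sketch of the fetch-once-then-cache pattern described above.
// fetchFn is a hypothetical stand-in for the synchronous $.ajax call.
var templateCache = {};

function loadTemplate(templateId, fetchFn) {
    if (templateCache.hasOwnProperty(templateId)) {
        return templateCache[templateId];  // already downloaded: no network hit
    }
    var html = fetchFn(templateId);        // blocking fetch from the server
    templateCache[templateId] = html;      // cache so later lookups are free
    return html;
}
```

Because the result is stored on the first call, a given template is only ever requested from the server once per page load.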

My external template plugin allows you to set the following options:

  • templateUrl: the directory on the server where the templates reside.  For example, “/Templates” for a sub-directory relative to where the document originated.
  • templatePrefix: a standard prefix that is pre-pended to all template file names.  For example, if you want a naming convention of “template_”, then you’d set this member equal to that value.
  • templateSuffix: just like templatePrefix, except it’s appended to the template name.  For example, you might have naming convention where all template files end in “.tpl.html” instead of “.html”.
  • useDefaultErrorTemplate: defaults to true.  Setting this to false means that your application will display the raw error response it receives for any template that cannot be found.  This could get messy (ever seen an IIS 404 page?).
  • defaultErrorTemplateHtml: allows you to set the actual html content of the default error template.
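Taken together, the first three options resolve a template name to a URL. Here’s a hedged sketch of how that resolution might work — a hypothetical helper, not the plugin’s actual getTemplatePath, which may differ in details like slash handling or default values:

```javascript
// Hypothetical sketch of how templateUrl, templatePrefix and templateSuffix
// might combine into a template path. The ".html" fallback suffix is an
// assumption for illustration.
function getTemplatePath(options, templateId) {
    var url = options.templateUrl ? options.templateUrl + "/" : "";
    var prefix = options.templatePrefix || "";
    var suffix = options.templateSuffix || ".html";
    return url + prefix + templateId + suffix;
}
```

With templateUrl "Templates", templatePrefix "tmpl" and templateSuffix ".tpl.html", the "Master" template would resolve to "Templates/tmplMaster.tpl.html".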

The script file containing this plugin takes care of auto-wiring itself to knockout.js so that all you have to do is simply reference the script in your html document.  It does this by executing the following:

// Inherit from the base template engine.
ko.ExternaljQueryTemplateEngine.prototype = new ko.templateEngine();
// Gives you an easy handle to set member values like templateUrl, templatePrefix and templateSuffix.
ko.externaljQueryTemplateEngine = new ko.ExternaljQueryTemplateEngine();
// Overrides the default template engine KO normally wires up.
ko.setTemplateEngine(ko.externaljQueryTemplateEngine);

The full source for the “Knockout.js External Template Engine” can be found on github.  Let’s take a quick peek at a very simple example showing nested external templates being pulled down as they are needed (this example is included in the git repo):

The Templates:

The following three templates are all in their own .html files.

The ‘Master’ template (below) is simply a div container.  In a real world scenario, it might contain several other elements that should appear at the highest (“app”) level. In our example, it simply provides a top level container within which "companies" are displayed.

<div data-bind="template: {name: 'Company', foreach: Companies }"></div>

The ‘Company’ template (below) is displayed for each company in the collection. It contains a list of Employees, and iterates over each one, using another ‘Employee’ template.

<span data-bind="text: CompanyName"></span>
<div data-bind="template: {name: 'Employee', data: Employees}"></div>

The ‘Employee’ template (below) is self-referencing. If an employee has children, then each child is rendered using the ‘Employee’ template.

<span data-bind="text: Name"></span>
<span data-bind="template: {name: 'Employee', foreach: Children}"></span>
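Since the ‘Employee’ template references itself, it’s worth seeing why the recursion terminates. This plain-JavaScript walk over the same { Name, Children } shape (a sketch for illustration, not part of the plugin) bottoms out at employees whose Children array is empty:

```javascript
// Walks the same recursive { Name, Children } shape the 'Employee'
// template consumes; recursion stops at empty Children arrays.
function flattenNames(employee) {
    var names = [employee.Name];
    (employee.Children || []).forEach(function (child) {
        names = names.concat(flattenNames(child));
    });
    return names;
}
```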

The Main Page:

    <link rel="stylesheet" href="style.css" />
    <script type="text/javascript" src="jquery-1.5.js"></script>
    <script type="text/javascript" src="jquery.tmpl.js"></script>
    <script type="text/javascript" src="knockout-latest.debug.js"></script>
    <script type="text/javascript" src="koExternalTemplateEngine.js"></script>
    <script type="text/javascript">
        // Simple view model.  This is a one-way binding example (i.e. - none of the members are observable)
        var viewModel = {
            Companies: [
                {
                    CompanyName: "ACME",
                    Employees: {
                        Name: "Bugs Bunny",
                        Children: [
                            {
                                Name: "Peeps",
                                Children: [
                                    { Name: "Daffy Duck",  Children: [] },
                                    { Name: "Tweety Bird", Children: [] },
                                    { Name: "Road Runner", Children: [] }
                                ]
                            },
                            {
                                Name: "Always Getting the Short End of the Stick",
                                Children: [
                                    { Name: "Yosemite Sam",   Children: [] },
                                    { Name: "Wyle E. Coyote", Children: [] }
                                ]
                            }
                        ]
                    }
                },
                {
                    CompanyName: "Superfriends",
                    Employees: {
                        Name: "Batman",
                        Children: [
                            {
                                Name: "Lesser Peeps",
                                Children: [
                                    {
                                        Name: "Superman",
                                        Children: [
                                            { Name: "Aquaman", Children: [] }
                                        ]
                                    }
                                ]
                            },
                            {
                                Name: "Wonder Woman",
                                Children: [
                                    { Name: "Robin", Children: [] }
                                ]
                            }
                        ]
                    }
                }
            ]
        };

        $(function() {
            // demonstrating that templates can be called from a different path
            // than this html file was delivered from (same server, of course)
            // and also that special template file naming conventions can still
            // be used without cluttering up the name of the template itself.
            ko.externaljQueryTemplateEngine.templateUrl = "Templates";
            ko.externaljQueryTemplateEngine.templatePrefix = "tmpl";
            ko.externaljQueryTemplateEngine.templateSuffix = ".tpl.html";

            ko.applyBindings(viewModel);
        });
    </script>

    <!-- Thanks to the prefix, suffix and url settings above, the "Master" template will be pulled down from Templates/tmplMaster.tpl.html -->
    <div data-bind="template: {name: 'Master', data: viewModel}"></div>

In the code above, we’re creating a view model that contains a simple – but ragged – hierarchy (note that this is a one-way binding example, since the members are strings and arrays, not observables).  Then, in our DOM-ready function, we’re setting the templateUrl, templatePrefix and templateSuffix.  (Since we’ve already included a reference to “koExternalTemplateEngine.js”, the plugin has replaced the default Knockout template engine.)  Lastly, we call ko.applyBindings() on our view model to tell Knockout to wire everything up.

Here’s a screen shot of the network requests (using Chrome).  Note that the last three requests are for the templates, and they occur after the page is loaded:


If you’re working with Knockout.js, I encourage you to try this plugin out and give me your feedback!