Can Conventions Work Well in Dynamic Languages?

I’ve been reading up on FubuMVC lately.  Perhaps I’m jumping the gun, but after just scratching the surface of how FubuMVC works, I feel confident saying I would enjoy working in it over ASP.NET MVC (or any other “MVC-esque” offering in the .NET world).  While my recent web work has focused on client-side stacks backed by RESTful services (i.e. – not server-side-framework-driven), I find FubuMVC a compelling approach: it addresses many of the areas where, in my opinion, Microsoft has fallen down on the job of offering a server-side web application framework that encourages both productivity and good design.

Part of the genius of FubuMVC is how it leverages the type system in C# to wire up conventions.  A LOT is done for you out-of-the-box – with flexibility just a line of configuration away.  Really, it’s impressive.

You sense there’s a “but” coming, don’t you?  Not really a “but”, more like an “also”.

In some of the material I’ve read, developers who’ve adopted FubuMVC have repeatedly emphasized how the .NET type system is crucial to making conventions “really work” – and the inference is that statically typed conventions are superior to what’s possible in a dynamic language.

It’s that “inference” that I wish to address.  So, let’s get some particulars out of the way before I begin:

  • First, I love FubuMVC – what I’ve seen of it so far.
  • Second, I love C#, and often conjure up images of hugging a personification of the type system – especially when I see interface segregation, IoC, and assembly scanning done well.
  • Third, I love JavaScript, and often conjure up images of hugging a personification of the loosely-typed power of “Oh, you don’t have that property yet and it’s run-time?  BAM!  Now you do….let’s continue.”
  • Fourth – and this is the crux of it – accomplishing conventions in static vs. dynamic languages is, IMO, simply *different*, not better or worse.  I can’t agree with anyone who paints with as broad a brush as claiming one is always superior.
I believe that convention-based approaches in dynamic languages leverage intent, where static languages leverage type metadata (and sometimes intent as well).
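To make “leverage intent” a bit more concrete, here’s a tiny, hypothetical sketch (the names are mine, not from any framework): in a dynamic language, a convention can be checked by simply asking an object at run time whether it expresses the intent you’re after.

```javascript
// Hypothetical sketch: a "convention" as a run-time question about intent.
var candidate = {
    get: function(id) { return "customer " + id; }
};

// Convention: any object exporting a function named after an HTTP verb
// is considered a handler for that verb.  No type metadata required.
function canHandle(obj, verb) {
    return typeof obj[verb] === "function";
}

console.log(canHandle(candidate, "get"));  // true
console.log(canHandle(candidate, "post")); // false
```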

Let’s look at a completely oversimplified approach using JavaScript and Node.js.  [This example has been trimmed down to focus on the possibilities of how to handle conventions in a language like JavaScript.  Real battle-tested frameworks exist for Node.js already (like Express).]

For my example, I wanted to create a web server that infers the following from the request:
  • module name
  • method on module to invoke
  • parameters to pass to target module.method
Below is my “index.js” – it’s loosely analogous to a routing module or front controller:
var http = require('http'),
    HamlEngine = require("./hamlViewEngine.js").HamlEngine,
    viewEngine = new HamlEngine("./views"),
    path = require('path'),
    qs = require('querystring');

var handler = function(req, res) {
        var body = "";

        if(req.url !== '/favicon.ico') {
            req.on("data", function(data) {
                body += data;
            });

            req.on("end", function(){
                router(req.method.toLowerCase(), req.url, body, res);
            });
        }
    },
    server = http.createServer(handler),
    router = function(method, url, body, res) {
        try {
            var segments = url.substring(1).split('/'),
                moduleName = segments[0],
                args = segments.slice(1).concat(qs.parse(body)),
                modulePath = path.join(path.resolve('./modules'), moduleName + ".js"),
                data = require(modulePath)[method].apply(null, args);
            res.writeHead(200, {'Content-Type' : 'text/html'});
            res.write(viewEngine.renderView(moduleName, data));
        }
        catch(exception) {
            res.writeHead(404, {'Content-Type': 'text/plain'});
            res.write("AW SNAP, my demo is already sucking: " + JSON.stringify(exception));
        }
        res.end();
    };

server.listen(8000);

So – what’s going on here?
  • On lines 1-5 we’re importing the modules we’ll be using
  • On line 7 we declare a “handler” function that will be invoked whenever our web server receives a request.  This handler:
    • Ignores client requests for the favicon.
    • Concatenates the full body of the request as data events occur (I know, I know, stop it already!)
    • When the request has ended, it invokes a router function to dispatch the request to the correct handler
  • On line 20, we create a web server and tell it to use our handler function to process incoming requests
  • On line 21, we create a router function.  This is where the convention-magic happens:
    • it takes the HTTP method, the url, the request body and the response object as arguments
    • My example conventions assume that the target module will always be the first segment in the path, so a request to http://myserver/customer/1 would be targeting a “customer” module.  We parse the module name from the url on line 24.
    • The remaining path segments are parsed out into an array, which also includes the parsed request body as an object.  *NOTE – this is a vast oversimplification (I warned you, didn’t I?), as it’s possible that these segments could really be key/value pairs, but that would invoke a REST debate, potentially, and I’d like to avoid it, thanks! :-)
    • On line 26, we take the module name and resolve its location on the file system, and then we attempt to invoke it on line 27.  Again, my simple example assumes that our module will be returning a value.  We’ll come back to line 27 in a moment.
  • On lines 28 and 29, we marry the results of our module call up with a template engine and write the rendered HTML to the response.
  • Obviously, the above operations are wrapped in a try/catch, so if an error occurs, I’m cheating and throwing a 404 back to the client
  • Finally, line 38 starts the process by telling the server to listen on port 8000.
So how does line 27 work?  In this example, the convention “rules” assume that the first segment of the path is a module name, that a module implements a method for each HTTP verb that can be called on it, and that those methods are exported from the module at the top level.  In JavaScript, I can simply call “apply” on the module.method, passing in the remaining arguments (which we parsed off the url and from the request body) as an arguments array.
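That dispatch, stripped down to its essence (a minimal sketch using a fake in-memory “module” rather than a require’d file):

```javascript
// A stripped-down version of the convention dispatch: a "module" with a
// method per HTTP verb, invoked dynamically via apply.
var customerModule = {
    get: function(custId) { return "fetched customer " + custId; }
};

var method = "get";   // would come from req.method.toLowerCase()
var args = ["1"];     // would come from the url segments + parsed body

// No type metadata needed: the method name and argument order ARE the contract.
var result = customerModule[method].apply(null, args);
console.log(result); // "fetched customer 1"
```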
So let’s look quickly at yet another (highly oversimplified) module that follows these conventions.  Here we have a “customer” module, which implements a “get” and a “post” method (again, I’m cheating by using an in-memory cache of customers, whereas the real world would probably involve access to a data store, another service, etc.).  Notice that both the “get” and “post” methods expect a “custId” argument – which is pulled off the url (i.e. – /customer/1).  The “post” method takes an additional object used to update the target resource (which turns out to be just the form/request body parsed into a JavaScript object).  Again, the example is simple, but the point comes across.
// fakin' some data
var customers = [
    {
        id: 1,
        name: "ACME, Inc.",
        address: {
            street: "1234 Canyon Rd.",
            city: "Albuquerque",
            state: "NM",
            zip: 87101
        }
    },
    {
        id: 2,
        name: "Bugs, LLC",
        address: {
            street: "Some Rabbit Hole",
            city: "Albuquerque",
            state: "NM",
            zip: 87101
        }
    },
    {
        id: 3,
        name: "Road Runner, Inc",
        address: {
            street: "1234 Speedy Rd.",
            city: "Phoenix",
            state: "AZ",
            zip: 85024
        }
    }
];

// super silly examples, but you get the point.
module.exports = {
    get: function(custId) {
        var customer = customers.filter(function(x) { return x.id == custId; })[0];
        if(customer) {
            return customer;
        }
        throw "Customer " + custId + " not found!";
    },

    post : function(custId, data) {
        var customer = customers.filter(function(x) { return x.id == custId; })[0];
        if(customer) {
            // I know, should have just used an "extend" here.  Laziness, FTL
            customer.name = data.name || customer.name;
            customer.address.street = data.street || customer.address.street;
            customer.address.city = data.city || customer.address.city;
            customer.address.state = data.state || customer.address.state;
            customer.address.zip = data.zip || customer.address.zip;
            return customer;
        }
        throw "Customer " + custId + " not found!";
    }
};

And, for grins, here’s the over-simplified view engine:
var haml = require('haml'),
    fs = require('fs'),
    path = require('path');

var Engine = function(hamlDir) {
    var viewDir = path.resolve(hamlDir),
        getView = function(name) {
            return fs.readFileSync(path.join(viewDir, name + ".haml"), 'utf8');
        },
        master = getView('layout');

    this.renderView = function(viewName, data) {
        var viewData = {
            title: "Totally Over-simplified View Engine",
            contents: haml.render(getView(viewName), {locals: data})
        };
        return haml.render(master, {locals: viewData});
    };
};

exports.HamlEngine = Engine;
With these conventions in place, any request to /customer/2 will be routed to the customer.js module and invoke its “get” method; the retrieved customer object is then handed to the view engine, which marries it to a haml template and writes the result to the response.  As a side-benefit of this kind of approach, you could create or update modules in place at any point, adding the ability for the site to process a host of new functionality, or change existing logic on the fly.
Lest our contrived example not have evidence to prove it works:

[Screenshot: Customer 2 retrieved by conventions]

So, let’s wrap this up.  I kept the example at the level of contrived simplicity so that the focus could stay on the ability to wire up convention-based approaches in a dynamic language.  This Node.js application doesn’t have to know about the module in advance, and in fact won’t even load it unless it’s specifically targeted.  Obviously, real world scenarios would entail a more robust approach (better error handling, content negotiation, real HTTP status code management, etc.), but the point was to show that the conventions, in this case, followed the intent of the request, and that type metadata wasn’t necessary in order to wire them up.  [If you’re super-observant, you’ll notice where the loosely-typed nature of JavaScript could really bite the usual well-meaning JavaScript developer: I intentionally used “==” and not “===” in the customer module, since the ids are integers but would come across the request as strings.]

C# and JavaScript each offer a unique perspective on how to solve this kind of problem.  If anything, my point is to give credit to the FubuMVC guys for solving this problem with the strengths C# has to offer, rather than simply trying to port a dynamically-typed framework over to .NET.  Bringing to bear the strengths of a given language as you target a problem is essential, as is the realization that most problems can be solved well by more than one language.  The mistake, IMO, is to think of the solution in the concrete details of *only one language*.
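The “==” vs. “===” point above, shown in isolation: route segments arrive as strings, while the ids in the in-memory data are numbers.

```javascript
// Route segments are strings; the stored ids are numbers.
var idFromData = 2;    // as stored in the customers array
var idFromUrl = "2";   // as parsed from /customer/2

console.log(idFromData == idFromUrl);  // true  (loose equality coerces types)
console.log(idFromData === idFromUrl); // false (strict equality does not)
```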

Update: Code samples for this can be found here.



  • Chad Myers

    For the record, no one is arguing that you can’t do conventions in a dynamic language or that it’s particularly difficult. Our argument with FubuMVC is that you have lots of rich metadata that you don’t necessarily have in a dynamic language. This opens up other opportunities for conventions that you wouldn’t be able to do without the static typing. On the contrary, there’s things you can do with dynamic types that you wouldn’t be able to do with static types.

    We’re not saying static typing is better, but it does have its strengths and we’re saying: When in static-typed-land, use it to the max. A lot of the other MVC frameworks were trying to be like Rails and use strings and dynamic-like conventions which we thought was the wrong approach. C# is static typed, so let’s use static typed conventions and make the most of it. Rails uses what it has available in a dynamic type system. Fubu uses what it has available in a static type system.

    I’ve played it middle-of-the-road up to here, so I will conclude by saying that I think there’s a lot more metadata and contextual information available in the static system, and that makes it easier to create and use conventions. More importantly, it makes it easy to move things around and rename things and have the conventions automatically adapt. Since dynamic conventions rely heavily on strings and name-based conventions, renaming things causes a lot of pain and re-work.

    • Jim Cowart

      Chad – thanks for the reply! I definitely agree that frameworks should take full advantage of the features available in the language in which they are composed. FWIW, I never got the impression from you or Jeremy that you were saying “loose typing = bad”. The inference I referred to was mainly an impression I got from a couple of blogs I’d read of 2nd generation (or later) adopters of the framework (I’d include myself there as well). It also helps that Dru kept bugging me to blog about conventions in dynamic languages!

      I love the contrast, though. Thinking through the various ways to accomplish these sorts of approaches in both static and loosely typed languages has helped me see more of what the two approaches actually share. In C#, we often scan assemblies, loading up types based on names or “implements X”, etc. It would be trivial to scan directories for js modules based on a name convention – so there is an analogue to assembly scanning. You could make the naming requirements more flexible like Fubu (end this with “controller”, or start this with “get”, “post”, etc.). Of course, the real divergence is when you begin to infer relationships based on type. That’s a very powerful facet of static typing. In a node.js approach, you’d instead see something like what express-resource does: based on the conventions you follow in your module, the underlying framework takes your module’s implementation and monkey-patches express itself to wire the specific routes to it, etc. Maybe I’m naive in thinking this (certainly open to my mind changing), but I look at the two general approaches and see them as two different directions in which to shift the burden (see Tesler’s Law: