Mainstay Monday: Lexical Scoping

Jun 29, 2015

Edit: I incorrectly stated that Javascript has dynamic scoping. It actually uses a mix of lexical scoping and contextual binding. Dynamic scoping is significantly different from contextual binding. This post has been updated to reflect correct information.

Eight-ish years ago, I wrote a blog post about the importance of programmatic scope. At the time I could have told you roughly what scope was, but I don’t think I could have explained how scope in Javascript actually worked. I could explain that some variables were accessible in different parts of the application and I could point at things and give a vague, hand-wavy kind of explanation as to how it all related. Only understanding that much served me well enough for a while, but when push came to shove, not understanding scope at a deeper level started to make development in Javascript difficult and unreliable.

Perhaps the most important thing to understand is what scope is. Variables are available to different sections of code based on how they are defined. Simply put, there is a lookup table that is provided at each layer of the code and this table contains all of the variable references a line of code may access based on where it lives in the source file or at execution time. Below is a visual demonstration of how this works in your code.

function myOuterFunction(){
    var foo = 'bar';

    function myInnerFunction(){
        var baz = 'quux';

        //foo is available from the outer function, here
        console.log(foo);

        //Baz is only available here
        console.log(baz);
    }

    myInnerFunction();

    //foo is available here too
    console.log(foo);

    //baz from the inner function is NOT available here,
    //so this line throws a ReferenceError
    console.log(baz);
}

myOuterFunction();

/*
* Output:
*
* bar
* quux
* bar
* ReferenceError: baz is not defined
*
*/

In order to write programs which are stable and predictable, it is really important to have a firm grasp on variable scoping and what this means in the context of the code you write. As it turns out, there are actually two major types of scoping. The first is lexical scope. The second is contextual binding, which I originally, and incorrectly, called dynamic scope.

Javascript actually uses a combination of each of these. This blended approach to scope is, in my opinion, one of the largest sources of confusion for debugging and editing code in Javascript today. This post will focus on lexical scope, so we can get a firm grasp on the simpler of the two scoping methodologies. I will cover the following lexically bound scope scenarios: global scope, function scope and block scope.

Lexical Scope Overview

Lexical scope is, in the simplest terms, the association of variables in the program based solely on the way they are introduced in the source code. In other words, lexical scope always follows the same rules, determined entirely by how you wrote the source code. Execution context has no bearing, so through inspection of the code alone, you can reason about which variables are available where.

The first example in the post is an explanation of how lexical scope looks when writing functions. Each variable is made available precisely where you would expect it based on the structure of the code. With the next three scenarios you will see how each of the lexically bound scopes work and how to apply them.

Global Scope

When people say “don’t use global variables,” what they really mean is don’t bind variables to the global scope. Globally scoped variables are available in every context and, when modified, can introduce all kinds of bugs and problems into your code. However, with ES6, we can define constants which are safe for global use. Let’s take a look at a good globally scoped value:

//This is NOT an arbitrary single-letter variable.
//https://en.wikipedia.org/wiki/E_(mathematical_constant)
const e = 2.7183;

//We can compute continuous interest growth, now
function continuousGrowth(principal, rate, time){
    return principal * Math.pow(e, rate * time);
}

Global scope is typically reserved for constants and namespaces. Other items that are globally scoped are built-in objects and functions that are part of the core Javascript language. Although the global scope is a valid scoping target, it is best to take great care when using it.
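
As a sketch of the namespace use case (myApp is a hypothetical name, not an established convention), a single global object can collect related functionality so only one identifier is ever added to the global scope:

//A single, global namespace object (hypothetical name)
var myApp = myApp || {};

myApp.finance = {
    //Reuse the globally scoped constant and function from above
    e: e,
    continuousGrowth: continuousGrowth
};

myApp.finance.continuousGrowth(100, 0.05, 10); //roughly 164.87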

Function Scope

In Javascript, up to this point, function scope has been the primary scope used for defining, assigning and maintaining variables. Function scope is a relatively safe place to define variables that will be used locally for work to be done.

The interesting point about function-scoped variables is that, while they are defined within a particular function, any functions defined inside that function have access to those variables as well. There are caveats, but that is a discussion for another day. Let's take a look at function-level scoping.

var parrot = (function(){
    'use strict';

    var handyVar = 'variable scoped to an IIFE';

    function say(value){
        var prefix = 'Polly wants a ';
        console.log(prefix + value + '.');
    }

    function sayHandyVar(){
        say(handyVar);
    }

    return {
        say: say,
        sayHandyVar: sayHandyVar
    }
})(); //Take that, Crockford

parrot.say('cracker'); // Polly wants a cracker.
parrot.sayHandyVar(); // Polly wants a variable scoped to an IIFE.
parrot.say(handyVar); // ReferenceError: handyVar is not defined

The failure of that last call to parrot.say should be completely unsurprising: handyVar is scoped to the IIFE and is not accessible from outside the function. The item that is slightly more interesting is sayHandyVar. We access handyVar from sayHandyVar by referencing it directly, because inner functions can always reach the variables of the functions that enclose them. This is the nature of function-scoped variables.

By using function scoping, we can guarantee that our variables will remain unmolested by outside functions. This kind of data hiding gives us certain guarantees that our programs will behave more reliably and predictably as we develop. Due to this added stability, we can write larger, more complex functions without worry that we are impacting something we might not see until a bug shows up in production.

Block Scope

Block scope is old hat for anyone who has worked in other languages like C++, Java or C#. If you have a conditional or loop structure and you define a variable within that block of code, the variable is only available within that block.

Block scoping was introduced with ES6, and is defined with the let keyword. Theoretically, you could run around and replace all of your var declarations with let declarations and your program would work the same as it ever did… Theoretically.

Since var declarations only support function scoping, you might encounter some strange issues if vars were used inside of blocks and then referenced elsewhere in the function. This is due to variable hoisting: if you declare a variable with var, the declaration will be auto-hoisted to the top of your function and initialized to undefined. A let declaration, by contrast, cannot be touched until execution reaches the declaration line; accessing it earlier throws a ReferenceError. This window is commonly called the temporal dead zone.
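
Here is a minimal sketch of that difference (hoistDemo is a hypothetical function, purely for illustration):

function hoistDemo(){
    //The var declaration below is hoisted and initialized to undefined
    console.log(withVar); //logs: undefined
    var withVar = 'declared with var';

    //The let declaration below is in the temporal dead zone at this point
    console.log(withLet); //throws: ReferenceError
    let withLet = 'declared with let';
}

hoistDemo(); //logs undefined, then throws

With that distinction in mind, let's take a look at block scope in action.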

function blockScoper(){
    for(let i = 0; i < 3; i++){
        console.log(i);
    }

    let myOuterVar = 'Function scope';

    console.log(myOuterVar);

    //i was scoped to the for block, so this line throws
    console.log(i);
}

blockScoper();

/*
* Output:
*
* 0
* 1
* 2
* Function scope
* ReferenceError: i is not defined
*/

Wait, what?? So much craziness happening here. The variable myOuterVar cannot be touched above its declaration line, and i only lives within the for loop. This means you get a much more strict isolation of the variables you define.

Coming from a Javascript background, this might not sound so great. We have all become so used to being loose with our variable declarations that let might feel restrictive. As it turns out, var isn't going away (though I wouldn't miss it) and let is giving us a way to isolate our variables in a clean, predictable way. This kind of scope isolation allows us to use counting variables without fear of program retribution. Take a look at this:

function looper(value){
    let lesserValues = [],
        squaredValues = [];

    for(let i = 0; i < value; i++){
        lesserValues.push(i);
    }

    for(let i = 0; i < value; i++){
        squaredValues.push(i * i);
    }

    console.log(lesserValues.toString());
    console.log(squaredValues.toString());
}

looper(5);

/*
* Output:
*
* 0, 1, 2, 3, 4
* 0, 1, 4, 9, 16
*/

We were actually able to redeclare i for each loop, safely, and then manipulate it without worrying about whether we were going to affect the output. This opens a whole new world of opportunities to isolate variables and keep our programs tight, maintainable and predictable. I love predictable programs.

Finally (or TL;DR)

This covers the foundation for how lexical scope is handled in Javascript. There are three main lexical scopes a programmer can work in: global, function and block.

Global scoping makes your value available to the entire program without regard to safety or data security. The global scope should be reserved for constants, namespaces and core language functions and objects.

Function scoping makes your variables available only to the local function and all child scopes. When using the var keyword, variable declarations will be hoisted to the top of the function, though the assignment will still occur at the declaration line at runtime.

Finally, block scoping, which is new in ES6, gives us a way to manage variable scopes with block level granularity so you can protect your data and guarantee consistent function execution.

As was said in the beginning, both lexical scoping and contextual binding are used in Javascript. We've managed to make it through lexical scoping, so next time we chat, we'll take a look at contextual binding. Until then, think about how you are scoping your variables and bring a little sanity back into your job.


What Makes a Program Stand Up

Jun 24, 2015

Over the last year I have interviewed a lot of Javascript developers and I discovered something: many people working in Javascript don't really understand what programming really means. What I mean by this is that people can write code and make stuff happen in the DOM, but they don't really understand why. Scratching just below the jQuery surface reveals that most of a program is still essentially magic for people who promote themselves as developers.

If we look at professionals who regularly practice in other fields, even the most junior practitioner has a foundational understanding of what drives the profession. Lawyers fresh from the Bar understand law. Medical doctors, even in their residency, already have the foundational knowledge they need to diagnose and treat ailments. The most junior of architects have the physics, materials and design knowledge to understand what makes a building stand up.

Javascript developers, even at the most junior level, should understand what makes a program stand up.

History -- Turing Completeness and Lambda Calculus

Let’s hop in our wayback machine and go back about 80 years. There was a guy named Alan Turing. He is (finally) known by the general public as the man who helped crack the Enigma machine through the use of computing and mathematics. Before the second world war (~1936), he developed the idea of a computing device which could, in theory, emulate any other computing device. This device is called the Turing Machine. The Turing Machine is important because it, largely, defines what we know as the foundation of modern computing.

With the advent of the Turing Machine came the concept of Turing completeness. Essentially, any computing system that could emulate a Turing Machine could be called Turing complete. Turing completeness is a key ingredient in the development of modern programming. Though Alan Turing was working with tapes and those who followed used punch cards, programming as we understand it today began to take form in the first half of the 20th century.

Around the same time as the development of the Turing Machine (1936-1937), another mathematician by the name of Alonzo Church developed a new method of describing computing function and behavior, called Lambda Calculus (λ-calculus). Incidentally, Turing and Church developed these ideas independently of one another. Lambda calculus laid the foundation for what we know as functions in programming and, more specifically, functional programming. λ-calculus is relatively inscrutable for the uninitiated, but a good example of what it looks like is the following:

λx.x            (the identity function)
(λx.x) y = y    (applying it to y yields y)

This particular example uses parenthesized application, which should look familiar to anyone who has seen Lisp. Below is the same function in Clojure and Javascript:

;This is a standard Clojure identity function expressed with a variable
(identity x) ;returns x

//The same identity function in Javascript
function identity(value){
    return value;
}

identity(x); //returns x

In the great tradition of 1, 2, skip a few, 100, I’m going to bypass the invention of Lisp, C, C++, ML, OCaml, Haskell, Python, Java, Pascal, Basic, COBOL, etc. Though all of these languages are important in their own right, they are all informed by the same underlying principles.

If we come back to the modern day, Turing completeness and Lambda calculus underpin all of the things we know about good programming and reliable software. Turing completeness gives us the notion of branches and flow control in our favorite general purpose programming language, Javascript.

Conditionals and Branches

A computing system can be said to be Turing complete if it can emulate a Turing Machine. Although our modern computers are limited in memory and we, as people, are limited by time, a modern programming language can generally be considered Turing complete because it contains conditional operations and is capable of accessing an arbitrary number of memory locations. In other words, because Javascript, much like other modern languages, has if statements and can store and retrieve arbitrary data in memory, we can consider it Turing complete.

Another way of looking at this is Javascript is a Turing complete computing system because you can write code like this:

function myFunction(maybeArray){
    var myArray = maybeArray === null ? [] : maybeArray;

    return myArray;
}

myFunction([1, 2, 3, 4]); //returns [1, 2, 3, 4]
myFunction(null); //returns []

Let’s be honest, this is a really trivial function, but there is a lot of history that goes into it. We declared a function which was stored in memory. Inside of that function we test a passed value with a conditional. When the conditional is satisfied, we perform one assignment operation. If the conditional is not satisfied, we perform a different assignment operation. After the assignment is complete, we return the result. For such a small, simple function, there is a lot happening. Consider what would happen if conditionals (programmatic branching) didn’t exist. How would we ever do this? All of our programs would look like this:

doAction1
doAction2
doAction3
doAction4

This program is really useful if, and only if, you only ever need to do those four things in succession. If one action fails, the program would continue running and disaster could occur. For instance, suppose that was the program for a robot on an assembly line and a part came through oriented incorrectly. That part could translate into a completely ruined product. Whoops.

The idea of conditionals and the way they impact programming can be summed up by a joke about engineers. An engineer is going to the store for his wife. She told him “buy a loaf of bread and if they have eggs, buy 12.”

The engineer returned with a dozen loaves of bread.

The engineer’s wife said “why do you have so much bread?”

The engineer replied “they had eggs!”

Branching, as far as I am concerned, is the most important concept to pave the way for any modern computing. All other elements of modern computing would not exist without it. Branching, however, is necessary, but not sufficient to define modern programming.

Reusability -- Reusable Logic, Objects and Functions

The other core element of modern computing without regard to the implementation details, is logic reuse. We like to say code reuse, but what we really mean to say is, “I want to define some logical behavior and then just refer to it elsewhere.”

Logic reuse comes in several forms, but the ones best recognized are functions and objects. We can claim that there is a third type of reuse which comes in the form of modules or namespaces, but can't we squint a little bit and say those are just special cases of objects?
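
If we do squint, the resemblance is easy to see. Here is a minimal sketch (the names are made up for illustration) of a 'module' that is nothing more than an object holding related logic:

//A 'module' expressed as a plain object (hypothetical example)
var stringUtils = {
    shout: function(value){
        return value.toUpperCase() + '!';
    },

    whisper: function(value){
        return value.toLowerCase() + '...';
    }
};

stringUtils.shout('hello'); //returns 'HELLO!'
stringUtils.whisper('HELLO'); //returns 'hello...'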

In Javascript we get the benefits of our forebears because we get all of the object/class goodness that comes with heavily object oriented languages like Java and C++, but we also get all of the functional wonderment that comes from languages like Lisp and Haskell.

Object logic reuse could look a little like this:

//ES6 format
class MyObject{
    constructor(){
        this.foo = 'bar';
    }

    setFoo(value){
        this.foo = value;
    }

    getFoo(){
        return this.foo;
    }
}

//ES5.1 format
function MyObject(){
    this.foo = 'bar';
}

MyObject.prototype = {
    setFoo: function(value){
        this.foo = value;
    },

    getFoo: function(){
        return this.foo;
    }
};

//Instantiating is the same either way
var myNewObject = new MyObject();

The functional paradigm in Javascript looks like this:

//A higher-order function
function fooer(userFoo, someBar){
    return userFoo.bind(null, someBar);
}

//A standard function
function myFoo(a, b){
    return a + ' foo ' + b;
}

//Partial application with a higher-order function
var appliedFoo = fooer(myFoo, 'bar');

//Use of a partially applied function with another higher-order function -- map
var fooedArray = ['baz', 'quux'].map(appliedFoo);

//Resulting array: ['bar foo baz', 'bar foo quux']

You’ll note we are already doing some relatively advanced operations, and the code is rather brief. This brevity is due to the nature of logic-block, or more correctly algorithm, reuse and abstraction from the deepest building blocks in a computer software system. As we get further from the computer hardware, we get more power with fewer keystrokes. The language becomes more like English and less like bits.

Recursion + Conditionals => Looping

The next piece of the modern language puzzle is recursion. Recursion blended with branches is, in my estimation, the easiest way to break down looping structures into their base elements and add visibility. Recursion on its own is not simple, but it is key to understanding why loops work the way they do. Here's a really basic recursive algorithm for adding values:

function add(valueArray, initialValue){
    var base = typeof initialValue === 'number' ? initialValue : 0,
        value = valueArray.length === 0 ? 0 : valueArray.pop(),
        sum = base + value;

    if(valueArray.length === 0){
        return sum;
    }

    return add(valueArray, sum);
}

add([1, 2, 3, 4]); //returns 10

You’ll note we did not use a standard looping structure for this. This is a special type of recursive function called a tail recursive function. What this means is the call back to the original function happens as the very last statement in the function. This behavior is very similar to the way a while loop works. Each iteration checks the return condition and the loop exits if the condition is met. If the condition is not met, the loop continues.
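
For contrast, here is a sketch of a version that is not tail recursive (addNonTail is a hypothetical name); the addition happens after the recursive call returns, so every pending addition has to wait on the stack:

function addNonTail(valueArray){
    //Base case: an empty array sums to 0
    if(valueArray.length === 0){
        return 0;
    }

    //The + is performed AFTER the recursive call returns,
    //so the recursive call is not in tail position
    return valueArray.pop() + addNonTail(valueArray);
}

addNonTail([1, 2, 3, 4]); //returns 10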

The problem we encounter with algorithms like this is that you can easily overflow the call stack with a large enough array of values, which can cause all kinds of problems. This is because Javascript engines do not perform tail-call optimization. In other words, you could write this recursion any way you please and it will perform essentially identically. Because each recursive call adds a frame to the stack, looping constructs become significant. We could rewrite this loop with a standard while in the following way and not crash a browser, server or any other device you might be running your code on.

function add(valueArray){
    var sum = 0;

    while(valueArray.length > 0){
        sum += valueArray.pop();
    }

    return sum;
}

You’ll note that, while this will perform the operation more efficiently than our recursion, we have now tightly coupled our addition logic to our exit logic. This tight coupling is what, ultimately, interferes with the innate understanding of the loop and precisely when it will exit and allow the function to return our sum. It is equally important to note that this is the preferred way to handle explicit looping in Javascript.

We do have an alternate methodology which abstracts away the condition altogether, reintroducing the concepts we get from Church's λ-calculus. If we select an appropriate higher-order function, we can extricate our addition logic and abstract away the explicit syntax for looping, leaving only the real intent.

function adder(a, b){
    return a + b;
}

//Using the higher-order function reduce, we can apply addition across all values
//Once we perform our reduce we can eliminate the explicit condition and loop
//from our system altogether
function add(valueArray){
    return valueArray.reduce(adder, 0);
}

add([1, 2, 3, 4]); //Returns 10

Although this is not what any mathematician would ever call a formal proof, we can see immediately that the functional aspects of Javascript introduce branches in such a way that we can guarantee Turing completeness in much the same way as the imperative logic could.

Conclusion

Much like any other profession, programming has a storied history and the groundwork for what we use today takes advantage of some very important foundational concepts. Even though we have been abstracted away from the hardware and we are no longer using punch cards, all of the groundwork laid by Turing and Church, as well as many others who followed, defines the physics, materials and design knowledge we employ today when we apply experience to new problems across many industries.

What makes a program stand up is not just understanding each of these concepts in a vacuum, but how they work together to create new solutions to existing problems. We have to understand and evaluate the interrelation of the core components of what makes a program work, and apply them in a way that makes software not only functional, but maintainable and clear in intent.

Simply knowing that conditionals, loops and code reuse are possible does not, by itself, make the professional programmer skilled. It is understanding the interrelation of the elements in a program that allows a professional programmer to skillfully design and execute software that will solve problems and allow those professionals who follow to understand the choices that were made and enhance solutions as real world problems continue to change and grow.


Mainstay Monday: Inheritance

Jun 22, 2015

This is the first in a new series I am trying out on my blog. Every Monday I want to provide a post about some foundational element of programming and how it relates to Javascript development. What better place to start than inheritance?

Object inheritance is one of the least understood foundational Javascript topics I can think of. Even if a developer is comfortable with prototypal behavior and instantiating prototypal objects, handling inheritance is more obscured in Javascript than in classically designed OO languages.

Let’s discuss the object prototype. To start with a simplified definition, an object prototype is a set of properties associated with an object that defines the foundational functionality an instance of the object will have. In other words, anything you put on an object prototype will define what that object will be when you perform a ‘new’ operation.

Let’s take a look:

//This is an object setup in ES5
function Fooer(){}

Fooer.prototype = {
    foo: function(){
        return 'bar';
    }
};

var myFooer = new Fooer();

console.log(myFooer.foo()); // 'bar'

This is about as simple as it gets. We define a function attached to the prototype, let’s call it a method to keep parity with classical OO languages, and when we get a new instance, the method is attached to the object we get back. Once you are familiar and comfortable with this syntax, it’s easy to do and easy to understand. The pitfall we have here is it’s a little convoluted. ECMAScript 6 (ES6) introduces a new, more classical notation, though the underlying behavior is still the same as it ever was.

//ES6 classes look like this
class Fooer{
    foo(){
        return 'bar';
    }
}

let myFooer = new Fooer();

console.log(myFooer.foo()); // 'bar'

The code is a little shorter and, hopefully, a little more declarative of intent, but the end result is identical. Now, in classical languages, there is a concept of object hierarchy. OO languages provide a clear construct for how this is handled with a special keyword. Let’s call this inheritance keyword ‘extends,’ and pretend our classical language uses it to create a child object.

class Greeter extends Fooer{
    greet(name){
        console.log('Hello, ' + name + '.');
    }

    fooGreet(){
        this.greet(this.foo());
    }
}

let myGreeter = new Greeter();

myGreeter.greet('Chris'); // log: Hello, Chris.

console.log(myGreeter.foo()); // log: bar

myGreeter.fooGreet(); // log: Hello, bar.

You’ll note that we just got the parent properties for free. Extra bonus, SURPRISE, that’s ES6 syntax. It’s nice and easy. Most of us are still working in ES5 and in ES5, the times are hard. Let’s have a look at what inheritance looks like when you don’t have all the handy dandy syntactic sugar.

function Greeter(){}

Greeter.prototype = Object.create(Fooer.prototype);

Greeter.prototype.greet = function(name){
    console.log('Hello, ' + name + '.');
};

Greeter.prototype.fooGreet = function(){
    this.greet(this.foo());
};

Greeter.prototype.constructor = Greeter;

var myGreeter = new Greeter();

myGreeter.greet('Chris'); // log: Hello, Chris.

console.log(myGreeter.foo()); // log: bar

myGreeter.fooGreet(); // log: Hello, bar.

This is a lot more verbose than our friendly ES6 syntax, but it’s pretty clear the result is the same. We end up with an object that performs a new operation and directly inherits properties from Fooer. This verbosity along with the hoops you have to jump through makes it pretty obvious why people don’t introduce object inheritance in a beginning discussion of Javascript.

Regardless of the obscurity, we can try this and see that inheritance really works and adheres to the kinds of expectations we would bring from languages like Java, C#, PHP, etc.

var testGreeter = new Greeter();

console.log(testGreeter instanceof Greeter); // true
console.log(testGreeter instanceof Fooer); // true

By adding object inheritance to our arsenal, we can look back to our computer science forefathers and apply the knowledge they shared in books like the Gang of Four Design Patterns book. Concepts like inheritable DTOs become usable in Node and in the browser and we can begin to normalize our coding practices and use sane conventions in our profession to help us focus on the task at hand: solving new problems.

On top of all of this, we can see deeper into what is really happening with prototypes. When we understand how prototypes handle object properties and provide a link to the parent object, we can better understand how to leverage the finer nuances of the language for a more powerful programming experience.
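
To make that parent link concrete, here is a small sketch using the Greeter and Fooer from above (chainGreeter is just a hypothetical instance name):

var chainGreeter = new Greeter();

//The instance links to Greeter.prototype...
console.log(Object.getPrototypeOf(chainGreeter) === Greeter.prototype); // true

//...which links to Fooer.prototype, forming the chain that lookups walk
console.log(Object.getPrototypeOf(Greeter.prototype) === Fooer.prototype); // true

//foo is not defined on the instance itself; it is found up the chain
console.log(chainGreeter.hasOwnProperty('foo')); // false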

Blog Notes

For an abstraction layer to handle inheritance, please see my gist.

Don't Talk About My ObjectMother That Way

Jun 17, 2015

When last we met, we talked about setting up unit testing for Javascript. I’m sure anyone reading this blog is at least aware of the idea of software design patterns. There are all of these known challenges with canned solutions. If the solution isn’t ready out of the box, it is with just a little bit of tweaking. Something you might not be aware of is there are unit testing design patterns too.

Er… What?

I know, most people think of unit testing as either something they tolerate because it’s required or, at best, a list of tiny little functions that guarantee that a particular behavior matches the expected business requirement. Where is there room for design patterns?

Many patterns come in the form of best practices, but there is one that is at the top of my list of all time favorites. The ObjectMother pattern is a design pattern tailor made for unit testing. Technically you could use ObjectMother in your everyday programming as a factory or something like that, but today it’s all about testing.

Let’s start by looking at a unit test for two different functions that require data from the same contract. I’m just going to hand-wave past what the functions do, because it doesn’t really matter right now. Right? Right.

describe('dataModule', function(){

    describe('firstFunction', function(){

        var myTestData;

        beforeEach(function(){
            myTestData = {
                records: [ { required: true }, { required: true}, { required: false } ]
            };
        });

        it('should return the number of required records', function(){
            expect(dataModule.firstFunction(myTestData)).toBe(2);
        });

    });

    describe('secondFunction', function(){

        var myTestData;

        beforeEach(function(){
            myTestData = {
                records: [ { id: 1 }, { id: 2 }, { id: 3 } ]
            };
        });

        it('should return an array of record ids', function(){
            var result = dataModule.secondFunction(myTestData);
            expect(JSON.stringify(result)).toBe(JSON.stringify([ 1, 2, 3 ]));
        });
    });

});

That is a LOT of typing for two little tests. It’s especially bad since the two different objects are so similar. Now, we could combine the two object setup blocks into a single beforeEach at the top, but what if this same data object is necessary in another test in another file? What if, worse than that, there are several modules that might interact with this data, each capturing data for a particular purpose which could be unrelated to the data module we tested here?

The almighty DRY principle would tell us this is inherently flawed. There is a code smell and that smell is one of the big reasons I hear people hate writing unit tests. What if we could actually DRY out our unit tests in a sane, maintainable way?

Enter the ObjectMother pattern.

Here’s what the mother of this object might look like:

function testDataMother(){
    return {
        records: [
            { id: 1, required: true },
            { id: 2, required: true },
            { id: 3, required: false }
        ],
        otherProperty1: 'foo',
        otherProperty2: 'bar'
    };
}

With this defined, our test code becomes much simpler to write, read and maintain. If we use our new object mother, here’s what our tests become:

describe('dataModule', function(){

    var myTestData;

    beforeEach(function(){
        myTestData = testDataMother();
    });

    describe('firstFunction', function(){

        it('should return the number of required records', function(){
            expect(dataModule.firstFunction(myTestData)).toBe(2);
        });

    });

    describe('secondFunction', function(){

        it('should return an array of record ids', function(){
            var result = dataModule.secondFunction(myTestData);
            expect(JSON.stringify(result)).toBe(JSON.stringify([ 1, 2, 3 ]));
        });
    });

});

It’s like magic, right? We just eliminated 10 lines of code we were using in our original test file and now we are able to focus on the problem, testing our functions. What’s even better, we have now centralized our data example so any other tests can use it too and we only have to modify it in one place to expand our tests. If the contract were, heaven forbid, to change, we can change our data in our mother file to match the new contract and then identify any breakages, update functionality and guarantee function and data parity. This is a HUGE win.

For small sets of tests, and relatively simple data structures, this is perfectly acceptable. What happens when you have nested data structures and complex logic to interact with it? Now you have data interdependencies and our simple functions aren’t going to be sufficient.

This calls for other, well known, patterns. We can draw upon the Factory and Dependency Injection patterns to make this better. We can employ initializing functions and initial condition objects to define a more robust interface.
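
As a sketch of how that might look (recordMother and its overrides parameter are hypothetical, not part of any library), a mother for a nested record can accept initial conditions, and the top-level mother can compose it:

//A mother for a single record; overrides supplies initial conditions
function recordMother(overrides){
    var record = { id: 0, required: false },
        initialConditions = overrides || {};

    //Shallow-merge the caller-supplied values over the defaults
    Object.keys(initialConditions).forEach(function(key){
        record[key] = initialConditions[key];
    });

    return record;
}

//The top-level mother delegates its nested data to the record mother
function testDataMother(){
    return {
        records: [
            recordMother({ id: 1, required: true }),
            recordMother({ id: 2, required: true }),
            recordMother({ id: 3 })
        ],
        otherProperty1: 'foo',
        otherProperty2: 'bar'
    };
}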

Since these requirements arose as I was working through unit testing scenarios in my day to day life, I created a library, DataMother.js. DataMother allows you to isolate layers of objects and register them with an injection system. At test time, you can use DataMother to handle your data requirements much like we did above, which made unit testing with data so easy, I actually started looking forward to it.

Weird, right?

Anyway, whether you use the naive method outlined earlier or a more robust solution like DataMother.js, use the ObjectMother pattern in your testing and bring the joy to unit testing data-driven functions that you have in the rest of your programming life. Unit tests and data can be friends!

Blog Post Notes:

The ObjectMother pattern was first discussed (as far as I know) in 2006 by Martin Fowler.


(Not) Another JS Testing How-To

Jun 10, 2015

There are lots of posts about how to write your first unit test in Jasmine or Mocha, and many of them draw directly from the Jasmine how-to. Let’s pretend, for a moment, that you are a developer who is already familiar with unit testing and what you really, REALLY need is a way to get things started without having to read a whole host of how-tos, setup documentation, etc., when all you really want to do is get to unit testing.

First, let’s get the Grunt versus Gulp conversation out of the way. I say neither! Though task runners can make CI much easier, this post is about getting a quick start into actually doing unit testing. By setting up a good, solid base configuration, moving to a task runner can be as simple as just applying the configuration you have with the runner you choose. Perhaps you like Tup…

Anyway, now that we have all that out of the way, let’s talk tooling:

When we are done, this is the toolset you will have for your testing needs:

  • Node and NPM
  • Jasmine
  • PhantomJS
  • Karma

The biggest hurdle between you and a working setup is getting Node.js installed. For most people reading this article, all you have to do is visit the Node.js website and click install. You will get the right binary and you will be off and running.

Once Node.js is installed, it is all downhill. I created a Github project that you can use to quickly get started with unit testing on just about any platform. You can either download the release, or follow the directions below:

git clone https://github.com/cmstead/jsTestDemo.git

Once you’ve copied this repo one way or another, setup is really simple. You will need to install Karma and Phantomjs globally, so I created a handy one-time use script you can run. After the global installs are finished, you can run the project specific installer and you’ll be ready to rock and roll. Open a console wherever you cloned the repository and run the following commands:

#This does your one-time setup
npm run-script globalinstaller

#This is your project-specific setup
npm install

No fuss, no muss. You’re welcome. ; )

You’ll see lots of packages stream by in the console. Once everything installs, you’re ready to start testing. It’s not exactly exciting bedtime reading, but I definitely recommend looking at the Jasmine website. Their documentation is written as a set of unit tests for the framework, which is novel, but it makes things a little hard to figure out on first read.

Let’s have a look at a (barely) annotated first unit test:


describe('testObject', function () {

    var testObject;

    //test setup
    beforeEach(function () {
        testObject = {
            foo: 'bar',
            baz: 'quux'
        };
    });
	
    //test teardown
    afterEach(function () {
        testObject = null;
    });

    //A single unit test
    it('should be an object', function () {
        //The equivalent of an assertion
        expect(typeof testObject).toBe('object');
    });
});


When you start writing unit tests for your code, be sure to review the Karma configuration file in the spec folder.  Most settings can be left exactly as they are, but the paths should be updated to match your project structure.  I've included the code below so you can see the lines which need to be updated:

files: [
    //Uncomment the following line and change the directory
    //to match your project structure.
    //'../scripts/**/*.js', //change me
    './**/*.spec.js'
],

preprocessors: {
    //Change this to match your project directory structure
    '../scripts/**/*.js': ['coverage'] //change me too
}

Although this isn't the snappiest blog post I have written, I have gone through this process so many times I have created templates for different kinds of projects just to save time and simplify the process of setting up unit tests, linting, ES6 transpilation, code coverage, etc.

With so many different configuration options, limited documentation and roadblocks I have encountered as I have gotten systems set up, I wanted to put something together that might help save someone else a little pain and suffering. If you have feared unit testing in Javascript because of setup troubles, consider this your personalized invitation. Unit test your code and make the web a better place!


  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

    Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

    I have been working for quite a while to devise a method for assessing web sites and the ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

    Google has some pretty neat toys for developers and CakePHP is a pretty friendly framework to quickly build applications on which is well supported. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Usabilibloat or Websites Gone Wild

    It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given a task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhance findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.

  • The Selection Correction

    User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of  unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed would be the gaming phrase.  The one thing that I would like to note, as it was my big issue last evening, is scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok, if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
