Mainstay Monday: SOLID - Open/Closed Principle

Aug 3, 2015

This post is part of a series on the SOLID programming principles.

Last week we discussed the concept of single responsibility in programming. Continuing the process of discussing SOLID principles, let’s take a look at the Open/Closed Principle. The name of the principle isn’t quite enough to declare what it intends – open to extension, closed to modification – so we’ll need to figure out how this applies to our daily life.

What does it really mean to be open to extension or closed to modification? Clearly, we can’t strictly enforce that someone can’t come along and change our code, so there has to be a certain amount of self control that comes along with this principle. Something else that might help us along the way is the discussion we had about contracts a little while ago.

Uncle Bob Martin states, in his article on the Open/Closed Principle, that when requirements change, it is incorrect to modify working code to manage new or updated requirements; rather, the existing code should be extended to support the updates. There is an assumption here that the code satisfies a single responsibility to the original requirement. When new requirements arise, an extension to the original functionality should be created instead of modifying the original code. Let’s take a look at a functional approach to this.

// Original requirement: Validation, must be a string

// Important note: contract states value can be type any, return type is boolean
function isString(value){
    return typeof value === 'string';
}

// New requirement: Validation, must be a short string -- 10 chars or less

// We adhere to the contract here, value {any}, return {boolean}
function isShortString(value){
    // We are extending the original function with a new criterion 
    // by calling the original function and then adding a new
    // predicate upon return
    return isString(value) && value.length <= 10;
}

In this example, isShortString is clearly an extension of the requirements that were defined in the original isString function. Both functions accept any kind of value and return a boolean, so they adhere to the contract, and isShortString is intentionally built off the assumption that isString will always perform the necessary check for the basic string type before any further checks happen. This means that isShortString will, effectively, act as isString for any strings that are naturally shorter than the constraint.
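To see the extension in action, here is a quick usage sketch. The two functions are repeated so the snippet stands alone, and the sample inputs are my own invented examples:

```javascript
function isString(value){
    return typeof value === 'string';
}

function isShortString(value){
    return isString(value) && value.length <= 10;
}

// Both functions honor the value {any}, return {boolean} contract
console.log(isString('hello'));                     // true
console.log(isString(42));                          // false
console.log(isShortString('hello'));                // true
console.log(isShortString('a much longer string')); // false -- too long
console.log(isShortString(42));                     // false -- fails the isString check first
```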

Since SOLID was originally developed as a tool for people working in Object Oriented Programming (OOP) we can’t overlook the original intent. If we want to apply this principle in an OO context, we need something to work with as a foundation. Let’s pick something easy that everyone can relate to from their grade-school years. Let’s have a look at a basic rectangle object, which defines the shape and has a function for getting the area and the perimeter.

class Rectangle{
    constructor(height, width){
        this.height = height;
        this.width = width;
    }
    
    getArea(){
        return this.height * this.width;
    }
    
    getPerimeter(){
        return this.height * 2 + this.width * 2;
    }
}

For as unexciting as this is, rectangles have a special form we all know and love: squares. Suppose we created a rectangle class and we’re using it throughout our code. One day the product owner says we need certain sections of the code to only support squares. We could, theoretically, modify this code to handle rectangles and their regular form, squares, but that violates the single responsibility and open/closed principles. This is a perfect opportunity to subclass and extend our original object.

class Square extends Rectangle{
    constructor(height, width){
        // super must be called before `this` can be used in a subclass
        super(height, width);
        this.checkSize(height, width);
    }
    
    checkSize(height, width){
        if(height !== width){
            throw new Error('Height and width must be equal.');
        }
    }
}

What makes our square class interesting is, the only addition to the original class we made is a check to make sure the height and width are equal. Rectangle does everything we need without modification, aside from adding an assurance that business requirements are met. Another item of note is, we ensured the original contract was enforced so anything using the Rectangle class would be able to use the Square class without modification.
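Because Square preserves Rectangle’s contract, code written against Rectangle should accept a Square without modification. Here is a small sketch of that idea — the describeShape helper is my own illustration, not part of the original requirements — with both classes repeated so the example stands alone:

```javascript
class Rectangle{
    constructor(height, width){
        this.height = height;
        this.width = width;
    }

    getArea(){
        return this.height * this.width;
    }

    getPerimeter(){
        return this.height * 2 + this.width * 2;
    }
}

class Square extends Rectangle{
    constructor(height, width){
        super(height, width);
        this.checkSize(height, width);
    }

    checkSize(height, width){
        if(height !== width){
            throw new Error('Height and width must be equal.');
        }
    }
}

// Written against Rectangle, but works with either class
function describeShape(shape){
    return 'area: ' + shape.getArea() + ', perimeter: ' + shape.getPerimeter();
}

console.log(describeShape(new Rectangle(2, 3))); // area: 6, perimeter: 10
console.log(describeShape(new Square(4, 4)));    // area: 16, perimeter: 16
```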

When we take time to carefully extend existing functionality there is a little bit of magic that happens - we end up writing less code! If we had created a rectangle and then created a square, independent of the rectangle definition, we would have created duplicate code. If, for some reason, we needed to add something to Rectangle, we would have to go track down Square and add it there too. I don’t know about you, but I hate doing double duty.

Being a good programmer is a combination of being diligent and lazy

By front-loading the effort of good, well-written code, you get the benefit of doing less work later when things change. I will trade ease of upkeep for a little work now any day. I have seen code bases that weren’t crafted carefully and, as a result, required a tremendous amount of work to make even minor changes. In the foolishness of my youth, I might have even created code like that.

Coming back around to something Uncle Bob wrote in his article, expecting that all code will be closed to modification is generally unreasonable. Sometimes things change in a way that the original code no longer works as desired at all. Due to the nature of changing business requirements, it’s important to keep in mind that closure should be applied carefully and appropriately.

Since this is about Javascript, it’s easy to point to view-related code. Views can change on a whim and whole swaths of code may be rendered obsolete. This kind of sweeping change makes it important to clearly separate and define your business rules from the way information is presented to the user. Business rules seldom change. If someone wanted to write an accounting app, the view layer could change to enable easier entry, but core accounting principles like GAAP are safe, and wise, to consider set in stone.

The Open/Closed Principle is designed as a tool to support good programming practice. By following the ideas presented, it becomes much easier to write an application that supports business rules and allows developers to safely and correctly enhance behavior as the business continues to grow and change over time. Consider your code as you write and ask yourself if you can define your work in extensible pieces which can support good development in the future without over-developing now.

Blog Post Notes

Code Smells - Conditional Obsession

Jul 29, 2015

Jeff Atwood of Stack Exchange and Coding Horror fame wrote a post quite a long time ago about code smells and what they mean. A couple weeks ago, I discussed eliminating switch statements using hashmaps. In that post, I introduced a new code smell that I want to discuss in a little more depth - conditional obsession.

Conditional obsession is when a programmer introduces more conditional logic than would ever be necessary to solve a particular problem. Sometimes conditional obsession comes in the form of a conditional structure taking the place of a common data structure, such as switches and hashmaps, while other times, it is just overwrought code that grew block by block until it became so unmanageable that developers are now afraid to even touch it.
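As a quick refresher on the switch-versus-hashmap idea from that earlier post, here is a small before-and-after sketch. The tax-rate lookup is an invented example of mine, not from any real requirement:

```javascript
// Conditional-obsessed version: logic stands in for data
function getTaxRateSwitch(category){
    switch(category){
        case 'food':
            return 0;
        case 'clothing':
            return 0.05;
        default:
            return 0.08;
    }
}

// Data-structure version: the mapping is plain data, with a default fallback
function getTaxRate(category){
    let rates = {
        food: 0,
        clothing: 0.05
    };

    return typeof rates[category] !== 'undefined' ? rates[category] : 0.08;
}

console.log(getTaxRate('food'));    // 0
console.log(getTaxRate('gadgets')); // 0.08
```

Adding a new category to the second version means adding a line of data, not another branch of logic.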

Following is a dramatization of the kind of code I am talking about. This has been taken from real code I have encountered in the wild, but the variable names have been changed to protect the innocent.

function initCriteriaAssets() {
    var $this = this,
        dataAssets = null,
        coreData = $this.$scope.coreData,
        criteria = $this.$scope.criteria;

    if (coreData && criteria) {
        dataAssets = coreData.criteriaAssets;
        if (criteria.length > 0 && dataAssets.length > 0) {
            var count = 0,
                callback = function (data) {
                    count--;
                    if (data) {
                        for (var i in dataAssets) {
                            if (dataAssets[i].assetId === data.id) {
                                for (var j in criteria) {
                                    if (criteria[j].criteriaId === dataAssets[i].criteriaId) {
                                        criteria[j].asset = data;

                                        if (data.metaData.recordId) {
                                            criteria[j].showDialog = false;
                                        }
                                        break;
                                    }
                                }
                                break;
                            }
                        }
                    }
                    if (count === 0) {
                        $this.setDataTemplate();
                    }
                };
            for (var k in dataAssets) {
                if (typeof dataAssets[k].assetId !== "undefined") {
                    count++;
                    $this.$dataModel.initById(dataAssets[k].assetId, callback);
                }
            }
        } else {
            $this.setDataTemplate();
        }
    }
}

It’s a little like the Twilight Zone movie where Dan Aykroyd says, “do you want to see something really scary,” isn’t it?

Clearly there are more smells at work here than conditional obsession, but you can see that this programmer was testing every possible situation under the sun. Even with the original variable names in place, I would defy you to explain to me what this code actually does. This code is so incomprehensible I’m not going to even attempt to restructure it in a single blog post. This could take anywhere from a day to a full sprint to unravel and clean up, depending on how pathological the problem is.

I have reached a point in my programming life where I view conditional blocks as a code smell. Sometimes they are necessary, but, often, they are just a bug magnet. The more conditions you attempt to satisfy, the more likely you are to get one of them wrong. The deeper in a code block your condition is, the more likely it is to only occasionally surface, making it extremely difficult to diagnose.

No good code smell exists without some sort of remedy. Conditional obsession is no different. Let’s have a look at different ways we can fix up our code and make it easier on ourselves and nicer for the next programmer who has to take over what we have written.

Refactoring 1 - Reduce nesting depth

If you have your conditions nested two or more layers deep, consider refactoring your logic to handle the cases at a single layer, instead. This will reduce the number of cases where your code becomes unreachable except for a very specific, difficult-to-identify edge case. Let’s take a look at an example.

function myFunction(myList, aValue){
    let defaultValue = 'defaultStr',
        newList = [];
    
    if (myList.length > 0) {
        if (aValue !== null) {
            newList = myList.map(value => value + aValue);
        } else {
            newList = myList.map(value => value + defaultValue);
        }
    } else if (aValue !== null) {
        newList.push(aValue);
    } else {
        newList.push(defaultValue);
    }
    
    return newList;
}

Now let’s apply refactoring 1.

function refactoredFunction1(myList, aValue){
    let defaultValue = 'defaultStr',
        newList = [];
    
    if (myList.length > 0 && aValue !== null) {
        newList = myList.map(value => value + aValue);
    } else if (myList.length > 0) {
        newList = myList.map(value => value + defaultValue);
    } else if (aValue !== null) {
        newList.push(aValue);
    } else {
        newList.push(defaultValue);
    }
    
    return newList;
}

Even with just the first refactoring, we get code that is easier to reason about. It’s not perfect, and it’s not DRY, but it’s a step in the right direction. Now that we have applied the refactoring, we can identify what some of the conditionals we had in our original code were really trying to accomplish.

Refactoring 2 - Factor conditionals

Factoring conditionals is a lot like factoring in algebra. Suppose we had the following expression from Algebra 1:

5x + 10

We know that a simple factorization would look like the following:

5(x + 2)

Clearly the second expression describes the outcome of the first expression directly. The main difference is, we now know that we are simply dealing in a linear expression, x + 2, which is being multiplied by 5.

The same can be done with conditional statements to help clarify meaning and help us to reduce complexity in our applications. We can factor out common conditionals and separate our logical concerns, simplifying what we must digest to improve our program’s readability and/or maintainability.

function conditionalFactoredFunction(myList, aValue){
    let defaultValue = 'defaultStr',
        newList = [],
        postfix = '';
    
    // aValue comparison to null is a common factor
    // Our conditionals continually switched between aValue and defaultValue
    if (aValue === null) {
        postfix = defaultValue;
    } else {
        postfix = aValue;
    }
    
    // Since we are always setting postfix to a sane value
    // we don't need to perform any conditional assessments here
    if (myList.length > 0) {
        newList = myList.map(value => value + postfix);
    } else {
        newList.push(postfix);
    }
    
    return newList;
}

Now that we’ve performed our conditional factorization, it becomes trivial to finish the function cleanup. We are doing a lot of variable manipulation here. This kind of juggling leads to small, difficult to spot bugs, so let’s just get rid of all the unnecessary assignments.

function finalRefactoring(myList, aValue){
    let postfix = aValue !== null ? aValue : 'defaultStr',
        newList = [];
    
    if (myList.length > 0) {
        newList = myList.map(value => value + postfix);
    } else {
        newList.push(postfix);
    }
    
    return newList;
}

By identifying the conditional obsession code smell, we were able to take a function that was small, but still difficult to read and reduce complexity while improving readability. We trimmed about 33% of the bulk from the code and cut closer to the real goal the original code was trying to accomplish.
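One way to gain confidence in a refactoring like this is to run the original and final versions side by side on representative inputs. A hypothetical spot check, with both versions repeated so the snippet stands alone:

```javascript
function myFunction(myList, aValue){
    let defaultValue = 'defaultStr',
        newList = [];

    if (myList.length > 0) {
        if (aValue !== null) {
            newList = myList.map(value => value + aValue);
        } else {
            newList = myList.map(value => value + defaultValue);
        }
    } else if (aValue !== null) {
        newList.push(aValue);
    } else {
        newList.push(defaultValue);
    }

    return newList;
}

function finalRefactoring(myList, aValue){
    let postfix = aValue !== null ? aValue : 'defaultStr',
        newList = [];

    if (myList.length > 0) {
        newList = myList.map(value => value + postfix);
    } else {
        newList.push(postfix);
    }

    return newList;
}

// Cover all four branches: list/empty crossed with value/null
let cases = [[['a', 'b'], 'X'], [['a', 'b'], null], [[], 'X'], [[], null]];

cases.forEach(function([list, value]){
    console.log(JSON.stringify(myFunction(list, value)) ===
                JSON.stringify(finalRefactoring(list, value))); // true for each case
});
```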

A nose for code smells is generally developed over time and with practice, but once you learn to identify distinct smells, you can become a code sommelier and make quick, accurate assessments of code that could use the careful work that refactoring provides. While you code, watch out for conditional obsession and work to reduce the complexity of your application.

Mainstay Monday: SOLID - Single Responsibility Principle

Jul 27, 2015

This post is part of a series on the SOLID programming principles.

Starting this post and for the following four Mainstay Monday posts, I am going to go through the SOLID principles as put forth by Bob “Uncle Bob” Martin and Michael Feathers. SOLID is a foundational set of principles programmers can use to evaluate and refactor their code in order to reduce bugs and increase stability. Originally SOLID was presented as a tool for object oriented design (OOD), but I contend that many of the principles apply to the functional paradigm as well.

The first principle of SOLID is the Single Responsibility Principle. As the name states, single responsibility is a heuristic we can use to evaluate whether code we wrote is trying to accomplish too many different tasks. When code does too many things at once it allows bugs to creep in undetected. Following is a function that, at first glance, looks pretty small and innocuous, but actually tries to accomplish several different tasks at once.

function returnAdder(){
	
	let latestSum = 0;
	
	return function addStuff(a, b){
		if(typeof a !== 'number' || typeof b !== 'number' || isNaN(a + b)){
			throw new Error('Something isn\'t a number, yo');
		}
		
		latestSum = a + b;
		
		return latestSum;
	}

}

Just as a brief review of what is happening here, we have a state variable captured in a closure, called latestSum. The returned function does a little validation, a little arithmetic and a little state modification. For 4 lines of executable code, that’s actually quite a bit going on.

Let’s do some refactoring and tease apart each of the different actions we’re performing and create separate functions for each. This may not immediately seem like the best way to go about things, but it will make our issues a little more transparent. Here’s a new, refactored function that does the same job.

function returnComplexAdder(){

	let latestSum = 0;

	function storeLatestSum(sum){
		latestSum = sum;
	}
	
	function validateArguments(a, b){
		let aIsValid = typeof a === 'number' && !isNaN(a),
		    bIsValid = typeof b === 'number' && !isNaN(b);
			
		return aIsValid && bIsValid;
	}

	function add(a, b){
		return a + b;
	}
	
	return function composedAdder(a, b){
		if(!validateArguments(a, b)){
			throw new Error(`Either ${a} or ${b} is not a valid number.`);
		}

		storeLatestSum(add(a, b));
		return latestSum;
	}
	
}

Composed adder is a little cleaner now. Each of the things it used to handle directly have been abstracted away and then reintroduced as function calls. Now our function is a little less explicit about the discrete steps needed to accomplish the work, and our adder function can be considered more of an execution layer which merely combines the steps needed to fully process our request.

As it turns out, however, a big mess of functions like this can turn ugly in a heartbeat. Beyond the obvious trend toward one or more pyramids of doom, we are handling a memory state in a rather sub-optimal way. When state and functions that modify that state live closely together it can sometimes be helpful to collect the entire block of data and functions into an object which manages its own state.

When we create a class around this concept of an adder with a persistent memory, we can make nearly a one-to-one conversion from a functional form to an instantiable object. Let’s refactor and take a look at the resulting object.

class memAdder{
	constructor(){
		// Since the world can access this and we haven't done anything yet
		// we want to use a non-number falsey value that accurately describes
		// the current state of our object.
		this.latestResult = null;
	}
	
	storeLatestResult(result){
		this.latestResult = result;
	}
	
	validateArguments(a, b){
		let aIsValid = typeof a === 'number' && !isNaN(a),
		    bIsValid = typeof b === 'number' && !isNaN(b);
		
		return aIsValid && bIsValid;
	}
	
	add(a, b){
		return a + b;
	}
	
	computeAndStore(a, b){
		if(!this.validateArguments(a, b)){
			throw new Error(`Either ${a} or ${b} is not a valid number.`);
		}
		
		this.storeLatestResult(this.add(a, b));
		return this.latestResult;
	}
}

Aha! Once we have made the transition to an object oriented paradigm, it becomes clear that we are still, in fact, not adhering to the single responsibility principle. Our object, memAdder, is still doing all of the same things our original function was doing. This is why our function looked so messy, we kept all the clutter!

As people who know me understand, I am a proponent of “everyday functional programming.” This means that doing things in a purely functional way in theory sounds wonderful, but sometimes objects happen. The beautiful thing that can happen, however, is sometimes we start looking at a big, ugly object and then functions happen.

Let’s use a modified strategy/factory pattern, returning functions instead of objects, to abstract away all of our validation and computation logic and leave the object managing the thing objects manage best: state. When we do this, we can fall back to our preferred pure functional approach for the grunt work, which will be selected and managed at runtime based on user need, meanwhile we have an object that can compose functions on the fly and maintain a running memory of the latest computation performed.

//IIFE ALL THE THINGS!
let arithmeticValidatorFactory = (function(){
	
	function numberValidator(a, b){
		let aIsValid = typeof a === 'number' && !isNaN(a),
		    bIsValid = typeof b === 'number' && !isNaN(b);
		
		return aIsValid && bIsValid;
	}
	
	function getValidator(key){
		let validators = {
			default: numberValidator
		};
		
		return Boolean(validators[key]) ? validators[key] : validators['default'];  
	}
	
	return {
		get: getValidator
	};
	
})();

//IIFEs keep the global space clean and happy
let arithmeticFunctionFactory = (function(){
	
	function zeroFunction(){
		return 0;
	}
	
	function add(a, b){
		return a + b;
	}
	
	function getFunction(key){
		let arithmeticFns = {
			addition: add,
			default: zeroFunction
		};
		
		return Boolean(arithmeticFns[key]) ? arithmeticFns[key] : arithmeticFns['default'];
	}
	
	return {
		get: getFunction
	};
	
})();

class memComputor{
	constructor(method, validator){
		this.compute = method;
		this.validator = validator;
		this.latestResult = null;
	}
	
	setLatestResult(result){
		this.latestResult = result;
	}
	
	computeAndStore(a, b){
		if(!this.validator(a, b)){
			throw new Error(`Either ${a} or ${b} is not an acceptable argument.`);
		}
		
		this.setLatestResult(this.compute(a, b));
		return this.latestResult;
	}
}

As we can see, the new object memComputor has been reduced to a single responsibility. Hooray! That’s what we set out to do. memComputor is instantiated with a computation method and a validator, so it contains no computation or validation logic of its own. computeAndStore does exactly what its name says: it takes the injected functionality, composes it on the fly, fails the attempt if the arguments are invalid, and otherwise performs the computation, storing and returning the output.

Meanwhile on the factory front, we have all of our actions lined up. We declare the methodology we need, receive functions and we have a pair of pure functions that are reliable, bug-free, or as close as we can make them, and ready for injection into our state-management object.
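Wiring the pieces together might look like the following sketch. The factory machinery is reduced here to minimal stand-in functions so the example stands alone; in the full version they would come from the factories above:

```javascript
class memComputor{
	constructor(method, validator){
		this.compute = method;
		this.validator = validator;
		this.latestResult = null;
	}
	
	setLatestResult(result){
		this.latestResult = result;
	}
	
	computeAndStore(a, b){
		if(!this.validator(a, b)){
			throw new Error(`Either ${a} or ${b} is not an acceptable argument.`);
		}
		
		this.setLatestResult(this.compute(a, b));
		return this.latestResult;
	}
}

// Minimal stand-ins for the factory-produced functions
function numberValidator(a, b){
	return typeof a === 'number' && !isNaN(a) &&
	       typeof b === 'number' && !isNaN(b);
}

function add(a, b){
	return a + b;
}

let adder = new memComputor(add, numberValidator);

console.log(adder.computeAndStore(3, 4)); // 7
console.log(adder.latestResult);          // 7
```

Swapping in a different computation or validator is now a matter of passing different functions to the constructor; the state-management object never changes.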

It seems like a lot of code to, ultimately, do a simple addition problem. If the goal were addition, I would agree. Ultimately, however, what we really built here was the foundation for a system to manage expensive actions that we might want to perform once and then reference again and again, like the union or intersection of a large list of data, perhaps.

To sum up everything we’ve gone over, the Single Responsibility Principle is a heuristic tool for evaluating whether a block of code, object-oriented or functional, is performing the correct actions to accomplish a single goal or if the code is taking on too much and should be refactored to solve the problem in a more granular way.

With many programming problems, identifying the right granularity can be difficult, but well-known, battle-tested tools like the single responsibility principle make the job easier. By adding SOLID principles to your arsenal of tools, your programming will get better and solving even greater, more complex problems will become a question of breaking them down into the right pieces.

Contracts for Better Code

Jul 22, 2015

With programming languages which have a greater draw for classically trained computer science types, there is a common discussion of contracts and data expectations. Contracts are so inherent in the code that it’s hard to see a declaration without one. Type systems emerge from the idea of contracts and every function, constructor and return path comes with an expectation that is defined and declared in the code.

Javascript, being dynamically typed and a little loose with the morals, attempts to sidestep contracts altogether. Function arguments are little more than a strong suggestion to the programmer, and every function returns something even if that something is merely the undefined type. Contracts are just not something we do.

Here’s an example of exactly how squirrely Javascript can really be.

function addBasic(a, b){
    return a + b;
}

function addArguments(){
    let a = arguments[0],
        b = arguments[1];

    return a + b;
}

function addSuggestion(a, b){
    let _a = Boolean(a) ? a : 0,
        _b = Boolean(b) ? b : 0;

    return _a + _b;
}

Obviously each of these functions does essentially the same thing, but in the last example a and b are nothing more than a suggestion of how you could use the function. You could run addSuggestion() and get 0 or you could run addSuggestion(1, 2) and get 3. There’s no requirement that you actually adhere to the contract at all.

You are doing it wrong.

There, I said it. Mucking about with all of these bits and pieces that may or may not exist and allowing programmers to play fast and loose with your carefully constructed function is just plain wrong. It makes me want to take a shower. It’s like the midnight movie on the horror channel: gross.

Rule number one of contract club: You ALWAYS talk about contract club.

If someone created a contract in their function they are setting expectations. If you don’t play by the rules someone set in the function, you should not expect the function to work properly. It’s just that simple. The contract is there to save you from yourself.

At this point, I think I should mention: I understand that Javascript doesn’t support function overloading, so you can’t create optional variations on a function, and the loose requirements around the contract are there so you can get something akin to overloaded functions.

To this I say hogwash!

Actually that’s not true. Optional arguments are good, however it is better if we use them in a safe way. Overloaded functions, even in languages that allow them, can get dangerous. It’s preferable to write code that says what it means and does what it says. Let’s take a look.

function buildBasicUrl(hostname){
	return 'http://' + hostname;
}

function buildBasicPathUrl(hostname, path){
	return buildBasicUrl(hostname) + path;
}

function buildProtocolSpecificUrl(hostname, path, protocol){
	return protocol + '://' + hostname + path;
}

function buildPortSpecificUrl(hostname, path, protocol, port){
	return protocol + '://' + hostname + ':' + port + path;
}

function buildUrl(hostname, path, protocol, port){
	let url = '';
	
	if(Boolean(port)){
		url = buildPortSpecificUrl(hostname, path, protocol, port);
	} else if(Boolean(protocol)){
		url = buildProtocolSpecificUrl(hostname, path, protocol);
	} else if(Boolean(path)){
		url = buildBasicPathUrl(hostname, path);
	} else {
		url = buildBasicUrl(hostname);
	}
	
	return url;
}

That may not be the most beautiful code I’ve written, but it illustrates the importance of what I am saying. Here we can see that there is a function, buildUrl, which takes four parameters. Hostname is required, but all of the rest are optional. Once we get to the specific implementations of what we are actually doing, the contract becomes a firm handshake and it’s backed by the interpreter threatening to throw an error if something goes wrong. Mind you, the interpreter is going to just concatenate a whole bunch of undefined values, but that’s beside the point. You won’t get what you want if you don’t meet the contract.
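To make the contract concrete, here is a hypothetical run of buildUrl, with the functions repeated in minimal form so the snippet stands alone. However the optional arguments are filled in, the return is always a string:

```javascript
function buildBasicUrl(hostname){
	return 'http://' + hostname;
}

function buildBasicPathUrl(hostname, path){
	return buildBasicUrl(hostname) + path;
}

function buildProtocolSpecificUrl(hostname, path, protocol){
	return protocol + '://' + hostname + path;
}

function buildPortSpecificUrl(hostname, path, protocol, port){
	return protocol + '://' + hostname + ':' + port + path;
}

function buildUrl(hostname, path, protocol, port){
	let url = '';
	
	if(Boolean(port)){
		url = buildPortSpecificUrl(hostname, path, protocol, port);
	} else if(Boolean(protocol)){
		url = buildProtocolSpecificUrl(hostname, path, protocol);
	} else if(Boolean(path)){
		url = buildBasicPathUrl(hostname, path);
	} else {
		url = buildBasicUrl(hostname);
	}
	
	return url;
}

console.log(buildUrl('example.com'));                         // http://example.com
console.log(buildUrl('example.com', '/blog'));                // http://example.com/blog
console.log(buildUrl('example.com', '/blog', 'https'));       // https://example.com/blog
console.log(buildUrl('example.com', '/blog', 'https', 8080)); // https://example.com:8080/blog
```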

So, there is another side to the contract that is also illustrated here. Regardless of what happens, you can guarantee you will always, ALWAYS get a string back when you run buildUrl. This is the promise made by the person who wrote the code before you came along. So, on the one hand, you must meet the requirements of the contract in order for the function to properly execute. On the other hand, you are allowed righteous indignation when you expect a string and you get a boolean or something else.

Return types are contracts.

When you, as the developer, write a function and claim you are returning a specific type, understand that the next person will hunt you down with hungry dogs if you promise a string, but sometimes return an object. What is returned is really, REALLY important. People rely on it. Imagine if this happened:

/*
 * I solemnly swear I always return an array.
 */

function listify(a, b, c, d){
	let finalArray = [a, b, c, d];
	
	if(finalArray.includes('foo')){
		finalArray = null; //This will totally never happen
	}
	
	return finalArray;
}

function removeVowels(value){
	return value.replace(/[aeiou]/gi, '');
}

let myListNoVowels = listify('foo', 'bar', 'baz', 'quux').map(removeVowels);

//BANG! BOOM! EXPLOSIONS! GUNFIRE! STACKTRACE!!!!

I mean, that was downright malicious. Who goes around saying they are returning an array and then they return null. That’s seriously sadistic. What’s worse is, if listify was buried somewhere in a library and all you had was the crummy documentation they wrote, you would never be able to figure out what you are doing wrong to cause listify to return null.

I dunno, it just blows up sometimes.

The short version of this post goes a little like this: When you write a function, you are writing a contract. That is a contract you are required to honor and you can, almost always, expect the programmer who uses your function to adhere to that contract.

The longer sum-up goes more like this: A contract is a guarantee. The guarantee that is given states expectations for what the user will do and it provides assurances about what the user will get in return. Contracts revolve around data, and everything is data. This means that regardless of what you are writing or what you expect, in Javascript, you should always say what you mean and do what you say.

By providing a strong contract for the user to rely on, you are making the world a little better place to live in. You are giving guarantees when so many programmers around you might not. Best of all, when you write a good data contract in your functions you can come back after a year of looking at something completely unrelated and get back up to speed almost instantly on what goes in and what comes out. Doesn’t that sound like better code? It does to me.

Mainstay Monday: Solving Problems With Recursion

Jul 20, 2015

If you have been writing Javascript for any amount of time, you’re sure to be familiar with standard loop structures. For and while blocks are part and parcel of the modern programming experience. On the other hand, if you haven’t done a data structures and algorithms course, you may not be familiar with the term recursion. Recursion is another methodology for handling repeated behavior, but it is useful for a completely different set of problems.

Before we start looking at the kinds of problems recursion is useful for handling, let’s discuss what it is. Recursion is the process of calling a function from within that same function to perform the operation again. Typically this is done with some reduction, modification or subset of the original data. A great, and simple example of a recursive algorithm is the Greatest Common Divisor (GCD) algorithm.

In order to understand what we are looking at with recursion, let’s first take a look at an iterative solution to our GCD problem. Euclid would tell us this is the most inelegant solution he’d ever seen to solve this problem, but he’s not here, so we can do what we want.

function gcd(a, b){
    //Let's not modify our original vars
    let _a = a,
        _b = b,
        temp;

    while(_b !== 0){
        temp = _a;
        _a = _b;
        _b = temp % _b;
    }

    return Math.abs(_a);
}

This function will find the GCD every time, but there is a lot of variable manipulation. With the variables being swapped around so much, it becomes difficult to follow what this function is really doing. Nonetheless, there are benefits to writing our GCD function this way that we will discuss in a moment. Now, let’s take a look at the recursive GCD function.

function gcd(a, b){
    return b !== 0 ? gcd(b, a % b) : Math.abs(a);
}

This second function actually accomplishes the same task as the original, but in a single line of executable code! Clearly recursion can provide a simpler, terser way of framing certain kinds of problems. In this case, we solve one step of the problem and then let the recursion do it over and over until we get an answer. This shorter syntax comes with a cost: memory.
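To see the chain of calls this one-liner sets off, we can instrument it with a depth parameter. The `gcdTraced` name, the `depth` argument and the indented logging are our own illustrative additions, not part of the original function:

```javascript
// An instrumented copy of the recursive gcd; depth tracks how many
// stack frames deep we are, and indents the log line to match
function gcdTraced(a, b, depth = 0){
    console.log(' '.repeat(depth) + 'gcd(' + a + ', ' + b + ')');
    return b !== 0 ? gcdTraced(b, a % b, depth + 1) : a;
}

gcdTraced(150, 985);

// gcd(150, 985)
//  gcd(985, 150)
//   gcd(150, 85)
//    gcd(85, 65)
//     gcd(65, 20)
//      gcd(20, 5)
//       gcd(5, 0)
```

Each line of output corresponds to a stack frame that stays alive until the base case returns, which is exactly the growth the memory array in the next example simulates.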

Our while loop, though a little harder to read, takes up, effectively, a constant amount of memory. When you enter the function, variables are declared, memory is allocated and then the loop works within the constraints of the variables we define. Our recursive function operates differently.

Let’s add an array into our first function and watch what happens when we push values into it while we are computing everything. This will give us some insight into what is happening in memory as our recursion is working.

function gcd(a, b){
    let memory = [a, b],
        _a = a,
        _b = b,
        temp;

    console.log(memory);

    while(memory.length > 0){
        
        if(_b !== 0){
            temp = _a;
            _a = _b;
            _b = temp % _b;
            memory.push(_b);
            console.log(memory);
        } else {
            memory.pop();
            console.log(memory);
        }
    }
    return _a;
}

console.log(gcd(150, 985));

// [ 150, 985 ]
// [ 150, 985, 150 ]
// [ 150, 985, 150, 85 ]
// [ 150, 985, 150, 85, 65 ]
// [ 150, 985, 150, 85, 65, 20 ]
// [ 150, 985, 150, 85, 65, 20, 5 ]
// [ 150, 985, 150, 85, 65, 20, 5, 0 ]
// [ 150, 985, 150, 85, 65, 20, 5 ]
// [ 150, 985, 150, 85, 65, 20 ]
// [ 150, 985, 150, 85, 65 ]
// [ 150, 985, 150, 85 ]
// [ 150, 985, 150 ]
// [ 150, 985 ]
// [ 150 ]
// []
// 5

This is just a rough approximation, but you can see how more and more memory gets allocated to handle the recursion. This kind of unbounded growth is generally behavior we want to avoid in our programs. At its peak, just before the algorithm finishes up, the memory allocation is 8 integers.

The size of an integer lives somewhere between 2 and 8 bytes, so let’s call it 4 bytes and meet in the middle. This means that just the storage for the numbers we were computing took up 32 bytes. That may not seem like a lot, but considering our original algorithm took about 12 bytes, this is a pretty substantial overhead.

Fear not! All is not lost.

Okay, so recursion may not be the most efficient kid on the block, but sometimes it really makes sense. Suppose you have a tree that you really, REALLY need to search to find something. You could write an iterative solution to search the tree, but that involves trickery we don't have time for in this post. Instead, let's suppose the tree is several layers deep and each layer contains several intermediate nodes. Here's what our algorithm might look like:

// A predicate function is a function which tests a value and returns true or false
function searchTree(rootNode, predicate){
    let children = Boolean(rootNode.children) ? rootNode.children : [],
        childCount = children.length,
        found = predicate(rootNode) ? rootNode : null;

    // Keep searching children only while nothing has been found yet
    for(let i = 0; i < childCount && found === null; i++){
        if(predicate(children[i])){
            found = children[i];
        } else {
            found = searchTree(children[i], predicate); // Recursion!
        }
    }

    return found;
}

As you can see, we search the tree one edge at a time. We travel from node to node, moving up and down the levels until we find the element we want. If a matching element doesn’t exist in the tree, then we return null and accept our fate.
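To try this out, here's a self-contained sketch with a small, made-up tree. The node shape (a `name` plus an optional `children` array) and the sample names are our own illustrative choices, and the function is a trimmed copy of `searchTree` so the snippet runs on its own:

```javascript
// Trimmed, self-contained copy of searchTree for demonstration
function searchTree(rootNode, predicate){
    let children = Boolean(rootNode.children) ? rootNode.children : [],
        found = predicate(rootNode) ? rootNode : null;

    // Only keep descending while nothing has been found yet
    for(let i = 0; i < children.length && found === null; i++){
        found = searchTree(children[i], predicate);
    }

    return found;
}

// A tiny sample tree, two levels deep
let tree = {
    name: 'root',
    children: [
        { name: 'left', children: [{ name: 'left-left' }] },
        { name: 'right' }
    ]
};

console.log(searchTree(tree, function(node){ return node.name === 'left-left'; }).name);
// left-left
console.log(searchTree(tree, function(node){ return node.name === 'missing'; }));
// null
```

The predicate keeps the traversal generic: the same function can find a node by name, by value or by any other test, without the search logic changing.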

Wrapping this all up, there are always many ways to skin the proverbial cat and with each solution, there is a cost and a benefit. Recursion is an excellent way to solve some particularly tricky problems, but it comes with a cost, specifically memory and efficiency. Even with the drawbacks, sometimes it just makes more sense to use recursion and reduce the problem down to something more manageable.

We can consider looping to be a lightweight, electric chainsaw. It will cut just about anything you put in front of it, but it can make a mess. By that same notion, recursion is a scalpel. It’s not the right tool for every job, but when it’s handled with care, it can perform delicate surgery and remove warts from some of the trickiest problems in your code.
