Data, Types, Objects and Creating A New Generic Type

Oct 21, 2015

Javascript comes with a variety of types, and with ES 2015 and beyond, we are seeing new types emerge, like sets and symbols. The primitive data types come with a standard set of comparison rules and default behaviors which make them especially nice to work with; complex data like arrays and objects, however, are not so friendly. Objects have no built-in means for comparison (unsurprising) and arrays have no simple, idiomatic way to distinguish them from objects. Even null, which is referred to as a primitive data type, can lie to us. See the example below.

typeof {}; // object
typeof null; // object
typeof []; // object

// Null check
foo === null;

// Array check
Object.prototype.toString.call(foo); // old way, returns '[object Array]' for arrays
Array.isArray(foo); // new way (ES5)

It gets worse if we want to compare any of these. If we compare two distinct arrays which contain the same data in the same order, we will get false every single time. Comparisons only happen on pointer references in Javascript and it makes checking equality really tough. There really isn’t much you can do about all of this except monkey patching data prototypes, which I always discourage for a variety of reasons.
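To see the problem first-hand, here is what comparison does with two arrays containing identical data:

var listA = [1, 2, 3],
    listB = [1, 2, 3];

listA === listB; // false, different references
listA == listB; // false, still a reference comparison
listA === listA; // true, same reference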

I know I write a lot of posts about functional programming and, typically, it is the way to enlightenment and a land of wonder and elegance. Even with functional programming paradigms, though, sometimes what you need is not a function, but a data type… an object. The data type Javascript doesn’t have, but which is older than (programming) time itself, is the struct.

Structs are complex data types of a generic sort, which are used to store data in a predictable way. C and C++ have them, Ruby has them and even Go has them. Javascript has object literals, which are similar; however, they lack something that is built into other languages. Let’s look at structs in a couple of different languages: C and Ruby.

// Point struct in C
struct point {
    int x;
    int y;
};

struct point p = { .x = 0, .y = 5 };
printf("x is %d, y is %d", p.x, p.y); // x is 0, y is 5

# Point struct in Ruby
Point = Struct.new(:x, :y)
my_point = Point.new(0, 5)

puts "x is #{my_point[:x]}, y is #{my_point[:y]}" # x is 0, y is 5

As I looked at these rather convenient data structures, I wondered why we can’t have nice things in Javascript. This is especially apparent when we look at things like someStruct.merge() in Ruby, since we typically need a third party function to accomplish the same thing for objects or arrays. What if we just had a really cool new data type?

This kind of construction is where object oriented programming really shines. Instead of lusting after something other languages have but Javascript is missing, let’s just create our own data type! The first thing we are going to want is a foundation to put our data into. Let’s create a constructor that sets up our struct with the stuff we need.

function Struct () {
    var keys = Array.prototype.slice.call(arguments, 0);

    this.dataStore = keys.reduce(this.addProperty.bind(this), {});
}

Struct.prototype = {
    addProperty: function (dataObj, key) {
        dataObj[key] = null;
        return dataObj; // reduce needs the accumulator handed back on every pass
    }
};

This really isn’t anything more than an object literal wrapped in a class, but it sets us up for a lot more. The first big problem we encounter is that we have a data object backing our struct, but the way we access our data is kind of strange: myStruct.dataStore[key]. Let’s do a little bit of data hiding and add accessors and mutators so we can define an interface for our struct. By creating an API with a standard naming convention, we make our struct stable and predictable.

function Struct () {
    var keys = Array.prototype.slice.call(arguments, 0),
        dataStore = {};

    this.get = {};
    this.set = {};

    keys.reduce(this.addProperty.bind(this), dataStore);
}

Struct.prototype = {
    accessorBase: function (dataStore, key) {
        return dataStore[key];
    },

    mutatorBase: function (dataStore, key, value) {
        dataStore[key] = value;
        return this;
    },

    addProperty: function (dataStore, key) {
        dataStore[key] = null;

        this.get[key] = this.accessorBase.bind(this, dataStore, key);
        this.set[key] = this.mutatorBase.bind(this, dataStore, key);

        return dataStore;
    }
};

If this step feels pretty abstract, that’s because it is. We have wandered into the world of metaprogramming and generic programming. We won’t go into detail on those topics because they are whole realms of computer science unto themselves. Instead, let’s discuss what we added and why.

AddProperty adds a key, initialized to null, to our backing object literal, then it takes the pointer to the object literal and creates two object-bound methods: get.keyName and set.keyName. This gives us a clean, obvious API to interact with. Even better than that, we now know exactly which keys are supported by our struct, and if someone tries to interact with a property which isn’t defined, they will get a useful error. This is a lot more stable than just allowing someone to come along and modify the data directly. Let’s take a look at creating a point with our new struct.

var point = new Struct('x', 'y');
point.set.x(0);
point.set.y(5);

point.set.foo('bar'); // TypeError: point.set.foo is not a function

console.log('x is ' + point.get.x() + ', y is ' + point.get.y()); // x is 0, y is 5

Hey! Our struct is starting to come together. We can create a new structure on the fly, it sets up our object with the right keys, adds appropriate functions and behaves something akin to a data-oriented class. That’s pretty spiffy.

We could, theoretically, stop right here and be correct in saying we have defined our own data type, but there are so many things we are still missing. That setter behavior is fine for something small like a point. It’s actually pretty terse. However, what if we have a bunch of keys and we want to be able to modify them all at once? This is what merge is good for. Let’s define new syntax to handle batch mutation for properties.

Struct.prototype = {
    /* Our prior functionality is here... */
    mergeKey: function (updateObj, key) {
        this.set[key](updateObj[key]);
    },
    
    merge: function (updateObj) {
        var keysToMerge = Object.keys(updateObj);
        keysToMerge.forEach(this.mergeKey.bind(this, updateObj));
        return this;
    }
};

MergeKey is little more than an alias for our mutator functions, but it gives us everything we need to keep merge nice and tidy. It also gives us a way to pluck values from an object at run-time and update just a single property in our struct. Merge, on the other hand, is built exclusively for power. We can hand in an object and merge will lift all of the properties and batch update our struct. This added syntax provides a short, powerful way to handle our struct data at initialization time, and on the fly for big updates.

var point = (new Struct('x', 'y')).merge({ x: 0, y: 5 });
console.log('x is ' + point.get.x() + ', y is ' + point.get.y()); // x is 0, y is 5

Now that we’ve gotten this far, we have a fully functioning struct with a couple of conveniences. Rather than stepping through each new function we add, let’s just take a look at a final struct class. Our final struct will have comparison extension and type checking through duck-typing. This post, quite possibly, could be broken into a whole set of posts discussing each of the minute topics within our struct, but I think it might be better to just see the final product and visit some of the deeper ideas at another time.

function Struct () {
    var keys = Array.prototype.slice.call(arguments, 0),
        dataStore = {};
    
    this.get = {};
    this.set = {};
    
    keys.forEach(this.addProperty.bind(this, dataStore));
    
    // Bind data store to prototype functions
    this.addProperty = this.addProperty.bind(this, dataStore);
    this.equal = this.equal.bind(this, dataStore);
    this.dataStoresEqual = this.dataStoresEqual.bind(this, dataStore);
    this.instanceOf = this.instanceOf.bind(this, dataStore);
}

Struct.prototype = {
    
    compareValue: function (localDataStore, foreignDataStore, result, key) {
        return result && localDataStore[key] === foreignDataStore[key];
    },
    
    dataStoresEqual: function (localDataStore, foreignDataStore) {
        var localKeys = Object.keys(localDataStore),
            foreignKeys = Object.keys(foreignDataStore),
            
            compare = this.compareValue.bind(null, localDataStore, foreignDataStore),
            equalKeyCount = localKeys.length === foreignKeys.length;
            
        return equalKeyCount && localKeys.reduce(compare, true);
    },
    
    equal: function (localDataStore, foreignStruct) {
        return foreignStruct.dataStoresEqual(localDataStore);
    },
    
    containsKey: function (foreignStruct, result, key) {
        return result && typeof foreignStruct.get[key] === 'function';
    },
    
    instanceOf: function (localDataStore, foreignStruct) {
        return Object.keys(localDataStore).reduce(this.containsKey.bind(this, foreignStruct), true);
    },

    mergeKey: function (updateObj, key) {
        this.set[key](updateObj[key]);
    },
    
    merge: function (updateObj) {
        var keysToMerge = Object.keys(updateObj);
        keysToMerge.forEach(this.mergeKey.bind(this, updateObj));
        return this;
    },

    // Generic accessor
    accessorBase: function (dataStore, key) {
        return dataStore[key];
    },
    
    // Generic mutator
    mutatorBase: function (dataStore, key, value) {
        dataStore[key] = value;
        return this;
    },
    
    // Generic property creation method. This will be bound and used later
    // to extend structs and maintain a homogeneous interface.
    addProperty: function (dataStore, key) {
        dataStore[key] = null;
        
        this.get[key] = this.accessorBase.bind(this, dataStore, key);
        this.set[key] = this.mutatorBase.bind(this, dataStore, key);
        
        return dataStore;
    }
};
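Here’s a quick sketch of the finished struct in action, exercising merge, equality and our duck-typed instanceOf:

var pointA = (new Struct('x', 'y')).merge({ x: 0, y: 5 }),
    pointB = (new Struct('x', 'y')).merge({ x: 0, y: 5 }),
    pointC = (new Struct('x', 'y')).merge({ x: 3, y: 4 });

pointA.equal(pointB); // true
pointA.equal(pointC); // false

pointA.instanceOf(pointC); // true, pointC supports all of pointA's keys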

Leveling Up With Reduce

Oct 14, 2015

It was pointed out to me the other day that I suffer from the curse of knowledge. What this basically means is, I know something so I don’t understand what it means to NOT know that thing. This can happen in any aspect of life and it’s common for people, especially software developers, to experience this. Many of us have been working with computers in some way or another for most or all of our lives. This means, when we talk to people who haven’t shared our experiences, we don’t understand their position, i.e. we talk and they don’t have any clue what we are saying.

Within various programming communities this can also happen when more experienced developers talk to developers who are still learning and growing. The experienced developer says something they think is imparting great wisdom and knowledge on the person they are talking with, meanwhile the inexperienced developer is baffled and lost.

Functional programming has become one of these dividing lines in the community. There are people who have dug in deep and have an understanding of the paradigm which they then have trouble conveying to people who haven’t had the same experiences. Ultimately the message falls on deaf ears.

One of the least understood, but possibly easiest to comprehend, concepts is reduce. We perform reductions every day. We reduce lists of values to sums. We reduce records down to a single selected record based on user preferences or our need. Programming and reduction really go hand in hand.

To come to grips with the kinds of behavior we’re talking about, let’s have a look at some common patterns programmers use in their day to day development. The following block of code contains functions for taking the sum of an array, finding a maximum number and filtering an array of integers. If you have written loops, conditionals and functions before, these will probably be completely unsurprising.

function sumImperative (values) {
	var result = 0;
	
	for (let i = 0; i < values.length; i++) {
		result += values[i];
	}
	
	return result;
}

function findMaxImperative (values) {
	var max = -Number.MAX_VALUE;
	
	for(let i = 0; i < values.length; i++) {
		if(values[i] > max) {
			max = values[i];
		}
	}
	
	return max;
}

function filterEvensImperative (values) {
	var result = [];
	
	for (let i = 0; i < values.length; i++) {
		if (values[i] % 2 === 0) {
			result.push(values[i]);
		}
	}
	
	return result;
}

These functions are written in an imperative style and express every minute detail of the reduction process. We start with some sort of accumulator, whether it’s an array or a number; our variable is meant to capture the outcome as we move through our iteration. We iterate over the array, performing some action at each step, then return the result at the end.

These functions aren’t beautiful, but they are effective and predictable. For many readers, this pattern feels warm and cozy like a winter blanket. The problem we run into is, this methodology is really verbose and bloats the code. It also introduces a lot of noise. Do we really care about the inner workings of the iteration process or do we merely care about the output of our functions?

Let’s take a couple of examples from our initial three functions and rewrite them. It has been said that any recursive algorithm may be rewritten as an iterative loop. I have no evidence to support the converse, but I can say, with certainty, that we can rewrite all of these as recursive functions.

Just to catch everyone up, recursion is when a function calls itself internally to perform an iterative operation. We discussed recursion relatively recently in a different post. Essentially what we are going to do is put more focus on what happens in each step of the iteration, and make the iteration process less prominent in our code. Let’s take a look at a recursive strategy for sum and max behaviors.

function sumRecursive (values, accumulator) {
	accumulator += values.pop();
	return values.length === 0 ? accumulator : sumRecursive(values, accumulator);
}

sumRecursive([1, 2, 3, 4, 5].slice(0), 0); // 15

function findMaxRecursive (values, max) {
	var value = values.pop();
	max = max > value ? max : value;
	return values.length === 0 ? max : findMaxRecursive(values, max);
}

findMaxRecursive([2, -5, 12, 3, 89, 7, 6].slice(0), -Number.MAX_VALUE); // 89

An interesting point to note is, these functions are actually destructive in nature. We could have written them in a way that is not destructive, however it would have added complexity we don’t need to dig through at the moment. Instead, we can slice the array we are sending in to ensure the original array is not modified by the pop behavior.

Each of these recursive algorithms does something very similar. They highlight a single step in the process, allowing the programmer to focus on the immediate problem of reducing no more than two values at a time. This allows us to actually identify the real behavior we are interested in.

Recursion, of course, leaves us in a position where we have to identify a stopping condition, which was more obvious in the original, imperative, code. Nonetheless, if we choose to halt the process on the occurrence of an empty array, we can just replicate the behavior without needing to put too much extra thought in.

When we review these recursive functions, it becomes apparent the main difference is the accumulation versus comparison behavior. Without too much work, we can strip out this unique behavior and create a generic recursion function which accepts a behavior parameter as part of its argument list. Although this makes our recursion function fairly abstract, and possibly a little harder to read, it reduces the load when we start thinking about what we want to do. The recursion function can disappear as a referentially transparent black box function.

This level of abstraction allows the implementation details of our recursion to be safely separated from the details of our immediate functional need. Functions of this type, which take functions as arguments, are called higher-order functions. Higher order functions are commonly highly-abstract and can lead down a rabbit hole known as generic programming. Let’s not go there today, instead let’s cut to the chase and see our abstraction!

function add (a, b) {
	return a + b;
}

function max (a, b) {
	return a > b ? a : b;
}

function genericRecursor (behavior, values, accumulator) {
	accumulator = behavior(accumulator, values.pop());
	return values.length === 0 ? accumulator : genericRecursor(behavior, values, accumulator);
}

genericRecursor(add, [1, 2, 3, 4, 5].slice(0), 0); // 15
genericRecursor(max, [2, -5, 12, 3, 89, 7, 6].slice(0), -Number.MAX_VALUE); // 89

This generic recursion is actually the final step toward our goal, the reduce function. Technically, our generic recursor, given the way it behaves, will perform a right-reduction, but that is more than we need to bite off at the moment. We could easily rename genericRecursor to rightReduce and we would truly have a reduction function. The problem we would encounter is, our function is backwards! If we really want to replicate the behavior from our original functions we need to make a small modification. Let’s rewrite our genericRecursor as a first, and final, hand-built reduce function.

function reduce (behavior, values, accumulator) {
	accumulator = behavior(accumulator, values.shift());
	return values.length === 0 ? accumulator : reduce(behavior, values, accumulator);
}

reduce(add, [1, 2, 3, 4, 5].slice(0), 0); // 15
reduce(max, [2, -5, 12, 3, 89, 7, 6].slice(0), -Number.MAX_VALUE); // 89

The two key changes we made were renaming and switching from pop to shift. Shift is notoriously slower than pop, so this function is useful for illustration, but it lacks characteristics we would like to see in a production-ready reduce function.
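One way we could clean this up, without giving up recursion, is to walk the array by index instead of consuming it. This sketch avoids both the destructive mutation and the shift performance penalty:

function reduceByIndex (behavior, values, accumulator, index) {
	index = index === undefined ? 0 : index;
	return index >= values.length ?
		accumulator :
		reduceByIndex(behavior, values, behavior(accumulator, values[index]), index + 1);
}

reduceByIndex(add, [1, 2, 3, 4, 5], 0); // 15, and no defensive slice(0) is needed

With that caveat noted, let’s jump from our hand-rolled reduce function to the Javascript native implementation.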

Javascript’s native implementation really is a black box function if you are working only from the Javascript side. Implemented in native code, reduce works only on arrays and has a couple of shortcomings we won’t address here. Nevertheless, the native reduce is key to leveling up your fluent Javascript skills, and is a valuable tool for reducing cognitive load and SLOC bloat. Let’s take a look at a couple of examples of using reduce.

// Accumulation

var integers = [1, 2, 3, 4, 5],
	records = [{ value: 2 },
			   { value: 4 },
			   { value: 6 },
			   { value: 8 },
			   { value: 10 }];

function add (a, b) {
	return a + b;
}

function addValues (a, b) {
	return add(a, b.value);
}

integers.reduce(add); // 15
integers.reduce(add, 0); // 15

records.reduce(addValues, 0); // 30

// Maxima/Minima

function max (a, b) {
	return a > b ? a : b;
}

function min (a, b) {
	return a < b ? a : b;
}

var values = [2, -5, 12, 3, 89, 7, 6];

values.reduce(max, -Number.MAX_VALUE); // 89
values.reduce(min, Number.MAX_VALUE); // -5

If we return to our original filtering function, we can easily replicate the behavior using reduce. We will also introduce a mapping function. Reduce is so incredibly flexible we can actually accomplish many of the iterative tasks we do every day. The primary pitfall of using reduce for all of the iterative tasks is we will begin to introduce bloat again as we replicate more generic behavior. We won’t dig into the details today. Instead, let’s take a look at some of the power we get from reduce as a tool. It’s kind of the electric drill of the programming world: many uses, all of which save time and energy better spent elsewhere.

function filterEvens (accumulator, value) {
	if(value % 2 === 0) {
		accumulator.push(value);
	}
	
	return accumulator;
}

function multiplyBy10 (accumulator, value) {
	accumulator.push(value * 10);
	return accumulator;
}

function shallowCopy (original, accumulator, key) {
	accumulator[key] = original[key];
	return accumulator;
}

var originalObject = { 'foo': 'bar', 'baz': 'quux' };

[1, 2, 3, 4, 5].reduce(filterEvens, []); // [2, 4]

[1, 2, 3, 4, 5].reduce(multiplyBy10, []); // [10, 20, 30, 40, 50]

Object.keys(originalObject).reduce(shallowCopy.bind(null, originalObject), {});
// { 'foo': 'bar', 'baz': 'quux' } !== originalObject

This discussion is merely the tip of the iceberg, but it exposes the kind of work which can be done with reduce and the energy we can save by using it more often. For as frequently as complex data types like arrays and objects appear in our code, it only makes sense to work smarter and faster. With the power that comes from first class functions and higher-order functions, we can accomplish large amounts of work with small, but highly declarative behaviors.

As you look at your code, try to spot places where behaviors are repeated and the real focus should be on the data you are working with. Perhaps reduce is an appropriate solution. You might even be able to use it in an interview. I leave you with FizzBuzz performed using reduce.

function fizzBuzzify (value) {
	var result = value % 3 === 0 ? 'Fizz' : '';
	
	result += value % 5 === 0 ? 'Buzz' : '';
	
	return result === '' ? value : result;
}

function fizzBuzz (output, value) {
	output.push(fizzBuzzify(value));
	return output;
}

var integers = [1, 2, 3, /* ... */, 100];

integers.reduce(fizzBuzz, []);
// [1, 2, 'Fizz', 4, 'Buzz', /* ... */, 14, 'FizzBuzz', /* ... */, 'Fizz', 100]

Callback Streams With Function Decoration

Oct 7, 2015

Regardless of whether you prefer raw callbacks or promises, there comes a time where asynchronous behavior pops up in your application. It’s an artifact of working on the web and working with Javascript. This means that, although a function was originally written to solve a particular problem, eventually that function may need to be extended. If we follow the open/closed principle, we should not modify the original function since it almost certainly still solves the original problem for which it was designed. What to do…

Function decoration through composition gives us a powerful way to enhance existing function behavior without modifying the original function. This provides guarantees that our program remains more stable for more use cases and only introduces changes in a surgical, requirements-driven way.

Let’s start off with a core request service. It’s important to note that this is written with certain assumptions being made, i.e. we have specific modules which are already defined and that we only care about the service because it is the foundation for our request construction. This service only does a single thing: it makes a get call to a predefined endpoint with a provided ID. It’s uninteresting, but it helps to illuminate how we are going to construct our function stack.

// This is created as an object so we can pass in mocks and fakes
// for testing and abstraction purposes.
function MyDataService(httpService, urlConstantsFactory){
    this.httpService = httpService;
    this.urlConstantsFactory = urlConstantsFactory;
}

MyDataService.prototype = {
    get: function(id, callback){
        var request = {
                url: this.urlConstantsFactory.get('myDataUrl'),
                data: {
                    id: id
                }
            };
        
        this.httpService.get(request, callback);
    }
};

This is the last piece of object oriented code we are going to look at in this post. We are going to assume from here forward that this service has been instantiated with the correct dependencies injected. Now, let’s create a basic function that uses an instance of our service to make a request. This would look like the following.

function getSomeData(id, callback){
    myDataServiceInstance.get(id, callback);
}

So far I know nothing about what the callback does, but that’s okay. This is a simple wrapper to handle our request in some business-layer service. The view-model or controller code will be able to blindly call this service with a request matching the contract.
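For illustration, a view-model might consume this wrapper like so (the id and the logging are stand-ins for real application behavior):

getSomeData('someRecordId', function (error, data) {
    if (error) {
        console.log('Something went wrong:', error);
        return;
    }

    console.log('Got our data:', data);
});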

Technically we have already exposed everything that needs to be known about callback streams, but it’s a little early to end the post, since there isn’t much to be gained here, yet. If all we did was wrap up our request in another function, the goodness isn’t going to be readily obvious to someone who is coming fresh to this concept. Let’s take a look at what a callback stream looks like as an image before we start really digging in.

[Diagram: callback decoration, where each layer wraps the callback passed in from the layer above]

The important thing to take away from our diagram is no one layer needs to know anything more than what is passed from the layer above. It is unimportant to understand what the layer above does or why. It is, however, very important to know how to respond to the callback that is passed in. This is why contracts become so important in decoration. If, at any point, we break a contract, our stream will break and our application will fail. Fortunately, this adheres to the same requirements as calling any other function, so we are not introducing any greater rule strictness than we had before.
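For clarity, every callback in this post is assumed to follow the familiar error-first contract, so each layer receives and forwards the same signature:

function exampleCallback (error, data) {
    if (error) {
        // handle or propagate the failure
        return;
    }

    // do something useful with data
}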

So, back to our business-layer abstraction. Suppose something changed at the data layer and a property name in the JSON that is returned was changed. Although we would like to hope this would never happen, we live in the real world and things are never perfect. Fortunately our abstraction layer allows us to handle this gracefully, rather than having our entire application break because of a database or service change.

Here’s a transformation function.

function myDataTransformation (data){
    var transformedData = utilityLibrary.copy(data);

    transformedData['expectedName'] = transformedData['newName'];

    return transformedData;
}

You’ve probably already noticed our transformation function isn’t tied in with our callback at all. That’s actually a good thing. This function is simple, but if it contained complex logic, it would be important to isolate it and unit test it appropriately. This function does exactly one thing and the declaration is clear. Since callback streams already introduce an abstraction layer, anything we can do at each layer to make the code clear and clean will make debugging easier.
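To make the behavior concrete, assuming utilityLibrary.copy returns a plain copy of the object, the transformation simply reintroduces the property name the rest of the application expects:

myDataTransformation({ newName: 'bar' });
// { newName: 'bar', expectedName: 'bar' }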

Now, let’s take a look at an approach to handle transformation decoration. We will start off with a simple pattern and expand from there. If Josh Kerievsky taught us anything it’s that we should identify patterns as they appear in the code and refactor to them instead of doing extra, unnecessary work. Let’s write some code.

function transformationDecoration (callback, error, data){
    var transformedData = !error ? myDataTransformation(data) : data;
    callback(error, transformedData);
}

function getSomeData (id, callback){
    // Oh noes! Our data changed at the service layer. : (
    var finalCallback = transformationDecoration.bind(null, callback);

    // We still make the same call, in the end.
    myDataServiceInstance.get(id, finalCallback);
}

By making changes this way, we silently introduce changes to fix our application without having to go and touch every place where this call is made. All of a sudden data changes become a much smaller liability to mitigate. We have broken a hard dependency that would be scattered throughout our code by adding an abstraction between our view layer and our data access layer. This is one of the biggest wins the common n-tier architecture provides to us. Let’s take a look at what happens when we have a bunch of changes that happen over time.

function transformationDecoration (callback, error, data){
    var transformedData = !error ? myDataTransformation(data) : data;
    callback(error, transformedData);
}

function anotherTransformationDecoration (callback, error, data){
    var transformedData = !error ? anotherTransform(data) : data;
    callback(error, transformedData);
}

function yetAnotherTransformationDecoration (callback, error, data){
    var transformedData = !error ? yetAnotherTransform(data) : data;
    callback(error, transformedData);
}

function andYetAnotherTransformationDecoration (callback, error, data){
    var transformedData = !error ? andYetAnotherTransform(data) : data;
    callback(error, transformedData);
}

function getSomeData (id, callback){
    // Oh noes! Our data changed at the service layer. : (
    var finalCallback = transformationDecoration.bind(null, callback);

    finalCallback = anotherTransformationDecoration.bind(null, finalCallback);
    finalCallback = yetAnotherTransformationDecoration.bind(null, finalCallback);
    finalCallback = andYetAnotherTransformationDecoration.bind(null, finalCallback);

    // We still make the same call, in the end.
    myDataServiceInstance.get(id, finalCallback);
}

The amount of cut and paste I had to do to create all those functions made me die a little inside. This is really smelly code. This is where we can start recognizing patterns and cut out a bunch of duplication. What we really care about is the set of data transformations that need to be managed in our call. The rest of this has become boilerplate. Unnecessary boilerplate in Javascript is bad. Don’t do it. Let’s make a change and fix all this. I like to do this one step at a time. Sometimes things appear as you refactor that might not have been immediately obvious.

function transformDecorator (callback, transform) {
    return function (error, data){
        var finalData = !error ? transform(data) : data;
        callback(error, finalData);
    }
}

function getSomeData (id, callback){
    // Oh noes! Our data changed at the service layer. : (
    var finalCallback = transformDecorator(callback, myDataTransformation);

    finalCallback = transformDecorator(finalCallback, anotherTransform);
    finalCallback = transformDecorator(finalCallback, yetAnotherTransform);
    finalCallback = transformDecorator(finalCallback, andYetAnotherTransform);

    // We still make the same call, in the end.
    myDataServiceInstance.get(id, finalCallback);
}

That’s a lot better already. Now we don’t have to struggle with a bunch of function duplication and copy/paste hoopla. All we care about is the set of transformations we are going to use on the data. We can practically read off the transformation functions we are using in order. This is actually more descriptive of what we intended to begin with anyway!

Let’s actually do one more refactoring on our code. By eliminating one duplication problem, we introduced another, although less-painful, duplication.

function transformDecorator (callback, transform) {
    return function (error, data){
        var finalData = !error ? transform(data) : data;
        callback(error, finalData);
    }
}

function getSomeData (id, callback){
    // Oh noes! Our data changed at the service layer. : (
    let transforms = [myDataTransformation,
                      anotherTransform,
                      yetAnotherTransform,
                      andYetAnotherTransform];

    let finalCallback = transforms.reduce(transformDecorator, callback);

    // We still make the same call, in the end.
    myDataServiceInstance.get(id, finalCallback);
}

Now we’re cooking with gas! Our getSomeData function can be extended with almost no effort whatsoever now. We can simply create a new transform and then decorate the callback as many times as we need to. This decoration process relies on our original idea: callback streams. Since each layer only cares about adhering to a single contract, and we can wrap the callback as many times as we want, multiple decorations, all behaving asynchronously, can be created as a stream of decorated callbacks without worrying about a failure somewhere in the middle of it all.

The more important item to note is, this could be a single step in a long line of behaviors within a stream. We are adhering to the callback contract in our getSomeData function, so we could, just as easily, use this as an intermediate step between the requesting function and the final request. We really only care about the behavior that happens at the edges of our function, so it really doesn’t matter where this code lives!

This discussion fits in the middle of a couple of different common issues. First, this kind of decoration and function-stream behavior directly combats the “pyramid of doom” callback issue many people encounter. The other issue this deals with is exposed promise objects that worm their way through many modern Javascript projects and force us to tightly couple our data access layer to our view code. The abstractions are lost unless a new promise is created and resolved at every level throughout the stack. By thinking about the goal of your code, you take back the power of tiered applications and provide smart, well-isolated functionality which can be enhanced while keeping the rest of your codebase blissfully unaware of the ever-changing data that lives just beyond the edges of your application.

Extending Functions with Decoration through Composition

Sep 30, 2015

In object oriented design, the decorator pattern is commonly used to extend and enhance classes so they perform some new or more refined functionality. In functional programming it is possible to decorate functions as well. The decoration must follow a few rules, but the result is a very powerful technique to enhance functions statically and at run time.

At the core of functional decoration is function composition. Composition is a straightforward practice that relies on pure functions which take a predictable input and output. A trivial example is something like the following:

function add (a, b) {
    return a + b;
}

function square (x){
    return x * x;
}

// Composing the two functions:
var squaredSum = square(add(1, 2)); // 9

This is so foundational to what we know of math and computing there is actually a special notation in mathematics to compose functions this way. It’s common to see this kind of thing annotated as f ∘ g.
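If we wanted to capture that notation directly in code, a minimal compose helper might look like the following sketch; none of the examples below depend on it:

function compose (f, g) {
    return function (x) {
        return f(g(x));
    };
}

var incrementThenSquare = compose(square, add.bind(null, 1));
incrementThenSquare(2); // 9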

I was trying to think of a toy problem that could be used to demonstrate the kind of power we can get from function composition, but then it dawned on me. A great example of really powerful decoration through composition can be demonstrated through a very common problem in statistics: computing x-bar for a set of data points.

We actually already have almost everything we need to do it. Let’s create a divide function to round out the basic functions we will need to compute x-bar.

function divide (a, b) {
    return a / b;
}

That’s it. Let’s do some statistics.

The first thing we are going to need to compute x-bar is a simple mean. This is typically referred to as an average in daily life. We’re all pretty familiar with taking a basic average: take the sum of all values and divide by the number of values. Let’s build the simpleMean function.

// We need to sum all values, so let's start there.
function sum (valueList){
    return valueList.reduce(add, 0);
}

// Now we have everything we need to create simpleMean
function simpleMean (valueList){
    return divide(sum(valueList), valueList.length);
}
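A quick sanity check with a small, illustrative list:

simpleMean([1, 2, 3, 4, 5]); // 3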

SimpleMean is our first big example of function decoration through composition. As it turns out, the line gets rather fuzzy when dealing with composition and decoration. The important distinction to make is, when a function is decorated, the new function will honor the original function contract. What we can see here is sum takes a list and returns a number. SimpleMean also takes a list and returns a number. Most importantly, simpleMean wraps up the behavior of sum with another function to extend the behavior of sum without modifying or duplicating it.

Let’s dig a little deeper into our statistical functions and create a new function that normalizes the values in our list using the simpleMean function and map. It’s really important to note that normalizeValues is a composed function, but it is not a decorated function. Although we are using simpleMean to create our new function, the resulting contract does not adhere to the original simpleMean contract.

function normalizeValues (valueList) {
    var mean = simpleMean(valueList);
    return valueList.map(add.bind(null, -mean));
}
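Using the same illustrative list, every value is shifted by the negated mean:

normalizeValues([1, 2, 3, 4, 5]); // [-2, -1, 0, 1, 2]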

By creating this new function, we are now ready to start piecing together our final function which will provide x-bar. X-bar is the sum of the squares of normalized values. We have our normalized values, so let’s get to squaring and summing!

// Please note we're decorating our sum function with
// a squaring function.
function sumSquares (valueList){
    return sum(valueList.map(square));
}

function computeXBar (valueList){
    return sumSquares(normalizeValues(valueList));
}
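Putting it all together with our running example, the normalized values square to 4, 1, 0, 1 and 4, so:

computeXBar([1, 2, 3, 4, 5]); // 10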

The power we are ultimately seeing here is something that comes out of strong composition and an adherence to the contracts of our functions to produce richer and richer behaviors with only one or two lines at a time. The important thing to note while looking at these functions is the extremely limited use of variables. By building functions through composing and decorating functions, state is eliminated and the space where bugs and incorrect logic can hide is reduced to the smallest footprint.

As you work on new systems of behaviors, think about what your goal is and break it down into small, easy-to-understand steps which can be composed to create a powerful, declarative function which avoids state and provides fast, crisp, high-quality behavior.

Mainstay Monday: Hiatus

Sep 28, 2015

I have spent the last several months writing two blog posts per week. I have really enjoyed writing both the Mainstay Monday and the regular Wednesday posts, but time is a limited resource. I have started a new writing project which will consume a considerable amount of time, so I had to make a choice: I could either continue writing two blog posts a week, or I could pursue my new project.

By cutting back to a single post per week, I free up enough time to pursue my new project while maintaining my blog regularly with high-quality content. Though this means I will publish less frequently, it will serve not only me, but the Javascript community at large in the long-run.

There is no current estimate when Mainstay Monday will resume, but it is not gone forever. I am working on sorting out how I will present foundation material, and whether it makes sense to ever resume the two-post pace. Only time will tell, but in the meanwhile, we’ll meet on Wednesday, just like always!

  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

    Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

    I have been working for quite a while to devise a method for assessing web sites on their ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second, I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

    Google has some pretty neat toys for developers, and CakePHP is a friendly, well supported framework for quickly building applications. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Usabilibloat or Websites Gone Wild

    It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given a task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhancing findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.

  • The Selection Correction

    User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of  unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed would be the gaming phrase.  The one thing that I would like to note, as it was my big issue last evening, is scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok, if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
