Speeding Up Your App with Request Caching

Sep 23, 2015

Recently in my Mainstay Monday posts I’ve talked about creating linked lists and queues in Javascript. Each of those topics is so dense that just writing about the basics of creating and using them took full blog posts. Really, all of that was the lead-up to today’s post.

I’ve worked in a couple of different, large single page applications and in every one I have, ultimately, encountered a need to cache requested data and respond to multiple functions needing that data before the original request is complete. No promise framework or other specialized library ever fit the need because, really, the call should only ever be made once.

I’ve had team members solve this problem in a naive way and simply make the call over and over again. What happens, though, if the data you’re requesting is large and takes a long time to retrieve? This means you have now introduced multiple seconds of latency into your app. I think we can all agree that waiting for 5-20 seconds for data to come back is about the worst thing you can do to a user. They get frustrated and confused. They think the app has stalled or their browser has crashed.

Problem #1: I need to store the data when I get it.

This is easy. If you just need to store the data and retrieve it upon request, you can create a hashmap with keys and values. Now, when a request comes in, first your data service will look in the hashmap and see if the data already exists. If not, you go fetch it, bring it back and then, upon return, you hand the data back into your app.
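Sketched out, that first-level cache might look something like this. Keep in mind this is just a sketch: the service shape and the fetchFromServer name are made up for illustration, not code from a real service.

```javascript
// Minimal hashmap-backed cache sketch. fetchFromServer stands in for
// whatever wire call your app actually makes (hypothetical name).
function createDataService(fetchFromServer){
    var cache = {};

    return {
        get: function(key, callback){
            if(cache[key] !== undefined){
                // Cache hit: hand the stored data straight back
                callback(cache[key]);
            } else {
                // Cache miss: fetch, store, then hand the data back
                fetchFromServer(key, function(data){
                    cache[key] = data;
                    callback(data);
                });
            }
        }
    };
}
```

Notice the wire call only happens on a miss; every later request for the same key is served from memory.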

This is the most basic kind of cache and it will suffice for calls that are made infrequently. Initial data for your app to bootstrap can be handled this way. Typically a single request will be made and you can just go fetch it, then cache it. If the app wants to rebootstrap for some reason, the data is in memory and you can skip the wire call.

The more challenging issue is this: what if you have data that is requested often?

Problem #2: I need cached data to only be requested once, but the program asks for it everywhere.

This is where it gets really interesting. It’s quite common for a program to need to refer back to permissions and user capabilities often. ACL tables can get quite large and it is preferable to only request these once. The problem is, the program will need access, possibly multiple times, for even a single page. This means your application will request the same data multiple times before the service you’re accessing can return.

I’ve seen a page make 100+ requests at the same time to get data from a service. It’s not pretty.

Fortunately, queues provide the solution for this. We can queue all of the callbacks that our application generates and resolve them at once when we get the data back. This means we could, in theory, request the data on app bootstrap and never request it again. Worst case scenario is we request it just in time and the user has to wait once.

This is where the real meat of the problem is. We need to construct a queue-backed request system with a cache layer to manage our data. This all sounds a bit scary but, once we break it down, it’s all just bite-sized pieces we can easily manage. We have even already decided on the data cache structure.

Before we start down this road, I would like to point out, a friend of mine introduced me to a rule I use all the time. It makes testing easier and it makes coding easier.

Never create and use an object in the same place.

The easiest way to honor this rule, for our current problem, is the factory pattern. Our factories are going to be relatively uninteresting, but the pattern creates good seams to work with in our code, and it does a nice job of separating our concerns so we can build reasonable, correct abstractions.

So, since we know our mechanism is going to be queue-backed, we need a linked list. I went ahead and created a linked list item factory as a gist. It creates generic linked list items, so you could really use it for anything. We’re going to use it to construct our queue. Here’s our first factory, linked list items:
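Since the factory lives in an external gist, here is a sketch of what a linked list item factory might look like. The exact shape is an assumption, though the accessor names (val, next, setNext) match the ones the queue implementation relies on:

```javascript
// Sketch of a generic linked list item and its factory; the real
// version lives in the gist, so details here are assumptions.
function ListItem(value){
    this.value = value;
    this.nextPointer = null;
}

ListItem.prototype = {
    val: function(){
        return this.value;
    },
    next: function(){
        return this.nextPointer;
    },
    setNext: function(item){
        this.nextPointer = item;
    }
};

var listItemFactory = {
    build: function(value){
        return new ListItem(value);
    }
};
```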

Once we have our linked list item factory set to go, we are ready to construct our queue. Once again, we are going to want to work with a factory. Our queue logic comes straight out of the queues post, it’s just wrapped up in a factory so we can easily separate it and work with it alone. Here’s our queue:
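As with the list item, the queue itself lives in a gist, so here is a sketch following the logic from the queues post. A list item constructor is repeated so this block runs on its own, and the queueFactory wrapper shape is an assumption:

```javascript
// Self-contained sketch of a linked-list-backed queue plus factory.
function ListItem(value){
    this.value = value;
    this.nextPointer = null;
}

ListItem.prototype = {
    val: function(){ return this.value; },
    next: function(){ return this.nextPointer; },
    setNext: function(item){ this.nextPointer = item; }
};

function Queue(){
    this.queueHead = null;
    this.queueLast = null;
}

Queue.prototype = {
    peek: function(){
        return this.queueHead !== null ? this.queueHead.val() : null;
    },
    enqueue: function(value){
        var item = new ListItem(value);

        // Append to the tail, or start a new list if empty
        if(this.queueLast !== null){
            this.queueLast.setNext(item);
        } else {
            this.queueHead = item;
        }
        this.queueLast = item;
    },
    dequeue: function(){
        var head = this.queueHead;

        if(head === null){ return null; }

        this.queueHead = head.next();
        if(this.queueHead === null){ this.queueLast = null; }
        head.setNext(null); // Avoid memory leaks!
        return head.val();
    }
};

var queueFactory = {
    build: function(){
        return new Queue();
    }
};
```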

Now is where we start breaking new ground. Our cache, and its factory, are going to handle a few things. Consider this a little like learning to juggle. We have to get things in the right order, and it’s all interconnected, so you might want to read the code a couple times. Let’s have a look at the cache factory and then we can talk about it.
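The cache factory also lives in a gist, so the following is a minimal sketch of how it might work. The DataCache name and the getData method are assumptions, and pending callbacks are held in a plain array here for brevity; the full version would use the linked-list-backed queue this post is built around:

```javascript
// Sketch of a queue-backed cache: resolve from cache when possible,
// otherwise queue the callback and make the wire call exactly once.
function DataCache(requestFunction){
    this.request = requestFunction;
    this.data = null;
    this.pendingCallbacks = []; // stands in for the real queue
    this.requestInFlight = false;
}

DataCache.prototype = {
    getData: function(callback){
        if(this.data !== null){
            // Cache hit: resolve immediately
            callback(this.data);
            return;
        }

        // Cache miss: queue the callback...
        this.pendingCallbacks.push(callback);

        // ...and only make the wire call if one isn't already out
        if(!this.requestInFlight){
            this.requestInFlight = true;
            this.request(this.resolve.bind(this));
        }
    },

    resolve: function(data){
        this.data = data;
        this.requestInFlight = false;

        // Flush every queued callback in arrival order
        while(this.pendingCallbacks.length > 0){
            this.pendingCallbacks.shift()(data);
        }
    }
};

var cacheFactory = {
    build: function(requestFunction){
        return new DataCache(requestFunction);
    }
};
```

No matter how many callers ask while the request is outstanding, the request function fires once and every caller is resolved when the data lands.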

The short and sweet version of this is, we receive a local request for data, if it is in the cache, we resolve the request and move on. If we don’t have the data, we queue the callback with any other outstanding callbacks and wait for our service to return the data we need. In order to ensure we don’t have overlapping concerns, we rely on a cache instance per unique data request. Basically, this means if you make a request with { id: 1 }, it should always go through the { id: 1 } cache. This way if your application needs to come back later and request data using a different id, it can without colliding with the original data cache.

To expand this idea, let’s take a look at the steps that happen.

First, we have a cache factory. The factory takes in a request function, which it assumes only needs a callback to be complete. With this function, it news up an instance of the cache object. By using a factory, we can guarantee correct instantiation, every time. Here’s what it would look like in code:

var permissionDataService = {
    localCache: {},

    get: function(id, callback){
        // Create a cache for this id on the first request;
        // getData is whatever request method your cache object exposes
        if(this.localCache[id] === undefined){
            this.addCache(id);
        }
        this.localCache[id].getData(callback);
    },

    addCache: function(id){
        var requestMethod = permissionService.get.bind(permissionService, id);
        this.localCache[id] = cacheFactory.build(requestMethod);
    }
};

I’m assuming permissionService is already created and get is a method to perform a standard HTTP GET. Honestly, this could be anything, just as long as all of the correct parameters are bound.

It’s important to note that no request has been made yet. All we have done is create a cache we can make requests against. Our app is still sitting there patiently awaiting instructions for what to do. This entire setup process takes microseconds to complete and you now have everything you need to manage bursts of traffic gracefully.

Once we have our cache ready to go, we can make a whole bunch of calls and it will only make a single call across the wire. This gives us a major performance boost while allowing our app to carry on blissfully unaware that anything has changed. Here’s what it looks like:

// Deep in a script somewhere
permissionDataService.get(123, checkCredentials);

// Somewhere else
permissionDataService.get(123, isAllowedToView);

// Some other widget wants to know, too
permissionDataService.get(123, setDisplayState);

The first request, regardless of which it is, will cause our service to create the cache. Each subsequent call will just end up in the queue. Once the request comes back from our remote service, all callbacks will be resolved in the order they were received. Even better, if something else is kicked off in the meantime, it will simply be added to the queue, and all is set and ready to go.

Adding this kind of data management to an application introduces some complexity, so it may not be worthwhile to manage all data requests this way. However, when a particular behavior starts making lots of requests across the wire, this is a great way to throttle those requests back and improve efficiency in your app. The best part, as long as you are working modularly, is that you will only need to introduce changes in one place. This means the bulk of your application will remain precisely the way it is today while you repair a bottleneck that can slow your app down and frustrate users. So, get to profiling your apps and cache some data.

Mainstay Monday: Sorting

Sep 21, 2015

If you’re reading this you’re likely already a programmer. This means you have likely used [].sort() in your code many times. I honestly can’t remember how many times I’ve sorted data one way or another. Just because you’ve called sort on an array doesn’t mean you’re necessarily doing it the best way you can. Does the following code look familiar to you?

[5, 9, 3, 8, 7, 6, 2, 5, 4, 8].sort().reverse()

If that code looks familiar, today is the day you stop. Javascript's Array.prototype.sort function is actually a rich function that allows you to specify sorting far beyond just an ascending sort. Although reverse has its uses, the example above is just plain abuse. Stop abusing your code!

Let's take a look at how we can make use of sort in a smarter way. Sort will take a sorting function which it uses to compare two values to decide their correct order. You can define your sorting any way you please, but let's start with just sorting numbers. Here's our same array and a standard, ascending sort, but this time we're defining the direction of the sort by hand. Let's take a look at what this expansion would look like:

var numberArray = [5, 9, 3, 8, 7, 6, 2, 5, 4, 8];

function sortNumbers(a, b){
    return a - b;
}

numberArray.sort(sortNumbers); // [2, 3, 4, 5, 5, 6, 7, 8, 8, 9]

Yep, that looks like extra work to accomplish the same thing. sortNumbers just performs the standard comparison, so the array comes back sorted as expected. The win we get here is that we can now specify the sort directly. Let's have a look at reversing the sort:

function reverseNumbers(a, b){
    return b - a;
}

numberArray.sort(reverseNumbers); // [9, 8, 8, 7, 6, 5, 5, 4, 3, 2]

//This output is exactly the same as
numberArray.sort().reverse(); // [9, 8, 8, 7, 6, 5, 5, 4, 3, 2]

If we are lucky enough to only ever have to sort numbers, this knowledge isn't particularly helpful. We eliminated a linear algorithm for the sort behavior which might have shaved a couple milliseconds off the total processing time. No big woo, right?

Actually it is. Have a look:

function numberSort(values, direction){
    var directionSort = direction.toLowerCase() === 'desc' ? reverseNumbers : sortNumbers;

    return values.sort(directionSort);
}

Our little abstraction made specifying the sort really, really easy. This means you could actually change the sorting behavior at runtime based on user input! Now, that's pretty useful.
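To make the runtime aspect concrete, here's the abstraction driven by a direction string that could come straight from user input, say a sort dropdown; the functions are repeated so the snippet stands alone:

```javascript
function sortNumbers(a, b){
    return a - b;
}

function reverseNumbers(a, b){
    return b - a;
}

function numberSort(values, direction){
    var directionSort = direction.toLowerCase() === 'desc' ? reverseNumbers : sortNumbers;
    return values.sort(directionSort);
}

// direction could come straight from user input
numberSort([5, 9, 3], 'desc'); // [9, 5, 3]
numberSort([5, 9, 3], 'asc');  // [3, 5, 9]
```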

Sorting is definitely not limited to numbers. Strings are another commonly sorted array. Much like numbers, strings have a predefined comparison inside Javascript. That said, we can't simply subtract them if we want to reverse the order. Each character in a string has an ordinal (numeric) value, which gives strings a well-defined ordering; you can't subtract strings, because subtraction is meaningless for them, but one string can still be greater or less than another. Here's how we would perform a reverse sort on a string array:

var stringArray = ['foo', 'baz', 'quux', 'bar', 'snafu', 'woot'];

function reverseStrings(a, b){
    var result = a < b ? 1 : -1;
    return a === b ? 0 : result;
}

//The forward sort is left as an exercise for the reader. ; )
stringArray.sort(reverseStrings); // ['woot', 'snafu', 'quux', 'foo', 'baz', 'bar']

Now we can see, a little more clearly, what sort is really looking for. Negative numbers move values to the left, positive numbers move values to the right and zero means the values are equal. This is very helpful information we can capitalize on to do more interesting sorting.  Suppose, instead of sorting trivial arrays, we wanted to sort arrays of objects.

A common sorting happens when objects have an ID and we want the objects in ID order. By understanding how to sort strings, which support the inequality comparison operators, we can create a function that gives meaning to the idea that objectA is greater than, less than or equal to objectB, i.e. objectA < objectB.

Suppose we were comparing two objects, objectA = { id: 1 } and objectB = { id: 2 }. Comparing the objects directly is no help: objectA < objectB and objectA > objectB both evaluate to false, because the inequality operators don't meaningfully compare objects. We do know, however, that objectA.id < objectB.id === true. This is what we are going to use to write our object sorting function.

var objectArray = [{ id: 1 }, { id: 4 }, { id: 2 }, { id:7 }, { id: 3 }];

function objectSort(a, b){
    var aId = a.id,
        bId = b.id,
        result = aId < bId ? -1 : 1;

    return aId === bId ? 0 : result;
}

objectArray.sort(objectSort); // [{ id: 1 }, { id: 2 }, { id: 3 }, { id: 4 }, { id: 7 }]

This is the first time we really couldn't just use Array.prototype.sort on our array. This problem can only be solved with a specialized sorting function. This is where the power provided by the built-in sort really shines. We can actually define data comparisons on the fly, which means we can actually create a much richer experience for our users with a lot less code.

Let's actually take our object sorting one step further. Suppose we wanted to sort an array of people. The most common way this kind of list is sorted is by last name, then by first if the last names are the same. This leads us into uncharted territory. If you were simply relying on the basic sort, you would end up sorting, partitioning and sorting again.  Let's have a look at how we might solve this problem.

var people = [
        { lastName: 'Jones', firstName: 'Bob' },
        { lastName: 'Smith', firstName: 'John' },
        { lastName: 'Jones', firstName: 'Arlene' }
    ];

// First let's generalize the object sort by allowing
// for a key to be specified. This way we can define 
// a sorting methodology at runtime and reduce the amount
// of code we write.
function keySort(key, a, b){
    var aValue = a[key],
        bValue = b[key],
        result = aValue < bValue ? -1 : 1;

    return aValue === bValue ? 0 : result;
}

// Now we can use our key sort to handle
// sorting in name order, last, then first.
function nameSort(a, b){
    var firstNameSort = keySort.bind(null, 'firstName'),
        lastNameSort = keySort.bind(null, 'lastName'),
        result = lastNameSort(a, b);

    return result === 0 ? firstNameSort(a, b) : result;
}

people.sort(nameSort);

/* output:
[
    { lastName: 'Jones', firstName: 'Arlene' },
    { lastName: 'Jones', firstName: 'Bob' },
    { lastName: 'Smith', firstName: 'John' }
]
*/

Now, that's some good sorting! We have all of the names in the right order with just a little bit of work. All of a sudden what looked like unnecessary abstraction becomes a big win. We sort on last name for every record, but we only sort on first name if the last name is the same. This allows us to not only sort complex arrays, but do it in a smart way to guarantee the best performance we can get.

This algorithm is great if you only need to sort first name and last name. What if you actually want to sort on a set of different keys? More importantly, what if you want to sort on a set of keys that are specified at runtime? This is a new, interesting problem which relies on what we learned with our people array. Let's generalize.

// Here's our starting point for a complex sort
function complexKeyedSort(keys, a, b){
    var sortFunctions = keys.map(key => keySort.bind(null, key)),
        result = 0;

    // shift, not pop: the keys arrive in priority order
    while(result === 0 && sortFunctions.length > 0){
        result = sortFunctions.shift()(a, b);
    }

    return result;
}

//Here we apply it to our people array:
var nameSort = complexKeyedSort.bind(null, ['lastName', 'firstName']);
people.sort(nameSort); // Output is identical. Try it!

This is a good start, but we still have a problem. We are rebuilding our sort function array on every step of the sort. For small arrays, this is probably fine, but when our arrays get big, this is dangerous. We will start seeing a bottleneck and it will be difficult to identify. Let's use the factory pattern to retool our function and get some really great code.

function reducedComplexKeyedSort(sortArray, a, b){
    var result = 0,
        index = 0,
        // Only read the length once for a micro-enhancement
        sortArrayLength = sortArray.length;

    // Updated while loop to be array-non-destructive
    while(result === 0 && index < sortArrayLength){
        result = sortArray[index](a, b);
        index++;
    }

    return result;
}

function complexKeyedSortFactory(keys){
    // Performs sort algorithm array construction only once
    var sortFunctions = keys.map(key => keySort.bind(null, key));

    // Returns bound, ready to use sort algorithm
    return reducedComplexKeyedSort.bind(null, sortFunctions);
}

// Putting our new sort factory to use:
var nameSort = complexKeyedSortFactory(['lastName', 'firstName']);

// This will fall back, ever deeper into the object to give us
// a rich, deep sort with only two lines of code.
var complexObjectSort = complexKeyedSortFactory(['foo', 'bar', 'baz', 'quux']);

With all of this complexity, we are actually missing one last piece of the puzzle, a reverse sort. Even with our most complex sort managed by our reducedComplexKeyedSort, we might still want to reverse the entire sort. We agreed at the beginning that calling .reverse() on a sorted array is kind of a gross, hacky way to do things. Fortunately, reversing the order is really, really easy. All that has to be done is multiply the sort outcome by -1.  Here's some evidence:

// Here's a forward number sort
sortNumbers(a, b) === a - b;

// If we multiply by -1 we get this:
-1 * sortNumbers(a, b)
    === -1 * (a - b)
    === -1 * (a + (-b))
    === (-1 * a) + (-1 * (-b))
    === (-a) + (-(-b))
    === (-a) + b
    === b + (-a)
    === b - a
    === reverseNumbers(a, b)

// This means
-1 * sortNumbers(a, b) === reverseNumbers(a, b)

I'm not going to walk through a formal proof, but the evidence is pretty compelling. This means we could actually write a function which will return a reverse sort function. Now we only have to know how to sort one direction and we can actually switch between directional sorts easily. Here's a sort reverse function:

function reverseSort(sortFunction){
    return function(a, b){
        return -1 * sortFunction(a, b);
    };
}

Let's use our reverseSort to reverse the order of our most complex sort, constructing our sort function from the ground up.

var complexObjectSort = complexKeyedSortFactory(['foo', 'bar', 'baz', 'quux']),
    reversedComplexObjectSort = reverseSort(complexObjectSort);


By abstracting out our object sorting behavior, we have taken an object we know nothing about and produced a sorting algorithm which will sort our objects in reverse by keys in an ordered, refined way. That's some pretty powerful stuff.

To sum up, sometimes we can get away with simply using the built-in sort, and we can even hack in a reverse to give us ascending and descending sort behavior, but the world of array sorting is large and full of twists and turns.

We've introduced a simple way to address sorting which requires more than just relying on the language supported comparison, which will likely carry you through most common sorting tasks. We have also introduced a higher-level abstraction for defining complex sorting. Finally we developed a higher-order function which allows us to easily reverse any sort function.

This kind of development provides a strong way to reduce the amount of code you write and enhance the functionality of your program while you do it. You can look back at the code you've written and refactor it to be more maintainable and simpler to build upon. Now go forth and do some great sorting!

Blog Notes

I created a final factory for handling complex sorting with ascending and descending order handled in a SQL style. This will allow for sorts like the following: keyedSortFactory.build(['column1 asc', 'column2 desc', 'column3 desc']); Please see the gist here: Keyed Sort Factory
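The gist isn't reproduced here, but a sketch of such a factory might look like the following. The supporting functions from the post are repeated so the block stands alone, and the exact spec parsing is an assumption based on the SQL-style description above:

```javascript
function keySort(key, a, b){
    var aValue = a[key],
        bValue = b[key],
        result = aValue < bValue ? -1 : 1;
    return aValue === bValue ? 0 : result;
}

function reverseSort(sortFunction){
    return function(a, b){
        return -1 * sortFunction(a, b);
    };
}

function reducedComplexKeyedSort(sortArray, a, b){
    var result = 0,
        index = 0;
    while(result === 0 && index < sortArray.length){
        result = sortArray[index](a, b);
        index++;
    }
    return result;
}

// Sketch only: parses 'column direction' specs, defaulting to asc
var keyedSortFactory = {
    build: function(sortSpecs){
        var sortFunctions = sortSpecs.map(function(spec){
            var parts = spec.split(' '),
                baseSort = keySort.bind(null, parts[0]);

            return (parts[1] || 'asc').toLowerCase() === 'desc' ?
                reverseSort(baseSort) :
                baseSort;
        });

        return reducedComplexKeyedSort.bind(null, sortFunctions);
    }
};
```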

Five Things That Will Improve Your (Javascript) Code

Sep 16, 2015

I see lots of discussion around new frameworks, libraries and other odds and ends, which claim to make your code better, cleaner, easier to maintain, etc. Although frameworks are definitely useful for avoiding reinventing the wheel and libraries help clear out boilerplate you would have to litter your code with, the cake they offer of better code is a lie.

As client-side development in the browser becomes more focused on solving application-sized problems, the days of merely dropping in a plugin or a library to accomplish what you need are coming to an end. Javascript living both in the browser and on the server further leads to large, potentially complex code bases. If we simply continue working the way we did 10 years ago, we end up with a spaghetti mess that is completely unmaintainable.

Don’t panic!

There are things you can do to save yourself from impending doom. There are actually five things I recommend every Javascript programmer do to make their code and their lives better. It’s a lot like reinventing yourself, though: it takes work and dedication, but it is worth every ounce of effort you put in. I would do a countdown, but I always recommend these items be done in order.

1. Test Driven Development (TDD)

If there is one thing on this list you should start doing immediately, it is test driven development. TDD allows you to define, up front, what the business requirements are that your code should adhere to. You actually write tests before you write code.


This means, first you write a test and run it. That test should fail. If your test doesn’t fail you are either writing tests against existing code (not TDD) or you wrote a test that tests nothing. Make sure your test fails first. Once your test fails, write just enough code to pass that test. Run the test again. Green means passing. When your test goes green, your code is good. Once you have written enough code to get messy, refactor, ensuring your tests continue to pass.
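As a tiny illustration of the cycle (the sum function is a made-up example with a bare assertion, not tied to any particular test framework):

```javascript
// Step 1 (red): write the test first. Run it before sum exists
// and it fails, proving the test actually tests something.
function testSum(){
    if(sum(1, 2) !== 3){
        throw new Error('sum should add two numbers');
    }
}

// Step 2 (green): write just enough code to pass the test.
function sum(a, b){
    return a + b;
}

// Step 3: run the test again; green means passing.
testSum();
// Now refactor freely, re-running the test as a safety net.
```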

It doesn’t really matter which testing framework you choose. At my work we use Jasmine, but Mocha is fine as is Tape. There is always a flavor of the week testing framework, so pick one that makes sense to you and use it. The choice is completely up to you.

2. Static Analysis

Static analysis is, basically, a program that checks your source code against a set of rules and warns you of potential errors and bugs in your code. The sooner you can get static analysis going on your code, the better. I recommend you set it up before you even write your first unit test.

Current popular static analysis tools are JSLint, JSHint and ESLint. I prefer JSHint only because I have used it and I’m very familiar with the tool. ESLint is the newest of the three and people swear by it. Both JSHint and ESLint support ECMAScript 2015/2016 syntax. JSLint is the oldest and I am unsure as to its progress regarding new syntax. Douglas Crockford is a busy guy and he may or may not have the time to maintain it.

Another way to get static analysis into your project is to use a language designed for transpilation. Transpilation is the process of compiling source code into Javascript. The output could be either human-readable code or ASM, though I tend to prefer human-readable output for no particularly good reason except ASM makes me think of Emscripten which makes me cringe.

Languages which will provide good static analysis include TypeScript and Elm. TypeScript allows you to define the type contracts your functions and methods will adhere to, which means you get some good information up front about the safety of a function call. Elm is a functional view-layer language with very strict code and type rules; because of this Elm provides greater code stability guarantees.

3. Functional Programming

The kind of functional programming I am talking about is not just introducing Underscore or Lodash into your project, but really embracing the ideas that make functional programming great: immutability, no side effects, function composition and function abstraction.

Even though Javascript variables are mutable, by treating them as immutable, you add stability to your code by avoiding changing state under your own feet. When a value is set and remains as it was assigned, you can rest assured that your code will always behave the same way when you refer to that variable.
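A small sketch of what that mindset looks like in practice: derive new values rather than changing existing ones.

```javascript
// Treating an array as immutable: copy before sorting so the
// original value never changes out from under anyone holding it.
var original = [3, 1, 2];

var sorted = original.slice().sort(function(a, b){
    return a - b;
});

sorted;   // [1, 2, 3]
original; // [3, 1, 2] -- untouched
```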

By eliminating side effects in as much of your code as you can, you create well defined units of code which behave the same way every time and, better yet, are easy to test. This makes the first item on our list even easier to satisfy which makes your program better.

Function composition is the process of creating small abstracted units of code as functions without side effects which are then put together to do more complex work. This kind of modularity and abstraction makes it much easier to test and, when necessary, debug your code.
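Here's a minimal sketch of composition in action; compose is a hand-rolled helper here, not a library function:

```javascript
// Combine two pure functions into a third, with no side effects.
function compose(f, g){
    return function(x){
        return f(g(x));
    };
}

function double(x){ return x * 2; }
function increment(x){ return x + 1; }

var incrementThenDouble = compose(double, increment);

incrementThenDouble(5); // double(increment(5)) -> 12
```

Each piece can be tested alone, and the composed behavior falls out for free.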

4. Data Structures and Algorithms

In any computer science program, the data structures and algorithms course is considered critical to computer science thinking. By getting familiar with the well-known data structures and algorithms, you lay a foundation upon which you can build your knowledge and skills. That foundation helps you more quickly analyze business concerns, as well as existing code, which will start to display recognizable patterns.

Data structures reach beyond the basics of variables, arrays and objects and dive into the concept of lists, stacks, queues and trees. These complex structures provide much cleaner, smarter solutions to common problems and can provide insight into problems which might be hard to identify without this kind of core understanding.

In much the same way that data structures provide data-related solutions to problems, algorithms provide insight into code and how to build in efficiency and good structure. Topics like sorting, searching and working with complex data structures will give a functioning foundation for how to integrate data solutions into your projects.

5. Design Patterns

Finally, to really cap the knowledge you have gained from the rest of the list, design patterns are a set of solutions which have been discovered and well documented. Design patterns tie together all of the practices with testing, abstraction, data solutions and algorithms and provide a well known structure to add to your program as common problems and patterns emerge.

As evidenced by my list, design patterns are not where you begin, but where you enhance. Good use of design patterns will enhance well-architected code and provide clarity when the going gets rough. These patterns are not so much a law as a guideline to help make good programs better and bad programs stable.


There are many other important practices I could have thrown into this list, like polyglot programming and theoretical studies, but these things are only useful once a strong foundation is set. In studying these five major topics it is highly likely you will encounter different programming languages and different schools of thought. This is a good thing.

The greater the number of perspectives a programmer is introduced to, the better they are bound to become. This list is not something a good developer can go through once and be done. Instead it is a cycle which should be recognized and embraced. By learning, developers grow and by growing, developers improve their world.

Hopefully the topics I presented here resonate with you and you share this with others who want to continue their journey to be their best. Even the journey of a thousand miles is started with a single step, so take your step and become better every day.

Mainstay Monday: Queues

Sep 14, 2015

A couple weeks ago, we looked into linked lists. Sadly, linked lists are kind of a standalone topic that don’t require much more than basic objects in order to function as designed. Queues, on the other hand, can easily spring forward from linked lists, and they offer a way of working with data much as you might with generators!

Queues are a great resource for dealing with anything from data stores which are being updated in one place and read from another to dealing with data requests against an endpoint and a cache. Queues give you a reliable way to store data and interact with it later in the same order. Since functions are first class in Javascript, this means functions are actually data, which can be stored in a queue as well. This kind of power opens a world of possibilities.

Let’s take a look at a queue in action.

// Does stuff and stores it
function processAndEnqueue(queue, process, value){
    queue.enqueue(process(value));
}

// Drains the queue over time, one timeout tick per value
function slowResolve(queue, resolutionProcess){
    setTimeout(function(){
        if(queue.peek() !== null){
            resolutionProcess(queue.dequeue());
            slowResolve(queue, resolutionProcess);
        }
    }, 0);
}

// setTimeout needs a function reference, not the result
// of calling the callback immediately
function fakeAsyncRequest(callback, value){
    setTimeout(function(){
        callback(value);
    }, 10);
}

function square(value){
    return value * value;
}

function enqueueSquares(queue){
    var resolutionCallback = processAndEnqueue.bind(null, queue, square);

    for(let i = 0; i < 10; i++){
        fakeAsyncRequest(resolutionCallback, i);
    }
}

function logSquares(queue){
    var log = console.log.bind(console);
    slowResolve(queue, log);
}

var squareQueue = new Queue();

enqueueSquares(squareQueue);
logSquares(squareQueue);


That’s a lot of code for a simple example, but I think you’ll get the idea. Essentially we want to call a service to get a value, resolve the value and store it in the queue. Once that is complete, we want to log everything that was queued up. The problem we would otherwise run into is that the queue may not be filled completely before we start dequeueing. This approach allows us to start with the first value and let the rest filter in over time.

Queues are a common data structure used throughout programs in every language to solve the same kinds of problems: we want to perform one action and allow other actions or data to wait until we are ready. It’s a lot like people waiting to get into a movie on opening night.

Since I already talked about linked lists, you probably have an idea where I am going with all of this. Let’s use the list object we created in the last post to build our queue. It turns out that once we have a linked list item definition, we can create a queue with just a bit of effort.

//Linked-list-backed queue

function Queue(){
    this.queueHead = null;
    this.queueLast = null;
}

Queue.prototype = {
    getHeadValue: function(){
        return (this.queueHead !== null) ? this.queueHead.val() : null;
    },

    peek: function(){
        return this.getHeadValue();
    },

    enqueue: function(value){
        var queueItem = new ListItem(value);

        // Append item to the end of the list
        if(this.queueLast !== null){
            this.queueLast.setNext(queueItem);
            this.queueLast = queueItem;
        }
        // If there is no current list, create it with
        // the new value as the single item.
        else {
            this.queueHead = queueItem;
            this.queueLast = queueItem;
        }
    },

    dequeue: function(){
        var value = this.getHeadValue(),
            queueHead = this.queueHead;

        // If there is a head element, move to the next,
        // otherwise leave queueHead as null
        if(queueHead !== null){
            this.queueHead = queueHead.next();
            queueHead.setNext(null); // Avoid memory leaks!
        }

        // This checks to see if the head and the last point to
        // the same object. If so, the queue is now empty.
        if(queueHead === this.queueLast){
            this.queueLast = null;
        }

        return value;
    }
};

There is a little bit of logic here to ensure we don’t leave dangling pointers and we don’t have null pointer exceptions, but other than this, we’re basically moving through a list that has the capability to grow on one side and shrink on the other. This is the perfect structure for dealing with data which isn’t infinite, but could grow to an arbitrary size at an arbitrary rate.

Why not just use an array for this?

It turns out we can do this in a quick and dirty way with an array. I’ve done it and you can do it too. I’ll even give an example of how to do it, but I would argue that you shouldn’t. Arrays are really, really good at handling random reads from a location. This means, if you have data you want to bounce around in, arrays are a good way to manage that. While you get this random access behavior, you have to pay for it somewhere. That cost is built in to the allocation and management of array memory.

Let’s take a look at a queue built on an array and then we can talk about the problems.

// Array-backed queue

function ArrayQueue(){
    this.queue = [];
}

ArrayQueue.prototype = {
    peek: function(){
        return (this.queue.length > 0) ? this.queue[0] : null;
    },

    enqueue: function(value){
        this.queue.push(value);
    },

    dequeue: function(){
        return this.queue.shift();
    }
};

As you can see, this code is really short and easy to write. For a small queue written in a naive way, this will suffice, but there is something dangerous lurking just beneath the surface. When we create our array, it allocates space for the array to live in. This space will grow with our array at a linear rate, which is okay, though non-optimal. The real problem comes in when we perform a shift.

Shifting an array involves retrieving the value from the head of the array, and then moving each of the elements into a new position in the array to fill the head space which was shifted out of the array. This kind of element movement and array space reallocation is really, really slow.
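A rough way to feel this cost is to drain a large array once with shift and once with pop. The numbers depend entirely on the runtime (modern engines have optimized shift considerably since this was written), so treat this as a sketch, not a benchmark suite:

```javascript
// Drain a queue with the supplied step function,
// reporting elapsed milliseconds.
function timeDrain(queue, drainStep){
    var start = Date.now();

    while(queue.length > 0){
        drainStep(queue);
    }

    return Date.now() - start;
}

var shiftTime = timeDrain(new Array(100000).fill(0), function(q){ q.shift(); }),
    popTime = timeDrain(new Array(100000).fill(0), function(q){ q.pop(); });

console.log('shift: ' + shiftTime + 'ms, pop: ' + popTime + 'ms');
```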

This slowness comes from the fact that an array in Javascript has to abide by particular rules to remain predictable. If we were to do the following, and the elements weren't moved as described, here's what would happen:

var myArray = [1, 2, 3, 4];
console.log(myArray.shift()); // 1

//What would happen without reallocation:
console.log(myArray[0]); // undefined

//What REALLY happens because of reallocation
console.log(myArray[0]); // 2

This kind of reallocation is exactly what we avoid by using a linked list. Linked lists grow and shrink in constant time and position 0 is always the head of the list. Since queues only ever interact with the first and last elements of a set of values, lists give us the improved performance we need to ensure, even with large queues, we don’t encounter the kinds of difficult to diagnose bottlenecks in our code that can cause slowness.

Queues are a great example of a use for linked lists in the wild and they provide a useful mechanism for lining up data and handling it in a predictable order. With a little creativity, queues can provide a means to manage a cache, handle processes in an orderly way and even provide the backbone for generator-like behavior. Take our queue code and play with it. You might find a solution to a problem that has been challenging you.
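As one example of that creativity, here is a sketch of the request-caching idea from the top of this post: the first caller triggers the fetch, later callers line up until the data arrives, and everyone after that reads from the cache. The fetchData function is a hypothetical async call taking a callback, and the waiting array plays the role our Queue is built for; with many simultaneous callers you could swap the linked-list Queue in to avoid the shift cost.

```javascript
// One-call request caching: the first caller triggers the fetch,
// later callers queue their callbacks until the data arrives.
// fetchData is a hypothetical async function taking a callback.
function createCachedRequest(fetchData){
    var cache = null,
        inFlight = false,
        waiting = [];

    return function request(callback){
        if(cache !== null){
            callback(cache); // Cache hit: respond immediately
        } else {
            waiting.push(callback); // Line up until the data lands

            if(!inFlight){
                inFlight = true;
                fetchData(function(data){
                    cache = data;
                    // Drain the queue of waiting callers
                    while(waiting.length > 0){
                        waiting.shift()(cache);
                    }
                });
            }
        }
    };
}
```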

Code Generation and You

Sep 9, 2015

A friend of mine and I have discussed code generation on several occasions. We both agree that any enterprise development process should involve code generation, without exception. Though development from scratch may not provide enough identifiable boilerplate to justify building templates for your code, the moment a framework is in use you will, undoubtedly, have a certain amount of boilerplate introduced which must be set up repeatedly for each new file you create.

I work with a team using Angular. We have certain style requirements for code, and Angular has a particular way of handling the setup for every code component you create. Beyond this, we have uncovered certain patterns which introduce sets of modules we need to construct, which brings in even more boilerplate.

I have timed the setup for a standard controller and the baseline unit tests which need to be created before any new development and on average it takes about 10 minutes to create all of the files and type all of the boilerplate. For a one-time cost, this is a minimal requirement, but when working in an enterprise environment, this can add up to many lost hours each month.

Here’s the worst part of losing those hours: no problem was solved in that time.

This means for any given person who is creating a new module, they are performing rote behavior over and over without adding any real value to the project. Programmers are not paid to simply pound keys. If key pounding was all that was necessary to create an application, they would pay the smallest wage possible. Data entry is a good example of a key-pounding job. I think we can all agree that programming is more than simply performing data entry.

If this kind of rote file creation and boilerplate typing is the most basic of work, then why do we continue to do it? It seems as though rote behavior is something we often look to automate, which is why the profession exists in the first place.

I say, are we not programmers?

Automation is the name of our game, so it seems reasonable that we would want to take this wasted time and have the computer do it for us. This is what computers are for. Fortunately, this is a solved problem!

My team uses Yeoman to create our boilerplate. We have a defined application file structure, and the modules we create always have a certain amount of boilerplate associated with them. Yeoman is really, really good at taking templates and creating files with them. Here's an example of what an Angular controller (in ES6) looks like, with no interesting bits:

    'use strict';

    var moduleName = 'basic.module';

    angular.module(moduleName, []);

    class Controller{
        constructor($scope){
            this.$scope = $scope;
        }
    }

    angular.module(moduleName)
           .controller('basic.controller', Controller);

That is an entire file of boilerplate, and we haven't even created the unit tests for it, yet! Beyond that, most controllers are associated with a view or a directive, which means we have at least two or three more files, full of boilerplate, left to create. If we were using ES5, there would be even more code here that produced nothing of new value. This is not the reason we got into programming.

Let’s take a look at what a Yeoman template would look like instead.

    'use strict';

    var moduleName = '<%= controllerName %>.module';

    angular.module(moduleName, []);

    class Controller{
        constructor($scope){
            this.$scope = $scope;
        }
    }

    angular.module(moduleName)
           .controller('<%= controllerName %>.controller', Controller);

This is so similar to the code we wrote above that it's difficult to see where one leaves off and the other picks up. Here's the big win, though: we spent 10 or 15 minutes creating this template and now we never have to do it again!
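Yeoman fills those <%= controllerName %> placeholders with an EJS-style interpolation pass over the template file. A toy stand-in for that substitution (the real engine handles far more than this) might look like:

```javascript
// Toy EJS-style interpolation: replaces each <%= key %> with context[key].
// Illustration only; Yeoman delegates to a full template engine.
function renderTemplate(source, context){
    return source.replace(/<%=\s*(\w+)\s*%>/g, function(match, key){
        return context[key];
    });
}

var rendered = renderTemplate(
    "angular.module('<%= controllerName %>.module', []);",
    { controllerName: 'basic' }
);

console.log(rendered); // angular.module('basic.module', []);
```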

Let’s have a look at what the actual script looks like to populate this controller:

'use strict';

var generators = require('yeoman-generator'),
    path = require('path');

module.exports = generators.Base.extend({

    constructor: function(){
        generators.Base.apply(this, arguments);

        this.argument('controllerPath', { type: String, required: true });
    },

    setup: function(){
        this.controllerName = this.controllerPath.split('/').pop();
        this.localPath = this.controllerPath.split('/').join(path.sep);
    },

    performCopy: function(){
        var controllerDestination = ['app', 'controllers', this.localPath].join(path.sep),
            controllerOutput = [controllerDestination, this.controllerName + '.js'].join(path.sep),
            context = {
                controllerName: this.controllerName
            };

        this.template('controller.js', controllerOutput, context);
    }

});

That’s it. This might not be the fanciest Yeoman generator, but it reliably outputs a controller, with the correct parameters filled, every. single. time. With just a few more lines of code and another couple templates for our tests and views, we can type one line into the shell and get all of our working files spun up in seconds.

Let’s do a rough, back of the envelope comparison so we can see the amount of time saved by using generators. Let’s say you have a small to medium-sized SPA you are building and it contains 150 Javascript source files. On top of that, you will have unit test files for each of those source files, so that’s another 150 files. Atop all of that, you have views that associate with, let’s say, 1/3 of those files, so that’s another 50 files. Let’s say, for argument’s sake that it takes 3 minutes per file to generate these files by hand.

350 * 3 / 60 = 1050 / 60 = 17.5 hours

Now, let’s assume you created each of these files with a generator, and let’s assume your computer is slow, so it takes 1.5 seconds per file. Let’s do a little more math so we can see how long this takes in comparison:

350 * 1.5 / 60 = 525 / 60 = 8.75 minutes

Let’s take the value that hired says is average pay for a Javascript developer in Los Angeles, $130,000US/year, and divide it by the number of hours in the average work year, 2087. This means we have an hourly rate of about $62. Now, let’s multiply that $62 by 17.5 and we get $1085. That’s some expensive boilerplate!

With our same numbers, a developer is working for a little more than $1/minute to generate boilerplate. Let’s assume this same developer generated all of their code with templates instead of by hand. Now our code cost around $10 to generate on a slow computer. That is TWO ORDERS OF MAGNITUDE different.
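The back-of-the-envelope arithmetic above is easy to double-check in a few lines of Javascript; the inputs (350 files, 3 minutes per file by hand, 1.5 seconds per file generated, $130,000 over 2087 hours) are the same assumptions used in the prose:

```javascript
// Check the boilerplate cost comparison from the prose above.
var fileCount = 350,
    minutesPerFileByHand = 3,
    secondsPerFileGenerated = 1.5,
    hourlyRate = 130000 / 2087; // roughly $62/hour

var handHours = fileCount * minutesPerFileByHand / 60,           // 17.5 hours
    generatedMinutes = fileCount * secondsPerFileGenerated / 60, // 8.75 minutes
    handCost = handHours * hourlyRate,
    generatedCost = (generatedMinutes / 60) * hourlyRate;

console.log(handHours + ' hours by hand vs ' + generatedMinutes + ' minutes generated');
console.log('$' + handCost.toFixed(0) + ' vs $' + generatedCost.toFixed(0));
```

Whatever hourly rate you plug in, the ratio between the two costs stays fixed at 120, which is where the two-orders-of-magnitude claim comes from.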

People like to talk about exponential growth, but what we have here is a plain order-of-magnitude win: using code generation instead of writing boilerplate by hand cuts the cost of each base file by a factor of about 100. Hand-typed boilerplate is 100 times as expensive.

The short version of all of this is, if you aren’t using code generation, you should be. If your manager tells you that it takes too much time to get code generation integrated into your project, just send them to me. Does it take too much time to decrease the cost and time of production by a factor of 100? I think not.

  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher: the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

    I have been working for quite a while to devise a method for assessing web sites and their ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second, I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

    Google has some pretty neat toys for developers and CakePHP is a pretty friendly framework to quickly build applications on which is well supported. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Usabilibloat or Websites Gone Wild

    It’s always great when you have the opportunity to built a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given a task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhance findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.

  • The Selection Correction

    User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of  unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed would be the gaming phrase.  The one thing that I would like to note, as it was my big issue last evening, is scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok, if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
