Types, Sets and Characteristics

Jun 22, 2016

A couple weeks ago, we looked at using Signet and some of the core types to add type information to function calls. Although it is handy to have a variety of base types available to provide signatures for your functions, sometimes you want more control and finer-grained behavior.

At the most foundational level, applied types can be viewed as sets of values. This means, for any type, we can easily construct a set which will describe that type. For instance, the type ‘string’ can be written as the set of all values which are strings. Although this may seem like a trivial way to perform a conversion of a type to a set, it gives us a way to start rethinking the way we interact with type information.

Sets as Types

We can, somewhat informally, say that sets are types. Although this doesn’t capture the nuance of types, it allows us to capture a lot of power in a simple idea. We looked at defining the string type as a set of values. What this really means is that strings are a particular set of values within the collection of all assignable values.

If we begin our sets by considering all values assignable and available within Javascript, we can refer to that set as our “universe.” Within that universe, we could choose a variety of different sets, but regardless of which set we choose, the new set will be contained completely within the original universe.
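Even the universe itself can be described with the same kind of membership test we will use for every other set in this post. The following is purely illustrative, and the name isAnyValue is hypothetical:

    function isAnyValue (value) {
        // every Javascript value is a member of the universe, so membership is trivially true
        return true;
    }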

Using this universal definition, we can return to our strings and consider how we might describe the set of all strings. First of all, we can ask whether a value is contained within our universe at all. A good example of a value which is distinctly NOT in the Javascript universe is 1000! (1000 factorial). Although this integer certainly exists mathematically, Javascript will simply evaluate it as Infinity. This is not something we will need to test for; it is simply an indication of an upper bound in Javascript.
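To see that upper bound in action, here is a small, hedged sketch; factorial is an illustrative helper written only for this example, not something we rely on later:

    function factorial (n) {
        // naive recursive factorial, used only to demonstrate numeric overflow
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    factorial(170);  // roughly 7.26e+306 -- still representable
    factorial(1000); // Infinity -- beyond what Javascript numbers can express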

We could, however, define a set of numbers we declare as our domain set; we can call this an explicit set definition. Alternatively, we can define a set by way of excluding items which are not in the set. This inverse set can, equally, be declared explicitly, or we can define a function which simply tells us whether something is in the set or not. This implicit method of creating a set can be referred to as an induced set.
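To make the explicit/induced distinction concrete, here is a quick sketch; smallNumbers and isSmallNumber are hypothetical names invented for this example:

    // explicit set definition: every member is listed up front
    var smallNumbers = [0, 1, 2, 3, 4, 5];

    function isInSmallNumbers (value) {
        return smallNumbers.indexOf(value) !== -1;
    }

    // induced set: membership is decided by a predicate instead of a listing
    function isSmallNumber (value) {
        return typeof value === 'number' && value >= 0 && value <= 5;
    }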

Let’s take a look at a meaningful question we could ask. Let’s ask if a value is a number. This means we are going to call a function which accepts a value of any type (*) and returns a boolean. This kind of function is called a characteristic function.

    function isNumber (value) {
        return typeof value === 'number';
    }

Although this function, by itself, is not a big step away from what we already know, it lays the groundwork for defining a richer set of types we can interact with. From the function isNumber, we get an induced set of all values which are numbers, rather than defining the type set explicitly.

Propositions and Predicates

It has been shown in the academic community that propositions are types. What this means is we can actually consider propositions such as A and B in the expression A ∨ B as types unto themselves.

Any proposition can either resolve to a theorem (⊤) or an antitheorem (⊥), which roughly equates to the idea of true or false. In other words, we can ask a question and the answer will either be “yes” or “no.” Although it may not be obvious at first, this is the foundation we will use to construct a richer set of types within Javascript.

We have already seen type construction where we use a predicate to manage inclusion in a type set. We are stating that a value α must be in the set of numbers if α is a number. Even stronger than that, we can say, since our set contains ONLY values which are numbers, that if α is in the set of numbers it has a type of number. This relationship between sets and type implementations is important for capturing greater amounts of information about a value as we construct subsets from sets we have defined. Let’s have a look at the logical notation, so we can tie the set relations back to our predicate functions.

Njs = the set of all values which are Javascript numbers
A: α is a Javascript number
B: α ∈ Njs

A → B
B → A

Of course, in our implementation we really only worry about the second relation, i.e. that if a value is in the set of values which conform to the number type, the value must also conform to the number type. Given the definition of B, α ∈ Njs, we can actually conclude that the first relation is true given the definition of Njs.

We can actually reformulate this to express the type-set relation more generally. If we simply replace our specific number set with a set of any type Τ we get a new, very useful formulation we can use to extend our type reach well beyond the specifics of the language.

Τjs = the set of all values expressible in Javascript of type τ
A: ατ is a Javascript value of type τ
B: ατ ∈ Τjs

A → B
B → A

That’s a lot of symbols, words and relations. What this really means is we can identify and define any arbitrary type logically and, in turn, define a set containing all values of that type, which will induce an “if and only if” (iff) relationship. Let’s take a look at how we might use this to implement our own type.

Defining a New Javascript Type

Clearly we won’t be able to build our type into the core language without getting on TC39, issuing a new standard and waiting for several years while everyone adopts it, but we can induce our type through a new predicate function. Let’s suppose we want to define a new type, Integer. We could express our type in the following way:

Intjs = the set of all values expressible in Javascript of type number which are integers
αint ∈ Intjs

With this, we can define a function expressing this relationship, which we can use to verify whether a value is in our set Intjs or not. With regard to the relation between types, expressible values in Javascript and our integer set, we can guarantee the stability of our type and the correctness of our verification.

    function isInt (value) {
        return typeof value === 'number' && Math.floor(value) === value;
    }

    isInt(5); // true
    isInt(9.3); // false

Although this function is sufficient for verifying whether a value is an integer, we are actually duplicating our efforts. Moreover, it lacks a certain expressiveness which we might like to see. Let’s use our original isNumber function to say a little more about the meaning of our int type.

    function isInt (value) {
        return isNumber(value) && Math.floor(value) === value;
    }

This new function performs the same check as the original, but it reflects a deeper relationship between our number set, Njs, and our integer set, Intjs. In other words, what we can see expressed here is the typical inheritance property of the is-a relationship.

The Is-A Relationship of Types

As is true for objects in classical object oriented programming, types can also have an inheritance relationship where one type is a subtype of another. This is what we mean by is-a relationship. We can say an integer is a number, or a name is a string. Although an integer can be a type in its own right, we know the number type is the foundation type in Javascript for any numeric representation. This means, for any function which requires a number, an integer is an acceptable value.

Our isInt function demonstrates the is-a relationship by using the number set definition as the first requirement of our check for set inclusion. Let’s continue the chain and create a characteristic function defining our natural numbers. Our natural number set will be a strict subset of our integer set.

    function isNatural (value) {
        return isInt(value) && value >= 0;
    }

Now we can see that a natural number is an integer which is a number. This, of course, is similar to OO subtyping with regard to relationship, but is compositional in nature. In fact we can actually describe this type relationship as a relationship of sets, like so:

Naturaljs ⊂ Intjs ⊂ Numberjs

With the repeated behavior of including a function call from the superset, we can start looking for a way to uniformly describe our sets and their relationships. Let’s create a new function, subtype, to help us create set relationships in order to streamline the process of defining type relationships.

    function subtype(parentCharacteristic) {
        return function (childCharacteristic) {
            return function (value) {
                return parentCharacteristic(value) && childCharacteristic(value);
            };
        };
    }

Subtype allows us to define our types with functional composition and define our new characteristics with the assumption that we are already working from within a specific type. Let’s rewrite our isNatural check using subtype.

    function isNaturalType (value) {
        return value >= 0;
    }
    
    var isNatural = subtype(isInt)(isNaturalType);

Now the body of our characteristic function is expressed with an implicit relation to the superset of natural numbers, the integers. This use of higher-order functions to express set relations is extremely powerful for defining and describing value types we can use in our development.
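As a quick, hedged illustration of how far the composition reaches, we can keep extending the chain; isEvenNatural below is a hypothetical type invented only for this example:

    function isEvenType (value) {
        return value % 2 === 0;
    }

    // an even natural is-a natural, which is-a integer, which is-a number
    var isEvenNatural = subtype(isNatural)(isEvenType);

    isEvenNatural(4); // true
    isEvenNatural(7); // false
    isEvenNatural(-2); // false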

Wrapping Up

This was a somewhat dense tour of how we can construct types in Javascript, so don’t worry if it takes a little while to pull the pieces together. The important take-away is that we can construct our own types with meaningful names and clear relationships in order to better understand the way our programs work.

At the end of the day, we are human, so expecting us to actively deal in generalized abstractions such as strings and numbers may not be a reasonable request. Instead, we can reclaim the reins and define our own type language which speaks to future developers in the language of our intent. Go make types and make your programs better!

Currying Matters: Clarifying Contracts

Jun 15, 2016

Function contracts are a tricky thing. Ultimately, what they define is an API for your application, but they also define how you write your internal behaviors. This balancing act can either lead to clear, well written code, or it can quickly devolve into a ball of tangled string.

Walking the clean code, clean API line can seem to be a daunting task. It’s common to hear people say this is precisely what Classical OOD is built for: by maintaining state, methods can accept partial requirements, allowing the developer to build up behavior over time. I argue this kind of state management leads to extra cognitive load, as the developer is required to keep track of the managed state. With currying and clearly exposed intent, incremental behavior building becomes a trivial task done in a reliable set of steps. It also leads to better program design and behavior determinism, making testing much easier to reason about.

A Small Example

Let’s have a look at a slice function as an example. It’s common to want to call slice on arguments objects and arrays alike, which means our behavior has to vary depending on the kind of value we receive. Instead of showing examples of different usages, I’m going to jump to a general slice implementation similar to what was written in JFP v2.x.

    function slice(start, values, end) {
        var cleanEnd = pickEnd(end, values.length);
        return Array.prototype.slice.call(values, start, cleanEnd);
    }

This function makes use of a convenience function which we will continue to use throughout this post. Here is the implementation of pickEnd.

    function isInt(value) {
        return typeof value === 'number' && value === Math.floor(value);
    }

    function pickEnd(end, valueLength) {
        return isInt(end) ? end : valueLength;
    }

Let’s have a look at how we might accomplish a few simple tasks using our original slice function. We will create a function which will slice an arguments object or copy an array, a function which will drop the first three elements in an array and, finally, a function which will capture the elements from an array from indices 1 to 3.

    var argumentsToArray = slice.bind(null, 0);
    var dropFirstThree = slice.bind(null, 3);

    function takeFrom1to3 (values){
        return slice(1, values, 3);
    }

As we can see, using our slice function forces us to either bind arguments, or actually wrap the entire behavior in a function. This kind of inconsistency makes our slice implementation difficult to use. There must be a better way!

Currying The Slice

The application inconsistencies in our new code lead me to believe we need a better solution. When dealing with a single-function API, currying can often be illuminating regarding argument order and function implementation. At the very least, we might land on a uniform, stable way to apply the function. Let’s have a look at what currying is and how we can apply it to our existing function.

Formal currying is defined as converting a function of multiple arguments into a series of one-argument functions. This means currying a function of three arguments would go a little like this:

    // uncurried: all three arguments arrive at once
    function lambda (a, b, c) {
        return op(a, b, c);
    }

    // curried: a series of one-argument functions
    function lambda (a) {
        return function (b) {
            return function (c) {
                return op(a, b, c);
            };
        };
    }

If we apply this formal definition to our function, it will produce a new series of functions which are called in order. This means each function progressively accumulates information about execution state without needing any external management system or object. Let’s take a look at a formal currying of slice.

    function slice (start){
        return function (values) {
            return function (end) {
                var cleanEnd = pickEnd(end, values.length);
                return Array.prototype.slice.call(values, start, cleanEnd);
            };
        };
    }

If we use this new definition of slice, we will need to revise the implementation of our functions. Let’s dig in and see how currying makes application more uniform.

    var argumentsToArray = function (values) { return slice(0)(values)(); };
    var dropFirstThree = function (values) { return slice(3)(values)(); };
    var takeFrom1to3 = function (values) { return slice(1)(values)(3); };

Although we have to wrap each new function in an anonymous wrapper, we now have complete uniformity in how we apply slice. With this uniformity, we can now, safely, reorganize code and guarantee code depending on our API won’t break.

Three Arg Monte

Since each of our derivative functions only takes an array as an argument, we can fiddle with the inner workings so long as we don’t alter the output. Suppose we swap the order of the nested functions, capturing values at different stages of execution. It is, in fact, no different than if we had reordered the parameter list in the original function, but without the pesky .bind() bit.

Let’s take our curried function and move our end parameter up the chain, next to the start parameter. This means our function series will always take a start and end value, which makes our values parameter the last argument. Let’s see the resulting reorganization.

    function slice (start){
        return function (end) {
            return function (values) {
                var cleanEnd = pickEnd(end, values.length);
                return Array.prototype.slice.call(values, start, cleanEnd);
            };
        };
    }

With this new parameter order, we actually move the values parameter to the correct position; in other words, we can apply all of our indices first and take the values argument last, leaving us in a position which is correct for creating a variety of novel behaviors directly.

    var argumentsToArray = slice(0)();
    var dropFirstThree = slice(3)();
    var takeFrom1to3 = slice(1)(3);

Our application of slice has remained uniform, while allowing us to exclude the anonymous function wrapper for all three applications. This is definitely closer to what we really meant to say at the beginning. If only we could get rid of that required second call.

Collapsing The Calls

Now that we have found a parameter order which serves us the best, let’s get rid of the extra function call. It is useful for takeFrom1to3, but it actually makes the application of slice for argumentsToArray and dropFirstThree unnecessarily complicated since we call a function with no argument. We want to eliminate confusion where possible.

Since curried functions can be expanded from multiple argument functions, what’s to stop us from reversing the process? Moreover, it is reasonable to collapse only the parameters we want at a given level. Let’s reverse the currying process for start and end and see what we get.

    function slice(start, end) {
        return function (values) {
            var cleanEnd = pickEnd(end, values.length);
            return Array.prototype.slice.call(values, start, cleanEnd);
        };
    }

Slice has been collapsed back into a function which captures our indices early and applies them, lazily, to the final argument as we need execution to complete. This means we can actually get the uniformity we saw in earlier reorganizations, with the API sanity of a well-defined function, which happens to optionally take an extra argument after a start index. This is probably best viewed in an example.

    var argumentsToArray = slice(0);
    var dropFirstThree = slice(3);
    var takeFrom1to3 = slice(1, 3);

Throughout the process, all three of our derivative functions have only ever taken a values argument, but the application of our deeper-level function has brought us to a point where the contract is most sensible for flexible application and reuse. Better yet, each of the slice applications expresses the intent more closely since only the indices we intend to use are supplied at application time.

API Clarity, At Last

Someone said, recently, on Twitter that what most people call currying is actually partial application. As it turns out this is only partially true. The line between currying and partial application is so blurred, I am inclined to argue that partial application is merely a special case of the more general form of function currying.
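To make that relationship concrete, here is a small, hedged sketch; slice3 is a hypothetical name for the original three-argument implementation, while slice refers to the collapsed version above:

    function slice3 (start, values, end) {
        var cleanEnd = pickEnd(end, values.length);
        return Array.prototype.slice.call(values, start, cleanEnd);
    }

    // partial application: fix the first argument of a multi-argument function
    var dropFirstThreePartial = slice3.bind(null, 3);

    // currying: the same fixing falls out of applying the first function in the chain
    var dropFirstThreeCurried = slice(3);

    dropFirstThreePartial([1, 2, 3, 4, 5]); // [4, 5]
    dropFirstThreeCurried([1, 2, 3, 4, 5]); // [4, 5]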

Moreover, currying is not a one-way street. Instead, it is a tool to help us identify better ways to express our programs through expanding and collapsing arguments. By better understanding how currying works, we can actually experiment with different configurations of our functions, ideally, without overhauling contracts, types and the like.

When used with intent and care, currying enables us to create functions which have sane, meaningful and expressive contracts while also providing the flexibility to fluidly apply a general-purpose function in a variety of different situations. In other words, if your function contract stinks, maybe it’s time to apply a little currying and make your code awesome!

Enforcing Endpoints: Types and Signet

Jun 8, 2016

What a ride! I spent the last month preparing a talk for and presenting at Lambdaconf. If you haven’t been, you should. Of the conferences and coding-related events I have been to, this was probably the coolest, toughest, mind-bendiest one. It was awesome. I learned a lot about myself while I was there and a lot about the world beyond the horizon of what we consider “conventional production development.” More than that, it’s all coming to a developer shop near you sooner than you think.

You should go.

I have been talking about types in Javascript lately and this post continues the tradition. As I have been working on, and with, Signet, it becomes more and more obvious why types and signatures are fantastic in simple, raw Javascript code.

There is a lot of discussion about languages which compile to Javascript and support types, including Elm, TypeScript and PureScript, though there are more out there. Although these languages may bring something interesting to the table, I feel they are largely akin to writing a language which compiles to C. If there is a flaw in the underlying language, compiling to the flawed language without actually addressing the problem is a band-aid, not a fix. We actually need real types in real Javascript.

I am not a die-hard type convert who wants everything to be typed to a ridiculous degree. Instead, I actually believe that blended dynamic and static typing can lead to an amazing, joyful programming experience. Imagine a world where you can bolt down the contract on things the world touches while leaving the internals to move fluidly through refactorings without having to worry about whether you’re violating a contract.

Programming in a Dynamic World

Let’s imagine you have a bit of code which takes a single purchase record and computes the final total for that record including tax. The code might look a little like what I have outlined below:

    function computeTax(percent) {
        return function (total) {
            return percent * total;
        };
    }

    function computeTotal(taxCalculator) {
        return function (total) {
            return total + taxCalculator(total);
        };
    }

    function computePrice(purchase) {
        return purchase.price * purchase.quantity;
    }

    var api = {
        computeTax: computeTax,
        computeTotal: computeTotal,
        computePrice: computePrice
    };

We’re not going to dive into why some of the functions are curried, let’s just accept that’s the way they are for this post.

Everything in this code has a clear name and, ultimately, speaks to the intent of the behavior. I’ll assume you are in an agile shop where your code is not thoroughly documented. Instead, you are relying on tribal knowledge to ensure people understand what this code does and how to interact with it.

The likelihood is someone is going to do it wrong.

This brings us to the way Javascript behaves. Javascript will, with all the best intent in mind, try to do the “right” thing. This means passing numbers instead of objects, strings instead of numbers, or even NaN could all result in a running, though wholly incorrect, program.
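As a hedged illustration of that silent wrongness, consider a couple of hypothetical calls against the api object above; neither one complains, the first happens to coerce its way to a usable number, and the second quietly produces garbage:

    api.computePrice({ price: '4.99', quantity: 2 }); // 9.98 -- the string quietly coerces
    api.computePrice({ price: 4.99, amount: 2 });     // NaN -- wrong key, still no error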

The internals of this small module might not need to be protected since anyone working in the file will be compelled to read the code and make sense of the words on the screen, but people who have never seen this code, and perhaps never will, still need to understand what correctness means. Do they know the functions are curried? No. Do they know the names of the variables? Probably not.

The fluid awesomeness of Javascript’s dynamic nature just bit us. Hard.

If You Liked It You Should'a Put A Type On It

One of the greatest failings of assuming clear names will make things manageable is that the names are rarely, if ever, seen outside of the code they are used in. Some editors, like WebStorm and Visual Studio Code, will pick up the names within modules, given the programmer is working with node imports and everything is properly exported, named and referenced.

Even TypeScript can’t save us from this kind of problem since the types are only supported at transpile time, so type erasure eats our one standing bastion of truth. What if we added a little signature and type help to tell others what we are expecting and what they can expect in return?

This is where Signet comes in. By using a modified Hindley-Milner type notation we can actually read what the API does and how we can interact with it. On top of that, we get real, fast type checking at runtime, which means type erasure is a thing of the past. Let’s have a look at our API definition with type enforcement.

    var api = {
        computeTax: signet.enforce('number => number => number', computeTax),
        computeTotal: signet.enforce('function => number => number', computeTotal),
        computePrice: signet.enforce('object => number', computePrice)
    };

The signature annotation not only tells us the kinds of values our function expects, it actually tells us that after the first execution we can expect a function back again. This means we can gain a tremendous amount of insight about our function without knowing anything about the internal workings of the function. Instead of having a true black box, we now have a black box with instructions on the side telling us how to use the thing. We don’t know how it is implemented, but we know it works the same way every time.

With this new enforcement, we get the following behavior:

    console.log('computeTax: ' + api.computeTax.signature);
    // computeTax: number => number => number

    var computeLocalTax = api.computeTax(0.08);
    console.log('computeLocalTax: ' + computeLocalTax.signature);
    // computeLocalTax: number => number

    computeLocalTax('9.99');
    // Expected value of type number but got string

Closing up Shop

In the end, the challenge in any programming project is not about whether or not you can write simple, maintainable code, or whether you should use types or not. Really, it is about making sure you are clearly communicating with the people who rely on your code to do what it says on the label. This means, within the code itself, it should be clear, obvious and intentional. From the outside, any code which is accessible to others, including your future self, should declare what it does, and we should make use of every tool we can to simplify the process of gaining an understanding of what to expect.

By signing and enforcing your API, you get all the benefits of a type checker, plus signature metadata which means you don’t have to go rifling through code that is not immediately related to the task at hand. Meanwhile, under the covers, we can rely on patterns, good naming and clean code to ensure our code continues to deliver value and convey meaning. Now, go add some types to your code and make life better for your team!

Static Methods on Javascript Objects

May 4, 2016

I’m a big proponent of unit testing. This means that any code I can test, I do. When I work in the browser, however, it becomes more challenging to effectively unit test all of the code I write without spinning up an instance of PhantomJS. On top of that, most of the code I write in the browser, now, uses Angular as the underlying framework, which means my requirements are even more restricted since the go-to testing environment for Angular is Karma, which uses PhantomJS to satisfy Angular’s dependency on a live DOM.

When we consider testing requirements along with the desire to share code between Node and client-side Javascript, it becomes critically important to decouple our core functionality from the framework and environment it runs within. Although some projects can benefit from Browserify and Webpack, it is equally common for developers to fight against the build step which happens before running everything in the browser.

I have spent a fair amount of time off and on trying to find the best solution for each of these problems, which would solve all of them together. Ultimately, the solution came to me while working with Scala. In Scala, it is possible to define a class and an associated object. The object exposes functions as static methods on a namespace, while the class acts as an instantiable object which can be used in Classical OO applications.

A Basic Object

This inspired my thinking and I started looking at ways I could drop this same philosophy into Javascript. Ultimately, I landed on the concept of static functions on an object. In order to get some perspective on where this train of thought will take us, let’s take a look at a simple controller object like we might create in Angular.

    function TransactionController(transactionList) {
        this.transactionList = transactionList;
        this.setTotal();
    }

    TransactionController.prototype = {
        deleteItem: function (itemId) {
            this.transactionList = this.transactionList.filter(function (item) {
                return item.id !== itemId;
            });
        },

        setTotal: function () {
            this.total = this.transactionList.reduce(function (total, item) {
                return total + item.price * item.quantity;
            }, 0);
        }

    };

This controller is actually pretty typical. There is a little bit of functionality which goes through and modifies the controller state. In order to properly test this behavior, we have to modify the controller state, then run each method and test that the mutation was correct. In a world where good functional practices are possible, this seems unnecessarily fiddly.

Moving to Static

If we rewrite this controller just a little bit, we can start separating behaviors and decouple the computational bits from the state mutation. This means the bulk of the work can be bundled up inside a pure function which is easy to test and think about. Once that is complete, the mutation behavior becomes trivial to test and reason about because it is merely setting a variable.

    function TransactionController(transactionList) {
        this.transactionList = transactionList;
        this.setTotal();
    }

    TransactionController.removeItem = function removeItem(itemId, transactionList) {
        return transactionList.filter(function (item) {
            return item.id !== itemId;
        });
    };

    TransactionController.getTotal = function getTotal(transactionList) {
        return transactionList.reduce(function (total, item) {
            return total + item.price * item.quantity;
        }, 0);
    };

    TransactionController.prototype = {
        deleteItem: function (itemId) {
            this.transactionList = TransactionController.removeItem(itemId, this.transactionList);
        },

        setTotal: function () {
            this.total = TransactionController.getTotal(this.transactionList);
        }
    };

This change is important because we are actually modifying the base object, introducing functions which are not part of the instantiable object. Due to this, we can actually start moving the functionality out of the primary object scope altogether and, instead, only reveal the parts of our code which we really want to expose for use.

Full Extraction and Namespacing

Once we have extracted the base functionality, we can actually move all of our logic into a factory function. This will allow us to close over utility functions and reveal just the resulting composite functions which can be attached to our object just in time.

    function getTransactionBehaviors() {
        function isNotSelected(itemId, item) {
            return item.id !== itemId;
        }

        function removeItem(itemId, transactionList) {
            return transactionList.filter(isNotSelected.bind(null, itemId));
        }

        function addToTotal(total, item) {
            return total + item.price * item.quantity;
        }

        function getTotal(transactionList) {
            return transactionList.reduce(addToTotal, 0);
        }
        
        return {
            getTotal: getTotal,
            removeItem: removeItem
        };
    }

We can actually call our factory function within our tests to ensure the logic is correct; meanwhile, nothing is exposed to the outside world accidentally. This means we can attach these functions to the controller, if desired, just before we use them in our prototypal functions.
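As a hedged sketch of what such a test might look like (Jasmine-style assertions are assumed here; they are not part of the original code):

    describe('transaction behaviors', function () {
        var behaviors = getTransactionBehaviors();

        it('computes the total of a transaction list', function () {
            var transactionList = [{ price: 2, quantity: 3 }, { price: 5, quantity: 1 }];
            expect(behaviors.getTotal(transactionList)).toBe(11);
        });

        it('removes an item by id', function () {
            var transactionList = [{ id: 1 }, { id: 2 }];
            expect(behaviors.removeItem(1, transactionList)).toEqual([{ id: 2 }]);
        });
    });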

By attaching the functions as static methods, we give them a unique namespace, perform safe data hiding and ensure that our controller functions can refer to them without needing to be bound to a local context. This actually frees us quite a bit since much of the complexity related to Classical OO in Javascript is related to context switching depending on whether a function is called within the object scope or not.

I created a small utility to perform a no-frills merge of properties onto an object. This is only for illustration and would probably be best done with a reliable library like lodash or JFP. Let’s look at a sketch of that utility, and then take a look at attaching our functions to our object for namespacing purposes.
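Since the original merge utility is not shown in this post, the version below is only one plausible shape for it, a minimal sketch rather than the actual implementation:

    function simpleMerge(destination, source) {
        // copy each enumerable property from source onto destination
        Object.keys(source).forEach(function (key) {
            destination[key] = source[key];
        });

        return destination;
    }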

    function TransactionController(transactionList) {
        this.transactionList = transactionList;
        this.setTotal();
    }

    TransactionController = simpleMerge(TransactionController, getTransactionBehaviors());

    TransactionController.prototype = {
        deleteItem: function (itemId) {
            this.transactionList = TransactionController.removeItem(itemId, this.transactionList);
        },

        setTotal: function () {
            this.total = TransactionController.getTotal(this.transactionList);
        }
    };

We can see here that the attachment of our functions is exclusively for the purpose of namespacing, much the same way we might see it done in Scala or another functional language. Now that our functions are separated and declared within a factory function, we can actually work toward our second goal.

Externalizing Our Code

By separating our functionality, we can actually lift the entire factory function into a conditionally exported node module. On top of that, we get extra security because our factory function closes over our functions, making them completely inaccessible to tampering. This means, once our app is loaded in a browser, we get the same kind of separation from the world we normally see from an IIFE.

Moreover, because our code can be conditionally exported, we can require our behaviors directly and test them outside of the browser context. This means our tests will run faster and we don’t need to rely on as many, potentially flaky, integration tests. Here’s our final behavior code.

var getTransactionBehaviors = (function () {
    'use strict';

    function getTransactionBehaviors() {
        function isNotSelected(itemId, item) {
            return item.id !== itemId;
        }

        function removeItem(itemId, transactionList) {
            return transactionList.filter(isNotSelected.bind(null, itemId));
        }

        function addToTotal(total, item) {
            return total + item.price * item.quantity;
        }

        function getTotal(transactionList) {
            return transactionList.reduce(addToTotal, 0);
        }

        return {
            getTotal: getTotal,
            removeItem: removeItem
        };
    }

    if (typeof module !== 'undefined' && typeof module.exports !== 'undefined') {
        module.exports = getTransactionBehaviors;
    }

    return getTransactionBehaviors;

})();
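With that conditional export in place, a Node-side test can pull the factory in directly; the file name transactionBehaviors.js below is a hypothetical choice made for illustration:

    // in a Node test file, no browser or framework required
    var getTransactionBehaviors = require('./transactionBehaviors');
    var behaviors = getTransactionBehaviors();

    console.log(behaviors.getTotal([{ price: 3, quantity: 2 }])); // 6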

Summary

In the face of a new world for Javascript, it is important to capture every advantage we can in order to make our code clean, efficient and stable. By splitting our behaviors out of the strict confines of a framework structure and pulling them into a file for easy testing, we simplify the testing story and make it easier to share behavior between the browser and the server.

We can use simple patterns to build well defined, pure functions which give us a clear way to write and share code, while keeping it safe from attackers and stable for our users. The next time you find yourself working on a full-stack Javascript application, how are you going to split your app?

The Shodan Programmer (Reprise)

Apr 27, 2016

Quite some time ago, Michael O. Church wrote a blog post about the Shodan Programmer (beginning degree programmer, or journey beginning programmer). In this post he detailed a gradient system for identifying programmers with regard to skill, experience and knowledge. Two or three weeks ago, I discovered his post was either removed or his site had been deleted. I’m not entirely sure what led to his Shodan post going MIA, but it was an important idea in the greater discussion of progressing skill and ability.

The last remaining reference to his Dan idea is on a Quora post about the top 1% of programmers. In this post, the person posing the question was trying to find out how some of the greatest developers think and work in order to identify common habits of skilled and high-performing programmers.

With all of this in mind, I decided it is important to provide a new voice to the discussion about the Shodan Programmer. I believe that Church’s metaphor for describing programmer skill and ability holds up quite well against scrutiny. Ultimately, the idea is, programmers come in three broad-scope forms and the line is blurred when digging in and identifying when someone has moved from one part of the scale to another.

Where I currently work, we have three conceptual roles in which a developer falls: learner, contributor and mentor. Although this is, effectively, more closely related to the concept of apprentice, journeyman and master, it does line up, in part, with the greater idea Church presents in his Shodan Programmer post.

The Dan scale goes from 0.0 to 3.0 and, it could be said, this scale is logarithmic. The closer you get to the highest point, the harder it is to actually move the needle and the closer to 0 you are, the more likely you are to see large amounts of measurable improvement. This means it becomes increasingly more difficult to identify the gradation between two programmers with high proficiency.

Let’s have a glance at the three levels of the Dan.

Level 1 programmers, or Adders, are programmers who typically perform the bulk of the programming work in a software shop, solving common problems and interacting with systems. These programmers, given enough time, could eventually solve most of the problems which are necessary.

Level 2 programmers, or Multipliers, are programmers who are capable, not only, of solving a problem, but are versed in higher-level ideas like architecture or library development and maintenance. Multipliers provide means to solve more general problems in order to reduce the amount of time spent solving the same problem over and over.

Level 3 programmers, or Global Multipliers, are programmers who tend to think beyond the scope of the immediate problem and look to solve large-scale problems. Although level 3 programmers can perform the tasks of level 1 and 2 programmers, they tend to take research positions and work on conceptual or innovative new technologies which proliferate through the entire community over time.

Level 1 Programmers

Level 1 programmers are what people would typically think of as entry- and mid-level programmers. These people are more focused on expanding their knowledge either through a learning program or day to day development projects which push their boundaries. Even programmers who have barely written their first Hello World program fall into Level 1, though they are usually not ready for production development for some time yet.

Church writes that a level 1 programmer is typically found on the scale between 0.0 and 1.4. If we were to consider the least experienced developer being hired as a junior or associate developer, we would likely be looking at developers who are at ~0.8 on the scale. Any programmer falling above 1.4 is starting to move into the realm of a Level 2 programmer.

We can see, of course, that this scale leaves a rather wide division between Level 1 and Level 2, since Level 2 actually begins well before a programmer actually reaches the 2.0 mark. Someone who is working at the 1.2 level is probably starting to move into a mid-level programmer position and is able to start looking at larger, more complex problems, though they are unlikely to be prepared to do architectural analysis.

Church states that many programmers don’t progress beyond 1.2 because they stop actively investing in their own knowledge and education. It is common for someone to become comfortable with the technology stack and problem space they know and settle in.

Level 2

As you might imagine, a programmer who is at 1.5 is lumped into a Level 2 programmer. This does not mean that everyone who has reached level 1.5 is prepared to take on all tasks a Level 2.0 programmer would be able to accomplish, but they are starting to learn methods and techniques for tackling more challenging problems.

Level 2 programmers fall between 1.5 and 2.4 on the scale and perform a variety of tasks which might be considered too difficult, or too obscure, for a Level 1 programmer. This does not imply any sort of superiority, however. More often, reaching Level 2 is a product of independent learning and industry experience. Level 2 programmers typically tap into knowledge which comes exclusively through time.

Church states that Level 2+ programmers can be identified through a variety of outlets such as library development or maintenance, speaking at conferences and mentorship. Due to the nature of their output, Level 2 programmers tend to multiply the efforts of a team rather than simply adding to it. This is where we can really see the difference between the Level 1 and Level 2 programmers.

Simply stated, by bringing Level 1 programmers onto a team the team is likely to benefit in an additive way. It is not uncommon to have a development team made up, exclusively, of Level 1 programmers, who produce working software. Typically, by introducing Level 2 programmers, the code quality will increase and the overall team will learn new skills and become a more effective team. This kind of improvement is greater than an additive effect, giving Level 2 programmers the Multiplier name.

The important aspect of a Multiplier is they provide large scale benefit to a company and are often as influential to an organization as a manager or executive. High-performing Multipliers not only raise the skill and ability of the people around them, but they steer projects clear of known pitfalls or dangers and empower the programmers they work with to take the software they touch to greater vistas.

Level 3

Church calls Level 3 programmers Global Multipliers, but it might be easier to envision them as theorists and researchers. These programmers are at 2.5 and above in the Dan gradient and it is highly unlikely anyone will actually reach 3.0, given the logarithmic nature of the scale.

Typically, Level 3 programmers work on bleeding edge projects and provide the technical and theoretical vision for the result. Level 3 developers are often found working in the R&D departments of large companies like Google, Microsoft or Apple or as high-paid, uniquely skilled contractors and trainers.

Level 3 programmers often have a deep understanding of more than just software development. They typically understand the deep underpinnings of computer science and mathematics. An academic equivalent would be a post-doctorate computer scientist leading research on a funded long-term project.

As with the gap between Level 1 and Level 2 programmers, a Level 3 programmer grows to their position through constant research, development and experience. Both Level 2 and Level 3 programmers have learned through a long process of failure and success. There is no simple path to reaching Level 3 and many programmers may never reach Level 3 simply because they lack the desire to put forth the tremendous amount of effort needed to increase their place on the scale by one or two tenths of a point.

Level Analysis

The way Church originally designed this scale, the intent was to identify appropriate skill level for people working in the industry and understand how people might attack a particular problem. For anyone who is at a whole number increment, e.g. 1.0 or 2.0, it is likely they would be able to solve 95% of the problems which could be identified at that skill level.

In other words, someone who is identified at a level of 2.0 would be able to successfully work on and solve 95% of Level 2 type problems. In much the same way, they are likely only going to be able to solve about 5% of Level 3 problems. This works the same way for someone at a level of 1.0, who would be able to solve 95% of Level 1 problems, but only 5% of Level 2 problems.

This means that for each point closer to the next level, a programmer would be able to solve a greater number of problems which would be distinct for that level. This kind of ability growth leads directly to the logarithmic nature of the graph.

Where this level designation comes in handy in day to day work is understanding the level of difficulty for a specific problem. If something is a straightforward coding problem which can be solved through a brute force approach, the problem is likely a level ~1.0 problem. On the other hand, if the problem is largely architectural or generic in nature it is probably more of a level ~2.0 problem.

Understanding where a developer currently falls on the scale provides insight into the likelihood they will succeed at solving a problem on their own. In other words, if a developer is performing at level ~1.2, they will probably find a problem at level ~2.1 frustrating or inscrutable. On the other hand, they will also, likely, find a level ~0.3 problem trivial or boring.

This gives us a heuristic for maintaining developer engagement. Church claims that developers seeking a challenge perform best at about 65-75% above their skill level. This means we could expect a developer to exhibit moderate growth with a healthy mixture of problems which they consider “simple” for confidence, or 50% below their level, problems at +/-10% of their level and problems which are up to 50% above their level.

What it Means

Ultimately, this scale is not an indictment of anyone’s skill or ability regardless of where they are in their career. Regardless of whether you are just starting out, or you are a multi-decade veteran of the software industry, your place in the Dan is uniquely your own. Some people enjoy the act of writing code and will happily continue forward on that path, while others have ambitions of experience beyond where they are today. The choice is personal and it’s not one I would be comfortable guiding in any way.

Ideally, Church’s Dan should provide insight into the road a programmer could walk to achieve their desires and set a course for their career. Level 1.0 or 1.1 could realistically be reached in a couple of years and, perhaps, a year or so for a particularly dedicated developer. Church states that moving up the scale at 0.1 points per year after the first level is likely an aggressive schedule for many programmers. This means reaching level 2.0 will likely take a dedicated programmer 10 or 11 years on average, but might be achieved in about 8 for someone who is especially active in their research and experience.

I would suggest that the closer a programmer gets to 2.0, the slower the progress is going to feel simply because it takes more effort to cover the ground necessary for each tenth of a point. This is especially true since computer science is a field which is continually expanding and knowledge of topics continues to deepen year over year.

After reaching level 2.0, it could, realistically, take a lifetime to reach 2.5 or above. This is completely dependent upon the programmer and what their desires are. This does not necessarily mean reaching for the pinnacle is not a worthy goal outright. Instead what this means is each person will have to decide on their own whether they want to continue to climb, and what it will mean for their career.

In the end, this scale is simply a means for every person to understand where they currently are in their career, and provide a way to assess problems which will crop up while working on a particular project. In much the same way, revisiting this idea and the scale that comes with it has helped me both to pick appropriate-level problems for the people I work with and to understand problems I have yet to face myself.

In the end, I hope this has helped others to get a taste of the scale Church presented and, perhaps, draw a clearer line from where they are now to where they want to be. Everyone walks the path, the question is where does the journey ultimately lead?

  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

    Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher: the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

    I have been working for quite a while to devise a method for assessing web sites and the ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

    Google has some pretty neat toys for developers and CakePHP is a pretty friendly framework to quickly build applications on which is well supported. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Usabilibloat or Websites Gone Wild

    It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhance findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created will, ultimately, have a page dedicated to it, and that all content will live autonomously on that page. Content, much like web pages, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key to ensuring usability and ease of maintenance. Several benefits come from separating business and presentation logic: the groundwork for a strong site architecture; solid, clear HTML with minimal outside code interfering; and the ability to integrate a smart, smooth user experience without fear of breaking the business logic that drives it.

  • The Selection Correction

    User self-selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask users questions about themselves directly, your plan has failed. User self-selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site; it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site, or they think it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally, and it has crossed my mind that it might be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project, as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy, eh? Anyway, I haven’t had this thing break in a while, which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or, more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in.” It’s not so much that I have an API and whatnot; rather, I can simply build out a module, run an include, and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use Firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of  unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed would be the gaming phrase.  The one thing that I would like to note, as it was my big issue last evening, is scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok, if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
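    The item above describes Javascript’s forgiving, weakly typed variables and the scope trouble they can cause. As a minimal sketch of that pitfall (my own illustration, not code from the linked post), note how an assignment without a declaration escapes the function and changes an outer variable:

    var counter = 0;

    function incrementBadly() {
        counter = counter + 1; // no 'var' here, so this writes to the outer counter
    }

    function incrementSafely() {
        var counter = 0;       // 'var' scopes this counter to the function alone
        counter = counter + 1;
        return counter;
    }

    incrementBadly();
    console.log(counter);           // 1 -- the outer variable was silently changed
    console.log(incrementSafely()); // 1
    console.log(counter);           // still 1 -- the outer variable was untouched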
