Creating Programmer Joy with Type Enforcement

Dec 29, 2017

It’s extremely common for developers who work in statically typed languages to talk about how much easier their code is to maintain, and how the type system makes the code self-documenting.  However, these same programmers often talk about the amount of “ceremony” they have to overcome to work within the type system and language of their choice.  Ceremony is barely a concern in the Javascript community because of the dynamic type system.  On the other hand, it is common to hear JS developers complain about how difficult it is to maintain a codebase which brought them so much joy while they were creating it.

In a project I have been maintaining for the last couple of years, I started off with the creation joy, only to find myself fearing the idea of jumping back in and making updates to resolve bugs logged by users.  I started considering options.  Should I rewrite the entire codebase from scratch? Should I write it in a different language altogether?  It is a plugin for VS Code, so Typescript is the preferred option, though everything I wrote was in vanilla Javascript.

Between the creation of JS Refactor and this past November, I started creating a suite of different libraries, all of which were employed to solve the exact kinds of problems I had in my plugin: tight coupling, undocumented code, uncertainty, and the need to go back and reread my code to rebuild context before I could start working again.

The last two issues were my greatest hurdles. I felt completely uncertain about the code I had written to create the plugin in the first place, and the only way to rebuild that understanding was to go back and read it all again.

Ultimately, for all of the effort I made to keep my code clean, I had still created write-only code and I was miserable about it.  Nevertheless, I started with the first bug that made sense for me to tackle and dove in.  I plodded along and my dread quickly turned into joy.  Something had happened which actually made me want to throw myself back into this (tested) legacy project.

Somewhere in the process I discovered real programmer joy.

Set aside the fact that I created a dependency injection library and integrated it a while back (this was not the source of my joy). Let’s even set aside the tool I created for turning automated tests written in Mocha, Jasmine and Jest into human readable documentation.  The thing that made my life easy and joyful was the types!

No, I didn’t make the switch to Typescript.  For all the good Typescript offers, the feeling of being constrained by the type system was still too much for me to bear.  Instead I stayed within plain old Javascript and started leaning hard on the Signet type system.

First things first, I started identifying the types of objects and data I was going to interact with and I created just enough type information to say something meaningful about it all.  Here’s a sample of what I created:
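(The listing below is a representative sketch rather than the plugin’s exact definitions; the type names are illustrative.)

    // Illustrative Signet type definitions; assumes signet's factory-style export
    let signet = require('signet')();

    // An editor coordinate is an integer which is at least 0
    signet.subtype('int')('editorCoordinate', function (value) {
        return value >= 0;
    });

    // A selection is an object carrying start and end coordinate pairs
    let isSelection = signet.duckTypeFactory({
        start: 'array<editorCoordinate>',
        end: 'array<editorCoordinate>'
    });

    signet.subtype('object')('selection', isSelection);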

After I got my type information lined up, I started working. As I slung code and discovered new information about the data I was working with, I tweaked my types to tell future me, or another developer, what kind of information was really lurking in that data.

As I worked I would forget what a specific API called for, or how it worked. I would open the source code and, instead of trying to interpret the functions I had created, I simply referred to the signatures at the bottom and my context was instantly rebuilt.

What made this such a revelatory experience was not simply that I had type information encoded into my files; it was that the information was always accurate, and if I got something wrong, I got real, useful feedback about how to fix it. The types are evaluated at run time and can verify things like bounded values and in-bound function behaviors. The more code I wrote, the faster I got. The more I introduced types and encoded real, domain-specific information into my files, the better my program became.

My code came to look like the kind of code I always wanted to write: strict and safe at the edges and dynamic in the middle. As long as I know what is coming in and what is going out I am safe to trust myself, or anyone else, to behave as they should in the middle of their function, because they can’t get it wrong.

All of a sudden typed variables became irrelevant and creating something from what existed became an exercise in joy. The game went from dynamic or static to dynamic and dependent. I could encode logical notions into my software and they always led to something better. An example of what this looks like is as follows:
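(A sketch with illustrative names, not the plugin’s real code; the point is that the contract encodes a logical notion, an index into a document, which is checked at run time.)

    // 'documentIndex' encodes a logical notion, not just a primitive type
    signet.subtype('int')('documentIndex', function (value) {
        return value >= 0;
    });

    let getLineAt = signet.enforce(
        'array<string>, documentIndex => string',
        function getLineAt(lines, index) {
            return lines[index];
        }
    );

    getLineAt(['first', 'second'], 1); // 'second'
    getLineAt(['first', 'second'], -1); // throws, naming the type which failed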

With all of this information encoded in my program, I could start writing tests which actually described what is really happening under the covers. Creating example data could be done relatively effortlessly by simply fulfilling the type contract. Even creating and interacting with automated tests brought me joy:
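(A sketch of such a test, written with Mocha and Chai against the illustrative types from above.)

    let assert = require('chai').assert;

    describe('getLineAt', function () {
        it('returns the line at the given document index', function () {
            // Example data only needs to satisfy array<string> and documentIndex
            let lines = ['first line', 'second line'];

            assert.equal(getLineAt(lines, 0), 'first line');
        });
    });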

This meant that all of the code would lead back around to the start again, and each piece (type definitions, type annotations and tests) told the story of how the program worked as a whole. For the small amount of extra work at the beginning of a given thread of thought, the payout was tremendous at the end.

Now, does this mean that types ARE joy? No.

All this really says is that a good, rich type system can, and should, help tell the story of your program. It is worth noting my code reflects the domain I work in, not the types of data living within objects and values. Arguably, if a programmer goes type crazy and encodes something obscure into the types (like some of the atrocities committed by overzealous Scala programmers), the types can bring pain. Instead we should aim to speak the same language as the other humans who work around us. Never too clever, never too obscure, just abstraction in simple language which helps form immediate context.

At the end of the day, anything could be used to build beautiful abstractions, but why not use a tool that helps you fall into the pit of success?

A Case for Quickspec

Oct 2, 2017

Any software community has a contingent which agrees that tests are a good thing, and that testing first leads us to a place of stable, predictable software. With this in mind, the biggest complaint I’ve heard from people is “testing takes too long!”

This blog post covers the testing library Quickspec, which can be installed from NPM, so you can follow along!

All tests in this post will be written to test the following code:
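(A minimal sketch, assuming multiplyThenDivide is a composition of simple multiply and divide functions:)

    function multiply(a, b) {
        return a * b;
    }

    function divide(a, b) {
        return a / b;
    }

    // The composition under test: multiply two values, then divide by a third
    function multiplyThenDivide(a, b, c) {
        return divide(multiply(a, b), c);
    }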

Testing a Composition With Mocha and Chai

Coming from a background of testing first, I am used to writing tests quickly, but even I have to concede there are times I just don’t want to write all of the noise that comes with multiple use cases.  This is especially true when I am writing tests around a pure function and I just want to verify multiple different edge cases in a computation.

If we were going to test several cases for the composition multiplyThenDivide, the output would look a lot like the following:
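(Sketched with Mocha and Chai; the specific cases are illustrative:)

    let assert = require('chai').assert;

    describe('multiplyThenDivide', function () {
        it('divides the product of 1 and 1 by 1', function () {
            assert.equal(multiplyThenDivide(1, 1, 1), 1);
        });

        it('divides the product of 2 and 3 by 2', function () {
            assert.equal(multiplyThenDivide(2, 3, 2), 3);
        });

        it('divides the product of 5 and 4 by 2', function () {
            assert.equal(multiplyThenDivide(5, 4, 2), 10);
        });

        // ...and so on, one hand-written block per interesting case
    });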

Testing a Composition with Mocha and Quickspec

Although testing a simple composition like this is not particularly hard, we can see that representing all interesting cases actually requires a whole bunch of individual tests. Arguably, there are enough cases that we’d get bored before the testing is actually done (and we did!).

What if we could retool this test to be self contained and eliminate the testing waste?

This question is precisely what Quickspec aims to answer. Let’s have a look at what a simple Quickspec test might look like:
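(The exact API names below are an assumption for illustration, so check the Quickspec README for the real ones; the shape, a table of named specs plus one shared verification function, is the point.)

    // Assumed import and API shape; illustrative only
    let quickspec = require('quickspec');

    describe('multiplyThenDivide', function () {
        quickspec.verify(
            [
                { name: 'identity values', a: 1, b: 1, c: 1, expected: 1 },
                { name: 'simple product', a: 2, b: 3, c: 2, expected: 3 },
                { name: 'larger product', a: 5, b: 4, c: 2, expected: 10 }
            ],
            function (spec, verify) {
                verify(multiplyThenDivide(spec.a, spec.b, spec.c), spec.expected);
            }
        );
    });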

Why Is This Better?

Separation of Setup and Execution

The first benefit we get from using Quickspec is that we can see, up front, all of the cases we are going to test. This makes it much easier to see whether we have covered the cases we care about, and it keeps the specification completely separated from the execution of the code under test.

Deduplication of Test Execution

The second benefit is that we eliminate duplicate code. In this example, the duplication is simply a call to multiplyThenDivide and a call to verify, but it could just as well have been a full setup of modules or classes, dependency injection and so on. It’s common to put this kind of setup in a beforeEach function, but that introduces the possibility of shared state, which can make tests flaky or unstable.

Instead, we actually perform our setup and tie it directly into our test. This means we have a clear path of test execution so we can see how our specifications link to our test code. Moreover, we only have to write any test boilerplate once, which means we reduce the amount of copy-paste code which gets inserted into our test file.

Declarative Test Writing

Finally, if we discover a test is missing, all it takes is the definition of a new case and we’re done. There is no extra test code which needs to be written and no extra boilerplate to introduce. Each test is self contained and all specifications are clearly defined, which means our tests are more declarative and their purpose is clear.

Other Testing Capabilities

Async Testing

Quickspec is written around the idea that code is pure, and thus deterministic, but it is also built to be usable in asynchronous contexts, which are a reality in Javascript. This means things like native Javascript promises and other async libraries won’t make it impossible to test what would otherwise be deterministic code.

Testing with Theorems

Writing theorem tests with Quickspec could be its own blog post, so I won’t cover the entirety here, though this is an important point.

Instead of hand-computing each expected value, Quickspec allows you to write tests where the outcome is computed just in time. This applies especially well where outcomes are easily computable, but where handling every special case by hand could lead to extra, winding code, or code which might actually need an external process to collect values in the interim.
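To make the idea concrete without leaning on Quickspec’s actual API, here is the shape of a theorem-style test sketched in plain Mocha terms; the expectation is a property computed from the inputs rather than a hand-computed value:

    let assert = require('chai').assert;

    describe('multiplyThenDivide as a theorem', function () {
        it('produces a result which, multiplied back by c, returns a * b', function () {
            [[1, 1, 1], [2, 3, 2], [5, 4, 2]].forEach(function (values) {
                let a = values[0], b = values[1], c = values[2];

                assert.equal(multiplyThenDivide(a, b, c) * c, a * b);
            });
        });
    });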

What this all means

In the end, traditional unit tests are great for behaviors which introduce side effects like object mutation, state modification or UI updates. When tests are deterministic, however, Quickspec streamlines the process of identifying and testing all of the appropriate cases and verifying the outcomes in a single, well-defined test.

Install Quickspec from NPM and try it out!

Why Should I Write Tests?

Feb 8, 2017

There has been an ongoing discussion for quite some time about whether automated tests are or are not a good idea. This is especially common when talking about testing web applications, where the argument is often made that it is faster to simply hack in a solution and immediately refresh the browser.

Rather than trying to make the argument that tests are worthwhile and that they save time in the long run, I would rather take a look at what it means to start with no tests and build up from there. Beyond the case for writing tests at all, I thought it would be useful to watch the progression of testing when we start with nothing.

Without further ado, let’s take a look at a small, simple application which computes a couple of statistical values from a set of sample numbers. Following are the source files for our stats app.
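(A minimal sketch of such an app; the element ids are illustrative.)

    // stats.js: the computations
    function mean(values) {
        let sum = values.reduce(function (total, value) {
            return total + value;
        }, 0);

        return sum / values.length;
    }

    function standardDeviation(values) {
        let sampleMean = mean(values);
        let squaredDifferences = values.map(function (value) {
            return Math.pow(value - sampleMean, 2);
        });

        return Math.sqrt(mean(squaredDifferences));
    }

    // app.js: wiring the computations to the page
    function getValues() {
        return document.getElementById('values').value
            .split(',')
            .map(function (value) {
                return parseFloat(value);
            });
    }

    function showStats() {
        let values = getValues();

        document.getElementById('mean').innerText = mean(values);
        document.getElementById('stdDev').innerText = standardDeviation(values);
    }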

Now, this application is simple enough we could easily write it and refresh the browser to test it. We can just add a few numbers to the input field and then get results back. The hard part, from there, is to perform the computation by hand and then verify the values we get back.

[Image: Example of manual test output]

Doing several of these tests should be sufficient to prove we are seeing the results we expect. This gives us a reasonable first pass at whether our application is working as it should or not. Beyond testing successful cases, we can try things like putting in numbers and letters, leaving the field blank or adding unicode or other strange input. In each of these cases we will get NaN as a result. Let’s simply accept the NaN result as an expected value and codify that as part of our test suite. My test script contains the following values:

    Input               Mean    Standard Deviation
    -- Success cases --
    1, 2, 1, 2          1.5     0.5
    1, 2, 3, 1, 2, 3    2       0.816496580927726
    -- Failure cases --
    (blank)             NaN     NaN
    a, b                NaN     NaN

Obviously, this is not a complete test of the system, but I think it’s enough to show the kinds of tests we might throw at our application. Now that we have explored our application, we have a simple test script we can follow each time we modify our code.

This is, of course, a setup. Each time we modify our code, we will have to copy and paste each input and manually check the output to ensure all previous behavior is preserved, all while adding to our script. This is the way manual QA is typically done, and it takes a long time.

Because we are programmers, we can do better. Let’s, instead, automate some of this and see if we can speed up the testing process a little. Below is the source code for a simple test behavior for our single-screen application.
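(A sketch of the runner: push an input through the UI, then compare the displayed output against our expectations.)

    function verifyStats(input, expectedMean, expectedStdDev) {
        document.getElementById('values').value = input;
        showStats();

        let meanOutput = document.getElementById('mean').innerText;
        let stdDevOutput = document.getElementById('stdDev').innerText;

        let passed = meanOutput === String(expectedMean)
            && stdDevOutput === String(expectedStdDev);

        console.log((passed ? 'PASSED' : 'FAILED') + ' -- input: "' + input + '"');
    }

    verifyStats('1, 2, 1, 2', 1.5, 0.5); // PASSED -- input: "1, 2, 1, 2"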

With our new test runner, we can simply call the function and pass in our test values to verify the behavior of the application. This small step speeds up the process of checking the behaviors we already have tests for, leaving us to explore the app only for the new functionality we have added.

Once the app is loaded into our browser, we can open the console and start running test cases against the UI with a small amount of effort. The output would look something like the image below.

[Image: Single-run tests scripted against the UI]

We can see this adds a certain amount of value to our development effort by automating away the “copy, paste, click, check” testing we would otherwise do again and again to ensure our app continues to work the way we want it to. Of course, there is still a fair amount of manual work in typing or pasting our values into the console. Fortunately, we have a testing API ready to use for more scripting. Let’s extend our API a little bit and add some test cases.
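(Extending the sketch: the test value table becomes data, and the whole suite becomes a single function call.)

    let testCases = [
        ['1, 2, 1, 2', 1.5, 0.5],
        ['1, 2, 3, 1, 2, 3', 2, 0.816496580927726],
        ['', NaN, NaN],
        ['a, b', NaN, NaN]
    ];

    function runSuite() {
        testCases.forEach(function (testCase) {
            verifyStats(testCase[0], testCase[1], testCase[2]);
        });
    }

    runSuite();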

Now, this is the kind of automated power we can get from being programmers. We have combined our test value table and our UI tests into a single script which will allow us to simply run the test suite over and over. We are still close enough to the original edit/refresh cycle that our development feels fast, but we also have test suites we can run without having to constantly refer back to our test value table.

As we write new code, we can guarantee in milliseconds whether our app is still working as designed or not. Moreover, we are able to perform exploratory tests on the app and add the new-found results to our test suite, ensuring our app is quick to test from one run to the next. Let’s have a look at running the test suite from the console.

[Image: Test suite run from the browser console]

Being able to rerun our tests from the console helps to speed the manual write/refresh/check loop significantly. Of course, this is also no longer just a manual process. We have started relying on automated tests to speed our job.

This is exactly where I expected we would end up. Although this is a far cry from using a full framework to test our code, we can see how this walks us much closer to the act of writing automated tests to remove manual hurdles from our development process.

With this simple framework, it would even be possible to anticipate the results we want and code them into the tests before writing our code, so we could simply modify the code, refresh, and run our tests to see whether the new results appear as expected. This kind of workflow would allow us to prove we are building required functionality into our programs as we work.

Though I don’t expect this single post to convince anyone they should completely change the way they develop, hopefully this gives anyone who reads it something to think about when they start working on their next project, or when they go back to revisit existing code. In the next post, we’ll take a look at separating the UI and business logic so we can test the logic in greater depth. Until then, go build software that makes the world awesome!

Domain Modeling For Better Contracts

Jun 29, 2016

In the post about communicating contracts through endpoint enforcement, we took a look at some of the basic types available in Signet. Today we are going to talk about how to add more information to your types by creating your own data types.

Last week we took a look at how to build types as sets with characteristic functions. This week we will apply that information in order to add extra information to our types.

By this point I’m certain there are plenty of people who are thinking I’ve gone completely off the rails. Javascript, after all, is a dynamically typed language. Don’t burden yourself with all of this type stuff and just write some code!

Although this is true, most people view types as a constraint which only causes pain. That might be a fair assessment if you are coming from a language like Java, which imposes lots of artificial constraints around type creation and, after all is said and done, still leaves you with types which are weak and restrictive.

On the other hand, if we consider types as a way to add a layer of correctness checking and a tool for communicating with others, types become less a restriction and more a tool we can use to make our programs better. Good types will make a program transparent and predictable. These are traits we definitely want in our programs.

Just as a refresher, let’s have a look at where we left off with our purchase API from the enforcing endpoints blog post. This way we have a common position to understand where we started and where we’re going.

    var enforcedApi = {
        computeTax: signet.enforce('number => number => number', computeTax),
        computeTotal: signet.enforce('function => number => number', computeTotal),
        computeSubtotal: signet.enforce('array => number', computeSubtotal)
    };

    let computePurchaseTotal = signet.enforce(
        'number, array => number',
        function computePurchaseTotal(tax, purchases) {
            return enforcedApi.computeTotal(enforcedApi.computeTax(tax))(enforcedApi.computeSubtotal(purchases));
        }
    );

Now that we have our API defined with regard to basic types, we can start to ask more meaningful questions. Instead of asking things like “what does this function do,” we can ask directed questions to inform our programming better: What kind of numbers are they? What is in the array? What kind of argument must the function take?

The last two questions are easiest to answer since we don’t have to look any farther than higher-kinded types. This is, of course, scary sounding the first time you hear it. I had no clue what a higher-kinded type was the first time I heard the term. Fortunately, many of you may already be familiar with the idea even if you don’t know the name: generics in Java and C# are built on the same notion of parameterizing one type with another.

Higher-Kinded Types and You

First and foremost, let’s discuss what a higher-kinded type actually is. (It’s a type.) Once we have a better grasp on that, we will use it in code to make everything a little more clear.

A higher-kinded type is simply a type which takes a type as an argument and returns a type. I know, that sounds weird. How does a number take a string and return a type? I asked the same thing.

It turns out, however, that it’s not nearly as foreign as it might seem. One very common type we rarely think about in Javascript which could easily be handled as a higher-kinded type is an array. An array is, itself, a type, but it contains values which are also typed. This means, if we had a language to express it, we could declare an array which contains only a single type.

As it turns out, there are potentially infinite different types which are, or could be, higher-kinded. In this post we are going to look at just two: array and function. With the type signature language available with signet, we can explicitly declare an array type as needed. This means we can do things like the following.

    var isArrayOfNumbers = signet.isTypeOf('array<number>');

    isArrayOfNumbers([1, 2, 3, 4, 5]); // true
    isArrayOfNumbers([1, 2, 3, 4, 'foo', 'bar', 'baz']); // false

We can see both of the tested arrays are completely valid Javascript arrays, but the second is not an array of exclusively numbers. There are ways we could create an array which would support numbers and strings, but that’s beyond the scope of this post.

Just like we can declare information about arrays, we can also say something about the expectations around a function. Instead of simply saying a value is a “function,” we can actually say a value is a “function which takes a number.” In much the same way we declare the contained type for arrays, our function type declaration is “function<number>”.

Now that we have an expanded type language to draw upon, let’s update our API and clarify the communication of our domain model.

    var enforcedApi = {
        computeTax: signet.enforce('number => number => number', computeTax),
        computeTotal: signet.enforce('function<number> => number => number', computeTotal),
        computeSubtotal: signet.enforce('array<object> => number', computeSubtotal)
    };

    let computePurchaseTotal = signet.enforce(
        'number, array<object> => number',
        function computePurchaseTotal(tax, purchases) {
            return enforcedApi.computeTotal(enforcedApi.computeTax(tax))(enforcedApi.computeSubtotal(purchases));
        }
    );

Subtyping With Characteristics

Now we just have the ‘number’ type scattered everywhere throughout our code. Although this is better than nothing, it would be SO MUCH better if we actually knew something about those numbers. What do they mean? How are they used? What are the constraints?

It turns out we have just the thing to remedy this pain: characteristic functions. As we know from our earlier discussion of characteristics, we can add richness to our type system through set-describing predicate functions. (Protip: all predicates describe sets.)

Before we dive into creating new types willy-nilly, let’s take a moment to account for the different number types we have. By properly identifying the actual domain language we care about, we can create better types which will allow us to clearly describe our application to people who might know nothing about it.

Ultimately, we care about tax, price, amount of tax to pay (tax amount) and purchase total. If we simplify this list a bit, we can identify a couple of distinct bits of information.

First let’s consider tax. Tax is a percentage amount. Since, where I live, taxes are always greater than or equal to 0%, but always less than 100%, I am going to say tax is a percent value which will always fall between 0 and 1. For example, in San Diego, sales tax is currently around 8% or 0.08.

Now, we can take a look at price, tax amount and purchase total. Each of these is related to an amount our customer will be paying. This means we can roll these all into some aspect of price. We will say a price value will be greater than or equal to 0. This describes our data pretty accurately for the moment, so let’s go with that.

With our base types sorted out in a way we can jump off from, we can start building characteristics. By clearly defining our characteristics, we give our new types programmatic meaning. Let’s see what our basic characteristic functions will look like for tax and price.

    function checkTax(value) {
        return 0 <= value && value <= 1;
    }

    function checkPrice(value) {
        return 0 <= value;
    }

The other piece of this puzzle is registering our types with Signet. Fortunately, this is a simple process. We know each of these types is actually a number, so we can simply use the subtype function and declare these two functions as new types, inheriting from number. This is also why we didn’t need to test each value to see if it is a number; our subtyping guarantees we only ever verify numbers.

    signet.subtype('number')('tax', checkTax);
    signet.subtype('number')('price', checkPrice);

We can use our price type to create our other two types by simply aliasing them and using the price definition to ensure our data constraints are clear.

    signet.alias('taxAmount', 'price');
    signet.alias('purchaseTotal', 'price');

Let’s have a look at our updated API and see how our types are coming along!

    var enforcedApi = {
        computeTax: signet.enforce('tax => price => taxAmount', computeTax),
        computeTotal: signet.enforce('function<price> => price => purchaseTotal', computeTotal),
        computeSubtotal: signet.enforce('array<object> => price', computeSubtotal)
    };

    let computePurchaseTotal = signet.enforce(
        'tax, array<object> => purchaseTotal',
        function computePurchaseTotal(tax, purchases) {
            return enforcedApi.computeTotal(enforcedApi.computeTax(tax))(enforcedApi.computeSubtotal(purchases));
        }
    );

Duck Typing our Object

At this point, our API is pretty clear, but there is still one last type which just doesn’t quite convey the information we want to know. Our array of purchases is still described, simply, as an array of objects. This could be much better, if only there were a way to check it.

As it turns out, the Go language has popularized the notion of object similarity through duck typing and that is precisely what we are going to do here. If we know enough information, we can tell whether our object satisfies the Liskov substitution principle, and can be used in place of our intended object.

Signet provides a means to perform duck typing as well, so we don’t have to build our characteristic function from the ground up every time, because that could end up being a LOT of repeated code. Let’s build a duck typing characteristic and finish up our API types.

    let checkPurchase = signet.duckTypeFactory({ price: 'price', quantity: 'int' });

    signet.subtype('object')('purchase', checkPurchase);

Now we have a name for our purchase object type, which means we can easily check whether our array of purchases actually adheres to our expectations. Plus this will provide a way for others to understand what we intended when we wrote the code, making it much easier to write new code against the existing API.

    enforcedApi = {
        computeTax: signet.enforce('tax => price => taxAmount', computeTax),
        computeTotal: signet.enforce('function<price> => price => purchaseTotal', computeTotal),
        computeSubtotal: signet.enforce('array<purchase> => price', computeSubtotal)
    };

    computePurchaseTotal = signet.enforce(
        'tax, array<purchase> => purchaseTotal',
        function computePurchaseTotal(tax, purchases) {
            return enforcedApi.computeTotal(enforcedApi.computeTax(tax))(enforcedApi.computeSubtotal(purchases));
        }
    );

Wrapping Things Up

Although this just scratches the surface of using types in your program, hopefully this exercise helps you communicate intent and define a clear domain model. By taking core types we already know and applying a small amount of predicate logic, we surface a new way to talk about our program and the data we use.

Instead of simply using old code as a reference for what it does, add a little annotation, a little bit of logic and get a lot more bang for your buck. In the end, types don’t make everything correct all the time, but they do a lot to make you and others like you a lot more awesome!

Types, Sets and Characteristics

Jun 22, 2016

A couple weeks ago, we looked at using Signet and some of the core types to add type information to function calls. Although it is handy to have a variety of base types available to provide signatures for your functions, sometimes you want more control and finer-grained behavior.

At the most foundational level, applied types can be viewed as sets of values. This means, for any type, we can easily construct a set which will describe that type. For instance, the type ‘string’ can be written as the set of all values which are strings. Although this may seem like a trivial way to perform a conversion of a type to a set, it gives us a way to start rethinking the way we interact with type information.

Sets as Types

We can, somewhat informally, say that sets are types. Although this doesn’t capture every nuance of types, it allows us to capture a lot of power in a simple idea. We looked at defining the string type as a set of values. What this really means is that strings are a particular set of values within the set of all assignable values.

If we begin our sets by considering all values assignable and available within Javascript, we can refer to that set as our “universe.” Within that universe, we could choose a variety of different sets, but regardless of which set we choose, the new set will be contained completely within the original universe.

Using this universal definition, we can consider our strings again and how we might describe the set of all strings. First of all, we can ask if a value is contained within our universe. A good example of a value which is distinctly NOT in the Javascript universe is 1000! (1000 factorial). Although this is an integer which actually exists, Javascript will simply evaluate it as Infinity. This is not something we will need to test for; it is simply an indication of an upper bound in Javascript.

We could, however, define a set of numbers we declare as our domain set. We can call this an explicit set definition. By turns, we can define a set by way of excluding items which are not in the set. This inverse set can, equally, be declared explicitly, or we can define a function which will simply tell us whether something is in the set or not. This implicit method of creating a set can be referred to as an induced set.

Let’s take a look at a meaningful question we could ask. Let’s ask if a value is a number. This means we are going to call a function which accepts type of * and returns a boolean. This kind of function is called a characteristic function.

    function isNumber (value) {
        return typeof value === 'number';
    }

Although this function, by itself, is not a big step away from what we already know, it lays the groundwork for defining a richer set of types we can interact with. From the function isNumber, we get an induced set of all values which are numbers, rather than defining the type set explicitly.

Propositions and Predicates

It has been shown in the academic community that propositions are types. What this means is we can actually consider propositions such as A and B in the expression A ∨ B as types unto themselves.

Any proposition can either resolve to a theorem (⊤) or an antitheorem (⊥), which roughly equates to the idea of true or false. In other words, we can ask a question and the answer will either be “yes” or “no.” Although this seems non-obvious at its face, this is the foundation we will use to construct a richer set of types within Javascript.

We have already seen type construction where we use a predicate to manage inclusion in a type set. We are stating that a value α must be in the set of numbers if α is a number. Even stronger than that, we can say, since our set contains ONLY values which are numbers that if α is in the set of numbers it has a type of number. This relationship between sets and type implementations is important for capturing greater amounts of information about a value as we construct subsets from sets we have defined. Let’s have a look at the logical notation to see predicates in action, so we can tie that together with our predicate notation.

Njs = the set of all values which are Javascript numbers
A: α is a Javascript number
B: α ∈ Njs

A → B
B → A

Of course, in our implementation, we really only worry about the second relation, i.e. that if a value is in the set of values which conform to the number type, the value must also conform to the number type. Given the definition of B, α ∈ Njs, we can actually conclude that the first relation is true given the definition of Njs.

We can actually reformulate this to express the type-set relation more generally. If we simply replace our specific number set with a set of any type Τ we get a new, very useful formulation we can use to extend our type reach well beyond the specifics of the language.

Τjs = the set of all values expressible in Javascript of type τ
A: ατ is a Javascript value of type τ
B: ατ ∈ Τjs

A → B
B → A

That’s a lot of symbols, words and relations. What it really means is we can identify and define any arbitrary type logically and, in turn, define a set containing all values of that type, which will induce an “if and only if” (iff) relationship. Let’s take a look at how we might use this to implement our own type.

Defining a New Javascript Type

Clearly we won’t be able to build our type into the core language without getting on TC39, issuing a new standard and waiting for several years while everyone adopts it, but we can induce our type through a new predicate function. Let’s suppose we want to define a new type, Integer. We could express our type in the following way:

Intjs = the set of all values expressible in Javascript of type number which are integers
αint ∈ Intjs

With this, we can define a function expressing this relationship, which we can use to verify whether a value is in our set Intjs or not. With regard to the relation between types, expressible values in Javascript and our integer set, we can guarantee the stability of our type and the correctness of our verification.

    function isInt (value) {
        return typeof value === 'number' && Math.floor(value) === value;
    }

    isInt(5); // true
    isInt(9.3); // false

Although this function is sufficient for verifying whether a value is an integer, we are actually duplicating our efforts. Moreover, it lacks a certain expressiveness which we might like to see. Let’s use our original isNumber function to say a little more about the meaning of our int type.

    function isInt (value) {
        return isNumber(value) && Math.floor(value) === value;
    }

This new function performs the same check as the original, but it reflects a deeper relationship between our number set, Njs, and our integer set, Intjs. In other words, what we can see expressed here is the typical inheritance property of the is-a relationship.

The Is-A Relationship of Types

As is true for objects in classical object oriented programming, types can also have an inheritance relationship where one type is a subtype of another. This is what we mean by is-a relationship. We can say an integer is a number, or a name is a string. Although an integer can be a type in its own right, we know the number type is the foundation type in Javascript for any numeric representation. This means, for any function which requires a number, an integer is an acceptable value.

Our isInt function demonstrates the is-a relationship by using the number set definition as the first requirement of our check for set inclusion. Let’s continue the chain and create a characteristic function to define our natural numbers. Our natural number set will be a strict subset of our integer set.

    function isNatural (value) {
        return isInt(value) && value >= 0;
    }

Now we can see that a natural number is an integer which is a number. This, of course, is similar to OO subtyping with regard to relationship, but is compositional in nature. In fact we can actually describe this type relationship as a relationship of sets, like so:

Naturaljs ⊂ Intjs ⊂ Numberjs

With the repeated behavior of including a function call from the superset, we can start looking for a way to uniformly describe our sets and their relationships. Let’s create a new function, subtype, to help us create set relationships in order to streamline the process of defining type relationships.

    function subtype(parentCharacteristic) {
        return function (childCharacteristic) {
            return function (value) {
                return parentCharacteristic(value) && childCharacteristic(value);
            };
        };
    }

Subtype allows us to define our types with functional composition and define our new characteristics with the assumption that we are already working from within a specific type. Let’s rewrite our isNatural check using subtype.

    function isNaturalType (value) {
        return value >= 0;
    }
    
    var isNatural = subtype(isInt)(isNaturalType);

Now the body of our characteristic function is expressed with an implicit relation to the superset of the natural numbers, the integers. This use of higher-order functions to express set relations is extremely powerful for defining and describing the value types we use in our development.

Wrapping Up

This was a somewhat dense tour of how we can construct types in Javascript, so don’t worry if it takes a little while to pull the pieces together. The important take-away is that we can construct our own types with meaningful names and clear relationships in order to better understand the way our programs work.

At the end of the day, we are human, so expecting us to actively deal in generalized abstractions such as strings and numbers may not be a reasonable request. Instead, we can reclaim the reins and define our own type language which speaks to future developers in the language of our intent. Go make types and make your programs better!

  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

    Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times while pages load and during information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

    I have been working for quite a while to devise a method for assessing web sites and their ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second, I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

    Google has some pretty neat toys for developers, and CakePHP is a pretty friendly, well supported framework for quickly building applications. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Inclusive or Exclusive Web?

    When you start working on a website or application, what is your goal? In the current state of the web, there are many ways you can carry your user but, in the end, you must choose web inclusive or web exclusive. Sites with rich APIs which interact with the world around them are web inclusive. Sites which focus internally, drawing little content from the outside web and, ultimately, giving nothing back are web exclusive.

  • Usabilibloat or Websites Gone Wild

    It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • What Have I Done? (Redux)

    A little earlier this month, I made a post to Posterous called “What Have I Done?” It was less a post about what I had done as what I was doing. Here we are, approaching the end of the month and I’ve just completed phase one of what I was doing.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • User Exwhatience?

    A couple of weeks ago, a friend of mine sent out a tweet asking what the ‘x’ was in Ux. I shot back a pithy “Ux is User Experience.” In a small way, the question got my mind rolling. I didn’t realize, at the time, that I was considering who does and doesn’t know anything about user experience and why that might be.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given a task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhance findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.

  • The Selection Correction

    User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • Well, Now, Isn't that Flashy?

    I make no secret of the fact that I’m not a huge fan of Flash.  It’s not really because I feel there is anything inherently wrong with Flash.  I am opposed to the gross overuse and misuse that happens every day.  Sometimes only Flash will do, and in those circumstances it is the answer.  Sometimes Flash is the answer to a question that is totally incorrect.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of  unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed would be the gaming phrase.  The one thing that I would like to note, as it was my big issue last evening, is scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok, if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
