Mathematicode: A Love Letter to Math

Sep 4, 2015

I’ll open up by saying I am a mathematician by education. I’m not one of the mathematical great minds and I am unlikely to ever win a Fields medal, but I have a degree in applied math and it’s what I know. The reason I share this is because it helps define the way I think about problems I work on. Regardless of whether I am working on a problem at work or working on something in my spare time, my math background helps to define the approach I take to solving many of the problems I encounter.

I have worked in the software world for a while now and I have encountered a good number of developers who come from a variety of backgrounds. I won’t say all the best programmers I have met are math people, but I won’t say they aren’t. Some people come to programming through an organic sort of process and others go through a formal education. Either way, there is something that comes forward really fast: Boolean logic.

Boolean logic is very closely tied to the mathematical work that George Boole did in the 19th century. This means, regardless of anything else you do with programming, you are doing math. Always.

I believe that even the people who started off as designers or biologists or cognitive scientists, if they become good programmers, they end up being good at logic and, yes, math.

There is a common debate around what a programmer is. Are they scientists? Plumbers? Engineers? Developers? What are they?

I believe that every software engineer is a developer. I do not believe that every developer is a software engineer. By this, I can say, I believe that every programmer is a developer. An author of code. Programmers are writers.

As is true with journalists and novelists, there are good code authors and bad ones. Some people take pride in writing their code in a crafted, carefully maintained sort of way. I think this is a good thing and more people should do it.

The difference between a good and a bad author of code is the math they do. If you look to engineers in fields like mechanics or structures, you will see math and science. Computer science incorporates these same elements in different ways. This means, in order to truly author great code, you must embrace the applications of math and make them part of your programming experience.

I believe that in order to be elevated to the state of being an engineer, you must look beyond the simple trappings of the language you work in. I speak the English language, but this does not mean that I am a linguist. I am merely a user of the language. Many developers live in much the same state in their code. Become more than you are and explore the things that make your language work.

I am not prepared to call myself an engineer, by the way. I believe I would make a terrible engineer. I am prone to flights of fancy and I am much more interested in the reasons why than the application of the math I understand. I bury myself in books about math and the underpinnings of programming rather than looking for sturdy materials to construct things with. I, with all of my foibles, am more aligned with philosophers than I am with the engineers who apply their knowledge to make the world great. I am okay with this realization.

In the end, however, I would encourage anyone who calls themselves a developer to dig and pry and tear whole chunks from computing. Find the math that lives underneath it all and apply it to what you know. As you grow and become a greater developer, embrace the math that will make you an engineer and bring sanity and soundness to the code that you and others like you will have to support and maintain for years to come.

Testing Promises: Promise Fakes

Sep 2, 2015

Javascript developers notoriously say unit testing is hard. I think the problem is actually more specific than that. Unit testing pure functions and business logic is actually pretty easy: you put a value in, you get a value out. If the value you get back is what you expected, your test passes. What is genuinely hard to unit test is asynchronous logic.

Asynchronous logic is so hard to test that the Angular team actually built a special mocking system for testing calls made through the $http service. In reality, though, if your unit tests are littered with $httpBackend references, you’re doing it wrong.

I won’t go into philosophical discussions about promises and callbacks, so let’s just agree that people use promises. A lot. Promises have become the currency for modern asynchronous requests. Whether you are waiting on another thread or on a response from across the internet, if you have to wait for your execution to come back from an asynchronous behavior, you might see a promise in the mix. Your code might look a little like this:

```javascript
class MyUsefulClass{
    constructor(myService){
        this.myService = myService;
    }

    myFunctionUnderTest(callback){
        var promise = this.myService.asyncMethod();

        promise.then(callback);
    }
}
```

Let’s cut to the chase: promises are easy to throw into the middle of your code, but they are really hard to test… until today.

In unit testing there is a concept known as fakes. Fakes are often mistakenly called mocks, but mocks actually store something to use later when you are handling your test expectations. No, fakes are just what they sound like: a fake something.

Fakes contain the minimum amount of code to satisfy your functionality and nothing more. Sometimes fakes are nothing more than an empty object or function. Other times they are a little more involved, doing stuff like calling passed functions and returning values, but at the end of the day, a fake is a placeholder used for guaranteeing your unit under test won’t fail and will be isolated.
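To make that concrete, here is a small hypothetical sketch of fakes at both ends of that spectrum; the logger and repository names are made up for illustration, not taken from any real service:

```javascript
// Hypothetical fakes: fakeLogger is the bare-minimum variety, while
// fakeRepository does a little more by invoking a passed callback
// with canned data instead of touching the outside world.
var fakeLogger = {
    log: function(){} // an empty function is enough to satisfy the contract
};

var fakeRepository = {
    fetch: function(callback){
        // hand back canned data immediately; no real I/O happens
        callback({ id: 1, name: 'test record' });
    }
};

// The unit under test never knows it is talking to a fake.
function getRecordName(repository){
    var name = '';

    repository.fetch(function(record){
        name = record.name;
    });

    return name;
}

console.log(getRecordName(fakeRepository)); // test record
```

Because the fake responds synchronously with predictable data, the unit under test stays isolated and the assertion stays simple.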

So, promises and fakes.

It is a law of unit testing that you do not talk to the outside world and you do not talk about fight club. This means, if you have a function that is going to call a service which will, in turn, interact with the world at large, you have a big problem. Fortunately, you probably wrapped that up in a promise and that is the crack in the armor we can use to break our unit out into its own isolated space.

Underneath it all, promises are nothing more than objects with a bunch of trickery hidden inside. With the knowledge of fakes and a general understanding of the promise syntax, we can build a stand-in object. A stunt promise, if you will. It’s pretty common to use Kris Kowal’s concept of promises, as developed in Q, so let’s build our fake around that.

Here’s what a promise fake might look like:

```javascript
function PromiseFake(){
    this.failState = false;
    this.error = null;
    this.response = null;
}

PromiseFake.prototype = {
    setFailState: function(failState){
        this.failState = failState;
    },

    setError: function(error){
        this.error = error;
    },

    setResponse: function(response){
        this.response = response;
    },

    then: function(success, failure){
        if(this.failState){
            failure(this.error);
        } else {
            success(this.response);
        }

        return this;
    },

    catch: function(callback){
        if(this.failState){
            callback(this.error);
        }

        return this;
    }
};
```

It’s about 40 lines of code, but now we have something we can work with. This promise fake can be instantiated for each test we write and it won’t muddy the state from test to test. It’s chainable, so if there is code using chained promises, it can be tested. Finally, success and failure states can be set with errors and response values so any function that might be relying on a specific response value will be testable with little extra effort.

Faking can be hard work, but if you do it right, you only ever have to do it once. Hooray! Now let’s look at a test for our method.

```javascript
describe('MyUsefulClass', function(){

    var myInstance,
        myService;

    beforeEach(function(){
        myService = {
            asyncMethod: function(){}
        };

        myInstance = new MyUsefulClass(myService);
    });

    it('should call spy on success', function(){
        var spy = jasmine.createSpy('callback');

        myService.asyncMethod = function(){
            return new PromiseFake();
        };

        myInstance.myFunctionUnderTest(spy);

        expect(spy).toHaveBeenCalled();
    });

});
```

That was easy! We didn’t have to do a whole mess of monkey patching to our code and we didn’t have to use some crazy mechanism to intercept HTTP requests way down the stack. Fakes are great when used for the powers of good. The goodness doesn’t stop there, though. Here’s how we can verify that our promise fake actually works as expected.

```javascript
describe('Do something that uses a promise', function(){

    it('should call success spy', function(){
        var myPromise = new PromiseFake(),
            spy = jasmine.createSpy('successCallback');

        myPromise.then(spy, function(){});

        expect(spy).toHaveBeenCalled();
    });

    it('should call failure spy', function(){
        var myPromise = new PromiseFake(),
            spy = jasmine.createSpy('failureCallback');

        myPromise.setFailState(true);
        myPromise.then(function(){}, spy);

        expect(spy).toHaveBeenCalled();
    });

    it('should chain', function(){
        var myPromise = new PromiseFake(),
            spy = jasmine.createSpy('callback');

        myPromise.setResponse('foo');
        myPromise.then(function(){}, function(){}).then(spy, function(){});

        expect(spy).toHaveBeenCalledWith('foo');
    });

});
```

That’s pretty much it!

We had to do a little grunt work at the beginning, but after we built our class, we could fake promises all day long and save ourselves headaches forever more. The best part is, now we have eliminated the asynchronous behavior from our code and made everything testable. This makes our tests easier to read, easier to maintain and clearer to write. Who could argue with that?

What this all really leads us to is this: promises are tough to test, but when we absolutely, positively have to have them, we can trim the fat, clean out the code and fake our way to a brighter tomorrow. Isn’t that what we all really want? Go write some tests and make your code a better place to live.

Mainstay Monday: Linked Lists

Aug 31, 2015

Dear data structures, which of you is most useful? I don’t know, but linked lists are pretty awesome. Linked lists are great for any number of things: a list of data that can grow without bound, a data structure that can be incrementally increased, a queue or a stack. There are even more things that can be done with linked lists, but before we can dig into what can be done with a linked list, we need to understand what it even is.

The most basic linked list is a series of objects that point from one to another. This is what we are going to dig into today. To get a basic understanding of how a linked list works, let’s have a look at a basic diagram of a linked list.

|Object A| -> |Object B| -> |Object C| -> null

By this diagram, it makes sense to say that an empty list is null. It isn’t merely null-like or falsey; it is actually null. Null contains nothing and points to nothing. As soon as you put your first element into the list, you get this:

null => |Object A| -> null

The last pointer in our linked list always refers to null. This makes it really easy to identify when we have hit the end of the list. This kind of list is called a singly-linked list, which means we can only traverse the list in one direction. Typically when a linked list is created, the head of the list is held as a pointer. Each node, including the head, is an object, and they are all essentially the same. Let’s have a look at an implementation of a basic linked list item.

```javascript
function ListItem(value){
    this.value = value;
    this.nextPointer = null;
}

ListItem.prototype = {
    val: function(){
        return this.value;
    },

    append: function(node){
        var pointer = this.nextPointer;
        this.nextPointer = node;
        node.setNext(pointer);
    },

    setNext: function(pointer){
        this.nextPointer = pointer;
    },

    next: function(){
        return this.nextPointer;
    }
};
```

There’s really not much to this, but the important items here are the constructor, which sets the internal value (I am treating it as read-only for our case; let’s just say it’s better that way). After that, we need an accessor, so let’s use val. Finally, we need to be able to set and retrieve the next value in the list; append and next will do just fine for this. Now, if we want to create and use our list, it would look like the following:

```javascript
var listHead = new ListItem('foo'),
    listTail = new ListItem('bar'),
    tempItem;

listHead.append(listTail);
tempItem = new ListItem('baz');
listTail.append(tempItem);
listTail = tempItem;

tempItem = listHead;
console.log(tempItem.val()); // foo
tempItem = tempItem.next();
console.log(tempItem.val()); // bar
tempItem = tempItem.next();
console.log(tempItem.val()); // baz
tempItem = tempItem.next();
console.log(tempItem); // null
```

This is a pretty manual process, but it gets the job done and we can see the basic use of our linked list object. Each object links to the next, and the last object always refers to null. This gives us a nice, predictable structure and a clear, obvious algorithm for accessing each element in the list.
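That clear, obvious algorithm is worth spelling out on its own. Stripped down to plain object literals (rather than the ListItem constructor, so the sketch stands alone), a traversal is nothing more than "follow the next pointer until you hit null":

```javascript
// Hand-built three-element list using bare objects; the shape
// mirrors ListItem: a value and a pointer to the next node.
var head = { value: 'foo', next: null };
head.next = { value: 'bar', next: null };
head.next.next = { value: 'baz', next: null };

var visited = [],
    current = head;

// Follow the chain until we land on null, the end-of-list marker.
while(current !== null){
    visited.push(current.value);
    current = current.next;
}

console.log(visited); // ['foo', 'bar', 'baz']
```

Every traversal, search and iteration over a singly-linked list is some variation on this one loop.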

Let’s add some convenience functionality around our list so we can dig into some of the interesting characteristics of a linked list. We can create a list object and an iteration object. These will give us a nice API to work with.

```javascript
function Iteration(list){
    this.current = null;
    this.listHead = list;
}

Iteration.prototype = {
    next: function(){
        var next = (this.current !== null) ? this.current.next() : this.listHead;
        this.current = (next !== null) ? next : this.current;
        
        return (next === null) ? null : next.val();
    },
    
    hasNext: function(){
        var next = (this.current !== null) ? this.current.next() : this.listHead;
        return next !== null;
    }
};

function List(){
    this.first = null;
    this.last = null;
}

List.prototype = {
    append: function(value){
        var item = new ListItem(value),
            last = this.last;
        
        this.last = item;
        
        if(last){
            last.append(item);
        }
        
        if(!this.first){
            this.first = item;
        }
    },
    
    getFirst: function(){
        return this.first;
    },
    
    getLast: function(){
        return this.last;
    },
    
    iterate: function(){
        return new Iteration(this.first);
    }
};
```

Here's what our original usage looks like once we wrap everything up:

```javascript
var myList = new List(),
    iterator;

myList.append('foo');
myList.append('bar');
myList.append('baz');

iterator = myList.iterate();

while(iterator.hasNext()){
    console.log(iterator.next());
}

console.log(iterator.next());
```

This is much cleaner and it gives us a little insight into the first aspect of linked lists: access performance. If we want to access the first or last element of our list, it happens in constant time, which we can express in big-O notation as O(1). This is really fast, just about as fast as you can get for value access.

On the other hand, we also lose something for this gain at the front and back of the list. Elements in the middle can only be accessed in linear, or O(n), time. This means, to reach the nth element, you have to cycle through each element before it. For small lists this is not a problem, but for large lists it can be.

These performance characteristics make linked lists great for small data sets and for things like stacks and queues. Linked lists, however, are not suited for random access or repetitive search. Sorting's not great either, but that's another discussion for another day. Let's look at accessing elements.

```javascript
var myList = new List();

myList.append(1);
myList.append(2);
myList.append(3);
myList.append(4);

function nth(list, index){
    var foundItem = list.getFirst();

    // Looping to access! Linear time element access.
    while(index > 0){
        foundItem = foundItem.next();
        index--;
    }

    return foundItem;
}

var firstItem = myList.getFirst(), // O(1) - fast
    lastItem = myList.getLast(), // O(1) - fast
    secondItem = nth(myList, 1); // O(n) - linear
```

Modification characteristics, on the other hand, are fantastic. If you need to add elements to a list, it's fast. The addition action is nearly as fast as reading the head or tail of the list. If you already have the list element you want to insert after, you get O(1) insertion speed; the most costly part is the instantiation of a ListItem object. Each time you call append, it just adds an element and you're done. Speedy!
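To see why insertion next to a known node is constant time, here is ListItem's append behavior in a condensed, standalone form (redefined here so the snippet runs on its own); notice it only relinks two pointers, no matter how long the list is:

```javascript
// Condensed restatement of ListItem from earlier, just enough
// to demonstrate constant-time insertion after a known node.
function ListItem(value){
    this.value = value;
    this.nextPointer = null;
}

ListItem.prototype.append = function(node){
    var pointer = this.nextPointer;

    this.nextPointer = node;        // point this node at the new one
    node.nextPointer = pointer;     // point the new node at the old next
};

var a = new ListItem('a'),
    c = new ListItem('c'),
    b = new ListItem('b');

a.append(c); // a -> c
a.append(b); // a -> b -> c; two pointer writes, no traversal
```

The insertion cost never grows with the list because no traversal is involved; finding the node to insert after is the part that can cost O(n).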

There is also another kind of list: the doubly-linked list. As it turns out, its performance characteristics aren't terribly better. You gain the ability to move both up and down through the list, but access and appending are about the same speed.
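For illustration, a doubly-linked node might look like the following. This is only a sketch in the style of ListItem above, not a full implementation; prevPointer is the real addition, and append has to maintain links in both directions:

```javascript
// Sketch of a doubly-linked node: same shape as ListItem, plus a
// pointer back to the previous node.
function DoubleListItem(value){
    this.value = value;
    this.nextPointer = null;
    this.prevPointer = null;
}

DoubleListItem.prototype.append = function(node){
    var pointer = this.nextPointer;

    this.nextPointer = node;
    node.prevPointer = this;
    node.nextPointer = pointer;

    // keep the back-link of the displaced node consistent
    if(pointer !== null){
        pointer.prevPointer = node;
    }
};

var first = new DoubleListItem('foo'),
    second = new DoubleListItem('bar');

first.append(second);
// first.nextPointer is second, and second.prevPointer is first,
// so the list can now be walked in either direction.
```

The extra pointer buys backward traversal at the cost of a little more bookkeeping on every insertion and removal.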

Linked lists, by themselves, are useful for fast writes and a small memory footprint for storage expansion. They don't require any pre-allocation and can grow incrementally. When we look at other data structures, linked lists can make a good foundation structure because of the fast write behavior; removal is equally fast. We'll take a look at those structures in another post. Until then, think about your data and how you need to use it. Perhaps a list is what you've needed all along.

Mainstay Monday: SOLID - Dependency Inversion

Aug 24, 2015

This post is part of a series on the SOLID programming principles.

This is it, the final SOLID principle. Dependency inversion is probably one of the most talked about SOLID principles since it drives so much of what we see in programming today. From Inversion of Control (IoC) libraries to the dependency injection in modern Javascript frameworks like Angular and Ember, popular OO programming has embraced this principle more than any other. It’s possible that the success of dependency inversion is related to the fact that it can be enforced with a technical solution.

Let’s start by talking about what dependency inversion is. As Uncle Bob Martin says, dependency inversion is the dependency on abstractions instead of concretions. More specifically, this means that any object or function should not rely on the existence of specific concrete parts of an object, but instead, it should expect and use a contract while letting an outside entity define the concrete object that will be used.

To demonstrate this idea, let’s first take a look at a counter-example to our principle. This is probably one of the most flagrant violations of the dependency inversion principle I have encountered in Javascript and it is right on a popular examples page. The following example is lifted directly from that examples page:

```javascript
var Mailbox = Backbone.Model.extend({

  initialize: function() {
    this.messages = new Messages;
    this.messages.url = '/mailbox/' + this.id + '/messages';
    this.messages.on("reset", this.updateCounts);
  }

  /* ... */

});

var inbox = new Mailbox;
/* ... */
```

Clearly, this is an example from the Backbone site. Backbone, you’re doing it wrong.

If you look back to my post on dependency injection, you’ll see we could easily create a factory for each of these instance-creation lines. It could be simple, like the following code:

```javascript
var messageFactory = {
        build: function(url){
            var messages = new Messages;
            messages.url = url;
            return messages;
        }
    },
    mailboxFactory = {
        build: function(){
            return new Mailbox;
        }
    };

var Mailbox = Backbone.Model.extend({

  initialize: function() {
    var url = '/mailbox/' + this.id + '/messages';

    this.messages = messageFactory.build(url);
    this.messages.on("reset", this.updateCounts);
  }

  /* ... */

});

var inbox = mailboxFactory.build();
/* ... */
```

It’s a few extra lines of code, but the separation of concerns you get can make a huge difference when you write unit tests or if the creation of a message or mailbox ever becomes more complicated than simply calling new. By inverting the dependency chain on this small example, we centralize our creation, eliminate the new keyword from our code and provide a facility for easily injecting a substitute factory to help in testing that our code does only what it is supposed to do.

The other thing that happens when we break out the creation logic is that it becomes obvious what is really happening in the code: we are creating a messages object that is, really, an event handler. Now we can isolate this behavior fully and put guarantees around the actions our model will take when we trigger certain events.

Let’s take a look at the other side of the picture and write a little Jasmine code to test our message model.

```javascript
describe('messages', function(){

    var factory;

    beforeEach(function(){
        factory = {
            build: function(){
                /* noop */
            }
        };
    });

    it('should set an event on the messages object', function(){
        var spy = jasmine.createSpy('on');

        messageFactory = Object.create(factory);
        messageFactory.build = function(){ return { on: spy }; };

        mailboxFactory.build();

        expect(spy.calls.mostRecent().args[0]).toBe('reset');
        expect(typeof spy.calls.mostRecent().args[1]).toBe('function');
    });

});
```

If we had tried to do that without our factory, it would have been a lot more challenging to wrap everything up in tests. Since we broke the new object out of the mailbox definition, testing became as easy as replacing our definition for the factory, and we got direct access to everything we needed inside the object setup.

Finally, when we separate our object instantiation out from the core body of our code, it becomes much easier to modularize all of our code and separate pieces as they become unwieldy. This gives us better guarantees around the stability of each part of our code, since each module can be written and tested independently, breaking the code apart at intentional seams.

The takeaway from all of this is, when we invert our dependencies in our code and rely on abstractions to define our program, we improve stability, guarantee contract integrity and improve the testing story. All of these things add up to better programming and more stable output. Isn’t that really what we all want?

An Open Letter to Single-Language Programmers

Aug 21, 2015

I work with several programmers every day and I end up giving advice a lot. Let’s not discuss whether people are actually soliciting my advice, or not, but I give it regardless. This evening I started thinking about things I think every programmer should experience. There are lots of different topics that come to mind, including math, hardware, networking and so on. One thing that keeps coming back to me, though, is languages.

When I started studying computer science in college, I thought, “learn one language, you basically understand them all.”

Incorrect!

Some languages are very similar to one another, but each one has its own specific flavor or idiom. Some languages, on the other hand, are radically different from anything that is used in the mainstream commercial development landscape.

Early in my career resurrection I bounced from language to language, always staying close to home: PHP, C#, Java, JavaScript and back again. I wrote the same way in each of these languages, doing things only slightly differently because there was a slightly different method to dig into what I needed.

The first time I really got a clear understanding that all languages are not made equal was when I built a content cache in Java. Everything was in the runtime, and I just created a static cache instance that lived outside of the current control flow and stored content I could retrieve later. I wasn’t really breaking new ground, but I had to deal with thread safety, cache invalidation and so on.

I swung back around and decided I was going to do something similar in PHP. Lo and behold, I discovered that no matter what I did, there was no way I could spin up a long-running thread or long-lived memory location I could easily store and retrieve anything from. The PHP lifecycle is akin to the lifespan of a housefly.

That was the first time I got a really clear picture of how different languages could be, even when they feel similar to an outsider.

My next eye-opening experience was when I decided I was going to look into a functional language and started seriously playing with Clojure. For the record, I love Clojure. It just feels right to me. I don’t know why, but it does. Moving from languages which borrow syntax from C to a Lisp really changed the way I think about programming altogether.

All of a sudden my view of programming took a hard right turn and I stopped thinking about shared state and started thinking about data transformations. State became nothing more than a transition from one data form to another.

My day job revolves almost exclusively around writing JavaScript and I couldn’t stand looking at sets of loops and conditional blocks. It was a rat’s nest of too many variables and too much ceremony. Every line looked like a duplication of code and all of the code looked like a bomb waiting to go off.
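To give a hypothetical flavor of that shift (the data here is made up for illustration), compare the loop-and-conditional style with the same computation written as a chain of data transformations:

```javascript
// Made-up data for illustration only.
var orders = [
    { total: 25, shipped: true },
    { total: 50, shipped: false },
    { total: 10, shipped: true }
];

// Loop-and-conditional style: an index, a mutable accumulator
// and a conditional all tangled together.
var shippedTotal = 0;
for(var i = 0; i < orders.length; i++){
    if(orders[i].shipped){
        shippedTotal += orders[i].total;
    }
}

// Transformation style: each step takes data in and hands data out.
var transformedTotal = orders
    .filter(function(order){ return order.shipped; })
    .reduce(function(sum, order){ return sum + order.total; }, 0);

// Both values are 35.
```

Same answer either way, but the second version reads as a description of the data's journey rather than a recipe for mutating state.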

Clojure changed me.

This doesn’t mean that I spend my days writing Clojure now, in fact I still write JavaScript, but every language I have touched has changed me a little. Java taught me about working in a long-running environment, PHP taught me about short-lived scripts. C# taught me about data structures and Clojure a lot about data management.

In the end, the reason I continue to work with JavaScript is because it reminds me a lot of myself. It draws upon a little bit from a lot of different influences. The important thing is, JavaScript reached out and touched all of those things and brought something back with it.

I share all of this because I want to say to anyone who has gone their entire career only ever writing in a single language, try something new. Reach outside of your comfort zone and do something that looks nothing like the work you’ve done until now. Write a program that scares you. Use a language that changes you. Make yourself better and never look back.

  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

I have been working for quite a while to devise a method for assessing web sites and their ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second, I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

Google has some pretty neat toys for developers, and CakePHP is a friendly, well-supported framework for quickly building applications. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Usabilibloat or Websites Gone Wild

It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree that audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhancing findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created will, ultimately, have a page dedicated to it; all content is expected to live autonomously on a page. But content, much like a web page, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Separating business and presentation logic yields several benefits: a strong site architecture; solid, clear HTML with minimal outside code interfering; and the ability to integrate a smart, smooth user experience without fear of breaking the business logic that drives it.

  • The Selection Correction

    User self-selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self-selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site; it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet, it’s modularity. Programmers preach modularity through encapsulation and design models, but sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess. Welcome to the “I need this fix last week” school of code updating. Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project, as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy, eh? Anyway, I haven’t had this thing break in a while, which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or, more likely, nobody has pushed it to the breaking point. Given enough time and enough monkeys. All of that aside, every so often my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in.” Not so much that I have an API, but rather that I can simply build out a module, include it and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use Firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. The iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing JavaScript. First of all, let me state that JavaScript syntax is extremely forgiving. You can do all kinds of unorthodox variable declarations, and use variables in all kinds of strange ways. You can take a variable, store a string in it, then a number, then an object and then go back again. “Weakly typed” would be the operative phrase. The one thing that I would like to note, as it was my big issue last evening, is the scope of your variables. So long as you are careful about defining the scope of any given variable, you are okay; if not, you could have a problem just like I did. So, let’s start with scope and how it works.
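    As a minimal sketch of the kind of pitfall described above (the function and variable names here are hypothetical, for illustration only): before ES6, JavaScript had only function-level scope, and assigning to a variable without `var` silently created a global.

    ```javascript
    // Hypothetical example: function-scoped vs. accidentally global variables.
    function buildLabel() {
      // `var` keeps `prefix` local to this function.
      var prefix = "Item: ";

      // Forgetting `var` here leaks `count` into the global scope,
      // where any other script on the page can overwrite it.
      count = 42;

      return prefix + count;
    }

    var result = buildLabel();
    console.log(result);        // "Item: 42"
    console.log(typeof count);  // "number" -- count escaped the function
    ```

    Declaring every variable explicitly (with `var` then, or `let`/`const` today) keeps its scope obvious and avoids this whole class of bug.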
