Dependency Injection Without A Framework (Or Pain)

Jul 15, 2015

If you’ve come from one of those big name, big OO frameworks, you are probably used to the idea of an Inversion of Control (IoC) container and dependency injection. If you have worked with Angular, you’re probably familiar with their dependency injection system. That’s all great, but what if you aren’t one of those people?

As it turns out, dependency injection (DI) just isn’t that hard to wrap your head around. When you talk to someone who has worked with one of the big DI systems like AutoFac or Spring, it can sound like DI is an enormous deal and could take years of practice and experience to get comfortable with. Here’s a little secret: there’s no magic. It’s not hard.

First, let’s talk about what DI is; it’s injecting stuff into your environment that you depend on. Dependency. Injection. That’s it.

You’re welcome.

Seriously, though, let’s have a little look at what DI looks like in a very hand-wavy kind of way with a class in ES6.

class Widget{

    constructor(componentFactory, widgetizer){
        this.componentFactory = componentFactory;
        this.widgetizer = widgetizer;

        this.context = {};
    }

    build(){
        let processedContext = this.widgetizer.processContext(this.context);
        //Here we do some stuff, maybe
        return this.componentFactory.create(processedContext);
    }

    setContextValue(key, value){
        this.context[key] = value;
    }

}

Obviously we know nothing about componentFactory or widgetizer, but that’s alright. All we really care about is that widgetizer has a method that processes a context and componentFactory has a create method that takes a processed context. What is inside those black boxes really doesn’t matter at this point in the application. All that matters is the API.
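
To make that concrete, here is a minimal, hypothetical sketch of what those two collaborators might look like. These implementations are invented purely to show the shape of the API; real versions would do real work.

class Widgetizer{
    processContext(context){
        //A stand-in: pretend something meaningful happens to the context here
        return Object.keys(context).length > 0 ? context : { isDefault: true };
    }
}

class ComponentFactory{
    create(processedContext){
        //Also a stand-in: a real factory would assemble a real component
        return { type: 'component', context: processedContext };
    }
}

Widget never cares how these are built or what they do internally; it only cares that processContext and create exist.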

Most of the time when people see this kind of implementation, they construct the dependencies in one of two ways: either they instantiate the objects inside the class itself, or they instantiate them inline at every place they construct the class. To this I say ‘gross.’ The practice is so bad I can’t bring myself to give an example.

Instead, here’s how we are going to do this. We’re going to use the factory pattern and create objects as we need them. Once we have a factory, we can build new widgets without breaking a sweat. Here’s what that would look like.

var widgetFactory = (function(){
    var componentFactory = new ComponentFactory(),
        widgetizer = new Widgetizer();

    function build(){
        return new Widget(componentFactory, widgetizer);
    }

    return {
        build: build
    };
})();

//Somewhere in the code
let myWidget = widgetFactory.build();

The code is so simple it practically writes itself. What’s even better, if you are writing unit tests (you should be testing all the f**king time) then the setup for your tests becomes so easy even a junior WordPress developer could figure it out. Here’s a little Jasmine for flavor:

describe('Widget', function(){
    var testWidget;

    beforeEach(function(){
        var componentFactory = { create: function(){ return {}; } },
            widgetizer = { processContext: function(){ return {}; } };

        testWidget = new Widget(componentFactory, widgetizer);
    });
});

Your unit test setup is seriously only 8 lines of executable code. Let me repeat that… EIGHT LINES. Since the instantiation of your dependencies is completely disconnected from the instantiation of your object, you can easily swap them out for testing, or replacement with a new, better version, or… whatever. There is no need to hunt down every place you instantiated your dependencies because, if they have dependencies of their own, you can just build factories for them, too.
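
For example, suppose Widgetizer grew a dependency of its own. Here’s a sketch of what that nesting might look like; the config object is made up purely for illustration.

var widgetizerFactory = (function(){
    //widgetizerConfig is a hypothetical dependency, just to show the nesting
    var widgetizerConfig = { verbose: true };

    function build(){
        return new Widgetizer(widgetizerConfig);
    }

    return {
        build: build
    };
})();

//widgetFactory could now call widgetizerFactory.build() instead of new Widgetizer()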

Now, I will say that all of those factories of factories of factories are going to get a little heavy and become a burden on your immortal soul, but that’s okay. I have another trick up my sleeve for you. Let’s create a registry and automatically handle factories out of a central object. Automatic factory… AutoFac… hmm.

Public Service Announcement: Before we start into the next part, I want to make this clear – If you aren’t using a framework, you’re building one.

Anyway, let’s build our registry.

//This quick hack is probably not safe for production code.
//Always understand and test code before you use it.
var objectRegistry = (function(){
    let registrations = {};

    function register(key, definition, dependencies){
        if(registrations[key] !== undefined){
            throw new Error(`${key} already exists in object registry.`);
        }

        registrations[key] = {
            definition: definition,
            dependencies: dependencies
        };
    }

    function build(key){
        let dependencyInstances = [null], //Trust me, you need this
            definition = registrations[key].definition,
            dependencyList = registrations[key].dependencies,
            dependencyLength = dependencyList.length;

        for(let i = 0; i < dependencyLength; i++){
            let dependencyInstance = build(dependencyList[i]);
            dependencyInstances.push(dependencyInstance);
        }

        return new (definition.bind.apply(definition, dependencyInstances));
    }

    return {
        register: register,
        build: build
    };

})();

Creating a whole registry system really wasn’t so bad. A little bit of recursion and a line of slightly tricky Javascript later, you have a registry and object factory all set. Let’s take a look at what our registration and instantiation code would look like now.

objectRegistry.register('ComponentFactory', ComponentFactory, []);
objectRegistry.register('Widgetizer', Widgetizer, []);
objectRegistry.register('Widget', Widget, ['ComponentFactory', 'Widgetizer']);

//You want a widget? You got a widget.
let myWidget = objectRegistry.build('Widget');

A little recap, dependency injection is nothing more than providing your object with instances of the dependencies it needs. If your system is simple and your dependency tree is flat, you can easily get away with a factory to manage your dependency needs. If your system is more complex, you may need to create a registry to handle your components and the dependency tree. For better or worse, your dependencies are going to be complicated at that point anyway so avoid the pain.

The moral of this story is simple: never manage your dependencies alongside the code that depends on them. Use factories to make your life better. If you take care of your dependencies, they will take care of you, so manage them wisely and profit.

Mainstay Monday: Managing Type Coercion

Jul 13, 2015

If you are new to programming and, especially, to a dynamically typed language like Javascript, you are likely not familiar with type coercion. The best way to think about type coercion is this: when two values of different types interact, they are normalized to a single type for the sake of comparison or whatever other operation is being performed. The important thing to understand about type coercion is that the language interpreter or just-in-time compiler (JIT) will guess which type you meant to work with and do the “right thing” with it.

Let’s take a look at what type coercion looks like in Javascript.

//Equality
5 == '5'; //true -- '5' is converted to a number
'5' == 5; //also true -- again, the string is converted to a number
true == '1'; //true -- both sides are converted to the number 1
true == 1; //true -- true is converted to the number 1

true == 'foo'; //false -- 'foo' converts to NaN, which is equal to nothing
false == 'foo'; //false -- as you can see, 'foo' isn't true or false

//Concatenation (or not)
console.log("The answer is " + 55); //55 is converted to a string and concatenated
1 + '2'; //'12' -- 1 is converted to a string and the result is a string
5 - '1'; //4 -- '1' is converted to a number

//Inequality
1 < '2'; //true -- '2' is converted to a number
'3' > 2; //true -- '3' is converted to a number
1 < 'foo'; //false
1 > 'foo'; //false

//Arithmetic
5 + 2; //7 -- although under the covers this is actually 7.0
10 + 8.123; //18.123 -- no coercion needed; all Javascript numbers are floating point
0x0F + 3; //18 -- the hexadecimal literal is just the number 15, so no coercion here either

//Other oddities
1 == true; //true, but
-1 == true; //false -- only 1 is loosely equal to true
null == false; //false -- null is only loosely equal to undefined and itself
'abc'.indexOf('e'); //-1, NOT null or false, so
'abc'.indexOf('e') ? 'found' : 'missing'; //'found' -- -1 is truthy, but what we really wanted was
'abc'.indexOf('e') >= 0; //false

As you can see, there isn’t a particularly hard and fast rule that one type is always converted to another; the conversion depends on the operation. The most common cases involve strings and numbers: for equality and inequality comparisons, strings and booleans are coerced to numbers, as long as they convert cleanly, while for concatenation with +, numbers are coerced to strings.

Type coercion is intended to be a convenience feature in Javascript so new programmers don’t need to understand value types deeply enough to perform typecasting. Unfortunately the confusion that comes with type coercion mitigates any benefit even the beginner programmer would gain from it, since it is relatively unpredictable.

Managing Expectations

Since type coercion is unpredictable, we should manage values ourselves and be as explicit as possible so we always get the results we expect. We don’t want addition to concatenate our operands because one of them is accidentally a string. We don’t want boolean values coerced to numbers, or numbers to booleans, since the only number that evaluates to false is 0; many functions signal failure with values like -1, which coercion would happily treat as true.

We, basically, don’t want the language to guess what we mean because it is likely to guess wrong. Let’s have a look at some of the things we can do to help improve the reliability of our applications and manage the type coercion that happens with our values throughout our source code.

First, let’s take a look at triple-equals (===). Performing a value conversion at comparison time, as double-equals does, has two pitfalls. The lesser of the two is that it’s slow; not slow in the way that an O(n^4) algorithm is slow, but slower than comparing values directly without conversion. The greater pitfall is the unpredictability we saw above. Triple-equals avoids both by never coercing at all. Let’s take a look:

1 == '1'; //true -- We saw this above.
true == 'true'; //false -- the string converts to NaN, not to the boolean true
-1 == true; //false
1 == true; //true -- true and 1 cross-convert to be equivalent

//Let's normalize.
1 === '1'; //false -- a number is never equal to a string
true === 'true'; //false -- a boolean is never equal to a string
-1 === true; //false -- a boolean is never equal to a number
1 === true; //false -- same as above

We can see how, by eliminating coercion from our comparison operations, we get a normalized, type-safe experience while programming. This provides guarantees we otherwise could never get. If the code changes in a potentially unstable way, issues start to emerge immediately, giving us insight into what is happening.
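
Here’s a contrived sketch of that early warning in action. Suppose a lookup function starts receiving its id from a text input, so the id arrives as a string:

function findRecord(records, id){
    //With ==, '7' would silently match 7 and hide the type change
    //With ===, the mismatch surfaces immediately
    return records.filter(function(record){
        return record.id === id;
    });
}

findRecord([{ id: 7 }], '7'); //[] -- the empty result tells us our input needs attention
findRecord([{ id: 7 }], Number('7')); //[{ id: 7 }]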

Let’s have a look at another method for handling type differences: typecasting. Typecasting is something that is very common in strongly typed languages, but is often overlooked in dynamically typed languages like Javascript because it is not immediately obvious why it could be valuable. Let’s compare some of the common ways people manage type differences and how typecasting can help normalize your code and eliminate hacks to get around a common problem.

//Numbers
1*'4' + 1; //5 -- This feels like a hack
+'4' + 1; //5 -- This looks like a mistake

//Typecasting to numbers instead
Number('4') + 1; //5

//Booleans
!'foo'; //false -- strange feeling, but it works
!!'foo'; //true -- Gross. It's hacky and I'm just as guilty as anyone of doing this
!!''; //false -- What does not-not empty string even really mean?

//Typecasting to booleans instead
Boolean('foo'); //true
!Boolean('foo'); //false
Boolean(''); //false

//Strings; Yes, I have seen this example in the wild
'' + 1234; //'1234' -- This relies on that weird coercion we were talking about

//Typecasting to strings instead
String(1234); //'1234'

Typecasting might take a few more keystrokes than one of the hack methods, but it does two things for us that the other methods don’t. First, typecasting is declarative of intent. By using Boolean or Number, you know precisely what you should be expecting, and you get a normalized, safe value back. Second, typecasting gives you a type-safe expression every time, which means that every comparison, computation, concatenation and so on will produce a predictable result. Predictability is stability.

Before we finish up, let’s take a look at a few other built-in functions that are useful for handling common conversion cases, specifically for producing number and string outputs: parseFloat, parseInt and toString.

ParseFloat takes a single value parameter. ParseInt takes the value to parse and a radix; the radix is technically optional, but you should always provide it. A radix is the base the original number is in, which is important for handling things like binary, octal and hexadecimal strings. ToString is a function that exists on the prototype of just about every object in the Javascript ecosystem. Let’s take a look at what these look like in practice:

parseFloat('123.45'); //123.45
parseFloat('0xFF'); //0 -- x and F are not valid numbers in decimal floating point
parseFloat('0107'); //107 -- Octal string is resolved as a decimal

parseInt('1234', 10); //1234 -- base 10 numbering; the most common output
parseInt('0xFF', 16); //255 -- Hexadecimal string
parseInt('0107', 8); //71 -- Octal string
parseInt('101', 2); //5 -- Binary string

['a', 'b', 'c'].toString(); //'a,b,c'
(1234).toString(); //'1234' -- the parentheses keep the dot from being parsed as a decimal point

What is happening in Javascript is this: language features like type coercion were introduced to make the language friendly for people who might not be strong programmers, or may not be programmers at all. Now that Javascript has taken hold as the language of choice for many different applications and we are solving real problems with it, this kind of low-entry-barrier behavior is no longer preferable.

Like many other high-level, application type programming languages, Javascript has means to handle types with grace and stability. The concept of a type-safe comparison, i.e. triple-equals (===), gives us type guarantees for a variety of conditional cases. Typecasting allows us to explicitly declare the manner in which we intend to use a value, affording us stability when operating with unexpected type variances. Finally, built-in conversion functions and methods allow us to convert a value, store it and use it in a predictable way. This conversion gives us guarantees around the type of a variable as we develop.

The important take-away here is using type coercion is, at best, an unstable way to write programs which could introduce bugs that are difficult to identify and, at worst, a hack that makes code obscure and difficult to maintain. Instead of using type coercion, prefer stable, predictable methods for guaranteeing type and maintain type safety in your programs.


Eliminating Switch Statements with Hashmaps

Jul 8, 2015

It has been a really, really long time since I created a switch statement. I’m not saying there is no place for switch statements in programming, I’m just saying, I haven’t had a reason to use one in a long time. Even though I haven’t written a switch in a long time, I have seen them popping up in code examples at work, online and other places a lot lately.

After seeing several different uses, I started asking “what is the programmer really trying to say with these?” Most of the examples I have seen look like the following:

function sendError(message){
    notification.error(message);
}

function doStuff(){
    //Code doing some stuff that might have an error

    if(errorCode !== undefined){
        switch(errorCode){
            case 123:
                sendError('some error message');
                break;
            case 234:
                sendError('some other error message');
                break;
            // more cases here
            // ...
            // finally
            default:
                sendError('An unexpected error occurred');
                break;
        }
    }
}

This has a very particular code smell that I haven’t encountered a name for yet. I’m going to call it conditional obsession. In this particular case, the programmer has opted for conditional logic to emulate a well-known and commonly used data structure. Reducing this kind of conditional overhead is akin to using a stack to eliminate recursion.

Switch statements are intended to be a way to simplify multiple conditionals in a more readable way. Since this code is not really, actually handling a set of conditionals, the switch statement has become little more than an extravagant replacement for a hashmap.

For those of you in Javascript land who aren’t familiar with hashmaps, they are a very close relative to the object literal we have all come to know and love. They are so close, in fact, that you can substitute an object literal in for a hashmap at any point in order to maintain an idiomatic look and feel to your code.

Let’s take a look at what a data structure containing our error messages would look like:

var errors = {
    123: 'some error message',
    234: 'some other error message',
    345: 'an error message from some other place in the local code',
    // Just add your error message here.
};

Hey, that makes a lot more sense to me. I can look at this and, at a glance, immediately tell you what our hashmap contains and what the relation means. This, of course, still doesn’t satisfy one thing that a switch statement can do: default behaviors.

Fortunately, we can build a quick, painless mechanism to handle default values and keep all of the readability we have started here.

function getErrorMessage(errorCode){
    let message = errors[errorCode];
    return message !== undefined ? message : 'An unexpected error occurred.';
}

Now we have reduced our switch statement down to what we really meant to say: find my error message in this set of keys; if a message can’t be found, then provide a default value instead. This leaves us with a single data structure and one conditional that handles the case we were really interested in: when the error code is unknown.

We will need to make one more modification to our original code to really clean it up and give us the clarity we are looking for:

function sendError(errorCode){
    let message = getErrorMessage(errorCode);
    notification.error(message);
}

Now sendError doesn’t require every function to perform some preprocessing to capture the error message it needs to send. This reduces the complexity of our code every place an error code switch statement might have existed and allows us to centralize our error messaging and let our core functionality do what it is intended to do.

Here’s our final, refactored code:

var errors = {
    123: 'some error message',
    234: 'some other error message',
    345: 'an error message from some other place in the local code',
    // Just add your error message here.
};

function getErrorMessage(errorCode){
    let message = errors[errorCode];
    return message !== undefined ? message : 'An unexpected error occurred.';
}

function sendError(errorCode){
    let message = getErrorMessage(errorCode);
    notification.error(message);
}

function doStuff(){
    //Code doing some stuff that might have an error

    if(errorCode !== undefined){
        sendError(errorCode);
    }
}

Depending on the size and complexity of your code, this refactoring provides the perfect opportunity to pull all of your error codes out into a centralized configuration file and provide an error service that captures an error code, sends it up through the stack and keeps error messaging out of your core code altogether.
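
As a rough sketch of that idea, and assuming a hypothetical errorMessages configuration module plus the same notification object from above, an error service might look something like this:

//errorMessages.js -- centralized configuration; add new codes here
var errorMessages = {
    123: 'some error message',
    234: 'some other error message',
    defaultMessage: 'An unexpected error occurred.'
};

//errorService.js -- one place in the codebase that knows how errors are reported
var errorService = (function(){
    function getMessage(errorCode){
        var message = errorMessages[errorCode];
        return message !== undefined ? message : errorMessages.defaultMessage;
    }

    function send(errorCode){
        notification.error(getMessage(errorCode));
    }

    return {
        send: send
    };
})();

//Anywhere up the stack: errorService.send(errorCode);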

Switch statements, along with other conditional statements, should be used when an action should be taken only when the condition is satisfied. When conditionals are used to replicate core language data structures, it is often preferable to fall back to the core data structure and reduce the complexity of your code. Hashmaps are faster and more intuitive than a switch statement will ever be, so think about your data, refactor your code, then take a couple minutes to marvel at how your code will say what you really meant to say.

Mainstay Monday: Contextual Scope

Jul 6, 2015

Last week we kicked off a discussion of scope in source code. We talked about lexical scope and how that impacts the way variables are accessed. There is another element of scoping called contextual binding, which is what gives people the most trouble.

Contextual binding is the scoping of variables based on the execution context of a particular function at the time of execution. This is least visible when dealing with the functional aspects of Javascript and most visible when interacting with objects. Let’s take a look at a little bit of Java to start.

class Thingy{

    protected String someVar;

    public Thingy(String aVar){
        //I am using this for clarity. This is not idiomatic Java.
        this.someVar = aVar;
    }

    public void printVar(){
        System.out.println(this.someVar);
    }

}

//Begin ceremonial main class
class Main{

    public static void main(String[] args){
        Thingy myInstance = new Thingy("Hello!");
        myInstance.printVar(); //Hello!
    }

}

Although there are a few things here and there that might not seem familiar to the average Javascript developer, I’m sure everyone can largely follow along with what is happening here. We’re creating an object that takes a string in its constructor and then prints it to System.out when printVar is called.

Let’s take a look at the equivalent code in Javascript. I’m going to keep this old-school so we can talk about what is happening here without trying to remember all that new-fangled ES6 syntax. (I originally wrote this with a class)

function Thingy(aVar){
    this.someVar = aVar;
}

Thingy.prototype.printVar = function printVar(){
    console.log(this.someVar);
};

var myInstance = new Thingy('Hello!');
myInstance.printVar(); //Hello!

So far, no surprises. Handy thing, that. We did essentially the same thing: we created an object Thingy, instantiated it and then called myInstance.printVar. Everything worked out just as we expected it. Suppose, on the other hand, we were to hand our function off as a delegate to another function or object. Let’s take a look at what that produces:

function AnotherThingy(delegate){
    this.delegate = delegate;
}

AnotherThingy.prototype.doStuff = function doStuff(){
    this.delegate();
};

var myOtherInstance = new AnotherThingy(myInstance.printVar);
myOtherInstance.doStuff(); //logs undefined

I’m sorry, what?

We defined printVar inside Thingy and pointed it at this.someVar. It seems like it shouldn’t print undefined when we call it. This is a product of contextual binding. Although the original context inside our object Thingy provides a value for someVar, once the function is passed to another function as a delegate, the context changes. This NEW contextual binding doesn’t provide the same variables as the original, so this.someVar doesn’t mean what it once did.

This behavior is really confusing for people who are new to Javascript. They expect, much as in Java, that the original object context stays bound up with the functions defined in that context. What we can’t do in Java, however, is break a function away from its object context and produce a delegate the way we are doing here.

Fortunately, Javascript has a way to provide some guarantees! Don’t fret, young padawan, we have the bind function. Bind is a function that is defined on Function.prototype, and allows us to guarantee that a function will execute within a specific context.

Here’s what using bind does for us:

var myLastInstance = new AnotherThingy(myInstance.printVar.bind(myInstance));
myLastInstance.doStuff(); //Hello!

Hooray!

With simple examples like this it’s easy to think that all contextual binding is obvious and simple. I wish it were true. Contextual binding, however, can get a little tricky as functions start getting passed around and you start editing not only your own code, but others’ code too. The important thing to see is that contextual binding is a good place to look when you start coming up against disappearing variables and suddenly undefined values.
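
As a quick sketch of how that sneaks in, here is the same printVar from above handed off as an ordinary callback:

setTimeout(myInstance.printVar, 0); //logs undefined -- the context was lost again
setTimeout(myInstance.printVar.bind(myInstance), 0); //logs 'Hello!'

//Binding once and handing around the bound copy is a common way to lock the context down
var boundPrintVar = myInstance.printVar.bind(myInstance);
boundPrintVar(); //Hello!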

By combining lexical scoping and contextual binding, you can get your variable management under control and start writing safe, stable, internally consistent code. You’ll impress your coworkers, be better at sports, your teeth will be whiter and your car will get an extra 5 miles to the gallon. Well, your code will be more stable, so there is that. Watch your scope and context and your code will thank you.


Markdown: Content Isn't Just For Web

Jul 1, 2015

A couple of my friends and I have the same conversation once or twice a month: How do you deal with content that could be displayed in any number of different devices?

I know, this sounds like chilling lunchtime conversation, but this is what happens when you get a group of programmers together over lunch on the regular. Nonetheless, there is value in this discussion. We don’t all work on the web and we all have to deal with content from the same source.

But, HTML is a known spec.

True, however, the next time you talk to a mobile developer, suggest to them that they process your HTML (and CSS and Javascript and other garbage text) and display it as a part of their native environment. After they laugh long and hearty at you, they are likely to tell you it will never happen.

A friend of mine wrote about the general nature of display agnostic content, and concludes that with the current state of technologies, Markdown is likely the best option for safe cross-platform content. I agree that this is likely true.

First, Markdown is easy to produce. No special editor is even necessary to create a Markdown document since the average person could learn all of the key features in a few minutes. Moreover, for technical users, some key players have adopted a specific dialect known as Github Flavored Markdown (GFM) and there is wide support for it, so converting to and from GFM has become a rather trivial task.

Second, Markdown does not allow for external documents to declare display properties. This means that the display management is left entirely up to the application that is rendering it. Since the user can’t do things like create CSS to make all of the text green and rendered with Comic Sans, the application level control is more sane and normalized. Normalization is a good thing.

Third, Markdown is, at its core, just plain text. Plain text follows rules and standards that can be set outside the scope of your application or organization. If you store the text document in UTF-8 or UTF-16 format, it will always be the same. Everywhere. All of a sudden, you can reason about your document in all kinds of useful ways. You know precisely how big it is. You know exactly how fast it will render. You know, without question, what the format and markers will be.

That’s a really, REALLY big win.

I’m going to sneak a fourth point into my three-point list: Markdown is safe for just about any text format or serialization strategy you can throw at it, because it’s just text.

Markdown in JSON? It’s a string.
Markdown in SOAP? It’s a string.
Markdown in XML? IT’S A STRING.

There are plenty of people out there still using XML. (Don’t laugh, they are out there.) Imagine a world where CDATA just goes away. I mean, capturing XML, parsing it, dealing with CDATA protected strings, making sure everything didn’t get completely borked in the process is a pain in the tuchus. I’ve been there and trust me, it stinks.
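
As a tiny sketch of what “it’s just a string” buys you, here is some Markdown riding along in a JSON payload with no escaping ceremony beyond what JSON.stringify already does:

let payload = {
    title: 'My Post',
    body: '**This** is some _markdown_ content.'
};

let serialized = JSON.stringify(payload);
let restored = JSON.parse(serialized);

console.log(restored.body); //**This** is some _markdown_ content.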

Of course, this leads us to the inevitable discussion of how we process Markdown. If you are not on the web and you’re relying on any number of different languages to parse and manage Markdown, use Hoedown. Yes, it’s called Hoedown, seriously. Hoedown is a standalone Markdown parser built in C with no library dependencies.

It is likely, though, that you are using web technologies to process your Markdown (or you wouldn’t be reading a blog by a JS developer), so I have a special gift for you too: Marked. Marked takes Markdown strings and turns them into standard HTML, and it’s easy. Here’s what it looks like when you use Marked:

let myMarkdown = '*This* is some __markdown__.',
    output = marked(myMarkdown);

console.log(output); // <p><em>This</em> is some <strong>markdown</strong>.</p>

This is great if you already have Markdown and you just need to display it on the web, but what about the output from your favorite WYSIWYG editor? As it turns out, there is a library for that too. To-markdown is a script that will take whatever garbage-formatted HTML comes out of the back end of your HTML editor and turn it into crystal clear Markdown. Here’s what it looks like:

let myHTML = "<p><em>This</em> is some <strong>markdown</strong>.</p>",
    output = toMarkdown(myHTML); //It's so much like Marked it hurts

console.log(output); // _This_ is some **markdown**.

To sum up, if you are working in a multi-platform environment, which is really REALLY common, make friends with your mobile and desktop developers and provide them platform-agnostic content in the form of Markdown. It’s easy to work with, it’s popular, it’s plain text and it’s easy to serialize.

With the solid support of two well-vetted libraries like Marked and To-markdown, there is practically no barrier to entry, so stop saving HTML to the database, and make your content easy to work with. If you drop the conversion step into the standard content flow in your app, management will just look around one day and notice that everything is a little better and they won’t know why. Who can argue with ‘better,’ really?


  • Web Designers Rejoice: There is Still Room

    I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

  • Anticipating User Action

    Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

  • Anticipating User Falloff

    As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

  • User Frustration Tolerance on the Web

    I have been working for quite a while to devise a method for assessing web sites and the ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second I want to assess the ability for the user to complete a business goal while completing their own goals.

  • Google Geocoding with CakePHP

    Google has some pretty neat toys for developers, and CakePHP is a friendly, well-supported framework for quickly building applications. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

  • Small Inconveniences Matter

    Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

  • Know Thy Customer

    I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

  • When SEO Goes Bad

    My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

  • Balance is Everything

    Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

  • Coding Transparency: Development from Design Comps

    Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

  • Usabilibloat or Websites Gone Wild

    It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

  • Thinking in Pieces: Modularity and Problem Solving

    I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.

  • Almost Pretty: URL Rewriting and Guessability

    Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

  • Content: It's All About Objects

    When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

  • It's a Fidelity Thing: Stakeholders and Wireframes

    This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

  • Developing for Delivery: Separating UI from Business

    With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

  • I Didn't Expect THAT to Happen

    How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

  • Degrading Behavior: Graceful Integration

    There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

  • Website Overhaul 12-Step Program

    Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

  • Pretend that they're Users

    Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

  • User Experience Means Everyone

    I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

  • Predictive User Self-Selection

    Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

  • Mapping the Course: XML Sitemaps

    I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

  • The Browser Clipping Point

    Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:

  • Creativity Kills

    People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

  • Reactionary Navigation: The Sins of the Broad and Shallow

    When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhance findability.

  • OOC: Object Oriented Content

    Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.

  • Party in the Front, Business in the Back

    Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.

  • The Selection Correction

    User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

  • Ah, Simplicity

    Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

  • It's Called SEO and You Should Try Some

    It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

  • Information and the state of the web

    I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

  • Browser Wars

    It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

  • Web Scripting and you

    If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

  • Occam's Razor

    I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?

  • Inflexible XML data structures

    Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

  • Accessibility and graceful degradation

    Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

  • Less is less, more is more. You do the math.

    By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

  • Note to self, scope is important.

    Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of  unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed would be the gaming phrase.  The one thing that I would like to note, as it was my big issue last evening, is scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok, if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
