# Math for Programmers: Union and Intersection

Last week we talked about sets and how they relate to arrays. This week we will take a look at how to apply two common mathematical operations to arrays to produce new, refined sets of data to work with.

Two of the most common and well-known operations we can perform on sets are union and intersection. The union operation combines two sets and creates a new single set containing the elements of both of the original sets. Intersection is also a combinatorial operation, but instead of combining all elements, it returns a set containing only the elements shared by both sets.

### Uniqueness

Before we can address union and intersection, we have to deal with the state of uniqueness. Last week we looked at how an array can be converted into a set by viewing each index and value as a vector. Though this is useful for seeing the relation between mathematical sets and arrays in programming, it is not quite so useful when trying to actually accomplish set operations.

The biggest issue we encounter when looking at an array is that it is more closely related to a vector in nature and behavior. If we discard the importance of array ordering, it becomes a little more set-like. Let’s take a look at what this means.

```
// ≁ -- mathematically dissimilar
// ~  -- mathematically similar

var myVector = [1, 3, 2, 5, 7, 1, 1, 2];
myVector ≁ {1, 2, 3, 5, 7}; // This is true since a vector is ordered and requires all elements

var myArray = [1, 3, 2, 5, 7];
myArray ~ {1, 2, 3, 5, 7}; // true because our array contains only unique elements
```

Our second array closely matches our needs for a set, so it would be ideal to have a function which takes an array of values and returns an array with all duplicate values removed. We can annotate this function as (array) -> array.

Although this function has been implemented in several libraries, it is easy enough to create that we will just build it here. This not only gives us insight into how a “unique” function could be built, but it also gives us a vanilla implementation of our functionality, so as we build on top of our behaviors, we know where we started and how we arrived at our current place.

```javascript
function addToMap (map, value){
    map[value] = true;
    return map;
}

function buildSetMap (list){
    return list.reduce(addToMap, {});
}

function unique (list){
    return Object.keys(buildSetMap(list));
}
```

Now we have a clear way to take any array and create an array of unique values in linear, or O(n), time. This will become important as we move forward, since we want to ensure we don't introduce too much overhead. As we introduce new functions on top of unique, it would be easy to nest loop over loop and create slow O(n²) functions, which can be disastrous when we rely on these functions later for abstracted behavior.
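As a quick sanity check, here is the whole pipeline run end to end (re-listed so the snippet stands on its own). One detail worth noting: Object.keys always returns strings, so numeric inputs come back as numeric strings.

```javascript
function addToMap (map, value){
    map[value] = true;
    return map;
}

function buildSetMap (list){
    return list.reduce(addToMap, {});
}

function unique (list){
    return Object.keys(buildSetMap(list));
}

console.log(unique([1, 3, 2, 5, 7, 1, 1, 2])); // ['1', '2', '3', '5', '7']
```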

### Union

To really talk about the union operation it can be quite helpful to take a look at what a union of sets might look like. In words, union is an operation which takes two sets and creates a new set containing all of their members, uniquely. This means the union of {1, 2, 3} and {2, 3, 4} would be {1, 2, 3, 4}. Pictured as a Venn diagram, the union is the entire area covered by both circles.

For small sets of values, it is pretty easy to perform a union of all values, but as the sets grow, it becomes much more difficult. Beyond this, Javascript contains neither a unique function, i.e. the function we built above, nor a union function, so we have to build this behavior ourselves. This means we have to think like a mathematical operator to create our function. What we really need is a function which accepts two sets and maps them to a new set containing the union of all elements; in other words, an operation taking the pair of sets (A, B) to the single set A ⋃ B.

This mapping demonstrates one of the core ideas behind functional programming as well as giving us a goal to work toward. Ultimately, if we had a function called union which we could use to combine our sets in a predictable way, we, as application developers, would not need to concern ourselves with the inner workings. More importantly, if we understand, at a higher level of abstraction, what union should be doing, we will be able to digest almost immediately what our function should take as arguments and what it will produce. Our union function can be annotated as (array, array) -> array. Let's look at the implementation.

```javascript
function union (lista, listb) {
    return unique(lista.concat(listb));
}
```

With our unique function already constructed, this is a pretty trivial function to implement. There is, of course, an item of interest here: union is almost done for us by the concat function. Concat makes the same assumption our original exploration of converting an array to a set did: arrays are sets of vectors, so a concatenation introduces two sets of vectors into a new set, reassigning the indices in each vector so they map into a new, single set.

This concatenation behavior can be quite useful, but it is not a union operation. In order to perform a proper union of the values in each array we will need to ensure all values of the returned array are actually unique.  This means we need to execute a uniqueness operation on the resulting set to get our array which is similar to a set.  I.e. if we have an array representing set A, [A], and an array representing set B, [B], then union([A], [B]) ~ A ⋃ B.
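To see the difference concretely, here is concat alone next to the full union, with the unique pipeline re-listed so the snippet runs on its own:

```javascript
function addToMap (map, value){ map[value] = true; return map; }
function buildSetMap (list){ return list.reduce(addToMap, {}); }
function unique (list){ return Object.keys(buildSetMap(list)); }

function union (lista, listb) {
    return unique(lista.concat(listb));
}

var a = [1, 2, 3];
var b = [2, 3, 4];

console.log(a.concat(b)); // [1, 2, 3, 2, 3, 4] -- concatenation keeps the duplicates
console.log(union(a, b)); // ['1', '2', '3', '4'] -- uniqueness turns it into a true union
```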

### Intersection

Much like the union operation, before we try to talk too deeply about the intersection operation, it would be helpful to get a high-level understanding of what intersection means. Intersection is an operation which takes two sets and creates a new set which contains only the shared elements of the original sets. This means the intersection of {1, 2, 3} and {3, 4, 5} is {3}. On a Venn diagram, the intersection is the overlapping region shared by both circles.

That overlapping region represents the intersection of sets A and B, which, from our first example, is a set containing only the value 3, or {3}. Much like the union operation, we can create a function intersect which takes two sets and returns a new set containing the intersection, and we can describe it the same way we did with union.

This shows us the close relation between the intersect and union functions. The annotation for our intersection function is, in fact, identical to our union function's: (array, array) -> array. This means they share the same contract and could be used on the same sets to produce a result which, incidentally, will match the contract for any function which takes a set of values as a list. Let's have a look at what the implementation of intersect looks like in Javascript.

```javascript
// keep only values from lista which also appear in listb's membership map
function addIfIntersect (map, accumulator, value){
    return map[value] ? accumulator.concat([value]) : accumulator;
}

function intersect (lista, listb){
    var mapb = buildSetMap(listb);
    return unique(lista).reduce(addIfIntersect.bind(null, mapb), []);
}
```

As expected, the difference between union and intersection is in the details. Where union performed the combination before we performed a unique operation, an intersection can only be taken once all of the values are already unique. This means intersection is slightly more computationally complex than union; however, in the large, intersection is still a linear operation. We know this by performing the following analysis:

- unique is O(n), as defined before
- reduce is O(n), by the nature of reducing a list
- buildSetMap is also O(n), as it was the defining characteristic of unique
- intersect is the sum of three O(n) operations, i.e. it performs roughly 3n operations, making it, also, O(n)

This algorithmic analysis is helpful in understanding the general characteristic of a function and how it will impact execution time in a larger system.  Since union and intersect are both O(n) functions, we can easily use them in a chained way, resulting in a new O(n) function. What this also tells us is, union and intersection are sufficiently performant for small sets of data and acceptable for medium sets.  If our sets get large enough we might have to start looking at ways to reduce the number of computations needed to complete the process, but that's another blog post.
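Putting the pieces together, the whole intersect pipeline can be exercised as below. Note that the reducer helper here, addIfIntersect, is a name of my own choosing; the important part is the shape of the computation.

```javascript
function addToMap (map, value){ map[value] = true; return map; }
function buildSetMap (list){ return list.reduce(addToMap, {}); }
function unique (list){ return Object.keys(buildSetMap(list)); }

// keep only values from lista which also appear in listb's membership map
function addIfIntersect (map, accumulator, value){
    return map[value] ? accumulator.concat([value]) : accumulator;
}

function intersect (lista, listb){
    var mapb = buildSetMap(listb);
    return unique(lista).reduce(addIfIntersect.bind(null, mapb), []);
}

console.log(intersect([1, 2, 3], [3, 4, 5])); // ['3']
```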

### Summary

We can actually use our union and intersect functions together to quickly perform complex mathematical behavior on even non-optimal arrays. Since these functions perform normalization on our sets of data, we can use rather poorly defined arrays and still get meaningful results. Let's take a quick look at a small example where we set A, B and C as poorly defined arrays and then perform (A ⋃ B) ⋂ C.

```javascript
var A = [1, 2, 4, 3, 7, 7, 8, 3],
    B = [5, 4, 7, 9, 4, 10],
    C = [1, 2, 3, 2, 4];

intersect(union(A, B), C); // ['1', '2', '3', '4'] -- the set {1, 2, 3, 4}; Object.keys hands the values back as strings
```

In this post we discussed performing union and intersection operations on arrays of data, as well as implementations for each and their performance characteristics.  By understanding these core ideas, it becomes easier to understand how data can be quickly and descriptively modified programmatically. This core understanding is useful both for working with arrays inside of your application as well as better understanding the way data is interrelated in database considerations. Now, go munge data and make it work better for you!


# Math for Programmers: Arrays, Objects and Sets

I’ve had conversations with programmers of varied backgrounds and experience levels. One thing which has come up in several conversations is math, and how much math a programmer needs to be effective. I have a formal background in math, so I tend to lean heavily on the side of more math is better. Other programmers argue that only select topics in math are important to make a professional programmer effective.

Arguably, for day to day programming needs, it is unlikely you will need to demonstrate a strong understanding of the proof of indefinite integrals over n-space; however, there are topics which come up often and would be useful for a programmer to understand. I decided that the first post in this series of math for programmers should cover something that every programmer has to think about at one time or another: sets.

If you are working with data coming from persistent storage like a database, sets should be your bread and butter. Most of your time will be spent thinking about how sets work together and how to combine them to capture a snapshot of the data you need. On the other hand, if you are working with data at another layer somewhere above data access, your interactions with sets are going to be a little more subtle. Arrays and maps are sets of data with added restrictions or rules, but they are, at their core, still sets in a very real way.

If we look at an array of integers, it’s not immediately obvious you are working with a set. You could, in theory, have a duplicate number in an array. Arrays also have the characteristic of being ordered. This means that each element will come out of an array in the same order each time. Let’s take a look at an array of integers.

```javascript
var myIntegers = [1, 2, 3, 4, 2, 5, 1, 1, 7];
```

Honestly, this array is most reminiscent of a vector in mathematics. This could easily describe a point in a nine-dimensional space, which is kind of hard to get a visual on. Nevertheless, we can do just a little bit of reorganization and turn this into a set which adheres to all the normal rules a given mathematical set has.

### Rules of a Set

All sets are unordered. This means two sets are equal if both sets contain the same members, regardless of the order. For instance, {1, 2, 3} and {3, 2, 1} are the same set since they each contain the members 1, 2 and 3 and ONLY those members. Regardless of the order chosen to represent the elements, the set is guaranteed to be unique given the elements it contains alone.
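A quick way to convince ourselves of this order-independence in code is to compare canonical forms. This sameSet helper is a sketch of my own, assuming the arrays already contain unique members:

```javascript
// sort a copy and join it, producing one canonical string per set
function asCanonicalForm (list){
    return list.slice().sort().join(',');
}

function sameSet (lista, listb){
    return asCanonicalForm(lista) === asCanonicalForm(listb);
}

console.log(sameSet([1, 2, 3], [3, 2, 1])); // true -- same members, different order
console.log(sameSet([1, 2, 3], [1, 2, 4])); // false -- different members
```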

Sets may not contain duplicate values. Each value in a set is uniquely represented, so {1, 1, 2, 3} would be correctly represented as {1, 2, 3}. This uniqueness makes sets well defined. Well defined simply means our sets are unambiguous, or any set can be constructed to be clearly defined and distinctly represented.

Sets may be constructed with individual values, like our set of integers, or with more complex structures like a set of sets or a set of vectors. This means {{1}, {2}, {3}} is not the same as {1, 2, 3} since the first set is a set of sets containing 1, 2 and 3 respectively, while the second set is a set containing the numbers 1, 2 and 3. Understandably, this is kind of abstract, so think about an array of arrays, versus an array containing integers. The relation is quite close.

### Thinking of Arrays as Sets

Let’s take another look at our original array. If we were to rewrite our array as an object literal instead, we can start to see the relation between a set and an array. Let’s see what our object literal would look like.

```javascript
var myIntegerObject = {
    0: 1,
    1: 2,
    2: 3,
    3: 4,
    4: 2,
    5: 5,
    6: 1,
    7: 1,
    8: 7
};
```

Now we can see how each element in our array maintains uniqueness. Our object has a key, which represents the array index and a value, which is the original value in an array. This data formation makes creating a set trivial. We can describe each value in our array as a vector containing two values, <index, value>. By using vector notation, we can make our array conform exactly to the definition of a set. Our array can be rewritten as a set this way, {<0, 1>, <1, 2>, <2, 3>, <3, 4>, <4, 2>, <5, 5>, <6, 1>, <7, 1>, <8, 7>}.
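This translation from array to set-of-vectors form is mechanical enough to express in a line of code; map hands us exactly the <index, value> pairs we need:

```javascript
var myIntegers = [1, 2, 3, 4, 2, 5, 1, 1, 7];

// pair every value with its index to mirror the <index, value> vector notation
var asVectors = myIntegers.map(function (value, index) {
    return [index, value];
});

console.log(asVectors[0]); // [0, 1]
console.log(asVectors[4]); // [4, 2]
```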

This vector notation helps us to tie together two separate data structures into a single, unified mathematical form. By understanding this form, we can see how values could be easily stored and organized in memory to either be sequential, or fragmented throughout memory with pointers stored in a sequential data structure for faster access.

Being able to move between the array and object formations we use day to day and set notation gives us a lot of power. We can abstract away the physical computer and think less about the language constructs, solve data set problems in a language-agnostic way first, and then weave the solution back into our code, ideally simplifying the logic we use to organize our thoughts in a program.

This is a first slice cutting across programming and its relation to math. There are techniques which can be used to dissect problems and solve them directly without code as well as different programming paradigms which abstract away the nuts and bolts of loops, conditions and other detail-related behaviors in favor of a more general approach, simply declaring the operation to be done and then working with the result. We will explore these techniques and the math related to them in upcoming posts.

# Getting Started Writing Visual Studio Code Extensions: Action Handlers

I started writing a Visual Studio Code extension about two and a half weeks ago and, as is little surprise, writing an extension with new, uncharted functionality takes more knowledge than you can find in the basic tutorial for creating a hello world extension. As I learned, I thought I should write down what I learned so other people would have a little more illumination than I had.

If you haven’t read the Hello World tutorial, you should. It has a handy step-by-step recipe for creating a baseline extension. No, really, go read it. I’ll wait.

Great, now that you have Hello World set up, we’re ready to start really building something. So the first thing that I ran into was I wanted to change things a little and see if everything still worked. Text changes were obvious, so I decided I wanted to scrape something out of the editor and show that instead.

I made small changes… and broke the extension.

The first challenge I ran into is that these extensions give you almost no visibility into what you are doing. You can use the debugger, but, if you are like me, you probably have one screen to work on, so you will do a lot of flipping back and forth to spot the broken stuff. The best friend you have during this entire process is your debugger output. Log to it. A lot.

This is kind of like old school web development. If you want to see what you are doing, use console log. If you want to see what the data looks like you are tinkering with, use console log. When you really need to dig deep, hey, you’re already in an editor!

Anyway, like I said, I broke my extension right away. The first thing I did wrong was I messed up the action name. Don’t worry about what a disposable function is for now. We can cover that next time. The important issue is I mismatched my action name. Let’s take a look at a bit of the code in my main extension file.

```javascript
context.subscriptions.push(vscode.commands.registerCommand('cmstead.jsRefactor.wrapInCondition', function () {
    wrapInCondition(vscode.window.activeTextEditor);
}));
```

### Action handler names

Our action name is the identifier VS Code uses to find and run our action. It is shared between the extension file, where we set up our action behaviors, and two locations in our package file. I may have missed it, but I didn’t see anything in the documentation about lining up the name across these locations. Here’s what it looks like in the package file:

```json
"activationEvents": [
    "onCommand:cmstead.jsRefactor.wrapInCondition"
],
"commands": [
    {
        "command": "cmstead.jsRefactor.wrapInCondition",
        "title": "Wrap In Condition",
        "description": "Wrap code in a condition block"
    }
]
```

All of these separate lines need to match or when you try to test run your extension, your new action won’t work. This is surprisingly hard to debug. There is no unit test, scenario or any other process to properly check that all of the command name strings properly match. This is probably the first problem you are going to run into.

If you try to run a named action that isn’t fully declared, you will get an error. “No handler found for the command: ‘cmstead.jsRefactor.wrapInCondition’. Ensure there is an activation event defined, if you are an extension.”

The takeaway of today’s post is this: if you see this handler error, check that your action is declared in your extension Javascript, in activationEvents and in commands. If any of these identifiers are missing, your action will fail. By double checking your action declarations, you will have a smoother experience getting your extension up and running. Happy coding!

# Javascript Refactoring and Visual Studio Code

About a month ago, I started working at Hunter. Now, I have been pretty aware of refactoring, design patterns, good practices and common practices. I don’t always agree with what everyone else says or does, but I typically have a good reason to do it the way I do. Whatever I do, Hunter does more so. I have always been a notorious function extractor and deduplicator, but I have never seen or done more of it than in this last month.

C# has a bunch of really cool tools and toys, the likes of which Javascript developers have never known, until now. Over the last couple of weeks, I have been working on an extension for Visual Studio Code to help even the odds. I’m no full-time tool builder, so I won’t be matching the quality of JetBrains or the like, but I’m giving it my best go.

In future posts I will start covering some of the discoveries I have made while building this plugin, but this week is all showboat. I haven’t gotten my extension released on the Visual Studio Marketplace… yet. While that gets finished up, I do have everything together in a github repository.

Currently, I think the issues list is actually longer than the list of things JS Refactor (my extension) does, but what it does is kind of nifty. How many times do you discover you actually want to pull some code up and out of a function, into its own function? Probably a lot. The worst part of it all is all the goofing around you have to do with going to the top of the code, adding a function declaration, going to the bottom of the code and closing the function definition, making sure you matched all your braces and finally, when everything looks just right, you finally indent everything to the right place…

Nevermore!

JS Refactor has an action called “wrap in function.” Wrap in function will ask for a function name, wrap up your code, and indent everything using your preferred editor settings.

I KNOW, RIGHT? It’s awesome!

Seriously, though, wrap in function is nice except when it gets the job wrong. Sorry, it’s not perfect, yet, but I am working on it. Along with that, there are also a wrap in anonymous function and extract to function actions. These are a first go, so they still need some love, but they make things faster all the same.

Another part of this plugin, which generally works more reliably than the actions, is the snippets. Fortunately, the snippets rely on code written by the good folks on the Visual Studio Code team. The snippet functionality really shines through when you start writing lots of code. It’s like having a miniature code generator living in your editor.

Currently I have a handful of snippets available for use and they work pretty darn well. With strict declarations, functions and a couple of other things, I have actually noticed a significant increase in the speed of code generation, which gives me more time to spend just thinking about the problem I am most interested in solving.

I am not going to give a rundown of all the features of JS Refactor, instead I would encourage you to go play with it. Take a look at the code that drives the whole thing. Give me feedback. It’s part solving a problem and part learning how to even do the code analysis to make things work. I won’t promise to solve all of the issues quickly, but I will sure try to solve them eventually.

So, until next week, please take a look at JS Refactor and use it on your code. I think you’ll like it and I hope it will make your life easier. Next week we will start taking a look at building VS Code extensions and some of the stuff I wish someone had put in a blog to make the discovery process easier.

# Anonymous Functions: Extract and Name

It’s really great to see functional patterns become more accepted since they add a lot of really powerful tools to any programmer’s toolbox. Unfortunately, because functional programming was relegated primarily to the academic world for many years, there aren’t as many professional programmers who have developed a strong feel for good patterns and share them with more junior programmers. This is not to say there are none, but it is important to note that most programmers think of functional programming and say “it has map, filter and reduce; it’s functional.”

Though having those three higher-order functions does provide a functional flavor, it is more important that there are higher-order functions at all. With higher-order functions come the use of anonymous functions. Anonymous functions (also known as lambda functions) provide a great facility for expressing singleton behavior inline. This kind of expressiveness is great when the function is small and does something unexciting, like basic arithmetic or testing with a predicate expression. The problem is anonymous functions introduce cognitive load very quickly which makes them a liability when code gets long or complex.

Today I’d like to take a look at a common use of anonymous functions and how they can cause harm when used incorrectly. There are many times that anonymous functions are assigned directly to variables, which actually introduces one of the same issues we are going to deal with today, but I am not going to linger on that topic. Please consider this a more robust example of why even assigning anonymous functions to variables is dangerous.

### Jumbled Anonymous Functions - Our First Contestant

In Javascript, people use promises; it’s a fact of life. Kris Kowal’s Q library is a common library to see used in a variety of codebases and it works pretty well. Now, when someone writes an async function, it’s common to return the promise so it can be “then’ed” against with appropriate behavior. The then function takes two arguments, a resolve state function and a reject state function. These basically translate into a success and an error state. I’ve created a common promise scenario so we have something to refer to.

```javascript
function doAsyncStuff(condition) {
    myAsyncFn(condition).then(function (data) {
        var moreConditions = {
            foo: data.foo,
            bar: data.bar.baz
        };

        return anotherAsyncFn(moreConditions);
    }, function (error) {
        logger.log(error);
    }).then(function (data) {
        // capture the new state here (body elided in this example)
    }, function (error) {
        logger.log(error);
    });
}
```

### Extract Method

The very first thing I see here that is a problem is, we have two functions logging an error. This behavior is not DRY which is a code smell and violates a commonly held best practice. There is a known refactoring for this kind of redundancy called “extract method,” or “extract function.” Technically we already have a function in place, so we can simply lift it and name it. This will reduce our footprint and make this code cleaner already. Let’s see what this would look like with our logging behavior extracted.

```javascript
function logError (error){
    logger.log(error);
}

function doAsyncStuff(condition) {
    myAsyncFn(condition).then(function (data) {
        var moreConditions = {
            foo: data.foo,
            bar: data.bar.baz
        };

        return anotherAsyncFn(moreConditions);
    }, logError).then(function (data) {
        // capture the new state here (body elided in this example)
    }, logError);
}
```

With this simple extraction, we now know more about what our function does and our code has become more declarative. Although logError is a one-line function, the fact that it does exactly one thing makes it both easy to reason about and easy to test. We can inject a fake logger and capture the logging side effect, which gives us direct insight into what it does. Another benefit we get is that we can hoist this function further if need be, so we can reuse it across different modules or files.
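That testability claim is easy to demonstrate with a hand-rolled fake logger; no test framework is assumed here:

```javascript
// a fake logger which records every message instead of printing it
var captured = [];
var logger = {
    log: function (message) {
        captured.push(message);
    }
};

function logError (error){
    logger.log(error);
}

logError('request failed');
console.log(captured); // ['request failed']
```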

### Debugging Problems

Now we get to the real nitty gritty. We have two anonymous functions which do not explicitly tell us what they do. Instead, they just contain a bunch of code which reaches into an object. We run up against two different issues because of this. First, the lack of declarative code means the next person who looks at this, which might be you, will have to sit and stare at it to understand what is happening.

Another, bigger issue than immediate comprehension is debugging. Suppose we take this file and concatenate it with all of the other files in our project and then uglify the whole thing and deploy it out for use in someone’s browser. All of our code now lives on a single line and may not even have meaningful variable names anymore. Now, suppose one of the data objects comes back null. Our debugging error will contain something like “error at line 1:89726348976 cannot treat null as an object."

This is bad, bad news. Now we have an error which we can’t easily identify or triage. One of the calls we are making no longer does what we think it does and it’s causing our code to break… somewhere. Whoops! We can actually use the same pattern we used for our error logging to extract our methods and make sense of the madness. Let’s take a look at what our refactoring would look like.

```javascript
function logError (error) {
    logger.log(error);
}

function getChainedCondition(data) {
    var moreConditions = {
        foo: data.foo,
        bar: data.bar.baz
    };

    return anotherAsyncFn(moreConditions);
}

function captureNewState (data){
    // capture the new state here (body elided in this example)
}

function doAsyncStuff (condition){
    myAsyncFn(condition).then(getChainedCondition, logError)
                        .then(captureNewState, logError);
}
```

Now that we have lifted our last two functions out of our promise chain, everything makes a little more sense. Each of our behaviors is easy to reason about, we can test each function independently and all of our functions have a unique identifier in memory which saves us from the insidious debugger issue which can cost time and money.
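As a small illustration of that debugging point, a named function's name survives into the stack trace, giving us something to search for even in ugly output. This sketch assumes a V8-based runtime like Node, which includes function names in error stacks:

```javascript
function explode () {
    throw new Error('boom');
}

try {
    explode();
} catch (error) {
    // the function name appears in the trace, unlike an anonymous function
    console.log(error.stack.indexOf('explode') > -1); // true
}
```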

There are other places we could go from here with our code to make it more fault tolerant, but that’s outside of the scope of this article. Instead, when you look at your code, see if you can easily understand what is going on. Look at it like you’ve never seen it before. How many anonymous functions are you using? How many different steps are crammed into a single function?

When you see this kind of muddy programming, think back on our reduction to simpler functions, avoid complex anonymous functions and think “extract and name.”

• ### Web Designers Rejoice: There is Still Room

I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

• ### Anticipating User Action

Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

• ### Anticipating User Falloff

As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

• ### User Frustration Tolerance on the Web

I have been working for quite a while to devise a method for assessing web sites and the ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second I want to assess the ability for the user to complete a business goal while completing their own goals.

• ### Google Geocoding with CakePHP

Google has some pretty neat toys for developers and CakePHP is a pretty friendly framework to quickly build applications on which is well supported. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

• ### Small Inconveniences Matter

Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.

• ### Know Thy Customer

I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.

• ### When SEO Goes Bad

My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.

• ### Balance is Everything

Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.

• ### Coding Transparency: Development from Design Comps

Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.

• ### Usabilibloat or Websites Gone Wild

It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.

• ### Thinking in Pieces: Modularity and Problem Solving

I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits of working with objectified content are the ease of updating and the breadth and depth of content that can be added to the site.

• ### Almost Pretty: URL Rewriting and Guessability

Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:

• ### Content: It's All About Objects

When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.

• ### It's a Fidelity Thing: Stakeholders and Wireframes

This morning I read a post about wireframes and when they are appropriate. Though I agree that audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.

• ### Developing for Delivery: Separating UI from Business

With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.

• ### I Didn't Expect THAT to Happen

How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG, you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.

• ### Degrading Behavior: Graceful Integration

There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.

• ### Website Overhaul 12-Step Program

Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.

• ### Pretend that they're Users

Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.

• ### User Experience Means Everyone

I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.

• ### Predictive User Self-Selection

Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.

• ### Mapping the Course: XML Sitemaps

I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.

• ### The Browser Clipping Point

Today, at the time of this writing, Google published a blog post stating they were dropping support for old browsers. They stated:

• ### Creativity Kills

People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.

When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhancing findability.

• ### OOC: Object Oriented Content

Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page dedicated to it, living autonomously on that page. But content, much like web pages, is not an island.

• ### Party in the Front, Business in the Back

Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.

• ### The Selection Correction

User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.

• ### Ah, Simplicity

Every time I wander the web I seem to find it more complicated than the last time I left it.  Considering this happens on a daily basis, the complexity appears to be growing monotonically.  It has been shown again and again that the attention span of people on the web is extremely short.  A good example of this is a post on Reputation Defender about the click-through rate on their search results.

• ### It's Called SEO and You Should Try Some

It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.

• ### Information and the state of the web

I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server.  I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness.  Either way, I only post when something strikes me.

• ### Browser Wars

It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:

• ### Web Scripting and you

If there is one thing that I feel can be best learned from programming for the internet it’s modularity.  Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess.  Welcome to the “I need this fix last week” school of code updating.  Honestly, that kind of thing happens to the best of us.

• ### Occam's Razor

I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?

• ### Inflexible XML data structures

Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.

• ### Accessibility and graceful degradation

Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use Firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.

• ### Less is less, more is more. You do the math.

By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.

• ### Note to self, scope is important.

Being that this was an issue just last evening, I thought I would share something that I have encountered when writing Javascript scripts.  First of all, let me state that Javascript syntax is extremely forgiving.  You can do all kinds of unorthodox declarations of variables as well as use variables in all kinds of strange ways.  You can take a variable, store a string in it, then a number, then an object and then back again.  Weakly typed is the operative phrase.  The one thing that I would like to note, as it was my big issue last evening, is the scope of your variables.  So long as you are careful about defining the scope of any given variable then you are ok; if not, you could have a problem just like I did.  So, let’s start with scope and how it works.
