Michael Feathers defines legacy code as code without tests. This means code written years ago, with a good test harness, is not legacy code. It also means the code written yesterday, without tests, IS legacy code.
We don’t need to dig very deep into this to understand what is happening here. Code which has tests is going to be easier on the nerves to change than code without. If we dig a little deeper, code with descriptive tests actually documents context and meaning for the code under test.
Even as TDD continues to gain popularity, it’s very common to encounter legacy code. A common response is to want to remove the legacy code and replace it with something new. Generally speaking, this is unwise.
There are two scenarios which arise around legacy code: adding new features and updating old code. Trying to fit both of these topics into a single discussion is too much for my simple mind to attempt, so let’s talk about adding new features!
When you add features to a legacy codebase, there are three things you will want to keep in mind. I even have a fun little mnemonic for you: TIP.
- Test expectations first
- Integrate late
- Pure behaviors by default
We will examine these three ideas and how they make adding new features a more reasonable request. Mind you, legacy code is a tough problem so, this is a guide, but not an absolute. You will always need to use your best judgement to assess your particular situation.
So, let’s have a look at the TIP approach.
Test Expectations First
New features may be big or small, but either way, it is important that you get a good feel for the expectations stakeholders have around the feature you’ll be developing. The most effective approach to gathering this information is to have conversations. Lots of them. At the very least you should probably talk to people about the problem you are solving as much of the time as you are writing code, but that’s a different discussion.
When you and your team start approaching a story, the user story is the beginning of the conversation, not the end. Be ready to take lots of notes. Draw pictures. Identify the kinds of behaviors which are expected in the system. For an especially robust conversation, try using event storming to gather insights.
Once you have all of your expectations captured, you are ready to start iterating on your solution. It’s important to understand that your solution is almost guaranteed to require iterations. It is entirely likely that you did not capture all of the information available in the first conversation.
Before you write a single line of code, write a test. Capture some behavioral expectation in that test and decide how you want to interact with the code you’re getting ready to write.
This test should reflect an initial state of the system, the event that triggers your new behavior, and the outcome of that behavior. There are a few different ways to capture this, including the classics: Arrange/Act/Assert and Given/When/Then. Regardless of the test format you choose, be sure you test discrete expectations and cover the cases you are aware of. Use each new test as your North Star, guiding your development efforts.
You’ll note we spent a lot of time talking about communication in this section. The reason is that the only way to uncover expectations is to communicate with the people who hold information about the desired outcome. Often they will forget to share something you would consider critical. As a developer, it is crucial that you develop the skill of surfacing those important details, as they will be the signposts to building a well-aligned solution.
Integrate Late
I received some questions about this one, so I want to provide some direct insight up front: this integration is NOT with regard to the practice of continuous integration (CI). Keeping code outside of your CI pipeline can lead to tremendous challenges and pain.
Instead, it can be viewed as code which exists alongside the rest of the working software source, under test. The integration is simply the introduction of that code into the user-accessible flow of the application. Consider late integration, in this case, as an airgapped feature.
New features, regardless of where you are in the product lifecycle, go through a process of discovery, development, and iteration. All of this is best done outside the flow of the current system. Ideally, the current software is in production and providing value to users. We want to cause as little disruption as possible to the current software as we introduce new behaviors.
When working in a legacy system, the idea of working outside of the primary released software is even more important since there is a lot of risk associated with modifying existing code. Often, even small changes in a legacy system have wide-reaching consequences, so care is critical.
It’s common practice to introduce feature toggling into systems in order to cordon off new development work from the eyes of the user. This protects the user from accidentally stumbling into a feature which is incomplete and, possibly, unstable.
In a legacy system, the feature toggle is not a conditional behavior. Instead, we can view integration into the system as our feature toggle. By developing code which is not reachable, by any means, from the main application, we protect both our new development efforts and the user, who might otherwise stumble into something that could lead to an unrecoverable situation.
Integrating late, then, means waiting until you feel confident that the work you have done is at a point where, at the very least, stakeholders could interact with it and provide feedback. This airgap provides safety around the changes you make and enables the company to continue providing value through the software without breaking customer expectations.
Pure Behaviors by Default
We can look to functional programming and get a sense, immediately, of what a pure behavior might be. For our purposes, we can consider a pure behavior to be a behavior which performs a data-in, data-out action without interacting with external systems or maintaining state.
Business logic can be largely characterized by our definition. Business rules can be stated as “if x, then do y.” This means we can describe most of the business concerns through pure behaviors, and test them accordingly.
If we write the majority of our new feature as a collection of pure behaviors, we will be able to test most of it without even concerning ourselves with the inner workings of the rest of the system.
It is worth noting, by creating new, pure behaviors, we may end up duplicating code which exists elsewhere in the system. This is fine, since we can always refactor later. It’s important in the refactoring that we be mindful of keeping pure behaviors by default, since this is our path out again.
Since pure behaviors are far easier to test than behaviors embedded deep inside a legacy codebase, this approach creates a positive feedback loop: others now have an example of a testing methodology that is easy to follow and succeed with.
Folding it All Together
Although this approach is not a grand unifying solution for all legacy code woes, it provides a means to start delivering new value in a system which might otherwise be difficult, or even impossible, to work with.
If we look at the entire TIP methodology, we can see it bundles the classic TDD approach of test-first, a healthy practice of reducing coupling between program elements, and the descriptive quality of well-scoped pure functions. By working within the TIP structure, each part of the new feature development process builds upon the new, healthy codebase we created, meaning this is a self-reinforcing loop we can rely on.
Of course this method of approaching a legacy codebase continues to rely on good XP practice including sharing knowledge, refactoring, tests, automation, etc. Instead of viewing the TIP approach as a standalone practice, consider it a part of the process of integrating new, healthier practices into a codebase which makes change hard.
There are many people who would likely say I don’t handle some social situations with the utmost grace, and they wouldn’t be wrong. I’m human and my emotions can get the better of me more often than I would like. Nevertheless, there are interpersonal values I hold dear as I work with other people day-to-day.
During the time I worked with Jason Kerney and Willem Larsen, we sunk a fair amount of effort into discovering better ways to work together. Over time we ran many experiments and had varied results.
Side note – if you never have failed experiments, were they experiments? Perhaps this is another post.
Anyway, we had experiments that succeeded and experiments that failed. For software developers successes can feel hollow when they are not part of a visible feature, and failures can feel like a blow to the ego.
What kept our team afloat through the roller coaster ride were the values we held. Since we held our values more closely than any one success or failure, the successes felt less thankless and the failures felt more educational. Ultimately, we made the entire process about people, and this is what shaped the values which I still consider core to healthy interactions with my coworkers.
The values we landed on before we all went our separate ways (work-wise – we still talk) are as follows:
- People over code
- Tensions over problems over solutions
- Acceptance before alternatives
- Leadership through expertise
- Guide, don’t dictate
I shared a photo on Twitter of the note cards I keep, and people felt a little baffled by the meaning of the items on the list. This post is my way of expanding the pithy aphorisms and making them digestible by people who are outside of our immediate circle.
People Over Code
In any work I do with another developer, I always prefer to consider the person I am working with over the code we are writing. I am more willing to throw away code I have written than throw away the relationship I am building with my coworker. Their trust is worth more to me than any disputed chunk of code.
Tensions Over Problems Over Solutions
This one may seem a little more cryptic at the outset, but it actually follows pretty directly from the first value. When discussing an issue, prefer to discuss the feeling you have (tension) regarding the issue at hand. If you aren’t able to provide insight into the feeling, then an example of the problem you’re encountering may be appropriate. If the previous options are not satisfactory, then, and only then, offer a solution.
An example of surfacing a tension might look like the following:
“I understand, mechanically, what this code is doing, but I really don’t know why. How does this relate to the bigger task we are working on?”
There is no specific problem here. There is no statement that there is a specific “problem” with the code, there is just a feeling that some understanding is missing.
If surfacing a tension isn’t enough, sometimes a problem will help solidify the idea.
“As we look at this method, we have to scroll past the edge of the screen to see everything and I lose track of the task context. Is there something we could do to make this better?”
This is more specific about a particular problem. A concrete example can be great if it is the only issue you have, but if it is merely an example, people might try to solve this one problem without thinking about the bigger-picture tension you have.
Sometimes people need a nudge to get their problem solving brains in gear. This is where solutions come in. It’s important to note, solutions are a last resort, not a first go-to.
“This method is really long, I’d like to break it into smaller methods. What do you think?”
You’ll note this still leaves room for others to provide their thoughts, but it offers a solution which people can follow if nothing better comes along.
Acceptance Before Alternatives
Acceptance before alternatives comes directly from the improv idea of “yes, and.” The aim with acceptance before alternatives is you listen and accept that the person you are talking with is bringing something to the table. Once they have said their piece, start by accepting it.
“I see what you’re saying. That method is really long, and it’s hard to see on one screen.”
Then, if you disagree with the proposed idea, you can provide an alternative.
“I understand you might want to put in folding regions* for the code there, would you be open to extracting methods instead?”
By offering an alternative this way, people are more likely to feel heard and understood. The discussion becomes less about ego and more about the advantages and disadvantages of a given approach.
* Folding regions are a way to tell some editors that a chunk of code goes together and can be folded to allow other code to fit on the screen.
Leadership Through Expertise
This value is a bit more fiddly than the previous ones. The words mean, by dictionary definition, what you might assume, but taken together it’s important to understand that this is not meant as a club.
Leadership through expertise comes from the notion of a civil anarchist state. If we assume there is a power dynamic between people on a team, but there is no structured governing body, the team is guided exclusively through a shared desire to produce a software solution.
This means the person with the most expertise in a particular area can provide leadership through service to the rest of the team. No one person will be an expert in everything, and each person is likely to have more expertise in some topic than anyone else on the team.
If we view “leadership through expertise” through this lens, then leadership is a servant position and expertise is the means of service to the team. Rather than being an appeal to authority, it is a way to facilitate guidance.
Guide, Don’t Dictate
Guide, don’t dictate is the last value and it largely provides the way in which people can work together. As the “expert” title moves from person to person in the group, the way to smooth the handoff is through guidance first. If the expert is leading and leading is serving, then guidance is the tool they use to serve the team as others work toward completion of a task.
Each of these values, ultimately, is centered around the idea that I work with people and people are what drives the work. All the values do, for me, is provide me with a path toward better, healthier interaction with my coworkers. Hopefully you find these values useful and, perhaps, create some of your own.
I remember a time, long ago, when WordPress was a small, scrappy piece of software which was dedicated primarily to the publishing of blogs and basic site content. It wasn’t the most fully-featured CMS on the planet, but it worked well enough. It was fast enough and flexible enough, so people, including me, used it.
Over time I noticed my site getting slower and slower. I looked at the database and there was nothing strange happening there. I checked the installation and ensured nothing had broken. Ultimately, WordPress is just kind of slow these days. It’s fine for people who are not comfortable writing their own HTML or using the command line, but I just couldn’t deal with it anymore.
The most important realization I had was: my blog, like most blogs, is effectively static content. Nothing on the web loads faster than a single HTML file loaded from the filesystem. This means, my site is likely to see the greatest performance improvement from converting to an entirely static content system.
I knew about Jekyll and I had used it before, but I looked around before diving in. There are several different packages including Jekyll, Hugo, and Gatsby which were my final three.
Ultimately, I was concerned about Gatsby because it is all tied together with React so I rejected it because: 1. I detest Facebook and don’t want to even be peripherally associated with any technology they control; 2. I looked at examples and it looked like people were actually using single page applications to serve their blogs. I’m sure that not every site is served as a React app since there is a server-side rendering system for React, but the choice just didn’t instill a sense of confidence.
This left Hugo and Jekyll. Both are static site generators which use template engines. Both have a fairly strong following. Both seemed to be completely acceptable options. The one and only thing that I felt uncomfortable with regarding Hugo was their template system actually felt MASSIVE and kind of confusing. I don’t want the infrastructure of my blog to become a separate hobby.
Ultimately, I leaned into what I knew and stuck with Jekyll. So far I have had no regrets.
The Mechanics of Conversion
I’ll be totally honest here, I only had a couple of pages so I simply copy/pasted the page content, made small edits, and moved on with my day. If you have a lot of pages, this will not be a good solution since you could end up copying and pasting for days, or weeks.
The really interesting part was converting my blog posts, which numbered over 100 in total. As I started to review the posts and what needed to be done to convert them, I knew I needed a script of some sort. What I ended up creating was a ~100 line conversion script with a little external configuration JSON file:
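The gist itself isn’t embedded here, so as a rough sketch of the approach (not the original script), here is the core of what such a conversion looks like. The tag handling, slug rules, and front matter below are simplified stand-ins, assuming the standard WordPress RSS export format:

```javascript
// Sketch: convert one <item> from a WordPress RSS export into a Jekyll post
// with YAML front matter. A real script would also handle categories,
// encoding quirks, multiple items, and the external JSON configuration.
function convertItem(itemXml) {
  const pick = (tag) => {
    const pattern = new RegExp(`<${tag}>(?:<!\\[CDATA\\[)?([\\s\\S]*?)(?:\\]\\]>)?</${tag}>`);
    const match = itemXml.match(pattern);
    return match ? match[1].trim() : '';
  };

  const title = pick('title');
  const date = new Date(pick('pubDate')).toISOString().slice(0, 10);
  const slug = title.toLowerCase().replace(/[^a-z0-9]+/g, '-').replace(/^-|-$/g, '');

  return {
    filename: `_posts/${date}-${slug}.md`,
    content: `---\nlayout: post\ntitle: "${title}"\ndate: ${date}\n---\n\n${pick('content:encoded')}`
  };
}
```

Running a function like this over every `<item>` in the export and writing the results to disk is essentially the whole job.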
Using this script is as simple as editing your configuration file and then running the script. By default, the conversion script will write blog posts to the _posts directory. This means you should be able to run the script, rebuild your site and everything should be set to go.
Step 1: Export your WordPress blog posts
In your WordPress site, go to settings and choose the export option. WordPress has an export behavior which builds an RSS feed document by default. This is what we want.
Don’t try to be tricky with this or things could be harder later.
Step 2: Save the RSS XML export to your Jekyll project root
Just like the title says. Move the RSS XML document to your Jekyll project root. That’s all.
Step 3: Copy the script and configuration from the gist to your local Jekyll root
Copy the script into a file called
For the moment, let’s go ahead and make an assumption: automated tests (unit tests, integration tests, etc.) make code safer to write, update, and change. Even when tests break, the breakage says something about how the code was written and how it comes together at runtime. I will address this concern in another post at a later date. Nevertheless, I am going to rely on this assumption throughout this post, so if you disagree with it, you might be better served to drop out now.
Now that the grumpy anti-testers are gone, let’s talk, just you and I.
I don’t actually believe that require or import – from the freshly minted ES module system – are inherently bad; somewhere, someone needs to be in charge of loading stuff from the filesystem, after all. Nevertheless require and import tie us directly to the filesystem which makes our code brittle and tightly coupled. This coupling makes all of the following things harder to accomplish:
- Module Isolation
- Extracting Dependencies
- Moving Files
- Creating Test Doubles
- General Project Refactoring
Let’s take a look at an example which will probably make things clearer:
To get a sense of what we have to do to isolate this code, let’s talk about a very popular library for introducing test doubles into Node tests: Mockery. This package manipulates the node cache, inserting a module into the runtime to break dependencies for a module. Particularly worrisome is the fact that you must copy the path for your module dependencies into your test, tightening the noose and deeply seating this dependence on the actual filesystem.
When we try to test this, we either have to use Mockery to jam fakes into the node module cache or we actually have to interact directly with the external systems: the filesystem, and the external logging system. I would lean – and have leaned – toward using Mockery, but it leads us down another dangerous road: what happens if the dependencies change location? Now we are interacting with the live system whether we want to or not.
This actually happened on a project I was on. At one point all of our tests were real unit tests: i.e. they tested only the local unit we cared about, but something moved, a module changed and all of a sudden we were interacting with real databases and cloud services. Our tests slowed to a crawl and we noticed unexpected spikes on systems which should have been relatively low-load.
Mind you, this is not an indictment of test tooling. Mockery is great at what it does. Instead, the tool highlights the pitfalls built into the system. I offer an alternative question: is there a better tool we could build which breaks the localized dependence on the filesystem altogether?
It’s worthwhile to consider a couple design patterns which could lead us away from the filesystem and toward something which could fully decouple our code: Inversion of Control (of SOLID fame) and the Factory pattern.
Breaking it Down
To get a sense of how the factory pattern helps us, let’s isolate our modules and see what it looks like when we break all the pieces up.
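Here is a sketch of that breakup, using an invented config-loading module as the running example. The factory receives its dependencies as arguments, so no file paths appear anywhere inside it:

```javascript
// Factory pattern: dependencies arrive as arguments, so the module knows
// nothing about the filesystem or where its collaborators live.
function userConfigLoaderFactory(fileReader, logger) {
  return {
    loadUserConfig(configPath) {
      const config = JSON.parse(fileReader(configPath));
      logger.info(`Loaded config for ${config.appName}`);
      return config;
    }
  };
}

module.exports = userConfigLoaderFactory;
```

The behavior is unchanged, but the module can now be handed a real file reader in production and a two-line fake in tests.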
With this refactoring, some really nice things happen: our abstractions are cleaner, our code becomes more declarative, and all of the explicit module references simply disappear. When modules no longer need to be concerned with the filesystem, everything becomes much freer regarding moving files around and decoupling concerns. Of course, it’s unclear who is actually in charge of loading the files into memory…
Whether it be in your tests or in your production code, the ideal solution would be some sort of filesystem aware module which knows what name is associated with which module. The classic name for something like this is either a Dependency Injection (DI) system or an Inversion of Control (IoC) container.
My team has been using the Dject library to manage our dependencies. Dject abstracts away all of the filesystem concerns which allows us to write code exactly how we have it above. Here’s what the configuration would look like:
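I won’t reproduce our exact setup, and the precise Dject configuration schema may differ from what’s shown (treat the keys below as illustrative rather than authoritative); the shape of the idea is a single config describing where modules live, so application code never references paths:

```json
{
    "cwd": ".",
    "modulePaths": [
        "src/app",
        "src/shared"
    ],
    "allowNodeModules": true
}
```

With something like this in place, every module in the listed paths can be registered and resolved by name alone.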
Module Loading With Our Container
Now our main application file can load dependencies with a container and everything can be loosely referenced by name alone. If we only use our main module for loading core application modules, it allows us to isolate our entire application module structure!
Containers, Tests and A Better Life
Let’s have a look at what a test might look like using the same application modules. A few things will jump out. First, faking system modules becomes a trivial affair. Since everything is declared by name, we don’t have to worry about the module cache. In the same vein, any of our application internals are also easy to fake. Since we don’t have to worry about file paths and file-relative references, simply reorganizing our files doesn’t impact our tests which continue to be valid and useful. Lastly, our module entry point location is also managed externally, so we don’t have to run around updating tests if the module under test moves. Who really wants to test whether the node file module loading system works in their application tests?
Wrapping it All Up
With all of the overhead around filesystem management removed, it becomes much easier to think about what our code is doing in isolation. This means our application is far easier to manage and our tests are easier to write and maintain. Now, isn’t that really what we all want in the end?
For examples of full applications written using DI/IoC and Dject in specific, I encourage you to check out the code for JS Refactor (the application that birthed Dject in the first place) and Stubcontractor (a test helper for automatically generating fakes).
A couple weeks ago, I attended Agile Open Northwest. As with every other Agile Open conference I attend there were plenty of eye-opening experiences. One experience which surprised me was actually a session I facilitated. I went in with a concept of where it would land and I was dead, flat wrong.
I anticipated my session about creating joy for coders would be small and filled primarily with technical folk who wanted to figure out how they could make their experience incrementally better while they worked daily at their company. Instead I had a large crowd of people who performed a variety of duties at their company and they all wanted to get a perspective on how to make things more joyful. This, in itself, brought me a ton of joy.
Before I dive into the experiment I ran, I want to share a little background on how I arrived at the idea that there could be joy while coding. I know a lot of developers get a fair amount of joy out of creating something new. That first line of code in a fresh project can create a near euphoric experience for some developers. Later, as the code base ages, it can seem as though the joy has drained completely from the project and all that is left is drudgery and obligation to keep this project on life support.
I felt this very same thing with one of my own open source projects. I was shocked at the very notion I had started developing something which I very much wanted, only to find myself a couple years later feeling totally defeated every time I opened my editor to do a little bug fixing.
I decided I would make one last effort to combat this emotional sinkhole which used to be an exciting project for me. I went to work capturing information at the edges of my code, clarifying language which was used throughout and, ultimately, reformulating the domain language I would use to describe what I was doing. After some time and a little digital sweat, the project came back to life!
I realized I was actually more excited about my project NOW than I had been when I even started writing it in the first place. I actually was experiencing that same joy I used to experience when starting something afresh. Once I found this new dharma in code, I felt it only made sense I try to share it with my team at work. They loved the notion of joyfulness.
An Experiment In Joy
Having found this new, lasting sense of joy, I wanted to share it with the agile community. I didn’t have a deep plan, carefully constructed with layers of meaning and a precise process with which I could lead people to the promised land. I was just hoping someone would show up and be willing to share. With that in mind, here’s what my basic plan was:
- Drain the wound
- Make a Joy Wishlist
- Pave the Path
- Take Joy Back to the Team
I hoped this would be enough to get people talking about joy, what it meant, and how they could claim it in their team. It seemed to pay off. Just having a list of pithy step names isn’t really enough to make sense of how to run this experiment in another environment, however. Let’s have a look at each step and what it means.
Running the experiment
Before starting the experiment, the facilitator must come in with certain assumptions and be willing to defuse any emotional outbursts that might arise. It’s important to note that defusing is not simply squashing someone’s feelings, but accepting that they have a feeling and helping to reframe whatever might be leading them to feel that way.
In severe cases, the feelings that arise while experimenting might actually warrant deeper exploration before continuing to dig into building joy within your team. These explorations are okay, and should be welcomed, since they will help people start understanding where others on the team are coming from.
Drain The Wound
If we consider how medicine works, if someone is sick because they have a wound which has become infected, the first step to regaining health is to drain the wound in order to get the toxins out of the body. In much the same way, we will never become healthy as a team if we don’t drain the wounds which have accumulated throughout our lifetimes.
The exercise of draining the wound is simple: give people post-its, pens and a place to put everything and have them write down every last negative thing they feel about their work, the process or anything else that has them struggling with emotion coming into the experiment. It is important to discourage using names or pointing fingers since this simply spreads toxins around. We want to capture the toxic feelings and quarantine them.
The most important thing after performing this draining is to take all of the collected post-its and place them somewhere people can see them. Make sure to note: we are not ignoring the pain; we all know it is here and it has been exposed.
This is important.
If something new bubbles up during the course of the experiment, people should be welcomed to continue adding to the quarantine. It’s not off-limits to update, we are just setting all of the infection in a place where it can’t hurt us.
Sometimes wounds need more than one draining. Don’t be afraid of coming back around and doing this multiple times. Always be vigilant and identify new wounds that may be growing. Emotions can be fragile and wounds can be inflicted over time. We’re human, after all.
Make a Joy Wishlist
Everyone has things they wish they had. More time off, more support, faster builds, cleaner code, better communication, etc. Encourage people to simply write these things down. As a facilitator, you can also log ideas as people generate them. The important thing is to avoid filtering. The goal is to identify all the things which would make people happy regardless of how big, small or outlandish they are.
One important item here: don’t log or allow anything like “Bob needs to deliver stuff on time” or “make code suck less.” These kinds of negative statements are not things to aim for. Instead, encourage people to think about what it means for code to suck less. Perhaps what Bob needs in order to deliver on time is support, so try to capture something like “help Bob when he feels overloaded.”
Pave the Path
Once people have come to a natural close on what their joy might look like, it’s time to start looking for ways to take action. It’s not particularly useful to identify things which could be joyful if they are little more than simply a dream never to be realized. Aim to start paving a path people can walk to approach the things which help them feel greater joy.
Once again, our goal is to seek positive actions, which means we want to limit the kind of negativity which might bubble up. The negativity is likely rooted in the toxins we purged at the beginning, so help people to reframe into something positive.
In the session at Agile Open, someone mentioned they believed developers are lazy. Instead of dwelling on it or letting it become a toxin in the middle of our joy seeking, I tried to encourage them to understand the developers and their position, while also pointing out these same developers might not understand that person’s position either. From here, we can see joy might take the form of more open, honest communication. Once we understand the tension, we can seek a problem and pose solutions.
Take Joy Back to the Team
To close the session, I encouraged people to think about ways they could take the message of joy, and what we discovered during our exploration, back to their respective teams. The goal here was not to be prescriptive. Instead, each team needs to find their own joy and way to walk the path.
Within your own team there are several directions you can go from here; any of the following would be reasonable to try out:
- Dot vote on a point of joy to focus on and develop a path toward joy
- Pick an action at random, try it for a week and reflect on whether things were better or worse
- Leave each person with the idea that they must travel their own path to find joy and simply leave a single rule: one person's path cannot stomp on another's
- Devise your own brilliant plan on how to start moving toward joyful coding given what you've found
The most important part of this last bit is to ensure people feel empowered to drive toward joy on their own and together. It is up to the team to support each other and elevate the environment they are in.
Closing, Joy and My Approach
I will say, there is a fair amount of time I spend seeking joy on my own. I typically mix a bunch of different ingredients including music, time for recharge, time to vent, and poking the system until it moves in a way that feels better to me.
It’s really hard to say what might bring joy in a team or even for an individual, but I find joy often comes to me as clear communication – both in code and person to person, as a sense of directed purpose, as automation, and as opportunity to share, with others, the things which make my life a little better.
Perhaps you’ll find your team values some of these things as well. In all likelihood, your team will value things I wouldn’t have thought to include in this post; this too is completely acceptable. Unfortunately seeking joy is not a clear path with a process to force it to emerge. Joy is a lot like love, you just know it when you feel it.
I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.
Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher, the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?
As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.
I have been working for quite a while to devise a method for assessing web sites that provides two things. First, I want to assess the ability of a user to perform an action they want to perform. Second, I want to assess the ability of the user to complete a business goal while completing their own goals.
Google has some pretty neat toys for developers, and CakePHP is a pretty friendly, well-supported framework for quickly building applications. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.
Last night I was working on integrating oAuth consumers into Noisophile. This is the first time I had done something like this so I was reading all of the material I could to get the best idea for what I was about to do. I came across a blog post about oAuth and one particular way of managing the information passed back from Twitter and the like.
I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to subversion check-in number. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.
My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.
Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.
Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.
When you start working on a website or application, what is your goal? In the current state of the web, there are many ways you can carry your user but, in the end, you must choose web inclusive or web exclusive. Sites with rich APIs which interact with the world around them are web inclusive. Sites which focus internally, drawing little content from the outside web and, ultimately, giving nothing back are web exclusive.
It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.
I am big on modularity. There are lots of problems on the web to fix and modularity applies to many of them. A couple of posts ago I talked about content and that it is all built on or made of objects. The benefits from working with objectified content is the ease of updating and the breadth and depth of content that can be added to the site.
Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:
When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.
A little earlier this month, I made a post to Posterous called “What Have I Done?” It was less a post about what I had done as what I was doing. Here we are, approaching the end of the month and I’ve just completed phase one of what I was doing.
This morning I read a post about wireframes and when they are appropriate. Though I agree, audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.
With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.
How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.
There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.
Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.
Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.
I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.Net technology, or so I would assume as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.
Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.
I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.
Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:
People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.
A couple of weeks ago, a friend of mine sent out a tweet asking what the ‘x’ was in Ux. I shot back a pithy “Ux is User Experience.” In a small way, the question got my mind rolling. I didn’t realize, at the time, that I was considering who does and doesn’t know anything about user experience and why that might be.
When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhance findability.
Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.
Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.
User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.
Every time I wander the web I seem to find it more complicated than the last time I left it. Considering this happens on a daily basis, the complexity appears to be growing monotonically. It has been shown again and again that the attention span of people on the web is extremely short. A good example of this is a post on Reputation Defender about the click-through rate on their search results.
I make no secret of the fact that I’m not a huge fan of Flash. It’s not really because I feel there is anything inherently wrong with Flash. I am opposed to the gross overuse and misuse that happens every day. Sometimes only Flash will do, and in those circumstances it is the answer. Sometimes Flash is the answer to a question that is totally incorrect.
It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.
I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server. I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness. Either way, I only post when something strikes me.
It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:
If there is one thing that I feel can be best learned from programming for the internet it’s modularity. Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess. Welcome to the “I need this fix last week” school of code updating. Honestly, that kind of thing happens to the best of us.
I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?
Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.
Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use Firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.
By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.