I’ve read several books and articles about web projects and how to make sense of what needs to happen when. Everyone has their own slant, and it colors how they think the entire process should go. Meanwhile, they hope the “magic” in some other group is happening.
It’s really important to have a list of the things you need to accomplish. I am not going to tell you who does what. That depends on your team and how you dole things out. I will tell you that each of these pieces needs to be addressed or you’ll have a tough time moving to the next step.
Without any further ado, here’s my core list of stuff to check off:
- Business and user needs
- Content Inventory and analysis
- Site analytics (Look at search terms. This is your user talking TO YOU)
- Brand, voice and message
- Curated existing content (text, images, documents, etc.)
- New content (Get it early or you'll hate yourself later)
- Process functions (search, dynamic functions for displaying content, etc.)
- Information Architecture (Hierarchy, page layout, workflow, wireframes, etc.)
- Build structured documents containing all raw copy for the site
- Colors, designs, images, flow, implemented voice and message
- Get info about servers, technical needs, etc.
- Prepare SEO deployment plan
- Prepare comps using design specs and wireframes
- Build templates to house content based on wireframes
- Implement design via styles as comps are completed
- Edit content to match voice and message and begin inserting into CMS
- Prepare SEO: descriptions, titles, friendly URLs, etc.
- Prepare 301 redirects for old pages being replaced or moved (I feel this SEO technique is important enough it needs to be stated separately)
- Ensure servers are online, technical needs are met and everything is ready for launch
- Stage site out of the public eye and QA completely
- Deploy site (Post-QA)
- QA for sanity (All pieces are behaving like they were on staging, nothing is broken)
- Review analytics and compare to pre-release stats
- Inventory new site (include all new copy, new images, etc.)
- Review conversions
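The 301 redirects on the list can be as little as one line of server configuration each. Here’s a sketch for Apache (the paths are hypothetical, just to show the shape):

```apache
# Permanently redirect a moved page so search engines
# transfer its existing ranking to the new URL
Redirect 301 /old-products.html /products/
```

One line like this for every old page being replaced or moved keeps both users and search engines from hitting dead ends.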
Build your core list of needs and lead your team to the promised land. One project at a time, make the web a better place.
We’ve all been there at some point or another. A new project is just about to start. Everyone in the know is bracing for impact and the people who are going to contribute are blissfully unaware of the monster lurking around the corner.
Generally the kickoff goes like this:
“Hey, everyone, our client needs an update to their website. You know what you need to do, so let’s hit it. Peace out.”
Do they know what they need to do? At a high level, maybe? Probably not, really.
Stakeholders don’t know what’s coming. The client doesn’t know what they need. The design and development units are waiting until the other one is done with “their part.” In the end, everyone scrambles at the 11th hour and the project comes together. Barely.
Let’s hit rewind and do this kickoff the right way. If you are a team lead, figure out what you need. Sort out who your users are and what business needs are being addressed on your site. Contact stakeholders early, share what you need and then listen to what they want. Use this to sort out your priorities. Organize the moving parts, call your team together and give them the rundown.
Once people have their marching orders, collect everyone for happy hour. This is the first step in a long journey; start it with a cheer instead of a fizzle. Kick off your projects right and make the web a better place.
Information comes in all shapes and sizes. Some is simple. It’s copy that goes on a page. It’s an image. It’s a sound file. It’s a single PDF. It’s whatever atomic piece of information you can imagine. Then there is the molecular level, for instance, whole web pages with mixed content. Then there is listed information: movie titles, collections of documents, retail products, animals with feathers, types of beer, whatever.
Listed information can get tricky. I poked around the web to see how people were describing their lists and presenting them to the world. In the end I came up with a set of STEPS to help break off bite-sized pieces to digest. I even came up with a cute little acronym in the process.
Here’s how it works: STEPS is Sort, Transform, Eliminate, Paginate and Search. It goes in that order, even.
The first thing to do with the list is sort it. Generally you’ll want to sort in order of importance. Sometimes alphabetical will be better. Other times a different sorting algorithm may be useful. Depending on the sheer volume of the information, you may be able to do this by hand or, much like the meat in Chicken McNuggets, your information will need to be mechanically separated. I won’t dictate the correct sorting algorithm since it will vary, but the list needs to be sorted first. Sorting will make life easier through the coming steps.
Once you have the basic sorting figured out, you’ll need to run a basic transformation on the resulting list. This transformation usually includes chunking the sorted list into manageable portions and preparing the set for display. It is key, at this point, that you have already assessed personas for your project. The personas you have developed, along with your understanding of business goals, will guide how to effectively transform your information into something your users can use.
Elimination is the first presentation step. It may be done a priori or on the fly; this depends on your use cases. Often you will know something about what your user needs before they start digging in, so why not make their lives easier and pitch the stuff they don’t want?
Even after eliminating all of the information your user doesn’t need, there may well be a large list left to sift through. Fortunately, the items are already sorted! Pagination helps to trim what the user is looking at and makes each group manageable. It’s easier to skim a short list without getting tired eyes; the same cannot be said for a list hundreds of items long. Keep the displayed list short.
Even after doing all of this, you may still have page after page after page of information the user has to wade through. Don’t forget that users are generally search-oriented. If they can just type in keywords and get what they want, or at least get close, they will be much happier. Preparing information for search is at least a post in itself, so I won’t talk about it here, but this last step could be the critical piece that makes or breaks your user experience when looking for something.
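The first four of these STEPS can be sketched as a small pipeline. This is a toy illustration, not a prescription; the item list, the sort key, the transformation and the filter predicate are all hypothetical stand-ins for whatever your personas and business goals dictate. (Search is left out, since, as noted above, it’s a post in itself.)

```python
# A toy walk through Sort, Transform, Eliminate, Paginate.

def steps(items, sort_key, transform, keep, page_size):
    """Run a list through the first four STEPS stages, in order."""
    ordered = sorted(items, key=sort_key)               # Sort
    shaped = [transform(item) for item in ordered]      # Transform for display
    wanted = [item for item in shaped if keep(item)]    # Eliminate what the user won't want
    # Paginate: chunk the remainder into short, skimmable pages
    return [wanted[i:i + page_size] for i in range(0, len(wanted), page_size)]

# Hypothetical data: beers with a relevance score
beers = [
    {"name": "Stout", "score": 7},
    {"name": "Pilsner", "score": 9},
    {"name": "Lambic", "score": 3},
]

pages = steps(
    beers,
    sort_key=lambda b: -b["score"],         # most relevant first
    transform=lambda b: b["name"].upper(),  # prep each item for display
    keep=lambda name: name != "LAMBIC",     # drop what this user doesn't need
    page_size=2,
)
print(pages)  # [['PILSNER', 'STOUT']]
```

The order matters: because sorting happens first, elimination and pagination never have to re-shuffle anything, and each page comes out already ranked.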
This is just a skeletal framework you can use to help focus each step along the way when you are organizing long lists of information for your users. In the end, the way each step works will be defined by the scope of your project, the amount of information, business goals and chosen personas. The same information could look radically different depending on these factors. In the end, though, working step-by-step will help to focus efforts and steer you away from analysis paralysis. Try my 5 STEPS process and make the web a better place.
As is typical with the break room at many offices, we have a microwave. Actually, we have two, but there is one in particular that everyone knows about and avoids. Everyone but me*, that is. It is a machine crafted in the forges of bad usability and total misunderstanding of user journey.
The main requirement for a microwave is that one be able to set the time for which their food will receive a nuclear blast, converting last night’s roast beef into magma. Either there is a “time cook” button, or you simply enter the time directly. The uranium-235 does the rest.
In all seriousness, people expect time and power level entry to exist with a clear designation. If a microwave has no designation, people often assume it is reasonable to simply begin keying a time. This microwave is different.
My coworkers avoid the evil microwave because it doesn’t do things the way they expect. If you begin keying time, it sets off “auto cook” mode and cooks food for some pre-designated time in full-minute increments from 1-6. There is no “time cook” button, only a “power” button. It appears there is no way to enter the exact time something should cook for.
As it turns out, there IS a way to enter an exact time. If you press “power” you can freely key in your time. Yes, by opting to enter your power level, you are free to enter the amount of time to cook your food.
Clearly someone decided it would be efficient to trim the “time cook” button out of the process because it was an “unnecessary” action that made the overall process slower. What they forgot to take into account is what people actually look for: how to enter the time.
Translation to the rest of the world?
Fewer clicks is just that: fewer clicks. Just because you took away a couple of clicks in order to get the user to their destination doesn’t mean you did them a service. They may have relied on that waypoint on your site. Though it cost them an extra click, they knew each click was an important step on the journey. You helped build their confidence in your site by marking the path clearly. If you eliminate a critical sign at a fork in the road, your users may get lost.
If you find your users getting derailed at a certain point on your site, look for the missing “time cook” button. Take care in guiding your users carefully through your site, adding an extra step if it makes the entire journey clearer. Users only mind extra steps if they don’t lead to clarity. Avoid building the efficient and impractical microwave and make the web a better place.
*I intentionally use the bad microwave. Since nobody knows how to use it, the traffic to that appliance is low. Food for thought.
Most of what I have seen on the web regarding Information Architecture has been related, primarily, to what the user sees and interacts with directly. That is, what users see and how the site is ultimately constructed hierarchically. Very little consideration is given to what is ACTUALLY going on behind the site.
In the early days of the web, a website was mostly HTML and associated miscellany. There were some content management systems and dynamic utilities out there, but they were typically purpose-built and totally proprietary. One company I worked for had a system that, essentially, took in HTML snippets and stored them in a database as a web page that was 90% constructed, doing the last of the construction for presentation on the fly. It wasn’t graceful, but it served its purpose.
Today websites are web apps and vice versa. There are fewer and fewer business sites on the web that are old-fashioned HTML. The company I work for now has a broad-reaching infrastructure for its web presence which involves various computers and disparate web services. This is all very important to the experience the end user is going to have when they visit your site.
Information Architects today need to be cognizant of the systems they will be working with, the limitations of technology already implemented and the latitude they have regarding direction for the site and the user’s end interactions.
Let’s assume you have a list of items you already account for in your work: page layout, important items both for users and internal, site hierarchy, taxonomy, findability of information, etc.
Here are other items which should be considered:
What kind of a system is being used? Is there an e-commerce solution in place? What e-commerce package is being used? Does it integrate with the existing content solution or is it a separate tool?
Integrated or Decoupled Site/Content System
Is the site being run on a simple WordPress/Movable Type/Drupal install where the site management is tightly coupled with the website itself? Is the website requesting content across the wire from a service on another site/computer/continent? How will this impact speed? How easy is it to integrate custom features and functions?
Data Transport
This is a biggie. If you have an old-school web project, this is unimportant. If you are sending data across a wire to a custom tool or application, this becomes VERY important. How do you want to connect to the data? Do you want to make a direct remote connection to the database or do you want to send the data as a single package across the wire? If the data is being sent as a package, what do you want to use? JSON? XML? Serialized string? Something sneaky I don’t know about?
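For illustration, here is what the “single package across the wire” option might look like with JSON. The record and its field names are hypothetical; the point is only the shape of the exchange: serialize on one end, unpack the same structure on the other.

```python
import json

# Hypothetical product record, packaged as JSON rather than exposed
# through a direct remote database connection or a serialized string.
record = {"id": 42, "name": "Widget", "price_cents": 1999}

payload = json.dumps(record)   # serialize into a single package for transport
print(payload)                 # {"id": 42, "name": "Widget", "price_cents": 1999}

received = json.loads(payload) # the far end unpacks the identical structure
assert received == record
```

Because JSON is standards-compliant and self-describing, the design, engineering and maintenance questions in the surrounding paragraphs all get simpler: everyone on the team can see exactly what crosses the wire.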
These sound like much more engineery/techy/geeky considerations, but they are important in the end. Perhaps you are pulling information from your company’s database, a Twitter feed and pictures from Flickr. All of a sudden, you are working with mixed coupling. You will need to know this in case something fails. You are the one they are going to ask when they need an error screen.
Why data transport? Simple. You don’t need to know how to implement it, but if you work with the engineers and pick a standards-compliant data transport system, you will save them lots of headaches and yourself extra work in the end. By understanding the way data is being passed across the wire, you can start to understand how to better integrate RSS feeds into a design, present useful information to users and do it in a way that will be quick to implement and easy to maintain. Ultimately, if you want to invent a better wheel, you’ll want to be armed with a damn good reason.
Ultimately, there is a lot of information that needs to be shaped and directed. In order to best serve your user, yourself and your company, you need to consider things that are more than skin deep. By tackling the tough questions about your project early, you can write a clearer and more useful specification. In the end, by peeling back the layers, you help to make the web a better place.
I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.
Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher: the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?
As we discussed last week, users have a predictable tolerance for wait times through waiting for page loading and information seeking behaviors. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.
I have been working for quite a while to devise a method for assessing web sites and the ability to provide two things. First, I want to assess the ability for a user to perform an action they want to perform. Second, I want to assess the ability for the user to complete a business goal while completing their own goals.
Google has some pretty neat toys for developers, and CakePHP is a friendly, well-supported framework for quickly building applications. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.
Last night I was working on integrating OAuth consumers into Noisophile. This is the first time I had done something like this, so I was reading all of the material I could to get the best idea of what I was about to do. I came across a blog post about OAuth and one particular way of managing the information passed back from Twitter and the like.
I’ve been tasked with an interesting problem: encourage the Creative department to migrate away from their current project tracking tool and into Jira. For those of you unfamiliar with Jira, it is a bug tracking tool with a bunch of toys and goodies built in to help keep track of everything from hours to Subversion check-in numbers. From a developer’s point of view, there are more neat things than you could shake a stick at. From an outsider’s perspective, it is a big, complicated and confusing system with more secrets and challenges than one could ever imagine.
My last post was about finding a healthy balance between client- and server-side technology. My friend sent me a link to an article about SEO and Google’s “reasonable surfer” patent. Though the information regarding Google’s methods for identifying and appropriately assessing useful links on a site was interesting, I am quite concerned about what the SEO crowd was encouraging because of this new revelation.
Earlier this year I discussed progressive enhancement, and proposed that a web site should perform the core functions without any frills. Last night I had a discussion with a friend, regarding this very same topic. It came to light that it wasn’t clear where the boundaries should be drawn. Interaction needs to be a blend of server- and client-side technologies.
Since I am an engineer first and a designer second in my job, more often than not the designs you see came from someone else’s comp. Being that I am a designer second, it means that I know just enough about design to be dangerous but not enough to be really effective over the long run.
It’s always great when you have the opportunity to build a site from the ground up. You have opportunities to design things right the first time, and set standards in place for future users, designers and developers alike. These are the good times.
I am big on modularity. There are lots of problems on the web to fix, and modularity applies to many of them. A couple of posts ago I talked about content and how it is all built on or made of objects. The benefits of working with objectified content are the ease of updating and the breadth and depth of content that can be added to the site.
Through all of the usability, navigation, design, various user-related laws and a healthy handful of information and hierarchical tricks and skills, something that continues to elude designers and developers is pretty URLs. Mind you, SEO experts would balk at the idea that companies don’t think about using pretty URLs in order to drive search engine placement. There is something else to consider in the meanwhile:
When I wrote my first post about object-oriented content, I was thinking in a rather small scope. I said to myself, “I need content I can place where I need it, but I can edit once and update everything at the same time.” The answer seemed painfully clear: I need objects.
This morning I read a post about wireframes and when they are appropriate. Though I agree that audience is important, it is equally important to hand the correct items to the audience at the right times. This doesn’t mean you shouldn’t create wireframes.
With the advent of Ruby on Rails (RoR or Rails) as well as many of the PHP frameworks available, MVC has become a regular buzzword. Everyone claims they work in an MVC fashion though, much like Agile development, it comes in various flavors and strengths.
How many times have you been on a website and said those very words? You click on a menu item, expecting to have content appear in much the same way everything else did. Then, BANG you get fifteen new browser windows and a host of chirping, talking and other disastrous actions.
There has been a lot of talk about graceful degradation. In the end it can become a lot of lip service. Often people talk a good talk, but when the site hits the web, let’s just say it isn’t too pretty.
Suppose you’ve been tasked with overhauling your company website. This has been the source of dread and panic for creative and engineering teams the world over.
Working closely with the Creative team, as I do, I have the unique opportunity to consider user experience through the life of the project. More than many engineers, I work directly with the user. Developing wireframes, considering information architecture and user experience development all fall within my purview.
I’ve been working on a project for an internal client, which includes linking out to various medical search utilities. One of the sites we are using as a search provider offers pharmacy searches. The site was built on ASP.NET technology, or so I would assume, as all the file extensions are ‘aspx.’ I bring this provider up because I was shocked and appalled by their disregard for the users that would be searching.
Some sites, like this one, have a reasonably focused audience. It can become problematic, however, for corporate sites to sort out their users, and lead them to the path of enlightenment. In the worst situations, it may be a little like throwing stones into the dark, hoping to hit a matchstick. In the best, users will wander in and tell you precisely who they are.
I just read a short, relatively old blog post by David Naylor regarding why he believes XML sitemaps are bad. People involved with SEO probably know and recognize the name. I know I did. I have to disagree with his premise, but agree with his argument.
Today, at the time of this writing, Google posted a blog stating they were dropping support for old browsers. They stated:
People are creative. It’s a fact of the state of humanity. People want to make things. It’s built into the human condition. But there is a difference between haphazard creation and focused, goal-oriented development.
When given the task of making search terms and frequently visited pages more accessible to users, the uninitiated fire and fall back. They leave in their wake broad, shallow sites with menus and navigation which look more like weeds than an organized system. Ultimately, these navigation schemes fail to do the one thing they were intended for: enhancing findability.
Most content on the web is managed at the page level. Though I cannot say that all systems behave in one specific way, I do know that each system I’ve used behaves precisely like this. Content management systems assume that every new piece of content which is created is going to, ultimately, have a page that is dedicated to that piece of content. Ultimately all content is going to live autonomously on a page. Content, much like web pages, is not an island.
Nothing like a nod to the reverse mullet to start a post out right. As I started making notes on a post about findability, something occurred to me. Though it should seem obvious, truly separating presentation from business logic is key in ensuring usability and ease of maintenance. Several benefits can be gained with the separation of business and presentation logic including wiring for a strong site architecture, solid, clear HTML with minimal outside code interfering and the ability to integrate a smart, smooth user experience without concern of breaking the business logic that drives it.
User self selection is a mess. Let’s get that out in the open first and foremost. As soon as you ask the user questions about themselves directly, your plan has failed. User self selection, at best, is a mess of splash pages and strange buttons. The web has become a smarter place where designers and developers should be able to glean the information they need about the user without asking the user directly.
Every time I wander the web I seem to find it more complicated than the last time I left it. Considering this happens on a daily basis, the complexity appears to be growing monotonically. It has been shown again and again that the attention span of people on the web is extremely short. A good example of this is a post on Reputation Defender about the click-through rate on their search results.
It’s been a while since I last posted, but this bears note. Search engine optimization, commonly called SEO, is all about getting search engines to notice you and people to come to your site. The important thing about good SEO is that it will do more than simply get eyes on your site, but it will get the RIGHT eyes on your site. People typically misunderstand the value of optimizing their site or they think that it will radically alter the layout, message or other core elements they hold dear.
I only post here occasionally and it has crossed my mind that I might almost be wise to just create a separate blog on my web server. I have these thoughts and then I realize that I don’t have time to muck with that when I have good blog content to post, or perhaps it is simply laziness. Either way, I only post when something strikes me.
It’s been a while since I have posted. I know. For those of you that are checking out this blog for the first time, welcome. For those of you who have read my posts before, welcome back. We’re not here to talk about the regularity (or lack thereof) that I post with. What we are here to talk about is supporting or not supporting browsers. So first, what inspired me to write this? Well… this:
If there is one thing that I feel can be best learned from programming for the internet it’s modularity. Programmers preach modularity through encapsulation and design models but ultimately sometimes it’s really easy to just throw in a hacky fix and be done with the whole mess. Welcome to the “I need this fix last week” school of code updating. Honestly, that kind of thing happens to the best of us.
I have a particular project that I work on every so often. It’s actually kind of a meta-project as I have to maintain a web-based project queue and management system, so it is a project for the sake of projects. Spiffy eh? Anyway, I haven’t had this thing break in a while which either means that I did such a nice, robust job of coding the darn thing that it is unbreakable (sure it is) or more likely, nobody has pushed this thing to the breaking point. Given enough time and enough monkeys. All of that aside, every so often, my boss comes up with new things that she would like the system to do, and I have to build them in. Fortunately, I built it in such a way that most everything just kind of “plugs in” not so much that I have an API and whatnot, but rather, I can simply build out a module and then just run an include and use it. Neat, isn’t it?
Happy new year! Going into the start of the new year, I have a project that has carried over from the moment I started my current job. I am working on the information architecture and interaction design of a web-based insurance tool. Something that I have run into recently is a document structure that was developed using XML containers. This, in and of itself, is not an issue. XML is a wonderful tool for dividing information up in a useful way. The problem lies in how the system is implemented. This, my friends, is where I ran into trouble with a particular detail in this project. Call it the proverbial bump in the road.
Something that I have learnt over time is how to make your site accessible for people that don’t have your perfect 20/20 vision, are working from a limited environment or just generally have old browsing capabilities. Believe it or not, people that visit my web sites still use old computers with old copies of Windows. Personally, I have made the Linux switch everywhere I can. That being said, I spend a certain amount of time surfing the web using Lynx. This is not due to the fact that I don’t have a GUI in Linux. I do. And I use Firefox for my usual needs, but Lynx has a certain special place in my heart. It is in a class of browser that sees the web in much the same way that a screen reader does. For example, all of those really neat iframes that you use for dynamic content? Yeah, those come up as “iframe.” Totally unreadable. Totally unreachable. Iframe is an example of web technology that is web-inaccessible. Translate this as bad news.
By this I don’t mean that you should fill every pixel on the screen with text, information and blinking, distracting graphics. What I really mean is that you should give yourself more time to accomplish what you are looking to do on the web. Sure, your reaction to this is going to be “duh, of course you should spend time thinking about what you are going to do online. All good jobs take time.” I say, oh young one, are you actually spending time where it needs to be spent? I suspect you aren’t.