Web Designers Rejoice: There is Still Room

Sep 28, 2010

I’m taking a brief detour and talking about something other than user tolerance and action on your site. I read a couple of articles, which you’ve probably seen yourself, and felt a deep need to say something. Smashing Magazine published Does The Future Of The Internet Have Room For Web Designers? and the rebuttal, I Want To Be A Web Designer When I Grow Up, but something was missing.

Both articles focused on content and how it gets passed around. The problem is, there is a lot more going on than just content on the web. What both articles overlook is the work of the web developer or web engineer. No, this isn’t an attempt to shoehorn engineers into this discussion. It’s about the fact that they are needed to produce function.

Beyond the world of content is a whole slew of function on the web. Web apps have become increasingly important in the landscape of the web. As a matter of fact, you’re currently visiting a web app. Yes, you’re seeing content, but you are also interacting with an application which allows me to manage and publish that content for you to see.

Other things which happen on the web include buying insurance, banking, playing games, posting comments, browsing public forums, sharing meeting whiteboards, chatting and plenty of other things I haven’t listed. Web applications are vital to the new web experience.

So, what about the web designers?

Web designers are needed to make all of the extant and constantly emerging applications sensible and enjoyable. Regardless of the particular language or server structure used to produce the web apps you use every day, one of the primary interfaces is still the browser. This means what is an application in one sense is a web page in another. Who designs these pages you see? Web designers.

This link between the design world and the application world has been developing for decades. Designers are vital in the production of web applications just as much as the engineers are. The world of applications today isn’t the same as it was even in the 1990s.

Users crave satisfaction.

As little as 15 or 20 years ago, simply having a working application was a feat unto itself. If you could enter input and get a meaningful result in return, the application was launched. People today expect more. They want to be able to enter what they need and get the meaningful output they expect, but they also desire rich interaction. They crave visually stimulating and sensible interfaces. Users have gotten more design savvy and they won’t stand for mediocre if they can have the best.

Regardless of where content fed from a website is displayed, neither Facebook nor Google will ever be able to serve the function you provide on your site. Moreover, they will not give your user the experience they expect from your company. Only by interacting directly with YOUR site will the user ever find the satisfaction they seek.

Ultimately, the people responsible for bridging the gap between the engineer and the user are designers. Designers come in various flavors from the jack-of-all-trades to the specialist interaction, user experience and interface designers. Designers make the user comfortable. Designers provide the problem-solving expertise which is so crucial to making an interface meaningful and usable.

In the end, to say that the future of the internet has no room for designers would be just as foolish as saying the future of the internet has no place for engineers. I mean, there are all of these turnkey software packages out there, what do you need an engineer for?

It’s foolishness.

Ultimately, engineers and designers are both critical to the web experience. They have been until now and the need is only expanding. Even as content is served out to other distribution channels, the home still needs to be somewhere. Even as content remains king, the sea of applications continues to expand. Much like Jell-O, there is always room for designers. Go, design and make the web a better place.

Anticipating User Action

Sep 21, 2010

Congrats, you’ve made it to the third part of my math-type exploration of anticipated user behavior on the web. Just a refresher: the last couple of posts were about user tolerance and anticipating falloff/satisficing. These posts may have been a little dense and really math-heavy, but it’s been worth it, right?

I have one last function to look at. This function will let us sort out how long a certain percentage of our users will hang out at a site trying to accomplish their goals, given a random population interacting with a page or site they have never visited before. By having the ability to calculate the output of the user falloff function, we can compare user test results to our falloff curve without plotting the entire curve to show anticipated versus actual results.

We’ve already talked about the user tolerance constant (L) and our starting number of users (u_0). These are all the elements we need to figure out at what time (t) we will have a percentage (p) of our users left actively seeking on our site. The actual number of users remaining is trivial to calculate after you pick a percentage, i.e. p*u_0.

(Reminder: percentages should always be expressed as a decimal between 0 and 1.)

Without further ado, here’s the function of the day:

t = L*[ln(1/p)/ln(u_0)]^0.5
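If you would rather see the algebra than take my word for it, here is a quick sketch of where this comes from: take the falloff function from last week, u(t) = u_0^(1-(t/L)^2), set the users remaining equal to p*u_0 and solve for t.

```
u_0^{1-(t/L)^2} = p\,u_0
\left(1 - (t/L)^2\right)\ln u_0 = \ln p + \ln u_0
-(t/L)^2 \ln u_0 = \ln p
t = L\left[\frac{\ln(1/p)}{\ln(u_0)}\right]^{0.5}
```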

There are some interesting features of this function which are a direct result of the falloff/satisficing function we talked about last time. First, as we get closer to 100% (or p gets closer to 1) t gets closer to 0. This shows us that, at least at one end, this function must be meaningful since none of our users should have fallen off before the testing or site loading began.

At the other end, we can see that as we get closer to 1% (or p gets closer to 0.01), time approaches L, our user tolerance limit. (This works out exactly for a 100-user test group, since the falloff function always leaves a single user at t = L, and one user out of 100 is 1%.) This means our function is actually behaving the way it should and our results will prove to be reasonably meaningful, regarding our test population.

This function is probably as important, if not more so, for testing than our falloff function because you can actually see how you are performing in your test group against a theoretical control group. This means, if you take your testing group, collect data and analyze it against standard statistical curves, you should be able to get a reasonable estimate of how your users are measuring up against the model when visiting your site.

On the other hand, if you are underperforming, you now have a reasonable metric to deduce when things are going wrong. You can do things like plug in 0.68 for the percentage and see if you are actually capturing the first standard deviation of your users within the allotted time.
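To make that concrete, here is a minimal PHP sketch; the function name is my own invention, not part of any library:

```
<?php
// Time (in seconds) at which only a fraction $p of the starting
// users ($u0) are still seeking, given a tolerance limit $L.
function timeToFalloff($L, $u0, $p)
{
    return $L * pow(log(1 / $p) / log($u0), 0.5);
}

// With a 13-second tolerance and 100 test users, the first standard
// deviation (68%) should still be on the site at about 3.8 seconds.
echo timeToFalloff(13, 100, 0.68);
```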

In the end, this should all relate back to one question: are my users accomplishing what they want to do before they are getting too frustrated to continue with my website? If the answer is ‘yes,’ then pat yourself on the back and then make it even better. If, on the other hand, your answer is ‘no,’ it’s time to start evaluating how your site is impacting the users who visit it.

Are they suffering from unintended ad-blindness as the users tested for the US census website were? Are you suffering from a hover-and-cover anti-pattern which is causing your users to have to steer all over the page to get to what they need? Are you not using language that makes sense to your audience?

All of these questions and many more should come to mind to improve your performance against the baseline we’ve defined. Just remember, even when you are consistently beating my model you can still improve more. Surprise and delight your users. Beat the curve and then improve again. Think about your users, make your site a delight to use and make the web a better place.

Anticipating User Falloff

Sep 13, 2010

As we discussed last week, users have a predictable tolerance for the time spent waiting for pages to load and seeking information. The value you get when you calculate expected user tolerance can be useful by itself, but it would be better if you could actually predict the rough numbers of users who will fall off early and late in the wait/seek process.

It is reasonable to say that no two people have the same patience for waiting and searching. Some people will wait and search for an extraordinary amount of time while others get frustrated quickly and give up almost immediately. To expect that all of your users will hold out until the very last moment of the predicted 13 or so seconds hardly reflects reality.

Instead, we can say that there is some maximum tolerance, L, which we can compute, and which only the very last holdouts will actually wait out. Unfortunately, we also know that if they have to wait very long, a majority of users won’t even see your site, since they will fall off before the page finishes loading. This means that the number of users who actually see your site will be something less than the number of users who attempted to load it.

There were some really smart guys who lived about 100-200 years ago and did lots of math. The really neat thing is they did a lot of work that I don’t have to reproduce or prove since they were smarter, more patient and far more talented than I am. One of them was named Carl Friedrich Gauss, who some refer to as the Prince of Mathematics. He was really smart. Newton smart. Possibly smarter than Newton.

What does Gauss have to do with user experience?

Gauss is the guy who figured out how to evaluate (1/√(2π))·e^(−x²/2) when integrated from negative to positive infinity. Did I just see your eyes glaze over? It’s alright. That happens to me a lot.

What this really translates to is, Gauss figured out how to work with the statistical standard normal curve. This probably means a lot more to you, right? This function happens to be really useful in developing something meaningful regarding users and their falloff over time from initial click through to our tolerance, L.

I spent an entire weekend where I slept little and did math a lot. During that time, I developed a function, based on the standard normal curve, which says something reasonably meaningful about users and how long they are willing to stay on your site to a) search for what they need and b) not satisfice. I’ll give you the function without justification. Contact me if you want all the formalities; I have them in a folder, at the ready.

Our function looks something very much like the following:

u(t) = u_0^(1-(t/L)^2)

What this says is that the number of users still on your site at time t is equal to the initial users times some falloff factor evaluated at t. The cool thing is, we already know everything that goes into this little gem when we are testing. We know how many users we started with and we know what L is. The really interesting bit is, when t > L, u(t) is less than one. This means the expected number of users remaining after we pass the maximum tolerance is less than a single person, which is exactly what we expect.
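If you want to plug numbers in yourself, here is a minimal PHP sketch; again, the function name is mine, not from any library:

```
<?php
// Expected number of users still on the site at time $t (seconds),
// starting from $u0 users with a tolerance limit of $L seconds.
function usersRemaining($u0, $L, $t)
{
    return pow($u0, 1 - pow($t / $L, 2));
}

// 100 users and a 13-second tolerance: watch the population fall off.
foreach (array(0, 5, 10, 13, 15) as $t) {
    printf("t = %2d s: %5.1f users\n", $t, usersRemaining(100, 13, $t));
}
```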

Below is an estimation of what the graph would look like for your analysis:

[Figure: User Falloff Over Time]

This may not seem like much of a revelation. We all know that, as people run out of patience, they leave the site. What this does is give us something we can plug into our calculators and project a quantified result. It also means that, if you can produce results which fall beyond the bounds of this graph as you are testing, you know you are outperforming expected results. You can also use this to compare against the number of people who satisfice during testing.

Probably one of the most important uses is comparing the number of users who remain on a site for an expected amount of time to the amount of time needed to produce a conversion. This offers a real, concrete means of demonstrating ROI on your efforts to encourage users to remain on your site.

The uses of this modest function are so numerous I can’t afford the space here to list them. I will offer up more insight into this function, as well as other related functions which can be used for further prediction. I encourage you to sit and play with this. See how it compares with your test findings. Gauge how you are performing against the model. Improve and test again and, above all else, make the web a better place.

User Frustration Tolerance on the Web

Sep 7, 2010

I have been working for quite a while to devise a method for assessing web sites on two fronts. First, I want to assess the ability of a user to perform an action they want to perform. Second, I want to assess the ability of the user to complete a business goal while completing their own goals.

Before we start down this particular rabbit hole, there’s a bit of a prerequisite for our discussion. It is important that you understand Fitts’ Law and its generalization, the Steering Law. These are critical to understanding how much time a user will be willing to dedicate to your site the first time they arrive, or after a major overhaul, before abandoning their goals and leaving the site.

So, let’s suppose you have users visiting your site, or, better yet, you are performing user testing and want to evaluate how your site is performing with the users you are testing. It is important to have a realistic expectation regarding what users would really tolerate on the web before they leave so you can evaluate the results of your test accordingly.

Most users have some sort of tolerance level. By this, I mean most users are only willing to give you a fraction of their day before they get fed up. Reasonably, some users will have a shorter fuse than others, but all will eventually blow the proverbial gasket and leave your site, never to return. Let’s call this tolerance for pain and frustration ‘L.’

L is the amount of time, typically in seconds, that your user is willing to spend looking over your site and trying to accomplish their goals. It is becoming common to say that a user will attempt to satisfy a goal and, ultimately, they will attempt to satisfice if nothing else seems to work.

When they hit the satisfice point, they are reaching their tolerance for frustration. The satisfice action comes quickly, so we have very little time to fully satisfy the user. There are actually three items which go into the base tolerance before satisficing occurs:

  1. The maximum acceptable page load time (p)
  2. The maximum time it takes after page load to locate a satisfactory action to achieve their goal (g)
  3. The Fitts'/Steering time it takes to get to their preferred action item (fs)

The maximum acceptable page load time seems to range from one to ten seconds depending on who you talk to or read on the web. I am opting to take the average and say that the maximum acceptable page load time is around five seconds, though this can vary depending on other factors which are outside the scope of this discussion.

Users, once the site has loaded, have a maximum time they will spend searching for something to satisfy their goals. The number I keep seeing thrown around is seven seconds, so I am going to accept that as my number for a general baseline for user behavior.

Finally, we have Fitts’ Law and the Steering Law. This lends a little complication to the matter, as these functions will return varying results. The simplest case would be a Fitts’ Law case where the user can move directly to an item on the screen without interruption or interference. Each person knows how much time it takes them to move from one place to another on the screen and they will, generally, allow for time to move the cursor to a target.

If the screen does other, unexpected things while the user is moving their pointer, like opening and closing ads, displaying inline pop-ups which cover the target or other interferences, the user will get distracted and frustrated. This is where a Fitts’ Law asset can become a Steering Law liability. A frustrated user is far more likely to leave than a satisfied user. For each item which interferes with the user’s ability to move to their target, their patience will wane. Reasonably, then, using the variables I defined above, we can calculate the tolerance constant as follows:

L = p + g + fs - (sum of all subsequent changes in fs)

Better yet, if we plug in the basic values I collected from around the web, we get this:

L = 5 + 7 + fs - (sum of all subsequent changes in fs) = 12 + fs - (sum of all subsequent changes in fs)
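As a quick sketch, here is that arithmetic in PHP; the naming is my own, nothing standard:

```
<?php
// Tolerance limit in seconds: page load ($p) + goal seeking ($g)
// + pointer travel ($fs), less the time lost to each interruption
// (the subsequent changes in fs) along the way.
function toleranceLimit($p, $g, $fs, array $interruptions = array())
{
    return $p + $g + $fs - array_sum($interruptions);
}

// Baseline values from above: 5 s load, 7 s seek, a half-second
// pointer move and one inline pop-up costing 2 s of steering.
echo toleranceLimit(5, 7, 0.5, array(2)); // prints 10.5
```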

Moving from one place on the screen to another is a relatively quick motion, so we can see, given there aren’t major issues with user-flow interruption, that the average user tolerance is going to be between 12 and 13 seconds for a site from beginning to end. That’s not a very long time, but to a user, it’s an eternity. Don’t believe me? Sit and watch the clock for 13 seconds, uninterrupted. Go on, I’ll wait.

Kind of a painful experience, isn’t it? Keep this in mind as you create your site and watch users test it. During your next test, run a stopwatch. If it takes your tester more than a quarter of a minute to sort everything out and do what they were tasked with, you have some serious issues to consider.

I threw a lot out in one little post today. Let it soak for a while and tell me what you think in the comments. As you work on guiding users through your site and as you test, think about those 13 seconds spent just watching the clock tick. Consider your user and their tolerance for frustration and pain. Keep the journey quick and painless and make the web a better place.

Google Geocoding with CakePHP

Aug 31, 2010

Google has some pretty neat toys for developers, and CakePHP is a friendly, well-supported framework for building applications quickly. That said, when I went looking for a Google geocoding component, I was a little surprised to discover that nobody had created one to do the hand-shakey business between a CakePHP application and Google.

That is, I didn’t find anyone, though they may well be out there.

I did find several references to a Google Maps helper, but that didn’t help too much since I had an address and no geodata. The helpers I found looked, well… helpful once you had the correct data, mind you. Before you can do all of the maps-type stuff, you have to collect the geodata, and that’s where I came in.

I built a quick little script which takes an address and returns geodata. It isn’t a ton of code, it doesn’t handle paid accounts and it isn’t fancy. What it lacks in bells and whistles, it makes up for in pure, unadulterated Google Maps API query ability. Let’s have a look at how to implement the code.

First, download the file and unzip it. Place it in /app/controllers/components. That’s the bulk of the work. Once you have the component added to your components directory, just add it to the components array in your controller and call the getCoords() function as in the code below.

```
class FakeController extends AppController
{
    var $components = array("Googlegeocode");

    /* functions and whatever other code ... */

    function getGeoData()
    {
        $address = $this->data["ModelName"]["address"];
        $coords = NULL;

        if($address)
        {
            $coords = $this->Googlegeocode->getCoords($address);
        }

        $this->set("coords", $coords);
    } // End of function
} // End of class
```

There is more code there in general class setup and comments than there is in actually making the coordinate request. Note: do not URL-encode your address before passing it into the function. Doing so can have unexpected results, as the geocoding component will properly encode the address for you.

There are a couple of other functions in case you need them.  First is a call to retrieve the data set which is returned from Google.

```
// ... code ...
$geodataRecord =
    $this->Googlegeocode->getGeodataRecord($address);
// ... code ...
```

This will return an array built directly from the XML returned by Google.  From this you can extract all of the information they typically return, including status, address information and geodata as well as several other odds and ends.  There is actually quite a bit of data returned for each address.

Two other useful functions are the lastCoords() and lastGeodataRecord() functions.  They are called as follows:

```
// ... code ...
$coords = $this->Googlegeocode->lastCoords();
$geodataRecord = $this->Googlegeocode->lastGeodataRecord();
// ... code ...
```

Once a record is retrieved, it is stored in memory until a new record is requested.  You can refer to these as needed to recall the latest records retrieved from Google until the script finishes executing.

Though this isn't the typical user experience related post, hopefully this will help you get moving more quickly on your project involving geocoding addresses for use with the Google Maps UI API.  I hope you find my component useful and you use it to make the web a better place.

    
