I read with interest John Gruber’s latest article on usability (see also his followup clarifications), which discusses the challenges involved in creating truly usable software, why the free software community tends on the whole to fail to meet those challenges, and why some proposed remedies for this situation are misguided.
Here are a few of my own thoughts on important points to keep in mind when designing truly usable software, because make no mistake: making easy-to-use software is very, very hard.
First impressions last
Putting aside the issue of having a pretty icon, your software’s interface is the first part of it that the user will see, and indeed the only part. Its appearance is inseparably linked with the user’s perception of the character and worth of the software, and by extension it also plays a huge role in how the user feels about the company which created the software.
As a programmer, I really love elegant algorithms. I love efficiency. I really love finding nifty, tricksy little ways to solve thorny problems. Users couldn’t care less. Users want it to work, and want to be able to figure out how to make it work without having to read any kind of documentation or do much exploring or experimentation. If your software fails (from the user’s perspective, because yours doesn’t matter) in either of those areas, then your users will hate it. They will then also come to hate you by association, and will at best stop using your software. They will likely allow their subjectively bad experience to colour all future interactions (if any) with your software and with your company. Most people would agree that such a situation is undesirable for a businessperson trying to make money from sales of said software.
Why, then, is interface and interaction design so often a last-minute, no-budget afterthought in the design and implementation process? Stunningly, it’s very often the case that UIs are never actually “designed” in the formal sense at all; they’re merely implemented piecemeal during the latter stages of development. Likewise, usability is often not seriously considered before the testing phase. This is a patently ludicrous and dangerous state of affairs, given the exceptionally high profile of interface and usability issues from the user’s perspective.
UI must be designed from the start. Interface considerations infest your design choices, like it or not. Functionality can rarely be completely divorced from the controls which will trigger and modify it. Output must often lead to feedback for the user. The core modes of the application entirely determine the user’s type of experience. Like it or not, we have to realise that as much as it might offend our software engineering sensibilities, for the user the UI is the software.
Intuitive isn't simple
Almost anyone will tell you that a piece of software should be “intuitive” (or rather, intuitive to use). The problem is that many of those same people don’t actually understand the distinction between being intuitive and being simplistic. It may well be true that simplicity contributes to intuitiveness, but the converse certainly doesn’t always hold.
A piece of software can be arbitrarily complex in functionality and still be intuitive. Software is intuitive if it does what the average, reasonable and sane user expects it to do. It behaves sensibly given its platform, intended function, and appearance. Its behaviour is internally consistent, and as much as possible is externally consistent with respect to the conventions of the operating system it’s running on, and any pre-existing expectations, conventions or constraints imposed by its intended function and/or the behaviour of competing products.
This is all entirely separate and independent of the apparent simplicity of the software; a term far too often confused with (or substituted for) intuitiveness. Simplicity actually implies limited functionality and/or configurability; it is not synonymous with ease of use. Simplicity is a choice; intuitiveness is a requirement.
I’ve said quite a bit in the past about consistency in interface design (here, and here, amongst many other posts), as have several other people, so a brief reiteration is hopefully all that’s necessary. Consistency in interface design and behaviour is perhaps the single most important facet of “good” UI. This consistency must obviously be internal within the application, but must also be external, wherever at all possible, to the host operating system.
Good rules of thumb to follow include:
- Never create a custom control unless absolutely necessary,
- Hand off as much as possible of your interface creation and handling to OS calls.
- When customisation is necessary, and when possible, subclass to inherit standard behaviour and/or appearances. There are countless subtleties you'll miss in a reimplementation, and your users will notice them very quickly.
- Follow interface design guidelines (HIGs) for your platform.
- In UI design, good innovation is usually evolution rather than revolution.
Many (including myself) have conceded on multiple occasions that slavish following of platform UI design guidelines is at times impractical, and would stifle innovation if observed with absolute strictness; none of that is in question. The fact remains that in the vast majority of cases, existing OS-supplied controls, behaviours and design guidelines are exhaustively sufficient for a new piece of software. More than that, the OS UI, and that of OS vendor-supplied applications, is the appearance and behaviour which the user associates with your platform, and is thus the appearance which they will expect from your software. Be very sure before you violate such a deeply-ingrained expectation, thereby forgoing a means to very cheaply elevate the user’s initial comfort-level with your software.
Prettier without make-up
As a consequence of interface and interaction design being a second-class citizen in the lofty towers of software engineering, several increasingly common practices have emerged which attempt to ameliorate the problems caused by shoddy interfaces. These amount to little more than concealer, to cover the distasteful blemishes of poor interaction design, UIs cobbled together at the last minute, lack of experience in interface design, internal and external behavioural inconsistency, and various other problems. Interestingly, all of these do have a legitimate use, but have been co-opted as ineffectual balms for poor UI. The big three will probably come as no surprise:
- Tool Tips everywhere
Tool Tips are great for clarifying functionality, or providing a hint as to the function of more obscure controls. They are not a substitute for clear, concise labelling, logical control-grouping, or judiciously-used stateful interfaces. Your interface shouldn't be so poor as to actually require the user to read your Tool Tips.
- "Real-world" interfaces
At first glance, it seems reasonable that software whose interface mimics a physical device or workflow in the real world will be inherently intuitive. Experience would seem to indicate that this is rarely the case. Rather, any idiosyncrasies of the real device are hugely exacerbated by the introduction of the entirely unintuitive and unnatural requirement to use a keyboard and mouse to manipulate it. It might look familiar, but is it really easier to use?
- "Skinnable" applications
Applications which are "skinnable" (can have custom visual themes and perhaps control-layouts applied to them, as popularised by MP3 players, some web browsers and various other types of software) are seen as trendy and desirable. Being skinnable remains a potential selling point, and has a legitimate purpose in branding and precise market targeting. Being skinnable is not a Get Out Of Jail Free card for bad interface design. Skinning APIs usually don't provide a second chance, and it's never acceptable to offload the burden of good interface design onto your users.
Used reasonably, each of these concepts is useful and probably not inherently bad for your software’s usability. It’s just important to remember that they are enhancements, and if used should be laid on top of an existing solid foundation of sound UI and interaction design.
"An error occurred (-516)"
Ironically, one of the most important parts of interface design has nothing to do with programming or control-layout; rather, it draws on your skills in language, to craft concise, unambiguous and useful control labels, informational messages and error descriptions.
It’s my opinion that interface and interaction design can justifiably be called an art. Writing good UI labels and messages is definitely an art, and one that’s even harder to become proficient at. Selection and summarisation of salient information, using non-threatening and precise wording, leaving the user with no doubt as to which button in an alert window performs which action - all of these take experience, practice, and a keen grasp of the psychology of using software. It’s not an exaggeration to say that the user’s personal proficiency, self-image, confidence, experience and stress-levels factor into their interpretation of your interface, and particularly your displayed text. Writing text for interfaces isn’t to be undertaken lightly.
Even something as commonplace as a basic alert dialog requires the designer and writer to be aware of many principles:
- The default button should be a largely non-destructive action, and/or the most commonly used action in the current context (probably preferring the second criterion). The default button should almost always be placed in a specific location within the dialog, depending on the platform.
- Common buttons should be placed consistently, for example "Cancel" on Mac OS X is always placed to the immediate left of the default button (assuming "Cancel" itself is not the default button).
- Button labels ("OK" notwithstanding) should be actions, preferably expressed in one or two words. Labels should not be direct Yes/No answers to the question posed by the dialog, because rewording the dialog becomes difficult, particularly if negations are involved ("are you sure you don't want to...").
- In the case of errors, the dialog text should describe what went wrong, without making an accusation or chastising the user, and should either describe the correct way of performing the intended action, or how to recover from the situation.
- If an action is not undoable, this fact should be made clear to the user.
- Don't include cryptic information. Users don't want to see error numbers, debug symbols, traces, memory dumps or related information. If you're concerned about getting feedback when a particular situation arises, for example, then give the user the option to email you relevant information about the problem, and automatically prepare the email.
- Use terminology (including spelling, capitalisation, punctuation and standard acronyms) consistently with your platform. Similarly, try to use terms which are familiar within the intended market for your application, but do so carefully so as not to alienate or confuse less experienced users.
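As a rough illustration of the button conventions above, here’s a minimal sketch in plain Python. This is a toy model, not a real GUI toolkit; the function names and the Mac-style rules they encode are my own simplification:

```python
# Toy model of alert-dialog button layout and labelling, following the
# guidelines above. Illustration only; not a real windowing API.

def layout_buttons(default_label, include_cancel=True, other=None):
    """Return button labels in left-to-right, Mac-style order:
    the default button at the far right, "Cancel" immediately to
    its left, and any other buttons (e.g. "Don't Save") far left."""
    buttons = list(other or [])
    if include_cancel:
        buttons.append("Cancel")
    buttons.append(default_label)  # default button: rightmost position
    return buttons

def is_good_label(label):
    """Button labels should be short actions (one or two words),
    never bare Yes/No answers to the dialog's question."""
    return label not in ("Yes", "No") and 1 <= len(label.split()) <= 2
```

A “close without saving” alert, for example, would come out as `["Don't Save", "Cancel", "Save"]`, with the safe “Save” action as the rightmost default.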
This is by no means an exhaustive list even of the primary points to consider when creating a simple alert dialog. Issues of person, tense, acceptable level of colloquialism, and many other considerations all must be taken into account when writing truly effective interface text. Be guided by your platform’s HIGs, and even more so by the UI text of the operating system itself. Software companies, and particularly commercial OS vendors, pour large amounts of money into studies on this very topic, so take advantage of that research and experience.
The UI is your software’s face; its displayed text is its voice, attitude, personality, and perceived professionalism. Most people may be unaware of the subtleties of interface design and what exactly makes a UI easy to use or not, but we’re all experts at analysing language and interpreting intent, capability and personality. If your UI text is substandard, your users will notice.
Don't prefer preferences
I have a theory that the number of preferences (UI-configurable options) in a piece of software is a function of the number of contributors to the software, multiplied by the inverse of how well-designed and specified it is, multiplied further by the inverse of its nominal ease-of-use, and finally multiplied by an arbitrary large scaling factor if the software is open source. I realise that’s a rather controversial point of view, but I think it holds in a lot of cases.
Glib theories aside, we’ve all encountered software which has a profusion of arcane preferences, the bulk of which we either have no interest in altering, or which we don’t understand in the first place. Many of us have also encountered one of those slightly perverse people who, upon downloading a new application, immediately go into the Preferences window to see just which options are on offer (I’m such a person). Outwith certain market segments and niches, users often need to be dragged kicking and screaming into Preferences windows, and balk at rows of tabs and vast expanses of nested checkboxes and radio-button groups. They fear Preferences windows with an almost religious awe.
The thinking of the user is quite simple and understandable: the software in its default state (with factory settings intact) is in its “intended” mode of operation. It works, according to the superior wisdom of the developer, and is supported for use in this condition. Preferences are by their very nature at least slightly eccentric and esoteric, because the default settings define what’s normal for the software. Whether they admit it or not, a lot of users secretly fear that, if they tweak the options enough, they will break the software, and it will be entirely their own fault. The possibility that the developer has ensured that every permutation of the available preferences still leaves the software working never enters the minds of many users. I’ve often wondered if people actually hallucinate a “No User-Serviceable Parts Inside” warning label at the top of every Preferences window.
This raises a couple of considerations. Firstly, don’t depend on preferences for functionality, because many users will never see them. Secondly, the choice of initial defaults is critically important, because for many people those settings are the only ones which will ever be used. Even for those who do tweak your preferences, the factory settings are those which will initially be used to evaluate your software and assess its worth. A great deal of thought should go into the choice of factory settings, and you must consider the user first and engineering requirements last.
As a corollary to the above, it’s generally sound advice to avoid exposing preferences in the first place, wherever possible. Note my use of the word “exposing”; engineering and testing requirements, and plans for future expansion and enhancement, will undoubtedly necessitate the creation of configurable options - but this doesn’t automatically mean that these options must be exposed, i.e. made available to the user in the interface. With a well-considered choice of default behaviour, a tightly-specified piece of software and a sound understanding of the needs of your target market, it’s usually possible to drastically trim the number of exposed preferences in an application. Certainly, if you find yourself with preferences which toggle the availability of yet more preferences, you know you’re on a slippery slope.
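To make the exposure distinction concrete, here’s a hypothetical sketch (all option names invented for illustration) in which engineering-level options exist internally, but only a deliberately chosen subset ever reaches the Preferences window:

```python
# Hypothetical sketch: internal options exist for engineering and
# testing purposes, but only a small, deliberate subset is exposed
# to the user in the interface. All names here are invented.

DEFAULTS = {
    "autosave_interval_secs": 120,  # engineering-tunable, hidden
    "render_cache_size_mb": 64,     # engineering-tunable, hidden
    "spell_check_enabled": True,    # user-visible preference
}

EXPOSED = {"spell_check_enabled"}   # the only option the UI shows

class Preferences:
    def __init__(self):
        # Factory settings: for many users, the only state the
        # software will ever run in, so choose them with care.
        self._values = dict(DEFAULTS)

    def exposed(self):
        """Options the Preferences window should actually display."""
        return {k: self._values[k] for k in EXPOSED}

    def set(self, key, value):
        self._values[key] = value
```

The point of the sketch is simply that “configurable” and “exposed” are separate decisions: the hidden options remain available to engineering without ever widening the Preferences window.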
Stability isn't stasis
Much is made of the concept of stability in interface design; the idea that the user should be able to rely on the placement of controls and on the software’s behaviour based on previous experience. This is unquestionably sound, and leads to several of the principles previously discussed, such as the consistent placement of the default button in an alert dialog, the use of OS-supplied controls with predictable, standard behaviours, and so forth. However, there’s a prevailing misunderstanding about stability which can limit the usability of software: for some reason, certain designers seem to equate stability with being static.
The issue is that, in real life, stability doesn’t always equate to absolute, strict consistency. For example, it’s not necessarily true that every time you want to find your car keys, you look in the same place; rather, you look where you last put them. This is intuitive stability, or persistence, in the real world, and it can (and should) also be applied to software design. It may seem to make sense to have an identical experience every time the user launches your software, but it will very likely also lead to frustration. Beyond a certain novice level, users desire stability in the sense of persistence, and only then absolute consistency of experience. Leave tool palettes where they put them, rather than in the same initial default position each time. Leave values in the same units which were last used. Preserve previous settings for any utility functions (like image filters’ settings, or Find & Replace options). Remember the last string the user searched for, and put it back into the Find field for them automatically. The concept is simple, and is absolutely expected by the user.
I don’t wear spectacles very often when out and about, but I frequently wear them at home in the evening. I’ve temporarily lost them a few times, as you do, and on those occasions it would perhaps have been useful if they always magically returned to their case on my desk whenever I took them off. In the vast majority of cases, however, I want them to be where I left them, and it’s immediately intuitive to look for them there. The concept of object persistence is grasped by infants in their earliest years, and remains a fundamental tenet of our daily existence. Don’t allow over-zealous observance of absolute consistency of experience to violate this basic expectation of your users.
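One hedged sketch of this kind of persistence (the file path, key names and class name are all hypothetical) is simply to save last-used UI values whenever they change, and restore them at the next launch:

```python
import json

# Sketch of simple UI-state persistence: remember last-used values
# (search string, palette position, etc.) across launches. The
# storage format and key names here are illustrative assumptions.

class UIState:
    def __init__(self, path):
        self.path = path
        try:
            with open(path) as f:
                self.state = json.load(f)
        except (OSError, ValueError):
            self.state = {}  # first launch: fall back to defaults

    def remember(self, key, value):
        """Record a last-used value and persist it immediately."""
        self.state[key] = value
        with open(self.path, "w") as f:
            json.dump(self.state, f)

    def recall(self, key, default=None):
        """Restore a previously-used value, or a sensible default."""
        return self.state.get(key, default)
```

At launch, the application would `recall("last_search")` to repopulate the Find field, `recall("palette_position")` to put tool palettes back where the user left them, and so on; the spectacles stay where you put them.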
You can't do it alone
So, how do you rate yourself so far on all of the previously discussed talents required for truly effective UI and interaction design? Feeling confident that you can do all that? Good; now all you need to do before you get started is to find another person who can do it too.
You see, one rather inconvenient fact of how our minds work in the business of creativity is that we become so close to the task in hand, and so completely invested in our chosen solutions to any problems which arise, that we become utterly incapable of detecting any flaws which may be present. I don’t mean that in the sense of bruised egos; we actually can’t see the mistakes in the first place. This is why you can’t effectively check the usability of your own software, and why you certainly can’t usefully proof-read your own interface text. You need someone else (at least one, though I don’t believe you ever need more than a few others), and they need to be at least roughly your equal in knowledge of the principles of quality interface design. It’s no good having your labels checked by your little brother, or your usability assessed by the paper boy.
At this point, you may be mentally crafting a rebuttal of the form “but actually you need real, average users to assess usability, not experts”. That’s true enough, but you need to vet your design before getting as far as such testing. It’s staggering to me even now how blind I can become to obvious issues in my interfaces, simply because I created them that way and have been staring at them every day for a couple of months. Unbelievable mistakes and poor design choices sit there smugly, hiding in the light, entirely invisible to their own creator just inches away. It’s essential to have your interface, and particularly your displayed text, checked by someone with appropriate skills who is very familiar with the function and purpose of the software. Ideally, do this iteratively throughout development, and begin early. Only after this peer-review is complete and fully satisfied are you ready to move on to actual user testing.
Much has been said elsewhere about user testing, which I won’t repeat here other than to include an obligatory link to Nielsen’s Test with 5 Users article, which I’ve found to be very true. I will offer a couple of points from my own experience:
- Finding volunteer software testers is extremely easy. Join a developer mailing list for your chosen platform or technology, post a request with a description of the software, and the volunteers will flock to you.
- Many volunteer software testers are as practically useless as they are well-intentioned.
- Paradoxically, and for reasons I've not yet managed to pin down, developers actually make surprisingly good testers - as long as it's not their own product they're testing. Their bug reports, at least, are often of an extremely high quality, no doubt due to understanding what's required for a report to be truly useful.
- Interestingly, most development communities always have enough novices to give a reasonable set of more or less typical-user-level testers. Such novices are also the most keen to volunteer to test new software.
- Whilst convenient, none of this is a substitute for testing with "genuine" (non-developer) users who are confirmed to be within your target market. Almost every conceivable interest has a corresponding mailing list or, even more commonly, web forum these days. Sign up, post a request, and with any luck you'll elicit some testers. Make sure you have a build of your software available for immediate download when you make the request.
- When you find a truly useful and articulate tester, hang onto them. Cultivate the relationship, because these people are vanishingly rare and incredibly valuable. Give them a free copy of the finished software. Credit them in the "About" window. Thank them in the Read Me file or documentation. Give them free copies of your other products too. Do whatever it takes to keep them interested.
There’s a disturbing trend towards viewing version 1.x products as large-scale beta tests, with correspondingly low-quality software being released far too early. Don’t allow yourself to fall into the same trap. Take advantage of the opportunity to distinguish your product from the first release, by implementing a policy of continuous testing throughout the development cycle.
To err is human
For anything but the very simplest software (and perhaps even then), it’s not possible to absolutely ensure that the user won’t make a mistake, and chances are that the mistake will potentially lead to the loss of time, data and/or patience (usually all three). It’s your unarguable responsibility as a developer to both minimise such occurrences by virtue of your design, and also to allow the user to recover when the inevitable does happen.
The obvious and most familiar software concept here is of course Undo; the ability to take back an action, and revert the application and its data to its previous state, before the action was performed. Single undo has been around for a long time, and worryingly still persists as the only available option in some cases. For modern development projects, where at all possible, multiple undo is expected and required.
Undo brings with it a number of complexities, and that’s without even considering the engineering problems raised. Undo is obviously inherently very stateful, and the undo chain or stack has intimate knowledge of the actions it will perform. The user, however, likely doesn’t recall much detail past his most recent action, and so the undo system (as typically represented by menu-commands in the Edit menu of an application) should inform the user precisely what action will be undone (or indeed redone, because Undo should always be accompanied by a corresponding available Redo command). The Undo menu command’s name should change dynamically to reflect the current action on the undo stack, for example “Undo Change Background Color”, “Undo Find & Replace”, and so on.
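A minimal sketch of such an undo stack with dynamically-named menu commands might look like the following. The class and method names are my own invention for illustration; real frameworks (Cocoa’s NSUndoManager, for instance) provide this machinery, including action names, for you:

```python
# Illustrative sketch of a multiple-undo stack whose Edit-menu titles
# update dynamically to name the action about to be undone or redone.

class UndoManager:
    def __init__(self):
        self.undo_stack = []  # action names, most recent last
        self.redo_stack = []

    def record(self, action_name):
        """Register a newly-performed user action."""
        self.undo_stack.append(action_name)
        self.redo_stack.clear()  # a new action invalidates redo

    def undo(self):
        name = self.undo_stack.pop()
        self.redo_stack.append(name)
        return name

    def redo(self):
        name = self.redo_stack.pop()
        self.undo_stack.append(name)
        return name

    def undo_menu_title(self):
        if not self.undo_stack:
            return "Undo"  # nothing to undo: item stays disabled
        return f"Undo {self.undo_stack[-1]}"

    def redo_menu_title(self):
        if not self.redo_stack:
            return "Redo"
        return f"Redo {self.redo_stack[-1]}"
```

After recording a “Change Background Color” action, the Edit menu would read “Undo Change Background Color”; one undo later, the corresponding “Redo Change Background Color” becomes available, exactly as the principle above requires.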
Another interface issue is with the keyboard shortcuts for triggering undo and redo. Some applications choose to have the default cmd-Z or ctrl-Z shortcut trigger a toggling function, which undoes or redoes the last action, as appropriate. Other applications use the default shortcut to always undo back through the undo history, and have a different shortcut to trigger successive Redo operations. It’s important to consider the purpose of your application, and try to ascertain whether it will be common for the user to want to step through multiple undo or redo operations in quick succession.
A (hopefully obvious) last point regarding undo: in the extreme and exceptional cases where performing an action will clear the undo history (i.e. a subsequent Undo will not be available), the user should be explicitly warned beforehand. Note that saving a document should never automatically clear the Undo stack, if at all possible. Ideally, of course, the software’s design and architecture will be such that this situation never arises in the first place.
Hopefully it’s clear that interface and interaction design is a fine art, and is incredibly difficult to do well. It requires a huge amount of knowledge of your software’s platform, purpose, target market, and indeed of human psychology both in general and in the specific field of human-computer interaction. Certainly, it’s not for the faint of heart, and nor is it to be rushed or attempted without due experience and consideration.
It’s now a very common thing for anyone with an online graphical arts portfolio to include interface design as one of their primary skills, typically including some screenshots or mockups of Flash or Shockwave UIs they’ve designed. Creating attractive custom interfaces for web applications or multimedia use is a very different task from creating a solid interface for a desktop application which users will spend perhaps hundreds of hours using. It’s important to have the correct skills for creating what will be the face and voice of your software and your company. Besides raw correct functionality, the interface is the most important part of your software from the user’s perspective, and so it should also be the most important part from your perspective. Your design, development and testing efforts should reflect this.
It’s a sad fact that people can and will get used to just about anything, no matter how awful. So don’t make them.