lunes, 31 de diciembre de 2012

Grounding in the Basics of User-Centered Design

Grounding in the basics of human information processing, cognitive psychology, and user-centered design (essentially the domain of the human factors specialist) helps immensely because it enables the test moderator to sense, even before the test begins, which interactions, operations, messages, or instructions are liable to cause problems. Test moderators with this background know which problems can be generalized to the population at large and which are more trivial. This helps them ascertain when to probe further and which issues need to be explored thoroughly during the debriefing session.
Additionally, this background can also prevent the need to test situations that are known to cause problems for users, such as the inappropriate use of color or the incorrect placing of a note in a manual. Lastly, a strong background in usability engineering helps the test moderator to focus on fixing the important issues after a test is complete.

domingo, 30 de diciembre de 2012

Characteristics of a Good Test Moderator

Regardless of who conducts the test, whether yourself or internal or external staff, and regardless of that person's background, there are several key characteristics that the most effective test moderators share. These key characteristics are listed and described in the paragraphs that follow. If you are personally considering taking on the role of test moderator in your organization, use these key characteristics as a checklist of the skills you need to acquire. If you are considering using either an internal person or hiring
an external person to perform this role, use these key characteristics to help evaluate the person's capabilities.

sábado, 29 de diciembre de 2012

External Consultant

Another option is to hire an external consultant. Many human factors, industrial design, market research, and usability engineering firms now offer usability testing as one of their services, including the use of their test
laboratories. You may simply want to outsource the usability test to such a firm, or use such a firm to "kick off" a testing program in your organization. 
Using an external consulting company guarantees the objectivity that testing requires. Even some organizations that employ internal human factors specialists to work on the design and development of products still outsource the testing work for the greater sense of impartiality it provides.
If you know your organization is committed to eventually forming a long-term testing program on site, then seek out a consulting company that will work with you to transfer the knowledge of testing into your organization.
Even if you are unsure about the long-term prospects for testing in your company, it still might be easier to have outside help with an initial test. Just make sure that if you conduct the test off-site, its location is physically close enough to allow development team members to attend the test sessions. Do not simply farm out the test to a remote location. (Although, in a pinch, team members could observe tests from their remote locations via Morae, Camtasia, or other electronic monitoring tools.) Viewing tests in person is much more effective than watching or listening to a recording, especially for those who are skeptical about the value of testing.

viernes, 28 de diciembre de 2012

Rotating Team Members

Let's suppose that no one from the disciplines listed previously is available to help on your project, and you are still determined not to test your own materials. Another alternative is to draw upon colleagues of similar disciplines who are not working on the same product. An example of this approach is for technical communicators to test each other's manuals or for software engineers to test each other's program modules.
In such a scenario, the person whose product is being tested could help prepare many of the test materials and make the pretest arrangements, then turn over the actual moderating of the test to a colleague. One of the advantages of this approach is that two (or more) heads are better than one, and it is always beneficial to have someone other than yourself help prepare the test. 
The person acting as the test moderator would need time to become familiar with the specific product being tested and to prepare to test it, in addition to the time required to actually moderate the test.
Should you decide to implement this approach, you must plan ahead in order to build the test into your mutual schedules. You cannot expect your colleague to drop everything he or she is working on to help you. Of course, you would reciprocate and serve as test moderator for your colleague's product.

jueves, 27 de diciembre de 2012

Technical Communicator

Technical communicators, including technical writers and training specialists, often make wonderful test moderators. Many technical communicators already serve as user advocates on projects, and their profession requires them to think as a user in order to design, write, and present effective support materials.

miércoles, 26 de diciembre de 2012

Marketing Specialist

A marketing specialist is typically customer-oriented, user-oriented, or both, has good interpersonal and communication skills, and would be very interested in improving the quality of products. This type of specialist may already be involved with your product, but usually not to the detailed level that would tend to disqualify him or her from conducting the testing.

martes, 25 de diciembre de 2012

Human Factors Specialist

A human factors specialist is the most likely candidate to conduct a usability test. This type of person typically has an advanced degree in psychology, industrial engineering, or a similar discipline, and is familiar with experimental methodology and test rigor. Just as important, the human factors specialist is grounded in the basics of information processing, cognitive psychology, and other disciplines related to the development of usable products, systems, and support materials. This grounding is crucial in differentiating the important from the superficial usability factors in a product and ultimately in designing and conducting the test.
With the current focus on usability engineering and testing, it is highly probable that human factors specialists within your organization are already involved with testing in one form or another.

lunes, 24 de diciembre de 2012

Who Should Moderate?

One of the basic tenets of usability testing, and of this book, is that it is almost impossible to remain objective when conducting a usability test of your own product. There is simply too strong a tendency to lead participants in a direction that you want the results to go, rather than acting as a neutral enabler of the process. This is even true for experienced test moderators who conduct the test from an external control room. In fact, asking someone to test his or her own product is like asking parents to objectively evaluate the abilities of their child. It is an impossible endeavor.
Having said that, if there is only you available to test your product, do so. In almost every case, it is still better to test than not to test, even if you must do the testing yourself. However, for the long term, you would want to be out of the self-testing business as soon as possible.
Imagine that you want to conduct a test on a product for which you have primary responsibility, and if possible you would like someone less involved with the product to conduct the test. You can help develop the test materials, make arrangements, and select participants, but you need a more objective person to handle the actual test moderating. Suppose also that your organization currently has no in-house testing staff and does not plan to introduce one shortly. To whom should you look for help?
The following sources represent a number of areas from which you can find candidates who possess the requisite skills to conduct a test, or who could head up the beginnings of an internal testing group. They may or may not already be working on your product.

domingo, 23 de diciembre de 2012

Skills for Test Moderators

The role of the test moderator or test administrator is the most critical of all the test team members, presuming that you even have the luxury of a test team. In fact, the moderator is the one team member that you absolutely must have in order to conduct the test. The moderator is ultimately responsible for all preparations, including test materials, participant arrangements, and coordination of the efforts of other members of the test team.
During the test the moderator is responsible for all aspects of administration, including greeting the participant, collecting data, assisting and probing, and debriefing the participant. After the test, he or she needs to collate the day's data collection, meet with and debrief other team members, and ensure that the testing is tracking with the test objectives. If the usability test were an athletic contest, the moderator would be the captain of the team. As such, he or she has the potential to make or break the test. An ineffective moderator can seriously negate test results and even waste much of the preliminary preparation work.
This chapter discusses several alternatives for acquiring test moderators from inside and outside your organization, as well as the desired characteristics of an effective test moderator. Chapter 9 includes guidelines for moderating test sessions, including information about when and how to intervene, and the advantages and disadvantages of using a "think-aloud" protocol.

sábado, 22 de diciembre de 2012

Brief Summary of Test Outcome

Every major task passed the 70 percent successful completion criterion with the exception of two. The team felt that the problems associated with those tasks could be corrected prior to release, and wanted to schedule a very quick test to confirm. Twenty recommendations from the test were identified for implementation prior to release, and at least fifteen recommendations were diverted to future releases.
Providing a "tour" of advanced features prior to the test proved to be a stroke of genius. Participants loved it, and some even insisted on taking it back to their current jobs. One user suggested the company market it or a longer virtual seminar as a separate product for customers, and that is already in the works.
The revamped organization of the user guide was much more in tune with users' expectations than the previous set, although the index proved difficult to use. More task-oriented items must be added to the index to improve accessibility.
As you can tell from this condensed series of tests, the product evolved over time and reflected each test's findings. We strongly advocate such an iterative approach, but again, do not be discouraged if you can manage only one test to begin. Now let's talk about what it takes to be a good test moderator.

viernes, 21 de diciembre de 2012

Test 3: Verification Test - II

Test Objectives
- Verify that 70 percent of participants can meet established successful completion criteria for each major task scenario. (The 70 percent benchmark is something that Jeff has personally evolved toward over time, and that Dana has used effectively. It provides a reasonably challenging test while still leaving the design team some work to do before product release to move that number toward a more acceptable and traditional 95 percent benchmark. A benchmark of 100 percent is probably not realistic except for tasks involving danger or damage to the system or possible loss of life, and should never be used lightly.
In the 1960s NASA found that achieving 100 percent performance cost as much as 50 times the cost of achieving 95 percent performance. It is likely that such costs have gone down over 40 years, but the point is that you should only use the higher benchmark if you are willing to pay the piper.)
- Identify any tasks and areas of the product that risk dire consequences (e.g., are unusable, contain destructive bugs) if the product is released as is.
- Identify all usability deficiencies and the sources of those problems. Determine which deficiencies must be repaired before release and which, if there is not time within the schedule, can be implemented in the next release.
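Checking tasks against a 70 percent successful-completion benchmark is simple arithmetic. The sketch below is only an illustration; the task names, participant counts, and pass/fail outcomes are invented, not data from the tests described here.

```python
# Hypothetical per-task results: True means the participant met the
# established completion criteria for that task.
results = {
    "set up preferences": [True, True, True, False, True, True, True, True],
    "schedule a meeting": [True, False, True, True, True, True, False, True],
    "share a task list":  [False, True, False, True, False, True, True, False],
}

BENCHMARK = 0.70  # the 70 percent successful-completion benchmark

for task, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)          # proportion who succeeded
    status = "pass" if rate >= BENCHMARK else "FAIL"
    print(f"{task}: {rate:.0%} ({status})")
```

With these made-up numbers, the third task falls below the benchmark and would be flagged for rework before release.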

jueves, 20 de diciembre de 2012

Test 3: Verification Test - I

The Situation
Some weeks have passed. For this last test, a fully functional product with comprehensive help topics has been prepared. All sections of the documentation have been through one draft, with half of the sections undergoing a second draft. The documentation has a rough index for the test. A small "tour" for users about quarterly and semi-annual tasks was developed. For the major tasks of the product, specified measurable time and accuracy criteria have been developed. For example, one criterion reads:
Using the setup guide, a user will be able to correctly implement Video and Network Preferences within 10 minutes, with no more than two attempts required.
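A criterion like this (10 minutes or less, no more than two attempts) can be encoded directly so that each participant's result is scored the same way. This is a minimal sketch; the participant names and their times and attempt counts are hypothetical.

```python
# Hypothetical participant data against the example criterion above.
participants = [
    {"name": "P1", "minutes": 7.5,  "attempts": 1},
    {"name": "P2", "minutes": 12.0, "attempts": 1},   # over the time limit
    {"name": "P3", "minutes": 9.0,  "attempts": 3},   # too many attempts
]

def meets_criterion(p, max_minutes=10, max_attempts=2):
    """True only if the participant satisfied both limits."""
    return p["minutes"] <= max_minutes and p["attempts"] <= max_attempts

passed = [p["name"] for p in participants if meets_criterion(p)]
print(passed)
```

Scoring both limits together, rather than time alone, keeps the measurement faithful to the criterion as written.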

miércoles, 19 de diciembre de 2012

Test 2: Assessment Test - III

Brief Summary of Test Outcome
Many difficulties in operations were identified, but the users' workflow matched that employed by the design team for the product's interface operations. Essentially, the high-level interface "works," and the lower-level details remain to be implemented and refined. The help information was accurate and helpful, but users rarely invoked it unless prompted. There was a strong preference for trial and error with this particular user audience. When users were prompted to try the help, it was found that the organization of the help topics needs to be extensively revamped and made more task-oriented. Even more theoretical, contextual information needs to be included for the most advanced users. This last issue turned out to be very controversial because designers felt it was not their responsibility to force particular operational approaches on corporate working groups. It is possible that an interactive primer for users may be required for infrequent but important tasks.

martes, 18 de diciembre de 2012

Test 2: Assessment Test - II

Main Test Objectives
■ Confirm whether the findings of the original test adequately match interface operations with the user's workflow.
■ Expose all major usability deficiencies and their causes for the most common tasks.
■ Determine if there is a seamless connection of help topics, embedded assistance, and messaging with the functionality and user interface. Does the software give support at the right moments? Is the help center organized in a way that answers participants' questions?
■ Is the documentation being utilized as designed? Is it accessible? Are graphics understood and at the appropriate level of detail? Are certain sections not read at all? Are additional sections required? Is all terminology clear? Are there areas that require more explanation? Where do participants still have questions? What are their questions?

lunes, 17 de diciembre de 2012

Test 2: Assessment Test - I

The Situation
Time has passed. A single prototype has now been expanded to approximately 60 to 80 percent of its eventual functionality. There are comprehensive help topics for working functions in a separate section of the web site. A first draft of simplified documentation, on 8 1/2" by 11" bond paper, is available for the test, with a table of contents, but no index.

domingo, 16 de diciembre de 2012

Brief Summary of Outcome

The test was conducted. As is typical of comparison tests at this point, there was no "winner" per se. Rather, the result was an interface with the best attributes of both prototypes. The navigation schema employing the navigation on the left was most efficient and effective, but some of the options available did not seem to belong with the others and so will remain in a navigation bar across the top of the main work area. Apparently, the options remaining in the top navigation are used less frequently.
There were many advanced features for use in a corporate setting that users needed additional information about. Because this personal information manager will be used throughout a large company, some functionality was added to support work group collaboration, which added complexity to the product. To remedy the complexity issue, the first line of defense is to develop a documentation set that includes, at minimum, a guide for setting up preferences, some self-paced training on interface operations, and a procedural user guide for more advanced, less frequent tasks.

sábado, 15 de diciembre de 2012

Main Research Questions

■ Which of the two interface styles/concepts is the most effective? In which is the user better able to remain oriented within the program?
■ What are the best and worst features of each approach?
■ What are the main stumbling blocks for the user?
■ After some period of initial learning, which style has the greatest potential for the power user?
■ For which tasks will users need help, further instructions, or supporting documentation?
■ What types of written information will be required?
  ■ Prerequisite
  ■ Theoretical or conceptual
  ■ Procedural
  ■ Examples
  ■ Training

viernes, 14 de diciembre de 2012

Test 1: Exploratory/Comparison Test - II


during the test, a technical expert will be available to reveal limited but crucial information needed to use the product. (See the gradual disclosure technique in Chapter 13 for an explanation of how to use a technical expert in this way.) Primitive help topics, available on paper only, will be provided to the participant on demand; that is, when the participant clicks the appropriate prompt or asks a question, the test moderator will provide what would normally be embedded assistance, instruction prompts, or messages on paper as they would appear on the screen.

jueves, 13 de diciembre de 2012

Test 1: Exploratory/Comparison Test - I

The Situation
Two early prototypes of the interface have been developed (see Figures 3-5 and 3-6). The interfaces use the same underlying architecture, programming languages, and functionality, although the layout of their navigation is considerably different from each other. The prototypes have very limited working functionality (e.g., about 30 to 40 percent of the proposed functions work). There is no documentation, but

miércoles, 12 de diciembre de 2012

Iterative Testing: Test Types through the Lifecycle

Now having reviewed the basics of each type of test, let us explore how a series of tests might in fact work. Let's suppose that your company is developing a web-based software application and its associated documentation. The software is a personal information manager, consisting of calendar, contact, and task management functionality. You intend to conduct three usability tests at three different times in the product development lifecycle. Following is a hypothetical series of tests on this product throughout the lifecycle, complete with hypothetical outcomes at the end of each test. Understand that the details have been greatly simplified to provide an overview of iterative design in action.

martes, 11 de diciembre de 2012

Overview of the Methodology

The basic methodology involves the side-by-side comparison of two or more clearly different designs. Performance data and preference data are collected for each alternative, and the results are compared. The comparison test can be conducted informally as an exploratory test, or it can be conducted as a tightly controlled classical experiment, with one group of participants serving as a control group and the other as the experimental group. The form used is dependent on your goals in testing. If conducted as a true experiment designed to acquire statistically valid results, the alternatives should vary along a single dimension (for example, keeping the content and functionality constant, but altering the visual design or the navigation scheme) and the expected results of the test should be formulated as a hypothesis.
If conducted less formally as a more observational, qualitative study, the alternatives may vary on many dimensions. One needs to ascertain why one alternative is favored over another, and which aspects of each design are favorable and unfavorable. Inevitably, when comparing one or more alternatives in this fashion, one discovers that there is no "winning" design per se. Rather, the best design turns out to be a combination of the alternatives, with the best aspects of each design used to form a hybrid design.
For exploratory comparison tests, experience has shown that the best results and the most creative solutions are obtained by including wildly differing alternatives, rather than very similar alternatives. This seems to work because:
- The design team is forced to stretch its conceptions of what will work rather than just continuing along in a predictable pattern. With the necessity for developing very different alternatives, the design team is forced to move away from predictable ways of thinking about the problem. Typically, this involves revisiting fundamental premises about an interface or documentation format that have been around for years. The result is often a design that redefines and improves the product in fundamental ways.
- During the test, the participant is forced to really consider and contemplate why one design is better and which aspects make it so. It is easier to compare alternatives that are very similar, but harder to compare very different ones. Why? Similar alternatives share the same framework and conceptual model, with only the lower-level operations working differently. Very different alternatives, however, are often based on different conceptual models of how each works and may challenge the user, especially one experienced with the product, to take stock of how the tasks are actually performed.

lunes, 10 de diciembre de 2012

Comparison Test - II

Objective
The comparison test is the fourth type of test and can be used in conjunction with any of the other three tests. It is used to compare two or more designs, such as two different interface styles, or the current design of a manual with a proposed new design, or to compare your product with a competitor's. The comparison test is typically used to establish which design is easier to use or learn, or to better understand the advantages and disadvantages of different designs.

domingo, 9 de diciembre de 2012

Comparison Test - I

When
The comparison test is not associated with any specific point in the product development lifecycle. In the early stages, it can be used to compare several radically different interface styles via an exploratory test, to see which has the greatest potential with the proposed target population. Toward the middle of the lifecycle, a comparison test can be used to measure the effectiveness of a single element, such as whether pictorial buttons or textual buttons are preferred by users. Toward the end of the lifecycle, a comparison test can be
used to see how the released product stacks up against a competitor's product.

sábado, 8 de diciembre de 2012

Overview of the Methodology

The validation test is conducted in similar fashion to the assessment test with three major exceptions.
- Prior to the test, benchmarks or standards for the tasks of the test are either developed or identified. These can be specific error or time measures, or as simple as eliminating the problems identified in earlier exploratory tests.
- Participants are given tasks to perform with either very little or no interaction with a test moderator. (And they are probably not asked to "think aloud.")
- The collection of quantitative data is the central focus, although reasons for substandard performance are identified.
Because you are measuring user performance against a standard, you also need to determine beforehand how adherence to the standard will be measured, and what actions will be taken if the product does not meet its standards. For example, if the standard for a task addresses "time to complete," must 70 percent of participants meet the standard, or will you simply compare the standard to the average score of all participants? Under what conditions will the product's schedule be postponed? Will there be time to retest those tasks that did not meet the standard? These are all questions that should be addressed and resolved prior to the test.
Compared to an assessment test, a validation test requires more emphasis on experimental rigor and consistency, because you are making important quantitative judgments about the product. Make sure that members of the design team have input and buy-in into developing the standards used during the test. That way they will not feel as if the standards were overly difficult or unattainable.
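The two interpretations of adherence mentioned above (a proportion of participants meeting the standard versus the average score against the standard) can give different verdicts on the same data, which is exactly why the question must be settled before the test. A small sketch with invented completion times illustrates the difference:

```python
# Hypothetical completion times in minutes, measured against a
# 10-minute "time to complete" standard. These numbers are invented.
times = [6.0, 8.5, 9.0, 11.0, 14.0, 7.5, 9.5, 12.5, 8.0, 9.0]
STANDARD = 10.0

# Interpretation 1: what proportion of participants met the standard?
proportion = sum(t <= STANDARD for t in times) / len(times)

# Interpretation 2: does the average time beat the standard?
average = sum(times) / len(times)

print(f"{proportion:.0%} of participants met the standard")
print(f"average time: {average:.1f} minutes (standard: {STANDARD})")
```

Here the average comfortably beats the standard even though three of ten participants missed it, so a 70-percent-of-participants rule and an average-score rule could lead to different release decisions.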

viernes, 7 de diciembre de 2012

Validation or Verification Test - II

Objective
The objective of the validation test is to evaluate how the product compares to some predetermined usability standard or benchmark, whether a project-related performance standard, an internal company or historical standard, or even a competitor's standard of performance. The intent is to establish that the product meets such a standard prior to release, and if it does not, to establish the reason(s) why. The standards usually originate from the usability objectives developed early in the project. These in turn come from previous usability tests, marketing surveys, interviews with users, or simply educated guesses by the development team.
Usability objectives are typically stated in terms of performance criteria, such as efficiency and effectiveness, or how well and how fast the user can perform various tasks and operations. Or the objectives can be stated in terms of preference criteria, such as achieving a particular ranking or rating from users. A verification test has a slightly different flavor. The objective here is to ensure that usability issues identified in earlier tests have been addressed and corrected appropriately.
It only makes sense then that the validation test itself can be used to initiate standards within the company for future products. Verification can accomplish the same thing. For example, if one establishes that a setup procedure for a software package works well and can be conducted within 5 minutes with no more than one error, it is important that future releases of the product perform to that standard or better. Products can then be designed with this benchmark as a target, so that usability does not degrade as more functions are added to future releases.

jueves, 6 de diciembre de 2012

Validation or Verification Test - I

When
The validation test, also referred to as the verification test, is usually conducted late in the development cycle and, as the name suggests, is intended to measure the usability of a product against established benchmarks or, in the case of a verification test, to confirm that problems discovered earlier have been remedied and that new ones have not been introduced. Unlike the first two tests, which take place in the middle of a very active and ongoing design cycle, the validation test typically takes place much closer to the release of the product.

miércoles, 5 de diciembre de 2012

Assessment or Summative Test - III

Overview of the Methodology
Often referred to as an information-gathering or evidence-gathering test, the methodology for an assessment test is a cross between the informal exploration of the exploratory test and the more tightly controlled measurement of the validation test. Unlike the exploratory test:
■ The user will always perform tasks rather than simply walking through and commenting upon screens, pages, and so on.
■ The test moderator will lessen his or her interaction with the participant because there is less emphasis on thought processes and more on actual behaviors.
" Quantitative measures will be collected.

martes, 4 de diciembre de 2012

Assessment or Summative Test - II

Objective
The purpose of the assessment test is to expand the findings of the exploratory test by evaluating the usability of lower-level operations and aspects of the product. If the intent of the exploratory test is to work on the skeleton of the product, the assessment test begins to work on the meat and the flesh.
Assuming that the basic conceptual model of the product is sound, this test seeks to examine and evaluate how effectively the concept has been implemented. Rather than just exploring the intuitiveness of a product, you are interested in seeing how well a user can actually perform full-blown realistic tasks and in identifying specific usability deficiencies in the product.

lunes, 3 de diciembre de 2012

Assessment or Summative Test - I

When
The assessment test is probably the most typical type of usability test conducted. Of all the tests, it is probably the simplest and most straightforward for the novice usability professional to design and conduct. Assessment tests are conducted either early or midway into the product development cycle, usually after the fundamental or high-level design or organization of the product has been established.

domingo, 2 de diciembre de 2012

Example of Exploratory Study - II

The purpose of our session today is to review the design for a new web site and get your opinions about it. As we review this design together, I will be asking you a series of questions about what you see and how you expect things to work. Please feel free to ask any questions and offer any observations during the session. There are no wrong answers or stupid questions. This product is in a very preliminary stage; do not be concerned if it acts in unexpected ways.
Let's begin with a hypothetical situation. You would like to understand just what it is that this company offers.
(User indicates how the task would be attempted, or attempts to do the task if the navigation works.)
You would like to calculate the cost for offerings from this company. How do you start?
(User indicates how the task would be attempted, or attempts to do the task if the navigation works.)

Okay, you've found the pricing page. What does it tell you?
(User discusses the information on the page, describing what is useful, clear (or not), and where there could be more detail.)
Figure 3-4: A portion of an exploratory test script

sábado, 1 de diciembre de 2012

Example of Exploratory Study - I

Because the nature of the exploratory test is often somewhat abstract, let's review how a typical exploration might proceed for a product, such as a web site. Assume that you are exploring the home page of a web site, which employs options in the left navigation, each revealing further choices when the user mouses over it. Assume also that this is a very early stage of development, so the user interface simply consists of a single screen without any underlying structure or connections. However, the navigation menus function, so the
user can view the menu options underneath each menu heading, as shown in Figure 3-3.
Now let's look at Figure 3-4, which contains an excerpt of a test script for conducting an exploratory test, to see how the test might proceed. You might continue in this vein, having the user attempt to accomplish realistic tasks with much discussion about assumptions and thought process. Alternatively, though, if the web page is in such a preliminary stage that the navigation does

viernes, 30 de noviembre de 2012

Overview of the Methodology - Graphic


jueves, 29 de noviembre de 2012

Overview of the Methodology

Exploratory tests usually dictate extensive interaction between the participant and test moderator to establish the efficacy of preliminary design concepts. One way to answer very fundamental questions, similar to those listed previously, is to develop preliminary versions of the product's interface and/or its support materials for evaluation by representative users. For software, this would typically involve a prototype simulation or mockup of the product that represents its basic layout, organization of functions, and high-level operations.
Even prior to a working prototype, one might use static screen representations or even paper drafts of screens. For hardware representations, one might use two-dimensional or three-dimensional foamcore, clay, or plastic models. For user support materials, one might provide very rough layouts of manuals, training materials, or help screens. When developing a prototype, one need not represent the entire function-
ality of the product. Rather, one need only show enough functionality to address the particular test objective. For example, if you want to see how the user responds to the organization of your pull-down menus, you need only show the menus and one layer of options below. If the user proceeds deeper than the first layer, you might show a screen that reads, "Not yet implemented," or something similar and ask what the participant was looking for or expecting next.
This type of prototype is referred to as a "horizontal representation," since the user can move left or right but is limited in moving deeper. However, if your test objective requires seeing how well a user can move down several menu layers, you will need to prototype several functions "vertically," so users can proceed deeper. You might achieve both objectives with a horizontal representation of all major functions, and a vertical representation of two of the functions.
During the test of such a prototype, the user would attempt to perform representative tasks. Or, if it is too early to perform tasks, the user can simply "walk through" or review the product and answer questions under the guidance of a test moderator. In some cases, the user can even do both. The technique depends on the point in the development cycle and the sophistication of the mockups. The testing process for an exploratory test is usually quite informal and almost a collaboration between participant and test moderator, with much interaction between the two. Because so much of what you need to know is cognitive in nature, an exploration of the user's thought process is vital.
The test moderator and participant might explore the product together, with the test moderator conducting an almost ongoing interview or encouraging the participant to "think aloud" about his or her thought process as much as possible. Unlike later tests where there is much less interaction, the test moderator and participant can sit side by side, as shown in Figure 3-2.
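The horizontal-plus-vertical idea described above can be sketched in code. This is a minimal, hypothetical example (the menu labels and the helper function are invented, not from the text): a menu tree covers every top-level function horizontally, two branches go deeper vertically, and anything beyond the prototyped depth returns the "Not yet implemented" stub, at which point the moderator would ask what the participant was expecting.

```python
# Hypothetical sketch of a "horizontal plus partial vertical" prototype:
# every top-level menu is represented (horizontal coverage), but only two
# branches are fleshed out below the first layer (vertical coverage).
# All menu names here are invented for illustration.

PROTOTYPE = {
    "File": {"New": None, "Open": None, "Save": None},      # horizontal only
    "Edit": {"Undo": None, "Find": {"Find Next": None}},    # vertical branch
    "View": {"Zoom": {"Zoom In": None, "Zoom Out": None}},  # vertical branch
    "Help": {"About": None},                                # horizontal only
}

def select(path):
    """Walk the menu tree; stub out anything deeper than we prototyped."""
    node = PROTOTYPE
    for label in path:
        if not isinstance(node, dict) or label not in node:
            return "Not yet implemented"  # cue to probe: what did you expect?
        node = node[label]
    return "OK" if node is None else sorted(node)

print(select(["Edit", "Find"]))                  # -> ['Find Next']
print(select(["File", "New", "From Template"]))  # -> Not yet implemented
```

The dictionary-of-dictionaries shape makes it cheap to add or prune branches between test sessions, which suits the iterative spirit of an exploratory study.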

miércoles, 28 de noviembre de 2012

Exploratory or Formative Study - II

Objective
The main objective of the exploratory study is to examine the effectiveness of preliminary design concepts. If one thinks of a user interface or a document as being divided into a high-level aspect and a more detailed aspect, the exploratory study is concerned with the former.
For example, designers of a Web application interface would benefit greatly from knowing early on whether the user intuitively grasps the fundamental and distinguishing elements of the interface. Designers might want to know how well the interface:
■ Supports users' tasks within a goal.
■ Communicates the intended workflow.
■ Allows the user to navigate from screen to screen and within a screen.
Or, using the task-oriented user guide of a software product as an example, technical writers typically might want to explore the following high-level issues:
■ Overall organization of subject matter
■ Whether to use a graphic or verbal approach
■ How well the proposed format supports findability
■ Anticipated points of assistance and messaging
■ How to address reference information
The implications of these high-level issues go beyond the product, because you are also interested in verifying your assumptions about the users. Understanding one is necessary to define the other. Some typical user-oriented questions that an exploratory study would attempt to answer might include the following:
■ How do users conceive of and think about using the product?
■ Does the product's basic functionality have value to the user?
■ How easily and successfully can users navigate?
■ How easily do users make inferences about how to use this user interface, based on their previous experience?
■ What type of prerequisite information does a person need to use the product?
■ Which functions of the product are "walk up and use" and which will probably require either help or written documentation?
■ How should the table of contents be organized to accommodate both novice and experienced users?

martes, 27 de noviembre de 2012

Exploratory or Formative Study - I

When
The exploratory study is conducted quite early in the development cycle, when a product is still in the preliminary stages of being defined and designed (hence the reason it is sometimes called "formative"). By this point in the development cycle, the user profile and usage model (or task analysis) of the product will have (or should have) been defined. The project team is probably wrestling with the functional specification and early models of the product. Or perhaps the requirements and specifications phase is completed, and the design phase is just about to begin.

lunes, 26 de noviembre de 2012

When Should You Test?

Four Types of Tests: An Overview
The literature is filled with a variety of testing methodologies, each with a slightly different purpose. Often, different terms are used to describe identical testing techniques. Needless to say, this can be extremely confusing. In deciding which tests to discuss and emphasize, the most beneficial approach might be to use the product development lifecycle as a reference point for describing several different types of tests. Associating a test with a particular phase in the lifecycle should help you understand the test's purpose and benefits.
We discuss three tests - exploratory (or formative), assessment (or summative), and validation (or verification) tests —at a high level, according to the approximate point in the product development lifecycle at which each would be administered. The fourth type of test, the comparison test, can be used as an integral part of any of the other three tests and is not associated with any specific lifecycle phase.
The basic methodology for conducting each test is roughly the same and is described in detail in Chapter 5. However, each test will vary in its emphasis on qualitative vs. quantitative measures, and by the amount of interaction

domingo, 25 de noviembre de 2012

Limitations of Testing

Now, having painted a rather glorified picture of what usability testing is intended to accomplish, let's splash a bit of cold water on the situation. Testing is neither the end-all nor be-all for usability and product success, and it is important to understand its limitations. Testing does not guarantee success or even prove that a product will be usable. Even the most rigorously conducted formal test cannot, with 100 percent certainty, ensure that a product will be usable when released. Here are some reasons why:
- Testing is always an artificial situation. Testing in the lab, or even testing in the field, still represents a depiction of the actual situation of usage and not the situation itself. The very act of conducting a study can itself affect the results.
- Test results do not prove that a product works. Even if one conducts the type of test that acquires statistically significant results, this still does not prove that a product works. Statistical significance is simply a measure of the probability that one's results were not due to chance. It is not a guarantee, and it is very dependent upon the way in which the test was conducted.
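To make that last point concrete, here is a small simulation (ours, not from the text): even when two designs are identical, comparing them many times still produces "statistically significant" differences at the .05 level in roughly 5% of comparisons, purely by chance.

```python
# A small simulation (not from the book) illustrating the point above:
# even when two designs are IDENTICAL, roughly 5% of comparisons will
# come out "statistically significant" at the p < .05 level by chance.
import math
import random

random.seed(42)
TRIALS, N, SIGMA = 2000, 30, 1.0

false_positives = 0
for _ in range(TRIALS):
    a = [random.gauss(0, SIGMA) for _ in range(N)]  # "design A" task times
    b = [random.gauss(0, SIGMA) for _ in range(N)]  # "design B" -- same design!
    # Two-sample z statistic with known sigma
    z = (sum(a) / N - sum(b) / N) / (SIGMA * math.sqrt(2 / N))
    if abs(z) > 1.96:  # the usual two-sided .05 criterion
        false_positives += 1

rate = false_positives / TRIALS
print(f"'Significant' differences between identical designs: {rate:.1%}")
```

Significance tells you how surprising the data would be under chance alone; it says nothing about whether the design is actually usable.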

sábado, 24 de noviembre de 2012

Basic Elements of Usability Testing

■ Development of research questions or test objectives rather than hypotheses.
■ Use of a representative sample of end users, which may or may not be randomly chosen.
■ Representation of the actual work environment.
■ Observation of end users who either use or review a representation of the product.
■ Controlled and sometimes extensive interviewing and probing of the participants by the test moderator.
■ Collection of quantitative and qualitative performance and preference measures.
■ Recommendation of improvements to the design of the product.
We detail the "how-to" of this approach in the chapters that follow.

viernes, 23 de noviembre de 2012

Basics of the Methodology - II

The preceding approach is the basis for conducting classical experiments, and when conducting basic research, it is the method of choice. However, it is not the method expounded in this book, for the following reasons.
■ It is often impossible or inappropriate to use such a methodology to conduct usability tests in the fast-paced, highly pressurized development environment in which most readers will find themselves. It is impossible because of the many organizational constraints, political and otherwise. It is inappropriate because the purpose of usability testing is not necessarily to formulate and test specific hypotheses, that is, to conduct research, but rather to make informed decisions about design to improve products.
■ The amount of prerequisite knowledge of experimental method and statistics required in order to perform these kinds of studies properly is considerable and better left to an experienced usability or human factors specialist. Should one attempt to conduct this type of tight research without the appropriate background and training, the results can often be very misleading and lead to a worse situation than if no research had been conducted.
■ In the environment in which testing most often takes place, it is often very difficult to apply the principle of randomly assigning participants because one often has little control over this factor. This is especially true
as it concerns the use of existing customers as participants.
■ Still another reason for a less formal approach concerns sample size. To achieve generalizable results for a given target population, one's sample size is dependent on knowledge of certain information about that population, which is often lacking (and sometimes the precise reason for the test). Lacking such information, one may need to test 10 to 12 participants per condition to be on the safe side, a factor that might require one to test 40 or more participants to ensure statistically significant results.
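The 10-to-12-per-condition figure can be checked against the standard normal-approximation formula for comparing two group means (a back-of-envelope sketch, not from the text): at 80% power and a two-sided alpha of .05, about 11 participants per condition suffice only when the expected standardized effect is very large (d ≈ 1.2); smaller effects demand far more, which is exactly why formal designs with several conditions balloon to 40-plus participants.

```python
# Back-of-envelope (normal approximation) for the sample sizes mentioned above.
# Participants per condition needed to detect a standardized effect size d at
# two-sided alpha = .05 (z = 1.96) with 80% power (z = 0.84):
#     n = 2 * ((z_alpha + z_beta) / d) ** 2
import math

def n_per_group(d, z_alpha=1.96, z_beta=0.84):
    """Approximate participants needed per condition for effect size d."""
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

for d in (1.2, 0.8, 0.5):
    print(f"effect size d = {d}: about {n_per_group(d)} participants per condition")
# -> 11, 25, and 63 participants per condition, respectively
```

With four conditions and d ≈ 1.2, that is 44 participants in total, in line with the "40 or more" figure above.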

jueves, 22 de noviembre de 2012

Basics of the Methodology - I

The basic methodology for conducting a usability test has its origin in the classical approach for conducting a controlled experiment. With this formal approach, often employed to conduct basic research, a specific hypothesis is formulated and then tested by isolating and manipulating variables under controlled conditions. Cause-and-effect relationships are then carefully examined, often through the use of the appropriate inferential statistical technique(s), and the hypothesis is either confirmed or rejected. Employing a true experimental design, these studies require that:
-A hypothesis must be formulated. A hypothesis states what you expect to occur when testing. For example, "Help as designed in format A will improve the speed and error rate of experienced users more than help as designed in format B." It is essential that the hypothesis be as specific as possible.
-Randomly chosen (using a very systematic method) participants must be assigned to experimental conditions. One needs to understand the characteristics of the target population, and from that larger population select a representative random sample. Random sampling is often difficult, especially when choosing from a population of existing customers.
-Tight controls must be employed. Experimental controls are crucial, or else the validity of the results can be called into question, regardless of whether statistical significance is the goal. All participants should have nearly identical experiences prior to and during the test. In addition, the amount of interaction with the test moderator must be controlled.
- Control groups must be employed. In order to validate results, a control group must be employed; its treatment should vary only on the single variable being tested.
- The sample (of users) must be of sufficient size to measure statistically significant differences between groups. In order to measure differences between groups statistically, a large enough sample size must be used. Too small a sample can lead to erroneous conclusions.

miércoles, 21 de noviembre de 2012

Improving Profitability

Goals and benefits of testing for your organization include:
- Creating a historical record of usability benchmarks for future releases. By keeping track of test results, a company can ensure that future products either improve on or at least maintain current usability standards.
- Minimizing the cost of service and support calls. A more usable product will require fewer service calls and less support from the company.
- Increasing sales and the probability of repeat sales. Usable products create happy customers who talk to other potential buyers or users. Happy customers also tend to stick with future releases of the product,
rather than purchase a competitor's product.
-Acquiring a competitive edge because usability has become a market separator for products. Usability has become one of the main ways to separate one's product from a competitor's product in the customer's
mind. One need only scan the latest advertising to see products described using phrases such as "simple" and "easy" among others. Unfortunately, this information is rarely truthful when put to the test.
-Minimizing risk. Actually, all companies and organizations have conducted usability testing for years. Unfortunately, the true name for this type of testing has been "product release," and the "testing"
involved trying the product in the marketplace. Obviously, this is a very risky strategy, and usability testing conducted prior to release can minimize the considerable risk of releasing a product with serious usability
problems.

martes, 20 de noviembre de 2012

Eliminating Design Problems and Frustration

One side of the profitability coin is the ease with which customers can use the product. When you minimize the frustration of using a product for your target audience by remedying flaws in the design ahead of product release, you also accomplish these goals:

■ Set the stage for a positive relationship between your organization and your customers.
■ Establish the expectation that the products your organization sells are high quality and easy to use.
■ Demonstrate that the organization considers the goals and priorities of its customers to be important.
■ Release a product that customers find useful, effective, efficient, and satisfying.

lunes, 19 de noviembre de 2012

Informing Design

The overall goal of usability testing is to inform design by gathering data from which to identify and rectify usability deficiencies existing in products and their accompanying support materials prior to release. The intent is to ensure the creation of products that:

■ Are useful to and valued by the target audience
■ Are easy to learn
■ Help people be effective and efficient at what they want to do
■ Are satisfying (and possibly even delightful) to use

domingo, 18 de noviembre de 2012

Why Test? Goals of Testing

From the point of view of some companies, usability testing is part of a larger effort to improve the profitability of products. There are many aspects to doing so, and in the end users benefit greatly as well: design decisions are informed by data gathered from representative users, exposing design issues so they can be remedied and thus minimizing or eliminating frustration for users.

sábado, 17 de noviembre de 2012

What Is Usability Testing?

The term usability testing is often used rather indiscriminately to refer to any technique used to evaluate a product or system. Many times it is obvious that the speaker is referring to one of the other techniques discussed in Chapter 1. Throughout this book we use the term usability testing to refer to a process that
employs people as testing participants who are representative of the target audience to evaluate the degree to which a product meets specific usability criteria. This inclusion of representative users eliminates labeling as usability testing such techniques as expert evaluations, walk-throughs, and the like that do not require representative users as part of the process.
Usability testing is a research tool, with its roots in classical experimental methodology. The range of tests one can conduct is considerable, from true classical experiments with large sample sizes and complex test designs to very informal qualitative studies with only a single participant. Each testing approach has different objectives, as well as different time and resource requirements. The emphasis of this book is on more informal, less complex tests designed for quick turnaround of results in industrial product development environments.

viernes, 16 de noviembre de 2012

Follow-Up Studies

A follow-up study occurs after formal release of the product. The idea is to collect data for the next release, using surveys, interviews, and observations. Structured follow-up studies are probably the truest and most accurate appraisals of usability, because the actual user, product, and environment are all in place and interacting with each other. That follow-up studies are so rare is unfortunate, because designers would benefit immensely from learning what happened to the product that they spent two years of their lives perfecting.
Sales figures, while helpful, add nothing to one's knowledge of the product's strengths and weaknesses.
This is not a definitive list of methods by any means; it is meant merely to provide the reader with an appreciation for the wealth of techniques available and the complexity involved in implementing a UCD approach. It is a rare organization that performs all of these techniques, and just as few conduct them in their pure form. Typically, they are used in altered and combined form, as the specific needs and constraints of a project dictate. For more about these techniques, check out our list of resources on the web site that accompanies this book at www.wiley.com/go/usabilitytesting. Now let's take a closer look at one of the most renowned techniques of all the ones discussed, and the focus of this book, usability testing, in Chapter 2.

jueves, 15 de noviembre de 2012

Usability Testing

Usability testing, the focus of this book, employs techniques to collect empirical data while observing representative end users using the product to perform realistic tasks. Testing is roughly divided into two main approaches. The first approach involves formal tests conducted as true experiments, in order to confirm or refute specific hypotheses. The second approach, a less formal but still rigorous one (and the one we emphasize in this book), employs an iterative cycle of tests intended to expose usability deficiencies and gradually shape or mold the product in question.

miércoles, 14 de noviembre de 2012

Expert or Heuristic Evaluations

Expert evaluations involve a review of a product or system, usually by a usability specialist or human factors specialist who has little or no involvement in the project. The specialist performs his or her review according to accepted usability principles (heuristics) from the body of research, human factors literature, and previous professional experience. The viewpoint is that of the specific target population that will use the product.
A "double" specialist, that is, someone who is an expert in usability principles or human factors as well as an expert in the domain area (such as healthcare, financial services, and so on, depending on the application), or in the particular technology employed by the product, can be more effective than one without such knowledge.

martes, 13 de noviembre de 2012

Paper Prototyping

In this technique users are shown an aspect of a product on paper and asked questions about it, or asked to respond in other ways. To learn whether the flow of screens or pages that you have planned supports users' expectations, you may mock up pages with paper and pencil on graph paper, or create line drawings or wireframe drawings of screens, pages, or panels, with a version of the page for each state. For example, if the prototype is for a shopping cart for an e-commerce web site, you can show the cart with items, as items are being changed, and then with shipping and taxes added. (Or, you may simply decide to have the participant or the "computer" fill these items in as the session progresses.)
To learn whether the labels help users know what to expect next, and if the categories you have planned reflect how users think and talk about tasks, you can show the top-level navigation. As the participant indicates the top-level choice, you then show the next level of navigation for that choice. The process
continues until the user has gone as deeply into the navigation as you have designed and prepared for the sessions.
Or, you may simply ask participants about the prototype you have created. The questions can range from particular attributes, such as organization and layout, to where one might find certain options or types of information. 
The value of the paper prototype or paper-and-pencil evaluation is that critical information can be collected quickly and inexpensively. One can ascertain those functions and features that are intuitive and those that are not, before one line of code has been written. In addition, technical writers might use the technique to evaluate the intuitiveness of their table of contents before writing one word of text. The technique can be employed again and again with minimal drain on resources.

lunes, 12 de noviembre de 2012

Open and Closed Card Sorting

Use card sorting to design in "findability" of content or functionality. This is a very inexpensive method for getting user input on content organization, vocabulary, and labeling in the user interface. You can either give participants cards showing content without titles or categories and have the users do the naming (an open card sort), or give participants preliminary or preexisting categories and ask participants to sort content or functions into those (a closed sort).
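As a sketch of how open-sort results might be tallied afterward (the card names and pile data below are invented, not from the text), counting how often participants place two cards in the same pile highlights candidate categories for the navigation:

```python
# Hypothetical analysis of an open card sort: count how often each pair of
# cards was grouped into the same pile. High co-occurrence suggests the two
# items belong in the same navigation category. All data here is invented.
from collections import Counter
from itertools import combinations

sorts = [  # one entry per participant: his or her own piles
    [{"Pricing", "Plans"}, {"Docs", "Tutorials", "API"}],
    [{"Pricing", "Plans", "API"}, {"Docs", "Tutorials"}],
    [{"Pricing", "Plans"}, {"Docs", "API"}, {"Tutorials"}],
]

pairs = Counter()
for participant in sorts:
    for pile in participant:
        for a, b in combinations(sorted(pile), 2):
            pairs[(a, b)] += 1  # canonical (sorted) pair as the key

for (a, b), n in pairs.most_common(3):
    print(f"{a} + {b}: grouped together by {n} of {len(sorts)} participants")
```

Pairs grouped by nearly all participants (here, Pricing and Plans) are strong candidates for a shared category; pairs that split the participants flag labels worth probing further.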

domingo, 11 de noviembre de 2012

Walk-Throughs

Once you have a good idea who your target users are and the task goals they have, walk-throughs are used to explore how a user might fare with a product by envisioning the user's route through an early concept or prototype of the product. Usually the designer responsible for the work guides his or her colleagues through actual user tasks (sometimes even playing the role of the user), while another team member records difficulties encountered or concerns of the team. In a structured walk-through, as first developed by IBM to perform code reviews, the participants assume specific roles (e.g., moderator, recorder) and follow explicit guidelines (e.g., no walk-through longer than two hours) to ensure the effectiveness of the effort. Rather than the designer taking on the role of the user, you may want to bring in a real user, perhaps someone from a favored client.

sábado, 10 de noviembre de 2012

Surveys

By administering surveys you can begin to understand the preferences of a broad base of users about an existing or potential product. While the survey cannot match the focus group in its ability to plumb for in-depth responses and rationale, it can use larger samples to generalize to an entire population.
For example, the Nielsen ratings, one of the most famous ongoing surveys, are used to make multimillion-dollar business decisions for a national population based on the preferences of about 1500 people. Surveys can be used at any time in the lifecycle but are most often used in the early stages to better understand the potential user. An important aspect of surveys is that their language must be crystal clear and understood in the same way by all readers, a task impossible to perform without multiple tested iterations and adequate
preparation time. Again, asking people about what they do or have done is no substitute for observing them do it in a usability test.
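The arithmetic behind samples like Nielsen's is worth seeing (a standard worst-case margin-of-error calculation, ours rather than from the text): roughly 1,500 respondents yield about a ±2.5% margin at 95% confidence, and notably the figure depends on sample size, not on the size of the national population being described.

```python
# Why ~1,500 respondents can represent a national audience: the worst-case
# margin of error at 95% confidence depends only on the sample size n.
import math

def margin_of_error(n, z=1.96, p=0.5):
    """Worst-case (p = 0.5) margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1500):
    print(f"n = {n}: margin of error about ±{margin_of_error(n):.1%}")
# -> roughly ±9.8%, ±4.9%, and ±2.5%
```

Note the diminishing returns: quadrupling the sample only halves the margin, which is why surveys rarely go far beyond a few thousand respondents.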

viernes, 9 de noviembre de 2012

Focus Group Research

Use focus group research at the very early stages of a project to evaluate preliminary concepts with representative users. It can be considered part of "proof of concept" review. In some cases it is used to identify and confirm the characteristics of the representative user altogether. All focus group research
employs the simultaneous involvement of more than one participant, a key factor in differentiating this approach from many other techniques.
The concepts that participants evaluate in these group sessions can be presented in the most preliminary form, such as paper-and-pencil drawings, storyboards, and/or more elaborate screen-based prototypes or plastic models. 
The objective is to identify how acceptable the concepts are, in what ways they are unacceptable or unsatisfactory, and how they might be made more acceptable and useful. The beauty of the focus group is its ability to explore a few people's judgments and feelings in great depth, and in so doing learn something about how end users think and feel. In this way, focus groups  are very different from — and no substitute for — usability tests. A focus group is good for general, qualitative information but not for learning about performance issues and real behaviors. Remember, people in focus groups are reporting what they feel like telling you, which is almost always different from what they actually do. Usability tests are best for observing behaviors and measuring performance issues, while perhaps gathering some qualitative information along the way.

jueves, 8 de noviembre de 2012

Defined Usability Goals and Objectives

Designing a product to be useful must be a structured and systematic process, beginning with high-level goals and moving to specific objectives. You cannot achieve a goal — usability or otherwise — if it remains nebulous and ill-conceived. Even the term usability itself must be defined within your organization. An operational definition of what makes your product usable (tied to successful completion criteria, as we will talk about in Chapter 5) may include:

■ Usefulness
■ Efficiency
■ Effectiveness
■ Satisfaction
■ Accessibility

This brings us full circle to our original description of what makes a product usable. Now let's review some of the major techniques and methods a usability specialist uses to ensure a user-centered design.
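An operational definition like the one above can be made concrete as pass/fail criteria for a single task. Here is a minimal sketch (all field names and thresholds are invented for illustration, not drawn from the text):

```python
# Hypothetical operational usability criteria for one task, expressed as
# measurable pass/fail checks. Every threshold here is invented -- in
# practice each would be negotiated with the project team (see Chapter 5).
criteria = {
    "effectiveness": lambda r: r["completed"] / r["attempted"] >= 0.90,
    "efficiency":    lambda r: r["median_time_s"] <= 120,
    "satisfaction":  lambda r: r["mean_rating_1to7"] >= 5.5,
}

# Invented results from a ten-participant test session
results = {"attempted": 10, "completed": 9,
           "median_time_s": 95, "mean_rating_1to7": 5.8}

for attribute, check in criteria.items():
    print(f"{attribute}: {'met' if check(results) else 'NOT met'}")
```

Writing the criteria down as explicit checks, before testing begins, is what turns "usable" from a nebulous goal into something a test can actually confirm or refute.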

miércoles, 7 de noviembre de 2012

A "Learn as You Go" Perspective

UCD is an evolutionary process whereby the final product is shaped over time. It requires designers to take the attitude that the optimum design is acquired through a process of trial and error, discovery, and refinement. Assumptions

domingo, 4 de noviembre de 2012

Concerned, Enlightened Management

Typically, the degree to which usability is a true corporate concern is the degree to which a company's management is committed to following its own lifecycle and giving its guidelines teeth by holding the design team accountable. Management understands that there are financial benefits to usability and market share to be won.

sábado, 3 de noviembre de 2012

A Multidisciplinary Team Approach

No longer can design be the province of one person or even of one specialty.
While one designer may take ultimate responsibility for a product's design, he or she is not all-knowing about how to proceed. There are simply too many factors to consider when designing very complex products for less technical end users. User-centered design requires a variety of skills, knowledge, and, most importantly, information about the intended user and usage. Today, teams composed of specialists from many fields, such as engineering, marketing, training, user-interface design, human factors, and multimedia, are becoming
the norm. In turn, many of these specialists have training in complementary areas, so cross-discipline work is easier and more dynamic than ever before.


viernes, 2 de noviembre de 2012

Attributes of Organizations That Practice UCD

User-centered design demands a rethinking of the way in which most companies do business, develop products, and think about their customers. While currently there exists no cookie-cutter formula for success, there are common attributes that companies practicing UCD share. For example:

■ Phases that include user input
■ Multidisciplinary teams
■ Concerned, enlightened management
■ A "learn as you go" perspective
■ Defined usability goals and objectives

jueves, 1 de noviembre de 2012

Phases That Include User Input

Unlike the typical phases we have all seen in traditional development methodologies, a user-centered approach is based on receiving user feedback or input during each phase, prior to moving to the next phase. This can involve a variety of techniques, usability testing being only one of these.
Today, most major companies that develop technology-based products or systems have product lifecycles that include some type of usability engineering/human factors process. In that process, questions arise. These questions and some suggested methods for answering them appear in Figure 1-4.
Within each phase, there will be a variety of usability engineering activities. Note that, although this particular lifecycle is written from the viewpoint of the human factors specialist's activities, there are multiple places where collaboration is required among various team members. This leads to our next attribute of organizations practicing UCD.

miércoles, 31 de octubre de 2012

Iterative Design and Testing

Much has been made about the importance of design iteration. However, this is not just fine-tuning late in the development cycle. Rather, true iterative design allows for the complete overhaul and rethinking of a design, through early testing of conceptual models and design ideas. If designers are not prepared for such a major step, then the influence of iterative design becomes minimal and cosmetic. In essence, true iterative design allows one to "shape the product" through a process of design, test, redesign, and retest activities.

martes, 30 de octubre de 2012

Evaluation and Measurement of Product Usage

Here, emphasis is placed on behavioral measurements of ease of learning and ease of use very early in the design process, through the development and testing of prototypes with actual users.

lunes, 29 de octubre de 2012

An Early Focus on Users and Tasks

More than just simply identifying and categorizing users, we recommend direct contact between users and the design team throughout the development lifecycle. Of course, your team needs training and coaching in how to manage these interactions. This is a responsibility that you can take on as you become more educated and practiced, yourself.
Though a goal should be to institutionalize customer contact, be wary of doing it merely to complete a check-off box on one's performance appraisal form. What is required is a systematic, structured approach to the collection of information from and about users. Designers require training from expert interviewers before conducting a data collection session. Otherwise, the results can be very misleading.

domingo, 28 de octubre de 2012

What Makes Products More Usable? - II

Going beyond user-centered design of a product, we should be paying attention to the user experience in the entire cycle of user ownership of a product. Ideally, the entire process of interacting with potential customers, from the initial sales and marketing contact through the entire duration of ownership, to the point at which another product is purchased or the current one upgraded, should also be included in a user-centered approach. In such a scenario, companies would extend their concern to include all prepurchase and postpurchase contacts and interactions. However, let's take one step at a time, and stick to the design process.
Numerous articles and books have been written on the subject of user-centered design (UCD) (for a list of our favorites, see the web site that accompanies this book, www.wiley.com/go/usabilitytesting). However, it is important for the reader to understand the basic principles of UCD in order to understand the context for performing usability testing. Usability testing is not UCD itself; it is merely one of several techniques for helping ensure a good, user-centered design.

We want to emphasize these basic principles of user-centered design:

■ Early focus on users and their tasks
■ Evaluation and measurement of product usage
■ Iterative design

sábado, 27 de octubre de 2012

What Makes Products More Usable? - I

User-centered design (UCD) describes an approach that has been around for decades under different names, such as human factors engineering, ergonomics, and usability engineering. (The terms human factors engineering and ergonomics are almost interchangeable, the major difference between the two having more to do with geography than with real differences in approach and implementation. In the United States, human factors engineering is the more widely used term, and in other countries, most notably in Europe,
ergonomics is more widely used.) UCD represents the techniques, processes, methods, and procedures for designing usable products and systems, but just as important, it is the philosophy that places the user at the center of the process.
Although the design team must think about the technology of the product first (can we build what we have in mind?), and then what the features will be (will it do what we want it to do?), they must also think about what the user's experience will be like when he or she uses the product. In user-centered design, development starts with the user as the focus, taking into account the abilities and limitations of the underlying technology and the features the company has in mind to offer.
As a design process, UCD seeks to support how target users actually work, rather than forcing users to change what they do to use something.
The International Organization for Standardization (ISO) in standard 13407 says that UCD is "characterized by: the active involvement of users and a clear understanding of user and task requirements; an appropriate allocation of function between users and technology; the iteration of design solutions; multidisciplinary design."

viernes, 26 de octubre de 2012

Reason 5: Design and Implementation Don't Always Match

The design of the user interface and the technical implementation of the user interface are different activities, requiring very different skills. Today, the emphasis and need are on design skills, while many engineers possess the mind-set and skill set for technical implementation.
Design, in this case, relates to how the product communicates, whereas implementation refers to how it works. Previously, this dichotomy between design and implementation was rarely even acknowledged. Engineers and designers were hired for their technical expertise (e.g., programming and machine-oriented analysis) rather than for their design expertise (e.g., communication and human-oriented analysis). This is understandable, because with early generation computer languages the great challenge lay in simply getting
the product to work. If it communicated elegantly as well, so much the better, but that was not the prime directive. 
With the advent of new-generation programming languages and tools to automatically develop program code, the challenge of technical implementation has diminished. The challenge of design, however, has increased dramatically due to the need to reach a broader, less sophisticated user population and the rising expectations for ease of use. To use a computer analogy, the focus has moved from the inside of the machine (how it works) to the outside where the end user resides (how it communicates).
What is needed are methods and techniques that help designers change the way they view and design products: methods that work from the outside in, from the end user's needs and abilities to the eventual implementation of the product. That approach is user-centered design (UCD). Because it is only within the context of UCD that usability testing makes sense and thrives, let's explore this notion of user-centered design in more detail.

jueves, 25 de octubre de 2012

Reason 4: Team Specialists Don't Always Work in Integrated Ways - II

Each development group functions independently, almost as a silo, and the final product often reflects this approach. The help center will not adequately support the user interface or it will be organized very differently from the interface. Or user documentation and help will be redundant with little
cross-referencing. Or the documentation will not reflect the latest version of the user interface. You get the picture.
The problem occurs when the product is released. The end user, upon receiving this new product, views it and expects it to work as a single, integrated product, as shown in Figure 1-3. He or she makes no particular
distinction among the three components, and each one is expected to support and work seamlessly with the others. When the product does not work in this way, it clashes with the user's expectations, and whatever advantages accrue through specialization are lost.
Even more interesting is how often organizations unknowingly exacerbate this lack of integration by usability testing each of the components separately. Documentation is tested separately from the interface, and the interface separately from the help. Ultimately, this approach is futile, because it matters little if each component is usable within itself. Only if the components work well together will the product be viewed as usable and meeting the user's needs.

miércoles, 24 de octubre de 2012

Reason 4: Team Specialists Don't Always Work in Integrated Ways - I

Organizations employ very specialized teams and approaches to product and system development yet fail to integrate them with each other. 
To improve efficiency, many organizations have broken down the product development process into separate system components developed independently. For example, components of a software product include the user interface, the help system, and the written materials. Typically, these components are developed by separate individuals or teams. Now, there is nothing inherently wrong with specialization. The difficulty arises when there is little integration of these separate components and poor communication among the different development teams.
Often the product development proceeds in separate, compartmentalized sections. To an outsider looking on, the development would be seen as depicted in Figure 1-2.


martes, 23 de octubre de 2012

Reason 3: Designing Usable Products Is Difficult


The design of usable systems is a difficult, unpredictable endeavor, yet many organizations treat it as if it were just "common sense."
While much has been written about what makes something usable, the concept remains maddeningly elusive, especially for those without a background in either the behavioral or social sciences.
When this book was first published in 1994, few systems designers and developers had knowledge of the basic principles of user-centered design. Today, most designers have some knowledge of, or at least exposure to, user-centered design practices, whether they are aware of them or not.
However, there are still gaps between awareness and execution. Usability principles are still not obvious, and there is still a great need for education, assistance, and a systematic approach in applying so-called "common sense" to the design process.


domingo, 21 de octubre de 2012

Reason 2: Target Audiences Expand and Adapt

As technology has penetrated the mainstream consumer market, the target audience has expanded and continues to change dramatically. Development organizations have been slow to react to this evolution.
The original users of computer-based products were enthusiasts (also known as early adopters) possessing expert knowledge of computers and mechanical devices, a love of technology, the desire to tinker, and pride in their ability to troubleshoot and repair any problem. Developers of these products shared similar characteristics. In essence, users and developers of these systems were one and the same. Because of this similarity, the developers practiced "next-bench" design, a method of designing for the user who is literally sitting one bench away in the development lab. Not surprisingly, this approach met with relative success, and users rarely if ever complained about difficulties. 
Why would they complain? Much of their joy in using the product was the amount of tinkering and fiddling required to make it work, and enthusiast users took immense pride in their abilities to make these complicated products function. Consequently, a "machine-oriented" or "system-oriented" approach
met with little resistance and became the development norm.
Today, however, all that has changed dramatically. Users are apt to have little technical knowledge of computers and mechanical devices, little patience for tinkering with the product just purchased, and completely different expectations from those of the designer. More important, today's user is not even remotely comparable to the designer in skill set, aptitude, expectation, or almost any attribute that is relevant to the design process. Where in the past, companies might have found Ph.D. chemists using their products, today they will find high-school graduates performing similar functions. Obviously, "next-bench" design simply falls apart as a workable design strategy when there is a great discrepancy between user and designer, and companies employing such a strategy, even inadvertently, will continue to produce hard-to-use products.
Designers aren't necessarily hobbyist enthusiasts anymore; most are trained professionals educated in human-computer interaction, industrial design, human factors engineering, or computer science, or a combination of these. Whereas before it was unusual for a nontechnical person to use electronic or computer-based equipment, today it is almost impossible for the average person not to use such a product in either the workplace or in private life. The overwhelming majority of products, whether in the workplace or the home, be they cell phones, DVRs, web sites, or sophisticated testing equipment, are intended for this less technical user. Today's user wants a tool, not another hobby.

viernes, 19 de octubre de 2012

Reason 1: Development Focuses on the Machine or System

During design and development of the product, the emphasis and focus may have been on the machine or system, not on the person who is the ultimate end user. The general model of human performance shown in Figure 1-1 helps to clarify this point.
There are three major components to consider in any type of human performance situation, as shown in Bailey's human performance model:
■ The human
■ The context
■ The activity
Because the development of a system or product is an attempt to improve human performance in some area, designers should consider these three components during the design process. All three affect the final outcome of how well humans ultimately perform. Unfortunately, of these three components, designers, engineers, and programmers have traditionally placed the greatest emphasis on the activity component, and much less emphasis on the human and the context components. The relationship of the three components to
each other has also been neglected. There are several explanations for this unbalanced approach:
■ There has been an underlying assumption that because humans are so inherently flexible and adaptable, it is easier to let them adapt themselves to the machine, rather than vice versa.
■ Developers traditionally have been more comfortable working with the seemingly "black and white," scientific, concrete issues associated with systems than with the more gray, muddled, ambiguous issues associated with human beings.
■ Developers have historically been hired and rewarded not for their personal, "people" skills but for their ability to solve technical problems.
■ The most important factor leading to the neglect of human needs has been that in the past, designers were developing products for end users who were much like themselves. There was simply no reason to study such a familiar colleague. That leads us to the next point.

jueves, 18 de octubre de 2012

Five Reasons Why Products Are Hard to Use

For those of you who currently work in the product development arena, as engineers, user-interface designers, technical communicators, training specialists, or managers in these disciplines, it seems likely that several of the reasons for the development of hard-to-use products and systems will sound painfully
familiar.

■ Development focuses on the machine or system.
■ Target audiences expand and adapt.
■ Designing usable products is difficult.
■ Team specialists don't always work in integrated ways.
■ Design and implementation don't always match.

miércoles, 17 de octubre de 2012

What Makes Something Less Usable?

Why are so many high-tech products so hard to use?
In this section, we explore this question, discuss why the situation exists, and examine the overall antidote to this problem. Many of the examples in this book involve not only consumer hardware, software, and web sites but also documentation such as user's guides and embedded assistance such as on-screen instructions and error messages. The methods in this book also work for appliances such as music players, cell phones, and game consoles. Even products, such as the control panel for an ultrasound machine or the user manual for a digital camera, fall within the scope of this book.

martes, 16 de octubre de 2012

What Do We Mean by "Usable"? - IV

True usability is invisible. If something is going well, you don't notice it. 
If the temperature in a room is comfortable, no one complains. But usability in products happens along a continuum. How usable is your product? Could it be more usable even though users can accomplish their goals? Is it worth improving?
Most usability professionals spend most of their time working on eliminating design problems, trying to minimize frustration for users. This is a laudable goal! But know that it is a difficult one to attain for every user of your product. 
And it affects only a small part of the user's experience of accomplishing a goal. And, though there are quantitative approaches to testing the usability of products, it is impossible to measure the usability of something. You can only measure how unusable it is: how many problems people have using something, what the problems are and why. 
By incorporating evaluation methods such as usability testing throughout an iterative design process, it is possible to make products and services that are useful and usable, and possibly even delightful.

lunes, 15 de octubre de 2012

What Do We Mean by "Usable"? - III

Accessibility and usability are siblings. In the broadest sense, accessibility is about having access to the products needed to accomplish a goal. But in this book when we talk about accessibility, we are looking at what makes products usable by people who have disabilities. Making a product usable for people with disabilities, or who are in special contexts, or both, almost always benefits people who do not have disabilities. Considering accessibility for people with disabilities can clarify and simplify design for people who face temporary limitations (for example, injury) or situational ones (such as divided attention or bad environmental conditions, such as bright light or not enough light). There are many tools and sets of guidelines available to assist you in making accessible designs. (We include pointers to accessibility resources on the web site that accompanies this book; see www.wiley.com/go/usabilitytesting for more information.) You should acquaint yourself with accessibility best practices so that you can implement them in your organization's user-centered design process along with usability testing and other methods.
Making things more usable and accessible is part of the larger discipline of user-centered design (UCD), which encompasses a number of methods and techniques that we will talk about later in this chapter. In turn, user-centered design rolls up into an even larger, more holistic concept called experience design. Customers may be able to complete the purchase process on your web site, but how does that mesh with what happens when the product is delivered, maintained, serviced, and possibly returned? What does your organization do to support the research and decision-making process leading up to the purchase? All of these figure into experience design.
Which brings us back to usability.

domingo, 14 de octubre de 2012

What Do We Mean by "Usable"? - II

Efficiency is the quickness with which the user's goal can be accomplished accurately and completely, and is usually a measure of time. For example, you might set a usability testing benchmark that says "95 percent of all users will be able to load the software within 10 minutes."
Effectiveness refers to the extent to which the product behaves in the way that users expect it to and the ease with which users can use it to do what they intend. This is usually measured quantitatively with error rate. Your usability testing measure for effectiveness, like that for efficiency, should be tied to some percentage of total users. Extending the example from efficiency, the benchmark might be expressed as "95 percent of all users will be able to load the software correctly on the first attempt."
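The two benchmarks above lend themselves to simple calculation once test sessions are complete. The sketch below shows one way this might look, using invented session data and hypothetical field names (the 95 percent and 10-minute thresholds come from the example benchmarks in the text; everything else is illustrative, not a prescribed format):

```python
# Hypothetical records from a usability test: per-participant task time
# in minutes and whether the first load attempt succeeded. The data and
# field names are invented for illustration.
sessions = [
    {"minutes": 6.5, "first_attempt_ok": True},
    {"minutes": 9.0, "first_attempt_ok": True},
    {"minutes": 12.2, "first_attempt_ok": False},
    {"minutes": 4.8, "first_attempt_ok": True},
]

def efficiency_rate(sessions, limit_minutes=10.0):
    """Share of participants who finished within the time limit."""
    return sum(s["minutes"] <= limit_minutes for s in sessions) / len(sessions)

def effectiveness_rate(sessions):
    """Share of participants who succeeded on the first attempt."""
    return sum(s["first_attempt_ok"] for s in sessions) / len(sessions)

BENCHMARK = 0.95  # "95 percent of all users..."
print(f"Efficiency: {efficiency_rate(sessions):.0%} (benchmark {BENCHMARK:.0%})")
print(f"Effectiveness: {effectiveness_rate(sessions):.0%} (benchmark {BENCHMARK:.0%})")
```

With this toy data, both rates come out at 75 percent, below the benchmark, which would signal that the design needs another iteration. Note that, as the satisfaction discussion below argues, such numbers only flag that a problem exists; they do not explain why.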
Learnability is a part of effectiveness and has to do with the user's ability to operate the system to some defined level of competence after some predetermined amount and period of training (which may be no time at all). It can also refer to the ability of infrequent users to relearn the system after periods of inactivity.
Satisfaction refers to the user's perceptions, feelings, and opinions of the product, usually captured through both written and oral questioning. Users are more likely to perform well on a product that meets their needs and provides satisfaction than one that does not. Typically, users are asked to rate and rank products that they try, and probing the problems that occur can reveal their causes and reasons.
What makes a product usable is never simply a matter of the numbers; the numbers can tell us only so much. There is a distinctive qualitative element to how usable something is as well, which is hard to capture with numbers and is difficult to pin down. It has to do with how one interprets the data in order to know how to fix a problem, because the behavioral data alone does not tell you why there is a problem. Any doctor can measure a patient's vital signs, such as blood pressure and pulse rate.
But interpreting those numbers and recommending the appropriate course of action for a specific patient is the true value of the physician. Judging the several possible alternative causes of a design problem, and knowing which are especially likely in a particular case, often means looking beyond individual data points in order to design an effective treatment. These are subtleties that evade the untrained eye.

sábado, 13 de octubre de 2012

What Do We Mean by "Usable"? - I

In large part, what makes something usable is the absence of frustration in using it. As we lay out the process and method for conducting usability testing in this book, we will rely on this definition of "usability": when a product or service is truly usable, the user can do what he or she wants to do the way he or she expects to be able to do it, without hindrance, hesitation, or questions.
But before we get into defining and exploring usability testing, let's talk a bit more about the concept of usability and its attributes. To be usable, a product or service should be useful, efficient, effective, satisfying, learnable, and accessible.
Usefulness concerns the degree to which a product enables a user to achieve his or her goals, and is an assessment of the user's willingness to use the product at all. Without that motivation, other measures make no sense, because the product will just sit on the shelf. If a system is easy to use, easy to learn, and even satisfying to use, but does not achieve the specific goals of a specific user, it will not be used even if it is given away for free. Interestingly enough, usefulness is probably the element that is most often overlooked during experiments and studies in the lab.
In the early stages of product development, it is up to the marketing team to ascertain what product or system features are desirable and necessary before other elements of usability are even considered. Lacking that, the development team is hard-pressed to take the user's point of view and will simply guess or, even worse, use themselves as the user model. This is very often where a system-oriented design takes hold.

viernes, 12 de octubre de 2012

What Makes Something Usable?

What makes a product or service usable?
Usability is a quality that many products possess, but many, many more lack. There are historical, cultural, organizational, monetary, and other reasons for this, which are beyond the scope of this book. Fortunately, however, there are customary and reliable methods for assessing where design contributes to usability and where it does not, and for judging what changes to make to designs so a product can be usable enough to survive or even thrive in the marketplace.
It can seem hard to know what makes something usable because unless you have a breakthrough usability paradigm that actually drives sales (Apple's iPod comes to mind), usability is only an issue when it is lacking or absent. Imagine a customer trying to buy something from your company's e-commerce web site. The inner dialogue they may be having with the site might sound like this: I can't find what I'm looking for. Okay, I have found what I'm looking for, but I can't tell how much it costs. Is it in stock? Can it be shipped to where I need it to go? Is shipping free if I spend this much? Nearly everyone who has ever tried to purchase something on a web site has encountered issues like these.
It is easy to pick on web sites (after all, there are so very many of them), but there are myriad other situations where people encounter products and services that are difficult to use every day. Do you know how to use all of the features on your alarm clock, phone, or DVR? When you contact a vendor, how easy is it to know what to choose in their voice-based menu of options?