lunes, 31 de diciembre de 2012

Grounding in the Basics of User-Centered Design

Grounding in the basics of human information processing, cognitive psychology, and user-centered design (essentially the domain of the human factors specialist) helps immensely because it enables the test moderator to sense, even before the test begins, which interactions, operations, messages, or instructions are liable to cause problems. Test moderators with this background know which problems can be generalized to the population at large and which are more trivial. This helps them ascertain when to probe further and which issues need to be explored thoroughly during the debriefing session.
This background can also eliminate the need to test situations that are known to cause problems for users, such as the inappropriate use of color or the incorrect placement of a note in a manual. Lastly, a strong background in usability engineering helps the test moderator focus on fixing the important issues after a test is complete.

domingo, 30 de diciembre de 2012

Characteristics of a Good Test Moderator

Regardless of who conducts the test, whether yourself, internal staff, or an external consultant, and regardless of that person's background, there are several key characteristics that the most effective test moderators share. These key characteristics are listed and described in the paragraphs that follow. If you are personally considering taking on the role of test moderator in your organization, use them as a checklist of the skills you need to acquire. If you are considering using an internal person or hiring an external person to perform this role, use them to help evaluate that person's capabilities.

sábado, 29 de diciembre de 2012

External Consultant

Another option is to hire an external consultant. Many human factors, industrial design, market research, and usability engineering firms now offer usability testing as one of their services, including the use of their test
laboratories. You may simply want to outsource the usability test to such a firm, or use such a firm to "kick off" a testing program in your organization. 
Using an external consulting company guarantees the objectivity that testing requires. Even some organizations that employ internal human factors specialists to work on the design and development of products still outsource the testing work for the greater sense of impartiality it provides.
If you know your organization is committed to eventually forming a long-term testing program on site, then seek out a consulting company that will work with you to transfer the knowledge of testing into your organization.
Even if you are unsure about the long-term prospects for testing in your company, it still might be easier to have outside help with an initial test. Just make sure that if you conduct the test off-site, its location is physically close enough to allow development team members to attend the test sessions. Do not simply farm out the test to a remote location. (Although, in a pinch, team members could observe tests from their remote locations via Morae, Camtasia, or other electronic monitoring tools.) Viewing tests in person is much more effective than watching or listening to a recording, especially for those who are skeptical about the value of testing.

viernes, 28 de diciembre de 2012

Rotating Team Members

Let's suppose that no one from the disciplines listed previously is available to help on your project, and you are still determined not to test your own materials. Another alternative is to draw upon colleagues of similar disciplines who are not working on the same product. An example of this approach is for technical communicators to test each other's manuals or for software engineers to test each other's program modules.
In such a scenario, the person whose product is being tested could help prepare many of the test materials and make the pretest arrangements, then turn over the actual moderating of the test to a colleague. One of the advantages of this approach is that two (or more) heads are better than one, and it is always beneficial to have someone other than yourself help prepare the test. 
The person acting as the test moderator would need time to become familiar with the specific product being tested and to prepare to test it, in addition to the time required to actually moderate the test.
Should you decide to implement this approach, you must plan ahead in order to build the test into your mutual schedules. You cannot expect your colleague to drop everything he or she is working on to help you. Of course, you would reciprocate and serve as test moderator for your colleague's product.

jueves, 27 de diciembre de 2012

Technical Communicator

Technical communicators, including technical writers and training specialists, often make wonderful test moderators. Many technical communicators already serve as user advocates on projects, and their profession requires them to think as a user in order to design, write, and present effective support materials.

miércoles, 26 de diciembre de 2012

Marketing Specialist

A marketing specialist is typically customer-oriented, user-oriented, or both, has good interpersonal and communication skills, and would be very interested in improving the quality of products. This type of specialist may already be involved with your product, but usually not to the detailed level that would tend to disqualify him or her from conducting the testing.

martes, 25 de diciembre de 2012

Human Factors Specialist

A human factors specialist is the most likely candidate to conduct a usability test. This type of person typically has an advanced degree in psychology, industrial engineering, or a similar discipline, and is familiar with experimental methodology and test rigor. Just as important, the human factors specialist is grounded in the basics of information processing, cognitive psychology, and other disciplines related to the development of usable products, systems, and support materials. This grounding is crucial in differentiating the important from the superficial usability factors in a product and ultimately in designing and conducting the test.
With the current focus on usability engineering and testing, it is highly probable that human factors specialists within your organization are already involved with testing in one form or another.

lunes, 24 de diciembre de 2012

Who Should Moderate?

One of the basic tenets of usability testing — and of this book — is that it is almost impossible to remain objective when conducting a usability test of your own product. There is simply too strong a tendency to lead participants in a direction that you want the results to go, rather than acting as a neutral enabler of the process. This is even true for experienced test moderators who conduct the test from an external control room. In fact, asking someone to test his or her own product is like asking parents to objectively evaluate the abilities of their child. It is an impossible endeavor.
Having said that, if you are the only one available to test your product, do so. In almost every case, it is still better to test than not to test, even if you must do the testing yourself. For the long term, however, you will want to get out of the self-testing business as soon as possible.
Imagine that you want to conduct a test on a product for which you have primary responsibility, and if possible you would like someone less involved with the product to conduct the test. You can help develop the test materials, make arrangements, and select participants, but you need a more objective person to handle the actual test moderating. Suppose also that your organization currently has no in-house testing staff and does not plan to introduce one shortly. To whom should you look for help?
The following sources represent a number of areas from which you can find candidates who possess the requisite skills to conduct a test, or who could head up the beginnings of an internal testing group. They may or may not already be working on your product.

domingo, 23 de diciembre de 2012

Skills for Test Moderators

The role of the test moderator or test administrator is the most critical of all the test team members, presuming that you even have the luxury of a test team. In fact, the moderator is the one team member that you absolutely must have in order to conduct the test. The moderator is ultimately responsible for all preparations, including test materials, participant arrangements, and coordination of the efforts of other members of the test team.
During the test, the moderator is responsible for all aspects of administration, including greeting the participant, collecting data, assisting and probing, and debriefing the participant. After the test, he or she needs to collate the day's data collection, meet with and debrief other team members, and ensure that the testing is tracking with the test objectives. If the usability test were an athletic contest, the moderator would be the captain of the team. As such, he or she has the potential to make or break the test. An ineffective moderator can seriously negate test results and even waste much of the preliminary preparation work.
This chapter discusses several alternatives for acquiring test moderators from inside and outside your organization, as well as the desired characteristics of an effective test moderator. Chapter 9 includes guidelines for moderating test sessions, including information about when and how to intervene, and the advantages and disadvantages of using a "think-aloud" protocol.

sábado, 22 de diciembre de 2012

Brief Summary of Test Outcome

Every major task passed the 70 percent successful completion criterion, with the exception of two. The team felt that the problems associated with those tasks could be corrected prior to release, and wanted to schedule a very quick test to confirm. Twenty recommendations from the test were identified for implementation prior to release, and at least fifteen recommendations were deferred to future releases.
Providing a "tour" of advanced features prior to the test proved to be a stroke of genius. Participants loved it, and some even insisted on taking it back to their current jobs. One user suggested the company market it, or a longer virtual seminar, as a separate product for customers, and that is already in the works.
The revamped organization of the user guide was much more in tune with users' expectations than the previous set, although the index proved difficult to use. More task-oriented items must be added to the index to improve accessibility.
As you can tell from this condensed series of tests, the product evolved over time and reflected each test's findings. We strongly advocate such an iterative approach, but again, do not be discouraged if you can manage only one test to begin. Now let's talk about what it takes to be a good test moderator.

viernes, 21 de diciembre de 2012

Test 3: Verification Test - II

Test Objectives
- Verify that 70 percent of participants can meet established successful completion criteria for each major task scenario. (The 70 percent benchmark is something that Jeff has personally evolved toward over time, and that Dana has used effectively. It provides a reasonably challenging test while still leaving the design team some work to do before product release to move that number toward a more accepted and traditional 95 percent benchmark. A benchmark of 100 percent is probably not realistic except for tasks involving danger, damage to the system, or possible loss of life, and should never be used lightly. In the 1960s, NASA found that achieving 100 percent performance cost as much as 50 times the cost of achieving 95 percent performance. It is likely that such costs have gone down over 40 years, but the point is that you should only use the higher benchmark if you are willing to pay the piper.) A sketch of how such a benchmark check might be computed appears after this list.
- Identify any tasks and areas of the product that risk dire consequences (e.g., are unusable, contain destructive bugs) if the product is released as is.
- Identify all usability deficiencies and the sources of those problems. Determine which deficiencies must be repaired before release and which, if there is not time within the schedule, can be implemented in the next release.
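
As promised above, here is a minimal sketch, in Python, of tallying per-task completion rates against the 70 percent benchmark. All task names and outcomes are hypothetical, invented purely to illustrate the arithmetic.

# Minimal sketch (hypothetical task names and outcomes) of checking
# each task's successful-completion rate against a 70 percent benchmark.

# True = participant met the task's successful completion criteria.
results = {
    "create_appointment": [True, True, False, True, True, True, True, False],
    "share_calendar":     [True, False, False, True, False, True, True, False],
}

BENCHMARK = 0.70  # the 70 percent benchmark discussed above

for task, outcomes in results.items():
    rate = sum(outcomes) / len(outcomes)
    verdict = "meets benchmark" if rate >= BENCHMARK else "falls short"
    print(f"{task}: {rate:.0%} successful completion ({verdict})")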

jueves, 20 de diciembre de 2012

Test 3: Verification Test - I

The Situation
Some weeks have passed. For this last test, a fully functional product with comprehensive help topics has been prepared. All sections of the documentation have been through one draft, with half of the sections undergoing a second draft. The documentation has a rough index for the test. A small "tour" for users about quarterly and semi-annual tasks was developed. For the major tasks of the product, specific, measurable time and accuracy criteria have been developed. For example, one criterion reads:
Using the setup guide, a user will be able to correctly implement Video and Network Preferences within 10 minutes, with no more than two attempts required.
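
Criteria written this way lend themselves to being recorded and checked mechanically. Below is a minimal sketch, assuming hypothetical session data, of encoding such a criterion so a logged result can be verified against it; the names and numbers simply mirror the sample criterion above and are not part of any real tool.

from dataclasses import dataclass

# Hypothetical encoding of the sample criterion above.
@dataclass
class Criterion:
    task: str
    max_minutes: float   # time limit for the task
    max_attempts: int    # maximum attempts allowed

    def met_by(self, minutes: float, attempts: int) -> bool:
        """Return True if a participant's result satisfies the criterion."""
        return minutes <= self.max_minutes and attempts <= self.max_attempts

setup_prefs = Criterion(
    task="Implement Video and Network Preferences using the setup guide",
    max_minutes=10,
    max_attempts=2,
)

# A participant who finished in 8.5 minutes on the second attempt passes.
print(setup_prefs.met_by(minutes=8.5, attempts=2))  # True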

miércoles, 19 de diciembre de 2012

Test 2: Assessment Test - III

Brief Summary of Test Outcome
Many difficulties in operations were identified, but the users' workflow matched that employed by the design team for the product's interface operations. Essentially, the high-level interface "works," and the lower-level details remain to be implemented and refined. The help information was accurate and helpful, but users rarely invoked it unless prompted. There was a strong preference for trial and error with this particular user audience. When users were prompted to try the help, it was found that the organization of the help topics needs to be extensively revamped and made more task-oriented. Even more theoretical, contextual information needs to be included for the most advanced users. This last issue turned out to be very controversial because designers felt it was not their responsibility to force particular operational approaches on corporate working groups. It is possible that an interactive primer for users may be required for infrequent but important tasks.

martes, 18 de diciembre de 2012

Test 2: Assessment Test - II

Main Test Objectives
■ Confirm whether the findings of the original test adequately match interface operations with the user's workflow.
■ Expose all major usability deficiencies and their causes for the most common tasks.
■ Determine if there is a seamless connection of help topics, embedded assistance, and messaging with the functionality and user interface. Does the software give support at the right moments? Is the help center organized in a way that answers participants' questions?
■ Is the documentation being utilized as designed? Is it accessible? Are graphics understood and at the appropriate level of detail? Are certain sections not read at all? Are additional sections required? Is all terminology clear? Are there areas that require more explanation? Where do participants still have questions? What are their questions?

lunes, 17 de diciembre de 2012

Test 2: Assessment Test - I

The Situation
Time has passed. The single prototype has now been expanded to approximately 60 to 80 percent of its eventual functionality. There are comprehensive help topics for working functions in a separate section of the web site. A first draft of simplified documentation, on 8 1/2" by 11" bond paper, is available for the test, with a table of contents but no index.

domingo, 16 de diciembre de 2012

Brief Summary of Outcome

The test was conducted. As is typical of comparison tests at this point, there was no "winner" per se. Rather, the result was an interface with the best attributes of both prototypes. The navigation schema employing the navigation on the left was most efficient and effective, but some of the options available did not seem to belong with the others, and so will remain in a navigation bar across the top of the main work area. Apparently, the options that remain in the top navigation are performed less frequently.
There were many advanced features for use in a corporate setting that users needed additional information about. Because this personal information manager will be used throughout a large company, some functionality was added to support work group collaboration, which added complexity to the product. To remedy the complexity issue, the first line of defense is to develop a documentation set that includes, at minimum, a guide for setting up preferences, some self-paced training on interface operations, and a procedural user guide for more advanced, less frequent tasks.

sábado, 15 de diciembre de 2012

Main Research Questions

■ Which of the two interface styles/concepts is the most effective? In which is the user better able to remain oriented within the program?
■ What are the best and worst features of each approach?
■ What are the main stumbling blocks for the user?
■ After some period of initial learning, which style has the greatest potential for the power user?
■ For which tasks will users need help, further instructions, or supporting documentation?
■ What types of written information will be required?
  ■ Prerequisite
  ■ Theoretical or conceptual
  ■ Procedural
  ■ Examples
  ■ Training

viernes, 14 de diciembre de 2012

Test 1: Exploratory/Comparison Test - II


during the test, a technical expert will be available to reveal limited but crucial information needed to use the product. (See the gradual disclosure technique in Chapter 13 for an explanation of how to use a technical expert in this way.) Primitive help topics, available on paper only, will be provided to the participant on demand; that is, when the participant clicks the appropriate prompt or asks a question, the test moderator will provide what would normally be embedded assistance, instruction prompts, or messages on paper, as they would appear on the screen.

jueves, 13 de diciembre de 2012

Test 1: Exploratory/Comparison Test - I

The Situation
Two early prototypes of the interface have been developed (see Figures 3-5 and 3-6). The interfaces use the same underlying architecture, programming languages, and functionality, although the layout of their navigation is considerably different from each other. The prototypes have very limited working functionality (e.g., about 30 to 40 percent of the proposed functions work). There is no documentation, but

miércoles, 12 de diciembre de 2012

Iterative Testing: Test Types through the Lifecycle

Now having reviewed the basics of each type of test, let us explore how a series of tests might in fact work. Let's suppose that your company is developing a web-based software application and its associated documentation. The software is a personal information manager, consisting of calendar, contact, and task management functionality. You intend to conduct three usability tests at three different times in the product development lifecycle. Following is a hypothetical series of tests on this product throughout the lifecycle, complete with hypothetical outcomes at the end of each test. Understand that the details have been greatly simplified to provide an overview of iterative design in action.

martes, 11 de diciembre de 2012

Overview of the Methodology

The basic methodology involves the side-by-side comparison of two or more clearly different designs. Performance data and preference data are collected for each alternative, and the results are compared. The comparison test can be conducted informally as an exploratory test, or it can be conducted as a tightly controlled classical experiment, with one group of participants serving as a control group and the other as the experimental group. The form used is dependent on your goals in testing. If conducted as a true experiment designed to acquire statistically valid results, the alternatives should vary along a single dimension — for example, keeping the content and functionality constant, but altering the visual design or the navigation scheme — and the expected results of the test should be formulated as a hypothesis.
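
For the tightly controlled form, the hypothesis is usually evaluated with a standard significance test on the performance data. Here is a minimal sketch, assuming hypothetical task times in seconds and that SciPy is available, of comparing completion times for two design alternatives with a two-sample t-test.

# Hypothetical task-completion times (in seconds) from the control group
# (design A) and the experimental group (design B). A two-sample t-test
# checks the hypothesis that the designs differ in mean completion time.
from scipy import stats

design_a_times = [212, 190, 245, 201, 233, 188, 210, 225]
design_b_times = [176, 195, 168, 182, 204, 171, 189, 198]

t_stat, p_value = stats.ttest_ind(design_a_times, design_b_times)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Mean task times differ significantly between the designs.")
else:
    print("No statistically significant difference was detected.")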
If conducted less formally as a more observational, qualitative study, the alternatives may vary on many dimensions. One needs to ascertain why one alternative is favored over another, and which aspects of each design are favorable and unfavorable. Inevitably, when comparing one or more alternatives in this fashion, one discovers that there is no "winning" design per se. Rather, the best design turns out to be a combination of the alternatives, with the best aspects of each design used to form a hybrid design.
For exploratory comparison tests, experience has shown that the best results and the most creative solutions are obtained by including wildly differing alternatives, rather than very similar alternatives. This seems to work because:
- The design team is forced to stretch its conceptions of what will work rather than just continuing along in a predictable pattern. With the necessity for developing very different alternatives, the design team is forced to move away from predictable ways of thinking about the problem. Typically, this involves revisiting fundamental premises about an interface or documentation format that have been around for years. The result is often a design that redefines and improves the product in fundamental ways.
- During the test, the participant is forced to really consider and contemplate why one design is better and which aspects make it so. It is easier to compare alternatives that are very similar, but harder to compare very different ones. Why? Similar alternatives share the same framework and conceptual model, with only the lower-level operations working differently. Very different alternatives, however, are often based on different conceptual models of how each works and may challenge the user, especially one experienced with the product, to take stock of how the tasks are actually performed.

lunes, 10 de diciembre de 2012

Comparison Test - II

Objective
The comparison test is the fourth type of test and can be used in conjunction with any of the other three tests. It is used to compare two or more designs, such as two different interface styles, or the current design of a manual with a proposed new design, or to compare your product with a competitor's. The comparison test is typically used to establish which design is easier to use or learn, or to better understand the advantages and disadvantages of different designs.

domingo, 9 de diciembre de 2012

Comparison Test - I

When
The comparison test is not associated with any specific point in the product development lifecycle. In the early stages, it can be used to compare several radically different interface styles via an exploratory test, to see which has the greatest potential with the proposed target population. Toward the middle of the lifecycle, a comparison test can be used to measure the effectiveness of a single element, such as whether pictorial buttons or textual buttons are preferred by users. Toward the end of the lifecycle, a comparison test can be
used to see how the released product stacks up against a competitor's product.

sábado, 8 de diciembre de 2012

Overview of the Methodology

The validation test is conducted in a similar fashion to the assessment test, with three major exceptions:
- Prior to the test, benchmarks or standards for the tasks of the test are either developed or identified. These can be specific error or time measures, or as simple as eliminating the problems identified in earlier exploratory tests.
- Participants are given tasks to perform with either very little or no interaction with a test moderator. (And they are probably not asked to "think aloud.")
- The collection of quantitative data is the central focus, although reasons for substandard performance are identified.
Because you are measuring user performance against a standard, you also need to determine beforehand how adherence to the standard will be measured, and what actions will be taken if the product does not meet its standards. For example, if the standard for a task addresses "time to complete," must 70 percent of participants meet the standard, or will you simply compare the standard to the average score of all participants? Under what conditions will the product's schedule be postponed? Will there be time to retest those tasks that did not meet the standard? These are all questions that should be addressed and resolved prior to the test.
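
The "70 percent of participants" rule and the "average score" rule can produce different verdicts on the same data, which is why it pays to pick one beforehand. The following is a minimal sketch contrasting the two, using hypothetical time-to-complete scores and a hypothetical 10-minute standard.

# Hypothetical "time to complete" scores (minutes) for one task.
times = [7.2, 9.5, 11.0, 8.1, 12.4, 6.9, 9.9, 10.8, 7.5, 9.0]
STANDARD = 10.0   # maximum allowed minutes (illustrative)
QUOTA = 0.70      # fraction of participants who must meet the standard

# Rule 1: did at least 70 percent of participants meet the standard?
meeting = sum(1 for t in times if t <= STANDARD) / len(times)
print(f"Participants meeting standard: {meeting:.0%} "
      f"({'pass' if meeting >= QUOTA else 'fail'} under the quota rule)")

# Rule 2: does the average score meet the standard?
average = sum(times) / len(times)
print(f"Average time: {average:.1f} min "
      f"({'pass' if average <= STANDARD else 'fail'} under the average rule)")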
Compared to an assessment test, a validation test requires more emphasis on experimental rigor and consistency, because you are making important quantitative judgments about the product. Make sure that members of the design team have input and buy-in into developing the standards used during the test. That way they will not feel as if the standards were overly difficult or unattainable.

viernes, 7 de diciembre de 2012

Validation or Verification Test - II

Objective
The objective of the validation test is to evaluate how the product compares to some predetermined usability standard or benchmark, whether a project-related performance standard, an internal company or historical standard, or even a competitor's standard of performance. The intent is to establish that the product meets such a standard prior to release, and if it does not, to establish the reason(s) why. The standards usually originate from the usability objectives developed early in the project. These in turn come from previous usability tests, marketing surveys, interviews with users, or simply educated guesses by the development team.
Usability objectives are typically stated in terms of performance criteria, such as efficiency and effectiveness, or how well and how fast the user can perform various tasks and operations. Or the objectives can be stated in terms of preference criteria, such as achieving a particular ranking or rating from users. A verification test has a slightly different flavor. The objective here is to ensure that usability issues identified in earlier tests have been addressed and corrected appropriately.
It only makes sense, then, that the validation test itself can be used to initiate standards within the company for future products. Verification can accomplish the same thing. For example, if one establishes that a setup procedure for a software package works well and can be conducted within 5 minutes with no more than one error, it is important that future releases of the product perform to that standard or better. Products can then be designed with this benchmark as a target, so that usability does not degrade as more functions are added to future releases.

jueves, 6 de diciembre de 2012

Validation or Verification Test - I

When
The validation test, also referred to as the verification test, is usually conducted late in the development cycle and, as the name suggests, is intended to measure the usability of a product against established benchmarks or, in the case of a verification test, to confirm that problems discovered earlier have been remedied and that new ones have not been introduced. Unlike the first two tests, which take place in the middle of a very active and ongoing design cycle, the validation test typically takes place much closer to the release of the product.

miércoles, 5 de diciembre de 2012

Assessment or Summative Test - III

Overview of the Methodology
Often referred to as an information-gathering or evidence-gathering test, the methodology for an assessment test is a cross between the informal exploration of the exploratory test and the more tightly controlled measurement of the validation test. Unlike the exploratory test:
■ The user will always perform tasks rather than simply walking through and commenting upon screens, pages, and so on.
■ The test moderator will lessen his or her interaction with the participant because there is less emphasis on thought processes and more on actual behaviors.
" Quantitative measures will be collected.

martes, 4 de diciembre de 2012

Assessment or Summative Test - II

Objective
The purpose of the assessment test is to expand the findings of the exploratory test by evaluating the usability of lower-level operations and aspects of the product. If the intent of the exploratory test is to work on the skeleton of the product, the assessment test begins to work on the meat and the flesh.
Assuming that the basic conceptual model of the product is sound, this test seeks to examine and evaluate how effectively the concept has been implemented. Rather than just exploring the intuitiveness of a product, you are interested in seeing how well a user can actually perform full-blown realistic tasks and in identifying specific usability deficiencies in the product.

lunes, 3 de diciembre de 2012

Assessment or Summative Test - I

When
The assessment test is probably the most typical type of usability test conducted. Of all the tests, it is probably the simplest and most straightforward for the novice usability professional to design and conduct. Assessment tests are conducted either early or midway into the product development cycle, usually after the fundamental or high-level design or organization of the product has been established.

domingo, 2 de diciembre de 2012

Example of Exploratory Study - II

The purpose of our session today is to review the design for a new web site and get your opinions about it. As we review this design together, I will be asking you a series of questions about what you see and how you expect things to work. Please feel free to ask any questions and offer any observations during the session. There are no wrong answers or stupid questions. This product is in a very preliminary stage; do not be concerned if it acts in unexpected ways.
Let's begin with a hypothetical situation. You would like to understand just what it is that this company offers.
(User indicates how the task would be attempted, or attempts to do the task if the navigation works.)
You would like to calculate the cost for offerings from this company. How do you start?
(User indicates how the task would be attempted, or attempts to do the task if the navigation works.)

Okay, you've found the pricing page. What does it tell you?
(User discusses the information on the page, describing what is useful, clear (or not), and where there could be more detail.)
Figure 3-4: A portion of an exploratory test script

sábado, 1 de diciembre de 2012

Example of Exploratory Study - I

Because the nature of the exploratory test is often somewhat abstract, let's review how a typical exploration might proceed for a product, such as a web site. Assume that you are exploring the home page of a web site, which employs options in the left navigation, each revealing further choices when the user mouses over it. Assume also that this is a very early stage of development, so the user interface simply consists of a single screen without any underlying structure or connections. However, the navigation menus function, so the
user can view the menu options underneath each menu heading, as shown in Figure 3-3.
Now let's look at Figure 3-4, which contains an excerpt of a test script for conducting an exploratory test, to see how the test might proceed. You might continue in this vein, having the user attempt to accomplish realistic tasks with much discussion about assumptions and thought process. Alternatively, though, if the web page is in such a preliminary stage that the navigation does