Saturday, August 17, 2013

Select a Data Collection Method

Once you are clear about the type of data you want to collect and how it will help you to achieve the test objectives, the next challenge is to develop the means for collecting that data. In terms of data collection instruments, you are limited only by your imagination, resources, and the time required to develop the instruments. Will you have help with the collection? Will you have help reducing and analyzing the data once it is collected? It makes no sense at all to design a data collection method that requires extensive analysis of 20 hours of video recordings if you only have 2 weeks after the test in which to provide a test report.
Envision yourself creating the test report and even making a presentation to members of the team. Visualize the type of findings you will want to report, if not the actual content. Then, given the amount of time and resources at your disposal, plan how you will get to that point once the test has ended. Your data collection effort should be bounded by that constraint, unless you realistically feel that you or someone else will be able to analyze the additional data later.

Thursday, August 15, 2013

Review the Research Question(s) Outlined in Your Test Plan

If, after reviewing these, you have a difficult time ascertaining what data to collect, regard that as an important message. More often than not, it means that you need to clarify the research question(s) to make them more specific. This may require reinterviewing the designers and developers and educating them as well.
Decide What Type of Information to Collect
Match the type of data you'll collect to a problem statement of the test plan. Figure 8-5 shows several matchups of problem statements with data collected.

Wednesday, August 14, 2013

Data Collection Tools - II

For simplicity's sake, data collected during a test falls into two major categories:
- Performance data: This consists of objective measures of behavior, such as error rates, time, and counts of observed behavior elements. This type of data comes from observation of either the live test or review of the
video recording after the test has been completed. The number of errors made on the way to completing a task is an example of a performance measure.
- Preference data: Preference data consists of the more subjective data that measures a participant's feelings or opinions of the product. This data is typically collected via written, oral, or even online questionnaires or through the debriefing session after the test. A rating scale that measures how a participant feels about the product is an example of a preference measure.

Both performance and preference data can be analyzed quantitatively or qualitatively. For example, on the performance side, you can analyze errors quantitatively simply by counting the number of errors made on a task. You can also analyze errors qualitatively to expose places where the user does not understand the product's conceptual model.
On the preference side, a quantitative measure would be the number of unsolicited negative comments a participant makes. Or, qualitatively, you can analyze each negative comment to discover what aspect of the product's design the comment refers to.
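As a rough illustration of the quantitative side of both measures, hypothetical session events can be tallied per task. The task names and event types below are invented for the example, not taken from any particular logging tool:

```python
# Hypothetical session log: each entry is (task, event_type).
events = [
    ("create_record", "error"),
    ("create_record", "error"),
    ("create_record", "negative_comment"),
    ("search", "error"),
    ("search", "negative_comment"),
    ("search", "negative_comment"),
]

def tally(events, event_type):
    """Count occurrences of one event type per task (a quantitative measure)."""
    counts = {}
    for task, kind in events:
        if kind == event_type:
            counts[task] = counts.get(task, 0) + 1
    return counts

errors_per_task = tally(events, "error")                 # performance measure
negatives_per_task = tally(events, "negative_comment")   # preference measure
print(errors_per_task)     # {'create_record': 2, 'search': 1}
print(negatives_per_task)  # {'create_record': 1, 'search': 2}
```

The qualitative analysis described above would then work from the content of each error or comment, not just these counts.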
In terms of the product development lifecycle, exploratory (or formative) tests usually favor qualitative research, because of the emphasis on the user's understanding of high-level concepts. Validation (or summative) tests favor quantitative research, because of the emphasis on adherence to standards or measuring against benchmarks.
Following are examples of performance data.

Tuesday, August 13, 2013

Data Collection Tools - I

Taking notes during the typical usability testing session can be incredibly difficult. If you are moderating the test and taking notes yourself, your attention will be divided between recording what you observe and observing what is happening now. We strongly encourage you to enlist someone else to take notes or record data if at all possible. If it isn't possible, you should give even greater consideration to designing the most efficient, effective data collection tools (keeping in mind that by "data collection tool" we mean anything from a basic Word document with space for notes to sophisticated tracking software).
The purpose of the data collection instruments is to expedite the collection of all data pertinent to the test objectives. The intent is to collect data during the test as simply, concisely, and reliably as possible. Having a good data collection tool will assist analysis and reporting as well.
There are many data measures from which to choose, and these should be tied back to the test objectives and research questions. Let us not get ahead of ourselves though. Before simply collecting data, you need to consider the following six basic questions:
- What data will address the problem statement(s) in your test plan?
- How will you collect the data?
- How will you record the data?
- How do you plan to reduce and analyze the data?
- How and to whom will you report the data?
- What resources are available to help with the entire process?
The answers to these questions will drive the development of the instruments, tools, and even the number of people required to collect the data.
Data collection should never just be a hunting expedition, where you collect information first, and worry about what to do with it later. This holds true even for the most preliminary type of exploratory testing. If you take that approach, you run the risk of matching the data to hoped-for results.
Also, an imprecise shotgun approach typically results in an unwieldy amount of data to reduce and analyze, and tends to confuse more than enlighten. The type of data you collect should be as clear in your mind as possible before the test and should be tied directly to the questions and issues you are trying to resolve.
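To make the six questions above concrete, here is a minimal sketch of a data collection record that ties every logged observation back to a test-plan research question. All field names, IDs, and values are hypothetical, not from any particular tool:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One logged data point, always linked to a research question."""
    participant_id: str
    task: str
    research_question: str   # which test-plan question this addresses
    measure: str             # e.g. "error_count", "time_on_task"
    value: float
    note: str = ""           # optional qualitative context

log = [
    Observation("P1", "task1", "RQ1", "error_count", 2),
    Observation("P1", "task1", "RQ1", "time_on_task", 94.0),
    Observation("P2", "task1", "RQ2", "error_count", 0),
]

# Filtering by research question keeps reduction and analysis bounded,
# rather than a hunting expedition through unlabeled data.
rq1 = [o for o in log if o.research_question == "RQ1"]
```

Forcing every record to name its research question up front is one way to enforce the discipline the text recommends: data with no question attached simply has no place to go.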

Monday, August 12, 2013

Test the Questionnaire

Try the questionnaire out on someone who fits the user profile or even on a colleague. It is amazing how easily ambiguity can sneak in. Piloting the background questionnaire is just as important as pilot testing the other materials for the test, such as the screening questions (see Chapter 7) and the session script (discussed later in this chapter).

Sunday, August 11, 2013

Make the Questionnaire Easy to Fill Out and Compile

Design the questionnaire for the ease of both yourself (in analyzing the responses) and the participants (in having to remember their history), by avoiding open-ended questions. Have the participants check off boxes or circle answers. This will also minimize their time filling out the questionnaire (important if they will be filling it out the day of the test) and will decrease the number of unintelligible answers. You may want to automate the questionnaire by using a survey tool or other online form maker.
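One payoff of closed-ended answers is that compilation becomes trivial. As a sketch, assuming hypothetical question IDs and checkbox options, responses tally up in a few lines:

```python
from collections import Counter

# Hypothetical closed-ended responses, one dict per participant.
# Question IDs and answer options are illustrative only.
responses = [
    {"dbms_used": "yes", "years_experience": "1-3"},
    {"dbms_used": "no",  "years_experience": "0"},
    {"dbms_used": "yes", "years_experience": "3+"},
]

def compile_question(responses, question_id):
    """Tally answers to one closed-ended question across all participants."""
    return Counter(r[question_id] for r in responses)

print(compile_question(responses, "dbms_used"))  # Counter({'yes': 2, 'no': 1})
```

An open-ended question, by contrast, would leave you hand-coding free text before any tally is possible, which is exactly the analysis burden the advice above avoids.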

Saturday, August 10, 2013

Focus on Characteristics That May Influence Performance

Ascertain all background information that you feel may affect the performance of the participants. This could expand on the classifiers you specified in the screening process. As when developing screening questions for recruiting participants, form questions that focus on the behaviors you are interested in exploring. For example, in a study for an entertainment news web site, you might collect information about the last time the participant downloaded shows or movies from similar web sites. However, unlike screening, now you can ask more questions about participants that could set a context in which to analyze the performance data from the session. For example, for the test of the entertainment news web site, you could ask about other, similar interests or habits, such as magazine purchases or what the last five shows or movies were that participants watched and in what venue.

Friday, August 9, 2013

Background Questionnaire - II

To confirm that the "right" people show up. It is amazing how often mix-ups occur when there are so many details to manage. If you did not make the phone calls or write the emails yourself to screen and select participants, it is important to verify that the people who show up actually possess the skills and knowledge you expected. It is not that unusual for agencies to misunderstand what you are doing and to send unqualified people. If you do get the wrong people showing up, you will need to decide on the spot whether to use them or release them. You will also need to communicate to the person or organization supplying your participants that they need to do a better job of qualifying the participants.

To provide a synopsis of each participant for the test moderator and for product team members who observe the test. If you anticipate that the usability tests will be observed by a design team or other interested parties, it is important that they know the background of each person while they observe the test. It is both confusing and misleading to observe a test without a sense of the skills, knowledge, and experience of the specific participant. There is no basis on which to judge how participants are doing or why they are performing as they are without this knowledge. To avoid this potential misunderstanding, make the data from the screening questionnaire and the background questionnaire available to the observers after the participant fills it out. The observers can reference it while the test proceeds.

Thursday, August 8, 2013

Background Questionnaire - I

The background questionnaire provides historical information about the participants that will help you to understand their behavior and performance during a test. It is composed of questions that reveal the participant's experience, attitudes, and preferences in all areas that might affect how they perform.
For example, if you are testing a database management system (DBMS), it will be helpful to know if the participants have used a DBMS before, and, if so, which one(s) and for how long. While you will not know if that experience will affect their performance negatively or positively, you almost certainly know that it will affect their performance differently than a person without DBMS experience.
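Continuing the DBMS example with invented numbers, background answers give you a grouping variable for the performance data, so differences between experienced and inexperienced participants become visible. Participant IDs, group labels, and times below are all hypothetical:

```python
# Hypothetical data: background answers and task times, keyed by participant.
background = {"P1": "prior_dbms", "P2": "no_dbms", "P3": "prior_dbms"}
task_times = {"P1": 120.0, "P2": 300.0, "P3": 150.0}  # seconds on one task

def mean_time_by_group(background, task_times):
    """Average task time for each background group."""
    groups = {}
    for pid, group in background.items():
        groups.setdefault(group, []).append(task_times[pid])
    return {g: sum(ts) / len(ts) for g, ts in groups.items()}

print(mean_time_by_group(background, task_times))
# {'prior_dbms': 135.0, 'no_dbms': 300.0}
```

With only a handful of participants per group, such a split is context for interpretation, not a statistical claim; the point is simply that the background questionnaire supplies the grouping that makes the comparison possible.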

The background questionnaire is typically filled out just prior to the test. 
Sometimes, particularly if it is lengthy, you might mail it to the participant ahead of time.

The information you include in the background questionnaire is initially culled from the participant profile in your test plan. The background questionnaire is similar to a phone screener, although more detailed. The phone screen need only determine if a potential participant meets the selection requirements and can be classified in a user group. The background questionnaire, however, goes further by exploring previous training and experience. This more specific information can help explain a participant's behavior during the test. Perhaps the participant is choosing buttons or menu selections based on expectations formed from using a competitive product that no other participant used.

In addition to the previously stated reasons for acquiring the correct cross-section of participants and providing insight about each person's performance from a historical perspective, there are two more purposes for the background questionnaire. Both come into play on the day of the test, just prior to its beginning.

Wednesday, August 7, 2013

Refer to Any Forms That Need to Be Completed and Pass Them Out

This includes background questionnaires, pre-test questionnaires, permissions, and so on.
Figures 8-2 and 8-3 show examples of orientation scripts for different types of tests.

Tuesday, August 6, 2013

Ask for Any Questions

Before you begin, be absolutely sure that the participants understood your instructions. Because of nerves or poor acoustics in the room, participants may not have fully heard or understood what you said. If you are not sure, ask them to parrot back a particular point by inquiring, for example, "Do you remember how to use the thinking-aloud protocol?"

Monday, August 5, 2013

Mention That It Is Okay to Ask Questions at Any Time

Of course, explain that you may not answer those questions in order to simulate the situation of their being alone and having to rely on their own resources and materials at hand. Make that aspect of your role very clear. You are not there to solve problems participants encounter.

Sunday, August 4, 2013

Explain Any Unusual Requirements

Demonstrate and practice how these special situations work and reassure the participant that you will be available to remind him or her how to do it, if need be.

Saturday, August 3, 2013

Assure the Participant That He or She Is Not Being Tested

This is probably the most familiar adage of testing, and you should certainly say it. However, do not hold out hope that participants will necessarily believe you just because you say it. This has become the "it's for your own good" slogan of our youth: often repeated but never believed. Only the manner in which the test is conducted, the way in which you react to the person's behavior, and the behavior of the observers will cause the truth of this adage to sink in. Your manner, body language, and voice modulation during difficulties all communicate much more than the words alone. In sum, do not be surprised if, at the first sign of difficulty, you hear the participant mutter that familiar refrain, "Oh, I'm an idiot. That wasn't the program's fault," or "I just need more time to learn how to use it."

Friday, August 2, 2013

Explain What Is Expected of the Participant

Describe how the usability test will proceed without providing every last detail. Broach the subject of nondisclosure, if you have not already done so, and explain how it will be handled. Encourage participants to perform as they normally would (e.g., same speed and attention to detail, given that it is an artificial situation). Encourage them to ask questions and to take breaks if they need to.
Avoid any reference whatsoever to your expectations of their behavior or performance. Remain absolutely neutral in terms of their expected performance.
For example, do not say any of the following in order to make participants less nervous:
- "Most people find this extremely easy."
- "We brought you in for an extremely simple test."
- "I'm sure you'll have no difficulty with this product, so don't be nervous."
While well intentioned, these are exactly the wrong things to say. By making those references, you have essentially put the participants on the defensive if things do get difficult. At the slightest hint of adversity, they may begin to hurry and try harder in order to fulfill your expectations. After all, if the test is simple and they are having a hard time, then by definition they must not be very capable. No one likes to think of himself or herself that way.

Thursday, August 1, 2013

Describe the Testing Setup

Point out and describe the equipment. Let the participants know whether they will be staying where they are, moving to another room, and so on. Locate the restrooms. Let them know if:
- People are watching from behind a one-way mirror or from another room via cameras. Do not get cute here and say "Oh that old thing. It's just so we can see all sides of the equipment."
- The session is being recorded. It is never a good idea to lie to participants about being observed or recorded in order not to make them nervous.
First of all, it is not ethical. Second, once the test starts, almost all participants forget their concerns about being watched and recorded, depending on the testing environment.