Saturday, August 17, 2013

Select a Data Collection Method

Once you are clear about the type of data you want to collect and how it will help you to achieve the test objectives, the next challenge is to develop the means for collecting that data. In terms of data collection instruments, you are limited only by your imagination, resources, and the time required to develop the instruments. Will you have help with the collection? Will you have help reducing and analyzing the data once it is collected? It makes no sense at all to design a data collection method that requires extensive analysis of 20 hours of video recordings if you only have 2 weeks after the test in which to provide a test report.
Envision yourself creating the test report and even making a presentation to members of the team. Visualize the type of findings you will want to report, if not the actual content. Then, given the amount of time and resources at your disposal, plan how you will get to that point once the test has ended. Your data collection effort should be bounded by that constraint, unless you realistically feel that you or someone else will be able to analyze the additional data later.

Thursday, August 15, 2013

Review the Research Question(s) Outlined in Your Test Plan

If, after reviewing these, you have a difficult time ascertaining what data to collect, regard that as an important message. More often than not, it means that you need to clarify the research question(s) to make them more specific. This may require re-interviewing the designers and developers and educating them as well.
Decide What Type of Information to Collect
Match the type of data you'll collect to a problem statement in your test plan. Figure 8-5 shows several matchups of problem statements with data collected.

Wednesday, August 14, 2013

Data Collection Tools - II

For simplicity's sake, data collected during a test falls into two major categories:
- Performance data: This consists of objective measures of behavior, such as error rates, time, and counts of observed behavior elements. This type of data comes from observation of either the live test or review of the video recording after the test has been completed. The number of errors made on the way to completing a task is an example of a performance measure.
- Preference data: Preference data consists of the more subjective data that measures a participant's feelings or opinions of the product. This data is typically collected via written, oral, or even online questionnaires or through the debriefing session after the test. A rating scale that measures how a participant feels about the product is an example of a preference measure.

Both performance and preference data can be analyzed quantitatively or qualitatively. For example, on the performance side, you can analyze errors quantitatively simply by counting the number of errors made on a task. You can also analyze errors qualitatively to expose places where the user does not understand the product's conceptual model.
On the preference side, a quantitative measure would be the number of unsolicited negative comments a participant makes. Or, qualitatively, you can analyze each negative comment to discover what aspect of the product's design the comment refers to.
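As a rough illustration of this split, here is a minimal sketch in Python (the observation codes, tasks, and notes are hypothetical, not from the text) of how logged observations could be tallied quantitatively while keeping the raw notes available for qualitative review:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Observation:
    task: str   # task the participant was attempting
    kind: str   # "performance" or "preference"
    code: str   # e.g., "error", "negative_comment" (illustrative codes)
    note: str   # verbatim detail, kept for qualitative review

observations = [
    Observation("checkout", "performance", "error", "Clicked Back instead of Submit"),
    Observation("checkout", "performance", "error", "Missed the coupon field"),
    Observation("search", "preference", "negative_comment", "Filters feel hidden"),
]

# Quantitative: count errors per task.
errors_per_task = Counter(o.task for o in observations
                          if o.kind == "performance" and o.code == "error")
print(errors_per_task)  # Counter({'checkout': 2})

# Qualitative: read back the note behind each negative comment.
for o in observations:
    if o.code == "negative_comment":
        print(o.task, "->", o.note)
```

The same logged records feed both analyses, which is the point: the counts come for free once the observations are captured consistently, and the notes remain for the deeper "why" questions.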
In terms of the product development lifecycle, exploratory (or formative) tests usually favor qualitative research, because of the emphasis on the user's understanding of high-level concepts. Validation (or summative) tests favor quantitative research, because of the emphasis on adherence to standards or measuring against benchmarks.
Following are examples of performance data.

Tuesday, August 13, 2013

Data Collection Tools - I

Taking notes during the typical usability testing session can be incredibly difficult. If you are moderating the test and taking notes yourself, your attention will be divided between recording what you observe and observing what is happening now. We strongly encourage you to enlist someone else to take notes or record data if at all possible. If it isn't possible, you should give even greater consideration to designing the most efficient, effective data collection tools (keeping in mind that by "data collection tool" we mean anything from a basic Word document with space for notes to sophisticated tracking software).
The purpose of the data collection instruments is to expedite the collection of all data pertinent to the test objectives. The intent is to collect data during the test as simply, concisely, and reliably as possible. Having a good data collection tool will assist analysis and reporting as well.
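As a sketch of how simple such a tool can be, here is a hypothetical timestamped logger in Python (the observation codes and filename are made up for illustration). Restricting the note-taker to a fixed vocabulary of codes is one way to keep entries fast to record and easy to reduce later:

```python
import csv
import time

# A fixed vocabulary keeps entries consistent across observers (illustrative codes).
CODES = {"err": "error", "hes": "hesitation",
         "neg": "negative comment", "pos": "positive comment"}

def log_session(path="session01_log.csv"):
    """Prompt-driven logger: type a code plus an optional note; a blank line ends."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_sec", "code", "note"])
        start = time.time()
        while True:
            entry = input("code note> ").strip()
            if not entry:
                break
            code, _, note = entry.partition(" ")
            if code not in CODES:
                print("unknown code; use one of:", ", ".join(CODES))
                continue
            writer.writerow([round(time.time() - start, 1), CODES[code], note])

if __name__ == "__main__":
    log_session()
```

A paper form with the same fixed codes and a column for elapsed time serves the same purpose; the format matters less than deciding the vocabulary before the session starts.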
There are many data measures from which to choose, and these should be tied back to the test objectives and research questions. Let us not get ahead of ourselves though. Before simply collecting data, you need to consider the following six basic questions:
- What data will address the problem statement(s) in your test plan?
- How will you collect the data?
- How will you record the data?
- How do you plan to reduce and analyze the data?
- How and to whom will you report the data?
- What resources are available to help with the entire process?
The answers to these questions will drive the development of the instruments, tools, and even the number of people required to collect the data.
Data collection should never just be a hunting expedition, where you collect information first, and worry about what to do with it later. This holds true even for the most preliminary type of exploratory testing. If you take that approach, you run the risk of matching the data to hoped-for results.
Also, an imprecise shotgun approach typically results in an unwieldy amount of data to reduce and analyze, and tends to confuse more than enlighten. The type of data you collect should be as clear in your mind as possible before the test and should be tied directly to the questions and issues you are trying to resolve.

Monday, August 12, 2013

Test the Questionnaire

Try the questionnaire out on someone who fits the user profile or even on a colleague. It is amazing how easy it is for ambiguity to sneak in. Piloting the background questionnaire is just as important as pilot testing the other materials for the test, such as the screening questions (see Chapter 7) and the session script (discussed later in this chapter).

Sunday, August 11, 2013

Make the Questionnaire Easy to Fill Out and Compile

Design the questionnaire for the ease of both yourself (in analyzing the responses) and the participants (in having to remember their history), by avoiding open-ended questions. Have the participants check off boxes or circle answers. This will also minimize their time filling out the questionnaire (important if they will be filling it out the day of the test) and will decrease the number of unintelligible answers. You may want to automate the questionnaire by using a survey tool or other online form maker.
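One lightweight way to get there, sketched in Python (the questions, answer options, and responses below are invented for illustration), is to define every item with a fixed set of answer choices so that responses can be compiled automatically with no free text to decipher:

```python
from collections import Counter

# Closed-ended items: each question carries its allowed answers (illustrative content).
QUESTIONS = {
    "hours_online_per_week": ["0-5", "6-10", "11-20", "21+"],
    "last_download_of_show_or_movie": ["this week", "this month", "this year", "never"],
}

def tally(responses):
    """Compile checked-box answers into per-question counts."""
    counts = {q: Counter() for q in QUESTIONS}
    for r in responses:
        for q, answer in r.items():
            if answer in QUESTIONS[q]:
                counts[q][answer] += 1
    return counts

responses = [
    {"hours_online_per_week": "6-10", "last_download_of_show_or_movie": "this month"},
    {"hours_online_per_week": "21+", "last_download_of_show_or_movie": "this week"},
]
print(tally(responses))
```

Whether you use a survey tool, an online form, or a paper sheet, the same principle applies: if every answer is one of a known set, compilation is counting, not interpretation.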

Saturday, August 10, 2013

Focus on Characteristics That May Influence Performance

Ascertain all background information that you feel may affect the performance of the participants. This could expand on the classifiers you specified in the screening process. Similar to how you developed screening questions when recruiting participants, form questions that focus on behaviors you are interested in exploring. For example, in a study for an entertainment news web site, you might collect information about the last time the participant downloaded shows or movies from similar web sites. However, unlike screening, now you can ask more questions about participants that could set a context in which to analyze the performance data from the session. For example, for the test of the entertainment news web site, you could ask about other, similar interests or habits, such as magazine purchases, or what the last five shows or movies were that participants watched and in what venue.