Wednesday, August 14, 2013

Data Collection Tools - II

For simplicity's sake, data collected during a test falls into two major categories:
- Performance data: This consists of objective measures of behavior, such as error rates, time, and counts of observed behavior elements. This type of data comes from observation of either the live test or a review of the video recording after the test has been completed. The number of errors made on the way to completing a task is an example of a performance measure.
- Preference data: Preference data consists of the more subjective data that measures a participant's feelings or opinions of the product. This data is typically collected via written, oral, or even online questionnaires, or through the debriefing session after the test. A rating scale that measures how a participant feels about the product is an example of a preference measure.

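As a rough illustration of the two categories (not taken from the original text), a session's data could be captured in two simple record types; the field names below are hypothetical and only meant to show how performance and preference data differ in kind.

```python
from dataclasses import dataclass

@dataclass
class PerformanceRecord:
    """Objective measure of behavior, taken from the live test or the video recording."""
    task_id: str
    errors: int           # errors made on the way to completing the task
    time_seconds: float   # time spent on the task

@dataclass
class PreferenceRecord:
    """Subjective measure of how the participant feels, from a questionnaire or debriefing."""
    question: str
    rating: int           # e.g. a response on a 1-7 rating scale
    comment: str = ""     # free-form remark, kept for qualitative analysis
```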
Both performance and preference data can be analyzed quantitatively or qualitatively. For example, on the performance side, you can analyze errors quantitatively simply by counting the number of errors made on a task. You can also analyze errors qualitatively to expose places where the user does not understand the product's conceptual model.
On the preference side, a quantitative measure would be the number of unsolicited negative comments a participant makes. Or, qualitatively, you can analyze each negative comment to discover what aspect of the product's design the comment refers to.
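To make the quantitative side concrete, here is a minimal sketch with invented sample data for one participant: total errors per task on the performance side, and the count of unsolicited negative comments plus an average rating on the preference side. The task names, comments, and scale are hypothetical.

```python
# Hypothetical data for a single test session.
errors_by_task = {"create account": 3, "change password": 1, "delete account": 0}
comments = [
    ("negative", "I couldn't find the settings menu."),
    ("positive", "The confirmation message was clear."),
    ("negative", "The password rules were confusing."),
]
ratings = [5, 4, 6, 3]  # e.g. responses to 1-7 rating-scale items

# Performance, quantitative: total errors across tasks.
total_errors = sum(errors_by_task.values())

# Preference, quantitative: number of unsolicited negative comments and mean rating.
negative_comments = [text for tone, text in comments if tone == "negative"]
mean_rating = sum(ratings) / len(ratings)

print(f"Total errors: {total_errors}")                 # 4
print(f"Negative comments: {len(negative_comments)}")  # 2
print(f"Mean rating: {mean_rating:.1f}")               # 4.5

# A qualitative pass would then read each negative comment to see
# which aspect of the product's design it refers to.
```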
In terms of the product development lifecycle, exploratory (or formative) tests usually favor qualitative research, because of the emphasis on the user's understanding of high-level concepts. Validation (or summative) tests favor quantitative research, because of the emphasis on adherence to standards or measuring against benchmarks.
Following are examples of performance data.
