Evaluation Techniques for Interactive Systems

Techniques to evaluate how far your interactive system supports usability as well as functionality.

Isuruni Rathnayaka
11 min read · Apr 17, 2022

If good design techniques are used, is an evaluation still necessary? The answer is yes: evaluation is a key component of good human-computer interaction. Several good design techniques can be applied in the design process of an interactive system, and these are best practices for improving usability. But only an evaluation of the system can show whether it actually promotes usability and meets the requirements the designer expected.

Evaluation in an interactive system…

Evaluating an interactive system is the process of examining whether the system supports both usability and its functionality, as expected through its design as well as its implementation. Evaluation also helps to discover modifications that should be made to a system, as well as new ways to improve it.

Normally, evaluation is not a single-phase event in the design life cycle but a process that should be carried out throughout the whole cycle. However, it is important to identify crucial modifications in the early stages of the design life cycle, as it is much easier to change a design in the early stages than in the later stages.

If evaluation of an interactive system is necessary, what can we achieve, or expect to achieve, from it?

Goals of Evaluation…

Ideally, evaluating an interactive system is done with three main goals;

  • To assess the extent and accessibility of the system’s functionality

An interactive system is not a good one just because it is easy to use. Instead, the design of the system should help users perform their expected tasks easily. So the system's functionality is defined both by what it makes available and by how easily that can be reached. Hence, evaluation at this level includes measuring the user's performance with the system, to assess the effectiveness of the system in supporting its intended tasks.

  • To assess users’ experience of the interaction

Users' experience of the interaction with the system mainly includes how easy the system is to learn, its usability, and the user's satisfaction with it, including enjoyment, emotional response, and the comfort the user feels when using the system. Therefore, evaluations with user participation should be done to discover the modifications that are necessary to achieve the best user experience.

  • To identify any specific problems with the system.

As a whole, evaluations are done to identify specific problems with the system, related to both the functionality and the usability of the design. In this way, designers can identify trouble spots, which can then be rectified before the system is made available to users.

There are several evaluation techniques that can be used to evaluate an interactive system. These techniques fall mainly into two categories: expert analysis and user participation.

Expert Analysis

Evaluation is a continuous process that should be carried out throughout the design life cycle, but with evaluations involving users it can be difficult to get an accurate assessment of the experience of interaction from incomplete designs and prototypes at regular intervals during the design process. Expert analysis can be used as a remedy for this. Expert analysis techniques can be applied at any stage in the development process, from a design specification, through storyboards and prototypes, to full implementations, making them flexible evaluation approaches. Their main purpose is to identify violations of cognitive principles or of any applicable design rules. These techniques are also relatively cheap, since they do not require user involvement. Below are some expert analysis evaluation techniques;

1. Cognitive walkthroughs

The cognitive walkthrough is an evaluation method in which one or more evaluators walk through a sequence of actions, that is, the steps an interface requires a user to perform in order to accomplish a certain task. The main intention of such a walkthrough is to establish how easy a system is to learn through exploration (exploring its functionality hands on, not after sufficient training or examination of a user's manual).

To do a cognitive walkthrough four things are needed;

1. A specification or prototype of the system.

2. A description of the task the user is to perform on the system.

3. A complete, written list of the actions needed to complete the task with the proposed system.

4. An indication of who the users are and what kind of experience and knowledge the evaluators can assume about them.

Evaluators in a cognitive walkthrough ask themselves questions from the perspective of the user to address this exploratory learning;

1. Is the effect of the action the same as the user's goal at that point?

2. Will users see that the action is available?

3. Once users have found the correct action, will they know it is the one they need?

4. After the action is taken, will users understand the feedback they get?

With answers to these questions, evaluators try to construct a success story for each step in the process. Where they fail to do so, the reasons for the users' inability to accomplish the task are explored.

Each evaluator involved in a cognitive walkthrough should record the step in the process where they found an issue and what that issue was. Data is gathered during the walkthrough, and a single report on potential issues is compiled by the evaluators so that the interface can be redesigned to address the issues identified.
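As a rough illustration, the walkthrough record-keeping described above can be sketched in Python. The task, steps and yes/no answers below are hypothetical examples, not part of any standard tool:

```python
# A minimal sketch of a cognitive-walkthrough record sheet.

QUESTIONS = [
    "Is the effect of the action the same as the user's goal at that point?",
    "Will users see that the action is available?",
    "Once users have found the correct action, will they know it is the one they need?",
    "After the action is taken, will users understand the feedback they get?",
]

def walkthrough(steps):
    """Collect (step, failed question) pairs where an evaluator answered 'no'."""
    issues = []
    for step, answers in steps:
        for question, ok in zip(QUESTIONS, answers):
            if not ok:
                issues.append((step, question))
    return issues

# Hypothetical task: saving a document in some interface.
steps = [
    ("Open the File menu", (True, True, True, True)),
    ("Choose 'Export'",    (True, False, True, True)),   # action is hard to find
    ("Confirm file name",  (True, True, True, False)),   # feedback is unclear
]

for step, question in walkthrough(steps):
    print(f"{step}: {question}")
```

Each tuple of four booleans records one evaluator's answers to the four questions for one step; any `False` becomes an entry in the issues report.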

2. Heuristic Evaluation

In a heuristic evaluation, a small number of usability evaluators systematically inspect a user interface and judge it against a predetermined set of usability principles known as heuristics. Heuristics are general rules that describe common properties of usable user interfaces. Although many sets of heuristics exist, the most commonly used in usability evaluation are Jakob Nielsen's "10 Usability Heuristics for User Interface Design", an early version of which was published in his book Usability Engineering. The refined set is as follows;

Jakob Nielsen's 10 Heuristics for User Interface Design:

1. Visibility of system status

2. Match between system and the real world

3. User control and freedom

4. Consistency and standards

5. Error prevention

6. Recognition rather than recall

7. Flexibility and efficiency of use

8. Aesthetic and minimalist design

9. Help users recognize, diagnose, and recover from errors

10. Help and documentation

Steps in Heuristic Evaluation

To conduct a heuristic evaluation based on above mentioned heuristics, these steps can be followed:

· Select evaluators

· Brief evaluators

· First evaluation phase: In this phase, evaluators use the product freely to get a feel for the interaction from the user's point of view. They then identify the specific elements that should be evaluated.

· Second evaluation phase: In this phase, evaluators will apply the chosen heuristics to the elements identified during the first phase.

· Record problems
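The "record problems" step can be illustrated with a small Python sketch that aggregates hypothetical findings from several evaluators, using Nielsen's 0-4 severity rating scale. The problems and ratings shown are invented for illustration:

```python
# A minimal sketch of aggregating heuristic-evaluation findings.
from collections import defaultdict
from statistics import mean

# Each evaluator independently records (problem, violated heuristic, severity 0-4).
findings = [
    ("No undo after delete", "User control and freedom", 3),
    ("No undo after delete", "User control and freedom", 4),
    ("Jargon in error text",  "Match between system and the real world", 2),
]

# Aggregate into one entry per problem, with the mean severity across evaluators.
by_problem = defaultdict(list)
for problem, heuristic, severity in findings:
    by_problem[(problem, heuristic)].append(severity)

report = {key: mean(sevs) for key, sevs in by_problem.items()}

# Print the most severe problems first, for the redesign discussion.
for (problem, heuristic), sev in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"[{sev:.1f}] {problem}  ({heuristic})")
```

Averaging severity across evaluators and sorting by it is a common way to prioritize which violations to fix first.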

3. Model based evaluations

Certain predefined models can be used by expert evaluators to examine an interactive system.

Several examples for such models are;

· GOMS (goals, operators, methods and selection) model: predicts user performance with a particular interface and can be used to filter particular design options.

· Keystroke-level model (a lower-level modeling technique): provides predictions of the time users will take to perform low-level physical tasks.

· Design rationale: provides a framework in which design options can be evaluated.

· Dialog models: evaluate dialog sequences for problems such as unreachable states, circular dialogs and complexity.

· State transition networks: evaluate dialog designs prior to implementation.
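As an illustration of the keystroke-level model mentioned above, the following Python sketch estimates task time from a string of KLM operators, using approximate operator times commonly cited from Card, Moran and Newell. The example task is hypothetical:

```python
# A rough Keystroke-Level Model (KLM) time estimate. Operator times are
# the commonly cited approximate values, not measurements.
OPERATORS = {
    "K": 0.20,  # keystroke (skilled typist)
    "P": 1.10,  # point with the mouse to a target
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(sequence):
    """Predicted execution time (seconds) for a string of KLM operators."""
    return sum(OPERATORS[op] for op in sequence)

# Hypothetical task: think, move hand to mouse, point at a menu item,
# then click it (button press + release).
print(round(klm_estimate("MHPBB"), 2))  # 1.35 + 0.40 + 1.10 + 0.10 + 0.10
```

Two candidate designs for the same task can be compared simply by writing out their operator sequences and comparing the predicted totals.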

4. Using previous studies in evaluation

The results of previous evaluation studies can be used as evidence to support aspects of a design. This may not be applicable in every case, as designs vary according to the type of users, the kind of intended tasks, the type of elements used in the design, and so on. But sometimes a generic issue can be addressed through previous results, since it is expensive to repeat experiments. Examples of such scenarios are;

· Use of generic set of icons

· Use of similar menu items


However, these techniques do not assess the usability and functionality of the whole system; they check whether or not a system conforms to accepted usability principles.

Usability Evaluations with User Participation

Expert analysis may filter and refine the interactive system, but in reality the usability of the system can only be tested through evaluations that involve user participation. Therefore, usability evaluations are important for identifying problems with the system from the user's perspective. User participation in evaluation tends to occur in the later stages of development, when there is at least a working prototype of the system, since users cannot interact meaningfully with low-level design sketches.

There are two distinct evaluation styles that are commonly used to perform evaluations with user participation. They are;

1. Laboratory Studies


Here users are taken out of their normal environment and evaluations are carried out in a controlled setting such as a usability laboratory (which may simply be a quiet room containing audio/visual recording and analysis facilities, two-way mirrors, instrumented computers, etc.). The problem with these laboratories is that they create an environment free of interruptions and interpersonal communication, which will never exist in the real world. Hence, it is difficult to obtain results that reflect real-world use. But in situations where the system is to be deployed in a dangerous or remote location, such as a space station, laboratory observation becomes the only option.

2. Field Studies

Here the evaluator is in the user's work environment. The realistic nature of the setting makes it possible to obtain results that more closely match actual use. But field studies are often disturbed by ambient noise, greater levels of movement and constant interruptions, such as phone calls.

So, it is best to keep these two styles in balance: laboratory studies can be applied to scenarios where they are reliable, while field studies are used for the others.

There are a number of different approaches to evaluation through user participation. Some of them are;

1. Empirical methods (experimental evaluation)

One of the best methods of evaluating a design is a controlled experiment. An experiment is carried out and its results are analyzed statistically, providing empirical evidence to support a particular hypothesis. Here the evaluator;

· Chooses a hypothesis (A hypothesis is a prediction of the outcome of an experiment.)

· Clarifies the variables (Experiments manipulate and measure variables under controlled conditions, in order to test the hypothesis. There are two main types of variable: those that are ‘manipulated’ or changed (known as the independent variables) and those that are measured (the dependent variables)).

· Selects the participants (If participants are not actual users, they should be chosen to be of a similar age and level of education as the intended user group. The sample size should be large enough to support statistical analysis.)

· Decides on the experimental method that will be used. There are two main methods: between-subjects (each participant is assigned to only one condition) and within-subjects (each participant performs under all conditions).
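To make the between-subjects idea concrete, here is a minimal Python sketch that compares task times from two independent groups using Welch's t statistic. The data are invented, and a real study would also compute a p-value and use a larger sample:

```python
# A minimal sketch of a between-subjects comparison: two independent groups
# of participants complete the same task with design A or design B.
from statistics import mean, variance
from math import sqrt

times_a = [10, 12, 11, 13, 14]  # task times (s), participants using design A
times_b = [15, 16, 14, 17, 18]  # task times (s), participants using design B

def welch_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = sqrt(variance(x) / len(x) + variance(y) / len(y))
    return (mean(x) - mean(y)) / se

t = welch_t(times_a, times_b)
print(round(t, 2))  # a negative t here: design A was faster on average
```

The hypothesis under test would be something like "design A lets users complete the task faster than design B"; the independent variable is the design, and the dependent variable is the task time.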

2. Observational techniques

By observing the way users interact with the system it is easy to gather information about actual use of a system. Here;

· Users are asked to complete a set of predetermined tasks.

· The evaluator watches and records the users’ actions.

In this section, some of the techniques used to evaluate systems by observing user behavior are discussed;

a) Think aloud

Here the user is asked to explain what he or she is doing while performing a task with the system. In this way, the observer gains insight into how the user has understood the system.

b) Cooperative evaluation

This is a variation of think aloud. Here the user is asked to see himself as a collaborator in the evaluation rather than an experimental participant. The user is asked to think aloud from the beginning of the evaluation session, and the evaluator can ask questions to clarify the user's behavior. Conversely, the user can also ask the evaluator for clarification if a problem arises. This makes it easy for the evaluator to learn about the user and his behavior, while the user is given the chance to criticize and question the system openly.

c) Protocol Analysis

Here the user's interaction with the system while thinking aloud is comprehensively recorded using media such as text, audio and video. These records are then examined by an evaluator to identify the user's perspective on the design and the modifications that should be made.

d) Automated Analysis

Analyzing protocols manually is time consuming and tedious. So, as a solution for this, automatic analysis tools can be used to support the task. These offer a means of editing and annotating video, audio and system logs and synchronizing these for detailed analysis.

e) Post-task walkthroughs

Here the user reflects on the actions after the event. This gives the analyst time to focus on relevant incidents and avoids excessive interruption of the task. The problem is that this method lacks freshness.

3. Query techniques

These techniques involve asking the user about the interface directly. Query techniques can be useful in eliciting detail about the user's view of a system. Two common techniques are;

a) Interviews

Here, the evaluator and the user talk face to face about the user's view of the system, based on a pre-prepared set of questions. The main requirement of an interview is that its questions should not be biased in a way that misleads the interviewee. This is an informal, cost-effective method, but at the same time it is time consuming.

b) Questionnaires

Here, users are given a fixed set of questions to evaluate their understanding of and opinion about the system. This is less flexible than an interview, but it gives a large group of people the chance to express their ideas in a short period of time.
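As a concrete example of a fixed questionnaire, the widely used System Usability Scale (SUS) has ten items answered on a 1-5 scale and a standard scoring rule. The following Python sketch scores one hypothetical set of responses:

```python
# A minimal sketch of scoring one System Usability Scale (SUS) questionnaire.
# The ten responses below are invented for illustration.
def sus_score(responses):
    """SUS: odd items contribute (score - 1), even items (5 - score); sum * 2.5."""
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for i, score in enumerate(responses, start=1):
        total += (score - 1) if i % 2 == 1 else (5 - score)
    return total * 2.5  # final score on a 0-100 scale

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # → 75.0
```

Odd-numbered SUS items are positively worded and even-numbered items negatively worded, which is why the two kinds of item are scored in opposite directions.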

4. Evaluations through monitoring physiological responses

This is a way of monitoring the physiological aspects of computer use. These evaluations allow evaluators not only to see more clearly what users do when they interact with computers, but also to measure how they feel. The two techniques receiving the most attention to date are;

a) Eye Tracking

Eye movement measurements are based on fixations, where the eye retains a stable position for a period of time, and saccades, rapid ballistic eye movements from one point of interest to another. In this method of evaluation, the position of the eye is tracked through head- or desk-mounted equipment, and the following measurements are taken;

· Number of fixations: the more fixations, the less efficient the search strategy.

· Fixation duration: indicates the level of difficulty with the display.

The evaluation is conducted by analyzing these measurements. Eye movements are believed to reflect the amount of cognitive processing a display requires and, therefore, how easy or difficult it is to process.
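These two measurements can be summarized very simply. The following Python sketch compares hypothetical fixation data for two layouts of the same page; the durations are invented:

```python
# A minimal sketch of summarizing eye-tracking fixations for two layouts.
from statistics import mean

# Fixation durations in milliseconds, recorded while searching the same page.
fixations = {
    "layout_a": [220, 180, 250, 300],                 # fewer, shorter fixations
    "layout_b": [400, 380, 290, 450, 320, 500, 310],  # more, longer fixations
}

for layout, durations in fixations.items():
    print(f"{layout}: {len(durations)} fixations, "
          f"mean duration {mean(durations):.0f} ms")

# Interpretation: more fixations suggest a less efficient search strategy,
# and longer fixations suggest the display is harder to process.
```

On these made-up numbers, layout_a would be judged easier to search and process than layout_b.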

b) Physiological Measurements

Here the user's emotions and physical changes while using the user interface are observed. Through this, the user's reaction to an interface can be determined. The following can be considered the primary measurements taken in this process;

· Heart activity, including blood pressure, volume and pulse.

· The activity of sweat glands: Galvanic Skin Response (GSR)

· Electrical activity in muscle: electromyogram (EMG)

· Electrical activity in the brain: electroencephalogram (EEG)

One of the problems with applying these measurements to interaction events is that it is not clear what the relationship between these events and measurements might be.

Evaluation is one of the most important phases in the design life cycle. It leads to a refined and satisfactory interactive product, as it reveals the need for change where necessary.

Hope this article brought you an understanding of evaluation techniques for interactive systems. Thank you very much for reading!


Isuruni Rathnayaka

Software Engineering Undergraduate - University of Kelaniya Sri Lanka