When users have to think! Evaluation is not design

Introduction

When we interact with a technological system, each of our senses is somehow engaged in a particular kind of communication. This communication forms the basis for a dialogue between person and technology (intrasystemic dialogue), that is to say, a dynamic relationship among three components: (i) the designer’s mental model, which generates the conceptual system model of the interface; (ii) the user’s mental model; and (iii) the image of the system. As Steve Krug claims in his famous work Don’t Make Me Think, a user has a good interaction experience only when the interface is “self-evident”, that is, when the user does not have to expend effort in perceiving the interface.

The implementation of a self-evident interface should be considered one of the most important issues to be solved when it comes to creating a good system, i.e., a good information architecture. Krug’s assumption therefore relates only to the designer’s perspective and can be epitomized as “the better the system works, the better the interaction will be.” However, even though a well-designed interface can be achieved only by considering the properties of the object, the evaluation process also needs to take into account other dimensions of the interaction. In particular, since the goal of the evaluation process is to measure the human–computer interaction (HCI), the user’s point of view somehow needs to be integrated into the evaluation methodologies.
Krug intends to provide developers with the tools for creating successful systems: he expresses the success of a system through the metaphor of barrier-free navigation, which corresponds to his motto “don’t make me think”. Nevertheless, when we have to evaluate an interface we need to know “what the user thinks” about the system. From Krug’s perspective, the user ‘should not think’, since the developer should already have thought about the possible barriers that could occur during the interaction. Conversely, from the evaluator’s perspective, the simulation carried out by the developers during the design process cannot alone be enough to create a fully accessible and usable system. The key factors for developing a usable and accessible interface are (i) a well-planned assessment process and (ii) a harmonized and equalized relationship between evaluator and designer throughout the life cycle.

The myth of designing for the average user

The myth of designing for the average user has collapsed and has been substituted by the reality of actually involving users in the design process in order to cater to the requirements of the broadest possible user population. To this end, and in the context of an iterative design process, the role of evaluation is central in acquiring the user’s point of view and understanding the interaction experience. Recent evaluation approaches take into consideration not only the usability of an interactive system in terms of users’ effectiveness, efficiency, and satisfaction, but also the overall user experience, considering how a user thinks and feels about using a system before, during, and after use. It should be noted that an important factor affecting the usability and overall user experience of a system is its accessibility by diverse user groups with different cultural, educational, training, and employment backgrounds: novice and experienced computer users, the very young and the elderly, as well as people with different types of disabilities. Accessibility, usability, and user experience have to be considered as three different perspectives on interaction, and any evaluator should assess these aspects in sequence in order to produce a complete interaction evaluation.

Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals

The book “Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals” focuses on how accessibility, usability, and UX are defined in ISO international standards, how they can be evaluated, and what difficulties are encountered when analyzing the interaction dialogue between users and their systems, mostly through the interface. In fact, for the user, the application is the interface, since it is the only component under his or her control. Accessibility refers to the openness of the system, while usability includes effectiveness (i.e., achieving one’s goal), efficiency, satisfaction, freedom from risk (not losing any data), and context coverage (encompassing a wide scope). From the software point of view, however, quality depends on scalability, efficiency, compatibility, usability, reliability, security, maintainability, and portability. As a reader may understand by comparing these two lists of properties (usability from the user’s point of view versus software quality in itself), evaluating a system is far from straightforward. Accordingly, the standards incorporated within UX include users’ emotions, beliefs, preferences, perceptions, physical and psychological responses, and behavior before, during, and after use. One of the main points stressed in this book is the relationship between the evaluator and the designer, which must drive the intrasystemic dialogue within three nested spaces: the context of use, the shared aim, and the shared perspective (the outermost layer). In order to perform a comprehensive evaluation, different techniques and methods can be employed for assessing accessibility, usability, and UX in a cascaded sequence. A number of issues must be considered, such as miscommunication between designers and evaluators, so that feedback can be relayed to designers to improve their designs.
There are also certain criteria for choosing the right users (including disabled ones) to perform controlled experiments, to precisely define the user’s goals, and even to employ a time scale (before using, during use, and after having used the system to be evaluated).

This book introduces a new evaluation model, the integrated model of interaction evaluation, aiming to equalize the roles of the design and evaluation processes by considering them as two opposite and separate points of view working toward the same goal: a successful system. The book provides the reader with the related literature and background needed to comprehend the proposed model, discusses the need for an integrated model of interaction evaluation, presents the model itself, and explains the evaluation methods and techniques that can be used in the context of the proposed framework.

References

[1] Krug, S. (2000). Don’t Make Me Think! A Common Sense Approach to Web Usability. Indianapolis, IN: New Riders.

[2] Borsci, S., Kurosu, M., Federici, S., and Mele, M.L. (2013). Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals. Boca Raton, FL: CRC Press.

Empirical evidence, evaluation criteria and challenges for the effectiveness of virtual and mixed reality tools for training operators of car service maintenance

Highlights

• State of the art review of car service training with virtual and augmented reality.
• Current criteria considered by researchers focus on training effectiveness.
• Limited assessment of trainees’ experience pre- and post-training.
• This paper reports challenges for the next generation of studies on training technologies.

Abstract

The debate on the effectiveness of virtual and mixed reality (VR/MR) tools for training professionals and operators is long-running, with prominent contributions arguing that there are several shortfalls in the experimental approaches and assessment criteria reported within the literature. In the automotive context, although car makers were pioneers in the use of VR/MR tools for supporting designers, researchers have only recently started to explore the effectiveness of VR/MR systems as a means of helping external operators of service centres acquire the procedural skills necessary for car maintenance processes. In fact, from 463 journal articles on VR/MR tools for training published in the last thirty years, we identified only eight articles in which researchers experimentally tested the effectiveness of VR/MR tools for training service operators’ skills. To survey the current findings and the deficiencies of these eight studies, we use two main drivers: (i) a well-known framework of organizational training programmes, and (ii) a list of eleven evaluation criteria widely applied by researchers of different fields for assessing the effectiveness of training carried out with VR/MR systems. The analysis that we present allows us to: (i) identify a trend among automotive researchers of focusing their analysis only on car service operators’ performance in terms of time and errors, leaving unexplored important pre- and post-training aspects that could affect the effectiveness of VR/MR tools to deliver training content (e.g., people’s skills, previous experience, cybersickness, presence and engagement, usability and satisfaction), and (ii) outline the future challenges for designing and assessing VR/MR tools for training car service operators.

See the full article

Beyond the User Preferences: Aligning the Prototype Design to the Users’ Expectations

It is important for practitioners to conceptualize and tailor a prototype in tune with the users’ expectations in the early stages of the design life cycle, so that modifications of the product design in advanced phases are kept to a minimum. According to user preference studies, the aesthetics and the usability of a system play an important role in the user’s appraisal and selection of a product. However, user preferences are just a part of the equation: the fact that a user prefers one product over another does not mean that he or she would necessarily buy it. To understand the factors affecting the user’s assessment of a product before actual use, as well as the user’s intention to purchase it, we conducted the study reported in this article. Our study, a modification of a well-known protocol, considers users’ preferences among six simulated smartphones, each with a different combination of attributes. A sample of 365 participants was involved in our analysis. Our results confirm that the main basis for users’ pre-use preferences is the aesthetics of the product, whereas the main basis for the intention to purchase is the expected usability of the product. Moreover, our analysis reveals that the personal characteristics of the users have different effects on both the users’ preferences and their intention to purchase a product. These results suggest that designers should carefully balance the aesthetics and usability features of a prototype in tune with the users’ expectations. If the conceptualization of a product is done properly, the redesign cycles after usability testing can be reduced, speeding up the release of the product on the market. [Read More]

Optimizing Conceptual models For Better UX

Megan Wilson – March 25, 2014

Conceptual models are key in the design process to provoke the right behaviors, emotions, and attitudes in consumers. All serious businesses work toward optimizing conceptual models for better UX. Nothing is more beneficial to an enterprise than a product that attracts a positive User Experience (UX). The best way to maximize UX is to ensure that the conceptual model of the product matches as closely as possible the mental models that the users already have. Some of the ways to do this are illustrated below.

Use Conventional Conceptual Models

Using conventional product design will create the right UX. The users already have mental models formed from experience of using similar products. When a company replicates this positive experience through a similar design, the product or service is bound to be accepted too.

Conceptual Models Enhance Usability

Usability is the question of how efficiently and effectively the user can deal with the product. An important aspect of this is learnability: a product’s conceptual model should reduce the time a consumer needs to learn how the product works.

The model should also offer better UX by ensuring the efficient achievement of the user’s goals when using the product. Errors associated with its use should be minimized through the conceptual model. In addition, it is essential that the user enjoys using the product in the process of achieving success. Customer satisfaction is a key factor that conceptual models should address. [Articles]

Understanding Effect Sizes In User Research

Jeff Sauro • March 11, 2014
The difference is statistically significant. When using statistics to make comparisons between designs, it’s not enough to just say differences are statistically significant or only report the p-values. With large sample sizes in surveys, unmoderated usability testing, or A/B testing you are likely to find statistical significance with your comparisons. What you need to know is how big of a difference was detected. The size of the difference gives you a better idea about the practical significance and impact of the statistical result. [article]

USING THE SYSTEM USABILITY SCALE FOR USABILITY TESTING

by JESSICA MILLER • 6 Nov. 2013

One of the bigger problems that designers and testers face with usability is working out a good way not only to measure it accurately, but also to set standards for good and bad scores on the resulting metric. In the past, many people worked out their own guidelines and measurement systems, which led to a lot of confusion when case studies and published materials were released, and made usability difficult to actually measure. However, with the advent of the System Usability Scale (SUS), this isn’t nearly the problem it used to be.

The System Usability Scale is a series of ten statements with degree-based answers on a five-point scale, one for strongly disagree and five for strongly agree. After the questions are answered, each item’s response is converted to a 0–4 score (odd-numbered items score the response minus 1; even-numbered items score 5 minus the response), and the sum is multiplied by 2.5 for an overall score between 0 and 100. Fortunately, there is more or less a consensus on how to interpret the result: a score of 68 corresponds to the average, and some variance in either direction is to be expected. So the SUS is an easy-to-use and widely accepted scale for measuring usability, which makes all of this much less of a hassle. The article also walks through the questions commonly asked in the SUS. [Article]
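The scoring rule above can be sketched in a few lines (a minimal illustration of the standard SUS computation; the function name is ours):

```python
def sus_score(responses):
    """Compute a SUS score from ten Likert responses (1 = strongly
    disagree, 5 = strongly agree).

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The sum is multiplied by 2.5 to map onto a 0-100 scale.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected ten responses, each between 1 and 5")
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5
```

For example, a respondent who strongly agrees with every odd item and strongly disagrees with every even item obtains the maximum score of 100, while answering 3 (neutral) throughout yields 50.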


Reviewing and Extending the Five-User Assumption: A Grounded Procedure for Interaction Evaluation

The debate concerning how many participants represents a sufficient number for interaction testing is well-established and long-running, with prominent contributions arguing that five users provide a good benchmark when seeking to discover interaction problems. We argue that adoption of five users in this context is often done with little understanding of the basis for, or implications of, the decision. We present an analysis of relevant research to clarify the meaning of the five-user assumption and to examine the way in which the original research that suggested it has been applied. This includes its blind adoption and application in some studies, and complaints about its inadequacies in others. We argue that the five-user assumption is often misunderstood, not only in the field of Human-Computer Interaction, but also in fields such as medical device design, or in business and information applications. The analysis that we present allows us to define a systematic approach for monitoring the sample discovery likelihood, in formative and summative evaluations, and for gathering information in order to make critical decisions during the interaction testing, while respecting the aim of the evaluation and allotted budget. This approach — which we call the Grounded Procedure — is introduced and its value argued. [Article]
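The discovery-likelihood reasoning behind the five-user assumption rests on a simple binomial model: if each user independently detects a given problem with probability p, the expected proportion of problems found by n users is 1 − (1 − p)^n. A sketch of that baseline model (the function name is ours; the Grounded Procedure itself goes further, monitoring the discovery likelihood during testing rather than fixing p in advance):

```python
def discovery_likelihood(p, n):
    """Expected proportion of interaction problems detected by n users,
    assuming each user independently finds a given problem with
    probability p (the classic model behind the five-user assumption)."""
    return 1 - (1 - p) ** n

# With the often-cited average detection rate p = 0.31, five users are
# expected to uncover roughly 84% of the interaction problems.
```

The controversy arises because p varies widely across products, tasks, and user groups, so a sample size justified for one study need not transfer to another.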

Parallel session at HCII2014 – Interaction, assistive technologies and assisted living

I’m organizing a parallel session entitled ‘Interaction, assistive technologies and assisted living’. This session will be one of the parallel sessions of the Human-Computer Interaction Thematic Area in the context of HCI International 2014 (http://www.hcii2014.org/) to be held in Heraklion, Crete, Greece, 22 – 27 June 2014.
If you want to submit a paper and participate in the parallel session please send an email to: simone.borsci@gmail.com
Deadlines:
Sunday, 15 December 2013 Abstract submission (max. 800 words) through the CMS, for the review process
Wednesday, 15 January 2014 Notification of review outcome
Friday, 7 February 2014 Submission through the CMS of the camera-ready version of all papers (full papers, typically 10 pages long; minimum 8, maximum 12)
You, or one of the co-authors, will be required to register and pay at a discounted rate for the Conference by Friday 7 February 2014 and give an oral presentation during the Conference.

Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals

By Simone Borsci • 22 Oct. 2013

This book provides the necessary tools for evaluating the interaction between a user with a disability and the computer system designed to assist that person. The book creates an evaluation process able to assess the user’s satisfaction with a developed system. Presenting a new theoretical perspective on the human–computer interaction evaluation of disabled persons, it takes into account all of the individuals involved in the evaluation process. [Article]

Design for Experience: Accessibility

by UX Magazine Staff, Design for Experience

“Accessibility is important,” Livia Veneziano and Yvonne So declare in their article “Designing for Everyone” published in July of 2012.

“It is not just an additional feature, it is a core component that makes modern interfaces complete. If designers fail to pay attention to the design needs for a small percentage of the population, they ultimately fail on a global scale.”

As technology becomes more sophisticated and commonplace, the opportunities to make it accessible to people with physical and sensory impairments, and to those who need special modes of interaction, continue to grow. Despite this, accessibility remains under-adopted as a requirement for products and services. [Article]