When users have to think! Evaluation is not design


When we interact with a technological system, each of our senses is somehow engaged in a particular kind of communication. This communication forms the basis for a dialogue between person and technology (intrasystemic dialogue), that is to say, a dynamic relationship among three components: (i) the designer’s mental model, which generates the conceptual system model of the interface; (ii) the user’s mental model; and (iii) the image of the system. As Steve Krug claims in his famous work Don’t Make Me Think, a user has a good interaction experience only when the interface is “self-evident” —that is, when the user does not have to expend effort in perceiving the interface.

The implementation of a self-evident interface should be considered one of the most important problems to solve when creating a good system, i.e., a good information architecture. Krug's assumption, however, relates only to the designer's perspective and can be epitomized as "the better the system works, the better the interaction will be." Yet even though a well-designed interface can be achieved only by considering the properties of the object, the evaluation process also needs to take into account other dimensions of the interaction. In particular, since the goal of the evaluation process is to measure the human–computer interaction (HCI), the user's point of view needs to be integrated into the evaluation methodologies.
Krug intends to provide developers with the tools for creating successful systems: he expresses the success of a system through the metaphor of good navigation without barriers, and this kind of navigation corresponds to his motto "don't make me think". Nevertheless, when we have to evaluate an interface, we need to know "what the user thinks" about the system. From Krug's perspective, the user "should not think", since the developer should already have thought about the possible barriers that could occur during the interaction. Conversely, from the evaluator's perspective, the simulation carried out by the developers during the design process cannot alone be enough to create a fully accessible and usable system. The key factors for developing a usable and accessible interface are (i) a well-planned assessment process and (ii) a harmonized and equalized relationship between evaluator and designer throughout the life cycle.

The myth of designing for the average user

The myth of designing for the average user has collapsed and has been replaced by the practice of actually involving users in the design process, in order to cater to the requirements of the broadest possible user population. To this end, and in the context of an iterative design process, the role of evaluation is central in acquiring the user's point of view and understanding the interaction experience. Recent evaluation approaches take into consideration not only the usability of an interactive system in terms of users' effectiveness, efficiency, and satisfaction, but also the overall user experience, considering how a user thinks and feels about using a system before, during, and after use. It should be noted that an important factor affecting the usability and overall user experience of a system is its accessibility to diverse user groups: people with different cultural, educational, training, and employment backgrounds, novice and experienced computer users, the very young and the elderly, as well as people with different types of disabilities. Accessibility, usability, and user experience have to be considered as three different perspectives on interaction, and any evaluator should assess these aspects in sequence in order to produce a complete interaction evaluation.

Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals

The book “Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals” focuses on how accessibility, usability, and UX are defined in ISO international standards, how they can be evaluated, and what difficulties are encountered when analyzing the interaction dialogue between users and their systems, which takes place mostly through the interface. In fact, for the user, the application is the interface, since it is the only component the user directly controls. In these standards, accessibility refers to the openness of the system, while usability includes effectiveness (achieving one’s goal), efficiency, satisfaction, freedom from risk (e.g., not losing any data), and context coverage (encompassing a wide scope of use). From the software point of view, however, quality depends on scalability, efficiency, compatibility, usability, reliability, security, maintainability, and portability. As these two lists of properties (usability from the user’s point of view, and software quality itself) suggest, evaluating a system is a complex task. The standards incorporated within UX therefore include users’ emotions, beliefs, preferences, perceptions, physical and psychological responses, and behavior before, during, and after use. One of the main points stressed in this book is the relationship between the evaluator and the designer, which must drive the intrasystemic dialogue within three nested spaces: the context of use, the shared aim, and the shared perspective (the outermost layer). To perform a comprehensive evaluation, different techniques and methods can be employed for assessing accessibility, usability, and UX in a cascaded sequence. A number of issues must also be considered, such as miscommunication between designers and evaluators, so that feedback can be relayed to designers to improve their designs.
The book also presents criteria for selecting the right users (including users with disabilities) for controlled experiments, for precisely defining the user’s goals, and for applying a time scale (before using, during use, and after having used the system under evaluation).

This book introduces a new evaluation model, the integrated model of interaction evaluation, aiming to equalize the roles of the design and evaluation processes by considering them as two opposite and separate points of view working toward the same goal: a successful system. It provides the reader with the necessary literature and background to comprehend the proposed model, discusses the need for an integrated model of interaction evaluation, presents the model itself, and explains the evaluation methods and techniques that can be used in the context of the proposed framework.


[1] Krug, S. (2000). Don’t Make Me Think! A Common Sense Approach to Web Usability. Indianapolis, IN: New Riders.

[2] Borsci, S., Kurosu, M., Federici, S., & Mele, M. L. (2013). Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals. Boca Raton, FL: CRC Press.
