Providing assistive technology in Italy: the perceived delivery process quality as affecting abandonment

Purpose: The study brings together three aspects rarely observed at once in assistive technology (AT) surveys: (i) the assessment of user interaction/satisfaction with AT and service delivery, (ii) the motivational analysis of AT abandonment, and (iii) the management/design evaluation of AT delivery services. Methods: 15 health professionals and 4 AT experts were involved in modelling and assessing four AT Local Health Delivery Services (Centres) in Italy through a SWOT analysis and a Cognitive Walkthrough. In addition, 558 users of the same Centres were interviewed in a telephone survey to rate their satisfaction and AT use. Results: Overall AT abandonment was 19.09%. Different management strategies across Centres resulted in different percentages of AT disuse, ranging from 12.61% to 24.26%. A significant association between declared abandonment and the Centres’ management strategies (p = 0.012) was identified. Strong effects on abandonment were also found for professionals’ procedures (p = 0.005) and follow-up systems (p = 0.002). Conclusions: The user experience of an AT is affected not only by the quality of the interaction with the AT, but also by the perceived quality of the Centres’ support and follow-up.
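As a rough illustration of the kind of association test behind figures like these, the sketch below runs a chi-squared test of independence on a centre-by-abandonment contingency table. The counts are invented only to mirror the reported rates (overall abandonment near 19%, centre rates between roughly 12.6% and 24.3%); they are not the study’s data.

```python
# Minimal sketch: chi-squared test of independence between delivery centre
# and declared AT abandonment. All counts below are hypothetical.
from scipy.stats import chi2_contingency

# Rows: four hypothetical Centres; columns: [abandoned, still in use]
counts = [
    [14, 97],   # ~12.6% abandonment
    [25, 110],  # ~18.5%
    [29, 115],  # ~20.1%
    [33, 103],  # ~24.3%
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```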

Implications for Rehabilitation

AT abandonment surveys provide useful information for modelling the AT assessment and delivery process.
SWOT and Cognitive Walkthrough analyses have proven to be suitable methods for exploring the limits and advantages of AT service delivery systems.
The study confirms the relevance of person-centredness for a successful AT assessment and delivery process.

See the article


Human Factors methods for IVD and POC devices at DEC London – Workshop at the NIHR DEC London open day

It is quite rare for industrial practitioners, researchers, and commissioners to participate in a workshop all together. Professor Peter Buckle and I were lucky to have the opportunity to host this event at the Open Day of the NIHR Diagnostic Evidence Cooperative of London.
We asked participants to work in groups to map all the possible stakeholders of a Point of Care Device. The outcomes were impressive! When you bring together different perspectives and guide them with a Human Factors framework, you can only enrich people’s perspectives. – Download Workshop Presentation


When users have to think! Evaluation is not design

Introduction

When we interact with a technological system, each of our senses is somehow engaged in a particular kind of communication. This communication forms the basis for a dialogue between person and technology (intrasystemic dialogue), that is to say, a dynamic relationship among three components: (i) the designer’s mental model, which generates the conceptual system model of the interface; (ii) the user’s mental model; and (iii) the image of the system. As Steve Krug claims in his famous work Don’t Make Me Think, a user has a good interaction experience only when the interface is “self-evident” —that is, when the user does not have to expend effort in perceiving the interface.

The implementation of a self-evident interface should be considered one of the most important issues to be solved when it comes to creating a good system, i.e., a good information architecture. Krug’s assumption, therefore, relates only to the designer’s perspective and can be epitomized as “the better the system works, the better the interaction will be.” However, even though a well-designed interface can be achieved only by considering the properties of the object, the evaluation process also needs to take into account other dimensions of the interaction. In particular, since the goal of the evaluation process is to measure the human–computer interaction (HCI), the user’s point of view needs to be integrated into the evaluation methodologies.
Krug intends to provide developers with the tools for creating successful systems: he expresses the success of a system using the metaphor of good navigation without barriers, and this kind of navigation corresponds to his motto “don’t make me think”. Nevertheless, when we have to evaluate an interface, we need to know what the user thinks about the system. From Krug’s perspective, the user should not have to think, since the developer should already have thought about the possible barriers that could occur during the interaction. Conversely, from the evaluator’s perspective, the simulation carried out by the developers during the design process cannot alone be enough to create a fully accessible and usable system. The key factors for developing a usable and accessible interface are (i) a well-planned assessment process and (ii) a harmonized and equalized relationship between evaluator and designer throughout the life cycle.

The myth of designing for the average user

The myth of designing for the average user has collapsed and has been replaced by the practice of actually involving users in the design process in order to cater to the requirements of the broadest possible user population. To this end, and in the context of an iterative design process, the role of evaluation is central in acquiring the user’s point of view and understanding the interaction experience. Recent evaluation approaches take into consideration not only the usability of an interactive system in terms of users’ effectiveness, efficiency, and satisfaction, but also the overall user experience, considering how a user thinks and feels about using a system before, during, and after use. It should be noted that an important factor affecting the usability and overall user experience of a system is its accessibility by diverse user groups: people with different cultural, educational, training, and employment backgrounds, novice and experienced computer users, the very young and the elderly, as well as people with different types of disabilities. Accessibility, usability, and user experience have to be considered as three different perspectives on interaction, and an evaluator should assess these aspects in sequence in order to produce a complete interaction evaluation.
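To make those three usability components concrete, here is a minimal sketch of how effectiveness, efficiency, and satisfaction might be summarized from task-based test data. The data structure and the numbers are hypothetical illustrations, not taken from any study discussed here.

```python
# Minimal sketch: summarizing the three ISO-style usability components
# (effectiveness, efficiency, satisfaction) from task-based test data.
from statistics import mean

# One record per task attempt: (task completed?, time on task in seconds)
attempts = [(True, 42.0), (True, 55.5), (False, 120.0), (True, 38.2)]
# Post-test satisfaction ratings on a 1-5 scale (hypothetical questionnaire)
satisfaction_ratings = [4, 5, 3, 4]

effectiveness = mean(1.0 if done else 0.0 for done, _ in attempts)  # completion rate
efficiency = mean(t for done, t in attempts if done)                # mean time on successful tasks
satisfaction = mean(satisfaction_ratings)

print(f"effectiveness = {effectiveness:.0%}")
print(f"efficiency    = {efficiency:.1f} s per completed task")
print(f"satisfaction  = {satisfaction:.1f} / 5")
```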

Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals

The book “Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals” focuses on how accessibility, usability, and UX are defined in ISO international standards, how they can be evaluated, and what difficulties are encountered when analyzing the interaction dialogue between the user and his or her system, mostly through the interface. In fact, for the user, the application is the interface, since it is the only component under the user’s control. In these terms, accessibility refers to the openness of the system, while usability includes effectiveness (i.e., achieving one’s goal), efficiency, satisfaction, freedom from risk (not losing any data), and context coverage (encompassing a wide scope). From the software point of view, however, quality depends on scalability, efficiency, compatibility, usability, reliability, security, maintainability, and portability. As the reader may gather from these two lists of properties (usability from the user’s point of view, and software quality in itself), evaluating a system is a complex task. The standards incorporated within UX therefore include users’ emotions, beliefs, preferences, perceptions, physical and psychological responses, and behavior before, during, and after use.

One of the main points stressed in this book is the relationship between the evaluator and the designer, which must drive the intrasystemic dialogue within three nested spaces: the context of use, the shared aim, and the shared perspective (the outermost layer). In order to perform a comprehensive evaluation, different techniques and methods can be employed for assessing accessibility, usability, and UX in a cascaded sequence. A number of issues must be considered, such as miscommunication between designers and evaluators, so that feedback can be relayed to designers to improve their designs. There are also criteria for choosing the right users (including users with disabilities) for controlled experiments, for precisely defining the user’s goals, and even for employing a time scale (before using, during use, and after having used the system to be evaluated).

This book introduces a new evaluation model, the integrated model of interaction evaluation, aiming to equalize the roles of the design and the evaluation processes by considering them as two opposite and separate points of view toward reaching the same goal: a successful system. It provides the reader with the necessary literature and background to comprehend the proposed model, discusses the need for an integrated model of interaction evaluation, presents the model itself, and explains the evaluation methods and techniques that can be used in the context of the proposed framework.

References

[1] Krug, S. (2000). Don’t Make Me Think! A Common Sense Approach to Web Usability. Indianapolis, IN: New Riders.

[2] Borsci, S., Kurosu, M., Federici, S., & Mele, M. L. (2013). Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals. Boca Raton, FL: CRC Press.

Empirical evidence, evaluation criteria and challenges for the effectiveness of virtual and mixed reality tools for training operators of car service maintenance

Highlights

• State-of-the-art review of car service training with virtual and augmented reality.
• Current criteria considered by researchers focus on training effectiveness.
• Limited assessment of trainees’ experience pre- and post-training.
• This paper reports challenges for the next generation of studies on training technologies.

Abstract

The debate on the effectiveness of virtual and mixed reality (VR/MR) tools for training professionals and operators is long-running, with prominent contributions arguing that there are several shortfalls in the experimental approaches and assessment criteria reported within the literature. In the automotive context, although car-makers were pioneers in the use of VR/MR tools for supporting designers, researchers have only recently started to explore the effectiveness of VR/MR systems as a means of helping external operators of service centres acquire the procedural skills necessary for car maintenance processes. In fact, from 463 journal articles on VR/MR tools for training published in the last thirty years, we identified only eight articles in which researchers experimentally tested the effectiveness of VR/MR tools for training service operators’ skills. To survey the current findings and the deficiencies of these eight studies, we use two main drivers: (i) a well-known framework of organizational training programmes, and (ii) a list of eleven evaluation criteria widely applied by researchers in different fields for assessing the effectiveness of training carried out with VR/MR systems. The analysis that we present allows us to: (i) identify a trend among automotive researchers of focusing their analysis only on car service operators’ performance in terms of time and errors, leaving unexplored important pre- and post-training aspects that could affect the effectiveness of VR/MR tools in delivering training content, e.g., people’s skills, previous experience, cybersickness, presence and engagement, usability and satisfaction; and (ii) outline the future challenges for designing and assessing VR/MR tools for training car service operators.

See the full article

Beyond the User Preferences: Aligning the Prototype Design to the Users’ Expectations

It is important for practitioners to conceptualize and tailor a prototype in tune with the users’ expectations in the early stages of the design life cycle, so that modifications of the product design in later phases are kept to a minimum. According to user preference studies, the aesthetics and the usability of a system play an important role in the user’s appraisal and selection of a product. However, user preferences are just a part of the equation. The fact that a user prefers one product over another does not mean that he or she would necessarily buy it. To understand the factors affecting the user’s assessment of a product before actual use, and the user’s intention to purchase it, we conducted the study reported in this article. Our study, a modification of a well-known protocol, considers users’ preferences among six simulated smartphones, each with a different combination of attributes. A sample of 365 participants was involved in our analysis. Our results confirm that the main basis for users’ pre-use preferences is the aesthetics of the product, whereas the main basis for the user’s intention to purchase is the expected usability of the product. Moreover, our analysis reveals that the personal characteristics of the users have different effects on both the users’ preferences and their intention to purchase a product. These results suggest that designers should carefully balance the aesthetics and usability features of a prototype in tune with the users’ expectations. If the conceptualization of a product is done properly, the redesign cycles after usability testing can be reduced, speeding up the release of the product on the market. [Read More]
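For readers who want a feel for this kind of analysis, the sketch below fits a logistic regression of purchase intention on pre-use aesthetics and expected-usability ratings. The data are simulated and all variable names are hypothetical, so this illustrates the general modelling approach rather than the study’s actual protocol or results.

```python
# Minimal sketch: relating pre-use ratings to purchase intention with a
# logistic regression. Data are simulated; nothing here is from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 365  # same sample size as the study, but simulated participants
aesthetics = rng.uniform(1, 7, n)          # hypothetical pre-use aesthetics rating (1-7)
expected_usability = rng.uniform(1, 7, n)  # hypothetical expected-usability rating (1-7)

# Simulate intention driven mainly by expected usability, echoing the abstract
logit = -4 + 0.2 * aesthetics + 0.8 * expected_usability
intend_to_buy = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([aesthetics, expected_usability])
model = LogisticRegression().fit(X, intend_to_buy)
print("coefficients (aesthetics, expected usability):", model.coef_[0])
```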

CALL FOR PAPERS HCII2015: Interaction and applications of tangible and virtual interfaces

The Conference Proceedings will be published by Springer in Lecture Notes in Computer Science (LNCS). This session will be part of the Human-Computer Interaction Thematic Area in the context of HCI International 2015 (http://2015.hci.international) to be held in Los Angeles, CA, USA, 2 – 7 August 2015.

Deadlines:

  • Monday, 26 January 2015: abstract submission (800 words) through the CMS, for the review process
  • Saturday, 31 January 2015: notification of the review outcome
  • Friday, 20 February 2015: submission through the CMS of the camera-ready version of all papers (full papers, typically 10 pages; minimum 8, maximum 12 pages)

To receive a formal invitation and submit a paper, please email the chair of the session: simone.borsci@gmail.com