Standards Update: Usability Test Reporting

David Travis  •   June 5, 2003 (Updated April 21, 2006)


You know a profession is mature when the services and products offered by practitioners share a fair amount of consistency. For example, if I commission two different architects to carry out a house survey, their reports should be pretty similar. One may be cheaper than the other, and one may be better able to describe the problems with the roof in terms I will understand, but the problems they find should be consistent.

Usability and variability

Embarrassingly, we have known for a while now that this doesn’t apply to usability testing. The well-publicised work of Rolf Molich shows that when different usability groups are asked to evaluate the same web site, they find lots of usability issues. The problem is that each group finds only a subset of all the usability problems: just one group (of nine) in Molich’s study found more than 25% of the problems. (More detail can be found at Molich’s web site.)

Given that all these people would describe themselves as “usability professionals”, it’s hard to blame the findings on different skill sets or competencies. A more likely contributor is that the different groups carried out usability testing in a variety of different ways.

Usability standards

So it’s interesting that, during the period of Molich’s work, the US National Institute of Standards and Technology (NIST) initiated an effort to “Define and validate a Common Industry Format for reporting usability tests and their results”. The overall aim of the project was to increase the visibility of software usability.

The Common Industry Format (or ‘CIF’ to its friends) isn’t a visual template that makes usability reports look the same, nor does it tell you how to run a test. However, the framework of the report defines a consistent method of carrying out usability tests. For example, you can only write a compliant report if you take objective usability measures of effectiveness, efficiency and satisfaction (these definitions come from the international usability standard, ISO 9241-11). The report also requires information such as the design of the test (including information about independent variables), data scoring procedures (including operational definitions of usability measures) and details of the statistical analysis used. Following this type of guidance helps ensure consistency, and contrasts with the more common approach, where usability tests aren’t “designed”; they just happen.

If you are interested in seeing a CIF-style report, Andy Edmonds has recently prepared an HTML version. The CIF became an ANSI standard in December 2001 (ANSI/NCITS 354-2001) and became an international standard in 2006 (ISO/IEC 25062:2006 “Common Industry Format (CIF) for usability test reports”). [Article]
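To make the three ISO 9241-11 measures concrete, here is a minimal sketch of how a practitioner might compute them from raw test data. The task data, the rating scale, and the exact formulas are hypothetical illustrations; the CIF specifies what must be reported, not a particular calculation.

```python
# Sketch: the three usability measures ISO 9241-11 names, computed from
# hypothetical participant data. Formulas and scale are illustrative only.

def effectiveness(completed, attempted):
    """Task completion rate: proportion of attempted tasks finished successfully."""
    return completed / attempted

def efficiency(task_times_sec):
    """Mean time on task, in seconds (one common efficiency measure)."""
    return sum(task_times_sec) / len(task_times_sec)

def satisfaction(ratings, scale_max=7):
    """Mean post-task rating, normalised to 0-1 on a hypothetical 1-7 scale."""
    return sum(ratings) / (len(ratings) * scale_max)

# Example participant: 8 of 10 tasks completed
print(effectiveness(8, 10))          # 0.8
print(efficiency([42, 65, 38, 51]))  # 49.0
print(satisfaction([6, 5, 7, 6]))
```

A CIF-compliant report would pair numbers like these with their operational definitions and the statistical analysis applied to them, which is exactly the consistency the format is designed to enforce.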

Tired of your smartphone? What about an anticipatory phone

Alessio Malizia

For the last few years we have all been living in close contact with our smartphones, but there is a new player claiming to take us a step forward. The Moto X, the first device born of the union between Motorola and Google, will take advantage of sensors and cloud-based services to provide us with a new experience: anticipating our desires and, where possible, fulfilling them without our even needing to take the phone out of our pockets [ARTICLE]. Interestingly, it seems that Kai Olsen and I predicted these gadgets almost 3 years ago: Kai A. Olsen, Alessio Malizia, “Automated Personal Assistants,” Computer, vol. 44, no. 11, pp. 112, 110-111, November 2011.


Via Wired


The Death of the Technologist

stuartmacpherson

In the world of technology, we focus so heavily on the evolution of the technology itself. New features, new releases, new terminology, methodology, ontology, buzzwords, languages, products, vendors and devices. We hardly ever focus on the changing nature of the people who use it. Who is the modern technologist? If we take the example of the computer, in the first half of the 20th century, the technologist was a mathematician, an academic… a necessarily brilliant mind.


It was necessary not just to build the machine, but every component within, from base principles. He or she (and this was really a fair split) had to understand every aspect of the machine – the benefit that the machine brought was simply that once it was going, it could work faster than a team of humans; maybe only just! A visit to Bletchley Park will show you the fulcrum computer (specifically the ‘Turing Bombe’) and by that I…


Business Objectives vs. User Experience

By Paul Boag  • February 4th, 2011
Here’s a question for you: would you agree that creating a great user experience should be the primary aim of any Web designer? I know what your answer is… and you’re wrong! Okay, I admit that not all of you would have answered yes, but most probably did. Somehow, the majority of Web designers have come to believe that creating a great user experience is an end in itself. I think we are deceiving ourselves and doing a disservice to our clients at the same time. The truth is that business objectives should trump users’ needs every time. Generating a return on investment is more important for a website than keeping users happy. Sounds horrendous, doesn’t it? Before you flame me in the comments, hear me out.

The Harsh Reality

Let’s begin with the harsh truth. If an organization does not believe that it will generate some form of return on investment (financial or otherwise), then it should not have a website. In other words, if the website doesn’t pay its way, then we have not done our jobs properly. Despite what we might think, our primary aim is to fulfill the business objectives set out by our clients. Remember that creating a great user experience is a means to this end. We do not create great user experiences just to make users happy. We do so because we want them to look favorably on the website and take certain actions that will generate the returns that our clients want. [Article]

Making Usability Findings Actionable: 5 Tips for Writing Better Reports

by Amy Schade • September 14, 2013

Summary: For usability testing to be valuable, study findings must clearly identify issues and help the team move toward design solutions.

If product teams don’t know what to do with usability results, they’ll simply ignore them. It is up to usability practitioners to write usability findings in a clear, precise and descriptive way that helps the team identify the issue and work toward a solution.

One of the keys to helping teams improve designs through usability is making results actionable. Running a test, analyzing results, and delivering a report is useless if the team doesn’t know what to do with those results. Usability practitioners — particularly those new to the field — sometimes complain that teams don’t act on their studies’ results.  While this may be due to a myriad of issues, it is often caused or exacerbated by problems with the findings themselves.

Usability findings have to be usable themselves: we need meta-usability.  Below are 5 tips for practitioners, old and new, to improve the usability of their results. These tips are also handy for managers, clients, product teams and customers of usability reports to help better assess the value of the reports they’re receiving. [Article]

Directive Versus Collaborative UX Consulting

by Baruch Sachs  •  September 9, 2013

“I expect our engagements to consistently require thought leadership around best practices for using our products. This is sometimes a new experience for our clients, who just expect us to enable them to do what they want to do rather than learning how they can do something better.”

My UX team consists of highly skilled, outgoing UX professionals who live and work all over the world and engage with a diverse set of customers, which is both rewarding and challenging. Generally, our consulting style is a blend of directive and collaborative consulting. By this, I mean that we provide thought leadership on how to create successful user experiences for our software products, but we do this with a customer rather than to a customer. This is a common and effective approach, blending leadership with a desire to be inclusive, get everyone on board with our ideas, and see them come to fruition.

Recently, after an engagement of several months, one customer told me that one of my consultants was almost too adaptable to their needs. This struck me as a bit odd because adaptability is what we are all about in the consulting world. We lead people without commanding them. We adapt to and work within a customer’s culture, while still exposing them to new ideas and methods that will make their project a success. Since my team and I work for a software vendor and are the subject-matter experts for all things relating to the user experience of our products, I expect our engagements to consistently require thought leadership around best practices for using our products. This is sometimes a new experience for our clients, who just expect us to enable them to do what they want to do rather than learning how they can do something better.

Hearing that we need to be more forceful in applying our methodology, UX best practices, and UI design approach is not something that I’ve become accustomed to over the past few years. However, I do see a slight shift that has led me to think more deeply about directive versus collaborative consulting styles and how they relate to user experience.

Most UX engagements—and indeed, most general consulting engagements—employ a mix of both directive and collaborative styles. Very rarely, in my experience, is a UX engagement entirely one or the other. Given that UX consulting is as much about leadership as it is about design, it is becoming more and more critical that we understand how our leadership style affects our consulting abilities. [Article]