UX Foes, Real and Imaginary

Source: UX Foes, Real and Imaginary

There’s a certain ethos in the UX community that goes like this: “You should test users in a focused way on the exact elements you want them to interact with.  And through this focused testing you will receive great feedback.  Complicated high fidelity prototypes make this difficult.”

This is the imaginary UX foe.
I’ve seen this both explicitly, in the form of blog posts and articles, and implicitly from being in the UX community for the past three years. Here is an example, via Digital Telepathy:

“With so many things to do, it may be hard to focus. Clients and test subjects wander from tree to tree, getting lost in the beautiful forest you’ve created, making it hard to get focused feedback.”

This is in reference to nuanced, complicated prototypes that perfectly mimic how the final site will look and feel. I have news for you: if users are getting lost on a full-fidelity version of your website, and can’t complete the tasks you give them, your site has problems. And dumbing down the testing is not the solution… [continue on UX Foes, Real and Imaginary]

Google Analytics and User Experience

The Answer is Analytics

Google Analytics is known for many good things: it has many useful features, it tracks websites and mobile apps, and it can describe your audience, traffic sources, and more. But one area where it really fails its users is usability. Working through the 90-odd reports and numerous metrics to identify the one that should be improved, or investigated for issues, is a daunting task. That’s where the data within Google Analytics and its many reports can paralyze any analytical mind, to say nothing of first-timers.

For me too, there has not been a single day when I sit analyzing data and don’t wish the navigation were easier. What drives me to push past all this and try to decipher some sort of insight from the numbers is the varied ways Google Analytics measures website performance. If properly set up it…

View original post 259 more words

UX Without User Research Is Not UX

Hoa Loranger
August 10, 2014

Summary: UX teams are responsible for creating desirable experiences for users. Yet many organizations fail to include users in the development process. Without customer input, organizations risk creating interfaces that fail.

…[see Full Article]


How many testers are needed to assure the usability of medical devices?

Before releasing a product, manufacturers have to follow a regulatory framework and meet standards, producing reliable evidence that the device presents low levels of risk in use. There is, though, a gap between the needs of the manufacturers to conduct usability testing while managing their costs, and the requirements of authorities for representative evaluation data. A key issue here is the number of users that should complete this evaluation to provide confidence in a product’s safety. This paper reviews the US FDA’s indication that a sample composed of 15 participants per major group (or a minimum of 25 users) should be enough to identify 90–97% of the usability problems and argues that a more nuanced approach to determining sample size (which would also fit well with the FDA’s own concerns) would be beneficial. The paper will show that there is no a priori cohort size that can guarantee a reliable assessment, a point stressed by the FDA in the appendices to its guidance, but that manufacturers can terminate the assessment when appropriate by using a specific approach – illustrated in this paper through a case study – called the ‘Grounded Procedure’.

Read More: http://informahealthcare.com/eprint/XgtbY2HHtKzkX9psRixr/full
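The 90–97% figure the FDA cites comes from the standard binomial problem-discovery model: if every usability problem has probability p of being detected by a single participant, then n participants are expected to uncover a fraction 1 − (1 − p)^n of the problems. A minimal sketch (the p values below are illustrative assumptions, not taken from the paper):

```python
def discovery_rate(p, n):
    """Expected share of usability problems found when each problem
    has per-participant detection probability p and n users test."""
    return 1 - (1 - p) ** n

# With 15 participants, per-user detection probabilities of 0.15-0.20
# land in roughly the 90-97% range cited in the FDA guidance.
for p in (0.15, 0.20):
    print(f"p = {p:.2f}, n = 15 -> {discovery_rate(p, 15):.1%}")
```

The same formula also illustrates the paper’s point that no a priori cohort size can guarantee a reliable assessment: the result depends entirely on p, which varies by product and by problem.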

Optimizing Conceptual Models for Better UX

Megan Wilson – March 25, 2014

Conceptual models are key in the design process: they provoke the right behaviors, emotions, and attitudes from consumers. All serious businesses work toward optimizing conceptual models for better UX. Nothing is more beneficial to an enterprise than a product that attracts a positive User Experience (UX). The best way to maximize UX is to ensure that the conceptual model of the product matches, as nearly as possible, the mental models that the users have. Some of the ways to do this are illustrated below.

Use Conventional Conceptual Models

Using conventional product design will create the right UX. Users already have mental models formed from experience with similar products. When a company replicates this positive experience through a similar design, the new product or service is likely to be accepted too.

Conceptual Models Enhance Usability

Usability is the question of how efficiently and effectively the user can work with the product. An important aspect of this is learnability: the conceptual model behind a product should reduce the time a consumer needs to learn how the product works.

The model should also offer better UX by ensuring that users achieve their goals efficiently when using the product, and errors associated with its use should be minimized through the conceptual model. In addition, it is essential that the user enjoys using the product in the process of achieving success; customer satisfaction is a key factor that conceptual models should address. [Articles]

10 Things To Know About Variability In The User Experience

Jeff Sauro • March 18, 2014

Most people are comfortable with the concept of an average or percentage as a measure of quality. An equally important component of measuring the user experience is to understand variability. Here are 10 things to know about measuring variability in the user experience.

  • Variability is inherent to measuring human performance. People have different browsing patterns, speeds, inclinations and motivations when they use software or websites. Differences in prior experience and domain knowledge often play a primary role in how users solve problems and accomplish tasks in interfaces. This can lead to vastly different experiences—including encountering different interface problems and resulting in large differences in task performance times or perception metrics.
  • Often the differences between users outweigh the differences between designs. One consequence of the high variability between users is that even major design changes aren’t detected in completion rates or perception metrics from observing samples of users. It might be that the changes did make an impact, but the variability between users masks the findings.
  • Two ways to manage high variability are increasing the sample size and using a within-subjects study. When making a comparison between designs, if each user attempts tasks on both designs (in a counterbalanced order), you effectively eliminate the between-user variability. This is called a within-subjects study. Each user acts as their own control, and differences between designs are easier to detect, even with small sample sizes. If you cannot use a within-subjects study or are not making a comparison, the next best alternative is to increase your sample size. Remember, though, that you need to roughly quadruple your sample size in order to cut your margin of error in half. [Article]
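The quadrupling rule falls out of the fact that the margin of error shrinks with the square root of the sample size. A quick sketch, assuming a simple normal-approximation confidence interval (the standard deviation is an invented example value):

```python
import math

def margin_of_error(sd, n, z=1.96):
    """Half-width of a 95% confidence interval around a mean,
    using the normal approximation: z * sd / sqrt(n)."""
    return z * sd / math.sqrt(n)

sd = 30.0  # illustrative standard deviation of task times, in seconds
for n in (20, 80):  # quadrupling n...
    print(f"n = {n}: +/- {margin_of_error(sd, n):.1f}s")
# ...halves the margin of error, since sqrt(80) = 2 * sqrt(20)
```

Halving your uncertainty thus costs four times the participants, which is why a within-subjects design is usually the cheaper lever when it is available.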

Sample representativeness in usability testing: the ramble in the woods example

Simone Borsci 17.03.2014

You might imagine that a practitioner invites a certain number of people for a particular kind of ramble in the woods. The practitioner assigns participants the following goal: “Go in the woods and follow the pathway indicated by the map in order to identify and report to me where in the woods you see mushrooms without picking them up from the woods. You can note on the map the position of each mushroom you see.” After each participant returns from the woods, the practitioner has a list of the mushrooms identified by each subject. It can so happen that some subjects will have identified the same mushrooms in the same positions; these mushrooms are probably the most visible. Other mushrooms, however—those in more hidden positions—are identified only by a small number of participants. When a mushroom is identified by a participant, the identification reported by the other participants adds nothing to the overall discovery behavior of the sample, while any new mushroom identified increases the overall effectiveness of the sample. The more participants go into the woods, the higher the probability that the sample will have identified a larger number of mushrooms, because a larger group has a greater scope for divergent behavior (i.e., more participants looking for mushrooms in hidden positions) as compared to a smaller group. Involving a large sample is quite expensive for the practitioner, however, and it is never the most efficient solution. It is in fact possible to identify the smallest group with the greatest quality of discovery behavior. In this context, the quality of the behavior is seen as the ability of a small sample to accurately represent the behavior of a larger sample.
We can thus define the representativeness of a sample as the degree to which the mushrooms (i.e., the problems) accurately identified by the sample represent the mushrooms that can be identified by all possible participants in the woods ramble following the pathway indicated on the map (i.e., the task of the evaluation test). 

From: Computer Systems Experiences of Users with and Without Disabilities: An Evaluation Guide for Professionals, pp. 184–185
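The mushroom analogy is easy to turn into a toy simulation: give each mushroom (problem) a per-participant visibility, send participants in one at a time, and watch the cumulative discovery count saturate. All numbers below are invented for illustration:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Each "mushroom" (usability problem) has a per-participant
# probability of being spotted: a few obvious, many hidden.
visibilities = [0.8] * 3 + [0.3] * 7 + [0.05] * 10

found = set()
for participant in range(1, 16):
    for i, p in enumerate(visibilities):
        if random.random() < p:
            found.add(i)
    print(f"after {participant:2d} participants: "
          f"{len(found)}/{len(visibilities)} mushrooms found")
```

The curve flattens quickly: later participants mostly rediscover mushrooms that are already on the map, which is exactly the diminishing return that makes very large samples inefficient.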

Understanding Effect Sizes In User Research

Jeff Sauro • March 11, 2014
The difference is statistically significant. When using statistics to make comparisons between designs, it’s not enough to just say differences are statistically significant or only report the p-values. With large sample sizes in surveys, unmoderated usability testing, or A/B testing you are likely to find statistical significance with your comparisons. What you need to know is how big of a difference was detected. The size of the difference gives you a better idea about the practical significance and impact of the statistical result. [article]