User experience (UX) experts agree that there is no substitute for direct observation of test subjects as they wrestle with new products, features, and enhancements. However, those observations take time. Meanwhile, Agile methodologies have accelerated product development cycles, putting pressure on UX teams to report results faster. To help alleviate this pressure, Intel analysts have added another tool, known as heuristic analysis, to their arsenal.
During a heuristic analysis, expert evaluators use a set of standardized usability questions to score a product on a sliding scale. This approach complements other tools (for example, observational studies) and is valuable because it offers nimble, iterative loops that help design teams build better products while staying on schedule.
Dr. Daria Loi, who manages UX Innovation in the PC Client Group at Intel, has a long and impressive background when it comes to observing and testing user experiences. Loi is a passionate participatory design practitioner, with established expertise in ethnographic and practice-based inquiry, natural UIs, and creative management.
Figure 1: Dr. Daria Loi heads up Intel’s UX Innovation in the PC Client Group in Hillsboro, Oregon.
Loi is already well known and respected for her previous work on touch and sensors, and for her groundbreaking 10x10 methodology. Now she has combined her super-Agile instincts with established user-centered methodologies to create a novel heuristic analysis tool that is specifically geared to 2-in-1 devices. The new system links to long-established doctrine in the field of UX analysis, with some particularly Intel-inspired tweaks.
Key Benefits to Using the New Heuristic Analysis Tool
The new heuristic analysis tool that Dr. Loi developed has three key benefits that center around speed, budget, and quantity.
- Speed. Whereas a user study with a reasonable sample usually requires many weeks, the new tool enables Loi to deliver UX feedback to her team, and to everyone else who depends on her results, within a week.
- Budget. Heuristic analysis is much less expensive to set up and conduct. Loi says the budget required to conduct a heuristic evaluation cycle is “dramatically lower” than the money needed for a user study. In some cases, a formal user study can cost up to ten times what a heuristic analysis costs.
- Quantity. Because the constraints of time and money are relaxed, Loi and her team can do more. “Because of low cost and fast turnaround, I can conduct many more iterations and gather more data and more granular tracking of that data,” she said.
Does this mean that one could get away with conducting just a heuristic analysis, with no end-user testing at all? Loi doesn’t think so. “During the exploratory phase of a product, you could combine heuristic evaluations with the 10x10 process to provide fast yet deep UX scaffolds and then combine heuristics with full user studies when the product is in the definition phase,” she said. “This enables you to gather more data, have more iterations, be leaner, and triangulate diverse data points to improve the product as well as provide direction to the next generation.”
All About Heuristic Analysis
A good example of a heuristic is the sentence, "The stylus is recognized by the device.” If you next imagine a long list of similar sentences, each one describing the ideal state of a different component of a system, you have the beginnings of heuristic analysis. In Loi’s case, when a system goes through a heuristic loop, evaluators try many usages (for example, playing a video or making a video call), test the system in many contexts (for example, at a desk, on the lap, or standing), and carefully test all components, including the touch screen, the keyboard, and the stylus. During this in-depth expert testing, the tool presents a series of sentences describing the “ideal state” for each item, asking evaluators to score the product in relation to that sentence. Scoring is typically done by assigning a number from 1 to 5, where 1 usually means “very poor” and 5 means “very good.” The tool also provides space for evaluators to add comments explaining scores and any detail that can help design teams to improve that item. Here is an example:
Heuristic: “The stylus is recognized by the device.”

Evaluator comment: “The device is not consistent in recognizing the stylus. Some applications seem to aggravate the situation, often forcing a reboot. It could be an interoperability issue or a more generic bug. This must be fixed.”
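A single scored item like the one above can be modeled as a small record: the heuristic sentence, a 1-to-5 score, and a free-text comment. The sketch below is illustrative only; the class and field names are assumptions, not part of Loi's actual tool.

```python
from dataclasses import dataclass

# Score scale described in the article: 1 = "very poor" ... 5 = "very good"
MIN_SCORE, MAX_SCORE = 1, 5

@dataclass
class HeuristicResult:
    """One evaluator's response to a single heuristic (hypothetical structure)."""
    heuristic: str     # the "ideal state" sentence being scored
    score: int         # 1-5 rating against that sentence
    comment: str = ""  # free-text detail to help the design team improve the item

    def __post_init__(self):
        # Reject scores outside the sliding scale the article describes.
        if not (MIN_SCORE <= self.score <= MAX_SCORE):
            raise ValueError(f"score must be {MIN_SCORE}-{MAX_SCORE}, got {self.score}")

# The stylus example from above, with an assumed score of 2 for illustration
result = HeuristicResult(
    heuristic="The stylus is recognized by the device.",
    score=2,
    comment="Recognition is inconsistent; some applications force a reboot.",
)
```

Keeping score and comment together per heuristic is what lets the later compilation step produce both quantitative and qualitative feedback from the same records.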
Heuristic evaluations were initially developed to identify problems associated with the design of user interfaces. Jakob Nielsen's heuristics are the most widely known. He developed them with Rolf Molich in 1990, and the final set, still used today, was released in 1994 and published in Nielsen's book *Usability Engineering*. Here’s an overview of Nielsen’s Top 10 list:
- Visibility of system status. The system should always keep users informed about what is going on.
- Match between system and the real world. The system should speak the users' language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms.
- User control and freedom. The system should contain a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue.
- Consistency and standards. The system should follow platform conventions.
- Error prevention. The system should prevent problems from occurring in the first place.
- Recognition rather than recall. The system should minimize the user's memory load by making objects, actions, and help options visible.
- Flexibility and efficiency of use. The system should cater to both inexperienced and experienced users, and allow users to tailor frequent actions.
- Aesthetic and minimalist design. The system should not contain information that is irrelevant or rarely needed.
- Help users recognize, diagnose, and recover from errors. The system should use plain language to explain errors, with no codes, and precisely indicate the problem and then constructively suggest a solution.
- Help and documentation. The system should provide help and documentation. Any such information should be easy to search, focus on the user's task, list concrete steps to be carried out, and not be too large.
Adhering to this set of heuristics reduces complexity and saves evaluation time because it requires only a small set of experts. Most evaluations are accomplished in a matter of days, with variations based on a product’s complexity, the purpose of evaluation, the nature of discovered usability issues, and the reviewers’ competence. When conducted prior to user testing, heuristic analysis will reduce the number and severity of design errors, but it is just one tool out of many. Loi emphasizes that heuristic analysis brings value only if it is used as a complement to other tools.
Each individual evaluator inspects the system alone. Only after all evaluations have been completed are the evaluators allowed to communicate and have their findings aggregated. This procedure is important to ensure independent and unbiased evaluations from each evaluator.
The results of the evaluation can be recorded either as written reports from each evaluator or by having the evaluators verbalize their comments to an observer as they go through the interface. Written reports have the advantage of presenting a formal record of the evaluation but require additional effort from the evaluators and an evaluation manager. Using an observer adds to the overhead of each evaluation session but reduces the evaluators’ workload. Also, with an observer, evaluation results are available fairly soon after the last evaluation session because the observer needs to understand and organize only one set of personal notes, not a set of reports written by others. Furthermore, the observer can assist the evaluators in operating the interface if a problem occurs, such as an unstable prototype, and help guide any evaluators with limited domain expertise or who need to have certain aspects of the interface explained.
In Loi’s case, she applied this approach beyond interface aspects. Her heuristic tool includes hardware, software, and industrial design items. She organized the tool so that each evaluator can independently test the product, providing written scoring and commentary for each heuristic. Once all evaluators complete their assessment, she compiles quantitative and qualitative feedback, prioritizes recommendations, shares the report with key stakeholders, and works with them to address identified items.
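Once the independent written evaluations are in, compiling them amounts to aggregating each heuristic's scores and surfacing the worst performers for the prioritized recommendations. The following is a minimal sketch of that compilation step, with made-up heuristics, scores, and a flagging threshold; none of these values come from Loi's actual reports.

```python
from statistics import mean

# Hypothetical data: heuristic -> independent 1-5 ratings from five evaluators
scores = {
    "The stylus is recognized by the device.": [2, 1, 2, 3, 2],
    "The touch screen responds to all gestures.": [5, 4, 5, 5, 4],
    "The keyboard is usable on the lap.": [3, 3, 4, 2, 3],
}

THRESHOLD = 3.0  # assumed cut-off below which an item needs attention

# Quantitative summary: average score per heuristic
report = {heuristic: round(mean(vals), 2) for heuristic, vals in scores.items()}

# Prioritized recommendations: low-scoring items first
priorities = sorted(
    (h for h, avg in report.items() if avg < THRESHOLD),
    key=report.get,
)
```

Aggregating only after every evaluator has finished mirrors the independence rule above: no score is influenced by another evaluator's opinion.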
One of Nielsen’s key findings from long ago is that there is a law of diminishing returns when it comes to adding evaluators. He demonstrated the relationship between the number of evaluators and the percentage of issues discovered, with five evaluators usually discovering approximately 75 percent of issues (see Figure 2). “More evaluators should obviously be used in cases where usability is critical, or when large payoffs can be expected due to extensive or mission-critical use of a system,” Nielsen wrote.
Figure 2: After a certain point, there are diminishing additional benefits to adding evaluators.
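The curve in Figure 2 follows the Nielsen and Landauer model, in which the expected share of problems found by n evaluators is 1 - (1 - L)^n, where L is the probability that a single evaluator finds a given problem. L varies from study to study; the value 0.24 below is an assumption chosen only to reproduce the roughly 75 percent figure for five evaluators quoted above.

```python
# Nielsen & Landauer diminishing-returns model:
#   found(n) = 1 - (1 - L)**n
# L = probability that one evaluator finds any given problem (assumed here)
L = 0.24

def problems_found(n: int, l: float = L) -> float:
    """Expected fraction of usability problems found by n independent evaluators."""
    return 1 - (1 - l) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} evaluators -> {problems_found(n):.0%} of problems found")
```

Each added evaluator mostly rediscovers problems earlier evaluators already found, which is why the curve flattens and why a small expert panel is usually enough.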
At Intel, Loi currently enrolls 9 to 10 evaluators for each cycle, both because she wants to catch more than 75 percent of the problems and because of the complexity of the products she usually investigates, such as Ultrabook™ devices and 2-in-1 devices.
To date, Loi has developed over 550 standardized questions, or heuristics, that pinpoint user experience issues not only in software but also in hardware. In addition, she ported those questions from a cumbersome spreadsheet to a web-based system that speeds up her statistical analysis. The result of Loi’s UX work over the past few years has been dramatic improvements in numerous reference designs for Ultrabook devices based on CPUs codenamed Broadwell, Ivy Bridge, and Haswell. According to published reviews, the hard work is paying off, enabling Intel to demonstrate a significantly better process and methodology for determining the right product features and capabilities. This approach in turn will help OEMs develop better consumer devices.
Loi continues to modify and improve Intel’s UX methodology processes focused on 2-in-1 system development to gather the most accurate user feedback in the most efficient manner possible. Her evolving method of using heuristics has led to further breakthroughs in hardware improvements.
By working with sustainable, repeatable processes that avoid lengthy lab or in-home studies and focus on fast Agile components, Intel’s UX team continues to blaze a trail for others to follow. Loi’s process has resulted in faster integration of key findings and repeatable UX questioning, which led to an overall set of UX guidelines that set industry benchmarks.
Loi gains great satisfaction from the feedback she receives from product managers, planners, and team members. According to Mayne Mihacsi, strategic planner at Intel, Loi’s “…keen insights of end-user motivations, behaviors, and aspirations led to and resulted in strategic product line changes within Intel, benefiting Intel's customers and consumers of Intel-based products.”
James Edwards, principal engineer at Intel, echoed that sentiment. “Daria is one of the most conscientious, disciplined, and devoted people on the UX side that I've ever worked with. She consistently provides timely and deep information from her UX studies that help direct our work on systems and software. She provides very practical, user-focused data and results that bridge research to practical findings on our products.”
For more such Windows resources and tools from Intel, please visit the Intel® Developer Zone.