User Experience Re-Mastered: Your Guide to Getting the Right Design - P9


CHAPTER 11 Analysis and Interpretation of User Observation

INTERPRETATION OF USER-OBSERVATION DATA

Once you have analyzed your data – for example, by grouping them according to a coding scheme – your final step is interpretation: deciding what caused the defects that you have identified and recommending what to do about them. In Table 11.3, we suggest a template for gathering defects and interpretations. Again, some sample data have been entered into the table for the purposes of illustration. For the example task, because the defect is related to the first action of the task and the task cannot be accomplished until the user chooses the right menu item, we have assigned a severity rating of "High." Notice that this form carefully preserves the distinction between our observations and our comments on them. Some practitioners prefer to gather the defects and the good points about the interface on a single form, whereas others prefer to deal with all the defects and all the good points in two separate passes. Choose whichever method you prefer.

Table 11.3 Data Interpretation Form for User Observations
Task Scenario No.: 1 | Evaluator's Name: John | Session Date: February 11
Session Start Time: 9:30 a.m. | Session End Time: 10:20 a.m.

Usability Observation | Evaluator's Comments | Cause of the Usability Defect, if There Is One | Severity Rating
The user did not select the right menu item (Options) to initiate the task. | The user was not sure which menu item Options was in. | The menu name is inappropriate, as it does not relate to the required action. | High
– | – | – | –

Assigning Severities

The process of summarizing the data usually makes it obvious which problems require the most urgent attention. In our form in Table 11.3, we have included a column for assigning a severity to each defect. Bearing in mind our comments about statistics, one important point to remember is that the weighting given to each participant's results depends very much on comparison with your overall user profile.

Recommending Changes

Some authorities stop here, taking the view that it is the responsibility of the development team to decide what to change in the interface. For example, the Common Industry Format for summative evaluation does not include a section for recommendations, taking the view that deciding what to do is a separate process when undertaking a summative evaluation:

"Stakeholders can use the usability data to help make informed decisions concerning the release of software products or the procurement of such products." (http://zing.ncsl.nist.gov/iusr/documents/whatistheCIF.html)

If your task is to improve the interface as well as to establish whether it meets the requirements, then you are likely to need to work out what to do next: recommending the changes. So we suggest a template in Table 11.4 to record the recommendations. In the table, the "Status" column indicates what is being planned for the recommended change – whether the usability defect will be rectified, has been deferred, or is being ignored for the time being.

It is hard to be specific about the interpretation of results. Fortunately, you will find that many problems have obvious solutions, particularly if this is an exploratory evaluation of an early prototype.
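If you keep forms like Tables 11.3 and 11.4 electronically, the same columns can double as a small dataset that sorts itself by severity. Below is a minimal Python sketch of such a record; the class and field names (UsabilityDefect, Severity, and so on) are illustrative assumptions, not part of the book's template:

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class UsabilityDefect:
    participant: str
    observation: str          # what was seen, kept separate from opinions
    evaluator_comment: str    # the evaluator's interpretation
    cause: str                # suspected cause of the defect, if there is one
    severity: Severity
    recommendation: str = ""  # recommended solution (Table 11.4)
    status: str = "open"      # e.g., "fix in next revision", "deferred", "ignored"

defects = [
    UsabilityDefect(
        participant="Beth",
        observation="Did not select the right menu item (Options) to initiate the task.",
        evaluator_comment="Was not sure which menu item Options was in.",
        cause="The menu name does not relate to the required action.",
        severity=Severity.HIGH,
        recommendation='Change the menu name to "Group".',
        status="fix in next revision",
    ),
]

# Severity ratings drive prioritization: list the most urgent defects first.
for d in sorted(defects, key=lambda d: d.severity, reverse=True):
    print(f"[{d.severity.name}] {d.observation} -> {d.recommendation} ({d.status})")
```

Keeping the observation and the evaluator's comment in separate fields mirrors the form's deliberate split between what you saw and what you think it means.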
Evaluations are full of surprises. You will find defects in parts of the interface that you thought would work well, and conversely you may find that users are completely comfortable with something that you personally find irritating or never expected to work. Equally frequently, you will find during the analysis of the results that you simply do not have the data to provide an answer. Questions get overlooked, or users have conflicting opinions. Finally, the experience of working with real users can entirely change your perception of their tasks, their environment, and the domain of the user interface. Your recommendations, therefore, are likely to contain a mixture of several points:

■ Successes to build on
■ Defects to fix
■ Possible defects or successes that are not proven – not enough evidence to decide either way (these require further evaluation)
■ Areas of the user interface that were not tested – no evidence (these also require further evaluation)
■ Changes to usability and other requirements

Table 11.4 Recommendations Form
Participant | Usability Defect | Cause of the Usability Defect | Severity Rating | Recommended Solution | Status Description
Beth | The user did not select the right menu item (Options) to initiate the task. | The menu name is inappropriate, as it does not relate to the required action. | High | The menu name should be changed to "Group." | Make change in next revision.
Mary | – | – | – | – | –

WRITING THE EVALUATION REPORT

Generally, you need to write up what you have done in an evaluation:

■ To act as a record of what you did
■ To communicate the findings to other stakeholders

The style and contents of the report depend very much on who you are writing for and why. Here is an example of a typical report created for an academic journal.

EDITOR'S NOTE: TIMELINESS CAN CAUSE TROUBLE: WHEN OBSERVATIONS BECOME "THE REPORT"

Be cautious about releasing preliminary results, including e-mails about the evaluation that observers send to their teams after seeing a few sessions. By chance, observers might see sessions that are not representative of the overall results. Development schedules have been shrinking over the last decade and there is often pressure to "get the data out quickly." In some cases, developers watch think-aloud sessions, discuss major problems at the end of the day, and make changes to the product (in the absence of any formal report) that sometimes appear in code even before the evaluation is complete. While fixing an obvious bug (e.g., a misspelled label) may be acceptable, changing key features without discussing the impact of the changes across the product may yield fixes that create new usability problems. If you plan to release daily or preliminary results, err on the conservative side and release only the most certain findings, with a caveat about the dangers of making changes before all the data are in. Caution observers that acting too hastily might result in fixes that have to be "unfixed" or political problems that have to be undone.

EXTRACT FROM AN ACADEMIC PAPER ON THE GLOBAL WARMING EVALUATIONS

Abstract

The Open University [OU] has undertaken the production of a suite of multimedia teaching materials for inclusion in its forthcoming science foundation course. Two of these packages (Global Warming and Cooling and An Element on the Move) have recently been tested, and some interesting general issues have emerged from these empirical studies.
The formative testing of each piece of software was individually tailored to the respective designers' requirements. Since these packages were not at the same stage of development, the evaluations were constructed to answer very different questions and to satisfy different production needs. The question the designers of the Global Warming software wanted answered was: "Is the generic shell usable/easy to navigate through?" This needed an answer because the mathematical model of Global Warming had not been completed on time but the software production schedule still had to proceed. Hence the designers needed to know that when the model was slotted in, the students would be able to work with the current structure of the program.

2.0 Background

The multimedia materials for this Science Foundation course consisted of 26 programs. This first-year course introduces students to the academic disciplines of biology, chemistry, earth sciences, and physics, and so programs were developed for each of these subject domains. The software was designed not to stand alone but to complement written course notes, videotapes, home experiments, and face-to-face tutorials. The aims of the program production teams were to:

■ Exploit the media to produce pedagogical materials that could not be made in any other way
■ Produce a program with easy communication channels to
  i. the software itself via the interface
  ii. the domain knowledge via the structure and presentation of the program
■ Provide students with high levels of interactivity
■ Sustain students with a motivating learning experience

In order to test whether the programs would meet the above aims, a framework for the developmental testing of the software was devised. A three-phased approach was recommended and accepted by the Science Team. This meant that prototypes which contained generic features could be tested at a very early stage, and that the developers would aim, with these early programs, to actually make prototypes to be tested quickly at the beginning of the software's life cycle. This was known as the Primary Formative Testing Phase. The subjects for this phase would not need to be Open University students but people who were more "competent" computer users. We wanted to see if average computer users could navigate through a section and understand a particular teaching strategy without then having to investigate all the details of the subject matter. This would mean the testing could take place more quickly and easily with subjects who could be found on campus.

The Secondary Formative Testing Phase was aimed at testing the usability and learning potential of the software. It would take place later in the developmental cycle and would use typical Open University students with some science background. Pre- to post-test learning measures would indicate the degree of learning that took place with the software. Testing the time taken to work through the programs was an important objective for this phase. It was agreed that the Open University students would be paid a small fee when they came to the university to test the software.

The Tertiary Testing Phase would include the final testing, with pairs of Open University students working together with the software. In this way, the talk generated around the tasks would indicate how clearly the tasks were constructed and how well the students understood the teaching objectives of the program.
(The framework is summarized in the table presented here.)

Framework for the Developmental Testing of the Multimedia Materials Produced for the Science Foundation Course
Evaluation Type | Aims | Subjects
Primary Phase | Test design and generic features | Competent computer users
Secondary Phase | Test usability and learning potential of product | OU students with science background
Tertiary Phase | Test usability and whole learning experience | Pairs of OU students with science background

3.0 Framework for Formative Developmental Testing

3.1 The Testing Cycle

…The aim of the testing here was to evaluate some generic features; therefore, all the pieces of the program did not have to be in place. In fact, the aim of this evaluation study was to provide the developers with feedback about general usability issues, the interface, and subjects' ease of navigation around the system…

3.2 Subjects

…Generic features were tested with "experienced users" who did not have scientific background knowledge and could easily be found to fill the tight testing schedule… In order to understand if certain generic structures worked, "experienced users" were found (mean age = 32.6 years ± 5). These consisted of 10 subjects who worked alone with the software, had already used computers for at least five years, and had some experience of multimedia software. The reason these types of subjects were selected was that if these experts could not understand the pedagogical approach and use the interface satisfactorily, then the novice learners would have extreme difficulty too. Also, these subjects were confident users and could criticize the software using a "cognitive walkthrough" methodology.

3.3 Data Collection Instruments

…In order to understand the students' background knowledge, they were given two questionnaires to complete, which were about their computer experience, and also a pre-test about the subject area which was going to be investigated. The pre-test was made up of eight to 10 questions which addressed the main teaching objectives of the software…

4.0 Evaluation Findings

…The Global Warming program introduced the students to a climatic model of the factors that change the earth's temperature. These variables – which include the solar constant, levels of carbon dioxide and water vapor, aerosol content, cloud cover, ice and snow cover, and albedo – could all be changed by the student, who could then explore these factors' sensitivities, understand the effects of coupling between factors by again manipulating them, and finally gain an appreciation of the variation of global warming with latitude and season.

There is a large cognitive overhead for the students using this software, and they have to be guided through a number of tasks. It was, therefore, important to test the screen layout, interface, and pedagogical approach very early in the developmental cycle, and this was achieved by testing a prototype without the mathematical model being in place. The "cognitive walkthrough" technique worked well here. Subjects said when they arrived at a stumbling block, "I don't know what to do here." The main difficulty experienced was when tabs instead of buttons suddenly appeared on the interface. The functionality of the tabs was lost on the subjects. A general finding here is not to mix these two different interface elements.
Subjects liked the audio linkage between sections and the use of audio to convey task instructions. One subject enthusiastically mentioned, "This feels like I have a tutor in the room with me – helping me." Other findings suggest that any graphical output of data should sit close to the data table. The simulation run button did not need an icon of an athlete literally running; however, the strategy of predict, look, and explain was a good one when using the simulation…

Conclusions

The two formative testing approaches proved to be effective evaluation techniques for two separate pieces of software. This was because the multimedia programs were in different phases of their developmental cycle. On the one hand, usability of a generic shell was the primary aim of the testing, and experienced users, who could be found at short notice, were an important factor in the success of this evaluation. The ability of the subjects to confidently describe their experience became critical data in this instance.

Extracted from Whitelock (1998)

Should You Describe Your Method?

If you are writing a report for an academic audience, it is essential to include a full description of the method you used. An academic reader is likely to want to decide whether your findings are supported by the method and may want to replicate your work. If you are writing for a business audience, then you will need to weigh their desire for a complete record of your activities against the time that they have to read the report. Some organizations like to see full descriptions, similar to those expected by an academic audience. Others prefer to concentrate on the results, with the detailed method relegated to an appendix or even a line such as, "Details of the method are available on request."

[FIGURE 11.6 Findings presented with a screenshot. From Jarrett (2004). The findings shown as callouts on the screenshot are reproduced as the block of text quoted below.]

EDITOR'S NOTE: SHOULD YOU DESCRIBE YOUR SAMPLING METHOD IN A REPORT?

There are a variety of ways to create a sample of users. Consider describing your sampling method (e.g., snowball sampling, convenience sampling, or dimensional sampling) briefly, since different sampling methods may affect how the data are interpreted.

Describing Your Results

"Description" does not need to be confined to words. Your report will be more interesting to read if you include screenshots, pictures, or other illustrations of the interface with which the user was working.
Jarrett (2004) gives two alternative views of the same piece of an evaluation report:

We know that long chunks of writing can look boring, and we joke about "ordeal by bullet points" when we're in a presentation. But how often have we been guilty of the same sins in our reports? Here are two ways to present the same information. First, as a block of text:

It seems off-putting to be "welcomed" with the phrase, "Your location is not set." This seems somewhat accusing rather than giving me encouragement to delve further. The long list of partner names is off-putting. It's important to see what the site is covering but this presentation makes it a blur. This information would be better presented in a bulleted list. The three prompts have equal visual weight and it is not clear whether you have to enter one or all of them. The prompts and headings are hard to read (orange on white). The three prompts are the same color as the headings so give an impression of being headings rather than guiding data entry. The primary functionality for search is "below the fold" at 800 × 600. Text requires horizontal scrolling at 800 × 600. The black line is dominant on the page. (p. 3)

Indigestible, right? Now look at the screenshot [in Fig. 11.6]. I preferred it, and I hope that you do too.

SUMMARY

In this chapter, we discussed how to collate evaluation data, analyze it, interpret it, and record recommendations. We introduced the concept of a severity rating for a usability defect: assigning severity ratings to usability defects helps in making decisions about the optimal allocation of resources to resolve them. Severity ratings, therefore, help to prioritize the recommended changes in tackling the usability defects. Finally, we started to think about how to present your findings. We will return to this topic in more detail, but first we will look at some other types of evaluation.

CHAPTER 12 Inspections of the User Interface

Debbie Stone, Caroline Jarrett, Mark Woodroffe, and Shailey Minocha

EDITOR'S COMMENTS

User interface inspections are the most commonly used tools in our efforts to improve usability. Inspections generally involve examining a user interface against a set of user interface standards, guidelines, or principles. This chapter describes heuristic evaluation, a method invented by Jakob Nielsen and Rolf Molich that was meant to be simple enough for developers and other members of a product team to use with limited training. The primary goal of a heuristic evaluation is to reveal as many usability or design problems as possible at relatively low cost. A secondary goal of the heuristic evaluation is to train members of the product team to recognize potential usability problems so they can be eliminated earlier in the design process. You can use heuristic evaluation when:

■ You have limited (or no) access to users.
■ You need to produce an extremely fast review and do not have time to recruit participants and set up a full-fledged lab study.
■ Your evaluators are dispersed around the world.
■ You are looking for breadth in your review.
■ Your clients have come to trust your judgment and for many issues do not require you to provide the results of user testing or other more expensive evaluation methods.
This chapter describes the procedure for heuristic evaluation and also provides several other inspection methods that practitioners can use, either individually or with groups, to eliminate usability defects from their products.

Copyright © 2010 Elsevier, Inc. All Rights Reserved.

INTRODUCTION

Although user observation gives you a huge amount of insight into how users think about the user interface, it can be time consuming to recruit participants and observe them, only to find that a large number of basic problems in the user interface could have been avoided if the designers had followed good practice in design. Undertaking an inspection of the user interface before (but not instead of) user observation can be beneficial to your evaluation.

NOTE: The contents of this section have been particularly influenced by the following sources: Virzi (1997), Nielsen (1994), and Nielsen (1993).

"Inspection of the user interface" is a generic name for a set of techniques that involve inspectors examining the user interface to check whether it complies with a set of design principles known as heuristics. In this chapter, we describe the heuristic inspection technique (also known as heuristic evaluation). Heuristic inspection was chosen as it is one of the most popular and well-researched inspection techniques for evaluation (Molich & Nielsen, 1990).

CREATING THE EVALUATION PLAN FOR HEURISTIC INSPECTION

Choosing the Heuristics

Your first task in planning a heuristic inspection is to decide which set of guidelines or heuristics you will use. If your organization has established a specific style guide, then that is one obvious choice. The advantage of using heuristics that you have used for design is that you can establish whether they have been applied consistently. Otherwise, the advantage of using a different set is that you get a fresh eye on the interface and may spot problems that would otherwise be overlooked. One set of heuristics often used in inspections is the set proposed by Nielsen (1993), which we have included as Table 12.1. We found that the humorous article on the usability of infants in the box below helped us to understand how these heuristics might be applied.

The Inspectors

Instead of recruiting a real or representative user to be your participant, you need to find one or more inspectors. Ideally, an inspector is an expert in human–computer interaction (HCI) and the domain of the system. These skills are rarely available in one person. It is also difficult for anyone, no matter how expert, to give equal attention to a variety of heuristics and domain knowledge. It is, therefore, more usual to find two or more inspectors with different backgrounds. The box below presents some ideas.

[...] In theory, collating and summarizing data from a heuristic inspection is a relatively simple matter of gathering together the forms that the inspectors have used. However, because inspectors do not always have the same opinion, you may want to get the inspectors to review each other's forms and discuss any differences between them, [...]
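Where inspectors record their findings electronically, that collation step can be automated in a few lines. The sketch below is an illustration only – the record layout, and the idea of keying on a short defect description plus the heuristic violated, are assumptions rather than part of the chapter's method:

```python
from collections import defaultdict

# Each inspector's form: (defect description, heuristic violated, severity 1-3).
inspector_forms = {
    "Inspector A": [("Menu name 'Options' is unclear", "Match between system and real world", 3)],
    "Inspector B": [("Menu name 'Options' is unclear", "Match between system and real world", 2),
                    ("No undo after deleting a record", "User control and freedom", 3)],
}

# Group reports of the same defect across inspectors.
merged = defaultdict(list)
for inspector, findings in inspector_forms.items():
    for description, heuristic, severity in findings:
        merged[(description, heuristic)].append((inspector, severity))

# Defects where inspectors disagree on severity get flagged for discussion.
for (description, heuristic), reports in merged.items():
    severities = {severity for _, severity in reports}
    status = "discuss - severities differ" if len(severities) > 1 else "agreed"
    print(f"{description} [{heuristic}]: {reports} -> {status}")
```

Even with tooling like this, the disagreements themselves still need the face-to-face review the text recommends; the merge only makes them visible.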
[...] accuracy, aesthetic appeal, and appropriate levels of completeness.

15. Privacy – The system helps the user to protect personal or private information, that belonging to the user or to the user's clients.

Guideline Reviews

Guideline reviews are inspections that use a set of design guidelines, such as a corporate style guide, instead [...] wide variety of user interfaces, so there may be some guidelines in the standard that are not applicable to the prototype you are evaluating (hence, the second column in Table 12.5 to record the applicability). The next column is for recording the adherence/nonadherence of the interface feature to the particular guideline of the standard. The inspector records his or her comments in the last column (a sketch of such a record appears after these extracts). [...]

[...] want to specify his or her own severity ratings for the usability defects based on his or her own experience and opinions. Encourage the inspectors to be as specific as possible in linking the usability defects to the heuristics. This helps the inspectors concentrate on the heuristics to be checked.

ANALYSIS OF HEURISTIC INSPECTION DATA

The analysis of your data follows the same process as for the user [...]

[...] interaction with the user. The user's interactions with the system enhance the quality of his or her experience. The user is treated with respect. The design reflects the user's professional role, personal identity, or intention. The design is aesthetically pleasing – with an appropriate balance of artistic as well as functional value.

14. Quality work – The system supports the user in delivering quality work to his [...]

[...] want to record the inspection for later review, you will need to obtain permission from your inspector(s). If your inspectors are domain or HCI experts, then they are unlikely to need any training before the session. If you have less experienced inspectors, it may be worthwhile to run through the heuristics with them and perhaps start with a practice screen so that everyone is clear about how you want the [...]

[...] from real users in the importance they attach to a defect. For example, they may miss something they think is unimportant that will trip up real users, or they may be overly concerned about something that in fact only slightly affects the real users. Inspectors may have their own preferences, biases, and views toward the design of user interfaces or interaction design, which in turn may bias the evaluation [...]

[...] indicating the problem, and constructively suggesting a solution.

Help and documentation – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focus on the user's task, list concrete steps to be carried out, and not be too large.
[...] in the first place.

Recognition rather than recall – Make objects, actions, and options visible. The user should not have to remember information from one part of the dialog to another. Instructions or use of the system should be visible or easily retrievable whenever appropriate.

Flexibility and efficiency of use – Accelerators – unseen by the novice user – may often speed up the interaction for the expert user [...]
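As noted alongside the guideline-review discussion above, the Table 12.5 form (guideline, applicability, adherence, comments) maps naturally onto a structured record. This is a sketch under assumptions – the field names and the sample guidelines are invented for illustration, not taken from the book:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GuidelineCheck:
    guideline: str                  # the guideline or standard clause being checked
    applicable: bool                # second column of Table 12.5: does it apply here?
    adheres: Optional[bool] = None  # adherence/nonadherence; None when not applicable
    comments: str = ""              # inspector's comments, recorded in the last column

checks = [
    GuidelineCheck("Menu names should describe the action or grouping they lead to",
                   applicable=True, adheres=False,
                   comments="'Options' does not suggest grouping; consider renaming."),
    GuidelineCheck("Touch targets should be at least 9 mm square",
                   applicable=False,
                   comments="Desktop-only prototype; no touch input to evaluate."),
]

# Only applicable guidelines that are not adhered to become usability defects.
defects = [check for check in checks if check.applicable and check.adheres is False]
for check in defects:
    print(f"DEFECT: {check.guideline} -> {check.comments}")
```

Keeping "not applicable" distinct from "does not adhere" preserves the distinction the form's second column exists to record.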
