Microsoft Usability Labs Tour Recap

Here are some of my notes from our tour of the Microsoft usability labs in Mountain View last week; these are just personal observations. Thanks to Christine, Paula, Mark, and Noor, who made time to meet with us.

Eye trackers seem to be common equipment at such labs these days, but they can also create problems in understanding results. For example, someone may look at one thing while thinking about something entirely different, a common experience. Attention also shifts much more rapidly than eye movements: you might have several shifts in attention within a scene before you move your eyes once (as those of us taking Psyc 256, Perception, heard last week). At Microsoft they said eye tracker data gets a lot of attention from other groups because it looks like hard data, but it still takes a lot of interpretation.

Microsoft has both large central user experience teams (such as Windows and Office) and smaller ones that report directly to a specific product group. The larger, more centralized UX teams range in size from 50 to 80 people, including designers, writers, and researchers. Many of the products at the Bay Area campus we visited came originally from acquisitions (such as WebTV and Hotmail). MSN Future Directions and Research is one of the user experience teams we met in Mountain View. That team has six researchers worldwide, all with Master’s or PhD degrees in various types of research, and a designer (BA degree) who does mockups and prototypes.

The most challenging part of user experience work is probably working with other internal groups on usability. Program managers balance usability against other business issues when they make decisions about a product. Like other high-tech companies, Microsoft is data-driven, and customer viewpoints need to be presented explicitly, since data such as click counts may not match customer preferences.

Research begins with specific questions. There is no standard research methodology, but combinations of research methods are used depending on the problem. Typical methods include ‘lab ethnography’ (customer studies in the lab), participatory design (often using paper outlines), field visits (in homes or workplaces), eye tracking, remote testing (UK or Canada, for example), focus groups, single day testing (multiple resources on one project for a day), concept design (futures), card sorting, static mockups, interactive prototypes, and straight usability testing.

Metrics are typically customized for each study. In field visits, the biggest value is seeing the user’s environment and artifacts, as well as long-term use of a product.

On our way to the usability labs we passed life-size color posters of different people posing as personas, typical Microsoft customers. Usability studies usually have 8 to 12 participants, although remote studies can include 32 people or more. We saw a TV lab, used to evaluate software for selecting and storing TV programs, and an eye tracker demonstration. Usability tests can be streamed live to developers and others in their offices, although it’s always good to have developers come to the test and discuss it on the spot; seeing a user struggle to find a function is a powerful persuasive tool. Microsoft uses a set of standard software tools in its usability labs around the world. A single study might take a couple of weeks to prepare, a week to run, and a week to review the results.

Evaluating software for selecting and retrieving TV programs emphasizes customer enjoyment, which cannot be measured directly. Someone may spend a few minutes finding a certain film and be quite satisfied if they get what they want. The TV lab includes living room furniture as well as a separate control room. One issue with developing the software is that it will be used with a wide variety of remotes from different manufacturers, so there can be problems with certain remotes; the layout of some, for example, might encourage users to confuse modes.

The eye tracking lab used software and hardware from Tobii, a Swedish company. The software is simple to calibrate and use, and different types of output are possible, including graphical plots and spreadsheet data. Eye tracking is only used after other studies; for example, a study might first use recall methods to investigate user experience. Eye tracking generates metrics and results that “feel valid” but may still take some interpretation. For example, if someone looks up and away from the display, they might not be ignoring it, but just pausing to think. And eye tracking may show users scanning ads, because ads are designed to attract visual attention, even when those users do not report paying attention to ads. Again, you have to interpret the eye tracker results.