Testing tomorrow’s surveys today: Avatars as interviewers

After five years in off-campus leased facilities, a big part of the ISR Survey Research Center moved back home in March 2006, into a new wing of the Perry Building, just a few blocks from ISR’s long-time Thompson Street home.

“Our staff is delighted to be back on campus, in close proximity to all the ISR Centers and to the wider University community,” said Beth-Ellen Pennell, who directs the Survey Research Operations unit, the design and data collection arm of the Survey Research Center. “This new location enhances our ability to participate in the intellectual life of the Institute, and also enhances access to our expertise and facilities.”

Among these facilities is a new instrument development laboratory, where research projects with the potential to transform survey data collection are underway this summer. One such project, funded by the National Science Foundation, sounds more like science fiction than survey methodology. The project studies how avatars – animated agents that resemble humans – influence respondents in self-administered computerized interviews.

“We’re doing a series of lab experiments to explore which features of these agents improve respondent performance and satisfaction, and which hurt,” says ISR researcher Fred Conrad, principal investigator of the project. But instead of building a series of costly animated avatars with varying features and capacities, Conrad and colleagues at U-M, the New School for Social Research, and the University of Memphis are using what they call a Wizard-of-Oz technique. They are simulating animated avatars, like Victoria below, using videos of real people. The human interviewers actually interact with test respondents, but respondents only see an animated version of the interviewer. “Because respondents believe they are interacting with actual animated agents and not human interviewers, this research will help us understand the way new data collection methods blur the traditional distinction between self- and interviewer-administered surveys,” says Conrad. More generally, the research should help in designing user interfaces that promote high-quality data. And that may mean NOT using animated agents under certain circumstances, such as when questions are sensitive.


Another type of project being conducted at the new facility is the use of eye-tracking to study survey response processes. Roger Tourangeau and colleagues at the University of Maryland/University of Michigan/Westat Joint Program in Survey Methodology are planning to invite a Swedish friend they call Toby to Ann Arbor to continue work on visual context effects – how images affect survey responses in web surveys – and other issues concerning the visual character of web surveys. In a recent experiment with 117 respondents recruited through ads and flyers, Tourangeau and colleagues, including Fred Conrad and Mick Couper at ISR, employed TOBII, an unobtrusive eye-tracking device that uses near-infrared beams and video to capture participants’ eye movements without the need for cumbersome lenses or helmets. In one study, they investigated the issue of “banner blindness,” the assumption that visual images are not as influential and interesting when they appear in a website survey header as when they are located in the question area.

TOBII showed that subjects were indeed less likely to look at pictures in a website header: 81% of the subjects looked at the pictures in the question area, versus 64% who looked at the pictures in the header. Subjects also spent more time looking at pictures in the question area than in the header. Tourangeau and colleagues found that the content of the picture mattered, too. Subjects looked at photos of a happy woman more often than they looked at photos of a sad one, and for longer periods of time. But oddly enough, those who looked at the happy woman said they were less happy themselves than those who looked at the sad woman – a contrast effect, according to Tourangeau.

The new instrument development lab also has a conference room for focus groups, often used in the initial stage of questionnaire development. An observation room with one-way mirrors permits viewing of both the computer testing stations and the conference room, where researchers will also be able to conduct cognitive interviews to make sure respondents understand survey questions the way investigators intend.

Since the new ISR Perry Building also houses classrooms for the Program in Survey Methodology, the ISR Summer Institute, and the ICPSR Summer Program in Quantitative Methods, the lab is expected to advance ISR’s educational mission as well as enhance its research on survey methods. “The new facilities offer a great mix of services that knit together survey methods research, education, and survey operations,” says Patty Maher, SRO associate director.