ISR Inclusive Research Matters Series

This series signals ISR’s transparent and deliberate commitment to examining the role of inclusion in the research process. The idea for the seminar series arose from discussions within the DEI Educational Programs Working Group about inclusion in research methodology. Topics and questions explored in the series include:

  • The measurement of constructs across intersectional identities
  • Assumptions and decisions that impact sampling, representation, and the selection of research topics

Past Talks

Teaching Inclusive and Policy-Relevant Statistical Methods

Catie Hausman (University of Michigan)
April 3, 2023

Abstract: Professor Hausman will share examples of inclusive pedagogical approaches to teaching quantitative methods, based on her experience teaching statistics to master’s-level students in the School of Public Policy. She’ll describe methods that can improve learning outcomes and student engagement by recognizing a diverse array of learning styles and student backgrounds. She’ll also discuss how to promote critical thinking in quantitative classes, both to improve student comprehension and to acknowledge ethical considerations in the application of statistical methods.


Critical Quantitative Methodology: Advanced Measurement Modeling to Identify and Remediate Racial (and other forms of) Bias

Matt Diemer (University of Michigan)
February 20, 2023 

Abstract: The emerging Critical Quantitative (CQ) perspective is anchored by five guiding principles (i.e., foundation, goals, parity, subjectivity, and self-reflexivity) to mitigate racism and advance social justice. Within this broader methodological perspective, sound measurement is foundational to the quantitative enterprise. Despite its problematic history, measurement can be repurposed for critical and equitable ends. MIMIC (Multiple Indicators, Multiple Causes) models are a measurement strategy for simply and efficiently testing whether a measure means the same thing, and can be measured in the same way, across groups (e.g., racial/ethnic and/or gender groups). This talk considers the affordances and limitations of MIMIC models for critical quantitative methods that detect and mitigate racial, ethnic, gender, and other forms of bias in items and in measures.
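To make the model concrete, here is a minimal sketch of the standard MIMIC setup (our illustration, not material from the talk), in which a latent construct and each observed item are regressed on a group covariate x:

    \eta = \gamma x + \zeta, \qquad y_i = \lambda_i \eta + \beta_i x + \varepsilon_i

The structural effect \gamma captures a true group difference in the latent construct \eta, while a nonzero direct effect \beta_i on item y_i flags differential item functioning: that item scores systematically higher or lower for one group even at the same level of the latent construct.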


The Promise of Inclusivity in Biosocial Research – Lessons from Population-based Studies

Jessica Faul (Research Associate Professor, SRC, Institute for Social Research)
Colter Mitchell (Research Associate Professor, SRC, Institute for Social Research)

April 18, 2022


Giving Rare Populations a Voice in Public Opinion Research: Pew Research Center’s Strategies for Surveying Muslim Americans, Jewish Americans, and Other Populations

Courtney Kennedy (Director of Survey Research at Pew Research Center)

April 6, 2022

Abstract:

A typical public opinion survey cannot provide reliable insights into the attitudes and experiences of relatively small and diverse religious groups, such as adults identifying as Jewish or Muslim. Not only are the sample sizes too small, but adults who speak languages such as Russian, Arabic, or Farsi (and not English) are excluded from interviewing. This presentation discusses how Pew Research Center has sought to address this research gap by fielding large, multilingual, probability-based surveys of special populations. Examples include the Center’s 2017 Survey of Muslim Americans and the 2020 Survey of Jewish Americans. These studies present numerous challenges in sampling, recruitment, crafting appropriate questions, and weighting. The presentation will also discuss the Center’s methods for studying racial and ethnic populations with the goal of reporting on diversity within these populations, as opposed to treating them as monolithic groups.
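Weighting such surveys so they match known population benchmarks is commonly done by raking (iterative proportional fitting). The Python sketch below illustrates that general idea under simplified assumptions; it is not Pew Research Center’s actual procedure, and the variable names and margins are hypothetical.

    import numpy as np

    def rake_weights(sample, margins, n_iter=100, tol=1e-6):
        # sample:  dict of variable name -> array of category labels,
        #          one label per respondent.
        # margins: dict of variable name -> {category: population share};
        #          shares for each variable should sum to 1.
        # Returns weights whose weighted category shares match every margin.
        n = len(next(iter(sample.values())))
        w = np.ones(n)
        for _ in range(n_iter):
            for var, target in margins.items():
                cats = np.asarray(sample[var])
                total = w.sum()
                factors = np.ones(n)
                for level, share in target.items():
                    mask = cats == level
                    cell = w[mask].sum()
                    if cell > 0:
                        # Scale this category so its weighted share hits the target.
                        factors[mask] = share * total / cell
                w = w * factors
            # Stop once every margin is (nearly) satisfied.
            err = max(
                abs(w[np.asarray(sample[v]) == lvl].sum() / w.sum() - s)
                for v, t in margins.items()
                for lvl, s in t.items()
            )
            if err < tol:
                break
        return w

    # Hypothetical toy example: match sex and interview-language margins.
    sample = {
        "sex": np.array(["F", "F", "M", "M", "M", "F"]),
        "lang": np.array(["en", "ar", "en", "fa", "en", "en"]),
    }
    margins = {
        "sex": {"F": 0.5, "M": 0.5},
        "lang": {"en": 0.7, "ar": 0.15, "fa": 0.15},
    }
    weights = rake_weights(sample, margins)

Each pass matches one variable’s weighted distribution exactly and slightly disturbs the others; iterating until the largest deviation falls below a tolerance reconciles all margins at once.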


How Invalid and Mischievous Survey Responses Bias Estimates of LGBQ-heterosexual Youth Risk Disparities

Joseph Cimpian (Associate Professor of Economics and Education Policy at NYU Steinhardt)

March 16, 2022

Abstract:

Survey respondents don’t always take surveys as seriously as researchers would like. Sometimes they provide intentionally untrue, extreme responses. Other times they skip items or fill in random patterns. We might be tempted to think this just introduces some random error into the estimates, but these responses can have undue effects on estimates of the wellbeing and risk of minoritized populations, such as racially and sexually minoritized youth. Over the past decade, and with a focus on youth who identify as lesbian, gay, bisexual, or questioning (LGBQ), a variety of data-validity screening techniques have been employed in attempts to scrub datasets of “mischievous responders”: youths who systematically provide extreme and untrue responses to outcome items and who tend to falsely report being LGBQ. In this talk, I discuss how mischievous responders, and invalid responses more generally, can perpetuate narratives of heightened risk for LGBQ youth rather than narratives of greater resilience in the face of obstacles. The talk reviews several recent and ongoing studies that use pre-registration and replication to test how invalid data affect LGBQ-heterosexual disparities on a wide range of outcomes. Key findings include: (1) potentially invalid responders inflate some (but not all) LGBQ-heterosexual disparities; (2) this inflation is larger among boys than among girls; (3) low-incidence outcomes (e.g., heroin use) are particularly susceptible to bias; and (4) the method of detection and mitigation affects the estimates. These methods do not solve all data-validity concerns, however, and their limitations are discussed. While the empirical focus of this talk is on LGBQ youth, the issues and methods discussed are relevant to research on other minoritized groups and youth generally, and speak to survey development, methodology, and the robustness and transparency of research.
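One common family of screening techniques flags respondents who endorse an implausibly large number of rare, extreme outcomes. The short Python sketch below illustrates that general idea under assumptions of our own; it is not Professor Cimpian’s procedure, and the cutoff values are hypothetical.

    import numpy as np

    def flag_potentially_invalid(responses, prevalence, rare_cutoff=0.05, max_rare=3):
        # responses:  (n_respondents, n_items) 0/1 endorsements of extreme
        #             outcome items (e.g., frequent heroin use).
        # prevalence: per-item population endorsement rates; items below
        #             rare_cutoff are treated as "rare".
        # Respondents endorsing more than max_rare rare items are flagged
        # for sensitivity analysis rather than silently dropped.
        rare = np.asarray(prevalence) < rare_cutoff
        n_rare_endorsed = np.asarray(responses)[:, rare].sum(axis=1)
        return n_rare_endorsed > max_rare

Consistent with finding (4) above, estimates are best reported with and without the flagged cases, since the screening rule itself shapes the resulting disparity estimates.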


Workshop: Writing and Evaluating Diversity Statements For Academic Positions

March 15, 2022, 2-4 p.m.

Workshop description:
Increasingly, hiring committees are interested in how prospective faculty job and postdoctoral fellowship candidates will contribute to diversity, equity, and inclusion. As a result, many academic employers have begun to request a “diversity statement” as part of the faculty job or postdoctoral fellowship application process. The purpose of this workshop is to discuss best practices for writing and reviewing diversity statements for academic positions.

This session is most relevant for graduate students, postdocs, and faculty. Staff who support academic searches may also find this informative.

Learning objectives:

  • Reflect on ways you are committed to diversity, equity, and inclusion in your research, teaching, engagement, leadership, or other areas.
  • Identify resources that allow you to participate in and contribute to DEI initiatives, opportunities, projects, and research.
  • Review best practices to write a diversity statement.
  • Learn how to critically evaluate diversity statements.

Optional pre-workshop video about the history of the diversity statement.

Opening Statements: Sheri Notaro, ISR Director of Diversity, Equity, and Inclusion

Workshop Facilitators:
Laura Schram – Director of Professional Development and Engagement at Rackham
Askari Rushing – DEI Certificate Academic Program Specialist
Paula Fomby – Associate Director, Population Studies Center


Equity & Inclusion in Accessible Survey Design

Scott Crawford (Founder and Chief Vision Officer, SoundRocket)

Abstract:

As we work to adapt research designs to make use of new technologies (web and smart devices), it is also important to consider how study design and survey design may affect those who rely on assistive technology. Compliance standards under Sections 508 (covering use of accessible information and communication technology) and 501 (addressing reasonable accommodation) of the Rehabilitation Act of 1973 have been around for a long time, but the survey research industry has often taken the path of providing reasonable (non-technological) accommodations for study participants. These often involve alternate modes of data collection, but rarely provide a truly equitable solution for study participation. If a web-based survey is not compliant with assistive technologies, the participant may be offered the option of completing the survey with an interviewer. Survey methodologists know well that introducing a live human interaction may change how participants respond, especially if the study involves sensitive topics. Imagine a workplace survey on Diversity, Equity, and Inclusion in which a sight-impaired employee is asked to answer questions about how they are treated in their workplace, but must do so through an interviewer rather than privately via a website. Not only is this inequitable for the employee (fully sighted employees get to respond more privately), it can also bias the results if the participant answers less than honestly for fear of backlash should the interviewer pass along their frustrations. When participants are denied equitable participation, future decisions about their experiences will be made on potentially faulty results.

In this presentation, Crawford focuses on developing an equitable research design, partly by considering the overall study, not just the technology itself, and partly by sharing his experience developing a highly accessible web-based survey that is compliant with assistive technologies (screen readers, mouse input grids, voice, and keyboard navigation). He presents experimental, anecdotal, and descriptive findings from accessible web-based surveys and research designs used in higher education surveys of students, faculty, and staff on the topic of Diversity, Equity, and Inclusion. The results are directly relevant to inclusion and equity in these settings and reveal some surprising positive unintended consequences of these design decisions. Lastly, he shares some next steps for how the field may continue to improve in these areas.


Representative Research: Assessing Diversity in Online Samples

Frances Barlas (Vice President, Research Methods at Ipsos Public Affairs)

Abstract:

In 2020, we saw a broader awakening to the continued systemic racism throughout all aspects of our society and heard renewed calls for racial justice. For the survey and market research industries, this has renewed questions about how well our industry ensures that public opinion research captures the full set of diverse voices that make up the United States. These questions were reinforced in the wake of the 2020 election by the scrutiny faced by the polling industry and the role that voters of color played in the election. In this talk, we’ll consider how well online samples represent people of color in the United States. Results from studies that use both KnowledgePanel – a probability-based online panel – and non-probability online samples will be shared. We’ll discuss strategies for improving sample quality.

Dr. Frances Barlas is a Senior Vice President and the lead KnowledgePanel Methodologist for Ipsos. She has worked in the survey and market research industries for 20 years. In her current role, she is charged with overseeing and advancing the statistical integrity and operational efficiency of KnowledgePanel, the largest probability-based panel in the US, and other Ipsos research assets. Her research interests focus on survey measurement and online survey data quality. She holds a Ph.D. in Sociology from Temple University.


Impact of Response Styles on Inclusive Measurement

Fernanda Alvarado-Leiton (PhD Candidate, Program in Survey and Data Science, University of Michigan)
Sunghee Lee (Research Associate Professor, Program in Survey and Data Science, University of Michigan)
Rachel Davis (Associate Professor, Arnold School of Public Health, University of South Carolina)

Abstracts:

“Negated and Polar Opposite Items for Balanced Scale Construction: An Empirical Cross-Cultural Assessment”

Fernanda Alvarado-Leiton: Acquiescent response style (ARS) is a culturally patterned measurement error in surveys that threatens comparisons across groups with different cultural backgrounds, potentially undermining inclusivity when estimating attitudes and beliefs in a population. Balanced scales blend items written in opposite directions and are hypothesized to control ARS. This study examined differences in measurement properties between two types of balanced scales. The first type included negated items, which were item reversals formed by inserting a negation such as “no” or “not.” The second type included polar opposite items, which used antonyms or opposite terms to reverse the item direction (e.g., “unhappy” as the opposite of “satisfied”). Participants were recruited to a web survey and randomly assigned to (1) unbalanced, (2) negated balanced, or (3) polar opposite balanced scales. To contrast the effects of scale wording in mitigating ARS and improving measurement across cultural subgroups, participants came from three groups with different ARS tendencies: non-Hispanic White respondents, Hispanic respondents in Mexico, and Hispanic respondents in the US. Both types of balanced scales outperformed unbalanced scales in convergent validity, with higher correlations between scale scores and validation variables. No statistical differences were observed between negated and polar opposite scales in fit indices of factor models, reliability measures, or convergent validity for any group. These findings suggest that negated and polar opposite balanced scales are equivalent for ARS control and yield adequate measurement properties for all groups included in the study.
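For illustration only, here is a minimal Python sketch of how reverse-keyed items in a balanced scale are typically rescored before computing a scale score, alongside a simple acquiescence index; the item layout and values are our assumptions, not the study’s actual coding.

    import numpy as np

    LIKERT_MAX = 5  # 1 = strongly disagree ... 5 = strongly agree

    def score_balanced_scale(responses, reverse_keyed):
        # responses:     (n_respondents, n_items) Likert responses in 1..5.
        # reverse_keyed: boolean mask marking negated or polar opposite items.
        # Flip reverse-keyed items (5 -> 1, ..., 1 -> 5) so that higher always
        # means more of the construct, then average across items.
        r = np.asarray(responses, dtype=float).copy()
        r[:, reverse_keyed] = (LIKERT_MAX + 1) - r[:, reverse_keyed]
        return r.mean(axis=1)

    def acquiescence_index(responses):
        # Share of agree-side answers (4 or 5) regardless of item direction.
        # On a balanced scale, a pure acquiescer scores high on this index
        # while their scale score stays near the scale midpoint.
        return (np.asarray(responses) >= 4).mean(axis=1)

This pairing shows why balancing helps: acquiescent agreement with both regular and reversed items cancels out in the scale score rather than inflating it.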

“Response Style and Measurement of Satisfaction with Life”

Sunghee Lee: Satisfaction with Life (SWL), a five-item scale, is designed to assess global judgment about one’s satisfaction with life as a whole, rather than with specific domains of life. Popularly used by many organizations, including the World Health Organization (WHO), the Central Intelligence Agency (CIA), and the United Nations Educational, Scientific and Cultural Organization (UNESCO), it has been translated into over 30 languages. However, because its standard version uses a 7-point Likert response scale, it is subject to measurement error due to response style and to measurement non-comparability across groups with systematically different response styles. More importantly, whether and how this is addressed in research may have implications for its inclusivity. This study experimentally examines the utility of balancing the SWL scale with multiple racial/ethnic/linguistic groups in the US: Latinx dominant in English, Latinx dominant in Spanish, non-Latinx Whites, non-Latinx Blacks, non-Latinx Koreans dominant in English, and non-Latinx Koreans dominant in Korean. The results suggest a benefit of balancing measurement scales, but not for groups that engage in a middle response style.

“Reducing Acquiescent Response Style with Conversational Interviewing”

Rachel Davis: Acquiescent response style (ARS), the tendency for survey respondents to select positive answers such as “Strongly Agree,” is of particular concern because it increases measurement error in surveys of populations who are more likely to acquiesce, such as U.S. Latinx respondents. This study enrolled 891 Latinx telephone survey respondents in an experiment to address two questions: (1) Does administering a questionnaire using conversational interviewing (CI) yield less ARS than standardized interviewing (SI)? (2) Do item-specific (IS) response scales reduce ARS compared to disagree/agree (DA) response formats? No difference in ARS was observed between the DA and IS response scales. However, CI yielded significantly lower ARS than SI, likely due to the CI interviewers’ efforts to clarify questions and help with response mapping. Findings from this study suggest that using CI to administer survey questions may reduce ARS and improve data quality among respondents who are more likely to engage in it.


How the Measurement and Meaning of Family Structure Shape Research on Young Adult Racial Inequality

Paula Fomby (Associate Professor, Survey Research Center and Population Studies Center, Institute for Social Research, University of Michigan)
Christina Cross (Postdoctoral Fellow and incoming Assistant Professor of Sociology, Harvard University)
Bethany Letiecq (Associate Professor, Human Development and Family Science program, George Mason University)

Abstract:

At the population level, Black and White youth in the United States enter adulthood after a lifetime of divergent family structure experiences. A substantial social science literature has investigated whether this variation in childhood family structure contributes to racial disparities in the timing, sequence, and context of events in the transition to adulthood. This discussion adopts a critical perspective on mainstream research on this topic. The panelists highlight opportunities in family demography, social stratification, human development, and race and ethnic studies to advance theory, measurement, and empirical modeling in order to more accurately reflect Black family organization and to situate Black and White families in a broader context of racialized social, economic, and political inequality.