The last ten years have seen an explosion of interest in growth mindset and belonging among institutions of higher education. Since 2017, roughly 100 institutions each year have used free resources created by the College Transition Collaborative (CTC) and the Project for Education Research that Scales (PERTS), including the Growth Mindset for College Students program and the Social Belonging for College Students program. Despite the availability of free, easy-to-implement, scientifically validated programs for students, institutions still frequently approach PERTS and CTC in search of something else: measurement tools.
They are right to do so. Research has pointed to institutions’ critical role in creating culture and policies to help all students feel confident that they belong in college and that their institution believes in their potential. These factors can be important predictors of excellent and equitable outcomes (Murphy & Reeves, 2019; Canning et al., 2019). Programs to help students approach college with a growth mindset and clear beliefs about belonging — while valuable — are unlikely to help institutions change their cultural norms and practices and see more equitable outcomes (Murdock-Perreira, Boucher, Carter, & Murphy, 2019). It is unfair for the burden to be placed solely on students to “persevere” through conditions that actually send signals that they don’t belong. Systemic and sustainable change necessitates a structural approach.
Institutional leaders understand that measurement is key to meaningful change. Without measurement, you can try all the evidence-based, psychologically attuned practices you want — and some of them might even work. But you’ll never know what was effective, or where. With institutional resources and bandwidth increasingly limited, it is more important than ever to understand the return on investment for any new initiative. And once the most effective practices are identified, data help build a case to administrators for how and where to spread them.
Likewise, measurement alone, without a theory of change and aligned evidence-based change ideas, can leave those leading systems change adrift. Determining how to change systems from scratch can be daunting and burdensome. And building buy-in for systems change without confidence in why the change is important or specificity in what the change will entail is usually a non-starter. A theory of change organizes the efforts of many diverse actors around a single definition of success, so that everyone is pointing in the same direction and working from a shared set of measures and change ideas. (For free resources on developing theories of change and designing continuous improvement efforts, visit https://www.shift-results.com/online-learning/.)
The SEP Approach
In the Student Experience Project (SEP), we co-designed a theory of change that includes a shared and measurable aim, a theory for how to accomplish that aim, and measurements ranging from longer-term academic outcomes to more frequent measures of student experience across multiple terms. This framework was designed collaboratively by leading social psychologists and continuous improvement experts, alongside teaching faculty and staff from diverse fields such as chemistry, mathematics, public health, biology, and the social sciences. The six universities in the SEP cohort used this framework over two years to continuously improve students’ experiences in their academic courses, sharing data and lessons along the way so they could learn faster together. These institutions were able to reduce disparities in positive student experiences, which in turn predicted gains in academic outcomes (see Boucher et al., 2021).
Measuring Experiences
While most institutions already track academic outcomes (e.g., DFW rates, persistence), few track student experiences systematically (we hope this will change). We developed a portfolio of shared measures to learn about student experience in depth and over time, and to support day-to-day work to improve student experiences in classes. We adapted survey measures from the research literature to provide evidence-based, actionable, and practical measures that busy instructors can use in their classes.
Evidence-based. We chose measures to comprehensively address the most important factors known from research to shape learning and achievement outcomes, including social belonging, institutional growth mindset, and self-efficacy, as well as less-explored but equally important contributors such as identity safety and trust in the fairness of institutional practices. We started with a list of several dozen constructs, all drawn from research literature showing firm links to academic outcomes. We whittled this list down as a community, based on (1) initial data analyses showing which constructs were most important at each institution; (2) which constructs were likely, based on previous research, to be sensitive to change over the course of a semester; and (3) which were most meaningful to institutional stakeholders. We arrived at a list of seventeen items, designed to provide a global assessment of the experiences that matter most to student learning and appropriate for repeated sampling.
Actionable. A measure’s unit of analysis must match its unit of action. Institutions commonly turn to large-scale climate surveys to measure student experiences. But while climate surveys can be useful as snapshots across departments or a whole campus, they rarely get data into the hands of the people who can directly improve students’ experiences in the classroom. In the SEP, our data had to be relevant and accessible to the people who would actually use it: in this case, instructional faculty and community of practice (CoP) facilitators. All measures use language that focuses students’ attention on their immediate context (“in this class…”), rather than their global college experience, because class experiences are within instructors’ spheres of influence. Instructors and CoP facilitators also received data early enough in the semester to make changes: the Ascend program allowed CoPs to survey students and receive comprehensive data reports within a week. Reports were delivered directly (and confidentially) to faculty and included recommendations for practical, evidence-based approaches that instructors could try. This allowed instructors to be nimble and responsive to how students’ experiences were changing week to week, rather than waiting until the end of the semester (or longer), when the data are no longer directly actionable.
Practical. Finally, we knew that minimizing the time and resources needed for students to engage with the survey, and for faculty and administrators to understand the data, would be critical to ensure buy-in. Our seventeen-item survey was brief enough to be completed in under 10 minutes, and the Ascend program handled all the data collation, imputation, and summarization so that busy instructors could prioritize learning from data and improving their practice.
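To make the collation-and-summarization step concrete, the sketch below shows one simple way a survey platform might turn raw responses into a per-item report. It is a minimal illustration only: the file name, the `item_` column convention, the 1–7 response scale, and the choice of mean imputation are our assumptions, not a description of the Ascend program’s actual pipeline.

```python
import pandas as pd

# Hypothetical export: one row per student, one column per survey item,
# with Likert responses coded 1-7 and blanks for skipped questions.
responses = pd.read_csv("survey_responses.csv")  # illustrative file name

item_cols = [c for c in responses.columns if c.startswith("item_")]

# Fill skipped items with the item's mean response. Mean imputation is
# just one reasonable choice among many; it keeps the example simple.
imputed = responses[item_cols].apply(lambda col: col.fillna(col.mean()))

# A per-item summary an instructor could scan in a weekly report.
summary = pd.DataFrame({
    "mean_response": imputed.mean().round(2),
    "pct_favorable": (imputed >= 5).mean().mul(100).round(1),  # 5-7 of 7
    "n_answered": responses[item_cols].notna().sum(),
})
print(summary)
```

The point of automating this step is exactly what the paragraph above describes: instructors see a digestible summary rather than a raw spreadsheet, so their time goes to interpreting the data and adjusting their practice.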
Student Experiences Are a “Global Health” Measure for Higher Education
The SEP student experience measures give educators a series of comprehensive yet straightforward snapshots of their students’ experiences throughout the semester. While this idea may be novel in higher education, we adapted lessons from a revolution occurring in medicine over the last 30 years. The field of medicine is working to move away from sole reliance on clinical outcomes toward a more patient-centered approach that also incorporates patients’ own experience of their condition as an indicator of overall health and well-being. The Patient-Reported Outcomes Measurement Information System (PROMIS) has helped enable this revolution by allowing diverse groups of clinicians to measure and improve patients’ quality of life using an industry-standard measurement framework (Forrest et al., 2014). Like student experience, patient well-being is diverse and spans multiple constructs. The PROMIS measures provide both construct-level measurement and a “global” measure of overall well-being.
We have adapted this concept for education, with strong conviction that an evidence-based, actionable, and practical framework for measuring students’ experiences in college can help revolutionize higher education in a similar way. The SEP measurement framework includes both individual constructs of student experience and a “global” measure that weaves them together into a single index, which the SEP community named the Student Experience Index. We believe it is important to broaden institutional focus beyond objective (and more distal) outcomes like DFW rates and retention to also include students’ own experiences of their learning, which are clearly predictive of those outcomes.
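To show what a “global” measure woven from construct-level scores could look like in practice, here is a small sketch that builds on the hypothetical survey data above. The item-to-construct mapping and the equal-weight averaging are illustrative assumptions; this article does not specify how the Student Experience Index is actually computed.

```python
import pandas as pd

# Hypothetical assignment of the seventeen survey items to constructs
# named in this article. The real item groupings and any weighting
# behind the Student Experience Index are not specified here.
CONSTRUCTS = {
    "social_belonging": ["item_01", "item_02", "item_03"],
    "growth_mindset":   ["item_04", "item_05", "item_06"],
    "self_efficacy":    ["item_07", "item_08", "item_09"],
    "identity_safety":  ["item_10", "item_11", "item_12", "item_13"],
    "trust_fairness":   ["item_14", "item_15", "item_16", "item_17"],
}

def experience_scores(imputed: pd.DataFrame) -> pd.DataFrame:
    """Per-student construct scores plus one global composite."""
    scores = pd.DataFrame({
        name: imputed[items].mean(axis=1)
        for name, items in CONSTRUCTS.items()
    })
    # Global index as an unweighted mean of construct scores (assumption).
    scores["experience_index"] = scores.mean(axis=1)
    return scores
```

Reporting both levels serves the two purposes described above: the construct scores tell an instructor where to focus, while the single index gives a leading indicator that can be tracked over time, much as PROMIS pairs construct-level scores with a global well-being measure.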
Resources and Next Steps
Going forward, we will continue to validate and develop the SEP measurement framework in order to further simplify it, maximize its value as a leading indicator of educational outcomes, and increase its practicality and scalability. Our goal is to create a measurement tool that is valuable to educators and can guide efforts to improve student experience and equitable outcomes across broad swaths of the higher education landscape. The measures are currently available as a stand-alone instrument or for use within the Ascend program. As we continue to refine and validate them, we will make updates available to the public as well.
We encourage those using these (or any) instruments to measure students’ experiences to be thoughtful about how the data are used. The data should be a tool faculty use to improve their own practice, not a framework for judging or penalizing them. Evaluative uses often backfire: they rarely produce authentic change or genuine belief in why the changes matter. Furthermore, evaluative measures such as course evaluations are known to be biased against women and faculty of color. The SEP measures mitigate this risk by focusing attention on features like the pace of the course and the instructor’s flexibility, rather than on the instructor’s intellectual ability and competence (judgments that trigger negative stereotypes about women and faculty of color). The measures are most powerful when used for improvement within an improvement community, such as an improvement network or community of practice, where faculty can share their data and exchange ideas for how to improve. For this reason, the Ascend program keeps individual instructor data confidential, for faculty to share at their sole discretion. We recommend a similar approach for any institution looking to measure and improve student experiences.
A Student-Centered Revolution
While thoughtfulness is needed in this (or any) kind of higher education reform, the big picture is that institutions of higher education seem to be “awakening” to the importance of students’ experiences for their learning and achievement. One resource available to this revolution is the domain expertise of higher education practitioners; indeed, many of the data analysis techniques in wide use today were developed by university faculty. The capacity of higher education practitioners to use data for professional development is therefore high. We hope that more institutions will add frequent and consistent measurement, along with practices for learning directly from students and instructors. These snapshots of how the learning environment is being experienced speak volumes about what is working and what can be improved now.
Authors:
- Sarah Gripshover, PhD, Director of Research, PERTS
- Kathryn Boucher, PhD, Assistant Professor, College of Applied Behavioral Sciences, University of Indianapolis, and Principal Investigator, College Transition Collaborative
- Krysti Ryan, PhD, Director of Research, College Transition Collaborative
- Karen Zeribi, Founder and Executive Director of Shift