Equity and Assessment

Student learning is messy. Developing a culture of evidence and effectiveness can be messy. Assessment is messy. Our cycles and double-looped spirals make it seem so tidy, but assessment is not research. We are not bound by the Scientific Method many of us learned growing up. For many, what we hope students learn and how we measure that learning are driven by the mission and values of the institutions we serve. The Professional Competency Areas for Student Affairs Educators developed by NASPA and ACPA treat Assessment, Evaluation, and Research and Social Justice and Inclusion as two separate competencies, and rightfully so; however, in our ongoing quest to break down silos and reach across boundaries, bridging the competencies of Inclusion and Assessment presents an opportunity to leverage data for equity. The assessment process has five basic elements: establishing outcomes, measuring outcomes, collecting data, analyzing and reporting data, and making data-informed decisions. At each point in the process, we have the opportunity to work toward equity. This post focuses on ideas and examples of inclusive assessment practice for equity at four of these five points in the process.


Establishing Learning Outcomes

Learning outcomes have become common nomenclature for Student Affairs professionals, and typically, when creating a program or service, we can identify what we want students to know, be, or do after engaging with it. The literature encourages us to make sure outcomes are manageable, measurable, and meaningful, and this is where we have an opportunity to create a more inclusive practice. Create opportunities for students to exercise their agency and share what they believe would be a meaningful learning opportunity. When creating a short-term study abroad trip to Trinidad, one of the orientation sessions with the students focused on three items: what we wanted them to learn from the trip, what they wanted to learn from the trip, and how they wanted to share what they learned.

Measuring Learning Outcomes

When measuring the Trinidad experience, the plan created by the professionals was a pretest, a posttest, and journal reflections coded for themes and evaluated with a rubric. Students agreed that this was a good plan but also advocated doing something more exciting, like creating video blogs or designing a group video - and they did. The information they shared in the video, combined with the other data, changed the minds of the administrators who needed to approve the trip for future participants. Maki (2010) captures the idea that students learn in ways that are as diverse and nuanced as the students themselves. Why shouldn't our assessment efforts evolve in ways that empower students to share and demonstrate their learning - which can still be evaluated with competency-based rubrics - in ways that are practical and meaningful to them?

Data Collection  

Let's be honest: for most of us, it has been a minute since we were college students. How we ask questions about competency development areas like critical thinking and multicultural competence is probably not how our students talk about these ideas. Engaging students in item design, or at the very least soliciting their feedback on the questions and prompts you develop, gives them an opportunity to share how they would respond or to clarify what they think you are asking, which can serve as a great pilot before launching the instrument to the larger intended population. When designing a set of questions for all students living on campus, I took my draft to the Residence Hall Association and got an earful about how to rewrite the questions. It was a powerful and empowering experience. They also negotiated the addition of questions related to the programs they provide and made me promise to share the data back with them. I made them promise to use the data to inform their work. We all agreed.


Data Analysis

Perhaps the most powerful opportunity for leveraging assessment data for equity in the context of program assessment comes with the analysis of data. Disaggregating the data is a solid start; however, when examining results across or between groups, think critically about whether you are holding European/Caucasian/White students as the standard against which you compare other students' responses or whether you are genuinely comparing across all groups. In addition, when discussing the results with stakeholders, be mindful of conclusions drawn from stereotyping or other assumptions. Examine intersections of identities with simple cross-tabs of information; with new technology platforms, this has become increasingly easy. Do not assume everyone within a deceptively simple label (sex, gender, sexual orientation, race) feels the same about a program or service. Look within groups for variation in responses. Lastly, finding statistically significant results is not the same as finding results that are significant to your practice of equitably serving students.
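For readers who work with survey data in a spreadsheet or statistical tool, the disaggregation, cross-tab, and within-group ideas above can be sketched in a few lines of code. This is a minimal illustration using the pandas library; the column names and response values here are hypothetical, not from any real instrument.

```python
import pandas as pd

# Hypothetical survey responses; columns and values are illustrative only.
responses = pd.DataFrame({
    "race": ["Black", "White", "Latinx", "Black", "White", "Asian", "Latinx", "White"],
    "gender": ["Woman", "Man", "Woman", "Man", "Woman", "Woman", "Man", "Man"],
    "satisfaction": [4, 5, 3, 2, 5, 4, 4, 3],  # 1-5 rating of a program
})

# Disaggregate: report each group's mean, spread, and size,
# rather than comparing every group to a single "standard" group.
by_race = responses.groupby("race")["satisfaction"].agg(["mean", "std", "count"])
print(by_race)

# A simple cross-tab to examine intersections of identities.
intersection = pd.crosstab(responses["race"], responses["gender"],
                           values=responses["satisfaction"], aggfunc="mean")
print(intersection)

# Look within groups for variation, not just between-group differences:
# a high standard deviation signals that one label hides diverse experiences.
within_group_spread = responses.groupby("race")["satisfaction"].std()
print(within_group_spread)
```

Note that with real data, small group sizes deserve caution: a cross-tab cell with only a handful of students can both mislead the analysis and risk identifying individuals.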

How have you made your assessment practices more inclusive? This post focuses on individual, smaller-scale program assessment; do you have more systematic examples of how data are leveraged to create a more equitable experience for students? Share your experiences and examples with us in the comments. We would love to learn from you!

Ciji Heiser, Western Michigan University
