Promoting the Use of Assessment Results

Working with graduate students is great. There are a host of reasons for this, but a primary one for me is that they keep you on your toes professionally. They ask tough questions that push you to reflect on your own practice, think critically about the use of theory, and adopt different lenses. A couple of months ago I was discussing assessment and planning with a group of graduate students who were preparing to graduate and enter their first post-master's professional role. During the discussion, a student posed a great question: How do you make changes happen based on your assessment results when you aren't leading a program or department?

This question gave me pause. I certainly think about issues of use and closing the loop, but I generally do so through my own lens of managing assessment from a divisional perspective. From that vantage point, the answer can revolve around building infrastructures and processes that support closing the loop, and establishing related accountabilities and expectations. However, the student was asking a very different question: how does one promote the use of assessment results when one's organizational position does not come with control of resources or decision-making, or when the results speak to changes required at an organizational level above one's current role? In considering the student's question, I was reminded of literature from the field of evaluation, particularly theories that pertain to the use of evaluation results. Evaluators are often external to the program or organization being evaluated. As such, they face analogous challenges related to organizational locus and control. In this post, I want to expand upon my answer to the graduate student and leverage some ideas from evaluation theory to provide guidance on how to promote the use of assessment results.

A key concept within the literature on use is participatory evaluation. Broadly described, participatory evaluation seeks to involve stakeholders in multiple aspects of the evaluation process, with the belief that this will ultimately increase the likelihood that evaluation results will be used. This same principle can be applied within the field of student affairs assessment as a means to promote use and help encourage organizational units to close the loop. Designing and implementing a particular assessment is often viewed as the job of one staff member. However, this approach can hinder the degree to which assessment results might be used. Borrowing from the ideas of participatory evaluation, one can involve others within the organization in designing and implementing assessment. Below are some opportunities to structure assessment processes in ways that are participatory.

Defining, reaffirming or revising outcomes to be assessed
Whether the program being assessed already has outcomes or is entirely new, involving others in a discussion of its desired objectives and outcomes can create a shared commitment to the program's impact and importance. If there is a shared commitment to the program's value, staff will be invested in its success. When it comes to using assessment results and closing the assessment loop, this shared investment helps set the stage for implementing changes that address gaps in program or service effectiveness.

Determining the questions to be answered
Often the question guiding an assessment is pretty straightforward: Is the intended outcome being achieved? However, for most assessments there are additional questions that might be included: Who is participating and who is not? What is the level of satisfaction with the program or service being assessed? How many students are benefiting from participation? What are the barriers or supports for participation? Including staff across one's unit or department in conversations about which questions should be incorporated into the assessment will increase the likelihood that staff will find the results meaningful and actionable.

Deciding on assessment methods and data sources
The use of assessment results can be helped or hindered by the degree to which stakeholders find the assessment to yield credible evidence. This may seem like a pretty straightforward idea. However, what counts as credible evidence is almost always contextual, including the context of the individual knower. If you design an assessment alone, you will likely design an assessment that provides credible evidence for you. However, if you want to promote use, you need to design an assessment that will produce evidence that is credible to the actual users of the assessment results. In most cases, the actual users will be those who are in a position to make change based on the results. If that isn't you, then you need to include those individuals in the discussion of assessment methods, approaches, and data sources. Some individuals are motivated to make change by the individual story of one student; others are motivated by the numbers from a quantitative assessment. We need to incorporate these preferences on the front end of assessment design to deliver credible evidence and promote use.

Interpreting results
Interpreting results is the process of making meaning of your analyzed data, and it is often carried out in a solitary fashion. While data analysis may lend itself to being an individual task, opening up the process of interpretation to a larger group of individuals in your organization has several advantages. Making meaning is enhanced by bringing in multiple perspectives and lived experiences. Interpretations reached through a participatory process will be more inclusive and can produce insights beyond what one might garner through an individual process. This is also a way in which a participatory approach to assessment can support goals of equity and social justice. Furthermore, arriving at a collective interpretation increases the likelihood that more individuals across the organization will endorse that interpretation, in turn making it more likely that there will be collective agreement that identified weaknesses or gaps need to be addressed.

Generating and prioritizing recommendations
As with interpreting results, generating and prioritizing recommendations collectively can foster buy-in and shared commitment to their implementation. Generating recommendations can be a more nuanced process than it sometimes appears. Depending on the scope of the assessment project, recommendations may have implications that affect other programs or services, require realigning resources, or necessitate reallocating staff time. Whether these changes are contained within a unit the assessor directly leads or extend to staff and programs housed elsewhere in the organization, a shared staff commitment to the merit and value of the recommendations can help support their implementation.

My answer to the graduate student was to make assessment as participatory as possible. But I don't think this principle is limited to the situation the graduate student described. The principles of participatory evaluation can be applied in many circumstances in student affairs assessment, and their use can promote consensus building, buy-in, and collective meaning and value, all of which can help support the use of assessment results!

Have you involved others in your assessment in a participatory manner? We would love to hear your experiences, thoughts or ideas in the comment section!

Resources

If you are interested in learning more about participatory evaluation and theories that address the use of evaluation results, check out:

  • Alkin, M. C. (Ed.). (2013). Evaluation roots: A wider perspective of theorists' views and influences. Thousand Oaks, CA: SAGE.

For a simple method of shared data interpretation, take a look at this article from a recent issue of New Directions for Evaluation:

  • Pankaj, V., & Emery, A. K. (2016). Data placemats: A facilitative technique designed to enhance stakeholder understanding of data. New Directions for Evaluation, 149, 81-93.

Daniel Doerr, University of Connecticut 
