Program Theory and Assessment: Good Assessment Requires Solid Programs

The Scenario

As a student affairs assessment consultant, I've had clients walk into a meeting expecting me to reassure them that their programs are good and that everything else is just an excuse for why students couldn't reach the intended outcomes. In moments like these, I reference the assessment cycle we use here at James Madison University (shown below). As an exercise to help clients reflect on their assessment process, I ask them to walk me through each step of the cycle. In doing so, clients have an opportunity to see where their program assessment went awry.


[Image: the JMU Student Affairs assessment cycle (https://www.jmu.edu/_images/studentaffairs/saac/assessment-cycle.png)]

In scenarios like this, I have yet to have a client make it past the second step of the cycle (Creating and Mapping Programs to Outcomes). When discussing step 2, I give clients two prompts: (1) tell me about your program, and (2) tell me how the program allows students to achieve the intended outcomes. Sometimes the client explains that they inherited the program from their predecessor, or that the program was the result of a brainstorming session pulled together in five hours. No matter the reason, clients just cannot tell me why their program is supposed to meet the intended outcomes. Cue my internal side-eye and deep sigh, along with another tally mark on my mental whiteboard labeled, “number of SA professionals who didn’t think about using theory or research to create or revise a program.”

Infusing Program Theory

In my experience (and maybe yours too), it’s difficult to find student affairs professionals who use program theory when constructing programs. Program theory is essentially all of the theory and research used to develop an intervention that produces the intended outcomes. For those of us in a higher education setting, this may include (but is certainly not limited to) student development, psychology, sociology, cognition and learning theories, and other discipline-specific research. Discipline-specific research relates directly to the type of program you’re developing. For example, leadership theory is important when constructing a leadership-based program, and research on restorative justice may be useful when creating student conduct interventions.

The process of infusing theory into program development can easily be visualized in the form of a logic model. Logic models highlight the outcomes we want to accomplish, as well as the activities that will allow us to meet them. Pope, Finney, and Bare (2019) describe how to employ logic models to promote program theory during program development. Something to note is that logic models can and will look different depending on the outcome and the research used to inform programming. For example, we all know that retention is an issue in higher education, and depending on how one views retention, different programming may take place. If one sees retention as an academic issue, one might construct a logic model like the one Aaren Bare (James Madison University) used to help students become more confident in their academic performance. If one sees retention as an issue stemming from students’ sense of belonging, a logic model similar to the one made by Sam Gonzalez (James Madison University) may help campuses improve their retention rates.

When we as student affairs practitioners take the time to use theory and research in creating or revising programs, the outcomes those programs are intended to meet become MUCH easier to measure. Why is that? What does creating logical programs have to do with good assessment?

When programs are created using program theory and represented in the form of a logic model, it is easier to locate the points at which it is best to assess students on the program’s outcomes. Two statements I hear many student affairs professionals make are “we can only assess students right after a program” and “let’s just assess students on the long-term/distal outcome.” Neither statement is necessarily true; finding the right time(s) and place(s) to assess students depends entirely on how your program is structured to meet the desired outcomes.

Better Assessment

Many times, we want to directly assess students on the distal or long-term outcome because that’s the behavior we want to change, but that may not always be feasible. Cue the program theory used to define the logic behind our program: if we can explain the relationships between the program’s activities and our student learning outcomes, we can assess students on intermediate outcomes instead. If students are meeting the intermediate outcome(s), then by program theory we can say that students will be more likely to achieve the distal outcome. To explain this further, let me provide an example:

Many health services offices have programming around sexually transmitted infections (STIs) to help reduce STI transmission in the student population. If the distal outcome is for students to have safer sex, that outcome is not directly measurable; I don’t think it’s reasonable (and it’s also a bit odd) to ask or observe students directly about their sexual behaviors. Instead, let’s assess students on one or more intermediate outcomes that research says affect students’ behaviors surrounding safe sex. If we know that students who perceive condoms as beneficial and who have access to condoms are more likely to practice safe sex than those who don’t, then it is a lot easier to assess students on their perceptions of condoms and on whether they think condoms are accessible. The results from assessing students on those two things would give us a LOT of information about whether students are or aren’t practicing safer sex. Why? Because theory gave us a connection between the perceived benefits of condoms, condom access, and a student’s behavior regarding safer sex.

There are many other examples that highlight why assessing the distal outcome directly isn’t the best choice. At the same time, there are many examples where assessing the distal outcome is the best and easiest option! In conclusion, using program theory to construct programs and interventions is a critical part of doing assessment. If there isn’t any research underlying why your program should work, then assessing your program is a nearly impossible task.

Program theory makes programs better, and better programs lead to better assessment practices. If you want better assessment practices, then start by making better programs.

Are you using program theory on your campus? If so, how is it being implemented and what were the impacts?

References

  • Pope, A., Finney, S., & Bare, A. (2019). The essential role of program theory: Fostering theory-driven practice and high-quality outcomes assessment in student affairs. Research & Practice in Assessment, 14(Summer 2019), 5-17.

Chris Patterson, James Madison University 
