What are we supposed to be assessing? What are we learning from our assessment efforts? How do we know if we are achieving our desired outcomes? How do we make sense of all the assessment data we do have? Whether you are assessing your own program, enhancing assessment within your unit, or coordinating assessment across a division of student affairs, these types of big-picture assessment questions can surface frequently. Such questions can prove vexing and can arise from a number of sources. Answering them often requires stepping back and trying to understand assessment efforts within the broader context of your work.
One process that can be particularly useful for exploring these types of questions is to develop a Logic Model for your program. Not only does constructing a Logic Model give staff an opportunity to think deeply about their programs and intended outcomes, but it can also provide a way of categorizing assessment efforts and aligning them more intentionally.
What is a Logic Model?
Logic Models come in many shapes and sizes and can serve many purposes. The most straightforward, and the most immediately meaningful to staff, is what is called a Program Logic Model. A Program Logic Model articulates all the elements of a program, from resources through activities to outcomes, and organizes them graphically in a way that makes the logic connecting the parts explicit. Here is a visual description of a typical Program Logic Model.
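In its most common form, drawn from the W.K. Kellogg Foundation guide listed under Resources below, the model reads left to right across five columns:

Resources/Inputs → Activities → Outputs → Outcomes → Impact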
As you can see, the model breaks a program down into two broad categories: your planned work and your intended results. In doing so, it lays out the logic of the program as a series of hypotheses: if we have these resources, we can engage in these activities; if we engage in these activities, we can deliver these outputs; and so on. To provide a more concrete sense of what a Logic Model might look like, below is an overly simplified example for a hypothetical alcohol and other drug (AOD) program.
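One version of such a sketch might look like this (the specific entries are illustrative, not drawn from an actual program):

- Resources/Inputs: professional staff, funding, institutional data on student drinking
- Activities: educational workshops on the risks associated with certain patterns of drinking
- Outputs: number of workshops delivered; number of students reached
- Outcomes: students can articulate the risks of high-risk drinking; students adopt lower-risk behaviors
- Impact: reduced alcohol-related harm on campus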
Benefits of Developing a Logic Model
Logic Models provide a structure to guide staff in thinking about and assessing their programs. Their use may surface unarticulated assumptions and goals; help staff think more intentionally about their intended programmatic outcomes; and help organize, connect, and identify gaps in assessment and evaluation efforts.
Articulating programmatic outcomes
Much attention has been given to the importance of articulating learning outcomes for programs, and rightly so. Learning outcomes are essential to demonstrating the immediate impact of programmatic interventions. However, they can sometimes feel reductionist because they are often instrumental, driving the attainment of outcomes that are more accurately described as the core outcomes of the intervention. Consider the hypothetical AOD program described above. If students walked away from educational workshops able to articulate the risks associated with certain patterns of drinking but never changed their behaviors accordingly, would staff within this program feel that their interventions were truly successful? Developing a Logic Model can provide a structured way of connecting immediate student learning to other desired downstream outcomes.
Organizing and Connecting Assessment Activities
Logic Models also help programs identify and organize assessment activities into a more coherent whole. Looking across a Logic Model allows one to see how different types of assessment connect to provide a more holistic understanding of program effectiveness. Program or unit reviews speak well to the first two or three columns of the Logic Model, and using an external set of standards such as the CAS Standards provides an evaluative framework for identifying ways in which resources, practices, and organizational features may be impeding the achievement of desired student-level outcomes. Focusing on the third and fourth columns leads to familiar types of assessment: tracking, usage, and learning outcome assessment in particular. Seen in the context of the Logic Model, these assessments provide important information about whether immediate outputs and outcomes are being achieved. Finally, focusing on the fourth and fifth columns provides opportunities to assess the mid- and long-term impacts of the program. This framing can be especially useful in prompting units to think intentionally about how such outcomes could be assessed and to design data collection in ways that support that assessment.
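Put schematically, the mapping described above looks something like this:

- Columns 1–3 (resources, activities, outputs): program and unit reviews; CAS Standards self-studies
- Columns 3–4 (outputs, short-term outcomes): tracking and usage data; learning outcome assessment
- Columns 4–5 (outcomes, impact): assessment of mid- and long-term programmatic impacts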
Supporting a Culture of Assessment and Evaluative Thinking
Another benefit of constructing a Logic Model is that it allows evaluation and assessment to feel less personal and threatening, and more focused on learning. A Logic Model is meant to articulate the logic of, and hypotheses behind, the program. In this sense, it combines the experiential knowledge of the staff who constructed the model with theory and literature to present an overall account of why and how the program should work. In turn, when evaluative questions arise, the focus of that evaluation can rest on the logic of the model rather than on the performance of individuals and their work.
The use of Logic Models also supports deeper organizational learning by expanding the conversation around assessment results into the realm of evaluative thinking, which may include consideration of immediate programmatic changes, reevaluation of the literature and theory guiding the interventions, and reflection on what factors beyond the program itself may be moderating the desired outcomes. When focusing on the assessment of short-term outcomes, it is easy to assume that any underperformance can be addressed by tweaks and changes to the intervention itself. Placing the intervention within the context of a Program Logic Model can expand this thinking. Failure to achieve a desired outcome can stem from a breakdown in any of the activities or processes within the columns, but also from the logical leaps from one column to the next, and these breakdowns can have a host of sources. This opens up space for staff to consider the broader environmental and institutional context of their work and what might support or impede success.
Whether you are trying to articulate long-term programmatic outcomes, understand the theory behind your programs, or organize your assessment efforts, developing a Program Logic Model can be a helpful exercise that supports intentional assessment and fosters a culture of assessment, evaluation, and inquisitive thinking.
Have you used Logic Models in your practice? Are there programs on your campus that might benefit from the process of developing a Logic Model? Please share your thoughts and comments below!
Resources
For a more substantial exploration of Logic Models and how they support program development, assessment, and evaluation, see both:
- The W.K. Kellogg Foundation Logic Model Development Guide
- The CDC’s Program Performance and Evaluation Office Logic Model page
For a variety of articles that address evaluative thinking see:
- New Directions for Evaluation: Evaluative Thinking, Volume 2018, Issue 158
Daniel Doerr, University of Connecticut