
Barriers to Intentional Assessment

Recent posts on the SAAL blog have examined the importance of intentionality in the assessment cycle. The posts hit on one of the most significant hindrances to good assessment practice in student affairs work: assessment initiatives require intentionality to be effective. Unfortunately, assessment initiatives often feel like they lack intentionality - why is this the case?

There was a time when creating a culture of assessment was the challenge for those leading assessment efforts in student affairs. At this point in the development of the field, however, a critical mass of professionals in student affairs recognize that assessment is central to our work. If anything, the pendulum may have swung too far in the other direction. The real challenge now is not creating a culture of assessment, but rather helping other professionals use assessment thoughtfully.

Another way of thinking about it - at what point in the assessment cycle does intentionality become more difficult? In my experience, the most serious challenges become evident in the final phases of the assessment cycle; namely, the stages where we analyze our data and use results for improvement. On its surface, this is counterintuitive: why, after deciding to enter the assessment cycle, designing outcomes, and crafting learning experiences, would we not bring the same intentionality to analyzing and using our data?

  1. Over-collection: With the proliferation of data, we often over-collect, increasing the complexity of data analysis. For example, if you can use a smartphone, you likely already possess the technological skills to build a survey to collect student feedback. On the other hand, possessing the technological skills doesn’t necessarily lead to good survey design. The development of online survey platforms has been extraordinary for reducing the amount of work required to collect information. But the unintended consequence of simplifying data collection is that it frequently encourages the over-collection of information.

  2. Costs of Clarity: Most universities have well-developed student databases that can be mined for patterns and used to inform questions of involvement and predict student success. Again, there was a point in time when the real challenge was acquiring such information. Today, the challenge is less about collecting information and more about distilling the information we have on hand into something that can be used. The problem with over-collecting data is that it raises the costs of clarity - it takes more time, effort, and knowledge to make sense of the story our data are telling.

  3. Data Choice: Data that is over-collected is often poor. For example, it is widely known that individuals have difficulty accurately perceiving their own learning, and yet we continue to ask students whether they feel they have learned. It's fitting to compare this sort of data to junk food. The benefits of junk food are that it is cheap, delicious, and widely available; the problem is that the fried corn, salt, and overly-sugared fare is not particularly nourishing. If you're really hungry, eating junk food can sustain you until healthier options are available. Self-reflections and surveys of learning are similar: they're cheap to implement, but they don't offer much nourishment, and they are best reserved for when no other data are available. When deciding what to eat, it is often more difficult and more expensive to consistently make healthy nutrition decisions, but our bodies are happier when we do. Likewise, it is more difficult to be intentional with our data collection and analysis, but our ability to understand and use assessment results increases markedly when we are deliberate in our choice of data.

So, what is the solution?

I believe the solution starts with helping practitioners develop new habits of assessment design. Neuroscientists describe habits as routine behaviors that are unconscious and result from frequent repetition. As anyone who has tried to break one knows, bad habits can override what we know is good for us. In assessment practice, we must help those who aren't directly responsible for assessment and planning form new habits of design.

The new habits should focus on freeing up more time for analysis and for using assessment projects for improvement. The quickest pathway to more time, perhaps, is to try to gain insight using data we already have on hand. Is there information in our institution’s data warehouse that can provide insight into the question we’re looking to assess? I’ve heard numerous stories from colleagues in institutional research about large-scale assessments - NSSE, senior exit evaluations, and the like - that are never analyzed or shared. We should tap into these stores first, using existing data whenever possible.

Another new habit is to encourage assessment projects that do not use surveys. It may be that a survey is the most appropriate way to obtain the information we are seeking. But we should push our colleagues to consider other methods first; if a survey truly is the best fit, thinking critically across the full range of assessment strategies will still lead us to it. Again - designing a good survey is difficult, time-consuming work. Analyzing survey data appropriately can be even more involved; entire volumes have been written about how to do it well. In addition, focus groups, interviews, and counting outputs can frequently fulfill our assessment needs, and are often less time consuming than survey methods. In short, before using a survey for assessment we should ask whether it is truly essential, or whether another method could obtain the needed information more efficiently.

The most important habit, though, is to design simpler surveys. One well-known advantage of a good research design is that subsequent analyses are less complicated. For example, analyzing data from a randomized experiment is usually more straightforward than analyzing observational data, which often requires more complex statistical techniques. The same is true in our assessment work involving surveys. Surveys should be short and to the point: specific in their aim, and directly relevant to the concepts we are looking to understand. Unfortunately, it is not uncommon for surveys to be indirect, littered with jargon, and full of questions that are beyond the respondent’s knowledge. The quickest way to improve surveys - if we must use them - is to ensure that they are simple and clear. The more a survey rambles and wanders, the more trouble it presents for both the respondents and the later analyses.

As those in charge of assessment at our institutions, it is our responsibility to give our colleagues permission to focus on their most significant work. In higher education, there is a certain amount of errand work that we all must do: meetings, bureaucracy, solving problems, and the like. Unfortunately, assessment is widely regarded as one of these many errands. If we want the work of assessment to be meaningful, we have to champion habits that put the quality of assessment ahead of the quantity of activity. Increasing the quality of our work creates the space to conduct more rigorous data analysis, and allows us to focus more time and energy on using our insights to drive improvement.


Andrew Hester, University of North Carolina Wilmington
