What Motivates You to Engage in Assessment and does it Matter?

I was recently in a meeting when the subject of faculty and staff attitudes toward assessment came up. The discussion evolved into a debate: is it important that university employees value, or embrace, assessment, or is it enough that they have a basic understanding of assessment, know how to incorporate it into their work, and meet identified assessment-related deliverables (e.g., identifying annual goals for an office; creating and implementing an assessment plan)? Opinions on this question varied, and the meeting ended without a clear consensus.

Since that conversation took place, I have been thinking more about attitudes regarding assessment. Specifically, as someone responsible for coordinating divisional assessment efforts, I am happy when I hear that each department has an identified assessment plan and is regularly engaged in evaluating its work. But does it really matter how staff feel about assessment? I’ll be the first to admit there are some tasks I complete as part of my job that I don’t enjoy. Is it okay if assessment falls into that category for people? Is there any harm in viewing assessment as just another task to complete?

These musings eventually led me to consider this question as part of a larger query about the role of motivation in our work. In his book Drive, Daniel Pink examines motivation and argues that the ‘carrot and stick’ approach (i.e., rewarding behavior we view as positive and punishing what we don’t like) can certainly be effective for some tasks, but that it can be “devastating” for others. Specifically, he argues that tasks involving new or unusual situations, especially those lending themselves to creativity, are not well-suited to being driven by external motivators. Rather, intrinsic (i.e., internal) motivation provides the context that best serves these types of tasks.

If we apply the concept of motivation to the original debate about attitudes regarding assessment, we could say there are some people who are extrinsically motivated to engage in assessment. These individuals may view assessment-related activities as part of their jobs and complete them as something they “have to do.” Assessment, for the extrinsically motivated, is another item to check off a list – for example, you just finished training resident assistants on how to respond to students in crisis (check), and then you administer a quick survey to see if your learning outcomes were achieved (check). You can then use that data to support the efficacy of your program or to adjust the training for future iterations. This doesn’t seem problematic on the surface; however, I wonder: if the same person in this example were intrinsically motivated to engage in assessment, would this scenario look different and, if so, in what way(s)?

At the time of the conversation that began my musings on this topic, I thought it didn’t matter what motivated people to engage in assessment. At a basic level, I channeled Nike’s “Just Do It” mantra and was pleased to witness assessment efforts happening. But now I think there could be something lost in this approach. Assessment isn’t a routine, plug-in-the-data type of enterprise. It involves strategic thinking, problem solving, and creativity. It also involves a sense of purpose – of recognizing the bigger picture and the ultimate goal of our work. Can we achieve this from a purely extrinsically motivated approach? I am thinking carrots and sticks might not be enough. What do you think?


Melinda Stoops, Boston College 

Like Whitney, I find myself in the middle, but then, I almost always opt for a "both/and" approach instead of an "either/or" one. So, folx who know me probably are not surprised. HA!

For staff who view assessment as some add-on, after-the-fact, bureaucratic requirement layered onto their "actual" work, listening skills have been helpful for me. Exploring with them their reasons for doing the work they do, and how they know that their hard work is having the desired impact, usually offers me (and them) ideas about intrinsic motivations for the work. When sharing my own intrinsic motivation, I speak to my value of fidelity: keeping promises. That is, when we offer a program or service to students, we do so because it should meet some theoretically or empirically identified need (or perhaps a legal or bureaucratic requirement, which I hope is based on an actual need). We implicitly tell students that if they participate, this program or service will help them meet that need--such as developing a skill or competency. How do we know that it effectively did so? Hint: The answer is assessment. When we have evidence that students who participate accomplish the outcome and that the outcome fosters the desired impact, then we know we have kept our promise to the students. Otherwise, we're just guessing or relying too much on anecdotal evidence. That's my own intrinsic motivator, and by sharing it with my coworkers, I hope that they will understand what I am listening for and what I am hoping to help them find.

But despite my best efforts, some folx are not ready to embrace assessment--even if they can identify an intrinsic motivator. For them, I am content for them to "just do it," and I hold onto hope and optimism that someday, they too will get it and feel that intrinsic motivation. That they are fulfilling the requirement is sufficient even if not quite satisfying for me professionally.

To the note of "data-driven" decisions, I prefer "data-informed" decisions. Data--just like any other human endeavor--will always be imperfect and/or incomplete, so I encourage staff to consider data (and other evidence) critically and thoughtfully, alongside theory, ethics, professional standards, laws, and insights from their own experiences of the world (e.g., identities, backgrounds, emotions, and intuitions), to inform their decisions.

Great blog, Melinda! Thank you for sharing your experience with the question and your reflections on it.

Ooow, great question. I find myself somewhere in the middle. For me, it’s important that staff fully embrace data-driven decision making... in essence, an outcome of assessment. I have found that the verbiage of “data-driven decision making” really resonates with the staff in my division. How do you make data-driven decisions, though? Through assessment. So, meeting them where they are makes assessment practical to them and something they can embrace. I find that sometimes the tone of “embracing assessment” fosters a narrow view of assessment amongst my staff and makes them feel like they need to assess all things at all times, which leaves them overwhelmed and not motivated to embrace assessment. Embracing data-driven decision making makes assessment a natural part of the conversation, a more intrinsic driver, and helps them think beyond surveys when it comes to assessment. For example, it reveals that they are evaluating enrollment dashboards each week to alter recruitment strategy, analyzing attendance data to see who isn’t participating in order to develop marketing efforts, creating common student employee training programs because they are all working toward the same learning outcomes, etc. Nice thought-provoking post!

Thanks, Whitney. I really like the phrase, "data-driven decision making".
