
Federated Assessment


In higher education, we have a number of forums where we are able to share our ideas and best practices and discuss the day-to-day challenges we encounter.  We set up working groups and committees with our colleagues, we have professional development organizations and listservs where we share resources, and we have national conferences where we present findings from our major projects and discuss new directions for the field. 

Despite all of this collaboration, though, some problems persist.  These are the challenges that, despite our best efforts, we need to revisit time and time again.  In assessment circles, one issue seems to persist more than any other.  Regardless of the setting or forum, when we talk about the challenges of assessment we so often turn to one issue in particular.  The issue of:

Data Silos

The term “data silos” is so frequently used and discussed that it is often reduced to a cliché.  With that said, we continue to discuss it because it presents a real challenge to our work.  When the data that is collected by the various units, departments, and divisions at our respective institutions remains isolated, it leads to duplicated efforts and prevents us from drawing connections that allow us to see a complete view of the student experience. 

Naturally, in order to break down data silos we need to encourage collaboration and work toward integrating our different data systems.  This is something we continue to work toward on our campuses as we meet with colleagues and examine the overall data infrastructure.  Recently, though, I began to think about this problem in light of our typical assessment policies and practices, and I realized something.  Many of our assessment practices focus on each unit or office in isolation, and this focus runs counter to the goal of integration and collaboration.  As a result, I came to a realization.

Perhaps our data silos persist because our assessment practices encourage this isolation.

When I took stock of our assessment practices, I realized that we relied heavily on a system of “federated assessment.”  The term “federated assessment” means that our approach to evaluating the impact we have on each student considers each program, each unit, and each department separately.   When we implement our program reviews, we have a cycle where we work with each unit separately on their self-studies and external reviews. When we design strategic plans and write learning outcomes we meet with individual units to advise them and help meet their needs.  When we evaluate the impact of our programs, we look at the impact of each individual program in isolation. 

This is not to say that this approach does not provide us with a great deal of value.  Examining individual units and programs allows us to examine our work in more granular detail and reflect on potential areas for improvement.  The question remains, though: how do we also structure our assessment processes in a way that encourages collaboration?  How do we ensure that these practices are aimed at breaking down silos rather than encouraging them to exist?

What are some of the strategies we can use as assessment professionals?

  1. Establishing clear and consistent definitions across units.    

One of the biggest barriers to collaboration and data integration is that each unit or functional area uses different terms and definitions, even if they are referring to the same thing.  This makes it difficult to join data across our different systems or even identify instances when we are duplicating our data collection efforts.  As assessment professionals, we have the opportunity to take a broader view of the data landscape within our division or our institution as a whole.  By working toward implementing consistent definitions across units and ensuring that these definitions are clear and easily accessible, we reduce the barriers to collaboration.  We also make it easier to explore data that is collected across different units. 
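As a rough sketch of what a shared data dictionary might look like in practice, the snippet below maps each unit's local terminology onto one canonical definition so that records from different systems can be categorized consistently. All of the term names here are hypothetical examples, not definitions from any actual institution.

```python
# A minimal sketch of a shared data dictionary: each canonical term is
# mapped to the local variants that individual units use for it.
# All term names below are hypothetical.

CANONICAL_TERMS = {
    "incident": {"incident", "occurrence", "event_report"},
    "first_year_student": {"freshman", "first_year", "fy_student"},
}

# Build a reverse lookup from any local variant to the canonical term.
LOOKUP = {
    variant: canonical
    for canonical, variants in CANONICAL_TERMS.items()
    for variant in variants
}

def normalize(term: str) -> str:
    """Translate a unit's local term to the shared definition."""
    return LOOKUP.get(term.lower(), term.lower())

# Records from two different units can now be labeled the same way.
print(normalize("Freshman"))    # first_year_student
print(normalize("occurrence"))  # incident
```

Publishing a dictionary like this in a central, accessible location is what makes the definitions usable: units can keep their local interfaces while still contributing data that joins cleanly.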

  2. Ensuring that the tools we use allow for flexible data extraction. 

Oftentimes, within a division of student affairs, we rely on a number of disparate systems to collect data across our various units.  We have systems that track event attendance, systems that track career services appointments, systems that track student conduct, systems that track housing contracts, and everything in between.  Having these different systems in place can be valuable because each one provides students and staff with a user interface that is designed specifically with that area in mind.  With that said, we need to be cognizant of whether these systems allow us to flexibly extract and join the data we collect with data from our other systems. Do they provide us with an API or direct SQL access?  Or do they only allow us to generate static reports or pull data manually?  These are important considerations if we want to integrate our data systems, break down silos, and devise scalable solutions. 
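To illustrate why direct SQL access matters, the sketch below joins records from two hypothetical systems on a shared student identifier. The table and column names are invented for the example; in practice these rows would arrive via each system's API or database rather than hard-coded lists.

```python
# A minimal sketch of joining data from two separate systems on a
# shared student ID, using an in-memory SQLite database as a stand-in.
# Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE event_attendance (student_id TEXT, event TEXT);
    CREATE TABLE career_appointments (student_id TEXT, appt_type TEXT);
""")
conn.executemany("INSERT INTO event_attendance VALUES (?, ?)",
                 [("S1", "Welcome Week"), ("S2", "Career Fair")])
conn.executemany("INSERT INTO career_appointments VALUES (?, ?)",
                 [("S1", "Resume Review")])

# With flexible extraction, one query can span both units' data:
# here, students who both attended an event and met with career services.
rows = conn.execute("""
    SELECT a.student_id, a.event, c.appt_type
    FROM event_attendance a
    JOIN career_appointments c USING (student_id)
""").fetchall()
print(rows)
```

A system that only exports static PDF reports makes this kind of cross-unit query impossible without tedious manual re-entry, which is exactly the barrier to integration the section above describes.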

  3. Making integration and collaboration a part of the evaluation process. 

In a system of federated assessment, we ask our units to demonstrate the impact that their services or their specific programs have on student learning and student success.  While this is important, it ignores the fact that our students may encounter a wealth of services and participate in a wide variety of programs.  It also ignores the fact that there are numerous factors that contribute to student success.  Is this individual program or service impacting student success, or is it simply the case that students who are more likely to be successful are self-selecting into the program?  This question is difficult to answer if we are merely examining change over time rather than designing our assessment in a way that provides us with better evidence of a causal relationship. 

As a result, we should also incorporate integration and collaboration efforts into the evaluation process.  It may be difficult to demonstrate statistically that an individual service or program has an impact on student success, but there are other questions we can ask.  Does the service or program reach students that we are not reaching in other ways?  Does it provide them with an opportunity that they could not get elsewhere on campus?  Is the data that is being collected by the unit valuable to other units?  Does it contribute to the overall data infrastructure?  Does it contribute to the broader goals of the division or the institution? 

In short, we should avoid evaluating the work of our units solely on the basis of the individual impact that they can demonstrate.   Instead, we should also consider the contributions they make to collaborative efforts and the broader data infrastructure.  This encourages them to look outward in addition to reflecting on their own work. 

When we talk about the challenges that we face as assessment professionals, the issue of data silos naturally rises to the top of the list.  While we can identify the problem, finding a solution can be much more difficult.  As a result, it is important that we look at our own assessment policies and practices to identify whether we have implemented a structure that encourages these silos to persist. 

In the comment section below, let us know about your own experiences.  Do you have policies in place that encourage your colleagues to break down data silos?  Do you have any policies that may inadvertently encourage them to persist?  What other ways can we facilitate the cooperation necessary to move our work forward?   


Eric Walsh, University at Buffalo


Thank you for this blog, Eric. I especially appreciated the recommendation about consistent definitions. In a recent, more intentional effort in my division to track and share information about the different types of incidents that occur in our facilities, my assessment committee discovered that we had as many definitions of "incident," "injury," and "accident" as we had facilities. OSHA offers definitions that we attempted to adopt, but the department directors pushed back and indicated that they had developed policies and procedures based upon their own idiosyncratic definitions, so they would not change to use the definitions we were recommending. My supervisor at the time agreed with them, and while I was sympathetic to their position, I believe that we can and should use consistent definitions. Any tips on how to persuade staff to adopt a common definition when each has their own and much time and energy committed to operationalizing it?


Great post, Eric. When I saw the title, I was intrigued by the term, "federated assessment" and was pleased to say I could relate to this topic in multiple ways.

Although this is not necessarily a solution to the siloing of data, one thing we have done in Student Affairs at my institution this past year is to create a document that shows the assessment cycle within Student Affairs for a full year. Each month lists the specific large-scale assessment efforts each department has scheduled, which helps us in two ways: 1) it shows us what data is being collected so we don't duplicate efforts and, perhaps, we may be able to utilize some of that data, and 2) it helps minimize over-surveying by letting us know when students are already being surveyed in Student Affairs.

On the reverse of the assessment cycle document, we have a calendar of large-scale surveys (e.g., NSSE, CIRP) that are scheduled across campus, so we again know what data is being collected and when.

This process reminds us that we are not alone in our assessment efforts and keeps other areas' efforts on our radars. Hopefully this will pay off in fewer silos.


I have found that a division-wide assessment committee, and even one that includes representatives of other divisions, is a great way to break down silos between departments and get committee members (and others) to think about the greater good... that is, student learning and development that occurs - or can occur - as a result of our collective efforts.


