
Much Ado About Dashboards


The first assessment project that I ever worked on took me down a rabbit hole.  It was a major survey project that asked our first year students about their experiences during their first semester on campus.  Not only did we wind up with a good response rate, but we also had a number of dedicated partners on campus that were willing to help us out.  Our colleagues in enrollment management, financial aid, institutional research, residential life, and student involvement all provided us with supplemental data to give us a more complete view of our students.  After that, all it took was a few quick lines of code and all the raw data was compiled into one sprawling dataset.

So we went back to our stakeholders and partners and asked them what they wanted to learn from the data.  We got a few suggestions, but they all followed the same theme. I’m paraphrasing, but the quote that stands out was:


What we really need are some dashboards and interactive visualizations so we can use the data for decision-making.
 

This was not the first time I had received this type of request, and I knew it certainly would not be the last.  The trouble was that there were hundreds of variables and an endless number of models, comparisons, and cross tabs that we could look at.  Sure, we could load all of the data into a dashboard and let people filter through the different survey questions and demographic variables, but then what?  The more I thought about it, though, the more I realized that the request reminded me of a different quote from a theatre class in undergrad:


There’s man all over for you, blaming on his boots the faults of his feet.

–Vladimir in “Waiting for Godot”
 

The problem was not with the tools themselves.  Data visualization tools are a tremendous resource for assessment professionals looking to explore data or disseminate their results.  The problem was that we were looking for a data visualization tool to solve a problem it was not designed to solve.  That experience led me to think more systematically about these tools: what they do well, what they cannot accomplish, the potential problems that arise when we simply throw all our data into a dashboard and proclaim the project finished, and the broader role we play as assessment professionals and analysts in pointing our colleagues toward sound insights and valid conclusions.  The list below is by no means exhaustive, but these points have helped me frame my thinking about the role that dashboards and data visualization tools play in our assessment work.


What Dashboards Do Well:

  • Update results in real time:

    • Many dashboarding tools allow an analyst to link directly to a database or make repeated calls to an API.  For assessment professionals, this gives us the ability not only to automate some of our standard reports, but also to let the results update in real time as the data changes.  We all have key metrics that we’re asked about on a regular basis.  These tools allow us to automate the reporting process and point our stakeholders to a dashboard with the most up-to-date information rather than produce a custom report each time.

  • Allow stakeholders to explore the data:

    • A flexible dashboarding tool can provide our colleagues with a user-friendly platform that allows them to explore the data.  This reduces the barriers that may prevent some of our colleagues from exploring their data. These tools allow users to answer simple questions about their data without learning a complex statistical programming language.
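The refresh behavior behind those live, database-linked dashboards can be sketched generically.  What follows is a minimal, tool-agnostic Python sketch, not code from any particular dashboarding product: `LiveMetric`, its `fetch` callable, and the `ttl` refresh cadence are all illustrative names standing in for whatever database query or API call actually feeds the dashboard.

```python
import time


class LiveMetric:
    """Cache a dashboard metric and refetch it once it goes stale.

    A minimal sketch of the refresh pattern behind "live" dashboards:
    `fetch` stands in for whatever database query or API call supplies
    the data, and `ttl` (seconds) is an illustrative refresh cadence.
    """

    def __init__(self, fetch, ttl=300):
        self.fetch = fetch        # callable that returns fresh data
        self.ttl = ttl            # how long a cached value stays valid
        self._value = None
        self._fetched_at = 0.0

    def value(self):
        """Return the cached value, refetching when older than ttl."""
        stale = time.time() - self._fetched_at > self.ttl
        if self._value is None or stale:
            self._value = self.fetch()
            self._fetched_at = time.time()
        return self._value


# Hypothetical usage with an invented query function:
# retention = LiveMetric(lambda: query_warehouse("retention_rate"), ttl=300)
# retention.value()  # hits the data source at most once every five minutes
```

The point of the pattern is that the report is defined once and the numbers stay current on their own, which is what lets a standing dashboard replace a custom report produced on every request.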


What Dashboards Can’t Do:

  • Design your research question:

    • When we start a data collection effort, there are times that we fall into the trap of thinking about what would be “interesting” rather than clearly articulating what we want to know.  The trouble is that we run the risk of collecting data and pushing out results that do not actually speak to the outcomes that we care about. Even if we have the ability to push out a variety of different data points, we still have to have a clear sense of what we want to know.

  • Tell you what data to act on:

    • Similarly, there are times that we fall into the trap of assuming that having more data, and more flexible access to that data, will automatically lead to better insights and more informed decisions.  More data, though, cannot convince us if we don’t have a clear sense of our burden of proof.  What is our threshold for choosing one course of action over another?  What is our process for ruling out alternative explanations?  Unless we know what will convince us, there’s no way for a dashboard to deliver information we can act on.


The Possible Stumbling Blocks:

  • Overwhelming our stakeholders:

    • While dashboards give users the ability to explore the data, if we throw all the data we can at our users in this format, we run the risk of overwhelming them.  Higher education institutions have a wealth of data, but there is no reason to expect our colleagues to have time to click through hundreds of variables and filters.  In many ways, our role as assessment professionals is not only about pushing data out to our stakeholders, but also about refining what we push out and providing our colleagues with clear and simple takeaways.

  • Encouraging Flawed Inference:

    • Providing our colleagues with tools that allow them to explore the data is an admirable goal.  The downside, though, is that when we’re unable to provide the proper context, we risk decisions being made on conclusions that are not statistically valid or substantively meaningful.  For example, if we were to filter our graduation and retention rates on a series of demographic variables, we may find that students from some demographic groups are less likely to persist and graduate.  Is the difference statistically significant, or could it have arisen simply by chance?  Does the relationship hold once we control for other factors like financial need, or does it wash out of the model?  These are important considerations for assessment professionals to examine in order to provide the proper context.
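To make the "could this arise by chance" question concrete, here is a small Python sketch using invented numbers: a 60-student sample per group with 85% versus 80% retention is an assumption for illustration, not real institutional data.  A simple permutation test asks how often a gap this large would appear if group membership had nothing to do with retention:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Invented outcomes for two hypothetical groups (1 = retained, 0 = not).
group_a = [1] * 51 + [0] * 9    # 60 students, 85% retained
group_b = [1] * 48 + [0] * 12   # 60 students, 80% retained

observed_gap = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

# Permutation test: repeatedly shuffle the group labels and count how
# often a gap at least as large as the observed one appears by chance.
pooled = group_a + group_b
n_iter = 10_000
extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)
    gap = sum(pooled[:60]) / 60 - sum(pooled[60:]) / 60
    if abs(gap) >= abs(observed_gap):
        extreme += 1

p_value = extreme / n_iter
print(f"Observed gap: {observed_gap:.3f}, permutation p-value: {p_value:.3f}")
```

With numbers like these, the permutation p-value comes out large, meaning a five-point gap in samples this small could easily be noise.  A dashboard filter shows the same gap stripped of that context, which is exactly what invites over-interpretation.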

Overall, dashboards and data visualization tools are a tremendous resource for assessment professionals.  They allow us to push out data to our stakeholders more rapidly and encourage people to explore their data.  As assessment professionals, it’s important to understand the problems that these tools can solve and what problems require something other than a technological solution.  There is no way to automate the process of creating a research design and no tool that can tell you which data to act on. As assessment professionals, we have an essential role to play in pointing our colleagues to results that have clear implications and provide the proper context for how confident we should be in those results.

So let us know about your experiences below in the comments section.  Tell us about a time when you found these tools to be particularly effective or a time when you thought they were misapplied or came up short.  How do we give users enough room to explore without overwhelming them with information?


Eric Walsh, University at Buffalo 


Fantastic article, Eric! This is such a great guide to coach others to fully think through what they're seeking to solve with data, when they are making requests for flashy dashboards, massive data sets, or complex reports.

Daniel - some thoughts on tools... Our data analyst is wonderful and built our Student Affairs KPI dashboard. He uses R, Plotly, and Shiny. Here's the link to our assessment site: https://www.uaa.alaska.edu/about/student-affairs/assessment/kpi.cshtml/ And a direct link to the KPIs here: http://studentaffairs.uaa.alaska.edu/shiny/kpis/.


Kelly, thank you for the kind words!

Daniel, great question. I use R a lot because the different packages allow you to push your results out in a variety of ways (e.g. interactive images using D3, interactive web apps using Shiny and flexdashboard, plotly, etc.). R is free and open source, but it does require learning some code.

I didn’t mention any specific tools in the post because I think it’s more important to just choose a tool that encourages good practice rather than any one specific platform. For any tool we should consider:

Does it allow us to audit an analysis? So we have a clear record of the steps we took and can convey that to our colleagues.

Is it replicable? So another analyst can reproduce the same results.

Is it accurate? So changes in the data are correctly reflected in the output.

Hilary Parker has a great article that discusses these principles here:
https://peerj.com/preprints/3210/


Great article on how to direct the use of interactive dashboards.
Now, let's say you stay within the appropriate parameters: what tools or services do you use to create your interactive dashboards, and what resources are required?


Wonderful, and very timely, blog post! Thank you Eric and SAAL!


