Planning, organizing, and implementing assessment of Student Affairs programs, events, and other experiences in a purely online environment, necessitated by the shift to remote learning in Spring 2020, may at first have seemed easy and accessible. Why? No longer does the assessor have to distribute half sheets of paper with questions to participants in a large venue (and don't forget the pens). No longer is it necessary to decipher handwriting in order to get the data into Excel. No longer does one need to turn an office floor or lab into storage for surveys, consent letters, and the envelopes to stuff them into for distribution by hand.
However, the grand shift to online assessment of online activities has turned out to be more ethically complicated than initially thought. For example, a study requiring anonymity could fairly easily promise it when data were gathered on hard copy. In an online environment, true anonymity is practically impossible because IP addresses and other metadata can be logged, even though many online survey tools advertise an "anonymous" option. Protect your data: hackers today do not need to be especially sophisticated. And as research shifts toward "whole person" questions that tap into the affective domain, surveys are delving into more personal territory and must provide a safe space for respondents. The ease of access to participants also creates the following ethical contingencies:
- Problem 1: Clear, neat, and orderly survey tools are more readily available (and often free) to business and educational institutions than ever. Although convenient for the reasons stated above, creating survey questions requires thought before rolling them out to prospective participants. SurveyMonkey, Microsoft Forms, and many other tools, which can also be upgraded for a minimal charge, are available to any individual who lacks a background in research, data collection, analysis, or ethics. The ethical considerations in survey questions created with online tools can easily be overlooked and left unaddressed before thousands of respondents are sent the survey in a single click. Leading questions might go unnoticed and steer respondents toward the answers the survey creator hoped for. Double-barreled questions might confuse respondents so that they simply skip the question, or give up and stop answering the survey altogether. A convenience sample might be summarized as representative of the population. Or the word "random" might be used to describe a survey posted on social media, with no account of who actually sees and answers it. Most often these flaws are not intentional; they occur simply because the designers are not knowledgeable in research as a discipline.
Indeed, professional-appearing online surveys can be an ethical nightmare. Will the beautifully laid out questions elicit valid and reliable information? Does the survey template provide space for an introduction and consent statement? If participants do not have a stake, or do not know their stake, in the outcome of a survey, why not lie? Most of the time, online respondents never see any results or reporting of the study or inquiry, and the survey is soon forgotten. And it goes without saying that if the data reveal less than desirable results, there may be few or no checks and balances in place to guard against misrepresenting the data in a colorful, handsomely fonted template with pages that turn gracefully in one click. Thus the organization may make decisions based primarily on presentation.
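The anonymity concern raised above has a practical counterpart: even when a tool promises "anonymous" responses, exported data often still carry identifying metadata. One minimal safeguard, sketched below with hypothetical field names (real survey tools export different schemas), is to strip identifying fields before the data ever reach analysis or reporting:

```python
# Minimal sketch: remove identifying metadata from a raw survey export
# before analysis. The field names here ("ip_address", "email", etc.)
# are hypothetical examples, not any specific tool's schema.

IDENTIFYING_FIELDS = {"ip_address", "email", "respondent_id", "timestamp"}

def deidentify(responses):
    """Return copies of response records with identifying fields removed."""
    return [
        {key: value for key, value in record.items()
         if key not in IDENTIFYING_FIELDS}
        for record in responses
    ]

raw = [
    {"ip_address": "203.0.113.7", "email": "student@example.edu",
     "q1_satisfaction": 4, "q2_comments": "Helpful event"},
]
clean = deidentify(raw)
print(clean[0])  # identifying keys are gone; the answers remain
```

This is pseudonymization at best, not true anonymity; small samples or free-text comments can still re-identify respondents, which is one more reason to involve the IRB early.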
- Problem 2: Blindness to systemic equity issues in assessment and research. In higher education, research in most disciplines is still dominated by the concepts of objectivity and subjectivity, which supply the positivist archetypes that haunt many of the social sciences. As such, these disciplines respond softly to attacks on phenomenology. In a rush, researchers may forget to collect data in a way that allows it to be adequately disaggregated. They may also rush to fit a complex, multilayered process into a design that matches a predetermined and easier norm. At most institutions, simply doing "research as usual" (Franco & Hernández, 2018) contributes to systemic inequities.
What Can We Do?
Boldly speak up and raise the issues. Speak up for ethics. Expect pushback in subtle ways that keep progress at bay, and take the high road when it comes. How assessment leaders respond to friction will make the difference between educating for change and equity, or perhaps creating actual harm. Assessment leaders encourage "higher-level thinking on the basis of sensitive and respectful consideration of others" (Hamrick, Evans, & Schuh, 2002) by acting reasonably and ethically in their work. The most practical leadership is leadership by example. I have created several types of presentations on ethics for various audiences, and I promote them. Many professionals and paraprofessionals just need a nudge to recognize the need.
Timeless ethical principles work online, too. Kitchener’s (1985) five principles to consider when performing assessment in student affairs hold true even in rushed, online contexts: Respecting autonomy (no coercion); doing no harm; benefiting others (a culture of improvement, not retaliation); being just (fair treatment of individuals and resources); and being faithful (scrupulous attention to finding and reporting the truth).
Before launching any survey, ask: What is its purpose? Why now? Is there an underlying agenda? If the survey creator knows that certain outcomes will not be allowed to be reported, they can ethically decline the request to run the survey at all. Why use a survey and not, say, interviews? The following guidelines develop these principles:
- "Maintaining confidentiality is the most salient principle to be used in assessment" (Upcraft & Schuh, 1996, p. 295).
- Be honest. Respondents should know the purpose of the assessment, potential risks, who will have access to the data, and that their specific information will be protected. The researcher is responsible for including in the final report any mistakes made in the process of the assessment (Patton, 2002).
- Clarify purpose. If respondents don't know why the data are being collected or where they will go, some will intentionally enter false answers: "If the institution does not have a culture of assessment or one in which routine organizational improvement is the norm, it is possible for people…to worry. What will happen to us? Will our program be eliminated?" Units should be clear about the purposes of an assessment of any type (Schuh et al., 2009, p. 197).
- Don’t make promises lightly (Patton, 2002). Being faithful includes “the concepts of loyalty, trustfulness, and basic respect” (Upcraft & Schuh, 1996, p. 294). Be sure that participants in your survey know who will be reading their responses and to what extent their responses will be traceable back to them, from any reports written based on the survey data. Your institution’s IRB is a solid resource on these issues.
- Obtain honest information. Whom to sample should be framed by the purpose of the assessment. Sometimes it is acceptable that the respondents are only a subset, not representative of the larger population; just make that fact clear in reporting (Jagadish, 2020).
- Provide an opportunity for people to admit they either don't know or don't remember. Sometimes we want to force a choice among the provided survey responses, but "we don't want them to feel like they have to lie" (Suskie, 2009, p. 188).
- Accurately analyze data. Avoid the single-researcher approach, in which one person alone collects, analyzes, and reports on the data. Elicit multiple perspectives in all phases of survey creation, distribution, analysis, and reporting. Review the data separately, then resolve discrepancies together. Also, learn and report which individuals were not served to the extent you expected (see "Disaggregation of Data"). Is there a pattern? How can the program be revised to promote equitable effectiveness and success?
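Disaggregation, as the last guideline urges, can be as simple as summarizing the same outcome measure separately for each group. A minimal sketch, assuming a hypothetical record layout with a demographic field and a 1-5 satisfaction rating, might look like this:

```python
# Minimal sketch of disaggregating survey results by a demographic field.
# Record layout ("class_year", "rating") is a hypothetical example.
from collections import defaultdict
from statistics import mean

def disaggregate(responses, by, value):
    """Group ratings by a demographic field; report count and mean per group."""
    buckets = defaultdict(list)
    for record in responses:
        buckets[record[by]].append(record[value])
    return {group: {"n": len(vals), "mean": round(mean(vals), 2)}
            for group, vals in buckets.items()}

responses = [
    {"class_year": "first-year", "rating": 5},
    {"class_year": "first-year", "rating": 3},
    {"class_year": "transfer", "rating": 2},
    {"class_year": "transfer", "rating": 3},
]
summary = disaggregate(responses, by="class_year", value="rating")
```

Reporting the per-group `n` alongside each mean matters: a gap between groups built on a handful of responses calls for caution, not conclusions, and tiny subgroups may also need suppression to protect confidentiality.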
Assessment in an online environment carries a great deal of responsibility to make the management and reporting of data as ethically sound as its presentation is aesthetically polished. "Bad or erroneous data leads to bad decisions and can create great harm" (Jagadish, 2020). Student affairs assessment leaders promote a trusting environment when we make our processes just as careful, transparent, and equitable online as in person. No study or project is perfect, and the extent to which we are ethical and transparent in each assessment project will determine the improvements we can make and the quality of projects to come.
- Franco, M. A., & Hernández, S. (2018). Assessing the capacity of Hispanic serving institutions to serve latinx students: moving beyond compositional diversity. New Directions for Institutional Research, 177, 57–71. https://doi.org/10.1002/ir.20256
- Hamrick, F.A., Evans, N.J., & Schuh, J. H. (2002). Foundations of student affairs practice. San Francisco: Jossey-Bass.
- Jagadish, H. V. (2020). Data science ethics [Online course]. University of Michigan. https://www.coursera.org/learn/data-science-ethics
- Kitchener, K. S. (1985). Ethical principles and ethical decisions in student affairs. In H. J. Cannon & R. D. Brown (Eds.), Applied ethics in student services (New Directions for Student Services, No. 30, pp. 17-29). San Francisco: Jossey-Bass.
- Patton, M. Q. (1987). How to use qualitative methods in evaluation. Newbury Park: Sage.
- Schuh, J.H., & Associates (2009). Assessment methods for student affairs. San Francisco: Jossey-Bass.
- Suskie, L. (2009). Assessing student learning. San Francisco: Jossey-Bass.
- Upcraft, M. L., & Schuh, J. H. (1996). Assessment in student affairs. San Francisco: Jossey-Bass.
Sandra Mahoney, University of the Pacific