Editor: Daniel W. Newhart | Year: 2018 | Issues: 1
Joseph Levy, Richard Hess, and Amanda Thomas
While student affairs assessment does not have as long a history as academic assessment, a wealth of literature related to student learning makes clear reference to and inclusion of co-curricular and student affairs areas. Over the past several years, accrediting bodies have revised criteria and standards with language specifically naming co-curricular and student support services. Despite these developments, the role student affairs assessment can play in accreditation work remains widely misunderstood. This article explores key developments in the history of accreditation and student affairs assessment, explains accreditors' interests in and expectations for student affairs assessment, and concludes with implications and future considerations for institutions to integrate into practice.
Maureen Cochran, Pamelyn Shefman, and Mudithani Hettiarachchi
Often, an assessment professional’s eye for inquiry is focused on work happening outside of the assessment office. A program review is an opportunity to turn that lens inward and examine the quality of the ways in which the area under review works with other parts of the university. This paper describes the various actors involved in a program review and the types of review, and offers the perspectives of a person who served as an external evaluator and of a person who was part of an assessment office that was reviewed. Expectations and processes are discussed, with practical applications for readers’ own campuses.
The purpose of this study was to understand how student affairs assessment experts define a culture of assessment in student affairs. The extant literature has provided ample guides for the mechanics of student affairs assessment, a clear rationale for why assessment should be conducted in student affairs, and even the suggestion that cultures of assessment should be developed as a means of implementing assessment. However, while some scholars define and others describe this culture of assessment, a common, empirically based definition does not exist. This gap prevents important scholarship about cultures of assessment in student affairs and limits how practitioners can go about fostering and sustaining such a culture. This Delphi study engaged experts in a systematic process of developing a definition for a culture of assessment in student affairs. The results include a definition as well as characteristics describing a culture of assessment in student affairs that can promote further investigation of this important topic. The findings also suggest connections with the learning organization and organizational culture literature that can strengthen the student affairs assessment literature in studying this construct.
Erika L. Beseler Thompson, Chris Ray, and Nathan Wood
Given the dynamic nature of culture as both a social and individual phenomenon, an investigation of assessment culture in student affairs necessitates a look at the reciprocal relationship between individual practitioners and their surrounding environment. The purpose of this study was to explore the range of perceptions of student affairs practitioners regarding assessment of student learning by integrating various individual and environmental variables into a comprehensive framework that encompasses the multiple levels of the social ecological model (McLeroy, Steckler, Bibeau, & Glanz, 1988). The researchers employed Q methodology to illustrate the range of viewpoints of 44 student affairs practitioners regarding assessment in student affairs. Participants representing various functional areas, position levels, and institution types completed a Q sort of 51 assessment-related statements. Participants’ sorting data were subjected to by-person factor analysis, resulting in groupings of respondents who share similar perspectives. Additional qualitative data were collected via post-sort questions and follow-up interviews to assist with interpretation of three participant viewpoints: Assessment-as-Significant, Assessment-as-Irrelevant, and Assessment-in-Isolation. These distinct viewpoints clearly illustrated that the various beliefs individuals hold about themselves and their institution’s culture of assessment interact dynamically to impact participants’ assessment perceptions and practice. Study findings provide further insight into reasons for the gap between the espoused value and actual practice of assessment and reinforce the notion that addressing this gap requires attention to the dynamic interactions of both individual and environmental factors. Implications for the scholarship of assessment and assessment practice are discussed.
Courtney Marsden and Erica Eckert
Existing literature reveals a disconnect between the assessment, evaluation, and research (AER) competencies sought by employers and those held by new professionals. The purpose of this descriptive study was to explore the content of master’s-level student affairs programs alongside the literature pertaining to the needs of the profession. A content analysis of 45 AER course syllabi revealed that research topics were listed more frequently in syllabi than assessment topics.
LaNette Thompson and Jeff Doyle
At a private four-year university, two student life poster sessions were held to share student affairs assessment projects and their results with the larger university community. During the sessions, faculty members, administrators, staff, and students viewed eleven posters highlighting assessment projects related to student development. Topics included 1) student learning in public deliberation initiatives, 2) findings from a campus food insecurity task force, 3) correlations between attendance at summer orientation programs and retention, and 4) an assessment of goals related to the faculty-in-residence program. Preparing for the poster sessions involved securing a venue, soliciting posters, offering training on poster preparation, advertising the sessions, holding them, and assessing them. Insights included the sessions’ positive impact as well as suggestions for improving future sessions; in particular, presenters needed training on how to present a poster and on the purpose of a poster session. The process and insights gained are shared for the benefit of those who may wish to use this simple, cost-effective method to encourage student affairs assessment at their institution.