For the past few days, I have temporarily placed my primary hat of librarianship on the rack and taken up the Stetson hat of assessment to attend the 2012 SACS Annual Meeting in Dallas with some colleagues. With assessment firmly on the brain, I am finally getting around to a post that I have been meaning to share for some time. My institution completed a reaccreditation site visit this past spring, concluding a lengthy, two-year self-study process. Or so one might think. The site visit does bring a sense of finality to a self-study, but there is another way to view the assessment activity that drives the self-study process.
At the initial meeting for the site visit, the following statement about the reaccreditation process was offered: “It’s not a destination; it’s a journey.” Off and on since that time, I have reflected on that statement. And I consistently end up thinking about one thing:
Anyone who has ever played a flight sim game or actually flown an airplane is familiar with the concept of the waypoint. In flight navigation, a waypoint is a specific point along a flight route that serves as a marker or guidepost to help keep you on your correct flight path. One dictionary defines a waypoint as “the co-ordinates of a specific location as defined by a GPS.” Another dictionary offers this definition: “An intermediate point on a route or line of travel.” In other words, it’s not an (ultimate) destination, but rather a (point along the) journey.
The reaccreditation site visit is typically seen as the “capstone” event of a self-study process, bringing much-needed closure to a long-suffering process that includes massive amounts of data collection and review, many sleepless nights, more meetings than you can shake a stick at, a few more sleepless nights, hours upon hours of writing, and even more sleepless nights. Our minds need some finality to the whole process. We need time to breathe. To quote Jack Nicholson from “A Few Good Men” [with some translative license]: You WANT this to be the end! You NEED this to be the end!
There is an ongoing nature to the whole process. Even after the site visit, for example, there still remain those follow-up reports in response to recommendations of the visiting team. In many ways, today’s accreditation process never really ends. It shouldn’t end. The end-game for assessment is improvement and effectiveness, and I believe we will never reach the bottom of the jar of improvement and greater effectiveness. Yes, the site visit could be seen as a singular event in time. Underneath that event, however, flows a steady stream of ongoing activity: a river of assessment. Assessment is an ongoing process of identification, collection, measurement, and review, followed by a determination of what level of success the outcomes reflect and a plan for using what is learned to guide improvement going forward. And that is followed by another round of identification, collection, measurement, review, and so on.
So on the return flight back to South Carolina tomorrow afternoon, my colleagues and I will be in a plane that will (hopefully) be hitting its waypoints in order to effectively reach the GSP airport. Likewise, when we get back to campus, we will be aiming for waypoints to guide us through the continual journey of assessment. Happy flying, everyone.
This post continues a series on five thoughts with which I have been tussling concerning my library. To get caught up, first go here and then here. Thought #3 is perhaps the most overarching of all, touching on every aspect of what we do. It actually has two parts:
Thought #3: (Part A) Are our current methods of assessment and evaluation effectively doing their job? And (Part B) are we using assessment and evaluation outcomes to their fullest potential?
As a library administrator, I fully understand the role and value of assessment. Apart from mandates and professional responsibilities, I appreciate assessment simply because I care. I believe that any of us with genuine concern about the impact of our efforts grasps the merit of evaluation. It is from this vantage point that I have been conducting a mental assessment (so to speak) of our assessment efforts. This includes evaluation of the resources and services provided by the library as well as evaluation of the library staff. I am wondering if we can make any changes in our assessment methods that will make the process more meaningful.
With our assessment efforts:
- Are we asking the right questions? Quite simply, are we assessing the right things? Are we leaving anything on the table? Is there something that we do or offer about which our users would be more than willing to provide feedback if we only asked? Would the library staff find greater interest and value in staff evaluations if we totally redesigned the process?
- Are we asking those questions the right way? Are we approaching our assessment efforts from the best angle to yield the best results? When assessment involves feedback from users, do we pose our questions in a way that users understand what we are asking? I love the recent blog post by Andy Burkhardt (Information Tyrannosaur) about librarians seeing the library with fresh eyes. Sometimes we need to remove ourselves from our everyday role and see what we do from a library user’s perspective. Andy offers some great suggestions on how to give it a try, including a reference to a brilliant idea posed by Brian Herzog (Swiss Army Librarian).
- Are we asking the right people? When seeking feedback concerning a particular resource/service, are we asking the people who are actually using that resource/service? Are we considering input from every possible user group (e.g., students, faculty, staff, alumni, freshmen, athletes, music majors, etc.)?
- Are we closing the loop? I’ll be honest; I have been guilty of going to great lengths to gather evaluative data only to let it collect dust. You can have the richest collection of assessment data in the universe. You can even prepare the sharpest and clearest report of evaluation findings known to mankind. But all of that means very little if you do nothing with it. Assessment for assessment’s sake generates a file of data. Assessment for the sake of improvement generates value. We must do something with that data that we collect.
I must confess that short of minor tweaks, many of our library’s assessment tools have changed very little over the past several years. When assessment is one of many tasks in a roster of duties, it is easy to just continue using the same metrics, collecting them the same way year after year. The reality, however, is that the playing field continues to change. It stands to reason that our assessment efforts must often do the same in order to remain in step and retain their relevance.
Are you trying any innovative methods of assessment that draw useful participation and feedback?
Are you conducting staff evaluations in a fresh way that is resonating with those being evaluated?
What steps are you taking to ensure that you are doing something to “close the loop” with your assessment data?
Pic credit: swannman