5 Thoughts about the Applecart: #3 – Assessment and Evaluation
This post continues a series on five thoughts with which I have been tussling concerning my library. To get caught up, first go here and then here. Thought #3 is perhaps the most overarching of all, touching on every aspect of what we do. It actually has two parts:
Thought #3: (Part A) Are our current methods of assessment and evaluation effectively doing their job? And (Part B) are we using assessment and evaluation outcomes to their fullest potential?
As a library administrator, I fully understand the role and value of assessment. Apart from mandates and professional responsibilities, I appreciate assessment simply because I care. I believe that any of us with genuine concern about the impact of our efforts grasp the merit of evaluation. It is from this vantage point that I have been conducting a mental assessment (so to speak) of our assessment efforts. This includes evaluation of the resources and services provided by the library as well as evaluation of the library staff. I am wondering if we can make any changes in our assessment methods that will make the process more meaningful.
With our assessment efforts:
- Are we asking the right questions? Quite simply, are we assessing the right things? Are we leaving anything on the table? Is there something that we do or offer about which our users would be more than willing to provide feedback if we only asked? Would the library staff find greater interest and value in staff evaluations if we totally redesigned the process?
- Are we asking those questions the right way? Are we approaching our assessment efforts from the best angle to yield the best results? When assessment involves feedback from users, do we pose our questions in a way that users understand what we are asking? I love the recent blog post by Andy Burkhardt (Information Tyrannosaur) about librarians seeing the library with fresh eyes. Sometimes we need to remove ourselves from our everyday role and see what we do from a library user’s perspective. Andy offers some great suggestions on how to give it a try, including a reference to a brilliant idea posed by Brian Herzog (Swiss Army Librarian).
- Are we asking the right people? When seeking feedback concerning a particular resource/service, are we asking the people who are actually using that resource/service? Are we considering input from every possible user group (e.g. students, faculty, staff, alumni, freshmen, athletes, music majors, etc.)?
- Are we closing the loop? I’ll be honest; I have been guilty of going to great lengths to gather evaluative data only to let it collect dust. You can have the richest collection of assessment data in the universe. You can even prepare the sharpest and clearest report of evaluation findings known to mankind. But all of that means very little if you do nothing with it. Assessment for assessment’s sake generates a file of data. Assessment for the sake of improvement generates value. We must do something with that data that we collect.
I must confess that short of minor tweaks, many of our library’s assessment tools have changed very little over the past several years. When assessment is one of many tasks in a roster of duties, it is easy to just continue using the same metrics, collecting them the same way year after year. The reality, however, is that the playing field continues to change. It stands to reason that our assessment efforts must often do the same in order to remain in step and retain their relevance.
Are you trying any innovative methods of assessment that draw useful participation and feedback?
Are you conducting staff evaluations in a fresh way that is resonating with those being evaluated?
What steps are you taking to ensure that you are doing something to “close the loop” with your assessment data?
Pic credit: swannman