Information literacy assessment. (Day 88/100)

My job title is Pedagogy and Assessment Librarian. I took a course called “Assessment” in my LIS graduate program. I just finished writing a 10-page year-end assessment report which I submitted to our University Assessment Director (and he looooved it).

The point is, I should know a lot about assessment. I don’t. I’m still figuring it out.

One thing I’ve learned over the past year is just how complicated, yet simultaneously meaningless, the word assessment can be.

People hate the word assessment because it has too many onerous connotations. Extra work, reports, rubrics, Excel spreadsheets. Administrative obligation. A looming sense of futility.

Maybe we could jazz it up a bit by referring to it simply as giving a shit.

Do you give a shit?

So do I.

Let’s give a shit together.

If you give a shit about something, I think it is natural that you would be curious about it. If you’re curious about it, you would ask a question, and (ideally) care about the answer. I think that’s what information literacy assessment is about–being curious about information literacy, wondering how students become information literate, and caring about how you can impact their learning.

The world of library assessment is messy. I’ve known this for a while, but it became very clear to me when I attended the Library Assessment Conference in Arlington, Virginia last fall. At the conference, I discovered that many of my peers have similar titles (“Assessment Librarian”) but we have radically different jobs. For example, I do not assess spaces, services, or collections. I do not administer LibQual surveys and I have no idea how to use NVivo or SPSS. My job focus is solely on student learning and information literacy assessment. I have not met a single person who is jealous of this.

Information literacy is hard to define. Lots of smart people don’t exactly agree on what it is. If you can’t define it, then how do you measure it?

From a student learning perspective, we would argue that you measure information literacy by defining student learning outcomes. Next, you create opportunities to assess those outcomes. For each outcome, you identify criteria and performance indicators that describe to what degree the outcome has been met (not yet met, partially met, met, etc.).
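To make that outcome → criteria → performance-indicator structure concrete, here is a minimal sketch in Python. The outcome, criteria, and 0–2 scoring scheme are invented for illustration; this is not any official ACRL or institutional rubric, just one way the pieces fit together.

```python
# Hypothetical rubric sketch: one learning outcome, its criteria, and a
# mapping from per-criterion scores to a performance level. All names and
# scales here are invented for illustration.

LEVELS = ["not yet met", "partially met", "met"]

# One made-up outcome with two criteria; each criterion is scored 0-2,
# matching the indices of LEVELS.
rubric = {
    "outcome": "Student evaluates sources for authority and relevance",
    "criteria": [
        "identifies the author's credentials",
        "explains why the source fits the research question",
    ],
}

def score_outcome(criterion_scores):
    """Average the per-criterion scores (0-2) and map to a performance level."""
    avg = sum(criterion_scores) / len(criterion_scores)
    return LEVELS[round(avg)]

# Example: one criterion fully met (2), one not met (0).
print(score_outcome([2, 0]))  # -> "partially met"
```

In practice the interesting (and hard) part is writing criteria that raters can apply consistently to real student work, not the arithmetic.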

For 15 years, the ACRL Information Literacy Standards for Higher Education provided a neat and tidy checklist of over 80 skills that an information literate student should have. Cue the collective teeth gnashing when the Standards, rescinded last June, were replaced by the ACRL Framework for Information Literacy for Higher Education, a delightfully nebulous document that uses threshold concept theory to describe the behaviors and dispositions of information literate students. “You can’t assess this!” librarians said. And they continue to say it. I won’t belabor this point–if you’re really interested, you can attend one of the many ACRL-sponsored webinars, workshops, or conference sessions on the topic. (In fact, Meredith Farkas presented a particularly fantastic session last week on the Framework and its implications for instruction.) Or, like me, you can practice the art of silently sobbing when colleagues characterize their engagement in the following manner: “Oh, the Framework? Yeah, we haven’t really looked at it yet.”

From what I can tell (my devoted readers are encouraged to disagree with me), a lot of academic librarians are not engaged with student learning assessment in any way. Some librarians are doing some assessment of student learning, usually by collecting data (worksheets, minute papers) in one-shot instruction sessions. A few librarians are engaged in meaningful, longitudinal, campus-wide initiatives related to the assessment of student learning (through institution-level learning outcomes, reflective learning activities, portfolio assessment, etc.). A small group of folks drive me completely insane by using data analytics to report correlative findings about student performance, e.g., “Students who check out books from the library have higher GPAs.” If you think this is assessment of student learning, I feel sad for you.

Where to start?

So what does real assessment of student learning look like? Andrew Walsh tackles this question in his 2009 article, “Information Literacy Assessment: Where Do We Start?” from the Journal of Librarianship and Information Science. He reviewed nearly 100 articles (hey, nice number) about information literacy assessment to investigate how the authors measured student learning. He found that about a third of the articles used multiple-choice assessment (blargh). He also examines and explains other assessment methodologies, including observation, simulation, and self-assessment. Walsh eloquently describes the complexity of truly assessing information literacy–as he says, the assessment tools that are easiest and quickest to administer don’t actually measure the nuanced skills and behaviors of information literacy.

Walsh’s article is probably one of the best overviews of different information literacy assessment methodologies, their frequency of use (really, have we changed our practices much in 10 years? Probably not, I’m afraid), and their benefits/drawbacks.

Another helpful introductory article is Christopher Stewart’s short, two-page review from 2011, which provides an overview of the landscape of information literacy assessment. Stewart explains the purpose of tools like Measuring Impact of Networked Electronic Services (MINES), Standardized Assessment of Information Literacy Skills (SAILS), and surveys like LibQual and the National Survey of Student Engagement (NSSE). The review also briefly explains the VAL Report and Megan Oakleaf’s insistence that the future of student learning outcomes assessment is going to revolve around linking student data to library data. Barf.

Sail away from SAILS…

I read a couple of articles about SAILS because that’s all I could stomach. Some thoughts:

  • Why was an article about information literacy assessment in Technical Services Quarterly? I’m still scratching my head about that one.
  • Speaking of that same article, the authors, Rumble and Noe, describe a remarkably interconnected relationship with their English department and writing tutors. Although the article doesn’t include the results of the SAILS assessment, they observe that simply implementing a test made faculty think more about learning outcomes. I thought that was kind of backwards. Is it possible to care about learning without a standardized test?
  • If your institution uses SAILS, or is considering using it, I recommend Lym, Grossman, Yannotta, and Talih’s 2010 article from Reference Services Review. They discuss how institutions have administered and used SAILS. The most damning sentence can be found in the conclusion: “Our data tend to show that administering SAILS did not produce clear evidence of the efficacy of our sample institutions’ information literacy programs” (p. 184). They suggest administering pre- and post-tests around information literacy instruction to prove that one-shots work. Sigh.


Gimme that good trip that make me not quit (Grande, 2016)

What I liked:

  • Perruso Brown and Kingsley-Wilson (2010) provide an interesting example of collaborating with Journalism faculty to administer and assess an exam that tested students on how they would handle real-life information needs of journalists. Open-ended answers were difficult to assess, but I like the authenticity of the questions and the way they let students choose how to resolve their information needs (students were able to choose which sources to consult). I also appreciated that the article shared versions of questions that didn’t work, e.g., outdated questions that required students to refer to print encyclopedias instead of using easily available free web sources.
  • There are several things I appreciated about the 2013 article from Yager, Salisbury, and Kirkman at La Trobe University in Australia. They published their findings in The International Journal of the First Year in Higher Education, a scholarly publication outside the realm of libraries/information literacy. I also appreciate that they used two different forms of assessment with first-year students: an online quiz taken early in the course as well as a course-integrated assignment, which was assessed with a rubric. Their sample size is large–nearly 300 students. I’m not entirely sure how I feel about their overall approach (using a quiz to determine who will be successful with the course-integrated assignment later), but their results are interesting–they conclude that the “quiz was not particularly useful in determining those students who would later go on to demonstrate that they exceeded the cornerstone-level standards in Inquiry/Research” (p. 68). I interpret this to mean that the students who were low-performing in the beginning of the course had a positive and transformative learning experience throughout the course.
  • I was impressed by Holliday et al.’s article from 2015 in College and Research Libraries. The authors reviewed 884 papers from different students at different points in the curriculum (the papers came from ENGL 1010, ENGL 2010, PSY 3500, and HIST 4990). At the end of the article, Holliday et al. conclude that the benefit of the assessment process was looking at a large body of student work, getting to know the curriculum, and making changes to information literacy instruction as well as course assignments. Hallelujah. I love that they used the assessment process to drive curriculum and pedagogy changes, rather than trying to prove the efficacy of the one-shot. Kiel, Burclaff, and Johnson reach a similar conclusion in their 2015 article, “Learning by Doing: Developing a Baseline Information Literacy Assessment.” They also looked at a large number of student papers (212!) and found that the process provided “insights into student assignments outside of the specific skills being assessed” (p. 761).
  • I really liked the 2007 article by Sonley, Turner, Myer, and Cotton, which discusses assessing information literacy using a portfolio. The portfolio included a bibliography, evidence of the search process, and a self-reflection about the student’s research process. I think all of these components are so important, so it’s great that they were included–but the researchers only had nine completed samples. Oof. It’s hard to imagine this being done at scale (with an entire first-year cohort of 1500 students, for example).
  • Chan’s 2016 article about institutional assessment of information literacy found that as students progressed through their degrees, they self-reported using the free Internet less for research. I question why this is a behavior we want to reward, given that searching the free web will be the dominant search method students use after they graduate. We should encourage more adept use of the free web, not less use of it overall. I wonder: will the emphasis on academic research atrophy students’ web searching skills by the time they graduate and begin working?


If you’re new to student learning assessment–don’t read too much about it. Reading about it is really confusing until you’ve had some experience with it. I think the best way to learn more about information literacy assessment is to talk to other teachers about it (at conferences, via e-mail, in department meetings) and participate in student learning assessment for yourself. When I was at the ACRL Immersion program in Seattle in 2013, Deb Gilchrist said this about assessment: Start small, but start. It’s good advice.


Chan, C. (2016). Institutional assessment of student information literacy ability: A case study. Communications in Information Literacy, 10(1), 50-61.

Holliday, W., Dance, B., Davis, E., Fagerheim, B., Hedrich, A., Lundstrom, K., & Martin, P. (2015). An information literacy snapshot: Authentic assessment across the curriculum. College & Research Libraries, 76(2), 170-187. doi:10.5860/crl.76.2.170

Kiel, S., Burclaff, N., & Johnson, C. (2015). Learning by doing: Developing a baseline information literacy assessment. portal: Libraries and the Academy, 15(4), 747-766.

Lym, B., Grossman, H., Yannotta, L., & Talih, M. (2010). Assessing the assessment: How institutions administered, interpreted, and used SAILS. Reference Services Review, 38(1), 168-186.

Perruso Brown, C., & Kingsley-Wilson, B. (2010). Assessing organically: Turning an assignment into an assessment. Reference Services Review, 38(4), 536-556.

Rumble, J., & Noe, N. (2009). Project SAILS: Launching information literacy assessment across university waters. Technical Services Quarterly, 26(4), 287-298. doi:10.1080/07317130802678936

Sonley, V., Turner, D., Myer, S., & Cotton, Y. (2007). Information literacy assessment by portfolio: A case study. Reference Services Review, 35(1), 41-70. doi:10.1108/00907320710729355

Stewart, C. (2011). Measuring information literacy: Beyond the case study. The Journal of Academic Librarianship, 37(3), 270-272. doi:10.1016/j.acalib.2011.03.003

Walsh, A. (2009). Information literacy assessment: Where do we start? Journal of Librarianship and Information Science, 41(1), 19-28. doi:10.1177/0961000608099896

Yager, Z., Salisbury, F., & Kirkman, L. (2013). Assessment of information literacy skills among first year students. The International Journal of the First Year in Higher Education, 4(1), 59-71. doi:10.5204/intjfyhe.v4i1.140