To Lock or Not to Lock?

Slides from IGNIS Webinar (4/26/18)

Last fall, I was working as a part-time librarian at a community college when a student approached me at the reference desk and asked where she could take her test. She explained that she needed to download software called Respondus that would lock down her browser, and she needed to use a webcam because she would be recorded during the test. It was through that interaction that I first learned about the complicated world of online exam proctoring.

Now that I’ve had some time to think about this topic a little more (and get over my initial shock), I’ve come to the conclusion that the decision to secure online assessments is part of the instructional design process. I’ve read some interesting pieces, suggested below, which made me think about the various factors that instructors need to weigh before deciding to use online exam security.

Faculty who teach online can’t ignore the imperative to ensure that the student who is taking the online course is the one completing the work. This is specified in the 2008 Higher Education Opportunity Act (HEOA). However, HEOA does not specify exactly how programs must verify identity, and it suggests a range of options including secure logins, proctoring, and other technology.

Some courses don’t use exams at all. However, in courses that do rely heavily on exams, especially for large portions of the student’s overall grade, the instructor may feel strongly that they need to verify the student’s identity and ensure that the student is not receiving any outside help during the exam. Online proctoring software may include features that require students to show photo ID and limit the student’s computer capabilities (by locking down the browser). Some companies offer live remote proctoring where the student is monitored via webcam to ensure the student is following the rules of the exam.

In institutions where faculty do not have access to proctoring software (all of the companies offering these services are costly), instructors may add features to their exams to make it more difficult for students to cheat. This could include randomizing answers, timing questions, showing only one question at a time, preventing students from reviewing correct answers, and limiting the time window during which the exam can be taken.

At the same time, making online assessments more difficult to take can hinder accessibility and equity. Randomizing answer order can slow down learners who depend on a consistent presentation of answer choices to work through questions successfully. It’s also recommended that faculty shorten the time window during which tests are available to prevent collaboration and the sharing of answers. This sounds good in theory, but it can create burdens on students who may need to shift work/childcare schedules in order to take a test during a limited time frame.

There are many ways to discourage academic dishonesty in online courses without using proctoring (which is costly to the institution and still does not completely prevent cheating). I think the first step is to look at the course’s learning outcomes and determine if any of the outcomes can be assessed without using simple multiple choice exams. Moving away from exams to other kinds of assessments, including group work, portfolios, problem-based assignments, essays, and reflective writing, can give a clearer picture of students’ learning.

If exams must be used, it is best to limit their weight (e.g., make them count for a smaller portion of a student’s overall grade). Another option is to use automatically graded multiple-choice tests as simple knowledge checks that unlock the next module but do not affect the learner’s grade.

When I think back to the scenario of the student who needed somewhere to take her exam, the real head-scratcher for me is that she was not taking a fully-online course. She was taking a face-to-face course and, obviously, since she was standing in front of me, she was a student who came to campus regularly. She asked me this question very early in the quarter, far too early to be taking any kind of summative assessment like a midterm or final exam. So why did her instructor feel it was necessary to use online proctoring software? If it was a low-value quiz, couldn’t it have been taken in person, during class? I don’t know the answers to these questions, but these are the kinds of conversations I’d like to have with faculty to better understand their thoughts about securing assessments.

Suggested Reading

Alessio, H. M., Malay, N., Maurer, K., Bailer, A. J., & Rubin, B. (2017). Examining the effect of proctoring on online test scores. Online Learning, 21(1), 146-161.

Higher Learning Commission. (2018, January). Federal compliance overview. Retrieved from http://download.hlcommission.org/FedCompOverview_PRC.pdf

Michael, T. B., & Williams, M. A. (2013). Student equity: Discouraging cheating in online courses. Administrative Issues Journal, 3(2). Retrieved from https://dc.swosu.edu/aij/vol3/iss2/6

Schaffhauser, D. (2016, April 6). How students try to bamboozle online proctors. Campus Technology. Retrieved from https://campustechnology.com/articles/2016/04/06/how-students-try-to-bamboozle-online-proctors.aspx

Smith, C. & Noviello, S. (2012, September). Best practices in authentication and verification of students in online education [Presentation]. Retrieved from http://hdl.handle.net/10755/243374

Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat: Frequency and type of academic dishonesty in the virtual classroom. Online Journal of Distance Learning Administration, 12(3), 1-10.

Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Online Journal of Distance Learning Administration, 13(1). Retrieved from https://www.westga.edu/~distance/ojdla/spring131/watson131.html

 

 

Featured image by Mike Szczepanski on Unsplash

Who Succeeds in Higher Education? Questioning the Connection Between Academic Libraries & Student Success.

This is a rough transcript of the plenary session presented to the California Academic & Research Libraries conference on April 15, 2018. The full paper will be published in the conference proceedings. All errors, typos, and misunderstandings are my own and do not represent the views of anyone else, including my past, present, and future employers.

My slides are here: https://tinyurl.com/zohzohcarlconf2018

Abstract: Many academic libraries are feeling pressured to join “student success” initiatives to collect and analyze data about students’ academic behaviors. In the library, this may result in tracking who uses group study rooms, who checks out books, who asks questions at the reference desk, and who participates in information literacy instruction. These data points are being used to prove that students who use the library are more likely to succeed in college; therefore, academic libraries are valuable. Such surveillance methods have been used in several high-profile studies, including those in the Association of College & Research Libraries’ Assessment in Action initiative. In this talk, I will question the role of academic libraries in student success and the methods being used to prove academic library value. What is at stake when academic libraries connect student library use with their academic performance? What are the implications for students’ privacy? Could tracking students in the library lead to self-censorship and intellectual freedom concerns? Most importantly, what do students really need from an academic library in order to be successful in college?

Introduction

Good morning, and thank you for the opportunity to be here to speak with you today. I am very grateful to the conference organizers, and I especially want to thank Miguel Figueroa and Charlotte Roh for their wonderful talks. I couldn’t have asked for a better keynote and plenary session to precede this one, and their presentations set the bar very, very high.

I am truly humbled by the amount of knowledge and experience in this room. I won’t pretend for a second to know more than any other person here. But I don’t think that I’m here to tell you about what I know. I believe I was invited to speak to you because of how I think about libraries, and how I see the role of academic libraries in student success.

Longview, Washington

My perspective on academic libraries is shaped by how I started my career. After receiving my Master’s degree in 2010, my first professional position was an adjunct faculty librarian role at Lower Columbia College in Longview, Washington. Longview is a small mill town an hour north of Portland on Interstate 5. It sits on the Columbia River, which made it a bustling hub for the timber industry and port shipping in the early twentieth century. Now it’s well known for its distinct and lingering timber mill smell, its annual Squirrel Festival, and the song “Longview” from Green Day’s 1994 album Dookie.

Students at Lower Columbia College

I tell you this because Lower Columbia College was the first college I ever worked for as a librarian, and those students shaped my empathy and my passion for higher education. They were the highlight of my day, every day I was there. Since then, I have worked for several other community colleges, and community college students continue to be the most interesting, driven, capable, and hardworking people I’ve ever met. According to student demographics from when I worked there in 2012, 60% of LCC students were part-time. 58% of them were age 25 or older. Half of them had children. And 72% of students were there for certificate programs and basic skills education; only 28% were seeking transfer degrees.

When I talk about college students with you, I don’t picture an 18-year-old person living in a dorm room at a sprawling university. In my head, when you say “college student”, I picture someone who works to make ends meet, who goes to school part-time, and who is raising a family.

 

Community College Educators

Whether or not you realize it, you are all community college educators, too. Of all students who completed a four-year college degree in 2015-2016, half of them had been enrolled at a two-year college at some point in the ten years prior to earning their degree.

Here in California, 29% of University of California graduates started at a California community college. The number goes up to 51% for California State University graduates.

Community college students are the people I care most about. When we talk about who succeeds in higher education, and how the library impacts student success, we have to remember that 40% of undergraduates attend community colleges. What are we saying about these students when we talk about library value?

What are we saying about students when we correlate their library use with their academic performance–for example, at the University of Wollongong Library, where students’ library use is tracked and correlated with their grades? Or the University of Minnesota, where students who used the library were found to have a higher GPA and a higher retention rate than those who did not? I think we’re trying to say, “The library is valuable.” I think we’re also saying, “Student success is determined by the actions of the individual student.”

 

Neoliberalism and Student Success

As I see it, the prevailing message about the academic library’s impact on student success is presented as an issue of individual choices made by students, a framework that aligns with a neoliberal philosophy of higher education.

To describe neoliberalism can feel like trying to describe the taste of the air. Put simply, neoliberalism is the political and economic theory that all human relationships are based on competition (Monbiot, 2016) and, thus, society functions most efficiently when working according to market principles (McCabe, 2015). Neoliberal philosophy was put into practice by policies enacted by Reagan and Thatcher, policies which eroded public goods and dismantled social welfare programs.

Neoliberalism also emphasizes the notion of scarcity: there is not enough for all of us. If you don’t get enough resources, it’s because you didn’t work hard enough for them, or you didn’t provide the right commodities demanded by the market (Palley, 2004). We see this every day in academic institutions where departments that do not produce enough deliverables or meet institutional outcomes are thusly cut, and their erasure is treated as part of doing business.

I like the way Karen Nicholson describes neoliberalism as “an invisible part of the fabric of our daily lives.” In addition to Nicholson, there are several other researchers in LIS/archives who are actively questioning the impact of neoliberalism and corporatization in libraries. I recommend reading Ian Beilin, Marika Cifor, Maura Seale, and David McMenemy, all of whom have written words more eloquent than my own.

 

Who Succeeds in Higher Education?

So when we measure the value of academic libraries by the individual success of students who use them, we are saying: the more students engage with the library, the more successful they are. This fits with the neoliberal ethos of meritocracy: we all have equal opportunity, everyone can succeed if they try hard enough, and success is determined by the choices you make.

We seem unwilling to admit that the architecture of privilege permeates all things, including student success. In higher education, we regularly discuss success as an issue of what students do with their time rather than just bluntly admit: success is systemic and structural, still more often defined by your zip code, income, race, and inheritance than almost anything else you do.

As you might have guessed, the title of my talk is a rhetorical question. If the question is, “Who succeeds in higher education?”, we already know the answer.

Success in higher education comes largely from identities that are ascribed to us, rather than what we have achieved. You are more likely to graduate from college if you are white, female, and have money. You are more likely to finish your degree if you go to school full-time and if you attend a selective institution with limited admissions. There are clear connections between wealth, privilege, status, and college completion.

If this is the case, why are libraries investing time and money to ask how library use impacts student success? If we already know who succeeds, why are we tracking students who use the library to prove they are more successful than students who do not?

No moment exists in a vacuum and nothing is inevitable. I am married to a historian of science and technology, so we have a lot of dinner table conversations about the importance of history. I believe that we librarians find ourselves in this difficult moment for two reasons.

One, the Association of College and Research Libraries is actively pushing a research agenda that promotes connecting the individual use of academic libraries to traditional measures of student success. Two, it is much easier to believe that students are responsible for their success than to acknowledge the more complicated intersections of privilege and oppression that determine who succeeds in our society.

What I’d like to do with my time this morning is attempt to convince you that

  • correlating student use of academic libraries with their individual success is a harmful framework;
  • the ACRL research agenda is driven by a neoliberal concept of higher education that pins student success on their choices as individuals;
  • if we are truly dedicated to the cause of student success, library money and library labor would be better spent focusing on what students actually need: financial support, food, and housing.

In order to better understand how we arrived at this moment, we need to discuss the Value of Academic Libraries Report.

 

The VAL Report

There’s a good chance that what you’re working on in your library right now aligns with the Value of Academic Libraries report, more commonly known as the VAL report, which was published by ACRL in 2010.

If your library has defined outcomes, created or adopted systems for assessment management, collected data about users beyond what is necessary to provide services, and/or connected library use to student engagement, retention, graduation, and academic performance, then you are practicing some of the 22 recommendations for demonstrating library value, as defined by Megan Oakleaf in her seminal report.

The overarching recommendation that libraries must “demonstrate value” by integrating with their institution’s outcomes assessment reporting might sound normal to us now, but at the time of its publication, the VAL report suggested practices and procedures that did not exist in many libraries — and, to be frank, that are still very new to many academic libraries around the country.

 

Assessment in Action

After the publication of the VAL report, ACRL received an IMLS grant to fund Assessment in Action, a three-year project to support data-driven assessment projects in academic libraries. AiA spawned three cycles of annual assessment projects at nearly 200 different colleges and universities, all of which defined research questions, created teams that included partners outside the library, and collected data to study the impact of academic libraries on student success.

Many of the projects in Assessment in Action are interesting, well-researched case studies with deeply provocative questions about how students and faculty find value in academic libraries. Looking over the project summaries, you’ll see that many libraries undertook collaborative projects that engaged multiple campus units and took hundreds of hours to complete. Reflections from participants indicate that the most valuable part of their involvement in AiA was forming connections with partners outside the library, including institutional research offices and faculty in targeted areas like Composition and STEM.

 

Assessment in Action – Correlating Library Use with Student Success

AiA projects were and continue to be widely publicized by ACRL, and many institutions are attempting to replicate the data collection and analysis performed by AiA participants. From my perspective, AiA popularized and normalized the idea that it is not just acceptable but imperative for libraries to track how students use the library to prove the value of the academic library.

A few examples of AiA studies connecting library use with student success:

  • At Eastern Kentucky University Libraries, they found that students who accessed online resources through the library had a higher GPA than students who did not. They also found that students with a low GPA (between 0 and 1) had not accessed any online library resources.
  • At Nevada State College, they found a positive correlation between library use and GPA, between using online resources and retention, and between using the library and achieving good academic standing.
  • The Northwest Arkansas Community College Library found that students who attended information literacy sessions had better course-end grades and retained at a higher rate than those who did not attend the sessions.
  • At Colorado Mesa University, 92% of students who used the library’s research assistance were retained, compared with 83% of students who did not use the service.

 

From Case Studies to Generalizations

Each year, AiA published an executive summary of its findings. Both Year 2 and Year 3 reports highlight findings from studies like those just mentioned, emphasizing that students who use the library show better outcomes than those who do not.

But can we really make such broad generalizations from these small, localized case studies? And what does it really tell us that students who use the library are more successful? Doesn’t this just mean that students who use the library are probably more academically engaged generally? It is my hypothesis that students who use the library have the time to do so. Students who attend library instruction sessions? Well, they probably have pretty good class attendance overall, and there’s certainly a strong correlation between attending class and getting good grades in college.

I am not convinced that correlational studies do anything more than tell us what we already know: students who have the time and resources to do well in college do well. As I’ve said before, the library is not the thing that makes students successful–privilege is.

In their AiA project summary, Michigan State University acknowledged the complexity of isolating the library’s impact on student success.

Their team wrote,  

we do not have sufficient data to make generalizations. This project reiterates the difficulty in demonstrating even correlative relationships between library use and student success; while we can compare the numbers, there are many external and environmental factors for which we cannot account.

 

Learning Analytics

The transition from Assessment in Action projects and correlational studies to involvement in campus learning analytics initiatives is a short one. Learning analytics is described by Erik Duval as “collecting traces that learners leave behind and using those traces to improve learning.” In order to optimize education, learning analytics gathers data about student behavior and performance and makes such data instantly available to campus stakeholders through sleek online dashboards. Oftentimes, students are not fully aware of the kinds of data being collected about them as they move through their days: they may be tracked when using Learning Management Systems (like Canvas and Blackboard), as well as when they swipe their ID cards at places like tutoring centers, the gym, and campus events, and now, in libraries, where circulation records and database logins can be transferred to institutional analytics repositories.

Collecting and analyzing information about students in higher education is not new; however, in the past, most administrators were stuck analyzing performance results like grades and course completion after the end of the term. Learning analytics offers educators the unique opportunity to act on real-time data during the term and intervene when students are flunking assignments or not attending classes, to offer “nudges” based on low performance, and to even predict what students’ outcomes might be.

 

Student Attitudes About Learning Analytics

How students feel about learning analytics is still largely unknown. Researchers in Australia conducted focus groups with students at a large metropolitan university and found that students were hopeful about how learning analytics could potentially help them connect with campus resources, but they were also concerned about being patronized and having their privacy invaded. One student commented that they felt “nudges” from professors through a learning analytics system could feel like a parent nagging them to do their chores. Overall, students expressed that they felt “uninformed and uncertain” about learning analytics and, after being given more information, they had concerns about how instructors’ access to analytics could result in preconceived judgments and bias that would impact their learning opportunities.

 

Oakleaf Attitudes About Learning Analytics

How does Megan Oakleaf see the future of learning analytics in libraries? In the VAL report, Oakleaf lamented the lack of individualized, user-level data about academic library use. She wrote:

For instance, until libraries know that student #5 with major A has downloaded B number of articles from database C, checked out D number of books,  participated in E workshops and online tutorials, and completed courses F, G, and H, libraries cannot correlate any of those student information behaviors with attainment of other outcomes.

Until librarians do that, they will be blocked in many of their efforts to demonstrate value…demonstrating the full value of academic libraries is only possible when libraries possess evidence that allows them to examine the impact of library user interactions. (page 96 of the VAL report)

 

In November 2017, Oakleaf presented at EDUCAUSE about the importance of integrating individual library user data into learning analytics dashboards. She included a screenshot from a campus analytics dashboard to show what it might look like to see a student’s interactions with the library–for example, what if faculty could see if students had attended information literacy instruction? I imagine that we could also include whether or not that student has logged in to our resources or borrowed materials. To be clear, this is not about research studies that look at student behavior in the aggregate — this is identifying student behaviors and interactions with the library down to the individual student, for other stakeholders on campus to see, analyze, and interpret.

Earlier this year, Oakleaf published her most recent article about libraries and learning analytics, in which she discusses the “problem” of privacy in learning analytics as requiring “a significant shift in professional library practice and a reconciliation between long held ethical positions and new imperatives to support student learning and success.”

To me, it is not a “significant shift” to collect and analyze individual, identifiable use of the library and its resources; it is a seismic pivot in library values and intellectual freedom principles. I do not believe it is possible to foster unhindered academic inquiry while, at the same time, tracking when a student logs in to online resources, how many books they check out, or how often they attend information literacy instruction sessions. Putting this data into dashboards accessible by faculty, administrators, student advisors, counselors, and other campus stakeholders is an enormous violation of trust.

Oakleaf is unwavering in her certainty that learning analytics is the future of library assessment data and, while she acknowledges concerns about ethics and privacy, her true concern seems not to be with students’ autonomy and agency–but with librarians’ hesitance to hand over library use data for input in campus-wide advising and retention systems.

As the author of the VAL report, a prolific scholar, and an iSchool professor, Oakleaf has enormous influence over the direction of the conversation around library assessment. This influence is evident in the latest ACRL research agenda, titled “Academic Library Impact” which was published in September 2017.

 

ACRL Research Agenda for Student Learning & Success

The ACRL Research Agenda for Student Learning & Success presents six priority areas, one of which is including library data in institutional data collection. In that section, the authors propose the following suggested actions.

  • Know how other academic stakeholders are using learning analytics.
  • Research the safeguards needed to ensure student privacy or confidentiality.
  • Strategically collect data that can be integrated into learning analytics software.
  • Advocate for the inclusion of library data in the volumes of information collected from multiple systems within the academic institution.
  • Integrate library data into campus analytics components.
  • Work with stakeholders to statistically analyze and predict student learning and success based on shared analytics.

 

Why Resist Learning Analytics?

I am deeply troubled by the endorsement of pursuing learning analytics as part of an academic library research agenda for several reasons: one, it erodes student privacy and intellectual freedom, two, it takes away control and power from learners, and three, it conflates data tracking and surveillance with library assessment.

As Kyle M.L. Jones and Dorothea Salo explore in this month’s issue of College and Research Libraries, there are serious ethical considerations to incorporating library data in institutional learning analytics. They note that students’ intellectual freedom may be hindered if they believe the library is tracking what they search for and where they look for it. Additionally, there may be adverse psychological effects to knowing that library engagement is reported to faculty–how will students feel when faculty, after reviewing students’ low engagement with library resources, “nudge” them to use the library more?

April Hathcock, scholarly communications librarian at NYU and lawyer, says learning analytics are “a colonialist, slave-owning, corporatizing, capitalist practice that enacts violence against the sanctity of a learner’s privacy, body and mind. It is not in keeping with our professional values as librarians or educators.” She goes on to write that we owe learners the agency to be involved in decisions about learning analytics. “You can’t object to something,” she writes, “if you don’t know it’s happening to you.” I would add that you can’t ethically opt-in to something that hasn’t been fully explained to you, either.

 

Assessment =/= Analytics

Most importantly in my mind: analytics is not assessment. I think we have to take a step back and remind ourselves of this, because the current conversation treats analytics as if it were assessment. Some people think that I’m against assessment because I don’t support learning analytics, and that is not true.

It is my belief that the best library assessment initiatives ask questions with an inquiry mindset. Are we providing the right services at the right times? Do we have the right materials? Is the coffee shop open late enough? Are there enough outlets? (The answer is always no, on the last one.)

At the heart of true assessment is the willingness to change and make adjustments, to move the library to better fit the user. We do this all the time. We adjust hours for finals week, extend borrowing privileges for long-term research projects, and put furniture on wheels so students can move it around to suit their needs. We make the library better based on assessment results, which includes direct feedback from users. In return for being studied and observed ethically, transparently, and with care, people who use the library are given a better library to use.

In contrast, harvesting our users’ data from their EZproxy logins, their ID card swipes, and their circulation records, and then comparing it to their GPA, retention rate, or graduation rate, does little to nothing to help them, and it only serves us–provided the results are in our favor. I think York University was brutally honest in their project summary for AiA.

They wrote,

[Our] project found that there is a positive correlation between library eResource usage and GPA. While the project did not result in data from which we would make changes to library or institutional practices, it does give the library a new way to communicate value.

The bottom line is this: ACRL tells us that we need to connect the dots in order to prove that libraries are valuable, and specifically encourages us to perpetuate the narrative that simply using the library has a positive impact on students.

 

But what if using the library hurt student success? Would we do anything differently?

Lise Doucette is a librarian and a researcher in Canada, and she has done some wonderful work studying library assessment. When I talked to her about my frustrations with correlative studies in library assessment, she smiled and said, “I always ask, ‘What would you do if the results were opposite of what you expected? What if library use was correlated with NEGATIVE student outcomes? e.g., the more students used the library, the worse their grades were?'”

It really made me stop and think when she said that, because I don’t think that anyone doing these kinds of studies has considered that result–what would you do if students who spent more time in the library, or logged in to more databases, were more likely to fail their classes? Would you limit their library use? And if the answer is you wouldn’t close the doors and stop students from coming to and using the library, and you would just keep doing what you’re doing, then what does this say about your beliefs? Your motivations? Your ideology?

 

The Stories We Tell

If we ignore all other factors and put student success squarely on what students do, it takes the pressure off of us as educators. If we believe that everyone is created equal and has the opportunity to succeed, then we can sit back and track success as data blips on a dashboard. Neoliberalism in higher education says: If you don’t succeed, it’s because you didn’t engage enough. Not because you were the primary caretaker for your family, not because you couldn’t afford tuition, not because you work two or three jobs, not because you were living in your car. In the neoliberal academy, we don’t have to question the way market forces might be harming or hindering our students’ success. Students simply succeed or fail at their own hand.

In the conversation about library value, we have chosen to believe in neoliberalism because it is easier for us. In this mindset, we embrace meritocracy (those who succeed do so based on their hard work) and stories of those who start at the bottom and work their way to the top. This is the same philosophy that says that students who attend information literacy instruction sessions, borrow materials, and use online resources will be more successful.

I’m afraid that the students who will suffer the most from this narrative are the students higher education was not designed to serve: students of color, queer/trans/gender nonconforming students, students who work, students living with disabilities, older students, and students with families.

I can easily imagine a scenario where we sit down those students and say, “Well, we’ve looked at the data, and other students in your situation did x, y, and z, and they were successful, so why haven’t you done the same?” For example, if students of color who ask questions at the reference desk are more successful than students of color who don’t, how long will it be until we recommend library use as the antidote to structural racism?

Ultimately I want the next ACRL research agenda to move beyond its current obsession with handing over data to stakeholders to instead study the impacts of poverty, housing insecurity, and hunger on student success. I want the ACRL research agenda to acknowledge that higher education replicates systems of oppression, including racism, heterosexism, transphobia, classism, and white supremacy.

To me, these are the issues that are at the core of student success, and we cannot expect our students to become wholly-realized citizens, to thrive, unless we begin to acknowledge the possibility that using the library is not the answer. “More library” will not feed them, house them, and pay their bills. “More library” does not equalize the terrible inequalities faced by our students. “More library” is not going to stop students from dropping out.

 

Why do students leave higher education?

In 2009, the non-profit organization Public Agenda interviewed over 600 young people with at least some college education to find out what kept them in college or, if they didn’t finish their degrees, why they dropped out. According to their results, the number one reason students leave school is that they can’t balance work and school at the same time.

When asked to rank various options for what colleges could do to retain students, the number one thing students wanted was financial assistance for part-time attendance. Other popular responses included adding more evening and weekend classes, cutting the cost of college overall, and providing childcare.

And, I’m very sorry to have to point this out to you, but not a single student in this study indicated they dropped out because they didn’t use the library enough.

 

Paying the Price

Sara Goldrick-Rab at Temple University has been studying housing and food insecurity among college students for years. In her 2016 book, Paying the Price, she argues that the prohibitive cost of college not only leads to low student success rates but even harms students by leaving them with insurmountable debt that follows them for the rest of their lives.

Her most recent study, released just this month, finds that 36% of college students were food insecure at some point in the thirty days before responding to the survey. Nearly one in ten students is homeless. Almost half of community college students say they struggle to pay for housing and utilities.

The University of California system has long been aware of food-related challenges faced by students. UCLA has had a food pantry in its Student Activities Center since 2009, which provides staples like peanut butter and oatmeal. A 2016 survey of 9,000 students in the UC system found that 40% had experienced food insecurity. In 2015, each campus in the UC system was asked to form a food security working group to establish food pantries and develop programs to meet student food needs, including education around nutrition, cooking, and budgeting for meals.

 

What Libraries Can Do

With this in mind, if libraries are truly devoted to student success, I would encourage us to look deeply at our communities and see how we can meet their needs. In all of our communities, there are students who need financial aid, food, and housing. Many libraries are already doing incredible things to better serve their campuses, and I think we can always do more of the following.

  • Provide spaces for students with families, including areas where children can play.
    • Portland Community College and Sacramento State are just two institutions of many that provide family study rooms equipped with amenities for children, including toys and games to keep children occupied while caregivers study.
  • Provide scholarships directly to students.
  • Eliminate late fines & review loan rules.
  • Textbooks. Ugh.
    • Required textbooks on reserve are critical for students who can’t afford to buy them. Maintaining a robust and accurate reserves collection takes a lot of labor, but it has a huge impact on student learning.
    • Alverno College took the radical step of creating a textbook collection on open reserve. They spent about $6,000 and bought 300 textbooks. They found that their collection of 329 items circulated 1,126 times in the Spring 2017 semester.
    • We need to continue to be leaders in the conversation around low and no-cost learning materials, including open educational resources.
  • Technology
    • I love seeing the unique and creative items that libraries loan to students, especially high-demand technology items, like laptops and iPads.
    • The last library I worked at loaned out USB charging cables for Androids/iPhones and headphones, and those were in constant use.
    • One trend I’m particularly excited about is lending WiFi hotspots. Many public libraries already do this, but I think this is a great idea for academic libraries for students who do not have reliable Internet access off-campus.

But perhaps most importantly, we must continuously ask our students what they need from us. Ask regularly, review their responses with care and empathy, and take action to address the gaps in your community. You may not be the answer to student success, but you are definitely an answer.

 

Assessment. Success. Value.

So, let’s recap what we’ve discussed about Assessment, Student Success, and Value.

If you are doing assessment, do assessment. This means asking open-ended questions without an agenda to prove your value, and being willing to make changes to improve.

If you are interested in helping students succeed, find out what your students need and provide it.

When you prove your value, as you always must, have answers and data at the ready that are meaningful to you and to your institution. You should position yourself as best you can to decide how you will tell the story of your value.

If you are regularly collecting statistics and evaluating your spaces, programs, and services in a variety of ways, then you get to choose how to tell the story of your library’s value. What is it that you provide on your campus that no one else can? How are you critical to student success? My guess is that it’s hundreds of small things you do every day, and honestly, some of these things are the hardest to quantify. Sometimes it’s having a stapler available ten minutes before a paper is due. It’s having bathrooms and tables and good lighting. It’s having well-trained, helpful library workers who maintain the stacks, answer questions, and support students.

 

Does Our Work Matter?

One of the saddest things I read in the Assessment in Action project summaries was this sentiment: We asked this question because we wanted to know if our work matters.

If that is your question, my answer is yes. Yes. I promise that you matter to your students, your faculty, and your campus. It’s your job to ask how you matter, and if you are doing the right things and enough of them.

It’s your job to meet the needs of your campus so well that you fill your campus with the wildest, loudest advocates. They should be there to sing your praises when you need them.

I also think we have to accept that all of this might not be enough.

 

Is It Enough?

Maybe I’m wrong. Maybe connecting individual use of academic libraries to student success, and feeding that data into institutional learning analytics dashboards, is the way that we will prove academic library value. Maybe our collections will flourish, and lost faculty and staff support positions will return to us.

I’m sad to say I think it’s more likely that the money that has left library budgets is gone, and it’s not coming back. Surveillance tactics will not save us. Handing over our data to institutional dashboards will not save us.

In her blog, Emily Drabinski recently pondered how librarians can reframe the terms of the debate around learning analytics. If student-level data determines resource allocation, and the library needs resources, how can we reject the system by which our funding is determined? She notes, correctly, that it is easier to say “resist!” but much more complicated to actually do so.

Drabinski asks:

What does that rejection look like if we were to reject it in an organized way, in a way that reflected a meaningful we, rather than as single individuals taking loud public stands and then getting fired for it?

 

Using Our Voice

I am not going to tell you that the work ahead is easy or simple. There are large and powerful forces in higher education that want us to quantify and measure our work to prove our worth.

I had the opportunity to speak to you today, and I wanted to talk to you about this–about students, what they need, and how we can be there for them. And the next time you have the ability to organize, to speak together as a group, and to question how the way we prove our value might harm our students, I hope you do so.

I think Drabinski is right that this is not work we can do alone, and we need to organize ourselves and demand better from our institutions, including our employers and our professional associations. But I do think, at the very beginning of any resistance, there are individual voices looking for each other, hoping to find resonance and strength in community.

You are not silly, or stubborn, or impractical for valuing students and their privacy and agency. You are not unreasonable for thinking learning analytics is a load of nonsense, designed to serve a particular narrative about student success in higher education.

The first time you use your voice to speak out against the status quo, your words may waver. Your voice is not shaking from fear. It is power. And when your voice shakes, that is the moment you have to use it.

Thank you.

 

Photo courtesy of Iain Watts.

 

Sources

Abel, R., Nackerud, S., Brown, M., Oakleaf, M., & Jantti, M. (2017, November 1). Closing the data gap: Integrating library data into institutional learning analytics. Presentation at Educause Annual Conference. Retrieved April 7, 2018 from https://events.educause.edu/annual-conference/2017/agenda/closing-the-data-gap-integrating-library-data-into-institutional-learning-analytics

Beilin, I. (2016). Student success and the neoliberal academic library. Canadian Journal of Academic Librarianship. 1(1), 10-23.

Brody, L. (2016, August 9). I didn’t even have an address. Glamour Magazine. Retrieved April 7, 2018 from https://www.glamour.com/story/i-didnt-even-have-an-address

Brownstein, R. (2014, April). Are college degrees inherited? The Atlantic. Retrieved April 4, 2018 from https://www.theatlantic.com/education/archive/2014/04/are-college-degrees-inherited/360532/

California community colleges key facts. (n.d.). Retrieved April 1, 2018 from http://californiacommunitycolleges.cccco.edu/policyinaction/keyfacts.aspx

Cifor, M., & Lee, J. A. (2017). Towards an archival critique: Opening possibilities for addressing neoliberalism in the archival field. Journal of Critical Library and Information Studies, 1(1). https://doi.org/10.24242/jclis.v1i1.10

Community College Research Center. (n.d.). Community college FAQs. Columbia University. Retrieved April 1, 2018 from https://ccrc.tc.columbia.edu/Community-College-FAQs.html

Connaway, L.S., Harvey, W., Kitzie, V., & Mikitish, S. (2017). Academic library impact: Improving practice and essential areas to research. Association of College and Research Libraries. Retrieved March 31, 2018 from http://www.ala.org/acrl/sites/ala.org.acrl/files/content/publications/whitepapers/academiclib.pdf

Cox, B. & Jantti, M. (2012, July 17). Discovering the impact of library use and student performance. Educause Review. Retrieved March 31, 2018 from https://er.educause.edu/articles/2012/7/discovering-the-impact-of-library-use-and-student-performance

Drabinski, E. (2017, November 5). How do we change the table? [Web log post]. Retrieved April 1, 2018 from http://www.emilydrabinski.com/how-do-we-change-the-table/

Duval, E. (2012, January 30). Learning analytics and educational data mining [Web log post]. Retrieved April 12, 2018 from https://erikduval.wordpress.com/2012/01/30/learning-analytics-and-educational-data-mining/

Eberhart, G.M. (2017, June 25). Doing away with fines. American Libraries Magazine. Retrieved April 12, 2018 from https://americanlibrariesmagazine.org/blogs/the-scoop/doing-away-with-fines/

Greene, M., & McMenemy, D. (2012). The emergence and impact of neoliberal ideology on UK public library policy, 1997–2010. In Library and information science trends and research: Europe (pp. 13-41). Emerald Group Publishing Limited.

Harvard Library. (2017, April 1). Changes in library fines and loan rules policies. Retrieved April 7, 2018 from https://library.harvard.edu/changes-library-fines-and-loan-rules-policies

Hathcock, A. (2018, January 24). Learning agency, not analytics [Web log post]. Retrieved April 1, 2018 from https://aprilhathcock.wordpress.com/2018/01/24/learning-agency-not-analytics/

Association of American Colleges & Universities. (2016, May). Higher education attainment by family income: Current data show persistent gaps. AAC&U News. Retrieved April 8, 2018 from https://www.aacu.org/aacu-news/newsletter/higher-education-attainment-family-income-current-data-show-persistent-gaps

Hunger and homelessness are widespread among college students, study finds. (2018, April 3). National Public Radio. Retrieved April 7, 2018 from https://www.npr.org/sections/thetwo-way/2018/04/03/599197919/hunger-and-homelessness-are-widespread-among-college-students-study-finds

Johnson, J., & Rochkind, J. (2009). With their whole lives ahead of them: Myths and realities about why so many students fail to finish college. Public Agenda. Retrieved April 1, 2018 from https://www.publicagenda.org/media/with-their-whole-life-ahead-of-them

Jones, K. M., & Salo, D. (2018). Learning analytics and the academic library: Professional ethics commitments at a crossroads. College and Research Libraries, 79(3): 304-323.

Longview (song). (n.d.). In Wikipedia. Retrieved April 1, 2018 from https://en.wikipedia.org/wiki/Longview_(song)

Lower Columbia College. (2013). LCC facts and figures 2012-13. Retrieved April 1, 2018 from https://lcc.ctc.edu/info/webresources/Institutional-Research/FactBook2012-13.pdf

McCabe, B. (2015). Lester Spence argues that African-Americans have bought into the wrong politics. Johns Hopkins Magazine. Retrieved April 3, 2018 from https://hub.jhu.edu/magazine/2015/winter/lester-spence-african-americans-neoliberalism/

Monbiot, G. (2016, April 15). Neoliberalism: The ideology at the root of all our problems. The Guardian. Retrieved April 1, 2018 from https://www.theguardian.com/books/2016/apr/15/neoliberalism-ideology-problem-george-monbiot

National Center for Education Statistics. (2018 April). Educational attainment of young adults. Retrieved April 12, 2018 from https://nces.ed.gov/programs/coe/indicator_caa.asp

Nicholson, K. P. (2017, October 26). The “Value Agenda”: Negotiating a path between compliance and critical practice. Canadian Library Assessment Workshop. Keynote address. Retrieved from https://ir.lib.uwo.ca/fimspres/49/

Oakleaf, M. (2018). The problems and promise of learning analytics for increasing and demonstrating library value and impact. Information and Learning Science, 119(1), 16-24.

Oakleaf, M. (2010). The value of academic libraries: A comprehensive research review and report. Retrieved March 31, 2018 from the Association of College and Research Libraries: http://www.ala.org/acrl/sites/ala.org.acrl/files/content/issues/value/val_report.pdf.

Palley, T. I. (2004, May 5). From Keynesianism to neoliberalism: Shifting paradigms in economics. Foreign Policy in Focus. Retrieved April 1, 2018 from http://fpif.org/from_keynesianism_to_neoliberalism_shifting_paradigms_in_economics/

PCC Library. Family study rooms and kits. Portland Community College.  Retrieved April 7, 2018 from https://www.pcc.edu/library/about/spaces/family-study-rooms-and-kits/

The Pell Institute for the Study of Opportunity in Higher Education. (2016). Indicators of higher education equity in the United States: 2016 historical trend report. Retrieved March 31, 2018 from http://www.pellinstitute.org/downloads/publications-Indicators_of_Higher_Education_Equity_in_the_US_2016_Historical_Trend_Report.pdf

Penn State University Libraries. Scholarship opportunities. Penn State University. Retrieved April 7, 2018 from https://libraries.psu.edu/about/awards-scholarships/scholarship-opportunities

Reeves, R.V. (2015, December 21). America’s zip code inequality. The Brookings Institution. Retrieved April 5, 2018 from https://www.brookings.edu/opinions/americas-zip-code-inequality/

Roberts, L. D., Howell, J. A., Seaman, K., & Gibson, D. C. (2016). Student attitudes toward learning analytics in higher education: “The Fitbit version of the learning world”. Frontiers in Psychology, 7. doi:10.3389/fpsyg.2016.01959

Sacramento State University Library. Family study room. Sacramento State University. Retrieved April 7, 2018 from http://library.csus.edu/spotlight-and-events/family-study-room

Seale, M. (2013). The neoliberal library. In Information literacy and social justice: Radical professional praxis (pp. 39-61). Library Juice Press.

Skowronek, D. (2017). Textbooks on open reserve: A pilot. College and Research Libraries News, 78(11): 607-615.

Smith, K. (2016, August 22). Predictive analytics: Nudging, shoving, and smacking behaviors in higher education. Educause Review. Retrieved April 8, 2018 from https://er.educause.edu/articles/2016/8/predictive-analytics-nudging-shoving-and-smacking-behaviors-in-higher-education

Soria, K. M., Fransen, J., & Nackerud, S. (2013). Library use and undergraduate student outcomes: New evidence for students’ retention and academic success. portal: Libraries and the Academy, 13(2), 147-164.

Student food access and security study. (2016). University of California Office of the President. Retrieved April 7, 2018 from https://www.ucop.edu/global-food-initiative/best-practices/food-access-security/

UC Davis Library. (n.d.). Course reserves. UC Davis. Retrieved April 8, 2018 from https://www.library.ucdavis.edu/service/course-reserves/

Vercelletto, C. (2017, September 25). How to hot spot. Library Journal. Retrieved April 7, 2018 from https://lj.libraryjournal.com/2017/09/lj-in-print/hot-spot-techknowledge/

Watanabe, T. & Newell, S. (2016, July 13). Four in 10 students do not have a consistent source of high-quality, nutritious food, survey says. Los Angeles Times. Retrieved April 7, 2018 from http://www.latimes.com/local/california/la-me-uc-food-insecurity-07112016-snap-story.html

 

Libraries & learning analytics: A brief history.

March 5, 2018 — Revised and updated from the original post on November 10, 2017.

 

A slide deck from EDUCAUSE made the rounds on Twitter last week, with many folks expressing shock about libraries & their involvement (complicity) in learning analytics efforts on higher education campuses. But this isn’t new. Academic librarians have been talking about using library data to prove library value for quite a while. Over the past decade, the conversation has been held hostage by one particular professor who has made proving library value the exclusive focus of her scholarly research agenda.

As the old saying goes, if you’re not pissed off, you haven’t been paying attention.

To me, these are some of the significant milestones in the conversation about libraries and their involvement in learning analytics. (Emphasis on “to me” — your timeline might look a bit different!)

2010
Megan Oakleaf, LIS professor at Syracuse University, publishes the Value of Academic Libraries Report, which was commissioned by ACRL. The report suggests that libraries should track individual student behavior to demonstrate correlations between library use and institutional outcomes, such as retention.

2011
Value of Academic Libraries committee is formed by ACRL Executive Committee.

2012
ACRL is awarded a $249,330 grant from IMLS to fund Assessment in Action: Academic Libraries and Student Success.

2013 – 2016
ACRL runs three 1-year cohorts of AiA projects. Assessment in Action aims to teach academic librarians how to collaborate with other stakeholders on their campuses to measure the library’s impact on student success. According to the AiA website: “The projects will result in a variety of approaches to assessing library impact on student learning which will be documented and disseminated for use by the wider academic library and higher education communities.”

Spring 2014
Oakleaf teaches IST 600 “Academic Libraries: Value, Impact & ROI” at Syracuse University for the first time.

August 2014
Margie Jantti presents “Unlocking Value from Your Library’s Data” at the Library Assessment Conference. The presentation highlights how, among other metrics, the University of Wollongong correlated student performance with the number of hours students spent using the library’s electronic resources.

October 2014
Bell publishes “Keeping Up With… Learning Analytics” on the ALA website.

December 2014
Lisa Hinchliffe and Andrew Asher present “Analytics and Privacy: A Proposed Framework for Negotiating Service and Value Boundaries” at the Coalition for Networked Information Fall Membership Meeting.

March 2015
Oakleaf publishes “The Library’s Contribution to Student Learning: Inspirations and Aspirations” in College & Research Libraries.

2016
Jantti and Heath publish “What Role for Libraries in Learning Analytics?” in Performance Measurement and Metrics. The article describes how the University of Wollongong integrated existing library analytics and student data (from the “Library Cube”) with its institutional learning analytics efforts.

June 2016
College and Research Libraries News declares learning analytics one of the top trends in academic libraries.

July 2016
Oakleaf publishes “Getting Ready & Getting Started: Academic Librarian Involvement in Institutional Learning Analytics Initiatives” in The Journal of Academic Librarianship.

I present “Can we demonstrate library value without violating user privacy?” at the Colorado Academic Library Association Workshop in Denver.

2017
Oakleaf secures nearly $100,000 in grant funding from IMLS for “Library Integration in Institutional Learning Analytics (LIILA)”. The full proposal can be read here.

January 2017
ACRL Board discusses “patron privacy” and if, as a core value, it conflicts with support of learning analytics. The minutes record: “Confidentiality/Privacy is in ALA’s core values, and the Board agreed that patron privacy does not need to conflict with learning analytics, as student research can still be confidential.”

Also at Midwinter 2017, the ACRL Board approves Institutional Research as an interest group to incorporate interest in learning analytics (but, notably, the Board did not want to name the group the “Learning Analytics” interest group). The ACRL Board also formally adopts the Proficiencies for Assessment Librarians and Coordinators, which make frequent reference to using learning analytics.

March 2017
Oakleaf et al. present “Data in the Library is Safe, But That’s Not What Data is Meant For” at ACRL 2017 in Baltimore, Maryland.

April 2017
Kyle M.L. Jones and Dorothea Salo’s article, “Learning Analytics and the Academic Library: Professional Ethics Commitments at a Crossroads”, is available as a preprint from College & Research Libraries.

June 2017
Value of Academic Libraries committee meets at ALA Annual. The minutes reflect that VAL wants to distance itself from learning analytics, now that learning analytics has its own interest group.

September 2017
ACRL publishes Academic Library Impact, which explicitly advocates for working with stakeholders to “statistically analyze and predict student learning and success based on shared analytics”.

October 2017
Karen Nicholson presents her paper, “The ‘Value Agenda’: Negotiating a Path Between Compliance and Critical Practice“, at the Canadian Library Assessment Workshop in Victoria, British Columbia.

November 2017
Oakleaf et al. present “Closing the Data Gap: Integrating Library Data into Institutional Learning Analytics” at EDUCAUSE 2017 in Philadelphia. The presentation seems to advocate feeding individual patron data into campus-wide learning analytics dashboards so that other campus administrators, faculty, and advisors can see student interactions with the library.

Emily Drabinski asks, “How do we change the table?” In her blog post, she wonders how organizing can help librarians build power to make change. “We need to reject learning analytics,” she declares.

Penny Beile, Associate Director of Research, Education, and Engagement at the University of Central Florida Libraries, publishes “The Academic Library’s (Potential) Contribution to the Learning Analytics Landscape” on the EDUCAUSE blog.

January 2018
April Hathcock responds to the ongoing learning analytics conversation with her own blog post about learning agency. Regarding the need to collaborate with students rather than simply surveil them, she writes, “Essentially, it’s the difference between exploiting a community to study and report on them versus collaborating with that community in studying their needs. It is the very essence of feminist research methods, rooted in an ethic of care, trust, and collaborative empowerment.”

March 2018
Community college librarian Meredith Farkas questions the value of learning analytics in her column in American Libraries.

Kyle M.L. Jones and Ellen LeClere publish “Contextual Expectations and Emerging Informational Harms: A Primer on Academic Library Participation in Learning Analytics Initiatives” in Applying Library Values to Emerging Technology: Decision-Making in the Age of Open Access, Maker Spaces, and the Ever-Changing Library.

April 2018
The Call for Proposals for the special issue of Library Trends about learning analytics and the academic library closes April 1. The issue will be published in March 2019.

Featured image by Lukas Blazek on Unsplash

Observe, Reflect, Learn: Developing a Peer Teaching Observation Program in Your Library

This post corresponds with my presentation at the Canadian Library Assessment Workshop on Friday, October 27, 2017 in Victoria, British Columbia.

Slides: https://docs.google.com/presentation/d/1Stx0qmaKZRM4SIGqoH9N3dqUMzNMCmHzJfuX8YYhr7M/edit?usp=sharing

Scenario – Leave No Trace

You are the Assessment Librarian at a large university with a team of a dozen instruction librarians. Everyone is excited to embark on a new peer observation program–except Barbara. She’s had problems with the Dean in the past and is convinced that the Dean will use the observation process to terminate her. She agrees to participate in the observation program–as long as there is no record of her observation.

How do you proceed?

Scenario – No News for the Newbie

You are a new instruction librarian at a small college with an established peer observation program. The observation process just consists of a simple checklist that faculty fill out and file with the Library Director. Your observer is Terry, an instruction librarian who has been at the library for 30 years and will retire in the spring. He shows up to your class 10 minutes after it starts, and submits the observation checklist to your Director without letting you see it first.

How do you proceed?

References

Alabi, J. & Weare, W. H., Jr. (2014). Criticism is not a four-letter word: Best practices for constructive feedback in the peer review of teaching. LOEX Conference Proceedings 2012. 141-145.

Bandy, J. (2017). Peer review of teaching. Vanderbilt University. Retrieved from https://cft.vanderbilt.edu/guides-sub-pages/peer-review-of-teaching/

Cosh, J. (1998). Peer observation in higher education: A reflective approach. Innovations in Education and Training International, 35(2), 171-176.

Centre for Teaching Support & Innovation. (2017). Peer observation of teaching: Effective practices. Toronto, ON: Centre for Teaching Support & Innovation, University of Toronto. Retrieved from  http://teaching.utoronto.ca/teaching-support/peer-observation-of-teaching/

Classroom/teaching observations. Northern Alberta Institute of Technology. Retrieved from http://www.nait.ca/docs/Resource_Module_for_Observations.pdf

Davis, K. D. (2007). The academic librarian as instructor: A study of teacher anxiety. College & Undergraduate Libraries, 14(2), 77-101.

Elmendorf, D. C., & Song, L. (2015). Developing indicators for a classroom observation tool on pedagogy and technology integration: A Delphi study. Computers in the Schools, 32(1), 1-19.

England, J., Hutchings, P., & McKeachie, W. J. (1996). The professional evaluation of teaching. American Council of Learned Societies. Occasional Paper No. 33. Retrieved from http://archives.acls.org/op/33_Professonal_Evaluation_of_Teaching.htm

Fielden, N. (2010). Follow the rubric road: Assessing the librarian instructor. LOEX Conference Proceedings. Retrieved from http://commons.emich.edu/cgi/viewcontent.cgi?article=1026&context=loexconf2010

Franchini, B. (2014). Maximizing the benefits of peer observation. Rochester Institute of Technology. Retrieved from http://www.rit.edu/academicaffairs/facultydevelopment/sites/rit.edu.academicaffairs.facultydevelopment/files//images/MaximizingBenefitsofPeerObservation.pdf

Goosney, J. L., Smith, B., & Gordon, S. (2014). Reflective peer mentoring: Evolution of a professional development program for academic librarians. Partnership: The Canadian Journal of Library and Information Practice and Research, 9(1), 1-24.

Kilcullen, M. (1998). Teaching librarians to teach: Recommendations on what we need to know. Reference Services Review, 26(2), 7-18.

Qualities of an effective peer classroom observation. (2017). Center for Teaching Excellence of the University of Virginia. Retrieved from http://cte.virginia.edu/qualities-of-an-effective-peer-classroom-observation/

Samson, S., & McCrea, D. E. (2008). Using peer review to foster good teaching. Reference Services Review, 36(1), 61-70.

Saunders, L. (2015). Education for instruction: A review of LIS instruction syllabi. The Reference Librarian, 56(1), 1-21.

Snavely, L., & Dewald, N. (2011). Developing and implementing peer review of academic librarians’ teaching: an overview and case report. The Journal of Academic Librarianship, 37(4), 343-351.

Sproles, C., Johnson, A. M., & Farison, L. (2008). What the teachers are teaching: How MLIS programs are preparing academic librarians for instructional roles. Journal of Education for Library and Information Science, 195-209.

Van Note Chism, N. (2007). Peer review of teaching: A sourcebook. San Francisco: Anker Publishing.

Walter, S. (2006). Instructional improvement: Building capacity for the professional development of librarians as teachers. Reference & User Services Quarterly, 45(3), 213-218.

Have we confused surveillance with assessment of student learning?

Somehow I had been blissfully unaware of Respondus Lockdown Browser until last week, when several students came to the library asking if we had this software available on our computers. If you’re not familiar with this product, Respondus is one of several LMS-integrated cheating-prevention tools. In simple terms, it shuts down a student’s Internet browser while they are taking a test in an online class environment, such as Canvas or Blackboard. One of the students who asked about Respondus said something that raised the hair on the back of my neck.

“I need a webcam,” they said. “I have to take the quiz with my webcam on, and there can’t be any movement in the background.”

What the hell? I thought. What are they talking about?

Recording Students During Online Tests

After doing some digging through an e-mail chain, I found a message from the campus eLearning Administrator with instructions for students taking tests with Respondus.

You will be required to use LockDown Browser with a webcam which will record you while you are taking the three module tests. Your computer must have a functioning webcam and microphone. A broadband connection is also required.

  • You will first need to review and agree to the Terms of Use.
  • The Webcam Check will confirm that your webcam and microphone are working properly. The first time the Webcam Check is performed on a computer, Adobe Flash Player will require you to select Allow and Remember.
  • Next you will be asked to take a picture of yourself.
  • After that, you will be required to show and take a picture of a government issued ID such as a driver’s license with your picture clearly displayed. If you don’t have a driver’s license, you can use your student ID card with your picture clearly displayed.
  • Click “Start Recording” and slowly tilt/pan your webcam so a brief video can be made of the area around your computer. Make sure you show your desk, what’s underneath your desk, and a panorama of the room you are in.  (If the webcam is built into the monitor, move it around as best you can to show the areas described.)

As a librarian who cares deeply about student privacy, all of this makes me want to throw up. If I understand this correctly, students must:

  • Accept Terms of Use (which I couldn’t find on the Respondus website, so I’m not sure what, exactly, students are agreeing to)
  • Take a picture of themselves
  • Share their government-issued ID (which would include their date of birth, address, height, weight, and other personal details)
  • Share whatever is visible around their desk and workspace, which, if they’re at home, could include any number of extremely personal items.

Can we agree that asking a student to show “what’s underneath your desk” is particularly perverse?

But the benefits of this invasive procedure, according to Respondus, are numerous—easy to integrate with existing learning platforms, money saved on printing costs, increased efficiency, superior confidence in the accuracy of test results, and so on.

Beyond privacy, what are some other concerns? After some brief searching, I found a presentation from 2012 where two researchers at Central Washington University found that Respondus was incredibly easy to manipulate to steal student data—hopefully this has changed. The following year, one of those presenters, Donald Moncrief, gave a follow-up presentation about the exact methodology they used (which they withheld the previous year, probably to prevent folks from following their steps).

My outrage is a little delayed. Respondus has been in business for ten years. Their website boasts that their software is used to proctor 50 million exams annually and they work with 2,000 institutions in 50 different countries. But here I am, angry as ever, concerned that educators have gotten carried away with a technology without considering its implications. And, as usual, my gripe is about assessment.

What are we really measuring?

Respondus offers regular training webinars for instructors. Here are the outcomes for an upcoming webinar:

Each training will cover, from the instructor perspective:

  • How to use LockDown Browser to prevent digital cheating in proctored testing environments
  • How to use Respondus Monitor in non-proctored environments, to protect exam integrity and confirm student identity
  • How Respondus Monitor provides greater flexibility for when and where tests are taken
  • Efficient review of the assessment data collected, including student videos
  • Best practices and tips for success with both applications
  • A chance to ask questions

I am particularly confused by the phrase “assessment data” in the fourth bullet (my emphasis). How is the surveillance data collected considered assessment data? Isn’t the assessment data the actual test results (e.g., whether or not students could meet the learning outcomes of the quiz or test)? I suppose if you saw clear evidence of academic dishonesty in the surveillance data (for example, the student had the textbook open on their desk but it was a “no book” test), then it would invalidate the assessment results, but it would not be the assessment data itself.

Maybe they’re just using “assessment” in an inaccurate way. Maybe it’s not a big deal. But I’m inclined to believe the word “assessment” has a particular meaning about student learning, and most accrediting bodies would agree.

Accreditation and surveillance

Colleges and universities almost never lose accreditation over facilities. You can educate students in a cornfield, in a portable building, in a yurt without running water or electricity—provided you have assessment data that shows that student learning outcomes were met for the program. You can’t award degrees without assessment data. You have to show that your students learned something. Seems reasonable, no?

So here’s my worry. Are we confusing surveillance with assessment data? Do we think that recording students during exams will appease accreditors? “Look, see! They didn’t cheat. They answered all of these test questions, and they got good scores.”

I understand the occasional need for a controlled testing environment, especially in high-stakes exam situations for professional certification (I’m thinking of the NCLEX for nurses, for example). I don’t understand controlled testing for formative assessment, especially for short quizzes in a first-year general education course. Even in a completely online course, I’m not sure I see the value in putting students through surveillance measures for quick knowledge checks of essential facts. When it comes to summative assessment of your course’s essential learning outcomes, couldn’t you assess those outcomes some other way, one that prevents simple cheating? What possibilities might open up if you invited your students to deeply process the material, connect to it in their own way, and show you the meaning they’ve made from it?

I think that there is no greater indication of an instructor’s values than how they spend time in a classroom. If what you truly value is assessing student learning in a tightly-controlled, surveilled environment—why not just have students take the quiz in a computer lab classroom, where you can watch them all at once?

Is surveillance necessary for accreditation of online degrees?

My first answer to this question is, I’m not sure, and I’d like to learn more about this. I know that some fully online programs require students to take exams at proctored testing sites (e.g., by using a campus testing center at a nearby college or university). This practice is held up to accrediting agencies as proof of the program’s commitment to academic honesty. Of course, there is some healthy skepticism about this. In a 2011 article about online exam procedures, researchers suggested that requiring a once-per-semester proctored exam was “a token effort to ensure academic honesty.”

I took a quick glance through the Western Association of Schools and Colleges (WASC) Postsecondary Accreditation Manual and I couldn’t find the word proctor anywhere in the document. Or the word cheat or the phrase academic honesty (the word honesty is used—to describe the governance procedures of the institution). While it is important to demonstrate student learning outcomes are being met through valid means (e.g., institutions need some reasonable assurance that students are doing their own work), I could not find evidence that this accrediting body specifically requires proof of proctoring or cheating-prevention. Does anyone know if other accrediting standards indicate otherwise?

Sources

Cluskey Jr., G. R., Ehlen, C. R., & Raiborn, M. H. (2011). Thwarting online exam cheating without proctor supervision. Journal of Academic and Business Ethics, 4, 1-7.

Moncrief, D., & Foster, R. (2012). Well that was easy: Misdirecting Respondus Lockdown Browser for fun and profit. Retrieved from http://digitalcommons.cwu.edu/source/2012/oralpresentations/18/

Moncrief, D. (2013). Respondus LockDown Browser revisited: Disclosure. Retrieved from http://digitalcommons.cwu.edu/source/2013/oralpresentations/73/

Postsecondary Accreditation Manual. (2013). Western Association of Schools and Colleges. Retrieved from http://www.acswasc.org

Respondus Lockdown Browser. (2017). Retrieved from https://www.respondus.com

Featured image courtesy of Pixabay.com.

Information literacy assessment. (Day 88/100)

My job title is Pedagogy and Assessment Librarian. I took a course called “Assessment” in my LIS graduate program. I just finished writing a 10-page year-end assessment report which I submitted to our University Assessment Director (and he looooved it).

The point is, I should know a lot about assessment. I don’t. I’m still figuring it out.

One thing I’ve learned over the past year is just how complicated, yet simultaneously meaningless, the word assessment can be.

People hate the word assessment because it has too many onerous connotations. Extra work, reports, rubrics, Excel spreadsheets. Administrative obligation. A looming sense of futility.

Maybe we could jazz it up a bit by referring to it simply as giving a shit.

Do you give a shit?

So do I.

Let’s give a shit together.

If you give a shit about something, I think it is natural that you would be curious about it. If you’re curious about it, you would ask a question, and (ideally) care about the answer. I think that’s what information literacy assessment is about–being curious about information literacy, wondering how students become information literate, and caring about how you can impact their learning.

The world of library assessment is messy. I’ve known this for a while, but it became very clear to me when I attended the Library Assessment Conference in Arlington, Virginia last fall. At the conference, I discovered that many of my peers have similar titles (“Assessment Librarian”) but we have radically different jobs. For example, I do not assess spaces, services, or collections. I do not administer LibQual surveys and I have no idea how to use NVivo or SPSS. My job focus is solely on student learning and information literacy assessment. I have not met a single person who is jealous of this.

Information literacy is hard to define. Lots of smart people don’t exactly agree on what it is. If you can’t define it, then how do you measure it?

From a student learning perspective, we would argue that you measure information literacy by defining student learning outcomes. Next, you create opportunities to assess those outcomes. For each outcome, you would identify criteria and performance indicators that define to what degree the outcome has been met (not yet met, partially met, met, etc.).
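
To make that structure a little more concrete, here is a minimal sketch of what one outcome, its criteria, and its performance levels might look like if you wrote them down as data. Everything in it (the outcome wording, the criteria, the levels, the sample ratings) is invented for illustration, not taken from any real rubric:

```python
# Hypothetical example only: one information literacy outcome, its criteria,
# and the performance levels used to judge student work against it.
rubric = {
    "outcome": "Student evaluates sources for relevance and credibility",
    "criteria": [
        "identifies the author's credentials and perspective",
        "considers the currency of the source",
        "matches the source to the information need",
    ],
    "levels": ["not yet met", "partially met", "met"],
}

def tally(ratings: list[str]) -> dict[str, int]:
    """Count how many student work samples landed at each performance level."""
    counts = {level: 0 for level in rubric["levels"]}
    for rating in ratings:
        counts[rating] += 1
    return counts

# Ratings for a made-up batch of student annotated bibliographies.
print(tally(["met", "partially met", "met", "not yet met", "met"]))
# {'not yet met': 1, 'partially met': 1, 'met': 3}
```

Nothing here requires special software; a spreadsheet with the same columns works just as well.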

For 15 years, the ACRL Information Literacy Standards for Higher Education provided a neat and tidy checklist of over 80 skills that an information literate student should have. Cue the collective teeth gnashing when the Standards, rescinded last June, were replaced by the ACRL Framework for Information Literacy for Higher Education, a delightfully nebulous document that utilizes threshold concept theory to describe the behaviors and dispositions of information literate students. “You can’t assess this!”, librarians said. And they continue to say it. I won’t belabor this point–if you’re really interested, you can attend one of the many ACRL-sponsored webinars, workshops, or conference sessions on the topic. (In fact, Meredith Farkas presented a particularly fantastic session last week on the Framework and its implications for instruction.) Or, like me, you can practice the art of silently sobbing when colleagues characterize their engagement in the following manner: “Oh, the Framework? Yeah, we haven’t really looked at it yet.”

From what I can tell (my devoted readers are encouraged to disagree with me), a lot of academic librarians are not engaged with student learning assessment in any way. Some librarians are doing some assessment of student learning, usually by collecting data (worksheets, minute papers) in one-shot instruction sessions. A few librarians are engaged in meaningful, longitudinal, campus-wide initiatives related to the assessment of student learning (through institution-level learning outcomes, reflective learning activities, portfolio assessment, etc.). A small group of folks drive me completely insane by using data analytics to report correlative findings about student performance, e.g., “Students who check out books from the library have higher GPAs.” If you think this is assessment of student learning, I feel sad for you.

Where to start?

So what does real assessment of student learning look like? Andrew Walsh tackles this question in his 2009 article, “Information Literacy Assessment: Where Do We Start?” from the Journal of Librarianship and Information Science. He reviewed nearly 100 articles (hey, nice number) about information literacy assessment to investigate how the authors measured student learning. He found that about a third of the articles used multiple choice assessment (blargh). Other assessment methodologies are examined and explained, including observation, simulation, and self-assessment. Walsh eloquently describes the complexity of truly assessing information literacy–as he says, the assessment tools that are easiest and quickest to administer don’t actually measure the nuanced skills and behaviors of information literacy.

Walsh’s article is probably one of the best overviews of different information literacy assessment methodologies, their frequency of use (really, have we changed our practices much in 10 years? Probably not, I’m afraid), and their benefits/drawbacks.

Another helpful introductory article is Christopher Stewart’s short, two-page review from 2011, which provides an overview of the landscape of information literacy assessment. Stewart explains the purpose of tools like Measuring Impact of Networked Electronic Services (MINES), Standardized Assessment of Information Literacy Skills (SAILS), and surveys like LibQual and the National Survey of Student Engagement (NSSE). The review also briefly explains the VAL Report and Megan Oakleaf’s insistence that the future of student learning outcomes assessment is going to revolve around linking student data to library data. Barf.

Sail away from SAILS…

I read a couple of articles about SAILS because that’s all I could stomach. Some thoughts:

  • Why was an article about information literacy assessment in Technical Services Quarterly? I’m still scratching my head about that one.
  • Speaking of that same article, the authors, Rumble and Noe, describe a remarkably interconnected relationship with their English department and writing tutors. Although the article doesn’t include the results of the SAILS assessment, they observe that simply implementing a test made faculty think more about learning outcomes. I thought that was kind of backwards. Is it possible to care about learning without a standardized test?
  • If your institution uses SAILS, or is considering using it, I recommend Lym, Grossman, Yannotta, and Talih’s 2010 article from Reference Services Review. They discuss how institutions have administered and used SAILS. The most damning sentence can be found in the conclusion: “Our data tend to show that administering SAILS did not produce clear evidence of the efficacy of our sample institutions’ information literacy programs” (p. 184). They suggest doing a pre- and post-test before and after information literacy instruction to prove that one-shots work. Sigh.

Gimme that good trip that make me not quit (Grande, 2016)

What I liked:

  • Perruso Brown and Kingsley-Wilson (2010) provide an interesting example of collaborating with Journalism faculty to administer and assess an exam that tested students on how they would handle the real-life information needs of journalists. Open-ended answers were difficult to assess, but I like the authenticity of the questions and the way they let students choose how to resolve their information needs (students were able to choose which sources to consult). I also appreciated that the article shared versions of questions that didn’t work, e.g., outdated questions that required students to refer to print encyclopedias instead of using easily available free web sources.
  • There are several things I appreciated about the 2013 article from Yager, Salisbury, and Kirkman at La Trobe University in Australia. They published their findings in The International Journal of the First Year in Higher Education, a scholarly publication outside the realm of libraries/information literacy. I also appreciate that they used two different forms of assessment with first-year students: an online quiz taken early in the course as well as a course-integrated assignment, which was assessed with a rubric. Their sample size is large–nearly 300 students. I’m not entirely sure how I feel about their overall approach (using a quiz to determine who will be successful with the course-integrated assignment later), but their results are interesting–they conclude that the “quiz was not particularly useful in determining those students who would later go on to demonstrate that they exceeded the cornerstone-level standards in Inquiry/Research” (p. 68). I interpret this to mean that the students who were low-performing at the beginning of the course had a positive and transformative learning experience throughout the course.
  • I was impressed by Holliday et al.’s article from 2015 in College & Research Libraries. The authors reviewed 884 papers from different students at different points in the curriculum (the papers came from ENGL 1010, ENGL 2010, PSY 3500, and HIST 4990). At the end of the article, Holliday et al. conclude that the benefit of the assessment process was looking at a large body of student work, getting to know the curriculum, and making changes to information literacy instruction as well as course assignments. Hallelujah. I love that they used the assessment process to drive curriculum and pedagogy changes, rather than trying to prove the efficacy of the one-shot. Kiel, Burclaff, and Johnson come to a similar consensus in their 2015 article, “Learning By Doing: Developing a Baseline Information Literacy Assessment.” They also looked at a large number of student papers (212!) and found that the process provided “insights into student assignments outside of the specific skills being assessed” (p. 761).
  • I really liked the 2007 article by Sonley, Turner, Myer, and Cotton, which discusses assessing information literacy using a portfolio. The portfolio included a bibliography, evidence of the search process, and a self-reflection about the student’s research process. I think all of these components are so important, so it’s great that they were included–but the researchers only had nine completed samples. Oof. It’s hard to imagine this being done at scale (with an entire first-year cohort of 1500 students, for example).
  • Chan’s 2016 article about institutional assessment of information literacy found that as students progressed through their degrees, they self-identified as using the free Internet less for research. I question why this is a good thing that we want to reward, given that searching the free web will be the dominant search retrieval method that students use after they graduate. We should encourage more adept use of the free web, not less use of it overall. I wonder, will the emphasis on academic research atrophy their web searching skills by the time they graduate and begin working?

Whew.

If you’re new to student learning assessment–don’t read too much about it. Reading about it is really confusing until you’ve had some experience with it. I think the best way to learn more about information literacy assessment is to talk to other teachers about it (at conferences, via e-mail, in department meetings) and participate in student learning assessment for yourself. When I was at the ACRL Immersion program in Seattle in 2013, Deb Gilchrist said this about assessment: Start small, but start. It’s good advice.

References

Chan, C. (2016). Institutional assessment of student information literacy ability: A case study. Communications in Information Literacy, 10(1), 50-61.

Holliday, W., Dance, B., Davis, E., Fagerheim, B., Hedrich, A., Lundstrom, K., & Martin, P. (2015). An information literacy snapshot: Authentic assessment across the curriculum. College & Research Libraries, 76(2), 170-187. doi:10.5860/crl.76.2.170

Kiel, S., Burclaff, N., & Johnson, C. (2015). Learning by doing: Developing a baseline information literacy assessment. portal: Libraries and the Academy, 15(4), 747-766.

Perruso Brown, C., & Kingsley-Wilson, B. (2010). Assessing organically: Turning an assignment into an assessment. Reference Services Review, 38(4), 536-556.

Rumble, J., & Noe, N. (2009). Project SAILS: Launching information literacy assessment across university waters. Technical Services Quarterly, 26(4), 287-298. doi:10.1080/07317130802678936

Sonley, V., Turner, D., Myer, S., & Cotton, Y. (2007). Information literacy assessment by portfolio: A case study. Reference Services Review, 35(1), 41-70. doi:10.1108/00907320710729355

Stewart, C. (2011). Measuring information literacy: Beyond the case study. The Journal of Academic Librarianship, 37(3), 270-272. doi:10.1016/j.acalib.2011.03.003

Walsh, A. (2009). Information literacy assessment: Where do we start? Journal of Librarianship and Information Science, 41(1), 19-28. doi:10.1177/0961000608099896

Yager, Z., Salisbury, F., & Kirkman, L. (2013). Assessment of information literacy skills among first year students. The International Journal of the First Year in Higher Education, 4(1), 59-71. doi:10.5204/intjfyhe.v4i1.140

I want your fight: On shame and #ACRL2017

This isn’t what I wanted to write about.

When I envisioned my blog post that would sum up my experience at ACRL2017 in Baltimore, I was hoping to write about things like sitting front row for Roxane Gay’s keynote, having lunch with my mentee, attending sessions about the devaluing of feminized labor and the problems with grit/resilience narratives, presenting my poster, and closing out a karaoke bar.

Instead, I have to write about this:

On Saturday morning (after closing out aforementioned karaoke bar), the very last session I attended was a contributed paper by librarians from Westminster College titled, “In a World Where… Librarians Can Access Final Research Projects Via The LMS.” Erin Smith, Taylor Eloise Stevens, John Garrison, and Jamie Kohler co-presented the paper, which outlines their institution’s unique experience with merged IT/Library departments. Librarians are Learning Management System administrators who have access to course content, including student research papers in the first-year writing program. The librarians provide information literacy instruction through a “library week” module which teaches students how to find and read sources on a pre-selected topic.

Apparently, librarians thought that students’ final work was very amusing. On one of the slides, they had a list of quotes from student papers that were clearly meant to poke fun at students’ ignorance (e.g., look at this student who tried to write a paper on the history of African-Americans in four pages, hur hur!). On the closing slide of the presentation, they referred to their students as “sweet dum-dums” who would “get there”; I took a picture of the slide and shared it on Twitter.

When the presentation closed and the panel asked for questions, I was too exhausted to bring up their choice of language regarding their students. Honestly, I just wanted to get to the ballroom to get a seat for Carla Hayden’s closing keynote, and I promised myself I would follow up with an email to the presenters later.

On Monday, the Associate Dean of Library & Information Services, Erin Smith, posted an apology to Twitter from her own account. This surprised me because I had looked for her Twitter account on Saturday but couldn’t find it, so I assumed she didn’t have one. When I messaged her directly to learn how she found the tweets about their presentation, she said that she was tipped off to the situation through a text from a friend.

Today she sent a “public” apology e-mail to several librarians, including me, in which she implored us to reach beyond the “Librarian Twittersphere” to engage colleagues who make errors like this one. She still did not explain how they ended up using the “dum-dum” phrase in their presentation, or why they continue to use it to describe themselves (#wearethedumdums).

Many people are lauding Smith for being brave enough to acknowledge the mistake and apologize. I recognize the courage it takes to face this criticism head on and I admire it. However, I do not yet feel that this situation is fully resolved.

This e-mail is not a public apology.

E-mail inboxes are not public spaces. If the intent of the e-mail is to be a public apology, it should be posted in a public forum, preferably on the Westminster College website. It would seem that the Westminster College Library expects the 40 recipients of the e-mail to do the work of making the apology public. [Edited to add: I see that Smith has posted the letter on Twitter as a response to someone else, which is a good start in making the apology public.]

As far as I know, Westminster College Library has not yet apologized to the right people.

The librarians who presented this paper made jokes at their students’ expense. The people who deserve an apology are not other librarians. The people who deserve an apology are their students, along with course faculty and other administrators who gave librarians access to student work samples. Unless I’m mistaken, the librarians have not apologized to the students whose work was the basis for this contributed paper. As a student learning assessment librarian, I am particularly upset that librarians were entrusted with student work for the purpose of improving information literacy instruction–and instead Westminster College librarians used a national conference as a forum to mock their students’ learning.

#LibraryTwitter isn’t just snark.

In her e-mail, Smith said that she and her co-presenters almost didn’t find out about the comments made about their presentation on Twitter because none of them are active in that space. So, if I understand correctly, Smith is upset that she might miss out on critical comments made in forums where she isn’t active.

If she doesn’t like things being said behind her back, how does she think her students would feel knowing that their research papers were fodder for librarian laughter at a professional conference?

When I was asked on Twitter who presented the slide, I readily provided the contact information for Westminster College and Erin Smith. I intended for this situation to move off Twitter-land and into real life, and I knew it would be the beginning of a much longer conversation. I know that Smith feels ashamed. I know her apology is sincere, if a little off-key (e.g., stop perpetuating the ableist term “dum-dum”, please). The great thing about shame is that it’s temporary.

In Roxane Gay’s keynote, she talked about this age of American disgrace. White people often tell her that they feel ashamed.

“I don’t want your shame,” she said. “I want your fight.”

We don’t need more shame. We need to fight. I want the Westminster College librarians to fight for their students, to protect them, to love them, to rage on their behalf, and to care as deeply for their students as they say that they do.

Smith said that she and her librarians were surprised by their assessment results. In a message to me, she explained that they did not realize how “unprepared” their students are. In her e-mail, she said the librarians’ expectations were a “mis-match” with students’ abilities.

Here’s my take: it is not the students who were unprepared for learning. It was the librarians who were unprepared for teaching.

Resisting analytics in a correlation-crazy world.

Last Friday, I presented a six-minute lightning talk at the Library Assessments Workshop hosted by the Colorado Alliance of Research Libraries. Much to my supervisor’s chagrin, I was frantically putting together my slides and practicing my talk at the end of the day on Thursday. Some of the disjointed brainstorming notes looked like this:

  • Don’t confuse satisfaction with learning.
  • Self-efficacy (confidence) is important, but it’s not the same thing as applying skills.
  • “I feel confident using the library catalog” doesn’t mean you can use it.
  • What does this mean for assessment?
  • Watch the language that you use in your student evaluations and reflections.
  • “Helpful.” What does it mean that something is helpful? Or useful? That you’re comfortable working with the librarian?

I think I was able to pull together something coherent, but you can judge for yourself. My finished talk is available here (just the slides) and here (a five-minute recording). The article that I refer to, “Best methods for evaluating educational impact” by Schilling and Applegate (2012), is available here.

[Slide: “Learning is not a service”]

The day began with Megan Oakleaf joining us virtually from her office at Syracuse University to deliver a keynote address about the state of assessment in higher education today and the academic library’s role in current assessment trends. I admire Oakleaf’s contributions to academic librarianship and her leadership in assessment, but I disagreed with most of her presentation. It seems, perhaps, that I misunderstood her presentation as advocating for certain actions, e.g., collecting individual student-level data to affirm the library’s value in retention and persistence. According to her responses to me on Twitter (all of which she has deleted), she was simply the “bearer of news” (as opposed to an advocate for surveillance state academic libraries).

I think she made valid points about the role of accreditation–that there are many flaws with the accreditation process, it’s confusing, it varies a lot, and the library’s role in accreditation is usually focused on our spaces and collections, rather than on our impact on students. Accreditors expect higher education institutions to commit to continuous improvement, but don’t say how to do this (similar to how the ACRL Framework expects academic libraries to use the frames to guide information literacy outcomes, but doesn’t tell library instructors how to do it).

It’s hard to get any population to police itself, and faculty are not the most compliant animals.

Megan Oakleaf talking about accreditation in her Library Assessments Workshop keynote

Oakleaf differentiated the concepts of student learning and student success. I also agree that student learning and student success are not the same thing, and the metrics we use for success (persisting from year to year, graduating within a certain amount of time) do not necessarily measure gains in student learning. In Oakleaf’s words, “Student success surrogates do not equal learning.” Amen.

In the remarks that followed, Oakleaf emphasized that libraries need to situate their value within their individual institutional contexts. “Libraries have stuff, know stuff, and do stuff,” she said. The key is to connect all of that “stuff” to your bigger institutional picture, and to show that your library makes an impact in the lives of your stakeholders (including students, faculty, alumni, community, and so on). I’m definitely on board with all of this.

The part where she lost me (and when I had to take off my blazer because I was sweating so hard) was when she said,

We won’t know what difference the library makes until we collect data on individual library users.

Megan Oakleaf talking about the role of analytics in library assessment in her Library Assessments Workshop keynote

I am one of those people who has, in Oakleaf’s words, a “visceral reaction” to such an assertion. I believe in the Library Bill of Rights and the ALA Code of Ethics, like really believe in them, so yeah, I don’t believe in (or value) tracking individual users beyond the extent necessary to provide our services (e.g., we keep track of a book that’s checked out while it’s gone, but once it’s back, we delete any record that you ever had it, and we don’t tell anyone else what you’ve been reading, either).

I fundamentally do not see the value in correlating an individual’s library use with their ability to persist in college. I think there are better, more meaningful ways to tell the story of our value and the impact we have on our stakeholders. I loved the example that Oakleaf included about the “Critical Incident” studies by Ross Todd, where students in K-12 libraries were asked about a time that the library helped them. Students were asked, “What was the help you got from the library, and what were you able to do because of it?” Asking more questions like this, and listening to our stakeholders, is far more valuable to me than correlational data gained from surveilling and tracking students.

I get the sense, and I could be wrong, that all of this is theoretical to Oakleaf, who does not work directly with undergraduate student populations and has not done so in over ten years. Her last position as an instruction & reference librarian ended in 2006 when she became faculty at the School of Information Studies. Maybe it feels abstract to advocate for the privacy rights of students when you don’t work with them every day. At this point, it would be fair to allow Oakleaf to explain herself a bit more, and I would be glad to include her own words here. As I mentioned above, she engaged with me enthusiastically on Twitter and responded to nearly all of the tweets I wrote during her presentation–but I should have taken screenshots of her responses, because she has since deleted everything she said to me. This is unfortunate because I wanted to include her remarks here as a way of balancing my interpretation of her keynote.

One of the things I like about having this blog is that I can be wrong, publicly wrong, and I can come back later and read what I thought and see how wrong I was. Perhaps that will be the case here, and I’ll come back in a year or two or five and laugh at how silly I was for resisting the idea of using student data to prove library impact. For right now, though, I will hold fast to what I believe, and you can thank me in advance for freeing up a seat at Oakleaf’s panel at ACRL2017 about the “responsible use of library data”–instead, I’ll be at Eamon Tewell’s session, “Asking, Listening, Observing: Learning about Student Research through Ethnography.” Sounds like my kinda jam.

Library Assessment Conference 2016, the day after: Next steps

I started out my last day of the Library Assessment Conference sharing biscuits, poached eggs, and strong coffee with Maoria Kirker, Instructional Services and Assessment Librarian at George Mason University, and fellow alum of the 2013 ACRL Immersion Teacher Track program in Seattle, Washington, where we met. I admire Maoria for lots of reasons: she’s sharp, funny, energetic, dedicated to student learning, and constantly reflective of her own practice as a teacher. We have a lot in common, and it’s no surprise that we’ve even applied for the same jobs.

[Photo with Maoria. #infobros4lyfe]

Maoria and I both share strong identities as educators. “I really feel like I’m an educator first,” I told her, which led to an interesting discussion about how we describe what we do.

“I say that I’m a teacher,” she said to me. “I always say that first. When they ask what I teach, then I explain that I’m a librarian and I teach college students research skills and concepts.”

Her response made me pause. When people ask what I do, I always say that I’m a librarian–but why do I say that, if what I really believe is that I’m a teacher? Do I feel like I would be taking ownership of a word that isn’t really mine? Maybe it’s the fact that I’ve always wanted to be a librarian, and the teacher part came second, after some convincing that being a teacher wasn’t a bad thing. Librarian was a word that always had a positive connotation for me, and teacher, well, that’s a tough one. I felt a lot of shame in school (how is it that children, whether labeled “smart” or “dumb”, end up feeling ashamed?) and resisted seeing my teachers as allies. I never thought that being a teacher was a positive thing. Until I learned I could be a teacher in a library, I didn’t think a teaching career was possible for me. It was a powerful revelation.

[Photo: my Halloween costume.]

I left Arlington yesterday afternoon after attending two more sessions, glumly eating a boxed lunch, and chatting up Paul Bracke about the state of library leadership today. I brought home a new tote bag (of course) and a long list of next steps. In no particular order, I need to:

  • Continue my own research agenda. The papers and posters presented at LAC16 affirmed that my approach to information literacy and student learning assessment is valid, and I was inspired by presentations like Ann Medaille’s, who studied students’ drawings about their research process, and Anne Grant’s, who was inspired by the Framework to develop student-centered one-shot instruction that included having students write their own LibGuide.
  • Clarify my thoughts, beliefs, and opinions. I was extremely flattered by the people who stopped me and asked about some of the (half-formed) thoughts expressed in my tweets & blog posts, including my position on disaggregated data in student learning assessment, card swiping, and longitudinal tracking of individual students. I have a lot more to say, but I have to figure out how to say it first, and the informal conversations I had were wonderfully helpful in pushing me to refine and synthesize my positions.
  • Apply to the Institute for Research Design in Librarianship. I know it’s unlikely that I’ll be accepted (as I learned from Kristine Brancolini’s presentation, they received 250 applications for 60 spots), but I think this would be an incredible opportunity to define and implement a broad-scale information literacy research project. (Although, as I also learned, they could fill an entire IRDL with just information literacy projects, and they need diverse applications representing different library services and functions!)
  • Develop my quantitative and qualitative research skills. For the past few years, I have had an intense focus on developing my pedagogy and information literacy instruction capacities, and I have not done much (if any) formal research. I was disappointed by some of the projects at LAC because I felt like the research wasn’t very strong, but I recognize my own deficits in this area and I’d like to improve my skills. Thankfully I’m at an institution that provides classes in these areas, so I can take advantage of that sometime in the next year.
  • Follow up with people who are doing great work, including Rachel Gammons and Lindsay Inge at the University of Maryland, Katie Fox with Colorado State Library, and AJ Boston at Murray State University.
  • Send thank you cards. It’s just a good habit. I met some really nice people, and I’d like to send them a brief note expressing my gratitude for them.

Yesterday, Luke Vilelle from Hollins University presented about the assessment efforts at his library. Hollins is a very small university with fewer than 700 undergraduates, all women, and the library has a tiny staff (nine people by my count). The audience was clearly impressed when Vilelle shared that their library chooses and assesses outcomes annually, generates regular reports, and even maintains a simple but effective dashboard of “Library Stats” available on their website.

A participant in the audience stood up and asked, “How do you do all this assessment with such limited staff? How do you have time for anything else?”

Vilelle chuckled. “We just get it done,” he said. It was a good reminder for all of us that, no matter the size or scope of our institutions, whether we have fifty librarians or just one or two, we can make good things happen by choosing a direction, delegating tasks, and pitching in.

The word that came to mind for me throughout the conference was habit. What are the habits in your workflow? What do you do every day, every week, every month, every semester, every year? If you do something on a regular basis (count the number of instruction sessions taught, study how people use your space, run reports of your database downloads, reflect on reference interactions, assess student learning samples, etc.), then it’s less onerous. The path forward with library assessment isn’t just buying more software (although Tableau has plenty of new fans now), hiring someone called an Assessment Librarian, or writing outcomes–it’s making the work a habit for everyone, at all levels of your library, and showing them that their habits yield positive results, internally and externally.

– – –

Missing out on: Dr. Cornel West speaking on campus today–the line was too long and my brain was too full.

Seeing tonight: Moonlight.

Ready for: Whatever’s next.

The power, the pressure of the post-test.

I’ve been an Assessment Librarian for about three months now. In a few days, I’ll travel to Arlington, Virginia for the annual Library Assessment Conference, which I’ve never attended before. Sessions with titles like “The Illusory Holy Grail: Comprehensive Mixed-Methodology Assessment is No Better Than Using a Single Method; A Case Study on the 21st-Century Science Library” and “Lean Libraries Optimize Outcomes!” await me. Apparently we’re encouraged to wear costumes on the first day of the conference (Halloween). Given the limited space in my luggage, I’m packing a pretty simple costume: a red and gold Fred Flare pin that says LIBRARIAN.

I’m anticipating that #LAC16 will draw a mixed crowd — hardcore data geeks who do, as the pre-conference vendor e-mail subject line suggested, “get excited about analytics!” as well as the, hmm, analytics-averse folks like myself, who are less interested in big numbers and more interested in big stories.

When people ask what I do, I say, “I’m here to tell the story of the student learning that happens in the library.”

Easier said than done, right?

If you’ve read any articles about student learning assessment in libraries–well, God bless you–but you know that librarians are quick to use pre-tests and post-tests to provide evidence of the learning that happens in the library. “Look!” the librarian will triumphantly write in their Discussion section. “I did that! Forty-five minutes ago, they had no idea what Boolean operators were, and now look at them! They’re just AND/OR/NOT-ing their little hearts out!”

Sigh.

I have a lot of problems with this. I don’t believe that post-tests given after one-shot information literacy instruction provide meaningful evidence of student learning. At best, it may give the librarian some insight as to what worked well in their instruction session (who was asleep and who was paying enough attention to recall what was said), but it does not mean that students are more information literate than they were before the session. I am particularly distrustful of multiple choice tests–as I used to say in my College Success class, multiple choice tests are more about gambling than critical thinking. You have a 25% chance of getting a correct answer, even if you have no idea what the question is asking.
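
To put a rough number on the gambling point, here is a quick back-of-the-envelope sketch. The quiz length and the passing threshold are hypothetical numbers chosen purely for illustration; the only assumption carried over from above is four answer choices per question:

```python
from math import comb

# Hypothetical quiz: 10 multiple-choice questions, 4 options each,
# so a pure guesser has a 1-in-4 chance on every question.
n_questions, p_guess = 10, 0.25

def prob_at_least(k: int) -> float:
    """Probability of guessing at least k of the questions correctly (binomial)."""
    return sum(
        comb(n_questions, i) * p_guess**i * (1 - p_guess) ** (n_questions - i)
        for i in range(k, n_questions + 1)
    )

print(f"Expected correct answers from guessing alone: {n_questions * p_guess:.1f}")
print(f"Chance of scoring 50% or better purely by guessing: {prob_at_least(5):.1%}")
```

A pure guesser averages 2.5 correct out of 10 and still has roughly an 8% chance of hitting 50%, which is exactly the kind of noise a forty-five-minute pre/post comparison can quietly absorb.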

So what is the proper post-test of information literacy skills?

Well, I think most folks are not going to like this answer. The best post-test for information literacy skills is life. It’s your students’ ability to function on a daily basis in an information-saturated environment. It’s the ability to write an e-mail to a colleague, the ability to contextualize a single news item reported through a series of tweets, Facebook posts, and Buzzfeed news articles, the ability to discern and evaluate conflicting information coming from reliable sources, and the ability to pick up the phone and call someone when they have an answer you need.

But that’s much harder to measure, much harder to publish, and therefore much harder to build a tenure case on (if you’re a tenure-track librarian). That’s why I am very interested in attending Lise Doucette’s session on Tuesday next week, “Acknowledging the Political, Economic, and Values-Based Motivators of Assessment Work: An Analysis of Publications on Academic Library Assessment.” What are we assessing and why? What are our motivators? Are we interested in improving student learning, or are we interested in proving our value as instruction librarians?

I’ll be honest with you–I’m not excited by snap results, or clicker quizzes, or the ability to fill in the blank after forty-five minutes of instruction. I’m interested in the slow burn. The long game. In The Master, Amy Adams tells Joaquin Phoenix’s character that he needs to be invested in “The Cause” for “a billion years or not at all.”

When it comes to information literacy, I’m in it for a billion years. Anything less just isn’t that interesting to me.

– – –

Listening to: “I Really Like You” by Carly Rae Jepsen

Reading: Another Brooklyn by Jacqueline Woodson

Eating: A lot of seafood lately, which is weird, since I moved to a landlocked state. Guess I miss fish.