Gearing up for another week

I made some progress on writing last week. At least I got a draft written up about the Empty Bowls workshops. It’s more of a “what we did” piece than anything scholarly, but I do those workshops for fun and to connect with an important cause, not as a scholarly activity.

In addition to more writing, this week I need to start working on my summer project: designing a large test bank to use as an assessment tool for our semester-long course.

Last year, we decided to implement Project SAILS testing. Unfortunately, we were not able to get buy-in from campus partners who could have included the test in either the first-year writing program (ENGL 1101 & 1102) or the first-year experience program. So, just to get some data that could be useful in pushing for campus-wide implementation, we decided to include it in most of our LIBR 1101 sections. At the same time, a few of us had pedagogical research we wanted to pursue. I was planning to do research comparing my two sections (the study I mentioned last week, which I have presented on but still need to write up). A couple of colleagues had designed a video game to use as the textbook for the course, and they wanted to test its efficacy.

To combine all of these interests, we decided to use the individual-scores version of the SAILS test. We did one round of testing at the very beginning of the semester and another at the end. Students had to complete the test and turn in either a screenshot or a printout of the final confirmation page showing their score. They were NOT required to give consent to have their responses included in research, but they did have to take the test for class participation credit. Some instructors offered extra credit to students who improved their scores from the pretest to the posttest, as an incentive to take the time to do well on the exercise at a point when term papers and final exams for other classes were competing for their attention.

I haven’t found anything on their website directly addressing this type of use, but we discovered that this particular assessment does not work as well as we had hoped for measuring learning over a single semester. It makes sense that this isn’t directly addressed, since only a minority of colleges and universities offer a semester-long course on information literacy. Instead, most librarians have to try to scaffold students across a series of “one-shot” instruction sessions throughout their four (or six) years of college. So, for most librarians, the best option for testing the impact of their instruction is to pre-test incoming freshmen and then try to catch graduating seniors for a post-test. This is an important difference because of some of the more advanced questions on the SAILS test. For example, most of my students missed the post-test question about when you need IRB approval for research… Hopefully, at least some of them will learn that by the time they graduate, but I don’t cover it in my course.

SAILS has a test bank of over 500 questions (I don’t know the exact number, but it’s a large pool!). We did repeat the testing this spring, and I haven’t yet looked at the data from that round. Maybe the randomly selected questions were a better fit for the way we were using the test this time?

Ultimately, though, what we learned from this process was that we need to develop our own test bank of questions, written to assess our course learning outcomes, to draw from for our various purposes. That seems kind of obvious in hindsight, but we were trying to cut some corners by using the SAILS test.

So one of my projects for the summer is to develop that test bank. I want to be able to draw from it for pedagogical research, including a pre-test and post-test to see which section improves more. We will also be able to select a subset of questions to use in all course sections for the assessment data we need to collect for accreditation paperwork. And we expect that a pre/post-test in course sections would more clearly illustrate the value of the course to administrators.

We’ll also have more flexibility to use the results as we like in publicity, instead of worrying about how much we’re allowed to say. For example, it would be great to include two or three of the questions, along with the responses we get from students on their pre-tests, in the faculty newsletter. As it is, though, I don’t think we’re allowed to reprint any SAILS questions, so we have to be careful about how we present those results. Also, all we get is how many students answered each question correctly, not how many chose each answer… Sometimes it’s helpful to see which wrong answers were selected: if 75% of those who answered incorrectly gave the same response, does that show a pattern in how they’re misunderstanding the information, or are they misreading the question?
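Once we own the questions, that kind of item-level look becomes pretty simple to do ourselves. Here’s a minimal sketch in Python of what I have in mind; the file name, column layout, and answer key are all hypothetical, just to show tallying how many students chose each option so a popular wrong answer stands out instead of only seeing percent correct.

```python
# Hypothetical sketch: tally how many students chose each option on each
# question of a pre-test, so popular wrong answers stand out.
# Assumes a CSV shaped like: student_id,Q1,Q2,...  with one letter (A-D) per cell.
import csv
from collections import Counter, defaultdict

ANSWER_KEY = {"Q1": "C", "Q2": "A"}  # hypothetical answer key for our own questions

def tally_responses(csv_path):
    counts = defaultdict(Counter)  # question -> Counter of chosen options
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for question in ANSWER_KEY:
                answer = row.get(question, "").strip().upper()
                if answer:
                    counts[question][answer] += 1
    return counts

def report(counts):
    for question, answer_counts in sorted(counts.items()):
        total = sum(answer_counts.values())
        correct = ANSWER_KEY[question]
        print(f"{question} (correct: {correct}, n={total})")
        for option, n in answer_counts.most_common():
            flag = "*" if option == correct else " "
            print(f"  {flag} {option}: {n} ({n / total:.0%})")

if __name__ == "__main__":
    report(tally_responses("pretest_responses.csv"))
```

Even something that basic would answer the “did 75% of the wrong answers land on the same distractor?” question directly, which is exactly the detail the SAILS reports don’t give us.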