Since mid-August, a big chunk of my time and energy has gone into prepping and teaching my class. While many instruction librarians are focused on getting access to other people's classes through "one-shot" instruction sessions (the "library day" when a librarian gets to teach students how to find the sources they'll need for their paper), I get to teach my own course. LIBR 1101, officially titled Academic Research & the Library, is a 2-credit-hour course that is included in the core curriculum. It's not required, since we don't have anywhere near the faculty we would need for that, but it is one of the options available to fill one of the core requirements.
As with any approved course, we have a shared set of learning outcomes. Beyond that, we all have a lot of freedom to craft our own syllabi. We don't all teach the concepts in the same order, though there is a lot of overlap due to sharing ideas. One colleague spends more time than I do talking about hegemonic ideologies when discussing the evaluation of information. My plagiarism lesson delves further into disarticulating plagiarism from copyright violation, compared to how others address plagiarism. (It's also a prelude to spending more time on open access and Creative Commons later in the semester!) The job ad from when I applied here included this:
These new spaces will provide opportunities for librarians to experiment with emerging pedagogies. The ideal candidate will be open to new ideas, willing to take risks and have the ability and courage to fail gracefully and change course when necessary.
I have been very lucky that they weren't kidding when they decided to include that. (And I think we kept that part when we added two new faculty lines a bit over a year ago.)
I've been experimenting with using blogs in the course for the past several semesters. In summer 2012, four of us team-taught two 4-week sections of the course for a summer bridge program (each of us taught one week of the session). We first tried implementing blogging there, though I don't remember the reasoning at the time. However, our campus course management system sucked – it was a custom version of Blackboard built on a roughly ten-year-old release that wasn't compatible with current JavaScript… So when I taught my own section again that fall, I went all in on using blogs to replace the CMS for everything except posting grades. In that team-taught summer course, we also tried including in-class writing assignments. The idea was that the process of writing about the concepts in their own words on the spot would improve long-term learning.
After incorporating those ideas into my own course in Fall 2012 and Spring 2013, I was questioning whether the in-class writing was actually having the impact I hoped for. Students were learning, but was that particular piece of the puzzle contributing enough to make it worth the class time it ate up? Or would it be just as effective to have them respond to the same prompts as homework?
In Fall 2013, I designed a study to test that. I taught two sections of the same course. One section got time in class to write a response to the prompt, while the other had to respond as homework. To balance the workload, those who wrote in class had to comment on at least 2 classmates' posts, while those who wrote the post as homework did not have to comment. Unfortunately, even if I had gotten conclusive results, I would have had to ask whether it was the setting for writing the original post or the commenting that made the difference. These posts were in addition to regular homework assignments, in which students had to read an article or watch a video, and then write a post responding to several questions about that text. The weekly writing prompts were designed for them to simply reflect on the material we had covered that week in class.
The results were completely inconclusive, though. Two factors contributed to this. First, completely by chance, one section was an anomaly. In aggregate, they had a higher average GPA than most other sections I've taught. That section also had a higher average number of credit hours already earned… Which is significant because our retention and progress-to-graduation rates leave a lot to be desired. We lose a lot of students before they cross the threshold to sophomore status. In fact, the stats I found track retention based on how many years students attend, but not credit hours earned. Only about 70-72% of our students make it to their second year, but a glance at our 4- and 6-year graduation rates suggests that making it to the second year does not equal sophomore status according to number of credit hours earned (30 credit hours). The other section was more like the other sections I've taught, with more than half of the students falling into the range of 0-29 credit hours earned.
Qualitatively, that anomalous section was different as well. I would set a minimum word count for posts to give some indication of how long a response I was looking for. I didn't like being a bean counter, but I started doing that because, when I first started this, some students would post really brief responses that didn't fully answer the question or adequately explain their response. In previous sections and the other section that fall, most students would write just about the required number of words. The word count was usually 250, and very few students ever went over 300 words. In the anomalous section, several students would regularly post 400-500 word responses… And that wasn't a bunch of fluff; they were going into more detail and depth than the minimum requirement! They were actually really getting into examining this stuff! So, to what degree did the weekly writing posts affect their learning, and to what degree was their learning affected by this higher demonstrated level of curiosity?
The second factor that contributed to the inconclusive results was that I was trying to use the SAILS test to measure learning. My department had talked about instituting a pre- and post-test in all sections of the course. I'm not sure who evaluated our options and decided to go with SAILS, but once the momentum was going that way, I requested that we do the individual score reports instead of cohort scores. That way I could use the data for my purposes too, since I wasn't going to assign two separate pre-tests! Unfortunately, I realized too late that SAILS is really not designed to test learning over the course of a single semester. We get more contact with these students in one semester than many librarians get with any student over a few one-shots sprinkled through their college careers… But our students remain mostly freshmen and sophomores through the entire semester. We're building a foundation, yet the post-test included a question about when you need IRB approval for a research project… That is beyond the scope of this class, since it will be a couple of years before most students are likely to consider doing any research that involves that. That's our screw-up, since SAILS markets itself as a test to give incoming freshmen and graduating seniors – we should have paid more attention to the level of questions they ask. But when the post-test includes several questions I don't really address, it is not a particularly valid measure of learning.
I am once again teaching two sections of the course. I've redesigned the parameters while still comparing in-class writing assignments in the two sections. I'm also using pre- and post-test questions written specifically to address the learning outcomes of this course. But I suppose I should stop now, and write about that stuff another day!