Teaching the new Information Literacy Framework

As I’ve mentioned before here, I am lucky enough to get to teach a semester-long, 2-credit-hour course on information literacy. Unlike some librarians, who have to figure out how to convey the complexities of evaluating a wide range of potential sources in one-shot instruction sessions in other people’s courses, I get to meet the same group(s) of students 29 times over the course of a single semester (twice a week for 14 weeks, plus a 2.5-hour final exam period). This semester, I am teaching two sections of the course, with 24 students per section. The downside to that is the time spent grading, but I even enjoy that a lot of the time!

In the past, one of my weak points was not doing enough to clearly articulate the purpose of various assignments. They are all designed to address a specific learning outcome, but some students see them as busywork. Part of the problem is that I have not always been good at making the explicit connection between the assignment and the learning outcome. But part of the problem, I think, has also been that students don’t always have the framework needed to see how that learning outcome relates to the learning outcomes for the course as a whole.

This semester, I’m trying to address both of those issues. To help put the more granular learning objectives in each lesson into a larger context, I am assigning readings from the revised draft Framework for Information Literacy for Higher Education. Instead of assigning the whole thing in one go, I’m breaking it up to address each threshold concept on its own. I’m trying this because of the way this draft is written, with a paragraph or two describing each concept, followed by lists of practices and dispositions. The old IL Standards were not written in a way that lent itself to assigned readings!

There are a million ways one could organize a course like mine, since many of the concepts relate back to the others. I start out by spending some time on evaluating information, including discussing evaluation criteria and identifying logical fallacies used as rhetorical devices. From there, we move into a discussion of plagiarism (with a dose of how plagiarism is different from copyright violation). Up next, we talk about developing appropriate research questions, and they start crafting their own to work on through the rest of the semester. And then, we start to talk about different types of sources and explore options for finding them. Those discussions include types of sources (book, journal article, magazine article, etc.), what types of information you’re likely to find in each (overview and background info, research findings, details about current news, etc.), scholarly vs. popular and the grey area in between, the cycle of information, and so on. Yeah, we spend time talking about how to access and search databases, but always with an emphasis on function over form – EBSCO looks like this now, but ProQuest databases have most of the same functions, they just look different; and what are they all really doing when you click this filter, and how can that help you? From there, I move toward more discussion about copyright, open access, and Creative Commons, and then some social media literacy issues.

So, this semester, on the second day of the first week of classes, I assigned a reading that introduced the new IL Framework and then included the “Authority is constructed and contextual” concept. In one section, we spent probably 20 minutes of the next class period discussing the concept before talking about evaluating information and developing a list of evaluation criteria to use. In the other section, when I asked who had done the reading, only 3 (of 24) raised their hands. So I tabled it to discuss later. Unfortunately, I’m not sure I ever got back to discussing it explicitly in that class.

Next, I assigned them to read “Scholarship is a conversation” in time to discuss it on the Thursday before plagiarism week. This time, though, I added a Google Form they had to fill out, which will count toward class participation scores (15% of their final grades). In addition to asking them to summarize the reading, I asked them to select one bullet-pointed item (either a knowledge practice or a disposition) that they wanted to discuss in class. I tallied those up right before class, and we discussed them in order of how many votes each got (most to least popular). That discussion seemed to go really well in both sections, though we didn’t “cover” the same ground in the same depth in both, since I was adjusting to what each group wanted to discuss and had questions about.
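
(A side note for the logistically curious: if you export the form responses, the tally itself only takes a couple of lines of Python. This is just a rough sketch with invented response text, not my actual data or column names:)

```python
from collections import Counter

# Hypothetical export of the Google Form responses: one chosen
# bullet point per student (the response text here is made up).
responses = [
    "Cite the contributing work of others",
    "Recognize they are entering a scholarly conversation",
    "Cite the contributing work of others",
    "See themselves as contributors, not only consumers",
    "Cite the contributing work of others",
    "Recognize they are entering a scholarly conversation",
]

# Count votes per item, then list them most to least popular --
# which is exactly the order I want to discuss them in class.
tally = Counter(responses)
for item, votes in tally.most_common():
    print(f"{votes:2d}  {item}")
```

`most_common()` returns the items sorted from most to fewest votes, so the printout doubles as the discussion agenda.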

This week, they will read “Format as process”. Tomorrow will be the kind of boring class period during which they teach each other some tricks for searching the library catalog and then do a citation exercise. On Thursday, we’ll discuss the concepts in “Format as process”, and then next week we’ll start identifying different categories of sources. Right now, I have “Searching as exploration” on the schedule in a couple of weeks (week 8 of the semester) – after discussing types of sources but before doing too much in databases. The minor complication is that I will be presenting at a conference in Denver on the day that discussion is planned. Luckily, my colleague Jessica Critten has offered to sub for me that day. We haven’t discussed it yet, but I suspect that discussion will be right up her alley! A week later, I’ll assign “Research as inquiry”, and then “Information has value” a week after that – timed so that the last one leads us into a discussion about open access and Creative Commons licensing!

At this point, I feel like this is going really well… But, since we’ve only discussed 2 of the 6 threshold concepts so far, we’ve got a ways to go! Unfortunately, discussing a concept doesn’t equal internalizing it. I feel like they’re mostly getting it. I’m often the weird one among friends – when I sit down to grade, I’ve been getting excited at how well most of them are doing instead of complaining about grading being a tedious and/or disappointing chore. But we’re not even halfway through the semester, so it remains to be seen how well they will apply this stuff in an actual research project.

The library class

Since mid-August, a big chunk of my time and energy has gone into prepping and teaching. While many instruction librarians are focused on getting access to other people’s classes through “one-shot” instruction sessions (the “library day” when a librarian gets to teach students how to find the sources they’ll need for their paper), I get to teach my own course. LIBR 1101, officially titled Academic Research & the Library, is a 2-credit-hour course that is included in the core curriculum. It’s not required, since we don’t have anywhere near the faculty we would need for that, but it is one of the options available to fill one of the core requirements.

As with any approved course, we have a shared set of learning outcomes for the course. Beyond that, we all have a lot of freedom to craft our own syllabi. We don’t all teach the concepts in the same order, though there is a lot of overlap due to sharing ideas. One colleague spends more time than I do talking about hegemonic ideologies when discussing the evaluation of information. My plagiarism lesson delves further into disarticulating plagiarism from copyright violation than others’ lessons do. (It’s also a prelude to spending more time on open access and Creative Commons later in the semester!) The job ad, when I applied here, included this:

These new spaces will provide opportunities for librarians to experiment with emerging pedagogies. The ideal candidate will be open to new ideas, willing to take risks and have the ability and courage to fail gracefully and change course when necessary.

I have been very lucky that they weren’t kidding when they decided to include that. (And I think we kept that part when we got two new faculty lines a bit over a year ago.)

I’ve been experimenting with using blogs in the course for the past several semesters. In summer 2012, four of us team-taught two 4-week sections of the course for a summer bridge program (each of us taught one week of the session). We first tried implementing blogging there, though I don’t remember the reasoning at the time. However, our campus course management system sucked – it was a custom build of Blackboard, based on a roughly 10-year-old version that was not compatible with current JavaScript… So when I taught my own section again that fall, I went all in on using blogs to replace the CMS for everything except posting grades. In that team-taught summer course, we also tried including in-class writing assignments. The idea was that the process of writing about the concepts in their own words, on the spot, would improve long-term learning.

After incorporating those ideas into my own course in Fall 2012 and Spring 2013, I was questioning whether the in-class writing was actually having the impact I hoped for. Students were learning, but was that particular piece of the puzzle contributing enough to make it worth the class time it ate up? Or would it be just as effective to have them respond to the same prompts as homework?

In Fall 2013, I designed a study to test that. I taught two sections of the same course. One section got time in class to write a response to each prompt, while the other had to respond as homework. To balance the workload, those who wrote in class had to comment on at least 2 classmates’ posts, while those who wrote the post as homework did not have to comment. Unfortunately, even if I had gotten conclusive results, I would have had to ask whether it was the setting for writing the original post or the commenting that made the difference. These posts were in addition to regular homework assignments, in which students had to read an article or watch a video and then write a post responding to several questions about that text. The weekly writing prompts were designed to have them simply reflect on the material we had covered that week in class.

The results were completely inconclusive, though. Two factors contributed to this. First, completely by chance, one section was an anomaly. In aggregate, they had a higher average GPA than most other sections I’ve taught. That section also had a higher average number of credit hours already earned… which is significant because our retention and progress-to-graduation rates leave a lot to be desired. We lose a lot of students before they cross the threshold to sophomore status. In fact, the stats I found track retention based on how many years students attend, not credit hours earned. Only about 70-72% of our students make it to their second year, but a glance at our 4- and 6-year graduation rates suggests that making it to the second year does not equal sophomore status by credit hours earned (30 credit hours). The other section was more like the other sections I’ve taught, with more than half of the students falling into the range of 0-29 credit hours earned.

Qualitatively, that anomalous section was different as well. I would set a minimum word count for posts to give some indication of how long a response I was looking for. I didn’t like being a bean counter, but I started doing that because, when I first started this, some students would post really brief responses that didn’t fully answer the question or adequately explain their response. In previous sections, and in the other section that fall, most students would write just about the required number of words. The word count was usually 250, and very few students ever went over 300 words. In the anomalous section, several students would regularly post 400-500 word responses… And that wasn’t a bunch of fluff; they were going into more detail and depth than the minimum requirement! They were really getting into examining this stuff! So, to what degree did the weekly writing posts affect their learning, and to what degree was their learning affected by this higher demonstrated level of curiosity?

The second factor that contributed to the inconclusive results was that I was trying to use the SAILS test to measure learning. My department had talked about instituting a pre- and post-test in all sections of the course. I’m not sure who evaluated our options and decided to go with SAILS, but once the momentum was going that way, I requested that we do individual score reports instead of cohort scores. That way I could use the data for my purposes too, since I wasn’t going to assign two separate pre-tests! Unfortunately, I realized too late that SAILS is really not designed to test learning over the course of a single semester. We get more contact with these students over a semester than many librarians get with any student over a few one-shots sprinkled through their college careers… But our students remain mostly freshmen and sophomores through the entire semester. We’re building a foundation, but the post-test included a question about when you need IRB approval for a research project… That is beyond the scope of this class, since it will be a couple of years before most students are likely to consider doing any research that involves IRB approval. That’s our screw-up, since SAILS markets itself as a test to give incoming freshmen and graduating seniors – we should have paid more attention to the level of the questions they ask. But when the post-test includes several questions I don’t really address, it is not a particularly valid measure of learning.

I am once again teaching two sections of the course. I’ve redesigned the parameters while still comparing in-class writing assignments between the two sections. I’m also using pre- and post-test questions written specifically to address the learning outcomes of this course. But I suppose I should stop now and write about that stuff another day!

Library Instruction West 2014

I flew out to Portland, OR in July to attend the Library Instruction West (formerly known as LOEX of the West) conference. I was excited to visit Portland for the first time, but a little apprehensive about my expectations for the conference. I attended LOEX of the West in Burbank in 2012, in the first year of my first librarian job, and I fondly remember it as an amazing conference. Would LIW in Portland live up to those memories, or did it just seem that amazing because I was still so new to the field?

I’m happy to say that it lived up to, and maybe exceeded, my expectations. I remember very different aspects of the conference this time around than I remember from last time, though. In 2012, I came back with more specific ideas about exercises to try in class or for outreach. This time, most of what I remember are the higher-level discussions about what we should even be teaching. I can’t say how much of that is because of the range of presentations and how much is because I’m more settled into my position. It may also be because I spent less time meeting new people and more time discussing these things with a small group of friends.

The conference started with a keynote from Alison Head discussing Project Information Literacy findings. The latest study asked recent grads about their perception of their information literacy skills, and asked companies about their perception of the information literacy skills newly hired recent grads bring with them. While Alison discussed the specifics of the disconnect between these two perspectives, the take-away message for me was just how important my credit-bearing, semester-long course is. The skills that employers say they want are skills that I can work to develop over the course of a semester. I don’t know how to go about addressing those skills in one 50-minute session. Though I had some reservations about some of the details, the keynote did its job of getting people talking and thinking.

The highlight of Thursday’s sessions for me was the last session of the day. Larissa Gordon talked about assessment at the level of the institution. She worked with an upper-level administrator (I think the provost, but I didn’t write that detail down!) to plan the survey, which helped with getting buy-in from faculty. Surprisingly, better faculty buy-in leads to more complete responses from students!* When the professors explained the reasoning for doing the survey and stayed in the room while students completed it, they got more responses and more detailed responses. Following this, Mark Lenker talked about bias in information receivers. We spend a lot of time teaching students to evaluate the reliability of the information source, including looking for biases, but neglect the impact of the receiver’s biases on how they interpret that information. When faced with information that contradicts one’s ideology, some will just become more entrenched in their convictions and reject information from credible sources. I don’t remember any clear solutions or strategies being offered, but I’m not sure there is any one-size-fits-all strategy for this. Regardless, it’s an issue that we need to grapple with. And finally, Zoe Fisher wrapped up the session by talking about incorporating an inquiry-based approach in one-shots. Hers was one of the few talks I attended that offered specific ideas to use in instruction sessions, instead of the big-picture discussions that caught my eye more often. I really like her idea of having students record their observations about library space AND the questions that they have based on what they saw.
(* Not surprising. I hope the sarcasm translates on screen. Though this wasn’t surprising, it was good to hear how this was done. )

My Friday highlight was Nancy Noe’s session on teaching without technology… Though it was pointed out on twitter that pen and paper IS technology… And the most popular technology at my library is the whiteboard… But I digress! Nancy handed out carbonless copy paper and asked us to put away our devices and take notes by hand. She went over the evidence-based reasons for stepping away from the computer sometimes. These included research on how the brain fires when writing by hand versus typing on a computer, the benefits of increased blood flow during active exercises, and research showing how poorly most people are able to multitask. Throughout, she had us do some small active learning exercises to demonstrate ways of teaching concepts without the computer. I don’t know that any of this was new to me, but I hadn’t put it together that way before. The major outcome of this session for me was rethinking how to design a study I’m planning to do with my classes this fall. But I’ll leave that for a later post!

During Nancy’s time slot, Kevin Seeber was giving his talk on teaching format as process. According to twitter, his session was excellent.

I don’t feel too bad about missing his session, though, since I got the small group seminar version over drinks with Kevin, Jessica Critten, and Craig Schroer. As awesome as it is to work with Jessica and Craig, often the demands of day to day deadlines get in the way of these higher level discussions. I spent a good chunk of each evening just listening to Jessica and Kevin discuss the issues each addressed in their presentations.

So now it’s two weeks later and I’m just sitting down to write this up (and then I left it to languish as a draft for another month!). Thanks to Nicholas Schiller setting up a Storify archive of the tweets from LIW, I was able to go through and fill in gaps in my notes this morning. Between going over my notes, reading the tweets, and being reminded of details that weren’t in either place, I had to start a list of ideas to incorporate into my class this fall.

The grand narratives of the conference, for me, were that we need to focus on larger threshold concepts instead of simple, easily measured skills, and that it is ok to say no. Really. We’ve been saying this at my library for years, but this is the first time I’ve heard it said so clearly by so many presenters. We are the experts in what we do, and it is good to say no to doing a bad lesson. Of course, it’s better to say “no, that is pedagogically problematic, but here’s how we could more effectively achieve that learning outcome”.

A bit on reviewing and conferences

Well, it didn’t take long for my once-a-week posting attempt to go off track. I’m in the process of getting ICL surgery to correct my vision (my eyes are too bad for LASIK!), so I’ve missed a few days for that, plus some personal time off.

This morning, I’m thinking about peer reviews – mainly because I have a couple of book chapters to review on my table, right next to my laptop! This will be my first time serving as a peer reviewer for a chapter or article, though I have reviewed proposals for conferences a few times. So those chapter reviews, as well as some conference proposal reviews hanging out on my to-do list, have me thinking about the logistics of these processes and how they can be done better.

Logistically, these chapters were handled differently from journal articles. Each of the authors is expected to serve as a peer reviewer for other submitted chapters, instead of the editors sending submissions off to external reviewers. I don’t know how common it is to do it this way for edited books. It makes sense in some ways – if you are submitting a chapter, you must have some expertise in the field, which makes it easier for the editors to assemble a group of willing reviewers. It has the added benefit that, if I read a chapter that connects somehow to mine, I can add a reference to it during revisions, making the book a more cohesive unit. I reviewed an edited book for Anthropos a while back that did that rather well – several chapter authors gave nods to other chapters or responded to them somehow.

One flaw in this scenario was that the CFP did not make it clear that all authors would be expected to serve as peer reviewers. Somehow, we read it thinking that each chapter submitted would equal X number of chapters to review, not that each author would get X number of chapters to review. Why does this matter? In our case, I’m the first author and went into it planning to be available to do peer reviews; I was willing to take on that chunk of the responsibility as first author. My co-authors have much more scheduled over the summer, including a family vacation out of the country for two weeks during the window in which the editors want reviews completed. So both co-authors have had to reply that they could not complete the reviews within the requested time frame due to prior obligations. We’ll see how that shakes out. We should have clarified before submitting, but the CFP could have been written more clearly as well.

But that also raises the issue of fairness. If every author and co-author is expected to complete X reviews, but my one chapter has 3 co-authors, and another author submitted 2 chapters as a single author (yet still reviews the same number of chapters as each of us), then we are doing significantly more work to get that one chapter published than that author is doing to potentially get 2 chapters published. So what is a fairer and yet efficient way to get this done? Enlisting all co-authors and spreading the reviews out evenly means fewer reviews per person, which hopefully means a quicker turn-around time. Assigning the reviews only to first authors reduces the pool of available reviewers, meaning that it will take longer to get done… Especially if one author submits multiple chapters (I don’t know whether that happened here, but I’ve seen plenty of edited volumes in which it has); if that author then gets double the number of reviews to do, that really would be a big time commitment. Which way is better? Or is there a better method for reviews for an edited volume?

Conference reviews go much faster than chapter/article reviews, but they bring their own logistical challenges. I’ve served as a proposal reviewer for the Georgia International Conference on Information Literacy for a few years now, and it seems they are still trying to figure out the best way to handle reviews. Each year has been different since I started. The first time I reviewed for them, in spring 2012, they used a rather cumbersome program. Reviewers logged in to this system, and it was not terrible, but it was not super user-friendly either. The next year, they sent out a super simple Excel spreadsheet to use to review the proposals. Yeah, it’s Excel (not everyone’s favorite), but it was all in one big spreadsheet, so everything was in one place, and then you sent in the one big file after you completed it. I don’t remember if there were special instructions for naming the file to make it easier for them to keep it all straight, but from my end it was really simple.

This year, I was disappointed. First, each individual proposal was emailed to me in its own email. That meant that I got 20 separate emails over the course of a day (the first at 11 am, the last a bit after 5 pm). Please, please, if you are ever in a position to organize proposal reviews, do not do this! How easy would it have been for me to overlook one or two of those emails? I don’t think I did, and I took care to move them into their own special folder as they came in. But it would be easy for someone who gets a large quantity of email each day to overlook one or two, which you don’t want! Second, there are two links in each email – one to a Google Doc to fill out a rubric about the proposal, and one to a separate site to recommend whether the proposal should be accepted (with major or minor/no revisions) or rejected. There is a note that you must complete both parts. Really? How cumbersome is that?!?! Why not just add that as a question to the Google Doc? Or, if you need it to be done through the Digital Commons site, why not just put the whole rubric in there?

I plan to provide this feedback after I submit my reviews this year. I’m dreading it because it looks like a pain in the butt, so I haven’t done it yet. The deadline is June 20, so I still have some time! But this will definitely be a factor when I decide whether to sign up to review proposals next year.

That said, I’m not writing this just to complain. This year I also served on the planning committee for a new campus conference focused on pedagogy. Our professors have limited funds for conference travel, so most of them travel to conferences in their fields to present their research, instead of taking time for professional development of their pedagogical techniques. So we inaugurated a new on-campus conference focusing on that. The director of our new Center for Teaching & Learning (yes, our campus is a bit behind the times and only established one this past year) took the lead on organizing the conference, but I worked with her on a lot of the planning. Figuring out a way to make it easy for reviewers to do the reviews, but also easy for us to compile the results, is not the easiest thing in the world. And what seems easiest to me doesn’t always appeal as much to others – finding a good solution to that is, of course, a perpetual challenge of any kind of committee work!

So the point of this post is really to think through the logistics of reviewing and remember some ideas of what not to do in the future. There may not be a perfect solution to any of this, but there are certainly some solutions that are more cumbersome than others – so let’s at least try to not repeat those mistakes!

Gearing up for another week

I made some progress last week on writing. At least I got a draft written up about the Empty Bowls workshops. It’s more of a “what we did” article than anything scholarly, but I do those workshops for fun and to connect with an important cause, not as a scholarly activity.

In addition to more writing, this week I need to start working on my summer project: designing a large test bank to use as an assessment tool for our semester-long course.

Last year, we decided to implement Project SAILS testing. Unfortunately, we were not able to get buy-in from campus partners who could have included the test in either the first-year writing program (ENGL 1101 & 1102) or the first-year experience program. So, just to get some data that could be useful in pushing for campus-wide implementation, we decided to include it in most of our LIBR 1101 sections. At the same time, a few of us had some pedagogical research we wanted to pursue. I was planning research comparing my two sections (the study I mentioned last week, which I have presented on but still need to write up). A couple of colleagues had designed a video game to use as the textbook for the course, and they wanted to test its efficacy.

To combine all of these interests, we decided to use the individual-scores version of the SAILS test. We did one round of testing at the very beginning of the semester and another at the end. Students had to complete the test and turn in either a screenshot or a printout of the final confirmation page that showed their score. They were NOT required to give consent to have their responses included in research, but they did have to take the test for class participation credit. Some instructors offered extra credit to those who improved their scores from the pre-test to the post-test, as an incentive to get students to take the time to do well at a point in the semester when term papers and final exams for other classes were competing for their attention.

I’m not finding anything on their website directly addressing this type of use, but we discovered that this particular assessment does not work as well as we had hoped for testing learning over the course of a single semester. It makes sense that this would not be directly addressed, since only a minority of colleges & universities offer a semester-long course on information literacy. Instead, most librarians have to attempt to scaffold students across a series of “one-shot” instruction sessions throughout their four (or six) years of college. So, for most librarians, the best option for testing the impact of their instruction is to do a pre-test on incoming freshmen and then try to catch graduating seniors for a post-test. This is an important difference because of some of the more advanced questions on the SAILS test. For example, most of my students missed the post-test question that asked something about when you need IRB approval for research… Hopefully, at least some of them will learn this by the time they graduate, but I don’t cover it in my course.

SAILS has a test bank of over 500 questions (I don’t know the actual number, but it’s a large pool!). We did repeat the testing this spring, and I haven’t yet looked at the data from that. Maybe the randomly selected questions were a better fit for the way we were using the test this time?

Ultimately, though, what we learned from this process was that we need to develop our own test bank of questions, written to assess our course learning outcomes, to draw from for our various purposes. That seems kind of obvious in hindsight, but we were trying to cut some corners by using the SAILS test.

So one of my projects for the summer is to develop that test bank. I want to be able to draw from it for the purposes of pedagogical research, including a pre-test and post-test to see which section improves more. We will also be able to select a subset of questions to use in all course sections for the assessment data we need to collect for accreditation paperwork. We also expect to be able to implement a pre/post-test in sections of our course that would more clearly illustrate the value of our course to administrators. And, we’ll have more flexibility to use the results as we like in publicity materials, instead of worrying about how much we’re allowed to say. For example, it would be great to include 2 or 3 of the questions and the responses we get from students on their pre-tests in the faculty newsletter. As it is, though, I don’t think we’re allowed to reprint any questions, so we have to be careful about how we present those results. Also, all we get is how many students got each question correct, not how many gave each answer… Sometimes it’s helpful to see which wrong answers were selected – if 75% of those who answered incorrectly gave the same response, does that show a pattern in how they’re misunderstanding the information, or are they misreading the question?
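The test-bank plan above can be sketched in code. This is a minimal illustration of the idea, not anything SAILS or my course actually uses: the question format, the outcome tags, and the helper names are all hypothetical.

```python
import random
from collections import Counter

# Hypothetical question format: each entry is tagged with the course
# learning outcome it assesses, so subsets can be drawn per purpose
# (pre/post testing, accreditation reporting, etc.).
question_bank = [
    {"id": 1, "outcome": "evaluate_sources", "correct": "B"},
    {"id": 2, "outcome": "evaluate_sources", "correct": "D"},
    {"id": 3, "outcome": "cite_sources", "correct": "A"},
    {"id": 4, "outcome": "search_strategies", "correct": "C"},
]

def draw_test(bank, outcomes, n_per_outcome, seed=None):
    """Randomly sample up to n questions per targeted outcome."""
    rng = random.Random(seed)
    test = []
    for outcome in outcomes:
        pool = [q for q in bank if q["outcome"] == outcome]
        test.extend(rng.sample(pool, min(n_per_outcome, len(pool))))
    return test

def wrong_answer_distribution(question, responses):
    """Tally which incorrect options students chose: a heavily shared
    wrong answer can point to a common misconception rather than a
    misread question."""
    wrong = [r for r in responses if r != question["correct"]]
    return Counter(wrong)

test = draw_test(question_bank, ["evaluate_sources", "cite_sources"], 1, seed=42)
dist = wrong_answer_distribution(question_bank[0], ["A", "B", "A", "A", "C"])
```

The second helper is the part SAILS doesn’t give us: with our own bank, we would keep the full answer distribution for every question instead of just a percent-correct figure.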

Write, dammit!

I’ve had “write, dammit!” on my office whiteboard, under the list of things to do soon, for at least a couple of months. It was right below completing my pre-tenure review binder, which I turned in last week… And which was a reminder of why I need to write!

I am my own greatest obstacle in writing. I do things that seem like great ideas… And then I sit down to think about how to write them up… And then I convince myself I’m being silly: this is nothing new or exciting, is this really worthy of publication? I tell myself that I’m not that weird, that this is an aspect of imposter syndrome, but it really does make writing a struggle.

Last week I mentioned needing to work on a book chapter. I got a draft of that done and shared with my co-authors, and am now waiting for their feedback and contributions. That one is about a mapping exercise we’ve used in a few different settings, tying it to the development of visual literacy skills. I had set a deadline to get a draft to my co-authors, and got a draft to them by, oh, 11pm or so on the date I told them. Having a deadline and someone I’m accountable to makes such a difference for me in being able to actually write anything.

Next on my plate is to write up a research project I did last fall. I presented on it at the Conference on Higher Education Pedagogy last February, but have made little progress in actually writing it up to submit for publication. I could write a whole blog post on that alone, though. Long story short, I taught two sections of the same course, in which I designed one recurring assignment to be done differently in different sections but kept everything else the same. The goal was to test the impact of that one assignment on student learning. Unfortunately, my results were inconclusive. This may have been partially due to qualitative differences between the classes – does a higher average GPA indicate that those students are more effective at learning new material, regardless of the pedagogical methods used? But it was also due to flaws in the assessment tool I used. So, for tenure & promotion purposes, I should write it up and submit it, even if inconclusive results rarely actually get published… But it’s really hard to make myself write something that probably won’t get published because the results are inconclusive!

Another thing I should write up is a case study of a series of events I’ve organized at the library. I’ve written here before about the Empty Bowls workshops I organize at the library. That post was from the first iteration. I’ve switched to scheduling the events in fall to work better with the Empty Bowls organizers’ schedule, and have done it each academic year since. I love these events because they let me do more to contribute to the fundraiser than I could by just making a bowl myself or volunteering at the annual event. And I can sell it as a library event because it is a service learning opportunity when we discuss the purpose and outcomes of the fundraiser with them while they have fun playing with clay or painting on the glaze. But when I sit down to write, I wind up blocking myself – it’s just another library event, whoop-dee-doo, as if nobody else has ever done this before…

And, speaking of library events, I’ve also been encouraged to write up something or at least do a presentation on the annual Celebration of Faculty Publications that I’ve organized at my library. It’s a big project that takes a lot of time, since my university does not currently have a central place that tracks faculty publications. But is it really worthy of publication? I suppose I could go for the angle of presenting it as a “lessons learned” type of article/presentation, discussing what worked and what didn’t in terms of getting buy-in.

So, focusing on this blog as a place to reflect and organize my thoughts, I guess the point of this post is to organize my writing projects and try to get my butt in motion. But, if anyone is reading this, please add any words of encouragement or thoughts on which ideas sound like something you’d want to read!

An attempt to revive my blog…

Like many students, I sat down this morning to work on a paper. For me, it’s a draft of a book chapter instead of a term paper, but whatever. I read through what I had already written, and as soon as I got to a part where I needed to add a paragraph, I realized it had been way too long since I last checked twitter!

And checking twitter led me to stumble upon this post:

Yeah, sure, some people have gained visibility through blogging, both in academia and beyond academia. But I’m one of a bajillion librarians with a blog, so I don’t really expect to stand out. However, regularly reflecting on my work, where I’m at in my research, and what I want to accomplish next would be useful. Yeah, sure, I try to do that, but forcing myself to sit down and write it out would be a lot more systematic than thinking about it whenever I have time.

Plus, maybe writing more regularly will help the words flow when I sit down to try to write a draft of a publication! This morning, I’m trying to flesh out a draft of a book chapter, with little success. I still need to work on writing up a draft of the research I did on my classes last fall… Which I haven’t even mentioned on here! And there are a few other things I really should write up. Attempting to write for publication feels daunting, while writing for myself here (because, really, very few people read this even at my peak, and even fewer now since I haven’t posted in nearly a year!) is not so bad.

So, I’m setting a new goal: write something here once a week. It may be short, and it doesn’t need to be brilliant or insightful. But hopefully it will help me reflect more regularly on what I’m doing and help me organize my thoughts on projects I really should be writing up!

ALA Annual Conference

It has been a while since I posted! Keeping up with the blog dropped a few notches on the list of priorities as other things have come up, in part because I didn’t think anybody was really reading this. It’s a good exercise in reflective practice to write up events and teaching experiences for myself, but it’s easy to set that aside for more pressing tasks if that’s all you’re doing.

At ALA, though, I attended a lunch hosted by EBSCO for academic librarians. I got there late and needed to run out early, so sat at a table in the back with one seat left open, though I didn’t know anyone at that table. The person sitting next to me recognized me because she has read my blog! That is so freaking cool! So I guess I should make more time to post here, even if it is just to organize my thoughts and reflect on projects I’ve done.

For now, though, here is the presentation that a colleague and I gave at the 19th Annual Reference Research Forum at ALA in Chicago, June 29, 2013. Or you can visit the full Google presentation with notes on what we planned to say about each slide. Some of that got cut due to time in the actual presentation, though we probably elaborated more than we had planned in other parts!

Andrew Walsh was one of our Information Literacy Fellows, 2012-13. He is starting a new permanent position at a library in Dayton, OH this month.

Adriana Gerena and Aundryel (Auni) Breland are/were graduate research assistants in the Instructional Services department at Ingram Library. They both ran the usability tests, and Adriana ran the focus groups after Auni left us for bigger and better things (an internship in her field and perhaps graduation by now!).

Teaching Outcomes

This past Friday night, I went out for dinner and drinks with a friend. It was a full week after the last day of final exams here, so we expected our tiny little town square to be free of students. We had dinner at one place, then moved to another location and just took seats at the bar and continued catching up.

And then a student who was in my class this fall came over and gave me a hug. He told me that my class was the first time he had gotten an A (I assumed he meant just in college, but who knows). And that he had asked his mom to proofread his paper, and she was so impressed that she asked him whether he had really written it himself. I didn’t know what to say, except that I didn’t give him an A, he earned that A, and just keep putting in that effort in future classes.

That was pretty awesome. But seeing this student progress through the semester was pretty awesome, even before Friday.

At the beginning of the semester, I warned them that my class would be a LOT of work. I told them that it wouldn’t be hard like calculus, but it would be a lot of work. I built in a lot of small assignments at the beginning so that I could do a lot of formative assessment and have lots of low-risk opportunities for feedback… In hindsight, I did too much of that, because it took up an inordinate amount of my time to give all of that feedback, but it was better than not giving enough feedback until it was too late.

My class is capped at 24 students. There were a few who stood out from the beginning as really putting a lot of thought into assignments. They might not have always gotten to the conclusion I hoped they would on the first try, but they were thinking about it instead of just tossing out a simple superficial response. And a few stood out as really on top of this stuff – probably the “over-achiever wonder-weenies” that have always found school pretty easy as they skate through with straight As. They make teaching much easier, though the best part is when you catch them helping a classmate along, explaining things in a different way than how I presented it to help that other student get the underlying concept.

This guy, though, was one of a handful that I had pegged as slackers. I teach information literacy, not rocket science – every single one of those kids is fully capable of getting these concepts, but they have to be willing to put in the time and effort. I can’t just pour the knowledge straight into their brains without their cooperation.

For the first month or so, all of the stuff he turned in seemed kind of half-assed. And then he stayed after class one day to talk about his low grades on some assignments, and I told him what he needed to do to get better grades. I include a minimum word count on each assignment, not because I want to be a bean counter, but because I want to see that they’re actually thinking about the question and trying to apply the concept to the real world. I want more than two basic sentences giving a simple superficial answer. I also gave him a couple of options to go back and add some information to at least one earlier assignment to make up points, but stressed that the most important part was just to meet all of the requirements and show me that he’s thinking about this stuff on future assignments.

After that, he improved, but it was his annotated bibliography that really made my day. Backing up, I suppose I should explain the way I structured my class this semester. I assigned groups in the second week of classes and had each group pick a topic to research through the semester. And then each member of that group had to come up with a research question related to that group topic. I made sure to mix up the majors, so that I could point out the wide variety of ways different researchers could approach a single topic. They worked with these research questions through the semester, culminating in a final project. This includes a “state of the presentation presentation,” an annotated bibliography, a short paper, and a final presentation. You can see more details over here, if you’re interested.

On the annotated bibliographies, they had to do proper citations in either APA or MLA format, and they had to do the standard annotation fodder of telling me what the source is about and how it will help answer their research question. But I also made them include how they evaluated the source. Critically evaluating a source in order to select quality, credible sources is an important learning outcome in my class, so I wanted to see that they actually thought about the author’s credentials or the reputation of the publication or whatever other evaluation criteria applies when selecting their sources. My goal is to teach how to do good library research, so I’m more interested in their process than in the actual information found in those sources.

This kid knocked it out of the park on the annotated bibliography. He had plenty of formatting errors and whatnot, which I pointed out because other profs will take off points for those sorts of errors, but his was one of the best at addressing the relevant evaluation criteria for each source and how he applied it. It seriously made my day.

And, again, on the paper, he did a great job. I gave them a fairly detailed outline to follow; all they had to do was follow directions. I had them include an overview of what they found, but then hammered again on the evaluation stuff – they had to talk about a source that surprised them and how they evaluated its credibility. Is it something that looks credible to begin with, but turns out to be the equivalent of a climate change denier, or is it actually good information about some new breakthrough in that field or something? His paper followed all of the directions and he did a great job talking about how he evaluated his surprising source. I really hate having to mark up papers, and I love when students give me A quality work!

This was one really great experience of watching a student turn things around and really get the important learning outcomes in the class. But I love teaching this semester-long credit-bearing class because I get to have little victories like this every week. I have the pleasure of getting to know the students as individuals, and seeing how they progress through the semester. One-shots are fun, and it’s great working with smaller departments where you can get to know some of the majors. But having a whole semester with a small group of kids is great.

And to think, when I did my practicum in my second-to-last semester of library school, I wasn’t even planning to include any instruction! My previous experience of teaching was pretty harrowing – basically tossed to the wolves to teach a 200-person lecture course with very little training. I am so thankful to the library director who suggested I include some instruction time in my practicum, too, since I had a fair bit of teaching experience already on my CV. I don’t know what area I would have gone into if it hadn’t been for that, but I’m pretty sure it would not have been nearly as awesome as the job I have now!

ACRL Immersion 2012

This past summer I got to attend the ACRL Immersion Teacher Track program. I probably should have written something up about it sooner… But, then again, there was so much information that giving it time to marinate was probably a good thing!

For those who are unfamiliar with the Immersion program, it’s a week-long training program to improve your skills in teaching information literacy. It started on Sunday afternoon/evening with a dinner and an icebreaker activity. On Monday through Friday, breakfast was from 7:30 to 8:30 (in the campus dining hall, so no need to be there right at 7:30)… And then we launched into a full day of lessons and activities. Some lessons were presented to the whole group, some of the time we were split according to our tracks (teacher or program), and we also split off to work with our smaller cohorts (around 10 or 12 people) for some activities. Monday was grueling, lasting until around 8:30 pm. The rest of the days were shorter, ending around 5:30 pm, but there was SOOO much information to take in that those days still felt pretty heavy.

The thing I most wanted to get out of the program before going was a grounding in learning theory and a way to be more systematic in my instruction. Many of the librarians with me in the teacher track only teach one-shots, so I don’t know how they viewed some of the learning theory stuff, but I really wanted to build a more systematic approach to my semester-long course. Luckily, that’s what I got out of the program!

Many of the activities were geared toward teaching us to use active learning techniques to get students more engaged in class. As you may guess from a glance at my list of presentations, I’ve been drinking that kool-aid since I started at UWG. The head of my department has done the teacher track, program track, and just finished the assessment track of the ACRL Immersion program offerings. Another member of my department, who did the program track this summer, did the teacher track a few years ago. And these two had the biggest influence on the type of teaching I tried in my first year here.

On the first day, we watched a Nightline segment about a corporate design firm, IDEO — you can catch it on youtube. While everyone else was reacting like “that’s such a great idea,” it just reminded me of May, when my department worked together to rework our info lit course for a summer bridge program! I also recognized several of the activities that we did at Immersion from exercises we have already been using in classes at UWG… So that aspect of the program wasn’t nearly as enlightening for me as it may have been for other participants.

The parts that I found most useful were the basic grounding in educational theory, a way of thinking about different learning styles, and an overview of assessment techniques.

I have a decent amount of teaching experience, but had no real training in how to teach before Immersion. My first teaching gig was as a grad student at UVA – I got a TA position in which I led 3 discussion sections of about 20 students each. I had no frickin’ clue what I was doing! But, with only 20 students in each section, it worked out ok. My next time in front of a class was much more nerve-wracking: teaching a 225-person lecture course as the graduate instructor! After getting my MA at UVA and starting the PhD program in anthropology at Mizzou, I got a TA position my first year, but there were no discussion sections. I just helped with grading and lectured when the instructor was ill or had a family emergency. And that was considered my training for teaching the course the following year. Eeep! I kind of pity the students who were in that class that year… Or, really, most years, because my experience was pretty typical of the grad students who taught that course. And, so, I did straight-up lecture with PowerPoint slides. Snooze-fest. I only did 4 Scantron exams each semester, and the questions were almost all drawn from the textbook’s test bank. By the way, you’d think those questions would be clear and unambiguous, since they’re written by “experts,” but every time I had a problem with a bad question on a test, it was one from the test bank, not one I had written based on an example from the lecture that wasn’t in the book.

So I learned from trial and error, but was flying kind of blind when it comes to planning.

At Immersion, Char Booth took the lead on teaching learning theory. She covered different learning theories, including behaviorism, cognitivism, and constructivism.

Behaviorism is basically the Pavlov’s dog approach of providing information and repeating it until it is “learned”. You can have engagement with students in this approach, but it’s more in the form of questioning, with a call and response type of environment. They are awake and responding, but trained to respond with a correct answer to a given cue.

Cognitivism pays more attention to the cognitive processes involved in learning – so scaffolding to build on prior knowledge, parceling out the new information to avoid overloading the learner. This is the approach that got us to start telling people what they’re going to get out of the instruction session at the beginning to make it more relevant to them. This approach involves using problem solving to create “aha” moments, where the learner has some insight into the concept and reaches the next level on the scaffold.

Constructivism is the trendy one right now – as you might guess from the name, this theory suggests that learners construct their own meaning. So, learning is: process based, socially and culturally influenced, rooted in experience (in contrast to abstract ideas), interpreted through individual lenses, and dependent on motivation and reflection. (And that list is copied directly from my notes, not sure whether I copied exactly or paraphrased Char Booth’s lecture on this!) This is where a lot of the inquiry-based teaching approaches come into play.

The point of learning about learning theory is not just to fill our heads with theory, but to inform our planning. So, if you have X outcome that you want students to learn, which approach will be most effective? Some lessons lend themselves well to constructivist approaches, while others really are best taught through a behaviorist approach. Constructivism is “in” right now, but Char emphasized that we shouldn’t assume that it is good and the older models are bad. You need different tools in your toolbox for different tasks.

Deb Gilchrist took the lead on teaching about assessment. At some point, I need to look back through my notes and the program notebook to pull more nuggets of wisdom out of this section. The main thing I took from this part, so far, is the importance of learning outcomes. To be able to effectively assess student learning, you need to walk in with a clear idea of what you want them to learn! Learning outcomes can come in a variety of levels — what are the overarching outcomes I want students to get from my semester-long course, and then what are the more specific outcomes that will scaffold them up to those overarching outcomes? And then, what are the smaller component outcomes that will build them up to those mid-level outcomes… Of course, you can go the other way with it too – if I want them to learn to use the different limiters and results filtering options in Academic Search Complete, what’s the point? What outcome does that support? If it doesn’t lead to an outcome that I want to teach, what is the point of doing it?

There’s a lot of confusion out there about what a learning outcome is vs. a learning objective, and several other terms that I’m blanking on at the moment. Deb defined learning outcomes as “[verb phrase] in order to [why phrase].” So you have two key components – the skill you want them to learn and the reason for learning that skill, separated by “in order to”. As I mentioned above, these can be written at a variety of levels, but it’s important to make sure that the statements are balanced. One way to do this is to use Bloom’s taxonomy (I’m using this list for categories mentioned below). If the first half of your outcome addresses a basic database searching skill that fits in the comprehension category, you want the second half to fit the application category, not jump to the evaluation end of the spectrum. Deb stressed that there isn’t any one correct answer for writing outcomes; the important part is to know why you selected the components you did. Another important detail is to select verbs that are actually observable — how do you assess whether someone “understands” something?
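The template is mechanical enough to express as a tiny sketch. Everything here (the sample verb and why phrases, the function name, the Bloom ordering as a plain list) is my own illustration, not part of Deb’s materials.

```python
# Bloom's taxonomy categories, in the order commonly given,
# from lower-order to higher-order thinking skills.
BLOOM_LEVELS = ["knowledge", "comprehension", "application",
                "analysis", "synthesis", "evaluation"]

def learning_outcome(verb_phrase, why_phrase):
    """Compose an outcome in the '[verb phrase] in order to [why phrase]' form."""
    return f"{verb_phrase} in order to {why_phrase}"

# A hypothetical mid-level outcome: the skill half sits around
# 'application', and the why half stays balanced at a nearby level
# rather than leaping to the 'evaluation' end of the spectrum.
outcome = learning_outcome(
    "apply limiters in a library database",
    "narrow a results list to recent scholarly sources",
)
```

Note that the verb half uses an observable action (“apply”), which can actually be assessed, rather than an unobservable one like “understand”.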

Beth Woodard led the discussions about learning styles. There are all kinds of different approaches people take to this topic, and plenty of skepticism out there about taking learning styles too seriously. But I think Beth’s approach was really useful. Instead of trying to pigeon-hole people into a few pre-defined categories, we talked about different styles as a continuum along two axes: concrete experience <--> abstract conceptualization and reflective observation <--> active experimentation. My Immersion notebook is in the office and I’m at home, so I consulted wikipedia to jog my memory – it tells me that this version is David Kolb’s model.

The take-away for me is that some people do better by getting their hands on a problem and experimenting with it, while others prefer some time to think and create a plan. And on the other axis, some people do better with abstract ideas, while others prefer more concrete examples… We also talked some about social relations being important for the concrete experience end of the spectrum – so those people like to work in groups and build on one another’s experiences, while the more abstract folks prefer solitary work. The people with high reflective observation and abstract conceptualization scores would rather sit through a long, boring lecture than have to participate in a lot of constructivist, inquiry-based lesson plans – weirdos! So mix it up. Don’t find one teaching method that you like and use that for everything; try to vary your approaches and incorporate activities that allow time for active experimentation AND reflective observation, group AND individual work. Again, more tools in your toolbox is a good thing, and will help you reach more students.

So, I thought I would also talk some about what I’ve done so far to incorporate these ideas and what I plan to do in the foreseeable future… But this post is already oh-my-god long! So I’ll leave that for another post!