A common complaint about the literature in library science is that, well, it basically sucks. In my anecdotal experience, this is more often than not claimed by people who don’t actually write such literature themselves, and who seem to have a relatively dismal view of theorizing and reflecting on the nature of what they do. Those claims need not be taken very seriously.
But a lot of the library science literature out there really does, in my never humble opinion, kind of suck. The reasons I’ve usually heard cited to support this tend to be things like small sample sizes; lack of rigor; lack of transferability from one library context to another; and so forth. This seems true enough, I guess, though I’ve never personally taken these reasons to be especially compelling evidence that our professional literature is useless: even if a study is pretty small it can still, for example, give you useful ideas to try at your institution, or inform you of things other people are trying that you may be able to adapt for your own purposes.
I’ve been thinking a lot about this, and for me the main reason so much library literature sucks is that it’s superficial: it’s not grounded in any deep sense of what it means to be a human being.
For example, I don’t really understand how you can discuss, say, strategies for teaching students how to evaluate reliable information without, like, taking into account deeper issues like the nature of truth and how it’s established, the structure of knowledge and how it’s acquired, and related issues. If you write an article that (say) talks about a strategy you used to teach students to evaluate information without the underlying reasons why you thought that strategy would work for creatures like us, I probably think the article sucks.
If you talk about how the students loved the scavenger hunt in your library but don’t discuss the underlying psychology of motivation you’re tapping into, which gives us reason to believe in this strategy’s long-term effectiveness, I probably think your article sucks.
On the other hand, if you examine research in the psychology of education about how the way we view intelligence affects our motivation to learn, and evaluate how that can improve research instruction, I probably will think your article is really great.
If you talk about a particular teaching technique where you used Twitter as a relevant real-world metaphor for scholarly conversation, and you back it up with evidence from cognitive psychology, explaining that this is a good strategy because research shows learners remember abstract ideas better when they’re illustrated by familiar concrete examples, then I probably think your paper is really great, too.
See what I mean?
It’s kind of like movies. Most movies that come out in theaters really suck. Probably almost all of them. But every once in a while, for some reason, a really good one fights its way into existence. When I was in library school I thought there were way more articles of the bad kind than of the good: they really didn’t seem to have anything to do with helping me improve my instruction. But every once in a while, like a great movie, some really good articles of substance slip through, ones with an actual worldview that can help you change the way you think and practice.
So. This is aaaaaaaaallllll by way of saying that I’m going to start occasionally posting a new series of what I think are important articles in library science, starting with this one on evaluating information by Don Fallis, called “On Verifying the Accuracy of Information: Philosophical Perspectives.”
Here’s the abstract:
How can one verify the accuracy of recorded information (e.g., information found in books, newspapers, and on Web sites)? In this paper, I argue that work in the epistemology of testimony (especially that of philosophers David Hume and Alvin Goldman) can help with this important practical problem in library and information science. This work suggests that there are four important areas to consider when verifying the accuracy of information: (i) authority, (ii) independent corroboration, (iii) plausibility and support, and (iv) presentation. I show how philosophical research in these areas can improve how information professionals go about teaching people how to evaluate information. Finally, I discuss several further techniques that information professionals can and should use to make it easier for people to verify the accuracy of information.
I think it can change the way you look at teaching about reliability, and it can take you far beyond the CRAAP test while still helping you develop practical ideas for teaching information evaluation to students.