For academic librarians:
We think that our undergrads go to Google because it’s easier to search than our databases, with their powerful syntaxes and fields, and we’re plowing ahead with federated searching to give our resources “Google appeal” based on this idea. But we’re mistaken. Our databases can be searched with keywords pretty easily, and students who want to keep it simple can just go to the general databases like Academic Search Premier and Expanded Academic Index. But they still tend to prefer using Google. Why?
The real reason undergrads like Google is that it gives them more reading material that they are actually able to understand. And this is not a reflection on our students’ intelligence or general preparedness – it goes for the brightest of our undergrads. We tell our students to go to our databases for articles that are “scholarly and reliable,” but we don’t often tell them that most of the articles they will find there, in addition to being scholarly and reliable, are not really intended for an undergraduate audience. These journal articles are mostly narrowly defined studies intended for an audience of scholars who are advancing their fields at the highest levels of learning. Our students can’t even understand the titles of half of these articles. It’s not our students’ fault; many of the articles in our databases require a high level of disciplinary background knowledge. But what does it say to them when they are not able to comprehend the materials that we tell them are “scholarly and reliable”? If they are unaware that these articles are intended for an audience of professors and graduate students, it probably makes them feel dumb and somewhat resentful toward the library.
Often our undergrads need to read articles that present more of an overview of a topic (even if it’s a relatively narrow topic), something that gives them a little depth, a bit of background, and in some areas a picture of the scholarly discourse. Books are usually better at this than journal articles, but electronic resources are preferred and tend to be promoted more heavily (I believe this has to do with political issues related to academic library funding). There are some articles in scholarly journals here and there that offer a broader treatment of a topic (often they’re literature reviews), but they are not always easy to find, especially for undergrads left to their own devices.
It would be helpful if our databases included a field that estimated the level of disciplinary background knowledge required to understand an article, on, say, a scale of one to five. Working with a sophomore student just beginning a biology major, we could search Biological Abstracts with the limit on this field set to two, and have a much more useful and less intimidating result set. Maybe it wouldn’t work in Biological Abstracts, because everything in it would count as a four or a five; maybe the place to start using such a system would be in one of our general databases, with coverage of publications like Science and New Scientist.
This difficulty rating could be assigned by an indexer, or possibly automatically. Software could analyze the full text of an article against a dictionary of discipline-specific terms rated for obscurity and difficulty, or simply count the frequency of obscure terms, and rate the article accordingly. Maybe some databases already do something like this and I’m not aware of it.
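To make the automatic version concrete, here is a minimal sketch of the dictionary-based approach. Everything in it is an assumption for illustration: the tiny `TERM_DIFFICULTY` dictionary stands in for a curated, discipline-specific term list with ratings, and the scoring rule (average the ratings of the rated terms that appear) is just one plausible way to collapse term counts into a one-to-five estimate.

```python
import re

# Hypothetical dictionary: discipline-specific terms rated from 1
# (common knowledge) to 5 (highly specialized). A real system would
# need curated, per-discipline term lists.
TERM_DIFFICULTY = {
    "photosynthesis": 2,
    "mitochondria": 2,
    "chromatin": 4,
    "epigenetics": 4,
    "methyltransferase": 5,
}

def difficulty_score(text: str) -> int:
    """Estimate required background knowledge on a 1-5 scale.

    Averages the ratings of every rated term occurrence in the text,
    so repeated jargon weighs more; defaults to 1 when no rated
    terms appear.
    """
    words = re.findall(r"[a-z]+", text.lower())
    ratings = [TERM_DIFFICULTY[w] for w in words if w in TERM_DIFFICULTY]
    if not ratings:
        return 1
    return round(sum(ratings) / len(ratings))

abstract = ("DNA methyltransferase activity alters chromatin "
            "structure, a central mechanism in epigenetics.")
print(difficulty_score(abstract))  # prints 4: dense with specialized terms
```

A database could store this number in the hypothetical limit field described above, letting a search be capped at level two for a beginning major.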
You know what I’m talking about, right? What is to be done?