Saturday, February 18, 2017

Librarian & fake news - Bayes, metaknowledge & epistemic humility

The recent rise in interest in fake news has given us librarians a reason to once again trumpet loudly the value of what we do in teaching information or media literacy.

Librarians were quick to establish our turf by calling out articles that mention information literacy without mentioning librarians.

Besides the expected library sources, pieces began to appear in mainstream outlets such as Salon and U.S. News & World Report, and most recently PBS praised the role librarians can play in fighting the rise of fake news. Many librarians were ecstatic: finally, our moment in the sun had come!

So how do librarians fight fake news? A running joke among some librarians is that the librarian's standard solution to all the world's ills is to build a LibGuide.

And indeed, librarians and libraries such as Cornell University Library, Indiana University East Library, and the University of Virginia Library have created or adapted existing material to create guides on fake news.

As I write this, there are at least 1,651 LibGuide pages that mention "fake news".

While I salute the efforts of librarians to create guides, I fear the actual impact is more like the sentiment below I saw expressed by someone on Twitter*.

* Modified 21/2/2017 with actual Tweet made by Wilkinson

All this talk about helping or teaching our users to deal with fake news made me curious. What roles can librarians play in this? Is it a matter of teaching the CRAP test, or worse, some black-and-white view that only .org/.gov sites or peer-reviewed articles are reliable (do you automatically trust information on .gov sites under the Trump administration)? Or is teaching information literacy using various "threshold concepts" the solution? Is the line between fake news sites and biased news always clear and distinct?

While the answers to these questions are probably not going to be easy, below are three articles I've read that made me think more deeply on the topic.

1. Boyd’s interesting yet scary argument

danah boyd is a well-known researcher at Microsoft Research and has been influential in helping us understand how young people relate to technology.

Recently she wrote a very provocative piece, Did Media Literacy Backfire?, that made me think.

I covered the whole argument over at Medium, but in a nutshell her argument seems to be that certain topics are extremely complicated, and it takes a real expert with years of experience and expertise to pick apart the opposing counter-arguments, particularly for topics where many people have, for various reasons, spent years honing their arguments (think anti-evolution arguments). As such, it would be better for people to just accept the consensus judgement of experts.

She argues that media literacy may backfire if we train people to believe they should and are capable of evaluating all arguments and statements. We train them to doubt and make up their own minds.

I would add that ACRL's new Framework for Information Literacy for Higher Education, which states that “Authority Is Constructed and Contextual”, even gives individuals the license to decide they shouldn't automatically trust mainstream sources.

She writes "If the media is reporting on something, and you don’t trust the media, then it is your responsibility to question their authority, to doubt the information you are being given."
Add the natural tendencies of people to privilege evidence that supports their original beliefs and media literacy backfires.

"People believe in information that confirms their priors. In fact, if you present them with data that contradicts their beliefs, they will double down on their beliefs rather than integrate the new knowledge into their understanding."

As such she implies that in many matters it’s better for people not to try to figure out the truth themselves but to just trust the experts.

You can read my full coverage of and response to this scary argument here, but it's an interesting question to think about: in our rush to teach students to think for themselves and to evaluate information, do we teach them the humility to say we don't know enough to decide either way? Do we teach the concept of "epistemic learned helplessness"?

2. Wilkinson's new research agenda for information literacy - Bayesian inference

You may be wondering what Boyd means above by "People believe in information that confirms their priors...". This is where the idea of Bayesian inference comes in.

Lane Wilkinson is a "philosophically-inclined instruction librarian" at the University of Tennessee at Chattanooga. Currently the library's Director of Instruction, he is also known for his heavy criticism of ACRL's new Framework for Information Literacy for Higher Education.

He recently wrote an interesting piece that suggests bringing the idea of Bayesian inference into information literacy.

It's a fascinating idea. While I have come across Bayesian models of reasoning in other contexts, like many librarians the idea of its intersection with information literacy had passed me by.

One reason for this, I suspect, is that Bayesian thinking is not easy for many to grasp (including me). While many of us memorised the formula for Bayes' theorem in school, an intuitive understanding of it eludes many. IMHO some of the best explanations can be found here.

Lane sets up the traditional explanation using cancer detection reliability as an analogy. But I will skip this and go directly to the implication (in a simplified manner).

As Lane explains the idea here is that people have "priors", their belief in whether a certain fact is true or not. New information that they learn will shift their beliefs with the magnitude of change depending on how reliable they think the source is.

So say that, before reading anything, a person's belief that Obama was born in the US is 50%; call this P(A) = 50%. This is their "prior" probability. In this case they are undecided.

Say they read a news story that gives evidence he was indeed born in the US; call that B.

Say they perceive the source as quite reliable, in other words the articles from that source are usually correct.

This implies two things. Firstly,

P(B|A), the probability that B occurs (the article appears) given that A is true (i.e. Obama was born in the US), is high - assume 80%.

Secondly,

P(B|¬A), the probability of B given not-A, is low; i.e. if Obama was not born in the US, an article claiming he was is unlikely to appear - assume 10%.

Plug these into Bayes' theorem, P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|¬A)P(¬A)], and you find that P(A|B), aka the posterior probability that they should believe A given (or conditional on) the existence of the news article B, rises to 89%. So reading the news article means they should increase their belief that Obama was born in the US to 89%.

If, on the other hand, they think the source is not so reliable, say a liberal source (if they are conservatives), then perhaps they feel P(B|A) is only 60% and P(B|¬A) is 50%. Plug these into Bayes' theorem and P(A|B) rises to only 55%.

In more extreme cases, where they think the article is more likely to appear when the claim is false than when it is true, i.e. P(B|A) < P(B|¬A), reading the article actually makes the reader even more doubtful than before!
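The arithmetic above can be checked with a few lines of code. This is a minimal sketch of the Bayes' theorem calculation, using the illustrative probabilities from the example (not real data):

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem:
    P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|not A)P(not A)]
    """
    numerator = p_b_given_a * prior_a
    evidence = numerator + p_b_given_not_a * (1 - prior_a)
    return numerator / evidence

# Perceived-reliable source: P(B|A)=0.8, P(B|¬A)=0.1
# Belief rises from the 50% prior to ~89%
print(round(posterior(0.5, 0.8, 0.1), 2))  # 0.89

# Distrusted source: P(B|A)=0.6, P(B|¬A)=0.5
# Belief barely moves, to ~55%
print(round(posterior(0.5, 0.6, 0.5), 2))  # 0.55

# Source thought more likely to print the claim when it is false:
# the article actually lowers belief below the 50% prior
print(round(posterior(0.5, 0.3, 0.6), 2))  # 0.33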

Hopefully I got that all right! If you understood all that, then you understand what Boyd was saying earlier.

"People believe in information that confirms their priors. In fact, if you present them with data that contradicts their beliefs, they will double down on their beliefs rather than integrate the new knowledge into their understanding."

In other words, people do not trust sources that disagree with what they already believe. Under Bayes' theorem, this means evidence from such sources will either not change their beliefs much or, in extreme cases, even drive their beliefs in the opposite direction.

Lane clarifies that he doesn't intend for librarians to teach students Bayesian inference (it can get pretty complicated, as this involves philosophical issues in epistemology after all), but rather that information literacy can be studied under the lens of Bayesian inference. He lists quite a few intriguing questions to study. For example, he asks, "How do we adjust when our trusted, reliable sources publish something false? For example, when peer-reviewed journals retract articles." and "What are the information-seeking behaviors of students researching something they have a strong opinion on?"

3. Improving crowd sourcing by weighting metaknowledge

Lane talks about bringing "theories of cognitive science, psychology, information science, economics, philosophy, law, decision theory, and so on into library studies".

It's an interesting thought. I have been reading quite a bit on cognitive biases and decision theory in the past few years, and while, like Lane, I don't expect librarians to teach this in information literacy classes, it does seem to be a related domain that can inform information literacy. For a taste of this, I recently read an interesting idea about improving the wisdom of crowds by using metaknowledge.

You can find the argument in Nature, "A solution to the single-question crowd wisdom problem", or if you prefer more layman-friendly articles, in coverage at Aeon or MIT News.

Essentially, the problem with the wisdom of crowds is that one averages across everyone equally. The idea is that if we can identify the "experts" in the crowd, we can improve the reliability of our results by weighting their opinions more heavily.

How do we identify such experts if we are not experts ourselves? The key insight is that experts not only have more content knowledge, they also have better metaknowledge. Here's how you exploit this.

From the Aeon article,

"When you take a survey, ask people for two numbers: their own best guess of the answer (the ‘response’) and also their assessment of how many people they think will agree with them (the ‘prediction’). The response represents their knowledge, the prediction their metaknowledge. After you have collected everyone’s responses, you can compare their metaknowledge predictions to the group’s averaged knowledge. That provides a concrete measure: people who provided the most accurate predictions – who displayed the most self-awareness and most accurate perception of others – are the ones to trust"

Confused? Here's a concrete example used in the article. They asked a group of MIT and Princeton undergraduates the following question: "Is Philadelphia the capital of Pennsylvania?"

The correct answer is "No"; the capital is in fact Harrisburg. Most people who make this mistake would say "Yes" and predict that most people, say 90%, would say "Yes" too, as they aren't aware that their answer is in fact wrong.

People who know the answer is "No" will mostly also know that "Yes" is a common error, and when asked to predict what percentage will agree with them, will guess a lower figure, say that only 30% will agree with them (or alternatively that 70% will say "Yes"). In other words, they have better metaknowledge: they not only know the fact, they know that others know less.

When you look at the final result, the "No" group will most probably have a more accurate prediction of the overall yes/no split, since the "Yes" group thinks "Yes" is the obvious answer that most will go for.
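The selection rule from the paper, sometimes called the "surprisingly popular" answer, picks the option whose actual vote share exceeds the crowd's predicted share for it. Here is a toy sketch; the survey numbers are made up to mirror the Philadelphia example, not taken from the study:

```python
from statistics import mean

# Each respondent gives a response and a prediction of what share
# of respondents will answer "Yes". Suppose 65% wrongly answer "Yes",
# and both groups expect "Yes" to be very popular.
responses = ["Yes"] * 65 + ["No"] * 35
predicted_yes_share = [0.90] * 65 + [0.70] * 35

actual_yes = responses.count("Yes") / len(responses)  # 0.65
predicted_yes = mean(predicted_yes_share)             # 0.83

# "No" is surprisingly popular: its actual share (35%) exceeds its
# predicted share (17%), so the algorithm selects "No" - the truth.
answer = "Yes" if actual_yes > predicted_yes else "No"
print(answer)  # No
```

Note how the "No" voters' better metaknowledge (they expect to be in the minority) is exactly what drags the predicted "Yes" share above the actual one.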

I highly recommend you read the Aeon and Nature articles. They go into more depth on how metaknowledge can be leveraged, the various experiments the authors did to verify the effectiveness of this technique, and the ways variants of this technique can act as a lie detector and/or "truth serum" when asking questions where respondents have an incentive to hide the truth, e.g. asking if they have committed plagiarism or made up data.

I find this article fascinating as it provides a partial answer to the question of how we can reliably identify who is an expert.


As I have admitted before, information literacy, particularly for freshmen, hasn't been a big interest of mine. Part of it is because it often reduces, for me, to teaching Boolean operators (something I believe is getting less and less necessary), showing undergraduates how to push buttons in databases, helping freshmen who are worried a misplaced dot will get marks deducted, or pushing mechanical rules like the CRAP test.

Most probably I'm doing it wrong, but I still enjoy thinking about and discussing deep epistemological questions like "How do we know who is an expert?" and "When should we express humility and rely on experts?"

I of course understand that, due to constraints of time and the type of audience, a deep discussion isn't always appropriate, though I see hints of deeper engagement in the new ACRL information literacy framework's focus on threshold concepts.

Sunday, February 5, 2017

4 different ways of measuring library eresource usage

How does one measure library eresource usage? This is a question I've bumped into numerous times recently in the course of my work, whether trying to do correlation studies between student success and electronic resource usage, choosing the right metric for the library dashboard, or, more mundanely, just evaluating a database for subscription.

My way of looking at it is twofold.

Firstly, you can classify a metric by its source, that is, where you get the data from. Secondly, you can classify by the type of usage metric.

For many electronic resource librarians, when you talk about electronic resource usage, the main source of such statistics is publishers, whose statistics are usually, but not always, COUNTER compliant.

But that's not the only possible source. A secondary, perhaps less commonly used, source of electronic resource usage is the library's own systems, which typically means EZproxy (or perhaps OpenAthens) logs.

Of the usage statistics you can derive from these two sources, I divide them into two main types: download based and non-download (session) based.

This creates a 2x2 grid of possible statistics.

My thoughts on the strengths and weaknesses of the four types of electronic usage metrics, and when you should use them, are as follows.

Type (1) - Publisher based download metrics

This is probably the most common type of usage metric. Typically, for most big journal publishers, you will get standardised COUNTER compliant statistics (up to Release 4 now). While there are many different types of COUNTER reports, the ones most widely used are JR1, BR1, and perhaps BR2.

JR1 - Number of Successful Full-Text Article Requests by Month and Journal
BR1 - Number of Successful Title Requests by Month and Title
BR2 - Number of Successful Section Requests by Month and Title

There are others like Multimedia Report 1 (basically JR1 for multimedia) and more complicated ones like "Title Report 1 Mobile", but these are rarely known to most librarians.

These three metrics are easy to understand by all and basically tell you how many times the journal article/book title/book chapter was downloaded.

Pros: Easy to understand; after all, a download is a download! Heavily used to calculate cost per download for renewal decisions. JR1 and BR1 are pretty much industry standard and almost always comparable across vendors that implement COUNTER statistics.
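The cost-per-download calculation is simple division. A tiny sketch with made-up numbers (the cost and download figures below are assumptions for illustration only):

```python
# Cost per download = annual subscription cost / total JR1
# full-text requests for the same year (hypothetical figures)
annual_cost = 12000.00  # assumed subscription cost
jr1_downloads = 4800    # assumed yearly JR1 total

cost_per_download = annual_cost / jr1_downloads
print(round(cost_per_download, 2))  # 2.5
```

A common rule of thumb is to compare this figure against the cost of supplying the same article via interlibrary loan or pay-per-view.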

Cons: While journal-based platforms are mostly COUNTER compliant, many resources are not (e.g. many law and finance/business databases).

Many non-traditional resources that don't serve up journal articles or books don't adapt well to the concept of downloads. The most obvious are A&I databases, or databases with a variety of different types of content.

A bigger issue is that COUNTER statistics a) provide only monthly reports and b) show only total counts.

As a result, if you are doing correlation-type studies, where you correlate say student GPAs with electronic resource use, COUNTER statistics can't be used, as you can't relate usage to individuals.

Firstly, COUNTER-only statistics would mean you wouldn't be able to track usage of a lot of non-COUNTER resources. More seriously, using JR1 or BR1 is not appropriate, as you can't do any granular analysis by discipline, much less by individual. Even tracking the time of heaviest use (beyond the month) is impossible.

Type (2) - Publisher based non-download metrics

COUNTER includes other statistics that don't count "successful requests" (aka downloads). These include, among others:

Journal Report 4 -  Total Searches Run By Month and Collection
Database Report 1 - Total Searches, Result Clicks and Record Views by Month and Database
Book Report 5 -  Total Searches by Month and Title

These are what I call "non-download based". They count the number of searches made or views. Some people are of the view that these are of lesser value than downloads, since one can search or view a lot but still not gain any value. Of course, a possible counter is that even a download might still be useless once read.

Still, they share the advantages of other COUNTER statistics in that they are standardised and theoretically comparable across publishers. Of course, they also share the same issues: many content providers are not COUNTER compliant, and in particular there is no way to drill down beyond monthly data.

Type (4) - Log based non-download metrics

The question is: do you 100% trust what publishers tell you? What if you want to double-check? The main way to do so is to analyse your EZproxy logs.

This is done a lot less often, in my experience, because of the size and complexity of EZproxy logs. As such, the simplest way most libraries deal with this is to count "sessions". This can be done fairly easily using various methods.

For those who are unaware, when you start an EZproxy session, a session ID is created, logged, and stored in your browser cookie. This persists until you time out, sign out, or close the browser.

As such, to measure the usage of, say, Scopus, one can just count the number of unique sessions in which there is a request for the Scopus domain. One can count unique sessions for each database or journal of interest.

In practice one just counts unique sessions per domain, though this can sometimes get complicated: do you count subdomains together, and what about content providers with multiple domains? How much extra work you want to do here is up to you.
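Counting unique sessions per domain can be sketched in a few lines. Note that the log format below is an assumption: real EZproxy logs depend entirely on your LogFormat directive, so the parsing would need to match your own configuration.

```python
from collections import defaultdict
from urllib.parse import urlparse

# Hypothetical simplified log: user, session ID, method, URL, protocol
sample_log = """\
user1 abc123 GET http://www.scopus.com/search HTTP/1.1
user1 abc123 GET http://www.scopus.com/record HTTP/1.1
user2 def456 GET http://www.scopus.com/search HTTP/1.1
user3 ghi789 GET http://search.proquest.com/docview HTTP/1.1
"""

# Collect the set of distinct session IDs seen for each domain
sessions_per_domain = defaultdict(set)
for line in sample_log.splitlines():
    user, session_id, method, url, proto = line.split()
    domain = urlparse(url).hostname
    sessions_per_domain[domain].add(session_id)

for domain, sessions in sorted(sessions_per_domain.items()):
    print(domain, len(sessions))
# search.proquest.com 1
# www.scopus.com 2
```

A real script would also have to decide whether to fold subdomains (e.g. `www.scopus.com` and `api.scopus.com`) into one resource, which is exactly the complication mentioned above.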

The main advantages of this method are that a) it works for pretty much all types of resources (including those that don't have "downloads" or aren't COUNTER compliant) as long as they are accessed over EZproxy, b) it's technically fairly simple to get and fairly comparable, and, most importantly, c) if you set up your EZproxy config properly, you can uniquely identify the individual using it (e.g. by NT login/email).

You can then link up the email with the data sitting in your library management system, and you have access to a rich source of data on who is accessing your eresources, what they access, and when.

The main disadvantage is that, for many libraries, not all traffic needs to be channeled via EZproxy, particularly on campus. In my current and former institutions, all traffic is required to go through the proxy even on campus, so this isn't an issue.

I'm not too familiar with OpenAthens-type systems, but I understand that by default they make it trivial to calculate sessions by user and date/time, since those are already recorded in the logs, while making it hard to go further and study downloads and other details recorded by EZproxy. I could be wrong, though.

Type (3) - Log based download metrics

Sessions obtained from EZproxy logs are well and good, but what if you want downloads to calculate cost per download?

Getting downloads from logs can be very time consuming, as you need to set up complicated rulesets to identify which lines in the logs are downloads and which journals or platforms they refer to. This is the main reason people tend to stick with COUNTER or other publisher-provided download statistics.

This is where the open source ezPAARSE comes in.

I've already referred to this open source software in the past. It's an amazing piece of software that can crunch your logs and spit out the downloads it recognises. It's a community effort, with rulebases being updated constantly.

It even allows you to create COUNTER like statistics for comparisons!

Once you have obtained the logs crunched by ezPAARSE, you can further enrich the data with more user information, as in Type (4).

Since I last tried it, I've done more bulk processing of our logs. My main learning point is that, as good as ezPAARSE is, at least for our set of databases it is still unable to identify a lot of aggregator platforms. It could be my setup, but for example it doesn't seem to identify ProQuest at all for us. Even for platforms it does recognise, like EBSCO, it can't reliably identify journal titles. A lot more testing is needed.

Of course, the project is open source and always looking for help in creating new rulesets.


Obviously, what type of statistics you use depends on the use case, and there's no reason you can't combine approaches.

If all you want to do is evaluate the renewal of a specific journal database and it has COUNTER JR1 statistics, that is the obvious metric to use.

But if you need to go down to the level of which schools or types of users use the journal (perhaps for allocating costs), then you would need some sort of EZproxy/OpenAthens log-based metric.

Another question to consider is whether you need to compare across a variety of resources. A correlation study comparing usage of library resources against student grades would obviously need a metric that firstly covers as broad a range of resources as possible and secondly does so in a consistent way.

I've found that counting sessions from EZproxy/OpenAthens logs generally fits the bill best here. It's still not perfect, since some resources aren't tracked this way (including particularly important ones like high-end financial databases such as Bloomberg), but that's the best I can do.

For the dashboard, it is harder to say what would be useful. Perhaps all of them?

Which of the types of electronic usage statistics I have outlined do you use? Are any of them useless to you? Most importantly if you have a library dashboard tracking such statistics, which one do you use?


Creative Commons License
This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.