Proposal: the Open Scholar Rating

Danah Boyd, of Microsoft Research and NYU, on Monday put out a much-circulated question on Twitter, asking “how many academics have a personal commitment to #openaccess — that is, free public online access to published research.” Conversation ensued.

ORCID is an emerging standard identifier for scientists and researchers, designed to make it easier to identify and track research work. There are also forms of researcher ID used by Google Scholar, Elsevier, etc., but ORCID is more of an open, international standards effort.

There are many issues and nuances to consider in such a rating, but I loosely imagined something like this:

  1. A transparent, well-defined formula (or set of defined variant formulae) which can run on standard inputs such as ORCIDs and DOIs (Digital Object Identifiers).
  2. For any given scholarly work, say an article, chapter, or book, a quantitative evaluation would be made based on, for example, the degree, manner, and timeline by which the work was made publicly accessible.
    Note, this would be based on time from original publication, not on whether the work is publicly accessible now. So an article published in 2005, now available after a two-year embargo, counts as a two-year public-access delay. (A rough sketch of such a calculation follows this list.)
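
To make this concrete, here is a minimal Python sketch of how such a formula might be computed. Everything in it is an illustrative assumption of mine, not a proposed standard: the input shape (a publication date plus a date of public availability per work), the 1/(1 + delay-in-years) decay, and the plain averaging. A real OSR would need to resolve a scholar's works via ORCID and DOI metadata, and weigh the "degree and manner" of access as well as its timing.

```python
from datetime import date
from typing import List, Optional, Tuple

# Hypothetical OSR sketch. The decay function, scoring scale, and
# averaging below are my own illustrative choices; nothing here is a
# published OSR or OAIndex specification.

def access_delay_years(published: date, public: Optional[date]) -> Optional[float]:
    """Years from original publication to public availability, or None
    if the work has never been made publicly accessible."""
    if public is None:
        return None
    return max((public - published).days / 365.25, 0.0)

def work_score(published: date, public: Optional[date]) -> float:
    """Score one work on a 0-1 scale: 1.0 for immediate public access,
    decaying toward 0 as the access delay grows, 0.0 if never public."""
    delay = access_delay_years(published, public)
    if delay is None:
        return 0.0
    return 1.0 / (1.0 + delay)

def open_scholar_rating(works: List[Tuple[date, Optional[date]]]) -> float:
    """Average the per-work scores across a scholar's record, e.g. the
    works listed under an ORCID, each resolved via its DOI."""
    if not works:
        return 0.0
    return sum(work_score(pub, acc) for pub, acc in works) / len(works)

# The 2005 article from the example above, public after a 2-year embargo:
print(open_scholar_rating([(date(2005, 6, 1), date(2007, 6, 1))]))  # ~0.33
```

The reciprocal decay is just one possible choice; a step function keyed to common embargo thresholds, or per-work weights for licensing terms, would fit the same frame.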

I then heard from Theo Andrew, Open Scholarship Development Officer at the University of Edinburgh, who pointed to his and colleagues’ very similar 2012 project, the Open Access Index, aka #OAIndex.

This was funded by a startup-style mini-grant from Jisc Elevator, UK. The team cites as a precursor a May 2012 presentation on “Metrics for Openness,” given by David Nichols of the University of Waikato (NZ) at the University of Illinois’s Graduate School of Library and Information Science.

The OAIndex team produced a nice three-minute video explaining their project idea.

While the Open Access Index is broadly similar to what I had in mind with the Open Scholar Rating (OSR), there are a few differences I’d suggest:

1) OSR avoids being specifically tied to “Open Access,” which has quite specific and contested meanings in various contexts. It might be useful to have an OSR rating, or variant ratings, with different criteria, such as the availability of lay/public/educational research summaries or popular-media dissemination, or which apply in contexts where no Open Access technically exists.

2) “Open Scholar Rating” emphasizes that it describes a person’s body of work, not just any collection. While I can see it might be useful to have an index describing any collection, e.g. a department’s or university’s research output, I am particularly interested in OSR as an evaluative and incentivizing measure to help individual scholars embrace more open scholarly practices.


What do you think? Is this a useful or feasible idea, what are the (fatal?) problems, or how might it be done? Please feel free to comment using the Disqus box below (log in with Twitter, Facebook, or Google+, or use/create a Disqus account), or on Twitter with a link to this post and/or me @tmccormick and/or hashtag #oaindex; or email me at tmccormick (at) gmail.com.