
Researching alternative ways of measuring impact in Learning Technology

Last year I worked on finding a sustainable new home for the Open Access journal Research in Learning Technology. This was the third such transition I have worked on for ALT since 2008, and during this period I have contributed to the thinking around Open Access publishing in Learning Technology, often through ALT’s contribution to initiatives such as the 2012/13 ‘Gold Open Access Project’. This year I will be working with a new group set up by ALT to steer the future development of the journal:

A new Strategic Journal Working Group to help steer the development of the journal now being published by ALT in partnership with Open Academia has been established and we are grateful that representatives from other scholarly bodies who are publishing in a similar model have agreed to join the group to share best practice and support each other. The group is chaired by Prof Neil Morris, who also chairs the Editorial Board, and we are delighted to welcome colleagues from ascilite, ILTA and the OLC alongside our Editors.

As well as learning from each other, the group is going to be examining alternative ways of measuring impact (alternative to the established impact factor, which the journal has not been awarded to date). This is an area I am particularly interested in for three reasons:

Knowledge exchange happens elsewhere
Firstly, much of the most cutting-edge research and practice in Learning Technology is not published in formal journals. Even the most responsive Open Access peer-review system can’t necessarily keep pace with the quickly changing technology landscape we work in, so less formal forms of knowledge exchange, on blogs, on social media or in project reports, are often more important and useful.

Different media
Secondly, a lot of the most interesting ideas may be shared as videos, drawings, data visualisations and so on; in short, they may not easily fit into the traditional formats that measures like the impact factor were designed for. What we cite and where we link to can be harder to track, and as we use new technologies to communicate and share information, the way in which we cite and link to sources needs to adapt.

Crossing boundaries
Another aspect of what makes measuring impact interesting in Learning Technology is the way we cross boundaries of disciplines in research, policy and practice. Coming from a discipline like Anthropology, which has a hugely broad frame of reference depending on what you specialise in, I still find it challenging to capture the extent to which the work of Learning Technologists crosses boundaries.

So, keeping all this in mind, here is where I am in my work to research alternative ways of measuring impact…

I started with a blog post, ‘DOAJ launches the DOAJ Best Practice Guide’, which I came across because Research in Learning Technology was recently awarded the DOAJ best practice seal. It’s a useful new guide that provides a lot of helpful information to publishers, authors and policy makers interested in Open Access publishing. One of the resources it referred me to was a tool for authors called ThinkCheckSubmit. Whilst not specifically concerned with how the impact of a journal is measured, it does ask authors to check the publisher’s information, for example how the journal is indexed or whether the publisher is a member of OASPA or COPE.

Also in a blog post, this time on the Altmetric website, I discovered that “the next-generation of Altmetric donut is here!“. If you are new to altmetrics, here is how they explain what it’s all about:

Altmetrics are metrics and qualitative data that are complementary to traditional, citation-based metrics. They can include (but are not limited to) peer reviews on Faculty of 1000, citations on Wikipedia and in public policy documents, discussions on research blogs, mainstream media coverage, bookmarks on reference managers like Mendeley, and mentions on social networks such as Twitter.

Sourced from the Web, altmetrics can tell you a lot about how often journal articles and other scholarly outputs like datasets are discussed and used around the world. For that reason, altmetrics have been incorporated into researchers’ websites, institutional repositories, journal websites, and more.
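To make this concrete, here is a minimal sketch of how article-level attention data of this kind can be looked up for a single DOI. It assumes the free Altmetric details-page API at api.altmetric.com/v1/doi/, and the DOI and the response fields printed below are purely illustrative rather than a definitive list:

```python
# Minimal sketch: look up article-level altmetric data for a single DOI.
# Assumes the free Altmetric "details page" API (https://api.altmetric.com/v1/doi/<doi>);
# the endpoint is rate-limited and the fields printed below are illustrative only.
import json
import urllib.error
import urllib.request


def fetch_altmetrics(doi):
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return json.load(response)
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # no attention tracked for this DOI (yet)
        raise


if __name__ == "__main__":
    data = fetch_altmetrics("10.1080/example.doi")  # hypothetical DOI, for illustration
    if data is None:
        print("No altmetric data recorded for this DOI.")
    else:
        for key in ("title", "score", "cited_by_posts_count", "readers_count"):
            print(key, "=", data.get(key))
```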

Whilst I have been familiar with altmetrics for some time, I hadn’t actually come across the history in much detail, and I found it really helpful to visit http://altmetrics.org/manifesto/ and read up on some of the older posts. It gave me a better insight into the thinking that informed the development of the tools and policies involved. It also reminded me of the 2014/15 HEFCE publication “The Metric Tide”, which includes an executive summary, literature review and correlation analysis. Introducing its recommendations, the report states:

These recommendations are underpinned by the notion of ‘responsible metrics’ as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research. Responsible metrics can be understood in terms of the following dimensions:

  • Robustness: basing metrics on the best possible data in terms of accuracy and scope
  • Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment
  • Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results
  • Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system
  • Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.

The recommendations outlined in the report apply mostly to HEIs, funders and government bodies. There are some, however, that are directly aimed at publishers. These are:

Publishers should reduce emphasis on journal impact factors as a promotional tool, and only use them in the context of a variety of journal-based metrics that provide a richer view of performance. As suggested by DORA, this broader indicator set could include 5-year impact factor, EigenFactor, SCImago, editorial and publication times. Publishers, with the aid of Committee on Publication Ethics (COPE), should encourage responsible authorship practices and the provision of more detailed information about the specific contributions of each author. Publishers should also make available a range of article-level metrics to encourage a shift toward assessment based on the academic quality of an article rather than JIFs. (Publishers)

Publishers should mandate ORCID iDs and ISNIs and funder grant references for article submission, and retain this metadata throughout the publication lifecycle. This will facilitate exchange of information on research activity, and help deliver data and metrics at minimal burden to researchers and administrators. (Publishers and data providers)
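For context, the journal impact factor that these recommendations ask publishers to de-emphasise is a simple ratio: citations received in a given year to items published in the preceding two years, divided by the number of citable items published in those two years; the 5-year variant mentioned above just widens the window. A minimal sketch, with invented numbers purely for illustration:

```python
# Minimal sketch of the ratio behind the 2-year journal impact factor (JIF)
# and its 5-year variant. All numbers below are invented for illustration.
def impact_factor(citations_to_window, citable_items_in_window):
    # Citations received this year to items published in the preceding window,
    # divided by the number of citable items published in that window.
    return citations_to_window / citable_items_in_window


# 2-year JIF: e.g. 120 citations in 2018 to articles published in 2016-2017,
# with 80 citable items published in 2016-2017.
print(impact_factor(120, 80))   # 1.5

# 5-year variant: e.g. 400 citations in 2018 to articles published in 2013-2017,
# with 230 citable items published in 2013-2017.
print(round(impact_factor(400, 230), 2))  # 1.74
```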

Interestingly, there are a number of recommendations for HEFCE and future REF exercises which, as far as I can tell, do not necessarily seem to have been picked up, given the recent closure of HEFCE. Still, it is useful to revisit this report and its recommendations within the wider context of thinking about alternative ways of measuring impact.

I also came across an ebook that is new to me, published by Altmetric and Scholastica, entitled “The Evolution of Impact Indicators”. The whole publication looks extremely useful and has a lot of references that are relevant to my work, but the chapter that I am particularly interested in is called “Beyond the Impact Factor”. It discusses a number of alternatives to the impact factor, including the EigenFactor and the h-index. The h-index is probably the one I am most familiar with, but it’s also useful to remind myself of how it is tracked (a short sketch of the calculation itself follows the list of tools below):

Google Scholar: Google Scholar provides the h index for authors who have created a profile.

Publish or Perish: Publish or Perish is a software program that retrieves and analyzes academic citations from Google Scholar and provides the h index among other metrics. Publish or Perish is handy for obtaining the h index for authors who do not have a Google Scholar profile.

Scopus: Scopus provides a Citation Tracker feature that allows for the generation of a Citation Overview chart and an h index for publications and citations from 1970 to the present. The feature also allows for the removal of self-citations from the overall citation counts.

Web of Science: Web of Science allows for generation of the h index for publications and citations from 1970 to the present using the “Create Citation Report” feature.
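For reference, the h-index itself is straightforward to compute once you have a list of citation counts: an author (or journal) has an h-index of h if h of their outputs have each been cited at least h times. A minimal sketch, using made-up citation counts:

```python
# Minimal sketch: compute an h-index from a list of citation counts.
# The counts are invented for illustration; in practice they would come from
# a source such as Google Scholar, Scopus or Web of Science.
def h_index(citations):
    # Rank outputs from most to least cited, then find the largest rank h
    # where the h-th ranked output still has at least h citations.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h


print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3: three outputs have 3+ citations, but not four with 4+
```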

Now that I have started to refresh my memory of some recent developments, my next step will be to take this back to my desk, do some work on the journal itself and compare notes with my colleagues from the other publishers.
