
Which direction to take… researching alternative ways of measuring impact in Learning Technology

This is the second post about my current work on researching alternative ways of measuring impact in Learning Technology. Go back to the first post, in which I set out the context of my work and what I am particularly focused on.

Alongside the practical work with the ALT Journal Strategic Working Group, I am pleased that my proposal for a short session, ‘The quality of metrics matters: how we measure the impact of research in Learning Technology’, has been accepted for ILTA’s Annual Conference in Carlow, Ireland later this month.

In the meantime, I have been doing more reading and research into innovative ways of measuring impact, and this time my work has come up against some very practical questions, not least because as a UK-based publisher we are in the process of ensuring that the journal’s operations comply with the incoming GDPR legislation. Open source journal systems are not at the forefront of compliance, and like other independent publishers we are working as part of the community to move towards it.

At first glance, factors like GDPR may not seem closely related to how impact is measured, but in my thinking they are closely linked, as many of the opportunities for developing the journal depend on technical solutions that have data processing implications:

A convincing alternative
Discussing how important having an impact factor is quickly runs into the question of what the alternative looks like. As well as the technical challenges of implementing innovative tools or mechanisms for measuring impact (to which the new GDPR legislation adds another level of complexity), the sustainability and longevity of both the tool and the data storage need to be examined. For example, introducing a tool like Altmetric requires us to educate all stakeholders and ensure that the level of digital literacy required is not a barrier to making the tool useful. The user interface and experience need to be robust and practical, building confidence in alternative or innovative ways of measuring impact. With new tools and platforms being created all the time there is inevitably some churn, and building a truly convincing alternative requires a degree of consistency.
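To make the technical side of this more concrete, the sketch below shows what gathering article-level metrics for a single article might look like, using the public Crossref REST API (for citation counts) and the public Altmetric API (for attention data). This is a minimal sketch under those assumptions, not a description of our actual setup: the example DOI is a hypothetical placeholder, and the fields pulled out are illustrative.

```python
# A minimal sketch of pulling article-level metrics for one DOI.
# Assumes the public Crossref REST API and the public Altmetric API;
# both are rate-limited, and a journal's production tooling would
# need caching, error handling, and a data processing assessment.
import requests


def crossref_citations(doi: str) -> int:
    """Citation count as recorded by Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["message"].get("is-referenced-by-count", 0)


def altmetric_summary(doi: str) -> dict:
    """Altmetric attention data; returns {} if no attention is recorded."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:  # DOI has no Altmetric record yet
        return {}
    resp.raise_for_status()
    data = resp.json()
    return {"score": data.get("score"), "readers": data.get("readers_count")}


if __name__ == "__main__":
    doi = "10.1234/example.doi"  # hypothetical placeholder DOI
    print("Crossref citations:", crossref_citations(doi))
    print("Altmetric summary:", altmetric_summary(doi))
```

Even a small script like this illustrates the point above: every lookup sends article identifiers to third-party services, which is exactly the kind of data processing dependency that GDPR compliance work has to account for.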

Scrutiny of new vs. established ways of measuring impact
The kind of scrutiny we are applying to alternative ways of measuring impact isn’t as readily applied to the established method. There is, however, a critical discourse; for example, this recent post on the LSE Impact Blog argues:

Many research evaluation systems continue to take a narrow view of excellence, judging the value of work based on the journal in which it is published. Recent research by Diego Chavarro, Ismael Ràfols and colleagues shows how such systems underestimate and prove detrimental to the production of research relevant to important social, economic, and environmental issues. These systems also reflect the biases of journal citation databases which focus heavily on English-language research from the USA and north and western Europe. Moreover, topics covered by these databases often relate to the interests of industrial stakeholders rather than those of local communities. More inclusive research assessments are needed to overcome the ongoing marginalisation of some peoples, languages, and disciplines and promote engagement rather than elitism.

It’s really helpful to read this kind of perspective, but in my experience there is a strong sense that institutions and senior management place great importance on the established value of the impact factor. We have decided to carry out a consultation with stakeholders, but in the absence of a convincing alternative (which in our case we simply haven’t had time to implement as yet) I am not sure what we would be asking our stakeholders to compare or comment on. There is such a range of options being implemented by Open Access publishers that we can learn a lot from their example and work towards putting in place improvements that will help establish an alternative or a complementary perspective to the traditional impact factor.

Measuring beyond impact: peer review
Through our Editorial Board, the working group has now also begun to look at platforms like Publons, which promises to ‘integrate into the reviewer workflow so academics can track and verify every review and editorial contribution on the fly and in complete compliance with journal review policies’ (read more). It’s clearly a widely-used platform and some colleagues seem to be enthusiastic users, so it has made me consider what this kind of platform could add to the user experience alongside innovative tools to measure impact. As a journal that does not charge any APCs, the value proposition for authors is clear, but resources to improve the experience of reviewers are limited. More work is needed in this area to examine whether our efforts to improve how impact is measured could be complemented by enhancing the experience of peer review.


Read more (with thanks to everyone who’s sent me comments or links):

Information for publishers from DOAJ: 
DOAJ does not believe in the value of impact factors, does not condone their use on journal web sites, does not recognise partial impact factors, and advocates any official, alternative measure of use, such as article level metrics.

There is only one official, universally recognised impact factor that is generated by Thomson Reuters; it is a proprietary measure run by a profit-making organisation. This runs against the ethics and principles of open access and DOAJ is impact-factor agnostic. DOAJ does not collect metadata on impact factors. Displaying impact factors on a home page is strongly discouraged and DOAJ perceives this as an attempt to lure authors in a dishonest way.

Full information here.