Library Subject Guides

6. Measure Impact: Impact Metrics Explained

Author Impact Metrics

Author Impact Metrics are useful to:

  • Find out how much impact a particular author has had, by showing to what extent their overall body of work has been cited.
  • Compare authors' influence by looking at their total numbers of citations.

 

Check the following tabs to see what author impact metrics are available and find out more.

Icons made by Freepik from www.flaticon.com


The H-index is a measure of both the number of publications an individual has produced and how often those publications are cited.

An author's h-index, or Hirsch index, is the largest number n for which the author has published at least n papers that have each been cited at least n times. For example:

  • 5 papers cited 1, 2, 3, 4, 5 times: h-index is 3 (3 papers cited at least 3 times)
  • 5 papers cited 1, 1, 2, 2, 2 times: h-index is 2 (at least 2 papers cited 2 times)
  • 5 papers cited 1, 1, 1, 1, 20 times: h-index is 1 (at least 1 paper cited 1 time)
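
The rule above can be sketched in a few lines of Python. This is illustrative only; in practice the citation counts would come from a database such as Scopus or Google Scholar:

```python
def h_index(citations):
    """h-index: the largest n such that at least n papers
    have each been cited at least n times."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank   # this paper and all above it clear the bar
        else:
            break
    return h

# The three examples from the text:
print(h_index([1, 2, 3, 4, 5]))    # 3
print(h_index([1, 1, 2, 2, 2]))    # 2
print(h_index([1, 1, 1, 1, 20]))   # 1
```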

 

How to find your H-index

Scopus - Select author search or search by ORCiD, then click View Citation Report.

Google Scholar - Create a Google Scholar profile, which will generate your h-index automatically.

Publish or Perish - This free software draws on a variety of data sources, including Google Scholar and Microsoft Academic, and can calculate a wide range of author metrics.

 

 

Pros:

  • Allows for direct comparisons of researchers within disciplines
  • Measures quantity and impact by a single value
  • Attempts to minimize the influence of one or two articles that have been cited numerous times
  • Attempts to minimize the influence of having published numerous articles with few or no citations

Cons:

  • Does not give an accurate measure for early-career researchers
  • Is calculated only from articles indexed in specific databases; if a researcher publishes in a journal that is not indexed, that article and its citations will not count towards the h-index
  • Cannot be compared across disciplines or subjects, as h-scores in one field are often systematically higher than in another
  • Like any citation metric, is open to manipulation through practices such as self-citation and reciprocal citing among groups of academics
  • Ignores author order: an article may have many authors, and the h-index does not distinguish lead authors from minor contributors
  • Does not evaluate the quality of the ideas presented in highly cited papers

The Field-Weighted Citation Impact (FWCI) score for an author's combined Scopus outputs can be viewed in the SciVal database. It shows how the citation counts of the author's outputs compare to those of similar outputs in the same field and timeframe.

A score of 1.00 means the author is cited exactly as would be expected; a score greater than 1.00 means the author is performing better than expected.

Important Points:

  • Because the FWCI comes from the Scopus database, only documents within the database (1996 to the present) will have a FWCI.
  • Because the FWCI includes field normalization, theoretically, the score should be a better indicator of performance than a raw citation count. 
  • The exact way the FWCI is calculated is not made publicly available.

The g-Index

There are claims that the g-index is more accurate than the h-index because it gives more weight to highly cited articles. To calculate it, rank a set of articles in decreasing order of citation count; the g-index is then the largest number g such that the top g articles together received at least g² citations. It is not as widely accepted as the h-index.
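
A sketch of the definition, using hypothetical citation counts, shows how a single heavily cited paper lifts the g-index above the h-index:

```python
def g_index(citations):
    """g-index: the largest g such that the g most-cited papers
    together have at least g**2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, count in enumerate(cites, start=1):
        total += count
        if total >= rank * rank:   # cumulative citations vs. rank squared
            g = rank
    return g

# Citation counts [30, 5, 4, 2, 1]: the h-index is 3,
# but the 30-citation paper pushes the g-index up to 5.
print(g_index([30, 5, 4, 2, 1]))  # 5
```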

The i10-index

The i10-index is the number of an author's publications with at least 10 citations each. It was created by Google Scholar and is used in Google's My Citations feature. This metric is simple, free and straightforward to calculate, but is only used by Google Scholar.
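
The i10-index is a one-line count (hypothetical citation counts shown):

```python
def i10_index(citations):
    """i10-index: number of publications with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

print(i10_index([25, 12, 10, 9, 3]))  # 3
```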

There are many other metrics that can be used. The Publish or Perish software can calculate many of them and provides short descriptions of each.

Article Impact Metrics

Article Impact Metrics are useful to:

  • Find out how much impact a particular article has had, by showing which other authors based some work upon it or cited it as an example within their own papers.
  • Find out more about a field or topic; i.e. by reading the papers that cite a seminal work in that area.

 

Check the following tabs to see what article impact metrics are available and find out more.


Once an article is published, different online tools keep track of the number of times this article is cited by other academic authors in their own publications.

The number of times an article is cited can indicate how important it is in a particular field of study, how controversial it is, or simply how popular the topic is.

The number of citations is useful, but it should not be the only criterion used to evaluate an article's impact.

Please note that:

  • Articles in the medical field are cited the most
  • Science articles also tend to have high citation rates
  • Humanities and social science articles are cited the least
  • Many articles are never cited
  • Topics of regional interest may not be cited as often as articles of worldwide impact
  • Self-citations may affect the total number of citations recorded
  • Some articles are cited soon after publication, while others may not be cited until five or ten years later


The Field-Weighted Citation Impact (FWCI) score comes from the Scopus database and shows how the article's citation count compares to similar articles in the same field and timeframe.  

A score of 1.00 means the article is cited as would be expected; a score greater than 1.00 means it is doing better than expected, and a score less than 1.00 means it is underperforming.

Important Points:

  • Because the FWCI comes from the Scopus database, only documents within the database (1996 to the present) will have a FWCI.
  • Because the FWCI includes field normalization, theoretically, the score should be a better indicator of performance than a raw citation count. 
  • The exact way the FWCI is calculated is not made publicly available.

Dimensions is a linked research knowledge system developed by Digital Science in collaboration with over 100 leading research organizations around the world. It brings together grants, publications, citations, alternative metrics, clinical trials, patents and policy documents so that users can find relevant information faster, analyze the academic and broader outcomes of research, and gather insights to inform future strategy. For more information, visit https://dimensions.ai

The Field Citation Ratio (FCR) is a citation-based measure of the scientific influence of one or more articles. It is calculated by dividing the number of citations a paper has received by the average number received by documents published in the same year and in the same Fields of Research (FoR) category.

The FCR is calculated for all publications in Dimensions which are at least 2 years old and were published in 2000 or later. Values are centered around 1.0 so that a publication with an FCR of 1.0 has received exactly the same number of citations as the average, while a paper with an FCR of 2.0 has received twice as many citations as the average for the Fields of Research code(s).
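
The FCR definition above amounts to a simple ratio. In this hedged sketch, `comparison_set` stands for the citation counts of same-year, same-FoR documents; it is an assumed input here, since Dimensions computes the comparison set internally:

```python
def field_citation_ratio(paper_citations, comparison_set):
    """FCR sketch: a paper's citation count divided by the average
    citation count of documents published in the same year and
    Fields of Research category (the comparison set)."""
    average = sum(comparison_set) / len(comparison_set)
    return paper_citations / average

# A paper with 10 citations in a field averaging 5 citations: FCR = 2.0,
# i.e. twice as many citations as the average for its FoR code and year.
print(field_citation_ratio(10, [4, 6, 5, 5]))  # 2.0
```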

 

The Relative Citation Ratio (RCR) is a citation-based measure of the scientific influence of a publication. It is calculated as the citations of a paper, normalized to the citations received by NIH-funded publications in the same area of research and year.

The area of research is defined by the corpus of publications co-cited with the article of interest (the “co-citation network”) - it is therefore dynamically defined. In other words, the RCR indicates how a publication has been cited relative to other publications in its co-citation network and this is assumed to be reflective of the article’s area of research.

The RCR is calculated for all PubMed publications which are at least 2 years old. Values are centered around 1.0 so that a publication with an RCR of 1.0 has received the same number of citations as would be expected based on the NIH-norm, while a paper with an RCR of 2.0 has received twice as many citations as expected. 

Journal Impact Metrics

Journal Impact Metrics are useful to:

  • Find out how much impact a particular journal has, by showing to what extent articles in the journal are cited.
  • Determine where to publish to maximise the impact of a piece of research

 

Check the following tabs to see what journal impact metrics are available and find out more.


CiteScore is Elsevier's answer to Clarivate's Journal Impact Factor (JIF) and is based on Scopus data rather than that of Web of Science. CiteScore uses a three-year window: for example, the 2016 CiteScore counts the citations received in 2016 to documents published in 2013, 2014 or 2015, and divides this by the number of documents published in those three years.
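
The three-year CiteScore window reduces to one division. A sketch with hypothetical counts (the real inputs come from Scopus):

```python
def citescore(citations_received, documents_published):
    """CiteScore sketch: citations received in one year to documents
    from the previous three years, divided by the number of those
    documents."""
    return citations_received / documents_published

# e.g. 1200 citations in 2016 to 400 documents published 2013-2015:
print(citescore(1200, 400))  # 3.0
```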

Traditionally, researchers are encouraged to publish in journals with higher impact factors in order to raise their research profiles. The most widely quoted measure for journals is the Journal Impact Factor (JIF or IF), found in the Journal Citation Reports (JCR) and based on Web of Science data (Note: UC no longer subscribes to JCR or Web of Science).

  • The IF measures how often, on average, a journal's recent articles are cited in a given year: citations received in that year to items published in the two preceding years, divided by the number of citable items from those years
  • Most journals are rated as an impact of between 1 and 2
  • Many publishers will also provide information about impact of their journals on the journal web pages
  • Impact factors measure the impact of the journal not of individual articles
  • The most highly rated journals are the weekly interdisciplinary journals Nature and Science
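
The standard two-year JIF calculation is another simple ratio. A worked sketch with hypothetical numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """JIF sketch: citations received in year Y to items the journal
    published in years Y-1 and Y-2, divided by the number of citable
    items the journal published in those two years."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# 300 citations in 2016 to 200 items published in 2014-2015: JIF = 1.5
print(impact_factor(300, 200))  # 1.5
```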

SCImago Journal Rank (SJR) measures the scientific prestige of a scholarly source by assigning a relative score based on the number of citations it receives and the relative prestige of the journals those citations come from. A journal's SJR score for a given year is based on its citation performance over the previous three years. It can be used as an alternative to the Impact Factor and is based on Scopus data rather than that of Web of Science.

Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations according to the total number of citations in a subject field. Developed using Scopus data, SNIP adjusts for how often articles in a particular field are typically cited and for how quickly a paper in that field is likely to have an impact, calculated over a set time period. It is therefore intended to be a fairer measure of a journal's impact than metrics based purely on raw citation counts.

SNIP offers the ability to benchmark and compare journals from different subject areas. This is especially helpful to researchers publishing in multidisciplinary fields.

The Journal h-index is calculated in the same way as an author h-index but for all the publications in a particular journal. See the author h-index section above for a full description of how an h-index is calculated.

Alternative Impact Metrics

Alternative Impact Metrics are useful to:

  • Find out how research is being used and cited outside of traditional academic sources.
  • Complement, not replace, traditional measures of impact.
  • Track how research is being cited in sources such as online news, social media and academic networking sites.

 

Check the following tabs to see what alternative impact metrics are available and find out more.


Altmetrics, or alternative metrics, uses new sources of online data to measure the impact of academic researchers' publications. These are meant to complement, not replace, traditional measures of impact. Examples include:

  • Number of times an article has been viewed or downloaded from a journal website, or database
  • Shares on academic networking sites such as Mendeley and CiteULike
  • Number of times an article has been exported to a citation manager
  • Number of times an article has been emailed or shared on Facebook or Twitter or other social media sites 
  • Number of times an article is mentioned in the mainstream media

Altmetric.com

Tracks social media sites, newspapers, and magazines. The Altmetric score is based on three main factors: the number of individuals mentioning a paper, where the mentions occurred (e.g. in a newspaper or a tweet), and how often the author of each mention talks about scholarly articles. Adopted by Springer, Nature Publishing Group and BioMed Central.

  • Symplectic Elements (UC academics only)

  • Install an Altmetric Bookmarklet to capture this data from Google Scholar.

Plum Analytics

Can be viewed in Scopus and some EBSCO databases (e.g. PsycInfo and Business Source Complete).

Figshare

Allows researchers to publish all of their data in a citable, searchable and sharable manner. All data is persistently stored online under the most liberal Creative Commons licence, waiving copyright where possible. Outputs display altmetric badges.

ImpactStory

An open-source altmetrics tool that draws data from Facebook, Twitter, CiteULike, Delicious, PubMed, Scopus, CrossRef, ScienceSeeker, Mendeley, Wikipedia, SlideShare, Dryad, and figshare. Use Firefox to create your free account. Offers a free widget that can be embedded into repositories.

Kudos

Kudos is a free service through which you can broaden readership and increase the impact of your research. Kudos also provides a unique one-stop shop for multiple metrics relating to your publications: page views, citations, full text downloads and altmetrics. 

Mendeley

A social reference manager that tracks readership of scholarly articles posted to the site.

PLOS ALMs (Article Level Metrics)

Custom searches to track the access and reuse of articles published in PLOS journals.

To improve your altmetric scores you need to create an online presence and share information about your work and your research outputs online.

There are many ways to do this such as:

Blog

Blog about your articles or work and ask others to write blog posts about your work.

Tweet

Become active on Twitter and tweet links to your articles and other work.

Use social networks for researchers

Create a profile and add your publication list to social networking sites for researchers, such as Academia.edu, ResearchGate and Mendeley.

Register for researcher IDs

Register for IDs such as an ORCID iD and a ResearcherID, and keep your list of publications up to date.

Make all your research outputs available online

Make all your research outputs, including data, code, videos and presentations, available online using content-hosting tools such as YouTube, SlideShare and figshare.

Deposit your work in an institutional or subject repository

We have our own repository at Canterbury.

Attempts to use data derived from social media sources as measures of research influence are intriguing efforts to refine and improve accepted methods, which are widely seen as unsatisfactory for various reasons. It is important to note that these attempts may bring real improvement, or may simply generate more numbers and graphs.

Altmetrics, like established scholarly metrics, measure the activity surrounding a particular scholarly work, which is in turn taken as an indication of the work's scholarly significance. In that respect, it should not be assumed that altmetrics show an altogether different or "better" picture than that revealed through other scholarly metrics. Altmetrics merely seek to provide a more complete version of that picture.

Concerns have also been raised about the manipulation of these metrics. A paper published in December 2012, linked below, examined Google Scholar's services in particular and concluded that it was quite easy to artificially inflate a paper's scores as determined by Google Scholar's metrics. For further reading on these topics, follow the links below:

Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting
Rise of 'Altmetrics' Revives Questions About How to Measure Impact of Research
Altmetrics are the central way of measuring communication in the digital age but what do they miss?

Who to Contact

Kiera Tauro

Internal Phone: 93904

Other Viewpoints 

Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51-61. https://doi.org/10.1089/ees.2016.0223
Erren, T. C., & Groß, J. V. (2016). Research metrics: What about weighted citations? Scientometrics, 107(1), 315-316. https://doi.org/10.1007/s11192-016-1841-5
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a
Jarwal, S. D., Brion, A. M., & King, M. L. (2009). Measuring research quality using the journal impact factor, citations and ‘Ranked Journals’: Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management, 31(4), 289-300. https://doi.org/10.1080/13600800903191930
MacRoberts, M. H., & MacRoberts, B. R. (2018). The mismeasure of science: Citation analysis. Journal of the Association for Information Science and Technology, 69(3), 474-482. https://doi.org/10.1002/asi.23970
Stephan, P., Veugelers, R., & Wang, J. (2017). Reviewers are blinkered by bibliometrics. Nature, 544(7651), 411-412. https://doi.org/10.1038/544411a
Teixeira da Silva, J. A. (2017). The Journal Impact Factor (JIF): Science Publishing’s Miscalculating Metric. Academic Questions, 30(4), 433-441. https://doi.org/10.1007/s12129-017-9671-3