Check the following tabs to see what author impact metrics are available and find out more.
The h-index is a measure of the number of publications an author has produced and how often they have been cited. An author's h-index, or Hirsch index, is the largest number n for which the author has published at least n papers that have each been cited at least n times. For example, an author with an h-index of 5 has published at least 5 papers that have each been cited at least 5 times.
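As a minimal sketch (assuming only a plain list of per-paper citation counts; the real figures come from databases such as Scopus or Google Scholar), the calculation looks like this:

```python
# Minimal sketch: compute an h-index from a list of per-paper citation counts.
# The input list is hypothetical; real counts come from Scopus, Google Scholar, etc.
def h_index(citations):
    """Largest n such that at least n papers have n or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4 -> four papers each have at least 4 citations
```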
How to find your h-index:
Scopus - select Author search (or search by ORCID) and click to view the Citation Report.
Google Scholar - create a Google Scholar profile, which will generate your h-index automatically.
Publish or Perish - based on a variety of data sources, including Google Scholar and Microsoft Academic, this free software can calculate a range of author metrics.
The Field-Weighted Citation Impact (FWCI) score for an author's combined Scopus outputs can be viewed in the SciVal database and shows how the citation counts of the author's outputs compare to those of similar outputs in the same field and timeframe.
A score of 1.00 means the author is being cited as expected; greater than 1.00 means the author is performing better than expected.
There are claims that the g-index is more accurate than the h-index because it gives more weight to highly cited articles, though it is not as widely accepted. The g-index is calculated by ranking a set of articles in decreasing order of citation count; it is then the (unique) largest number g such that the top g articles together received at least g² citations.
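A minimal sketch of that definition, again assuming only a hypothetical list of per-paper citation counts:

```python
# Minimal sketch: compute a g-index from a list of per-paper citation counts.
def g_index(citations):
    """Largest g such that the top g papers together have at least g**2 citations."""
    ranked = sorted(citations, reverse=True)
    running_total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        running_total += cites
        if running_total >= rank * rank:
            g = rank
    return g

print(g_index([10, 8, 5, 4, 3]))  # 5 -> top 5 papers have 30 citations, and 30 >= 25
```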
The i10-index is the number of publications with at least 10 citations. It was created by Google Scholar and is used in Google's My Citations feature. This author metric is simple, free and straightforward to calculate, but it is used only in Google Scholar.
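Since it is just a count, a sketch of the calculation is a one-liner (citation counts below are hypothetical):

```python
# Minimal sketch: the i10-index is the number of papers with 10+ citations.
def i10_index(citations):
    return sum(1 for cites in citations if cites >= 10)

print(i10_index([25, 12, 10, 9, 3]))  # 3 -> three papers have at least 10 citations
```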
There are many other metrics that can be used. The Publish or Perish software can calculate many of them and provides short descriptions of each.
Check the following tabs to see what article impact metrics are available and find out more.
Once an article is published, different online tools keep track of the number of times this article is cited by other academic authors in their own publications.
The number of times an article is cited can indicate how important it is in a particular field of study, or how controversial it is, or just how popular the topic is.
The number of citations is useful, but it should not be the only criterion used to evaluate an author's impact.
The Field-Weighted Citation Impact (FWCI) score comes from the Scopus database and shows how the article's citation count compares to similar articles in the same field and timeframe.
A score of 1.00 means the article is being cited as expected; greater than 1.00 means it is performing better than expected; and less than 1.00 means it is underperforming.
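In essence the score is a simple ratio. A minimal sketch, using an invented baseline (Scopus derives the real expected values from its own field, document-type and publication-year data):

```python
# Minimal sketch: FWCI = actual citations / expected citations for similar
# outputs (same field, type and timeframe). Both figures below are invented.
article_citations = 18
expected_citations = 12.0  # hypothetical average for comparable articles

fwci = article_citations / expected_citations
print(round(fwci, 2))  # 1.5 -> cited 50% more than expected
```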
The Field Citation Ratio (FCR) is a citation-based measure of scientific influence of one or more articles. It is calculated by dividing the number of citations a paper has received by the average number received by documents published in the same year and in the same Fields of Research (FoR) category.
The FCR is calculated for all publications in Dimensions which are at least 2 years old and were published in 2000 or later. Values are centered around 1.0 so that a publication with an FCR of 1.0 has received exactly the same number of citations as the average, while a paper with an FCR of 2.0 has received twice as many citations as the average for the Fields of Research code(s).
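A minimal sketch of that division, using an invented cohort of same-year, same-FoR papers:

```python
# Minimal sketch: FCR = a paper's citations / mean citations of papers
# published in the same year and Fields of Research (FoR) category.
from statistics import mean

paper_citations = 30
cohort_citations = [5, 10, 15, 20, 25]  # hypothetical same-year, same-FoR papers

fcr = paper_citations / mean(cohort_citations)
print(fcr)  # 2.0 -> twice the average for its field and year
```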
Relative Citation Ratio (RCR) is a citation-based measure of scientific influence of a publication. It is calculated as the citations of a paper, normalized to the citations received by NIH-funded publications in the same area of research and year.
The area of research is defined by the corpus of publications co-cited with the article of interest (the “co-citation network”) - it is therefore dynamically defined. In other words, the RCR indicates how a publication has been cited relative to other publications in its co-citation network and this is assumed to be reflective of the article’s area of research.
The RCR is calculated for all PubMed publications which are at least 2 years old. Values are centered around 1.0 so that a publication with an RCR of 1.0 has received the same number of citations as would be expected based on the NIH-norm, while a paper with an RCR of 2.0 has received twice as many citations as expected.
Check the following tabs to see what journal impact metrics are available and find out more.
CiteScore is Elsevier's answer to Clarivate's Journal Impact Factor (JIF) and is based on Scopus data rather than Web of Science data. CiteScore uses a three-year window: the 2016 CiteScore, for example, counts the citations received in 2016 by documents published in 2013, 2014 or 2015, and divides this by the number of documents published in 2013, 2014 and 2015.
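A minimal sketch of the arithmetic, with invented figures:

```python
# Minimal sketch: CiteScore 2016 = citations received in 2016 by documents
# published 2013-2015, divided by the number of those documents.
# Both figures below are invented.
citations_in_2016 = 1200      # citations in 2016 to items published 2013-2015
documents_2013_to_2015 = 400  # items published in 2013, 2014 and 2015

citescore_2016 = citations_in_2016 / documents_2013_to_2015
print(citescore_2016)  # 3.0
```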
Traditionally, researchers have been encouraged to publish in journals with higher impact factors in order to raise their research profiles. The most widely quoted measure for journals is the Journal Impact Factor (JIF, or IF), found in the Journal Citation Reports (JCR) and based on Web of Science data (note: UC no longer subscribes to JCR or Web of Science).
SCImago Journal Rank (SJR) measures the scientific prestige of a scholarly source by assigning a relative score based on the number of citations it receives and the relative prestige of the journals those citations come from. A journal's SJR score for a given year is based on its citation performance over the previous three years. It can be used as an alternative to the Impact Factor and is based on Scopus data rather than Web of Science data.
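The prestige weighting means a citation counts for more when it comes from a highly cited journal, much like PageRank. The sketch below is a heavily simplified, hypothetical illustration of that idea only; it is not the actual SJR algorithm, which adds damping, normalisation by article counts and other refinements:

```python
# Heavily simplified, hypothetical illustration of prestige-weighted ranking.
# NOT the real SJR algorithm: it only shows that a citation is worth more
# when it comes from a high-prestige journal. The citation matrix is invented.
# cites[i][j] = citations journal i gives to journal j
cites = [
    [0, 4, 1],
    [2, 0, 3],
    [1, 1, 0],
]
n = len(cites)
prestige = [1.0 / n] * n  # start every journal with equal prestige

for _ in range(50):  # iterate until the scores settle
    new = [0.0] * n
    for i in range(n):
        given = sum(cites[i])  # total citations journal i hands out
        for j in range(n):
            if given:
                # journal i passes its prestige on in proportion to its citations
                new[j] += prestige[i] * cites[i][j] / given
    prestige = new

print([round(p, 3) for p in prestige])  # relative prestige of the three journals
```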
Source Normalized Impact per Paper (SNIP) measures contextual citation impact by weighting citations based on the total number of citations in a subject field. SNIP was developed using Scopus data and adjusts citation impact measures by taking into account how often articles are cited in a particular field and how quickly a paper is likely to have an impact. It is intended to be a fairer measure of a journal's impact than metrics based purely on citation counts, because it accounts for the academic field in which the journal publishes and is calculated over a set time period.
SNIP offers the ability to benchmark and compare journals from different subject areas. This is especially helpful to researchers publishing in multidisciplinary fields.
The Journal h-index is calculated in the same way as an author h-index but for all the publications in a particular journal. See the author h-index section above for a full description of how an h-index is calculated.
Check the following tabs to see what alternative impact metrics are available and find out more.
Altmetrics, or alternative metrics, use new sources of online data to measure the impact of academic researchers' publications. They are meant to complement, not replace, traditional measures of impact. Examples include:
Altmetric tracks social media sites, newspapers and magazines. The Altmetric score is based on three main factors: the number of individuals mentioning a paper, where the mentions occurred (e.g. a newspaper, a tweet), and how often the author of each mention talks about scholarly articles. It has been adopted by Springer, Nature Publishing Group and BioMed Central.
Install an Altmetric Bookmarklet to capture this data from Google Scholar.
Can be viewed in Scopus and some Ebsco databases (e.g. PsycInfo and Business Source Complete).
Allows researchers to publish all of their data in a citable, searchable and sharable manner. All data is persistently stored online under the most liberal Creative Commons licence, waiving copyright where possible. Outputs display altmetric badges.
Is an open-source altmetric tool which draws data from Facebook, Twitter, CiteULike, Delicious, PubMed, Scopus, CrossRef, scienceseeker, Mendeley, Wikipedia, slideshare, Dryad, and figshare. Use Firefox to create your free account. Offers a free widget that can be embedded into repositories.
Kudos is a free service through which you can broaden readership and increase the impact of your research. Kudos also provides a unique one-stop shop for multiple metrics relating to your publications: page views, citations, full text downloads and altmetrics.
A social reference manager that tracks readership of scholarly articles posted to the site.
Custom searches to track the access and reuse of articles published in PLOS journals.
To improve your altmetric scores you need to create an online presence and share information about your work and your research outputs online.
There are many ways to do this such as:
Blog about your articles or work and ask others to write blog posts about your work.
Become active on Twitter and tweet links to your articles and other work.
Create a profile and add your publication list to social networking sites for researchers, such as Academia.edu, ResearchGate and Mendeley.
Register for researcher identifiers such as an ORCID iD or ResearcherID, and keep your list of publications up to date.
Make all your research outputs, including data, code, videos and presentations, available online using content-hosting tools such as YouTube, SlideShare and figshare.
We have our own repository at Canterbury.
Attempts to use data derived from social media sources as measures of research influence are intriguing efforts to refine and improve accepted methods, which are widely seen as unsatisfactory for various reasons. It is important to note that these attempts may bring real improvement, or may simply generate more numbers and graphs.
Altmetrics, like established scholarly metrics, measure the activity surrounding a particular scholarly work, which is in turn taken as an indication of the work's scholarly significance. In that respect, it should not be assumed that altmetrics show an altogether different or "better" picture than that revealed through other scholarly metrics. Altmetrics merely seek to provide a more complete version of that picture.
Concerns have also been raised about the manipulation of these metrics. A paper published in December 2012, linked below, examined Google Scholar's services in particular and concluded that it was quite easy to artificially inflate a paper's scores as determined by Google Scholar's metrics. For further reading on these topics, follow the links below:
Manipulating Google Scholar Citations and Google Scholar Metrics: simple, easy and tempting
Rise of 'Altmetrics' Revives Questions About How to Measure Impact of Research
Altmetrics are the central way of measuring communication in the digital age but what do they miss?
Who to contact
Kiera Tauro
Phone: +64 3 369 3914
Internal phone: 93904
Other Viewpoints
Edwards, M. A., & Roy, S. (2017). Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science, 34(1), 51-61. https://doi.org/10.1089/ees.2016.0223
Erren, T. C., & Groß, J. V. (2016). Research metrics: What about weighted citations? Scientometrics, 107(1), 315-316. https://doi.org/10.1007/s11192-016-1841-5
Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520(7548), 429-431. https://doi.org/10.1038/520429a
Jarwal, S. D., Brion, A. M., & King, M. L. (2009). Measuring research quality using the journal impact factor, citations and 'Ranked Journals': Blunt instruments or inspired metrics? Journal of Higher Education Policy and Management, 31(4), 289-300. https://doi.org/10.1080/13600800903191930
MacRoberts, M. H., & MacRoberts, B. R. (2018). The mismeasure of science: Citation analysis. Journal of the Association for Information Science and Technology, 69(3), 474-482. https://doi.org/10.1002/asi.23970
Stephan, P., Veugelers, R., & Wang, J. (2017). Reviewers are blinkered by bibliometrics. Nature, 544(7651), 411-412. https://doi.org/10.1038/544411a
Teixeira da Silva, J. A. (2017). The Journal Impact Factor (JIF): Science publishing's miscalculating metric. Academic Questions, 30(4), 433-441. https://doi.org/10.1007/s12129-017-9671-3