What are violations of research integrity?
Living up to the standards of research integrity is a professional responsibility. It is therefore important for all researchers to be aware of what is considered a violation of research integrity. Knowing what constitutes a violation can help you when you find yourself in a challenging situation, when you are unsure of the right choice, or when you are asked to do something you are not comfortable with. This knowledge is also useful when attending or delivering training and when supporting or collaborating with colleagues who may not be aware of it. Finally, (alleged) violations are the subject of investigations by the Committee for Research Integrity.
Traditionally, discussions of violations of research integrity have focused on fabrication, falsification and plagiarism (FFP). In recent years, however, the scope has widened beyond actions that directly affect the scientific record.
The ALLEA Code defines the framework of good and bad research practices for all subscribers to the code in Europe and beyond, including all Flemish universities. The code identifies three types of violations of research integrity:
- Research misconduct: fabrication, falsification and plagiarism (FFP categorisation)
- Violations of good research practice that distort the research record or damage the integrity of the research process or of researchers
- Other unacceptable practices (see the non-exhaustive list on p. 10 of the ALLEA Code).
Honest mistakes
Reviewers, other researchers and integrity committees may use a variety of techniques to check whether research data are reliable and credible. Honest mistakes can also be picked up during these checks. Typical examples of honest mistakes include:
- Typos
- Inclusion of the wrong figure or graph
- Forgetting to account for missing data
- Mistakes during the data collection, such as errors in measurements, mislabelling of samples, or problems with data recording.
- Errors in the statistical analysis of data, such as miscalculations, misinterpretation of statistical tests, or overlooking assumptions of statistical methods.
- Misunderstanding the context of existing studies.
- Unintentionally misrepresenting the findings of others.
Note that all of the above occur unintentionally, as errors.
These possibilities underline the importance of being clear and accurate in conducting and reporting research: if the description of the research omits details such as those referred to above, it could appear that data have been falsified or fabricated.
Honest mistakes are not considered a violation of research integrity if they are made unintentionally, if researchers take precautions to avoid them, and if, as soon as they become aware of them, researchers communicate transparently and take the necessary steps to correct them. Researchers must develop working habits that allow them to work carefully and avoid (repeatedly) making mistakes.
Some numbers and how things evolved
Fanelli carried out a meta-analysis of surveys about misconduct (Fanelli, 2009). He found that 1.97% of researchers admitted having fabricated, falsified or modified data or results at least once. In addition, 34% admitted to other questionable practices. Furthermore, the analysis suggested that 14% of respondents reported that colleagues had been involved in falsification, and up to 72% reported that colleagues had been involved in other questionable research practices.
Fanelli’s research raises some interesting questions about researchers’ willingness to admit to misconduct, whether committed by themselves or by others:
- Researchers seem more willing to identify misconduct when it is committed by others. This may be explained by the ‘Muhammad Ali effect’, whereby people perceive themselves as more honest than others. Indeed, researchers may even be overzealous in judging others: in one study, 24% of supposed integrity violations observed by respondents did not meet an official definition of misconduct (Titus et al., 2008).
- Self-reporting of misconduct may be decreasing, possibly alongside increased training and awareness of integrity issues. However, integrity training does not appear to reduce the propensity to commit misconduct. It may therefore be that researchers are aware of misconduct, but are more likely to identify it in others than to admit to it themselves.
A 2018 survey in Belgium about research misconduct in the biomedical sciences confirmed Fanelli’s findings (Godecharle, 2018). The more recent Dutch National Survey on Research Integrity (NSRI) found the prevalence of fabrication and falsification to be 4.3% (95% CI: 2.9, 5.7) and 4.2% (95% CI: 2.8, 5.6), respectively (Gopalakrishna et al., 2022). The survey also found the prevalence of individual questionable research practices (QRPs) ranging from 0.6% (95% CI: 0.5, 0.9) to 17.5% (95% CI: 16.4, 18.7), with 51.3% (95% CI: 50.1, 52.5) of respondents engaging frequently in at least one QRP. It is difficult to interpret the reasons behind these higher prevalences compared to the 2009 Fanelli study. One possible explanation is an increased awareness among researchers of what constitutes violations of research integrity and questionable research practices, and/or a greater willingness to admit engaging in these practices.
Who is involved?
Academic research can have a large impact on the general public. Overviews of the history of misconduct (LaFollette, 2000) suggest that the public tends to hold scientists in high regard and to view fraud as the action of a few ‘bad apples’. In some cases, the public has tended to deny that serious fraud has taken place. This seems particularly prevalent when scientists have a high public profile and work in areas that the public considers important. Andrew Wakefield (who falsely claimed a link between the measles, mumps and rubella (MMR) vaccine and autism in children in a high-profile publication) still has a career as an anti-vaccine advocate, despite the comprehensive debunking of his work and his removal from the medical register.
Fanelli suggested that scientists are more likely to report fraud by others than to admit to it themselves (Fanelli, 2009). He suggested that better education and greater awareness of misconduct mainly encourage researchers to report the misconduct of their colleagues, since admitting to it themselves would damage their own reputation. At the same time, scientists have tended to resist political and public pressure for external regulation of misconduct. This has led to the dominance of self-regulation and peer regulation of misconduct in many countries.
In the past, journals and publishers demonstrated an ambivalent attitude towards violations of research integrity, sometimes ignoring problems with work they had published. Fortunately, more and more journals and publishers are taking action to correct and retract faulty research. In addition, publishers and journals now often issue integrity statements (for example, declaring adherence to the guidelines of the Committee on Publication Ethics (COPE)) and good-practice policies that communicate their expectations to researchers considering submitting their work for publication.
Interactions between governments and the scientific community on research integrity have often been marked by conflict. When a government has suggested possible legislation to deal with violations of research integrity (even as a last resort), some scientists have responded in an outspoken way, arguing against government interference with science. Some sources suggest that this has been counterproductive (LaFollette, 2000), with these outspoken objections drowning out more constructive proposals to address violations of research integrity. One disputed claim is that certain countries (particularly Asian countries) have a competitive advantage because their science is less regulated. In fact, it has been argued that the high-profile Hwang affair in South Korea was actually a response to increased regulation and governance of science: the researchers involved wanted to undermine ethics regulation in their field by presenting spectacular (and ultimately fraudulent) results that would create public pressure to allow their research to continue (Bogner & Menz, 2006).