• Research funding
• Statistics in research
• Preregistration and registered reports
• Possible flaws in a study design
• Reproducibility and replicability of research
• Design and methodology
• Design and conduct

Research funding

Due to competition and low chances of obtaining funding (Garner et al., 2013), researchers are struggling more than ever to obtain the resources to fund their research projects. As applying for funding can be time-intensive, researchers might feel they spend more time writing applications than doing research. So, when you have a great idea, why not try to sell it to multiple funders and see which one ‘bites’?

With grant success at all-time low, scientists are working harder than ever to fund their research. They respond to the competitive economic times by submitting more applications. They may also simultaneously or serially submit applications to multiple funding agencies to increase their odds of getting funding. Some grant agencies allow the submission of applications with identical or highly similar specific aims, goals, objectives and hypotheses. But we believe that researchers should not accept duplicate funding for the same work – either the whole study or any part of it.

Quote from: Same work, twice the money? – Harold R. Garner, Lauren J. McIver & Michael B. Waitzkin – Nature 493 (2013).

It is therefore not surprising for a research proposal to be submitted to multiple funding bodies, either in identical or slightly modified form. This system of parallel applications increases the chance of obtaining funding. In addition, when funding is only partially granted, additional funding obtained from other sources can help to acquire the budget needed for the complete project.

The Flemish Commission for Scientific Integrity (Vlaamse Commissie voor Wetenschappelijke Integriteit – VCWI) has formulated general advice on Plagiarism in funding applications (2017).

Double dipping

Although there might be acceptable reasons to motivate the need for complementary funding from different funders – for example, personnel costs covered by one funder and consumables by another – researchers should make sure not to accept funding twice for the same aspect of the research project.

While parallel applications aren’t necessarily problematic, some good practices should be kept in mind. First of all, researchers should be transparent towards the funding body and disclose whether a (partially overlapping) proposal is under evaluation elsewhere. The same principle applies in case of overlap with an already granted project. Although there might be good reasons for the overlap, this should be communicated clearly. Finally, if the proposal is granted elsewhere during the application phase, proper action should be taken in order not to obtain double funding. Researchers should not accept funding twice.

Note that more and more funders are asking applicants, as part of the application phase, to declare whether a related proposal has been submitted or approved elsewhere. Do not assume related proposals will go unnoticed if you don’t mention them: this might have severe consequences for the further processing of your current and future applications and can be reported to the host institution.

ALLEA Code:

  • Researchers make proper and conscientious use of research funds.

Statistics in research

Depending on your discipline, statistical analysis can be crucial to substantiate your results. The problem many researchers face is that they are trained to be researchers, not statisticians – statistics being a whole field of its own. With this in mind, researchers have to be critical about the statistical analyses they perform and consult experts where necessary. Just as you consult someone with expertise when working with advanced technical equipment, you have to be aware of how your statistical tools work and what information can and cannot be drawn from them.

Good academic practices on statistical analysis

The way the data are analysed can severely impact the conclusions and validity of your research. As such, researchers should aim to have a sound statistical analysis strategy even before collecting the first data. Some good practices:

  • Everything starts with a clearly defined research question. Set up your study and plan the statistical analysis according to the research question. How many observations will you collect? Which variables will you consider? Which statistical tests are relevant?
  • Take courses in the field of data analysis. Although this does not make you an expert statistician, you get a much better idea what the different techniques are capable of and which pitfalls you should be aware of.
  • Involve statisticians across all stages of a research project. In the budget calculation of your project, foresee some money for statistical assistance.
  • Be aware of the limitations of different statistical tests. Be critical and do not use a test just because this is how another researcher did it or because it is the standard method in your research group.
  • Be critical of your data. A statistically significant result does not automatically mean that the result is robust or relevant. Are there reasons to explain the result?
  • Exploring the use of different statistical tests is not in itself a problem, as long as the tests are suited to the intended purpose and are not used merely to be able to ‘choose’ the best results. Analyses should account for the number of tests carried out (see the sketch after this list).
  • Similarly, omission of data might be acceptable, but the reasons for this should be acknowledged and valid.
Source: Best practices for statistical data analysis
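As an illustration of accounting for the number of tests carried out, the sketch below adjusts a set of p-values for multiple comparisons. It is a minimal example assuming Python with statsmodels; the p-values themselves are hypothetical.

```python
# Minimal sketch: correcting for multiple comparisons (hypothetical p-values).
from statsmodels.stats.multitest import multipletests

# Hypothetical p-values from five separate tests run in the same study
p_values = [0.012, 0.048, 0.003, 0.21, 0.09]

# The Holm method controls the family-wise error rate at 5% across all tests
rejected, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for raw, adj, sig in zip(p_values, p_adjusted, rejected):
    print(f"raw p = {raw:.3f}  adjusted p = {adj:.3f}  significant: {sig}")
```

Note how a result that looks significant in isolation (e.g. p = 0.048) may no longer be significant once the number of tests is taken into account.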
FLAMES, Flanders’ training network for statistics and methodology, is an interuniversity training network rooted in the five Flemish universities.


Who is involved?

Junior Researcher - PhD student

Junior researchers/PhD researchers will in most cases be responsible for practical aspects such as data collection and statistical analysis. Training in statistical data analysis is part of your education as a researcher.

Senior Researcher

Statistics is something many researchers are not familiar with. As such, more experienced researchers should guide junior ones in ensuring the statistical analysis is scientifically sound, and/or refer them to experts for help if necessary.

(Co-) Author

(Co-)Authors have to be sufficiently critical to ensure that the statistical analysis is robust.

Journals - Publishers

Journals should make sure the submitted work provides sufficient information for readers to interpret the data and assess their significance.

Journals don’t always provide a dedicated section to indicate who is responsible for the statistical analysis. You can use the CRediT contributor roles to credit the statisticians on your paper in the best possible way.

The perception exists that results with a p-value below 0.05 have more value and attract more attention within science. As such, researchers may have an incentive to look for statistically significant results. This slippery slope can be summarised as p-hacking, in which researchers try out several analyses and/or data eligibility specifications and then selectively pick those that produce significant results. This increases the possibility of false positive associations (arising merely by chance). Practices that can result in p-hacking include:

  • Conducting analyses midway through experiments and terminating data collection prematurely because a significant p-value has been obtained
  • Adjusting the sample size in the hope that the difference becomes significant
  • Limiting the analysis to a subset of the data
  • Removing outliers without a good reason
  • Exploring different statistical tests and only continuing with the ones that meet the researcher’s personal beliefs or support the hypothesis
  • Looking only for data that confirm the hypotheses or personal experience, overlooking data inconsistent with personal beliefs

P-hacking is a problem as it influences the data collection process and/or statistical analysis, which in turn can lead to inflation bias: the effect sizes reported in the literature do not correspond with the (experimental) observations. This can severely impact the robustness of the data and the reproducibility/replicability of the results.
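To see why these practices are problematic, the following minimal simulation (assuming Python with NumPy and SciPy) shows how one of them – peeking at the data and stopping as soon as p < 0.05 – inflates the false positive rate even when no real effect exists.

```python
# Minimal sketch: optional stopping ("peek, stop when p < .05, otherwise
# collect more data") inflates the false positive rate under a true null effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, false_positives = 5000, 0

for _ in range(n_studies):
    # Both groups are drawn from the same distribution: there is no real effect
    a = list(rng.normal(0, 1, 10))
    b = list(rng.normal(0, 1, 10))
    # Peek after every 5 extra observations per group, up to n = 50
    while len(a) <= 50:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:              # stop as soon as the result looks 'significant'
            false_positives += 1
            break
        a.extend(rng.normal(0, 1, 5))
        b.extend(rng.normal(0, 1, 5))

print(f"False positive rate with optional stopping: {false_positives / n_studies:.1%}")
# Typically well above the nominal 5%, even though no real effect exists.
```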

Tip:

You can use statcheck to check for inconsistencies (in p-values) at the paper level.
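statcheck itself is an R package, but the underlying idea is easy to illustrate: recompute the p-value from the test statistic and degrees of freedom reported in a paper, and compare it with the reported p-value. A minimal Python sketch with hypothetical reported values:

```python
# Minimal sketch of a statcheck-style consistency check for a reported t-test.
from scipy import stats

# Hypothetical values as they might appear in a paper: t(28) = 2.31, p = .02
reported_t, reported_df, reported_p = 2.31, 28, 0.02

# Recompute the two-sided p-value from the reported statistic
recomputed_p = 2 * stats.t.sf(abs(reported_t), df=reported_df)
print(f"recomputed p = {recomputed_p:.4f}")

# The tolerance here is illustrative; statcheck uses rounding-aware rules
if abs(recomputed_p - reported_p) > 0.005:
    print("Reported and recomputed p-values are inconsistent.")
```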

Cartoon by xkcd under a Creative Commons Attribution – NonCommercial 2.5 license

Preregistration and registered reports

Preregistration

Several (questionable) research practices can lead to publication bias and false positive findings. Important examples of such practices are p-hacking, HARKing (see below) and not publishing negative research results.

An effective solution is to define the research questions and analysis plan before observing the research outcomes – a process called preregistration. As such, preregistration helps the researcher to interpret the data and focus on the original research question (what can and cannot be concluded), increases transparency when reporting research, and prevents researchers from falling into questionable research practices. As a consequence, the focus of the work shifts from having ‘nice’ results to having ‘accurate’ results.

What is preregistration about?

When preregistering a study, the researcher (or research team) determines the plan of the research to be performed before starting the data collection and/or data analysis. Basically, this plan contains:

  • the research question
  • the hypothesis
  • the dependent and independent variables that will be collected
  • the study design
  • the planned methodology
  • the sample size and the method to determine the sample size
  • the way the data will be analyzed
  • the outlier and data exclusion criteria

The plan is then timestamped and deposited into a registry. This does not necessarily mean that the plan has to be publicly available, that there is no flexibility in the analysis of the data after they have been obtained, or that one should stop performing exploratory research. It should, however, be clear to the reader of the article which parts of the research are hypothesis testing and which are unexpected findings that drive a new hypothesis. Furthermore, any deviations from the preregistration should be made explicit and justified.


One of the organisations that provide the possibility to deposit and preregister research plans, protocols and data is the Open Science Framework. OSF is a free, open platform to support research and enable collaboration.

Preregistration is well established in the field of clinical trials, where more and more medical journals will only consider publication if the trial has been registered beforehand. Nowadays, preregistration is finding its way into psychology and other fields as well.

HARKing (Hypothesizing After the Results are Known) refers to the practice of presenting a post hoc hypothesis (i.e. one based on or informed by the results) in a research report as if it were, in fact, an a priori hypothesis. This can be done in several ways, leading to several types of HARKing. For example: (1) changing or replacing the initial hypothesis with a post hoc hypothesis after the researcher learns the results; (2) excluding a priori hypotheses that were not confirmed; (3) retrieving hypotheses from a post hoc literature search and reporting them as a priori hypotheses.

In many cases researchers are motivated to HARK by a research and publication culture that values confirmation of a priori hypotheses over post hoc hypotheses. In an attempt to make results appear stronger and increase the likelihood of publication, initial insights and hypotheses are HARKed. The temptation of HARKing can be eliminated through preregistration of study designs and analysis plans. Open data sharing can also guard against these practices. We will discuss these practices in more detail later on.

In a fishing expedition, the collected data are analysed repeatedly in search of “any” pattern other than those supported by the initial hypothesis motivating the study (design). This kind of “exploratory research” on the same dataset is different from exploratory research designed to answer the hypothesis. Findings obtained by exploring the dataset for “any” existing pattern should be declared as such, even (especially) when these results confirm an a priori hypothesis; undeclared fishing is unacceptable and detrimental to research.

This kind of fishing, also called data snooping, increases the chance of getting false positive results: every dataset contains “spurious correlations” that pop up in your particular dataset by coincidence but do not exist in similar datasets. When a good (statistical) model is built, it should be validated with a new, similar dataset whose data did not contribute to building the model. Otherwise, the model might fit the data it was built on so closely that it is too specific, becoming non-generalizable to similar datasets. The term cherry picking is also used, as in “picking only the best-looking cherries” – the best-looking results or observations. False positive findings obtained through data snooping can be unmasked by replication studies. Open data sharing can also guard against these practices. We will discuss these practices in more detail later on.
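The sketch below (hypothetical data, assuming Python with NumPy and SciPy) illustrates the point: scanning a dataset of pure noise variables for “any” pattern will reliably turn up an impressive-looking correlation that would not replicate in a new dataset.

```python
# Minimal sketch: data snooping finds 'spurious correlations' in pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_obs, n_vars = 30, 40
data = rng.normal(size=(n_obs, n_vars))   # 40 unrelated noise variables

# Snoop: test every pair of variables and keep the 'best' (smallest p) result
best_r, best_p, best_pair = 0.0, 1.0, None
for i in range(n_vars):
    for j in range(i + 1, n_vars):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < best_p:
            best_r, best_p, best_pair = r, p, (i, j)

print(f"'Best' correlation among noise variables {best_pair}: "
      f"r = {best_r:.2f}, p = {best_p:.4f}")
# Cherry-picked from 780 comparisons, this 'finding' would not replicate.
```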

Registered reports

Registered reports are a more formalised way of looking at preregistration and are an even more powerful tool to prevent publication bias. Within this approach, the proposed plan is sent to a journal before the start of the study (figure 1). The journal subsequently sends the plan (not the results) for peer review, provides feedback and, in principle, commits itself to publishing the study if it is conducted according to the submitted plan. One of the main advantages of this approach is that the study design is reviewed early on, allowing intervention before the study starts. Moreover, in the form of a registered report, preregistration reduces publication bias, as journals will publish the work independent of the outcome (figures 2 and 3), and acknowledges the value of negative results.

Figure 1: Copied from Center for Open Science https://www.cos.io/initiatives/registered-reports available for reuse under a CC BY 4.0 license.

Figure 2: An excess of positive results: comparing the standard psychology literature with registered reports.

Copied from: Anne Scheel, Mitchell Schijen & Daniel Lakens, PsyArXiv Preprints – https://psyarxiv.com/p6e9c/ – CC BY 4.0 International

Figure 3: Percentages of null findings among RRs and traditional (non-RR) literature, with their respective 95% confidence intervals.

Copied from: Christopher Allen, David M.A. Mehler – Open science challenges, benefits and tips in early career and beyond – PLOS Biology 2019 https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3000246#pbio-3000246-g001


Not all journals provide the possibility to register a report but a list of journals that do can be found here:


(Publication) bias

Although often unintended, biases in analysing and interpreting the experimental data, together with the hope of finding positive/significant results, might distract from the original research question. In addition, as negative results are often (incorrectly) perceived to be of lower quality and not worth pursuing, this can bias which results are reported in the literature (publication bias). These practices reduce the credibility and replicability of research.

Possible flaws in a study design


Unreliable research findings due to flawed study design

When a study is designed in a way that is not aligned with good methodological practice, you may get wrong answers to your research question. This is not an improbable scenario, but a major problem in science today, as shown by Ioannidis in his 2005 article “Why most published research findings are false”.

Unreliable results attributable to flawed study design constitute a terrible waste. Useless research not only wastes your time and energy; even worse, it may distort the literature with wrong insights, guiding future research in irrelevant directions.

Designing bad research when it could have been prevented may therefore be seen as an unacceptable research practice – precisely because of its detrimental impact on the literature. Unreliability of findings can be reduced by carefully designing your research, learning good design practices from the literature, and double-checking methodological matters with your supervisor and colleagues.


Bias

Inappropriate study design is one of the most common causes of bias in research, resulting in false positive findings.

According to Ioannidis, the most important drivers of the high rate of false positive findings in clinical medicine and biomedical research, are:

  • Solo, siloed investigator limited to small sample sizes
  • No pre-registration of hypotheses being tested
  • Post-hoc cherry picking of hypotheses with best p-values
  • Only requiring p < 0.05
  • No replication
  • No data sharing


Integration of sex and gender analysis as a mark of research excellence

Wherever human beings or animals are involved in research, sex and gender will be an issue and should be considered and addressed in the research design. Addressing the sex and gender dimension of research implies that sex and gender are considered as key analytical and explanatory variables in research. If relevant sex or gender issues are missed or poorly addressed, research results will be partial and potentially biased. Responsible and excellent research practices therefore consider sex and gender aspects in all stages of the research cycle.


Gender biased research

Gender biases originate in the often unintentional and implicit differentiation between men and women, placing one gender in a hierarchical position relative to the other in a certain context as a result of stereotypical images of masculinity and femininity. An example of gender bias is research that focuses on the experience and point of view of either men or women while presenting the results as universally valid for both sexes. Gender biases influence the participation of men and women in research and the validity of research.


These tools can support and inspire you in making your research sex and gender-sensitive:

  • Gendered Innovations: practical methods for sex and gender analysis in science, health & medicine, engineering and environment. Case studies provide concrete illustrations of how sex and gender analysis leads to innovation. (Stanford University)
  • Toolkit Gender in EU-funded research: a toolkit with an accessible introduction and practical tools on the integration of the gender dimension in research. Analysis of case studies drawn from health; food, agriculture and biotechnology; nanosciences, materials and new production technologies; energy; environment; transport; socio-economic sciences and humanities; science in society and international cooperation.
  • Webinars:
      • Integration of sex and gender analysis into research: how to integrate sex, gender, and intersectional analysis into research design, and how this can lead to discovery and improved research methodology.
      • Gender in research (proposals and projects)

Gender-blind research

Gender-blind research does not take gender into account when relevant, being based on the often incorrect assumption that possible differences between men and women are not relevant for the research at hand. This is the case when general categories such as ‘people’, ‘patients’ or ‘users’ do not distinguish between men and women. Research based on such categories may well draw partial conclusions based on partial data. This limits the generalizability and applicability of research findings.


Gender-sensitive research is qualitatively better and more valid. When research takes into account the differences between men and women in the research population, the results will be more representative.

Reproducibility and replicability of research

ALLEA Code:

According to the ALLEA Code, researchers should design, carry out, analyse and document research in a careful and well-considered manner. In addition, researchers should aim to report their results in a way that is compatible with the standards of the discipline and, where applicable, can be verified and reproduced.

Based on the definition proposed by the US National Academies of Sciences, Engineering, and Medicine (Reproducibility and Replicability in Science (2019), p. 6), “reproducibility is obtaining consistent results using the same input data; computational steps, methods, and code; and conditions of analysis. This is different from replicability which is obtaining consistent results across studies aimed at answering the same scientific question, each of which has obtained its own data.” Please note that within the literature, reproducibility and replicability are often used interchangeably. While the two terms are strictly speaking not synonyms, a failure to reproduce and a failure to replicate both point to the same problem: inconsistent results.

During the last decade, concerns have been raised regarding the lack of reproducibility and replicability of research findings in many disciplines, including the social sciences (Camerer, 2018), psychology (Open Science Collaboration, 2015) and biomedical sciences (Pusztai, 2013), raising concerns about the reliability of science (Nature, 2016) – often denoted the ‘reproducibility crisis’.

Why is a study not reproducible or replicable?

TEDEd Animation Is there a reproducibility crisis in science? by Matt Anticole (licensed under CC BY-NC-ND 4.0 International). Please note that within the video the concepts of reproducing, replicating and repeating research results are sometimes used as synonyms.

There are many different reasons why a study might not be reproducible, including a faulty experimental design, variables that are not taken into account, wrong statistical analysis, biased reporting focusing on the desired effects, no access to all the information necessary to reproduce the work, or conditions that cannot be recreated. Within the biomedical sciences, failures to replicate research have sometimes been explained by the use of wrong reagents, including incorrectly labelled mice or cell lines (Kafkafi et al, 2018; Eckers et al, 2018).

Failing to reproduce and replicate data is an inherent feature of science. Therefore, failure to reproduce/replicate a study does not necessarily mean that the study is faulty and cannot be trusted. In some research fields, e.g. those interpreting historical sources, reproducibility is not strictly required, though it might still be desirable. In other research fields, however, researchers have to take sufficient measures to make reproducibility/replicability of the research possible if conclusions are to be drawn. Potential measures include:

  • clear standard operating protocols
  • detailed logging of all aspects of the research throughout the project
  • detailed logging of all metadata

In addition, sufficient details regarding research methodology and analysis have to be available to everyone not directly involved in the research.
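As a minimal illustration of such logging, the sketch below records basic run metadata to a file. The file name, fields and values are purely illustrative, not a prescribed standard; the point is that seeds, inputs and environment details are written down rather than left implicit.

```python
# Minimal sketch: log run metadata so an analysis can be traced and rerun.
import json
import platform
import sys
from datetime import datetime, timezone

metadata = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "python_version": sys.version,
    "platform": platform.platform(),
    "random_seed": 20240101,           # record the seed actually used
    "input_data": "survey_wave3.csv",  # hypothetical input file
    "script": "analysis.py",           # hypothetical analysis script
}

# Write the metadata alongside the analysis outputs
with open("run_metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```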


The Reproducibility Manifesto provides some concrete actions that can help make your research more reproducible.


When to think about this?

In order to make research reproducible/replicable, transparency is needed across all stages of research, starting at the conceptualization stage (what is the research protocol?) and the data collection/analysis stages (which data are shown, which are not, and why?), and continuing into publication of the work (does the manuscript provide sufficient details regarding the methodology and analysis protocol used, and are the data available?).

Available tools for reproducible and replicable research

As mentioned before, p-hacking and HARKing significantly reduce the potential to replicate research findings. As such, the use of preregistration and registered reports will have a very positive effect on the replicability of research.

Moreover, to increase the reproducibility and replicability of research, reporting guidelines and checklists have been drawn up for several disciplines. These usually “specify a minimum set of items required for a clear and transparent account of what was done and what was found in a research study, reflecting, in particular, issues that might introduce bias into the research” (adapted from the blog “It’s a kind of magic: how to improve adherence to reporting guidelines”, Marshall, D. & Shanahan, D. (2016, February 12)).

Some examples include:

  • The ARRIVE (Animal Research: Reporting of in Vivo Experiments) guidelines intended to improve the reporting of research using animals.
  • The Materials Design Analysis Reporting (MDAR) Checklist for Authors that is applicable to studies in the life sciences.
  • The ICMJE Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals.
  • Within psychology, the American Psychological Association has published reporting standards for quantitative research.
  • Inspiration for qualitative research can be drawn from the SRQR and COREQ standards.
  • Researchers in humanities or empirical social sciences can look into the standard published by the American Educational Research Association (AERA).
  • Resources for Reproducible Research by Reproducibility for Everyone

Finally, within certain fields, specific guidance and tools may exist to increase the reproducibility of the analyses (focusing on the code and software used).


Please be aware that some journals have their own guidelines or refer to specific other guidelines. Please consult the instructions for authors while preparing your manuscript for submission. Additional reporting guidelines have been listed by NIH and the EQUATOR network.

Design and methodology

Research integrity is all about doing research in a responsible way. Research aims to answer, or reflect upon, questions about the real world. To do so, researchers set up studies or experiments designed specifically to answer their research questions. The choices in the study design are crucial and largely determine the quality of the research. Because of their importance, the methods, rules, models and principles used in research are a topic of (theoretical) study in their own right, together called methodology.

It is essential for all researchers to be familiar with the good practices of methodology and design in their field of study.


ALLEA Code:

The ALLEA Code even states it as the first of four principles of research integrity:

  • Reliability in ensuring the quality of research, reflected in the design, methodology, analysis, and use of resources.

Good academic practices on (different) methodologies

Methodologies differ widely among disciplines – literature researchers naturally use different methods than molecular biologists. Courses in discipline-specific methodology are an obligatory part of most university education programmes, as well as doctoral training programmes.


ALLEA Code:

  • Research institutions and organisations ensure that researchers receive rigorous training in research design, methodology, analysis, dissemination, and communication.

It’s impossible to summarise all disciplines’ methodologies, but in most research fields, for instance, the following considerations are key:

  • The variables measured (both influencing factors and outcome) should correspond to the question you want to answer, as directly as possible.
  • The design of the experiment or study should enable you to answer the research question.
  • Some form of randomisation might be advisable to eliminate the unwanted influence of confounding factors (like unknown or unmeasurable ones).
  • The sample size should be appropriate: large enough to be able to statistically detect existing phenomena (according to a power calculation; see the sketch below this list) – but, if applicable, not larger than ethically responsible.
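As an illustration of such a power calculation, the sketch below estimates the required sample size for a two-sample t-test, assuming Python with statsmodels. The effect size, significance level and power are hypothetical choices for illustration; in practice these come from your research question, the literature or pilot data.

```python
# Minimal sketch: a priori power calculation for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical inputs: medium effect size (Cohen's d = 0.5),
# 5% significance level, 80% power
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                   alternative="two-sided")

print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64
```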

Good academic practices on design and methodology

When you have carefully designed and planned your research actions and analysis, many research disciplines consider it a good practice to:

  • Specify all choices for the study design and research activities in a protocol (that may even be published).
  • Preregister the study and plan of analysis.
  • Have your preregistered study and analysis plan peer reviewed, and in principle accepted for publication, before gathering data – so-called registered reports.
These exemplary practices reduce the risks of unacceptable research practices such as bad study design (because your study design will be reviewed before actually doing the research) or HARKing.

Cartoon by Patrick Hochstenbach under a Creative Commons CC BY-SA 4.0 license

Collaborating

If you perform a study or experiment together with colleagues from other disciplines or institutions, it is vital to share the same vision on the study. Take the time to make sure everyone is on board with all aspects of the plan, including why choices were made. Agree on the plan for data analysis with all partners, preferably before gathering the data.

Design and conduct


Getting a head start together

As a researcher you have the responsibility to conduct research in line with (inter)national and institutional norms and values and, where applicable, legal obligations. This does not mean, however, that this responsibility lies entirely in the hands of the researcher. Although research usually starts with an idea, this idea can only be developed further when the necessary infrastructure and research tools are available. Given the complexity of modern research, support by the host research institution and/or your department/faculty is indispensable when aiming for high-quality research and research integrity. The following module looks at how good academic research practices respect the basic principles of research integrity.

Who is involved?

Researchers in general
Group - Faculty - Department
University

Support from your host institution can come in many forms. Institutions can promote awareness and encourage a culture of research integrity, for example by having clear guidelines and policies in place to promote good research practices. However, research institutions are also expected to provide proper infrastructure. This includes the space and equipment required to carry out a research project, as well as infrastructure for the management and protection of data and research materials in all their forms.


ALLEA Code:

  • Research institutions and organisations promote awareness and resource incentives to ensure a culture of research integrity.
  • Research institutions and organisations create an environment of mutual respect and promote values such as equity, diversity, and inclusion.
  • Research institutions and organisations create an environment free from undue pressures on researchers that allows them to work independently and according to the principles of good research practice.
  • Research institutions and organisations demonstrate leadership in clear policies and procedures on good research practice and the transparent and proper handling of suspected research misconduct and violations of research integrity.
  • Research institutions and organisations actively support researchers who receive threats and protect bona fide whistleblowers, taking into account that early career and short-term employed researchers may be particularly vulnerable.
  • Research institutions and organisations support appropriate infrastructure for the generation, management, and protection of data and research materials in all their forms that are necessary for reproducibility, traceability, and accountability.

In addition, due to the complexity of research, it is necessary for institutions to have support in place to assist researchers in need of advice. Researchers should be able to find help with good academic practices, ethics approvals (in the form of ethics committees), legal issues and their own safety and that of others when planning and conducting research.

Researchers themselves have responsibilities. First and foremost, it is of utmost importance that researchers familiarise themselves with the good academic research practices described in the ALLEA Code and with the host institution’s expectations and procedures with regard to research. Researchers should ensure that they use scientific methods that are justifiable within their field, and must be precise and accurate when performing research. They should check whether the tools they intend to use (for instance, laboratory equipment, standard questionnaires, archives) are adapted to the work to be performed and are ready to be used in optimal conditions. (Code of ethics for scientific research in Belgium, 2009)