Artificial Intelligence (AI)
Funding agencies, scientific journals and other stakeholders are paying increasing attention to the ethical aspects of AI. Several organisations have developed guidelines to clarify what should be understood by ‘responsible artificial intelligence’.
In Europe, the EU ethics guidelines for trustworthy AI (2019) have been recognised as the guiding ethics principles on AI. The guidelines state that when developing or deploying AI, the following seven requirements must be observed:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
A useful tool to ensure that your research project complies with the European guidelines is the Assessment List for Trustworthy Artificial Intelligence (ALTAI), which was also developed by the EU. Don’t forget to refer to the EU ethics guidelines and ALTAI when applying for research funding, in particular EU funding.
Moreover, in December 2023, the EU institutions reached political agreement on a European legal framework to regulate artificial intelligence, the Artificial Intelligence Act, which entered into force on 1 August 2024. The AI Act takes a risk-based approach in which AI applications are divided into four risk categories, ranging from minimal to unacceptable risk.
Specific challenges related to Generative AI (GenAI)
Although GenAI tools can certainly assist researchers in different phases of the research cycle, they should be used in a critical and responsible manner. The main integrity-related considerations regarding GenAI include the following:
- AI-generated texts often lack the necessary source citations, or provide invented or non-existent ones. As a result, it is not always possible to correctly attribute the content of the texts to the original author, and there is a risk of plagiarism and violation of intellectual property rights.
- The generated information is not always correct or up to date.
- (Generative) AI is not free of bias and can even produce harmful content.
- Since tools like ChatGPT rely on very broad datasets that sometimes contain personal and/or sensitive information, respecting privacy and the GDPR also plays an important role. This should also be taken into account when entering data into an AI system or when writing prompts.
- Finally, researchers using AI should consider how to acknowledge its use in publications, presentations and/or project applications.
ALLEA European Code of Conduct for Research Integrity:
Good academic practice:
- Researchers report their results and methods, including the use of external services or AI and automated tools, in a way that is compatible with the accepted norms of the discipline and facilitates verification or replication, where applicable.
Unacceptable practice:
- Hiding the use of AI or automated tools in the creation of content or drafting of publications.