Helpful Reminders to Ensure Integrity of NIH-Supported Research When Using Artificial Intelligence
By NIH and HHS Office of Research Integrity Staff
The rapid adoption of artificial intelligence (AI) has led to powerful new advances in biomedical research, from protein folding to cancer diagnosis and more. Recognizing its importance, we have also sought investigator-initiated research involving these tools to address various diseases and conditions affecting the health of Americans (see more in our Highlighted Topics resource). Alongside these NIH-supported scientific advances and priority research areas involving AI, we have also occasionally observed challenges with the use of these tools that affect the integrity of the science we support. Today, we are sharing some helpful reminders on the appropriate use of AI tools when applying for and managing awards and when conducting the research itself.
As a reminder, NIH:
- Limits to six the number of applications designating the same principal investigator per calendar year, partly in response to the rising numbers of applications generated with AI tools (see these FAQs for more)
- Prohibits peer reviewers from using generative AI for their study section critiques (see these FAQs for more)
- Considers AI tools appropriate for assisting with limited aspects of application preparation or in specific circumstances, though researchers should be aware that using AI carries its own risks. Applications that are substantially developed by AI, in whole or in part, are not considered the original ideas of applicants and will not be considered by NIH.
As we explain in this archived NIH Open Mike blog, when we receive a grant application, we understand it to be an original idea proposed by the institution and its affiliated research team. Using AI tools may introduce several concerns related to research misconduct, such as including text from someone else's work without acknowledgment or hallucinating references.
In situations where NIH identifies possible misconduct arising from the use of these tools, NIH and the HHS Office of Research Integrity (ORI) work together per our standard practices. Generally, research misconduct includes fabrication, falsification, and plagiarism. In certain situations, these behaviors can also be directly related to the inappropriate use of AI. When researchers intentionally, knowingly, or recklessly use AI tools in a manner that deviates from accepted research practices, they may cross the line into research misconduct. Depending on the extent, we may take specific actions to remedy any possible noncompliance.
Situations we may consider:
- Generating data with AI and claiming it was obtained another way, which could constitute fabrication or falsification
- Completely generating grant applications and/or papers using AI tools (including agents) that may have incorrect, misleading, or copied information
- Altering images with AI without full disclosure, which may constitute data falsification
- Presenting AI-generated, non-existent references overtly as real, which could constitute data fabrication (see this 2026 paper)
- Copying substantial portions of text using AI tools without disclosure, which could be plagiarism
What researchers can do:
- Clearly describe in applications, manuscripts, and presentations which AI tools were used and how, whether in developing an application, conducting the research itself, or analyzing data and reporting results, including a methods section that supports reproducibility
- Disclose any specific image-editing processes used
- Appropriately cite references and carefully review and confirm information is accurate
- Consult institutional and journal policies and procedures to determine acceptable practices, including whether they require disclosure of AI use, prohibit AI-generated images or text, or permit crediting the tools as authors
- Use the tools to edit or polish text in alignment with institutional, journal, or funder policies
Together, and as part of implementing NIH's Gold Standard Science Plan, we aim to protect the integrity of the research NIH supports and its resulting outcomes, and to help foster the responsible use of AI. We do not, however, plan to raise concerns when AI is used in accordance with responsible research practices.
In addition to currently available guidance, ORI is developing best practices to prevent AI-related research misconduct and guidance on the appropriate use of AI tools throughout the research process. If you have any related suggestions, challenges, or opportunities to consider, please reach out directly to [email protected]. Please include “AI” in the subject line. Your input will help ensure any future guidance is practical and effective for the NIH-supported research community.
Additional Resources:
National Science Foundation Policy Notice: includes falsification, fabrication, and plagiarism, “whether committed by an individual directly or through the use or assistance of other persons, entities, or tools, including [AI]-based tools.”