Policy on the Use of Artificial Intelligence

Statement of Use and Transparency

Authors are required, when applicable, to explicitly declare the use of generative artificial intelligence (GAI) tools in the writing of their articles, treating them as an additional tool in the process of data analysis or processing. This declaration must be included in the methods section of the manuscript and must specify, among other details, which tools were used, for what purpose, and in what manner they were employed. This measure aims to ensure transparency in content creation, allowing reviewers and readers to fully understand how GAI tools influenced the development of the work.

Authorship and Legal Responsibility

Authors are responsible for ensuring the accuracy, originality, and ethical compliance of the data and analyses presented in the article. Any infringement of copyright, such as plagiarism arising from improper citation or from AI-generated hallucinations, is entirely the responsibility of the authors and in no case of the publisher or of the tools used. Consequently, GAI tools cannot be credited as authors of the manuscript under any circumstances.

Confidentiality Risks

Authors and reviewers must be aware of the risks associated with the use of AI, including the possibility of leaks of confidential data. All parties involved are expected to take the necessary precautions to protect the confidentiality of sensitive data used in the research and to ensure that the use of AI does not compromise the security of that information.

Bias Review

Authors must review and report any potential bias introduced by the use of AI in content creation. They must also describe the measures taken to identify and mitigate such biases, ensuring that the final work reflects a fair and balanced interpretation of the data.

Ethics in AI

The use of generative AI in article writing must align with the ethical standards of scientific research. Authors must therefore avoid practices such as plagiarism and the generation of misleading content, and must ensure that the AI used does not distort the results or their interpretation. In all cases, authors are required to validate the output obtained through AI to confirm its accuracy and compliance with scientific standards.

This policy aims to promote the responsible and ethical use of artificial intelligence in scientific writing, ensuring that generative AI tools contribute effectively to the advancement of knowledge and productivity without compromising the integrity of the research.