The use of generative artificial intelligence (GEN-AI) tools in articles published in the journal must be conducted within an ethical, transparent, and responsible framework. The IHEAD Artificial Intelligence Policy is designed to clearly define the limits of the use of AI-assisted tools for authors, editors, and reviewers.
Our journal adopts the principles established by COPE regarding the use of generative artificial intelligence (GEN-AI)-assisted technologies in the article preparation process. GEN-AI tools cannot be identified or presented as "authors" under any circumstances in academic or scientific studies. These tools do not have the authority to assume authorship or evaluation responsibility; scientific publications must be based on the researcher's original thinking and original findings. GEN-AI tools may only be used for language and style adjustments; however, such use must be clearly stated within the article.
Our journal is committed to adhering to copyright and publication ethics regulations. Considering the ongoing legal uncertainties and potential copyright infringements related to images generated by generative artificial intelligence, the use of such materials in our publications is generally not permitted. The exception to this rule is stated below:
For images directly cited as examples or referenced for analysis in artificial intelligence research, a note below the image must clearly state that it was created using generative artificial intelligence and identify the tool used (Example: Note. The image was created using artificial intelligence (DALL·E, OpenAI, 2026)). Furthermore, a more detailed statement should be included before the references section of the article.
It is essential that generative artificial intelligence (GEN-AI) be used responsibly and under human supervision. To ensure transparency and accountability in the use of these systems, authors must meticulously verify the accuracy and appropriateness of images generated by GEN-AI and clearly state the AI method used in the study. All content created through GEN-AI must comply with scientific standards and ethical principles.
Authors may utilize GEN-AI tools for purposes such as language editing, spell checking, or technical improvement. However, the scientific content of the study, the analyses conducted, and the results obtained are entirely the responsibility of the authors. The development of original ideas and the formulation of hypotheses belong solely to the author. Similarly, stages such as interpreting findings, discussing them, and drawing conclusions cannot be delegated to generative AI. Having GEN-AI generate the methods used in data analysis is considered ethically problematic. Generative AI systems cannot be listed as authors under any circumstances. If artificial intelligence (AI) is used in any way, this must be clearly stated in the article. Authors are ultimately responsible for the accuracy and ethical appropriateness of AI-generated content. Confidential patient information, data requiring ethics committee approval, or copyrighted works must not be uploaded to AI systems. These tools must not be used to create the scientific content of the study.
Editors may use AI-supported systems for limited purposes during the preliminary review phase, such as assessing language quality or conducting similarity analysis. However, all editorial decisions remain solely the authority and responsibility of the editors. Even in this process, confidential patient data, information requiring ethics committee approval, or copyrighted works must not be uploaded to AI systems.
Reviewers cannot upload the content of the articles they review to any AI system during the evaluation process; doing so constitutes a violation of data confidentiality. Reviewers may only use AI tools for linguistic editing of their own review texts. The responsibility for the scientific content of the reviewer reports rests entirely with the reviewer.
A violation is considered to have occurred if the disclosure obligation is not fulfilled, if an incomplete disclosure is made, or if the content is used in a prohibited manner. In such cases, one of the following sanctions may be applied: rejection of the article, retraction after publication, an editorial ban imposed by the journal on the author in question, or official notification to the author's institution.

