Generative AI Policy
The Journal of Computers for Science and Mathematics Learning (JCSML) acknowledges the rapid advancement of generative Artificial Intelligence (AI) and AI-assisted technologies, including Large Language Models (LLMs) such as ChatGPT, in academic research and scholarly publishing. While these technologies may enhance efficiency in writing, editing, and data processing, their use must remain responsible, ethical, and transparent.
AI tools cannot replace human intellectual contribution, critical thinking, scholarly expertise, or evaluative judgment. Authors retain full responsibility and accountability for the content of their manuscripts, including accuracy, validity, originality, and compliance with ethical and legal standards.
A. Policy for Authors
1. Authorship and Accountability
No AI Authorship. AI tools do not qualify for authorship. Authorship entails responsibilities that can only be fulfilled by humans, including intellectual contribution, approval of the final manuscript, and accountability for the integrity of the work. Accordingly, AI tools must not be listed as authors or co-authors.
Human Oversight and Responsibility. Authors must critically review, verify, and validate any content generated or assisted by AI. AI outputs may contain biases, inaccuracies, or fabricated information (hallucinations). Authors are fully responsible for ensuring that the manuscript is free from errors, plagiarism, and infringement of third-party rights.
2. Disclosure and Transparency
Authors must be transparent regarding the use of AI or AI-assisted technologies in the preparation of their manuscripts. Any use of such tools in writing, data collection, or data analysis must be explicitly disclosed.
Exceptions. The use of basic tools for spelling, grammar, references, and punctuation checks (e.g., standard spell-checkers or conventional Grammarly functions) does not require explicit disclosure unless such tools substantially alter the content or meaning of the text.
Mandatory Declaration. When applicable, authors should include a formal statement at the end of the manuscript using the following format:
Title of new section:
Declaration of generative AI and AI-assisted technologies in the manuscript preparation process.
Statement:
During the preparation of this work, the author(s) used [NAME OF TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.
3. Placement of Disclosure
The declaration should be included in one of the following sections, as appropriate:
- Acknowledgements (preferred for general writing or language assistance)
- Methods (if AI was used in specific methodological procedures, such as data analysis or coding)
- A standalone “Declaration of AI Use” section, placed before the References (if the disclosure warrants its own section)
4. Use of AI in Figures and Images
In accordance with international publishing ethics, the use of generative AI to create, modify, or manipulate figures, images, or scientific illustrations is generally prohibited. An exception is made only when AI-generated content constitutes an integral part of the research methodology itself (e.g., studies examining AI-generated outputs). In such cases, the use of AI must be clearly justified and comprehensively described in the Methods section.
B. Policy for Peer Reviewers
Confidentiality is fundamental to the peer review process. Reviewers invited to evaluate manuscripts submitted to JCSML must treat all materials as strictly confidential.
Prohibition on Uploading Manuscripts. Reviewers are strictly prohibited from uploading submitted manuscripts, in whole or in part, into generative AI platforms (e.g., ChatGPT, Claude). Such actions compromise author confidentiality and may violate proprietary and data protection rights, as these tools may store or use input data.
Preparation of Review Reports. Reviewers must not use generative AI tools to write peer review reports. The evaluative, analytical, and ethical judgment required in peer review is a fundamentally human responsibility.
C. Policy for Editors
Editors are responsible for safeguarding the integrity of the editorial process and protecting author confidentiality.
Editorial Decision-Making. Generative AI must not be used to support or replace human judgment in final editorial decisions. Editorial evaluations require scholarly expertise, discretion, and accountability that cannot be delegated to AI systems.
Data Privacy and Confidentiality. Editors must not upload manuscripts, decision letters, reviewer reports, or confidential correspondence into public or third-party generative AI tools. This restriction prevents data breaches and protects the privacy of authors and reviewers.