The Journal of Digital Pedagogy recognises that artificial intelligence tools are increasingly integrated into scholarly research and publishing. This policy establishes a comprehensive framework for the responsible, transparent, and ethical use of AI technologies by authors, reviewers, and editors. It applies to all stages of the publication process, from manuscript preparation through peer review to editorial decision-making.
The journal’s position is that AI tools may serve as valuable assistants in the scholarly process, but they cannot replace human intellectual contribution, professional judgment, or ethical responsibility. The human author, reviewer, or editor remains solely accountable for the quality, accuracy, and integrity of their work.
AI USE BY AUTHORS
Authors may utilise AI tools and technologies to assist in various aspects of manuscript preparation, including but not limited to writing assistance, data analysis, literature review, translation, formatting, and research support. When employing AI technologies in their research and writing process, authors must:
Provide transparent disclosure of any AI tools used in the preparation of their manuscript, including the specific tools (name and version where applicable), the purpose of their use, and the extent of their involvement. This disclosure should be included in the Acknowledgments section or in a dedicated AI Disclosure Statement placed before the reference list.
Ensure that the use of AI does not compromise the originality and integrity of their work. The substantive intellectual contribution — including the research question, study design, interpretation of results, and scholarly argumentation — must originate from the human authors.
Maintain full responsibility and accountability for all content in their manuscript, regardless of AI assistance. This includes the accuracy and reliability of any AI-generated content, such as citations, data analyses, and factual claims. Authors should be aware that AI tools may produce inaccurate, fabricated, or biased outputs, and must verify all such content before submission.
Comply with institutional guidelines and ethical standards regarding AI use in academic research. Authors should also respect copyright and intellectual property rights when using AI tools that may process copyrighted materials, and ensure that any data processed by AI systems complies with relevant data protection regulations and privacy requirements.
AI-generated content cannot be listed as an author or co-author of the manuscript. Authorship implies accountability, which can only be borne by human individuals.
Authors should use AI technologies as supportive tools while preserving the fundamental principles of academic integrity, original scholarship, and responsible research conduct.
AI USE BY REVIEWERS
Reviewers may utilise AI tools to assist in various aspects of the peer review process, including but not limited to manuscript analysis, literature verification, statistical review, language assessment, and preparation of review reports. When employing AI technologies in their review activities, reviewers must:
Maintain absolute confidentiality of manuscript content and ensure that AI tools used comply with strict data protection and privacy standards. Reviewers must never input confidential manuscript content — including text, data, figures, or any identifiable information — into public or unsecured AI systems that may store, learn from, or redistribute the data. This includes general-purpose chatbots, publicly available large language models, and any tools whose data handling practices are unclear or that do not guarantee confidentiality.
Use only AI tools that guarantee data security, confidentiality, and compliance with academic publishing ethics. When in doubt about whether a particular tool meets these standards, reviewers should consult the editor before use.
Retain full professional judgment and responsibility for all review conclusions and recommendations, with AI serving only as an analytical support tool. The scholarly evaluation — including the assessment of originality, significance, methodological rigour, and contribution to the field — must reflect the reviewer’s own expert judgment.
Disclose to the editor any significant use of AI technologies in their review process. A brief note indicating the nature of AI assistance (e.g., “AI tools were used to assist with statistical verification” or “Language analysis was supported by AI”) is sufficient.
Ensure that AI assistance does not compromise the thoroughness, objectivity, or quality of their peer review, and verify and validate any AI-generated insights, suggestions, or analyses before incorporating them into their review.
Respect the original work of authors and avoid using AI in ways that could lead to inappropriate extraction or replication of unpublished research. Manuscript content processed through AI tools must not be retained, shared, or used beyond the scope of the review.
Reviewers must exercise the same level of professional responsibility and ethical conduct when using AI tools as they would in traditional peer review, ensuring that confidentiality, integrity, and scholarly rigour remain paramount throughout the review process.
AI USE BY EDITORS
Editors may utilise AI tools to assist in the editorial and peer review processes, including but not limited to manuscript screening, plagiarism detection, language assessment, reviewer matching, and administrative tasks. When employing AI technologies, editors must:
Ensure full compliance with the EU General Data Protection Regulation (GDPR) and all applicable data protection laws. Manuscript content and author information must be processed only through tools that guarantee data security and privacy protection.
Maintain strict confidentiality of manuscript content and author information. AI tools used in editorial processes must not store, learn from, or redistribute manuscript data beyond the immediate editorial purpose.
Retain ultimate editorial judgment and responsibility for all decisions, with AI serving only as an assistive tool. No editorial decision — including desk rejection, reviewer selection, or final publication decision — may be delegated to an AI system. The editor’s professional assessment remains the basis of all decisions.
Disclose the use of AI technologies when transparency is required or when their use materially affects the review process. The journal will communicate to authors if AI tools are used in plagiarism screening or other standard editorial checks.
Regularly assess and audit AI tool usage to ensure compliance with ethical standards and data protection requirements. The editorial team maintains awareness of the evolving capabilities and limitations of AI tools and updates practices accordingly.
The use of AI technologies by editors should enhance the efficiency and quality of the editorial process without compromising the integrity, confidentiality, and human oversight essential to scholarly publishing.
GENERAL PROVISIONS
This policy applies to all submissions received by the journal and to all participants in the editorial process. Non-compliance with the disclosure requirements may result in the manuscript being returned for revision, delayed in the review process, or, in cases of deliberate concealment that compromises the integrity of the work, rejected.
The journal recognises that AI technologies evolve rapidly and commits to reviewing and updating this policy periodically to reflect current best practices, technological developments, and emerging guidance from COPE, DOAJ, and other scholarly publishing organisations.
Questions about the application of this policy to specific situations should be directed to the Editor-in-Chief (olimpius.istrate@unibuc.ro) or the journal secretariat (editor@digital-pedagogy.eu).