Artificial Intelligence (AI) Policy

These policies have been developed in response to the rapid advancement and widespread adoption of generative artificial intelligence and AI-assisted technologies in scholarly research and publishing. As these tools continue to evolve, their responsible and transparent use has become essential to maintaining academic integrity and trust in the scholarly record. The purpose of these guidelines is to clearly define acceptable and unacceptable uses of AI technologies across all stages of the publication process. They are designed to support authors, reviewers, editors, readers, and contributors by promoting ethical conduct, accountability, and clarity in the creation, evaluation, and dissemination of research. The journal recognizes that technological innovation is ongoing and that best practices surrounding AI use will continue to develop. Accordingly, these policies will be periodically reviewed and updated to reflect emerging standards, regulatory considerations, and community expectations, ensuring continued alignment with responsible research and publishing practices.

For Authors 

Use of Generative Artificial Intelligence and AI-Assisted Technologies in Manuscript Preparation

Environmental Reports: An International Journal acknowledges that generative artificial intelligence (AI) and AI-assisted technologies can support researchers in improving efficiency, clarity, and organization during manuscript preparation when used responsibly and transparently. Such tools may assist with literature synthesis, structuring content, language refinement, and preliminary idea development. However, AI technologies must not replace human scholarly judgment, originality, or critical analysis. All academic decisions, interpretations, conclusions, and responsibility for the work remain entirely with the human authors.

Author Responsibility and Accountability

Authors are fully accountable for all aspects of their submitted work, including any content that has been supported by AI tools. This responsibility includes, but is not limited to:

  • Verifying accuracy and reliability of all AI-assisted outputs, including checking facts, data, references, and citations, as AI-generated content may contain errors or fabricated sources.
  • Ensuring originality and scholarly contribution, by thoroughly revising and adapting AI-assisted material so that the manuscript reflects the authors’ own intellectual input, interpretation, and conclusions.
  • Maintaining transparency, by clearly declaring the use of AI tools where applicable.
  • Protecting ethical standards, including data privacy, intellectual property, confidentiality, and compliance with applicable laws and institutional policies.

Responsible Use of AI Tools

Authors must review and comply with the terms and conditions of any AI tool used during manuscript preparation. Special attention should be given to:

  • Data privacy and confidentiality, particularly when handling unpublished manuscripts, sensitive datasets, or personally identifiable information.
  • Intellectual property rights, ensuring that uploaded materials are not used by AI tools beyond providing the requested service (e.g., training models without consent).
  • Bias and reliability, by carefully reviewing AI-assisted outputs for factual inaccuracies, misrepresentation, or unintended bias.

Additionally, AI tools must not be used to generate content that imitates copyrighted material, identifiable individuals, proprietary products, or brand identities, nor to replicate voices or likenesses.

Disclosure of AI Use

Authors are required to provide a clear AI Declaration Statement at the time of manuscript submission if AI tools were used during manuscript preparation. The declaration should include the following (an illustrative example is provided after the list):

  • The name of the AI tool(s) used
  • The purpose of use (e.g., language editing, organization, data summarization)
  • The extent of human oversight and revision
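
By way of illustration, a declaration might read as follows. This is a suggested template only, not mandatory wording; the bracketed items are placeholders to be completed by the authors:

“During the preparation of this work the author(s) used [tool name and version] in order to [purpose, e.g., improve language and readability]. After using this tool, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.”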

Basic spell-checking or grammar correction tools do not require disclosure.

If AI tools were used during the research process itself, this must be described in detail in the Methods section to ensure transparency and reproducibility.

Authorship Criteria

AI tools must not be listed as authors or co-authors, nor cited as authors. Authorship implies responsibility, accountability, and the ability to approve the final manuscript—functions that only human contributors can fulfill. All listed authors must meet standard authorship criteria, approve the final version of the manuscript, and accept responsibility for the integrity and originality of the work.

Use of AI in Figures, Images, and Artwork

The use of generative AI or AI-assisted tools to create, modify, or manipulate images, figures, or visual data in submitted manuscripts is not permitted. This includes adding, removing, altering, or enhancing visual elements in a way that could misrepresent the original data.

  • Permitted adjustments are limited to minor technical corrections (e.g., brightness or contrast), provided they do not alter or obscure scientific information.

The only exception applies when AI-assisted image processing is an integral part of the research methodology (e.g., AI-based environmental imaging or modeling). In such cases, authors must (see the example after this list):

  • Describe the AI methodology in a reproducible manner in the Methods section
  • Identify the tool name, version, and developer
  • Provide original, unprocessed images if requested for editorial review
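
For instance, a Methods description might read as follows. This is a hypothetical illustration only; the tool, version, and study details are placeholders, not endorsed products or required wording:

“Land-cover change was quantified from satellite imagery using [tool name] version [X.Y] ([developer]). Classification parameters and training procedures are described in the Methods, and the original, unprocessed images have been retained and are available to the editorial office on request.”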

Graphical Abstracts and Cover Art

  • The use of generative AI for graphical abstracts is not permitted.
  • AI-generated cover artwork may be considered only with prior written approval from the journal’s editorial office and upon confirmation that all rights and attributions are properly cleared.

For Reviewers

Use of Generative AI and AI-Assisted Technologies in the Peer Review Process

Environmental Reports: An International Journal upholds peer review as a fundamental pillar of scholarly publishing, grounded in confidentiality, academic integrity, and independent expert judgment. Reviewers are expected to follow the highest ethical standards when evaluating submitted manuscripts.

Confidentiality of Manuscripts and Review Reports

All manuscripts received for review must be treated as strictly confidential. Reviewers must not upload, share, or input any part of a submitted manuscript into generative AI or AI-assisted tools. Doing so may:

  • Compromise the confidentiality and intellectual property rights of the authors
  • Violate data protection and privacy regulations, particularly where personal or sensitive information is involved

This confidentiality obligation also applies to peer review reports. Reviewers must not use AI tools to draft, edit, summarize, or improve the language of their review comments, as these reports may contain confidential assessments, critiques, or recommendations.

Prohibition of AI Use in Scientific Evaluation

Reviewers are not permitted to use generative AI or AI-assisted technologies to conduct, support, or influence the scientific evaluation of a manuscript. Peer review requires:

  • Independent critical thinking
  • Subject-matter expertise
  • Contextual and ethical judgment

These responsibilities cannot be delegated to automated systems. The use of AI in the review process introduces risks, including inaccurate interpretations, incomplete assessments, or biased conclusions. Reviewers remain fully responsible and accountable for the content, quality, and conclusions of their review reports.

Transparency Regarding Author AI Declarations

Authors submitting manuscripts to Environmental Reports: An International Journal are permitted to use AI tools during manuscript preparation, provided such use is appropriately disclosed. When applicable, author disclosures regarding AI use will appear in a dedicated AI Declaration section within the manuscript, typically placed before the references. Reviewers are encouraged to consider this information as part of their overall evaluation, without bias.

Use of AI by the Journal and Publisher

Environmental Reports: An International Journal may employ identity-protected, AI-assisted technologies to support editorial and administrative workflows. These may include:

  • Initial manuscript screening
  • Plagiarism detection
  • Completeness checks
  • Reviewer selection and workflow management

Such tools are used internally, comply with responsible AI principles, and adhere to applicable data privacy and confidentiality standards. They do not compromise the anonymity, rights, or intellectual property of authors or reviewers.

Commitment to Ethical Peer Review

Environmental Reports: An International Journal supports the responsible and ethical integration of AI technologies that assist—but never replace—human editorial and reviewer judgment. The journal remains committed to protecting confidentiality, ensuring integrity in peer review, and continuously improving editorial processes through transparent and well-governed innovation.

For Editors

Use of Generative AI and AI-Assisted Technologies in the Journal Editorial Process

A submitted manuscript must be treated as a confidential document. Editors should not upload a submitted manuscript, or any part of it, into a generative AI tool, as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.

This confidentiality requirement extends to all communication about the manuscript, including any notification or decision letters, as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an AI tool, even if only to improve language and readability.

Peer review is at the heart of the scientific ecosystem, and Environmental Reports: An International Journal abides by the highest standards of integrity in this process. Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans. Editors should not use generative AI or AI-assisted technologies to assist in the evaluation or decision-making process for a manuscript: the critical thinking and original assessment this work requires are beyond the scope of the technology, and there is a risk that it will generate incorrect, incomplete, or biased conclusions about the manuscript. The editor is responsible and accountable for the editorial process, the final decision, and its communication to the authors.

Environmental Reports: An International Journal’s AI author policy permits authors to use generative AI and AI-assisted technologies in the manuscript preparation process before submission, but only with appropriate oversight and disclosure, as described in the journal’s Guide for Authors. Editors can find such disclosure at the bottom of the paper, in a separate section before the list of references. If an editor suspects that an author or a reviewer has violated our AI policies, they should inform the Research Floor or the Environmental Reports: An International Journal editorial team.

Please note that Environmental Reports: An International Journal owns identity-protected AI-assisted technologies that conform to the RELX Responsible AI Principles, such as those used during the screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These in-house or licensed technologies respect author confidentiality. Our programs are subject to rigorous evaluation for bias and comply with data privacy and data security requirements.

Environmental Reports: An International Journal embraces new AI-driven technologies that support reviewers and editors in the editorial process, and we continue to develop and adopt in-house or licensed technologies that respect the confidentiality and data privacy rights of authors, reviewers, and editors.

Generative AI refers to artificial intelligence systems capable of creating new content, such as text, images, audio, video, or synthetic data, based on user input. Common examples include large language models and content-generation platforms such as ChatGPT, DALL·E, Jasper, Rytr, and similar tools.

Policy last updated: December 2025.
