Generative Artificial Intelligence (AI) Policy

Core Principles for Using Generative AI in ESA Submissions

At Energy Systems and Applications (ESA), we recognize the evolving role of generative AI in academic research and publishing. We embrace technological advancements while upholding the highest standards of scientific integrity and ethical conduct. Our policies on generative AI aim to provide clear guidelines for authors, editors, and reviewers.
Authorship and Responsibility

  • Generative AI tools cannot be listed as authors or co-authors. Human accountability for all aspects of a manuscript is paramount.
  • Authors are solely responsible for the entire content, accuracy, and originality of their articles. The use of AI tools does not absolve authors of their scientific and ethical obligations.

Transparency and Declaration

  • Authors must clearly declare all instances where generative AI tools were used in their article.
  • This declaration should appear in the "Method" or "Acknowledgements" section of the manuscript.
  • The declaration must be detailed, specifying the full name of each AI tool, its version number, how it was used, and for what purpose.

Permitted Usage Areas for Authors

Generative AI can be a valuable aid in specific aspects of manuscript preparation:

  1. Language and Readability Improvements: AI can help refine grammar, spelling, punctuation, and fluency. However, these edits should not alter the original content, only enhance its readability.
  2. Idea Development and Research Planning: AI tools can support the generation of research questions, idea brainstorming, and research planning. Still, the fundamental conceptual basis and methodological framework must stem from the author's unique scientific perspective.
  3. Coding Assistance and Data Analysis: AI can assist in writing code for data analysis. Nevertheless, authors retain full responsibility for the consistency and appropriateness of all statistical analyses.
  4. Organization in Literature Reviews: AI can aid in organizing and categorizing existing literature. The depth and accuracy of the literature review remain the author's responsibility.
  5. Creating Visuals, Graphics, and Tables: AI can be used for:
    • Conceptual Diagrams and Explanatory Visuals: For visualizing theoretical constructs or processes, provided they accurately reflect the author's interpretations.
    • Data Visualization: To improve the visual quality of graphs, diagrams, and tables presenting research data.
    • Illustrations and Representative Visuals: To simplify complex concepts, as long as they are clear and do not mislead readers.
    • Transparency is key here: If AI tools are used for any visual elements (images, graphs, tables), this must be clearly stated in the description below the relevant visual, including the AI tool's name, version, and purpose. The scientific validity and accuracy of AI-generated visuals are the authors' sole responsibility.

Restricted or Prohibited Areas of Use for Authors

  • Content Creation: AI should not be used to write essential parts of the article, such as the abstract, introduction, literature review, or discussion, in their entirety. AI outputs should be treated as drafts that require substantial modification, improvement, and careful checking by the authors.
  • Generation and Interpretation of Results: AI cannot be used to generate, report, or interpret research results. Authors bear all responsibility for the accuracy, scope, and validity of data analysis results.
  • Reference Creation and Citation: Creating fabricated or unverifiable references or citing non-existent studies using AI tools is strictly prohibited. All sources must be verified and properly cited by the authors.
  • Academic Writing and Argument Development: The author is responsible for developing the article's core arguments, theoretical contributions, and main theses. AI can only serve as a supportive tool in these processes.

Procedures for Policy Violations

  • Failure to disclose AI use or using AI in violation of these policies may lead to rejection of the article.
  • If a violation is discovered in a published article, actions such as article retraction or the publication of a correction may be taken.
  • Repeated violations could result in the author's future submissions to ESA being rejected.

Policies for Generative AI Use for Editors

  • Confidentiality and Intellectual Property: Editors must not upload unpublished articles or related files, images, and information to AI tools. Protecting the confidentiality of manuscript content and authors' intellectual property rights is a primary responsibility.
  • Use of AI in Evaluation: Editors may use AI tools for evaluation processes (e.g., eligibility checks, reviewer selection) only with explicit permission from the journal management. Authors must be notified of any such AI use.
  • Reviewing Author Statements: Editors should carefully review authors' declarations regarding AI use and request additional information if needed. They are responsible for ensuring AI use complies with ESA policies.
  • Managing Suspicious Situations: If there are uncertainties about AI use, editors should discuss the issue openly with authors and seek additional evidence if necessary. Situations requiring detailed investigation should be escalated to the journal management.
  • Policy Updates: Editors are expected to stay informed about developments in generative AI technologies and updates to ESA's policies.

Policies for Generative AI Use for Reviewers

  • Confidentiality and Ethical Responsibility: Reviewers must never upload unpublished articles or associated files submitted for review to generative AI tools. This constitutes a breach of privacy and jeopardizes intellectual property rights.
  • Use of AI in Review Process: Reviewers should avoid using generative AI tools during the manuscript evaluation process. Reviews should be conducted using the reviewer's own expertise and knowledge.
  • Detecting AI Use: Reviewers should be vigilant for undeclared AI use in articles they review and report any suspected instances to the editors. Such determinations must be based on objective evaluation criteria.
  • Evaluation Ethics: Reviewers must evaluate authors fairly regarding AI use, separating journal rules from personal biases. Criticisms concerning AI use should be constructive and align with ESA's policies.

We believe these guidelines will help ensure that AI tools are used responsibly and ethically within the academic publishing process at ESA. If you have any questions, please don't hesitate to reach out.