Artificial Intelligence (AI) Policy

TIERS Information Technology Journal recognizes the value of artificial intelligence (AI) and its potential to assist authors in the research and writing process. We welcome developments in this area that enhance opportunities for generating ideas, accelerating research discovery, synthesizing or analyzing findings, polishing language, and structuring submissions. Large language models (LLMs) and Generative AI offer significant opportunities for accelerating research and its dissemination. While these tools can be transformative, they cannot replicate human creative and critical thinking. As such, TIERS Information Technology Journal has developed this policy to guide authors, reviewers, and editors in making ethical decisions regarding the use of AI technology.

For Authors

  1. AI Assistance
    We acknowledge that AI-assisted writing tools have become more common as the technology becomes increasingly accessible. AI tools that suggest improvements to your work's language, grammar, or structure are considered assistive AI tools and do not require disclosure by authors or reviewers. However, authors remain responsible for ensuring their submission is accurate and meets the rigorous standards of scholarship.

  2. Generative AI
    The use of AI tools that generate content, such as references, text, images, or any other material, must be disclosed when used by authors or reviewers. In the references section, authors should cite original sources rather than citing Generative AI tools as primary sources. If your submission was partially or primarily generated using AI, this must be disclosed upon submission so the Editorial team can evaluate the generated content.

    Authors are required to adhere to the following guidelines:

    • Clearly indicate the use of language models in the manuscript, including which model was used and for what purpose. Please use the methods or acknowledgments section, as appropriate.

    • Verify the accuracy, validity, and appropriateness of the content and any citations generated by language models. Correct any errors, biases, or inconsistencies.

    • Be aware of the potential for plagiarism, as LLMs may reproduce substantial text from other sources. Check the original sources to ensure you are not plagiarizing.

    • Be mindful of the potential for fabrication, where the LLM may generate false content, including incorrect facts or citations that do not exist. Ensure all claims in your article are verified before submission.

    • AI tools such as ChatGPT should not be listed as authors on your submission.

    While submissions will not be rejected solely due to the disclosed use of Generative AI, if the Editor becomes aware that Generative AI was used without disclosure, the Editor reserves the right to reject the submission at any stage of the publishing process. Inappropriate use includes generating incorrect content, plagiarizing, or misattributing sources.

For Reviewers and Editors

  1. AI Assistance
    Reviewers may choose to use Generative AI tools to improve the quality of language in their review. If they do so, they remain responsible for the content, accuracy, and constructiveness of the review.

    Journal Editors maintain overall responsibility for the content published in their journal and act as gatekeepers of the scholarly record. Editors may use Generative AI tools for assistance in identifying suitable peer reviewers.

  2. Generative AI
    Reviewers who inappropriately use ChatGPT or other Generative AI tools to generate review reports will no longer be invited to review for the journal, and their review will not be considered in the final decision.

    Editors must not use Generative AI tools like ChatGPT to generate decision letters or summaries of unpublished research.

  3. Undisclosed or Inappropriate Use of Generative AI
    Reviewers who suspect inappropriate or undisclosed use of Generative AI in a submission should flag their concerns with the Journal Editor. If Editors suspect the use of ChatGPT or any other Generative AI tool in a submitted manuscript or review, they should consider this policy when undertaking an editorial assessment or contact their TIERS representative for advice.

    TIERS Information Technology Journal and the Editor will jointly investigate any concerns raised about inappropriate or undisclosed use of Generative AI in a published article. This investigation will be conducted in accordance with guidance issued by the Committee on Publication Ethics (COPE) and our internal policies.

TIERS Information Technology Journal encourages the ethical and transparent use of AI in academic writing and review processes. The primary focus is on upholding the integrity of scholarly work and ensuring that AI tools are used responsibly. Authors, reviewers, and editors must adhere to these guidelines to maintain the standards of rigorous scholarship and the ethical use of AI in academic publishing.