Use of Artificial Intelligence in Academic Research: What Is Acceptable and What Is Not
Abstract
Artificial intelligence (AI), mostly in the form of chatbots based on large language models (LLMs), now permeates society at large, whether for positive, negative, trivial, or even toxic purposes. AI is also increasingly used in many aspects of scientific research, from large-scale data analysis to manuscript preparation. At Tektonika, the Executive Editor's group and the Core members engaged in discussions while drafting our guidelines on the use of AI. These exchanges revealed a divide between those who see AI as making significant positive contributions and those who are more skeptical, fearing a loss of expertise and a decline in cognitive skills. Some of the key questions that emerged about the benefits of AI are: Do we accept, or even encourage, the use of AI? At what stages of the research process? What is or is not acceptable for manuscript preparation or reviewing? Conversely, several challenges were also identified, prompting questions such as: Do we simply see the use of AI as inevitable, leaving us with a damage-limitation exercise? Are we concerned about laziness in research and writing, or about false information? Can we discern whether a manuscript is AI-generated, partially or wholly?
References
Clark, A. (2025), Extending minds with generative AI, Nature Communications, 16(1), 4627, https://doi.org/10.1038/s41467-025-59906-9.
Heidt, A. (2025), AI for research: the ultimate guide to choosing the right tool, Nature, 640(8058), 555–557, https://doi.org/10.1038/d41586-025-01069-0.
Holweg, M., Y. Wen, and J. A. Silva (2023), What to do when AI behaves badly, https://www.sbs.ox.ac.uk/oxford-answers/what-do-when-ai-behaves-badly, accessed: 2026-01-18.
Hosseini, M., L. M. Rasmussen, and D. B. Resnik (2024), Using AI to write scholarly publications, Accountability in Research, 31(7), 715–723, https://doi.org/10.1080/08989621.2023.2168535.
Kacena, M. A., L. I. Plotkin, and J. C. Fehrenbacher (2024), The use of artificial intelligence in writing scientific review articles, Current Osteoporosis Reports, 22(1), 115–121, https://doi.org/10.1007/s11914-023-00852-0.
Kwon, D. (2025), Is it OK for AI to write science papers? Nature survey shows researchers are split, Nature, 641(8063), 574–578, https://doi.org/10.1038/d41586-025-01463-8.
Macnamara, B. N., I. Berber, M. C. Çavuşoğlu, E. A. Krupinski, N. Nallapareddy, N. E. Nelson, P. J. Smith, A. L. Wilson-Delfosse, and S. Ray (2024), Does using artificial intelligence assistance accelerate skill decay and hinder skill development without performers’ awareness?, Cognitive Research: Principles and Implications, 9(1), 46, https://doi.org/10.1186/s41235-024-00572-8.
Messeri, L., and M. J. Crockett (2024), Artificial intelligence and illusions of understanding in scientific research, Nature, 627(8002), 49–58, https://doi.org/10.1038/s41586-024-07146-0.
Salvagno, M., F. S. Taccone, and A. G. Gerli (2023), Can artificial intelligence help for scientific writing?, Critical Care (London, England), 27(1), 75, https://doi.org/10.1186/s13054-023-04380-2.
Shabanov, I. (2024), A complete guide to using AI for academic writing, https://effortlessacademic.com/a-complete-guide-to-using-ai-for-academic-writing/, accessed: 2026-01-18.
Smith, C. (2025), Expert comment: Generative AI and the courts - a wake-up call, https://www.salford.ac.uk/news/expert-comment-generative-ai-and-the-courts-a-wake-up-call, accessed: 2026-01-18.
Walters, W. H., and E. I. Wilder (2023), Fabrication and errors in the bibliographic citations generated by ChatGPT, Scientific Reports, 13(1), 14045, https://doi.org/10.1038/s41598-023-41032-5.