Keywords

Generative AI, Information Literacy, ACRL Framework, Fake Scholarship

Abstract

Higher education has shown much interest in the teaching and learning aspects of generative AI, including its potential uses in information literacy instruction. However, less attention has been paid to the educational implications of generative AI’s impact on academic publishing. Because AI can produce deceptively authentic scholarly articles and falsified data sets, it is fueling the scientific paper mill industry. This has led to record numbers of AI-generated article submissions and scientific paper retractions, even in reputable journals. As a result, AI-generated fake scholarship poses a threat to students’ information literacy and learning development. This paper discusses the implications of generative AI for the Framework for Information Literacy for Higher Education.

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 4.0 License.

References

Association of College and Research Libraries. (2015). Framework for information literacy for higher education. https://www.ala.org/acrl/standards/ilframework

Carey, M. A., Steiner, K. L., & Petri, W. A., Jr. (2020). Ten simple rules for reading a scientific paper. PLOS Computational Biology, 16(7), e1008032. https://doi.org/10.1371/journal.pcbi.1008032

Chawla, D. S. (2024). Is ChatGPT corrupting peer review? Telltale words hint at AI use. Nature, 628(8008), 483-484. https://doi.org/10.1038/d41586-024-01051-2

Chen, A. (2024, May 17). Data integrity watchdogs call for stronger safeguards in scientific journals. STAT. https://www.statnews.com/2024/05/17/data-integrity-watchdogs-urge-stronger-safeguards/

Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 1-22. https://doi.org/10.1186/s41239-023-00392-8

Fetzer, M. (2024, May 12). Q&A: The increasing difficulty of detecting AI- versus human-generated text. Tech Xplore. https://techxplore.com/news/2024-05-qa-difficulty-ai-human-generated.html

Jafari, F., & Keykha, A. (2024). Identifying the opportunities and challenges of artificial intelligence in higher education: A qualitative study. Journal of Applied Research in Higher Education, 16(4), 1228-1245. https://doi.org/10.1108/JARHE-09-2023-0426

Khalifa, M., & Albadawy, M. (2024). Using artificial intelligence in academic writing and research: An essential productivity tool. Computer Methods and Programs in Biomedicine Update, 5, 100145. https://doi.org/10.1016/j.cmpbup.2024.100145

Kidd, C., & Birhane, A. (2023). How AI can distort human beliefs. Science, 380(6651), 1222-1223. https://doi.org/10.1126/science.adi0248

Leung, T. I., de Azevedo Cardoso, T., Mavragani, A., & Eysenbach, G. (2023). Best practices for using AI tools as an author, peer reviewer, or editor. Journal of Medical Internet Research, 25, e51584. https://doi.org/10.2196/51584

Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4), 102720. https://doi.org/10.1016/j.acalib.2023.102720

Maiberg, E. (2024, March 18). Scientific journals are publishing papers with AI-generated text. 404 Media. https://www.404media.co/scientific-journals-are-publishing-papers-with-ai-generated-text/

Májovský, M., Černý, M., Kasal, M., Komarc, M., & Netuka, D. (2023). Artificial intelligence can generate fraudulent but authentic-looking scientific medical articles: Pandora’s Box has been opened. Journal of Medical Internet Research, 25, e46924. https://doi.org/10.2196/46924

Mitchell, G. R., Church, S., Bartosh, T., Godana, G. D., Stohr, R., Jones, S., & Knowlton, A. (2011). Measuring scholarly metrics. Papers in Communication Studies, 25. https://digitalcommons.unl.edu/commstudiespapers/25

Mitchell, J. (2024, July 24). ChatGPT vs. Google search engine – which is better? Future Skills Academy. https://futureskillsacademy.com/blog/chatgpt-vs-google-search-engine/

Niemeyer, K., & Varanasi, L. (2024, June 30). The copyright lawsuits against OpenAI are piling up as the tech company seeks data to train its AI. Business Insider. https://www.businessinsider.com/openai-lawsuit-copyrighted-data-train-chatgpt-court-tech-ai-news-2024-6

Peck, J. (2023, September 26). What is generative AI and how does it work? Search Engine Land. https://searchengineland.com/what-is-generative-ai-how-it-works-432402

Pogla, M. (2023, November 23). ChatGPT generates fake data set to support scientific hypothesis. AutoGPT. https://autogpt.net/chatgpt-generates-fake-data-set-to-support-scientific-hypothesis/

Shaw, C., Yuan, L., Brennan, D., Martin, S., Janson, N., Fox, K., & Bryant, G. (2023, October 23). GenAI in higher education: Fall 2023 update. Tyton Partners. https://tytonpartners.com/time-for-class-2023/GenAI-Update

Stokel-Walker, C. (2024, May 1). AI chatbots have thoroughly infiltrated scientific publishing. Scientific American. https://www.scientificamerican.com/article/chatbots-have-thoroughly-infiltrated-scientific-publishing/

Van Noorden, R. (2023, December 12). More than 10,000 research papers were retracted in 2023 — a new record. Nature, 624(7992), 479-481. https://www.nature.com/articles/d41586-023-03974-8

Vicente, L., & Matute, H. (2023). Humans inherit artificial intelligence biases. Scientific Reports, 13, Article 15737. https://doi.org/10.1038/s41598-023-42384-8

Wilson, P. (1983). Second-hand knowledge: An inquiry into cognitive authority. Greenwood Press.

Zhang, L. (2024). Exploring generative AI with ChatGPT for possible applications in information literacy instruction. Journal of Electronic Resources Librarianship, 36(1), 64-69. https://doi.org/10.1080/1941126X.2024.2306058
