Journal of Multimedia Information System
Korea Multimedia Society
Section C

Revolutionizing Education: The Era of Large Language Models

Ho-Woong Choi1,*, Sardor Abdirayimov2
1Department of Media Software, Sungkyul University, Anyang, Korea, techimpress@naver.com
2Department of AI and Big Data, Woosong University, Daejeon, Korea, 202112112@live.wsu.ac.kr
*Corresponding Author: Ho-Woong Choi, +82-10-8972-0517, techimpress@naver.com

© Copyright 2024 Korea Multimedia Society. This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Received: Feb 28, 2024; Revised: Mar 08, 2024; Accepted: Mar 18, 2024

Published Online: Mar 31, 2024

Abstract

This paper examines the rapid evolution of Large Language Models (LLMs) and their profound impact on the education sector since the emergence of ChatGPT in 2022. It explores the integration of LLMs into pedagogical practices, addressing both the enhancement of personalized learning and the potential for dependency issues. The paper also discusses the rapid development of offline LLMs, emphasizing their benefits for promising intelligent tutoring systems (ITS), and highlights the current challenges of running LLMs offline. Finally, the paper outlines the future of Artificial Intelligence in Education (AIEd) driven by advancements in LLM technologies.

Keywords: LLM; AIEd; ITS; ChatGPT; Language Models

I. INTRODUCTION

Since the boom of ChatGPT in late 2022, the LLM domain has expanded dramatically. Our society was amazed by the practical capability of such an AI agent to create human-like sentences. ChatGPT dominated the market until new models such as Bard, LLaMA, and other language models from tech giants were introduced. ChatGPT is one application of the powerful Large Language Model paradigm, and it gave everyone new capabilities for working with text. McMurtrie (2023) compared writing tools like ChatGPT to the calculators and computers that became part of math and science education [1]. Indeed, ChatGPT and similar chatbot technologies are rapidly being adopted into our daily lives. Over 2,000 papers on ChatGPT and LLMs have been published on arXiv.org [2].

Many organizations, including Microsoft and Notion, have added chatbot functionality to their products [3-4]. This work explains the current state of AIEd and the importance of private language models.

II. EDUCATIONAL SHIFT BROUGHT BY LLM

2.1. ChatGPT Influence

The first appearance of ChatGPT sent shockwaves through the higher education sector because of its capability to generate human-like text in very little time. It was initially used as a quick way to complete writing assignments. A major concern among educators is that ChatGPT may lead young learners to rely on an AI agent instead of developing their own critical thinking skills. Practicing writing is crucial for developing mature logical argumentation and a strong reasoning approach. According to Milano et al. (2023), foreign-language students and educationally disadvantaged students are likely to develop a strong dependence on chatbots rather than crafting their own sentences [5].

Many academic institutions have begun to explore pedagogical uses of language models. For example, Professor Ethan Mollick of the Wharton Business School has blogged about his trials of using AI agents in his courses [6]. Other groups converted their assignments from essay format to oral or other formats that were challenging for ChatGPT to complete. Finally, educational institutions started to integrate chatbot technologies and train their faculty to use them from a creativity standpoint.

Rudolph et al. argued that a major benefit of ChatGPT is allowing students to learn through experimentation and experience [7]. By using ChatGPT, students can evaluate different strategies and approaches to solving problems and achieving goals through game-based learning (Sutton and Allen, 2019) [8].

2.2. The Emergence of Llama, Bard, and GPT-4

Over time, language models have become safer and more precise in generating answers. Unlike earlier chatbots, newer LLMs are capable of surfing the web and providing references for their responses. According to Calonge et al. (2023), chatbots in education are being used to clarify subject concepts, solve problems, and analyze data [9]. Chatbots also offer interactive practice sessions and quizzes to assess students' understanding of academic concepts.

ChatGPT is one example of an LLM application. Meta introduced its own LLaMA family of open-source models to make the LLM field more accessible and fair [10]. Recently, Google also released its Gemma model family as an open-source project [11]. Such moves encourage organizations to consider fine-tuning their own language models for their own use cases.

III. CONTRASTING PARADIGMS: OPEN-SOURCE AND PROPRIETARY LANGUAGE MODELS

Proprietary language models, while providing advanced features and consistent updates, can raise data privacy issues. Using these models often involves sharing users' input data with the vendor, which becomes critical when students' and educators' interactions with the models contain sensitive information. Open-source language models, by contrast, offer flexibility and emphasize transparency. They are available to everyone and allow modifications to suit specific needs without privacy concerns. Thanks to open-source model development, the new paradigm of small language models (SLMs) is rapidly emerging. SLMs differ from LLMs in their compact parameter counts and are inference-optimized. SLMs like TinyGPT-V and TinyLlama are first steps toward building cost-effective, efficient, and high-performing language models [12-13].

The LangChain framework had gained more than 77,600 GitHub stars as of February 2024 [14]. LangChain provides tools and APIs to shape the output of any language model for specific tasks. Moreover, the Hugging Face Hub hosts more than 800,000 publicly accessible AI models for developers and researchers [15]. The increasing ability to run sophisticated LLMs offline, as seen with open-source projects like Ollama and llama.cpp, presents numerous advantages [16-17]. It provides more secure interaction with AI agents, retains user data privacy, and advances the development of intelligent tutoring systems, thereby enriching the educational experience.
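As a minimal illustration of offline inference, the sketch below queries a locally running Ollama server through its REST endpoint (`/api/generate`). The model name `tinyllama` and the tutoring prompt are illustrative placeholders, and the sketch assumes `ollama serve` is already running with the model pulled; the key point is that no student data leaves the machine.

```python
import json
import urllib.request

# Ollama's default local endpoint; all traffic stays on the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_model(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama server and return the reply.

    Requires `ollama serve` to be running and the model pulled beforehand,
    e.g. with `ollama pull tinyllama`.
    """
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Hypothetical tutoring query, answered entirely on-device:
# ask_local_model("tinyllama", "Explain the chain rule with one example.")
```

Because the request builder is separate from the network call, an institution could swap in its own fine-tuned model name or endpoint without changing the tutoring logic.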

IV. FUTURE OF LLM IN EDUCATION

According to Rudolph et al. (2023), intelligent tutoring systems (ITSs) are among the most promising ways of transforming education with AI technologies [7]. With AI-powered algorithms, ITSs can simulate the assistance provided by a human tutor, for example by offering personalized help in solving problems. A key benefit of ITS technologies is their ability to adapt to students' characteristics and emotional states in every aspect of their learning in real time, resulting in personalized adaptive learning (PAL) (Peng et al., 2019) [18].

LLMs are at the heart of the personalized learning revolution. Instead of traditional, static, one-size-fits-all educational materials, LLMs can provide personalized, adaptive, and dynamic insight and feedback, signaling a paradigm shift in individualized learning. They can adjust their pace and style to suit each learner, and platforms such as Hugging Face offer an extensive range of models and datasets that enable advanced educational tools far beyond traditional frameworks.

4.1. Benefits of Running Offline Models

Running language models offline offers substantial benefits for organizations aiming for secure and private AI interactions. Processing queries locally, without transmitting information over the internet, ensures data privacy.

Another major benefit of offline chatbots is that they can be customized to an organization's specific needs. Last but not least, offline models are a step toward realizing sophisticated intelligent tutoring systems (ITSs), one of the most promising innovations in Artificial Intelligence in Education (AIEd).

4.2. Challenges of Running Offline Models

While offline language models offer advantages in privacy and security, they face significant challenges. First, maintaining offline language models poses distinct difficulties compared with their cloud-based counterparts: without a pipeline for automatic updates, these models can quickly become outdated. Second, the performance of offline models often lags without the robust computational support of the cloud, potentially leading to slower responses that displease users. Additionally, interfacing with these models may prove less efficient, causing user frustration due to increased wait times. In sum, integrating current offline models demands careful planning and resources to enable swift interaction with chatbots in educational environments.

4.3. Prompt Engineering as New Subject

In the chatbot context, a prompt is a set of instructions that guides a language model's response generation. Language models are quite sensitive to the given prompt: while a well-written prompt unlocks a chatbot's full potential, a poor prompt may lead to a non-sequitur response. Leveraging the potential of LLMs in productive and ethical ways requires a systematic focus on prompt engineering. In [19], the authors implemented the flipped interaction pattern, which directs the chatbot to ask the user questions until it obtains enough information to achieve a particular goal. Teaching the nuances of prompt engineering to young learners would enhance their digital literacy.
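The flipped interaction pattern cited above can be sketched as a reusable prompt template. The wording below and the example `goal` are illustrative assumptions, not the exact prompts used in [19].

```python
def flipped_interaction_prompt(goal: str) -> str:
    """Build a flipped-interaction prompt: instead of answering directly,
    the model is instructed to interview the user until it can meet the goal."""
    return (
        "From now on, ask me questions one at a time until you have enough "
        f"information to {goal}. When you have enough information, "
        "present the final answer and explain your reasoning."
    )

# Hypothetical tutoring use: the chatbot diagnoses a student's gaps
# before offering any study material.
prompt = flipped_interaction_prompt(
    "design a personalized study plan for my calculus exam"
)
```

The resulting string would be sent as the opening message of a chat session, shifting the model from answer-giver to interviewer, which is the pedagogically interesting behavior for tutoring scenarios.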

V. CONCLUSION

To conclude, this paper has discussed the impact of Large Language Models like ChatGPT on education, highlighting both their transformative potential and the challenges they present. These AI tools have revolutionized personalized learning and brought new opportunities for ITSs. The paper also highlighted the differences between proprietary and open-source models, as well as the innovation of offline-capable models, presenting two projects as examples of on-device offline chatbots. However, maintaining these offline systems poses its own challenges, particularly in ensuring responsive interactions. As AI continues to evolve, the challenges of offline chatbots may well be overcome. The future of education with AIEd appears promising, with students receiving personalized and inclusive education.

REFERENCES

[1] B. McMurtrie, "ChatGPT is everywhere: Love it or hate it, academics can't ignore the already pervasive technology," The Chronicle of Higher Education, vol. 69, no. 15, pp. 32-38, Mar. 2023.

[3] All Things How Blog, https://allthings.how/how-to-use-notion-ai/, 2023.

[5] S. Milano, J. A. McGrane, and S. Leonelli, "Large language models challenge the future of higher education," Nature Machine Intelligence, vol. 5, no. 4, pp. 333-334, 2023.

[6] E. Mollick and L. Mollick, "Assigning AI: Seven approaches for students, with prompts," arXiv preprint arXiv:2306.10052, 2023.

[7] J. Rudolph, S. Tan, and S. Tan, "ChatGPT: Bullshit spewer or the end of traditional assessments in higher education?" Journal of Applied Learning and Teaching, vol. 6, no. 1, pp. 342-363, 2023.

[8] R. Bina, M. J. Sutton, and K. Allen, "Emotify! The power of the human element in game-based learning, serious games and experiential education," EI Games LLC, 2019.

[9] D. S. Calonge, L. Smail, and F. Kamalov, "Enough of the chit-chat: A comparative analysis of four AI chatbots for calculus and statistics," Journal of Applied Learning and Teaching, vol. 6, no. 2, 2023.

[10] LLaMA, Meta language model, https://llama.meta.com/.

[11] Gemma, open-source model by Google, https://blog.google/technology/developers/gemma-open-models/, 2024.

[12] Z. Yuan, L. Zhaoxu, and S. Lichao, "TinyGPT-V: Efficient multimodal large language model via small backbones," arXiv preprint arXiv:2312.16862, 2023.

[13] P. Zhang, Z. Guangtao, W. Tianduo, and L. Wei, "TinyLlama: An open-source small language model," arXiv preprint arXiv:2401.02385, 2024.

[14] LangChain GitHub repository, https://github.com/langchain-ai/langchain.

[15].

[16] Ollama language model website, https://ollama.com/.

[17].

[18] H. Peng, S. Ma, and J. M. Spector, "Personalized adaptive learning: An emerging pedagogical approach enabled by a smart learning environment," Smart Learning Environments, vol. 6, no. 1, pp. 1-14, 2019.

[19] J. White, Q. Fu, S. Hays, M. Sandborn, C. Olea, and H. Gilbert, et al., "A prompt pattern catalog to enhance prompt engineering with ChatGPT," arXiv preprint arXiv:2302.11382, 2023.

[20] P. Limna, T. Kraiwanit, K. Jangjarat, P. Klayklung, and P. Chocksathaporn, "The use of ChatGPT in the digital era: Perspectives on chatbot implementation," Journal of Applied Learning and Teaching, vol. 6, no. 1, 2023.

[21] M. M. Van Wyk, "Is ChatGPT an opportunity or a threat? Preventive strategies employed by academics related to a GenAI-based LLM at a faculty of education," Journal of Applied Learning and Teaching, vol. 7, no. 1, 2024.