Chatbots in Education: A Dual Perspective on Innovation and Ethics

The integration of Artificial Intelligence (AI) in education, particularly through the use of chatbots, has garnered significant attention for its potential to revolutionize e-learning. Chatbots, powered by Natural Language Processing (NLP), offer a promising avenue for personalizing educational experiences and enhancing student engagement (Bradeško & Mladenić, 2012; Anghelescu & Nicolaescu, 2018). This article synthesizes the current literature on the application of chatbots in e-learning, highlighting their benefits, the opportunities offered by sentiment analysis, and the ethical considerations that arise from their deployment. Recent studies underscore the role of chatbots in fostering student engagement and reducing dropout rates in e-learning environments (Labadze et al., 2023; Tapalova & Zhiyenbayeva, 2022). Chatbots embedded in adaptive e-learning systems have been shown to adjust dynamically to student needs, thereby reducing anxiety and promoting higher-order cognitive skills (Hsu et al., 2023; Info et al., 2024).

Building on this proliferation, this article proposes a new Socratic method: the application of the traditional Socratic method to interactions with chatbots, in order to further enhance students' critical thinking and epistemological analysis. This method encourages learners to engage in critical dialogue with the chatbot, challenging the information it provides. Despite concerns about chatbots' ability to accurately solve complex problems, such as in physics (Gregorcic & Pendrill, 2023), their "hallucinations" or errors can be leveraged as educational tools to stimulate critical thinking and identify misconceptions.

Another important innovation in digital pedagogy is sentiment analysis. Its deployment enables teachers, and even chatbots themselves, to understand and respond to students' emotions, with the aim of improving engagement and reducing dropout rates (Meroto et al., 2024; Pant et al., 2023). However, this technology raises ethical concerns related to privacy, data manipulation, and potential biases in decision-making systems (Singh & Ram, 2024; Susser et al., 2019; Burrell, 2016; Barocas et al., 2023). The ethical challenges associated with sentiment analysis in e-learning therefore call for careful consideration and adherence to privacy and consent regulations.

In conclusion, chatbots in e-learning offer significant potential for enhancing the educational experience through personalized and adaptive learning. However, the ethical implications of their use, particularly concerning sentiment analysis, must be addressed to ensure the protection of student data and the equitable treatment of all learners. Future research should continue to explore the optimization of digital interactions, considering the potential for educational innovation that chatbots bring to the academic industry (Denny et al., 2024).

____________________

Journal of Digital Pedagogy – ISSN 3008-2021
2024, Vol. 3, No. 1, pp. 3-10
https://doi.org/10.61071/JDP.2420

____________________

Literature Review on Chatbots in E-Learning

The literature reflects both broad acceptance of and lively debate about the use of AI in education. As a starting point for defining chatbots, this article adopts the Oxford English Dictionary definition, according to which a chatbot is “a computer program designed to simulate conversation with a human user, usually over the internet; especially one used to provide information or assistance to the user as part of an automated service” (Oxford English Dictionary, 2023).

The rise of chatbots is a cross-sector phenomenon that affects multiple areas. Thanks to their ability to simulate human interactions through Natural Language Processing (NLP) (Bradeško & Mladenić, 2012), chatbots are already implemented in sectors such as healthcare (Oh et al., 2017), customer service (Xu et al., 2017), education (Anghelescu & Nicolaescu, 2018), and academic advising (Alkhoori et al., 2020), demonstrating a versatility that favours their widespread adoption. Particularly significant is their contribution to the educational sector, where chatbots personalise the educational experience, combining Learning Analytics and Educational Data Mining with techniques such as fuzzy logic, decision trees, Bayesian networks, neural networks, genetic algorithms, and hidden Markov models (Colchester et al., 2017). Recent literature confirms a growing interest in these technologies, which promise to optimize the interaction between students and educational content, outlining new horizons for digital pedagogy (Labadze et al., 2023; Tapalova & Zhiyenbayeva, 2022).

One of the most interesting areas of innovation in this field is the optimization of interactivity, which addresses the lack of student engagement in traditional e-learning. For example, chatbots can provide personalized support in large classes (Winkler & Söllner, 2018), offering immediate answers on various aspects of a course, such as teaching materials (Cunningham-Nelson et al., 2019) and exercises (Ismail & Ade-Ibijola, 2019). Thomas (2020) and Okonkwo and Ade-Ibijola (2021) have also highlighted the efficacy of chatbots in improving the personalization of learning, a contribution expanded upon by Hsu, Huang, Hwang, and Chen (2023), who observed that adaptive e-learning increases engagement and helps reduce learners' anxiety by dynamically adapting to their needs.

Indeed, given the limited interactivity of traditional e-learning, students tend to lose interest in the course, resulting in high dropout numbers. In adaptive e-learning enhanced with chatbots, students not only interact with content suited to their situation but can also cultivate their own higher-order cognitive skills. A recent study in Zimbabwe on graduate students' perceptions of AI chatbots indicates that chatbots foster the development of higher-order cognitive skills, which are closely related to meta-cognition (Info et al., 2024). In addition, a recent study reports that social robots are of great help to students affected by autism, supporting the hypothesis that learning through AI allows for the establishment of a sense of trust (Vagnetti et al., 2024). In the evolving landscape of educational technology, it is increasingly recognized that the integration of social robots and chatbots into e-learning platforms can significantly enhance the personalization of content delivery. While social robots may not always directly incorporate chatbot functionalities into their physical frameworks, the overarching principle is that both technologies can be leveraged to tailor educational experiences more closely to individual learner needs. This customization is pivotal in elevating the efficacy of traditional e-learning modalities. Furthermore, the fusion of social robots with multimedia-enriched chatbots emerges as a potent strategy for enriching e-learning environments, making learning more accessible, personalized, and effective (Lamerichs, 2019).
In recent years, there have been notable exploratory efforts in the domain of educational technology. In 2020, a significant experiment was conducted using MathBot (Cai et al., 2020) to investigate the potential of chatbots in replicating the interactive nature of conventional mathematics instruction. In a study recruiting participants via Amazon Mechanical Turk, MathBot produced learning outcomes on par with those achieved through Khan Academy's videos and tutorials, underscoring the potential of chatbots as valuable adjunct tools in online education. Furthermore, the researchers incorporated a bandit algorithm to customize the conversational pace, thereby adapting the educational trajectory to the unique requirements of each student, as illustrated in the sketch below.
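
To make the pacing mechanism concrete, the following Python fragment offers a minimal sketch of an epsilon-greedy multi-armed bandit that chooses among three hypothetical conversational paces and updates its estimates from a per-interaction reward (for instance, whether the learner solved the follow-up exercise). The pace labels, reward signal, and parameter values are illustrative assumptions and do not reproduce the algorithm actually used in the MathBot study.

```python
import random

class PaceBandit:
    """Epsilon-greedy bandit over conversational paces (illustrative only)."""

    def __init__(self, arms=("slow", "medium", "fast"), epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {arm: 0 for arm in arms}     # times each pace was tried
        self.values = {arm: 0.0 for arm in arms}   # running mean reward per pace

    def choose_pace(self):
        # Explore with probability epsilon, otherwise exploit the best-known pace.
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)

    def update(self, pace, reward):
        # Incremental update of the mean reward for the chosen pace.
        self.counts[pace] += 1
        n = self.counts[pace]
        self.values[pace] += (reward - self.values[pace]) / n


# Example: reward = 1.0 if the student solved the next exercise, else 0.0.
bandit = PaceBandit()
pace = bandit.choose_pace()
bandit.update(pace, reward=1.0)
```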

However, chatbots used for teaching purposes in the mathematical domain usually face a technical challenge: mathematical language is not as flexible as natural language. Natural language, used in everyday communication, is characterized by a degree of flexibility and error tolerance that often does not compromise the understanding of the conveyed message. In contrast, mathematical language is noted for its precision and rigidity, where each symbol, formula, or expression carries a specific and well-defined meaning. A minor error in a mathematical expression can lead to vastly different interpretations, altering the original meaning and potentially leading to incorrect conclusions. For instance, one study underlines the peculiar significance of unambiguity in the mathematical domain when transitioning from a graphical representation to an algebraic one and vice versa (Lo Sapio et al., 2022). Moreover, the need for precision in mathematical language is implicit in the discussion of equation-solving techniques, where a minor error can lead to incorrect graphical interpretations or incorrect algebraic solutions. Successful communication in mathematical language therefore differs from successful communication in natural language, where context and inference help overcome minor grammatical errors without losing the overall meaning of the message.
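
To make the contrast concrete, consider a single-character slip that natural-language tolerance would forgive but mathematical language does not; the expressions below are an illustrative example of the point, not drawn from the cited study.

```latex
% A one-character difference changes both the algebraic object and its graph:
% \frac{1}{x+1} is the reciprocal of (x + 1); \frac{1}{x} + 1 is the reciprocal of x, shifted up by one.
\[
\frac{1}{x+1} \;\neq\; \frac{1}{x} + 1,
\qquad \text{e.g. for } x = 1:\quad \tfrac{1}{2} \neq 2 .
\]
```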

Nonetheless, chatbots can also contribute significantly in the mathematical field. Khan Academy has recently introduced a notable innovation in digital education with the development of Khanmigo, an AI-powered teaching assistant aimed at both students and teachers. The platform stands out for its intelligent tutoring approach, which, instead of providing direct answers, guides users towards self-discovery through questions and suggestions, and for its ability to generate dialogues with historical and literary characters. This innovation allows an LLM to address the teaching of mathematical language through digital pedagogy, in particular by allowing the chatbot to become a tutor. Although Khanmigo is not yet available in Europe, the growing interest in the USA, as reported by journalistic sources (Singer, 2023), highlights both enthusiasm and challenges, particularly regarding the problem of hallucinations (Bidarian, 2023).

A salient finding of this literature review is that almost every publication consulted reports strong interest in, and promising results from, implementing chatbots in digital pedagogy. Future research could continue to explore ways to further optimize these digital interactions, taking into account the economic implications and the potential for educational innovation that these tools bring to the academic industry, as highlighted by Denny et al. (2024). In this spirit, the proposal in the next section sheds light on a possible integration into chatbots, starting from a widely recognized issue in the literature: the problem of hallucinations.

 

The Opportunity of Hallucinations

Gregorcic and Pendrill observed that ChatGPT failed to solve a fundamental physics problem, highlighting a lack of consistency in its responses and a tendency to contradict itself, despite its advanced linguistic skills (Gregorcic & Pendrill, 2023). This behaviour was compared to the Dunning-Kruger effect, in which an individual with limited skills in a certain field overestimates their abilities. The authors suggest that teachers could exploit these characteristics to recognize and address similar behaviours in students. The study concludes that ChatGPT might not be suitable as a physics tutor for students who use it independently for learning.

The approach and methodology of Gregorcic and Pendrill pave the way for an innovative perspective on the use of chatbots like ChatGPT in education. The authors hinted at the possible use of ChatGPT as a useful platform for teachers. Here, it is hypothesized that a ChatGPT-powered chatbot subject to hallucinations can be a valid support in education, not as software that holds true knowledge, but precisely as a holder of truth-like knowledge. Interacting with a chatbot in this sense both requires and stimulates a high level of critical thinking and epistemological analysis. The student can interact with the chatbot starting from an inverted paradigm, aimed at encouraging students to engage in critical dialogue with the machine, challenging the information provided, and refining their analytical and critical skills through a Socratic type of interaction.

Contrary to the conclusions of Gregorcic and Pendrill, it is hypothesized that chatbots can be valuable not only for teachers, helping them to recognize students influenced by the Dunning-Kruger effect, but also for the students themselves. The latter can deepen their understanding of the subject and develop more critical thinking by adopting an approach of constructive contradiction, similar to that of an "obstinate opposer", establishing a peculiar form of dialogue in a Socratic manner. As early as 2014, research by Goda et al. (2014) demonstrated that preparation via chatbot can strengthen critical thinking and student engagement, especially in language learning contexts. As far as the author is aware, there are no studies yet focused on the systematic application of the Socratic method via chatbots in higher education (HE), suggesting that this innovation could be relevant to further optimise digital pedagogy.

The Socratic method, named after the ancient Greek philosopher Socrates, is a dialectical technique that fosters inquiry and discussion through stimulating questions, encouraging critical thinking and self-analysis. It seeks to deepen understanding and facilitate problem-solving by engaging interlocutors in roles that represent opposing viewpoints. Applied to interaction with chatbots, the Socratic method allows the participants to debate in a critical way: while the learner may advocate for position "A", the chatbot argues against it, defending position "not-A", and vice versa. This method not only strengthens logical argumentation, often employing quantifiers to sharpen the debate, but also enhances judgment by evaluating the coherence of responses and bolsters epistemological inquiry through the critical analysis of information sources in the discipline in which the student is involved.
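
As a purely illustrative sketch of how such an interaction could be configured, the Python fragment below assembles the instructions for an "obstinate opposer" chatbot that argues position "not-A" against whatever thesis the student advances. The function names, the prompt wording, and the `call_llm` placeholder are hypothetical; any large language model back end could stand behind the stub, and the sketch does not describe a specific existing product.

```python
def build_socratic_opposer_prompt(discipline: str) -> str:
    """System instructions for an 'obstinate opposer' chatbot (illustrative)."""
    return (
        f"You are a Socratic dialogue partner in {discipline}. "
        "Whatever thesis the student asserts, argue the opposing position. "
        "Do not give direct answers: reply with probing questions, "
        "counterexamples, and requests to justify premises and sources. "
        "Concede a point only when the student supports it with evidence."
    )


def call_llm(system_prompt: str, dialogue: list[dict]) -> str:
    # Placeholder: connect whichever chat model or API is available.
    # Kept as a stub so the sketch stays backend-agnostic.
    raise NotImplementedError("Plug in an LLM of your choice here.")


dialogue = [{"role": "student", "content": "Thesis A: heavier objects fall faster."}]
system_prompt = build_socratic_opposer_prompt("physics")
# reply = call_llm(system_prompt, dialogue)  # the chatbot would defend "not-A"
```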

While the Socratic method does not ensure unfettered access to knowledge, it serves as an invaluable adjunct to traditional educational models, enriching the learning experience through the art of dialogue and dialectic. It encourages the examination of the premises and conclusions of opposing arguments, thereby improving analytical skills and critical reflection. The integration of this method with chatbot technology offers fresh avenues for education, catering to diverse learning goals. For example, a curriculum might be designed to hone debating skills, urging students to navigate and articulate intricate concepts. Alternatively, it could focus on enhancing logical reasoning or deepening the epistemological understanding of specific subjects, guiding learners in the investigation and assessment of information sources. Incorporating chatbots into Socratic dialogues presents a novel strategy for education, in which students engage with contrasting arguments that stimulate them to apply their knowledge critically and respond thoughtfully. This approach not only makes the educational journey more dynamic and interactive but also contributes significantly to the development of a well-rounded intellectual foundation.

Additionally, chatbots can play a significant role in teaching respect for appropriate timing within a dialogue. In interactions with artificial intelligence systems, communication timing is mediated by the intrinsic waiting period of the computational process that precedes the response. This dynamic turns interaction with the machine into an exercise in respecting communication timing, offering a pedagogical advantage that can also refine the logical-deductive ability to dissect the premises and conclusions of opposing arguments in a specific disciplinary field. Traditionally, the Socratic method is already well known for its potential: one example of its traditional application is the Great Books program at the University of Chicago, which highlights the qualities, benefits, and virtues of the Socratic method in contexts characterized by challenges such as pandemics and technological advancements (González Díaz, 2021).

While the Socratic method is recognized as an effective teaching methodology, its application to chatbots has not yet been implemented in courses. The proposal, therefore, is to enhance digital pedagogy with this new Socratic method. In an era characterized by the prevalence of digital technology and post-pandemic challenges, critical thinking proves to be an essential tool for distinguishing between authentic and deceptive information. Critical and reflective skills are particularly important, especially considering that one of the fundamental missions of educational agencies is to act as a facilitator of human reflection, essential for navigating complex contexts (Capogna, 2022).

 

Ethical Concerns on Chatbots and Sentiment Analysis in Education

Traditional e-learning is often associated with high dropout rates, a problem attributed to a lack of interactivity, personalization, and, consequently, engagement. Unlike the in-person learning environment, where direct observation of the student is possible, traditional e-learning significantly limits the ability to monitor the learner’s status and prevent dropout.

Among technologies that incorporate generative artificial intelligence (GAI), sentiment analysis emerges as a promising tool. This technique offers an unprecedented opportunity to understand and respond to students' emotions, with the ultimate goal of enhancing engagement and reducing dropout rates. For example, research on the use of Artificial Intelligence (AI) in Brazilian higher education suggests that AI, including sentiment analysis, can be tailored to individual student needs, thereby enhancing engagement and reducing dropout rates (Meroto et al., 2024). Sentiment analysis thus represents an innovative solution to the challenges of traditional e-learning.

In further support of this thesis, a study on sentiment analysis of student feedback in Massive Open Online Courses (MOOCs) revealed that assessing learners' sentiments can unveil factors influencing course completion rates. These data can be used to develop effective student retention strategies (Pant et al., 2023).

A notable example is the approach of Clarizia et al. (2018) to sentiment analysis in the e-learning context. The authors utilized Latent Dirichlet Allocation (LDA) to extract sentiments from student comments, using chats, forums, and emails as data sources. This study highlights the utility of LDA in sentiment classification, offering a probabilistic framework for understanding the emotional dynamics of students in digital environments. Through the construction of a Mixed Graph of Terms, Clarizia et al. demonstrate the ability to discriminate between positive and negative sentiments, contributing to a more effective management of the online learning environment. By combining these techniques with learning analytics from platforms like Moodle, their article provides a practical overview of sentiment analysis in adaptive e-learning, showing how such technology can serve as a thermometer for the emotional climate of the class and allow teachers or chatbots to adapt content and teaching methodologies based on students' emotional feedback.
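
The following Python fragment is a minimal, illustrative sketch of this kind of pipeline; it does not reconstruct Clarizia et al.'s Mixed Graph of Terms. It assumes scikit-learn is available and uses a toy set of student comments: LDA extracts latent topics whose top words a teacher, a chatbot, or a downstream rule could then map onto broadly positive or negative emotional themes.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus of student comments (forum posts, chat messages, emails).
comments = [
    "I love the interactive exercises, they keep me motivated",
    "The deadlines are stressful and the forum feels ignored",
    "Great explanations in the videos, very clear and engaging",
    "I am confused and frustrated by the assignment instructions",
]

# Bag-of-words representation, then LDA with two latent topics.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(comments)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

# Inspect the top words per topic; a human (or a downstream rule) can label
# each topic as a broadly positive or negative emotional theme.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-5:]]
    print(f"Topic {idx}: {top_words}")
```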

However, because sentiment analysis collects sensitive data, it requires further examination from an ethical and legal standpoint. First and foremost, since such systems gather a vast amount of data on students' learning preferences, progress, and emotional reactions, a problem arises regarding the potential manipulation of student behaviour through mechanisms described by Singh and Ram (2024), who discuss the ethical implications of data-driven personalized learning. By subtly influencing students' choices and preferences, adaptive e-learning could be programmed to promote particular courses or educational materials, modifying the learner's preferences. Examples of this can be found in the work of Susser, Roessler, and Nissenbaum (2019), who examine how digital technologies can subtly influence users' choices, raising issues of autonomy and consent. The problem clearly does not concern e-learning alone but rather a much broader question: how do technologies influence our behaviour, making us dependent and, more importantly, distorting choices through self-fulfilling prophecies?

Moreover, by operating in ways that are not transparent to students or teachers, a sentiment analysis algorithm could classify a student as frustrated or unmotivated based on the analysis of the language used in their online communications, influencing the type of support offered by the system. Burrell (2016) explored the causes of opacity in algorithmic systems, identifying three main types: intentional, technical, and intrinsic. This opacity can lead to distrust and difficulties in interpreting or contesting the decisions made by AI systems.

Given the probabilistic nature of their outputs, and since these models are trained on data already present in our society, it is very likely that biases will manifest in decision-making systems, negatively affecting students' educational experience. For example, if a sentiment analysis system is trained on a dataset that does not fairly represent the diversity of the student population, it could misinterpret the emotional expressions of certain groups of students. Barocas, Hardt, and Narayanan (2023) discuss how biases in training data can lead to unfair or discriminatory outcomes in machine learning systems.
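
One simple way to surface this risk is a per-group audit of the classifier's errors. The sketch below compares, for each demographic group, how often a hypothetical sentiment model misses genuinely negative feedback (its false-negative rate); the records, group labels, and predictions are entirely synthetic and assume that consented demographic metadata is available.

```python
from collections import defaultdict

# Each record: (group label, true sentiment, sentiment predicted by the model).
# Purely synthetic data for illustration.
records = [
    ("group_a", "negative", "negative"),
    ("group_a", "negative", "negative"),
    ("group_b", "negative", "positive"),   # missed negative feedback
    ("group_b", "negative", "positive"),
    ("group_b", "negative", "negative"),
]

missed = defaultdict(int)
total = defaultdict(int)
for group, truth, prediction in records:
    if truth == "negative":
        total[group] += 1
        if prediction != "negative":
            missed[group] += 1

# False-negative rate per group: a large gap signals that one group's
# emotional expressions are being systematically misread.
for group in total:
    print(group, missed[group] / total[group])
```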

From the literature examined, it is clear that although sentiment analysis can innovate and personalize content even for large classes, the ethical challenges could pose significant problems for this form of innovation. Indeed, this approach offers considerable opportunities to tailor the educational experience to the individual needs of students, potentially improving engagement and the effectiveness of learning through more relevant and targeted content. However, the use of sentiment analysis in the educational field raises important ethical issues, such as the privacy of student data, informed consent, and the potential polarization or discrimination that could result from the interpretation and application of the analysed data. Therefore, it is crucial that educators and technologists intending to adopt this technology proceed with caution, establishing clear guidelines and governance mechanisms to ensure that the innovation is carried out responsibly and ethically, with the well-being and rights of students at the forefront.

 

Classification of Sentiment Analysis under the AIA

Under the proposed EU Artificial Intelligence Act (AIA), the use of sentiment analysis in education would likely be categorized as a "high-risk" application. The AIA defines four risk categories for AI systems: unacceptable, high, limited, and minimal (European Parliament, 2024). High-risk AI systems are those that pose significant risks to the health, safety, or fundamental rights of individuals. Given that sentiment analysis in education could impact students' privacy, psychological well-being, and educational outcomes, it would be subject to the stringent requirements that the AIA places on high-risk AI systems, which emphasize transparency and human oversight (Panigutti et al., 2023). Although sentiment analysis would fall under high-risk regulation, the technology remains highly promising. The AIA does not ban the use of black-box AI systems, but it does require clear communication about how sentiment analysis is used, what data is collected, and how decisions are made on the basis of the analysis. According to the AIA, documentation and the possibility of human oversight are priorities for ensuring that such systems align with ethical standards and respect fundamental rights (Panigutti et al., 2023). Therefore, while sentiment analysis can potentially improve educational outcomes and reduce dropout rates, its classification as a high-risk application under the AIA means it must be carefully managed to address ethical issues before being widely implemented in educational settings. By contrast, while sentiment analysis poses problems and requires close monitoring, the use of chatbots seems, in policy terms, to face few controls and to promise significant proliferation, which could facilitate their adoption in the large market of courses. The AIA categorizes chatbots as tools with limited risk, thus suggesting a proliferation of such technology in Europe.

 

Conclusions

The comprehensive literature review on the use of chatbots in e-learning environments underscores the transformative potential of AI in education. Chatbots, powered by NLP, have been successfully implemented across various sectors, with their adaptability being particularly beneficial in personalizing educational experiences. The integration of chatbots within digital pedagogy has been shown to enhance student engagement, reduce dropout rates, and foster the development of higher-order cognitive skills.

Studies have demonstrated that chatbots can effectively address the lack of interactivity in traditional e-learning by providing personalized support and immediate feedback. This is crucial in large classes where individual attention is limited. The use of chatbots for personalized learning, as highlighted by Thomas (2020) and Okonkwo and Ade-Ibijola (2021), and the adaptive e-learning models discussed by Hsu et al. (2023), indicate a positive impact on student engagement and anxiety reduction. It is also particularly noteworthy that chatbots have the potential to promote higher-order cognitive skills: the study by Info et al. (2024) suggests that chatbots can facilitate meta-cognitive skill development, which is essential for critical thinking and problem-solving. Additionally, the use of social robots and multimedia-enriched chatbots can further enrich e-learning environments, making them more accessible and effective.

Based on the literature review, this article also proposed a methodology that offers an innovative perspective on the use of chatbots for educational purposes. The study by Gregorcic and Pendrill (2023) suggests that chatbots, despite their limitations in solving complex problems, can be a useful resource for teachers. This article suggests that chatbots can also benefit students through the new Socratic method, an approach that encourages students to challenge the information provided by chatbots, thereby refining their analytical skills.

Lastly, we can conclude that the application of sentiment analysis in e-learning raises significant ethical concerns. While sentiment analysis can provide insights into students' emotions and contribute to personalized learning experiences, it also poses risks related to data privacy, manipulation, and bias. The works of Singh and Ram (2024), Susser et al. (2019), Burrell (2016), and Barocas et al. (2023) highlight the need for transparency, fairness, and respect for student autonomy in the deployment of AI systems in education.
Further research could focus on the application of the new Socratic method via chatbots, exploring a novel approach to promoting critical thinking and epistemological analysis within higher education. Future studies should systematically explore the integration of this pedagogical technique with chatbot technology, assessing its impact on student engagement, learning outcomes, and the development of higher-order cognitive skills. This research could lead to the design of chatbots that not only deliver content but also engage students in meaningful dialogue and reflection. Empirical studies could also be conducted to examine the implementation of chatbots across various educational settings and to evaluate their practical impact on learning outcomes and student satisfaction. Such research should consider diverse educational contexts, subject areas, and student demographics to gain a comprehensive understanding of how chatbots can be most effectively deployed. These studies would provide valuable insights into the pedagogical benefits and limitations of chatbots, informing their future development and integration.

 

Bibliography

Alkhoori, A., Kuhail, M.A., & Alkhoori, A. (2020). Unibud: A virtual academic adviser. In 2020 12th annual undergraduate research conference on applied computing (URC) (pp. 1–4).

Anghelescu, P., & Nicolaescu, S.V. (2018). Chatbot application using search engines and teaching methods. In 2018 10th international conference on electronics, computers and artificial intelligence (ECAI) (pp. 1–6).

Barocas, S., Hardt, M., & Narayanan, A. (2023). Fairness and machine learning: Limitations and opportunities. MIT Press.

Bidarian, N. (2023, August 21). Meet Khan Academy’s chatbot tutor. CNN Business. Retrieved from https://edition.cnn.com/2023/08/21/tech/khan-academy-ai-tutor/index.html

Bradeško, L., & Mladenić, D. (2012). A survey of chatbot systems through a loebner prize competition. In Proceedings of Slovenian language technologies society eighth conference of language technologies (pp. 34–37).

Cai, W., Grossman, J., Lin, Z., Sheng, H., Tian-Zheng, J., Wei, T., Williams, J.J., & Goel, S. (2020). MathBot: A Personalized Conversational Agent for Learning Math.

Capogna, S. (2022). Oltre l’habitus. Dialogo (a più voci) con P. Bourdieu tra destino e progetto.

Chen, X., Cheng, G., Zou, D., Zhong, B., & Xie, H. (2023). Artificial Intelligent Robots for Precision Education. Educ. Technol. Soc., 26, 171–186.

Clarizia, F., Colace, F., De Santo, M., Lombardi, M., Pascale, F., & Pietrosanto, A. (2018). E-learning and sentiment analysis: a case study. Proceedings of the 6th International Conference on Information and Education Technology.

Colchester, K., Hagras, H., Alghazzawi, D., & Aldabbagh, G. (2017). A Survey of Artificial Intelligence Techniques Employed for Adaptive Educational Systems within E-Learning Platforms. Journal of Artificial Intelligence and Soft Computing Research, 7, 47 – 64. https://doi.org/10.1515/jaiscr-2017-0004

Cunningham-Nelson, S., Boles, W., Trouton, L., & Margerison, E. (2019). A review of chatbots in education: practical steps forward. In 30th annual conference for the australasian association for engineering education (AAEE 2019): educators becoming agents of change: innovate, integrate, motivate (pp. 299–306).

Denny, P., Prather, J., Becker, B. A., Finnie-Ansley, J., Hellas, A., Leinonen, J., … & Sarsa, S. (2024). Computing education in the era of generative AI. Communications of the ACM, 67(2), 56-67.

European Parliament. (2024). Artificial Intelligence Act: European Parliament legislative resolution of 13 March 2024 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts (COM(2021)0206 – C9-0146/2021 – 2021/0106(COD))

Goda, Y., Yamada, M., Matsukawa, H., Hata, K., & Yasunami, S. (2014). Conversation with a chatbot before an online EFL group discussion and the effects on critical thinking. The Journal of Information and Systems in Education, 13(1), 1-7.

González Díaz, J.R. (2021). Método socrático: el diálogo y la educación en la universidad. Estudios: filosofía, historia, letras.

Gregorcic, B., & Pendrill, A. M. (2023). ChatGPT and the frustrated Socrates. Physics Education, 58(3), 035021.

Hsu, T.C., Huang, H.L., Hwang, G.J., & Chen, M.S. (2023). Effects of Incorporating an Expert Decision-making Mechanism into Chatbots on Students’ Achievement, Enjoyment, and Anxiety. Educ. Technol. Soc., 26, 218–231.

Hwang, G.-J., & Chang, C.-Y. (2021). A review of opportunities and challenges of chatbots in education. Interactive Learning Environments, 1–14.

Info, A., Kouam, F., William, A., Regis, M., & Misheal (2024). Exploring graduate students’ perception and adoption of AI chatbots in Zimbabwe: Balancing pedagogical innovation and development of higher-order cognitive skills. 1.

Ismail, M., & Ade-Ibijola, A. (2019, November). Lecturer’s apprentice: A chatbot for assisting novice programmers. In 2019 international multidisciplinary information technology and engineering conference (IMITEC) (pp. 1-8). IEEE.

Labadze, L., Grigolia, M., & Machaidze, L. (2023). Role of AI chatbots in education: systematic literature review. International Journal of Educational Technology in Higher Education, 20(1), 56. https://doi.org/10.1186/s41239-023-00426-1

Lamerichs, N. (2019). Characters of the Future. Machine Learning, Data, and Personality.

Lo Sapio, R.M., Mellone, M., & Coppola, C. (2022). La risoluzione di equazioni: tra rappresentazioni grafiche e linguaggio algebrico. Didattica della matematica. Dalla ricerca alle pratiche d’aula.

Meroto, M.B., Franqueira, A.D., De Queiróz, C.L., Dos Santos Filho, E.B., Da Costa, I.T., Cunha, P.R., Da Silva, R.G., & Lima, V.V. (2024). Combating school dropout with Artificial Intelligence in Brazilian higher education. Contribuciones a las Ciencias Sociales.

Novelli, C., Casolari, F., Rotolo, A., Taddeo, M., & Floridi, L. (2023). How to Evaluate the Risks of Artificial Intelligence: A Proportionality-Based, Risk Model for the AI Act. SSRN Electronic Journal.

Oh, K.-J., Lee, D., Ko, B., & Choi, H.-J. (2017). A chatbot for psychiatric counselling in mental healthcare service based on emotional dialogue analysis and sentence generation. In 2017 18th IEEE international conference on mobile data management (MDM) (pp. 371–375).

Okonkwo, C.W., & Ade-Ibijola, A. (2021). Chatbots applications in education: A systematic review. Computers and Education: Artificial Intelligence, 2, 100033.

Oxford English Dictionary, s.v. “chatbot (n.),” July 2023, https://doi.org/10.1093/OED/2981785869

Panigutti, C., Hamon, R., Hupont, I., Fernández Llorca, D., Fano Yela, D., Junklewitz, H., Scalzo, S., Mazzini, G., Sanchez, I., Soler Garrido, J., & Gómez, E. (2023). The role of explainable AI in the context of the AI Act. Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency.

Pant, H.V., Manoj, M.C., & Jeetendra, J.P. (2023). Thematic and Sentiment Analysis of Learners’ Feedback in MOOCs. Journal of Learning for Development.

Salisa, R.D., & Meiliasari, M. (2023). A literature review on dyscalculia: What dyscalculia is, its characteristics, and difficulties students face in mathematics class. Alifmatika: Jurnal Pendidikan dan Pembelajaran Matematika.

Singer, N. (2023, September 10). To test the A.I. learning hype, I visited classrooms. The New York Times. https://www.nytimes.com/2023/09/10/business/ai-learning-classrooms.html

Singh, V., & Ram, S. (2024). Impact of Artificial Intelligence on Teacher Education. Shodh Sari-An International Multidisciplinary Journal.

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Online manipulation: Hidden influences in a digital world. Geo. L. Tech. Rev., 4, 1.

Tapalova, O., & Zhiyenbayeva, N. (2022). Artificial Intelligence in Education: AIEd for Personalised Learning Pathways. Electron. J. e-Learn., 20, 639–653.

Thomas, H. (2020). Critical literature review on chatbots in education.

Vagnetti, R., Di Nuovo, A., Mazza, M., & Valenti, M. (2024). Social Robots: A Promising Tool to Support People with Autism. A Systematic Review of Recent Research and Critical Analysis from the Clinical Perspective. Review Journal of Autism and Developmental Disorders, 1-25.

Winkler, R., & Söllner, M. (2018, July). Unleashing the potential of chatbots in education: A state-of-the-art analysis. In Academy of Management Proceedings (Vol. 2018, No. 1, p. 15903). Briarcliff Manor, NY 10510: Academy of Management.


Xu, A., Liu, Z., Guo, Y., Sinha, V., & Akkiraju, R. (2017). A new chatbot for customer service on social media. In Proceedings of the 2017 CHI conference on human factors in computing systems (pp. 3506–3510).

____________________

Author:

Sergio Pappagallo
Link Campus University, Italy
M.A. in Philosophical Sciences
s.pappagallo@unilink.it
https://orcid.org/0009-0005-5960-636X
  https://www.webofscience.com/wos/author/record/KHW-9593-2024

 

Received: 13.04.2024. Accepted: 18.04.2024
© Sergio Pappagallo, 2024. This open access article is distributed under the terms of the Creative Commons Attribution Licence CC BY, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited:

Citation:
Pappagallo, S. (2024). Chatbots in Education: A Dual Perspective on Innovation and Ethics. Journal of Digital Pedagogy, 3(1) 3-10. Bucharest: Institute for Education. https://doi.org/10.61071/JDP.2420

____________________

 
