
 

 

Digital Challenges in Higher Education
Guidelines
for online and blended learning

Premises for academic curriculum digitalisation

 

 


 

Chapter 4   The new digital pedagogy, a field of opportunities and challenges
                 4.3.6.   The ethical issues of using AI in education

 

Feedback form for this chapter: https://forms.gle/tNyWC1HYMsP46t6WA
 

4.3.6.   The ethical issues of using AI in education

Plagiarism. Can content generated by AI applications be detected?

Obviously, presenting texts, images or films made with specialised content-creation software without acknowledging the contribution of AI constitutes a moral problem. Cases where pupils and students complete essays, practical projects and scientific articles with the help of AI are common and almost impossible to detect. Paraphrasing software built ”to avoid plagiarism” (sic!) is an old tool, but with the new technology fraud techniques are evolving rapidly, even changing the meaning of the term fraud; apps such as gocopy.ai, copyshark.ai, instatext.io or quillbot.com are now marketed as ”writing assistants”.

For evaluating academic papers and detecting fraud, AI-based tools have emerged (e.g. Plag.ai or Oxsico), as well as software specialised in detecting AI-generated plagiarism.

All over the world, schools and universities have begun to take measures: some outright ban the use of AI, while others invite teachers to explore with students and pupils the potential of this new tool for intellectual work, productivity and human creativity.

It seems certain that we need to rethink what we do, how we build knowledge and how we develop (our) skills.

In short, the answer is that, for the moment, the partial or exclusive contribution of artificial intelligence to the development of a text, or of any other product in digital format, cannot be reliably detected.

No one can say with certainty whether a text was constructed by an AI program, except in particular cases where the generated product has specific flaws (e.g., an invented bibliography or incorrect factual data); even then, a brief review of the text can remove any such error. AI plagiarism detection apps provide only a likelihood score, so their output cannot be used to accuse someone of fraud. And the truth is that many texts constructed with artificial intelligence software resemble many texts produced by human beings (or vice versa).

Texts generated with AI have some recognisable characteristics. With some practice, one can judge whether an essay sent by a student could be ”fake”, without help from AI-specialised anti-plagiarism software and with a comparable degree of accuracy. The signs to look for are:

These signs are not definitive, and some AI-generated text can be very realistic and convincing. However, they can help you to be more critical and cautious when reading online content and when assessing students’ work, and they can provide a valid second opinion on the results of automatic AI plagiarism detectors.
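The caution above about likelihood scores can be made concrete with a back-of-the-envelope Bayes calculation. All the rates below (the share of AI-generated submissions, the detector's true and false positive rates) are illustrative assumptions, not measured properties of any real detector:

```python
# Bayes' theorem applied to an AI-text detector flag.
# All numbers are illustrative assumptions for the sake of the example.

def posterior_ai(base_rate: float, tpr: float, fpr: float) -> float:
    """P(text is AI-generated | detector flags it)."""
    p_flagged = tpr * base_rate + fpr * (1 - base_rate)
    return (tpr * base_rate) / p_flagged

# Assume 10% of submissions are AI-generated, and a detector that
# catches 90% of AI texts (TPR) but also flags 5% of human texts (FPR).
p = posterior_ai(base_rate=0.10, tpr=0.90, fpr=0.05)
print(f"P(AI-generated | flagged) = {p:.2f}")  # ≈ 0.67
```

Even with these rather optimistic detector rates, a flagged essay has only about a two-in-three chance of actually being AI-generated: far from the standard of proof needed to accuse a student of fraud.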

Automatic tools to identify AI-generated textual content with a reasonable degree of accuracy:

Other instruments to detect AI-based plagiarism:

These tools are not perfect and may have limitations or make errors. It is therefore advisable to use them with caution and to cross-check their results against your own judgement and other plagiarism checkers. It is important to develop your own critical thinking and digital literacy skills in order to evaluate the credibility and quality of the textual content you encounter online or in your activities with students.

About the ethics of AI

Being a disruptive and rapidly growing technology, AI also raises important ethical questions that must be addressed, and not only in fields such as healthcare, transport systems and scientific research. Some argue that it also poses significant risks, and even existential threats, to humanity.

The ethics of AI is concerned with identifying and addressing these risks and ensuring that the development and deployment of AI is done in a responsible and ethical manner. This involves considering a wide range of issues, such as algorithmic bias, transparency and accountability, data privacy and security, and the impact of AI on society as a whole.

In recent years, there has been growing interest in the ethics of AI from both academia and industry. Many organizations have developed ethical guidelines for the development and deployment of AI systems, and there is an increasing recognition of the need for interdisciplinary collaboration to address these complex issues.

This chapter aims to provide an introduction to the ethics of AI in the field of education. It explores the key ethical issues raised by AI, examines existing ethical frameworks for addressing them, and proposes practical approaches for ensuring that AI is developed and deployed in a responsible and ethical manner. The content is based on the European Commission’s ethical guidelines on AI and data usage in teaching and learning (issued in October 2022), designed to help educators understand the potential that AI applications and data usage can have in education, and to raise awareness of the possible risks, so that they are able to engage positively, critically and ethically with AI systems and exploit their full potential.

Risks and misconceptions in using AI in education

The fast pace of technological innovation creates many dangers and difficulties that have not yet been adequately addressed by policy discussions and regulations. The main AI risks in education include:

These examples of the risks associated with AI in education remind us how important it is to ensure that AI in education is developed and deployed in a responsible and ethical manner. On the other hand, there are also several ”fake risks”, generated by a series of misconceptions about AI (extracted from: European Commission, 2022):

AI is too difficult to understand

Many people who don’t have a computer science background are put off by jargon associated with AI and data systems. Even those who do have the relevant background can struggle to fully understand how AI works, as it is a broad and complex domain. This is sometimes referred to as the ‘black box’ problem as it is difficult to understand the AI system’s inner workings. Artificial Intelligence is not a specific thing but a collection of methods and techniques to build an AI system. Rather than trying to understand the full functionality of AI systems, it is more important that educators are aware of the basic mechanisms and limitations of AI systems and how AI systems can be used to support teaching and learning in a safe and ethical way. These guidelines are designed to provide some basic questions one should ask when considering the use of an AI system and provide easy to understand use scenarios from education as well as a glossary to help with the terminology that is used to describe these systems and what they do.

AI has no role in education

AI is already changing how we learn, work and live and education is being impacted by this development. Everyone should be able to contribute to the development of AI and also benefit from it. By making ethical principles a key focus of the conversation about the role of AI in education, we can open the way for AI systems and solutions to be developed and used in an ethical, trustworthy, fair and inclusive way.

AI is not inclusive

AI can result in new forms of inequalities or discrimination and exacerbate existing ones. However, if properly designed and used, it can also offer opportunities to improve access and inclusion - in everyday life, in work, and in education. There is also significant potential for AI to provide educational resources for young people with disabilities and special needs. For example, AI-based solutions such as real-time live captioning can assist those with impaired hearing, while audio description can make access easier and more effective for people with low levels of vision.

AI systems can’t be trusted

As AI systems become more powerful, they will increasingly supplement or replace specific tasks performed by people. This could raise ethical and trust issues regarding the ability to make fair decisions using AI, as well as protecting the data collected and used to support those decisions. The complexity of the legal area can be a real challenge for educators. However, the proposed EU AI Act will help to ensure that certain AI systems classified as “high-risk” (in view of the risks that they may pose to the health, safety and fundamental rights of individuals) are developed by providers according to mandatory requirements to mitigate such risks and ensure their reliability.

Education authorities and schools should therefore be able to verify that AI systems comply with the AI regulatory framework and focus on the ethical use of AI and data to support educators and learners in teaching, learning and assessment, while also adhering to the applicable data protection regulations.

AI will undermine the role of the teacher

Many teachers fear that as the use and impact of Artificial Intelligence in education broadens in the future, these systems will diminish their role or even replace them. Rather than replacing teachers, AI can support their work, enabling them to design learning experiences that empower learners to be creative, to think, to solve real-world problems, to collaborate effectively, and provide learning experiences that AI systems on their own cannot do. Moreover, AI can automate repetitive administrative tasks allowing more time to be dedicated to the learning environment. In this way the role of the teacher is likely to be augmented and evolve with the capabilities that new innovations for AI in education will bring. However, this requires diligent governance of the development and use of AI applications and focus on sustaining teacher agency.

Ethical considerations and requirements

Nowadays, educational institutions and teaching professionals should carefully reflect about the implications of employing any new digital technology in their activities. Some guiding elements can help us understand if and to what extent the AI system is trustworthy. The questions below are proposed by the European Commission (2022, pp. 19-21) based on the main ethical standards for AI systems regarding practical aspects and/or ethics, structured on four values: human agency, fairness, humanity, and justified choice.

Human agency relates to an individual’s capability to become a competent member of society. A person with agency can determine their life choices and be responsible for their actions. Agency underpins widely used concepts such as autonomy, self-determination, and responsibility.

Fairness relates to everyone being treated fairly in the social organisation. Clear processes are required so that all users have equal access to opportunity. These include equity, inclusion, non-discrimination, and fair distribution of rights and responsibilities.

Humanity addresses consideration for the people, their identity, integrity, and dignity. We need to consider the well-being, safety, social cohesion, meaningful contact, and respect that is necessary for a meaningful human connection. That connection implies, for example, that we approach people with respect of their intrinsic value and not as a data object or a means-to-an-end. It is at the essence of the human-centric approach to AI.

Justified choice relates to the use of knowledge, facts, and data to justify necessary or appropriate collective choices by multiple stakeholders in the school environment. It requires transparency and is based on participatory and collaborative models of decision-making as well as explainability.

The guiding questions for educators rely on these four key considerations.

(1) Human Agency and Oversight

(2) Transparency

(3) Diversity, non-Discrimination and Fairness

(4) Societal and Environmental Wellbeing

(5) Privacy and Data Governance

(6) Technical Robustness and Safety

(7) Accountability

Guidance for teaching staff and for governing board

The following use cases were adapted from the European Commission’s Guidelines, in order to help lay the basis for a thorough examination of the implementation of artificial intelligence in an educational institution.

  1. Using adaptive learning technologies to adapt to each learner’s ability

A faculty is using an Intelligent Tutoring System to automatically direct learners to resources specific to their learning needs. The AI based system uses learner data to adapt problems to the learner’s predicted knowledge level. As well as providing constant feedback to the learner, the system provides real-time information on their progress on a teacher dashboard.

The following guiding questions highlight areas that require attention:
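The adaptive mechanism this scenario describes (predicting a learner's knowledge level from their answers) can be sketched as a simplified Bayesian Knowledge Tracing update, a common technique in intelligent tutoring systems. The parameter values below (slip, guess and learning rates, initial mastery) are illustrative assumptions, not those of any particular product:

```python
# Minimal sketch of Bayesian Knowledge Tracing (BKT): the system keeps a
# probability that the learner has mastered a skill and updates it after
# each answer. Parameter values are illustrative assumptions.

def bkt_update(p_know: float, correct: bool,
               slip: float = 0.1, guess: float = 0.2,
               learn: float = 0.15) -> float:
    """Update the estimated mastery probability after one observed answer."""
    if correct:
        evidence = p_know * (1 - slip)            # knew it and didn't slip
        total = evidence + (1 - p_know) * guess   # ...or guessed correctly
    else:
        evidence = p_know * slip                  # knew it but slipped
        total = evidence + (1 - p_know) * (1 - guess)
    p_post = evidence / total
    # Account for learning at this practice opportunity.
    return p_post + (1 - p_post) * learn

p = 0.3  # assumed initial mastery estimate
for answer in (True, True, False, True):
    p = bkt_update(p, answer)
print(f"estimated mastery: {p:.2f}")  # ≈ 0.92
```

A dashboard like the one described would surface this estimate to the teacher, while the problem selector uses it to choose material matched to the learner's predicted level.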

  2. Using student dashboards to guide learners through their learning

A faculty is considering the use of a personalised online student dashboard which will provide feedback to learners and support the development of their self-regulation skills. Instead of focusing on what the learner has learned, the visualisations provide the student with a view of how they are learning.

The following guiding questions highlight areas that require attention:

  3. Providing individualised interventions for special needs

A faculty is considering how AI systems can help reduce barriers for students with special educational needs. The faculty is currently trialling an AI system to detect student support demands early on and provide tailored instructional support. By detecting patterns of corresponding characteristics in measures such as learning performance, standardised tests, attention span or reading speed, the system suggests probabilities of specific diagnoses and related recommendations for interventions.

The following guiding questions highlight areas that require attention:

  4. Scoring essays using automated tools

A faculty is looking at how AI systems can support the assessment of student written assignments. A provider has recommended an automated essay scoring system which uses large natural language models to assess various aspects of text with high accuracy. The system can be used to check student assignments, automatically identify errors, and assign grades. The system can also be used to generate sample essays. Over time, the system can train large artificial neural networks with historical cases that contain various types of student mistakes to provide even more accurate grading. The system has a plagiarism detection option which can be used to automatically detect instances of plagiarism or copyright infringement in written work submitted by students.

The following guiding questions highlight areas that require attention:

  5. Managing student enrolment and resource planning

A faculty uses the data collected when students enrol to predict and better organise the number of students who will attend in the coming year. The AI system is also used to assist with forward planning, resource allocation, class allocations and budgeting. This has enabled the faculty to consider more student attributes than before, for example, to increase gender parity and student diversity. The faculty is now considering using prior grades and other metrics like standardised tests to develop targets for their students to achieve and to support professors to predict student success on a per subject basis.

The following guiding questions highlight areas that require attention:

  6. Using chatbots to guide learners and parents through administrative tasks

A faculty uses a chatbot virtual assistant on its website to guide learners through administrative tasks such as enrolment for courses, paying course fees or logging technical support issues. The system is also used to help students to find learning opportunities, provide feedback on pronunciation or comprehension. The virtual assistant is also used to support students with special educational needs through administrative tasks.

The following guiding questions highlight areas that require attention:

Two benchmarks are commonly used to assess the quality of an educator's work: students’ performance and their participation in the proposed learning activities. Performance (or improvement) is largely interdependent with participation: a good performer is more likely to be actively involved in activities, and a student who is attentive and active is expected to learn better. Students' participation in learning activities is not achieved by forcing them to pay attention, or by waiting for them to grasp the ultimate goal of learning and the potential value of the information we present (although these approaches also have their role at higher levels of schooling). Exclusive reliance on such tactics in education is counterproductive, leading to the association of school learning with monotony, boredom, frustration and anxiety (Macklem, 2015).

When using AI tools (and digital technologies in general), the challenge for teachers is to prepare engaging activities in order to attract students into learning. AI tools complement and enrich educational situations, placing themselves among the external conditions of learning. In digital environments, class-group dynamics may differ from how students respond and interact in conventional contexts. Group structure and classroom relationships change in part according to each student's digital competence: shy students, or students who are not "rated" as being the best, can often find in technology-based learning activities new benchmarks and new ways of expression that are more familiar to them and that facilitate their academic progress. Teachers can therefore view AI-enabled teaching activities as opportunities to stimulate the progress of certain learners, to bring (some of) them closer to the knowledge domain, and to strengthen class cohesion and adherence to the proposed learning paths. A better design of educational situations, using a variety of instructional methods and strategies and including the tools and resources available at the moment, is the premise for better didactic activities, greater student participation and higher academic achievement.

 

 

» Provide feedback for this chapter: https://forms.gle/tNyWC1HYMsP46t6WA
« Get back to main page: digital-pedagogy.eu/Guidelines

An open access guide. A perfectible product, for an evolving reality.

You can use the form available on this page to provide feedback and/or suggestions.
For social annotations on any chapter, you can use Hypothesis or any other similar tool.
You can also send direct feedback to: olimpius.istrate@iEdu.ro | +40 722 458 000

 

Guidelines for online and blended learning
Available online: https://digital-pedagogy.eu/Guidelines
Full pdf version to download: Guidelines (version 6)

The Romanian partner in the D-ChallengHE project, in charge of WP5, is
the Institute for Education (Bucharest): https://iEdu.ro
Contact: office@iEdu.ro

 

 

 

 

 


 

 
