Chapter 4 The new digital pedagogy, a field of opportunities and challenges
4.3.1. About intelligence and artificial intelligence
Access to artificial intelligence programs is easy, many of them being free. In education, they have opened up a range of possibilities, among which the most immediate relate to helping teachers create teaching materials and supporting learners to study independently. The difficult part is sometimes to understand the significance of the output, to put it in the right context, to differentiate it from human creations, and to use it for the right purposes; the key here is to grasp the essence of AI tools: what they are and how they work, what the ethical implications of using (or not using) AI are, and how and why AI is embedded into the tools we use every day.
Intelligence has different meanings, and with the emergence of artificial intelligence, new nuances have been added, in a tendency to separate "human intelligence" and to identify the specific traits that make a human "humane". In short, intelligence is defined as the (mental) capacity to learn from experience, to adapt to new situations, to understand and handle abstract concepts, and to use knowledge to manipulate one's environment.
At first glance, looking at each process and at the (cognitive) products or outputs of both human and artificial intelligence, we might say there is no difference between human intelligence and artificial intelligence. A closer look, however, reveals otherwise.
Human intelligence is the intellectual capability of humans, marked by complex cognitive features and high levels of motivation and self-awareness. The distinction is the baseline for explaining why and how we should employ AI tools in our professional and personal tasks, what to expect from AI, and how to properly interpret the outputs we get for various queries. A very important competence, mentioned in the European Union's Digital Competence Framework for Citizens (DigComp 2.2), refers to the awareness that what AI systems can do easily (e.g. identify patterns in huge amounts of data), humans are not able to do; while many things that humans can do easily (e.g. understand, decide what to do, and apply human values), AI systems are not able to do.
Of course, there are many more differences between AI and HI, and some scientists have proposed criteria such as origin, speed, decision-making, accuracy, adaptation, and energy use. The most often mentioned difference, however, is that AI lacks the creativity, intuition, adaptability, and emotional intelligence that humans display.
Human intelligence is traditionally measured through IQ tests, which typically cover working memory, verbal comprehension, processing speed, and perceptual reasoning.
The definition of an Artificial Intelligence system (AI system) proposed in the European Union's draft AI Act (2021) is "software that is developed with one or more of the techniques and approaches (listed below) and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with".
The AI techniques and approaches listed in the draft Act's Annex I are: machine learning approaches (including supervised, unsupervised, and reinforcement learning, using a variety of methods including deep learning); logic- and knowledge-based approaches (including knowledge representation, inference and deductive engines, and expert systems); and statistical approaches, Bayesian estimation, and search and optimisation methods.
Another well-known definition is provided by UNICEF: "AI refers to machine-based systems that can, given a set of human-defined objectives, make predictions, recommendations, or decisions that influence real or virtual environments. AI systems interact with us and act on our environment, either directly or indirectly. Often, they appear to operate autonomously, and can adapt their behaviour by learning about the context." (UNICEF, 2021)
ANI and AGI
A fundamental piece of information regarding AI is the distinction between Artificial Narrow Intelligence, i.e. today's AI, capable of narrow tasks such as game playing, and Artificial General Intelligence, i.e. AI that matches or surpasses human intelligence, which is still a hypothetical type of intelligent agent, remaining (for now) in the science fiction domain. Like humans, AGI would need self-awareness, consciousness, and the ability to learn and act through intuition and experience.
Artificial Narrow Intelligence (Narrow AI), also known as Weak AI, refers to a type of artificial intelligence that is goal-oriented, designed and trained for a specific, limited task or a narrow range of tasks. Unlike General Artificial Intelligence (AGI), which aims to replicate the broad cognitive abilities and adaptability of human intelligence, Narrow AI is specialized and focused on performing well-defined functions.
Key characteristics of Narrow AI include task specificity, strong performance within a well-defined scope, reliance on training data, and the inability to transfer what it has learned to unrelated domains.
Examples of Narrow AI applications include voice assistants, recommendation systems, spam filters, facial recognition, and machine translation.
Narrow AI has proven to be highly valuable in automating specific tasks, improving efficiency, and enhancing user experiences in various industries. However, it is important to distinguish between Narrow AI and AGI, as the latter represents a more ambitious and complex goal of creating AI systems with human-like general intelligence and adaptability across a wide range of tasks and contexts.
Artificial General Intelligence (AGI), also known as Strong AI or Full AI, refers to a form of artificial intelligence that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks and domains in a manner that is indistinguishable from human intelligence. Unlike Narrow AI, which is designed for specific, predefined tasks, AGI aims to replicate the broad and adaptable cognitive abilities of human beings.
Key characteristics of General Artificial Intelligence include broad reasoning across domains, the ability to transfer learning from one task to another, common-sense knowledge, and autonomous adaptation to unfamiliar situations.
Developing AGI is a complex and ambitious goal in the field of artificial intelligence. While there has been significant progress in creating AI systems that excel in specific tasks, achieving true AGI remains a long-term aspiration. Researchers continue to work on developing the necessary technologies, algorithms, and methodologies to bring us closer to the realization of General Artificial Intelligence.
The successful development of AGI would have profound implications for society, potentially revolutionizing industries, addressing complex global challenges, and posing important ethical questions regarding its governance and impact on humanity.
Nowadays, digital technology is being enriched with new possibilities brought by AI, and most of them concern all of us. Terms like ChatGPT, digital safety, diffusion, generative AI, emergent behaviour, AI hallucination, LLM, text-to-image and so on belong to a common language and denote a new reality in which we play a part.
The following glossary of terms has been put together to help circumscribe many fashionable buzzwords and to provide a structured overview of the spread and use of AI in educational settings.
AI ethics: Values, principles, and techniques to guide moral conduct in the development and use of AI, aimed at preventing AI from harming humans (e.g. determining how AI systems should collect data or deal with bias).
AI safety: An interdisciplinary field concerned with the long-term impacts of AI, especially the aspects regarding accidents, misuse, or other harmful consequences. It involves developing techniques and policies that ensure AI systems are reliable, trustworthy, and aligned with human values.
Algorithm: A process or a set of rules/ instructions that allows a computer program to solve problems and to analyse data in a particular way, such as recognizing patterns. In an educational context (EC, 2022), AI algorithms can uncover patterns in students' performance and can help teachers optimise their teaching strategies/ methodologies to personalise learning and improve outcomes.
Alignment: Adjusting an AI to better achieve the intended result (e.g. filtering content, ensuring friendly interactions with humans).
Anthropomorphism: Assuming a chatbot is more humanlike and conscious than it really is, e.g. thinking it is happy, sad, or even sentient.
Artificial intelligence (AI): The use of technology to mimic human intelligence, either in software or hardware. AI systems can learn, reason, understand, and interact with humans in natural ways. Some examples are chatbots, self-driving cars, and smart assistants.
Automation: A computer system performing a function that normally requires human involvement. A system that can perform tasks without needing continuous human supervision is described as autonomous. In education (EC, 2022), educational institutions and teachers can use software to perform many repetitive and time-consuming tasks like timetabling, attendance, and enrolment. Automating such tasks can allow teachers to spend less time on routine tasks and more time with their students.
Bias: Errors resulting from the training of an AI system, arising from biases in the training data, with effects such as falsely attributing certain characteristics to certain races or groups based on stereotypes. In educational contexts (EC, 2022), assumptions made by AI algorithms could amplify existing biases embedded in current education practices, i.e. bias pertaining to gender, race, culture, opportunity, or disability status. Bias can also arise through online learning and adaptation through interaction, or through personalisation, whereby users are presented with recommendations or information feeds tailored to their tastes.
Big data: The input and raw material that artificial intelligence uses to analyse and generate insights and decisions. Big data and AI have a synergistic relationship, as AI requires data to function and big data analytics leverages AI for better analysis. Big data can help AI to detect anomalies, predict future outcomes, and recognize patterns. In education (EC, 2022), through big data analysis, educators can potentially identify areas where students struggle or thrive, understand the individual needs of students, and develop strategies for personalised learning.
Chatbot: A computer program that simulates human conversation through voice commands or text chats or both, using AI to understand and respond to human inputs, often using natural language processing (NLP) and generative AI. Chatbots can be used for various purposes, such as customer service, entertainment, education, etc. In educational settings (EC, 2022), chatbots can be virtual advisors for learners and in the process adapt to their learning pace and so help personalise their learning. Their interactions with students can also help identify subjects with which they need help.
ChatGPT: An AI chatbot developed by OpenAI that uses large language model technology.
Cognitive computing: In short, this is another term for artificial intelligence. CC is a branch of computer science that uses artificial intelligence and signal processing to solve complex problems that involve dynamic, rich, and sometimes conflicting data.
Data augmentation: Remixing existing data or adding a more diverse set of data to train an AI.
Database: A computer file that stores a series of independent items, such as works, data or other materials, in an organized or logical way and allows them to be accessed individually by electronic or other means. In education (EC, 2022), school teaching staff and faculty board administration systems contain databases of student information, including personal profiles and learning attainment data. These are sometimes linked to timetabling, assessment, and learning management systems.
Deep learning: A method of AI, and a type of machine learning, that uses artificial neural networks to learn from large amounts of data. It uses multiple parameters to recognize complex patterns in pictures, sound, and text. It can make decisions and create new features based on unstructured, unlabelled data. It can be used in education (EC, 2022) to predict minute aspects of educational performance which can aid in the development of strategies for personalised learning.
Diffusion: A method of machine learning that takes an existing piece of data, like a photo, and adds random noise. Diffusion models train their networks to re-engineer or recover that photo.
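The forward ("noising") half of this process can be sketched in a few lines of Python; the function and variable names here are illustrative, not taken from any particular library:

```python
import random

def add_noise(pixels, noise_level, seed=0):
    """Simplified forward diffusion step: corrupt each pixel value
    with Gaussian noise scaled by noise_level."""
    rng = random.Random(seed)
    return [p + rng.gauss(0, noise_level) for p in pixels]

# A tiny "image" as a flat list of grayscale values.
image = [0.1, 0.5, 0.9, 0.3]
noisy = add_noise(image, noise_level=0.2)
# A diffusion model is trained on the reverse problem:
# predicting the original values from the noisy ones.
```

Training repeats this corruption at many noise levels; generating a new image then amounts to running the learned reversal starting from pure noise.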
Emergent behaviour: When an AI model shows capabilities that were not planned.
End-to-end learning (E2E): A deep learning process in which a model is instructed to perform a task from start to finish. It's not trained to accomplish a task sequentially but instead learns from the inputs and solves it all at once.
GenAI (Generative Artificial Intelligence): A type of machine learning that uses algorithms to create new data from existing information. It is used for creating images or text, predicting outcomes, and recommending products. Many of the most visible AI applications nowadays are generative.
General artificial intelligence (general AI/ AGI): A type of artificial intelligence that can learn to accomplish any intellectual task that human beings or animals can perform. General AI emulates the human mind and behaviour to solve any kind of complex problem. General AI is sometimes called strong AI, as opposed to weak AI, which is limited to a single task or a narrow range of tasks.
Generative adversarial networks (GAN): A generative AI model composed of two neural networks to generate new data: a generator and a discriminator. The generator creates new content, and the discriminator checks to see if it's authentic.
Generative AI (GenAI): A content-generating technology typically uses AI/ deep learning algorithms, such as generative adversarial networks (GANs) to create text, video, computer code or images. The AI is fed large amounts of training data, finds patterns to generate its own novel responses, which can sometimes be similar to the source material.
Guardrails: Strategies, mechanisms, and policies designed to ensure the ethical and responsible use of AI technologies. They serve as safeguards to prevent misuse, bias, and unethical practices of AI systems, and to protect user privacy, promote transparency and fairness, and respect the rights of individuals. Guardrails are especially important for generative AI models, which can create new data or content based on a given input. Guardrails can help to define the boundaries within which generative AI models may operate, and to enforce technology and security controls for all interactions. In short, they ensure that the model doesn't create disturbing content.
Hallucination: A confident response by an AI that does not seem to be justified by its training data. This can happen when the training data is insufficient, biased, or too specialized, or when the AI model is "overzealous" with its storytelling. Hallucination can lead to nonsensical or false outputs that do not match the real-world input and can affect user trust and model accuracy. It can be mitigated by using high-quality data, ensuring model transparency, and enacting effective quality control.
Large language model (LLM): An AI model trained on mass amounts of text data to understand language and generate novel content in human-like language.
Learning analytics: The process of measuring, collecting, analysing and reporting data about learners and their settings, to understand and improve learning and the conditions that enable it. In educational settings (EC, 2022), learning management systems record data on student interaction with course materials, their interaction with teachers and other peers, and how they perform on digital assessments. Educational institutions can use analysis of this data to monitor student performance, predict overall performance and facilitate the provision of support through personalized feedback to each student.
Machine learning (ML): A component of AI that allows computers to learn and accomplish tasks on their own (e.g. make better predictions without explicit programming). In education (EC, 2022), ML enables a form of personalised learning that gives each student an individualised educational experience. Learners are guided through their own learning, can follow the pace they want, and make their own decisions about what to learn based on system prompts.
Multimodal AI: A type of AI that can process multiple types of inputs, including text, images, videos and speech.
Natural language processing (NLP): A branch of AI that enables computers or machines to understand, generate, manipulate, and interact with human language in text or voice form. NLP combines different techniques, such as rule-based modelling and statistical, machine learning, and deep learning models, to analyse the meaning, intent, and sentiment of language data. NLP has various applications, such as text generation, chatbots, text-to-image, spell-check, text translation, and topic classification. NLP developers need to understand the structure and rules of language before building intelligent systems. In education (EC, 2022), a virtual tutoring system can use speech recognition to identify problems in a student's reading ability and can provide real-time, automatic feedback on how to improve, as well as helping to match the student with the reading material best suited to them.
Narrow artificial intelligence (narrow AI/ ANI): A type of artificial intelligence that is designed to perform a single task or a limited range of tasks, and any knowledge gained from performing that task will not automatically be applied to other tasks. Examples of narrow AI include weather forecasting, data analysis, facial recognition, or playing games. Narrow AI is sometimes called weak AI, as opposed to strong AI, which is capable of handling a wide range of tasks and simulating human intelligence. Most AI systems today are weak AI.
Neural network: A computational model that resembles the human brain's structure and is meant to recognize patterns in data. It consists of interconnected nodes, or neurons, that can recognize patterns and learn over time. In an educational context (EC, 2022), a neural network can be trained to learn a new skill or ability by using the repetition method of learning.
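As a minimal illustration, a single artificial neuron can be written in plain Python; the inputs, weights, and bias below are invented for the example (in practice they are learned during training):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    passed through a sigmoid activation that squashes the result into (0, 1)."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Example: two inputs with hand-picked weights.
output = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
```

A neural network is many such neurons wired in layers, with training algorithms adjusting the weights so that the whole network's outputs match the desired patterns.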
Overfitting: An error in machine learning where a model fits its training data too closely and can identify only specific examples from that data, but not new data.
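A deliberately extreme illustration of the idea, using an invented toy "model" that memorizes its training examples instead of learning a general rule:

```python
# Labelled training examples: (feature_1, feature_2) -> label.
training_data = {(1.0, 2.0): "pass", (3.0, 1.0): "fail"}

def memorizing_model(features):
    """Pure memorization: perfect on the training set,
    useless on any input it has not seen before."""
    return training_data.get(features, "unknown")

on_seen = memorizing_model((1.0, 2.0))    # a training example
on_unseen = memorizing_model((1.1, 2.0))  # a slightly different, new input
```

Real overfitting is less absolute than this, but the failure mode is the same: the model tracks the training data so closely that it cannot generalize.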
Parameters: Numerical values that give an LLM its structure and behaviour, enabling it to make predictions.
Predictive analytics: A type of AI software that uses machine learning to predict outcomes using historical data. Predictive analytics models can find patterns, observe trends, and use that information to predict future trends. It can help businesses improve forecasting, optimize processes, and enhance customer experience. In education area (EC, 2022), predictive analytics can provide insight into which students require additional support, not only based on their current and historical performance, but their predicted future performance.
Prompt chaining: The ability of an AI to use information from previous interactions to inform future responses. The technique is used in conversational AI to create more dynamic and contextually aware chatbots. It is the process of using previous interactions with an AI model to create new, more finely tuned responses, specifically in prompt-driven language modelling. Prompt chaining improves the accuracy and relevance of generated content by optimizing each step to perform a specific task and using the output of one as input for the next.
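A minimal sketch of the chaining pattern in Python, where the output of one step becomes the input for the next; the two functions are invented stand-ins for what would be calls to a language model in a real system:

```python
def summarize(text):
    """Stand-in for an AI task: condense a passage (here, keep the first 5 words)."""
    return " ".join(text.split()[:5]) + "..."

def make_quiz_question(summary):
    """Stand-in for a second AI task that consumes the first task's output."""
    return f"Quiz: explain the claim '{summary}'"

# Chaining: the output of one step feeds the next.
lesson = "Photosynthesis converts light energy into chemical energy in plants"
question = make_quiz_question(summarize(lesson))
```

Each step can be tuned and tested in isolation, which is what makes chained prompts easier to control than one large, monolithic prompt.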
Stochastic parrot: An analogy for LLMs that illustrates that the software is good at generating convincing language, but does not actually understand the meaning of the language it is processing. The term "parrot" refers to the repetition of learned items, while "stochastic" refers to the randomization that can lead to potential hallucinations. Stochastic parrots can have serious consequences for AI development and deployment, as well as for users who rely on these technologies for important tasks.
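The "parrot" behaviour can be illustrated with a toy bigram model that repeats learned word pairs, chosen at random, with no grasp of their meaning; the corpus and function names are invented for the example:

```python
import random

corpus = "the cat sat on the mat the cat ran on the grass".split()

# Build a bigram table: which words have followed each word in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def parrot(start, length, seed=0):
    """Generate plausible-looking text by stochastically repeating
    learned word pairs, without understanding any of it."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

sentence = parrot("the", 6)
```

LLMs work on a vastly larger scale with far richer statistics, but the criticism captured by the analogy is the same: fluent output is not evidence of comprehension.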
Style transfer: The ability to adapt the style of one image to the content of another, allowing an AI to interpret the visual attributes of one image and apply them to another. For example, taking a self-portrait by Rembrandt and re-creating it in the style of Picasso.
Supervised learning: A kind of machine learning where an algorithm is trained and developed using structured datasets, which have inputs and labels. In education (EC, 2022), supervised learning systems are defined by their use of labelled datasets to train algorithms to classify data or predict outcomes accurately. They can help teachers identify at-risk students and target interventions. They can also improve the efficiency of teaching, assessments, and grading by helping to personalise learning.
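As a minimal sketch of the supervised paradigm, a nearest-neighbour classifier labels a new example by comparing it with labelled training data; the feature names, values, and labels below are invented for illustration:

```python
# Labelled training data: (hours_studied, attendance_rate) -> outcome.
labelled = [
    ((1.0, 0.40), "at-risk"),
    ((2.0, 0.50), "at-risk"),
    ((8.0, 0.90), "on-track"),
    ((9.0, 0.95), "on-track"),
]

def predict(features):
    """1-nearest-neighbour: label a new student by the most similar
    labelled example, using squared Euclidean distance."""
    def dist(point):
        return sum((x - y) ** 2 for x, y in zip(point, features))
    return min(labelled, key=lambda pair: dist(pair[0]))[1]

prediction = predict((8.5, 0.92))
```

The defining feature of supervised learning is visible here: the labels ("at-risk", "on-track") are supplied by humans, and the algorithm's job is to map new inputs onto them.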
Temperature: A parameter that controls how random a language model's output is. A higher temperature means the model takes more risks.
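The effect can be illustrated with the standard softmax-with-temperature formula in plain Python; the raw scores below are invented:

```python
import math

def softmax_with_temperature(scores, temperature):
    """Convert raw model scores into probabilities; a higher temperature
    flattens the distribution, making unlikely words easier to pick."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]   # illustrative raw scores for three candidate words
cautious = softmax_with_temperature(scores, 0.5)
risky = softmax_with_temperature(scores, 2.0)
# At low temperature the top candidate dominates;
# at high temperature the probabilities spread out.
```

This is why low temperatures suit factual tasks and high temperatures suit creative ones: the same model scores, differently sharpened, lead to more predictable or more surprising word choices.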
Text-to-speech: The generation of synthesised speech from text. The technology is used to communicate with users when reading a screen is either not possible or inconvenient. In the education process (EC, 2022), text-to-speech technology allows learners to focus on the content rather than on the mechanics of reading, resulting in a better understanding of the material, better retention, and increased confidence and motivation.
Text-to-image generation: Creating images based on textual descriptions.
Training data: The datasets used to help AI models learn, including text, images, code, or data. Machine learning algorithms find relationships, develop understanding and make decisions from the training data they are given. In an educational context (EC, 2022) this data can be used to make learning more efficient, adaptable, and personalised by providing detailed analytics of past and predicted future achievement.
Transformer model: A neural network architecture and deep learning model that learns context and meaning by tracking relationships in data, like in sentences or parts of images. So, instead of analysing a sentence one word at a time, it can look at the whole sentence and understand the context.
Turing test: Named after famed mathematician and computer scientist Alan Turing, it assesses a machine's ability to behave like a human. In the Turing test, a human evaluator engages in a text-based conversation with a machine and a human, both of which are hidden from view. The evaluator's task is to determine which of the two, the machine or the human, is responsible for each response in the conversation. If the evaluator is unable to reliably distinguish between the machine and the human based on the responses, then the machine is said to have passed the Turing test and demonstrated a level of artificial intelligence that simulates human-like conversation and understanding. It's important to note that the Turing test is not a definitive measure of a machine's overall intelligence, as it primarily focuses on linguistic and conversational abilities.
Unsupervised learning: A form of training where an algorithm is programmed to make inferences from datasets that don't contain labels. These inferences are what help it learn. In education (EC, 2022), unsupervised learning is conducted to discover hidden and interesting patterns in unlabelled data. These patterns are valuable for predicting students' performance by analysing a range of contextual information, like demographics, and how these relate to overall attainment.
Virtual personal assistant: An application that can understand natural language voice commands and do tasks for the user such as dictating, reading text or email messages out loud, scheduling, making calls and setting reminders. In education (EC, 2022), VPAs can enable interaction with technology using voice only thus saving time by providing instant access to information. Students can access class schedules, information and resources and communicate with teachers and peers. VPAs are also used by teachers to prepare lessons, set assignments, and provide feedback.
Zero-shot learning: A test in which a model must complete a task without being given the requisite training data. An example would be recognizing a lion while only being trained on tigers.
"Knowing that it is essentially a synthesis and knowing that AI is equidistant, one may be tempted to consider it an objective content, or rather an objective perspective on the field; but the result generated by AI only reflects a collective subjectivity, a current trend in the field, with its hesitations, biases, and shortcomings. Basically, at this stage in the development of artificial intelligence, we are only looking in a mirror; therefore we should not necessarily seek novel answers and solutions, but we should rather seek to better understand ourselves, as individual contributors to a scientific domain and as a collective." (Istrate, O., Velea, S. & Ștefănescu, D., 2022)
Guidelines for online and blended learning
Available online: https://digital-pedagogy.eu/Guidelines
Full pdf version to download: Guidelines (version 6)
The Romanian partner in D-ChallengHE project in charge with WP5 is
the Institute for Education (Bucharest): https://iEdu.ro
Contact: office@iEdu.ro