New competences for the teaching staff
In an era in which digital technologies are omnipresent, the skills needed to use them in professional, social, personal and cultural contexts are increasingly complex. The knowledge, abilities and attitudes necessary to cope with everyday tasks that involve digital technologies are structured in the DigComp framework (Vuorikari et al., 2022), promoted by the European Commission.
More than that, education professionals are called upon to master these competences and to guide their students towards developing digital skills. The corresponding set of competences is listed in the DigCompEdu framework (Punie et al., 2017).
DigComp is the European Digital Competence Framework for Citizens. It is a tool to improve the digital competence of European people by providing a common language and understanding of what digital competence is.
DigComp defines digital competence as "the confident, critical and responsible use of, and engagement with, digital technologies for learning, at work, and for participation in society". It identifies the key components of digital competence in five areas and 21 specific competences. The five areas are:
- Information and data literacy: To articulate information needs, to locate and retrieve digital data, information and content. To judge the relevance of the source and its content.
- Communication and collaboration: To interact, communicate and collaborate through digital technologies while being aware of cultural and generational diversity. To participate in society through public and private digital services and participatory citizenship. To manage one’s digital identity and reputation.
- Digital content creation: To create and edit digital content in different formats, to express oneself through digital means. To copyright and license one’s own and use others’ digital content with respect to intellectual property rights. To integrate and re-elaborate digital content and data according to one’s needs. To know how to apply digital tools for innovative problem-solving.
- Safety: To protect devices, content, personal data and privacy in digital environments. To protect physical and psychological health, and to be aware of digital technologies for social well-being and social inclusion. To be aware of the environmental impact of digital technologies and their use.
- Problem solving: To identify needs and problems, and to resolve conceptual problems and problem situations in digital environments. To use digital tools to innovate processes and products. To keep up-to-date with the digital evolution.
DigComp also describes eight proficiency levels, examples of knowledge, skills and attitudes, and use cases in education and employment contexts.
DigCompEdu is the European Framework for the Digital Competence of Educators. It is a tool to support educators in developing and assessing their digital competence by providing a common reference frame and a common language.
DigCompEdu is based on DigComp, but it adapts and extends it to the specific needs of educators. It defines digital competence as "the confident, critical and creative use of ICT to achieve goals related to work, employability, learning, leisure, inclusion and/or participation in society". It identifies the key components of digital competence for educators in six areas and 22 specific competences. The six areas are:
- Professional engagement: Using digital technologies for communication, collaboration and professional development.
- Digital resources: Sourcing, creating and sharing digital resources.
- Teaching and learning: Using digital technologies and strategies to enhance teaching and learning.
- Assessment: Using digital technologies and strategies to enhance assessment.
- Empowering learners: Using digital technologies and strategies to empower learners as digital citizens and creative thinkers.
- Facilitating learners’ digital competence: Using digital technologies and strategies to facilitate learners’ development of digital competence.
DigCompEdu also describes six stages or levels along which educators’ digital competence typically develops, from newcomer to leader.
DigComp and AI
The 2022 release of DigComp (version 2.2) includes references to new and emerging systems such as those driven by artificial intelligence, virtual and augmented reality, robotisation, and the Internet of Things.
For example, in area 5 [Problem solving], competence 5.3 [Creatively using digital technology], at the highly specialised proficiency level (7/8), an example of an attitudinal trait is: "Open to engage in collaborative processes to co-design and co-create new products and services based on AI systems to support and enhance citizens' participation in society." Competence 5.4 [Identifying digital competence gaps] also addresses important elements of today's world: "Has a disposition to keep learning, to educate oneself and stay informed about AI (e.g. to understand how AI algorithms work; to understand how automatic decision-making can be biased; to distinguish between realistic and unrealistic AI; and to understand the difference between Artificial Narrow Intelligence, i.e. today's AI capable of narrow tasks such as game playing, and Artificial General Intelligence, i.e. AI that surpasses human intelligence, which still remains science fiction)."
As most of these digital competences are developed and practised in educational institutions, across various domains, the main role and responsibility in designing appropriate educational situations remains with the teacher. The higher the level of education, the more diverse and advanced the AI skills required from both apprentice and mentor.
Rather than limiting itself to knowledge about AI, DigComp focuses on citizens' interaction with AI systems, grouped into five areas (see DigComp 2.2, p. 77: publications.jrc.ec.europa.eu/repository/handle/JRC128415). Below we provide some DigComp examples of the knowledge, skills and attitudes that we find most relevant at this early stage of introducing AI into the education process:
- What do AI systems do and what do they not do?
- Able to identify some examples of AI systems: product recommenders (e.g. on online shopping sites), voice recognition (e.g. by virtual assistants), image recognition (e.g. for detecting tumours in x-rays) and facial recognition (e.g. in surveillance systems). [5.2. - knowledge]
- Aware that AI systems collect and process multiple types of user data (e.g. personal data, behavioural data and contextual data) to create user profiles which are then used, for example, to predict what the user might want to see or do next (e.g. offer advertisements, recommendations, services). [2.6. - knowledge]
- Aware that AI systems can be used to automatically create digital content (e.g. texts, news, essays, tweets, music, images) using existing digital content as its source. Such content may be difficult to distinguish from human creations. [3.1. - knowledge]
- Aware that AI systems can help the user to edit and process digital content (e.g. some photo editing software uses AI to automatically age a face, while some text applications use AI to suggest words, sentences and paragraphs). [3.2. - knowledge]
- Aware that some AI systems can detect users’ moods, sentiments and emotions automatically from one’s online content and context (e.g. content posted on social media), but this application is not always accurate and can be controversial. [2.5. - knowledge]
- Aware that some AI systems have been designed to support teaching and training humans (e.g. to carry out tasks and assignments in education, at work or doing sports). [5.4. - knowledge]
- How do AI systems work?
- Aware that AI systems use statistics and algorithms to process (analyse) data and generate outcomes (e.g. predict what video the user might like to watch). [1.3. - knowledge]
- Aware that sensors used in many digital technologies and applications (e.g. facial tracking cameras, virtual assistants, wearable technologies, mobile phones, smart devices) automatically generate large amounts of data, including personal data, that can be used to train an AI system. [1.3. - knowledge]
- Knows that AI per se is neither good nor bad. What determines whether the outcomes of an AI system are positive or negative for society are how the AI system is designed and used, by whom and for what purposes. [2.3. - knowledge]
- Aware that what AI systems can do easily (e.g. identify patterns in huge amounts of data), humans are not able to do; while many things that humans can do easily (e.g. understand, decide what to do, and apply human values), AI systems are not able to do. [5.2. - knowledge]
- When interacting with AI systems
- Knows how to formulate search queries to achieve the desired output when interacting with conversational agents or smart speakers, e.g. recognising that, for the system to be able to respond as required, the query must be unambiguous and spoken clearly so that the system can respond. [1.1. - knowledge]
- Open to AI systems supporting humans to make informed decisions in accordance with their goals (e.g. users actively deciding whether to act upon a recommendation or not). [2.1. – attitude]
- Able to interact and give feedback to the AI system (e.g. by giving user ratings, likes, tags to online content) to influence what it next recommends (e.g. to get more recommendations on similar movies that the user previously liked). [2.1. - skill]
- Knows how to modify user configurations (e.g. in apps, software, digital platforms) to enable, prevent or moderate the AI system tracking, collecting or analysing data (e.g. not allowing the mobile phone to track the user’s location). [2.6. - knowledge]
- Knows how to incorporate AI edited/manipulated digital content in one’s own work (e.g. incorporate AI generated melodies in one’s own musical composition). This use of AI can be controversial as it raises questions about the role of AI in artworks, and for example, who should be credited. [3.2. - knowledge]
- The challenges and ethics of AI
- Aware that the data on which AI depends may include biases. If so, these biases can become automated and worsened by the use of AI. For example, search results about occupations may include stereotypes about male or female jobs (e.g. male bus drivers, female sales persons). [1.2. - knowledge]
- Knows that the term “deep-fakes” refers to AI-generated images, videos or audio recordings of events or persons that did not really happen (e.g. speeches by politicians, celebrity faces on pornographic imagery). They may be impossible to distinguish from the real thing. [1.2. - knowledge]
- Attitudes regarding human agency and control
- Open to AI systems supporting humans to make informed decisions in accordance with their goals (e.g. users actively deciding whether to act upon a recommendation or not). [2.1. – attitude]
- Recognises that while the application of AI systems in many domains is usually uncontroversial (e.g. AI that helps avert climate change), AI that directly interacts with humans and takes decisions about their life can often be controversial (e.g. CV-sorting software for recruitment procedures, scoring of exams that may determine access to education). [2.3. – skill]
- Willing to collaborate with AI projects for social good in order to create value for others (e.g. by sharing data so long as appropriate and robust controls are in place). [2.2. – attitude]
- Open to engage in collaborative processes to co-design and co-create new products and services based on AI systems to support and enhance citizens’ participation in society. [5.3. – attitude]
- Has a disposition to keep learning, to educate oneself and stay informed about AI (e.g. to understand how AI algorithms work; to understand how automatic decision-making can be biased; to distinguish between realistic and unrealistic AI; and to understand the difference between Artificial Narrow Intelligence, i.e. today’s AI capable of narrow tasks such as game playing, and Artificial General Intelligence, i.e. AI that surpasses human intelligence, which still remains science fiction). [5.4. – attitude]
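To make the knowledge element in 1.2 concrete, that biases present in data can become automated, the following is a minimal, purely illustrative Python sketch. The occupation records and labels are entirely made up; real AI systems are far more complex, but the mechanism is the same: a model trained on skewed data reproduces the skew.

```python
from collections import Counter

# Hypothetical training data reflecting a historical bias:
# occupation records paired with the gender most often recorded for them.
training_data = [
    ("bus driver", "male"), ("bus driver", "male"), ("bus driver", "male"),
    ("bus driver", "female"),
    ("sales person", "female"), ("sales person", "female"),
    ("sales person", "female"), ("sales person", "male"),
]

def train(records):
    """Learn, for each occupation, the most frequent gender label in the data."""
    counts = {}
    for occupation, gender in records:
        counts.setdefault(occupation, Counter())[gender] += 1
    return {occ: c.most_common(1)[0][0] for occ, c in counts.items()}

model = train(training_data)

# The "model" now automates the stereotype present in its training data:
print(model["bus driver"])    # male
print(model["sales person"])  # female
```

Nothing in the code is malicious; the bias enters entirely through the data, which is exactly the point the DigComp item makes.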
DigCompEdu and AI
Several competence elements included in the European Framework for the Digital Competence of Educators (DigCompEdu) are useful landmarks regarding the knowledge and skills necessary for the proper use of digital technologies in the education profession. In its "Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators", the European Commission presented some potential indicators to be included in future versions of DigCompEdu, covering the ethical use of AI across the six areas:
- Professional Engagement
- Is able to critically describe positive and negative impacts of AI and data use in education.
- Takes an active part in continuous professional learning on AI and learning analytics and their ethical use.
- Able to give examples of AI systems and describe their relevance.
- Knows how the ethical impact of AI systems is assessed in the school.
- Knows how to initiate and promote strategies across the school and its wider community that foster ethical and responsible use of AI and data.
Understanding the basics of AI and learning analytics
- Aware that AI algorithms work in ways that are usually not visible or easily understood by users.
- Able to interact and give feedback to the AI system to influence what it recommends next.
- Aware that sensors used in many digital technologies and applications generate large amounts of data, including personal data, that can be used to train an AI system.
- Aware of EU AI ethics guidelines and self-assessment instruments.
- Digital resources
Data governance
- Aware of the various forms of personal data used in education and training.
- Aware of responsibilities in maintaining data security and privacy.
- Knows that the processing of personal data is subject to national and EU regulation including GDPR.
- Knows that processing of personal data usually cannot be based on user consent in compulsory education.
- Knows who has access to student data, how access is monitored, and how long data are retained.
- Knows that all EU citizens have the right to not be subject to fully automated decision making.
- Able to give examples of sensitive data, including biometric data.
- Able to weigh the benefits and risks before allowing third parties to process personal data especially when using AI systems.
AI governance
- Knows that AI systems are subject to national and EU regulation (notably the AI Act, to be adopted).
- Able to explain the risk-based approach of the AI Act (to be adopted).
- Knows the high-risk AI use cases in education and the associated requirements under the AI Act (when adopted).
- Knows how to incorporate AI edited/manipulated digital content in one’s own work and how that work should be credited.
- Able to explain key principles of data quality in AI systems.
- Teaching and Learning
Models of learning
- Knows that AI systems implement designer’s understanding of what learning is and how learning can be measured; can explain key pedagogic assumptions that underpin a given digital learning system.
Objectives of education
- Knows how a given digital system addresses the different social objectives of education (qualification, socialisation, subjectification).
Human agency
- Able to consider the AI system impact on teacher autonomy, professional development, and educational innovation.
- Considers the sources of unacceptable bias in data-driven AI.
Fairness
- Considers risks related to emotional dependency and student self-image when using interactive AI systems and learning analytics.
Humanity
- Able to consider the impact of AI and data use on the student community.
- Confident in discussing the ethical aspects of AI, and how they influence the way technology is used.
Participates in the development of learning practices that use AI and data
- Can explain how ethical principles and values are considered and negotiated in co-design and co-creation of learning practices that use AI and data (linked to learning design).
- Assessment
Personal differences
- Aware that students react in different ways to automated feedback.
Algorithmic bias
- Considers the sources of unacceptable bias in AI systems and how it can be mitigated.
Cognitive focus
- Aware that AI systems assess student progress based on pre-defined domain specific models of knowledge.
- Aware that most AI systems do not assess collaboration, social competences, or creativity.
New ways to misuse technology
- Aware of common ways to manipulate AI-based assessment.
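One common way AI-based assessment can be manipulated is gaming the scoring heuristic rather than demonstrating understanding. The toy auto-grader below, a deliberately naive keyword matcher with made-up keywords, shows why: an answer stuffed with the right terms scores the same as a genuine explanation. Real assessment systems are more sophisticated, but the underlying vulnerability is the same.

```python
# Toy keyword-based auto-grader (illustrative only; keywords are made up).
KEYWORDS = {"photosynthesis", "chlorophyll", "sunlight", "glucose"}

def score(answer):
    """Fraction of expected keywords that appear in the answer."""
    words = set(answer.lower().split())
    return len(words & KEYWORDS) / len(KEYWORDS)

genuine = "Plants use sunlight and chlorophyll to turn CO2 into glucose via photosynthesis"
stuffed = "photosynthesis chlorophyll sunlight glucose"  # keyword stuffing, no understanding

print(score(genuine))  # 1.0
print(score(stuffed))  # 1.0 - the naive grader cannot tell them apart
```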
- Empowering learners
AI addressing learners’ diverse learning needs
- Knows the different ways personalised learning systems can adapt their behaviour (content, learning path, pedagogical approach).
- Able to explain how a given system can benefit all students, independent of their cognitive, cultural, economic, or physical differences.
- Aware that digital learning systems treat different student groups differently.
- Able to consider the impact on the development of student self-efficacy, self-image, mindset, and cognitive and affective self-regulation skills.
Justified choice
- Knows that AI and data use may benefit some learners more than others.
- Able to explain what evidence has been used to justify the deployment of a given AI system in the classroom.
- Recognises the need for constant monitoring of the outcomes of AI use and to learn from unexpected outcomes.
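To illustrate the first indicator above, the ways a personalised learning system can adapt its behaviour, here is a minimal sketch of one such mechanism: adapting the learning path by choosing the next exercise difficulty from recent scores. The thresholds and levels are invented for illustration; deployed systems use far richer learner models.

```python
# Illustrative only: a toy rule-based adapter, not any real product's algorithm.
# It adjusts the difficulty level of the next exercise from recent scores (0.0-1.0).

def next_difficulty(recent_scores, current_level, min_level=1, max_level=5):
    """Move the learner up, down, or keep the level, based on recent performance."""
    if not recent_scores:
        return current_level  # no evidence yet: keep the current level
    average = sum(recent_scores) / len(recent_scores)
    if average >= 0.8:                      # consistently strong: harder material
        return min(current_level + 1, max_level)
    if average < 0.5:                       # struggling: easier material
        return max(current_level - 1, min_level)
    return current_level                    # otherwise stay at this level

# A learner scoring well on level-2 exercises is moved to level 3:
print(next_difficulty([0.9, 0.85, 1.0], current_level=2))  # 3
```

Even in this toy form, the design choices an educator should question are visible: who set the thresholds, what evidence justifies them, and how learners near the boundaries are affected.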
- Facilitating learners’ digital competence
AI and Learning Analytics ethics
- Able to use AI projects and deployments to help students learn about ethics of AI and data use in education and training.