AI in education
Is it ethical for school or university students to use AI tools to help them with schoolwork?
Plagiarism is always unacceptable, and teachers are using more tools to identify whether students’ work has been copied from online or other sources.
Whether AI can play a role in students’ research, grammar checking, and other tasks is a judgement call for individuals and schools. One way of looking at the question is to say that using AI tools like ChatGPT, Bard and Bing for research is part of a 21st-century skill set that is beneficial to students.
However, students in this podcast from The Daily reach the conclusion that AI-generated answers hold them back from learning the very skills they wanted to acquire at university, especially critical thinking.
Clearly, students must comply with the rules of their institution or education system; if AI-generated text is banned in coursework, for example, they should respect that and explore AI in other contexts.
How can I integrate AI into my work in education, ethically and safely?
Check out the European Union’s guidelines, published in September 2022, to help school leaders and teachers decide how to use AI in schools and universities.
The guidelines identify the following components of safe AI:
- Human agency and oversight
- Transparency
- Diversity, non-discrimination, and fairness
- Societal and environmental wellbeing
- Privacy and data governance
What does AI mean for the education system as a whole?
Market intelligence firm HolonIQ concludes from a 2023 survey of the AI/edtech industry that “AI is expected to have most impact on Testing and Assessment, followed by Language Learning, Corporate Training/Upskilling and Higher Education.”
While there is enormous potential for AI to drive efficiency and quality improvements throughout the education sector, such tools require careful policy and planning to avoid some of the negative effects explored in this article.