The ETHICAL USE OF ARTIFICIAL INTELLIGENCE IN EDUCATION: PROBLEMS AND PROPOSED SOLUTIONS

I think it would be fair to say that artificial intelligence has actually been part of our lives for a long time. But if we ask when it truly began to make a difference, the answer is surely the day ChatGPT was first made publicly available. Since then, we wake up with AI and go to sleep with AI. Everyone, knowingly or not, talks about artificial intelligence, and suddenly we are all AI experts! Most importantly, the technology has developed rapidly since that day and has brought many innovations into our lives. One of the areas it has reached is education. Although studies on its use in education are being conducted, I unfortunately find them insufficient. Furthermore, what I see around me, as in many sectors, is that "artificial intelligence" is being bolted onto everything, and even applications that do not actually use AI are being marketed as if they do, which bothers me greatly.

At a time when AI technology is developing so rapidly and becoming part of our lives, I believe the ethical use of artificial intelligence in education deserves particular attention. Unfortunately, despite some small efforts, we are still at the beginning of the road. Studies do exist in the literature, but it is undeniable that they remain limited and insufficient.

After this brief introduction, let’s discuss the ethical issues we face regarding the use of artificial intelligence in education and talk about possible solutions.

1. Data Privacy and Security: Although open-source LLMs have been released and used for the past few years, commercial products still dominate. This is due both to the high cost of the hardware required to set up and run such systems and to the scarcity of human resources capable of managing them. It means that students' personal and academic data is collected by AI systems, and failure to protect this data could lead to privacy violations and data misuse. Therefore, it is crucial that data is anonymized before being transferred to commercial LLMs or AI models, that strict access controls are applied to these systems, and that strong encryption methods are used to protect data both during transfer and storage.

2. Algorithmic Bias and Inequity: This point is very important. AI models used in education are trained on past student data. If the training data is imbalanced in terms of race, gender, socioeconomic status, disability, or other demographic attributes, the trained model will perform poorly for some groups while advantaging others. A study by Gándara and colleagues showed how the accuracy of university student-success predictions differs across racialized groups, pointing to algorithmic bias [1]. Similarly, Pham and colleagues emphasized that generative AI shows promise in compensating for learning loss in K-12 education and in expanding opportunities for disadvantaged students, but also raises serious concerns that it could reinforce existing structural inequalities and deepen racial divisions; they therefore cautioned that AI applications in educational technologies must be supported by justice-based design and continuous monitoring processes [2]. Consequently, diversifying the data used to build models and continuously testing the models in use is crucial.
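As a rough sketch of what "continuously testing the models" can mean in practice (this is an illustration of the general idea, not a method from the cited studies, and all data and names are made up), one could compare a model's prediction accuracy across demographic groups and flag large gaps for review:

```python
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Prediction accuracy broken down by demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical audit data: 1 = student succeeded, 0 = did not
actual    = [1, 0, 1, 1, 0, 1]
predicted = [1, 0, 1, 1, 1, 0]
group     = ["A", "A", "A", "B", "B", "B"]

rates = accuracy_by_group(actual, predicted, group)
# A large accuracy gap between groups is a warning sign of bias
gap = max(rates.values()) - min(rates.values())
```

Real audits would of course use richer fairness metrics and far more data, but even a simple per-group breakdown like this, run regularly, catches the failure mode Gándara and colleagues describe: a model that looks accurate on average while quietly underperforming for one group.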

3. Lack of Transparency and Accountability: Deep learning-based models in particular have highly complex internal structures, and all actors in the education system (teachers, students, administrators, parents) struggle to understand how they reach their decisions. Although work on Explainable Artificial Intelligence (XAI) has accelerated in recent years, it is hard to claim the field has reached the desired point. This is the transparency gap in AI, and it prevents education actors from questioning the system's decisions and intervening when those decisions are wrong. At this point, Altukhi and Pradhan have systematized the academic, technical, and regulatory barriers to integrating XAI into education, creating a comprehensive knowledge base for future XAI applications [3]. Furthermore, in October 2024 a workshop organized by the European Digital Education Hub discussed the challenges and opportunities of using XAI systems in the classroom. Participants proposed recommendations such as developing an "explainability score" to increase transparency, creating an AI literacy framework, and measuring AI transparency with the SELFIE tool; they also emphasized the necessity of ethical, human-centered AI design, stakeholder collaboration, and funding for R&D [4]. These efforts should be supported so that models become more explainable and more comprehensible to education actors.

4. Human–AI Collaboration: AI systems are powerful in tasks such as creating customized content, automatic assessment, and performance analysis. However, they fall short in recognizing students’ emotional states, understanding their motivation, or providing empathetic responses. At this point, teachers must step in to fill this gap. In other words, AI should not replace the teacher but should complement the teacher.

5. Access Inequality: This is actually one of the world's oldest problems, and one that has never been solved! Students in rural and low-income areas, in particular, lack reliable internet connections and adequate computers, which shuts them out of AI-supported tools and further deepens inequality of opportunity in education. Just as important: even where technological access is provided, AI literacy among education actors is still lacking. If these actors are not given the training needed to become AI literate, providing access alone is meaningless. The world has clearly not yet reached the desired point here; people remain far behind in AI literacy (including educated groups!). Therefore, free or low-cost AI tools must be made available to all students. In addition, awareness of artificial intelligence among education actors should be raised, and training on AI literacy and ethics should be organized.

6. Academic Integrity and Cheating: This is also one of the most significant issues today, and with the reopening of schools, the complaints and debates will once again be on the agenda. The problems include the difficulty of determining whether a student used AI to complete their homework (even if some say detection is possible, I still believe there are ways to mask it), the absence in most schools of clear rules on whether and to what extent AI use is permitted, and the unreliability of the tools that claim to detect AI use. When students complete assignments with AI, measuring their actual learning becomes difficult and their ability to express themselves weakens. Moreover, every assignment completed with AI instead of thought further erodes students' reasoning skills.

The lack of clear rules on AI use practically encourages students to hand their assignments to AI. The obvious idea is to "promptly establish a culture of academic integrity and require students to append an AI-usage statement to their assignments," but disclosure has been a problematic issue for years, and AI has made it even more complicated. If no incentive is attached to the declaration (and even with one, I doubt it would be very effective), students may well choose concealment over honest disclosure. If penalties are announced for AI use, students may not declare it even when they have used it. And without a clear way to measure use (there is currently no reliable tool for this), uncertainty will persist. Considering all this, I believe a declaration system alone will not be sufficient.

At this point, I believe the role of homework in the education system needs to be re-examined and transformed. Homework assigned to be done at home is, by definition, done in an uncontrolled environment: it is impossible to know whether the student had AI do it, and asking for a declaration does not solve the problem. In the end, a teacher cannot fairly evaluate a student on work that was not done under controlled conditions. So what can be done? My suggestions are below.

a. Assignments should be completed in class. Students should complete their assignments in a computer or laboratory environment under teacher supervision.
b. Assignments should be evaluated not only based on the final result but also considering the steps taken to reach that result.
c. Students should briefly present the assignments they have completed. This will foster a culture of accountability among students.
d. The use of AI should not be prohibited; students should be taught how to use AI correctly. This will help students understand that AI is a tool.
e. Clear guidelines on the use of AI should be prepared for all schools and all levels. These guidelines should clearly state in which situations AI is permitted, in which situations it is prohibited, and what penalties will be imposed for violating the rules.

As I conclude my writing, I believe it is appropriate to mention that the uncontrolled use of artificial intelligence from an early age can also have a negative impact on children’s development. In particular, it carries the risk of weakening their ability to express themselves, put their thoughts into writing, solve problems, and think critically.

At the elementary and middle school levels, AI should be used as a supportive learning tool (word suggestions, concept maps) rather than as a direct production tool, and it should definitely not be used in subjects such as Turkish, composition, and mathematical problem solving. Students should first attempt an assignment or question themselves, then obtain an answer from AI and compare the two, discussing the differences under the teacher's guidance. The relevant authorities should prepare guidelines on the ages, subjects, and levels at which AI use is appropriate. Finally, teachers should be trained in pedagogical strategies that prevent students from becoming overly dependent on AI.

This fact should not be forgotten. It is not possible to reach a solution in education by leaving parents out. Education that begins at school continues at home. It is also very important to work hard to make parents literate in Artificial Intelligence.

We cannot escape the fact that AI will change our lives. Therefore, considering that we will live with it, we must raise our children, to whom we will entrust our future, in accordance with this reality.

Ecir Uğur KÜÇÜKSİLLE

References

1. Gándara, D., Anahideh, H., Ison, M. P., & Picchiarini, L. (2024). Inside the black box: Detecting and mitigating algorithmic bias across racialized groups in college student-success prediction. AERA Open, 10, 23328584241258741.

2. Pham, H., Kohli, T., Olick Llano, E., Nokuri, I., & Weinstock, A. (2024). How will AI impact racial disparities in education? Stanford Center for Racial Justice. Available online: https://law.stanford.edu/2024/06/29/how-will-ai-impact-racial-disparities-in-education/ (accessed on 6 September 2025).

3. Altukhi, Z. M., & Pradhan, S. (2025). Systematic literature review: Explainable AI definitions and challenges in education. arXiv preprint arXiv:2504.02910.

4. Insights from the community workshop on explainable AI in education (2024). European Digital Education Hub. Available online: https://education.ec.europa.eu/news/insights-from-the-community-workshop-on-explainable-ai-in-education (accessed on 6 September 2025).
