Innovative Teaching Methods in the Vocational Commercial and Trade School of Split, Croatia: A Case Study on Ethical Challenges of AI Hallucinations
CC-BY
Introduction
As part of the subject Artificial Intelligence: From Concept to Application, students explored the moral and ethical challenges associated with the use of artificial intelligence, with a particular focus on so-called AI hallucinations—cases in which AI systems generate false or misleading data. Students analyzed real-life examples such as inaccurate product recommendations, fake market trends, and inappropriate emotion recognition, fostering deeper reflection on the responsible use of AI technologies.
To enhance their understanding of the topic, students visited the Museum of Illusions in December 2024. There, they examined visual illusions and explored the similarities between human perceptual errors and AI misinterpretations. This visit helped students better grasp the challenges of implementing AI technologies and emphasized the importance of ethical awareness in their use.
Learning Outcomes
By the end of the lesson, students had:
- Defined the term AI hallucination and understood its implications.
- Discussed ethical dilemmas caused by AI hallucinations.
- Created presentations on the moral and ethical challenges related to AI hallucinations.
Lesson Overview
1. Introduction to AI Hallucinations
The lesson began with an explanation of the concept of hallucination in artificial intelligence: when an AI generates incorrect or fictitious information without any grounding in real data.
Several practical examples were presented, including false financial product recommendations and misinterpretation of user preferences.
2. Group Work – Analyzing Ethical Dilemmas
Students were divided into groups and analyzed specific scenarios involving AI hallucinations in fields such as economics and commerce. Each group discussed questions such as:
- What are the consequences of AI hallucinations in the economy?
- Who is responsible for the harm or misunderstandings caused by false data?
- What are possible solutions or preventative measures?
3. Examples of AI Hallucinations
Students presented and discussed various real-world scenarios, including:
- Non-existent medicine: A medical app recommended a fabricated drug, potentially misleading users lacking medical knowledge.
- Inaccurate financial advice: AI systems suggested high-risk investment options to users seeking low-risk portfolios, leading to possible financial losses.
- Invented historical events: A history chatbot fabricated events or alliances, misinforming students or researchers relying on AI for educational purposes.
- Unusual culinary combinations: Recipe-generation AIs suggested strange or even dangerous ingredient mixes, such as allergens or toxic combinations.
- Fake ecological tips: AI assistants recommended harmful substances as “eco-friendly” gardening solutions, risking environmental or health damage.
- Incorrect navigation routes: A navigation AI proposed inaccessible or closed routes, potentially endangering users or wasting their time.
4. Presentation Development
Each group created a short digital presentation (4–5 slides), which included:
- A description of the scenario and its potential impact on users.
- An analysis of the associated moral and ethical dilemmas.
- Suggestions for responsible AI usage in real-life settings.
Students used tools such as Google Slides and prepared to present their conclusions to the class.
5. Conclusion and Discussion
Each group presented its work to the class. A follow-up discussion was held on the importance of ethical AI implementation in both professional and everyday contexts. The students also reflected on the need to regulate AI technologies that handle biometric and personal user data.
December Field Trip – Museum of Illusions
This year, students visited the Museum of Illusions to deepen their understanding of AI hallucinations. By experiencing optical illusions firsthand, they explored how human perception can be deceived—just as AI systems can “hallucinate” or misinterpret data. This interactive experience reinforced critical thinking skills and highlighted the limitations of both human and artificial cognition. It served as a meaningful parallel between perceptual illusions in humans and data misrepresentation in AI.
Final Reflection
Through this learning activity and the museum visit, students gained a clearer understanding of the risks and responsibilities involved in using artificial intelligence—particularly in situations where AI generates misleading or fabricated information. This approach exemplified an innovative and practical teaching method that encouraged critical thinking, collaboration, and ethical awareness in vocational schools with a commercial focus.
About the author: Ivana Prezzi graduated in Management from the University of Economics in Zagreb in 1999. In the past, she worked both for several companies and as an entrepreneur. Today, she is a teacher who enjoys passing on her knowledge to students and working with them. She teaches Economics at the Commercial and Trade School of Split, Croatia, EU. Along with loving Economics, she is also a fan of games.