Abstract

Since its launch in November 2022, the conversational agent Chat Generative Pre-Trained Transformer (ChatGPT) has undoubtedly made an impact on the world of higher education. The ease of interaction, the speed of response, and the ostensible relevance of its output make this chatbot a revolutionary educational tool. ChatGPT goes beyond tasks such as organizing work, sharing notes, or fostering communities of exchange. It can substitute for teacher expertise and handle many of the tasks required of students: searching for references, summarizing documents, or drafting academic papers. The question of whether to allow or ban this tool has divided opinion in higher education. Upon closer scrutiny and dispassionate analysis, however, ChatGPT is found to produce inaccurate results. Despite its mistakes, fabrications, and superficial texts, it remains a valued tool, but it raises fundamental questions: can the results produced by ChatGPT be taken for granted? How will our students cope with this lack of veracity? This popularized paper explains how hallucination in Artificial Intelligence (AI) drives conversational agents to produce fabrications, and explores means of mitigating these effects, with the aim of training both students and professors to take a critical approach to AI agents and their uses. Beyond explaining AI hallucinations, the paper shows how to interact with conversational agents so as to minimize this behavior. It concludes that the use of conversational agents can benefit learning, provided the student adopts a critical view of the tool.
