¿Puede GPT4 identificar modelos mentales erróneos?

Authors:
  1. Gallego-Durán, Francisco J.
  2. Compañ, Patricia
  3. Villagrá-Arnedo, Carlos-José
Journal:
Actas de las Jornadas sobre la Enseñanza Universitaria de la Informática (JENUI)

Coordinators:
  1. Cruz Lemus, José Antonio (coord.)
  2. Dapena, Adriana (coord.)
  3. Paramá Gabia, José Ramón (coord.)

ISSN: 2531-0607

Year of publication: 2024

Issue: 9

Pages: 25-33

Type: Article


Abstract

This paper explores the use of GPT4 to evaluate erroneous mental models held by C++ programming students, comparing its assessments with faculty evaluations. Despite the challenges of optimizing token consumption and minimizing costs, it was found that GPT4 tended to identify more erroneous models than the teaching staff. However, a deeper analysis indicated a significant match between GPT4 and faculty evaluations in several cases, suggesting its potential utility in this context. During the research, it was discovered that a specific error in the handling of responses led to incorrect interpretations by GPT4, caused by inaccuracies in the input data. This finding highlights the importance of precision and detail in preparing data for AI analysis. Upon reviewing the discrepancies between GPT4 and the faculty evaluations, it was concluded that in about half of them GPT4’s conclusions were more accurate. This demonstrates that GPT4 can be used to enhance the teaching staff’s understanding and evaluation of students’ mental models. It is concluded that GPT4 can be a valuable tool to assist in the evaluation of erroneous mental models in programming students, provided that the experiment design and data quality are carefully managed. This approach not only improves the efficiency of the evaluation process but also allows the faculty to rethink their own criteria, leveraging artificial intelligence to complement and enrich teaching and evaluation.
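
The record does not reproduce the study’s prompts or evaluation code. As a rough, hedged illustration of the kind of pipeline the abstract describes, the following Python sketch asks GPT4 (through the OpenAI chat-completions API) to flag catalogued misconceptions in a single student answer. The model identifier, the misconception list, the prompt wording and the flag_misconceptions helper are assumptions made for illustration, not material from the study.

```python
# Illustrative sketch only: the paper does not publish its prompts or code.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment;
# the model name, misconception catalogue and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

# Hypothetical catalogue of erroneous mental models to check for.
MISCONCEPTIONS = [
    "assignment works like a mathematical equation rather than storing a value",
    "variables update automatically when related variables change",
    "a loop condition is re-checked continuously, not once per iteration",
]

def flag_misconceptions(student_answer: str) -> str:
    """Ask the model which catalogued misconceptions (if any) a C++ answer reveals."""
    prompt = (
        "You are grading a C++ exercise. Given the student's answer below, "
        "list which of these erroneous mental models it shows, or answer 'none':\n"
        + "\n".join(f"- {m}" for m in MISCONCEPTIONS)
        + f"\n\nStudent answer:\n{student_answer}"
    )
    response = client.chat.completions.create(
        model="gpt-4",                      # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
        temperature=0,                      # keep the grading as deterministic as possible
        max_tokens=200,                     # bound the token cost per answer
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(flag_misconceptions("int x = 5; x + 1;  // I expected x to become 6"))
```

Capping max_tokens and batching several answers per request are the obvious levers for the token-consumption and cost concerns the abstract mentions.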

Bibliographic References

  • [1] Daniel Amo-Filva, David Fonseca, David Vernet, Eduard De Torres, Pol Muñoz Pastor, Víctor Caballero, Eduard Fernandez, Marc Alier Forment, Francisco José García-Peñalvo, Alicia García-Holgado, Faraón Llorens-Largo, Rafael Molina-Carmona, Miguel Á. Conde, y Ángel Hernández-García. Usos y desusos del modelo GPT-3 entre estudiantes de grados de ingeniería. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 415–418, Granada, julio 2023.
  • [2] Laurent Avila-Chauvet, Diana Mejía, y Christian Oswaldo Acosta Quiroz. ChatGPT as a Support Tool for Online Behavioral Task Programming, enero 2023.
  • [3] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, y Dario Amodei. Language models are few-shot learners. En Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA, 2020. Curran Associates Inc.
  • [4] Óscar Cánovas Reverte. Explorando el papel de la IA en la educación universitaria de la informática a través de una conversación. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 217–224, Granada, julio 2023.
  • [5] Francisco de Sande y Pablo López Ramos. El impacto de asistentes basados en IA en la enseñanza-aprendizaje de la programación. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 163–170, Granada, julio 2023.
  • [6] Francisco J. Gallego-Durán, Patricia Compañ-Rosique, Carlos-José Villagrá-Arnedo, Gala M. García Sánchez, y Rosana Satorre Cuerda. Modelos mentales erróneos y persistentes en programación. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 277–286, Granada, julio 2023.
  • [7] Sajed Jalil, Suzzana Rafi, Thomas D. LaToza, Kevin Moran, y Wing Lam. ChatGPT and Software Testing Education: Promises & Perils. En 2023 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), pp. 4130–4137. IEEE, abril 2023.
  • [8] Rosária Justi y Jan van Driel. The use of the interconnected model of teacher professional growth for understanding the development of science teachers’ knowledge on models and modelling. Teaching and Teacher Education, 22:437–450, mayo 2006.
  • [9] Isaac Lera, Gabriel Moyà-Alcover, Carlos Guerrero, y Antoni Jaume-i-Capó. Reflexiones y perspectivas del uso de ChatGPT en la docencia del grado en Ingeniería Informática. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 315–322, Granada, julio 2023.
  • [10] Luis Jiménez Linares, Julio Alberto López-Gómez, José Ángel Martín-Baos, Francisco P. Romero, y Jesus Serrano-Guerrero. ChatGPT: reflexiones sobre la irrupción de la inteligencia artificial generativa en la docencia universitaria. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 113–120, Granada, julio 2023.
  • [11] Roberto Rodríguez-Echeverría, Juan D. Gutiérrez, José M. Conejero, y Álvaro E. Prieto. Impacto de ChatGPT en los métodos de evaluación de un grado de Ingeniería Informática. En Actas de las XXIX Jornadas de Enseñanza Universitaria de Informática, Jenui 2023, pp. 33–40, Granada, julio 2023.
  • [12] Sami Sarsa, Paul Denny, Arto Hellas, y Juho Leinonen. Automatic Generation of Programming Exercises and Code Explanations Using Large Language Models. En Proceedings of the 2022 ACM Conference on International Computing Education Research V.1, pp. 27–43, Lugano and Virtual Event, Switzerland, agosto 2022. ACM.