Development and psychometric properties of the Student Evaluation of Dissertation Tutoring Scale (SEDITUS) in Higher Education

  1. Veas Iniesta, Alejandro (1)
  2. Navas, Leandro

  (1) Universitat d'Alacant, Alicante, Spain. ROR: https://ror.org/05t8bcz72

Journal:
Anuario de psicología

ISSN: 0066-5126

Year of publication: 2023

Volume: 53

Issue: 2

Pages: 23-32

Type: Article

Abstract

Tutoring students through academic dissertations is an important competence for university teaching staff. However, no scales are available that measure the skills associated with tutoring quality. Drawing on two empirical studies, we develop and evaluate the psychometric properties of the Student Evaluation of Dissertation Tutoring Scale (SEDITUS). In Study 1 (N = 82, 72% women), eight initial items were proposed and fitted a unidimensional structure under the polytomous Rasch model. Differential item functioning (DIF) analyses showed no significant gender differences on any item, and the four-category response structure performed adequately. In Study 2 (N = 1046, 69.03% women), an expert committee removed one item because the process it described did not generalize across the degree programmes attached to the faculties. A multilevel Rasch model was used to account for the nested structure of the data (students nested within eight faculties). The results replicated those of the first study, showing appropriate psychometric properties at both the item and the construct level. Taken together, both studies suggest the SEDITUS scale as a quick measure of dissertation tutoring processes.
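As a reading aid, the two measurement models named in the abstract can be stated compactly. The LaTeX sketch below uses standard notation and is not reproduced from the article itself: first the rating scale (polytomous Rasch) model of Andrich (1978) fitted in Study 1, then a generic two-level decomposition of person ability of the kind a multilevel Rasch model (Study 2) relies on.

% Rating scale model (Andrich, 1978): probability that student n
% responds in category x (x = 0, ..., m; m = 3 for four categories)
% of item i, with person ability \theta_n, item location \delta_i,
% and category thresholds \tau_k shared across items.
\[
P(X_{ni} = x \mid \theta_n) =
  \frac{\exp\!\Bigl( x(\theta_n - \delta_i) - \sum_{k=1}^{x} \tau_k \Bigr)}
       {\sum_{j=0}^{m} \exp\!\Bigl( j(\theta_n - \delta_i) - \sum_{k=1}^{j} \tau_k \Bigr)},
\]
% where the empty sum for x = 0 equals zero.

% Multilevel extension (students nested in faculties): the ability of
% student n in faculty g splits into a grand mean, a faculty effect,
% and a student-level deviation, with independent normal distributions.
\[
\theta_{ng} = \mu + u_g + e_{ng}, \qquad
u_g \sim \mathcal{N}(0, \sigma_u^2), \qquad
e_{ng} \sim \mathcal{N}(0, \sigma_e^2).
\]

Under this decomposition, the faculty-level variance \(\sigma_u^2\) captures how much tutoring evaluations differ between the eight faculties, while \(\sigma_e^2\) captures variation among students within a faculty.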

References

  • Andrich, D. (1978). A rating formulation for ordered response categories. Psychometrika, 43, 561-573.
  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education (2014). Standards for educational and psychological testing. American Educational Research Association.
  • Bettany-Saltikov, J., Kilinc, S., & Stow, K. (2009). Bones, boys, bombs and booze: an exploratory study of the reliability of marking dissertations across disciplines. Assessment & Evaluation in Higher Education, 34(6), 621-639. https://doi.org/10.1080/02602930802302196
  • Boice, R. (1991). New faculty as teachers. Journal of Higher Education, 62(2), 150-173.
  • Bond, T. G., & Fox, C. M. (2015). Applying the Rasch model: Fundamental measurement in the human sciences (2nd ed.). Routledge.
  • Butterfield, E., Hacker, J., & Albertson, L. (1996). Environmental, cognitive, and metacognitive influences on text revision: Assessing the evidence. Educational Psychology Review, 8(3), 239-297.
  • Castelló, M., Iñesta, A., Pardo, M., Liesa, E., & Martínez-Fernández, R. (2012). Tutoring the end-of-studies dissertation: helping psychology students find their academic voice when revising academic texts. Higher Education, 63, 97-115. https://doi.org/10.1007/s10734-011-9428-9
  • Chaimongkol, S., Huffer, F. W., & Kamata, A. (2007). An explanatory differential item functioning (DIF) model by the WinBUGS 1.4. Songklanakarin Journal of Science and Technology, 29, 449-458.
  • Chalmers, P. (2018). mirt: A multidimensional item response theory package for the R environment. Retrieved from https://cran.r-project.org/web/packages/mirt/mirt.pdf
  • Choi, S. W., Gibbons, L. E., & Crane, P. K. (2011). lordif: An R package for detecting differential item functioning using iterative hybrid ordinal logistic regression/item response theory and Monte Carlo simulations. Journal of Statistical Software, 39(8), 1-30.
  • Christensen, K. B., Makransky, G., & Horton, M. (2017). Critical values for Yen’s Q3: Identification of local dependence in the Rasch model using residual correlations. Applied Psychological Measurement, 41(3), 178-194. https://doi.org/10.1177/0146621616677520
  • Cooper, A., & Petrides, K. V. (2010). A psychometric analysis of the Trait Emotional Intelligence Questionnaire-Short Form (TEIQue-SF) using item response theory. Journal of Personality Assessment, 92(5), 449-457. https://doi.org/10.1080/00223891.2010.497426
  • Couzijn, M., & Rijlaarsdam, G. (2005). Learning to write by reader observation and written feedback. In G. Rijlaarsdam, H. Van den Bergh, & M. Couzijn (Eds.), Effective teaching and learning of writing: Current trends in research (pp. 224-253). Amsterdam University Press.
  • Dietrich, J., Dicke, A. L., Kracke, B., & Noack, P. (2015). Teacher support and its influence on students’ intrinsic value and effort: Dimensional comparison effects across subjects. Learning and Instruction, 39, 45-54. https://doi.org/10.1016/j.learninstruc.2015.05.007
  • Evans, C., Kandiko-Howson, C., Forsythe, A., & Edwards, C. (2020). What constitutes high quality higher education pedagogical research? Assessment & Evaluation in Higher Education. https://doi.org/10.1080/02602938.2020.1790500
  • Fox, J.-P., & Verhagen, A. J. (2010). Random item effects modeling for cross-national survey data. In E. Davidov, P. Schmidt, & J. Billiet (Eds.), Cross-cultural analysis: Methods and applications (pp. 467-488). Routledge Academic.
  • García-Moya, I., Brooks, F., & Moreno, C. (2020). A new measure for the assessment of student-teacher connectedness in adolescence. European Journal of Psychological Assessment, 37(5), 357-367. https://doi.org/10.1027/1015-5759/a000621
  • Glaser, R., Lesgold, A., & Lajoie, S. P. (1988). Toward a cognitive theory for the measurement of achievement. In R. Ronning, J. Glover, J. S. Conoley, & J. C. Wittrock (Eds.), The influence of cognitive psychology on testing (Vol. 3). Erlbaum.
  • Helm, F., Wolff, F., Möller, J., Zitzmann, S., Marsh, H., & Dicke, T. (2022). Individualized teacher frame of reference and student self-concept within and between school subjects. Journal of Educational Psychology. Advanced online publication. https://doi.org/10.1037/edu0000737
  • Huybers, T. (2014). Student evaluation of teaching: the use of best-worst scaling. Assessment & Evaluation in Higher Education, 39(4), 496-513. https://doi.org/10.1080/02602938.2013.851782
  • Keeley, J., Furr, R. M., & Buskist, W. (2010). Differentiating psychology students’ perceptions of teachers using the Teacher Behavior Checklist. Teaching of Psychology, 37, 16-20. https://doi.org/10.1080/00986280903426282
  • Lamprianou, I. (2013). Application of single-level and multilevel Rasch models using the lme4 package. Journal of Applied Measurement, 14(1), 1-12.
  • Larsson, S. (1986). Learning from experience: teachers’ conceptions of changes in their professional practice. Journal of Curriculum Studies, 19(1), 37-43.
  • Leinhardt, G., & Greeno, J. G. (1986). The cognitive skill of teaching. Journal of Educational Psychology, 78(2), 75-95.
  • Linacre, J. M. (2002). Optimizing rating scale category effectiveness. Journal of Applied Measurement, 3, 85-106.
  • Linse, A. R. (2017). Interpreting and using student rating data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 54, 94-106. https://doi.org/10.1016/j.stueduc.2016.12.004
  • Lüdtke, O., Köller, O., Marsh, H. W., & Trautwein, U. (2005). Teacher frame of reference and the big-fish-little-pond effect. Contemporary Educational Psychology, 30(3), 263-285. https://doi.org/10.1016/j.cedpsych.2004.10.002
  • Luo, Y., & Jiao, H. (2018). Using the Stan program for Bayesian Item Response Theory. Educational and Psychological Measurement, 78(3), 384-408.
  • Maher, D., Seaton, L., McMullen, C., Fitzgerald, T., Otsuji, E., & Lee, A. (2008). Becoming and being writers: The experiences of doctoral students in writing groups. Studies in Continuing Education, 30(3), 263-275.
  • Marais, I., & Andrich, D. (2008). Effects of varying magnitude and patterns of response dependence in the unidimensional Rasch model. Journal of Applied Measurement, 9(2), 105-124.
  • Marsh, H. W. (1982). SEEQ: a reliable, valid and useful instrument for collecting students’ evaluation of university teaching. British Journal of Educational Psychology, 52, 77-95.
  • Marsh, H. W., Muthén, B., Asparouhov, T., Lüdtke, O., Robitzsch, A., Morin, A. J. S., & Trautwein, U. (2009). Exploratory structural equation modeling, integrating CFA and EFA: Application to students’ evaluations of university teaching. Structural Equation Modeling, 16, 439-476. https://doi.org/10.1080/10705510903008220
  • Pajares, M. F. (1992). Teachers’ beliefs and educational research: cleaning up a messy construct. Review of Educational Research, 62(3), 307-332.
  • Pratt, D. D. (1992). Conceptions of teaching. Adult Education Quarterly, 42(4), 203-220.
  • Rasch, G. (1960). Probabilistic models for some intelligence and attainment tests. Danish Institute for Educational Research (Expanded edition, 1980, University of Chicago Press).
  • Richardson, J. T. E. (2005). Instruments for obtaining student feedback: a review of the literature. Assessment & Evaluation in Higher Education, 30(4), 387-415. https://doi.org/10.1080/02602930500099193
  • Robitzsch, A. (2021). sirt: Supplementary item response theory models. R package version 3.11-21. https://CRAN.R-project.org/web/packages/sirt/sirt.pdf
  • Rubio, V. J., Aguado, D., Hontangas, P. M., & Hernández, J. M. (2007). Psychometric properties of an emotional adjustment measure: An application of the graded response model. European Journal of Psychological Assessment, 23, 39-46.
  • Sánchez, T., Veas, A., Gilar-Corbí, R., & Castejón, J. L. (2021). Psychometric perspectives in educational and learning capitals: Development and validation of a scale on student evaluation of teaching in Higher Education. Psychological Test and Assessment Modeling, 63(2), 149-167.
  • Saroyan, A., & Amundsen, C. (2001). Evaluating university teaching: Time to take stock. Assessment & Evaluation in Higher Education, 26(4), 341-353. https://doi.org/10.1080/02602930120063493
  • Stierer, B., & Antoniou, M. (2004). Are there distinctive methodologies for pedagogic research in Higher Education? Teaching in Higher Education, 9(3), 275-285. https://doi.org/10.1080/1356251042000216606
  • Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics. Pearson.
  • Toland, M., & De Ayala, R. J. (2005). A multilevel factor analysis of students’ evaluation of teaching. Educational and Psychological Measurement, 65, 272-296. https://doi.org/10.1177/0013164404268667
  • Webb, M., & Jones, J. (2009). Exploring tensions in developing assessment for learning. Assessment in Education: Principles, Policy & Practice, 16(2), 165-184. https://doi.org/10.1080/09695940903075925
  • Wetzel, E., & Greiff, S. (2018). The world beyond rating scales: Why we should think more carefully about the response format in questionnaires [Editorial]. European Journal of Psychological Assessment, 34(1), 1-5. https://doi.org/10.1027/1015-5759/a000469
  • Wright, B. D., & Masters, G. N. (1982). Rating scale analysis. MESA Press.
  • Wyatt-Smith, C., Klenowski, V., & Gunn, S. (2010). The centrality of teachers’ judgement practice in assessment: a study of standards in moderation. Assessment in Education: Principles, Policy & Practice, 17(1), 59-75. https://doi.org/10.1080/09695940903565610