Design and validation of a rubric to assess the use of American Psychological Association Style in scientific articles

  1. Gladys Merma Molina 1
  2. Hilda Peña Alfaro 2
  3. Silvia Peña Alfaro 3

Affiliations:

  1. Universitat d'Alacant, Alicante, Spain (ROR: https://ror.org/05t8bcz72)
  2. New World Spanish-Comunicart
  3. Consultoría en Lingüística Aplicada
Journal:
NAER: Journal of New Approaches in Educational Research

ISSN: 2254-7339

Year of publication: 2017

Volume: 6

Issue: 1

Pages: 78-86

Type: Article

DOI: 10.7821/NAER.2017.1.220


Abstract

This study describes the process of designing and validating a rubric to assess the conformity of scientific articles to the style of the American Psychological Association (APA). The rubric evaluates those aspects of APA Style that allow authors, editors, and reviewers to determine whether a scientific article complies with these rules; overall, it concentrates on the General Aspects of the article and on its Citation System. To validate the instrument, 10 articles published between 2012 and 2016 in journals included in the Journal Citation Reports were analyzed through expert judgment. After 5 pilot studies, the results confirmed the validity and reliability of the instrument. Furthermore, the process provided evidence of the rubric's potential to contribute to uniform criteria that can serve as a didactic tool in different scenarios.

Funding information

This work was developed under the projects Grupo de Investigación Interdisciplinar en Docencia Universitaria (GIDU) and Proyecto Diseño y Atención a las Oportunidades de Género en la Educación Superior of the Universidad de Alicante.

Bibliographic References

  • American Psychological Association (2010). Manual de Publicaciones de la American Psychological Association (3ª ed.). México: Manual Moderno.
  • Andrade, H. G. (2005). Teaching with rubrics: The good, the bad, and the ugly. College Teaching, 53(1), 27–30. doi:10.3200/CTCH.53.1.27-31
  • Brown, G. T. L., Glasswell, K., & Harland, D. (2004). Accuracy in the scoring of writing: Studies of reliability and validity using a New Zealand writing assessment system. Assessing Writing, 9(2), 105–121. doi:10.1016/j.asw.2004.07.001
  • Burt, C. G., Cima, R. R., Koltun, W. A., Littlejohn, C. E., Ricciardi, R., Temple, L. K., Rothenberger, D. A., & Baxter, N. N. (2009). Developing a research agenda for the American Society of Colon and Rectal Surgeons: Results of a Delphi approach. Diseases of the Colon & Rectum, 52, 898–905. doi:10.1007/DCR.0b013e3181a0b358
  • Camacho, M. E., Rojas, M. E., & Rojas, L. (2014). El artículo científico para revista académica: pautas para su planificación y edición de acuerdo con el modelo APA. e-Ciencias de la Información, 4(2), 1–29.
  • Cho, K., Schunn, C. D., & Wilson, R. W. (2006). Validity and reliability of scaffolded peer assessment of writing from instructor and student perspectives. Journal of Educational Psychology, 98(4), 891–901. doi:10.1037/0022-0663.98.4.891
  • Cooksey, R. (2014). Illustrating statistical procedures: Finding meaning in quantitative data. Australia: Tilde University Press.
  • De Villiers, M. R., De Villiers, P. J., & Kent, A. P. (2005). The Delphi technique in health sciences education research. Medical Teacher, 27(7), 639–643. doi:10.1080/01421590500069947
  • Floyd, R. G., Cooley, K. M., Arnett, J. E., Fagan, T. K., Mercer, S. H., & Hingle, C. (2011). An overview and analysis of journal operations, journal publication patterns, and journal impact in school psychology and related fields. Journal of School Psychology, 49(6), 617–647. doi:10.1016/j.jsp.2011.11.008
  • Giménez, E. (2014). Imposturas en el ecosistema de la publicación científica. Revista de Investigación Educativa, 32(1), 13–23. doi:10.6018/rie.32.1.190251
  • Greenberg, K. P. (2012). A reliable and valid weighted scoring instrument for use in grading APA-style empirical research reports. Teaching of Psychology, 39(1), 17–23. doi:10.1177/0098628311430643
  • Gwet, K. L. (2014). Handbook of inter-rater reliability: The definitive guide to measuring the extent of agreement among raters. Advanced Analytics, LLC.
  • Hafner, J. C., & Hafner, P. M. (2003). Quantitative analysis of the rubric as an assessment tool: An empirical study of student peer-group rating. International Journal of Science Education, 25, 1509–1528. doi:10.1080/0950069022000038268
  • Hernández, R., Fernández, C., & Baptista, P. (2003). Metodología de la investigación (3ª ed.). México: McGraw-Hill.
  • Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2, 130–144. doi:10.1016/j.edurev.2007.05.002
  • Moskal, B. M. (2000). Scoring rubrics: What, when and how? Practical Assessment, Research & Evaluation, 7(3), 1–5.
  • Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10), 71–81.
  • Newell, J. A., Dahm, K. D., & Newell, H. L. (2002). Rubric development and inter-rater reliability issues in assessing learning outcomes. Chemical Engineering Education, 36(3), 212–215.
  • Nickerson, R. S. (2005). What authors want from journal reviewers and editors. American Psychologist, 60, 661–662. doi:10.1037/0003-066X.60.6.661
  • Peat, B. (2006). Integrating writing and research skills: Development and testing of a rubric to measure student outcomes. Journal of Public Affairs Education, 12, 295–311.
  • Picón, E. (2013). The role of rubrics in fair assessment practices. Íkala, Revista de Lenguaje y Cultura, 18(3), 79–94.
  • Roddy, E., Zhang, W., Doherty, M., Arden, N. K., Barlow, J., Birrell, F., Carr, A., Chakravarty, K., Dickson, J., Hay, E., Hosie, G., Hurley, M., Jordan, K. M., McCarthy, C., McMurdo, M., Mockett, S., O’Reilly, S., Peat, G., Pendleton, A., & Richards, S. (2006). Evidence based clinical guidelines: A new system to better determine true strength of recommendation. Journal of Evaluation in Clinical Practice, 12(3), 347–352. doi:10.1111/j.1365-2753.2006.00629.x
  • Stellmack, M. A., Konheim-Kalkstein, Y. L., Manor, J. E., Massey, A. R., & Schmitz, J. A. P. (2009). An assessment of reliability and validity of a rubric for grading APA-style introductions. Teaching of Psychology, 36(2), 102–107. doi:10.1080/00986280902739776
  • Steurer, J. (2011). The Delphi method: An efficient procedure to generate knowledge. Skeletal Radiology, 40(8), 959–961. doi:10.1007/s00256-011-1145-z
  • Thaler, N., Kazemi, E., & Huscher, C. (2009). Developing a rubric to assess student learning outcomes using a class assignment. Teaching of Psychology, 36(2), 113–116. doi:10.1080/00986280902739305