English Summative Test: The Quality of Its Items

Authors

  • Thresia Trivict Semiun, University of Timor
  • Maria Wihelmina Wisrance
  • Merlin Helentina Napitupulu

DOI:

https://doi.org/10.29407/jetar.v7i2.18347

Keywords:

items, quality, English summative tests

Abstract

It is crucial to conduct evaluation after the teaching and learning process. Evaluation reflects the success of teaching and, more importantly, the achievement of the students. Therefore, EFL teachers should develop good tests to measure students' achievement. This study analyzed multiple-choice items of English summative tests constructed by junior high school EFL teachers in Kupang, NTT. The results of the analysis serve as feedback to the English teachers on the quality of the English summative tests they had created. This was descriptive research with documentation as the data collection technique. The English summative tests for grades VIII and IX were collected and then analyzed using ITEMAN software to reveal the item difficulty, item discrimination, and distractor effectiveness of the tests. The findings revealed that the English summative tests were composed of easy items. However, the tests still had a good discriminatory level. Only half of the test items had all of their distractors performing well.
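The abstract refers to the classical item-analysis statistics that ITEMAN reports. As a rough illustration only (not the authors' actual procedure and not the ITEMAN implementation), the sketch below computes the three indices named in the abstract (item difficulty, point-biserial discrimination, and the proportion of examinees choosing each option) from a small hypothetical answer matrix; all data and names in it are made up for demonstration.

```python
# Illustrative classical item analysis (hypothetical data; not the ITEMAN implementation).
from statistics import mean, pstdev

answers = [            # hypothetical responses: one string per examinee, one character per item
    "ABCDA", "ABCDB", "ACCDA", "ABBDA", "DBCDA", "ABCAA", "ABCDA", "CBCDC",
]
key = "ABCDA"          # hypothetical answer key
options = "ABCD"

n_items = len(key)
scores = [sum(resp[i] == key[i] for i in range(n_items)) for resp in answers]

for i in range(n_items):
    correct = [resp[i] == key[i] for resp in answers]
    p = mean(correct)                      # item difficulty: proportion answering correctly

    # Point-biserial correlation between the item score (0/1) and the total test score.
    sd_total = pstdev(scores)
    if 0 < p < 1 and sd_total > 0:
        mean_correct = mean(s for s, c in zip(scores, correct) if c)
        r_pb = (mean_correct - mean(scores)) / sd_total * (p / (1 - p)) ** 0.5
    else:
        r_pb = float("nan")                # undefined when everyone (or no one) answers correctly

    # Distractor analysis: proportion of examinees selecting each option.
    choice_props = {opt: mean(resp[i] == opt for resp in answers) for opt in options}

    print(f"item {i + 1}: difficulty={p:.2f}, discrimination={r_pb:.2f}, options={choice_props}")
```

In this scheme, a high difficulty index means an easy item, a higher point-biserial means better discrimination between high- and low-scoring examinees, and a distractor chosen by almost no one is a candidate for revision.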


References

Allen, M. (2004). Assessing academic programs in higher education. Bolton: Anker Publishing Company, Inc.

Arifin, Z. (2013). Evaluasi pembelajaran: prinsip, teknik, prosedur. Bandung: PT. Remaja Rosdakarya.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice. Designing and developing useful language tests. New York: Oxford University Press.

Brown, H. D. (2004). Language assessment. Principles and classroom practices. New York: Pearson Education, Inc.

Cizek, G., & O’Day, D. (1994). Further investigation of nonfunctioning options in multiple-choice test items. Educational and Psychological Measurement, 54, 861-872. doi:https://doi.org/10.1177/0013164494054004002

Danuwijaya, A. A. (2018). Item analysis of reading comprehension test for postgraduate students. English Review: Journal of English Education, 7(1), 29-40. doi: 10.25134/erjee.v7i1.1493.

DiBattista, D., & Kurzawa, L. (2011). Examination of the quality of multiple-choice items on classroom tests. The Canadian Journal for the Scholarship of Teaching and Learning, 2(2). doi:https://doi.org/10.5206/cjsotl-rcacea.2011.2.4

Djiwandono, S. (2011). Tes bahasa: Pegangan bagi pengajar bahasa (2nd ed.). Jakarta: PT. Indeks Jakarta.

Fraenkel, J. & Wallen, N. (2006). How to design and evaluate research in education (6th ed.). New York: McGraw-Hill.

Gronlund, N. E. (1982). Measurement and evaluation in teaching (4th ed.). New York: Macmillan.

Gronlund, N. E. (1998). Assessment of student achievement (6th ed.). Boston: Allyn and Bacon.

Hadi, S., & Kusumawati. (2018). An analysis of multiple-choice questions (MCQs): Item and test statistics from Mathematics assessments in senior high school. Research and Evaluation in Education, 4(1), 70-78. doi:https://doi.org/10.21831/reid.v4i1.20202

Haladyna, T. M., & Downing, S. M. (1993). How many options is enough for a multiple-choice test item? Educational and Psychological Measurement, 53(4), 999-1010. doi:https://doi.org/10.1177/0013164493053004013

Karim, S. A., Sudiro, S., & Sakinah, S. (2021). Utilizing test items analysis to examine the level of difficulty and discriminating power in a teacher-made test. EduLite: Journal of English Education, Literature, and Culture, 6 (2), 256-269. http://dx.doi.org/10.30659/e.6.2.256-269

Koretz, D. (2002). Limitations in the use of achievement tests as measures of educators' productivity. The Journal of Human Resources, 37(4), 752-777. doi:https://doi.org/10.2307/3069616

Maharani, A. & Putro, N. (2020). Item analysis of English final semester test. Indonesian Journal of EFL and Linguistics, 5(2), 491-504. doi:http://dx.doi.org/10.21462/ijefl.v5i2.302

Nana, E. (2018). An analysis of English teacher-made tests. State University of Makassar.

Pradanti, S., Martono, M., & Sarosa, T. (2018). An item analysis of English summative test for the first semester of the third grade junior high school students in Surakarta. English Education Journal, 6(3), 312-318. doi:https://doi.org/10.20961/eed.v6i3.35891

Rudner, L. M., & Schafer, W. D. (2002). What teachers need to know about assessment. Washington, DC: National Education Association.

Salwa, A. (2012). The validity, reliability, level of difficulty and appropriateness of curriculum of the English test. Diponegoro University.

Santyasa, I. W. (2005). Analisis Butir dan Konsistensi Internal Tes (Online). http://johannes.lecture.ub.ac.id.

Suharsimi, A. (2010). Dasar-Dasar Evaluasi Pendidikan. Jakarta: Bumi Aksara.

Setiyana, R. (2016). Analysis of summative tests for English. EEJ, 7(4), 433-447. doi:https://doi.org/10.35308/ijelr.v2i2.2781

Semiun, T., & Luruk, F. (2020). The quality of an English summative test of a public junior high school, Kupang-NTT. English Language Teaching Educational Journal, 3(2), 133-141. doi:https://doi.org/10.12928/eltej.v3i2.2311

Tarrant, M., Ware, J., & Mohammed, A. M. (2009). An assessment of functioning and non-functioning distractors in multiple-choice questions: A descriptive analysis. BMC Medical Education, 9, 40. doi:https://doi.org/10.1186/1472-6920-9-40

Wells, C. S., & Wollack, J. A. (2003). An instructor’s guide to understanding test reliability. Madison: University of Wisconsin.


Published

2022-10-29

How to Cite

Semiun, T. T., Wisrance, M. W., & Napitupulu, M. H. (2022). English summative test: The quality of its items. English Education: Journal of English Teaching and Research, 7(2), 119–127. https://doi.org/10.29407/jetar.v7i2.18347