Evaluating the Comparability of PPT and CBT by Implementing the Compulsory Islamic Culture Course Test at the University of Jordan

Abdelnaser Sanad Alakyleh


This study aims to determine whether university students' scores on a compulsory Islamic culture course test differ between the paper-and-pencil test (PPT) and computer-based test (CBT) versions, and to examine the relationship between gender and students' performance on the test. The study therefore evaluated the comparability of the two versions of the test (PPT and CBT). The importance of conducting the study in Jordan stems from the fact that public and private universities have begun to move away from traditional testing formats such as PPTs and toward CBTs. The study also sought to determine which format yields the better outcomes and possesses sound psychometric properties, and whether any differences exist between males and females. The sample consisted of 120 students (67 females and 53 males) from scientific, health, and humanities colleges. The results showed no significant difference between the two versions administered to students (CBT and PPT), with a moderate correlation of 0.36 with the pre-CBT test, and no significant differences between males and females in the CBT results. On the basis of these results, the CBT is a viable and preferred alternative for regular bachelor's-level students at the University of Jordan.


Keywords: PPT; CBT; comparability; gender difference; test preference





