JALT Testing & Evaluation SIG Newsletter
Vol. 6 No. 1. Feb. 2002. (p. 14 - 15) [ISSN 1881-5537]

Voices in the Field: An Interview with Gholam Reza Haji Pour Nezhad

by Tim Newfields

Photo of Gholam Reza Haji Pour Nezhad c. 2001
Gholam Reza Haji Pour Nezhad is head of the English department at the University of Social Welfare Sciences in Iran and an adjunct professor of English at the American University of Hawaii. He received his Ph.D. in TESOL from Tehran University in 2001. His research interests include applications of IRT to testing reading comprehension, the study of perceived complexity, and the use of structural equation modeling in complex designs. He was a panelist at the JALT2001 testing roundtable and was a presenter at the May 11-12, 2002 Conference on Language Testing in Asia. His publications are available online at www.geocities.com/g_hajipournezhad/.


Q: How did you become interested in testing?

A: I was an English teacher while earning my B.A. in psychology and wanted to find work that combined both studies. Clinical psychologists in Iran don't have many chances to use English. However, English teachers have many chances to use psychology. Furthermore, taking M.A.-level courses under Hossein Farhady inspired me to continue learning about this field.

Q: What language testing books and journals have you found particularly useful?

A: One of the first works I read was Hatch and Farhady's (1982) Research Design and Statistics for Applied Linguistics. This had a tremendous influence on me. Another guiding work was Bachman's (1990) Fundamental Considerations in Language Testing.
As for journals, though Language Testing is considered the most prestigious publication in our field, those with a specific interest in reading may find Reading Research Quarterly of equal or perhaps even greater value.

Q: What statistical software do you generally use?

A: In Iran, SPSS (Statistical Package for the Social Sciences) has been the most widely used statistical software package for a long time. I started out with this program, but soon discovered it was not a panacea for all my research needs. I have found FACETS particularly useful for IRT modeling, and Easystat (www.uni-koeln/themen/Statistik/easystat/index.e.html) very useful when conducting ANCOVA (Analysis of Covariance). The Survey System (www.surveysystem.com/) has also been extremely helpful in dealing with complex surveys. As you can guess, I am eclectic about statistical software. A good website with more information on this topic is www.execpc.com/~helberg/statistics.htm [Expired Link].
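[Editor's note: For readers unfamiliar with IRT, the sketch below illustrates the kind of quantity such software estimates. It is a generic one-parameter (Rasch) model, not Haji Pour Nezhad's own procedure or the algorithm FACETS implements; the function name is illustrative only.]

```python
import math

def rasch_probability(ability, difficulty):
    """Probability of a correct response under the one-parameter
    (Rasch) IRT model: P = 1 / (1 + exp(-(theta - b))),
    where theta is examinee ability and b is item difficulty,
    both on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# An examinee whose ability matches the item's difficulty has a
# 50% chance of a correct response; higher ability raises the odds.
print(rasch_probability(0.0, 0.0))  # 0.5
print(round(rasch_probability(2.0, 0.0), 2))  # 0.88
```

Programs such as FACETS fit models of this family to response data, estimating the ability and difficulty parameters rather than assuming them.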

Q: What sort of tests are used in Iran to screen university applicants? Is there just one unified national test or does each university have its own entrance exam?

A: The Iranian National University Entrance Examination is called 'Konkoor,' which comes from the word 'conquer' in English. This exam has an English section which normally consists of a battery of reading comprehension subtests. In 1990, the Supreme Council of Education of the Ministry of Science, Research, and Technology announced that reading comprehension was to be considered the most important skill in Iranian universities. This is why the English section of Konkoor focuses on reading. The test usually contains 40 items. Each year, however, to reduce the coachability of the test, its format and content are slightly changed. This is why the English section of Konkoor is one of the most discriminating parts of the exam.

Q: Recently on the LTEST-L electronic forum you mentioned that there were many unanswered concerns about test ethics. Could you elaborate further?

A: We still have many unresolved issues about ethicality. Punch (1994) casts doubt on the ethicality of the whole field of language testing by raising "consent, deception, privacy, and confidentiality" issues. Garrison (1995) also sheds further light on the discrimination (which is, most of the time, the ultimate goal in language testing) and inequality of language tests. To give a straightforward example of an unfair testing practice, research shows that males generally outperform females on multiple-choice items (Davies et al., 1999). Nevertheless, many tests still consist solely of multiple-choice items. And concerning the deception issue, Lynch (1996) remarks, "Internal to some tests there may be a deception. In the case of the ubiquitous multiple-choice test format, are not distractors (wrong answers) deceptive by definition?" (p. 3).

[ p. 14 ]

What these points show is that we still have a good number of unresolved issues, which invite us to consider whether defining fairness in this area is possible at all. It seems we have to improve our definitions of research in testing, as well as our conceptualization of a code of ethics. However, is there a way out of such ethical dilemmas without being equipped with an impeccable code of practice in language testing? I think the best solution is presented by Hamp-Lyons and Prochnow's (1989) guideline: "no test-taker shall be harmed by the test" (p. 13). I believe that if we base our concepts of ethicality on this very simple principle in every testing situation and also base decisions on pooled judgment (to avoid individual subjective judgments), we will have paved most of the way toward ethical testing.

Q: Your M.A. thesis described the role of metamemory in EFL reading comprehension. Could you briefly describe what metamemory is and how it relates to reading?

A: Metamemory (MM) is one's own knowledge or awareness of one's memory system and functioning. Flavell (1971) has been credited with introducing this term. The main premise underlying MM is that the awareness of one's own memory processes contributes to performance in different tasks. In this conceptualization, it is not short-term or long-term memory stores which determine performance on a mental task such as reading, but rather the knowledge of how your brain approaches memory-related tasks. So MM-based instructional approaches aim to add to the learner's self-awareness of memory and hence to expand the learner's capacity to handle textual clues to decode and to interpret the printed text. I invite readers to read an article I published on this topic in a recent issue of the Academic Exchange Quarterly.

Q: You'll be giving a presentation about language testing judgments at the upcoming language testing conference in Kyoto, Japan. Could you mention the key points you wish to make during that presentation?

A: The main assumption of the presentation is that every stage of language test development is under the influence of subjective judgments and decisions. I will discuss these in detail, and also review three main solutions which the testing profession has used to try to overcome subjectivity so far. I will then argue why none of the attempts have been very productive, and finally propose a new approach to possibly overcome the effects of subjectivity in test development.

References

Bachman, L. (1990). Fundamental Considerations in Language Testing. Oxford: Oxford University Press.

Davies et al. (1999). Studies in Language Testing: Dictionary of Language Testing. Cambridge: Cambridge University Press.

Flavell, J. H. (1971). First discussant's comments: What is memory development the development of? Human Development, 14, 272-278.

Hamp-Lyons, L. and S. Prochnow. (1989, March). Person dimensionality, person ability, and item difficulty in writing. In J. Upshur (Chair), Eleventh Annual Language Testing Research Colloquium. San Antonio, Texas, USA.

Haji Pour Nezhad, G. (2001). The Impact of Metamemory on Reading Performance. Academic Exchange Quarterly, 5 (3), 88 - 93.

Hatch, E. and Farhady, H. (1982). Research Design and Statistics for Applied Linguistics. Rowley, MA: Newbury House.

Lynch, B. (1996, August). To Test or Not to Test? The Morality of Assessment. In C. Candlin (Chair), International Association for Applied Linguistics Conference, Jyväskylä, Finland.

[ p. 15 ]
