TEST BIAS IN AN ACADEMIC ENVIRONMENT by Helena Kriel Department of Student Counselling Technikon Pretoria, 1999.
Introduction
The Employment Equity Act prohibits the psychometric testing of an employee (or prospective employee) unless the test has been validated and can be applied fairly and without bias to persons from all culture groups. The debate over the terms "fairness" and "bias" has been going on for some time, and the difference between the two concepts is not always clearly understood.
According to Murphy and Davidshofer (1994: 275), fairness refers to a value judgement regarding decisions or actions taken as a result of test scores. Bias, on the other hand, is a statistical characteristic of the test scores, or of the predictions based upon these scores. Thus, bias is said to exist when a test makes systematic errors in measurement or prediction (Murphy and Davidshofer, 1994: 275).
Brown (1983: 224) defines bias as follows: “A test can be considered biased if it differentiates between members of various groups on bases other than the characteristic being measured. That is, a test is biased if its content, procedures, or use result in a systematic advantage (or disadvantage) to members of certain groups and if the basis of this differentiation is irrelevant to the test purpose.”
According to Brown, it is important to realize that this definition does not imply that a test is biased merely because members of different groups perform differently on it. When groups perform differently on a test and the scores reflect real differences in the trait being measured, the test is not biased. However, Brown concludes that as soon as a difference in means and/or distributions between groups is found, the possibility of test bias should be investigated.
Theorists such as Brown (1983: 227), Welsh and Betz (1985: 379) and Murphy and Davidshofer (1994: 286) distinguish between different kinds of test bias:
Content bias occurs when the content of test items gives a systematic advantage to a specific group of testees, for example when the test contains questions with which one group is more familiar than another. Content bias can also be found in item format and presentation, for instance when pictorial material depicts only white males and never females or blacks.
Murphy and Davidshofer (1994: 286) refer to this type of bias as cultural bias, arising where one group of testees has had the opportunity to become familiar with the test content while another has not. The example they use is the application of highly academic test items to disadvantaged groups, for whom schooling might be a relatively distant phenomenon. Verbal items are, furthermore, more likely to be biased than nonverbal ones, the argument being that verbal items are likely to be presented in a standard language (for example, English) that more closely resembles the language of the middle classes.
Internal structure bias occurs when the internal or factor structure of a test, and/or the behaviour of items in relation to one another, differs across culture groups. This would imply that the test measures different things in different groups.
Atmosphere bias refers to the effects of the testing conditions on test takers' performance, for example the type of motivation elicited, factors related to tester-testee interaction, and factors in the evaluation and scoring of responses.
Prediction/selection bias arises when a test has different predictive validities across groups. Selection bias is examined by comparing the regression lines obtained for the different groups.
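The comparison of regression lines used to examine selection bias can be sketched as follows. This is a minimal illustration with simulated data, not part of the original study; the variable names, group sizes and score distributions are all assumptions made for the example.

```python
import numpy as np

def fit_line(scores, criterion):
    """Least-squares regression of the criterion (e.g. academic
    performance) on test scores. Returns (slope, intercept)."""
    slope, intercept = np.polyfit(scores, criterion, 1)
    return slope, intercept

# Hypothetical data: test scores and a criterion measure for two
# language groups, generated from the same underlying relationship.
rng = np.random.default_rng(0)
scores_a = rng.uniform(40, 90, 100)
marks_a = 0.8 * scores_a + 10 + rng.normal(0, 5, 100)
scores_b = rng.uniform(40, 90, 100)
marks_b = 0.8 * scores_b + 10 + rng.normal(0, 5, 100)

slope_a, int_a = fit_line(scores_a, marks_a)
slope_b, int_b = fit_line(scores_b, marks_b)

# If the two lines differ markedly in slope or intercept, the test
# predicts the criterion differently for the two groups -- evidence
# of prediction/selection bias.
print(f"Group A: criterion = {slope_a:.2f}*score + {int_a:.2f}")
print(f"Group B: criterion = {slope_b:.2f}*score + {int_b:.2f}")
```

In practice the fitted lines would be compared with a formal statistical test of equality of slopes and intercepts rather than by inspection alone.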
Aim of the study
In this study attention was given specifically to content bias in the Potential Index Batteries used for selection purposes by the Technikon Pretoria. The aim of the study was to examine whether the items of the relevant indices gave a systematic advantage to a specific group of testees.
Method
The groups used for this study were selected on the basis of the home language supplied by the testee at the time of testing. A data bank of 6 190 records was analysed. Item bias was calculated by means of Leaderware's SmartStats programme.
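The internal workings of the SmartStats programme are not documented here. As a hedged illustration only, a crude content-bias screen of the kind described can be sketched by comparing classical item difficulties (proportion correct) between two language groups; the function names, threshold and response data below are all hypothetical. Operational methods (e.g. Mantel-Haenszel) additionally match testees on total score before comparing groups.

```python
import numpy as np

def item_pvalues(responses):
    """Proportion correct per item (classical item difficulty).
    `responses` is a 0/1 matrix of shape (testees, items)."""
    return responses.mean(axis=0)

def flag_biased_items(group1, group2, threshold=0.15):
    """Flag items whose difficulty differs between the groups by more
    than `threshold` -- a crude screen for possible content bias."""
    diff = item_pvalues(group1) - item_pvalues(group2)
    return [(i, round(float(d), 3)) for i, d in enumerate(diff)
            if abs(d) > threshold]

# Hypothetical 0/1 response matrices (testees x items) for two groups.
rng = np.random.default_rng(1)
g1 = (rng.random((500, 10)) < 0.7).astype(int)
g2 = (rng.random((500, 10)) < 0.7).astype(int)
g2[:, 3] = (rng.random(500) < 0.4).astype(int)  # item 3 made much harder for group 2

print(flag_biased_items(g1, g2))  # item 3 should be flagged
```

A flagged item is only a candidate for bias: the difference must still be judged against the trait being measured, as Brown's definition requires.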
Results
The following results were found in respect of the indices used by the Technikon Pretoria:
Table 1: Results of Item Bias Analysis Conducted on Index 1 [General Knowledge]
N=1 160
Some items from Index 1 showed definite bias between the Afrikaans- and African-language groups. The items that showed bias were item 8 (biased against group 1) and items 12, 1, 17, 20 and 23 (all biased against group 2).
Table 2: The Results of an Item Bias Analysis Conducted on Index 3 [Reading Comprehension]
N=6 162
Table 3: The Results of an Item Bias Analysis Conducted on Index 5 [Mental Alertness]
N=6 190
Table 4: The Results of an Item Bias Analysis Conducted on Index 12 [Vocabulary]
N=1 658
Conclusion
The comprehensive statistical data on the question of content bias clearly justify the use of the Potential Index Batteries at the Technikon Pretoria. The results of this study indicate that the Potential Index Batteries adhere to the guidelines set by the new Employment Equity Act. The PIB's situation-specific validity and reliability have also been repeatedly demonstrated over the past three to four years by a substantial number of studies, executed inter alia by the University of Pretoria and by the present author in her capacity as researcher in the Department of Student Counselling of the Technikon Pretoria.