PIB and VPIB Reliability




Legal and professional guidelines require that selection procedures be supported by thoroughly researched and documented, situation-specific reliability and validity evidence as a foundation for selection decisions.

Because the validity of a scale is always limited by its reliability, unreliable measurements hamper efforts to predict behaviour. The Bureau for Student Counseling at the Technikon Pretoria therefore undertook a study to determine the situation-specific reliability of the PIB indices used in assessing the potential of prospective students.


The assessment of scale reliability is based on the correlations between the individual items or measurements that make up the scale, relative to the variances of the items (Smit, 1991). Each measurement (a response to an item) reflects to some extent the true score for the intended concept, and to some extent random error (other aspects of the question or of the person). This can be expressed as an equation:

Actual measurement = True score + Random error

A measurement is thus reliable if it reflects mostly true score, relative to the error (StatSoft, 1994).

Measures of reliability

An index of reliability may be defined as the proportion of true-score variability, across subjects or respondents, relative to the total observed variability. In equation form:

Reliability = σ²(true score) / σ²(total observed) (Smit, 1991).
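As a quick numerical illustration of this variance ratio, the following sketch simulates observed scores as true score plus random error and estimates reliability as the proportion of true-score variance. All values (means, standard deviations, sample size) are invented for illustration only:

```python
import random
import statistics

# Hypothetical simulation: observed score = true score + random error.
random.seed(1)

true_scores = [random.gauss(50, 10) for _ in range(10_000)]  # sigma^2(true) ~ 100
observed = [t + random.gauss(0, 5) for t in true_scores]     # error variance ~ 25

# Reliability = sigma^2(true score) / sigma^2(total observed)
reliability = statistics.pvariance(true_scores) / statistics.pvariance(observed)
print(round(reliability, 2))  # close to 100 / (100 + 25) = 0.80
```

With these invented variances, roughly 80% of the observed variability is true-score variability.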

Sum Scales

If the error component in subjects' responses to each question is truly random, then the error components can be expected to cancel each other out across items. In more technical terms, the expected value (mean) of the error component across items is zero. The true-score component, however, remains the same when summing across items. Therefore, the more items are added, the more true score (relative to error) the sum scale will reflect (StatSoft, 1994).
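The gain in reliability from adding items can be sketched with the Spearman-Brown prophecy formula, which is not named in the text but formalises the same idea; the single-item reliability of 0.30 below is an invented example value:

```python
def spearman_brown(rel_one_item: float, k: int) -> float:
    """Projected reliability of a sum scale of k parallel items,
    given the reliability of a single item."""
    return k * rel_one_item / (1 + (k - 1) * rel_one_item)

# A single item with reliability 0.30 gains reliability as items are summed:
for k in (1, 5, 10, 20):
    print(k, round(spearman_brown(0.30, k), 2))
# 1 0.3
# 5 0.68
# 10 0.81
# 20 0.9
```

As the random errors cancel out across items, the sum scale reflects progressively more true score.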


The validity of an instrument can be determined by relating scores obtained on the instrument to performance on a relevant criterion. If the instrument correlates with the chosen criterion, confidence in the validity of the instrument increases.

How will validity be affected by less than perfect instrument reliability?

The random error portion of the scale is unlikely to correlate with the chosen criterion. Therefore, if the proportion of true score in a scale is only 50% (that is, the reliability is only 0.50), then the correlation between the scale and the criterion will be smaller than the actual correlation between the true scores (the correlation will be attenuated). Thus, the validity of a scale is always limited by its reliability (StatSoft, 1994).
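This attenuation follows the classical correction-for-attenuation relationship from test theory: the observed validity coefficient is the true correlation scaled down by the square root of the reliabilities involved. A minimal sketch with invented example values:

```python
import math

def attenuated_r(true_r: float, rel_scale: float, rel_criterion: float = 1.0) -> float:
    """Observed correlation after attenuation by unreliability
    (classical correction-for-attenuation relationship)."""
    return true_r * math.sqrt(rel_scale * rel_criterion)

# If the true correlation is 0.60 but the scale's reliability is only 0.50
# (criterion assumed perfectly reliable), the observed validity shrinks:
print(round(attenuated_r(0.60, 0.50), 2))  # 0.60 * sqrt(0.50) = 0.42
```

Even a strong true relationship is masked when the scale carries substantial random error.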


A data bank consisting of almost 9 500 records was analysed with the help of the SmartStats programme in order to determine the situation-specific internal-consistency reliability of the relevant PIB indices. It is important to remember that the Technikon Pretoria uses situation-specific evaluation batteries, so not all respondents are evaluated with all indices. The number of respondents thus differs from index to index.

The SmartStats programme uses the Cronbach's Alpha and Kuder-Richardson-20 methods to calculate the reliability coefficients of the relevant indices.


The following situation-specific reliability coefficients were obtained for the PIB indices utilized by the Bureau for Student Counseling at the Technikon Pretoria in the assessment of the academic potential of prospective students. Where the possible answers to an item consist of a range of response options, Cronbach's coefficient Alpha was computed; where the possible answers were dichotomous, the Kuder-Richardson-20 (KR20) formula was used. [If all items are perfectly reliable and measure the same thing (true score), the reliability coefficient equals 1.0.]
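A minimal sketch of how such a coefficient can be computed; the item data are invented for illustration, and for dichotomous (0/1) items the general Cronbach's Alpha formula reduces to KR20, since each item's variance is then p*(1-p):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's Alpha for a sum scale.

    items: list of item-score lists, one inner list per item,
    all scored over the same respondents.
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]       # sum scale per respondent
    item_var = sum(statistics.pvariance(it) for it in items)
    return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

# Invented dichotomous responses: 3 items, 6 respondents.
items = [
    [1, 1, 0, 1, 1, 0],
    [1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0],
]
print(round(cronbach_alpha(items), 2))  # 0.8
```

With dichotomous data like this, the result is exactly the KR20 coefficient reported for the cognitive indices below.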

Reliability coefficients for VPIB Indices

Index 4 - [Composition of Wholes]

N* = 1 350

Reliability coefficient [KR20] = 0.84

N* represents the size of the sample used in the calculation.

Index 6 & 7 - [Spatial Reasoning and Perception]

N = 1 564

Reliability coefficient [KR20] = 0.79

Reliability coefficients for PIB Indices

Index 2 - [Creativity]

N = 3 038

Reliability coefficient [Cronbach's Alpha] = 0.75

Index 3 - [Reading Comprehension]

N = 7 450

Reliability coefficient [KR20] = 0.83

Index 5 - [Mental Alertness]

N = 7 240

Reliability coefficient [KR20] = 0.87

Index 12 - [Vocabulary]

N = 1 340

Reliability coefficient [KR20] = 0.76

Index 9 - [Interpersonal Relations]

N = 2 978

Reliability coefficient [Cronbach's Alpha] = 0.82

Index 10 - [Self-Image]

N = 1 567

Reliability coefficient [Cronbach's Alpha] = 0.79

Index 15 - [Motivation]

N = 1 930

Reliability coefficient [Cronbach's Alpha] = 0.71

Index 18 - [Stress Management]

N = 1 585

Reliability coefficient [Cronbach's Alpha] = 0.92

Index 21 - [Assertiveness]

N = 1 950

Reliability coefficient [Cronbach's Alpha] = 0.88


Generally, a reliability coefficient of 0.6 for social and emotional indices, and of larger than 0.7 for cognitive indices, is regarded as an acceptable level of reliability in psychometric evaluation. The results above show that the situation-specific reliability coefficients of the relevant PIB indices, used by the Technikon Pretoria for potential-assessment purposes, fall within the acceptable range set for psychometric testing. Furthermore, exceptionally high reliability coefficients are shown by indices such as Composition of Wholes [0.84] (VPIB), Reading Comprehension [0.83], Mental Alertness [0.87], Interpersonal Relations [0.82], Stress Management [0.92] and Assertiveness [0.88] (PIB).


As it remains the responsibility of the instrument user to ensure that decisions based on the interpretation of evaluation results are fair and accountable, studies such as this have become invaluable in all assessment procedures. PIB enables its users to conduct on-site, situation-specific reliability studies and to base their selection (and other HR) decisions on empirically researched results.


Smit, G. J. (1991). Psigometrika. Pretoria: HAUM.

StatSoft. (1994). STATISTICA: User's manual, Volume III.
