Aptitude


Students approach new educational tasks with a repertoire of knowledge, skills, attitudes, values, motivations, and other propensities developed through life experiences to date. The school experience may be conceptualized as a series of situations that sometimes demand, sometimes evoke, or sometimes merely afford the use of these characteristics. Of the many characteristics that influence a person's behavior, only a small set aid goal attainment in a particular situation. These are called aptitudes. Specifically, aptitude refers to the degree of readiness to learn and to perform well in a particular situation or fixed domain (Corno, Cronbach, Kupermintz, Lohman, Mandinach, Porteus, & Talbert, 2002). Thus, of the many characteristics that individuals bring to a situation, the few that assist them in performing well in that situation function as aptitudes. Examples include the ability to comprehend instructions, to manage one's time, to use previously acquired knowledge appropriately, to make good inferences and generalizations, and to manage one's emotions. Aptitudes for learning thus go beyond cognitive abilities. Aspects of personality and motivation commonly function as aptitudes as well.

Prior achievement is commonly an important aptitude for future achievement. Whether prior achievement functions as aptitude in a particular situation depends both on the person's propensity to use prior knowledge in new situations and on the demand and opportunity structure of the situation. Therefore, understanding which characteristics of individuals are likely to function as aptitudes begins with a careful examination of the demands and affordances of the target environment. In fact, defining the situation is part of defining the aptitude (Snow & Lohman, 1984). The affordances of an environment are what it offers or makes likely or makes useful. Placing chairs in a circle affords discussion; placing them in rows affords attending to someone at the front of the room. Discovery learning affords the use of reasoning abilities; direct instruction often does not.

The second step is to identify those characteristics (or propensities) of individuals that are coupled with task or situation affordances. The most important requirement of most academic tasks is domain knowledge and skill (Glaser, 1992). Measures of prior knowledge and skill are therefore usually the best predictors of success in academic environments, especially when new learning depends heavily on old learning. Although ample data confirm this assertion, a simple thought experiment will illustrate it. Consider the likelihood that a second-grader with high scores on a nonverbal reasoning test but no background in mathematics will succeed in learning arithmetic. Now consider the likelihood that a graduate student with equally high scores on a nonverbal reasoning test but no background in mathematics will succeed in a graduate-level mathematics class. Measures of current knowledge and skill include on-grade-level and above-grade-level achievement tests and well-validated performance assessments, such as rankings in debate contests, art exhibitions, and science fairs. Performance assessments that supplement achievement tests offer the most new information if they require the production of multiple essays, speeches, drawings, or science experiments, rather than evaluation of essays, speeches, drawings, or science experiments produced by others (Rodriguez, 2003).

The second most important learner characteristic for academic learning is the ability to go beyond the information given: to make inferences and deductions, and to see patterns, rules, and instances of the familiar in the unfamiliar. The ability to reason well in the symbol system(s) used to communicate new knowledge is critical for success in learning. Academic learning relies heavily on reasoning (a) with words and about the concepts they signify and (b) with quantitative symbols and the concepts they signify. Thus, the critical reasoning abilities for all students (minority and majority) are verbal and quantitative. Figural reasoning abilities are less important and thus show lower correlations with school achievement.

The Relative Importance of Prior Achievement and Reasoning Abilities in the Prediction of Future Achievement

Evidence for these claims about the relative importance of prior knowledge and skill versus the ability to reason in different symbol systems is shown in Tables 2 and 3. Students in a large midwestern school district were retested with different levels of CogAT and the ITBS in grades 4, 6, and 9. The data in Tables 2 and 3 are for the 2,789 students who had scores for both the grade 4 and grade 9 test administrations and the 4,811 students who had scores

Table 2
Prediction of Grade 9 Reading Achievement from Grade 4 (N = 2,789) or Grade 6 (N = 4,811) Achievement and Ability Scores

                                     Grade 4                            Grade 6
Predictor                  r        b        β       p         r        b        β       p
Constant                        36.454            0.000*            28.948            0.000*
ITBS Reading (R)        0.732    0.556    0.359   0.000*    0.797    0.558    0.437   0.000*
ITBS Language (L)       0.642    0.010    0.007   0.148     0.684    0.059    0.055   0.005*
ITBS Mathematics (M)    0.635    0.196    0.126   0.000*    0.657    0.046    0.037   0.078
CogAT Verbal (V)        0.741    0.617    0.288   0.000*    0.796    0.722    0.321   0.000*
CogAT Quantitative (Q)  0.595    0.066    0.027   0.912     0.643   -0.006   -0.003   0.901
CogAT Nonverbal (N)     0.576    0.128    0.053   0.000*    0.602    0.114    0.048   0.005
Sex (S)^a               0.075   -5.157   -0.073   0.038     0.054    1.904    0.027   0.719
Interactions with Sex
  R × S                         -0.025   -0.071   0.694             -0.075   -0.246   0.047*
  L × S                          0.065    0.184   0.294             -0.020   -0.066   0.507
  M × S                         -0.062   -0.173   0.330              0.019    0.060   0.613
  V × S                          0.053    0.078   0.615              0.165    0.244   0.031*
  Q × S                         -0.131   -0.191   0.206              0.082    0.123   0.240
  N × S                          0.187    0.282   0.031*            -0.072   -0.111   0.233

Note. R² = .621 for Grade 4; R² = .702 for Grade 6. r = Pearson product-moment correlation; b = unstandardized regression coefficient; β = standardized regression coefficient; p = probability. Achievement scores are from levels 10 (grade 4), 12 (grade 6), and 14 (grade 9) of the ITBS Survey Battery, Form K (Hoover, Hieronymous, Frisbie, & Dunbar, 1993). Ability scores are from levels B (grade 4) and D (grade 6) of CogAT, Form 5 (Thorndike & Hagen, 1993).
^a Girl = 1.
* p < .05.

for both the grade 6 and the grade 9 testings. The dependent variable is grade 9 Reading Scale Score in Table 2 and is grade 9 Mathematics Scale Score in Table 3. The critical question here is whether prior achievement and prior ability both contribute to the prediction of grade 9 achievement. This was addressed in a series of multiple regression analyses. The independent variables were entered in two blocks: Block 1 contained the prior achievement test scores for Reading, Language, and Mathematics; CogAT scores for Verbal, Quantitative, and Nonverbal reasoning; and sex. Block 2 contained the interactions between each of the six test scores and sex. These interaction terms test whether the prediction equations differed significantly for males and females. Look first at the prediction of grade 9 Reading achievement in Table 2. The first column shows correlations between the grade 4 achievement and ability test scores and the grade 9 Reading scale score. CogAT Verbal Reasoning had the highest correlation (r = .741) followed by grade 4 Reading achievement (r = .732).
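The two-block analysis described above can be sketched with ordinary least squares. The sketch below is a toy reconstruction: the scores, sample size, and coefficients are all simulated, and only the block structure (main effects first, score-by-sex interactions second) mirrors the analysis in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-ins for the six grade 4 scores (R, L, M, V, Q, N)
# and a 0/1 sex indicator (girl = 1). Values are invented for illustration.
scores = rng.normal(size=(n, 6))
sex = rng.integers(0, 2, size=n)

# Outcome loosely patterned on the result: grade 9 reading driven mainly
# by prior reading (column 0) and verbal reasoning (column 3).
y = 0.5 * scores[:, 0] + 0.4 * scores[:, 3] + rng.normal(scale=0.6, size=n)

def fit_r2(X, y):
    """Least-squares fit with an intercept; returns R-square."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

# Block 1: the six test scores plus sex.
block1 = np.column_stack([scores, sex])
r2_block1 = fit_r2(block1, y)

# Block 2 adds the six score-by-sex interaction terms.
block2 = np.column_stack([block1, scores * sex[:, None]])
r2_block2 = fit_r2(block2, y)
```

Because no interactions were built into the simulated outcome, the Block 2 increment here is negligible; in the real analyses, a significant increment is what signals that the equations differ for boys and girls.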

The regression analysis, however, shows that grade 4 Reading achievement was the relatively stronger predictor of grade 9 Reading achievement when all scores were considered simultaneously (β = .359 versus β = .288). The right side of Table 2 shows that grade 6 Reading achievement was also the best predictor of grade 9 Reading achievement. However, both Reading achievement and CogAT Verbal interacted with sex in the grade 6 prediction equation. The within-sex regressions showed that, for boys, grade 6 Reading achievement was a slightly better predictor than CogAT Verbal, whereas for girls the situation was reversed.

Although one can compute the unstandardized regression coefficients for within-sex regressions from the b weights reported in Tables 2 and 3, standardized regression coefficients (β's) required additional, within-sex analyses. These analyses are not reported here.
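The conversion this note relies on is straightforward: a standardized coefficient is the unstandardized weight rescaled by the ratio of predictor to outcome standard deviations (β = b × SDx / SDy), which is why within-sex β's require the within-sex standard deviations. A minimal check on simulated data (the numbers below are arbitrary, not the within-sex weights from the tables):

```python
import numpy as np

# Simulate a predictor scale score and a noisy outcome.
rng = np.random.default_rng(1)
x = rng.normal(100, 28.0, size=5000)            # predictor
y = 0.558 * x + rng.normal(0, 10.0, size=5000)  # outcome

b = np.polyfit(x, y, 1)[0]                      # unstandardized slope
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()
beta = np.polyfit(zx, zy, 1)[0]                 # standardized slope

# The identity beta = b * sd_x / sd_y holds up to floating-point error.
```

For a single predictor the standardized slope equals the correlation; with several correlated predictors, as in the tables, the identity still holds coefficient by coefficient.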

Table 3
Prediction of Grade 9 Mathematics from Grade 4 (N = 2,789) or Grade 6 (N = 4,811) Achievement and Ability Scores

                                     Grade 4                            Grade 6
Predictor                  r        b        β       p         r        b        β       p
Constant                        42.692            0.000*            39.259            0.000*
ITBS Reading (R)        0.585    0.206    0.126   0.000*    0.664    0.207    0.155   0.000*
ITBS Language (L)       0.574    0.024    0.014   0.627     0.662    0.019    0.017   0.428
ITBS Mathematics (M)    0.683    0.377    0.231   0.000*    0.743    0.270    0.209   0.000*
CogAT Verbal (V)        0.665    0.255    0.113   0.002*    0.712    0.308    0.131   0.000*
CogAT Quantitative (Q)  0.672    0.504    0.194   0.000*    0.746    0.619    0.262   0.000*
CogAT Nonverbal (N)     0.637    0.452    0.178   0.000*    0.670    0.309    0.124   0.000*
Sex (S)^a              -0.099   -1.234   -0.098   0.467    -0.096   -6.858   -0.092   0.252
Interactions with Sex
  R × S                         -0.111   -0.302   0.116             -0.109   -0.343   0.011*
  L × S                         -0.003    0.069   0.963             -0.004   -0.013   0.906
  M × S                          0.018    0.048   0.800              0.092    0.280   0.028*
  V × S                          0.172    0.241   0.141              0.100    0.141   0.246
  Q × S                         -0.073   -0.101   0.525             -0.035   -0.050   0.658
  N × S                          0.092    0.132   0.338              0.003    0.004   0.969

Note. R² = .578 for Grade 4; R² = .654 for Grade 6. r = Pearson product-moment correlation; b = unstandardized regression coefficient; β = standardized regression coefficient; p = probability. Achievement scores are from levels 10 (grade 4), 12 (grade 6), and 14 (grade 9) of the ITBS Survey Battery, Form K (Hoover, Hieronymous, Frisbie, & Dunbar, 1993). Ability scores are from levels B (grade 4) and D (grade 6) of CogAT, Form 5 (Thorndike & Hagen, 1993).
^a Girl = 1.
* p < .05.

Table 3 shows a similar pattern of results for the prediction of grade 9 Mathematics from grade 4 achievement and ability scores (left panel) and grade 6 achievement and ability scores (right panel). At grade 4, Mathematics achievement was the strongest contributor to the prediction (β = .231), whereas at grade 6, CogAT Quantitative reasoning predominated (β = .262). Once again, both of these variables interacted with sex. This time grade 6 Mathematics achievement had the largest beta weight for girls, whereas for boys, CogAT Quantitative was largest. Correlations among the independent variables make these sorts of comparisons among beta weights suggestive rather than definitive (Pedhazur, 1982).

Nonetheless, it is clear that the two most important predictors of future reading (or math) achievement are current reading (or math) achievement and verbal (or quantitative) reasoning ability. The fact that prior achievement and reasoning ability are the two largest contributors to the prediction of grade 9 achievement runs counter to assertions that verbal (or quantitative) reasoning tests measure the same thing as verbal (or math) achievement tests. Rather, these results support the claim that the two most important aptitudes for academic learning are current achievement in the domain and the ability to reason in the symbol system(s) used to communicate new knowledge in that domain.

Therefore, if the goal is to identify those students who are most likely to show high levels of future achievement, both current achievement and domain-specific reasoning abilities need to be considered. Our data suggest that the two should be weighted approximately equally, although the relative importance of prior achievement and abstract reasoning will depend on the demands and affordances of the instructional environment and on the age and experience of the learner. In general, prior achievement is more important when new learning is like the learning sampled on the achievement test. This is commonly the case when the interval between old and new learning is short. With longer time intervals between testings or when content changes abruptly (as from

Table 4
Prediction of ITBS Form A (grades 1-8) or ITED Form A (grades 9-12) Reading Total Scale Score from CogAT Form 6 Verbal (V), Quantitative (Q), and Nonverbal (NV) Reasoning Standard Age Scores for All Students and for Hispanic Students, by Grade

                          All Students                                     Hispanic Students
        Correlation                   β                      Correlation                   β
Grade    V    Q   NV       N     V     Q     NV   Mult. R     V    Q   NV      N     V     Q     NV   Mult. R
1      .56  .60  .52  11,424  .216  .311  .193      .64     .59  .61  .46  1,051  .271  .314  .139      .64
2      .69  .66  .55  11,882  .432  .229  .154      .73     .65  .62  .47  1,093  .424  .237  .114      .69
3      .80  .66  .62  13,678  .669  .138  .052      .81     .80  .63  .58  1,382  .714  .121  -.009     .80
4      .80  .66  .62  13,630  .660  .137  .066      .81     .80  .60  .56  1,462  .726  .070  .043      .81
5      .81  .66  .62  13,935  .696  .107  .057      .82     .79  .62  .55  1,423  .720  .061  .046      .80
6      .83  .67  .62  13,811  .710  .120  .049      .84     .83  .64  .55  1,199  .759  .122  -.019     .84
7      .84  .69  .63  13,164  .714  .150  .023      .85     .81  .65  .55  1,129  .709  .155  -.008     .82
8      .84  .67  .63  11,178  .730  .115  .041      .85     .81  .62  .57  1,095  .699  .089  .079      .82
9      .82  .67  .63   8,112  .693  .126  .053      .83     .79  .60  .50    779  .727  .100  -.005     .80
10     .82  .66  .60   6,083  .706  .128  .042      .83     .83  .84  .64    453  .760  .115  .004      .85
11     .79  .62  .57   5,078  .686  .133  .021      .80     .80  .59  .53    382  .688  .081  .107      .81
12     .78  .60  .59   4,106  .670  .098  .072      .79     .78  .66  .56    252  .602  .234  .036      .80
Mean   .78  .65  .60  10,507  .632  .149  .069      .80     .77  .62  .53    975  .650  .142  .044      .79

Note. Correlations are with Reading Total. β = standardized regression coefficient. Reading achievement scores are from the ITBS, Form A (Hoover et al., 2001) at grades 1-8 and from the ITED, Form A (Forsyth, Ansley, Feldt, & Alnot, 2001) at grades 9-12. CogAT scores are from Form 6 (Lohman & Hagen, 2001a).

arithmetic to algebra), then reasoning abilities become more important. Novices typically rely more on knowledge-lean reasoning abilities than do domain experts. Because children are universal novices, reasoning abilities are therefore more important in the identification of academic giftedness in children, whereas evidence of domain-specific accomplishments is relatively more important for adolescents.

The Prediction of Achievement for Minority Students

Are the predictors of academic achievement the same for majority and minority students? And even if they are the same, should they be weighted the same? For example, are nonverbal reasoning abilities more predictive of achievement for minority students than for majority students? Is the ability to reason with English words less predictive of achievement for Hispanic or Asian-American students than for White students?

We have examined this question in some detail. Our analyses, which concur with those of other investigators (e.g., Keith, 1999), are unequivocal: The predictors of achievement in Reading, Mathematics, Social Studies, and Science are the same for White, Black, Hispanic, and Asian-American students. Table 4 shows an example for predicting Reading Total on the ITBS in grades 1-8, and on the Iowa Tests of Educational Development (ITED) in grades 9-12. The predictors were CogAT Verbal, Quantitative, and Nonverbal SAS scores. Grades 1 and 2 used the CogAT Primary Battery, whereas grades 3-12 used one of the eight levels of the CogAT Multilevel Battery. The left half of the table reports the analyses for all students in the sample; the right half of the table shows the same analyses for the Hispanic students. Each row first reports the raw correlations between the three CogAT SAS scores and Reading Total. Next, the results of a multiple regression in which Reading Total was predicted from the three CogAT scores are reported. Entries in this portion of the table are the standardized regression coefficients (beta weights) and the multiple correlation.

Look first at the last row in the table, which reports the average entry in each column. The average correlations between the three CogAT scores and Reading achievement were .78, .65, and .60 for Verbal, Quantitative, and Nonverbal Reasoning. The multiple correlation between all three tests and reading achievement was .80, which is substantially higher than the correlation of .60 observed for the Nonverbal Battery. The good news, then, is that the score on the Nonverbal Battery predicts reading achievement. The bad news is that the prediction is relatively poor when compared to the multiple correlation across all three batteries. For Hispanics, the correlations with Reading achievement were .77, .62, and .53 for Verbal, Quantitative, and Nonverbal Reasoning. If anything, then, the nonverbal score was less predictive of Reading achievement for Hispanics than for Whites.
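For readers who want to see where a multiple correlation like this comes from, it can be computed from the predictor-criterion correlations together with the predictor intercorrelations. In the sketch below, the validities are the column means from Table 4 (.78, .65, .60); the intercorrelations among the three CogAT batteries are assumed for illustration only (they are not reported in the table):

```python
import numpy as np

# Multiple R from correlations: solve S w = c, then R^2 = c' w.
c = np.array([0.78, 0.65, 0.60])      # V, Q, NV correlations with Reading
S = np.array([[1.00, 0.72, 0.65],     # assumed battery intercorrelations
              [0.72, 1.00, 0.68],
              [0.65, 0.68, 1.00]])

w = np.linalg.solve(S, c)             # standardized regression weights
R = np.sqrt(c @ w)                    # multiple correlation
```

With these assumed intercorrelations, R comes out near .79 and the nonverbal battery receives by far the smallest weight, consistent with the pattern described in the text.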

The standardized regression coefficients mirrored this pattern: Verbal reasoning ability was the best predictor of reading achievement for Hispanic students (β = .650); nonverbal reasoning was the worst (β = .044). Indeed, sometimes the Nonverbal score had a negative weight. The unique contributions of verbal, quantitative, and nonverbal reasoning abilities to the prediction of achievement were also examined in a separate set of regression analyses. The question was this: What does each of these three reasoning abilities add to the prediction of reading or math achievement once the other two abilities are taken into account? The critical statistic is the increment in R-square that is observed when the third predictor is added to the regression. The results for Reading Total are shown in the left half of Table 5, and for Mathematics Total in the right half.

For Reading achievement, the Verbal Reasoning score added enormously to the prediction even after the quantitative and nonverbal scores were entered into the equation. The median increment in R2 for the Multilevel Battery (grades 3-12) was 0.22. When entered last, Quantitative Reasoning added a small, barely significant increment of .006 to the R-square. Finally, when entered last, Nonverbal Reasoning made a significant contribution only for the orally administered Primary Battery (grades 1 and 2). For grades 3-12, nonverbal reasoning contributed only .001 to the R2. A similar but less lopsided set of results were obtained for the prediction of mathematics achievement. As expected, Quantitative Reasoning made the largest unique contribution. The median increment in R2 was .091 at grades 3-12. The corresponding increments in R2 for Verbal Reasoning and Nonverbal Reasoning were .026 and .008, respectively.
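The increment-in-R-square logic can be made concrete: fit the model without the focal predictor, fit it again with that predictor added last, and difference the two R-squares. The data below are simulated (a single general factor plus a verbal-dominated reading outcome, with invented loadings), so only the procedure, not the numbers, matches Table 5.

```python
import numpy as np

def r2(X, y):
    """R-square for a least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(2)
n = 2000
g = rng.normal(size=n)                    # shared general factor
v = 0.8 * g + 0.6 * rng.normal(size=n)    # verbal score
q = 0.7 * g + 0.7 * rng.normal(size=n)    # quantitative score
nv = 0.7 * g + 0.7 * rng.normal(size=n)   # nonverbal score
reading = 0.7 * v + 0.1 * q + rng.normal(scale=0.5, size=n)

# Increment in R-square when verbal is entered last.
base = r2(np.column_stack([q, nv]), reading)
full = r2(np.column_stack([q, nv, v]), reading)
delta_v = full - base
```

Even though all three predictors correlate with the outcome through the shared factor, the increment isolates what verbal reasoning adds beyond the other two, which is the quantity tabled in Table 5.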

Therefore, whether judged by the relative magnitudes of the beta weights (Table 4) or the increment in R-square when entered last (Table 5), the nonverbal score was clearly much less important than the verbal and quantitative scores for the prediction of achievement. Further, at least for Reading Achievement, the regression equations were essentially the same for Hispanics as for the total sample. But do these results generalize to other ethnic groups and other domains of academic achievement?

Table 5
Increment in R-Square Observed When CogAT Verbal, Quantitative, or Nonverbal Scores Are Added Last to the Prediction of Reading Achievement (Left Panel) or Mathematics Achievement (Right Panel)

                    Reading Total                        Mathematics Total
Grade     Verbal  Quantitative  Nonverbal      Verbal  Quantitative  Nonverbal

CogAT Form 6 Primary Battery
1          0.019      0.032        0.021        0.018      0.085        0.018
2          0.079      0.018        0.013        0.007      0.093        0.022

CogAT Form 6 Multilevel Battery
3          0.191      0.007        0.001        0.033      0.071        0.009
4          0.191      0.007        0.002        0.023      0.077        0.011
5          0.206      0.004        0.001        0.022      0.080        0.011
6          0.225      0.005        0.001        0.023      0.091        0.009
7          0.219      0.008        0.000        0.017      0.094        0.008
8          0.234      0.004        0.001        0.024      0.090        0.007
9          0.208      0.005        0.001        0.031      0.085        0.005
10         0.234      0.006        0.001        0.031      0.095        0.008
11         0.233      0.007        0.000        0.035      0.093        0.006
12         0.221      0.004        0.002        0.028      0.097        0.008

Note. Reading (or Mathematics) Total is from Form A of the ITBS (Hoover et al., 2001) at grades 1-8 and Form A of the ITED (Forsyth et al., 2001) at grades 9-12. CogAT scores are from Form 6 (Lohman & Hagen, 2001a). Sample sizes at each grade are reported in column 5 of Table 4.

This question was addressed in a final set of regression analyses that included only Whites and Hispanics, or Whites and Blacks, or Whites and Asian Americans. As in Table 3, a variable for ethnicity was coded (e.g., 0 = White, 1 = Hispanic), and then interactions between each of the three CogAT scores and ethnicity were computed. The interaction terms test the hypothesis that the regression weights for one or more of the CogAT scores differ for the ethnic groups being compared. The median increment in R-square observed when all three interaction terms were added to the model was 0.0% of the variance for the White-Hispanic analyses, 0.1% for the White-Black analyses, and 0.0% for the White-Asian analyses. In other words, the regression equations that best predict Reading, Mathematics, Social Studies, and Science achievement from grades 1 to 12 for Hispanic, Black, and Asian-American students are the same as the regression equations that best predict the performance of White students in each of these domains at each grade.

Verbal Reasoning Abilities of ELL Students

Although the predictors of achievement are the same for White, Hispanic, Black, and Asian-American students, some would argue that tests that make any demands on students' language abilities are unfair to those who do not speak English as a first language or who speak a dialect of English not spoken by most test takers. Put differently, how can we estimate the verbal (and, to a lesser extent, quantitative) reasoning abilities of students who speak English as a second language?

Although quantitative reasoning tests often make few language demands, verbal reasoning tests typically measure the ability to reason using the dialect of the English language commonly used in formal schooling. Even though this is not the same as the ability to reason in another language, verbal abilities in any language seem to depend on a common set of cognitive processes. This is shown most clearly in studies of bilingual students. Such studies show that the predictors of achievement are largely the same not only across ethnic groups, but also within a bilingual population across different languages (Gustafsson & Balke, 1993; Lindsey, Manis, & Baily, 2003). Put differently, the problem of identifying those Hispanic students best able to reason in English is similar to the problem of identifying those English-speaking students most likely to succeed in understanding, reading, and writing French. [I leave out speaking skills because language production abilities seem to involve specific abilities unrelated to academic competence (Carroll, 1981).] The evidence is clear that competence in understanding, reading, and writing English is a much better predictor of success in learning to understand, read, and write French than is numerical or nonverbal reasoning (Carroll, 1981). But the best prediction is given by the ability to comprehend and reason in French after some exposure to the French language. The same logic applies to the problem of identifying bilingual students who are most likely to achieve at high levels in domains that require verbal reasoning abilities in English. A test that assessed the abilities of Hispanic students to reason in Spanish would predict their ability to reason in English. But the best and most direct prediction is given by a test that measures how well they have learned to reason in English, given some years of exposure to the language.

Notice that an aptitude perspective requires that one be much more specific about the demands and affordances of the learning situation than a model that presumes an undifferentiated g should best predict performance in any context. In particular, success in schooling places heavy demands on students' abilities to use language to express their thoughts and to understand other people's attempts to express their thoughts. Because of this, those students most likely to succeed in formal schooling in any culture will be those who are best able to reason verbally. Indeed, our data show that, if anything, verbal reasoning abilities are even more important for bilingual students than for monolingual students. Failure to measure these abilities does not somehow render them any less important. There is no escaping the fact that the bilingual student most likely to succeed in school will exhibit strong verbal reasoning skills in her first language and, even more importantly, in the language of instruction. Thus, an aptitude perspective leads one to look for those students who have best developed the specific cognitive (and affective) aptitudes most required for developing expertise in particular domains.

The Black, Hispanic, or Asian-American students most likely to develop expertise in mathematics are those who obtain the highest scores on tests of mathematics achievement and quantitative reasoning. Identifying such students requires attention to proximal, relevant aptitudes, not distal ones that have weaker psychological and statistical justification.

The Scourge of Fixed Cut Points

But how can schools take into account the relevant aptitudes and at the same time increase the diversity of the selected group? On average, Hispanic and Black students score below White students on both measures of achievement and measures of reasoning ability. For Blacks, the problem is not only a lower mean score, but also a smaller variance. In their review of the Black-White test score differences on six large national surveys, Hedges and Nowell (1998) found that the variances of the score distributions for Black students were 20-30% smaller than the variances of the score distributions for Whites. The combination of a lower mean and a smaller variance means that very few Blacks obtain high scores in the common score distributions (see Koretz, Lynch, & Lynch, 2000).

Because of this, schools have looked for measures of accomplishment or ability that show smaller group differences. One can reduce the differences by careful construction of tests. Most well-designed ability and achievement tests go to great lengths to remove irrelevant sources of difficulty that are confounded with ethnicity. But one cannot go very far down this path without getting off the path altogether. For example, one can use teacher ratings of student creativity rather than measures of achievement or, in the case of ability tests for ELL students, measures of figural reasoning ability instead of verbal and quantitative reasoning abilities. Creativity is a good thing, but it is not the same thing as achievement. Schools should aim to develop both. However, one cannot substitute high creativity for high achievement.

Further, ratings of creativity are much less reliable than measures of achievement. Group differences will always be smaller on a less reliable test than on a more reliable test. In the extreme, a purely random selection process would show no group differences. Similarly, a less valid predictor of achievement (such as a figural reasoning test) may be successful in identifying relatively more minority students, but more of these will be the wrong students (see Figure 1). This should concern everyone, especially the minority communities who hope that students who receive the special assistance offered in programs for the gifted and talented will someday become the next generation of minority scholars and professionals. Getting the right kids is much more important than getting the right number of kids.
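The reliability point follows directly from classical test theory: measurement error leaves the group means where they are but inflates the observed standard deviation, so a standardized gap shrinks by the square root of the reliability. A small illustration with an invented true gap of 1.0 SD:

```python
import numpy as np

# Under classical test theory: d_observed = d_true * sqrt(reliability).
def observed_gap(d_true, reliability):
    return d_true * np.sqrt(reliability)

# Invented true gap of 1.0 SD, evaluated at three reliabilities.
gaps = {rel: round(observed_gap(1.0, rel), 3) for rel in (0.90, 0.50, 0.10)}
# A fully unreliable (random) score would show no gap at all.
```

A highly reliable test (reliability .90) preserves most of the gap (0.949 SD), a mediocre one (.50) shows 0.707 SD, and a nearly random one (.10) shows only 0.316 SD, which is why chasing smaller group differences through less reliable measures is self-defeating.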

The problem, I believe, is that programs for the gifted and talented have not clearly distinguished the criteria that should be used to identify students who currently display extraordinary levels of academic accomplishment from the criteria that should be used to identify those whose current accomplishments are lesser, but who show potential for developing academic excellence (for an elaboration of this point, see Lohman, in press). In identifying students whose current accomplishments are extraordinary, common measures and common criteria are most appropriate. The third-grade student who will be studying algebra needs a level of mathematical competence that will allow her to succeed in the algebra class. Other aptitudes are important, but none can compensate for the lack of requisite mathematical knowledge. Potential for developing a high level of accomplishment, on the other hand, is a much slipperier concept. Even in the best of cases, predictions of future achievement are often wrong. For example, Table 2 showed the prediction of grade 9 reading achievement from six measures of achievement and reasoning abilities obtained in grade 4, plus sex, and the six interactions between sex and each of the achievement and ability tests.

The R-square in this analysis was .621, and so the multiple R was .788. Although this is a substantial correlation, it means that only slightly more than half of the students who were predicted to be in the top ten percent of the grade 9 reading achievement distribution on the basis of their gender, grade 4 achievement, and grade 4 ability test scores actually obtained grade 9 reading scores at this level.
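This hit-rate claim is easy to check by simulation: draw predicted and actual scores as a bivariate normal with correlation .788 and count how many predicted-top-decile cases land in the actual top decile. (The simulation is an illustration of the arithmetic, not a reanalysis of the district data.)

```python
import numpy as np

rng = np.random.default_rng(3)
r = 0.788
n = 200_000

# Bivariate normal with correlation r.
predicted = rng.normal(size=n)
actual = r * predicted + np.sqrt(1 - r**2) * rng.normal(size=n)

# Of those predicted to be in the top 10%, how many actually are?
cut_pred = np.quantile(predicted, 0.90)
cut_act = np.quantile(actual, 0.90)
hit_rate = np.mean(actual[predicted >= cut_pred] >= cut_act)
```

The simulated hit rate comes out in the mid-.50s, matching the "slightly more than half" figure in the text.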

Put differently, even with excellent measures of prior achievement and ability, we could forecast whether a child would fall in the top ten percent of the distribution five years later with only slightly more than 50 percent accuracy. Furthermore, these predictions are even less likely to hold if schools adopt interventions that are designed to falsify them. For example, intensive instruction in writing for students who have had little opportunity to develop such skills on their own can markedly change the relationship between pretest and posttest scores. Differences in opportunity to learn can be substantial and must be taken into account when making inferences about potential to acquire competence. This is recognized, albeit in a somewhat muddled way, in the U.S. Department of Education's guidelines for identifying gifted students. There, gifted children are defined as those who "perform or show the potential for performing at ... high levels of accomplishment when compared with others of their age, experience, or environment" (U.S. Department of Education, 1993; emphasis added). The definition confounds performance with potential for performance. I would argue that, although current levels of accomplishment should be judged using standards that are the same for all, judgments of potential for acquiring competence must always be made relative to circumstances. To do otherwise presumes that abilities can be assessed independently of opportunities to develop them. This is not possible. Therefore, when estimating a student's potential to acquire competence, schools cannot blindly apply a uniform cut score, however fair such a procedure may seem or however administratively convenient it may be.

The ten-year-old ELL student with, say, three years of exposure to the English language who has learned to reason with English words at the same level as the average ten-year-old native speaker has exhibited much greater potential for language learning than the ten-year-old native speaker. Schools can best identify such bilingual students by examining frequency distributions of scores on the relevant achievement and reasoning tests. It is helpful to know where students stand in relation both to all of their age or grade peers and to those with similar backgrounds. Concretely, the Hispanic student with the highest probability of developing academic excellence in a particular domain is the student with the highest achievement in that domain and the best ability to reason in the symbol systems most demanded for new learning in that domain. (See Lohman, 2003, for tables that display prediction efficiencies for different levels of correlation between tests.)

In general, judgments about potential to acquire proficiency have both empirical and ethical dimensions, and both need to be addressed. The empirical question is "Who is most likely to attain academic excellence if given special assistance?" The ethical question is "Who is most likely to need the extra assistance that schools can provide?" For some students, special programs to develop talent provide an ancillary opportunity; for other students, they provide the only opportunity. Once these high-potential students have been identified, the next step is to intervene in ways that are likely to assist them in developing their academic skills. This involves much more than a short pullout program. Students, their parents, and their teachers need to understand that the development of high levels of academic competence involves the same level of commitment (and assistance) as does the development of high levels of athletic or musical competence.

Being identified as having the potential to achieve at a high level should not be confused with achieving at a high level. One possibility is to solicit the active involvement of high-achieving minority adults to work with students in developing their academic expertise. I am particularly impressed with programs such as the Urban Debate League in Baltimore (see http://www.towson.edu/news/campus/msg02628.html). Among the many good features of such programs are an emphasis on production (rather than reception) of language, teamwork, long hours of guided practice, apprenticeship programs, competitions that motivate, and the active involvement of adults who serve as role models.

The Proper Role of Nonverbal Ability Tests

What, then, is the proper role for nonverbal ability tests in identifying students for acceleration or enrichment? Such tests do have a role to play in this process, but it is as a measure of last resort, not of first resort. Consider an analogy: height and weight are positively correlated, so we can predict weight from height, but only with much error.

It is no fairer to measure everyone's height just because we find it difficult to measure some people's weight. Rather, we should use predicted weight only when we cannot actually weigh people. High scores on figural reasoning tests tell us that students can reason well about problems that make only the most elementary demands on their verbal and quantitative development. The trouble, however, is that the minority or majority students with the highest nonverbal reasoning test scores are not necessarily the students who are most likely to show high levels of achievement, either currently or at some later date. Rather, those students with the highest domain-specific achievement who also reason best in the symbol systems used to communicate new knowledge are the ones most likely to achieve at a high level.

Therefore, high scores on the nonverbal test should always be accompanied by evidence of high (but not necessarily stellar) accomplishment in a particular academic domain or by evidence that the student's verbal or quantitative reasoning abilities are high relative to those of others in similar circumstances. Most schools have this evidence for achievement, and those that administer ability tests that report scores for verbal and quantitative reasoning in addition to the nonverbal score would have the corresponding evidence for ability as well. For many ELL students, mathematics achievement and quantitative reasoning abilities are strong even when compared to the achievements of non-ELL students. For Black students, on the other hand, low scores on the nonverbal reasoning test are relatively common even among those students with strong verbal and quantitative reasoning abilities.

This was shown by the high frequency of N- profiles for Blacks in Table 1. Thus, less-than-stellar performance on the nonverbal test is even less informative for these students than for other students. Absent ancillary information on verbal or quantitative abilities and achievement, then, the odds are not good that one will identify many of the most academically capable students by using a nonverbal, figural reasoning test. High scores on the nonverbal test are nonetheless a useful supplement. They sometimes add to the prediction of achievement, especially in the quantitative domains. Thus the student with high scores on both the nonverbal and quantitative tests is more likely to excel in mathematics than is the student with high scores on either measure alone. And because the average scores of ELL students are generally higher on nonverbal tests than on tests with verbal content, the test scores can encourage students whose academic performance is not strong.
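The incremental value of pairing a nonverbal score with a quantitative score can be sketched with the standard two-predictor regression formula. The correlations below are assumed values chosen only for illustration, not figures reported in this paper.

```python
# Illustrative only: the correlations are assumed, not taken from this paper.
# R^2 for two predictors shows how a nonverbal score can add to the
# prediction of mathematics achievement beyond a quantitative score alone.

def r_squared_two_predictors(r_y1, r_y2, r_12):
    """R^2 for regressing y on x1 and x2, given the pairwise correlations."""
    return (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)

# Assumed correlations: quantitative with math achievement 0.6,
# nonverbal with math achievement 0.5, quantitative with nonverbal 0.6.
r2_quant_alone = 0.6**2                              # 0.36
r2_both = r_squared_two_predictors(0.6, 0.5, 0.6)    # about 0.39

print(f"Quantitative alone:        R^2 = {r2_quant_alone:.2f}")
print(f"Quantitative + nonverbal:  R^2 = {r2_both:.2f}")
```

Under these assumed values the nonverbal score adds a modest but real increment, which is consistent with the claim that it supplements, rather than replaces, the quantitative measure.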

The critical point, however, is not to confuse a higher average nonverbal score with better assessment of the relevant aptitudes. Put differently, the nonverbal test may appear to reduce bias, but when used alone it actually increases bias by failing to select those most likely to profit from enrichment.

Summary

Measures of academic accomplishment (which include, but are not limited to, norm-referenced achievement tests) should be the primary criteria for defining academic giftedness. Such assessments not only measure knowledge and skills that are important aptitudes for academic learning, but also help define the type of expertise that schools aim to develop. Because of this, they can direct the efforts of those who aspire to academic excellence. When properly used, they offer the most critical evidence for decisions about acceleration. They also avoid some of the more intractable problems that invariably attend attempts to define giftedness primarily in terms of anything that smacks of innate potential or capacity.

Those who are not selected may rightly bristle at the suggestion that they are innately inferior. This is less likely to occur when excellence is defined in terms of accomplishment. Measures of developed reasoning abilities should be the second criterion for identifying children who are likely to profit from advanced instruction. Some students who show excellent reasoning abilities will be ready immediately for advanced instruction; some will be ready after a period of intensive instruction; and some will never be ready. Deciding which of these outcomes is most probable requires clear thinking about the meaning of aptitude. Since defining the treatment is part of defining the aptitude, the first step is to identify the domains in which acceleration or advanced instruction is offered (or could be offered). In most school districts in the United States, the primary offerings consist of advanced instruction in areas such as mathematics, science, writing, literature, and (to a lesser extent) the arts. The best predictors of success in any domain are typically achievement to date in that domain and the ability to reason in the symbol systems of that domain.

However, many students who have the potential for academic excellence do not meet the selection criteria of very high current achievement. The next goal, then, should be to find those students who do not currently exhibit academic excellence but are most likely to develop it if given extra assistance. Because domain-relevant knowledge and skill are always important, one should look for students whose current achievement is strong, even though it is not stellar. Elsewhere (Lohman, in press), I use the example of selecting athletes for a college-level team. The best predictor of the ability to play basketball in college is the ability to play basketball in high school. Suppose, however, that the team could not recruit an athlete who had excelled at playing center in high school.

One could cast a broader net and look at other athletes who had the requisite physical attributes (height, agility, coordination). But it would be an extraordinary risk to recruit someone who had no experience in playing the game.or, better, who had demonstrated that he or she could not play the game. Rather, one would look for a player with at least moderate basketball playing skills in addition to the requisite physical attributes. Then, extra training would be provided. A similar policy could guide the search for academically promising minority students. Suppose the criterion for admission to the gifted and talented program is scoring the top 3 percent of the district in one or more domains of achievement or their associated reasoning abilities.17 One could begin, for example, by identifying those students who score in the top 10 percent of the local distribution of achievement scores (but not in the top 3 percent). Among these high achievers, some will show higher than expected achievement the following year, whereas the majority will show somewhat lower achievement.18 Those who show higher achievement the next year are more likely to show exceptionally strong reasoning abilities in the symbol system(s) used to communicate new knowledge. Therefore, among those with comparable achievement test scores, domain-critical reasoning abilities are the second most important aptitude.
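The two-stage pool described above can be sketched in a few lines. The district scores are simulated and purely hypothetical; the 3 percent and 10 percent cut points follow the example in the text.

```python
# A sketch of the two-stage identification pool: students at or above the
# 97th percentile of the local distribution qualify outright; those between
# the 90th and 97th percentiles form the pool that receives extra assistance.
# The scores are simulated, not real data.

import random

random.seed(1)
scores = [random.gauss(100, 15) for _ in range(1000)]  # hypothetical district

ranked = sorted(scores)
cut_97 = ranked[len(ranked) * 97 // 100]   # local 97th-percentile score
cut_90 = ranked[len(ranked) * 90 // 100]   # local 90th-percentile score

qualified = [s for s in scores if s >= cut_97]
pool = [s for s in scores if cut_90 <= s < cut_97]

print(len(qualified), len(pool))  # 30 qualifiers and a pool of 70, per 1000
```

The point of the sketch is that the pool is defined by the local distribution, not by a fixed national norm.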

The reverse scenario holds for those who show high (but not stellar) scores on tests of reasoning abilities. For these students, domain achievement tests measure the second most important aptitude. In either case, within-group distributions of ability and achievement test scores can assist in identifying the minority students most likely to develop academic excellence. One reviewer of an earlier draft of this paper noted the similarities between these recommendations and the admission procedures used in many university-affiliated talent searches, such as The Center for Talented Youth at Johns Hopkins University, The Talent Identification Program at Duke University, The Center for Talent Development at Northwestern University, and The Rocky Mountain Talent Search at The University of Denver. Admission standards for entering the talent search require a 95th-97th percentile rank score on an achievement test. Students are then administered verbal and quantitative reasoning tests, typically the SAT-I. Further, as recommended here, students can qualify in either the verbal or the mathematical domain, or both.

The policies discussed here differ, though, in the recommendation that school-based gifted and talented programs look not only at students' ranks on achievement and ability tests when compared to all age or grade peers, but also at rank relative to those with similar backgrounds. Further, even though achievement should be given priority over reasoning abilities, it is generally wiser for school-based programs to test all students on both constructs than to screen on one test. If one test is used to screen applicants, then a much lower cut score should be used than is typically adopted (Hagen, 1980; see Lohman, 2003, for examples).

Student personality and affective characteristics also need to be taken into consideration, even though they typically show much stronger dependence on the particulars of the instructional environment than do the cognitive characteristics of achievement and reasoning abilities. Anxiety and impulsivity typically impede learning, especially when tasks are unstructured. Interest and persistence, on the other hand, typically enhance learning, especially in open-ended tasks (Corno et al., 2002). Identification systems that take these sorts of characteristics into account will generally be more effective (Hagen, 1980).

17 The CogAT authors have consistently argued that such decisions should not be made on the basis of the CogAT composite score (e.g., Hagen, 1980; Lohman & Hagen, 2001b, p. 127; Thorndike & Hagen, 1996, p. 159). Academic excellence should also be identified within the major curricular domains. Some students will excel in multiple domains; most will not.

18 Regression to the mean is most evident in scores that report rank within group (such as percentile ranks) and least evident in attainment scores (such as developmental scale scores on achievement or ability test batteries).

Conclusions

1. Except for very young children, academic giftedness should be defined primarily by evidence of academic accomplishment. Measuring what students currently know and can do is no small matter, but it is much easier than measuring potential for future accomplishment. Good measures of achievement are critical. Start with an on-grade achievement test and, if necessary, supplement with above-grade testing to estimate where instruction should begin. Look for other measures of accomplishment, especially production tasks that have well-validated rating criteria. For ELL students, attend particularly to mathematics achievement. High levels of current accomplishment should be a prerequisite for acceleration and advanced placement.

2. Measure verbal, quantitative, and figural reasoning abilities in all students. Keep in mind that the majority of all students (minority and nonminority) show uneven profiles across these three domains, and that the predictors of current and future achievement are the same in White, Black, Hispanic, and Asian-American groups. Testing only those students who show high levels of achievement misses most of the students whose reasoning abilities are relatively stronger than their achievement. Because of regression to the mean, it also guarantees that most students will obtain lower scores on the ability test than they did on the achievement test (see Lohman, 2003).
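A small simulation illustrates why screening on a high achievement cut guarantees mostly lower ability scores. The 0.7 correlation between the two tests is an assumed value, chosen only to show the mechanism of regression to the mean.

```python
# Simulate paired achievement and ability z-scores with an assumed
# correlation of 0.7, screen on the top 3 percent of achievement, and
# count how many screened students score lower on the ability test.

import math
import random

random.seed(0)
r = 0.7
n = 100_000
pairs = []
for _ in range(n):
    ach = random.gauss(0, 1)
    abil = r * ach + math.sqrt(1 - r * r) * random.gauss(0, 1)
    pairs.append((ach, abil))

ranked = sorted(p[0] for p in pairs)
cut = ranked[n * 97 // 100]                       # top 3% on achievement
screened = [(a, b) for a, b in pairs if a >= cut]

lower = sum(1 for a, b in screened if b < a)
print(f"{lower / len(screened):.0%} of screened students score lower on ability")
```

Because only extreme achievement scores pass the screen, the correlated ability score regresses toward the mean for the large majority of those selected.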

3. For young children and others who have had limited opportunities to learn, give greater emphasis to measures of reasoning abilities than to measures of current accomplishment. When accomplishments are few, either because students are young or because they have had few opportunities to learn, evidence of the ability to profit from instruction is critical. As children mature and have opportunities to develop more significant levels of expertise, measures of accomplishment should be given greater weight. In all cases, however, evidence of high levels of accomplishment trumps predictions of lesser accomplishments.

4. Consider nonverbal/figural reasoning abilities as a helpful adjunct for both minority and nonminority admissions, but as a measure of last resort. Scores on nonverbal tests must always be considered in conjunction with evidence on verbal and quantitative abilities and achievement. Defining academic giftedness by scores on a nonverbal reasoning test simply because the test can be administered to native speakers of English and to ELL students serves neither group well. Nonverbal tests are measures of last resort rather than of first resort in identifying academically gifted children.

5. Use selection tests that provide useful information for all students, not just the handful who are selected for inclusion in the G&T program. Teachers should see the test as potentially providing useful information about all of their students and how they can be assisted. A test that supposedly measures innate capabilities (or is easily misinterpreted as doing so) is rightly resented by those who do not want to see scores for low-scoring students entered in their records or used in ways that might limit rather than expand their educational opportunities.

6. Learn how to interpret tables of prediction efficiencies for correlations. (See Lohman, 2003.) Even high correlations are much less precise for selection and prediction than most people think.
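One way to build intuition for such tables is a short simulation. Even with an assumed correlation of .8 between scores now and five years later (an optimistic value chosen for illustration), a student in the top decile now lands in the top decile later only a little more than half the time.

```python
# Simulate a test-retest correlation of 0.8 (assumed for illustration) and
# estimate the chance that a student in the top 10% now is still in the
# top 10% later. High correlations predict individual rank surprisingly poorly.

import math
import random

random.seed(0)
r = 0.8
n = 200_000
hits = selected = 0
for _ in range(n):
    t1 = random.gauss(0, 1)
    t2 = r * t1 + math.sqrt(1 - r * r) * random.gauss(0, 1)
    if t1 > 1.2816:          # z-score of the 90th percentile: top 10% now
        selected += 1
        if t2 > 1.2816:      # still in the top 10% later
            hits += 1

print(f"P(top 10% later | top 10% now) = {hits / selected:.2f}")
```

The result is far below the certainty that a correlation of .8 intuitively suggests, which is the point of conclusion 6.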

7. Clearly distinguish between the academic needs of students who show high levels of current accomplishment and the needs of those who show promise for developing academic excellence. Academic programs that combine students with high levels of current achievement and those who exhibit potential for high achievement often serve neither group well. When accelerating students, the primary criterion must be a high level of accomplishment in the domain. Common standards are necessary. Measures of developed reasoning abilities and other aptitudes (such as persistence) are best viewed as indicators of potential for future achievement. Students who show high potential but only moderate levels of current achievement need different types of enrichment programs.

8. Use common aptitude measures but uncommon cut scores (e.g., rank within group) when identifying minority students most likely to profit from intensive instruction. Since the predictors of future accomplishment are the same for minority and White students, the same aptitudes need to be taken into account when identifying minority students who show the greatest potential for developing academic excellence. However, even with the best measures of aptitude, predictions will often be wrong. Because judgments about potential are inherently even more uncertain than judgments about accomplishment, a uniform cut score is difficult to defend when students come from different backgrounds. Both psychological and ethical issues must be addressed when making such decisions.
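A within-group rank can be computed with the same percentile logic used for the overall distribution. The groups and scores below are hypothetical; the point is that a student can miss a uniform district cut while ranking at the very top of students with similar backgrounds.

```python
# Rank-within-group scoring with made-up data: the same score yields a
# modest district-wide percentile rank but the top within-group rank.

def percentile_rank(scores, x):
    """Percent of scores in the list that are at or below x."""
    return 100 * sum(s <= x for s in scores) / len(scores)

district = [78, 82, 85, 88, 90, 92, 94, 95, 97, 99]   # all students (hypothetical)
ell_group = [60, 65, 70, 72, 75, 80, 85]              # similar-background peers

student = 85
print(percentile_rank(district, student))   # district rank: well below a high cut
print(percentile_rank(ell_group, student))  # within-group rank: at the top
```

Used this way, a common measure supports an uncommon cut score: the selection statistic is the student's rank among those with comparable opportunities to learn.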

9. Do not confuse means and correlations. A biased selection procedure is one that, in any group, does not select the students most likely to profit from the treatment offered. Nonverbal reasoning tests reduce but do not eliminate differences in mean scores between groups. But they are not the best way to identify those students who either currently exhibit or are most likely to achieve academic excellence. When used alone, they increase bias while appearing to reduce it. A more effective and fairer procedure for identifying academic potential is to look at both within-group and overall distributions of scores on tests that measure the most important aptitudes for successful learning in a particular domain, notably prior achievement and the ability to reason in the symbol systems used to communicate new knowledge in that domain.

References

Achenbach, T. M. (1970). Standardization of a research instrument for identifying associative responding in children. Developmental Psychology, 2, 283-291. Achter, J. A., Lubinski, D., & Benbow, C. P. (1996). Multipotentiality among intellectually gifted: "It was never there and already it's vanishing." Journal of Counseling Psychology, 43, 65-76. Ackerman, P. L., & Kanfer, R. (1993). Integrating laboratory and field study for improving selection: Development of a battery for predicting air traffic controller success. Journal of Applied Psychology, 78, 413-432. Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall. Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369-406.

Assouline, S. G. (2003). Psychological and educational assessment of gifted children. In N. Colangelo & G. A. Davis (Eds.), Handbook of gifted education (3rd ed, pp. 124-145). Boston: Allyn & Bacon. Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. New York: Cambridge University Press. Bethell-Fox, C. E., Lohman, D. F., & Snow, R. E. (1984). Adaptive reasoning: Componential and eye movement analysis of geometric analogy performance. Intelligence, 8, 205-238. Bracken, B. A. & McCallum, R. A. (1998). Universal Nonverbal Intelligence Test. Itasca, IL: Riverside. Carpenter, P. A., Just, M. A., & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test. Psychological Review, 97, 404-431. Carroll, J. B. (1981). Twenty-five years of research on foreign language aptitude. In K. C. Diller (Ed.), Individual differences and universals in language learning aptitude (pp. 83-118). Rowley, MA: Newbury House.

Carroll, J. B. (1993). Human cognitive abilities: A survey of factor-analytic studies. Cambridge, UK: Cambridge University Press. Cattell, R. B. (1971). Abilities: Their structure, growth, and action. Boston: Houghton-Mifflin. Corno, L., Cronbach, L. J., Kupermintz, H., Lohman, D. F., Mandinach, E. B., Porteus, A. W., & Talbert, J. E. (2002). Remaking the concept of aptitude: Extending the legacy of Richard E. Snow. Hillsdale, NJ: Erlbaum. Cronbach, L. J. (1990). Essentials of psychological testing (5th ed.). New York: Harper and Row. Fitts, P. M. (1964). Perceptual-motor skill learning. In A. W. Melton (Ed.), Categories of human learning. New York: Academic Press. Forsyth, R. A., Ansley, T. N., Feldt, L. S., & Alnot, S. (2001). Iowa Test of Educational Development, Form A. Itasca, IL: Riverside.

Flynn, J. R. (1987). Massive IQ gains in 14 nations: What IQ tests really measure. Psychological Bulletin, 101, 171-191. Flynn, J. R. (1999). Searching for justice: The discovery of IQ gains over time. American Psychologist, 54, 5-20. Galton, F. (1869/1972). Hereditary genius. Gloucester, MA: Peter Smith. Gardner, H. (1983). Frames of mind: The theory of multiple intelligences. New York: Basic Books. Glaser, R. (1992). Expert knowledge and processes of thinking. In D. F. Halpern (Ed.), Enhancing thinking skills in the sciences and mathematics (pp. 63-75). Hillsdale, NJ: Erlbaum. Gohm, C. L., Humphreys, L. G., & Yao, G. (1998).

Underachievement among spatially gifted students. American Educational Research Journal, 35, 515-531. Gustafsson, J.-E., & Balke, G. (1993). General and specific abilities as predictors of school achievement. Multivariate Behavioral Research, 28, 407-434. Gustafsson, J.-E., & Undheim, J. O. (1996). Individual differences in cognitive functions. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 186-242). New York: Simon & Schuster Macmillan. Hagen, E. P. (1980). Identification of the gifted. New York: Teachers College Press. Hammill, D. D., Pearson, N. A., & Wiederholt, J. L. (1996).

Comprehensive Test of Nonverbal Intelligence. Austin, TX: Pro-Ed. Heath, S. B. (1983). Ways with words: Language, life, and work in communities and classrooms. New York: Cambridge University Press. Hedges, L. V., & Nowell, A. (1998). Black-White test score convergence since 1965. In C. Jencks & M. Phillips (Eds.), The Black-White test score gap (pp. 149-181). Washington, DC: Brookings Institution Press. Hoover, H. D., Hieronymous, A. N., Frisbie, D. A., & Dunbar, S. B. (1993). Iowa Test of Basic Skills, Form K: Survey Battery. Chicago: Riverside. Hoover, H. D., Dunbar, S. B., & Frisbie, D. A. (2001). The Iowa Test of Basic Skills, Form A. Itasca, IL: Riverside.

Horn, J. L. (1985). Remodeling old models of intelligence. In B. B. Wolman (Ed.), Handbook of intelligence (pp. 267-300). New York: Wiley. Humphreys, L. G. (1973). Implications of group differences for test interpretation. In Proceedings of the 1972 Invitational Conference on Testing Problems: Assessment in a pluralistic society (pp. 56-71). Princeton, NJ: ETS. Humphreys, L. G. (1981). The primary mental ability. In M. P. Friedman, J. P. Das, & N. O'Connor (Eds.),

Intelligence and learning (pp. 87-102). New York: Plenum. Humphreys, L. G., Lubinski, D., & Yao, G. (1993). Utility of predicting group membership: Exemplified by the role of spatial visualization for becoming an engineer, physical scientist, or artist. Journal of Applied Psychology, 78, 250-261. Irvine, S. H. (1983). Testing in Africa and America. In S. H. Irvine & J. W. Berry (Eds.), Human assessment and cultural factors (pp. 45-58). New York: Plenum Press. Jensen, A. R. (1998). The g factor: The science of mental ability. Westport, CT: Praeger. Keith, T. Z. (1999). Effects of general and specific abilities on student achievement: Similarities and differences across ethnic groups. School Psychology Quarterly, 14, 239-262. Keith, T. Z., & Witta, E. L. (1997). Hierarchical and cross-age confirmatory factor analysis of the WISC-III: What does it measure? School Psychology Quarterly, 12, 89-107.

Koretz, D., Lynch, P. S., & Lynch, C. A. (2000). The impact of score differences on the admission of minority students: An illustration. Statements of the National Board on Educational Testing and Public Policy, 1(5). Kyllonen, P. C., & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14, 389-433.

Laboratory of Comparative Human Cognition. (1982). Culture and intelligence. In R. J. Sternberg (Ed.), Handbook of intelligence (pp. 642-722). New York: Cambridge University Press. Lindsey, K. A., Manis, F. R., & Bailey, C. E. (2003). Prediction of first-grade reading in Spanish-speaking English-language learners. Journal of Educational Psychology, 95, 482-494. Lohman, D. F. (1994). Spatially gifted, verbally inconvenienced. In N. Colangelo, S. G. Assouline, & D. L. Ambroson (Eds.), Talent development: Vol. 2.

Proceedings from The 1993 Henry B. and Jocelyn Wallace National Research Symposium on Talent Development (pp. 251-264). Dayton, OH: Ohio Psychology Press. Lohman, D. F. (1996). Spatial ability and G. In I. Dennis & P. Tapsfield (Eds.), Human abilities: Their nature and assessment (pp. 97-116). Hillsdale, NJ: Erlbaum. Lohman, D. F. (2000). Complex information processing and intelligence. In R.J. Sternberg (Ed.), Handbook of human intelligence (2nd ed., pp. 285-340). Cambridge, MA: Cambridge University Press.

Lohman, D. F. (2003). Tables of prediction efficiencies. Retrieved December 31, 2003, from The University of Iowa, College of Education Web site: http://faculty.education.uiowa.edu/dlohman. Lohman, D. F. (in press). Aptitude for college: The importance of reasoning tests for minority admissions. In R. Zwick (Ed.), Rethinking the SAT: The future of standardized testing in college admissions. Santa Barbara, CA: UC Press. Lohman, D. F. (in press). An aptitude perspective on talent: Implications for the identification of academically gifted minority students. Journal for the Education of the Gifted. (Also available at http://faculty.education.uiowa.edu/dlohman/) Lohman, D. F., & Hagen, E. P. (2001a).

 Cognitive Abilities Test (Form 6). Itasca, IL: Riverside. Lohman, D. F., & Hagen, E. P. (2001b). Cognitive Abilities Test (Form 6): Interpretive Guide For School Administrators. Itasca, IL: Riverside. Lohman, D. F., & Hagen, E. P. (2001c). Cognitive Abilities Test (Form 6): Interpretive Guide For Teachers and Counselors. Itasca, IL: Riverside. Lohman, D. F., & Hagen, E. P. (2002). Cognitive Abilities Test (Form 6): Research handbook. Itasca, IL: Riverside. Lubinski, D. (2003). Exceptional spatial abilities. In N. Colangelo & G. A. Davis (Eds.), Handbook of gifted education (3rd ed., pp. 521-532). Boston: Allyn & Bacon.

Lubinski, D., & Benbow, C. P. (1995). An opportunity for empiricism [Review of Multiple intelligences]. Contemporary Psychology, 40, 935-937. Marshalek, B., Lohman, D. F., & Snow, R. E. (1983). The complexity continuum in the radex and hierarchical models of intelligence. Intelligence, 7, 107-128. Martinez, M. (2000). Education as the cultivation of intelligence. Mahwah, NJ: Erlbaum. Miller, J. G. (1997). A cultural-psychology perspective on intelligence. In R. J. Sternberg & E. L. Grigorenko (Eds.), Intelligence, heredity, and environment (pp. 269-302). New York: Cambridge University Press.

Mills, C., & Tissot, S. L. (1995). Identifying academic potential in students from under-represented populations: Is using the Ravens Progressive Matrices a good idea? Gifted Child Quarterly, 39, 209-217. Mulholland, T. M., Pellegrino, J. W., & Glaser, R. (1980). Components of geometric analogy solution. Cognitive Psychology, 12, 252-284. Naglieri, J. A. (1997). Naglieri Nonverbal Ability Test: Multilevel technical manual. San Antonio, TX: Harcourt Brace. Naglieri, J. A., & Das, J. P. (1997). Cognitive Assessment System. Itasca, IL: Riverside.

Naglieri, J. A., & Ford, D. Y. (2003). Addressing underrepresentation of gifted minority children using the Naglieri Nonverbal Ability Test (NNAT). Gifted Child Quarterly, 47, 155-160. Naglieri, J. A., & Ronning, M. E. (2000). The relationship between general ability using the Naglieri Nonverbal Ability Test (NNAT) and Stanford Achievement Test (SAT) reading achievement. Journal of Psychoeducational Assessment, 18, 230-239.

Pedhazur, E. J. (1982). Multiple regression in behavioral research: Explanation and prediction (2nd ed.). New York: Holt, Rinehart, and Winston. Piaget, J. (1952). The origins of intelligence in children. New York: International Universities Press. Plomin, R., & De Fries, J. C. (1998). The genetics of cognitive abilities and disabilities. Scientific American, 278, 62-69. Raven, J. (2000). The Raven's Progressive Matrices: Change and stability over culture and time. Cognitive Psychology, 41, 1-48.

Raven, J. C., Court, J. H., & Raven, J. (1983). Manual for Raven's Progressive Matrices and vocabulary scales, section 4: Advanced Progressive Matrices, sets I and II. London: H. K. Lewis. Richert, E. S. (2003). Excellence with justice in identification and programming. In N. Colangelo & G. A. Davis (Eds.), Handbook of gifted education (3rd ed., pp. 146-158). Boston, MA: Pearson Education. Rodriguez, M. C. (2003). Construct equivalence of multiple-choice and constructed-response items: A random effects synthesis of correlations. Journal of Educational Measurement, 40, 163-184. Scarr, S. (1981). Race, social class, and individual differences in IQ. Hillsdale, NJ: Erlbaum.

Scarr, S. (1994). Culture-fair and culture-free tests. In R. J. Sternberg (Ed.), Encyclopedia of Human Intelligence (pp. 322-328). New York: Macmillan. Shepard, R. N. (1978). Externalization of mental images and the act of creation. In B. Randhawa & W. Coffman (Eds.), Visual learning, thinking, and communication (pp. 133-190). New York: Academic Press. Snow, R. E., & Lohman, D. F. (1984). Toward a theory of cognitive aptitude for learning from instruction. Journal of Educational Psychology, 76, 347-376.

Snow, R. E., & Lohman, D. F. (1989). Implications of cognitive psychology for educational measurement. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 263-332). New York: Macmillan. Snow, R. E., & Yalow, E. (1982). Education and intelligence. In R. J. Sternberg (Ed.), Handbook of human intelligence (pp. 493-585). New York: Cambridge University Press. Sternberg, R. J. (1982). Reasoning, problem solving, and intelligence. In R. J. Sternberg (Ed.), Handbook of human intelligence (pp. 225-307). New York: Cambridge University Press.

Sternberg, R. J. (1985). Beyond IQ: A triarchic theory of human intelligence. Cambridge, UK: Cambridge University Press. Sternberg, R. J., & Nigro, G. (1980). Developmental patterns in the solution of verbal analogies. Child Development, 51, 27-38. Thorndike, R. L., & Hagen, E. (1993). The Cognitive Abilities Test: Form 5. Itasca, IL: Riverside. Thorndike, R. L., & Hagen, E. (1996). The Cognitive Abilities Test (Form 5): Interpretive Guide for School Administrators. Itasca, IL: Riverside. Thurstone, L. L. (1938). Primary mental abilities. Psychometric Monographs, 1. U.S. Department of Education. (1993). National excellence: A case for developing American talent. Washington, DC: U.S. Government Printing Office.
