Spotlight on history
By Paul Gade, PhD
Welcome to the Spotlight on History! This column showcases stories on the history of military psychology. Accounts presented in the column are inclusive of all areas of military psychology. If you would like to share a historical account in this column, please contact Paul Gade, PhD.
Project A: a brief history
First off, I want to encourage members of the society and others who read this column to contribute to it. Do you have a historical vignette you would like to contribute? If so, please send it to me. Want to do a whole column on a topic? Send that along too. I also want to warn our members that I will be calling on you to contribute to this column. One thing I know some of our members can contribute is stories about, or profiles of, your fellow military psychologists, including those who do not necessarily identify themselves as military psychologists but have nonetheless made substantial contributions to the field. I would like to highlight the accomplishments of psychologists, members of the Society or not, who have made significant contributions to military psychology research, professional practice, and clinical work. A good place to start might be the psychologists for whom our various awards are named. I am especially interested in hearing from non-U.S. psychologists about their military psychology histories. I like writing these columns, but this column is for everyone in the society, and I'll bet many of you have good ideas and good stories to tell. So please write to me.
Well, I had hoped to have a good society historical timeline for this issue of the newsletter, but I am still working on that. One of the things that has occupied my time, in addition to the holidays and becoming a first-time grandfather in early January, is a book chapter that Mike Rumsey and I are writing on Project A management as a case study for a new book on project management that is to be published in 2014. In the course of preparing our manuscript, I thought others might be interested in a brief history of what Project A was, how it came about, and why it is important to military psychology.
As with all historical writing, I think it is important to ground topics in their historical and social context. You saw that in my last column concerning the Don't Ask, Don't Tell repeal. Here, I first set the historical context for Project A's inception and development, then describe what the project was about, and finally discuss briefly its importance for military psychology. And yes, believe it or not, there was a Project B, but that's another story for another time.
Precursors to Project A
In World War II (WWII), as in World War I (WWI), the U.S. military services needed good selection and classification procedures to replace those that had been developed in WWI by Yerkes and his associates. In October 1940, anticipating the U.S. entry into the war, the Personnel Research Section of the U.S. Army developed the Army General Classification Test (AGCT). During WWII, the AGCT was used successfully to classify more than 12 million soldiers and Marines for specialty and officer training that they would probably not have received based solely on knowledge about their education and civilian occupations (Harrell, 1992). For example, the U.S. Army Air Corps assigned men of higher ability, as indicated by their AGCT scores, to technical skills training (e.g., for jobs of airplane mechanic or bombsight mechanic), even though in civilian life these men may have been truck drivers or barbers (Harrell, 1992).
In the early 1940s, specific mental tests—such as the general Mechanical Aptitude Test, Clerical Speed, Radio Learning, and Automotive Information—were often used to supplement the AGCT to assist in classification (Zeidner & Drucker, 1988). By 1947, 10 of these specific aptitude tests, which later defined the Army Classification Battery (ACB), had been used. But, at the time, the Army was unsure about how to optimize the use of these tests, so the Army began efforts to determine combinations of tests that were valid for different Army Military Occupational Specialties (MOS; Zeidner & Drucker, 1988). The organization of the specific aptitude tests into the Army Aptitude Area System for differential classification was a major innovation for the military personnel system. This multiple aptitude area system markedly increased differential classification precision and efficiency over the single measure provided by the AGCT during WWII.
With the passage of the Selective Service Act in 1948, Congress mandated that the Department of Defense (DoD) develop a selection and classification test to be used by all of the services. Between 1948 and 1950, with substantial contributions from the Navy, Marines, and Air Force, the Army, as executive agent for the DoD, developed the Armed Forces Qualification Test (AFQT), modeled after the AGCT. The test consisted of 100 multiple-choice questions in the following subjects: vocabulary, arithmetic, spatial relations, and mechanical ability. The AFQT was the first selection instrument to be used for the uniform mental screening of recruits and inductees across all the services. In addition to determining the mental qualifications of recruits during the Korean and Vietnam wars, the AFQT was used to help achieve an equitable distribution of abilities across the services (Maier, 1993).
After the end of the Vietnam War in 1973, the Army transitioned from a draft to an all-volunteer force (AVF; Shields, Hanser, & Campbell, 2001). Also in 1973, the DoD made using the joint AFQT optional, and each of the services used its own battery for selection and classification between 1973 and 1976, with the Army using a version of the ACB (Maier, 1993). This meant that three separate classification batteries had to be administered at the entrance and examining stations, adding to the already heavy burden created by the military's transition to the AVF (Maier, 1993). To solve this problem, the DoD called for a new joint service test to be developed. To establish an agreed-upon battery for enlistment testing, technical and policy representatives from each service first met in 1974 and began to develop the Armed Services Vocational Aptitude Battery (ASVAB; Maier, 1993).
In 1976, all services began using the ASVAB as a replacement for the individual services' classification batteries (Maier, 1993; Walker & Rumsey, 2001; Zook, 1996). The ASVAB has been updated several times since then, including its conversion to a computerized adaptive test, but it still serves today as the essential military screening and classification tool for all U.S. military services (Zook, 1996). The AFQT score, computed by combining four subtests within the ASVAB, is still used as a general screening device by the services (Campbell, 2001; Maier, 1993; Zook, 1996). However, each service uses a unique set of ASVAB aptitude composites to assign recruits to service jobs.
Unfortunately, the initial calibration of the ASVAB resulted in a misnormed test. From 1976 until 1980, inflated ASVAB scores produced "hundreds of thousands of erroneous personnel decisions during the late 1970s" (Maier, 1993, p. 71). Scale scores were particularly inflated in the below-average range, causing a serious overestimate of the ability of people applying for enlistment. The misnorming first came to light in 1979 and 1980. As a result of the error, 50 percent of non-prior-service Army recruits had been drawn from the bottom 30 percent of the eligible youth population, in contrast to more recent statistics showing 60 percent of recruits coming from the top 50 percent of the eligible youth population (Shields et al., 2001). Because low-aptitude personnel were admitted into the military in substantial numbers, troop quality suffered (Laurence & Ramsberger, 1991).
Project A and building the career force
Not surprisingly, Congress became skeptical about the validity of entry test scores in predicting future performance in the military (Shields et al., 2001). In addition, the nation as a whole was questioning the fairness of civilian employment tests, and in 1978 the Civil Service Commission, the Department of Labor, the Department of Justice, and the Equal Employment Opportunity Commission jointly adopted the Uniform Guidelines on Employee Selection Procedures (U.S. General Services Administration, 1978). As a result, Congress issued a mandate, known as the Joint-Service Job Performance Measurement/Enlistment Standards (JPM) Project, requiring all the military services to demonstrate the validity of the ASVAB as a device for screening service applicants. Validity was defined as successfully predicting job performance, not just predicting training performance.
Each service was responsible for conducting its own research in response to this mandate. Project A, which was in the planning stages before the congressional mandate, went well beyond just validating the ASVAB; it was an extensive research program to validate and, perhaps most importantly, to expand U.S. Army personnel selection and classification techniques. Project A quickly became the Army's answer to the JPM requirement. This greatly expanded Army effort became possible, in part, because Maj. Gen. Maxwell Thurman, then the head of the U.S. Army Recruiting Command and a fast-rising star in the Army, pushed for a broader concept of soldier quality, which he continued to champion when he became a lieutenant general and the deputy chief of staff for personnel. Together with Dr. Joyce Shields, then head of the Manpower and Personnel Resource Laboratory at the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI), Gen. Thurman pushed the concept of whole-person evaluation, incorporating many diverse characteristics that could influence performance in addition to mental abilities, including psychomotor and spatial abilities, interests, and temperament. They were successful in convincing the Army and Congress to undertake and fund a project of enormous scope: Project A was to require measuring more than 60,000 soldiers in 21 MOS. Project A (1982–1989), along with its follow-up project, Building the Career Force (1989–1995), became one of the most influential projects in the history of selection, classification, and performance research, at least in the United States.
A project of this scope was too large for ARI or any single contractor to undertake, so ARI contracted with the Human Resources Research Organization (HumRRO), the American Institutes for Research, and the Personnel Decisions Research Institute, with HumRRO as the lead project integrator and John Campbell as the contractors' chief scientist. Dr. Newell Kent Eaton, chief of the Selection and Classification Technical Area, was the ARI scientist responsible for managing the project; he also served as the Army's chief scientist for the project.
The ultimate goal of Project A and Building the Career Force was to provide the Army with the "greatest possible increase in overall performance and readiness that could be obtained from improving selection and allocation of enlisted personnel. These sequential projects provided an integrated examination of performance measurement, selection and classification methods, and allocation procedures to meet the multiple goals of managing the Army’s human supply." (Zook, 1996, p. 14)
Operationally, its main intent was to develop the material needed for "assembling and validating a complete model of a selection and classification system such that the effects of using different kinds of performance criteria, different predictor batteries, different utility distributions for job assignments, and different value judgments about various priorities could be assessed." (Campbell, 1990, p. 238)
The “combined design” of Project A and Building the Career Force followed two major cohorts of soldiers in 21 MOS (new accessions for 1983/1984 and for 1986/1987), from enlistment through their first and second tours of duty. Thus, it involved two major validation samples, one concurrent and the other longitudinal.
"The concurrent sample, from which data were collected in 1985, allowed an early examination of the validity of the ASVAB, as well as a comprehensive battery of project developed experimental tests, to predict job performance for a representative sample of U.S. Army jobs. The longitudinal sample, consisting of well over 45,000 new recruits from whom data were collected from 1986 through 1992, allowed examination of the longitudinal relationship between ASVAB and the new predictors and performance at three stages in an individual’s career. It also allowed determination of how accurately current performance predicts subsequent performance both by itself and when combined with predictors administered at the time of selection." (Campbell, Harris, & Knapp, 2001, p. 31)
Not surprisingly, the relationships between certain predictors and performance were stronger in the concurrent data than in the longitudinal sample, although the nature of the relationships remained constant. In particular, temperament was a better predictor of "will do" performance in the concurrent validation than in the longitudinal validation.
Why Project A is important to military psychology
Project A and Building the Career Force provided key answers to the following question: What exactly is job performance? Intensive analysis of the huge soldier sample yielded five core common dimensions of performance. Two were proficiency dimensions—Core Technical Proficiency and General Soldiering Proficiency (termed “Can Do” dimensions)—and three were motivational dimensions—Effort and Leadership, Personal Discipline, and Physical Fitness and Bearing (termed “Will Do” dimensions). The concept of Can Do and Will Do dimensions of performance was not new, having been developed during and following WWII attempts to predict combat performance (Zeidner & Drucker, 1988). Conceptualizing performance along these two dimensions led to the task versus context performance distinction; these components are still seen today as key dimensions of job performance (Borman & Motowidlo, 1993). With Project A, the “classic prediction model was born” (Shields et al., 2001, p. 21), and it continues to serve as the dominant prediction model in personnel research, both in the military and in civilian worlds (Campbell, 1990).
References
Borman, W. C., & Motowidlo, S. J. (1993). Expanding the criterion domain to include elements of contextual performance. In N. Schmitt & W. C. Borman (Eds.), Personnel selection in organizations (pp. 71–98). San Francisco, CA: Jossey-Bass.
Campbell, J. P. (1990). An overview of the Army Selection and Classification Project (Project A). Personnel Psychology, 43, 231–239.
Campbell, J. P. (2001). Implications for future personnel research and personnel management. In J. P. Campbell & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification (pp. 577–599). Mahwah, NJ: Erlbaum.
Campbell, J. P., Harris, J. H., & Knapp, D. J. (2001). The Army selection and classification research program: Goals, overall design, and organization. In J. P. Campbell & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification (pp. 31–50). Mahwah, NJ: Erlbaum.
Harrell, T. W. (1992). Some history of the Army General Classification Test. Journal of Applied Psychology, 77, 875–878.
Laurence, J. H., & Ramsberger, P. F. (1991). Low-aptitude men in the military. New York, NY: Praeger.
Maier, M. H. (1993). Military aptitude testing: The past 50 years (DMDC-TR-93-007). Monterey, CA: Defense Manpower Data Center.
Shields, J., Hanser, L. M., & Campbell, J. P. (2001). A paradigm shift. In J. P. Campbell & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification (pp. 21–29). Mahwah, NJ: Erlbaum.
U.S. General Services Administration. (1978). Employee selection procedures: Adoption by four agencies of uniform guidelines—1978. Federal Register, 43(166, Pt. 4), 38290–38315.
Walker, C. B., & Rumsey, M. G. (2001). Application of findings: ASVAB, new aptitude tests, and personnel classification. In J. P. Campbell & D. J. Knapp (Eds.), Exploring the limits in personnel selection and classification (pp. 559–576). Mahwah, NJ: Erlbaum.
Zeidner, J., & Drucker, A. J. (1988). Behavioral science in the Army: A corporate history of the Army Research Institute. Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.
Zook, L. M. (1996). Soldier selection: Past, present, and future. Alexandria, VA: U.S. Army Research Institute for the Behavioral and Social Sciences.