Chapter 13
Gathering Client Information Using Selected Standardized Tests and Inventories: An In-Depth Approach

The evidence for how tests improve the process by which clients understand themselves and their fit with particular environments is compelling (Holland, 1997). Not only do tests improve the process and the outcomes of career counseling, but also the use of such assessments promotes a more scientific orientation and gives us ways to support and confirm what might otherwise be little more than good speculation. Although many decisions are explained as leaps of faith, most clients want to make decisions with prior support from hard evidence. Clients expect us to use the best tools that are available to help them in their exploration. We want to do that as well, especially when we find that it not only enhances the outcome but also provides us with better data to help other clients resolve their problems and reach their goals.

Finding the Right Standardized Tests

It is estimated that hundreds of assessment instruments might be of help in career counseling. There are interest, aptitude, and ability tests; personality and values inventories; strengths inventories; environmental assessments; state and trait measures; survey forms; card sorts; computerized assessments; and the list goes on and on. Current reviews of most tests can be found in the Mental Measurements Yearbook, now in its 18th edition (Spies, Carlson, & Geisinger, 2010), or in Tests in Print, now in its 8th edition (Murphy, Geisinger, Carlson, & Spies, 2011). It is our belief, however, that counselors typically learn to use well only a select number of assessments in career counseling. These are ones that best support or affirm their points of view and, with practice and experience, ones they become more and more proficient at using. These are usually the tests they learned to use in their graduate training; once on the job, they simply continue with these rather than learning about new ones that might be better or more useful.

We applaud becoming proficient with a particular group of tests but caution against settling on that set and closing ourselves off to new instruments. Many good instruments are still being developed (or at least were developed after many counselors finished their graduate training). Furthermore, as we learn more about the appropriateness of tailoring tests to particular populations and developing norms specific to gender, ethnic origin, race, spirituality, sexual orientation, and social class, the profession is questioning the value of many of its earlier approaches to assessment. It is a burgeoning area, and one can expect it to continue to change fairly dramatically in the next few years.

We can, however, be relatively sure that it will continue to be appropriate to use a variety of assessments to help better understand where a typical client is in the career counseling and planning process. We need to confirm results from one assessment with results from another. We also need to recognize that many of our traditionally accepted measures may not be appropriate for use with many clients from nonmajority populations who are now becoming more frequent users of career services. This is one more reason for remaining open to new measures. We may continue to rely heavily and even become dependent upon a small number of instruments, but we should still remain vigilant about our own assessment of new or additional instruments that might better complement the career counseling process.

In this chapter, we first suggest criteria for selecting appropriate standardized tests and then identify and focus on a small battery of assessment tools that have proven particularly effective in our professional practices and settings. It is important to keep in mind your orientation and employment setting as you identify instruments that can be most useful. If your orientation is to be directive, for example, and you want to be able to tell clients what to do, you want to find instruments that best support that approach. If you are in a setting that encourages time-limited career counseling or one that expects clients to be fairly self-directed, then you will want to choose instruments that promote outcomes in a timely fashion or allow clients to work with assessments on their own. Because many of us will change orientation or setting over time, we should remain flexible and open to new instruments; we may suddenly have to find new instruments to better meet our needs and the needs of our clients.

It is not unusual for professionals to depend on a select group of assessments that work particularly well for them in their particular setting. These assessments complement a particular orientation and setting. Here we describe the criteria that we believe influence such choices and that guided our selection of the particular instruments described in this chapter. We expect that you will want to use a similar process in choosing ones that should be most helpful to you in your setting. We find it helpful first to be clear about the criteria; this both makes it easy to identify the appropriateness of what you choose and helps you evaluate the need for new instruments as you move to new settings. As we take you through this process, think about your own orientation and setting and whether the criteria suggest similar or different instruments for you.

Criteria for Choosing Assessments

Some of the criteria for choosing particular instruments include (a) validity, (b) reliability, (c) cost, (d) time required for administration, (e) client response to the instrument, (f) training needed for scoring, (g) scoring difficulty, (h) norms, (i) training needed for interpretation, and, perhaps most important, (j) usefulness of the instrument to the client. How important each of these is may depend on a careful observation of the staff, the setting, and the clients to be served. For example, in some settings paraprofessionals might provide the initial contacts for clients. If so, one might need or choose instruments they can competently use. There are many good ones with which paraprofessionals can become competent, such as card sorts, the Intake Scale (Hope Scale), computerized assessments (SIGI [System of Interactive Guidance and Information], DISCOVER, CHOICES, etc.), My Vocational Situation (MVS; Holland, Daiger, & Power, 1980), the Occupational Dreams Inventory (Johnston, 1999), the Self-Directed Search (SDS; Holland, 1985), and other self-directed instruments. Paraprofessionals can describe the availability of other assessments well but not administer them themselves. They can refer clients to others for individual assessments and career counseling. Those professionals will in turn use a different set of assessments with which they are most comfortable. Selecting which ones to use depends on the individual’s background, training, place of employment, and personal preferences. It should also depend on some less obvious but equally important criteria that require us to do some homework before making our decisions.

First Points to Consider: Validity and Reliability

Two test properties must be considered first in the selection of instruments regardless of who will be administering them: validity and reliability. We refer you to basic measurement books (Anastasi & Urbina, 1997; Hogan, 2007) to appreciate the importance of these concepts. We also remind you that both of these are relative terms: You never find completely valid or reliable instruments, only some that are more valid or more reliable than others. Nevertheless, both validity and reliability need our careful and professional judgment, especially because many clients come to us expecting more than we can honestly provide. We need to be sure we are working with the very best of what is available and that we are always open to finding something better. This is one of our professional obligations.

Validity

Perhaps the most important test property to consider is validity. How well does the test really measure what it says it measures? In the career area, this often means the following: If an instrument indicates a person has strong interests in certain occupations, how true is that? Counselors can gain validity information in a variety of ways, such as by looking at how well the test predicts the behavior and satisfaction of the test takers (predictive validity), how closely it is related to constructs that seem theoretically similar (construct validity), and how unrelated it is to measures that seem theoretically dissimilar (discriminant validity).

It is also very important to examine the populations on which the instrument has been normed and the similarity of these populations to the clients to whom you will be administering the instrument. For example, if an instrument has been developed with college populations and you want to use it with adults, it would be very important to determine how valid the instrument is with your population.

This point is particularly true for instruments used with members of racial/ethnic minority groups, disabled individuals, and other underrepresented groups. Often the norming and psychometric development research indicates that the assessment measures have not been used with significant numbers of these persons. Thus, in many cases we have little information regarding the validity of instruments for these populations. Although we have no reason to think there would be any race-based biological difference in scores, we do know that we should recognize the different social and cultural histories and experiences of these underrepresented groups. For an excellent review of validity issues of career assessment instruments as they apply to the four major racial and ethnic minority groups in the United States, see Leong’s (1995) Career Development and Vocational Behavior of Racial and Ethnic Minorities. For another excellent review of the current status of career assessments for women, see Walsh and Heppner’s (2006) Handbook of Career Counseling for Women.

Reliability

Another critical property to examine is reliability. How stable is the instrument over time? If we give the instrument twice, spaced 3 weeks apart, and little has changed in our client’s experience, will we get almost the same scores? This property is very important in career assessments. If a test has poor reliability, an individual may be told one week that she has interests similar to those of a psychologist and the next week interests similar to those of a tax collector; these data are not very useful to her. Even more harmful is the fact that clients usually do not retake tests; if they are given unreliable information one week, they may act on it, never knowing it is the product of an unreliable instrument.
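The test-retest idea in this paragraph can be made concrete: reliability of this kind is usually summarized as the correlation between scores from the two administrations. Here is a minimal sketch using hypothetical scale scores (the numbers are invented for illustration, not drawn from any instrument's manual):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

week_1 = [42, 55, 38, 60, 47, 51]   # hypothetical scale scores at time 1
week_4 = [44, 53, 40, 58, 45, 52]   # same clients a few weeks later

# Values near 1.0 suggest the instrument yields stable scores over time.
print(round(pearson_r(week_1, week_4), 2))
```

A test-retest coefficient near zero would mean the psychologist-one-week, tax-collector-the-next scenario described above is a real risk.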

The developer of an assessment instrument is always concerned with how valid and reliable the instrument really is and usually will be constantly refining it. New items can be added, or items that did not contribute much to the reliability can be deleted or replaced with better items. Depending on the construct being assessed, however, we need certain levels of assurances that what has been measured is valid and reliable. Yes, we can qualify our reporting in appropriate ways, but if we cannot speak with some certainty about the probability of success with the measurement of interests, aptitudes, skills, and other important traits, we lose credibility with our clients.

If the available measures in this field were as valid and reliable as they are in some of the hard sciences, we could move quickly beyond discussion of the nuances of validity and reliability. In the hard sciences, it may be possible to predict a happening with some certainty and be assured that it will always happen that way. That measure may be both perfectly valid and reliable. In dealing with people, and particularly when free choice is part of the equation, no measure is perfectly valid or reliable. The best instruments will always leave room for one to question the results. Even when the measure used is quite valid, the individual will take the information and process it with other information that is available. That is precisely why we need to spend time interpreting and integrating results from various standardized tests.

Several points should be made here. We must first attend to the validity and reliability of our measures in the interest of selecting good tests and because we later depend on that information to support our professional interpretations or judgments. Additionally, however, we must recognize that we will always be choosing instruments that are not as valid as we would like them to be. Standardized tests are available to help us make better judgments; they do not make judgments for us or for the client. Once we accept this, we can properly view these measures in the same way we do any other set of resources that are available to help us with the career counseling process. For further information regarding the validity and reliability of specific tests, see the references cited earlier in the chapter, particularly the various editions of the Mental Measurements Yearbook (Spies et al., 2010) and manuals that are available for each assessment you choose to use.

Other Points to Consider

Let us continue our discussion of criteria for choosing a standardized instrument. Are we looking for an instrument to administer to one client or to a large group? When one wants assessments to be available for large numbers, then the costs, time required for administration, ease of scoring and reporting results, extent to which one needs training to interpret the results, and usefulness of the results to the client all become issues. We sometimes choose a measure primarily because it is cost effective for large numbers, it can be scored immediately, or the results do not require professional interpretation.

Expanded criteria for consideration of which instruments to use in mass administrations would include not only the validity and reliability of the instrument but also the cost. Next would be how much time is involved in administration and the expected response of the client to taking the test. Also, if you give it to one or a hundred students, how easy is it to score? Or even more important, can it be self-scored in some situations? Can you receive immediate feedback on the results, or will it have to be sent away to be scored? Is training required to score the instrument? Are there good norms for the group you are testing? Is there an interpretive guide that might let the client understand the results without seeing a counselor? In our career center, all of these are important considerations in deciding on an instrument to administer to students. In Table 13-1, we list the criteria for choosing, for example, the MVS over other standardized tests. You can see that it measures up well to the criteria suggested.

Table 13-1. Criteria Used for Selecting the My Vocational Situation

Criterion                                         Comment
Validity                                          See reference^a
Reliability                                       See reference^a
Cost                                              Minimal
Time for administration                           Very short
Client response to taking instrument              Usually positive
Training needed for scoring                       Minimal
Scoring                                           Self-scoring, easy
Availability of norms for particular population   Excellent
Training needed for interpretation                Minimal
Usefulness to client                              Considerable
^a Holland, J. L., Daiger, D. C., & Power, P. G. (1980). My Vocational Situation. Palo Alto, CA: Consulting Psychologists Press.

You may want to start with several low-cost screening tools, like the MVS (Holland et al., 1980). A critical review of the usefulness of the MVS is provided in an article by Holland, Johnston, and Asama (1993). There are many good, short assessment instruments that are inexpensive, easily administered and scored, and useful in workshops or other situations where immediate scoring and feedback are important. Reviews of these appear in A Counselor’s Guide to Career Assessment Instruments (Whitfield, Feller, & Wood, 2009) or any of the Mental Measurements Yearbooks (Spies et al., 2010), which is also where you will find extensive consideration of the issues of validity and reliability of most instruments. We also encourage you to read carefully the manuals for all instruments you choose. Authors are very concerned that you understand what is available to support their assessments. We should reinforce here, however, that typically one can and often must assess the usefulness of a measure as one part of an assessment battery. That is, usually it is only one of a number of measures or indexes that is available or that we have administered, and it becomes important to consider how well it works in concert with those other instruments.

A Basic Battery of Career Assessment

In a general way, we look for methods of assessing the state of people’s vocational situations (e.g., MVS-type measures), their interests, their personalities, and perhaps their skills, strengths, aptitudes, values, and beliefs. We may need an assessment instrument that works well in each of these areas. When we are choosing a battery of instruments, we add to our criteria that the assessment instruments work well together or perhaps even complement one another. Sometimes you have to reassess the usefulness of an instrument when it is only one of many assessments you want to use. But let us look at a basic instrument for each of the areas we have identified and select one that might best meet the criteria we have suggested would be important. Although we apply the criteria to the use of the instruments in a career center, you should think about how well each instrument would apply where you work.

A Basic Interest Inventory: The SDS

The most widely used assessment of interests today is the SDS, written by John Holland (1985). It can be easily administered, self-scored, and self-interpreted, and it provides a vast array of information that a self-directed client can find particularly useful. It subtly helps teach a system or plan for further career exploration, and, with the aid of several guides, it can direct one to further explore possible jobs or careers that would be appropriate given a particular pattern of scores. It has good validity and reliability; is easily administered, scored, and interpreted; is reasonable in cost; is available in hard copy or online; can be given to almost any client (reading level is minimal, and Form E [Holland, Powell, & Fritzsche, 1994] is available for even lower reading levels); and fits well with other instruments routinely used in a career center.

What helped most in determining our choice of this instrument for use with students was our decision in 1985 to adopt Holland’s six occupational themes (Realistic, Investigative, Artistic, Social, Enterprising, and Conventional) as an organizing feature of the University of Missouri Career Center. Information is stored in the pattern suggested by the scores received on the instrument. Individuals look for additional information in line with their scores on the instrument, and this is the way it is organized in our career center. It also permits broad consideration of a variety of factors relevant to career exploration—including occupational daydreams and interests and skills—and the client can easily ascertain how the scores were derived.

Other good interest inventories may be equally appropriate in other settings. These include the New Revised Strong Interest Inventory Assessment (Donnay, Morris, Schaubhut, & Thompson, 2005) and the Kuder Career Search (Zytowski, 2005). One can argue for having more than one assessment of interests for some clients, but the basic day-to-day practical value of the SDS for use in our career center is convincing. Paraprofessional staff members can be helpful with the administration and interpretation, and it reinforces what we teach and what we want students to learn about the career exploration process. We want students to explore on their own, and we want this exploration to organize and simplify a complex process. Moreover, the SDS allows for providing immediate feedback, which is also quite important.

A Basic Measure of Personality

Personality measures may assume more importance in some settings than others, but it is fair to say everyone needs at least one measure that can assess the basic dimensions of personality. Which dimensions we measure may depend on who our typical client is and the typical concerns of the client. When making career assessments in a school or college setting, you are usually dealing with high-functioning individuals. You are looking for indexes of fit with particular majors, careers, or job environments, and you most often are sharing that information with the client. This makes some instruments more applicable than others. We opt for the short form of the NEO Personality Inventory (Costa & McCrae, 1992) for some of the same reasons we provided for using the SDS. It has good psychometric properties (validity and reliability), can be self-directed and completed in a short period of time, has good interpretation guides or talk sheets to supplement an understanding of the results, and is inexpensive and unobtrusive for most clients. Not all measures of personality meet those criteria.

After many years of personality assessment, some consensus appears to have emerged about the essential indexes to be considered. The NEO Personality Inventory is cited as measuring well all of “the Big Five” indexes: Neuroticism, Extraversion, Openness, Conscientiousness, and Agreeableness. All of these can be shown to be related to career decision making, and all are dimensions that clients can easily use in understanding their particular approaches to dealing with career issues and problems. The relationship of NEO Personality Inventory scores to SDS scores strongly supports the relatedness of personality and interests (Gottfredson, Jones, & Holland, 1993).

A Measure of Aptitudes and Skills

Career clients frequently ask for some test of their skills. They want to know what they really are good at, and yet, particularly with college students, they already know this based on their high school performance and often some work experience. What they are more likely asking is “What aptitudes or skills do I have that might better determine my choice of a major or eventually help me find employment?” Even when the evidence is already there, they seek confirmation of it or better ways to use the aptitude or skill they know they possess. One will hear something like “I’m good at math, but I don’t know what to do with it” or “I’m good at drawing, but who would hire me?” Sometimes the inquiry begins when a student is having difficulty with a course that is required in the major, and that starts a reexamination of his or her strengths. It becomes a matter of believing that an alternative is needed. More often than not, students seek reassurance, and when you help them review their high school grades, their SAT or ACT scores, and the courses or life experiences they have enjoyed the most, they reconsider the need for any extensive reexamination of aptitudes or skills.

The SDS, for example, provides an inventory of skills and aptitudes as part of the assessment of an individual’s identification with the six areas of interest. Most clients do not realize this and need to have it called to their attention. They complete the instrument with little hesitation, and, when confronted with the evidence of how well their estimates correlate with more objective measures of their aptitudes, they often elect not to take additional tests.

Laid-off workers or employees being asked to transfer to an entirely new job may be more insistent on taking aptitude or skill measures, and more genuinely in need of them. When appropriate, a good aptitude or skill measure can be quite helpful.

The Campbell Interest and Skill Survey (Campbell, 1995) is an assessment tool that meets many of the criteria we have established for practical use in our center. It has good validity and reliability, is easy to administer, can be taken online (http://www.pearsonclinical.com/psychology/products/100000323/campbell-interest-and-skill-survey-ciss.html), and is relatively inexpensive. However, some compromise with our established criteria may be necessary to find a good measure. For example, choosing to measure aptitude in ways that will provide clients with evidence not already available to them may take some time. If clients have not accumulated enough evidence to be sure of their aptitudes and skills, they must expect to invest some time in finding more than reassurance.

At the high school level something else might make more sense. In many school districts, counselors are required to use particular aptitude measures; of course, you should first learn as much as you can about those measures and make use of them as opposed to introducing others.

A Measure of Internal Resources to Aid in the Career Transition

Although the field of career development has produced numerous psychometrically sound measures to assess career interests, values, and skills, until recently no measures had been designed to help clients assess and understand the internal, dynamic psychological processes that may get in the way of a career transition. Because having such instruments available to the career counselor is critical to providing the holistic blend of the personal and career domains central to our model of life career development, we chose the Career Transitions Inventory (CTI) to help clients understand the unique psychological responses that help or hinder their transitions.

The CTI is provided here as an example of an instrument designed to assess critical dynamic factors operating for the client and to allow counselors to intervene in more targeted and specific ways. The CTI (Heppner, 1991) is a 40-item Likert-type instrument designed to assess an individual’s internal process variables that may serve as strengths or barriers when he or she is making a career transition.

The responses for the items range from 1 (strongly agree) to 6 (strongly disagree). Factor analysis has revealed five factors: (a) Career Motivation (Readiness), (b) Self-Efficacy (Confidence), (c) Internal/External (Control), (d) Perceived Support (Support), and (e) Self Versus Relational Focus (Decision Independence–Interdependence).

High scores are positive and indicate that individuals perceive themselves to be doing well in that area; low scores indicate barriers. Thus, a high score on the Readiness factor indicates that one is highly prepared and motivated to make a career transition (e.g., “I am feeling challenged by this career transition process, and this knowledge keeps me motivated”). A high score on the Confidence factor means that one is highly confident in his or her ability to make a successful career transition (e.g., “I feel confident in my ability to do well in this career transition process”). A high score on the Control factor indicates that the person feels he or she has control over the career planning process (e.g., “The outcome of this career transition process is really up to those who control the system,” reverse scored). Similarly, a high score on the Support factor indicates a greater amount of perceived social support associated with changing one’s career situation (e.g., “Significant people in my life are actively supporting me in this career transition”). Finally, a high score on the Decision Independence factor indicates that the client feels he or she can make decisions regarding his or her career as an independent, autonomous individual (e.g., “Although family and relationship needs are important to me, when it comes to this career transition, I feel I must focus on my own needs”).
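The scoring logic just described (agree-disagree responses, some items reverse scored so that high scores are always positive, items combined within factors) can be sketched in a few lines. To be clear, the item numbers, reverse-keyed items, and item-to-factor assignments below are invented for illustration; the actual CTI scoring key appears in its manual:

```python
# Hypothetical scoring key for a Likert instrument with reverse-keyed items.
REVERSE_KEYED = {3, 7}          # invented reverse-scored item numbers
FACTOR_ITEMS = {                # invented item-to-factor assignments
    "Readiness":  [1, 2, 3],
    "Confidence": [4, 5],
    "Control":    [6, 7],
}

def score_factors(responses):
    """responses: dict of item number -> raw response (1-6)."""
    # On a 1-6 scale, reversing an item means replacing r with 7 - r.
    scored = {
        item: (7 - r if item in REVERSE_KEYED else r)
        for item, r in responses.items()
    }
    # Sum the (re)keyed item scores within each factor.
    return {
        factor: sum(scored[i] for i in items)
        for factor, items in FACTOR_ITEMS.items()
    }

client = {1: 5, 2: 6, 3: 2, 4: 4, 5: 5, 6: 3, 7: 1}
print(score_factors(client))
```

Reverse keying is what lets a negatively worded item such as the Control example above ("...really up to those who control the system") contribute to a high score when the client disagrees with it.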

Heppner, Multon, and Johnston (1994) calculated Cronbach’s alpha coefficients for each of the factors and the total score for the CTI. These coefficients were as follows: .87 (Readiness), .83 (Confidence), .69 (Control), .66 (Support), and .83 (Decision Independence). The alpha for the total inventory was .90. The CTI has been found to correlate positively and significantly with age, marital status, length of time in the transition process, and five global ratings of coping (e.g., perceived level of stress in the career transition process). In addition, enduring personality traits such as those measured by the NEO Personality Inventory (Costa & McCrae, 1985) have been found to predict career resources as measured by the CTI. For example, openness to experience has been found to predict all five factors of the CTI, indicating that a willingness to try new things is an important personality variable predicting how one negotiates the career transition process. (The CTI has been translated into Mandarin Chinese, Arabic, Korean, Italian, and Japanese.)
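For readers unfamiliar with Cronbach's alpha, the coefficients reported above can be computed from item-level data as follows. The respondent-by-item scores here are hypothetical, chosen only to show the mechanics:

```python
def cronbach_alpha(rows):
    """Cronbach's alpha for rows of per-respondent item scores."""
    k = len(rows[0])                      # number of items
    def variance(xs):                     # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    # Variance of each item across respondents, and of the total scores.
    item_vars = [variance([row[i] for row in rows]) for i in range(k)]
    total_var = variance([sum(row) for row in rows])
    # alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [          # hypothetical: 4 respondents x 3 items on one factor
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
]
print(round(cronbach_alpha(data), 2))
```

Alpha rises as items on a factor covary; values like the .87 and .83 reported for the CTI's Readiness and Confidence factors indicate that the items on each factor hang together well.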

Other Supplementary Measures

The Hope Scale is a good example of a brief measure that has proven particularly helpful to us in working with groups of clients facing layoffs. To establish in an easy fashion whether there are participants in these groups with minimal goals (low self-efficacy) or confusion as to how to achieve their goals (low on pathways), we routinely use the Hope Scale, a 16-item inventory initially devised by C. R. Snyder (Snyder et al., 1991). We changed the name of the inventory to The Intake Scale (with permission) and used it for screening; adults who indicated little sense of goals or pathways were then given more individual attention in the workshops. It also holds up well according to our criteria.

Numerous such inventories are available, and they often serve a useful role when other more time-consuming assessments might not be practical. The Hope Scale, like the MVS, can be used by the skilled practitioner with minimal training and can be scored in virtually no time at all. Using the Occupational Dreams Inventory, a stand-alone adaptation of the first part of the SDS, is an easy way to begin a conversation with a client about career plans. (You can create your own version of the Occupational Dreams Inventory or request a copy of it from Johnston, 1999; see also Figure 13-1.)


Figure 13-1. Occupational Dreams Inventory

Other supplemental instruments of note are discussed in more detail in Chapters 10 through 12. Chapter 14 covers in some detail two related instruments: one on personal and work styles (INSIGHT Inventory) and one on personal strengths (Clifton StrengthsFinder). The latter, along with the Values in Action (VIA), are receiving considerable attention in the emerging field of positive psychology. Card sorts are considered somewhat standardized, but we chose to devote a separate chapter to this approach (see Chapter 12).

Bringing Test Data Together

We give tests to supplement other data we have on clients or data that clients have on themselves. In many cases, we gather assessment data to help establish a clearer picture of the client and then, in an individual session, convey that information to our client. Both tasks require gathering enough of the right information so it can then be presented in a coherent and credible fashion to help us understand the client and eventually to help the client better understand himself or herself. You can facilitate the process by putting all the test data on a single page or into a single report following a standard format. Figure 13-2 shows how we typically record standardized test data with the SDS as the main instrument. This process helps us see the relatedness of various scores. In putting all the scores on one page, we are forcing ourselves to be sure to use all the measures assessed in as integrated a way as possible. After all, we are looking at a single individual through multiple measures or lenses to support one composite picture.
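The single-page format described above can be as simple as a small script that lines up each measure and its score in one standard layout. The instrument labels and scores below are placeholders, not a prescribed report format:

```python
# Hypothetical scores gathered from several instruments for one client.
CLIENT_DATA = {
    "SDS summary code": "SIA",
    "SDS daydreams code": "SAE",
    "NEO Openness (T score)": 62,
    "CTI Readiness": 48,
    "MVS Vocational Identity": 12,
}

def one_page_report(data, title="Assessment Summary"):
    """Render all scores in one aligned, single-page text summary."""
    lines = [title, "=" * len(title)]
    width = max(len(k) for k in data)          # align the score column
    for measure, score in data.items():
        lines.append(f"{measure:<{width}}  {score}")
    return "\n".join(lines)

print(one_page_report(CLIENT_DATA))
```

Seeing every score in one place, as in Figure 13-2, is what makes it possible to check whether the measures converge on a single composite picture of the client.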

img

Figure 13-2. Interpretive Guide for Understanding Client’s Self-Directed Search Scores™

The process just described—being sure to use all the available data properly—is an appropriate approach for interpreting a single set of scores from any one assessment. We should remember that test profiles are designed to be used primarily by professionals. The average client is not going to understand the results from a profile without our interpretation. As good as we are at interpreting data, clients do not always hear all we are trying to say, or hear it as accurately as we hope. Our interpretations can be complemented with interpretive guides or “talk sheets,” which can help make the results more easily understood and provide a written record for later reference by the client.

These guides or talk sheets should be available for all standardized assessment measures. If one is not available, we encourage you to develop your own. You can request copies of ones used at the University of Missouri Career Center through its Web site (http://career.missouri.edu).

Integrating Test Data

We can best illustrate the integrated use of assessment data by showing our own use of a variety of measures with a single client. In this case, we present a client who was part of a 2-day workshop in which the assessments were administered and interpreted. This meant we needed to give careful consideration to several factors, including the time it would take to administer and score the measures, ease of scoring and interpretation, and, because follow-up would be difficult, how easy it would be for clients to understand the results later without our assistance. Because we typically work with each client as one member of a larger group and would need to do some of the interpretations in the group, we chose instruments that were easy to administer and complete; easy and quick to score and interpret; and unobtrusive, because clients would need to share results with others. Finally, as a battery of tests, the instruments should not take too much of clients’ time to complete.

We settled on the Hope Scale, the MVS, the SDS, the NEO Personality Inventory, and the CTI. This gave us a breadth of assessments that would not take too much of clients’ time to complete, and we found that each assessment contributed in some meaningful way to how we approached the participants and what they wanted to know about themselves. With a broad brush, we used the Hope Scale and the MVS to quickly identify any participants who felt they were without goals or pathways, or without a clear enough sense of vocational identity to profit from the workshop format. If this were evident early, we could provide more individual time with these participants. We also used these two measures to help ourselves make appropriate sense of the other assessment scores.

We devised an “Interpretive Guide for Understanding Clients’ SDS Scores” and completed it for each workshop participant (form available upon request). This form incorporates on a single page all the relevant observational and test data. First, we record the relevant background data, which usually includes age, employment status (previous and current), gender, race, education, family history, and other incidentals that might have a bearing on understanding and interpreting the test data. Some clues may come from an intake form, from introductions made in the workshop, or from information offered in a phone conversation as one enrolls for the workshop. Perhaps most important is identifying the expressed career choice(s) of the client. This is sometimes overlooked, and yet it clearly will be a significant part of a client’s eventual career decision. Next we record the current occupation and work history of the client. (When we are working with a student, “occupation” often may be more appropriately recorded as “academic major.”) Using the Dictionary of Holland Occupational Codes (Gottfredson & Holland, 1996), we give that occupation a three-letter code. Then we record the occupational daydreams as taken from a personal data sheet.

We then code each occupational daydream, again using the Dictionary of Holland Occupational Codes. We later compare the codes of these daydreams with the measured interests established by the interest inventory; in the SDS, the daydreams appear on the first page of the inventory itself. The daydreams are recorded on the guide sheet because they, like the expressed career choice(s), are often not given their due weight in the eventual discussion of choices based on the more objective test data. It may appear as though we give too much attention to the daydreams, which are not as objective, but evidence shows that they predict eventual choice(s) about as well as any other measure one may use. In fact, if a client has already taken the SDS or some other interest inventory, we suggest you supplement it with the Occupational Dreams Inventory. Dreams should always be considered in any discussion of career plans.

Next we record other assessment data, such as the MVS, Hope Scale, or other intake scores. We include the relevant high and low scores from the CTI. A rough classification of scores as high, moderate, or low makes it easy to record and interpret them; you could devise a similar system for whatever inventories you use. We add any other measures that might influence the way we will interpret the eventual total array of test scores. For example, a low MVS or Hope Scale score would be important to note if we wanted to make appropriate use of one-on-one time with a client; that is, low scores suggest that individual time might be especially important or useful for that client. All of these scores are important overlays to interpreting the SDS scores.

The SDS scores are recorded on the final page of the SDS in the form of actual raw scores from each of the five objective measures reported within the inventory: the activities scores, the competencies scores, the scores from the client’s reactions to occupational titles, and the two self-estimate measures (one on abilities and the other on skills). Given that we have five measures and six scores on each measure, 30 scores in total are recorded. When total scores are compiled for each of the six areas, we have 36 scores from the interest inventory alone. All of these contribute to the summary code that is reported as the client’s SDS code. By visual inspection, one can see how each of the scores does or does not contribute to the final code. One also records the client’s self-estimates or self-efficacy ratings and can flag any inconsistencies for later discussion.
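For readers who want to see the tallying in concrete form, here is a minimal sketch in Python. It is purely illustrative, not the official SDS scoring procedure, and the measure names and score values are invented for the example; it simply sums the five raw subscores for each of the six Holland types and reads off the three highest totals as the summary code.

```python
# Illustrative sketch only; not the official SDS scoring procedure.
# Each of the five measures maps the six Holland types to a raw score,
# so 5 measures x 6 types = 30 raw scores are recorded in all.
RIASEC = "RIASEC"

def summary_code(subscores):
    """Total the five subscores per type; the summary code is the three
    highest-scoring types in descending order of their totals."""
    totals = {t: sum(measure[t] for measure in subscores.values())
              for t in RIASEC}
    ordered = sorted(RIASEC, key=lambda t: totals[t], reverse=True)
    return "".join(ordered[:3]), totals
```

Adding the six area totals to the 30 raw scores yields the 36 scores described above, and laying the five per-measure rows over the totals row is what lets one see, by inspection, which measures drive or undercut the final code.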

Here we can make note of Holland’s suggested rules for interpreting the SDS. We first remind ourselves to list all three-letter codes that should be explored with the client based on the actual pattern of summary scores. Often referred to as the “rule of eight,” this implies that summary scores really are not different unless they differ by eight or more points (perhaps six or more points for older adults). Therefore, consider all possible patterns that should be explored. If the highest score, say R, and the second highest score, say E, differ by only five points, you should explore both RE occupations and ER occupations, because both are equally appropriate based on what has been measured. The client may be inclined to take the ordering literally and not explore fully unless you make a point of this as part of the interpretation. (See the manual [Holland et al., 1994] for a more complete discussion of the rule of full exploration.) You also need to make note of any special issues that need to be explained. For example, a low flat profile may need some explanation, as might measured interests that seem to be the opposite of those expressed, a summary code that seems overly influenced by previous work experience or a societal expectation, or a code that is rare and hence will not suggest many options when one looks for occupational alternatives. These are issues that may need to be pointed out and discussed with the client as you try to make the best use of the test data. Finally, record hypotheses that you believe are worth exploring based on what you have observed and measured. These can be written in such a way that they can be shared, refined, or refuted with the client. The SDS manual gives numerous examples of the proper use of scores. We urge you to consult the manual to improve your interpretive skills.
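One way to see what full exploration implies is the small sketch below. It reflects our reading of the rule of eight, not code from the SDS manual, and the score values are invented; it lists every ordering of the top three types in which no letter is placed ahead of one that outscores it by eight or more points.

```python
# Illustrative sketch of the "rule of eight" as we read it; not code
# from the SDS manual. Summary scores within 8 points of one another
# are treated as effectively tied, so each ordering of those letters
# is a code worth exploring with the client.
from itertools import permutations

def codes_to_explore(totals, threshold=8, length=3):
    """totals: the six RIASEC summary scores. Returns every code of the
    given length in which no letter precedes another that outscores it
    by `threshold` or more points."""
    top = sorted(totals, key=totals.get, reverse=True)[:length]
    codes = set()
    for perm in permutations(top):
        # A letter may come first only if the gap behind any letter
        # it displaces is smaller than the threshold.
        if all(totals[perm[i]] > totals[perm[j]] - threshold
               for i in range(length) for j in range(i + 1, length)):
            codes.add("".join(perm))
    return sorted(codes)
```

With invented scores of R = 30, E = 25, and S = 10, the 5-point gap between R and E yields both RES and ERS, mirroring the RE/ER example: the ordering of R and E is not trustworthy, but S is clearly third.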

The Case of James

The use of the interpretive guide may be clearer if we apply the process to an actual case. In Figure 13-2 we present scores for James (a participant in a workshop), who was laid off as a factory worker in rural Missouri. James was 56, White, male, working on his general equivalency diploma, and planning to attend a community college now that he had time for it. His most recent job was as a machine operator, which is coded RIE. Previously, he had held jobs as a shoe cutter (RSE), factory inspector (RSE), and general factory worker (REC), all in the same plant. His daydreams were to be a parole officer (SIE), an accountant (CRS), or a tax preparation assistant (CES). Other assessments included a very low score on the MVS (04); Hope Scale scores of 13 on Goals and 13 on Pathways; and CTI scores high on Readiness and Decision Independence, moderate on Support and Confidence, and low on Control. SDS scores are reported in Figure 13-2 and were generally low. Most recordings of the eventual code supported S as the highest code and C as the next, except that James did not feel he had competencies in the C area (score = 3) or very high estimates of his abilities or skills (4 and 3) in that area. Codes to explore following the rule of full exploration include SCI and SCE. Special issues to be noted include his minimal years of education, his age, the relatively low profile of scores on the interest inventory, and the rather uncommon SDS code for an adult. At least two hypotheses present themselves: Is further schooling a realistic goal? And do the daydreams reflect appropriate next moves? Even before talking to him, we know at least two questions need answers.

The talk sheet helps one focus on all the available data on the client. With practice, it helps one develop hypotheses about the client, and it may help one see what is consistent and what still needs further exploration or explanation. The process is not unlike the one any professional—doctor, lawyer, or accountant—might use in compiling information before an interview with a client. You might in time do just as well without such forms, but initially the forms promote a discipline that can help make all of us better counselors.

Closing Thoughts

Throughout this chapter, we have illustrated both the types of standardized instruments that are useful and the logic of using a number of them in helping to construct a coherent picture of a client. We have emphasized how putting it all together in an interpretive guide helps with the process and how it takes effort and skill to do this for a client. This chapter adds standardized tests to other career assessment approaches already discussed, such as the structured interview and the career genogram. The next chapter introduces some less standardized approaches to career assessment. Although we probably will come to rely on a small group of assessments as we work with clients, we again stress that we always need to remain open to finding new measures and approaches to making information work for us and our clients. The two instruments discussed in the next chapter are examples of approaches that may prove both novel and useful to clients.

References

  1. Anastasi, A., & Urbina, S. (1997). Psychological testing (7th ed.). Upper Saddle River, NJ: Prentice Hall.
  2. Campbell, D. P. (1995). The Campbell Interest and Skill Survey (CISS): A product of ninety years of psychometric evolution. Journal of Career Assessment, 3, 391–410.
  3. Costa, P. T., & McCrae, R. R. (1985). The NEO Personality Inventory manual. Odessa, FL: Psychological Assessment Resources.
  4. Donnay, D. A. C., Morris, M. L., Schaubhut, N. A., & Thompson, R. C. (2005). Strong Interest Inventory manual: Research, development and strategies for interpretation. Mountain View, CA: Consulting Psychologists Press.
  5. Gottfredson, G. D., & Holland, J. L. (1996). Dictionary of Holland occupational codes (3rd ed.). Odessa, FL: Psychological Assessment Resources.
  6. Gottfredson, G. D., Jones, E. M., & Holland, J. L. (1993). Personality and vocational interests: The relation of Holland’s six interest dimensions to the five robust dimensions of personality. Journal of Counseling Psychology, 40, 518–524.
  7. Heppner, M. J. (1991). Career Transitions Inventory. (Available from Mary J. Heppner, PhD, University of Missouri, 201 Student Success Center, Columbia, MO 65211)
  8. Heppner, M. J., Multon, K. D., & Johnston, J. A. (1994). Assessing psychological resources during career change: Development of the Career Transitions Inventory. Journal of Vocational Behavior, 44, 55–74.
  9. Hogan, T. R. (2007). Psychological testing (2nd ed.). Hoboken, NJ: Wiley.
  10. Holland, J. L. (1985). Self-Directed Search. Odessa, FL: Psychological Assessment Resources.
  11. Holland, J. L. (1997). Making vocational choices: A theory of vocational personalities and work environments (3rd ed.). Odessa, FL: Psychological Assessment Resources.
  12. Holland, J. L., Daiger, D. C., & Power, P. G. (1980). My Vocational Situation. Palo Alto, CA: Consulting Psychologists Press.
  13. Holland, J. L., Johnston, J. A., & Asama, N. F. (1993). The Vocational Identity Scale: A diagnostic and treatment tool. Journal of Career Assessment, 1, 1–12.
  14. Holland, J. L., Powell, A. B., & Fritzsche, B. A. (1994). Professional user’s guide. Odessa, FL: Psychological Assessment Resources.
  15. Johnston, J. A. (1999). Occupational Dreams Inventory. Columbia: University of Missouri.
  16. Leong, F. T. L. (Ed.). (1995). Career development and vocational behavior of racial and ethnic minorities. Hillsdale, NJ: Erlbaum.
  17. McCrae, R. R. (Ed.). (1992). The five factor model: Issues and applications [Special issue]. Journal of Personality, 60(2).
  18. Murphy, L. L., Geisinger, K. F., Carlson, J. F., & Spies, R. A. (2011). Tests in print VIII. Lincoln: University of Nebraska Press.
  19. Rakes, T. D., & Johnston, J. A. (1992/1995). Interpretive guide for understanding clients. Columbia: University of Missouri Career Center.
  20. Snyder, C. R., Harris, C., Anderson, J. R., Holleran, S. A., Irving, L. M., Sigmon, S. T., . . . Harney, P. (1991). The will and the ways: Development and validation of an individual-difference measure of hope. Journal of Personality and Social Psychology, 60, 570–585.
  21. Spies, R. A., Carlson, J. F., & Geisinger, K. F. (Eds.). (2010). The eighteenth mental measurements yearbook. Lincoln: University of Nebraska Press.
  22. Walsh, W. B., & Heppner, M. J. (Eds.). (2006). Handbook of career counseling for women (2nd ed.). Hillsdale, NJ: Erlbaum.
  23. Whitfield, E. A., Feller, R. W., & Wood, C. (Eds.). (2009). A counselor’s guide to career assessment instruments (5th ed.). Tulsa, OK: National Career Development Association.
  24. Zytowski, D. G. (2005). Kuder Career Search with Person Match: Technical manual version 1.1. Adel, IA: National Career Assessment Services.