Social Anxiety

Please look at the attached article; it will help you understand what I am looking for.

Also, I will need you to attach the article you will be using to the paper, along with the works cited page.

 

1. Project 1: Critique an empirical article

Individual Project. Everyone will be asked to critique an empirical article of their choice. You are encouraged to pick an article in an area that you are interested in and might consider doing research in (I am interested in social anxiety). The article must be (1) published within the last 5 years; (2) original, empirical research (i.e., data must have been collected for the purpose of addressing/attempting to answer a specific research question); and (3) published in a reputable journal (i.e., it shows up in a search of at least one of the following databases: PubMed, PsycInfo, or Web of Science). It is very much preferred, but not required, that the article be brief in length, as we will all need to read these articles as a class over the semester.

The critique will be 1-2 pages in length, typed and double-spaced, with 1-inch margins. Please make sure your full name, the class number, and “Article Critique” appear at the top of the page (nothing else). The body of the critique will begin with a summary paragraph of 5-8 sentences, summarizing the major points of your critique. Below that, you will provide bullet points that address the following:

· What was the major research question or questions the authors were interested in examining?

· What were the independent and dependent variables in the study?

· What kind of research design was used in this study?

· What were major strengths of the research design?

· What were limitations of the research design?

· Describe all threats to internal validity that you can identify.

· Describe all threats to external validity that you can identify.

· Were the conclusions of the study in the Discussion section appropriate given the threats to validity you listed?

· If you were to change one aspect of the design or conduct of this study, what would you change and why?

Things to consider when critiquing the work of others

· Tone: Be respectful, sensitive, and positive where you can. Never demean someone, and do not act as if you know all the answers. Avoid all superlatives.

· Purpose: Try to understand what the researcher is trying to do. Make specific and clear constructive criticisms and suggestions based on that understanding. Avoid generalities.

· Balance: Put all specific criticisms in the context of an appreciation for the big picture, the overall purpose of the research. Also make note of strengths and the positive aspects of the study and write-up.

· Rationale: Are you clear from the introduction what the rationale for the study is, and is the rationale logical and clear? If not, why, what is missing?

· Method: Is the method clear, appropriate, and adequate? What are the major potential confounds given these method choices?

· Discussion: Does the author properly explain the results without excessive certainty? Do they clearly sum up the major findings? Do they appropriately acknowledge the limitations of the results, given their design decisions? Do they suggest future directions, and are those directions logical and clear based on the results?

Qualitative Data Analysis: A Compendium of Techniques and a Framework for Selection for School Psychology Research and Beyond

Nancy L. Leech, University of Colorado Denver

Anthony J. Onwuegbuzie, Sam Houston State University

Qualitative researchers in school psychology have a multitude of analyses available for data. The purpose of this article is to present several of the most common methods for analyzing qualitative data. Specifically, the authors describe the following 18 qualitative analysis techniques: method of constant comparison analysis, keywords-in-context, word count, classical content analysis, domain analysis, taxonomic analysis, componential analysis, conversation analysis, discourse analysis, secondary analysis, membership categorization analysis, narrative analysis, qualitative comparative analysis, semiotics, manifest content analysis, latent content analysis, text mining, and microinterlocutor analysis. Moreover, the authors present a new framework for organizing these analysis techniques via the four major sources of qualitative data collected: talk, observations, drawings/photographs/videos, and documents. As such, the authors hope that this compendium of analytical techniques will help qualitative researchers in school psychology and beyond make informed choices for their data analysis tools.

Keywords: qualitative analysis, document analysis, analysis of talk, analysis of observations, analysis of visual representations

Analysis of data is one of the most important steps in the research process. Researchers who conduct studies from the quantitative realm in school psychology and beyond have a multitude of statistics available to analyze data. For example, if a researcher was interested in answering whether males and females differ on motivation levels, almost any analysis that represents the General Linear Model could be used, including the independent samples t-test, analysis of variance, and linear regression. This choice is taught to researchers via statistics courses and many textbooks.

Perhaps due to most doctoral programs mainly focusing on quantitative research methods, scant qualitative research has been conducted in the school psychology arena. This is evidenced by the dearth of qualitative research studies in school psychology journals.

Indeed, Powell, Mihalas, Onwuegbuzie, Suldo, and Daley (in press) examined 873 articles published in the four major school psychology journals (i.e., Journal of School Psychology, Psychology in the Schools, School Psychology Quarterly, School Psychology Review) and found that only six articles published from 2001 through 2005 represented purely qualitative research. Powell et al. further examined the Web site of every National Association of School Psychologists (NASP)-approved graduate-level school psychology program (n = 57), using the list provided in the November 2006 issue of Communiqué (National Association of School Psychologists, 2006, p. 44). These researchers found that of the 57 approved graduate-level school psychology programs, only 1 (1.8%) appeared to require that students enroll in one or more qualitative courses, and 11 (19.3%) only appeared to offer one or more qualitative courses as an elective. These researchers concluded that a likely explanation for the lack of qualitative research articles published in the four flagship school psychology journals reflects the fact that the majority of school psychologists do not receive formal training in qualitative research or mixed methods research approaches.

Nancy L. Leech, School of Education, University of Colorado Denver; Anthony J. Onwuegbuzie, Department of Educational Leadership and Counseling, Sam Houston State University.

Correspondence concerning this article should be addressed to Nancy L. Leech, University of Colorado Denver, School of Education, Campus Box 106, PO Box 173364, Denver, CO 80217. E-mail: nancy.leech@cudenver.edu

School Psychology Quarterly, 2008, Vol. 23, No. 4, 587–604. Copyright 2008 by the American Psychological Association. 1045-3830/08/$12.00 DOI: 10.1037/1045-3830.23.4.587


Yet, qualitative research, because of its exploratory and constructivist nature, can help school psychology researchers to (a) develop theories and models (Leech & Onwuegbuzie, in press); (b) address process-oriented questions of interest to the field (Leech & Onwuegbuzie, in press); (c) focus on cultural and contextual factors that improve or debilitate the efficacy and social/ecological validity of interventions or programs (Nastasi & Schensul, 2005); (d) identify and document modifications necessary to apply interventions to real-life contexts; (e) identify core intervention components that are associated with desired outcomes; and (f) identify unintended outcomes associated with interventions or programs (Nastasi & Schensul, 2005). Nastasi and Schensul recently published a special issue containing qualitative research. However, clearly more qualitative research studies are needed in school psychology research.

Similar to research utilizing quantitative techniques, qualitative research also has a vast number of techniques available (Leech & Onwuegbuzie, in press). Yet, most researchers are unaware of the numerous accessible choices of qualitative analyses. This lack of knowledge can affect the accuracy of the results, and thus create research that "[is] tarred with the brush of 'sloppy research'" (Guba, 1981, p. 90). In fact, many researchers use only one type of analysis and assume the results are optimally meaningful. In order to triangulate results, we contend that research utilizing qualitative techniques should involve at least two, if not more, types of data analysis tools—what Leech and Onwuegbuzie refer to as "data analysis triangulation" (p. 2). We believe it is important to increase triangulation not only by using multiple data collection tools (Lincoln & Guba, 1985), but also by utilizing multiple data analysis tools.

In order for researchers to undertake qualitative data analysis triangulation, researchers need to select systematically from the many tools available for analyzing qualitative data. Unfortunately, textbooks that describe qualitative data analysis techniques tend to focus on one data analysis technique (e.g., discourse analysis; Phillips & Jorgensen, 2002) or, at best, only a few techniques. With this in mind, the purpose of this paper is to provide a compendium of multiple types of analyses available for qualitative data in school psychology research. Figure 1 and Table 1 depict how we have categorized the analyses into four areas: talk, observations, drawings/photographs/videos, and documents. These areas represent four major sources of data in qualitative research. As such, we hope that our compendium of analytical techniques will help qualitative researchers in school psychology make informed choices for their data analysis tools.

Descriptions of a Selection of the Available Tools

In this article, we present 18 qualitative data analysis techniques. Whereas some of these procedures represent the earliest formalized qualitative data analysis techniques (e.g., method of constant comparison analysis; Glaser & Strauss, 1967; domain analysis, taxonomic analysis, componential analysis; Spradley, 1979), others represent more recent techniques (e.g., secondary data analysis; Heaton, 2000, 2004; text mining; Powis & Cairns, 2003; microinterlocutor analysis; Onwuegbuzie, Dickinson, Leech, & Zoran, 2007). As noted earlier, the 18 techniques are organized around the four major sources of qualitative data, namely: talk, observations, drawings/photographs/videos, and finally, documents. Some techniques (e.g., constant comparative analysis, word count) can be utilized with multiple sources of data. The first time an analysis is presented, we fully describe the technique; in subsequent sections we provide brief descriptions of how to utilize the analysis with the specific source of data. It should be noted that the descriptions of each method vary in length because some techniques (e.g., conversation analysis, discourse analysis, qualitative comparative analysis) need more explanation than other techniques (e.g., narrative analysis, semiotics, keywords-in-context, word count). However, this variation does not imply that some procedures are more important than other techniques. (For a step-by-step presentation of method of constant comparison analysis, keywords-in-context, word count, classical content analysis, domain analysis, taxonomic analysis, and componential analysis, please see Leech & Onwuegbuzie, in press.)

Techniques to Analyze Talk

Conversation Analysis

Conversation analysis was developed in the 1960s by Harvey Sacks, Emmanuel Schegloff, and Gail Jefferson (Sacks, Schegloff, & Jefferson, 1974; Schegloff, 1968, 1972).


The goal of this method of analysis is to describe people's methods for producing orderly social interaction. Conversation analysis emerged out of Garfinkel's (1967) ethnomethodology program and its analysis of folk methods. Conversation analysis has at its root three fundamental assumptions. First, talk portrays stable and structured patterns that are directly linkable to the actors.

These patterns are independent of the psychological or other characteristics of the individuals involved in the conversation (Heritage, 1984). As such, the structural organization of talk is treated the same way (i.e., as a social fact) as is the structural organization of any social institution. Also, it is considered inappropriate to attribute the structural organization to the psychological or other characteristics of the individuals involved in the dialogue. Second, the action of a speaker is context specific inasmuch as its contribution to a continuous sequence of actions cannot adequately be understood without considering the context in which the sequence occurs.

Figure 1. Organization of types of analysis by type of data obtained. [The figure groups the analysis techniques under the four data sources: talk, observations, drawings/photographs/videos, and documents.]


However, "the context of a next action is repeatedly renewed with every current action" (Heritage, 1984, p. 242). Third, it is essential that theory construction does not take place prematurely, and research methods should not involve the exclusive use of general, thin descriptions.

Conversation analysts strive to avoid a priori speculations about the dispositions and motives of those engaged in the conversation, while, at the same time, they promote the detailed examination of the actual actions of the actors. That is, conversation analysts generally focus on what participants do in conversation, rather than on their motives or subjective explanations. Therefore, the behavior of speakers is treated as the central resource from which the analysis might develop (Heritage, 1984). According to Heritage, conversation analysts should demonstrate that any regularities they describe can be linked back to the actors as "normatively oriented-to grounds for inference and action" (p. 244). Furthermore, conversation analysts also seek to identify deviant cases, wherein these regularities do not occur.

Conversation analysis is concerned with several aspects of talk, the most common of which are (a) turn-taking and repair, (b) adjacency pairs, (c) preliminaries, (d) formulations, and (e) accounts. Turn-taking and repair involve how a speaker makes a turn relate to a previous turn (e.g., "uh-huh", "OK"), what the turn interactionally accomplishes (e.g., a question, an acknowledgment), and how the turn relates to a succeeding turn (e.g., by a question, directive, request). The moment in a conversation when a transition from one speaker to another is possible is called a transition relevance place (TRP; Sacks et al., 1974). TRPs avoid chaos and make turn-taking context free. When turn-taking violations occur, "repair mechanisms" are implemented. For example, when more than one person is speaking at the same time, a participant might stop speaking before a typically possible completion point of a turn. Thus, turn-taking motivates actors to listen, to understand the utterances, and to display understanding. Adjacency pairs are sequentially paired actions that feature the generation of a reciprocal response. The two actions normatively occur adjacent to each other and are generated by different participants. Preliminaries are used to examine the situation before performing some action. They provide a means for the participant to pose a question indirectly in order to decide whether the question should be posed directly.

Table 1
Relationship Between Type of Qualitative Data Analysis Technique and Source of Qualitative Data

Talk: conversation analysis; discourse analysis; narrative analysis; semiotics; qualitative comparative analysis; constant comparison analysis; keywords-in-context; word count; membership categorization analysis; domain analysis; taxonomic analysis; componential analysis; classical content analysis; micro-interlocutor analysis

Observations: qualitative comparative analysis; constant comparison analysis; keywords-in-context; word count; domain analysis; componential analysis; taxonomic analysis; manifest content analysis; latent content analysis

Drawings/photographs/video: qualitative comparative analysis; constant comparison analysis; word count; manifest content analysis; latent content analysis; secondary data analysis

Documents: semiotics; qualitative comparative analysis; constant comparison analysis; keywords-in-context; word count; secondary data analysis; classical content analysis; text mining


Formulations represent a summary of what another speaker has stated. Finally, accounts are the means by which people explain actions. They include excuses, apologies, requests, and disclaimers (Silverman, 2001).

An example of when conversation analysis could be used in school psychology research is with discussions of individual education programs (IEPs). Usually, IEPs are discussed by the teacher, the school psychologist, the parent, and any other staff who have direct contact with the child. These conversations might yield interesting information when analyzed via conversation analysis. As another example, a school psychology researcher could analyze conversations held between a student with a speech impediment and his or her peers to assess the extent to which the student's impairment is affecting his or her quality of relationships with classmates. Thus, conversation analysis considers the context within which the data are collected, which is important to consider when discussing student progress.

Discourse Analysis

A form of discourse analysis that is also known as discursive psychology was developed by a group of social psychologists in Britain led by Potter and Wetherall, who contended that in order to understand social interaction and cognition, it was necessary to examine how people communicated in everyday situations (Potter & Wetherall, 1987). In general, discourse analysis involves selecting representative or unique segments of language use, such as several lines of an interview transcript, and then examining them in detail. Discourse analysis emphasizes the way that versions of entities such as the society, community, and events emerge in discourse (Phillips & Jorgensen, 2002). This form of qualitative analysis operates on three fundamental assumptions: antirealism (i.e., accounts cannot be treated as true or false descriptions of reality), constructionism (i.e., how participants' constructions are accomplished and undermined), and reflexivity (Cowan & McLeod, 2004).

Discourse analysis depends on the analyst's sensitivity to language use, from which an "analytic tool kit" can be developed that includes facets such as rhetorical organization, variability, accountability, positioning, and discourses (Cowan & McLeod, 2004).

Selected talk or text can be examined to see how it is organized rhetorically in order to make claims that are as persuasive as possible, while protecting the speaker from refutation and contradiction (Billig, 1996). Discourse analysts treat language as being situated in action. When people use language they perform different social actions such as questioning or blaming. Language then varies as a function of the action performed. Thus, variability can be used as a tool to show how individuals use different discursive constructions to perform different social actions. Words can be examined to see how people use accountability for their versions of experiences, events, people, locations, and the like. For example, when criticizing a racial or ethnic group, a person might use the phrase "Some of my best friends are Black," in order to avert charges of prejudice. Positioning refers to the way speakers place each other with respect to social narratives and roles. For instance, the way a student talks may position the person as a novice, whereas the way a teacher talks may position the individual as an expert.

Finally, the concept of discourses refers to well-established ways of describing and understanding things. For instance, as noted by Cowan and McLeod (2004), in therapy the client's language might indicate a medical-biological discourse ("it's my nerves"), whereas the therapist may be utilizing psychoanalytic discourse ("does what you are experiencing presently remind you of any similar experiences during your childhood?"). These examples suggest incidents wherein conflicting discursive positioning prevails. Additional analyses of these incidents might examine the participants' use of conversational strategies, such as repetition and redefining what the other speaker has said. Studies that use discourse analysis techniques can provoke a critical rereading of taken-for-granted processes that occur in social interactions (Cowan & McLeod, 2004).

There are five major traditions of discourse analysis: (a) Linguistics (i.e., examining the way sentences or utterances cohere into discourse, e.g., studying the way words such as "however" and "but" operate, along with different kinds of references that occur between sentences); (b) Cognitive psychology (i.e., focusing on the way mental scripts and schemas are used to make sense of narrative); (c) Classroom interaction (i.e., linguistics; attempting to provide a systematic model to describe typical interaction patterns in teaching based around initiation-response-feedback structures); (d) Poststructuralism and literary theory: Continental discourse analysis (i.e., associated with Michel Foucault, it is less concerned with discourse in terms of specific interaction than with how a discourse, or a set of statements, comes to constitute objects and subjects); and (e) Metatheoretical emphasis on antirealism and constructionism (i.e., emphasizing the way versions of the world, of society, events, and inner psychological worlds are produced in discourse) (cf. Potter, 2004).


Most relevant to school psychology researchers are the linguistics, cognitive psychology, and classroom interaction traditions.

Gee (2005) conceptualized the following seven building tasks that could be examined when conducting a discourse analysis: (a) significance, which addresses the question "How is this piece of language being used to make certain things significant or not and in what ways?" (e.g., intonation, choice of words); (b) activities, which addresses the question "What activity or activities is this piece of language being used to enact [going on]?" (e.g., contrasting behaviors); (c) identities, which addresses the question "What identity or identities is this piece of language being used to enact (operative)?" (e.g., contextualizing identity); (d) relationships, which addresses the question "What sort of relationship or relationships is this piece of language seeking to enact with others?" (e.g., establishing one's level of importance in a group); (e) politics, which addresses the question "What perspective on social goods is this piece of language communicating?" (e.g., establishing protocol); (f) connections, which addresses the questions "How does this piece of language connect or disconnect things?" and "How does it make one thing relevant or irrelevant to another?" (e.g., connection and relevance of one's attendance vs. another's lack of attendance); and (g) sign systems and knowledge, which addresses the question "How does this piece of language privilege or disprivilege specific sign systems or different ways of knowing and believing or claims to knowledge and belief?" (e.g., novice school psychologists' knowledge vs. experienced school psychologists' knowledge).

Discourse analysis could be used in school psychology research in numerous situations; any situation that includes discussion between two people would be appropriate for use with discourse analysis. For example, when a school psychologist is talking with parents regarding their child, a section of the talk could be analyzed for the use of language, how it is organized rhetorically, and the discourses that take place. Alternatively, this talk could be analyzed with respect to Gee's (2005) seven building tasks.

Narrative Analysis

Narrative analysis involves considering the potential of stories to give meaning to individuals' lives, and treats data as stories, enabling researchers to take account of the research participants' own evaluations (DeVault, 1994; Riessman, 1993). Data that are in narrative form usually are sequential in nature, although some narratives do not follow this rule (Riessman, 1993). Commonly, with narrative analysis, data are reduced to a summary. This summary can be undertaken by (a) summarizing the main plot of the narrative, (b) utilizing a coding procedure similar to constant comparative analysis, or (c) conducting an event structure analysis (Fielding & Lee, 1998).

Narrative analysis can be used by school psychology researchers. For example, if a school psychology researcher was interested in interviewing children about their experiences of school, asking them to tell a story about their day may highlight the important aspects. This story could then be analyzed with narrative analysis.

Semiotics

Semiotics is the science of signs, in which talk and text are treated as systems of signs under the assumption that no meaning can be attached to a single term. This form of analysis shows how signs are interrelated for the purpose of creating and excluding specific meanings (Silverman, 1993). Propp (1968) and Greimas (1966) utilized semiotics to create semiotic narrative analysis, where schemes from text are analyzed. Qualitative researchers view the use of semiotics, or symbols, in language as a view into the culture of the speaker.

School psychology researchers use semiotics to understand the language used in data collected from talk.


For example, a school psychology researcher may be interested in understanding talk from students who participate in a gang. By analyzing the talk for symbols, the researcher may better understand the use of the language and what the underlying symbols may represent.

Qualitative Comparative Analysis

Qualitative comparative analysis, which was developed by Charles Ragin (1987), represents a systematic analysis of similarities and differences across cases. Most commonly, it is used in macrosocial studies to examine the conditions under which a state of affairs is realized. As such, qualitative comparative analysis typically is used as a theory-building approach, allowing the analyst to make connections among previously built categories, as well as to test and to develop the categories further (Miles & Weitzman, 1994). In causal, macrolevel applications, qualitative comparative analysis typically is used for reanalyzing secondary data collected by other researchers (e.g., Ragin, 1989, 1994). Qualitative comparative analysis can be used to conduct a "microsociological, noncausal, hermeneutically oriented analysis of interview data..." in which "several analyses, at various levels, follow each other, helping us to look at the cases from different angles and accordingly arrive at new ideas about their interrelations" (Rantala & Hellström, 2001, p. 88).

Whether representing a causal or noncausal approach, qualitative comparative analysis begins with the construction of a truth table. A truth table lists all unique configurations of the study participants and situational variables appearing in the data, along with the corresponding type(s) of incidents, events, or the like observed for each configuration (Miethe & Drass, 1999). The truth table provides information about which configurations are unique to a category of the classification variable and which configurations are found in multiple categories. Comparing the numbers of configurations in these groups provides an estimate of the extent to which types of events, experiences, or the like are similar or unique. The analyst then "compares the configurations within a group, looking for commonalities that allow configurations to be combined into simpler, yet more abstract, representations" (Miethe & Drass, 1999, p. 8).

This is accomplished by identifying and eliminating unnecessary variables from configurations. A variable is deemed unnecessary if its presence or absence within a configuration has no impact on the outcome that is associated with the configuration. Therefore, qualitative comparative analysis yields case-based rather than variable-based findings (Ragin, 1989, 1994). The qualitative comparative analyst repeats these comparisons until further reductions are no longer possible. Redundancies among the remaining reduced configurations are eliminated, which yields the final solution, namely, a statement of the unique features of each category of the typology.

Qualitative comparative analysis is a case-oriented approach that considers each case holistically as a configuration of attributes. Specifically, qualitative comparative analysts assume that the effect of a variable may be different from one case to the next, depending upon the values of the other attributes of the case. By undertaking systematic and logical case comparisons, qualitative comparative analysts use the rules of Boolean algebra to identify commonalities among these configurations, thereby reducing the complexity of the typology. The goal of qualitative comparative analysis is to arrive at a typology "that allows for heterogeneity within groups and that defines categories in terms of configurations of attributes" (Miethe & Drass, 1999, p. 10).

When analyzing talk from school psychology research studies, qualitative comparative analysis can be conducted. For example, a truth table could be created by a school psychology researcher to understand the variable of "choice." The researcher might be interested in understanding how children diagnosed with attention-deficit hyperactivity disorder (ADHD) make appropriate choices throughout the school day. Looking at choice with the aid of a truth table would assist the researcher in examining the variable of choice and in filtering out variables that do not impact the children.
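The truth-table step lends itself to a simple illustration. The sketch below is a minimal, hypothetical Python example (the condition names, cases, and the "appropriate choice" outcome are invented for this illustration and are not from the article): each case is reduced to a configuration of binary conditions, identical configurations collapse into one truth-table row, and the outcomes observed for each row are listed, which is the starting point for the Boolean comparisons described above.

```python
from collections import defaultdict

# Hypothetical coded cases: each case is a configuration of binary conditions
# plus an observed outcome (all names and values invented for illustration).
cases = [
    {"structured_routine": 1, "visual_schedule": 1, "peer_support": 0, "appropriate_choice": 1},
    {"structured_routine": 1, "visual_schedule": 0, "peer_support": 0, "appropriate_choice": 0},
    {"structured_routine": 0, "visual_schedule": 1, "peer_support": 1, "appropriate_choice": 1},
    {"structured_routine": 1, "visual_schedule": 1, "peer_support": 0, "appropriate_choice": 1},
]
conditions = ["structured_routine", "visual_schedule", "peer_support"]

# Group cases by configuration and record every outcome observed for that configuration;
# identical cases collapse into a single truth-table row.
truth_table = defaultdict(list)
for case in cases:
    configuration = tuple(case[c] for c in conditions)
    truth_table[configuration].append(case["appropriate_choice"])

for configuration, outcomes in truth_table.items():
    print(dict(zip(conditions, configuration)), "-> outcomes observed:", sorted(set(outcomes)))
```

Rows whose outcomes are consistent can then be compared and simplified by hand (or with QCA software) following the Boolean rules described above.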

Constant Comparison Analysis

Barney Glaser and Anselm Strauss, the fathers of grounded theory (i.e., a study using a rigorous set of procedures in an attempt to produce substantive theory of social phenomena; Glaser & Strauss, 1967), created the method of constant comparison analysis (Glaser, 1978, 1992; Glaser & Strauss, 1967; Strauss, 1987).


Some authors use the term "coding" when referring to constant comparative analysis (Miles & Huberman, 1994; Ryan & Bernard, 2000). The goal of constant comparison analysis is to generate a theory, or set of themes. Some researchers believe constant comparison analysis can only be used with grounded theory designs (Creswell, 2007; Merriam, 1998). Yet, we contend that constant comparison analysis can be and commonly is used with any narrative or textual data (Leech & Onwuegbuzie, in press).

There are five main characteristics of constant comparison analysis: (a) to build theory, not test it; (b) to give researchers analytic tools for analyzing data; (c) to assist researchers in understanding multiple meanings from the data; (d) to give researchers a systematic process as well as a creative process for analyzing data; and (e) to help researchers identify, create, and see the relationships among parts of the data when constructing a theme (Strauss & Corbin, 1998). There are three main stages of constant comparative analysis. The first stage is open coding, which is "like working on a puzzle" (Strauss & Corbin, p. 223). During this stage, the analyst codes the data, chunking the data into smaller segments and then attaching a descriptor, or "code," to each segment. The next stage, axial coding, is when the researcher groups the codes into similar categories. The final stage is called selective coding, which is the "process of integrating and refining the theory" (Strauss & Corbin, p. 143). Through this process, the researcher can "create theory out of data" (Strauss & Corbin, p. 56).

The method of constant comparison analysis can be used with virtually all sources of data from school psychology research. In fact, we contend that the method of constant comparison analysis can be utilized with talk, observations, drawings/photographs/video, and documents. For example, when using it for talk that occurs among parent(s), teacher(s), and student, after the talk has been transcribed, the words can be chunked and coded, and then the codes can be organized to create themes.
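For readers who find it helpful to see the bookkeeping laid out, the following is a minimal, hypothetical Python sketch of the three coding stages; the chunks, codes, categories, and theme are invented for illustration, and in practice each assignment is an interpretive judgment by the researcher rather than an automatic step.

```python
# Open coding: each data chunk (here, a transcribed utterance) receives a descriptive code.
open_codes = {
    "My teacher never calls on me": "feeling overlooked",
    "I sit alone at lunch": "social isolation",
    "Nobody picks me for teams": "social isolation",
    "Group work makes my stomach hurt": "somatic anxiety",
}

# Axial coding: related codes are grouped into broader categories.
axial_categories = {
    "peer exclusion": {"social isolation", "feeling overlooked"},
    "anxiety responses": {"somatic anxiety"},
}

# Selective coding: the categories are integrated around a central theme.
central_theme = "a sense of school belonging shapes classroom anxiety"

for category, codes in axial_categories.items():
    chunks = [chunk for chunk, code in open_codes.items() if code in codes]
    print(category, "->", chunks)
print("Central theme:", central_theme)
```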

Keywords-in-Context (KWIC)

KWIC is a type of analysis used in many fields. The goal of KWIC is to reveal how words are used in context with other words. Fielding and Lee (1998) refer to KWIC as an analysis of the culture of the use of the word. The assumption underlying KWIC is that people use words differently and, thus, by examining how words are used in the context of their speech, the meaning of the word will be understood. KWIC can be undertaken manually, although there are multiple computer programs (e.g., NVIVO, version 7.0; QSR International Pty Ltd., 2006) that can assist with this analysis.

School psychology researchers can utilize KWIC with data from talk to assess the use of a keyword. For example, a school psychology researcher may be interested in interviewing students who have IEPs about their use of the keyword "stupid." By finding the keyword throughout the data and looking at the words that surround it, the researcher can better understand how these participants utilize the word "stupid."
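Because the windowing logic behind KWIC is straightforward, a small script can produce the keyword-plus-context listing even without dedicated software. The sketch below is a minimal, hypothetical Python illustration; the transcript excerpt is invented, and the keyword "stupid" simply follows the example above.

```python
import re

def keywords_in_context(text, keyword, window=5):
    """Return every occurrence of `keyword` with `window` words of context on each side."""
    tokens = re.findall(r"[\w']+", text.lower())
    hits = []
    for i, token in enumerate(tokens):
        if token == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"... {left} [{token}] {right} ...")
    return hits

# Invented interview excerpt, for illustration only.
transcript = "I feel stupid when I read out loud, and the other kids say the worksheet is stupid too."
for line in keywords_in_context(transcript, "stupid", window=4):
    print(line)
```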

Word Count

Everyone has their own way of using words. Pennebaker, Mehl, and Niederhoffer (2003) call this "linguistic fingerprints" (p. 568). The theory behind word count is that in order to understand the meaning people ascribe to a specific word, one can look at the frequency of use of a target word. The basic assumption underlying the word count procedure is that the more frequently a word is used, the more important the word is for the person (Carley, 1993). According to Miles and Huberman (1994), at least three reasons exist for counting words: (a) to identify patterns more easily, (b) to verify a hypothesis, and (c) to maintain analytic integrity. Proponents of word count procedures contend that it is more precise—and thus more meaningful—for qualitative researchers to specify the exact count rather than using terms such as "many," "most," "frequently," "several," "always," and "never," which are essentially quantitative (cf. Sechrest & Sidani, 1995).

However, it should be noted that word count can lead to misleading interpretations being made. In particular, word count can lead to a word being decontextualized such that it is not meaningful.


Further, a word that is used more frequently than another word does not necessarily imply that it is more important for the speaker. Thus, we suggest that, where possible, word count be combined with member-checking (Merriam, 1998), wherein participants are asked whether the interpretations (i.e., interpretive validity; Maxwell, 1992) or theories (i.e., theoretical validity; Maxwell) stemming from the word count adequately capture their voices.

In school psychology research studies that include analyzing data from talk, word count can be utilized. For example, when working with students in a focus group, word count could be conducted to assess which participants contributed more than others. This information can assist the school psychology researcher in understanding who supplied more of the data.
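The tallying itself is easy to automate; the interpretation is the hard part. Below is a minimal, hypothetical Python sketch along the lines of the focus-group example above (the speaker-tagged excerpt is invented for illustration): it counts how much each speaker contributed and which words recur most often.

```python
import re
from collections import Counter

# Hypothetical speaker-tagged focus-group excerpt, invented for illustration only.
transcript = [
    ("Ana", "The homework is hard and the tests are hard too."),
    ("Ben", "Hard, yeah."),
    ("Ana", "I stay up late because everything feels hard."),
]

words_per_speaker = Counter()
word_frequencies = Counter()
for speaker, utterance in transcript:
    tokens = re.findall(r"[\w']+", utterance.lower())
    words_per_speaker[speaker] += len(tokens)   # who contributed more talk
    word_frequencies.update(tokens)             # which words recur most often

print("Words contributed per speaker:", dict(words_per_speaker))
print("Most frequent words:", word_frequencies.most_common(3))
```

Consistent with the caution above, such counts are best treated as prompts for member-checking rather than as findings in themselves.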

Membership Categorization Analysis

The sociologist Harvey Sacks is credited with developing membership categorization analysis. Sacks wanted to avoid study participants being treated as cultural objects, in which they are represented in ways that a particular culture deems important (Silverman, 2001). Rather, Sacks (1992) viewed culture as a means for making inferences. According to Sacks, given that many categories can be used to describe the same person or behavior, the goal is to ascertain how individuals choose among the existing set of categories for understanding a specific event. Moreover, Sacks's goal was to realize the role that interpretations play in making descriptions and the consequences of selecting a particular category.

As noted by Sacks (1992), any individual can be labeled in numerous correct ways (i.e., using many categories). Thus, he advocated the use of membership categorization analysis, more specifically termed a membership categorization device. This device comprises categories (e.g., baby, sister, brother, mother, father = family) that are viewed as being grouped together, as well as some rules and corollaries regarding how to apply these categories. These rules include (a) the economy rule, in which a single category may be adequate to describe a person; and (b) the consistency rule, in which, if an individual is identified from a collection, then the next individual may be identified from the same collection. Sacks also identified category-bound activities, in which activities may be deemed as being tied to certain categories; and standardized relational pairs, wherein pairs of categories are linked together in standardized, typical ways.

Thus, with membership categorization analysis, the analyst asks how individuals use everyday terms and categories in their social interactions.

An example of a research area in school psychology where membership categorization analysis would be helpful is when working with a child. Children can have multiple labels (e.g., child, good student, gang member). When data have been collected as talk in which a child has been discussed, these data can be analyzed with membership categorization analysis to help the researcher understand how the child is categorized and thus, more than likely, how the child has been treated.

Domain, Taxonomic, and Componential Analyses

Ethnographic analysis was developed by Spradley (1979). There are four types of ethnographic analysis: (a) domain analysis, (b) taxonomic analysis, (c) componential analysis, and (d) theme analysis. According to Spradley, "these strategies have a single purpose: to uncover the system of cultural meanings that people use" (p. 94).

Ethnographic analyses most commonly are undertaken in an ethnographic study—although ethnographic analyses can be used in any qualitative study. The foundation of ethnographic analyses is the belief that informants have cultural knowledge. By systematically examining an informant's words and environment, one can see the relationships among the parts. It is the examination of these parts that helps the researcher to understand the overall culture of the informant.

According to Spradley (1979), ethnographic analysis most commonly can be utilized in the following research steps: (a) selecting a problem, which is focused on inquiring into the cultural meanings people use to organize their lives; (b) collecting cultural data; (c) analyzing the cultural data, beginning when the first data are collected; (d) formulating hypotheses; and (e) writing the ethnography. The most important aspect of this process is the focus on going back to the informants to ask questions.


These questions are used to help enhance the analyses. The analyses are best undertaken in order, starting with domain analysis, then taxonomic analysis, followed by componential analysis, and then, finally, theme analysis.

Domain analysis is the first type of analysis to be completed. This form of analysis starts with looking at symbols. Every culture has symbols or elements that represent other items. Symbols have three aspects: (a) the symbol itself, (b) one or more referents (to what the symbol refers), and (c) a relationship between the symbol and the referent. To understand the symbol, the researcher needs to analyze the relationship, by looking at semantics, of the symbol to the referents. Spradley (1979) created a list of the most commonly used semantic relationships. The result of a domain analysis is a better understanding of the domain.

Once domains have been identified, taxonomic analysis can be undertaken by taking one domain and then creating a taxonomy. A taxonomy is defined by Spradley (1979, 1997) as a "classification system" that inventories the domains into a flowchart or diagram to help the researcher understand the relationships among the domains.

Next, componential analysis can be used. Componential analysis is a "systematic search for attributes (components of meaning) associated with cultural symbols" (Spradley, 1979, p. 174). By using matrices and/or tables, this analysis is used to discover the differences among the subcomponents of domains, with the goal being to "map as accurately as possible the psychological reality of our informant's cultural knowledge" (p. 176). Finally, theme analysis is conducted by developing themes that "go beyond such an inventory [of domains] to discover the conceptual themes that members of a society use to connect these domains" (Spradley, 1979, p. 185). It is interesting to note that domain analysis, taxonomic analysis, componential analysis, and theme analysis can be used in combination as a form of data analysis triangulation. That is, the findings stemming from two or more of these analysis stages can be compared to ascertain the extent to which findings from one analysis stage confirm those arising from another stage.

Because domain, taxonomic, and componential analyses create structural questions, these analyses are best used with talk-based data when the school psychology researcher can return to talk with the participants again—that is, participants can be interviewed on more than one occasion.

For example, a school psychology researcher might have data from an interview with a child. The child might have used terms unfamiliar to the researcher. In this situation, domain, taxonomic, and componential analyses would be helpful to use, assisting the researcher in understanding the terms the child utilized from the child's perspective.
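As a rough illustration of the kinds of structures these analyses produce, the sketch below represents a hypothetical taxonomy as a nested structure and a small componential matrix contrasting two terms; the domain, terms, and attributes are invented for this example and would, in a real study, come from the informant's own language.

```python
# Hypothetical taxonomy: one domain ("places to get help at school") inventoried into
# subcategories, as a taxonomic analysis might chart in a flowchart or diagram.
taxonomy = {
    "places to get help at school": {
        "adults": ["counselor", "homeroom teacher"],
        "peers": ["best friend", "study group"],
    }
}

# Hypothetical componential matrix: attributes (components of meaning) that contrast
# two terms within the domain, as a componential analysis might tabulate.
componential_matrix = {
    "available every day":  {"counselor": False, "best friend": True},
    "keeps things private": {"counselor": True, "best friend": False},
}

for domain, subcategories in taxonomy.items():
    print(domain, "->", subcategories)
for attribute, contrast in componential_matrix.items():
    print(attribute, "->", contrast)
```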

Classical Content Analysis

Classical content analysis, also known simply as "content analysis," has traditionally been used in sociology, journalism, political science, and social psychology (Tesch, 1990). Berelson (1952) defined classical content analysis as "objective, systematic, and quantitative description of the manifest content of communication" (p. 489). Barcus' (1959) article is considered the first published study to use classical content analysis.

Content analysts focus on how frequently codes are used to determine which concepts are most cited throughout the data. Similar to constant comparison analysis, with classical content analysis the researcher chunks and codes the data. However, instead of grouping the codes together, the researcher counts the frequency of use for each code. The codes usually are deductively produced, yet they can be inductively produced as well. The data (i.e., the frequency counts from the usage of each code) can be further analyzed with multiple techniques: describing the data (sometimes using descriptive statistics), utilizing inferential quantitative procedures (Kelle, 1996), or a combination of the two (Onwuegbuzie & Teddlie, 2003).

School psychology researchers can utilize content analysis with talk data in numerous situations. For example, school psychology researchers may be interested in understanding the impact of bullying training. By interviewing students, coding the data, and then conducting content analysis, the school psychology researcher can find which codes were utilized most by the students, thereby assessing which aspects of the bullying training had the most impact.
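Once segments have been coded, the counting step is mechanical. The sketch below is a minimal, hypothetical Python illustration of tallying code frequencies; the segments and code labels are invented to fit the bullying-training example above.

```python
from collections import Counter

# Hypothetical coded interview segments (text, code), invented for illustration only.
coded_segments = [
    ("I told a teacher about the bullying", "reporting"),
    ("I walked away before it got worse", "avoidance"),
    ("I asked a friend to stay with me", "peer support"),
    ("I told the counselor what happened", "reporting"),
]

# Classical content analysis: count how often each code was applied.
code_frequencies = Counter(code for _, code in coded_segments)
print(code_frequencies.most_common())  # [('reporting', 2), ('avoidance', 1), ('peer support', 1)]
```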


Microinterlocutor Analysis

Most analysts of focus groups use the group as the unit of analysis, which, unfortunately, usually prevents the researcher from gleaning information about other focus group members who may not have contributed to the category or theme (Onwuegbuzie et al., 2007). For example, focus group members whose voices might not be represented in the development of a theme may be those who are relatively silent, perhaps due to having low levels of self-confidence, being relatively less articulate, or having a proclivity to acquiesce to the majority viewpoint. Thus, "conformity of opinion within focus group data is therefore an emergent property of the group context, rather than an aggregation of the views of individual participants" (Sim, 1998, p. 348).

To address this limitation of using the group as the unit of analysis, Onwuegbuzie et al. (2007) conceptualized what they termed a microinterlocutor analysis (which was further expanded by Onwuegbuzie, Leech, & Collins, in press). According to their conceptualization, focus group information is collected and analyzed regarding which participant responds to each question, the order in which the participants respond, the characteristics of the response (e.g., non sequitur, rambling, focused), the nonverbal communication used, and the like. For example, when describing and interpreting emergent themes, in addition to providing the most compelling statements made by focus group participants, where possible, school psychology researchers could provide information about how many members appeared to contribute to the feeling of consensus underlying each theme. In addition, researchers could determine how many appeared to represent a dissenting view (if any) and how many participants did not appear to express any view at all, as well as how many focus group members provided substantive statements or examples that support the consensus view and how many members provided substantive statements or examples that suggest a dissenting view. School psychology researchers could also compare subgroups (e.g., male school psychologists vs. female school psychologists) with respect to interaction patterns, including which subgroup member tended to speak first in response to a question.

As noted by Onwuegbuzie et al., obtaining information about dissenters would help school psychology researchers determine the degree to which the data that contributed to the theme reached saturation (i.e., no new or relevant information seems to emerge pertaining to a category, and the category development is well established and validated; Lincoln & Guba, 1985). Thus, such information would help school psychology researchers "to determine the range, depth, and complexity of emergent themes" (Onwuegbuzie et al., 2007, p. 11).
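The per-participant bookkeeping that microinterlocutor analysis calls for can be tabulated simply. The sketch below is a minimal, hypothetical Python illustration; the participants, questions, and stances are invented, and a real analysis would also record response order, response characteristics, and nonverbal communication.

```python
from collections import defaultdict

# Hypothetical focus-group log: who responded to each question and whether the response
# supported the emerging consensus or dissented from it (invented for illustration).
responses = [
    {"question": 1, "participant": "P1", "stance": "consensus"},
    {"question": 1, "participant": "P3", "stance": "dissent"},
    {"question": 1, "participant": "P2", "stance": "consensus"},
    {"question": 2, "participant": "P2", "stance": "consensus"},
]
all_participants = {"P1", "P2", "P3", "P4"}

summary = defaultdict(lambda: {"consensus": 0, "dissent": 0, "responders": set()})
for response in responses:
    row = summary[response["question"]]
    row[response["stance"]] += 1
    row["responders"].add(response["participant"])

for question, row in sorted(summary.items()):
    silent = sorted(all_participants - row["responders"])
    print(f"Q{question}: consensus={row['consensus']}, dissent={row['dissent']}, silent={silent}")
```

A tabulation like this makes it easy to report, for each theme, how many members contributed, how many dissented, and how many expressed no view at all.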

Techniques to Analyze Observations

Qualitative Comparative Analysis

Qualitative comparative analysis has been described previously. Qualitative comparative analysis can be used to analyze observations in school psychology research. For example, a school psychology researcher could use qualitative comparative analysis to examine conflict among children. Here, qualitative comparative analysis would enable the researcher to identify similarities and differences in the general aspects of conflict among children and to identify similarities and differences in the combinations of victims, perpetrators, and situational variables that provide the contexts within which conflicts occur.

Constant Comparative Analysis

Constant comparative analysis was described earlier. In addition to using constant comparative analysis with talk-based data, this analysis can be utilized with observations. Once the observations have been written down, the words can then be chunked and coded, and then the codes can be organized into themes.

KWIC

KWIC analysis is helpful to utilize with observational data. For example, after documenting observations, the researcher may look through the data to see whether there are keywords that were utilized and can analyze how these keywords were employed.


For example, in studying teamwork among children, if a school psychology researcher observed elementary schoolchildren playing together, the researcher may be interested in analyzing how frequently the keywords "mine," "yours," and "ours" were utilized and in what contexts.

Word Count

When analyzing observations, word count can be extremely helpful. For example, a school psychology researcher might be interested in better understanding a child who has Tourette's syndrome. The researcher could utilize word count to assess the number of times this child uses an inappropriate word in class. Evaluating the number of inappropriate words used over time might help the school psychologist in assessing whether the behavior was improving.

Domain, Taxonomic, and Componential Analyses

Domain, taxonomic, and componential analyses are described above, and can also be used with observational data. For example, when observational data have been collected, the data can then be analyzed to look for domains (e.g., symbols) in the data, in order to investigate the observations further so that they can be understood better. These domains can then be utilized to create a taxonomy or could be subjected to a componential analysis.

Manifest Content Analysis

According to Berelson (1952), manifest content analysis is an analytical technique for describing observed (i.e., manifest) aspects of communication via objective, systematic, and empirical means. However, a manifest content analysis may be more simply described as an analysis of manifest content, wherein manifest content represents content that resides on the surface of behavior and, thus, is easily observable. For example, a school psychology researcher might identify the different ways that a teacher responds to a sixth-grade student with an emotional disorder every time the student misbehaves within a specific time period, and then count the number of times each response (i.e., strategy) was used by the teacher. The researcher also could identify the different ways that the student reacts to each strategy. Such an analysis could help determine the nature of the strategies used by the teacher and the nature of the student's reactions, and use these two sets of data to examine the causal link between teacher strategy and student reaction.
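To make the counting step concrete, the sketch below is a minimal, hypothetical Python illustration of tallying observed strategies and strategy-reaction pairings; the strategy and reaction labels are invented and are not from the article.

```python
from collections import Counter

# Hypothetical manifest-content log of observed (strategy, reaction) pairs for one student;
# labels invented for illustration only.
observations = [
    ("verbal redirection", "complies"),
    ("verbal redirection", "argues"),
    ("proximity control", "complies"),
    ("verbal redirection", "complies"),
    ("planned ignoring", "escalates"),
]

strategy_counts = Counter(strategy for strategy, _ in observations)  # how often each strategy was used
pair_counts = Counter(observations)                                  # how often each strategy-reaction pairing occurred

print("Strategy use:", strategy_counts.most_common())
print("Strategy-reaction pairings:", pair_counts.most_common())
```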

Latent Content Analysis

In the context of observations, latent content analysis involves uncovering the underlying meaning of behaviors or actions. Moreover, latent content analysis is an interpretive analysis of behavior that "involves the imputation of meaning, 'the reading in' of content, the inference that the behavior has function(s) either by intent or effect" (Bales, 1951, p. 6). Latent content analysts typically are interested in important (although hidden) aspects of individual and social cognition underlying behaviors rather than assessing the behaviors that are easily observable.

Potter and Levine-Donnerstein (1999) have identified two types of latent content: latent pattern variables and latent projective variables. Latent pattern variables involve using a combination of information that indicates the existence of a target variable. For example, in deciding whether or not a student with an emotional disorder is exhibiting a rebellious disposition, a school psychology researcher might utilize as many clues as are available (e.g., hairstyle, clothes, presence or absence of body piercings/tattoos, style of walk, use or nonuse of inappropriate words) that indicate the possible existence of the target variable (i.e., rebellious disposition). However, this existence can only be determined when an appropriate pattern of elements prevails. Coding schemes for latent pattern variables are similar to, but more sophisticated than, coding schemes used in manifest content analysis. Thus, for latent pattern variables, the meaning of the observation (i.e., target variable) exists on the surface of the content. In contrast, for latent projective variables, the locus of the variable shifts to the researchers' intersubjective interpretations (i.e., social and cognitive schemata) of the meaning of the content. Here, school psychology researchers can use latent projective variables to examine cognitive processes of students, such as the critical thinking of students with attention deficit hyperactivity disorder.

Techniques to Analyze Drawings/Photographs/Videos

Qualitative Comparative Analysis

Qualitative comparative analysis has been described in the section on analyzing talk-based data. When analyzing drawings, photographs, and/or videos, utilizing qualitative comparative analysis and constructing a truth table can help the researcher identify connections among previously built categories and test and develop the categories further. Looking for connections among pictures of classrooms might assist a school psychology researcher in understanding what aspects of classrooms benefit students with attention disorders.

Constant Comparative Analysis

As with qualitative comparative analysis, constant comparative analysis has been de- scribed in detail in the section on analyzing talk-based data. With drawings, photographs, and/or videos, constant comparative analysis can be conducted to assess similarities and dif- ferences among the pictures. The similarities/ differences are identified by selecting sections of the pictures to analyze, giving them codes, then grouping the codes together to create themes. As themes emerge, new drawings, pho- tographs, and/or video clips are compared to these themes to determine where this new visual information fits in the overall thematic develop- ment.

Word Count

When school psychology researchers are analyzing data from videos, word count can be an invaluable tool. For example, a school psychology researcher may have video of children interacting. By conducting a word count analysis, the researcher can assess who was more talkative and contributed more to the interaction. Researchers also could use word count analysis to determine how much importance the children in the video attach to certain words.
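A minimal sketch of such a word count, assuming the video has already been transcribed into speaker-tagged utterances (the transcript below is invented):

# Count words spoken by each child in a transcribed interaction.
transcript = [
    ("Ana", "can I see the blocks"),
    ("Ben", "yes"),
    ("Ana", "let's build a tower together"),
]

words_per_speaker = {}
for speaker, utterance in transcript:
    words_per_speaker[speaker] = words_per_speaker.get(speaker, 0) + len(utterance.split())

print(words_per_speaker)   # e.g., {'Ana': 10, 'Ben': 1}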

Manifest Content Analysis

Just as manifest content analysis can be used to analyze observations, so too can it be used to analyze drawings/photographs/video. This is because with such media, content resides on the surface of behavior of interest and, thus, is readily observable. For instance, a school psy- chology researcher can analyze classroom inter- actions that have been captured in a video.

Latent Content Analysis

Latent content analysis is used to analyze drawings/photographs/videos by undertaking a subjective evaluation of the overall content of the visual representation. For example, a school psychology researcher can examine the use of humor (i.e., a latent projective variable) among school psychologists in activities that have been captured on video.

Secondary Data Analysis

Qualitative secondary data analysis is a new and emerging methodology. This form of analysis involves the analysis of preexisting data that have been obtained from research and other contexts. More specifically, it involves the analysis of non-naturalistic or artifactual data that were derived from previous studies, including the following: field notes, data transcribed from interviews, data transcribed from focus groups, questionnaire responses to open-ended questions, observational records, diaries, and life stories (Heaton, 2004). This is in contrast to the analysis of naturalistic data (e.g., diaries, letters, autobiographies, official documents, life histories, social interaction), which would motivate more traditional qualitative data analysis techniques such as the analyses presented earlier in this article. Like quantitative secondary data analysis, qualitative secondary data analysis can be used to fulfill one or more of the following goals: (a) to address new or additional research questions; (b) to verify, refute, or refine findings of primary studies via reanalysis of preexisting data; and (c) to synthesize research (e.g., meta-ethnography, Noblit & Hare, 1988; qualitative metasummaries, Sandelowski & Barroso, 2003). Qualitative secondary data analysis involves three main modes: (a) formal data sharing (i.e., using data sets that officially have been made available for data sharing); (b) informal data sharing (i.e., obtaining data directly from primary researchers and organizations by request, or indirectly via disciplinary networks); and (c) reuse of a researcher's own data (i.e., auto-data; Heaton, 2000).

Heaton (2004) identified five types of qualitative secondary data analyses. These are (a) supra analysis (i.e., transcends the focus of the primary study from which the data were formed, addressing new theoretical, methodological, conceptual, or empirical questions); (b) supplementary analysis (i.e., a more in-depth investigation of an emergent issue or aspect of data that was not considered or fully addressed in the primary study); (c) reanalysis (i.e., reanalysis of data to verify findings from the primary study); (d) amplified analysis (i.e., combination of data from two or more primary studies for comparative purposes or to enlarge a sample); and (e) assorted analysis (i.e., combination of secondary analysis of qualitative data with primary research and/or analysis of naturalistic qualitative data).

Whenever school psychology researchers have access to qualitative data collected by an- other researcher, or data they collected from a previous study that they would like to reana- lyze, they are in a position to conduct a second- ary data analysis. This type of data analysis can reveal new themes from the data and additional results.

Techniques to Analyze Documents

Semiotics

Semiotics, as discussed above in the section on data from talk, can be used to analyze doc- uments. For example, if a school psychology researcher was interested in analyzing teacher comments about a student, utilizing semiotics would assist the researcher in uncovering pos- sible symbols utilized in the text.

Qualitative Comparative Analysis

Qualitative comparative analysis has been discussed in detail in the section on talk-based data. In addition, qualitative comparative analysis can be conducted with documents. Constructing a truth table to help understand how categories are utilized throughout a document can assist a school psychology researcher in better understanding the document. For example, a school psychology researcher can use truth tables to analyze a set of IEPs to determine what combination of support services and accommodations is effective for elementary schoolchildren with developmental delays. This truth table would list all unique configurations of the cases, support services, and accommodations appearing in the IEPs, along with the corresponding educational outcomes observed for each configuration. The truth table provides information about which configurations are unique to a category of the classification variable and which are found in multiple categories. By comparing the numbers of configurations in these groups, the school psychology researcher can evaluate the extent to which the types of educational outcomes are unique or similar. The researcher then would compare the configurations within a group, identifying commonalities that facilitate combining configurations into simpler representations by eliminating unnecessary variables (e.g., support services and accommodations whose presence or absence has no educational impact) from configurations. When no further variables can be eliminated, the school psychology researcher ends up with a final representation: a depiction of the unique aspects of each category of the typology (e.g., profiles of the support services and accommodations in which educational improvement is observed).
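A rough sketch of the truth-table step is given below; it assumes each IEP has already been coded into binary conditions and an outcome, and the condition names and codings are invented. The minimization step that eliminates unnecessary variables would follow from comparing these configurations.

from collections import defaultdict

# Hypothetical binary codings of IEP cases: 1 = present, 0 = absent.
cases = [
    {"speech_therapy": 1, "extended_time": 1, "small_group": 0, "improved": 1},
    {"speech_therapy": 1, "extended_time": 1, "small_group": 0, "improved": 1},
    {"speech_therapy": 0, "extended_time": 1, "small_group": 1, "improved": 0},
    {"speech_therapy": 1, "extended_time": 0, "small_group": 1, "improved": 1},
]
conditions = ["speech_therapy", "extended_time", "small_group"]

# Truth table: each unique configuration of conditions, the number of
# cases showing it, and the outcomes observed for it.
truth_table = defaultdict(lambda: {"n": 0, "outcomes": set()})
for case in cases:
    config = tuple(case[c] for c in conditions)
    truth_table[config]["n"] += 1
    truth_table[config]["outcomes"].add(case["improved"])

for config, info in truth_table.items():
    row = ", ".join(f"{c}={v}" for c, v in zip(conditions, config))
    print(f"{row} -> n={info['n']}, outcomes={sorted(info['outcomes'])}")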

Constant Comparative Analysis

Researchers can utilize constant comparison analysis when it comes to analyzing drawings, photographs, and videos, as well as for talk- based data. It can also be utilized with docu- ments. For example, a school psychology re- searcher may be interested in analyzing IEPs over the years for a child who has been diag- nosed with conduct disorder. Utilizing constant comparative analysis, by chunking and coding the words, then organizing the codes into themes, would assist the researcher in under- standing the progression of how the staff had attempted to help the child.

KWIC

KWIC can be used for observations, as dis- cussed above, and also for documents. For ex- ample, a school psychology researcher may be viewing IEPs to understand better how the staff has been assisting a student. By undertaking a KWIC analysis, the researcher can identify a keyword of interest, then find how the keyword has been utilized throughout the document.
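A minimal keywords-in-context sketch follows, with an invented fragment of IEP text standing in for the documents.

import re

def kwic(text, keyword, window=4):
    """Show a window of words around each occurrence of a keyword."""
    words = re.findall(r"\w+", text.lower())
    hits = []
    for i, word in enumerate(words):
        if word == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"... {left} [{word}] {right} ...")
    return hits

iep_text = "The team will review behavior goals quarterly. Behavior supports include daily check-ins."
for line in kwic(iep_text, "behavior"):
    print(line)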


Word Count

An example of using word count to analyze documents in school psychology research is analyzing the goals, objectives, and mission statements of several schools or school districts within a state in order to determine how frequently the words "school psychology" and "school psychologists" are used, as a way of gauging how much value the administrators place on school psychologists.
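A rough sketch of this kind of count is shown below, with invented mission statements standing in for the real documents.

# Count how often target phrases appear in each district's mission statement.
mission_statements = {
    "District A": "We support every learner; our school psychologists partner with families.",
    "District B": "Excellence in academics and athletics for all students.",
}
targets = ["school psychology", "school psychologists"]

for district, text in mission_statements.items():
    lowered = text.lower()
    counts = {t: lowered.count(t) for t in targets}
    print(district, counts)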

Secondary Data Analysis

As discussed above, secondary data analysis can be used with drawings, photographs, and videos. It can also be conducted with documents. Analyzing documents utilizing secondary data analysis can address new or additional research questions; verify, refute, or refine findings of primary studies via reanalysis of preexisting data; or synthesize multiple research studies.

Table 2
Most Common Qualitative Analyses

Constant comparison analysis: Systematically reducing data to codes, then developing themes from the codes.
Classical content analysis: Counting the number of codes.
Word count: Counting the total number of words used or the number of times a particular word is used.
Keywords-in-context: Identifying keywords and utilizing the surrounding words to understand the underlying meaning of the keyword.
Domain analysis: Utilizing the relationships between symbols and referents to identify domains.
Taxonomic analysis: Creating a system of classification that inventories the domains into a flowchart or diagram to help the researcher understand the relationships among the domains.
Componential analysis: Using matrices and/or tables to discover the differences among the subcomponents of domains.
Conversation analysis: Utilizing the behavior of speakers to describe people's methods for producing orderly social interaction.
Discourse analysis: Selecting representative or unique segments of language use, such as several lines of an interview transcript, and then examining the selected lines in detail for rhetorical organization, variability, accountability, and positioning.
Secondary data analysis: Analyzing non-naturalistic data or artifacts that were derived from previous studies.
Membership categorization analysis: Utilizing the role that interpretations play in making descriptions and the consequences of selecting a particular category (e.g., baby, sister, brother, mother, father = family).
Semiotics: Using talk and text as systems of signs under the assumption that no meaning can be attached to a single term.
Manifest content analysis: Describing observed (i.e., manifest) aspects of communication via objective, systematic, and empirical means (Berelson, 1952).
Latent content analysis: Uncovering underlying meaning of text.
Qualitative comparative analysis: Systematically analyzing similarities and differences across cases, typically being used as a theory-building approach, allowing the analyst to make connections among previously built categories, as well as to test and to develop the categories further.
Narrative analysis: Considering the potential of stories to give meaning to individuals' lives, and treating data as stories, enabling researchers to take account of research participants' own evaluations.
Text mining: Analyzing naturally occurring text in order to discover and capture semantic information.
Micro-interlocutor analysis: Analyzing information stemming from one or more focus groups about which participant(s) responds to each question, the order that each participant responds, the characteristics of the response, the nonverbal communication used, and the like.


Classical Content Analysis

School psychology researchers may use con- tent analysis when interested in the number of times a code has been utilized. For example, when comparing student files, the number of times the code “discipline” has been used may assist the researcher in understanding the nature of discipline occurrences in the school.
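A minimal sketch of the counting step follows, assuming the files have already been coded (the codes and file names are invented).

from collections import Counter

# Codes applied across a small set of student files.
coded_files = {
    "student_01": ["discipline", "attendance", "discipline"],
    "student_02": ["discipline", "counseling_referral"],
}

# Total occurrences of each code, and per-file counts of one code of interest.
code_counts = Counter(code for codes in coded_files.values() for code in codes)
per_file_discipline = {name: codes.count("discipline") for name, codes in coded_files.items()}
print(code_counts.most_common())
print(per_file_discipline)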

Text Mining

Text mining involves analyzing naturally occurring text in order to discover and capture semantic information (see, e.g., Del Rio, Kostoff, Garcia, Ramirez, & Humenik, 2002; Liddy, 2000; Powis & Cairns, 2003; Srinivasan, 2004). Text mining is most often used when a researcher has multiple documents. Because the amount of data can be overwhelming, computer programs (e.g., NVIVO, SPSS) include text mining functions to assist the researcher. This automated process helps the researcher to identify themes by analyzing the words in the text. Text mining is a systematic process that focuses on the specific words in the documents.

School psychology researchers may utilize text mining when faced with analyzing mul- tiple documents. For example, if a school psychology researcher was interested in ana- lyzing themes across IEPs written over the past five years, text mining would help the researcher to identify themes throughout the documents.
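As a rough sketch of the automated step, a simple term-weighting and topic-model pass can surface candidate themes across documents. This example uses scikit-learn rather than the packages named above, and the document texts are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "Student shows progress toward reading fluency goals this quarter.",
    "Reading fluency supports will continue with small group instruction.",
    "Behavior plan targets classroom transitions and peer interactions.",
    "Peer interactions improved after the behavior plan was revised.",
]

# Weight terms by TF-IDF, then factor the matrix into candidate themes.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
model = NMF(n_components=2, random_state=0)
model.fit(tfidf)

terms = vectorizer.get_feature_names_out()
for k, component in enumerate(model.components_):
    top_terms = [terms[i] for i in component.argsort()[::-1][:4]]
    print(f"Candidate theme {k + 1}: {top_terms}")

The candidate term clusters would still need to be interpreted and refined by the researcher; the automated pass only suggests where themes may lie.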

Concluding Thoughts

There are a multitude of analyses available for qualitative researchers in school psychology. Qualitative researchers need to extend themselves past the recurrent use of the same type of analysis. Table 2 presents a list of all of the analyses discussed throughout this paper along with short descriptions. Using a typology, these 18 qualitative data analysis techniques were organized around four major sources of data: talk, observations, drawings/photographs/videos, and documents. We hope that this compendium of possible analyses will help school psychology qualitative researchers to move beyond conducting unfocused analyses (e.g., merely coding the data) and utilize analyses that help to deepen their understanding of the phenomenon of interest.

Furthermore, we hope with this compendium that school psychology researchers can begin utilizing more than one type of analysis in order to triangulate the results (Leech & Onwueg- buzie, in press). We believe utilizing multiple types of analyses in a given study can help the researcher see the data from multiple view- points. Additionally, the use of multiple types of analyses can help to alleviate potential re- searcher bias in the data analysis process.

The purpose of this article was to provide a compendium of analyses available for qualita- tive data. By using multiple types of data anal- yses and, thus, triangulating the results of a qualitative study (i.e., data analysis triangula- tion; Leech & Onwuegbuzie, in press), we be- lieve the results will be more trustworthy and, as a result, more meaningful.

References

Bales, R. (1951). Interaction process analysis. Cambridge, MA: Addison-Wesley.

Barcus, F. E. (1959). Communications content: Anal- ysis of the research 1900–1958. Unpublished doc- toral dissertation, University of Illinois.

Berelson, B. (1952). Content analysis in communi- cative research. New York: Free Press.

Billig, M. (1996). Arguing and thinking: A rhetorical approach to social psychology (2nd ed.). Cambridge, England: Cambridge University Press.

Carley, K. (1993). Coding choices for textual analy- sis: A comparison of content analysis and map analysis. In P. Marsden (Ed.), Sociological meth- odology (pp. 75–126). Oxford: Blackwell.

Cowan, S., & McLeod, J. (2004). Research methods: Discourse analysis. Counselling & Psychotherapy Research Journal, 4, 102.

Creswell, J. W. (2007). Qualitative inquiry and re- search design: Choosing among five approaches (2nd ed.). Thousand Oaks, CA: Sage.

Del Rio, J. A., Kostoff, R. N., Garcia, E. O., Ramirez, A. M., & Humenik, J. A. (2002). Phenomenolog- ical approach to profile impact of scientific re- search: Citation mining. Advances in Complex Sys- tems, 5, 19–42.

DeVault, M. (1994). Narrative analysis. Qualitative Sociology, 17, 315–317.

Fielding, N. G., & Lee, R. M. (1998). Computer analysis and qualitative research. Thousand Oaks, CA: Sage.

Garfinkel, H. (1967). Studies in ethnomethodology. Englewood Cliffs, NJ: Prentice Hall.


Gee, J. P. (2005). An introduction to discourse anal- ysis: Theory and method (2nd ed.). New York: Routledge.

Glaser, B. G. (1978). Theoretical sensitivity. Mill Valley, CA: Sociology Press.

Glaser, B. G. (1992). Discovery of grounded theory. Chicago: Aldine.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative re- search. Chicago: Aldine.

Greimas, A. J. (1966). Semantique structurale. Paris: Larousse.

Guba, E. G. (1981). ERIC/ECTJ annual review pa- per: Criteria for assessing the trustworthiness of naturalistic inquiries. Educational Communication and Technology: A Journal of Theory, Research, and Development, 29, 75–91.

Heaton, J. (2000). Secondary data analysis of qualitative data: A review of the literature. York: Social Policy Research Unit (SPRU), University of York.

Heaton, J. (2004). Reworking qualitative data. Thou- sand Oaks, CA: Sage.

Heritage, J. (1984). Garfinkel and ethnomethodology. Cambridge, UK: Polity.

Kelle, U. (1996). Computer-aided qualitative data analysis. Thousand Oaks, CA: Sage.

Leech, N. L., & Onwuegbuzie, A. J. (in press). An array of qualitative analysis tools: A call for data analysis triangulation. School Psychology Quar- terly.

Liddy, E. D. (2000). Text mining. Bulletin of the American Society for Information Science & Tech- nology, 27, 14–16.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Maxwell, J. A. (1992). Understanding and validity in qualitative research. Harvard Educational Re- view, 62, 279–299.

Merriam, S. (1998). Qualitative research and case study applications in education. San Francisco: Jossey-Bass.

Miethe, T. D., & Drass, K. A. (1999). Exploring the social context of instrumental and expressive ho- micides: An application of qualitative comparative analysis. Journal of Quantitative Criminology, 15, 1–21.

Miles, M. B., & Huberman, A. M. (1994). Qualita- tive data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.

Miles, M. B., & Weitzman, E. A. (1994). Choosing computer programs for qualitative data analysis. In M. B. Miles & M. Huberman (Eds.) Qualitative data analysis: An expanded sourcebook (2nd ed., pp. 311–317). Thousand Oaks, CA: Sage.

Nastasi, B. K., & Schensul, S. L. (Eds.). (2005). Contributions of qualitative research to the validity

of intervention research. Journal of School Psy- chology, 43, 177–195.

National Association of School Psychologists. (2006). NASP-approved graduate programs in school psychology. Communiqué, 35, 44.

Noblit, G., & Hare, R. (1988). Meta-ethnography: Synthesizing qualitative studies. Newbury Park, CA: Sage.

Onwuegbuzie, A. J., Dickinson, W. B., Leech, N. L., & Zoran, A. G. (2007, February). Toward more rigor in focus group research: A new framework for collecting and analyzing focus group data. Paper presented at the annual meeting of the Southwest Educational Research Association, San Antonio, TX.

Onwuegbuzie, A. J., Leech, N. L., & Collins, K. M. T. (in press). Innovative data collection strategies in qualitative research. In W. P. Vogt & M. Williams (Eds.), The handbook of methodolog- ical innovation. Thousand Oaks, CA: Sage.

Onwuegbuzie, A. J., & Teddlie, C. (2003). A frame- work for analyzing data in mixed methods re- search. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behav- ioral research (pp. 351–383). Thousand Oaks, CA: Sage.

Pennebaker, J. W., Mehl, M. R., & Niederhoffer, K. G. (2003). Psychological aspects of natural language use: Our words, our selves. Annual Review of Psychology, 54, 547–577.

Phillips, L. J., & Jorgensen, M. W. (2002). Discourse analysis as theory and method. Thousand Oaks, CA: Sage.

Potter, J. (2004). Discourse analysis as a way of analyzing naturally occurring talk. In D. Silverman (Ed.), Qualitative research: Theory, method and practice (pp. 200 –221). Thousand Oaks, CA: Sage.

Potter, J., & Wetherell, M. (1987). Discourse and social psychology: Beyond attitudes and behav- iour. London: Sage.

Potter, W. J., & Levine-Donnerstein, D. (1999). Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27, 258–284.

Powell, H., Mihalas, S., Onwuegbuzie, A. J., Suldo, S., & Daley, C. E. (in press). Mixed methods research in school psychology: A mixed methods investigation of trends in the literature. Psychology in the Schools.

Powis, T., & Cairns, D. (2003). Mining for meaning: Text mining the relationship between social repre- sentations of reconciliation and beliefs about Ab- originals. Australian Journal of Psychology, 55, 59–62.

Propp, V. I. (1968). Morphology of the folk tale (Rev. ed.). Austin: University of Texas Press.


QSR International Pty Ltd. (2006). NVIVO: Ver- sion 7. Reference guide. Doncaster Victoria, Aus- tralia: Author.

Ragin, C. C. (1987). The comparative method: Mov- ing beyond qualitative and quantitative strategies. Berkeley, CA: University of California Press.

Ragin, C. C. (1989). The logic of the comparative method and the algebra of logic. Journal of Quan- titative Anthropology, 1, 373–398.

Ragin, C. C. (1994). Introduction to qualitative com- parative analysis. In T. Janoski & A. M. Hicks (Eds.), The comparative political economy of the Welfare State: New methodologies and ap- proaches (pp. 299–319). New York: Cambridge University Press.

Rantala, K., & Hellström, E. (2001). Qualitative comparative analysis and a hermeneutic approach to interview data. International Journal of Social Research Methodology, 4, 87–100.

Riessman, C. (1993). Narrative analysis. Newbury Park, CA: Sage.

Ryan, G. W., & Bernard, H. R. (2000). Data management and analysis methods. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 769–802). Thousand Oaks, CA: Sage.

Sacks, H. (1992). Lectures on conversation (Vol. 2). Oxford: Blackwell.

Sacks, H., Schegloff, E. A., & Jefferson, G. (1974). A simple systematics for the organization of turn- taking for conversation. Language, 50, 696–735.

Sandelowski, M., & Barroso, J. (2003). Creating metasummaries of qualitative findings. Nursing Research, 52, 226–233.

Schegloff, E. A. (1968). Sequencing in conversational openings. American Anthropologist, 70, 1075–1095.

Schegloff, E. A. (1972). Notes on a conversational practice: Formulating place. In D. Sudnow (Ed.), Studies in social interaction (pp. 75–199). New York: Free Press.

Sechrest, L., & Sidani, S. (1995). Quantitative and qualitative methods: Is there an alternative? Evaluation and Program Planning, 18, 77–87.

Silverman, D. (1993). Interpreting qualitative data: Methods for analyzing talk, text and interaction. Thousand Oaks, CA: Sage.

Silverman, D. (2001). Interpreting qualitative data: Methods for analyzing talk, text and interaction (2nd ed.). Thousand Oaks, CA: Sage.

Sim, J. (1998). Collecting and analyzing qualitative data: Issues raised by the focus group. Journal of Advanced Nursing, 28, 345–352.

Spradley, J. P. (1979). The ethnographic interview. Fort Worth, TX: Holt, Rinehart and Winston.

Spradley, J. P. (1997). The ethnographic interviewer. Cambridge, MA: International Thomson Publish- ing.

Srinivasan, P. (2004). Generating hypotheses from MEDLINE. Journal of the American Society for Information Science & Technology, 55, 396–413.

Strauss, A. (1987). Qualitative analysis for social scientists. Cambridge, United Kingdom: Cambridge University Press.

Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for develop- ing grounded theory. Thousand Oaks, CA: Sage.

Tesch, R. (1990). Qualitative research: Analysis types and software tools. New York: Falmer.


Statistical Methods in Psychology Journals Guidelines and Explanations

Leland Wilkinson and the Task Force on Statistical Inference APA Board of Scientific Affairs

In the light of continuing debate over the applications of significance testing in psychology journals and following the publication of Cohen's (1994) article, the Board of Scientific Affairs (BSA) of the American Psychological Association (APA) convened a committee called the Task Force on Statistical Inference (TFSI) whose charge was "to elucidate some of the controversial issues surrounding applications of statistics including significance testing and its alternatives; alternative underlying models and data transformation; and newer methods made possible by powerful computers" (BSA, personal communication, February 28, 1996). Robert Rosenthal, Robert Abelson, and Jacob Cohen (cochairs) met initially and agreed on the desirability of having several types of specialists on the task force: statisticians, teachers of statistics, journal editors, authors of statistics books, computer experts, and wise elders. Nine individuals were subsequently invited to join and all agreed. These were Leona Aiken, Mark Appelbaum, Gwyneth Boodoo, David A. Kenny, Helena Kraemer, Donald Rubin, Bruce Thompson, Howard Wainer, and Leland Wilkinson. In addition, Lee Cronbach, Paul Meehl, Frederick Mosteller and John Tukey served as Senior Advisors to the Task Force and commented on written materials.

The TFSI met twice in two years and corresponded throughout that period. After the first meeting, the task force circulated a preliminary report indicating its intention to examine issues beyond null hypothesis significance test- ing. The task force invited comments and used this feed- back in the deliberations during its second meeting.

After the second meeting, the task force recommended several possibilities for further action, chief of which would be to revise the statistical sections of the American Psychological Association Publication Manual (APA, 1994). After extensive discussion, the BSA recommended that “before the TFSI undertook a revision of the APA Publication Manual, it might want to consider publishing an article in American Psychologist, as a way to initiate discussion in the field about changes in current practices of data analysis and reporting” (BSA, personal communica- tion, November 17, 1997).

This report follows that request. The sections in italics are proposed guidelines that the TFSI recommends could be used for revising the APA publication manual or for developing other BSA supporting materials. Following each guideline are comments, explanations, or elaborations assembled by Leland Wilkinson for the task force and under its review. This report is concerned with the use of

statistical methods only and is not meant as an assessment of research methods in general. Psychology is a broad science. Methods appropriate in one area may be inappro- priate in another.

The title and format of this report are adapted from a similar article by Bailar and Mosteller (1988). That article should be consulted, because it overlaps somewhat with this one and discusses some issues relevant to research in psychology. Further detail can also be found in the publi- cations on this topic by several committee members (Abel- son, 1995, 1997; Rosenthal, 1994; Thompson, 1996; Wainer, in press; see also articles in Harlow, Mulaik, & Steiger, 1997).

Method

Design

Make clear at the outset what type of study you are doing. Do not cloak a study in one guise to try to give it the assumed reputation of another. For studies that have mul- tiple goals, be sure to define and prioritize those goals.

There are many forms of empirical studies in psychol- ogy, including case reports, controlled experiments, quasi- experiments, statistical simulations, surveys, observational studies, and studies of studies (meta-analyses). Some are hypothesis generating: They explore data to form or sharpen hypotheses about a population for assessing future hypothe- ses. Some are hypothesis testing: They assess specific a priori hypotheses or estimate parameters by random sampling from that population. Some are meta-analytic: They assess specific a priori hypotheses or estimate parameters (or both) by syn- thesizing the results of available studies.

Some researchers have the impression or have been taught to believe that some of these forms yield information that is more valuable or credible than others (see Cronbach, 1975, for a discussion). Occasionally proponents of some research methods disparage others. In fact, each form of research has its own strengths, weaknesses, and standards of practice.

Jacob Cohen died on January 20, 1998. Without his initiative and gentle persistence, this report most likely would not have appeared. Grant Blank provided Kahn and Udry’s (1986) reference. Gerard Dallal and Paul Velleman offered helpful comments.

Correspondence concerning this report should be sent to the Task Force on Statistical Inference, c/o Sangeeta Panicker, APA Science Di- rectorate, 750 First Street, NE, Washington, DC 20002-4242. Electronic mail may be sent to spanicker@apa.org.


Population

The interpretation of the results of any study depends on the characteristics of the population intended for analysis. Define the population (participants, stimuli, or studies) clearly. If control or comparison groups are part of the design, present how they are defined.

Psychology students sometimes think that a statistical population is the human race or, at least, college sopho- mores. They also have some difficulty distinguishing a class of objects versus a statistical population-that some- times we make inferences about a population through sta- tistical methods, and other times we make inferences about a class through logical or other nonstatistical methods. Populations may be sets of potential observations on peo- ple, adjectives, or even research articles. How a population is defined in an article affects almost every conclusion in that article.

Sample

Describe the sampling procedures and emphasize any in- clusion or exclusion criteria. If the sample is stratified (e.g., by site or gender) describe fully the method and rationale. Note the proposed sample size for each subgroup.

Interval estimates for clustered and stratified random samples differ from those for simple random samples. Statistical software is now becoming available for these purposes. If you are using a convenience sample (whose members are not selected at random), be sure to make that procedure clear to your readers. Using a convenience sam- ple does not automatically disqualify a study from publi- cation, but it harms your objectivity to try to conceal this by implying that you used a random sample. Sometimes the case for the representativeness of a convenience sample can be strengthened by explicit comparison of sample charac- teristics with those of a defined population across a wide range of variables.

Assignment

Random assignment. For research involving causal inferences, the assignment of units to levels of the causal variable is critical. Random assignment (not to be confused with random selection) allows for the strongest possible causal inferences free of extraneous assumptions. If random assignment is planned, provide enough informa- tion to show that the process for making the actual assign- ments is random.

There is a strong research tradition and many exemplars for random assignment in various fields of psychology. Even those who have elucidated quasi-experimental designs in psychological research (e.g., Cook & Campbell, 1979) have repeatedly emphasized the superiority of random assignment as a method for controlling bias and lurking variables. "Random" does not mean "haphazard." Randomization is a fragile condition, easily corrupted deliberately, as we see when a skilled magician flips a fair coin repeatedly to heads, or innocently, as we saw when the drum was not turned sufficiently to randomize the picks in the Vietnam draft lottery. As psychologists, we also know that human participants are incapable of producing a random process (digits, spatial arrangements, etc.) or of recognizing one. It is best not to trust the random behavior of a physical device unless you are an expert in these matters. It is safer to use the pseudorandom sequence from a well-designed computer generator or from published tables of random numbers. The added benefit of such a procedure is that you can supply a random number seed or starting number in a table that other researchers can use to check your methods later.
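A minimal sketch of such a documented, reproducible assignment procedure using a seeded pseudorandom generator follows; the participant labels and seed value are illustrative.

import random

# Assign 20 participants to two conditions with a documented seed.
participants = [f"P{i:02d}" for i in range(1, 21)]
rng = random.Random(20240101)          # report this seed in the write-up
shuffled = participants[:]
rng.shuffle(shuffled)
half = len(shuffled) // 2
assignment = {"treatment": shuffled[:half], "control": shuffled[half:]}
print(assignment)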

Nonrandom assignment. For some research questions, random assignment is not feasible. In such cases, we need to minimize effects of variables that affect the observed relationship between a causal variable and an outcome. Such variables are commonly called confounds or covariates. The researcher needs to attempt to deter- mine the relevant covariates, measure them adequately, and adjust for their effects either by design or by analysis. If the effects of covariates are adjusted by analysis, the strong assumptions that are made must be explicitly stated and, to the extent possible, tested and justified. Describe methods used to attenuate sources of bias, including plans for minimizing dropouts, noncompliance, and missing data.

Authors have used the term “control group” to de- scribe, among other things, (a) a comparison group, (b) members of pairs matched or blocked on one or more nuisance variables, (c) a group not receiving a particular treatment, (d) a statistical sample whose values are adjusted post hoc by the use of one or more covariates, or (e) a group for which the experimenter acknowledges bias exists and perhaps hopes that this admission will allow the reader to make appropriate discounts or other mental adjustments. None of these is an instance of a fully adequate control group.

If we can neither implement randomization nor ap- proach total control of variables that modify effects (out- comes), then we should use the term “control group” cau- tiously. In most of these cases, it would be better to forgo the term and use “contrast group” instead. In any case, we should describe exactly which confounding variables have been explicitly controlled and speculate about which un- measured ones could lead to incorrect inferences. In the absence of randomization, we should do our best to inves- tigate sensitivity to various untestable assumptions.

Measurement

Variables. Explicitly define the variables in the study, show how they are related to the goals of the study, and explain how they are measured. The units of measure- ment of all variables, causal and outcome, should fit the language you use in the introduction and discussion sec- tions of your report.

A variable is a method for assigning to a set of observations a value from a set of possible outcomes. For example, a variable called "gender" might assign each of 50 observations to one of the values male or female. When we define a variable, we are declaring what we are prepared to represent as a valid observation and what we must consider as invalid. If we define the range of a particular variable (the set of possible outcomes) to be from 1 to 7 on a Likert scale, for example, then a value of 9 is not an outlier (an unusually extreme value). It is an illegal value. If we declare the range of a variable to be positive real numbers and the domain to be observations of reaction time (in milliseconds) to an administration of electric shock, then a value of 3,000 is not illegal; it is an outlier.

Naming a variable is almost as important as measuring it. We do well to select a name that reflects how a variable is measured. On this basis, the name “IQ test score” is preferable to “intelligence” and “retrospective self-report of childhood sexual abuse” is preferable to “childhood sexual abuse.” Without such precision, ambiguity in defin- ing variables can give a theory an unfortunate resistance to empirical falsification. Being precise does not make us operationalists. It simply means that we try to avoid exces- sive generalization.

Editors and reviewers should be suspicious when they notice authors changing definitions or names of variables, failing to make clear what would be contrary evidence, or using measures with no history and thus no known prop- erties. Researchers should be suspicious when code books and scoring systems are inscrutable or more voluminous than the research articles on which they are based. Every- one should worry when a system offers to code a specific observation in two or more ways for the same variable.

Instruments. If a questionnaire is used to collect data, summarize the psychometric properties of its scores with specific regard to the way the instrument is used in a population. Psychometric properties include measures of validity, reliability, and any other qualities affecting con- clusions. If a physical apparatus is used, provide enough information (brand, model, design specifications) to allow another experimenter to replicate your measurement process.

There are many methods for constructing instruments and psychometrically validating scores from such mea- sures. Traditional true-score theory and item-response test theory provide appropriate frameworks for assessing reli- ability and internal validity. Signal detection theory and various coefficients of association can be used to assess external validity. Messick (1989) provides a comprehen- sive guide to validity.

It is important to remember that a test is not reliable or unreliable. Reliability is a property of the scores on a test for a particular population of examinees (Feldt & Brennan, 1989). Thus, authors should provide reliability coefficients of the scores for the data being analyzed even when the focus of their research is not psychometric. Interpreting the size of observed effects requires an assessment of the reliability of the scores.

Besides showing that an instrument is reliable, we need to show that it does not correlate strongly with other key constructs. It is just as important to establish that a measure does not measure what it should not measure as it is to show that it does measure what it should.

Researchers occasionally encounter a measurement problem that has no obvious solution. This happens when they decide to explore a new and rapidly growing research area that is based on a previous researcher's well-defined construct implemented with a poorly developed psychometric instrument. Innovators, in the excitement of their discovery, sometimes give insufficient attention to the quality of their instruments. Once a defective measure enters the literature, subsequent researchers are reluctant to change it. In these cases, editors and reviewers should pay special attention to the psychometric properties of the instruments used, and they might want to encourage revisions (even if not by the scale's author) to prevent the accumulation of results based on relatively invalid or unreliable measures.

Procedure. Describe any anticipated sources of attrition due to noncompliance, dropout, death, or other factors. Indicate how such attrition may affect the generalizability of the results. Clearly describe the conditions under which measurements are taken (e.g., format, time, place, personnel who collected data). Describe the specific methods used to deal with experimenter bias, especially if you collected the data yourself.

Despite the long-established findings of the effects of experimenter bias (Rosenthal, 1966), many published stud- ies appear to ignore or discount these problems. For exam- ple, some authors or their assistants with knowledge of hypotheses or study goals screen participants (through per- sonal interviews or telephone conversations) for inclusion in their studies. Some authors administer questionnaires. Some authors give instructions to participants. Some au- thors perform experimental manipulations. Some tally or code responses. Some rate videotapes.

An author’s self-awareness, experience, or resolve does not eliminate experimenter bias. In short, there are no valid excuses, financial or otherwise, for avoiding an op- portunity to double-blind. Researchers looking for guid- ance on this matter should consult the classic book of Webb, Campbell, Schwartz, and Sechrest (1966) and an exemplary dissertation (performed on a modest budget) by Baker (1969).

Power and sample size. Provide information on sample size and the process that led to sample size decisions. Document the effect sizes, sampling and mea- surement assumptions, as well as analytic procedures used in power calculations. Because power computations are most meaningful when done before data are collected and examined, it is important to show how effect-size estimates have been derived from previous research and theory in order to dispel suspicions that they might have been taken from data used in the study or, even worse, constructed to justify a particular sample size. Once the study is analyzed, confidence intervals replace calculated power in describ- ing results.

Largely because of the work of Cohen (1969, 1988), psychologists have become aware of the need to consider power in the design of their studies, before they collect data. The intellectual exercise required to do this stimulates authors to take seriously prior research and theory in their field, and it gives an opportunity, with incumbent risk, for a few to offer the challenge that there is no applicable research behind a given study. If exploration were not disguised in hypothetico-deductive language, then it might have the opportunity to influence subsequent research constructively.

Computer programs that calculate power for various designs and distributions are now available. One can use them to conduct power analyses for a range of reasonable alpha values and effect sizes. Doing so reveals how power changes across this range and overcomes a tendency to regard a single power estimate as being absolutely definitive.
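A rough sketch of such a sensitivity-style calculation for a two-group t test is shown below, using the statsmodels package; the sample size, effect sizes, and alpha levels are illustrative.

from statsmodels.stats.power import TTestIndPower

# Power of a two-sample t test across a range of alpha values and effect sizes.
analysis = TTestIndPower()
n_per_group = 40
for alpha in (0.01, 0.05):
    for d in (0.2, 0.5, 0.8):
        power = analysis.power(effect_size=d, nobs1=n_per_group,
                               alpha=alpha, ratio=1.0)
        print(f"alpha={alpha}, d={d}: power={power:.2f}")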

Many of us encounter power issues when applying for grants. Even when not asking for money, think about power. Statistical power does not corrupt.

Results

Complications

Before presenting results, report complications, protocol violations, and other unanticipated events in data collec- tion. These include missing data, attrition, and nonre- sponse. Discuss analytic techniques devised to ameliorate these problems. Describe nonrepresentativeness statisti- cally by reporting patterns and distributions of missing data and contaminations. Document how the actual anal- ysis differs from the analysis planned before complications arose. The use of techniques to ensure that the reported results are not produced by anomalies in the data (e.g., outliers, points of high influence, nonrandom missing data, selection bias, attrition problems) should be a standard component of all analyses.

As soon as you have collected your data, before you compute any statistics, look at your data. Data screening is not data snooping. It is not an opportunity to discard data or change values to favor your hypotheses. However, if you assess hypotheses without examining your data, you risk publishing nonsense.

Computer malfunctions tend to be catastrophic: A system crashes; a file fails to import; data are lost. Less well-known are more subtle bugs that can be more cata- strophic in the long run. For example, a single value in a file may be corrupted in reading or writing (often in the first or last record). This circumstance usually produces a major value error, the kind of singleton that can make large correlations change sign and small correlations become large.

Graphical inspection of data offers an excellent pos- sibility for detecting serious compromises to data integrity. The reason is simple: Graphics broadcast; statistics narrow- cast. Indeed, some international corporations that must defend themselves against rapidly evolving fraudulent schemes use real-time graphic displays as their first line of defense and statistical analyses as a distant second. The following example shows why.

Figure 1 shows a scatter-plot matrix (SPLOM) of three variables from a national survey of approximately 3,000 counseling clients (Chartrand, 1997). This display, consisting of pairwise scatter plots arranged in a matrix, is found in most modern statistical packages. The diagonal cells contain dot plots of each variable (with the dots stacked like a histogram) and scales used for each variable. The three variables shown are questionnaire measures of respondent's age (AGE), gender (SEX), and number of years together in current relationship (TOGETHER). The graphic in Figure 1 is not intended for final presentation of results; we use it instead to locate coding errors and other anomalies before we analyze our data. Figure 1 is a selected portion of a computer screen display that offers tools for zooming in and out, examining points, and linking to information in other graphical displays and data editors. SPLOM displays can be used to recognize unusual patterns in 20 or more variables simultaneously. We focus on these three only.

Figure 1. Scatter-Plot Matrix (variables: AGE, SEX, TOGETHER). Note. M = male; F = female.
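A minimal sketch of producing this kind of SPLOM screen with pandas and matplotlib follows; the data are simulated to mimic the anomalies described, not taken from the Chartrand (1997) survey.

import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 300
age = rng.integers(18, 80, n).astype(float)
age[rng.random(n) < 0.02] = 99                     # a miscoded "missing" value
together = np.clip(age - rng.integers(18, 40, n), 0, None)
sex = rng.integers(0, 2, n)
df = pd.DataFrame({"AGE": age, "SEX": sex, "TOGETHER": together})

# Pairwise scatter plots with histograms on the diagonal.
scatter_matrix(df, diagonal="hist", figsize=(6, 6))
plt.show()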

There are several anomalies in this graphic. The AGE histogram shows a spike at the right end, which corre- sponds to the value 99 in the data. This coded value most likely signifies a missing value, because it is unlikely that this many people in a sample of 3,000 would have an age of 99 or greater. Using numerical values for missing value codes is a risky practice (Kahn & Udry, 1986).

The histogram for SEX shows an unremarkable division into two values. The histogram for TOGETHER is highly skewed, with a spike at the lower end presumably signifying no relationship. The most remarkable pattern is the triangular joint distribution of TOGETHER and AGE. Triangular joint distributions often (but not necessarily) signal an implication or a relation rather than a linear function with error. In this case, it makes sense that the span of a relationship should not exceed a person's age. Closer examination shows that something is wrong here, however. We find some respondents (in the upper left triangular area of the TOGETHER-AGE panel) claiming that they have been in a significant relationship longer than they have been alive! Had we computed statistics or fit models before examining the raw data, we would likely have missed these reporting errors. There is little reason to expect that TOGETHER would show any anomalous behavior with other variables, and even if AGE and TOGETHER appeared jointly in certain models, we may not have known anything was amiss, regardless of our care in examining residual or other diagnostic plots.
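A minimal sketch of turning these observations into explicit screening checks before any modeling follows; the small data frame is invented.

import pandas as pd

df = pd.DataFrame({
    "AGE":      [34, 99, 27, 45, 52],
    "TOGETHER": [10,  5, 40, 20,  0],
})

# Flag miscoded missing values (AGE = 99) and impossible responses (TOGETHER > AGE).
miscoded_age = df["AGE"] == 99
impossible = df["TOGETHER"] > df["AGE"]
print("AGE coded 99 (likely missing):", int(miscoded_age.sum()))
print("TOGETHER exceeds AGE:", int(impossible.sum()))
df["AGE"] = df["AGE"].mask(miscoded_age)   # recode to missing rather than delete silently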

The main point of this example is that the type of “atheoretical” search for patterns that we are sometimes warned against in graduate school can save us from the humiliation of having to retract conclusions we might ul- timately make on the basis of contaminated data. We are warned against fishing expeditions for understandable rea- sons, but blind application of models without screening our data is a far graver error.

Graphics cannot solve all our problems. Special issues arise in modeling when we have missing data. The two popular methods for dealing with missing data that are found in basic statistics packages-listwise and pairwise deletion of missing values-are among the worst methods available for practical applications. Little and Rubin (1987) have discussed these issues in more detail and offer alter- native approaches.

Analysis

Choosing a minimally sufficient analysis. The enormous variety of modern quantitative methods leaves researchers with the nontrivial task of matching analysis and design to the research question. Although complex designs and state-of-the-art methods are sometimes necessary to address research questions effectively, simpler classical approaches often can provide elegant and sufficient answers to important questions. Do not choose an analytic method to impress your readers or to deflect criticism. If the assumptions and strength of a simpler method are reasonable for your data and research problem, use it. Occam's razor applies to methods as well as to theories.

We should follow the advice of Fisher (1935):

Experimenters should remember that they and their colleagues usually know more about the kind of material they are dealing with than do the authors of text-books written without such personal experience, and that a more complex, or less intelligible, test is not likely to serve their purpose better, in any sense, than those of proved value in their own subject. (p. 49)

There is nothing wrong with using state-of-the-art methods, as long as you and your readers understand how they work and what they are doing. On the other hand, don’t cling to obsolete methods (e.g., Newman-Keuls or Duncan post hoc tests) out of fear of learning the new. In any case, listen to Fisher. Begin with an idea. Then pick a method.

Computer programs. There are many good computer programs for analyzing data. More important than choosing a specific statistical package is verifying your results, understanding what they mean, and knowing how they are computed. If you cannot verify your results by intelligent "guesstimates," you should check them against the output of another program. You will not be happy if a vendor reports a bug after your data are in print (not an infrequent event). Do not report statistics found on a printout without understanding how they are computed or what they mean. Do not report statistics to a greater precision than is supported by your data simply because they are printed that way by the program. Using the computer is an opportunity for you to control your analysis and design. If a computer program does not provide the analysis you need, use another program rather than let the computer shape your thinking.

There is no substitute for common sense. If you can- not use rules of thumb to detect whether the result of a computation makes sense to you, then you should ask yourself whether the procedure you are using is appropriate for your research. Graphics can help you to make some of these determinations; theory can help in other cases. But never assume that using a highly regarded program ab- solves you of the responsibility for judging whether your results are plausible. Finally, when documenting the use of a statistical procedure, refer to the statistical literature rather than a computer manual; when documenting the use of a program, refer to the computer manual rather than the statistical literature.

Assumptions. You should take efforts to assure that the underlying assumptions required for the analysis are reasonable given the data. Examine residuals carefully. Do not use distributional tests and statistical indexes of shape (e.g., skewness, kurtosis) as a substitute for examining your residuals graphically.

Using a statistical test to diagnose problems in model fitting has several shortcomings. First, diagnostic signifi- cance tests based on summary statistics (such as tests for homogeneity of variance) are often impractically sensitive; our statistical tests of models are often more robust than our statistical tests of assumptions. Second, statistics such as skewness and kurtosis often fail to detect distributional irregularities in the residuals. Third, statistical tests depend on sample size, and as sample size increases, the tests often will reject innocuous assumptions. In general, there is no substitute for graphical analysis of assumptions.

Modern statistical packages offer graphical diagnostics for helping to determine whether a model appears to fit data appropriately. Most users are familiar with residual plots for linear regression modeling. Fewer are aware that John Tukey's paradigmatic equation, data = fit + residual, applies to a more general class of models and has broad implications for graphical analysis of assumptions. Stem-and-leaf plots, box plots, histograms, dot plots, spread/level plots, probability plots, spectral plots, autocorrelation and cross-correlation plots, co-plots, and trellises (Chambers, Cleveland, Kleiner, & Tukey, 1983; Cleveland, 1995; Tukey, 1977) all serve at various times for displaying residuals, whether they arise from analysis of variance (ANOVA), nonlinear modeling, factor analysis, latent variable modeling, multidimensional scaling, hierarchical linear modeling, or other procedures.

Hypothesis tests. It is hard to imagine a situation in which a dichotomous accept-reject decision is better than reporting an actual p value or, better still, a confidence interval. Never use the unfortunate expression "accept the null hypothesis." Always provide some effect-size estimate when reporting a p value. Cohen (1994) has written on this subject in this journal. All psychologists would benefit from reading his insightful article.

Effect sizes. Always present effect sizes for pri- mary outcomes. If the units of measurement are meaningful on a practical level (e.g., number of cigarettes smoked per day), then we usually prefer an unstandardized measure (regression coefficient or mean difference) to a standard- ized measure (r or d). It helps to add brief comments that place these effect sizes in a practical and theoretical context.

APA’s (1994) publication manual included an important new “encouragement” (p. 18) to report effect sizes. Unfortunately, empirical studies of various journals indicate that the effect size of this encouragement has been negligible (Keselman et al., 1998; Kirk, 1996; Thompson & Snyder, 1998). We must stress again that reporting and interpreting effect sizes in the context of previously reported effects is essential to good research. It enables readers to evaluate the stability of results across samples, designs, and analyses. Reporting effect sizes also informs power analyses and meta-analyses needed in future research.

Fleiss (1994), Kirk (1996), Rosenthal (1994), and Snyder and Lawson (1993) have summarized various measures of effect sizes used in psychological research. Consult these articles for information on computing them. For a simple, general purpose display of the practical meaning of an effect size, see Rosenthal and Rubin (1982). Consult Rosenthal and Rubin (1994) for information on the use of “counternull intervals” for effect sizes, as alternatives to confidence intervals.

Interval estimates. Interval estimates should be given for any effect sizes involving principal outcomes. Provide intervals for correlations and other coefficients of association or variation whenever possible.

Confidence intervals are usually available in statistical software; otherwise, confidence intervals for basic statistics can be computed from typical output. Comparing confidence intervals from a current study to intervals from previous, related studies helps focus attention on stability across studies (Schmidt, 1996). Collecting intervals across studies also helps in constructing plausible regions for population parameters. This practice should help prevent the common mistake of assuming a parameter is contained in a confidence interval.
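For a correlation, an interval estimate can be computed from nothing more than r and n via the Fisher z transformation, as in this brief Python sketch (values are hypothetical; numpy and scipy assumed):

import numpy as np
from scipy import stats

r, n = 0.42, 120                      # hypothetical correlation and sample size
z = np.arctanh(r)                     # Fisher z transform of r
se = 1.0 / np.sqrt(n - 3)             # approximate standard error in the z metric
z_lo, z_hi = z + np.array([-1, 1]) * stats.norm.ppf(0.975) * se
ci = np.tanh([z_lo, z_hi])            # back-transform the limits to the r scale
print(f"r = {r:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")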

Multiplicities. Multiple outcomes require special handling. There are many ways to conduct reasonable inference when faced with multiplicity (e.g., Bonferroni correction of p values, multivariate test statistics, empirical Bayes methods). It is your responsibility to define and justify the methods used.
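One defensible way to handle a family of p values is to adjust them explicitly and report which procedure was used. The sketch below (hypothetical p values; statsmodels assumed) applies the Bonferroni correction named above and, for comparison, the Benjamini-Hochberg false discovery rate procedure cited later in this section.

from statsmodels.stats.multitest import multipletests

# Hypothetical p values from several outcome measures in one study.
p_values = [0.001, 0.012, 0.034, 0.047, 0.21, 0.38]

for method in ("bonferroni", "fdr_bh"):
    reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method=method)
    print(method, [f"{p:.3f}" for p in p_adj], list(reject))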

Statisticians speak of the curse of dimensionality. To paraphrase, multiplicities are the curse of the social sciences. In many areas of psychology, we cannot do research on important problems without encountering multiplicity. We often encounter many variables and many relationships.

One of the most prevalent strategies psychologists use to handle multiplicity is to follow an ANOVA with pairwise multiple-comparison tests. This approach is usually wrong for several reasons. First, pairwise methods such as Tukey’s honestly significant difference procedure were designed to control a familywise error rate based on the sample size and number of comparisons. Preceding them with an omnibus F test in a stagewise testing procedure defeats this design, making it unnecessarily conservative. Second, researchers rarely need to compare all possible means to understand their results or assess their theory; by setting their sights large, they sacrifice their power to see small. Third, the lattice of all possible pairs is a straitjacket; forcing themselves to wear it often restricts researchers to uninteresting hypotheses and induces them to ignore more fruitful ones.

As an antidote to the temptation to explore all pairs, imagine yourself restricted to mentioning only pairwise comparisons in the introduction and discussion sections of your article. Higher order concepts such as trends, structures, or clusters of effects would be forbidden. Your theory would be restricted to first-order associations. This scenario brings to mind the illogic of the converse, popular practice of theorizing about higher order concepts in the introduction and discussion sections and then supporting that theorizing in the results section with atomistic pairwise comparisons. If a specific contrast interests you, examine it. If all interest you, ask yourself why. For a detailed treatment of the use of contrasts, see Rosenthal, Rosnow, and Rubin (in press).
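If a specific contrast is the question of interest, it can be tested directly rather than buried in all pairwise comparisons. The following Python sketch (hypothetical data for four ordered groups; numpy and scipy assumed) tests a single linear-trend contrast using the pooled within-group error:

import numpy as np
from scipy import stats

# Hypothetical scores for four ordered groups (e.g., increasing dose or practice).
groups = [np.array(g) for g in (
    [4.1, 5.0, 4.6, 5.2, 4.8],
    [5.5, 5.1, 6.0, 5.7, 5.4],
    [6.2, 5.9, 6.8, 6.4, 6.1],
    [7.0, 6.6, 7.3, 6.9, 7.2],
)]
weights = np.array([-3, -1, 1, 3])                 # linear-trend contrast weights

means = np.array([g.mean() for g in groups])
ns = np.array([len(g) for g in groups])
df_error = sum(len(g) - 1 for g in groups)
ms_error = sum((len(g) - 1) * g.var(ddof=1) for g in groups) / df_error  # pooled error variance

estimate = weights @ means                          # value of the contrast
se = np.sqrt(ms_error * np.sum(weights**2 / ns))
t = estimate / se
p = 2 * stats.t.sf(abs(t), df_error)
print(f"Linear trend: estimate = {estimate:.2f}, t({df_error}) = {t:.2f}, p = {p:.4f}")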

There is a variant of this preoccupation with all possible pairs that comes with the widespread practice of printing p values or asterisks next to every correlation in a correlation matrix. Methodologists frequently point out that these p values should be adjusted through Bonferroni or other corrections. One should ask instead why any reader would want this information. The possibilities are as follows:

1. All the correlations are “significant.” If so, this can be noted in a single footnote.

2. None of the correlations are “significant.” Again, this can be noted once. We need to be reminded that this situation does not rule out the possibility that combinations or subsets of the correlations may be “significant.” The definition of the null hypothesis for the global test may not include other potential null hypotheses that might be rejected if they were tested.

3. A subset of the correlations is “significant.” If so, our purpose in appending asterisks would seem to be to mark this subset. Using “significance” tests in this way is really a highlighting technique to facilitate pattern recognition. If this is your goal in presenting results, then it is better served by calling attention to the pattern (perhaps by sorting the rows and columns of the correlation matrix) and assessing it directly. This would force you, as well, to provide a plausible explanation.
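A small illustration of the sorting idea: rather than starring individual correlations, reorder the rows and columns of the matrix so that related variables sit together and the pattern can be assessed directly. The sketch below (simulated data; pandas and scipy assumed) orders variables by average-linkage clustering of 1 - |r|, though any sensible ordering key would serve.

import numpy as np
import pandas as pd
from scipy.cluster.hierarchy import linkage, leaves_list
from scipy.spatial.distance import squareform

# Simulated data: two underlying factors, four indicators each, columns shuffled.
rng = np.random.default_rng(3)
f1 = rng.normal(size=(200, 1))
f2 = rng.normal(size=(200, 1))
noise = rng.normal(scale=0.8, size=(200, 8))
raw = np.hstack([f1 + noise[:, :4], f2 + noise[:, 4:]])
cols = [f"v{i + 1}" for i in range(8)]
shuffled = rng.permutation(8)
data = pd.DataFrame(raw[:, shuffled], columns=[cols[i] for i in shuffled])

corr = data.corr()
dist = squareform(1 - corr.abs().to_numpy(), checks=False)  # condensed distance matrix
order = leaves_list(linkage(dist, method="average"))        # cluster-based row/column order
print(corr.iloc[order, order].round(2))                     # block structure is now visible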

There is a close relative of all possible pairs called “all possible combinations.” We see this occasionally in the publishing of higher-way factorial ANOVAs that include all possible main effects and interactions. One should not imagine that placing asterisks next to conventionally significant effects in a five-way ANOVA, for example, skirts the multiplicity problem. A typical five-way fully factorial design applied to a reasonably large sample of random data has about an 80% chance of producing at least one significant effect by conventional F tests at the .05 critical level (Hurlburt & Spiegel, 1976).
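The roughly 80% figure is easy to check by simulation. The following Python sketch (a rough Monte Carlo illustration, not a reproduction of Hurlburt and Spiegel's analysis; pandas and statsmodels assumed) fits a fully factorial five-way ANOVA to pure-noise data and counts how often at least one effect reaches p < .05:

from itertools import product
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(4)
factors = [f"f{i}" for i in range(1, 6)]
cells = pd.DataFrame(list(product([0, 1], repeat=5)), columns=factors)
design = pd.concat([cells] * 5, ignore_index=True)     # balanced design: 5 observations per cell, n = 160

n_sims, hits = 100, 0
for _ in range(n_sims):
    df = design.copy()
    df["y"] = rng.normal(size=len(df))                  # pure-noise outcome: every null hypothesis is true
    fit = smf.ols("y ~ C(f1) * C(f2) * C(f3) * C(f4) * C(f5)", data=df).fit()
    pvals = anova_lm(fit)["PR(>F)"].dropna()            # p values for all 31 main effects and interactions
    hits += bool((pvals < 0.05).any())

print(f"Share of null simulations with at least one 'significant' effect: {hits / n_sims:.2f}")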

Underlying the widespread use of all-possible-pairs methodology is the legitimate fear among editors and reviewers that some researchers would indulge in fishing expeditions without the restraint of simultaneous test procedures. We should indeed fear the well-intentioned, indiscriminate search for structure more than the deliberate falsification of results, if only for the prevalence of wishful thinking over nefariousness. There are Bonferroni and recent related methods (e.g., Benjamini & Hochberg, 1995) for controlling this problem statistically. Nevertheless, there is an alternative institutional restraint. Reviewers should require writers to articulate their expectations well enough to reduce the likelihood of post hoc rationalizations. Fishing expeditions are often recognizable by the promiscuity of their explanations. They mix ideas from scattered sources, rely heavily on common sense, and cite fragments rather than trends.

If, on the other hand, a researcher fools us with an intriguing result caught while indiscriminately fishing, we might want to fear this possibility less than we do now. The enforcing of rules to prevent chance results in our journals may at times distract us from noticing the more harmful possibility of publishing bogus theories and methods (ill-defined variables, lack of parsimony, experimenter bias, logical errors, artifacts) that are buttressed by evidently impeccable statistics. There are enough good ideas behind fortuitous results to make us wary of restricting them. This is especially true in those areas of psychology where lives and major budgets are not at stake. Let replications promote reputations.

Causality. Inferring causality from nonrandomized designs is a risky enterprise. Researchers using nonrandomized designs have an extra obligation to explain the logic behind covariates included in their designs and to alert the reader to plausible rival hypotheses that might explain their results. Even in randomized experiments, attributing causal effects to any one aspect of the treatment condition requires support from additional experimentation.

It is sometimes thought that correlation does not prove causation but “causal modeling” does. Despite the admonitions of experts in this field, researchers sometimes use goodness-of-fit indices to hunt through thickets of competing models and settle on a plausible substantive explanation only in retrospect. McDonald (1997), in an analysis of a historical data set, showed the dangers of this practice and the importance of substantive theory. Scheines, Spirtes, Glymour, Meek, and Richardson (1998; discussions following) offer similar cautions from a theoretical standpoint.

A generally accepted framework for formulating questions concerning the estimation of causal effects in social and biomedical science involves the use of “potential outcomes,” with one outcome for each treatment condition. Although the perspective has old roots, including use by Fisher and Neyman in the context of completely randomized experiments analyzed by randomization-based inference (Rubin, 1990b), it is typically referred to as “Rubin’s causal model” or RCM (Holland, 1986). For extensions to observational studies and other forms of inference, see Rubin (1974, 1977, 1978). This approach is now relatively standard, even for settings with instrumental variables and multistage models or simultaneous equations.

The crucial idea is to set up the causal inference problem as one of missing data, as defined in Rubin’s (1976) article, where the missing data are the values of the potential outcomes under the treatment not received and the observed data include the values of the potential outcomes under the received treatments. Causal effects are defined on a unit level as the comparison of the potential outcomes under the different treatments, only one of which can ever be observed (we cannot go back in time to expose the unit to a different treatment). The essence of the RCM is to formulate causal questions in this way and to use formal statistical methods to draw probabilistic causal inferences, whether based on Fisherian randomization-based (permutation) distributions, Neymanian repeated-sampling randomization-based distributions, frequentist superpopulation sampling distributions, or Bayesian posterior distributions (Rubin, 1990a).
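A toy simulation can make the missing-data formulation concrete. In the Python sketch below (simulated values, purely illustrative), each unit has two potential outcomes but only the one for the received treatment is observed; under randomization, the difference in observed group means estimates the average causal effect.

import numpy as np

rng = np.random.default_rng(5)
n = 1000
y0 = rng.normal(loc=50, scale=10, size=n)          # potential outcome if untreated
y1 = y0 + 5 + rng.normal(scale=2, size=n)          # potential outcome if treated (unit effects average about 5)

z = rng.integers(0, 2, n)                           # randomized treatment assignment
y_obs = np.where(z == 1, y1, y0)                    # the other potential outcome is the missing datum

true_ate = (y1 - y0).mean()                         # knowable only because this is a simulation
est_ate = y_obs[z == 1].mean() - y_obs[z == 0].mean()
print(f"True average causal effect: {true_ate:.2f}; randomization-based estimate: {est_ate:.2f}")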

If a problem of causal inference cannot be formulated in this manner (as the comparison of potential outcomes under different treatment assignments), it is not a problem of inference for causal effects, and the use of “causal” should be avoided. To see the confusion that can be created by ignoring this requirement, see the classic Lord’s paradox and its resolution by the use of the RCM in Holland and Rubin’s (1983) chapter.

The critical assumptions needed for causal inference are essentially always beyond testing from the data at hand because they involve the missing data. Thus, especially when formulating causal questions from nonrandomized data, the underlying assumptions needed to justify any causal conclusions should be carefully and explicitly argued, not in terms of technical properties like “uncorrelated error terms,” but in terms of real world properties, such as how the units received the different treatments.

The use of complicated causal-modeling software rarely yields any results that have any interpretation as causal effects. If such software is used to produce anything beyond an exploratory description of a data set, the bases for such extended conclusions must be carefully presented and not just asserted on the basis of imprecise labeling conventions of the software.

Figure 2. Graphics for Regression (GRE score plotted against GPA, panels A and B). Note. GRE = Graduate Record Examination; GPA = grade point average; PhD and No PhD = completed and did not complete the doctoral degree; Y = yes; N = no.

Tables and figures. Although tables are commonly used to show exact values, well-drawn figures need not sacrifice precision. Figures attract the reader’s eye and help convey global results. Because individuals have different preferences for processing complex information, it often helps to provide both tables and figures. This works best when figures are kept small enough to allow space for both formats. Avoid complex figures when simpler ones will do. In all figures, include graphical representations of interval estimates whenever possible.

Bailar and Mosteller (1988) offer helpful information on improving tables in publications. Many of their recommendations (e.g., sorting rows and columns by marginal averages, rounding to a few significant digits, avoiding decimals when possible) are based on the clearly written tutorials of Ehrenberg (1975, 1981).
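Two of those recommendations, sorting rows and columns by their marginal averages and rounding to a few digits, are simple to apply with a data-frame library, as in this hypothetical pandas sketch:

import numpy as np
import pandas as pd

# A small hypothetical table of group-by-measure values.
rng = np.random.default_rng(6)
table = pd.DataFrame(rng.uniform(1, 100, size=(5, 4)),
                     index=[f"group_{g}" for g in "ABCDE"],
                     columns=[f"measure_{m}" for m in "wxyz"])

row_order = table.mean(axis=1).sort_values(ascending=False).index   # sort rows by marginal mean
col_order = table.mean(axis=0).sort_values(ascending=False).index   # sort columns by marginal mean
tidy = table.loc[row_order, col_order].round(1)                     # round to a few digits
print(tidy)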

A common deficiency of graphics in psychological publications is their lack of essential information. In most cases, this information is the shape or distribution of the data. Whether from a negative motivation to conceal irregularities or from a positive belief that less is more, omitting shape information from graphics often hinders scientific evaluation. Chambers et al. (1983) and Cleveland (1995) offer specific ways to address these problems. The examples in Figure 2 do this using two of the most frequent graphical forms in psychology publications.

Figure 2 shows plots based on data from 80 graduate students in a Midwestern university psychology department, collected from 1969 through 1978. The variables are scores on the psychology advanced test of the Graduate Record Examination (GRE), the undergraduate grade point average (GPA), and whether a student completed a doctoral degree in the department (PhD). Figure 2A shows a format appearing frequently in psychology journal articles: two regression lines, one for each group of students. This graphic conveys nothing more than four numbers: the slopes and intercepts of the regression lines. Because the scales have no physical meaning, seeing the slopes of lines (as opposed to reading the numbers) adds nothing to our understanding of the relationship.

Figure 2B shows a scatter plot of the same data with a locally weighted scatter plot smoother for each PhD group (Cleveland & Devlin, 1988). This robust curvilinear regression smoother (called LOESS) is available in modern statistics packages. Now we can see some curvature in the relationships. (When a model that includes a linear and quadratic term for GPA is computed, the apparent interaction involving the PhD and no PhD groups depicted in Figure 2A disappears.) The graphic in Figure 2B tells us many things. We note the unusual student with a GPA of less than 4.0 and a psychology GRE score of 800; we note the less surprising student with a similar GPA but a low GRE score (both of whom failed to earn doctoral degrees); we note the several students who had among the lowest GRE scores but earned doctorates, and so on. We might imagine these kinds of cases in Figure 2A (as we should in any data set containing error), but their location and distribution in Figure 2B tells us something about this specific data set.
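A smoother of this kind is straightforward to add to a grouped scatter plot. The sketch below (simulated data standing in for the GPA-GRE example; statsmodels and matplotlib assumed) overlays a LOWESS curve, a common locally weighted smoother, for each group:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
n = 40
for label, marker in (("PhD", "o"), ("No PhD", "x")):
    # Simulated GPA and GRE scores for one group (hypothetical values).
    gpa = rng.uniform(3.4, 5.0, n)
    gre = 300 + 80 * gpa + rng.normal(scale=60, size=n)
    plt.scatter(gpa, gre, marker=marker, label=label)
    smoothed = lowess(gre, gpa, frac=0.6)            # returns (x, fitted) pairs sorted by x
    plt.plot(smoothed[:, 0], smoothed[:, 1])

plt.xlabel("GPA")
plt.ylabel("GRE")
plt.legend()
plt.show()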

Figure 3A shows another popular format for displaying data in psychology journals. It is based on the data set used for Figure 2. Authors frequently use this format to display the results of t tests or ANOVAs. For factorial ANOVAs, this format gives authors an opportunity to represent interactions by using a legend with separate symbols for each line. In more laboratory-oriented psychology journals (e.g., animal behavior, neuroscience), authors sometimes add error bars to the dots representing the means.

Figure 3. Graphics for Groups (GRE score by PhD completion, panels A and B). Note. GRE = Graduate Record Examination; N = no; Y = yes.

Figure 3B adds to the line graphic a dot plot representing the data and 95% confidence intervals on the means of the two groups (using the t distribution). The graphic reveals a left skewness of GRE scores in the PhD group. Although this skewness may not be severe enough to affect our statistical conclusions, it is nevertheless noteworthy. It may be due to ceiling effects (although note the 800 score in the no PhD group) or to some other factor. At the least, the reader has a right to be able to evaluate this kind of information.

There are other ways to include data or distributions in graphics, including box plots and stem-and-leaf plots (Tukey, 1977) and kernel density estimates (Scott, 1992; Silverman, 1986). Many of these procedures are found in modern statistical packages. It is time for authors to take advantage of them and for editors and reviewers to urge authors to do so.
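In the same spirit as Figure 3B, the sketch below (simulated GRE-like scores; scipy and matplotlib assumed) plots the raw data for each group as a jittered dot plot and overlays the group mean with a 95% t-based confidence interval:

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(8)
groups = {"No PhD": rng.normal(560, 80, 35), "PhD": rng.normal(620, 70, 45)}   # hypothetical scores

for i, (label, vals) in enumerate(groups.items()):
    jitter = rng.uniform(-0.08, 0.08, vals.size)
    plt.plot(np.full(vals.size, float(i)) + jitter, vals, "o", alpha=0.4)      # raw data as a dot plot
    m, se = vals.mean(), stats.sem(vals)
    half = stats.t.ppf(0.975, vals.size - 1) * se                              # 95% CI half-width
    plt.errorbar(i, m, yerr=half, fmt="s", color="black", capsize=4)           # mean with interval

plt.xticks(range(len(groups)), list(groups.keys()))
plt.ylabel("GRE")
plt.show()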

Discussion

Interpretation

When you interpret effects, think of credibility, generalizability, and robustness. Are the effects credible, given the results of previous studies and theory? Do the features of the design and analysis (e.g., sample quality, similarity of the design to designs of previous studies, similarity of the effects to those in previous studies) suggest the results are generalizable? Are the design and analytic methods robust enough to support strong conclusions?

Novice researchers err either by overgeneralizing their results or, equally unfortunately, by overparticularizing. Explicitly compare the effects detected in your inquiry with the effect sizes reported in related previous studies. Do not be afraid to extend your interpretations to a general class or population if you have reasons to assume that your results apply. This general class may consist of populations you have studied at your site, other populations at other sites, or even more general populations. Providing these reasons in your discussion will help you stimulate future research for yourself and others.

Conclusions

Speculation may be appropriate, but use it sparingly and explicitly. Note the shortcomings of your study. Remember, however, that acknowledging limitations is for the purpose of qualifying results and avoiding pitfalls in future research. Confession should not have the goal of disarming criticism. Recommendations for future research should be thoughtful and grounded in present and previous findings. Gratuitous suggestions (“further research needs to be done . . .”) waste space. Do not interpret a single study’s results as having importance independent of the effects reported elsewhere in the relevant literature. The thinking presented in a single study may turn the movement of the literature, but the results in a single study are important primarily as one contribution to a mosaic of study effects.

Some had hoped that this task force would vote to recommend an outright ban on the use of significance tests in psychology journals. Although this might eliminate some abuses, the committee thought that there were enough counterexamples (e.g., Abelson, 1997) to justify forbearance. Furthermore, the committee believed that the problems raised in its charge went beyond the simple question of whether to ban significance tests.

The task force hopes instead that this report will induce editors, reviewers, and authors to recognize practices that institutionalize the thoughtless application of statistical methods. Distinguishing statistical significance from theoretical significance (Kirk, 1996) will help the entire research community publish more substantial results. Encouraging good design and logic will help improve the quality of conclusions. And promoting modern statistical graphics will improve the assessment of assumptions and the display of results.

More than 50 years ago, Hotelling, Bartky, Deming, Friedman, and Hoel (1948) wrote, “Unfortunately, too many people like to do their statistical work as they say their prayers-merely substitute in a formula found in a highly respected book written a long time ago” (p. 103). Good theories and intelligent interpretation advance a discipline more than rigid methodological orthodoxy. If editors keep in mind Fisher’s (1935) words quoted in the Analysis section, then there is less danger of methodology substituting for thought. Statistical methods should guide and discipline our thinking but should not determine it.

REFERENCES

Abelson, R. P. (1995). Statistics as principled argument. Hillsdale, NJ: Erlbaum.

Abelson, R. P. (1997). On the surprising longevity of flogged horses: Why there is a case for the significance test. Psychological Science, 23, 12-15.

American Psychological Association. (1994). Publication manual of the American Psychological Association (4th ed.). Washington, DC: Author.

Bailar, J. C., & Mosteller, F. (1988). Guidelines for statistical reporting in articles for medical journals: Amplifications and explanations. Annals of Internal Medicine, 108, 266-273.

Baker, B. L. (1969). Symptom treatment and symptom substitution in enuresis. Journal of Abnormal Psychology, 74, 42-49.

Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society, 57(Series B), 289-300.

Chambers, J., Cleveland, W., Kleiner, B., & Tukey, P. (1983). Graphical methods for data analysis. Monterey, CA: Wadsworth.

Chartrand, J. M. (1997). National sample survey. Unpublished raw data.

Cleveland, W. S. (1995). Visualizing data. Summit, NJ: Hobart Press.

Cleveland, W. S., & Devlin, S. (1988). Locally weighted regression analysis by local fitting. Journal of the American Statistical Association, 83, 596-640.

Cohen, J. (1969). Statistical power analysis for the behavioral sciences. New York: Academic Press.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cohen, J. (1994). The earth is round (p < .05). American Psychologist, 49, 997-1003.

Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally.

Cronbach, L. J. (1975). Beyond the two disciplines of psychology. American Psychologist, 30, 116-127.

Ehrenberg, A. S. C. (1975). Data reduction: Analyzing and interpreting statistical data. New York: Wiley.

Ehrenberg, A. S. C. (1981). The problem of numeracy. American Statistician, 35, 67-71.

Feldt, L. S., & Brennan, R. L. (1989). Reliability. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 105-146). Washington, DC: American Council on Education.

Fisher, R. A. (1935). The design of experiments. Edinburgh, Scotland: Oliver & Boyd.

Fleiss, J. L. (1994). Measures of effect size for categorical data. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 245-260). New York: Sage.

Harlow, L. L., Mulaik, S. A., & Steiger, J. H. (1997). What if there were no significance tests? Hillsdale, NJ: Erlbaum.

Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81, 945-960.

Holland, P. W., & Rubin, D. B. (1983). On Lord’s paradox. In H. Wainer & S. Messick (Eds.), Principals of modern psychological measurement (pp. 3-25). Hillsdale, NJ: Erlbaum.

Hotelling, H., Bartky, W., Deming, W. E., Friedman, M., & Hoel, P. (1948). The teaching of statistics. Annals of Mathematical Statistics, 19, 95-115.

Hurlburt, R. T., & Spiegel, D. K. (1976). Dependence of F ratios sharing a common denominator mean square. American Statistician, 20, 74-78.

Kahn, J. R., & Udry, J. R. (1986). Marital coital frequency: Unnoticed outliers and unspecified interactions lead to erroneous conclusions. American Sociological Review, 51, 734-737.

Keselman, H. J., Huberty, C. J., Lix, L. M., Olejnik, S., Cribbie, R., Donahue, B., Kowalchuk, R. K., Lowman, L. L., Petoskey, M. D., Keselman, J. C., & Levin, J. R. (1998). Statistical practices of educational researchers: An analysis of their ANOVA, MANOVA, and ANCOVA analyses. Review of Educational Research, 68, 350-386.

Kirk, R. E. (1996). Practical significance: A concept whose time has come. Educational and Psychological Measurement, 56, 746-759.

Little, R. J. A., & Rubin, D. B. (1987). Statistical analysis with missing data. New York: Wiley.

McDonald, R. P. (1997). Haldane’s lungs: A case study in path analysis. Multivariate Behavioral Research, 32, 1-38.

Messick, S. (1989). Validity. In R. L. Linn (Ed.), Educational measurement (3rd ed., pp. 13-103). Washington, DC: American Council on Education.

Rosenthal, R. (1966). Experimenter effects in behavioral research. New York: Appleton-Century-Crofts.

Rosenthal, R. (1994). Parametric measures of effect size. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 231-244). New York: Sage.

Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (in press). Contrasts and effect sizes in behavioral research: A correlational approach. New York: Cambridge University Press.

Rosenthal, R., & Rubin, D. B. (1982). A simple general purpose display of magnitude of experimental effect. Journal of Educational Psychology, 74, 166-169.

Rosenthal, R., & Rubin, D. B. (1994). The counternull value of an effect size: A new statistic. Psychological Science, 5, 329-334.

Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688-701.

Rubin, D. B. (1976). Inference and missing data. Biometrika, 63, 581-592.

Rubin, D. B. (1977). Assignment of treatment group on the basis of a covariate. Journal of Educational Statistics, 2, 1-26.

Rubin, D. B. (1978). Bayesian inference for causal effects: The role of randomization. Annals of Statistics, 6, 34-58.

Rubin, D. B. (1990a). Formal modes of statistical inference for causal effects. Journal of Statistical Planning and Inference, 25, 279-292.

Rubin, D. B. (1990b). Neyman (1923) and causal inference in experiments and observational studies. Statistical Science, 5, 472-480.

Scheines, R., Spirtes, P., Glymour, C., Meek, C., & Richardson, T. (1998). The TETRAD project: Constraint based aids to causal model specification. Multivariate Behavioral Research, 33, 65-117.

Schmidt, F. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for the training of researchers. Psychological Methods, 1, 115-129.


Scott, D. W. (1992). Multivariate density estimation: Theory, practice, and visualization. New York: Wiley.

Silverman, B. W. (1986). Density estimation for statistics and data analysis. New York: Chapman & Hall.

Snyder, P., & Lawson, S. (1993). Evaluating results using corrected and uncorrected effect size estimates. Journal of Experimental Education, 61, 334-349.

Thompson, B. (1996). AERA editorial policies regarding statistical significance testing: Three suggested reforms. Educational Researcher, 25(2), 26-30.

Thompson, B., & Snyder, P. A. (1998). Statistical significance and reliability analyses in recent JCD research articles. Journal of Counseling and Development, 76, 436-441.

Tukey, J. W. (1977). Exploratory data analysis. Reading, MA: Addison-Wesley.

Wainer, H. (in press). One cheer for null hypothesis significance testing. Psychological Methods.

Webb, E. J., Campbell, D. T., Schwartz, R. D., & Sechrest, L. (1966). Unobtrusive measures: Nonreactive research in the social sciences. Chicago: Rand McNally.
