


Qualitative data analysis is the treatment of data captured in qualitative research. Common approaches include interpretive techniques, recursive abstraction, and mechanical techniques.

Interpretive techniques

The most common analysis of qualitative data is observer impression: expert or bystander observers examine the data, interpret them by forming an impression, and report that impression in a structured and sometimes quantitative form.

Coding

Main article: Coding (social sciences)

Coding is an interpretive technique that both organizes the data and provides a means to introduce the interpretations of it into certain quantitative methods. Most coding requires the analyst to read the data and demarcate segments within it. Each segment is labeled with a "code" – usually a word or short phrase that suggests how the associated data segments inform the research objectives. When coding is complete, the analyst prepares reports through a mix of summarizing the prevalence of codes, discussing similarities and differences in related codes across distinct original sources or contexts, and examining relationships among codes.
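As an illustrative sketch only (not drawn from this article), coded segments can be stored as simple records and code prevalence summarized by counting; the segment texts, code labels, and source names below are hypothetical.

from collections import Counter

# Hypothetical coded segments: each piece of raw text carries one or more codes.
coded_segments = [
    {"source": "interview_01", "text": "I felt supported by my team.", "codes": ["social support"]},
    {"source": "interview_01", "text": "Deadlines kept me up at night.", "codes": ["stress", "workload"]},
    {"source": "interview_02", "text": "My manager checked in weekly.", "codes": ["social support"]},
]

# Summarize the prevalence of each code across all segments.
prevalence = Counter(code for seg in coded_segments for code in seg["codes"])
print(prevalence.most_common())  # e.g., [('social support', 2), ('stress', 1), ('workload', 1)]

# Compare where each code appears across distinct sources.
by_source = {}
for seg in coded_segments:
    for code in seg["codes"]:
        by_source.setdefault(code, set()).add(seg["source"])
print(by_source["social support"])  # the sources in which this code occurs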

Some qualitative data that is highly structured (e.g., open-ended responses from surveys or tightly defined interview questions) is typically coded without additional segmenting of the content. In these cases, codes are often applied as a layer on top of the data. Quantitative analysis of these codes is typically the capstone analytical step for this type of qualitative data.

Contemporary qualitative data analyses are sometimes supported by computer programs, termed Computer Assisted Qualitative Data Analysis Software. These programs do not supplant the interpretive nature of coding but rather are aimed at enhancing the analyst’s efficiency at data storage/retrieval and at applying the codes to the data. Many programs offer efficiencies in editing and revising coding, which allow for work sharing, peer review, and recursive examination of data.

A frequent criticism of the coding method is that it seeks to transform qualitative data into quantitative data, thereby draining the data of its variety, richness, and individual character. Analysts respond to this criticism by thoroughly documenting their code definitions and linking those codes soundly to the underlying data, thereby restoring some of the richness that would be absent from a mere list of codes.

Recursive abstraction

Some qualitative datasets are analyzed without coding. A common method here is recursive abstraction, where datasets are summarized; those summaries are then further summarized and so on. The end result is a more compact summary that would have been difficult to accurately discern without the preceding steps of distillation.
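The recursive structure can be sketched as below, assuming a placeholder summarize() that stands in for the analyst's own interpretive summary of each chunk; the chunk size and sample field notes are hypothetical.

def summarize(texts):
    # Placeholder: a real analyst would write an interpretive summary of these texts.
    # Joining and truncating keeps the sketch self-contained and runnable.
    return " ".join(texts)[:200]

def recursive_abstraction(items, chunk_size=4):
    # Repeatedly summarize chunks of summaries until a single compact summary remains.
    while len(items) > 1:
        items = [summarize(items[i:i + chunk_size]) for i in range(0, len(items), chunk_size)]
    return items[0]

dataset = [f"Field note {n}: ..." for n in range(1, 17)]  # hypothetical raw notes
final_summary = recursive_abstraction(dataset)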

A frequent criticism of recursive abstraction is that the final conclusions are several times removed from the underlying data. It is true that poor initial summaries will yield an inaccurate final report; qualitative analysts respond, as with the coding method, by documenting the reasoning behind each summary step and citing examples from the data where statements were included in, or excluded from, the intermediate summary.

Mechanical techniques

Main article: Computer Assisted Qualitative Data Analysis Software

Some techniques rely on leveraging computers to scan and sort large sets of qualitative data. At their most basic level, mechanical techniques rely on counting words, phrases, or coincidences of tokens within the data. Often referred to as content analysis, the output from these techniques is amenable to many advanced statistical analyses.
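A minimal content-analysis sketch, assuming the hypothetical documents and crude tokenizer below; real analyses typically add stop-word removal, stemming, or phrase matching before any statistical step.

import re
from collections import Counter

documents = [
    "The staff were friendly and the service was fast.",
    "Service was slow but the staff apologized.",
]  # hypothetical documents

def tokens(text):
    # Lowercase and split on non-letters; a crude but transparent tokenizer.
    return re.findall(r"[a-z]+", text.lower())

counts = Counter(tok for doc in documents for tok in tokens(doc))
print(counts.most_common(5))  # token frequencies, ready for further statistical analysis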

Mechanical techniques are particularly well suited to a few scenarios. One is datasets that are simply too large for a human to analyze effectively, or whose analysis would be cost-prohibitive relative to the value of the information they contain. Another is when the chief value of a dataset lies in the extent to which it contains "red flags" (e.g., searching for reports of certain adverse events within lengthy patient journals from a clinical trial) or "green flags" (e.g., searching for mentions of a particular brand in positive reviews of marketplace products).
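A similar sketch for flag screening; the flag terms and journal entries are hypothetical, and a real application would rely on a curated term dictionary rather than this short list.

red_flags = {"dizziness", "rash", "nausea"}  # hypothetical adverse-event terms

journal_entries = [
    ("patient_007", "Day 3: mild rash on left arm, otherwise fine."),
    ("patient_012", "No complaints this week."),
]

# Flag every (patient, term) pair where a red-flag term appears in an entry.
flagged = [
    (patient, term)
    for patient, entry in journal_entries
    for term in red_flags
    if term in entry.lower()
]
print(flagged)  # e.g., [('patient_007', 'rash')]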

A frequent criticism of mechanical techniques is the absence of a human interpreter. While skilled practitioners can write sophisticated software to mimic some human decisions, the bulk of the "analysis" remains nonhuman. Analysts respond by demonstrating the value of their methods relative to either (a) hiring and training a human team to analyze the data or (b) leaving the data untouched, with any actionable insights undiscovered.


This page uses Creative Commons Licensed content from Wikipedia (view authors).