Structured Interviewing
Adrian Angold MRCPsych.
E. Jane Costello PhD.
Helen Link Egger M.D.
Introduction
Interviews are necessary tools for all forms of clinical medical diagnosis, and they have a singularly prominent position in psychiatry because of the lack of other “tests” for psychiatric disorders. All structured interviews used in psychiatry have their roots in the phenomenological clinical interview, although different interviews take rather different routes in the standardization of the collection of phenomenological data relevant to diagnosis. The questioning strategies involved now represent a mature technology, and the sometimes acrimonious methodological debates that once characterized the field have been replaced by the recognition that each approach has advantages and disadvantages that must be weighed in selecting a structured interview for each individual application.
The Limitations of Unstructured Diagnostic Interviews
It has been known for a long time that clinical training is sufficiently varied that colleagues of the same discipline, working in the same establishment, are often unable to agree about an individual’s diagnosis, even when presented with exactly the same information (1–4). An apparent difference in rates of schizophrenia between New York and London proved to be almost entirely due to differences in diagnostic criteria applied to observed phenomenology (5). Observations such as these motivated the development of the formalized sets of diagnostic criteria familiar to us today from the DSM-IV and ICD-10.
The literature on medical decision-making had already shown that clinicians suffer from a number of information
collection biases: 1) They tend to come to diagnostic determinations before they have collected all the relevant information, 2) they tend then to focus on collecting information to confirm that diagnosis (confirmatory bias), 3) they tend to ignore disconfirmatory information, 4) they combine information in idiosyncratic ways, and 5) they tend to make judgments based on the most readily available cognitive patterns (the availability heuristic). Further problems arise because of a tendency to see correlations where none exist (illusory correlation), and to miss real correlations (6).
Added to all these problems is the fact that, even today, standard diagnostic manuals do not provide very detailed descriptions of how to assess psychopathology at the symptom level. All of the criteria for oppositional defiant disorder, for instance, begin with the word “often.” But how often is “often”? There is a great deal of room for clinicians to adopt very different decision rules about when to regard such symptoms as being present.
In the face of all these difficulties it became apparent that methods were required to standardize the collection, quantification, and combination of diagnostic information. As a result, all structured interviews aim to:
Structure information coverage, so that all interviewers will have collected all relevant information (both confirmatory and disconfirmatory) from all subjects.
Define the ways in which relevant information is to be collected.
Structure the process by which relevant confirmatory and disconfirmatory information is combined to produce a final diagnosis.
Early Structured Diagnostic Interviews
In the early days of structured interviews, it was supposed that clinicians would be using them, because it was felt that only they had the necessary training and experience to decide about the presence or absence of symptoms, even when quite detailed definitions were provided. The interview schedule served as a tool to guide the clinician interviewer in determining whether symptoms were present, but the interviewer made the decisions, on the basis of information provided by the child or adult. Interviews of this sort, like the Present State Examination (7) and the Renard (8) for adults, and the Isle of Wight interview for children (9,10), were the first to be developed, since they sprang naturally from clinical practice. They were called semi-structured because the interviewer was allowed latitude in the specific form of the questions used.
Although the PSE and Isle of Wight interviews were used extensively in community surveys, it was clear that the use of clinician interviewers created both logistic and budgetary problems. Large scale epidemiological studies, such as the Epidemiologic Catchment Area (ECA) studies (11), mandated the use of nonclinician (“lay”) interviewers. Some felt that such interviewers would be incapable of making the judgments about symptoms, so, following methodologies used by political and marketing surveys, interviews were developed that required only that the interviewer ask a set of fixed questions in a preset order, and collect the simple answers to those questions. In such interviews, it is the questions put to the subject that are structured, and the interviewer makes no decisions about the presence of symptoms. Hence they came to be called highly or fully structured. The Diagnostic Interview Schedule (DIS) was the paradigmatic example of this sort of interview in adult psychiatry (12), while the original Diagnostic Interview Schedule for Children and Adolescents (DICA) was the first child-oriented example (13,14).
Emergence of the Diagnostic Interview with the Child
Until the late 1960s interviews and questionnaires directed to a parent or teacher about a child’s behavior and observation of the child’s behavior were the predominant methods of assessment in child and adolescent psychiatry. Verbal information from the child was typically regarded as being only supplemental, or material for psychodynamic interpretation (15). More attention was paid to playing with the child than to the collection of information through direct questioning. In 1968, a key transitional paper reported on the reliability and validity of the Isle of Wight interview with the child (9). Here the behavior of the child in a face-to-face interview was examined directly, but little was made of the factual content of the child’s reports. In 1975, Herjanic and her colleagues asked “are children reliable reporters” of factual information, and presented evidence that they are (16). Since then, a great deal of work has confirmed the importance of children’s self-reports as a source of factual information, with the result that fact-finding (as opposed to interpretative) interviews with both parents and children are now regarded as being of equal weight in the diagnostic process, at least from late childhood (prior to about age 9, children are incapable of completing such “adult-style” interviews). The one exception is in the evaluation of attention deficit hyperactivity disorder (ADHD) symptoms, where child reports have been found to be of little help (17,18). Even here, however, the recent growth of interest in ADHD in adolescence and adulthood has led to the development of new measures in this area (e.g., Conners, 1997) (19).
Disagreement among Informants and Its Implications
Until the 1980s, agreement between child and parent reports of symptomatology was widely regarded as being a test of the validity of child reports (9,14). However, it soon became apparent that only low levels of agreement among informants (correlation coefficients around 0.3 for agreement among children, parents and teachers) could be expected (20,21). It is now considered that low levels of agreement among different informants about the child’s clinical state are to be expected and do not invalidate the reports of any of them. Rather, each key informant presents a particular view of the child’s problems. Indeed, it is precisely because agreement among informants is low that multiple informants are needed. Were agreement very high, taking the history from more than one informant would be redundant.
The problem is that disagreement among informants means that one has to decide how to weight the information from each informant in arriving at a diagnosis. Since it is uncommon for informants to invent fictitious symptoms, the simple rule of regarding a symptom as being present if any informant reports it usually suffices. When symptoms are combined to make diagnoses, the usual procedure is to ignore the source, and to add up all positive symptoms from any source. Thus, a diagnosis of a major depressive episode (which requires the presence of at least five symptoms) might be made on the basis of three relevant symptoms being reported by the child (say depressed mood, anhedonia and excessive guilt), with two other relevant symptoms (perhaps sleep and appetite disturbances) being reported by another informant (typically a parent and/or teacher). Although some interview developers have recommended “reconciliation” discussions involving the interviewer, the parent and the child to clear up discrepancies between their reports (22), such discussions are problematic. Reconciliation requires one informant to modify his or
her story, but that means admitting being wrong, or at least uninformed. The knowledge that such a discussion will occur could cause informants (e.g., drug-using adolescents) to withhold important information that they did not wish other informants (such as their parents) to hear about. Finally, in most research applications, one wishes to assure informants that what they say will not be revealed to anyone else, in which case a reconciliation interview is ruled out.
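The “or” rule and symptom-counting procedure described above amount to a simple algorithm. The following Python sketch illustrates it for the major depression example (the five-symptom threshold and symptom names follow DSM-IV, but the data structures and function names are hypothetical, and a real diagnosis also requires duration, cardinal-symptom, and impairment criteria not modeled here):

```python
# Symptoms relevant to a DSM-IV major depressive episode (simplified list).
MDD_SYMPTOMS = {
    "depressed_mood", "anhedonia", "appetite_disturbance",
    "sleep_disturbance", "psychomotor_change", "fatigue",
    "excessive_guilt", "poor_concentration", "suicidal_ideation",
}

def combine_reports(*reports):
    """The 'or' rule: a symptom counts as present if ANY informant reports it."""
    present = set()
    for report in reports:
        present |= {s for s in report if s in MDD_SYMPTOMS}
    return present

def meets_mdd_symptom_count(*reports):
    """Ignore the source and add up positive symptoms from any informant;
    a major depressive episode requires at least five."""
    return len(combine_reports(*reports)) >= 5

# Child reports three relevant symptoms; parent reports two others.
child = {"depressed_mood", "anhedonia", "excessive_guilt"}
parent = {"sleep_disturbance", "appetite_disturbance"}
print(meets_mdd_symptom_count(child, parent))  # True: five symptoms across informants
```

Note that neither informant alone reports enough symptoms; only the combined count crosses the diagnostic threshold, which is exactly why the weighting question matters.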
The remainder of this chapter is concerned with the description of key points relating to general psychiatric diagnostic interviews, that is, those that cover a broad range of the common disorders of childhood and adolescence. A number of interviews and observational systems exist for more specialized tasks (for instance the Autism Diagnostic Interview and the Autism Diagnostic Observation Schedule (23,24)), but such instruments will not be considered further here.
A Typology of Interviews
As we have already seen, a distinction between semistructured and highly structured interviews has found its way into the description and discussion of diagnostic interviewing techniques. However, these terms are not very helpful, because they imply that the key difference between types of interview is the amount of structure they impose. The real issue is not the amount of structure, but rather who makes the final decision as to whether a symptom is present.
Respondent-Based Interviews
In interviews where the questions are absolutely prespecified, it is the respondent who makes the final decision (typically by answering yes or no to each question). The interviewer makes no such decisions, but merely reads the questions. Since the decisions as to the presence or absence of psychopathology lie with the respondent in such interviews, we refer to them as being respondent-based. The Diagnostic Interview Schedule for Children (DISC (25)), the computer-assisted version of the Diagnostic Interview Schedule for Children and Adolescents (DICA (26)), and the Dominic-R (27) are the three representatives of this approach.
Interviewer-Based and Glossary-Based Interviews
We call interviews that require the interviewer to make an informed decision based on what the respondent says interviewer-based. The interviewer is expected to question until s/he can decide whether a symptom meeting the definitions provided by the interview (or known to them from their training) is present. This group of interviews includes the Anxiety Disorders Interview Schedule (ADIS (28)), the Child and Adolescent Psychiatric Assessment (CAPA (29)), the Child Assessment Schedule (CAS (30,31)), the paper and pencil (not the computerized) versions of the Diagnostic Interview Schedule for Children and Adolescents (DICA (26)) and its close relative the Missouri Assessment of Genetics Interview for Children (MAGIC), the Interview Schedule for Children and Adolescents (ISCA (32)), the various versions of the Kiddie Schedule for Affective Disorders and Schizophrenia (K-SADS (33)), and the Pictorial Instrument for Children and Adolescents (PICA-IIIR (34)). Three of these interviewer-based interviews (the K-SADS-P IVR, the DICA, and the CAPA) provide extensive sets of definitions of symptoms and/or detailed guidance on the conduct of the interview, and we call these glossary-based. Such glossaries are particularly important when an interviewer-based interview is to be used by nonclinician interviewers because they provide detailed guidance as to what the interviewer is supposed to be looking for in making symptom ratings. Nonclinician interviewers have been shown to be able to make such “clinical” judgments with high reliability when they have received adequate training with such glossaries (35).
The distinction between interviewer- and respondent-based interviews is not hard and fast in actual practice, because there has been considerable cross-fertilization between these approaches. For instance, the Child and Adolescent Psychiatric Assessment (CAPA), which has its roots in the interviewer-based tradition, includes a subset of questions that are to be asked verbatim of all subjects, as in a respondent-based interview, but then allows further questioning for clarification. On the other hand, the DICA, which had previously been a respondent-based interview, now requires interviewers to question much more flexibly, and is now an interviewer-based instrument (26). Though the distinction between interviewer- and respondent-based interviews provides a useful rough-and-ready typology, it is really better to consider interviews as lying at various locations along three dimensions: 1) Degree of specification of questions, 2) degree of definition of symptom concepts and 3) degree of flexibility in questioning permitted to the interviewers. Interviews that provide extensive definitions and require interviewers to make judgments lie in the interviewer-based region of that three-dimensional space, while those that specify every question and allow no interviewer deviation from those questions lie in the respondent-based region.
Pictorial Interviews
More recently, respondent-based child self-report interviews that add pictorial cues have joined the assessment armamentarium. The most developed pictorial interview at this time is the Dominic-R (27,36,37), which is intended for use with 6–11-year-olds. Pictures representing psychopathology relevant to seven diagnoses are shown to the child, and questions about whether each symptom is present are read at the same time. Because no frequency, duration, or onset data are collected, it is not yet clear how such information should be combined with diagnostic information from other sources. This is, however, a general problem for interviews with younger children, because before the age of 8 or 9, they simply cannot provide all the frequency, dating and timing information that full diagnostic interviews require. Although diagnostic test-retest reliabilities cannot be reported for the Dominic-R, its item reliabilities are respectable in comparison with those reported from studies of older children with other interviews.
The Pictorial Instrument for Children and Adolescents (PICA-IIIR), for children aged 6–16, adopts a somewhat similar approach, but the questions to be asked with the pictures are more loosely specified, and it is intended to be used by clinicians. It covers a broader range of diagnoses than the Dominic-R, but no test-retest reliability data are yet available (34).
Parent-Only Interviews for Younger Children
Standard practice in adult psychiatry is to rely upon a single key informant for structured diagnostic interviews: only the person who is the subject of the interview is interviewed. Parent and teacher interviews are added in child and adolescent psychiatry because the child him- or herself is regarded as being a limited informant. The point is that, at any age, interviews need to be conducted with whoever is needed to provide adequate, reliable information coverage. We have already noted
that younger children cannot provide all the information necessary for making DSM-style diagnoses, but there is no reason why the child’s lack of capacity in this regard should invalidate the use of the best available informants (parents and sometimes teachers) for diagnostic purposes. After all, in clinical practice diagnosis for young children is very largely based on parent reports of the child’s behavior, supplemented by office observations and teacher reports. Following this logic, several groups have modified interviews originally developed for use with parents of older children to allow structured diagnostic assessments down to age 2 (38–40). A test-retest study of one of these (the Preschool Age Psychiatric Assessment (PAPA)) suggests that preschoolers’ diagnoses assessed in this way are just as reliable as those of older children (38).
Screened Interviews
The Children’s Interview for Psychiatric Syndromes (ChIPS (41)) was designed as a screening tool covering 20 DSM-IV Axis I disorders. “Cardinal questions” concerning symptoms most often seen in children with a particular disorder are asked at the beginning of each section. If the answers to these screening questions are in the negative, then the rest of that section is skipped. No test-retest reliability data are yet available for the ChIPS. A similarly screened version of the CAPA is also available, but in practice it has been found to save only about 10 minutes of interview time, so the loss of information resulting from not asking about all symptoms may not really be worth the time saved.
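The screening logic used by the ChIPS and the screened CAPA is essentially a conditional skip: cardinal (screen) questions are asked first, and the remaining items in a section are administered only if at least one screen answer is positive. A minimal sketch (section and item names are hypothetical; `ask` stands in for whatever mechanism poses a question and records a yes/no answer):

```python
def administer_section(ask, cardinal_items, remaining_items):
    """Ask the cardinal (screen) questions first; skip the rest of the
    section if every screen answer is negative."""
    answers = {item: ask(item) for item in cardinal_items}
    if not any(answers.values()):
        return answers  # section skipped; remaining symptoms never asked
    answers.update({item: ask(item) for item in remaining_items})
    return answers

# Hypothetical usage: a respondent who denies both screen symptoms
# is never asked the remaining oppositional defiant disorder items.
denies_everything = lambda item: False
result = administer_section(
    denies_everything,
    cardinal_items=["often_loses_temper", "often_argues_with_adults"],
    remaining_items=["often_defies_rules", "often_blames_others"],
)
print(sorted(result))  # only the two screen items were administered
```

The trade-off the text describes falls directly out of this structure: whenever the screen is negative, the unasked `remaining_items` are lost, which is why the modest time saving of the screened CAPA may not justify the missing symptom data.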
Computerized Interviews
Computer-assisted psychiatric interviews (CAPI) employ an interviewer to read questions from the screen and enter the appropriate codes into the computer as the interview progresses. The machine takes the interviewer to the appropriate stem questions, and stores the responses in a database. There is no need for bulky interview schedules to be copied and carried around, and data entry is completed during the interview (or during coding of the interview later in the office with some interviews). Furthermore, the computer will not accidentally skip parts of the interview, or accidentally vary the order of its presentation. On the other hand, computerized interviews can be programmed to vary the order in which sections are presented deliberately, so as to reduce the order of presentation effects observed when respondents learn that saying no tends to shorten the interview. However, interviews for use with children do not currently incorporate this potential feature. Recent advances in programming technology for structured interviews mean that even the most “interviewer-based” interviews can now be produced in CAPI formats. For instance, the CAPI version of the Preschool Age Psychiatric Assessment (PAPA) allows interviewers to write and store text notes with a stylus on a tablet PC; similar versions of the CAPA and YAPA are also available. When some interview schedules run to over 300 pages, the costs of buying computers can soon be offset against savings on schedule reproduction and data entry. The DISC has become progressively more complex over the last 20 years (largely because of the ever-increasing complexity of the DSMs), and, except as discussed below, the DISC-IV is now supposed always to be completed in its CAPI format, because it is really too difficult to administer it effectively in a paper-and-pencil format. There is also a CAPI version of the DICA, but this differs from the paper-and-pencil version of the interview in being fully respondent based (42). 
Given the advantages of CAPI administration, we predict that paper and pencil will soon disappear as a means of interview administration.
The next level of computerization is referred to as audio computer-administered survey interviewing (ACASI). Here no interviewer is used at all. Rather, digitized audio recordings of the questions (sometimes even with digitized video of an interviewer) are played back by the computer as the written form of the question is displayed. The respondent enters a response to the question, which is saved to the database. Obviously, such an approach can only be adopted with a respondent-based interview, and the DISC provides the paradigmatic example of this approach (43).
Introduction to the Interviews
Here we present a brief introduction to each of the diagnostic interviews, with a focus on their characteristic response formats.
The Schedule for Affective Disorders and Schizophrenia for School-Age Children (Kiddie-SADS, K-SADS)
The K-SADS “family” of interviews consists of a group of very diverse assessments. Indeed, the only features that all of the current versions of the K-SADS share are the name, the ability to make DSM-IV diagnoses, and the fact that all were designed to be administered by clinicians. The original version of the K-SADS (the K-SADS-P (44)) was a downward extension of the adult Schedule for Affective Disorders and Schizophrenia (SADS) and focused on the Research Diagnostic Criteria (45). Note that the “P” in its title stands for present (not parent). It was designed for use with children aged 6–17, but covered only a relatively limited range of symptoms and diagnoses. It was revised to cover DSM-IIIR (46) and DSM-IV.
K-SADS-P IVR
The version of the K-SADS-P most recently developed by Ambrosini and colleagues is called the K-SADS-P IVR (46). This version is closest conceptually to the original K-SADS-P in including quite detailed definitions of severity codings for each symptom. The modal form for these symptom codings is a six-point scale, involving judgments about various combinations of intensity, duration, frequency, environmental responsiveness, psychosocial impairment and observed behavior.
K-SADS-E
The K-SADS-E (47) (E for epidemiologic) collects ratings of the present episode of any disorder and the worst past episode. This interview was never even remotely similar in format to the K-SADS-P, because it rated only the presence or absence of symptoms, rather than employing the carefully defined severity codings of the K-SADS-P. The latest edition is DSM-IV compatible, and allows the current episode (not individual symptoms) to be rated as mild, moderate or severe.
K-SADS-PL
A group in Pittsburgh have developed the K-SADS-PL (present and lifetime) as a sort of cross between the K-SADS-P and the K-SADS-E (48,49). Symptom ratings have been reduced
to three-point scales (typically not at all, subthreshold, threshold), and fairly minimal anchoring definitions of each point are provided. An initial 82-item screen interview, which allows skipping of substantial symptom areas, is also available.
WASH-U-KSADS
The WASH-U-KSADS gives rather brief definitions of symptoms and codes level of severity, but the severity codings are idiosyncratic, and bear little relation to those in other versions of the K-SADS.
Columbia K-SADS
The symptom “definitions” provided are often simply restatements of the DSM-IV criteria. Sometimes (particularly in relation to depression) a little more guidance is given. Symptom severity is typically rated on what appears to be a 6-point scale like the K-SADS-P and K-SADS-P IVR, but closer inspection reveals that two of the points are usually defined only as being intermediate between two other points. Thus, only four points (one being symptom absence) are really defined.
Although K-SADS interviews were developed for use by clinicians, some have also been used with lay interviewers (31).
The K-SADS family differs from most interviews (but not the ISCA) in directing that the parent should be interviewed first and then the child should be seen by the same interviewer, who is then expected to resolve any discrepancies between the child’s reports and those of the parent. The interviewer then completes a record representing his/her summation of the two interviews. This procedure is highly dependent on clinical judgment, and means that the process of combining the information is not structured. It also seems likely to bias the results of the interview in favor of the parental reports, and some workers have instead scored the interviews with the parent and the child separately (50).
The Child and Adolescent Psychiatric Assessment (CAPA) and Its Congeners
The CAPA is one of an integrated group of instruments developed to assess a variety of risk factors for, manifestations of, and outcomes of child and adolescent psychiatric disorders. In addition to the usual symptom and impairment assessments, it also includes extensive ratings of the family environment and relationships, family psychosocial problems, and life events (including traumatic events and physical and sexual abuse). A separate module called the Child and Adolescent Impact Assessment (CAIA (51)) measures the impact of the child’s problems on the family, while the Child and Adolescent Services Assessment (CASA (52,53)) covers service use for mental health problems in multiple sectors and settings. Psychosocial impairment in 17 domains of functioning is measured at both the syndromic level and overall. In the interview with the child, 62 items reflecting the child’s observed behavior during the interview are also coded. In order to facilitate completion of the interview by nonclinicians, the CAPA provides a more molecular approach to symptom codings. Extensive symptom definitions are given in a glossary and on the schedule, and rules are specified to allow nonclinicians to make separate codings of the intensity, frequency, duration, date of onset of symptoms, and psychosocial impairment resulting from them. The CAPA emphasizes getting descriptions and examples of possible pathology to ensure that codings are not based on the informant’s misunderstanding of what was being asked about (35,54,55).
A version of the CAPA has been developed for use with young adults (the Young Adult Psychiatric Assessment, YAPA), and a substantially modified version is now available for use with the parents of preschool children (The Preschool Age Psychiatric Assessment, PAPA (38)). The latter includes assessment of a number of areas of particular relevance to preschoolers that are not included in any other diagnostic interview. In addition, a version of the CAPA with empirically derived screen items is available, which allows sections to be skipped if screen symptoms are absent. A streamlined version of the CAPA for collecting data for twin studies has also been developed (56).
Diagnostic Interview for Children and Adolescents (DICA)
The DICA started out as a respondent-based interview over 20 years ago (16,42,57,58). Since then it has been progressively modified so that its paper and pencil version is now an interviewer-based interview (26,59,60). However, there is also a computer-based version of the DICA that remains fully respondent based (26,60). In addition, the group responsible for the development of the DICA has produced a modification called the Missouri Assessment of Genetics Interview for Children (MAGIC). The major difference between the DICA and the MAGIC is that the MAGIC has a specifications manual, which includes a great deal of guidance on how to elicit key features of symptoms, and a variety of clarifications of coding instructions (26).
