Research Methods Flashcards

1
Q

Descriptive Statistics

A

Describe a data set and what's going on within it, but don't let you say anything about people who aren't included in the data set.
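A minimal Python sketch (not from the course material) of what "describing a data set" looks like in practice, using made-up scores:

```python
# Hypothetical scores; the summaries below describe only this sample,
# not anyone outside the data set.
import numpy as np

scores = np.array([12, 15, 9, 20, 14, 11, 17])
print("mean:", scores.mean())
print("sd:", scores.std(ddof=1))          # sample standard deviation
print("range:", scores.min(), "-", scores.max())
```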

2
Q

Inferential Statistics

A

Is the difference we observed between the two groups dependable, or did we observe it by chance? Is there really a difference in the underlying population?

3
Q

Null hypothesis

A

There is no difference between these two groups

4
Q

P-value

A

The probability of obtaining a difference as big as the one observed when there actually is no difference.
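A hedged sketch of this definition using a permutation test on made-up groups: group labels are shuffled so that "no difference" is true by construction, and the p-value is the share of shuffles producing a difference at least as big as the observed one.

```python
import numpy as np

rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=30)   # hypothetical group 1
group_b = rng.normal(loc=0.5, scale=1.0, size=30)   # hypothetical group 2

observed = abs(group_a.mean() - group_b.mean())
pooled = np.concatenate([group_a, group_b])

n_perm, count = 10_000, 0
for _ in range(n_perm):
    shuffled = rng.permutation(pooled)               # null: labels are arbitrary
    diff = abs(shuffled[:30].mean() - shuffled[30:].mean())
    if diff >= observed:
        count += 1

p_value = count / n_perm   # probability of a difference this big when there is no real difference
print(p_value)
```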

5
Q

Basic Statistical Tests that Developmental Psychopathologists Use?

A

1) Differences between groups: t-tests (two groups), ANOVA (more than two groups).
2) Relationships between continuous variables rather than group differences: are continuous variables related to each other? For example, is a greater number of conduct symptoms associated with poorer language skills? (See the sketch below.)
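A minimal SciPy sketch of these tests on made-up data; variable names such as conduct_symptoms are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1, g2, g3 = (rng.normal(loc=m, size=40) for m in (0.0, 0.4, 0.8))

t, p_t = stats.ttest_ind(g1, g2)       # difference between two groups
f, p_f = stats.f_oneway(g1, g2, g3)    # difference among more than two groups

conduct_symptoms = rng.poisson(3, size=40)
language_skills = -0.5 * conduct_symptoms + rng.normal(size=40)
r, p_r = stats.pearsonr(conduct_symptoms, language_skills)  # association between continuous variables

print(p_t, p_f, r)
```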

6
Q

Risk & Protective Factors

A

Risk factors: Increase the chance of a negative outcome. Low SES functions as a risk factor for childhood maltreatment, depression, and anxiety.
Protective factors: Decrease the chance of a negative outcome. Family social support is protective against the onset of depression; secure attachment is protective against the risk of later anxiety.

7
Q

Different types of protective factors

A
  1. Regular protective.
  2. Protective-stabilizing: When the attribute is present, no matter how high the risk is, you still experience the positive outcome equally.
  3. Protective-enhancing: Uncommon. Well-being goes up as exposure to something bad increases; as things get tough, you start doing better and better.
  4. Protective-reactive: As stress goes up, your well-being goes down, but not quite as steeply as with a regular protective factor. The factor prevents harmful outcomes when risk is low, but as risk gets higher, the effects of the protective factor begin to break down.
8
Q

Main effect

A

Association between an independent variable and a dependent variable, e.g. conduct disorder and educational attainment, or family income and educational attainment.

9
Q

Interaction/Moderation

A

The association between one IV and the DV varies as a function of another variable (the moderator), e.g. social support changing the relationship between stress and depression.
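A minimal sketch of testing moderation with an interaction term in a regression, using statsmodels on simulated data; the variable names (stress, support, depression) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
stress = rng.normal(size=n)
support = rng.normal(size=n)
# Simulate data where stress hurts less when support is high (negative interaction).
depression = 0.6 * stress - 0.3 * support - 0.4 * stress * support + rng.normal(size=n)
df = pd.DataFrame({"stress": stress, "support": support, "depression": depression})

# 'stress * support' expands to both main effects plus their interaction term.
model = smf.ols("depression ~ stress * support", df).fit()
print(model.params["stress:support"], model.pvalues["stress:support"])
```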

10
Q

Mediator Variables

A

Capture the process, mechanism, or means through which a variable produces a particular outcome. A mediator accounts for some (partial mediation) or all (full mediation) of the apparent relationship between two variables. Direct effect: the part of the IV-DV association that is not transmitted through the mediator.
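A minimal sketch of partial mediation using the classic regression decomposition (total effect = direct effect + indirect effect), with simulated data and hypothetical variable names (stress, rumination, depression).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
stress = rng.normal(size=n)
rumination = 0.5 * stress + rng.normal(size=n)                       # IV -> mediator (path a)
depression = 0.4 * rumination + 0.2 * stress + rng.normal(size=n)    # mediator + direct effect -> DV
df = pd.DataFrame({"stress": stress, "rumination": rumination, "depression": depression})

total = smf.ols("depression ~ stress", df).fit().params["stress"]    # total effect (c)
a = smf.ols("rumination ~ stress", df).fit().params["stress"]        # path a
model_b = smf.ols("depression ~ stress + rumination", df).fit()
b = model_b.params["rumination"]                                     # path b
direct = model_b.params["stress"]                                    # direct effect (c')
indirect = a * b                                                     # mediated (indirect) effect

print(f"total={total:.2f}  direct={direct:.2f}  indirect={indirect:.2f}")
# Partial mediation: the direct effect is smaller than the total effect but not zero.
```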

11
Q

Unstructured interviews

A

Clinician asks questions and arrives at a diagnosis. Common and easy. Challenges: Less comprehensive. Biases. Confirmatory bias: Coming in with a preconceived notion of what's going on, which might push me to ask questions in a certain direction. Availability heuristic: Basing decisions on examples that come to mind easily. Combining information in idiosyncratic ways.

12
Q

Semi-Structured and Structured Interview

A

Interviewer has a set of questions that are presented to the respondent.
Semi-structured: Interviewer has a lot of latitude in asking the questions. Clinical judgment is involved in determining when a symptom is present.
Structured: Questions are fixed and the interviewer has very little flexibility. Can be administered by computer.

13
Q

Unstructured vs Semi-Structured and Structured Interview

A

Data suggest that structured and semi-structured interviews are more reliable and give better data. They also do a better job of arriving at a numerical score for how severe or present a disorder is.

14
Q

Disadvantage of Structured & Semi-Structured Interviews

A

Structured and semi-structured interviews are the gold-standard instruments in psychopathology research. Weaknesses might come in terms of their feasibility: they are often longer and might involve a certain amount of required training.

15
Q

K-SADS

A

Has an initial self-report screener, and each question asks about a different domain of psychopathology. Based on that screening info, it might tell me to follow up in certain areas. Good coverage across many sorts of disorders. Can ‘skip out’ if participants aren’t endorsing symptoms.

16
Q

Rating Scales

A

People knowledgeable about the child answer questions about behaviours and feelings. Often used to measure psychopathology continuously. Can be used to make a categorical decision. Shorter than structured and semi-structured interviews, and no interviewer is required. The assumption has been that they are less good than interviews. Trade-off between the higher validity/reliability of interviews and the feasibility of checklists.

17
Q

Observation

A

Actually trying to go into a naturalistic setting as a clinician and see the behaviour of interest in person. Naturalistic observation: Occurs in the child's natural environment (classroom, home). Structured observation: Laboratory- or clinic-based.

18
Q

Challenges associated with observational approaches

A

Feasibility. External validity: the extent to which findings will generalize. May be difficult to see behaviours of interest: low base rate (e.g. physical aggression), covert (e.g. relational aggression); researchers have developed very creative solutions to this.

19
Q

A ‘typical’ thorough ADHD assessment

A

IQ testing + academic achievement testing (reading, writing, math) + ADHD rating scales from teachers, parents, and self-report (if old enough) + semi-structured clinical interview (e.g., K-SADS) with parent and child (if old enough) = determination of whether or not the child meets criteria for ADHD.

20
Q

A ‘typical’ thorough ADHD assessment: Why do all this just to diagnose ADHD?

A

Rule out learning disabilities and intellectual developmental disorder as the root cause of inattention and hyperactivity in school.

21
Q

Use of Informants in Assessment

A

Assessments of kids often use multiple informants' reports (self-report, parents, teachers). They all have different windows into kids' experience, and in different contexts. Rating scales and interviews rely on someone's report of symptoms.

22
Q

Disagreement among informants: Different perspectives

A

Rater-specific factors that lead to systematic differences in reporting: there is some evidence for bias in the reports of some informants.
Legitimate differences in the meaning of behaviours across settings: Parents and teachers interact with children in different contexts, and these contexts may change the interpretation of behaviour.
Informant discrepancies may tell us something real about children's adaptation in various settings.

23
Q

Disagreement among informants: Situation specificity of children's behaviour

A

Children’s behaviour varies markedly across different situations and settings. Different demands. Inter-rater differences may be capturing legitimate differences in children’s behaviour across settings. Parents and teachers may be seeing different behaviours.

24
Q

How are data from multiple informants used to make a diagnosis?

A

“Or” Rule: I give a rating scale to a parent and a teacher and ask whether different symptoms are present; I count a symptom as present if the parent OR the teacher says it is.
“And” Rule: A symptom is present only if all informants agree.
These rules can potentially result in different diagnostic decisions (see the sketch below).
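A minimal sketch of how the two rules can diverge for the same child, using made-up 0/1 symptom ratings and a hypothetical symptom-count cut-off.

```python
# Hypothetical present/absent (1/0) ratings for six symptoms.
parent = [1, 0, 1, 1, 0, 0]   # symptom present per parent report
teacher = [0, 0, 1, 1, 1, 0]  # symptom present per teacher report

or_count = sum(p or t for p, t in zip(parent, teacher))    # present if either informant endorses it
and_count = sum(p and t for p, t in zip(parent, teacher))  # present only if both informants agree

threshold = 4  # hypothetical diagnostic cut-off for number of symptoms
print("OR rule meets criteria:", or_count >= threshold)
print("AND rule meets criteria:", and_count >= threshold)
# The same child can meet criteria under one rule and not the other.
```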

25
Q

Takeaway: ‘And’ vs ‘Or’ rule

A

Clinicians must use their judgment to determine whether to apply ‘and’-rule vs ‘or’-rule thinking to the child. The specific disorder being screened is important: for some disorders, teachers or parents may be better or worse reporters. How reliable is each reporter? How much insight does the child have into their own experience?

26
Q

Combining Informant Reports

A

Neither the ‘And’ nor the ‘Or’ rule captures the fact that differences between informants are valuable. A diagnosis identified by a teacher may be different from one identified by a parent. Think about maintaining informants' ratings separately.

27
Q

Properties of Good Measures

A

Reliability: Consistency
Validity: Are we measuring what we think we are measuring?
Reliability is a necessary condition for validity. Less reliable = more error.

28
Q

Reliability: Test-retest vs inter-rater

A

Internal consistency: do the items within a measure give consistent answers? Test-retest reliability: Do we get similar answers on different measurement occasions?
Inter-rater reliability: Agreement between two people judging whether something is present or occurring. Can two clinicians agree that a child has ADHD?
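A minimal sketch of both forms of reliability on made-up data: a Pearson correlation across two occasions for test-retest, and Cohen's kappa (via scikit-learn) for inter-rater agreement.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(3)
time1 = rng.normal(size=50)
time2 = time1 + rng.normal(scale=0.3, size=50)       # similar scores on a second occasion
test_retest_r, _ = stats.pearsonr(time1, time2)      # test-retest reliability

clinician_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]         # 1 = "child has ADHD" per clinician A
clinician_b = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]         # same judgments from clinician B
kappa = cohen_kappa_score(clinician_a, clinician_b)  # chance-corrected inter-rater agreement

print(test_retest_r, kappa)
```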

29
Q

Validity (Convergent, Discriminant, Face)

A

Convergent validity: Are scores on the measure related to other measures or indicators of the same construct?
Discriminant validity: Are scores on the measure distinct from scores on measures of other constructs?
Face validity: Does this appear to measure what it is supposed to measure?

30
Q

Measurement Invariance

A

Generally, think of the ‘fairness’ of a measure. People in different groups with similar abilities should score similarly across items on a math test. We want to use scales that are reliable and valid, but also scales that do not function differently across the groups of interest; if they do, we can't really compare those groups because the comparison isn't fair.

31
Q

Correlational Research Designs: Cross-Sectional Design

A

Compare cohorts of different ages to one another at a given time. Relatively cheap and practical. Can’t learn about how individual people change with age. Age effects are confounded with cohort effects.

32
Q

Correlational Research Design: Longitudinal Design

A

Can make within-subject comparisons. No cohort effects. But subjects could drop out, there are effects of repeated testing, it requires foresight, it is time-consuming, and age effects are confounded with time-of-measurement effects.

33
Q

Correlational Research Design: Sequential Design

A

Multiple cohorts followed over a longer period of time. Can look at change within people across time in each of the cohorts, and at differences across cohorts at different ages. Helps disentangle age effects from cohort effects and time-of-measurement effects. But it's very time-consuming, complex, and expensive.

34
Q

Evidence-Based Treatments: How to Classify

A

What used to be the criteria: Two different evaluations of the same intervention, both showing that the intervention looks better than a control or different intervention. People are now shifting towards looking at systematic reviews of the literature followed by a committee reviewing the evidence.

35
Q

Single-Case experimental designs

A

Examine the effect of a treatment on a single child's behaviour: repeated measures of behaviour, replication of treatment effects. Good internal validity and temporal ordering. But poor external validity (might not generalize to other kids in the population), findings can be hard to interpret, and is it ethical for us to remove an intervention that's working for someone?

36
Q

Group-Based Designs: Randomized Controlled/Clinical Trial (RCT)

A

Randomly assigning participants to treatment and control groups. Internal validity: Is it my intervention that is causing the change in outcomes? Powerful test of intervention efficacy. Construct validity: What about my intervention is causing the change in outcome? Powerful test of theories. If designed carefully, can let the researcher establish cause.

37
Q

RCTs: Disadvantages

A

External validity. Samples. Dropout (RCTs look at averages; even within the treatment group some people will not have improved). Attrition bias: The selective dropout of participants who systematically differ from those who remain in the study. The vast majority of trials are focused on efficacy; few focus on effectiveness and even fewer on efficiency.

38
Q

Nosology in Developmental Psychopathology

A

Nosology: Classification of disease. Organization of behaviour and emotional dysfunction into meaningful groupings.

39
Q

Nosology: Categorical vs Dimensional Classification

A

Categorical: Someone who has that disorder is fundamentally different than someone who does not (qualitative difference).
Dimensional: Present in everyone to varying degrees (on a spectrum).

40
Q

Nosology: Categorical Approach

A

The DSM is categorical in the sense that you either have a disorder or you don’t. But once you cross the threshold into having a major depressive disorder, you can be classified as either mild or severe depending on how many symptoms you have.

41
Q

Advantages and Disadvantages of the DSM-5

A

Advantages: Synthesis of information. Aids communication.
Disadvantages: Children (and adults) often do not fit into categories. Current categories are proving inadequate for genetic and neuroscience research.

42
Q

Nosology: Dimensional Systems

A

Research Domain Criteria (RDoC): Rather than using diagnostic categories, move towards assessing key dimensions (Negative emotionality, temper loss).
Dimensional measurement.

43
Q

Advantages and Disadvantages of Dimensional Systems

A

Advantages: Allows us to retain valuable info, provides a measure of severity. Disadvantages: Which dimensions? Becomes very complicated very quickly. Is it too soon for RDoC?