Research & Data Collection Flashcards
Subjective Data
Data perceived through the client's descriptions of feelings, experiences, and perceptions
Objective Data
“Facts” related to the client's situation
Data that can be observed
Examples: disorientation, failing academics, legal issues, etc.
Research Design
Ensures the evidence or data collected enables the research question to be answered
- Identify & justify the research problem clearly
- Review previously published literature associated with the problem area
- Clearly and explicitly specify the hypothesis (i.e., the research question) central to the problem
- Effectively describe the data that will be necessary for an adequate test of the hypothesis and EXPLAIN HOW DATA WILL BE OBTAINED
- Describe the METHODS OF ANALYSIS that will be applied to the data in determining whether the hypothesis is true or false
Experimental Research
Randomized experiments; randomization of subjects or groups
Quasi-Experimental Research
Use of intervention and comparison groups
Assignment to groups is nonrandom
Pre-Experimental Research
Study contains intervention group only and lacks comparison/control group
Weakest of the experimental designs
Single-Subject Research
Aims to determine whether an intervention has the intended impact
Ideal for studying behavioral change exhibited as result of treatment
Can show causal effect between intervention and outcome
Examples:
Pre- & post- test
Single-case Study (AB): comparison of
(A) behavior before treatment [baseline]
(B) behavior after the start of treatment [intervention]
Reversal (ABA); Withdrawal (ABAB); Multiple Baseline (staggered AB phases across behaviors, settings, or subjects)
Flexible, simple, low cost
Low in EXTERNAL VALIDITY due to the small number of participants, limiting the ability to GENERALIZE findings to a wider audience
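A minimal sketch of an AB single-case comparison, using entirely hypothetical per-session behavior counts: the phase means for baseline (A) and intervention (B) are compared to look for a treatment effect.

```python
# Hypothetical AB single-case data: frequency of a target behavior per session.
baseline = [8, 9, 7, 8, 10]      # Phase A: before treatment (baseline)
intervention = [6, 5, 4, 3, 3]   # Phase B: after the start of treatment

mean_a = sum(baseline) / len(baseline)
mean_b = sum(intervention) / len(intervention)

# A drop in the phase mean suggests (but does not by itself prove) an effect.
print(f"Baseline mean: {mean_a:.1f}, Intervention mean: {mean_b:.1f}")
print(f"Change: {mean_b - mean_a:+.1f}")
```

Comparing phase means is only one simple way to summarize AB data; visual inspection of the session-by-session trend is the more common analysis in single-subject research.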
Inter-Rater or Inter-Observer Reliability
Assesses the degree to which different raters/observers give consistent estimates of the same phenomenon
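One standard statistic for inter-rater reliability with two raters is Cohen's kappa, which corrects raw agreement for chance. A self-contained sketch with made-up ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal proportions.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical yes/no codings of the same 8 cases by two observers.
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no"]
print(cohens_kappa(a, b))  # → 0.5
```

Here the raters agree on 6 of 8 cases (75%), but since 50% agreement is expected by chance, kappa is only 0.5.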
Test-Retest Reliability
Assesses the consistency of a measure from one time to another
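Test-retest reliability is typically quantified by correlating scores from the two administrations; a high Pearson r indicates a stable measure. A sketch with hypothetical scores:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two score lists of equal length."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores for 5 clients tested twice, two weeks apart.
time1 = [10, 12, 14, 18, 20]
time2 = [11, 13, 15, 17, 21]
print(round(pearson_r(time1, time2), 3))
```

With these illustrative numbers the correlation is near 1, i.e., clients keep roughly the same rank order across administrations.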
Parallel Forms Reliability
Assesses the consistency of the results of two tests constructed in the same way from the same content domain
Internal Consistency Reliability
Assesses the consistency of results across items within a test
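A common index of internal consistency is Cronbach's alpha, computed from the item variances and the variance of the total score. A sketch with hypothetical item responses:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of item-score columns (one per item)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def variance(xs):       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(variance(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

# Hypothetical 3-item scale answered by 4 respondents (scores per item).
items = [
    [3, 4, 5, 2],
    [3, 5, 5, 1],
    [2, 4, 4, 2],
]
print(round(cronbach_alpha(items), 2))  # → 0.94
```

Alpha near 1 means the items move together across respondents; values above roughly 0.7 are conventionally treated as acceptable.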
Face Validity
Examines whether, on its face, the assessment appears to measure the intended construct
Content Validity
Examines whether all of the relevant content domains are covered
Criterion-Related Validity
Examines whether constructs perform as anticipated in relation to other theoretical constructs
Includes:
Predictive Validity (predicts expected future outcomes), Concurrent Validity (distinguishes between groups it should distinguish), Convergent Validity (correlates with similar constructs), Discriminant Validity (diverges from dissimilar constructs)