Exam #2 Flashcards

1
Q

Standardization

A

2

2
Q

Self Deception

A

A form of socially desirable responding: an unconscious distortion. The person distorts their responses without being aware of it.

3
Q

Impression management

A

A form of socially desirable responding: a conscious distortion. The person is aware that they are being deceptive.

4
Q

What are the 5 sources of error in an interview?

A

1) Reliance on honesty: a successful interview depends on the interviewee providing honest information, and clients are not always immediately forthcoming about the various facets of their lives.
2) Lack of insight: clients do not always have insight into themselves and may paint an inaccurate picture; this is why it is sometimes useful to interview a significant other.
3) Interviewer error — bias: we all have biases, and it is important to come to an understanding of them.
4) Interviewer error — judgement: do not rush to judgement based on a single piece of information (e.g., a client hearing voices does not necessarily mean schizophrenia).
5) Interviewer error — interpretation and inference: in report writing it is best to stick to behavioural data ("You believe X because the client said or did X"). Speculating about the meaning of behavioural data is dangerous, both to the client and to you, so be conservative in what you report about a client.

5
Q

Social Facilitation

A

People affect each other's moods and behaviours, and tend to act like the models around them. The therapist must therefore model relaxed confidence in order to create conditions of openness and warmth; the client will be influenced by the therapist's behaviours and moods.

6
Q

False reassurance

A

Attempts to comfort or support the client when the reassurance is not genuine or realistic. E.g., telling someone who has just been diagnosed with HIV, "Don't worry, everything is going to be all right," is false reassurance.

7
Q

What are the 5 basic steps in test construction

A

1) Test Conceptualization

2) Item generation
3) Item Tryout
4) Item Analysis
5) Test Revision

8
Q

1) Test Conceptualization (4 parts)

Test construction

A

1) Specify the objective of the test

What is the goal of testing?
E.g., create a measure of personality

The purpose of the test ought to be guided by a theory that links the test and the construct

2) Define the attribute
- What is the construct that you are measuring?

Operational definition- provide an objective and measurable definition of the construct

3) Identify the population
- Which is the group on which you will develop the test?

4) Review of the literature
- What is the previous research on the construct?

9
Q

Step 2) Item generation

Test construction

A

Generate a pool of items to measure the trait by systematically identifying and specifying the items in the content domain

-Items should relate to the central theory and be expressed in clear and precise language

10
Q

Rules for Item Writing

test construction

A

1) Define what you want to measure
2) Generate item pool
3) Use short items
4) Use appropriate reading level
5) Avoid double-barreled items
6) Reverse code items

11
Q

Response Biases

A

1) Social desirability
- Self-deception: an unconscious distortion. The person distorts results but is not aware of it.
- Impression management: a conscious distortion. The person is aware they are being deceptive.

2) Acquiescence: yea-saying — agreeing with every item.

3) Random responding

4) Extreme responding

5) Negative impression

12
Q

Step 3) Item Tryout

Test construction

A

Develop a method of data collection (e.g., a questionnaire).

-Administer the test items to a sample representative of the population for which the test is being developed

13
Q

Step 4) Item analyses

A

Quantitative Evaluation
- The purpose is to obtain more information on each item in order to determine the retention, deletion, or revision of items.

14
Q

Step 5) Test Revision

A

Assessing strengths and weaknesses of items.
Modifications on the basis of the analyses.

What items should be retained or revised?

Review purpose of the test to determine any modifications.

Continue the cycle of tryout, analyses, and revisions

15
Q

Reliable

A

Consistent measurement

16
Q

Valid

A

Measuring what it is supposed to measure

17
Q

A good test must be what 2 things

A

Reliable and Valid

18
Q

4 Forms of Reliability

A

1) Test-retest: a person takes the test, then takes the exact same test again after a period of time.

2) Alternative forms: two different versions of a test; the tests have different items but tap the same construct.

3) Internal consistency: the degree to which items hang together. It comes in different forms: Cronbach's alpha (very popular in psychology), KR-20, and split-half. A test might lack good test-retest reliability yet still be internally consistent. Internal consistency is very important for validity.

4) Inter-rater: agreement across judges. E.g., three psychologists rating a client — to what degree do they reach the same diagnosis? Categorical agreement is often reported with the kappa statistic.

Test-retest, alternative forms, and internal consistency are reported with the correlation coefficient; inter-rater reliability is reported with the intraclass correlation coefficient (ICC).
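The inter-rater agreement idea can be sketched in a few lines. A minimal Python sketch of Cohen's kappa for two raters — the rater lists and diagnosis labels below are invented for illustration:

```python
def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical ratings."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Proportion of cases where the raters actually agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's base rates
    expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Two hypothetical clinicians diagnosing the same six clients
a = ["dep", "dep", "anx", "dep", "anx", "dep"]
b = ["dep", "anx", "anx", "dep", "anx", "dep"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

Kappa of 1 means perfect agreement; 0 means no better than chance.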

19
Q

5 Forms of Validity

A

1) Face validity:
2) Content
3) Criterion
4) Construct
5) Factorial

1) Face validity – a non-statistical, non-expert approach to assessing validity: a surface judgment about a test. If you were given a social psychology exam instead of the psychometrics exam you were expecting, you would give it low face validity.
2) Content validity – a non-statistical approach established by experts, who determine whether the test measures the domains it's supposed to measure. How could you establish the content validity of your last test? Ask another psychometrics professor to look at it.
3) Criterion validity – the bivariate association between a behaviour (the criterion, placed on the y-axis) and the test (the predictor, placed on the x-axis): correlating your test with actual behaviour. It comes in two different forms.
4) Construct validity – correlating your test with a well-established "gold standard" test.
5) Factorial validity – testing the validity of a theoretical structure, e.g., a three-factor structure (psychoticism, neuroticism, and extraversion): administer the test to many people and see whether the outcomes do in fact reduce to the three theoretical factors.

21
Q

Relationship between Reliability and error

A

There is an inverse relationship between reliability and error: the more a score reflects random error, the less reliable it is.

In classical test theory terms, reliability = true-score variance ÷ observed-score variance, i.e. 1 − (error variance ÷ observed-score variance)

22
Q

Classical Test Theory

A

Observed Score = True Ability + Random Error
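As a quick illustration of the model (simulated numbers, not course data): if each observed score is a fixed true score plus random error, the errors cancel out over many measurements and the average observed score converges on the true score:

```python
import random

random.seed(0)
true_score = 50.0
# Observed = True + random error (classical test theory)
observed = [true_score + random.gauss(0, 5) for _ in range(10_000)]
mean_observed = sum(observed) / len(observed)
print(round(mean_observed, 1))  # very close to the true score of 50
```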

23
Q

Classical Test Theory Assumptions

A

1) errors are random
2) the true score will not change with repeated measurement
3) the distribution of errors will be the same for all people

24
Q

Test Re-Test Reliability

A

Administering the same test to the same set of examinees on two separate occasions

Coefficient of stability

Error (or lowered correlations) could be due to the passage of time.

Traits (e.g., neuroticism, agreeableness) are stable, so they should have high test-retest correlations.

States fluctuate, so they are expected to have lower test-retest correlations.

25
Q

3 measures of internal consistency

A

Cronbach’s Alpha (most commonly reported)
KR-20
Split half

26
Q

Split-Half

A
  • Type of internal consistency
  • Involves correlating two halves of a test
  • Homogeneous content is best
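The split-half idea can be sketched in Python: correlate odd-item and even-item half-scores, then step the half-test correlation up with the Spearman-Brown correction. The 4-item, 5-examinee response matrix below is invented for illustration:

```python
def pearson(xs, ys):
    """Pearson correlation between two score lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half(items):
    """items: one list of scores per test item (columns of the data matrix)."""
    n = len(items[0])
    odd = [sum(col[i] for col in items[0::2]) for i in range(n)]
    even = [sum(col[i] for col in items[1::2]) for i in range(n)]
    r = pearson(odd, even)
    return 2 * r / (1 + r)  # Spearman-Brown: full-length reliability

# Hypothetical 4-item test, 5 examinees, scored 0/1
items = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [1, 1, 1, 1, 0], [1, 0, 1, 0, 0]]
print(round(split_half(items), 2))  # 0.89
```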
27
Q

KR-20

A

The degree to which test items correlate with each other

Used with correct and incorrect responses
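A minimal sketch of the KR-20 computation for dichotomous (correct/incorrect) items; the response matrix is invented for illustration:

```python
def kr20(items):
    """items: one list per item of 0/1 (incorrect/correct) responses."""
    k = len(items)
    n = len(items[0])
    # Sum over items of p*q (proportion correct x proportion incorrect)
    pq = sum((sum(col) / n) * (1 - sum(col) / n) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # total-score variance
    return (k / (k - 1)) * (1 - pq / var)

# Hypothetical 4-item quiz taken by 5 examinees
items = [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [1, 1, 1, 1, 0], [1, 0, 1, 0, 0]]
print(round(kr20(items), 2))  # 0.79
```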

28
Q

Cronbach’s Alpha

A

Considers average correlation among items

  • Alpha (α, which looks sort of like an “a”) values range from 0 - 1.00
  • “Good” alpha values are .6 - .7 and above
  • Common use: personality inventories
30
Q

Estimations of reliability

A

Correlation coefficients are the foundation of reliability testing.
-They range from -1 to +1

A reliability coefficient is expressed from 0.0 to 1.0

32
Q

Reliability: Alternative Forms

A

Two versions of the same test with similar content

Forms must be equal

Difficult to accomplish and expensive for test development

AKA Coefficient of equivalence

33
Q

Formula for Cronbach’s Alpha

A

α = (k / (k − 1)) × (1 − Σσᵢ² / σₓ²), where k is the number of items, σᵢ² is the variance of item i, and σₓ² is the variance of the total scores.
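The standard coefficient-alpha formula can be checked with a short sketch; the 3-item, 5-person Likert response matrix below is invented for illustration:

```python
def cronbach_alpha(items):
    """items: one list of scores per item; returns coefficient alpha."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_variances = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    # alpha = k/(k-1) * (1 - sum of item variances / total-score variance)
    return (k / (k - 1)) * (1 - item_variances / var(totals))

# Hypothetical 3-item scale answered by 5 people
items = [[2, 4, 4, 5, 1], [3, 4, 5, 5, 2], [2, 5, 4, 4, 1]]
print(round(cronbach_alpha(items), 2))  # 0.96
```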

34
Q

What is SEM and how is it calculated?

A

Standard Error of Measurement: the typical amount of random error around a person's true score. SEM = SD × √(1 − r), where SD is the standard deviation of the test scores and r is the test's reliability.
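A one-step worked example of SEM = SD × √(1 − r); the SD and reliability values are invented for illustration:

```python
# SEM = SD * sqrt(1 - reliability)
sd = 15.0           # standard deviation of test scores (hypothetical)
reliability = 0.91  # e.g., a test-retest or internal-consistency estimate
sem = sd * (1 - reliability) ** 0.5
print(round(sem, 1))  # 4.5
```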

35
Q

Face Validity

A

Does the test superficially appear to measure its domain?

  • non statistical
  • established by non experts, just from looking and judging
37
Q

Content related validity

A

systematic examination of the test content to determine if it covers a representative sample of the targeted domain

39
Q

Limitations of content related validity

A

Biases: consider the biases of the panelists.

Level of expertise: are the panelists really experts?

40
Q

Content related validity

A

Systematic examination of the test content to determine if it covers a representative sample of the targeted domain

Exhaustive examination of the literature
Consult with experts
Generate an adequate sampling of the item universe

41
Q

Discriminant Construct-related Validation

A

Different constructs should not correlate

42
Q

Convergent construct validity

A

Constructs which are similar should correlate

43
Q

Construct Related Validity techniques

A

Convergent validation

Discriminant validation

44
Q

Multitrait Multimethod Matrix

A

Correlation of two or more traits by two or more methods

MM: Monotrait Monomethod. Same Trait. Same Method. High correlations = high reliability.

MH: Monotrait Heteromethod. Same trait, different method of report. High correlations: High validity, Low correlations: low validity

HM: Heterotrait Monomethod: Different trait, Same method. High correlations = bad. Low Correlations = Good.

HH: Heterotrait Heteromethod. Different trait, different method. High correlations= bad construct validity, Low Correlations = good construct validity

45
Q

Limits to confidentiality

A

1) If the person is homicidal or suicidal; 2) if child abuse is reported; and 3) if the court orders it and the records are subpoenaed.

46
Q

3 Key Interviewing/Counselling Skills

A

1) Being empathetic
2) Establishing rapport
3) Developing a working alliance

47
Q

Advantages and Disadvantages of Multitrait Multimethod Matrix (MTMM)?

A

Advantages:
1 ) Allows simultaneous examination of convergent and discriminant validity

2 ) Stresses Unbiased Measurement

Disadvantages: Too complex

48
Q

Factor analysis

A

A multivariate data-reduction technique.

Analogy: sorting a deck of cards — the same cards can be grouped by suit, number, colour, etc.

49
Q

Max Validity Coefficient Equation

A

Your maximum validity coefficient is the square root of the product of the two measures' reliabilities: rxy(max) = √(rxx × ryy), where rxx and ryy are the internal consistencies of the first and second measures.
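A one-step worked example of rxy(max) = √(rxx × ryy); the two reliability values are invented for illustration:

```python
# Maximum attainable validity given the reliabilities of the two measures
r_xx = 0.81  # internal consistency of the test (hypothetical)
r_yy = 0.64  # internal consistency of the criterion measure (hypothetical)
max_validity = (r_xx * r_yy) ** 0.5
print(round(max_validity, 2))  # 0.72
```

Unreliability in either measure caps how well the two can correlate.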


51
Q

In factor analysis what are latent variables and manifest variables?

A

1) Latent variables: that which is hidden — the underlying factors output by the factor analysis.
2) Manifest variables: that which is observed — all the items in the inventory.

52
Q

2 Limitations of Criterion-related Validity

A

1) Criterion contamination: the predictor overlaps too much with the criterion, leading to an inflated correlation.

2) Nature of the sample:
Homogeneous: bad
Heterogeneous: good
You want a range of responses (heterogeneity), because correlation coefficients depend on the variability of scores.

53
Q

Factor Analysis Step 1

A

Administer the scales to a very large group of people and then create a square correlational matrix

Rule of thumb 5:1
Five people for every one item
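The square correlation matrix that feeds a factor analysis can be sketched directly; the 3-item response matrix below is invented for illustration:

```python
def correlation_matrix(items):
    """Square matrix of Pearson correlations among test items."""
    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)
    return [[pearson(a, b) for b in items] for a in items]

# Hypothetical 3 items answered by 5 people (note: far below the 5:1 rule,
# which would call for at least 15 respondents for 3 items)
items = [[2, 4, 4, 5, 1], [3, 4, 5, 5, 2], [2, 5, 4, 4, 1]]
m = correlation_matrix(items)
print(round(m[0][0], 2), round(m[0][1], 2))  # diagonal is 1.0
```

Factor analysis then looks for a small number of latent factors that reproduce this matrix.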