Task 3 (Chapters 5, 8 and 9) Flashcards
How do you choose variables?
Research tradition
Choosing variables based on theory
Availability of new techniques/equipment
What is the reliability of a measure?
Reliability of a measure concerns its ability to produce similar results when repeated measurements are made under identical conditions.
What is the relation between variability and reliability?
The more variability, the less reliability
How do you measure the reliability of a physical measure?
Height and weight are assessed by repeatedly measuring a fixed quantity of the variable
Precision represents the range of variation to be expected on repeated measurement (precise measurements show little variation)
How do you measure the reliability of population estimates?
Measures of opinion, attitude, and similar psychological variables
It is problematic to estimate the average value of a variable in a given population based on a sample drawn from that population
The precision of the estimate is called the margin of error
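The margin-of-error idea can be made concrete with a quick calculation. A minimal Python sketch, assuming a small hypothetical sample and a 95% confidence level (the data and the z-value of 1.96 are illustrative, not from the chapters):

```python
import math
import statistics

# Hypothetical sample of attitude scores drawn from a larger population
sample = [4, 5, 3, 4, 5, 2, 4, 3, 5, 4]

mean = statistics.mean(sample)
sd = statistics.stdev(sample)  # sample standard deviation
n = len(sample)

# 95% margin of error for the population mean (normal approximation, z = 1.96)
margin_of_error = 1.96 * sd / math.sqrt(n)

print(f"estimate: {mean:.2f} +/- {margin_of_error:.2f}")
```

A larger sample shrinks the margin of error, which is why the precision of a population estimate improves with sample size.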
How do you measure the reliability of judgements or ratings by multiple observers?
Establish degree of agreement amongst observers by using statistical measure of interrater reliability.
—>how much agreement is there between raters?
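One widely used interrater statistic is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with two hypothetical raters (the ratings are invented for illustration; kappa itself is one of several possible interrater measures):

```python
from collections import Counter

# Each list holds one rater's category labels for the same 8 observations
rater_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no", "yes", "yes", "no"]

n = len(rater_a)
# Observed proportion of agreement
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Expected chance agreement, from each rater's marginal frequencies
counts_a, counts_b = Counter(rater_a), Counter(rater_b)
expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2

kappa = (observed - expected) / (1 - expected)
print(f"kappa = {kappa:.2f}")
```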
How do you measure the reliability of psychological tests or measures? (e.g., intelligence, anxiety)
The basic strategy is to administer the assessment twice to a large group of individuals, and then determine the correlation between the scores.
High correlation = greater reliability (a highly reliable measure has an r of .95 or higher)
What are the three ways to test the reliability of psychological tests or measures?
Test-retest reliability
Parallel forms reliability
Split-half reliability
How does test-retest reliability work? What are its limitations? What is it best for assessing?
- Administering the same test twice, separated by a long interval of time
- Participants could respond in the same way because they recall their initial answers
- It is best for assessing stable characteristics (such as intelligence)
How does parallel (alternate) forms reliability work? What are its limitations?
Same as test-retest, except that on the second testing the original form is replaced by a parallel form containing items equivalent to those of the original
Differences in test performance can be due to nonequivalence of the two forms
How does split half reliability work?
Two parallel forms of the test are intermingled in a single test and administered together in one testing
Responses from the two forms are then separated and scored individually
The quantity being measured has no time to change
What is accuracy?
A measure is accurate when it produces results that agree with a known standard
Individual values may not agree with the standard, so you average all the values; that average is what has to equal the standard
What is the difference between the average and the standard called?
Bias
What is precision?
The range of variation that is expected
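Bias and precision as defined above can be illustrated numerically. A minimal sketch with hypothetical scale readings checked against a known 10.0 kg standard:

```python
import statistics

# Repeated readings of a known 10.0 kg standard (hypothetical data)
standard = 10.0
readings = [10.2, 9.9, 10.1, 10.3, 10.0]

# Bias: difference between the average reading and the standard
bias = statistics.mean(readings) - standard
# Precision: range of variation across repeated measurements
precision = max(readings) - min(readings)

print(f"bias = {bias:.2f} kg, precision (range) = {precision:.2f} kg")
```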
What is the validity of a measure ?
The extent to which a measure measures what you intend to measure
What are the different types of validity?
Face validity
Content validity
Criterion related validity
Construct validity
What is face validity?
How well a measurement instrument appears to measure what it was designed to measure (judging by appearance)
What is content validity?
How adequately the content of a test samples the knowledge, skills or behaviors that the test is intended to measure
What is criterion related validity?
How adequately a test score can be used to infer an individual's value on some criterion measure
What two types of criterion-related validity are there?
Concurrent validity
Predictive validity
What is concurrent validity?
Comparing scores on your test with the value of a criterion measure collected at about the same time
What is predictive validity?
Comparing scores on your test with the value of a criterion measure observed at a later time
What is construct validity?
When a test is designed to measure a "construct": a variable that is not directly observable and that has been developed to explain behaviour on the basis of a theory (cognition, happiness, etc.)
What are the differences between criterion and construct validity?
Construct validity is more about abstractions, while criterion-related validity concerns just one variable.
A construct is theoretical and cannot be directly observed, while a criterion is more concrete and already established.
What is the sensitivity of a dependent measure?
How much your measure responds to your manipulation
What are range effects?
They occur when the values of a variable have an upper and a lower limit
E.g., if a bathroom scale measures only up to 100 kg and you weigh something of 200 kg, it will still show 100 kg
E.g., a psychology questionnaire that is too hard or too easy for the participants
What are the two distinct cases for range effects?
Floor effects
Ceiling effects
What are floor effects?
Variable reaches lowest possible value
What are ceiling effects?
Variable reaches highest possible value
What are behavioral measures?
Measures that record the actual behaviour of subjects
A good indicator of overt behaviour
Behavioral measures: what does frequency of responding measure?
Counting the number of occurrences of a behaviour over a specified period
Behavioral measures: what does latency measure?
Measuring the amount of time it takes for a subject to respond to a stimulus
What are physiological measures?
Measures that record the participant's bodily functions
They provide fairly accurate info about the state of arousal within the participant's body
Psychological states must be inferred from physical states
What are self report measures?
Measures on which participants report on variables themselves
You can't be sure the participants are telling the truth, as they have a tendency to present themselves in a socially desirable manner
Self-report measures: what is a rating scale and what is Likert scaling?
Rating scale: rating from 1 to 10
Likert scaling: indicating the degree to which one agrees or disagrees with a statement on a 5-point scale
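Scoring Likert items usually involves reverse-coding negatively worded statements before summing. A minimal sketch; the responses and which items are reverse-coded are assumptions for illustration:

```python
# One respondent's answers on a 5-point Likert scale
# (1 = strongly disagree, 5 = strongly agree); data are hypothetical
answers = [5, 2, 4, 1, 3]

# Indices of negatively worded items (assumed): flip with 6 - answer
reverse_coded = {1, 3}

scored = [6 - a if i in reverse_coded else a for i, a in enumerate(answers)]
total = sum(scored)
print(scored, total)
```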
Self-report measures: Q-sort methodology
Qualitative measuring technique; establishing evaluative categories and sorting items into those categories
What are implicit measures?
Measures of responses that are not under direct conscious control
E.g., the IAT (Implicit Association Test)
How should we understand implicit measures and the role they play in an experiment?
The participant is not an object but a human being, and the experiment is a relation between the participant, their attitudes, and the experimental context.
The participant assesses the experimenter and the laboratory and draws inferences about what the experiment is about.
Implicit measures: what are demand characteristics?
Cues provided by the researcher and the context that communicate to the participant the purpose of the study (or the responses expected of the participant)
Implicit measures: what are role attitude cues?
Cues that may signal to the participant that a change in their attitudes is needed to conform to their role as a research participant
Implicit measures: what is experimenter bias?
When the behaviour of the experimenter influences the behaviour of the participants in the experiment
Implicit measures: what are expectancy effects?
When an experimenter develops preconceived ideas about the capabilities of the participants
How can we prevent experimenter bias?
Single blind technique
Double blind
What is the single-blind technique?
Either the experimenter or the participant (but not both) does not know which experimental condition a subject has been assigned to
What is the double-blind technique?
A technique to reduce experimenter bias in which neither the experimenter nor the participant knows at the time of testing which treatment the participant is receiving
What does automating research mean?
Using technology to eliminate experimenter effects and increase the precision of measures
What is a pilot study?
A small-scale version of a study used to establish the procedures, materials, and parameters to be used in the full study
What is a manipulation check?
Tests whether or not your independent variables had the intended effects on your participants
What are some ways of quantifying behavior in observational studies?
Frequency method
Duration method
Intervals method; a helpful method for observing multiple behaviors at the same time
Recording single events or behaviour sequences
What are the different types of sampling?
Time sampling
Individual sampling
Event sampling
Recording
Name the different types of observations
Naturalistic observation
Ethnography
Sociometry
Case history
Archival research
Content analysis
What is naturalistic observation?
Observing subjects in their natural environments without making any attempt to control or manipulate variables
How do you make naturalistic observations?
You have to make unobtrusive observations so the subjects don't know they are being observed
What are the advantages and disadvantages of naturalistic observation?
Advantages: gives insight into how behaviour occurs in the real world; observations made are not tainted by laboratory settings (high external validity)
Disadvantages: only a description of the observed behaviour can be derived from this method; no investigation of the underlying causes of behaviour.
It is also time consuming and expensive
What is ethnography?
Becoming immersed in the behavioral or social system being studied.
What do we use ethnography for?
To study and describe the functioning of cultures through study of social interactions and expressions between people and groups
How do you perform ethnographical observations?
Conducting observations using participant observation (you act as a functioning member of the group) or non-participant observation (observing as a non-member)
Deciding whether to conduct observations overtly (group members know) or covertly/undercover (group members unaware)
What is sociometry?
Identifying and measuring interpersonal relationships within a group
Example of sociometry?
Have research participants evaluate each other along some dimension
What are case history observations?
Descriptive technique in which you observe and report on a single case (or a few cases)
Limitations of case history observations?
Purely descriptive
What are archival research observations?
Non experimental strategy that involves studying existing records
It requires having specific research questions in mind
Limitations of archival research?
Purely descriptive; causal relationships cannot be established
What are content analysis observations?
Used to analyze a written or spoken record (or other meaningful matter) for the occurrence of specific categories of events, items, or behaviour.
What should a content analysis be like?
Should be objective
Should be systematic; including articles not in favor of your position as well
Should have generality; findings should fit within a theoretical, empirical, or applied context.
Limitations of content analysis
Purely descriptive; concerns center on the reliability of the findings
What is survey research?
Research in which you directly question your participants about their behaviour and underlying attitudes, beliefs, and intentions.
What kind of study is survey research?
It is a correlational study
Limitations of survey research?
Usually does not permit you to draw causal inferences from your data
What are the steps to design a questionnaire?
- Clearly define the topic of your study
- Decide which demographics to collect
- Write the questionnaire items
- Decide on the order of questions in the questionnaire
What are demographics?
Characteristics of participants (age, sex, marital status, occupation, income, education)
How should demographics be used?
They're used as predictor variables during analysis of the data to determine whether participant characteristics correlate with or predict responses to other items in the survey (they should not be presented first in the questionnaire; the first question should be engaging).
What are the different types of writing questionnaire items?
- Open ended items
- Restricted items
- Partially open-ended items
- Rating scales
How do open ended items work?
Allow participant to respond in their own words
How do Restricted items (close-ended items) work?
Provide limited number of specific response alternatives
How do partially open ended items work?
They resemble restricted items but provide an additional "other" category, an opportunity to give an answer not listed among the specific alternatives
How do rating scales work?
They are a variation on restricted items using rating scale rather than response alternatives.
How should the order of questions in a questionnaire be? How is a questionnaire more effective?
Sensitive questions should be towards the end
Questionnaire is more effective if the organization is coherent and the questions follow a logical order and relate to each other
How can you administer a questionnaire?
- mail survey
- internet survey
- telephone survey
- group-administered survey
- face to face interviews
- mixed-mode survey
What are the two ways to assess the reliability of a questionnaire?
Repeated administration
Single administration
How do you assess the reliability of a questionnaire through repeated administration?
- test-retest reliability
- use parallel forms to avoid the recall problem with test-retest reliability
How do you assess the reliability of a questionnaire through single administration?
- Split-half reliability: splitting the questionnaire into equivalent halves and deriving a score from each half
- Applying the Kuder-Richardson formula
How do you apply the Kuder-Richardson formula?
It yields the average of all the split-half reliabilities that could be derived from the questionnaire.
The resulting number should lie somewhere between 0 and 1
The higher the number, the greater the reliability of the questionnaire
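The Kuder-Richardson formula described above (KR-20, the variant for items scored 0/1) can be sketched directly. The response data are hypothetical:

```python
import statistics

# Hypothetical dichotomous responses (1/0) for 5 people on a 5-item questionnaire
responses = [
    [1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 0, 1, 1, 1],
]

k = len(responses[0])                     # number of items
totals = [sum(row) for row in responses]  # each person's total score
var_total = statistics.pvariance(totals)  # variance of the total scores

# Sum of p*q over items: p = proportion answering the item correctly
pq_sum = 0.0
for i in range(k):
    p = sum(row[i] for row in responses) / len(responses)
    pq_sum += p * (1 - p)

kr20 = (k / (k - 1)) * (1 - pq_sum / var_total)
print(f"KR-20 = {kr20:.2f}")
```

The result lies between 0 and 1, with higher values indicating greater reliability, matching the interpretation above.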
How do we increase the reliability of a questionnaire?
- increasing the number of items on questionnaire
- standardize administration procedures (keep timing, lighting, ventilation, instructions to participants, and instructions to administrators constant)
- score questionnaire carefully
- items on questionnaire are clear, well written, and appropriate for sample