Research Methods Flashcards

1
Q

Aim [definition]:

A

A statement of what the researcher intends to find out in a research study.
For example: Investigating the effect of caffeine on memory

2
Q

Debriefing [definition]:

A

A post-research interview designed to inform participants of the true nature of the study and to restore them to the state they were in at the start of the study

3
Q

How is debriefing useful? [2]:

A
  • It is a means of dealing with ethical issues
  • It can be used to get feedback on the procedures of the study

4
Q

Independent variable [definition]:

A

The variable that is manipulated (deliberately changed) by the researcher in an experiment

5
Q

Dependent variable [definition]:

A

The variable that is measured by the researcher; any change in it should be caused by the independent variable

6
Q

Control variable [definition]:

A

A variable that is kept constant across all conditions so that it cannot affect the dependent variable

7
Q

Confounding Variable [definition]:

A

A variable, other than the IV, that varies systematically with the IV and so may offer an alternative explanation for the results

8
Q

Extraneous variables [3]:

A
  • Do NOT vary systematically with the IV
  • They do not act as an alternative IV but instead have an effect on the DV
  • They are nuisance variables
9
Q

Internal validity [definition]:

A

The degree to which an observed effect was due to the experimental manipulation rather than other factors such as confounding/extraneous variables

10
Q

External validity [definition]:

A

The degree to which a research finding can be generalised to other settings (ecological validity)

11
Q

Validity vs Reliability:

A
Reliability = consistency of a measure 
Validity = accuracy of a measure
12
Q

Confederate [2]:

A

An individual in a study who has been instructed by the researcher how to behave
- For example, the ‘learner’ in Milgram’s obedience study was a confederate

13
Q

Directional hypothesis [2]:

A
  • States the direction of the predicted difference between two conditions
  • Example: women will score higher than men on Hudson’s self-esteem scale
14
Q

Non-directional hypothesis [2]:

A
  • Simply predicts that there is a difference between the conditions of the IV
  • Example: there will be a difference between men’s and women’s scores on Hudson’s self-esteem scale
15
Q

Pilot study [definition]:

A
  • A small-scale trial run of a study to test any aspects of the design, to make improvements before the final study
16
Q

When do psychologists use a directional hypothesis?

A

When past research suggests that the findings will go in a particular direction

17
Q

When is a non-directional hypothesis used?

A

When there is no past research on the topic studied or past research is contradictory

18
Q

What are 3 types of experimental design?

A
  • Repeated measure design
  • Independent measure design
  • Matched pairs design
19
Q

Repeated measures design [3]:

A

ALL participants experience ALL levels of the IV
+ Participant variables are reduced since it’s the same people in every condition
+ Fewer people are needed as they take part in all conditions

20
Q

Limitations of repeated measure design [2]:

A
  • Order effects, e.g. getting tired or practised. These can be reduced by counterbalancing (see the sketch below)
  • Participants may guess the aim of the experiment and behave a certain way, e.g. purposely doing worse in the second condition. Can be avoided by using a cover story
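
A minimal sketch of counterbalancing (an ABBA-style split), assuming an invented list of participants and two made-up conditions; half complete condition A first and half complete condition B first, so order effects are spread evenly across conditions.

```python
# Minimal sketch of counterbalancing for a repeated measures design.
# The participant names and conditions are invented for illustration.
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]
conditions = ("A", "B")  # e.g. A = with caffeine, B = without caffeine

schedule = {}
for i, person in enumerate(participants):
    # Alternate the order so order effects (practice, fatigue) are
    # spread evenly across both conditions.
    order = conditions if i % 2 == 0 else conditions[::-1]
    schedule[person] = order

for person, order in schedule.items():
    print(person, "->", " then ".join(order))
```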
21
Q

Independent measure design [2]:

A

Participants are placed in separate groups and only experience one level of the IV each
+ Avoids order effects

22
Q

Limitations of independent measure design [2]:

A
  • Participant variables, e.g. different abilities or characteristics (can be reduced by random allocation to conditions)
  • Needs more participants than a repeated measures design
23
Q

Matched pairs design [3]:

A

Participants are matched by key characteristics or abilities, related to the study
+ Reduces participant variables
+ Reduces order effects

24
Q

Limitations of matched pairs design [3]:

A
  • If one participant drops out you lose 2 PPs’ data
  • Very time-consuming trying to find closely matched pairs
  • Impossible to match people exactly
25
Q

Lab experiments [2]:

A
  • Conducted in an environment controlled by the researcher
  • Researcher manipulates the IV

26
Q

lab experiment examples [2]:

A
  • Milgram’s experiment on obedience
  • Bandura’s Bobo doll experiment

27
Q

Strengths of lab experiments [2]:

A
  • Easier to replicate because a standardised procedure is used
  • They allow precise control of extraneous and independent variables
28
Q

Weakness of lab experiments:

A
  • The artificiality of the setting may produce unnatural behavior that does not reflect real life (low ecological validity)
29
Q

Field experiments [3]:

A
  • Conducted in the participant’s everyday setting
  • Researcher manipulates the IV, but in a real-life setting (so extraneous variables cannot be fully controlled)
  • Example: Hofling’s hospital study on obedience, which tested real nurses at work on a hospital ward
30
Q

Strengths of field studies [2]:

A
  • Behavior in a field experiment is more likely to reflect real life because of its natural setting
  • There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied (in covert experiments)
31
Q

Weaknesses of field experiments [2]:

A
  • There is less control over extraneous variables that might bias the results.
  • This makes it difficult for another researcher to replicate the study in exactly the same way.
32
Q

Natural experiments [3]:

A
  • Conducted in everyday life
  • The researcher does NOT manipulate the IV because it occurs naturally
  • Hodges and Tizard’s attachment research (1989) compared the development of children who had been adopted to children who spent their lives with their biological families
33
Q

Strengths of natural experiments [3]:

A
  • Behavior in a natural experiment is more likely to reflect real life because of its natural setting
  • There is less likelihood of demand characteristics affecting the results, as participants may not know they are being studied
  • Can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g. researching stress
34
Q

Weaknesses of natural experiments [2]:

A
  • They may be more expensive and time consuming than lab experiments
  • There is no control over extraneous variables that might bias the results, which makes the study difficult to replicate
35
Q

Quasi experiments [3]:

A

  • The IV is not manipulated; it is a pre-existing difference between people
  • Can be done in a controlled environment
  • Example: Sheridan and King (1972) tested obedience between the genders by making participants shock a puppy with increasing strength; male obedience was 54% and female obedience was 100%
36
Q

Strengths of quasi experiment:

A

Allows comparisons between different types of people

37
Q

Weaknesses of quasi experiments [2]:

A
  • Participants may be aware they are being studied, creating demand characteristics
  • The dependent variable may be an artificial task, reducing mundane realism
38
Q

Mundane realism [definition]:

A

The degree to which the procedures in an experiment are similar to events that occur in the real world

39
Q

Single blind design:

A

Participant is not aware of research aims and/or which condition of the IV they are in

40
Q

Double blind design [2]:

A
  • Both participant and researcher are unaware of condition of IV or aim
  • The person conducting the experiment is less likely to give away the aim of the experiment
41
Q

Experimental realism:

A

If the researcher makes an experimental task sufficiently engaging, the participant pays attention to the task and not to the fact that they are being observed

42
Q

Generalisation [definition]:

A

Applying the findings of a particular study to the population

43
Q

Opportunity sample [3]:

A

People who are the most convenient or available are recruited
+ Easiest method because you can just use the first suitable participants
- Biased sample because it draws on only a small part of the population

44
Q

Random sample [3]:

A

Uses random methods like picking names out of a hat
+ Unbiased/ all members of target population have an equal chance of getting chosen
- Time consuming (needs to have a list of all population members)

45
Q

Stratified sample [3]:

A

Strata (subgroups) within a population are identified, then participants are randomly selected from each stratum in proportion to its size (see the sketch below)
+ More representative of the population than other samples
- Very time consuming
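
A minimal sketch of stratified sampling, assuming an invented population dictionary keyed by stratum; each stratum contributes to the sample in proportion to its size.

```python
import random

# Hypothetical population split into strata (subgroups); names are invented.
population = {
    "under_18": [f"u{i}" for i in range(40)],
    "18_to_40": [f"m{i}" for i in range(100)],
    "over_40":  [f"o{i}" for i in range(60)],
}
sample_size = 20
total = sum(len(members) for members in population.values())

sample = []
for stratum, members in population.items():
    # Each stratum is represented in proportion to its share of the population.
    n = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n))

print(len(sample), "participants:", sample)
```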

46
Q

Systematic sample [3]:

A

A predetermined system is used to select participants, e.g. every nth person on a list (see the sketch below)
+ Unbiased as it uses an objective system
- Not truly random
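
A minimal sketch of systematic sampling, assuming an invented population list; every nth person is selected.

```python
# Minimal sketch of systematic sampling: every nth person from a list.
# The participant list is invented for illustration.
population = [f"person_{i}" for i in range(1, 101)]
sample_size = 10
interval = len(population) // sample_size  # here: take every 10th person

sample = population[::interval][:sample_size]
print(sample)
```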

47
Q

Volunteer sample [3]:

A

The study is advertised in a newspaper, on a noticeboard or on the internet, and people volunteer to take part
+ Gives access to variety of participants which can make the sample more representative
- Sample is biased because participants are more highly motivated to be helpful

48
Q

Random techniques [3]:

A
  • Random number table
  • Random number generator
  • Lottery method (pulling names out of a hat)
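
A minimal sketch of the random number generator / lottery method, assuming an invented target population; `random.sample` gives every member an equal chance of selection.

```python
import random

# Minimal sketch of random sampling: every member of the (invented)
# target population has an equal chance of being chosen.
target_population = [f"student_{i}" for i in range(1, 51)]

sample = random.sample(target_population, k=10)  # draw 10 names "out of a hat"
print(sample)
```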
49
Q

Ethical issues [6]:

A
  • Deception
  • Informed consent
  • Privacy
  • Confidentiality
  • Protection from harm
  • Right to withdraw
50
Q

Deception- Participant POV [3]:

A
  • It’s unethical.
  • The researcher should not deceive anyone without good cause.
  • Deception prevents informed consent
51
Q

Deception- researcher’s POV [2]:

A
  • Can be necessary otherwise participants may alter behaviour
  • Can be dealt with by debriefing participant when study is completed
52
Q

Informed consent- Participant POV [2]:

A
  • They should be told what they will be required to do in the study so that they know what they are agreeing to
  • It is a basic human right
53
Q

Informed consent- Researcher POV [2]:

A
  • Means revealing the true aims of the study
  • Presumptive consent can be obtained instead

54
Q

Right to withdraw- Participant POV [3]:

A
  • It is an important right
  • Allows the participant to leave if they feel uncomfortable
  • The right to withdraw may be compromised if payment was used as an incentive
55
Q

Right to withdraw- Researcher POV [3]:

A
  • Can lead to a biased sample if people leave
  • They lose money if the person was paid and withdrew
  • Researcher has to inform the participant of this right before the study
56
Q

Protection from harm- Participant POV [2]:

A
  • Nothing should happen to them during a study that causes harm
  • It is acceptable if the harm is no greater than what the subject would experience in ordinary life
57
Q

Protection from harm- Researcher POV [3]:

A
  • Some of the more important questions in psychology involve a degree of distress to participants
  • It is difficult to guarantee protection from harm
  • Harm is acceptable if the outcome is more beneficial than the harm
58
Q

Confidentiality- Participant POV [2]:

A
  • The Data Protection Act makes confidentiality a legal right
  • It is only acceptable for personal data to be recorded if the data is not made available in a form that identifies participants
59
Q

Confidentiality- Researcher’s POV [3]:

A
  • Can be difficult because the researcher wishes to publish the findings
  • A researcher can guarantee anonymity but it may still be obvious who the subjects were
  • Researchers should not record the names of participants
60
Q

Privacy- Participant POV:

A
  • People do not expect to be observed in certain situations
61
Q

Privacy- Researcher POV [2]:

A
  • It may be difficult to avoid invasion of privacy when studying participants in public
  • Do not study anyone without informed consent unless in a public place and displaying public behaviour
62
Q

BPS ethical guideline strengths and weaknesses [3]:

A

+ The guidelines are quite clear
- They can still be vague
- The guidelines absolve the individual of responsibility, because researchers can justify their research by claiming they followed the guidelines
63
Q

Controlled observation [definition]:

A

A form of investigation in which behaviour is observed under conditions where certain variables have been organised by the researcher

64
Q

Covert observations [definitions]:

A

Observing people without their knowledge.

Knowing that behaviour is being observed is likely to alter the participant’s behaviour

65
Q

Inter-observer reliability [definition]:

A

The extent to which there is agreement between 2 or more observers involved in observations of a behaviour

66
Q

Naturalistic observation [definition]:

A

An observation carried out in an everyday setting, in which the investigator does not interfere in any way but merely observes the behaviours in question

67
Q

Non-participant observation [definition]:

A

The observer is separate from the people being observed

68
Q

Overt observation [definition]:

A

Observational studies where participants are aware they are being observed

69
Q

Participant observation [definition]:

A

Observations made by someone who is also participating in the activity being observed, which may affect their objectivity

70
Q

Naturalistic observation evaluation [2]:

A

+ Gives a realistic picture of spontaneous behaviour (high ecological validity)
- There is little control over everything else that is happening (something unknown may be causing the behaviour being observed)

71
Q

Controlled observation evaluation [2]:

A

+ Observer can focus on particular aspects of behaviour

- Control comes at the cost of a more artificial environment

72
Q

Covert observation evaluation [2]:

A

+ Behaviour is more natural

- Participants cannot give consent

73
Q

Overt observation [-]:

A
  • Participants are aware they are being watched and may behave unnaturally
74
Q

Participant Observation evaluation [3]:

A

+ May provide insight into behaviour from the ‘inside’
- Likely to be overt and so has participant awareness issues
- Might be biased
75
Q

Non-participant observation evaluation [2]:

A

+ Observers are likely to be more objective because they are not part of the group being observed
- More likely to be covert, so there are ethical issues

76
Q

Event sampling [definition]:

A

An observational technique in which a count is kept of the number of times a certain behaviour occurs

77
Q

Time sampling [definition]:

A

An observational technique in which the observer records behaviours in a given timeframe
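
A minimal sketch contrasting event and time sampling, assuming an invented observation record; the behavioural categories and the interval are made up for illustration.

```python
from collections import Counter

# Minimal sketch of event sampling: tally how often each behavioural
# category occurs in an observation record. The record is invented.
observation_record = [
    "smiling", "talking", "smiling", "fidgeting",
    "talking", "smiling", "fidgeting", "smiling",
]
event_tally = Counter(observation_record)
print(event_tally)  # counts per behavioural category

# Time sampling, by contrast, records behaviour only at fixed intervals,
# e.g. keeping every 2nd entry of the record here (an invented interval):
time_sampled = observation_record[::2]
print(time_sampled)
```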

78
Q

Structured interview [definition]:

A

Any interview with predetermined questions

79
Q

Unstructured interview [definition]:

A

The interview starts with some general aims and possibly some questions, and lets the interviewee’s answers guide subsequent questions

80
Q

Questionnaire evaluation [3]:

A

+ Can reach large numbers of people easily (large sample)
+ Respondents may be more willing to give personal information in a questionnaire than an interview
- Can only be completed by literate people, so the sample is biased

81
Q

Structured interview evaluation [4]:

A

+ Can be easily repeated because the questions are standardised
+ Easier to analyse than an unstructured interview
- Low reliability: different interviewers behave differently
- Interviewer bias

82
Q

Unstructured interview evaluation [3]:

A

+ More detailed information than a structured interview
- Requires interviewers with more skill than a structured interview
- In-depth questions may lack objectivity compared to predetermined ones
83
Q

Correlation [definition]:

A

A relationship between two variables

84
Q

Correlations [3]:

A
  • Participant provides data for both variables
  • In a correlation design, there are no independent or dependent variables, but co-variables
  • We only use a correlation when testing the relationship between 2 variables
85
Q

Structured observation [definition]:

A

A researcher uses various systems to organise observation such as behavioural categories and sampling procedures

86
Q

What happens in unstructured observations?

A

The researcher records all relevant behaviour but has no system

87
Q

Features of structured observations [2]:

A
  • Behavioural categories
  • Time/event sampling

88
Q

Rules of behavioural categories [3]:

A
  • Categories should be objective
  • Cover all possible component behaviours
  • Categories should be mutually exclusive
89
Q

What are the self report techniques [3]:

A
  • Structured interview
  • Unstructured interview
  • Questionnaire
90
Q

Rules of writing a questionnaire [3]:

A
  • Questions must be clear
  • Biased questions can lead a participant to give a particular answer
  • Questions need to be written so that the answers are easy to analyse
91
Q

What to add in a questionnaire [2]:

A
  • Filler questions to distract participant from true aim
  • Easier questions first

92
Q

Meta analysis [definition]:

A

When a researcher looks at findings from a number of different studies and produces a statistic to represent the overall effect

93
Q

Review [definition]:

A

A consideration of a number of studies that have investigated the same topic in order to reach a general conclusion about a particular hypothesis

94
Q

Content analysis [definition]:

A

A type of observational study where behaviour is observed INDIRECTLY in written or verbal materials (interviews, questionnaires)

95
Q

Effect size [definition]:

A

A measure of the strength of the relationship between two variables

96
Q

Meta analysis strengths [2]:

A

+ Increases the validity of conclusions as they are based on a wider sample of participants
+ Studies on the same topic often contradict one another; meta-analysis helps us reach an overall conclusion statistically

97
Q

Limitations of meta analysis [2]:

A
  • Experimental designs in different studies may vary so research will never be truly comparable
  • Putting them all together to calculate the effect size may not be appropriate
98
Q

Why is the mean the most sensitive measure of central tendency? *

A

It takes account of the exact distance between all the values of data
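
A minimal sketch of this point, using invented scores: adding one extreme value shifts the mean but leaves the median and mode unchanged.

```python
import statistics

# Minimal sketch showing why the mean is the most sensitive measure:
# one extreme score shifts the mean but not the median or mode here.
scores = [3, 4, 5, 5, 6, 7]
scores_with_outlier = scores + [30]

for data in (scores, scores_with_outlier):
    print(
        data,
        "mean =", round(statistics.mean(data), 2),
        "median =", statistics.median(data),
        "mode =", statistics.mode(data),
    )
```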

99
Q

When is a scattergram used in psychology?

A

When displaying a correlation between two co-variables

100
Q

When is a line graph used in psychology?

A

With continuous data

101
Q

When is a histogram used in psychology?

A
  • When displaying the frequency of continuous data
  • Cannot be drawn for data that is in categories

102
Q

When is a bar chart used in psychology? [2]:

A
  • When data is not continuous
  • Used for categorical/nominal data

103
Q

When is a table used in psychology?

A

when displaying raw data

104
Q

Skewed distribution [definition]: *

A

There are a number of extreme values on one side of the distribution

105
Q

Positive skewed distribution =

A

Most scores cluster on the left, with the tail of extreme values extending to the right (positive) side

106
Q

Negative skewed distribution =

A

Most scores cluster on the right, with the tail of extreme values extending to the left (negative) side; see the sketch below
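
A minimal sketch using an invented positively skewed data set: most scores sit at the low end, so the mean is dragged towards the tail and ends up above the median and mode.

```python
import statistics

# Minimal sketch of a positively skewed distribution (invented scores):
# a few extreme high scores pull the tail (and the mean) to the right,
# so mean > median > mode.
positively_skewed = [2, 2, 2, 3, 3, 4, 5, 9, 15, 25]

print("mode   =", statistics.mode(positively_skewed))    # 2
print("median =", statistics.median(positively_skewed))  # 3.5
print("mean   =", statistics.mean(positively_skewed))    # 7.0
```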

107
Q

Quantitative data =

A

Data expressed in numbers (quantities)

108
Q

Qualitative data =

A

Data that cannot be expressed in numbers, e.g. words, thoughts and feelings

109
Q

Quantitative data evaluation [2]:

A

+ Easy to analyse using descriptive stats or stats tests

- Data may oversimplify reality

110
Q

Qualitative data evaluation [2]:

A

+ Provides richer and detailed information about people’s experiences
- Complexity makes it more difficult to analyse/summarise and draw conclusions from

111
Q

Primary data evaluation [2]:

A

+ researcher has control of the data and how it is collected
- Lengthy and expensive process

112
Q

Secondary data [definition]:

A

Information used in research that was collected by someone else

113
Q

Secondary data evaluation [2]:

A

+ It is simpler and cheaper to access someone else’s data

- Data may not exactly fit the needs of the study

114
Q

When is a sign test used? [3]:

A
  • Paired or related data
  • Repeated measure design
  • matched pair design
115
Q

How to sign test:

A

The calculated S value = the number of times the less frequent sign occurs (ignoring ties); see the sketch below
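
A minimal sketch of calculating S, assuming invented ‘before’ and ‘after’ scores for the same participants; ties are dropped and S is the count of the less frequent sign.

```python
# Minimal sketch of calculating S for a sign test on invented paired scores
# (e.g. each participant rated before and after some change).
before = [5, 7, 6, 4, 8, 6, 7, 5]
after  = [6, 9, 5, 6, 8, 7, 9, 7]

signs = []
for b, a in zip(before, after):
    if a != b:                       # ties (no difference) are dropped
        signs.append("+" if a > b else "-")

pluses, minuses = signs.count("+"), signs.count("-")
s_value = min(pluses, minuses)       # S = number of the less frequent sign
n = len(signs)                       # N = number of non-tied pairs
print(f"N = {n}, S = {s_value}")     # compare S with the critical value for N
```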

116
Q

Nominal data [definition]:

A

Named data which can be separated into discrete categories which do not overlap

117
Q

Ordinal data [definition]:

A

Data which is placed into some kind of order or scale

118
Q

Interval data [definition]:

A

Data which comes in the form of a numerical value where the difference between points is standardised and meaningful

119
Q

What are the types of data? [4]:

A
  • Nominal data
  • Ordinal data
  • Interval data
  • Ratio
120
Q

Peer review evaluation

A

*

121
Q

Content analysis [definition]:

A

A type of observational study where behaviour is observed indirectly in visual or verbal material

122
Q

Thematic analysis [explanation]:

A

Themes or categories are identified and data is organised into these themes

123
Q

Content analysis [3]:

A
  • Researcher has to pick whether they are using a time or event sample
  • Then have to pick behavioural categories for them to tally
  • Represent the data by analysing it quantitatively or qualitatively
124
Q

Example of quantitative content analysis [4]:

A
  • Anthony Manstead & Caroline McCulloch
  • Interested in the way men and women were presented in TV adverts
  • Observed 170 adverts over one week
  • Focused on the adult figure and recorded the frequency of each behaviour of interest in a table
125
Q

Thematic analysis [3]:

A
  • A qualitative form of content analysis
  • Qualitative data is summarised by identifying repeated themes
  • A very lengthy process because everything is analysed in depth
126
Q

Strengths of content analysis [2]:

A

+ Has high ecological validity because it is based on observations of what people actually do
+ Can be replicated because the sources can be accessed by others

127
Q

Weaknesses of content analysis [2]:

A
  • Observer bias reduces the OBJECTIVITY and VALIDITY of findings because different observers may interpret behavioural categories differently
  • Likely to be culturally biased because the observer judges behaviours by their own standards
128
Q

What are the intentions of thematic analysis? [3]:

A
  • To impose some kind of order on the data
  • To summarise the data (reduce its volume)
  • To ensure that the ‘order’ imposed is representative of the data
129
Q

Case study [2]:

A
  • A detailed, in-depth study of an individual
  • Provides a rich record of human experience

130
Q

Case study example [3]:

A
  • Henry Molaison (HM): his hippocampus was removed to treat epileptic seizures, leaving him unable to form new long-term memories
  • Little Hans
  • Phineas Gage: survived an iron rod passing through his brain
131
Q

Case studies strengths [2]:

A

+ Idiographic: in-depth data provides new insights

+ Allow us to investigate rare instances of human behaviour e.g Romanian orphanages

132
Q

Case studies weaknesses [2]:

A
  • Difficult to generalise/apply findings to the wider population
  • Ethical issues such as confidentiality, informed consent and protection from harm

133
Q

Inter-observer reliability [definition]:

A

The extent to which there is an agreement between two or more observers in an experiment

134
Q

Test-retest reliability [definition]:

A

The same test or interview is given to the same participant on a different occasion to see whether they get the same result

135
Q

How to assess reliability [3]:

A
  • Have 2 or more observers making separate records and then compare
  • Inter-observer reliability is the extent to which they agree
  • Calculated using a correlation coefficient (see the sketch below)
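
A minimal sketch of checking inter-observer reliability, assuming invented tallies from two observers and that SciPy is available; the correlation coefficient indicates how closely they agree.

```python
from scipy.stats import pearsonr  # assumes SciPy is installed

# Minimal sketch of inter-observer reliability: correlate the tallies two
# observers made for the same behavioural categories (invented numbers).
observer_1 = [12, 8, 15, 4, 9, 11]
observer_2 = [11, 9, 14, 5, 9, 12]

r, p = pearsonr(observer_1, observer_2)
print(f"correlation coefficient r = {r:.2f}")
# A common rule of thumb is that r above about 0.8 indicates good agreement.
```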
136
Q

Improving inter-observer reliability [2]:

A
  • Clearer behavioural categories (may not have been clear before)
  • Observers may need more practice using the categories
137
Q

Improving reliability [2]:

A
  • Reduce ambiguity of items in tests
  • Standardise procedure

138
Q

Concurrent validity [definition]:

A

Establishing validity by comparing an existing test with the one you are interested in

139
Q

Ecological validity [definition]:

A

The ability to generalise a research effect beyond the research setting

140
Q

Face validity [definition]:

A

The extent to which test items look like they measure what the test claims to measure

141
Q

Mundane realism [definition]:

A

The degree to which a study mirrors the real world (how realistic it is)

142
Q

Temporal validity [definition]:

A

whether research can be generalised beyond the time period of the study

143
Q

Validity [definition]:

A

whether an observed effect is a genuine one

144
Q

How to improve validity [2]:

A
  • If face validity is low: rewrite the questions/measure
  • If internal/external validity is low: use a better research design

145
Q

What are the features of science? [5]:

A
  • Empirical methods
  • Objectivity
  • Replicability
  • Theory construction
  • Hypothesis testing
146
Q

What are empirical methods?

A

When information is gained through observation or experimentation rather than unfounded beliefs

147
Q

Theory construction [explanation] [2]:

A

Facts alone are meaningless; theories/explanations must be constructed to make sense of the facts
- This can be done through hypothesis testing

148
Q

Falsifiability [definition]:

A

The possibility that a statement or hypothesis can be proven wrong

149
Q

Type 1 error [definition]:

A

When a researcher rejects a null hypothesis that’s true

150
Q

Type 2 error [definition]:

A

When a researcher accepts a null hypothesis that is not true

151
Q

When is p ≤ 0.01 used?

A

When a researcher is replicating another study because results need to be more certain

152
Q

What is a parametric test? [3]:

A
  • A test that requires interval or ratio level data
  • The data must be drawn from a population with a normal distribution
  • Both samples must have equal variances
153
Q

Non- parametric tests of difference [4]:

A
  • Wilcoxon test
  • Mann-whitney
  • Sign test
  • Chi-square
154
Q

Parametric tests of difference [2]:

A
  • Related t test

- Unrelated t test

155
Q

Tests of correlation [2]:

A
  • Spearman’s Rho (non-parametric)

- Pearson’s R (parametric)

156
Q

Wilcoxon test [3]:

A
  • Hypothesis states a difference
  • Related data (repeated measure/ matched pairs)
  • Ordinal data
157
Q

Wilcoxon significance [2]:

A
  • Calculated value of ‘T’ must be ≤ the critical value to be significant
  • If not significant we accept the null
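
A minimal sketch of running a Wilcoxon test, assuming invented related scores and that SciPy is available; `scipy.stats.wilcoxon` returns the T statistic and a p value rather than requiring a critical-value table.

```python
from scipy.stats import wilcoxon  # assumes SciPy is installed

# Minimal sketch of a Wilcoxon test on invented related (repeated measures)
# ordinal-style scores from the same participants in two conditions.
condition_a = [14, 11, 16, 13, 15, 12, 17, 10]
condition_b = [12, 12, 13, 10, 14, 10, 15, 9]

t_statistic, p_value = wilcoxon(condition_a, condition_b)
print(f"T = {t_statistic}, p = {p_value:.3f}")
# Hand calculation: T must be less than or equal to the critical value
# (for your N and significance level) to be significant.
```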
158
Q

Mann-Whitney [3]:

A
  • Hypothesis states a difference
  • Unrelated data (independent measure)
  • Ordinal data
159
Q

Mann-Whitney significance [2]:

A
  • Calculated value of ‘U’ must be ≤ the critical value to be significant
  • If not significant we accept the null
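
A minimal sketch of a Mann-Whitney test on invented unrelated scores, assuming SciPy is available.

```python
from scipy.stats import mannwhitneyu  # assumes SciPy is installed

# Minimal sketch of a Mann-Whitney test on invented unrelated
# (independent groups) ordinal-style scores.
group_1 = [14, 11, 16, 13, 15, 12]
group_2 = [10, 12, 9, 11, 13, 8]

u_statistic, p_value = mannwhitneyu(group_1, group_2)
print(f"U = {u_statistic}, p = {p_value:.3f}")
# Hand calculation: the smaller U must be less than or equal to the
# critical value to be significant.
```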
160
Q

Related t- test [3]:

A
  • Hypothesis states a difference
  • Related data (repeated measure/ matched pairs)
  • Interval data
161
Q

Related t-test significance [2]:

A
  • Calculated value of ‘t’ must be ≥ the critical value to be significant
  • If not significant we accept the null
162
Q

Unrelated t-test [3]:

A
  • Hypothesis states a difference in data
  • Unrelated data (independent measure)
  • Interval data
163
Q

Unrelated t-test significance [2]:

A
  • Calculated value of ‘t’ must be ≥ the critical value to be significant
  • If not significant we accept the null
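
A minimal sketch of the related and unrelated t-tests on invented interval-level scores, assuming SciPy is available.

```python
from scipy.stats import ttest_rel, ttest_ind  # assumes SciPy is installed

# Minimal sketch of the two parametric tests of difference.
before = [65, 70, 68, 72, 66, 74, 69, 71]   # related: same participants twice
after  = [68, 74, 70, 75, 69, 76, 72, 73]

group_a = [65, 70, 68, 72, 66, 74]           # unrelated: two separate groups
group_b = [60, 63, 66, 61, 64, 62]

t_rel, p_rel = ttest_rel(before, after)
t_ind, p_ind = ttest_ind(group_a, group_b)
print(f"related t = {t_rel:.2f}, p = {p_rel:.3f}")
print(f"unrelated t = {t_ind:.2f}, p = {p_ind:.3f}")
# Hand calculation: t (ignoring sign) must be greater than or equal to
# the critical value to be significant.
```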
164
Q

Spearman’s Rho [3]:

A
  • Hypothesis states a correlation
  • Related data (repeated measure/ matched pairs)
  • Ordinal data
165
Q

Spearman’s Rho significance [2]:

A
  • Calculated value of ‘rho’ must be ≥ the critical value to be significant
  • If not significant we accept the null
166
Q

Pearson’s R [3]:

A
  • hypothesis states a correlation
  • Related data (repeated measure/ matched pair)
  • Interval data
167
Q

Pearson’s R significance [2]:

A
  • Calculated value of ‘r’ must be ≥ the critical value to be significant
  • If not significant we accept the null
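
A minimal sketch of the two tests of correlation on invented co-variables, assuming SciPy is available.

```python
from scipy.stats import spearmanr, pearsonr  # assumes SciPy is installed

# Minimal sketch of the two tests of correlation on invented co-variables
# measured from the same participants.
hours_revised = [2, 5, 1, 7, 4, 6, 3, 8]
exam_score    = [35, 60, 30, 75, 55, 68, 45, 80]

rho, p_rho = spearmanr(hours_revised, exam_score)   # ordinal data
r, p_r = pearsonr(hours_revised, exam_score)        # interval data
print(f"Spearman's rho = {rho:.2f}, p = {p_rho:.3f}")
print(f"Pearson's r    = {r:.2f}, p = {p_r:.3f}")
# Hand calculation: rho or r must be greater than or equal to the
# critical value to be significant.
```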
168
Q

Chi square [3]:

A
  • Hypothesis states a difference/ association
  • Unrelated/ independent data
  • Nominal data
169
Q

Chi square significance [2]:

A
  • Calculated value of ‘x2’ must be ≥ the critical value to be significant
  • If not significant we accept the null
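
A minimal sketch of a chi-square test on an invented contingency table of nominal data, assuming SciPy is available.

```python
from scipy.stats import chi2_contingency  # assumes SciPy is installed

# Minimal sketch of a chi-square test on an invented contingency table,
# e.g. two groups (rows) against yes/no answers (columns).
observed = [
    [30, 10],   # group 1: yes, no
    [18, 22],   # group 2: yes, no
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# Hand calculation: chi-square must be greater than or equal to the
# critical value to be significant.
```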
170
Q

Sign test [3]:

A
  • Hypothesis states a difference
  • Related data (repeated measure/ matched pairs)
  • Nominal data
171
Q

Sign test significance [2]:

A
  • Calculated value of ‘S’ must be ≤ the critical value to be significant
  • If not significant we accept the null
172
Q

One-tailed test [definition]:

A

Form of test used with a directional hypothesis

173
Q

Two-tailed test [definition]:

A

Form of test used with a non-directional hypothesis

174
Q

Degrees of freedom

A

*

175
Q

levels of measurement =

A

Nominal, ordinal, interval, ratio

176
Q

Report structure [6]:

A
  1. Abstract
  2. Introduction
  3. Method
  4. Results
  5. Discussion
  6. References
177
Q

What is an abstract?

A

A summary of the study, e.g. aims, hypothesis and method

178
Q

What measure of central tendency is used for nominal data?

A

Mode

179
Q

What measure of central tendency is used for ordinal data?

A

Median

180
Q

What measure of central tendency is used for interval/ratio data?

A

Mean

181
Q

What is the order of sensitivity (most to least) for the measures of central tendency?

A
  1. Mean
  2. Median
  3. Mode