Research methods Flashcards

1
Q

what is ordinal data?

A

a made-up scale; a subjective scale based on judgement, e.g. ratings of attractiveness (as opposed to size of eyes) or ratings of aggression (as opposed to number of punches)

2
Q

what is interval data?

A

a precise scale with equal units. someone who took 10 secs to complete a task took twice as long as someone who took 5 secs, but someone who scores 10 on an attractiveness scale is not twice as attractive as someone who scored 5.

3
Q

what is ratio data?

A

the same as interval data but with an absolute zero, so there can be no negative scores, e.g. number of words recalled or height.

4
Q

what is a significant difference

A

a difference that is unlikely to be due to chance

can reject the null hypothesis

5
Q

what does just by chance mean?

A

the null hypothesis is true

6
Q

what percentage does a difference have to be to be deemed significant in psychology

A

5% p<0.05

7
Q

what level of significance is used in medical trials

A

1% p<0.01

8
Q

p means

A

the probability that the result is due to chance (i.e. the probability of obtaining the result if the null hypothesis were true)

9
Q

content analysis (2 marks)

A

a technique for analysing qualitative data of various kinds. Data can be placed in categories and counted (quantitative) or analysed by themes (qualitative)

10
Q

application of content analysis to a Q

A

psychologists observe/watch/read whatever the material is
this enables them to identify potential categories of different types of x that emerge
give examples of what the categories could be
then watch/read/observe again and count the number of examples falling into each category to provide quantitative data.
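
As a rough sketch of the counting step only (the category names and coded instances below are invented for illustration):

```python
from collections import Counter

# hypothetical categories identified on the first viewing/reading
categories = ["physical aggression", "verbal aggression", "prosocial act"]

# the category assigned to each instance spotted on the second pass
coded_instances = [
    "physical aggression", "verbal aggression", "physical aggression",
    "prosocial act", "verbal aggression", "verbal aggression",
]

# tally how many examples fall into each category (quantitative data)
tallies = Counter(coded_instances)
for category in categories:
    print(category, tallies[category])
```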

11
Q

how to assess the reliability of a content analysis

A

inter-rater reliability
the two psychologists could carry out the content analysis of the films separately and compare their answers to see if they got the same/similar tallies for each category - if they did, then the inter-rater reliability is high
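
A minimal sketch of the comparison, assuming each psychologist's tallies per category have already been collected (the numbers are invented); a high positive correlation between the two sets of tallies would indicate high inter-rater reliability:

```python
from scipy.stats import pearsonr

# hypothetical tallies for the same four categories from two raters
rater_a = [12, 7, 3, 9]
rater_b = [11, 8, 3, 10]

# correlate the two sets of tallies; closer to +1 means higher agreement
r, p = pearsonr(rater_a, rater_b)
print(f"inter-rater correlation r = {r:.2f}")
```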

12
Q

justify the use of repeated measures design

A

to remove the effects of individual differences on the DV that would arise if an independent groups design were used.
to avoid potential difficulties in matching participants
to reduce the number of participants needed for the experiment

13
Q

experimental extraneous variables

A
  1. situational variables: environmental factors that affect ppt behaviour, such as noise, temperature, lighting etc. (should be controlled by standardised procedures so the conditions are the same for every ppt, including standardised instructions)

  2. participant variables: e.g. mood, intelligence, anxiety, nerves, concentration
these could affect performance and therefore the results of the experiment

  3. investigator effects: because they know the aim of the experiment, researchers may interpret behaviour in a way that is biased and fits with what they were expecting
  4. demand characteristics: clues in the experiment that suggest to the ppt the purpose of the research
    to minimise the effects of this, the environment should be as natural as possible
14
Q

how to control order effects

A

counterbalancing: give half the ppts condition A first while the other ppts get condition B first
this balances out improvement due to practice or poorer performance due to boredom across the two conditions
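
A small sketch of how the split might be done (the participant IDs are made up):

```python
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

# randomly decide which half does A first; the other half does B first,
# so practice and boredom effects are spread evenly across conditions
random.shuffle(participants)
half = len(participants) // 2
a_then_b = participants[:half]
b_then_a = participants[half:]

print("A then B:", a_then_b)
print("B then A:", b_then_a)
```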

15
Q

how can participant variables be controlled

A

random allocation to conditions

16
Q

external validity

A

how far findings from an experiment can be generalised to real-life situations

e.g. a hazard perception test on a computer doesn't resemble a real-life driving situation (no noise, stress etc.), so it is ecologically invalid

17
Q

example of extraneous variable and how affects experiment

A

the conversation with the psychologist was not controlled, so the difficulty or the number of questions could have varied; this would influence the DV as more or less attention would be required

18
Q

possible ethical issues:

A
  • protection from harm
  • informed consent: participants should be given full info about the nature of the task before deciding whether or not to participate
  • debriefing: at the end of the experiment, feedback on performance and the ppt can ask questions
  • freedom to withdraw: ppts should be made aware of their right to withdraw before and during the experiment
  • confidentiality: ppts should not be identified and should retain anonymity (use of initials or numbers instead of names)
19
Q

writing a set of standardised instructions

A
  • you will take part in blah test and how long
  • what you have to do is
  • do you have any questions

have to be written to be read out
and formal

20
Q

variables in treatment studies

A

use of different therapists across conditions
length of time before assessment
the interaction between sex of therapist and patient
individual differences such as age and gender
whether patients were receiving other forms of therapy or medication.

21
Q

validity

A

how well a test measures what it says it measures- so is it true and accurate
and can it be generalised beyond the research setting within which it was found

22
Q

assessing validity e.g a questionnaire used to measure the severity of symptoms

A

CONCURRENT
take another measure of symptoms from the same ppts, e.g. from a doctor or family member, and compare the two sets of scores. if they agree then the measure has high validity

CONTENT
ask an expert to assess the questions to see if they are an accurate measure of panic attacks

CONSTRUCT
assess how closely the questions relate to underlying theoretical constructs i.e. how well they relate to panic symptoms

23
Q

giving fully informed consent- what should participants be told first

A

ppts should be informed about the trial (use the stem to give details on what they’ll be required to do)

AND

data will be anonymised so that they are not identifiable in the results
make them aware they are free to withdraw themselves or their data from the clinical trial if they want

24
Q

purpose of the abstract in a psychological report

A

to provide a short summary of the study, that is sufficient to establish whether it is worth reading the full report.

25
Q

recording data from interviews

A
audio recording (less intrusive than filming the patient),
so the patient is more likely to agree to take part or be honest

also, making written notes during the interview could be off-putting for the interviewee;
likewise with filming.

26
Q

how would you analyse qualitative data from the interviews

A

content analysis: involves identifying important categories from a sub-sample of interview responses (for example, references to homework or to warmth in the therapist - things they found helpful in therapy).
they would then work through the written data and count the number of occurrences of each of the categories to produce quantitative data

thematic analysis: this method involves reading and rereading (familiarisation) the written transcripts carefully. coding would involve looking for words which cropped up repeatedly in the transcripts. these could then be combined to reduce the number of codes into three or four themes. the data would stay in qualitative form and not be reduced to numbers.

27
Q

sampling techniques

A

opportunity - quicker and easier than a random sample using people who are in the surroundings e.g students in a school

random sample- participant selection is less biased because every student has an equal chance of being selected

volunteer sampling- participants would be interested in the task and likely to take it seriously

28
Q

importance of pilot studies (general)

A

make specific to the stem
but:
timing appropriate
length of study - not too tiring
make sure instructions are clear (if the stem is complex)
order of things, e.g. if things are shown to the ppt (should probably be random)
number of stimuli appropriate - sufficient to give reliable data

29
Q

reliability

A

refers to the consistency of findings/data or participant responses.

external reliability is whether the test is consistent over time (whether a measure varies from one use to another),
and internal reliability is whether results across items within a test are consistent

30
Q

assessing the reliability of a measure e.g. a questionnaire

A
split half (internal reliability)
scores on half of the items are correlated with scores on the other half; the higher the correlation, the more reliable the questionnaire

test-retest (external reliability)
the questionnaire is given to the same participants again on a different occasion; the scores on the two tests are correlated and the higher the correlation, the more reliable the questionnaire

so the similarity of scores between the tests
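
A rough illustration of both checks, assuming the item scores and occasion totals are already recorded (all numbers invented):

```python
from scipy.stats import pearsonr

# split-half: correlate totals from the odd items with totals from the even items
item_scores = [
    [3, 4, 2, 5, 3, 4],  # participant 1's answers to six items
    [1, 2, 1, 2, 2, 1],  # participant 2
    [4, 5, 4, 4, 5, 5],  # participant 3
    [2, 3, 2, 3, 2, 3],  # participant 4
]
odd_half = [sum(row[0::2]) for row in item_scores]
even_half = [sum(row[1::2]) for row in item_scores]
r_split, _ = pearsonr(odd_half, even_half)

# test-retest: correlate total scores from the two occasions
first_occasion = [21, 9, 27, 15]
second_occasion = [20, 11, 26, 16]
r_retest, _ = pearsonr(first_occasion, second_occasion)

print(f"split-half r = {r_split:.2f}, test-retest r = {r_retest:.2f}")
```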

31
Q

reasons for doing correlational research

A
  • when it’s not practical to do an experiment as you cannot manipulate the IV e.g individual differences
  • when it’s not ethical to manipulate the IV e.g stressful life events
  • preliminary research: establish that variables are related before conducting an experiment
32
Q

intra rater rel

A

similarity of ratings made by the same observer

33
Q

inter-rater rel

A

similarity of ratings made by different observers

34
Q

experimental realism

A

extent to which participants are engaged in and take seriously the experimental task (maybe more likely with volunteer samples)

35
Q

ecological validity

A

extent to which the results of a study generalise to other contexts

36
Q

population validity

A

extent to which the results of a study generalise to other people who weren’t part of the sample

37
Q

standardisation

A

process of making everything the same in a study in some relevant way

38
Q

randomisation

A

an alternative to standardisation for controlling extraneous variables in a study (e.g. randomising the order of stimuli or conditions)

39
Q

correlation

A

statistical measure often used to test the reliability of the measure (the higher the correlation the higher the reliability of the test)

40
Q

face validity

A

a basic form of validity in which a measure is scrutinised to determine whether it appears to measure what it is supposed to measure- for instance does a test of anxiety look like it measures anxiety

41
Q

concurrent validity

A

extent to which a score is similar to a score on another test that is known to be valid
(an existing similar measure)

42
Q

predictive validity

A

extent to which a score predicts future behaviour

43
Q

types of external validity

A

ecological (other settings/situations)
population (other people)
temporal (generalising to other historical times/eras)

44
Q

types of internal validity

A

operationalisation (does the study validly measure/manipulate the variables?)
control - are extraneous variables controlled?
experimental validity/realism - are participants engaged realistically in the experimental/research task?

45
Q

types of validity of measurement

A

face validity
concurrent
predictive

46
Q

threats to validity

A

demand characteristics: participants who know they are in the study may guess the aim and therefore behave differently to how they would if they did not know

social desirability bias - ppts may behave/respond in socially acceptable ways, either to appear a better person to the researcher or to themselves, so not truthfully

order effects - what ppts experience early in a study may affect their behaviour later (e.g. learning from practice)

Hawthorne effect - participants who know they are in a study may try harder than they would in everyday life

47
Q

what do you do if the results are not statistically significant

A

retain the null hypothesis

48
Q

type 1 error

A

when results are accepted as significant when in fact they are not; the alternative hypothesis is accepted when it is false (false positive)

where the researcher rejects the null hypothesis (or accepts the research/alternative hypothesis) when in fact the effect is due to chance - often referred to as an error of optimists

49
Q

what decreases the chance of a type 1 error

A

if the results are significant at p < 0.01, since that is a more stringent level of significance
the more stringent the level, the less chance of a type 1 error
the likelihood of the psychologist making a type 1 error is then 1 in 100 or less
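
A trivial sketch of the decision rule at the two levels, with an invented p-value:

```python
p_value = 0.004  # invented result from a statistical test

for alpha in (0.05, 0.01):
    if p_value < alpha:
        decision = "significant - reject the null hypothesis"
    else:
        decision = "not significant - retain the null hypothesis"
    print(f"at p < {alpha}: {decision}")
```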

50
Q

type 2 error

A

false negative
when the results are accepted as insignificant when they in fact are significant
the null hypothesis is retained when it is false

51
Q

what decreases the chance of a type 2 error

A

increase the sample size

52
Q

‘discuss’ purpose of a peer review

so can say limitations

A

to check in terms of suitability for publication, appropriateness of the theoretical background, methodology, statistics and conclusions

  • to ensure the methodology is sound, valid and does not involve plagiarism of other people's research
  • the findings are novel, interesting and relevant and add to knowledge of a particular research area
  • the authors are not making unjustified claims about the importance of their findings
  • ensures research is reviewed by fellow experts
  • maintains standards of published work and allows university research departments to be rated and funded in terms of their quality
  • ensures that poor quality research is not published in reputable journals
  • bias: it has been established that a publication bias occurs towards prestigious researchers and departments
  • bias towards positive findings - often negative findings and replications are not published, though these can be critical in establishing whether important findings are reliable.

(limitations)

  • time consuming and expensive can take months or years so delaying publication of important findings
  • peer review sometimes fails to prevent scientific fraud
  • as reviewers normally work in the same field and are competing for limited research funds, there is a temptation to delay or even prevent the publication of competing research
53
Q

how to measure behaviour - e.g helping behaviours

A

friends and family could be asked to rate the helpfulness of the ppt e.g on a scale from 1-10
or a scenario could be set up without their knowledge to observe their helping behaviour e.g a confederate of the researcher drops their books and the number of people helping/not helping is recorded

54
Q

chi squared test

A

test of difference
using independent designs
nominal data

55
Q

Mann-Whitney U test

A

test of difference
using independent designs
ordinal/interval/ratio data

56
Q

Wilcoxon

A
test of difference
related design (repeated measures or matched pairs)
ordinal, interval or ratio
57
Q

Spearman's

A

test of correlation
ordinal interval or ratio
rank order correlation coefficient

58
Q

statistical test box

A

China (= Chi-squared)

Man Will Spearman (= Mann-Whitney, Wilcoxon, Spearman's)
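
A hedged sketch matching each test in the mnemonic to a scipy function (the data are invented and only show which test fits which design and level of data):

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu, wilcoxon, spearmanr

# chi-squared: test of difference, unrelated design, nominal data (frequencies)
observed = np.array([[20, 10],   # e.g. condition A: helped / did not help
                     [12, 18]])  # condition B
chi2, p_chi, dof, expected = chi2_contingency(observed)

# Mann-Whitney U: test of difference, unrelated design, at least ordinal data
group_a = [12, 15, 9, 20, 14]
group_b = [8, 7, 11, 6, 10]
u_stat, p_u = mannwhitneyu(group_a, group_b)

# Wilcoxon: test of difference, related design, at least ordinal data
condition_1 = [12, 15, 9, 20, 14]
condition_2 = [10, 16, 7, 18, 11]
w_stat, p_w = wilcoxon(condition_1, condition_2)

# Spearman's: test of correlation, at least ordinal data (rank order)
rho, p_rho = spearmanr([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])

print(p_chi, p_u, p_w, p_rho)
```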

59
Q

ordinal data

A

data that can be ordered or ranked on a scale
does not have equal intervals between each unit

e.g. ranking IQ scores

lacks precision as it is subjective

uses the median as the measure of central tendency and the range for dispersion

60
Q

nominal data

A
simplest level
identifies categories and how many instances there are in each category
discrete data
(frequency)
uses the mode and no measure of dispersion
61
Q

interval data

A

items are placed on a scale with fixed intervals, so not only can you rank the data but you can also see how far apart scores are

uses the mean and standard deviation
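
A small sketch tying the three levels of data to their usual descriptive statistics (values invented):

```python
import statistics

nominal = ["dog", "cat", "dog", "dog", "cat"]      # categories: mode only
ordinal = [1, 2, 2, 4, 7, 9]                       # ranks: median and range
interval = [12.1, 13.4, 11.8, 12.9, 13.0]          # equal units: mean and SD

print("mode:", statistics.mode(nominal))
print("median:", statistics.median(ordinal), "range:", max(ordinal) - min(ordinal))
print("mean:", statistics.mean(interval), "SD:", statistics.stdev(interval))
```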

62
Q

how to deal with ethical issues

A

e.g. lack of informed consent - obtained by using a consent letter that the participant must sign

the consent letter should contain outline details of the study and refer to the right to withdraw and confidentiality of data

in cases of deception, presumptive consent may be used, in which case a full debriefing would be essential

63
Q

how to improve validity in experiments

A

using a control group means that a researcher is better able to assess whether changes in the DV were due to manipulation of the IV
e.g. a control group in a drug trial who do not receive the therapy means the researcher has greater confidence that improvements in the treatment condition were due to the therapy and not, for example, the passage of time

also standardisation: minimises the impact of ppt reactivity and investigator effects - with the use of single blind and double blind procedures
to reduce demand characteristics

64
Q

blind procedures

A

ppts are not made aware of the aims of the study until they have taken part, to reduce demand characteristics

65
Q

double blind procedures

A

a third party conducts the investigation without knowing its main purpose, which reduces both demand characteristics and investigator effects - improving validity

66
Q

how to improve validity of questionnaires

A

questionnaires and psychological tests often include a lie scale
within the questions in order to assess the consistency of a respondent's responses and to control for the effects of social desirability bias
validity may be further enhanced by assuring respondents that all data will be anonymous, so they're more likely to answer truthfully

67
Q

improving validity of observations

A

normally quite good
ecologically valid, especially when the observation is covert (minimal intervention of the researcher, who remains undetected)
so the behaviour observed is likely to be natural and authentic

however, if behavioural categories are too broad, overlapping or ambiguous they may have a negative impact on the validity of the data.

68
Q

why are qualitative methods of research often more ecologically valid than quantitative

A

because the depth and detail in case studies and interviews, for instance, is better able to reflect the participants' reality than numbers

triangulation enhances validity

69
Q

triangulation

A

the use of a number of different sources as evidence for example data compiled through interviews with friends and family, personal diaries and observations

consistency of evidence from different types of studies

70
Q

validity definition

A

refers to whether a psychological test, observation or experiment produces a result that is legitimate

so whether the observed effect is genuine and represents what actually exists in the real world.
this includes whether the researcher has managed to measure what they intended to measure (internal validity)
and whether those findings can be generalised beyond the research setting they were found in (external validity)

71
Q

CONSORT guidelines

A

Consolidated Standards of Reporting Trials

72
Q

generalised placebo effect

A

But as with any treatment there is a generalised placebo effect: knowing you're going to therapy allows the brain to prepare itself for some sort of improvement, and often symptoms are relieved, at least for a short period of time, due to this expectation. So the treatment does no harm and the worst it can do is nothing, so it's worthwhile attending.

73
Q

reasons for opportunity sampling

A

quicker and easier than random sampling

can just use the people around

74
Q

reasons for random sampling

A

participant selection is less biased because everyone has an equal chance of being selected, the fairest method of sampling

75
Q

reasons for volunteer sampling

A

participants are more likely to be interested in the task and take it seriously and therefore answer more honestly?

76
Q

importance of pilot study

A

apply to the experiment if necessary

to find out:
whether the DV is measurable

whether the timings are appropriate for the task

instructions can be understood

the length of the study is appropriate

can it be done with the resources available.

77
Q

meaning of reliability

A

the extent to which results/procedures are consistent (sameness of a measure/method/researcher)

78
Q

types/measures of reliability

A

internal reliability means that the items making up the questionnaire are assessing the same characteristic

external reliability means that the same person taking the questionnaire at two different times produces the same score.

79
Q

controlled meaning

A

the IV directly affects the DV and can be closely measured.

lab experiments have the highest levels of this.

80
Q

How to make an experiment more controlled?

A

e.g. standardised procedures and instructions, control of situational variables (noise, temperature, lighting), random allocation to conditions, and single/double blind procedures to reduce demand characteristics and investigator effects

81
Q

objective

A

dealing with facts in an unbiased way, so results are not influenced by opinion

when all sources of personal bias are minimised so as not to distort or influence the research process

82
Q

testable

A

links to testing hypotheses and making theories

a theory is testable if it is capable of being proved true or false using the scientific method.

the more consistent the results are with the theory, the more empirical the support.

83
Q

falsifiable

A

the principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue (false)

84
Q

replicable

A

refers to whether an experiment can be repeated to obtain the same results by other researchers
to check and verify the information
repeating can demonstrate the validity of the experiment
affirms the truth of original result

85
Q

issues around fraudulent research

A

1) lack of trust in the scientific community

2) the fraudulent research still remains published

86
Q

a peer review

A

a process where experts in the field assess a piece of work to consider its validity, significance and originality.

87
Q

why use a peer review

A

1) allocation of research funding - considering importance/priority
2) publication of research in scientific journals and books
3) assessing the research rating of university departments
4) to check methods/publication of results/bias??

88
Q

evaluation of a peer review

A

1) unachievable ideal - it can be hard to find experts in obscure areas, so bad research could be published due to the reviewers not fully understanding it
2) anonymity - usually used to keep reviewers honest and objective, but it can be used to settle scores or bury rival research
3) publication bias - reviewers favour positive results
4) preserving the status quo - reviewers prefer research which goes with the currently held theory

so peer review is really useful in an ideal world, yet with it come complications that somewhat defeat its purpose

89
Q

strengths of content analysis

from textbook

A

useful in that it circumnavigates many of the ethical issues normally associated with psychological research. much of the material to be studied may already be within the public domain, so there are no issues with obtaining permission.
it is flexible in that it produces both qualitative and quantitative data depending on the aims of the research

90
Q

limitations of content analysis

A

people tend to be studied indirectly as part of content analysis, so the communication they produce is often separated from the context within which it appeared.

there is a danger that the researcher may attribute opinions/motivations to the speaker or writer that were not intended originally.

content analysis may lack objectivity, especially when the more descriptive forms of thematic analysis are used.

91
Q

case study

A

an in-depth investigation, description or analysis of a single individual, group, institution or event
normally producing qualitative data
tend to be longitudinal

92
Q

strengths of case studies

A

offer rich, detailed insights
may shed light on very unusual forms of behaviour
this may be preferred to the more superficial forms of data obtained from an experiment or questionnaire, so depending on the investigation, case studies can be valuable research methods.

93
Q

limitations of case studies

A

generalisation of findings when dealing with small sample sizes is a problem.

the information that makes it into the final report is subjectively selected and interpreted by the researcher

personal accounts from individuals and family members may be prone to inaccuracies and memory decay when not tested in objective ways
e.g. children recalling information from early childhood
so generally the validity of research from case studies is fairly low

95
Q

internal reliability

A

whether the test is consistent within itself- assesses the consistency of results across items within a test

96
Q

external reliability

A

whether the test is consistent over time

97
Q

assessing internal reliability

A

inter rater rel

split half method

98
Q

assessing external rel

A

test/retest

99
Q

split half method

A

compares two halves of the test to see if they correlate
do all parts of the test contribute equally to what is being measured?
the similarity of the two sets of scores from different halves of a questionnaire

100
Q

inter-rater rel

A

the extent to which two independent observers agree in their assessment

101
Q

test/retest

A

the same test administered to the same person/group on different occasions and results compared for similarity

102
Q

type 1 error mark scheme (why researchers think they have not made this)

A

it is (probably, based on the question) unlikely because the calculated value may be higher than the critical value at both the 0.05 and 0.01 levels for a one-tailed test (so even at a more stringent level it is significant)
the chance of a type 1 error occurring is therefore less than 1% (0.01)
this means the researchers can be 99% certain that the results obtained are not due to chance.

103
Q

reasons for using lab experiments

A

control over extraneous variables: the lab setting minimises the influence of extraneous variables, e.g. waiting time/noise/stress, which are present in field experiments but can be easily removed in the lab.

ethical issues: in a field experiment, deception of patients or withholding of information is likely to be necessary.

lab experiments are easily replicable (contextualise to the stimulus if necessary)

104
Q

Design an experiment question

VARIABLES / CONTROLS / PROCEDURES

A

1) IV and DV, and design if not told
2) Controls, e.g. with a repeated measures design - order effects need to be controlled: counterbalancing

the 2 tasks should be matched for difficulty, for example

randomisation/a time delay between tasks is also creditable

3) Procedures
- dealing with ethical issues
- sampling
- details of conditions and allocation to them
- standardised instructions
- data collection

105
Q

one tailed test

A

directional

used when a theory predicts a difference or correlation
when previous research suggests a direction of difference

106
Q

two tailed test

A

non-directional

used when different theories make different predictions
when previous research is contradictory
when there is no previous research.

107
Q

example of extraneous variable influencing the results of an experiment

A

e.g. X (e.g.) conversations with psychology not controlled (1 mark for identification) affecting X e.g. the number or difficulty of questions. (1 mark) This would influence the DV because BLAH (1 mark).

108
Q

when asked about factors in an experiment that affect external validity

A

think about whether elements of the experiment are generalisable to other settings - were any part of it superficial? does the environment truly reflect real-life settings

109
Q

problem with two-tailed tests

A
  • more difficult to obtain significant results
  • the minimum 5% probability of the result being down to chance is halved to cover each tail (2.5% in each)
  • increased chance of a type 2 error
110
Q

issues with longitudinal studies

A

In longitudinal studies, problems include participant attrition and the inability to control intervening events
In self-report measures, problems include demand characteristics
Difficulties in establishing cause and effect.

111
Q

how to test internal validity of a questionnaire

A

Internal validity of the questionnaire can be established by concurrent validity (1 mark) by comparing questionnaire scores to those from another similar established measure (1 mark). A high positive correlation would indicate validity (1 mark).

112
Q

Content validity - questionnaire example - testing internal V

A

is the content of the questionnaire appropriate? A simple way of establishing this is face validity - does the questionnaire look right and appear to test what it aims to test? This could be established by using a rating scale to assess the suitability of the questionnaire. Those asked to rate the suitability could be the participants themselves or others in a position to offer judgement (perhaps teachers, parents, school counsellors). A high suitability rating might suggest a valid test.

113
Q

Criterion validity - questionnaire example - testing internal V

A

is a measure of the extent to which questionnaire items are actually measuring the topics that they are intended to measure. This may be established through concurrent validity: how well the scores relate to those from a currently existing similar measure that is known to have good validity. A high positive correlation between the two measures would indicate concurrent validity.

114
Q

predictive validity - questionnaire example- testing internal V

A

how well do questionnaire scores relate to later performance on a related measure? Similar performance on the two measures would be predicted, and again a high positive correlation between them would indicate validity.

115
Q

use of control group with treatment studies

A

1 mark for identifying that a control group acts as a comparator for the treatment group who receive the anxiety management programme.
Up to 2 additional marks for further explanation. If a control group was not used there would be no way of knowing whether students' anxiety levels may have changed over the period of the study, irrespective of whether they were in the programme or not, and therefore whether or not the programme was effective.

the control group provides reliable baseline data to compare the experimental group with

116
Q

why use random allocation

A

1 mark for a reason why participants were allocated randomly. For example, eliminates any researcher bias in participant allocation to conditions.
1 further mark for appropriate elaboration, e.g. this means that participant characteristics that might affect the research are not distributed systematically

117
Q

how can ppt be randomly allocated

A

ppts are given a number and these numbers are either placed in a hat or plugged into a random number generator designed for this purpose, and then allocated alternately to each condition.
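
A minimal sketch of the random-number-generator version (the participant list is hypothetical):

```python
import random

participants = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

# shuffle the numbered ppts, then allocate alternately to the two conditions
random.shuffle(participants)
condition_a = participants[0::2]
condition_b = participants[1::2]

print("Condition A:", condition_a)
print("Condition B:", condition_b)
```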

118
Q

improving reliability

A

1) Identify parts of test which didn’t correlate well and improve them
2) Reduce inaccuracies- more than one measurement from each participant
3) Pilot study
4) Use more than one investigator and standardise data collection

119
Q

objectivity

A

the methods are not influenced by the researcher's own beliefs
ensures that other researchers doing the same work should get the same results, as it is not subjective.
objectivity ensures validity

120
Q

theory construction

A

as part of scientific methods- explanation of why phenomena happen

121
Q

hypothesis testing

A

predictions derived from theories

122
Q

empirical methods

A

information gained through direct observation/experiment rather than by reasoned argument or unfounded beliefs.

it is sensory experience that is central to the development and formation of knowledge and thus central also to scientific method

123
Q

validating new knowledge

A

mostly done by a peer review

124
Q

how to diminish unconscious bias

A

standardised instructions
operational definition of observed variables
objective measures
double blind procedures

125
Q

methodological issues of questionnaires

A

Methodological issues are most likely to arise from the use of questionnaires eg their reliability/validity, poor response rate associated with sending back questionnaires leading to biased sample, demand characteristics etc. Reference to problems with correlational research should be credited.

126
Q

ethical issues of questionnaires

A

Ethical issues are most likely to surround confidentiality and consent. Protection from harm is also a possible issue in that this is a rather sensitive area and the results of the questionnaire could be distressing for the participants and they might need some support/counselling.

127
Q

self report questionnaires ads and disads over interviews

A

Advantage: Much quicker to administer and to score - could all have been given out at the same time whereas the therapist has to conduct 30 time-consuming interviews; cheaper than interviews, ie in terms of the therapist's time; people might be more comfortable, and, therefore, more honest, if they have to write responses rather than face an interviewer (could work the other way as well - see disadvantages).

Disadvantage: Self-report questionnaires might not yield as accurate data as an interview - questions can limit the range of answers and there are no additional cues, eg body language; participants might be less honest on a questionnaire than in a face-to-face interview.

128
Q

consent form content

A

• no pressure to consent - it will not affect any other aspects of their treatment if they choose not to take part
• they can withdraw at any time
• they can withdraw their data from the study
• their data will be kept confidential and anonymous
• they should feel free to ask the researcher any questions at any time
• they will receive a full debrief at the end of the programme.