Research Methods Flashcards

1
Q

What are confounding variables?

A

Confounding variables vary systematically with the IV, and so confound the findings of an experiment.

2
Q

What are extraneous variables?

A
  • EVs are additional, unwanted variables that could potentially affect the DV alongside the IV
  • Many EVs are straightforward to control, such as participant age and lighting in the lab
  • These are described as ‘nuisance variables’ that do not vary systematically with the IV
  • They do not confound the findings of the study, but may make it harder to detect a result
3
Q

What is mundane realism?

A

The term mundane realism refers to how closely an experiment mirrors the real world. If an experimental task lacks mundane realism, the results of the study may not be very useful in terms of understanding behaviour in the real world.

4
Q

What is generalisability?

A

The extent to which your results can be generalised to the full population. The materials and environment of the study affect its generalisability.

5
Q

What is internal validity?

A

Internal validity concerns what goes on inside a study.
- Is the experiment testing what it is meant to be testing?
- Did the IV produce a change in the DV, or did something else affect it?
- Were there any confounding variables?
- Did the study have mundane realism?

6
Q

What is external validity?

A

External validity concerns whether the findings of a study can be generalised beyond it. It is affected by low internal validity, as you cannot generalise the results of a study that was low in internal validity because the results have no real meaning for the behaviour in question.

Types of external validity:
Ecological validity
Population validity
Historical / temporal validity (over time)

7
Q

What is a directional hypothesis?

A

States the expected direction of the results, e.g. ‘people who sleep well do better on class tests’

8
Q

What is a non-directional hypothesis?

A

States that there is a difference between two conditions but does not state the direction of the difference, e.g. ‘people who have plentiful sleep get different marks on a class test’

9
Q

What are pilot studies?

A
  • A small-scale trial run of the investigation which takes place before the real one is conducted
  • The aim is to check the procedure, materials, et cetera to allow the researcher to iron out any potential problems and make any necessary modifications
  • They are usually run with a handful of participants and can be used to test any research method, from experiments to questionnaires to observations
  • They are important because the opportunity to identify problems and modify the investigation saves time and money that might otherwise be wasted during the real thing
10
Q

Outline independent groups

A

Different groups of participants each perform a different condition

+ Reduced demand characteristics
+ No order effects

  • Participant and/or situational variables
  • More participants needed
11
Q

Outline repeated measures

A

One group performs all the conditions

+ No participant variables
+ Fewer participants needed

  • Demand characteristics likely
  • Could be order effects (practice, fatigue)

Counterbalancing offsets order effects - ABBA, or half do AB and half BA
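The AB / BA split can be sketched in Python (a hypothetical illustration, not part of the original card): alternating participants between the two condition orders balances practice and fatigue effects across the sample.

```python
# Illustrative sketch of AB / BA counterbalancing (hypothetical helper name):
# even-numbered participants run A then B, odd-numbered run B then A,
# so order effects are spread evenly over both conditions.
def counterbalance(participants):
    """Assign alternating AB / BA condition orders to participant IDs."""
    orders = {}
    for i, p in enumerate(participants):
        orders[p] = ["A", "B"] if i % 2 == 0 else ["B", "A"]
    return orders

assignment = counterbalance(["p1", "p2", "p3", "p4"])
# p1 and p3 run A then B; p2 and p4 run B then A
```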

12
Q

Outline matched pairs

A

Participants are paired up on a certain quality, such as age or gender

+ Reduces participant variables
+ Reduces demand characteristics
+ No order effects

  • More participants needed
  • Matching participants can be time consuming and difficult
13
Q

Outline lab experiments

A

Lab experiments are experiments conducted in a special environment where variables can be carefully controlled.

+ Controlled so high internal validity
+ Easily replicated

  • Low mundane realism
  • Low ecological validity
  • More likely to have demand characteristics, participant effects, and investigator effects
14
Q

Outline field experiments

A

Field experiments are conducted in a real life setting. The IV is still deliberately manipulated by the researcher.

+ Participants are often unaware they are being studied, so fewer demand characteristics
+ Higher mundane realism and ecological validity

  • More difficult to control EVs
  • Ethical issues
  • Less easy to replicate
15
Q

Outline natural experiments

A

A natural experiment is conducted when it is not possible, for ethical or practical reasons, to deliberately manipulate the IV. The IV is a naturally occurring event.

+ Allows research where IV can’t be manipulated due to ethical or practical reasons
+ Increased mundane realism and ecological validity

  • Can’t demonstrate causal relationships
  • Random allocation not possible so may be variables that can’t be controlled
  • Probably unable to replicate
16
Q

Outline quasi experiments

A

IV is based on existing personal differences (age, gender)

+ Allows comparison between types of people
+ Less experimenter bias

  • Demand characteristics so reduced internal validity
  • Decreased mundane realism
  • Increased chance of participant variables
17
Q

What are demand characteristics?

A
  • Participant reactivity is a significant extraneous variable in experimental research and is very difficult to control
  • In a research situation, participants will try and work out what is going on, using certain clues
  • These clues are the demand characteristics of the experimental situation and may help a participant ‘second guess’ the experimenter’s intentions as well as the aims of the study
  • Participants may look for clues to tell them how they should behave in the experimental situation
  • They may act in a way they think is expected and over-perform to please the experimenter, which is the ‘please-you effect’
  • They may deliberately under-perform to sabotage the results of the study, which is the ‘screw-you effect’
  • Either way, participant behaviour is no longer natural, and is thus an extraneous variable that may affect the DV
18
Q

What are investigator effects?

A
  • Investigator effects are any cues (other than the IV) from an investigator that encourage certain behaviours in the participant, and which might lead to the fulfilment of the investigator’s expectations
  • Such cues act as extraneous or confounding variables
    Indirect investigator effects:
    The ‘investigator experimental design effect’ -> the investigator may operationalise the measurement of variables in such a way that the desired result is more likely, or may limit the duration of the study for the same reason.
    The ‘investigator loose procedure effect’ -> this refers to situations where an investigator may not clearly specify the standardised instructions and/or procedures, which leaves room for the results to be influenced by the experimenter.
19
Q

How do you reduce experimenter bias and demand characteristics?

A

Single blind - withhold aim from participants
Double blind - neither the participant nor the researcher is aware of the aim
Control group - neutral group to formulate comparisons or set a baseline

20
Q

What are participant variables?

A

Any characteristic of individual participants, such as personality or intelligence.

21
Q

What are situational variables?

A

Features of a research situation that may influence participants’ behaviour, such as temperature or time of day.

22
Q

What are the five types of sampling?

A

Opportunity
Random
Stratified
Systematic
Volunteer

23
Q

Outline opportunity sampling

A

People who are most convenient / available.

+ Easy, not time consuming

  • Biased because sample drawn from a small part of the population
24
Q

Outline random sampling

A

Sample obtained using random techniques, such as the lottery method or a random number table.

+ Unbiased

  • May be time consuming, need a list of the population
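As a sketch (hypothetical names, not from the card), the ‘lottery method’ maps directly onto Python’s random module: given a full list of the population, every member has an equal chance of selection.

```python
import random

# Minimal sketch of random sampling: draw 10 distinct members from a
# hypothetical population list of 100; each has an equal chance of selection.
population = [f"person_{i}" for i in range(1, 101)]  # the required list of the population
sample = random.sample(population, 10)               # 10 chosen without replacement
```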
25
Q

Outline stratified sampling

A

Subgroups within a population are identified, for example gender or age, and participants are obtained from each of the strata in proportion to their occurrence in the population. Selection from the strata is done randomly.

+ Representative

  • Time consuming
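The proportional-allocation step can be sketched in Python (a hypothetical helper, assuming each stratum’s members are listed): each stratum contributes participants in proportion to its share of the population, selected randomly within the stratum.

```python
import random

# Hedged sketch of stratified sampling: allocate the sample across strata
# in proportion to each stratum's size, then select randomly within strata.
def stratified_sample(strata, total_n):
    """strata: dict mapping stratum name -> list of members."""
    pop_size = sum(len(members) for members in strata.values())
    sample = []
    for members in strata.values():
        n = round(total_n * len(members) / pop_size)  # proportional allocation
        sample.extend(random.sample(members, n))
    return sample

strata = {"A": [f"a{i}" for i in range(60)], "B": [f"b{i}" for i in range(40)]}
chosen = stratified_sample(strata, 10)  # 6 from stratum A, 4 from stratum B
```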
26
Q

Outline systematic sampling

A

Using a predetermined system to select participants, such as selecting every nth person from a phone book.

+ Unbiased

  • Time consuming
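The every-nth-person rule is one line of Python slicing (a sketch with a hypothetical interval of 10):

```python
# Sketch of systematic sampling: take every 10th person from an ordered
# list of the population (the interval here is a hypothetical choice).
population = [f"person_{i}" for i in range(1, 101)]
interval = 10
sample = population[interval - 1::interval]  # person_10, person_20, ..., person_100
```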
27
Q

Outline volunteer sampling

A

Participants self-select, for example by responding to an advert.

+ May be more representative and less biased than opportunity sampling

  • Participants more likely to be highly motivated to be helpful or want the reward
  • Volunteer bias
28
Q

What are the six ethics guidelines?

A

Informed consent
Deception
Confidentiality and privacy
Debrief
Right to withdraw
Protection from (psychological) harm

29
Q

What are the strategies of dealing with ethical issues?

A

A cost-benefit analysis -> the costs of doing the research are judged against the benefits. The costs and benefits can be judged from the point of view of participants, society as a whole, or in terms of the group to which an individual belongs

Ethics committees -> these committees must approve any study beforehand, and the committee looks at all possible ethical issues raised in any research proposal and how the researcher suggests those issues will be dealt with

Punishment -> if a psychologist behaves in an unethical manner, the BPS reviews the research and may decide to bar the person from practising as a psychologist.

30
Q

Outline naturalistic observations

A

In a naturalistic observation, behaviour is studied in a natural situation where everything has been left as it is normally.

+ Gives a realistic picture of spontaneous behaviour

  • Little control over variables
31
Q

Outline controlled observations

A

In a controlled observation, some variables in the environment are regulated by the researcher, reducing the ‘naturalness’ of the environment, and the ‘naturalness’ of the behaviour being studied.

+ An observer can focus on particular aspects of behaviour

  • Control of variables means environment and behaviour is less natural
32
Q

Outline covert observations

A

People are observed without their knowledge.

+ Less demand characteristics - increased validity
+ Natural behaviour

  • Ethical issues
33
Q

Outline overt observations

A

Participants are aware their behaviour is being studied.

  • Demand characteristics
  • Unnatural behaviour
34
Q

Outline participant observation

A

Observations are made by someone who is also participating in the activity being observed.

+ Provides special insights into behaviour that might not otherwise be gained

  • Likely to be overt and thus have issues of participant awareness
35
Q

Outline non-participant observation

A

The observer is separate from the people being observed.

+ Observers are likely to be more objective

  • Likely to be covert, and thus ethical issues may arise
36
Q

What are unstructured observations?

A

All relevant behaviour is recorded with no system. One issue is that there may be too much to record. Furthermore, the behaviours recorded will often be those which are most visible or eye-catching to the observer, but these may not be the most important or relevant behaviours. Researchers may use this approach where research has not been conducted before, as a kind of pilot study to see what behaviours might be recorded.

37
Q

Outline structured observations

A

The researcher has a system used to record behaviour.

Behavioural categories:
Behaviour being studied is broken up into a set of components. Behavioural categories should:
- Be objective
- Cover all possible component behaviours and avoid a ‘waste basket’ category
- Be mutually exclusive

Sampling procedures:
Event sampling - counting the number of times a certain behaviour occurs
Time sampling - recording behaviours in a given time frame, for example, noting what the participant is doing every 30 seconds

38
Q

What is operationalisation?

A

Turning a variable (such as the DV) into a measurable format

39
Q

Evaluate self-report techniques

A

+ Allows access to what people think and feel, and to their experiences and attitudes

  • People may not supply truthful answers due to social desirability bias
  • If people do not know the answer to a question, they often make it up, meaning answers may lack validity
  • The sample of people used in any study using self-report may lack representativeness and thus the data may lack generalisability
40
Q

Outline questionnaires

A

Questionnaires are a self report technique. They allow researchers to discover what people think and feel, in contrast to observations, where researchers have to ‘guess’ what people think and feel on the basis of how they behave.

+ Once designed and tested, questionnaires can be distributed to large numbers of people relatively cheaply and quickly, enabling a researcher to collect data from a large sample of people
+ Respondents may be more willing to give personal information in a questionnaire than an interview, where they may feel self-conscious and more cautious

  • Questionnaires are only filled in by people who can read and write and have the time to fill them in, meaning the sample is biased
41
Q

Outline structured interviews

A

A structured interview has pre-determined questions, so is essentially a questionnaire that is delivered face to face. It is a self-report technique.

+ The main strength of a structured interview is that it can be easily repeated because the questions are standardised, meaning answers from different people can be compared
+ This also means that they are easier to analyse than an unstructured interview because answers are more predictable

  • Comparability may be a problem in a structured interview, if the same interviewer behaves differently on different occasions or different interviewers behave differently
  • A limitation of both structured and unstructured interviews is that the interviewer’s expectations may influence the answers the interviewee gives - interviewer bias
42
Q

Outline unstructured interviews

A

In an unstructured interview new questions are developed during the course of the interview. It is a self-report technique.

+ More detailed information can generally be obtained from each respondent than in a structured interview because the interviewer tailors further questions to the specific response

  • Unstructured interviews require interviewers with more skill than a structured interview because the interviewer has to develop new questions on the spot
  • Such in-depth questions may be more likely to lack objectivity than predetermined ones, because of their instantaneous nature, with no time for the interviewer to reflect on what to say
43
Q

How are questionnaires constructed?

A

There are three guiding principles when writing questions:
1. Clarity -> there should be no ambiguity in questions
2. Bias -> any bias in a question may lead the respondent to be more likely to give a particular answer, with the biggest issue being social desirability bias
3. Analysis -> questions need to be written so that the answers are easy to analyse; closed questions are therefore easier to analyse than open ones

44
Q

What are correlations?

A

A research method that investigates the relationship between two co-variables.

45
Q

What are co-variables?

A

The two variables examined together for a relationship. Both co-variables are measured, rather than manipulated.

46
Q

What is a meta-analysis?

A
  • A large-scale review study where a researcher takes the data of lots of individual studies and then analyses it all together to get an overall picture
  • It’s an ‘analysis of analyses’
47
Q

Outline the measures of central tendency

A

Mean:
+ The most sensitive measure of central tendency because it takes account of the exact distance between all the values of all the data
- Can be easily distorted by an anomaly so may be misrepresentative of the data as a whole
- Cannot be used with nominal data

Median:
+ Not affected by extreme scores
+ Easier to calculate than the mean
- Not as ‘sensitive’ as the mean because the exact values are not reflected in the final calculation

Mode:
+ Unaffected by extreme values
+ Only method that can be used on nominal data
- Not a useful way of describing data when there are several modes
- Tells us nothing about the other values in a distribution
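The card’s points can be checked with Python’s statistics module (hypothetical scores, a sketch rather than anything from the deck): the mean shifts badly when an anomaly appears, while the median and mode do not.

```python
import statistics

# Mean, median, and mode of a small hypothetical score set.
scores = [2, 3, 3, 4, 5]
mean = statistics.mean(scores)      # 3.4 - uses the exact value of every score
median = statistics.median(scores)  # 3
mode = statistics.mode(scores)      # 3

# One anomaly distorts the mean but leaves the median untouched:
scores_with_outlier = [2, 3, 3, 4, 50]
distorted_mean = statistics.mean(scores_with_outlier)   # jumps to 12.4
stable_median = statistics.median(scores_with_outlier)  # still 3
```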

48
Q

Outline the measures of dispersion

A

Range:
+ Easy to calculate
- Affected by extreme values
- Fails to take into account the distribution of the numbers

Standard deviation:
+ A precise measure of dispersion because it takes all the exact values into account
- May hide some of the characteristics of the data set, such as extreme values
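Both measures can be computed in a few lines (a sketch with hypothetical scores), which also shows how a single extreme value inflates the range:

```python
import statistics

# Range vs standard deviation on a small hypothetical score set.
scores = [10, 12, 14, 16, 18]
score_range = max(scores) - min(scores)  # 8
sd = statistics.stdev(scores)            # sample SD, about 3.16 - uses every value

# One extreme value inflates the range dramatically:
scores_with_outlier = [10, 12, 14, 16, 98]
outlier_range = max(scores_with_outlier) - min(scores_with_outlier)  # 88
```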

49
Q

What are the standard deviations?

A

Beyond -3 SD = 0.13%
-3 to -2 SD = 2.15%
-2 to -1 SD = 13.59%
-1 to +1 SD = 68.26%
+1 to +2 SD = 13.59%
+2 to +3 SD = 2.15%
Beyond +3 SD = 0.13%
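These band percentages can be checked against the standard normal distribution with Python’s statistics.NormalDist (a sketch; note the card rounds slightly differently in places, e.g. 2.14% vs 2.15%):

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, SD 1

# Percentage of scores falling in each band, via the cumulative distribution.
within_1sd = (nd.cdf(1) - nd.cdf(-1)) * 100  # about 68.27%
band_1_to_2 = (nd.cdf(2) - nd.cdf(1)) * 100  # about 13.59%
band_2_to_3 = (nd.cdf(3) - nd.cdf(2)) * 100  # about 2.14%
beyond_3 = (1 - nd.cdf(3)) * 100             # about 0.13% in each tail
```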

50
Q

Outline data distribution

A

Positively skewed distribution - the tail extends to the right; from left to right the averages fall mode, median, mean
Negatively skewed distribution - the tail extends to the left; from left to right the averages fall mean, median, mode

51
Q

What is quantitative data?

A

Numerical data

+ easy to analyse, test with statistics, and compare for patterns and trends
+ objective, scientific, less open to investigator bias
- not detailed, doesn’t explain the reasons behind behaviour

52
Q

What is qualitative data?

A

Non-numerical data

+ detailed and can provide real insight into why behaviour is the way it is, behaviour isn’t limited by numbers - anything can be recorded
- more subjective and open to investigator bias
- not easy to analyse and compare for patterns and trends

53
Q

What is primary data?

A

Information observed or collected directly from firsthand experience

+ a strength of generating primary data is the researcher has full control over the data. The data collection can be designed to fit the aims and hypothesis of the study.
- it is a very lengthy and therefore expensive process. Simply designing a study takes a lot of time and then time is spent recruiting participants, conducting the study, and analysing the data.

54
Q

What is secondary data?

A

Information that was collected for a purpose other than the current one

+ it is simpler and cheaper to just access someone else’s data because significantly less time and equipment is needed
+ such data may have been subjected to statistical testing and thus it is known whether it is significant
- the data may not exactly fit the needs of the study

55
Q

What are the three levels of data?

A

Nominal
Ordinal
Interval

56
Q

What is nominal data?

A
  • This type of data is used to classify or categorise things, like yes/no or male/female answers, when collating data in a study
  • This system is generally accepted as a means of identification
  • A set of data is said to be nominal if the values/observations belonging to it can be assigned a code in the form of a number where the numbers are simply labels
  • You can count but not order or measure nominal data
57
Q

What is ordinal data?

A
  • A set of data is said to be ordinal if the values/observations belonging to it can be ranked or have a rating scale attached
  • You can count and order, but not measure ordinal data
58
Q

What is interval data?

A
  • Data is measured using units of equal intervals
  • Scales are accepted, precise ones, such as distance, time, and temperature
  • With this scale, arithmetic operations are possible - this is because the numbers are ordered and the interval between each number along the scale is of equal size
59
Q

The statistical test table

A

[Level of data] [Test of difference: independent] [Test of difference: repeated and matched] [Correlation]
Nominal: [Chi-Squared] [Sign Test] [Chi-Squared]
Ordinal: [Mann-Whitney] [Wilcoxon] [Spearman’s Rho]
Interval: [Unrelated t] [Related t] [Pearson’s r]

  • If the name of the test has an R in it, the observed value needs to be moRe than the critical value
  • If the name of the test doesn’t have an R in it, the observed value needs to be less than the critical value
  • Statistical tests for nominal and ordinal data are non-parametric
  • Statistical tests for interval data are parametric
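The sign test from the table is simple enough to compute directly (a hedged sketch with hypothetical data): under the null hypothesis each participant’s + or - sign is equally likely, so the count of the rarer sign follows a binomial distribution with p = 0.5.

```python
from math import comb

def sign_test_p(n_plus, n_minus):
    """One-tailed p-value: probability of this few (or fewer) of the rarer sign."""
    n = n_plus + n_minus
    s = min(n_plus, n_minus)
    # Sum binomial probabilities for 0..s occurrences of the rarer sign.
    return sum(comb(n, k) for k in range(s + 1)) / 2 ** n

# Hypothetical example: 10 participants improved, 2 got worse.
p_value = sign_test_p(10, 2)      # about 0.019
significant = p_value <= 0.05     # significant at the accepted level
```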
60
Q

What is the accepted probability in psychology?

A

p ≤ 0.05

61
Q

What is a type I error?

A
  • A false positive
  • Where the null hypothesis is rejected, and the alternative hypothesis is accepted
  • Where the researchers claim to have found a significant difference when there is not one
62
Q

What is a type II error?

A
  • A false negative
  • The null hypothesis is accepted, and the alternative hypothesis is rejected
  • The claim of no significant difference between variables, when there actually is one
63
Q

Null hypothesis

A

The null hypothesis is written alongside the experimental one, and states there is ‘no difference’ between the conditions. The results of the experiment will determine which hypothesis is ‘true’, and which one is rejected.

64
Q

What are the three aims of peer review?

A

Allocation of funding
Publication of research in academic journals and books
Assessing the research rating of university departments

65
Q

Outline the Cyril Burt affair

A

In the early 1950s, Cyril Burt published results from a twin study that was used to show intelligence is inherited. He started with 21 pairs of twins raised apart and then increased that to 42. The suspicious consistency of the correlation coefficient led to him being accused of inventing data. His two research assistants didn’t exist. However, his research was used to shape social policy. He helped establish the 11+ exams to identify who should go to grammar schools. He argued that since IQ was largely inherited, it was appropriate to segregate children into schools suited to their ability.

66
Q

Outline peer review and the internet

A

There is an extremely large amount of information available on the Internet. Scientific information is available on numerous online blogs, online journals, and Wikipedia. Information is often policed by the ‘wisdom of crowds’ approach. Readers decide whether it is valid or not and post comments. On Philica, papers are ranked on the basis of peer reviews which can be read by anyone.

67
Q

Evaluate peer review

A

A limitation is that it isn’t always possible to find an appropriate expert to review a research proposal or report. This means that poor research may be passed because the reviewer didn’t really understand it. In addition, reviewers may be biased towards prestigious researchers rather than less well-known names. This emphasises the need to always take a rigorous and questioning approach, both to the appointment of reviewers and to their work.

A limitation of peer review is publication bias. Journals prefer to publish positive results because editors want research that has important implications to increase the standing of the journal. This results in a bias in published research that leads to a misperception of facts. It also appears that journals avoid publishing replications. Ritchie et al. submitted a replication of a study on paranormal phenomena and found that it wasn’t even considered for peer review. This suggests that academic journals are as bad as newspapers at seeking eye-catching stories.

With peer reviews, reviewers have to keep their identity secret. This aims to allow them to be honest and objective. However, it may have the opposite effect; reviewers may use the veil of anonymity to settle old scores or bury rival research. Research is conducted in a social world where people compete for research grants and jobs and make friends and enemies. Social relationships inevitably affect objectivity. To combat these issues, some journals now favour open reviewing, where both author and reviewer know each other’s identity.

68
Q

What are the sections of a scientific report?

A

Abstract
Introduction
Method
Results
Discussion
Referencing

69
Q

What is the abstract section of a scientific report?

A
  • short summary of all sections of the report (150-200 words)
  • includes aims / hypothesis, method, results, and conclusion
  • it is useful for psychologists to decide which areas of interest are worthy of future investigation
70
Q

What is the introduction section of a scientific report?

A
  • a discussion of the general area of interest of your study
  • includes findings from other major / relevant research in the area
  • should begin quite broadly, but then become more specific in relation to your area of interest
  • the final part is where you present your aims and hypothesis
71
Q

What is the method section of a scientific report?

A
  • must include sufficient detail so other researchers can replicate the study if they wish
    It outlines:
  • design, both experimental and actual method
  • sample
  • apparatus / material
  • procedure
  • ethics
72
Q

What is the results section of a scientific report?

A
  • should summarise key findings
  • descriptive statistics, such as graphs and tables
  • inferential statistics, such as statistical tests
  • raw data
  • if qualitative data has been used then a thematic analysis
73
Q

What is the discussion section of a scientific report?

A
  • summary of findings in written rather than statistical form
  • discussed in the context of the introduction
  • should acknowledge weaknesses and suggest how these may be addressed in a future study
  • consideration of the wider implications of the research
74
Q

What are the 5 features of a science?

A
  1. Objectivity and the empirical method
  2. Replicability
  3. Falsifiability
  4. Theory construction and hypothesis testing
  5. Paradigms and paradigm shifts
75
Q

What is objectivity?

A
  • scientists aim to be objective in their investigations
  • this means they keep a ‘critical distance’ in their research
  • they must not allow their personal opinions or biases to ‘discolour’ the data they collect or influence the behaviour of the participants they are studying
  • objective methods in psychology are usually those where there is high control, such as lab experiments
76
Q

What is the empirical method?

A
  • objectivity is the basis of the empirical method
  • empirical methods emphasise the importance of data collection based on direct, sensory experience
  • the experimental and observational methods are good examples of the empirical method in psychology
  • a theory cannot claim to be scientific unless it has been empirically tested and verified
77
Q

What is replicability?

A
  • replication has an important role in determining the validity of a finding
  • if a scientific theory is to be ‘trusted’, the findings from it must be shown to be repeatable across a number of different contexts and circumstances
  • replicating findings over different contexts and circumstances allows us to see the extent to which findings can be generalised
78
Q

What is falsifiability?

A
  • in the 1930s, Karl Popper asserted that the key criterion of a scientific theory is falsifiability
  • genuine scientific theories, Popper suggested, should hold themselves up for hypothesis testing and the possibility of being proved false
  • he believed that even when a scientific principle had been successfully and repeatedly tested, it was not necessarily true, but had simply not been proven false
  • Popper drew a clear line between good science, in which theories are constantly challenged, and what he called ‘pseudosciences’, which couldn’t be falsified
  • those scientific theories that survive most attempts to falsify them become the strongest, as they have defied attempts to prove them false
  • this explains why psychologists avoid using phrases such as ‘this proves’ in favour of ‘this supports’ or ‘this suggests’
  • it also explains why alternative hypotheses must also have null hypotheses
79
Q

What is theory construction and hypothesis testing?

A
  • a theory is a set of general laws or principles that can explain events or behaviours
  • theory construction occurs through gathering evidence via direct observation, using the empirical method
  • it should be possible to make clear and precise predictions on the basis of a theory
  • an essential component of a theory is that it can be scientifically tested
  • theories should suggest a number of possible hypotheses
  • any one of these hypotheses can then be tested using systematic and objective methods to determine whether it will be supported or refuted
  • in the case of supporting, the theory is strengthened
  • in the case of refuting, the theory will be revised, revisited, or rejected
  • the process of deriving new hypotheses from an existing theory is known as deduction
80
Q

What are paradigms and paradigm shifts?

A

Paradigms:
- a paradigm is a clear, distinct concept accepted by most people in a scientific field, for example, evolution, plate tectonics, and planets orbiting the sun
- Thomas Kuhn suggested that what distinguishes scientific disciplines and non-scientific disciplines is a shared set of assumptions and methods, which is a paradigm
- Kuhn suggested that psychology lacks a universally accepted paradigm, and has too many internal disagreements and conflicting approaches, so is best seen as a ‘pre-science’
Paradigm shifts:
- according to Kuhn, progress within a particular science occurs when there is a scientific revolution, which is when a group of researchers begin to question the accepted paradigm, as there is too much contradictory evidence to ignore
- Kuhn would argue that psychology has not undergone any paradigm shifts

81
Q

What is internal reliability?

A

The extent to which the subject matter within the confines of the research is being measured consistently by the method / procedure or tool. This can be influenced by standardised procedures, standardised instructions, operationalisation of variables, and the tool used to measure the dependent variable.

82
Q

What is external reliability?

A

The extent to which the subject matter is measured consistently outside of the original study, by using the same method / procedure or tool, on the same people, on subsequent replications. This can be influenced by how quickly the materials or measuring tool date, for example through changes in fashion, language, technology, and understanding over time, or if the materials were used in exceptional circumstances.

83
Q

What are the 3 issues of reliability?

A
  1. If a procedure does not have reliability then it means that something has not been done consistently in the study.
  2. The measuring tool does not consistently measure the same when used on repeated occasions.
  3. In an observation, researchers recording data could be inconsistent with their observations and what they write down.
84
Q

What are the 2 ways of assessing the reliability of research?

A
  1. Test-Retest
  2. Inter-Observer Reliability
85
Q

What is Test-Retest?

A

This checks the reliability of a measuring tool. It is done by administering the same test to the same people on different occasions. The timing is important: the gap must not be so short that participants remember the test, nor so long that their performance could have genuinely changed. If the scores are similar on both occasions then the tool has reliability; this is normally checked by correlating the two sets of scores. If the tool shows poor reliability, the questionnaire might need changing, as the questions may be too subjective or ambiguous.
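As an illustrative sketch, the correlation step can be shown in Python. All scores below are invented example data, and the 0.8 cut-off follows the convention described in these cards:

```python
# Sketch of a test-retest reliability check using a Pearson correlation.
# The scores are hypothetical example data, not from a real study.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Same questionnaire given to the same people on two occasions
first_test  = [12, 18, 9, 22, 15, 11]
second_test = [13, 17, 10, 21, 16, 12]

r = pearson_r(first_test, second_test)
print(f"test-retest r = {r:.2f}")
print("reliable" if r >= 0.8 else "revise the questions")
```

Similar scores on both occasions produce a coefficient close to +1, so the sketch would report the tool as reliable.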

86
Q

What is Inter-Observer Reliability?

A

This checks that observers assessing behaviour are consistent / reliable with one another. It is done by giving two (or more) observers the same coding sheet and having them independently record data from the same participant. If the observers’ scores are similar / correlate well then they are observing reliably. If the check shows poor reliability, the behavioural categories may need revising, for example because they overlap too much, and the observers may need more training. When checking reliability, the data are usually correlated to check consistency; if the correlation coefficient is +0.8 or above then the study is said to have reliability.
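A simple way to see the idea is to compare two observers’ records interval by interval. This sketch uses proportion agreement rather than a correlation, and the categories and records are invented example data:

```python
# Sketch of an inter-observer reliability check: two observers
# independently code the same participant's behaviour using the
# same coding sheet. Categories and records are hypothetical.

observer_a = ["play", "play", "aggression", "rest", "play", "rest", "aggression", "play"]
observer_b = ["play", "rest", "aggression", "rest", "play", "rest", "play", "play"]

agreements = sum(a == b for a, b in zip(observer_a, observer_b))
proportion = agreements / len(observer_a)

print(f"agreement = {proportion:.2f}")
print("reliable" if proportion >= 0.8 else "check categories / retrain observers")
```

Here the two observers agree on 6 of 8 intervals (0.75), which falls below the 0.8 threshold, suggesting the behavioural categories need checking or the observers need more training.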

87
Q

How do you improve reliability in experiments?

A
  • Lab based experiments should have good control over the procedures
  • Standardisation
88
Q

How do you improve reliability in observations?

A
  • Use controlled and structured observations
  • Categories should be measurable and clear
  • More than one observer should be used, and an inter-observer reliability check should take place
  • Behaviour should be filmed
89
Q

How do you improve reliability in correlations?

A
  • Check the measuring tools and co-variables for reliability using test-retest
90
Q

How do you improve reliability in questionnaires?

A
  • Use test-retest
  • Anything that is unreliable will need adapting, for example removing questions or making them clearer
  • Using closed questions rather than open
91
Q

How do you improve reliability in interviews?

A
  • Having the same interviewer for each participant
  • Using a structured interview can improve reliability
92
Q

What are case studies?

A
  • An in-depth investigation of one person or a small group
  • They use a range of sources, such as the individual, family, and friends
  • They use many techniques, such as psychological tests, observations, interviews, and experiments
  • Case studies produce rich, meaningful, descriptive detail about what they are studying
  • They produce qualitative data
93
Q

What are the advantages of case studies?

A
  • Rich, in-depth, qualitative data can be collected
  • A complex interaction of factors can be studied, in contrast to experiments where only one is usually studied
94
Q

What are the disadvantages of case studies?

A
  • They are difficult to generalise as each person has unique factors
  • They are subjective, and may be affected by investigator bias
  • They are difficult to replicate, which affects the reliability of results
95
Q

How do you deal with validity and reliability in case studies?

A

Validity:
The validity of case studies will often depend upon the methods used as part of the study. The validity of each method, such as observations and interviews, will be assessed separately.
Reliability:
Case studies often collect qualitative data, which can be open to researcher bias. One way to overcome this is to use inter-observer reliability. By showing the results to another researcher and comparing the results, we can assess how reliable the data is.

96
Q

What is content analysis?

A
  • A form of observation, but instead of people, communication is studied, such as newspapers, TV adverts, magazines etc
  • It is the analysis of something created by humans
  • Things are studied for patterns and trends within them and then summarised in a systematic way so conclusions can be made
  • They can be quantitative or qualitative in their design
  • The aim is to summarise and describe this communication in a systematic way so overall conclusions can be drawn
97
Q

How do you carry out a content analysis qualitatively?

A

Thematic analysis:
- A form of content analysis but the outcome is qualitative
- It involves organising, describing, and interpreting data
- Identified themes become the categories for analysis, with thematic analysis performed through the process of coding
- A theme is any idea, implicit or explicit, that is recurring in the data

98
Q

What are the steps to a thematic analysis?

A
  1. Familiarisation with the data -> get to know the content of the data in detail
  2. Coding -> identify important features of the data
  3. Searching for themes -> identify patterns of meaning
  4. Reviewing themes -> ascertaining whether the themes explain the data and answer the research question
  5. Defining and naming themes -> a detailed analysis of each theme
  6. Writing up
99
Q

How do you carry out a content analysis quantitatively?

A

Coding:
- It involves the quantification of qualitative material; the numerical analysis of written, verbal, and visual communications
- This creates behavioural categories that will be examined in the objects of study

Example coding units:
Word -> number of slang words used
Theme -> amount of violence on TV
Character -> the number of female commentators there are in TV sports programmes
Time and space -> the amount of time (on TV) and space (in newspapers) dedicated to eating disorders
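The coding units above can be tallied mechanically once the categories are set. This is a hypothetical sketch of the ‘word’ coding unit (counting slang words); the transcript and slang list are invented:

```python
# Sketch of quantitative coding in a content analysis: tallying how
# often each coding unit (here, a slang word) appears in a transcript.
# The transcript and slang list are invented example data.

transcript = "that film was lit mate, proper lit, totally epic mate"
slang_words = {"lit", "mate", "epic"}

tally = {}
for word in transcript.replace(",", "").split():
    if word in slang_words:
        tally[word] = tally.get(word, 0) + 1

print(tally)  # {'lit': 2, 'mate': 2, 'epic': 1}
```

The resulting counts are the quantitative data from which patterns and trends can be summarised.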

100
Q

Give an example of coding in content analysis

A

Matthews (2012):
- He analysed 1200 instances of graffiti from toilet walls in US bars
- Graffiti was coded according to categories:
Sexual
Socio-Political (race, politics, religion)
Entertainment (music/film)
Physical presence (someone’s name)
Love/romance

Matthews found that males composed more physical presence or sexual graffiti, and females were more romantic in their graffiti.

101
Q

How is content analysis done?

A
  1. Sampling -> decide on what will be sampled
  2. Familiarise -> read over the sample
  3. Construct categories
  4. Collect data
  5. Draw conclusions
102
Q

What are the strengths of content analysis?

A
  • No social desirability bias or demand characteristics, as the analysis uses material that already exists, which improves internal validity.
  • Content analysis is based on real-life sources of information rather than the artificial situations that are found in experiments. This gives content analysis high mundane realism which also leads to high levels of ecological validity.
  • Content analyses can be easily replicated, as other researchers can use the same sources and coding systems and find the same results, making it easy to check the reliability of the data.
103
Q

What are the weaknesses of content analysis?

A
  • It is time consuming to carry out and difficult to do, which can result in issues with incomplete analysis, which affects the internal validity.
  • There are issues with observer bias, which affects internal validity.
  • Cause and effect relationships cannot be demonstrated using content analysis, as patterns and trends can be identified, but they could be either reflecting the social world, or causing differences in the social world.
  • Content analysis may be subject to researcher bias, which is where researchers find what they are looking for because of their preconceived and subjective ideas and beliefs, and this lowers the objectivity of the method.
104
Q

How do you improve the validity and reliability of a content analysis?

A

Validity -> use a double blind
Reliability -> have more than one observer analyse the sample, use the same coding sheet / behavioural categories

105
Q

Outline ways of assessing validity

A

Face validity:
Whether a test, scale, or measure appears ‘on the face of it’ to measure what it is supposed to measure. This can be determined by simply ‘eyeballing’ the measuring instrument or by passing it to an expert to check.

Concurrent validity:
The concurrent validity of a particular test or scale is demonstrated when the results obtained are very close to, or match, those obtained on another recognised and well-established test. Close agreement between the two sets of data indicates that the new test has high concurrent validity; close agreement is indicated if the correlation between the two sets of scores exceeds +0.8

106
Q

How do you improve the validity of experiments?

A
  • Control group
  • Standardised procedures
  • Single and double blind procedures
107
Q

How do you improve the validity of questionnaires?

A
  • Lie scale
  • Anonymous data
108
Q

How do you improve the validity of observations?

A
  • Covert observations
  • Clear behavioural categories
109
Q

How do you improve the validity of qualitative research?

A
  • Coherence of the researcher’s narrative and the inclusion of direct quotes from participants within the report
  • Triangulation, which is the use of a number of different sources as evidence, for example, data compiled through interviews with friends and family, personal diaries, observations etc