Topic 2: Research Methods Flashcards

1
Q

Types of experimental methods

A

- Laboratory experiments
- Field experiments
- Natural experiments
- Quasi-experiments

2
Q

Laboratory experiments

A
  • Usually conducted in a lab, using standardised procedures but can be done in any controlled environment. Participants should be randomly allocated to groups.
  • manipulates one variable (IV) to measure its effect on another (DV)

Strengths:
- Replication of procedures is easy
- High degree of control (most variables can be controlled in the situation)
- Cause and effect: The relationship between IV and the DV should be easy to determine as long as the experiment is well designed.

Weaknesses:
- Some situations cannot be created in lab conditions.
- May lack ecological validity
- Participants may be aware they are being studied, causing demand characteristics, which will affect results.

3
Q

Field experiments

A

An experiment performed in the natural environment, in which the independent variable is still manipulated by the researcher.

Strengths:
- Participants are unaware of the study - eliminates demand characteristics and participant reactivity.
- Setting is more natural so participants may be more relaxed.
- High ecological validity (due to natural setting, results are more likely to show true behaviour and results can be generalised to other situations.)

Limitations:
- The IV in a FE may lack mundane realism.
- Less control over IV, DV and Extraneous variables.
- Ethical issues due to deception.
- Replication is difficult as conditions are unlikely to be exactly the same again.
- Sample bias - Participants aren’t randomly allocated to groups.

4
Q

Natural experiment

A
  • A natural experiment is conducted when it is not possible, for ethical or practical reasons, to deliberately manipulate an IV. The researcher makes use of naturally occurring differences in an IV.

Strengths:
- Allows you to research with groups like drug users, alcoholics, victims of abuse, without creating situations which are harmful to the participant.
- High levels of ecological validity and mundane realism because the research is based on real life.
- Reduced demand characteristics
- Lack of intervention by experimenter

Limitations:
- Less control over IV, DV and extraneous variables. It occurs naturally so you cannot control anything about the participants or environment.
- Cannot demonstrate causal relationship because IV not directly manipulated.
- Can only be used where conditions vary naturally. This makes replication difficult as conditions are unlikely to be exactly the same again.
- Sample bias - participants aren’t randomly allocated to groups.

5
Q

Quasi-Experiment

A

IV is not manipulated but is based on an existing difference, e.g. gender.
(The investigator examines the relationship between an IV and DV , in a situation where the IV is a characteristic of a person i.e. gender or age.)

Strengths:
- Allows comparison between types of people.
- High degree of control. (Most variables can be controlled in the situation).
- Replication of procedure is easy.

Limitations:
- Participants may be aware of being studied, creating demand characteristics and reducing internal validity.
- The dependent variable may be a fairly artificial task, reducing mundane realism.
- Cause and effect cannot be established, as the researcher has not directly manipulated the IV but based it on an existing difference.
- Participants cannot be randomly allocated

6
Q

Types of sampling

A
  • Random
  • Opportunity
  • Volunteer
  • Systematic
  • Stratified
7
Q

Why do psychologists use sampling techniques?

A

Psychologists use sampling techniques to choose people to represent the target population (the group of people they want to draw conclusions about).

8
Q

Opportunity Sampling

A

Definition: Consists of selecting anyone who is available and willing to take part in the study.

Method to obtain sample: Researcher approaches whoever is available and asks them to take part in their study.

Evaluation:
Strengths: The easiest method because you just use the first suitable participants you can find, which means it takes less time to locate your sample than other techniques.
Weakness: Inevitably biased because the sample is drawn from a small part of the population. (E.g. if you selected your sample from people walking around the centre of a town on a Monday morning then it would be unlikely to include professional people (because they are at work) or people from rural areas.)

9
Q

Stratified Sampling

A

Definition: Selecting participants to represent subcategories within the target population (i.e. gender).

Method to obtain sample: The subgroups (strata) within the population are identified (e.g. males & females). Participants are obtained in proportion to their occurrence within the population. Selection from each subgroup is done using random sampling.

Evaluation:
Strengths: Likely to be more representative because there is a proportional and randomly selected representation of subgroups. So can be generalised to the target population.
Weaknesses: It is difficult and time consuming to identify subgroups, then randomly select participants and contact them. Also people that are picked may be unwilling to take part.

10
Q

Volunteer Sampling

A

Definition: Participants self-select themselves. They offer to take part in the research.

Method to obtain sample: Researcher advertises their study and participants respond to the advert.

Evaluation:
Strengths: Gives access to a variety of participants (e.g. all the people who read a particular newspaper) which may make the sample more representative and less biased.
Weakness: A particular type of person offers to take part in research (i.e. participants might be more motivated to be helpful or in need of the money offered for participation). This results in a volunteer bias and can’t generalise to the rest of the population.

11
Q

Random sampling

A

Definition: Every member of the population has an equal chance of being selected.

Method to obtain sample: Every member of the population is identified and a random technique is used to select the sample, e.g. names are drawn from a hat or computer software is used. Participants are then allocated to conditions.

Evaluation:
Strengths: Eliminates researcher bias as the researcher is not selecting the individual participants. All members have an equal chance of selection so the sample is likely to be representative and can be generalised to the target population.
Weaknesses: Time consuming and difficult to get full details of everyone in the target population. Also people that are picked may be unwilling to take part. May still have participant bias if participants selected happen to share a related characteristic.

12
Q

Random techniques

A
  • The lottery method (Putting names in a lottery barrel or hat and selecting the number required)
  • Random number table
  • Random number generators (number every member of the population and use a random number generator on phone/computer)
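
The random number generator method described above can be illustrated with a minimal Python sketch (the population list and sample size are hypothetical):

```python
import random

# Hypothetical target population: every member is identified and numbered.
population = [f"Person_{i}" for i in range(1, 51)]  # 50 people

# The computer picks 10 members at random, so every member of the
# population has an equal chance of being selected.
sample = random.sample(population, k=10)
print(sample)
```
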
13
Q

Systematic Sampling

A

Definition: A predetermined system is used to select participants.

Method to obtain sample: The researcher selects every nth person (e.g. every 5th, 10th or 12th) consistently, from a register, phonebook etc.

Evaluation:
Strengths: Eliminates researcher bias as participants are selected using an objective system.
Weaknesses: May be that every 5th person has a characteristic in common and therefore the sample would have participant bias.

14
Q

Experimental designs

A
  • Independent measure
  • Repeated measures
  • Matched pairs
15
Q

Order effects

A

Refers to differences in research participants’ responses that result from the order (e.g. the first, second, third) in which the experimental materials are presented to them.

Types of order effects:
- Fatigue effect
- Practice effect
- Boredom effect

16
Q

Counterbalancing

A

To combat order effects the researcher counterbalances the order of conditions for the participants. The sample is split into two halves and the two conditions are labelled A and B: group 1 does ‘A’ then ‘B’ whereas group 2 does ‘B’ then ‘A’ - the conditions are counterbalanced.
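
A minimal Python sketch of this idea (the participant labels are hypothetical), splitting a sample so each condition appears first and second equally often:

```python
import random

# Hypothetical participants in a repeated measures design.
participants = [f"P{i}" for i in range(1, 21)]
random.shuffle(participants)

# Half the sample completes condition A then B; the other half B then A.
half = len(participants) // 2
orders = {p: ["A", "B"] for p in participants[:half]}
orders.update({p: ["B", "A"] for p in participants[half:]})
print(orders)
```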

17
Q

Independent measure

A
  • Different participants are used in each condition of the experiment.
  • This should be done by random allocation, which ensures that each participant has an equal chance of being assigned to one group or the other.

Strengths:
- Won’t cause an order effect
- Less likely to guess the purpose of the experiment.

Limitations:
- The researcher cannot control the effects of participant variables (i.e. the different abilities or characteristics of each participant).
- Requires more participants

Method of dealing with the limitations:
- Randomly allocate participants to conditions which (theoretically) distribute participant variables evenly and prevents researcher bias.

18
Q

Repeated Measures design

A
  • The same participants take part in both conditions of the experiment.
    -This means that each condition of the experiment includes the same group of participants.

Strengths:
- Requires fewer participants
- Eliminates participant variables

Limitations:
- Could cause an order effect
- Participants may guess the purpose of the experiment when they do the second test.

Method of dealing with the limitations:
- Using two different but equivalent tests to reduce the practice effect - items can be randomly allocated between the two versions of the test.
- Counterbalancing
- Presenting a cover story about the purpose of the test

19
Q

Matched Pairs Design

A
  • Pairs of participants are matched in terms of key variables, such as age. One member of each pair is then placed in the two different conditions.

Strengths:
- Largely eliminates participant variables
- No order effects

Limitations:
- Very time-consuming and difficult to match participants on key variables.
- It is not possible to control all participant variables.

Method of dealing with limitations:
- Restrict the number of key variables to match on to make it easier.
- Conduct a pilot study to consider key variables that might be important when matching.

20
Q

Ethical guidelines (code of conduct)

A

A set of principles designed to help professionals behave honestly and with integrity.

21
Q

What are the four principles identified by the Code Of Ethics and Conduct (BPS, 2009)?

A

1) Respect - for the dignity and worth of all persons. This includes standards of ethical issues such as privacy.

2) Competence - psychologists should maintain high standards in their professional work.

3) Responsibility - Psychologists have a responsibility to their clients, to the general public and to the science of psychology. This includes protecting participants from harm and debriefing them at the end.

4) Integrity - Psychologists should be honest and accurate.

22
Q

What are the six ethical issues psychologists must deal with?

A

1) Informed consent
2) Deception
3) The right to withdraw
4) Protection from physical and psychological harm
5) Confidentiality
6) Privacy

23
Q

1) What is informed consent?
2) How is it dealt with?
3) What are the limitations?

A

1) Informed consent: Making participants aware of the aims and purposes of a study, so that they can agree to take part in the full knowledge of what the research is about.

2) Researchers could gain presumptive consent (getting a group of people similar to the participants to say whether they would consent to take part in the study - if they say yes, it can be presumed the participants would also have agreed); prior general consent (participants give ‘general permission’ to take part in a number of studies, some involving deception); or retrospective consent (participants give full informed consent via a debrief at the end of the study, at which point they can ask to withdraw their results).

3) Limitations: Even when consent has been obtained, participants may not fully understand what they are agreeing to; and with presumptive consent, what a similar group says they would agree to may not reflect how the actual participants feel once they take part.

24
Q

1) What is deception?
2) How is it dealt with?
3) What are the limitations?

A

1) Ethical issue: Deception
- A participant is not told the true aims of a study (e.g. what participation will involve) and thus cannot give truly informed consent.

2) How to deal with it:
- The need for deception should be approved by the ethics committee via a COST-BENEFIT ANALYSIS.
- Participants should be fully DEBRIEFED after the study.

3) Limitations:
- Cost benefit decisions are flawed because they involve subjective judgements and the cost and/or benefits are not always apparent until after the study.
- Debriefing can’t turn the clock back - a participant may still feel embarrassed or have lowered self-esteem.

25
Q

1) What is the right to withdraw?
2) How is it dealt with?
3) What are the limitations?

A

1) Ethical issue: Right to withdraw
- Participants can stop participating in a study if they are uncomfortable in any way. Participants should also have the right to refuse permission for the researcher to use any data they produced.

2) How to deal with it:
- Participants should be informed at the start of a study that they have the right to withdraw.

3) Limitations:
- Participants may feel they shouldn’t withdraw because it will spoil the study.
- In many studies participants are paid or rewarded in some way, and may then feel unable to withdraw.

26
Q

1) What is meant by protection from harm?
2) How is it dealt with?
3) What are the limitations?

A

1) Ethical issue: Protection from harm
- During a research study, participants should not experience negative physical or psychological effects, such as physical injury, lowered self-esteem or embarrassment.

2) How to deal with it:
- Avoid any risk greater than experienced in everyday life.
- Stop the study if harm is suspected.

3) Limitations:
- Harm may not be apparent at the time of the study and only judged later with hindsight.

27
Q

1) What is meant by confidentiality?
2) How is it dealt with?
3) What are the limitations to dealing with confidentiality?

A

1) Ethical issue: Confidentiality
- Concerns the communication of personal information from one person to another and the trust that it will be protected.

2) How to deal with it:
- Researchers should not record the names of any participants; they should use numbers or false names.

3) Limitations:
- It is sometimes possible to work out who the participants were using information that has been provided, for example the geographical location of a school. In practice, therefore, confidentiality may not be possible.

28
Q

1) What is meant by privacy?
2) How is the issue dealt with?
3) What are the limitations of dealing with privacy?

A

1) Ethical issue: Privacy
- A person’s right to control the flow of information about themselves.

2) How to deal with it:
- Do not study anyone without their informed consent unless it is in a public place and public behaviour.

3) Limitations:
- There is no universal agreement about what constitutes a public place.

29
Q

What is meant by a cost-benefit analysis?

A

A systematic approach to estimating the negatives and positives of any research.
(It judges the costs of doing the research (i.e. potential harm to participants) against the benefits (i.e. the value of the findings in improving people’s lives))

30
Q

what is an ethics committee?

A

A group of people within a research institution that must approve a study before it begins. Members often include experts in the field.

31
Q

Peer Review

A

A process that takes place before a study is published to ensure that the research is of high quality, contributes to the field of research and is accurately presented. The process is carried out by experts in the related field of research.

Purpose of Peer Reviews:
- Allocation of research funding: PRs enable government and charitable bodies to decide which research is likely to be worth funding.
- Publication of research in academic journals and books: PRs prevent incorrect or faulty data from entering the public domain.
- Assessing the research rating of university departments: All university science department research is assessed in terms of quality (Research Excellence Framework, REF). Future funding for the department depends on receiving a good rating from the REF peer review.

Evaluation:
- Finding an expert is not always possible, and therefore poor research may be passed if the reviewer didn’t understand it.
- Anonymity: Reviewers may have their identity kept private, which could be used to bury rival research. Solution = open reviewing.
- Publication bias: Journals tend to prefer to publish positive results to increase standing.
- Preserving the status quo: PR results in a preference for research that goes with existing theories
- Cannot deal with already published research

32
Q

debriefing

A

A post-research interview designed to inform participants of the true nature of the study and to restore them to the state they were in at the start of the study.

33
Q

Presumptive consent

A

A method of dealing with lack of informed consent or deception, by asking a group of people who are similar to the participants whether they would agree to take part in a study. If this group consents to the procedures of the proposed study, it is presumed that the real participants would also have agreed.

34
Q

What will happen if a psychologist does behave in an unethical manner?

A

Psychologists who behave unethically are not sent to prison - it is not a criminal offence. However, they may be barred from psychological research or practice.

35
Q

Evaluate the ethical guidelines

A

Strengths:
- Offers clarity

limitations
- The BPS/APA guidelines are inevitably rather general due to the virtual impossibility of covering every conceivable situation that a researcher may encounter.
- The approach tends to close off discussion about what could go right or wrong because the answers are provided.
- Guidelines absolve the researcher of any individual responsibility

36
Q

What is a strength of the Canadian (CPA) guidelines?

A

They present a series of hypothetical dilemmas for psychologists to discuss. This approach stimulates debate, encouraging psychologists to engage deeply with ethical issues rather than just following rules.

37
Q

What are the limitations of a cost-benefit analysis?

A
  • It is difficult, if not impossible, to predict both costs and benefits prior to, and even after, conducting a study. How are cost and benefits quantified? How much does personal distress cost?
  • It arguably legitimises unethical practices by suggesting that deception and harm are acceptable in many situations provided the benefits are high enough.
    This means that the cost-benefit approach solves nothing because you simply exchange one set of dilemmas (the ethical issues) for another.
38
Q

Demand characteristics

A

A cue that makes participants unconsciously aware of the aims of the study or helps participants work out what the researcher expects to find out.

39
Q

Investigator effect

A

Anything that an investigator does that has an effect on a participant’s performance in a study other than what was intended. This includes direct effects (as a consequence of the investigator interacting with the participant) and indirect effects (as a consequence of the investigator designing the study). Investigator effects may act as a confounding or extraneous variable.
(Sometimes referred to as investigator or experimenter bias.)

40
Q

Ways to deal with demand characteristic and investigator effects

A
  • Single blind design
  • double blind design
  • Experimental realism
41
Q

Single blind design

A

In a single blind design the participant is not aware of the research aims and/or of which condition of the experiment they are receiving. This prevents the participant from seeking cues about the aims and reacting to them.

42
Q

Double blind design

A

In a double blind design both the participant and the person conducting the experiment are ‘blind’ to the aims and/or hypotheses. Therefore the person conducting the investigation is less likely to produce cues about what she/he expects.

43
Q

Experimental realism

A

If the researcher makes an experimental task sufficiently engaging the participant pays attention to the task and not the fact that they are being observed.

44
Q

Negatively skewed distribution graph

A

The mean is less than the median, which is less than the mode.
The curve is further away from the Y axis (the peak sits towards the higher scores, with the tail pointing towards the lower scores).

45
Q

Positively Skewed distribution graph

A

The mean and median will be greater than the mode.
The curve sits closer to the Y axis (the peak is towards the lower scores, with the tail extending towards the higher scores).

46
Q
  • Measures of central tendency
  • When they should be used
  • And the most appropriate measure of dispersion
A
  • Mean - used with interval/ratio data - MOD=Standard deviation
  • Median - Used with ordinal data or interval data (where there are extreme scores) - MOD=Range
  • Mode - used with nominal data - MOD=N/A

Outliers (extreme scores) can distort the mean but the median and mode are not distorted by outliers.
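
A minimal Python sketch (hypothetical scores) showing how an outlier distorts the mean but not the median or mode, using the standard-library statistics module:

```python
import statistics

scores = [4, 5, 5, 6, 7, 8, 30]  # 30 is an extreme score (outlier)

print(statistics.mean(scores))    # ~9.3 - pulled upwards by the outlier
print(statistics.median(scores))  # 6   - unaffected by the outlier
print(statistics.mode(scores))    # 5   - unaffected by the outlier

print(max(scores) - min(scores))  # range - vulnerable to the outlier
print(statistics.stdev(scores))   # standard deviation - uses every score
```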

47
Q

Measures of dispersion

A

Measures of dispersion (MoD) tell us how much variation there is in the data. There are two measures of dispersion:
- Range (highest score minus lowest score, +1 to account for rounding)
- Standard deviation

48
Q

Levels of data

A
  • Nominal - categories - e.g. male, female, rolling tongue or twisting
  • Ordinal - Order without equal measures - e.g. race positions, likes/dislikes, ranking, scale
  • Interval - Uses units with equal measures - e.g. temperature
  • Ratio - has a true zero point - e.g. time, height, distance
49
Q

When are sign tests used?

A
  • When looking at paired or related data, which can come from repeated measures or matched pairs designs.
  • When looking at nominal or categorical data.
  • When it is a test of difference.
50
Q

How to carry out a sign test

A

Stage 1: HYPOTHESIS
- directional (one-tailed) or non directional (two-tailed)
Stage 2: RECORD & SIGN
- work out the sign (+or-)
Stage 3: CALCULATED VALUE (S)
- Count how many of each sign there are; S = the number of the less frequent sign (+ or -).
Stage 4: CRITICAL VALUE
- Find the total number of results, omitting any neutral results = N
- Use appendix 6 Table of critical values for S to establish significance.
Stage 5: ACCEPT OR REJECT
- Significant result (calculated value ≤ critical value) = reject the null hypothesis and accept the alternative hypothesis
- Not significant = retain the null hypothesis
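
A minimal Python sketch of stages 2-5 (the before/after scores are hypothetical, and the critical value shown is only a placeholder - the real value must be read from the table of critical values):

```python
# Hypothetical related data, e.g. scores before and after a treatment.
before = [5, 7, 3, 8, 6, 4, 7, 5]
after  = [6, 9, 3, 10, 5, 6, 9, 7]

# Stage 2: record the sign of each difference; tied (neutral) results are omitted.
signs = [("+" if a > b else "-") for b, a in zip(before, after) if a != b]

# Stage 3: S = the number of the less frequent sign.
s_value = min(signs.count("+"), signs.count("-"))

# Stage 4: N = number of non-tied pairs; the critical value depends on N,
# the significance level and whether the hypothesis is one- or two-tailed.
n = len(signs)
critical_value = 0  # placeholder - look this up in the table of critical values

# Stage 5: significant if the calculated value S is at or below the critical value.
print(f"S = {s_value}, N = {n}, significant: {s_value <= critical_value}")
```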

51
Q

Types of data

A
  • Primary
  • Secondary
  • Quantitative
  • Qualitative
52
Q

Primary data

A

Definition: Original data that has been collected specifically for the purpose of the investigation by the researcher.

When is it used? Questionnaire, interview or observation.

Pro: Authentic and specific. Obtained directly from PP themselves so is likely to give greater insight.
Con: Requires time + effort

53
Q

Secondary data

A

Definition: Data that has been collected by someone other than the person conducting the research.

When is it used? Meta-analyses, journal articles, books or websites, statistical info, population records.

Pro: Inexpensive + easy to access. Requires minimal effort.
Con: Variation in quality + accuracy of secondary data

54
Q

Quantitative data

A

Definition: Data that is expressed numerically.

When is it used? Experiments which are converted to graphs/charts and analysed statistically.

Pro: Simple to analyse and comparisons easily drawn.
Con: Narrower meaning and less detail, lower ecological validity (less representative of real life).

55
Q

Qualitative data

A

Definition: Data that is expressed in words rather than numbers or statistics.

When is it used? Interviews or unstructured observation

Pro: Higher ecological validity and more insightful.
Con: Difficult to analyse and hard to identify patterns/draw comparisons.

56
Q

Non-directional (two-tailed) hypothesis

A

Predicts simply that there is a difference between two conditions or two groups of participants, without stating the direction of the difference.
Should be used when there is no previous research in the area or previous research in the area is inconsistent.

57
Q

Directional (one-tailed) hypothesis

A

States the direction of the predicted difference between two conditions or two groups of participants.
Are used when past research suggests that the findings will go in a particular direction.

58
Q

Naturalistic and Controlled observation

A
  • Naturalistic observation: An observation carried out in an everyday setting, in which the investigator does not interfere in any way but merely observes the behaviour(s) in question.
  • Controlled Observations: A form of investigation in which behaviour is observed but under conditions where certain variables have been organised by the researcher.

Strengths:
- Naturalistic observations have high ecological validity.
- Controlled observations allow the observer to focus on particular aspects of behaviour
Limitations:
- Natural Observations lack control
- Controlled Observations have low ecological validity.

59
Q

Overt and Covert observation

A
  • Overt observations: Observational studies where participants are aware that their behaviour is being studied.
  • Covert observations: Observing people without their knowledge. Knowing that behaviour is being observed is likely to alter participants’ behaviour.

Strengths:
- Covert: behaviour is more natural
Limitations:
- Overt: Demand characteristics likely to lead to unnatural behaviour
- Covert: Ethical issues of privacy and lack of informed consent, although it is possible to receive retrospective consent.

60
Q

Participant and non-participant observation

A

Participant Observation: Observations made by someone who is also participating in the activity being observed, which may affect their objectivity.
Non-Participant Observation: The observer is separate from the people being observed.

Strengths:
- Participant observations give ‘insider’ insights
- Non-participant observations are more objective
Limitations:
- Participant observations are more likely to be overt thus issue of participant awareness.
- Non-participant observations are more likely to be covert thus ethical issues.

61
Q

Evaluation of observational studies in general

A

Strengths:
- High validity as it records what ppl actually do rather than what they say they do.
- May capture spontaneous/unexpected behaviour.
- Are often used as a way to measure the DV in an experiment, therefore are a fundamental method for gathering data.
Limitations:
- Observer bias: Observations are distorted by expectations. Lower risk of bias by using more than one observer.
- Doesn’t give information on what people think or feel.

62
Q

Unstructured Observations

A

The researcher records all relevant behaviour but has no system.

63
Q

Structured Observations

A

A researcher uses various systems to organise observations, such as behavioural categories and sampling procedures.

➢ It is much easier to use coding schemes as the observers only need to focus on set behaviours that are relevant, and can therefore waste less time noting down irrelevant behaviours.
➢ Easy to establish inter-rater reliability because multiple observers can compare their observations to check for concurrence.
➢ Behavioural categories are open to interpretation.
➢ Some key behaviours may be missed because they were not pre-determined on the coding scheme.

64
Q

Behavioural Categories

A

Dividing a target behaviour (such as stress or aggression) into a subset of specific and operationalised behaviours.
Behavioural categories should:
- be objective (not inferred).
- Cover all possible component behaviours.
- Be mutually exclusive, meaning you should not have to mark two categories at one time.

65
Q

Event Sampling

A

An observational technique in which a count is kept of the number of times a certain behaviour (event) occurs.

66
Q

Time Sampling

A

An observational technique in which the observer records behaviours in a given time frame, e.g. noting what a target individual is doing every 20 seconds or 1 minute. The observer may select one or more behavioural categories to tick at this time interval.

67
Q

Self Report Techniques: Questionnaires

A

Description:
Data are collected through the use of written questions.
Questionnaires are always structured.
Can include open questions and closed questions (i.e. Yes-no questions, Multiple choice questions, Likert scales).

Evaluation:
Strengths:
- Large samples - can be distributed to a large number of people relatively cheaply and quickly.
- Respondents are more willing to give personal info than in an interview.
- Reliable – easy to replicate.
- Closed questions generate quantitative data and are easy to analyse.

Limitations:
- Biased sample - questionnaires are only filled in by ppl who can read and write and have the time to fill them in.
- Lacks validity: (Demand characteristics. Social desirability.)
- Difficult to write questions which are not leading or unclear (need a pilot study).

Flaws in questionnaire wording could include:
The use of jargon, making the question hard to understand.
Leading questions, biasing participant responses.
Vague questions, where it is unclear what is being asked.
Offensive questions, where particular groups might be upset by the questions.

68
Q

Self report techniques: Interviews

A

A research method or technique that involves a face-to-face, ‘real-time’ interaction with another individual and results in the collection of data.

70
Q

Self report techniques: A structured Interview

A

Definition:
Any interview in which the questions are decided in advance.

Evaluation:
Strengths:
- Easily repeated due to standardised questions - allows for comparison.
- Answers are easier to analyse because they are more predictable.
Limitations:
- Comparability may be a problem in structured interviews if the same or different interviewer(s) behave differently on different occasions (low reliability).
- Interviewer bias - the interviewer’s expectations may influence the answers the interviewee gives.

71
Q

Self-report techniques: Unstructured Interview

A

Definition:
The interviewer starts out with some general aims and possibly some questions, and lets the interviewee’s answers guide subsequent questions.

Evaluation:
Strengths:
- More detailed info generally obtained.
Limitations:
- Interviewer bias
- Interviewers require more skill - well-trained interviewers are more expensive.
- May lack objectivity
- Comparability and analysis of answers are more difficult

72
Q

Interviewer bias

A

The effect of an interviewer’s expectations, communicated unconsciously, on a respondent’s behaviour.

73
Q

Social desirability bias

A

A distortion in the way people answer questions - they tend to answer questions in such a way that presents themselves in a better light.

74
Q

Evaluation of self-report techniques

A

Strengths:
- Allows access to what people think/feel.
Limitations:
- ppl may not be truthful or answers may be distorted by social desirability bias.
- ppl may not know what they think/feel and make up their answer - thus answers lack validity.
- Sample may lack representativeness - data cannot be generalised.

76
Q

Aims

A

A statement of what the researcher(s) intend to find out in a research study.
E.g. Aim: “to investigate the effect of having a training partner on athletes’ motivation levels”

77
Q

Hypothesis

A
  • A precise and testable statement about the assumed relationship between variables; a precise prediction about the outcome of the study. - Operationalisation is a key part of making the statement testable.
  • The hypothesis needs to be a testable prediction
  • The hypothesis should contain the IV and the DV
  • Has to be able to be scientifically tested using systematic and objective methods.
  • May have to be revised and should be falsifiable.

Example hypotheses: “Students who experience test anxiety before an English exam will get lower scores than students who do not experience test anxiety.”

78
Q

Pilot Studies

A
  • Small-scale, trial versions of proposed studies to test their effectiveness and make improvements.
  • A preliminary analysis
  • Determines whether the study is feasible
  • Principles & processes are similar to those imagined for the final study

Advantages:
- Avoids wasting time and resources.
- Increases validity of later research projects if conducted properly.
- Allows the researcher to demonstrate the most appropriate design and method of data collection to test the hypotheses.
- Researchers can identify potential ethical issues or obstacles affecting later research.
Disadvantages:
- Time-consuming and can be costly.
- Researchers should not use pilot studies to guess expected results.
- Different participants are needed for subsequent research because of participant bias such as demand characteristics.

79
Q

Closed questions

A

Questions that have a predetermined range of answers from which respondents select one. Tend to produce quantitative data - e.g. yes/no answers are qualitative but can then be counted to produce quantitative data.

Strengths:
- Produce quantitative data, which is easier to analyse using graphs and measures such as the mean.
Limitations:
- Respondents may be forced to select answers that don’t represent their real thoughts or behaviours, lowering validity.
- PPs may select ‘don’t know’ or have a preference to answer yes (an acquiescence bias) and therefore the data collected are not informative.

80
Q

Open questions

A

Questions that invite respondents to provide their own answers rather than select one of those provided. Tends to produce qualitative data.

Strengths:
- More detailed information collected.
- Can provide unexpected answers, thus allowing researchers to gain new insights into people’s feelings and attitudes.
Limitations:
- Most respondents may avoid giving lengthy complex answers.
- Qualitative data is more difficult to summarise, due to the wide range of responses.

81
Q

Questionnaire construction

A
  1. Clarity: questions should be clear with no ambiguity.
  2. Bias: Avoid leading questions.
  3. Analysis: Questions need to be written so that answers are easy to analyse i.e. by using closed questions.
    Consider:
    - Filler questions: irrelevant questions to help distract the respondent from the main purpose
    - Sequence of questions: start with easy ones, so that the respondent is relaxed when answering harder, more anxiety-provoking questions
- Sampling technique: Questionnaires often use stratified sampling.
    - Pilot studies: Used to refine questions and respond to any difficulties encountered.
82
Q

Design of interviews

A

Recording the interview - prevents note-taking from interfering with the interviewer’s listening skills.

The effect of the interviewer:
Non-verbal communication - head nodding and leaning forwards may encourage the respondent to speak.
Listening skills - should not interrupt, and should have a range of encouraging comments for when the respondent does speak.

Questioning skills in an unstructured interview:
Avoid repeating questions, avoid probing too much.

83
Q

Extraneous Variables

A

An extraneous variable is any variable not being investigated that has the potential to affect the outcome of a research study. In other words, it is any factor not considered an independent variable that can affect the dependent variable or controlled conditions.

84
Q

Confounding Variable

A

A confounding variable is an extraneous variable related to the independent and dependent variables. I.e., a confounding variable is correlated with the independent variable and has a direct effect on the dependent variable.

85
Q

External validity

A

The degree to which a research finding can be generalised: to other settings (ecological validity); to other groups of people (population validity); over time (historical validity).

86
Q

Internal validity

A

The degree to which an observed effect was due to the experimental manipulation rather than other factors such as confounding/extraneous variables.

87
Q

Mundane realism

A

Refers to how a study mirrors the real world. The research environment is realistic to the degree to which experiences encountered in the research environment will occur in the real world.

88
Q

Operationalisation

A

The process of turning abstract concepts into measurable variables and indicators

89
Q

Experimental controls: Randomisation

A

The use of chance methods to control for effects of bias when deciding order of conditions or designing materials i.e. using a random image generator.

90
Q

Experimental control: Standardisation

A

Participants all get the exact same instructions and procedures in a study i.e. create a script and don’t veer from it. Standardised methods use the same equipment.

91
Q

Experimental control: random allocation

A

allocating participants to experimental groups or conditions using random techniques.

92
Q

Experimental control: counterbalancing

A

An experimental technique used to overcome order effects when using repeated measures design. Counterbalancing ensures that each condition is tested first or second in equal amounts.

93
Q

Normal distribution

A

Bell-shaped curve
- The mean, median and mode are all at the exact same mid-point.
- The distribution is symmetrical around this mid-point.
- The dispersion of scores or measurements either side of the mid-point is consistent and can be expressed in standard deviations.

94
Q

Features of Science

A
  • Theory construction
  • Hypothesis Testing (falsifiability)
  • Objectivity
  • Replicability
  • Empirical method
  • Paradigms and paradigm shifts (eg, behaviourism in 1920s onwards, 1970s paradigm shift - the cognitive revolution)
95
Q

Content Analysis

A
  • A type of indirect observational method that is used to analyse human behaviour by studying human artifacts (the things ppl make). It usually observes visual, written, or verbal material (non-numerical/qualitative data), which is transformed into quantitative data.
    To perform content analysis:
    1 - Decide a research question.
    2 - Select a sample (e.g. randomly/systematically) from a large quantity of possible data (e.g. diary entries, tweets, books).
    3 - Coding - the researcher decides on the categories/coding units to be recorded (e.g. occurrences of particular words); these are based on the research question.
    4 - Work through the data - read the sample, and tally the number of times the pre-determined categories appear.
    5 - Data analysis - can be performed on the resulting quantitative data to look for patterns.

The coding units must be operationalised (defined as clearly as possible) to reduce subjective interpretation.

To test reliability:
Test-retest: run the content analysis again on same sample & compare data.
Inter-rater reliability: Use a second rater with the same behavioural categories
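
A minimal Python sketch of the coding/tallying steps above (the sample texts and coding units are hypothetical):

```python
# Step 2: a hypothetical sample of qualitative material (e.g. diary entries).
sample_texts = [
    "I felt anxious before the exam but calm afterwards.",
    "Feeling anxious again today; talking to a friend helped me stay calm.",
]

# Step 3: pre-determined, operationalised coding units based on the research question.
coding_units = ["anxious", "calm"]

# Step 4: work through the data and tally how often each coding unit appears.
tallies = {unit: 0 for unit in coding_units}
for text in sample_texts:
    for unit in coding_units:
        tallies[unit] += text.lower().count(unit)

# Step 5: the resulting quantitative data can now be analysed for patterns.
print(tallies)  # e.g. {'anxious': 2, 'calm': 2}
```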

96
Q

Thematic Analysis

A

A type of content analysis used when analysing qualitative data. Researchers start by attempting to identify the deeper meaning of the text by reading it first, and allowing themes to emerge.
Involves identification of themes - ideas that are recurrent - which can be developed into broader categories (e.g. stereotyping, treatment, control).

To perform a thematic analysis:
1 - Collect text/turn recordings into written transcripts.
2 - Read text/transcripts first to spot patterns that can be coded & collected.
3 - Re-read the transcript for emergent themes (the themes are not pre-determined).

Eval:
- As theories come after the discovery of themes, it can be argued thematic analysis stops the researcher imposing their own bias on the analysis by only looking for what they want to see.
- High external validity, easy to get a sample, Easy to replicate.
- Subjective interpretation, data not created in controlled conditions.

97
Q

What are the strengths and weaknesses of each of the central tendencies?

A

Median
Strengths: Unaffected by extreme outliers, easier to calculate than the mean.
Weaknesses: doesn’t account for exact distances between the values like the mean does

Mode
Strengths: simple to understand and easy to calculate, It can be located graphically, unlike mean and median. It can be used for qualitative analysis. The extremities in the values of the data do not affect the mode.
Weaknesses: The mode does not consider all the values in the data.
There can be more than one mode or no mode for the data.
It is not well defined.

Mean
Strengths: Most sensitive and precise because it is used on interval data, considers the exact distances between values.
Weaknesses: Can easily be distorted by unrepresentative values (outliers).

98
Q

What are the strengths and weaknesses of the measures of dispersion?

A

Range
Strengths: Quick and easy to calculate, which speeds up data analysis.
Weaknesses: Vulnerable to extreme scores therefore it is not always representative.

Standard deviation:
Strengths: Most precise as it considers all available scores in the data set - is representative.
Weaknesses: Limited use as it requires the mean and interval data. Time consuming to calculate, making data analysis tedious and difficult.

99
Q

Evaluate content analysis

A

Strengths:
- Tends to have high ecological validity because it’s based on observations of what people actually do - real communications, such as newspapers/books.
- When sources can be accessed by others (e.g. videos or books), the content analysis can be replicated, and therefore the observations can be tested for reliability.

Limitations:
- Observer bias reduces the objectivity and validity of the findings because different observers may interpret the meaning of the behavioral categories differently.
- Content analysis is likely to be culture biased because the interpretation of written content will be affected by the language and culture of the observer and the behavioral categories.

100
Q

Case studies

A

A research method that involves an in-depth examination, normally over time, and analysis of a particular phenomenon or case, such as an individual, organization, community, event or institution

A case study is often (but not always)…
- Scientific method - evidence based research
- Uses a single individual, organization or event
- Uses a range of sources to draw conclusions
- Uses a range of techniques: interviews, IQ tests, observations, questionnaire.
- Tend to be qualitative and longitudinal

101
Q

Evaluate Case Studies as a research method

A

Strengths:
- provide rich, detailed insights as opposed to ‘superficial data’.
- Draws attention to unusual/atypical forms of behavior which can then be studied further.

Limitations:
- difficulties generalising
- Subjective interpretations and selection of data in the final report
- Relies on retrospective memories, which are prone to inaccuracy and memory decay, which decreases the validity.

102
Q

Reliability

A

Reliability is consistency - the consistency of measurements. We would expect any measurement to produce the same data if taken on successive occasions.

103
Q

Internal reliability vs external reliability

A

Internal reliability: Measurements used in the study should be the same for every participant and condition.
External reliability: Measures consistency over time. If you test participants more than once their results should remain the same.

104
Q

Inter-rater reliability / inter-observer reliability

A

a measure of how much agreement there is between multiple people when they observe or assess the same thing.

105
Q

What is the correlation coefficient for reliability?

A

A measure is considered reliable if it has a correlation coefficient of 0.8 or more (r ≥ 0.8).

106
Q

Reliability of observational techniques

A

Assessing reliability: Repeat the observations and compare to the first set of data. Or have 2 or more observers making separate recordings and then compare these records to test inter-observer reliability. If reliable, the data should remain the same.
Improving reliability: Ensure behavioural categories are operationalised clearly and are not ambiguous. Make sure observers are trained/have practised using the behavioural categories so they can respond faster.

107
Q

Reliability of self-report techniques

A

Assessing reliability: Test-retest method - give the test/questionnaire to a group of people and then give the same test to the same group of people a second time.
Assessing reliability: Split-half method (used for questionnaires). Compare the results of one half of a test with the results of the other half. Similar results indicate reliability. Low correlational results (less than 0.8) would indicate poor reliability. Quick and easy but only effective with large questionnaires in which all questions measure the same construct.
Assessing reliability: Inter-interviewer reliability can be assessed for one interviewer by comparing answers on one occasion with answers from the same person with the same interviewer a week later. Or assess consistency between interviewers by doing the same interview on the same person with different interviewers and comparing answers.
Improving reliability: Re-examine and re-write questions to reduce ambiguity or increase the correlation coefficient (in the case of the split-half method).

108
Q

Reliability of Experiments

A

Assessing reliability: Test-retest method, ensuring conditions are the same each time participants are being tested. The method used to measure the DV needs to be consistent.
Improving reliability: Standardisation - procedures need to be the same each time the experiment is repeated, otherwise the performance of participants cannot be compared.

109
Q

Test-retest method

A

The same test or interview is given to the same participants on two occasions to see if the same results are obtained. Tests external reliability by measuring the stability of a test over time.
Factors to consider:
Time - can’t leave too long between repeats because pps’ views and mindsets may change. Can’t repeat too soon due to order effects (e.g. practice effects).
Research - If a different researcher carries out each test there might be researcher bias.
Sampling - If a different sample is used there may be population bias.

110
Q

Internal validity

A
  • Relates to how well the study is conducted (focuses on the structure/procedure of the study)
  • The extent to which a study establishes a trustworthy cause-and-effect relationship between the IV and the DV.
  • Controlling extraneous variables makes it possible to eliminate alternative explanations for a finding.
111
Q

Factors affecting internal validity

A
  • Social desirability bias
  • Demand characteristics
  • Investigator effects
  • Confounding variables
  • Operationalisation of categories
112
Q

External Validity

A
  • Relates to how applicable the findings are in the real world (universality of results)
  • External validity means that findings can be generalised and the results apply to practical situations and the world at large.

Factors affecting External validity:
- selection bias
- Lack of mundane realism

113
Q

Concurrent validity

A

A means of establishing validity by comparing an existing test or questionnaire with the one you are interested in.
CV is a way to measure how well a new test correlates with an existing, validated test that’s used to measure the same or a related construct
Purpose: The goal of concurrent validity is to show that a new assessment measures the intended psychological construct

114
Q

Ecological validity

A

The ability to generalise a research effect beyond the particular setting in which it is demonstrated to other settings (does it relate to real life situations?)

115
Q

Face Validity

A

The extent to which test items look like what the test claims to measure.
(Does it look like it is measuring what it is supposed to? What is it like at face value?)

116
Q

Mundane realism

A

Refers to how a study mirrors the real world. The research environment is realistic to the degree to which experiences encountered in the research environment will occur in the real world.

117
Q

Temporal Validity

A

Concerning the ability to generalise a research effect beyond the particular time period of the study.

118
Q

Validity

A

Refers to whether an observed effect is a genuine one.

119
Q

Improving validity

A

Experimental research - Control groups, standardised procedures; single-double-blind procedures.
Questionnaires - lie scale (checks consistency by re-asking in different format), promise anonymity.
Observations - Covert observations; clear operationalised behavioural categories.
Qualitative methods (higher eco. val) - Triangulation of data (multiple sources)

120
Q

What are ways to assess validity?

A

ways to assess validity:
Face validity (Whether the measure looks like it’s measuring what it’s intended to measure.)
Concurrent validity (Assessed by correlating scores from research that’s already known to be valid)
Ecological validity: (Is data generalizable to the real world?)
Predictive validity: (How well does the test do at predicting future behavior?)
Temporal validity: (assess how valid it remains over time)

121
Q

What is meant by empirical method?

A

Empiricism states that the only source of knowledge comes through our senses e.g. sight, hearing etc. It comes from experience.

122
Q

What is a Paradigm?

A

A paradigm is an individual or a society’s view of how things work in the world (a common belief).

123
Q

Paradigm shift

A

A paradigm shift is a fundamental change in an individual or a society’s view of how things work in the world.

Stages of a paradigm shift:
1) Immature science
2) Normal science
3) Crisis
4) Revolution
5) New theory (circles back to normal science)

124
Q

Falsifiability

A

The possibility that a statement or hypothesis can be proved wrong.

The more times a theory fails to be proven wrong, the more reliable it is. But nothing can be proven 100% right because a hypothesis could be disproved at any moment.

125
Q

Theory construction

A

Theories are based on facts and observations that have been gained. Theories are created to help us understand phenomena around us. Psychologists can use inductive or deductive methods.

Induction (the theory is drawn last):
observation > testable hypothesis > study to test hypothesis > draw conclusion > propose theory. ‘From the specific to the general’.
Deduction (theory is drawn first):
observation > propose a theory > testable hypothesis > study to test hypothesis > draw conclusions. ‘From the general to the specific’.

126
Q

What is meant by Objectivity?

A

Objectivity is the tendency to base interpretations and judgments on external data, rather than on personal feelings, beliefs, or experiences. Fact-based. This means the researcher not letting their personal biases affect the results of studies.

Objectivity increases internal validity as research is free from bias and subjective interpretation, which also makes findings more generalisable (increasing external validity).

127
Q

What is meant by replicability and why is it important?

A

Replicability refers to a study being repeatable. This involves standardization and means that a study should produce the same results if repeated exactly, either by the same researcher or by another.

The likelihood of the same differences occurring twice (or more) by chance alone is much smaller than when they occur the first time.
Effects that occur in a study are more likely to be reliable if they also occur in a repeat of the study - replication therefore increases external reliability.

128
Q

Empirical methods

A

using direct observation or experimenting, not rational argument or hearsay, and recorded in detail so that other investigators can repeat and attempt to verify the work. Standardized and recorded.

129
Q

Challenges to the scientific approach

A
  • deterministic
  • Reductionist - may miss alternatives
  • Temporal validity - scientific methods change over time, affecting standardization.
  • The humanistic approach challenges the scientific approach by focusing on an individual’s entire experience.
130
Q

Define standardisation

A

Standardization is the process of making a test uniform, or setting it to a specific standard. This involves administering and scoring the test in the same manner for everyone that takes it. (keeping everything the same for all participants so that the investigation is fair).

131
Q

Probability

A

A mathematical measure of the likelihood/chance that a particular event will occur, where 0 is impossible and 1 is certain

132
Q

What is the significance level that the BPS has set for research?

A

5%
The probability that the results are due to chance needs to be less than or equal to 5% (at least a 95% probability that the results are due to the IV).
p ≤ 0.05

133
Q

What is meant by statistical significance?

A

Statistical significance is the claim that a set of observed data are not the result of chance but can instead be attributed to a specific cause.
Researchers use statistical tests to determine the likelihood that the difference has occurred due to chance.

134
Q

Related vs unrelated research

A

When investigating relationships, if using repeated measures or matched pairs, the design is a related one because the samples are connected. If an independent measures design has been used, it is an unrelated design.

135
Q

What factors affect the choice of statistical test?

A
  • Levels of data: (nominal, ordinal, interval, ratio)
  • Design: (independent, repeated measures, matched pairs)
  • Difference or correlation
136
Q

Parametric tests vs non parametric tests?

A

Non-parametric tests - nominal and ordinal data (chi-squared, sign test, Mann-Whitney U, Wilcoxon T, and Spearman’s rho).
Parametric tests - interval and ratio data (unrelated t-test, related t-test, Pearson’s r).

Parametric tests are more reliable/powerful. They are more rigorous because they require interval/ratio data, which is standardised and objective (measurements are consistent, whereas nominal and ordinal can be subjective).

137
Q

What is the acronym for the significance test table?
List the types of statistical tests and when they should be used

A

Carrots Should Come Mashed With Swede Under Roast Potatoes

C = Chi Squared (x^2): used with nominal data, and either unrelated design if looking for difference, or correlation.
S = Sign (S): Used with nominal data, difference and related design
M = Mann-whitney U: Ordinal data, difference, and unrelated design
W = Wilcoxon T: Ordinal data, difference, related design.
S = Spearman’s Rho: Ordinal data, correlation
U = Unrelated T test: Interval/ratio data, unrelated design, difference
R = Related T test: Interval/ratio, related design, difference
P = Pearson’s R: Interval/ratio, correlation
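
The table above can be encoded as a small lookup function. This Python sketch (the function name and string labels are illustrative, not part of any specification) returns the appropriate test for a given level of data, design and purpose:

```python
def choose_test(level_of_data, design, purpose):
    """level_of_data: 'nominal', 'ordinal' or 'interval/ratio';
    design: 'related' (repeated measures / matched pairs) or 'unrelated' (independent groups);
    purpose: 'difference' or 'correlation'."""
    if level_of_data == "nominal":
        if purpose == "correlation" or design == "unrelated":
            return "Chi-squared"
        return "Sign test"
    if level_of_data == "ordinal":
        if purpose == "correlation":
            return "Spearman's rho"
        return "Wilcoxon T" if design == "related" else "Mann-Whitney U"
    # Interval/ratio data -> parametric tests
    if purpose == "correlation":
        return "Pearson's r"
    return "Related t-test" if design == "related" else "Unrelated t-test"

print(choose_test("ordinal", "related", "difference"))  # Wilcoxon T
```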

138
Q

Inferential statistics - define the following key terms (in reference to statistical tests):
- Calculated (or observed) value
- Critical value
- Table of critical values
- N values/df values

A
  • The calculated value is the value calculated in the test (i.e. the S value in the sign test).
  • The critical value is the number that tells us whether or not we can accept or reject the null hypothesis.
  • Table of critical values - represented in a table; each test has its own table of critical values.
  • N values / df values - N is the number of participants (or scores) in the study; some tests use degrees of freedom (df) instead.
139
Q

Type 1 error

A

Type 1 error is when you have rejected the null hypothesis when in truth the null hypothesis is true.
Type 1 errors occur when the level of significance is too high (0.10)

140
Q

Type 2 error

A

A type 2 error is when you have accepted the null hypothesis, when in truth the null hypothesis is false.
Type 2 errors occur when the level of significance is too low (0.01)

141
Q

What is a contingency table?

A

A contingency table displays frequencies for two or more categorical variables, to show relationships between them.
Must include a column for totals.
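
For example (hypothetical figures), frequencies of tongue-rolling by gender, with totals:

             Can roll   Cannot roll   Total
Male            12           8          20
Female          15           5          20
Total           27          13          40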

142
Q

What is a correlation co-efficient?

A

A correlation coefficient, often expressed as r, indicates a measure of the direction and strength of a relationship between two variables. When the r value is closer to +1 or -1, it indicates that there is a stronger linear relationship between the two variables.
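
A minimal Python sketch (hypothetical co-variables) computing Pearson's r with the standard-library statistics module (Python 3.10+):

```python
import statistics

# Hypothetical co-variables: hours of revision and exam score.
revision_hours = [2, 4, 5, 7, 9]
exam_scores    = [35, 50, 55, 70, 80]

# Pearson's correlation coefficient r.
r = statistics.correlation(revision_hours, exam_scores)
print(round(r, 2))  # close to +1, i.e. a strong positive linear relationship
```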

143
Q

How to calculate percentage change

A

% change = ((final value − initial value) ÷ initial value) × 100
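
For example (hypothetical figures): if a participant’s score rises from 40 to 50, % change = ((50 − 40) ÷ 40) × 100 = 25% increase.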

144
Q

Outline and describe the sections of a scientific report

A
  • Abstract: 150-200 words. Short summary that includes all of the key elements: aim; hypothesis; procedure; results and conclusions.
  • Introduction: A literature review of the general area of investigation detailing theories which have been researched. It should progress logically - beginning broadly and becoming more specific.
  • Method: Design (methods: lab, natural etc, design: repeated measures, matched pairs etc, level of data i.e. nominal). Sample (who? how many? Target population?) Apparatus. Procedure (stage by stage what you’re going to do). Ethical considerations (type of consent, overcoming deception, how pps will be protected, consideration of wider implications - privacy and confidentiality).
  • Results: Descriptive statistics (tables, graphs etc), inferential statistics (choice of statistical test, critical & calculated values, level of significance, rejected or accepted hypothesis). qualitative methods will involve analysis of themes/categories. raw data/calculations appear in appendix.
  • Discussion: Summary of findings in verbal form. Discussed in context with findings from other relevant research. Discuss limitations of the experiment and how these may be addressed in future studies. Considers wider implications/real-world applications of what has been discovered, and the contribution it has made to already existing knowledge in the field.

Referencing: The full details of any journal articles or books that are mentioned in the research report are given.
Format for journal articles: Surname, Initials. (Year). Title of paper. Journal title in italics, volume in italics, page numbers.
Format for book/multiple books: List the books in alphabetical order - Author’s surname, followed by initials. (Year of publication). Title of book. Place of publication: Publisher. Page numbers used. Example: Boring, E. G. (1929) A History of Experimental Psychology, New York: Century.
Format for websites: Author, year that the site was published/last updated (in round brackets), title of the web page (in italics), available at: URL, (accessed date).

145
Q

There were 30 4 -year old children at nursery A and 20 4-year old children at nursery B. The researcher used a stratified sample of 10 4 -year old children for the study.
Explain how the researcher might have obtained the stratified sample of 4-year old children from the two different nursery schools.

A
  • Identify the strata (age)
  • Calculate the proportion of these characteristics present in the target population (3:2 ratio, 60%:40%)
  • Randomly select their sample to represent these proportions - 6 children from nursery A and 4 children from nursery B
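
A minimal Python sketch of this worked answer (the child names are hypothetical):

```python
import random

# Hypothetical registers of 4-year-olds at each nursery (the strata).
nursery_a = [f"A_child_{i}" for i in range(1, 31)]  # 30 children
nursery_b = [f"B_child_{i}" for i in range(1, 21)]  # 20 children

total = len(nursery_a) + len(nursery_b)   # 50
sample_size = 10

# Select randomly from each stratum in proportion to its size:
# 30/50 of 10 = 6 children from A, 20/50 of 10 = 4 children from B.
from_a = random.sample(nursery_a, round(len(nursery_a) / total * sample_size))
from_b = random.sample(nursery_b, round(len(nursery_b) / total * sample_size))

stratified_sample = from_a + from_b
print(len(from_a), len(from_b))  # 6 4
```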