Research Methods - Year 13 Flashcards

1
Q

Correlation And Correlation Coefficient?

A

Correlation - a mathematical technique in which a researcher investigates an association between co-variables.

Correlation coefficient - a number between -1 and +1 that represents the direction and strength of the relationship between two co-variables, which are plotted on a scattergram.

The closer to -1 or +1, the stronger the correlation.
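As a quick illustration, the coefficient can be computed directly from its definition. This is a minimal Python sketch; the data (hours revised vs. test score) are invented for demonstration:

```python
# Illustrative sketch: computing Pearson's correlation coefficient by hand.
from math import sqrt

def correlation_coefficient(x, y):
    """Pearson's r: a value between -1 and +1 giving the direction
    and strength of the relationship between two co-variables."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical co-variables: hours revised vs. test score
hours = [1, 2, 3, 4, 5]
score = [40, 45, 55, 60, 70]
r = correlation_coefficient(hours, score)  # close to +1: strong positive
```

The closer `r` is to -1 or +1, the stronger the correlation; values near 0 indicate little or no relationship.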

2
Q

Difference between descriptive and inferential statistics?

A

This isn’t too important.

Descriptive - graphs, tables and summary statistics (measures of central tendency and dispersion). Used to identify trends and analyse data.

Inferential - statistical tests which tell psychologists whether relationships are significant or not. This helps decide whether a hypothesis should be accepted or rejected. A correlation coefficient may also be calculated.

3
Q

Two ways of analysing human behaviour?

A

(Don’t include year 12 stuff - like observations, investigations, etc).

Case studies and content analysis.

4
Q

What are case studies?

A

A ‘case’ in psychology is a detailed and in-depth analysis of a person/group/institution or event (usually unusual).

Analyses human behaviour.

Usually produces qualitative data, but can produce quantitative data.

A case history is produced - this can involve interviews, questionnaires and observations, and sometimes experimental or psychological testing takes place.

Usually longitudinal (take place over a long time).

5
Q

What is a content analysis?

A

A way of analysing human behaviour.

An observational research technique where indirect study of behaviour takes place by examining communications that people produce. E.g. in texts, emails, TV, film and other media.

The aim is to summarise this communication system in a systematic way so overall conclusions can be drawn.

Involves coding and sometimes thematic analysis.

6
Q

Coding?

A

Initial stage of content analysis.

Some of the data that needs to be analysed during content analysis is large, so the data is categorised into units.

E.g. a newspaper is analysed for references to ‘mentally ill’ communication. Words with these connotations include ‘crazy’ and ‘mad’.

The words are then counted for the amount of references to ‘mentally ill’.
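The counting step above can be sketched in a few lines of Python; the coding units and sample text are invented for illustration:

```python
# A minimal sketch of the coding stage: counting references with
# 'mentally ill' connotations in a piece of text.
from collections import Counter

CODING_UNITS = {"crazy", "mad"}  # words coded as 'mentally ill' references

text = "the mad scheme was crazy, utterly crazy"
words = [w.strip(",.").lower() for w in text.split()]
counts = Counter(w for w in words if w in CODING_UNITS)
total_references = sum(counts.values())  # 3 in this sample
```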

7
Q

Thematic analysis?

A

Content analysis may also involve thematic analysis.

Generates qualitative data.

Themes are recurring ideas, explicit or implicit, in communications. They’re likely to be more descriptive than coding.

Themes are then developed into broader categories such as ‘control’ and ‘stereotyping’ (they’re not just about mentally ill people, but about other things too).

Once the researcher is satisfied with the themes they have developed, they may collect more data to test validity of the themes and categories. Assuming these represent the data adequately, a final report is written usually using direct quotes to illustrate each theme.

8
Q

Evaluation of case studies?

A

Strengths:

  • offer rich and detailed insights into unusual and atypical behaviour.
  • they contribute to our understanding of ‘normal’ functioning. E.g. the case of HM was significant in demonstrating ‘normal’ memory processing (separate stores in the STM and LTM).
  • case studies may generate hypotheses for future studies and prompt the revision of a whole theory.

Weaknesses:
- generalisation of the findings is an issue when dealing with small sample sizes.
- the information in the final report is based on the subjective interpretations of the researcher. Therefore conclusions cannot be drawn with confidence.
- personal accounts are sometimes used from family and friends, which may be prone to inaccuracy and memory decay (especially for childhood memories).
Therefore, case studies can lack validity.

9
Q

Evaluation of content analysis?

A

Strengths:

  • it can circumnavigate (get around) ethical issues associated with psychological research.
  • much material that is being analysed (TV, film, etc) is already available to the public, so it’s cheap and there are no issues with obtaining permission.
  • communications such as texts between two people are really high in external validity, provided the author’s consent is given.
  • content analysis is flexible because it can produce quantitative and qualitative data depending on the aims.

Weaknesses:
- people are studied indirectly, so the communication they produce is usually analysed out of context. There’s a danger that the researcher may attribute opinions to someone which weren’t originally intended.
- many modern analysts are clear about how their own biases and preconceptions influence the research, and make this clear in the final report. However, some analyses may still suffer from a lack of objectivity, especially when more descriptive forms of thematic analysis are used.

10
Q

What Is Reliability?

A

Reliability is a measure of consistency.

E.g. if you are using a tape measure, you expect to get the same results every time you measure a certain object. If the results are not consistent, then the measure is not reliable.

In psychology, a method (e.g. questionnaire) is deemed reliable if it consistently produces the same results.

11
Q

How Can You Test Reliability?

A

To test reliability, you use the test‐retest method.

To do this: the same person or group of people are asked to undertake the research measure, e.g. a questionnaire, again.

12
Q

What To Consider When Planning A Test-Retest Method?

A
  1. In the test-retest method, the same group of participants are being studied twice, so researchers need to be aware of any potential demand characteristics.

E.g. If the same measure is given twice in one day, there is a strong chance that participants will be able to recall the responses they gave in the first test, and so psychologists could be testing their memory rather than the reliability of their measure.

Psychologists prevent this by repeating the study a few weeks later, so that it is less likely the participants will recall the same answers.

  2. It is also important to make sure that there is not too much time between each test. For example, if psychologists are testing a measure of depression and question the participants a year apart, it is possible that they may have recovered in that time, and so they give completely different responses for that reason, rather than because the questionnaire is not reliable.
13
Q

How Do You Compare The Results Of A Test-Retest Method?

A

After the test has been completed on two separate occasions, the two sets of scores are then correlated.

If the correlation is shown to be significant, then the measure is deemed to have good reliability.

A perfect correlation is +1, and the closer the score is to this, the stronger the reliability of the measure; a correlation of over +0.8 is perfectly acceptable and seen as a good indication of reliability.

The correlation itself is calculated using Pearson’s r (the ‘Pearson’s r test’). This is a good way of describing the method in exam questions.
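A sketch of the comparison with invented questionnaire scores: the two administrations are correlated and reliability is judged against the +0.8 convention mentioned above.

```python
# Test-retest sketch: correlate scores from two administrations of
# the same measure (data invented for illustration).
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

test1 = [12, 18, 25, 30, 34]   # scores on the first administration
test2 = [14, 17, 27, 29, 35]   # scores a few weeks later
r = pearson_r(test1, test2)
reliable = r > 0.8             # good reliability by the usual convention
```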

14
Q

What Is Inter-Observer Reliability?

A
  • Inter‐observer reliability is the extent to which two or more observers are observing and recording behaviour in a consistent way.
  • This is a useful way of ensuring reliability in situations where there is a risk of subjectivity.

E.g. if a psychologist was making a diagnosis for a mental health condition, it would be a good idea for someone else to also make a diagnosis to check that they are both in agreement.

  • In psychological studies where ‘behavioural categories’ (e.g. violent play, non-violent play) are being applied, inter‐observer reliability is important to make sure that the categories are being used in the correct manner.
  • To check inter-observer reliability, psychologists would observe the same situation or event separately, and then their observations (or scores) would be correlated to see whether they are suitably similar.
15
Q

How To Ensure Reliability?

A

This is not the same as testing reliability.

We ensure reliability by using inter-observer reliability.

16
Q

Example Of Inter-Observer Reliability?

A

Ainsworth’s Strange Situation

  • From the attachment topic,
  • She uses operationalised behavioural categories,

During the controlled observation, the research team were looking for instances of separation anxiety, proximity seeking, exploration and stranger anxiety across the eight episodes of the methodology.

Ainsworth et al found 94% agreement between observers.
When inter-observer reliability is achieved to a high degree, such as this, the findings are considered more reliable, and therefore meaningful.

If reliability is found to be poor, there are different ways in which it can be rectified depending on the type of measure being used.
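One simple way to quantify agreement, in the spirit of the 94% figure above, is percentage agreement over a set of paired observations; the observation records below are invented.

```python
# Percentage agreement between two observers applying the same
# behavioural categories to the same events (invented records).
obs_a = ["proximity seeking", "exploration", "separation anxiety", "exploration"]
obs_b = ["proximity seeking", "exploration", "stranger anxiety", "exploration"]

agreements = sum(a == b for a, b in zip(obs_a, obs_b))
percent_agreement = 100 * agreements / len(obs_a)  # 75.0 here
```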

17
Q

How Can We Improve The Reliability Of: Questionnaires?

A

To improve the reliability of questionnaires, first researchers should identify which questions will have the biggest impact on the study and not include any questions that are unnecessary.

The questions should be written in a manner that reduces the potential for them to be incorrectly interpreted.

E.g. if the item in question is an open question, it may be possible to change it into a closed question, reducing possible responses and thereby limiting potential ambiguity.

18
Q

How Can We Improve The Reliability Of: Interviews?

A

If reliability needs improving in an interview, there are several factors that can be adjusted.

Firstly, ensuring that the same interviewer is conducting all interviews will help reduce researcher bias. There is the potential for variation in the way that questions are asked which can then lead to different responses. Some researchers may ask questions that are leading or are open to interpretation.

If the same interviewer cannot be used throughout the interviewing process, then training should be provided in order to limit the potential bias.

Also, changing the interview from unstructured to structured will limit researcher bias.

19
Q

How Can We Improve The Reliability Of: Experiments?

A

In experiments, the level of control that the researcher has over variables is one way that reliability can be influenced.

Laboratory experiments are often referred to as having high reliability due to the high level of control over the independent variable(s), which in turn makes them easier to replicate by following the standardised procedures.

To improve the reliability within experiments, researchers might try to take more control over extraneous variables, reducing the potential for them to become confounding.

20
Q

How Can We Improve The Reliability Of: Observations?

A

Observations can lack objectivity when the goal/objective of the study is not made clear. This could lead to researchers interpreting a situation/objective differently, leading to unreliable results.

If behavioural categories are being used, it is important that the researcher is applying them accurately and not being subjective in their interpretations. One way to improve reliability in this instance would be to operationalise the behavioural categories.

21
Q

Difference Between Extraneous And Confounding Variables?

A

Extraneous variables are any variable other than the independent variable that might affect the dependent variable and therefore affect the results of the study.

Extraneous variables that are important enough to cause a change in the dependent variable are called confounding variables.

If they are not important enough to cause a change in the dependent variable, they stay known as extraneous variables.

22
Q

What Are Operationalised Behavioural Categories?

A

This means that the categories need to be clear and specific on what constitutes the behaviour in question.

There should be no overlap between categories leaving no need for personal interpretation of the meaning.

23
Q

What Is Validity?

A

Validity refers to whether something is true or legitimate.

There are different types of validity that can be assessed in psychology:

  • Internal validity,
  • External validity (Ecological validity and Temporal validity).
24
Q

Internal Validity?

A

Internal validity is a measure of whether results obtained are solely affected by changes from the independent variable being manipulated in a cause and effect relationship.

This means the results can be attributed to the manipulation of the independent variable rather than to other factors.

25
Q

External Validity?

A

External validity is a measure of whether data can be generalised to other situations/people outside of the research environment.

There are two different types of external validity:

  • Ecological validity,
  • Temporal validity.
26
Q

Ecological Validity?

A

Ecological validity is a type of external validity.

It is the extent to which psychologists can apply their findings to other settings.

How is this different from external validity? External validity asks whether the findings of a study can be generalised to patients with characteristics that are different from those in the study, or patients who are treated in a different way, or patients who are followed up for a longer duration.

In contrast, ecological validity specifically examines whether the findings of a study can be generalized to natural situations (e.g. patients in a hospital/clinical practice in everyday life).

27
Q

Temporal Validity?

A

Temporal validity is another form of external validity.

It refers to the extent to which research findings can be applied across time.

E.g. will the conclusions and findings from a study eventually become outdated? Is this research outdated?

28
Q

How To Assess Validity?

A

Assessing validity helps us measure whether the research being carried out is valid (true) or not. This helps us understand whether the results from the study can be generalised to other people.
This is how we assess validity:

  1. Face Validity -
    Firstly, the face validity is considered, that is, does the test appear to measure what it says it measures?
    For example,
    if there is a questionnaire that is designed to measure depression, do the items all look like they are going
    to represent what it is like to have depression? Is the questionnaire conducted by a specialist?
  2. Concurrent validity -
    Secondly, this is where the performance of the test in question is compared to a test that is already recognised and trusted within the same field.
    For example, if psychologists are wanting to introduce a new measure of depression, they might
    compare their results to the data obtained from a measure that is very similar, such as Beck’s depression
    inventory.

If validity is not high, we can improve it.

29
Q

How To Improve Validity In: Experiments?

A

Control Group - Allows for easy comparison to see if the results of the study are actually valid,

Single Blind or Double Blind - reduce demand characteristics/investigator effects,

Standardised instructions - involves giving all participants the same instructions in exactly the same format so that no demand characteristics occur.

30
Q

How To Improve Validity In: Questionnaires?

A

Lie scale - to check the consistency of participants’ responses.
One way in which this can be done is by having two items that are asking the same thing, but in opposite ways. E.g. “Have you ever said anything you wish you could take back?” and “Do you regret saying anything?”. This could allow people to rule out any unreliable participants (liars) from the research.

Anonymous - reduces social desirability bias.

31
Q

How To Improve Validity In: Observations?

A

Covert observation - this eliminates any chances that participants will be acting in a way that they deem correct or desirable for the sake of the study,

Behavioural categories - this helps make sure there is no researcher subjectivity. To do this, you must have a clearly defined objective for what the researchers are looking for specifically, and the researchers must undergo training,

Qualitative data - Quotes + triangulation (use three different methods and compare results).

32
Q

What Are The Features Of Science?

A

It is an ongoing debate in psychology whether psychology can be considered a science.

We look at the different features of what makes a science to understand whether psychology fulfils these expectations.

Features of a science:

  • Objectivity and Empirical Methods,
  • Replicability and Falsifiability,
  • Theory Construction and Hypothesis Testing,
  • Paradigms and Paradigm Shifts.
33
Q

What Is Objectivity?

A

Objectivity is a feature of science.

It is the ability for researchers to remain objective, meaning that they must not let their personal opinions, judgements or biases interfere with the data.

  • Laboratory experiments are the most objective method within the psychology discipline because of the high level of control that is exerted over the variables. A natural experiment cannot exert control over the manipulation of the independent variable and is often viewed as less objective.
  • Similarly, the observational and content analysis methods in psychology can fall victim to objectivity issues since the behavioural categories assigned are at the personal discretion of the investigator.
34
Q

What Are Empirical Methods?

A

Empirical methods refer to the idea that knowledge is gained/conclusions are drawn from direct experiences in an objective, systematic and controlled manner to produce quantitative data.

  • It suggests that we cannot create knowledge based on belief alone, and therefore any theory will need to be empirically tested in order to be considered scientific.

Adopting an empirical approach reduces the opportunity for researchers to make unfounded claims about phenomena based on subjective opinion.

35
Q

What Is Replicability?

A

Replicability is a feature of a science.

It refers to the ability to conduct research again and achieve consistent results.

If the findings can truly be generalised, and thus be truly valid, psychologists would expect that any replication of a study using the same standardised procedures would produce similar findings and reach the same conclusions.

36
Q

What Is Falsifiability?

A

For research to be considered scientific it should also be falsifiable.

Falsifiability (Popper) refers to the idea that a research hypothesis could be proved wrong.

Scientific theories can never be ‘proven’ to be true, only subjected to attempts to prove them false.

For this reason, all investigations have a null hypothesis which suggests that any difference or relationship found is due to chance.

37
Q

Example In Psychology Of A Study Which Lacks Falsifiability?

A

An example within psychology which causes conflict in the scientific community for its lack of falsifiability is the:
Freudian psychodynamic approach.

The Oedipus complex, which occurs for boys during childhood whereby they resolve an unconscious sexual desire for the opposite‐sex parent in order to develop the final element of their psyche: the superego.

If a male individual rejects the idea that he went through this stage of psychosexual development in his youth, psychodynamic theorists would counter this with the supposition that he is in denial (a defence mechanism), which is another facet of the theory.

Because any attempt to argue that the psychodynamic approach is not true can itself be explained by the theory as denial, the theory cannot be falsified.

  • Popper argued that if falsification cannot be achieved, the theory cannot have derived from a true scientific discipline, which should instead be regarded as a pseudoscience.

Therefore, the psychodynamic approach casts doubt on the scientific rigour of psychology when considered as a whole.

38
Q

What Is Theory Construction?

A

A theory is a set of principles that intend to explain certain behaviours or events.

However, to construct a theory, evidence to support this notion needs to be collected first, since the empirical method does not allow knowledge to be based solely on beliefs.

If a researcher suspects something to be true, they need to devise an experiment that will allow them to examine their ideas.

It is only when a researcher starts to discover patterns or trends in their research, that they can then construct a theory.

This is called the ‘inductive process’ and is sometimes referred to as the ‘bottom up’ approach. Thereafter, researchers can make predictions about what they expect to happen – a hypothesis.

Induction: involves reasoning from the particular to the general. For example a scientist may observe instances of a natural phenomenon and come up with a general law or theory.

39
Q

What Is Hypothesis Testing?

A

When designing a hypothesis, it must be objective and measurable so that at the end of the investigation a clear decision can be made as to whether results have supported or refuted the hypothesis.

If findings support the hypothesis, then the theory will have been strengthened.

If it is refuted, then it is likely that alterations will be made to the theory accordingly.

  • There is the deductive process of theory construction which works from the more general ideas to the more specific and is informally referred to as a ‘top‐down’ approach.
  • Deduction: involves reasoning from the general to the particular, starting with a theory and looking for instances that confirm this. Darwin’s theory of evolution is an example of this. He formulated a theory and set out to test its propositions by observing animals in nature. He specifically sought to collect data to prove his theory.

Here, the psychologist may begin with a theory relating to a topic of interest. This will then be narrowed down into a more specific hypothesis which can be tested empirically. Any data gathered from testing the hypothesis in this way will then be used to adjust the predictions.

40
Q

What Is A Paradigm?

A

A paradigm is a set of shared assumptions and methods within a particular discipline.

Kuhn suggested that it was this that separated a scientific discipline from non‐scientific disciplines. Under this assumption, he suggested that psychology was perhaps best seen as a pre‐science, separate from the likes of physics or biology. He suggested that psychology had too much disagreement at its core between the various approaches (e.g. behaviourist versus cognitive psychologists), and was unable to agree on one unifying approach to consider itself a science.

41
Q

What Is A Paradigm Shift?

A

The way in which a field of study moves forward is through a scientific revolution. It can start with a handful of scientists challenging an existing, accepted paradigm. Over time, this challenge becomes popular with other scientists also beginning to challenge it, adding more research to contradict the existing assumptions. When this happens, it is called a paradigm shift.

42
Q

Example Of A Paradigm Shift?

A

An example of a paradigm shift is how scientists historically believed the world to be flat when now it is accepted to be round.

From the late nineteenth century psychoanalytic theory was at the forefront of psychological thinking, with the role of the unconscious mind in governing behaviour being the dominant approach.

However, between 1927 and 1938 the work of Pavlov and Skinner emerged; they adopted the behaviourist position that all behaviour is learned from the environment and experience. Shortly thereafter, in the 1960s, another paradigm shift occurred, with the cognitive approach taking precedence in psychology following the development of the electronic computer.

Here, the shift from behaviourist thinking moved towards the role of cognitions in explaining human behaviour although elements of the behaviourist approach remained in use and were combined in cognitive behavioural therapy (CBT).

43
Q

What is a statistical test?

A

A statistical test is used to determine whether a difference or an association found in an investigation is statistically significant (i.e. unlikely to have occurred by chance).

This decides whether we accept or reject the hypothesis.

44
Q

How to choose a statistical test?

A
  1. Is the researcher looking for a difference or a correlation? This is obvious from the wording of the hypothesis. ‘Correlation’ can include correlational analysis as well as investigations that are looking for an association between two co-variables.
  2. In the case of a difference, what experimental design is being used? This can be independent groups, matched pairs or repeated measures. Matched pairs and repeated measures are related (repeated) designs. People in independent groups are unrelated.
  3. The level of measurement of the study. Quantitative data is divided into nominal, ordinal and interval.
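The three-question decision process above can be sketched as a simple lookup; the table reflects the tests listed on the later cards in this deck (note the deck treats nominal data as chi-squared, with an unrelated design for differences).

```python
# A sketch of choosing a statistical test from the three questions:
# difference vs. correlation, design, and level of measurement.
def choose_test(looking_for, design, level):
    """looking_for: 'difference' or 'correlation';
    design: 'related' or 'unrelated' (ignored for correlations);
    level: 'nominal', 'ordinal' or 'interval'."""
    if level == "nominal":
        return "chi-squared"  # for a difference, the design is unrelated
    if looking_for == "correlation":
        return "Spearman's rho" if level == "ordinal" else "Pearson's r"
    if level == "ordinal":
        return "Wilcoxon" if design == "related" else "Mann-Whitney"
    return "related t-test" if design == "related" else "unrelated t-test"
```

E.g. `choose_test("difference", "unrelated", "ordinal")` gives Mann-Whitney, matching the later card.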
45
Q

Nominal data?

A

Quantitative data that is represented in categories.

E.g. boys = 15, girls = 7.

Nominal data is discrete in that one item can only appear in one of the categories. E.g. if you asked people to name their favourite ice cream, you can only place a vote into one category.

46
Q

Ordinal data?

A

Quantitative data that is ordered.

E.g. asking everyone in your class to rate how much they like psychology on a scale of 1 to 10.

Does not have equal intervals and does not include specific measurements such as time or temperature.
E.g. it would not make sense to say someone who rates psychology at an 8 likes it twice as much as someone who voted a 4.

This data is usually less accurate and lacks precision - e.g. a happiness scale or an IQ test. Questionnaires and psychological tests usually don’t measure something ‘real’.

Ordinal data, for these reasons, is sometimes referred to as ‘unsafe’ because it lacks precision. The raw scores are not used as part of the statistical testing; instead, they are converted to ranks (1st, 2nd, 3rd…) and the ranks (not the scores) are used in the test.

47
Q

Interval data?

A

Based on numerical scales that include units of equal, precisely defined size.

Examples are time and temperature and weight.

It’s ‘better’ than ordinal data because more detail is preserved (and ordinal is ‘better’ than nominal).

Most precise and sophisticated form of data in psychology.

48
Q

Type 1 error?

A

Researchers can never be 100% certain that they have found statistical significance; it is possible (usually up to 5% possible) that the wrong hypothesis has been accepted.

Type 1 - when the null hypothesis is rejected and the alternative hypothesis is accepted when it should have been the other way around, because the null hypothesis is ‘true’. This is often referred to as an optimistic error or FALSE POSITIVE.

We are more likely to make a type 1 error if the significance level is too lenient (too high, e.g. p = 0.1, a 10% probability).

49
Q

Type 2 error?

A

Researchers can never be 100% certain that they have found statistical significance; it is possible (usually up to 5% possible) that the wrong hypothesis has been accepted.

Type 2 error - FALSE NEGATIVE. When the null hypothesis is accepted but the alternative hypothesis should have been accepted.

A type 2 error is more likely if the significance level is too stringent (e.g. p = 0.01) as potentially significant values may be missed. Psychologists favour the 0.05 (5%) level of significance as it balances the risk of making a type 1 or type 2 error.

50
Q

Levels of significance?

A

We use the 0.05 (5%) level of significance in studies.

P < 0.05.

This means that the probability of the results being due to chance is 5% (or less).

51
Q

Critical value?

A

Once the statistical test is calculated, the result is a number. To check for a significant difference, this calculated value is compared with a critical value.

Whether the calculated value needs to be higher or lower than the critical value depends on the test (the test will tell you), and this determines whether we accept or reject the hypothesis.
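As a sketch (values invented; here we assume a test where the calculated value must exceed the critical value to be significant, as with chi-squared):

```python
# Comparing a calculated value with a critical value. Which direction
# counts as significant depends on the test; here larger = significant.
calculated = 7.90
critical = 3.84        # illustrative critical value at p < 0.05
significant = calculated > critical  # if True, reject the null hypothesis
```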

52
Q

How to work out degrees of freedom?

A
N = number of people in the study.
df = degrees of freedom.

df:
N - 1 for a related (repeated measures) design, e.g. a related t-test.
N1 + N2 - 2 for an unrelated (independent groups) design, e.g. an unrelated t-test.

Also remember: if anyone in the study shows no difference between the two conditions, remove their data from the analysis, so subtract them from N as well.

53
Q

Why might a lower level of significance be used in a study?

A

Sometimes a lower level of p is used, such as 0.01.

This might happen when there is a human cost (e.g. medical studies) and you cannot risk a 5% chance that the medicine or treatment goes wrong.

It might also happen when the study is a one-off and is not going to be replicated.

Also, if there is a large difference between the calculated and critical values, the researcher may lower p to check the validity.

54
Q

Why might you use a higher significance level in a study?

A

If there is already existing research supporting the prediction before the study.

55
Q

Why would you use Mann-Whitney?

A

When you are looking for a DIFFERENCE between two groups of people.

When the experimental design is unrelated.

When the measurement is ordinal.
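A hand-computed sketch of the U statistic on invented, tie-free ordinal ratings from two unrelated groups:

```python
# Mann-Whitney U sketch: rank all scores together, then compute U
# from the rank sum of one group (data invented, no tied scores).
group_a = [3, 5, 8]
group_b = [1, 2, 4]

combined = sorted(group_a + group_b)
rank = {score: i + 1 for i, score in enumerate(combined)}

n_a, n_b = len(group_a), len(group_b)
rank_sum_a = sum(rank[s] for s in group_a)
u_a = rank_sum_a - n_a * (n_a + 1) / 2
u_b = n_a * n_b - u_a
# The smaller of u_a and u_b is compared with the critical value
```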

56
Q

Types of statistical test?

A

Mann-Whitney,

Wilcoxon,

Unrelated t-test,

Related t-test,

Spearman’s rho,

Pearson’s r,

Chi-squared.

57
Q

Why would you use Wilcoxon?

A

When you’re looking at a difference between two groups.

When the experimental design is related (repeated).

When the measurement used is ordinal.

58
Q

Why would you use an unrelated t-test?

A

When looking for a difference between two sets of data/groups.

When the experimental design is unrelated (hence name).

When the measurement used is interval.

59
Q

When would you use a related t-test?

A

When looking at the difference between two groups.

When the experimental design is related.

When the level of measurement is interval.

60
Q

When would you use a Pearson’s r?

A

When you’re looking at a correlation between two co-variables.

(We don’t look at the experimental design because we’re not looking at a difference).

When we are using a measurement of interval.

61
Q

When would you use spearman’s rho?

A

When looking at a correlation between two co-variables.

(We don’t look at the experimental design because we’re not looking at the difference).

When looking at a measurement of ordinal.
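A minimal sketch: Spearman’s rho can be computed by ranking each co-variable and then correlating the ranks (Pearson’s r on ranks). The ordinal data below are invented and tie-free.

```python
# Spearman's rho sketch: rank each co-variable, then compute
# Pearson's r on the ranks.
from math import sqrt

def ranks(values):
    ordered = sorted(values)
    return [ordered.index(v) + 1 for v in values]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

liking = [2, 5, 7, 9]      # rating of psychology, 1-10
revision = [1, 4, 6, 10]   # rating of time spent revising
rho = pearson_r(ranks(liking), ranks(revision))  # 1.0: identical ordering
```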

62
Q

Why would you use chi-squared?

A

You can use chi-squared both for correlations (associations) and for looking at a difference.

If it’s a difference, the experimental design must be unrelated.

The measurement needs to be nominal (whether looking at a correlation or a difference!)
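A minimal chi-squared sketch on invented nominal data (a 2x2 table of counts), with expected frequencies computed by hand:

```python
# Chi-squared statistic from a 2x2 contingency table of counts.
observed = [[30, 10],   # e.g. boys: helped / did not help (invented)
            [20, 20]]   # girls: helped / did not help

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi_sq += (obs - expected) ** 2 / expected
# chi_sq is then compared with the critical value for the
# appropriate degrees of freedom
```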

63
Q

What makes up a scientific report?

A

When psychologists come to write up their research for publication in journal articles, they use a conventional format.

What makes them up:
- abstract (short summary which includes the aims and hypotheses, method, procedure, results and conclusions).

  • introduction (literature review of general area of investigation, relevant theories and concepts).
  • method,
  • results,
  • discussion,
  • referencing.
64
Q

What’s in a method in a scientific report?

A

Split into several sub-sections.

Should include sufficient detail so that other researchers are able to replicate the study.

  1. Experimental design and the justification.
  2. Sample - how the sample was taken from the target population and information related to people in study. How many people were there? And biological/demographic information as long as anonymity is kept.
  3. Apparatus/materials.
  4. Procedure - list of everything that happened. Includes what was said to participants, briefing, standardised instructions and debriefing.
  5. Ethics - an explanation of how they were addressed.
65
Q

What’s in the results of a scientific report?

A

Should summarise the key findings.

Likely to feature descriptive statistics such as tables, graphs and charts.

Inferential statistics should include reference to statistical test, calculated and critical values, the level of significance and final outcome.

Also states whether any hypotheses were rejected or accepted.

Any raw data and calculations appear in the appendix. If qualitative methods are used, the findings involve themes and categories.

66
Q

What is in the discussion of the scientific report?

A

Summarise findings in verbal form,

Relate the findings to the context of existing evidence.

Limitations of the study.

How the limitations can be addressed in any further study.

Wider implications of the research are considered. This may include real-world applications or contributions to existing knowledge.

67
Q

What is in the referencing part of the scientific report?

A

Full details of any sources used that the researcher drew upon or cited in the report.

May include journals, books, websites, etc.

References take the following format:
Author, date, title of book (in italics), place of publication, publisher.