Scientific process Flashcards

1
Q

What is the aim of a study?

A

A general statement about what the researcher intends to study.
States the purpose of the study.

2
Q

What is a ‘hypothesis’?

A

A precise and testable statement that states the relationship between variables.

3
Q

What are the two types of hypotheses?

A

Directional
Non-Directional

4
Q

What is a directional (one tailed) hypothesis?

A

Predicts the nature (direction) of the effect of the independent variable on the dependent variable. It states, for example, “GROUP A WILL PERFORM BETTER THAN GROUP B.”

5
Q

What is a non-directional (two tailed) hypothesis?

A

Predicts that the independent variable will have an effect on the dependent variable, but the direction of the effect is not specified. It just states “THERE WILL BE A DIFFERENCE.”

6
Q

What is an experimental/ alternate hypothesis?

A

States that there is a relationship between the two variables being studied (one variable has an effect on the other)

7
Q

What is a null hypothesis?

A

Predicts there will not be a difference or effect between two variables.

8
Q

What is a sample?

A

A small group of participants, drawn from the target population the researcher is interested in, who take part in the study.

9
Q

Define the target population.

A

The specific group of people the researcher is interested in studying and from whom the sample is drawn.

10
Q

What are two methods to gain a random sample?

A

Lottery method
Random number generator
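
As a minimal sketch of these two methods in Python (the participant names and sample size are made up for illustration), both could be carried out like this:

    import random

    # Hypothetical target population of 100 participant names (illustrative only)
    population = [f"Participant {i}" for i in range(1, 101)]

    # Lottery method: put every name "in a hat", shuffle, and draw the first 10
    hat = population.copy()
    random.shuffle(hat)
    lottery_sample = hat[:10]

    # Random number generator method: draw 10 distinct indices at random
    indices = random.sample(range(len(population)), 10)
    rng_sample = [population[i] for i in indices]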

11
Q

What is an opportunity sample?

A

Recruit those people who are most convenient or most available.

12
Q

What is a random sample?

A

A sample of participants produced by using a technique such that every member of the target population being tested has an equal chance of being selected.

13
Q

What is a volunteer sample?

A

Also known as a self-selected sample. To recruit participants, advertisements are placed in a newspaper or on a noticeboard.

14
Q

What is a systematic sample?

A

A sample obtained by selecting every nth person.
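
As a minimal sketch (the sampling frame and interval are hypothetical), selecting every nth person with a randomly chosen starting point could look like this in Python:

    import random

    population = [f"Participant {i}" for i in range(1, 101)]  # hypothetical sampling frame
    n = 5  # select every 5th person

    # Choosing the starting point at random keeps the method closer to truly random
    start = random.randrange(n)
    systematic_sample = population[start::n]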

15
Q

What is a stratified sample?

A

Subgroups, also known as strata, within a population are identified. Participants are then obtained from each of the strata in proportion to their occurrence in the population.
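
As a minimal sketch of the proportional idea (the strata names and sizes are hypothetical), the number of participants to draw from each stratum could be worked out like this in Python:

    # Hypothetical strata and their sizes in the target population (illustrative only)
    strata = {"students": 600, "employed": 300, "retired": 100}
    sample_size = 50
    total = sum(strata.values())

    # Each stratum contributes in proportion to its occurrence in the population;
    # individuals within each stratum would then be selected at random.
    stratified_plan = {
        stratum: round(sample_size * count / total)
        for stratum, count in strata.items()
    }
    # -> {'students': 30, 'employed': 15, 'retired': 5}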

16
Q

Strengths and weaknesses of opportunity sampling.

A

+ Quick and easy way of choosing participants.
+ Easy to locate a sample.
- It may not provide a representative sample, and could be biased.
- Inevitably biased due to sample being drawn from a small part of the population.

17
Q

Strengths and weaknesses of random sampling.

A

+ Least biased method of sampling, as all members of the target population have an equal chance of being selected.
+ Findings can be applied to the entire population base (generalised).
- May not truly represent the target population.
- It is a complex and time-consuming method of research: every selected person must be individually contacted and interviewed or assessed so that the data can be properly collected.

18
Q

Strengths and weaknesses of Volunteer/self-selected sampling.

A

+ This method allows the researcher to reach a range of potential ppts as many different people will see the advertisements and be able to respond.
+ Participants will all be happy and willing to participate. (reduces ethical issues)
- It will be biased towards a certain type of person as only people with a personal interest in the research topic will volunteer.
- Results can’t be generalised.

19
Q

Strengths and weaknesses of stratified sampling.

A

+ Unbiased.
+ Highly representative of the target population and therefore we can generalise from the results obtained.
- Gathering such a sample would be extremely time consuming and difficult to do.
- Care must be taken to ensure each key characteristic present in the population is represented across the strata, otherwise the sample will be biased.

20
Q

Strengths and weaknesses of systematic sampling.

A

+ Should provide a representative sample- unbiased.
+ Gives researchers a degree of control. It can help eliminate cluster selection.
- Very difficult to achieve (in terms of time, effort and money).
- The method isn’t truly random unless the starting point is selected at random.

21
Q

What is sampling bias?

A

Happens when some members of a sample population are more likely to be selected in a sample than others. Sampling bias limits the generalisability of sample findings because it is a threat to external validity (specifically population validity).

22
Q

What is generalisation?

A

Taking something specific and applying it more broadly.

23
Q

What is a pilot study?

A

This is a practice run of the proposed research project. Researchers will use a small number of participants and run through the procedure with them. The purpose of this is to identify any problems or areas for improvement in the study design before conducting the research in full. A pilot study may also give an early indication of whether the results will be statistically significant.

For example, if a task is too easy for participants, or it’s too obvious what the real purpose of an experiment is, or questions in a questionnaire are ambiguous, then the results may not be valid. Conducting a pilot study first may save time and money as it enables researchers to identify and address such issues before conducting the full study on thousands of participants.

24
Q

What is experimental design?

A

The different ways in which the testing of participants can be organised in relation to the experimental conditions.

25
Q

What is independent groups design?

A

Participants are allocated to different groups where each group represents one experimental condition.

26
Q

What is repeated measures?

A

All participants take part in all experimental conditions.

27
Q

What is matched pairs design?

A

Pairs of participants are first matched on some variable(s) that may affect the DV. Then one member of the pair is assigned to Condition A and the other to Condition B.

28
Q

What is random allocation?

A

An attempt to control for participant variables in an independent groups design: it ensures that each participant has the same chance of being allocated to any one condition as to another.
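
As a minimal sketch (the participant IDs and group sizes are hypothetical), random allocation to two conditions could be done like this in Python:

    import random

    participants = [f"P{i}" for i in range(1, 21)]  # hypothetical participant IDs
    random.shuffle(participants)

    # After shuffling, each participant is equally likely to end up in either condition
    condition_a = participants[:10]
    condition_b = participants[10:]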

29
Q

What is counterbalancing?

A

A technique used to deal with order effects in a repeated measures design: half the participants experience the conditions in one order, and the other half in the opposite order.
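
As a minimal sketch (the participant IDs and condition labels are hypothetical), counterbalancing two conditions A and B could look like this in Python:

    participants = [f"P{i}" for i in range(1, 21)]  # hypothetical participant IDs
    half = len(participants) // 2

    # Half the sample completes the conditions in the order A then B, the other half B then A
    orders = {p: ("A", "B") for p in participants[:half]}
    orders.update({p: ("B", "A") for p in participants[half:]})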

30
Q

What are order effects?

A

The order of the conditions having an effect on the participants’ behaviour. Performance in the second condition may be better because the participants know what to do (a practice effect), or worse because of boredom or fatigue.

31
Q

What are strengths and weaknesses of independent measures?

A
+ Reduces the chance of demand characteristics.
+ Participants are randomly allocated to conditions.
- More participants are needed.
- Least effective design for controlling participant variables.
32
Q

What are strengths and weaknesses of repeated measures?

A
+ Fewer participants are needed.
+ Order effects like boredom and fatigue can be controlled through counterbalancing.
- Increased chance of demand characteristics.
- The same participants are used in both conditions, so order effects can arise.
33
Q

What are strengths and weaknesses of matched pairs?

A
+ Identical twins provide researchers with a close match.
+ Good attempt to control participant variables.
- More participants are needed.
- It is not possible to match all of the participants’ characteristics.
34
Q

Target behaviour is systematically divided up into specific, operationalised categories. These categories should be:

A

Objective
Mutually exclusive
Cover all possible behaviours between them

35
Q

What are the 2 sampling procedures?

A

Time sampling
Event sampling

36
Q

What is event sampling?

A

It is where an observer records the number of times a certain behaviour occurs.
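
As a toy illustration in Python (the behaviours listed are hypothetical), an event-sampling tally could look like this:

    from collections import Counter

    # Hypothetical event-sampling record: one entry per occurrence of a target behaviour
    events = ["hit", "push", "hit", "shout", "hit", "push"]
    tally = Counter(events)
    # -> Counter({'hit': 3, 'push': 2, 'shout': 1})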

37
Q

What is time sampling?

A

Time sampling is a method of sampling behaviour in an observation study and is where an observer records behaviour at prescribed intervals. For example, every 10 seconds.

Recording behaviours within a specific time frame.

38
Q

What are behavioural categories?

A

An observational study will use behavioural categories to prioritise which behaviours are recorded and to ensure the different observers are consistent in what they are looking for.
For example, a study of the effects of age and sex on stranger anxiety in infants might use a small set of behavioural categories to organise the observational data.
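
Purely as an illustrative sketch (the category labels are hypothetical, not those from any real study), such a coding scheme could be represented in Python as:

    # Hypothetical behavioural categories for a stranger-anxiety observation (illustrative only)
    behavioural_categories = {
        "C": "crying",
        "CL": "clinging to caregiver",
        "T": "turning away from the stranger",
        "A": "approaching the stranger",
    }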

39
Q

What ensures behavioural categories are consistent?

A

Inter-observer reliability: in order for observations to produce reliable findings, it is important that observers all code behaviour in the same way. For example, researchers would have to make it very clear to the observers what the difference between, say, a ‘3’ and a ‘7’ on an anxiety rating scale would be. This inter-observer reliability avoids the subjective interpretations of different observers skewing the findings.

40
Q

What is a questionnaire?

A

A questionnaire is a set of written questions on a topic on which opinions are sought. Questionnaires are frequently used in survey research in which information is gathered regarding people’s attitudes and beliefs.

41
Q

What are closed questions?

A

These have specific, limited answers. Often a statement is given to the respondent and they must choose from several fixed responses. A common example of this format is the Likert scale (e.g. strongly agree to strongly disagree).
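
As a minimal sketch (the statement and wording are made up for illustration), a Likert-style closed item with its fixed responses could be represented in Python as:

    # Hypothetical closed (Likert-style) questionnaire item with fixed response options
    item = {
        "statement": "I find it easy to talk to strangers.",
        "responses": [
            "Strongly agree",
            "Agree",
            "Neither agree nor disagree",
            "Disagree",
            "Strongly disagree",
        ],
    }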

42
Q

What are open questions?

A

With this type of question, the respondent is given a high level of freedom with their answers. Often the researcher simply asks a question and provides space underneath for the respondent to write their answer. This type of question would collect qualitative data.

43
Q

What are the strengths and weaknesses of closed questions?

A

+ It collects quantitative data, which means the researcher can statistically analyse the data and produce graphs, allowing a thorough numerical analysis to be completed.
+ Easy to replicate: since the questions are standardised, it is easy to repeat the questionnaire.
- They obtain quantitative data, which can be criticised for not accurately representing the complexity of human behaviour.
- Participants may respond in a socially desirable way, which means the findings are not representative of the truth.
44
Q

What are the strengths and weaknesses of open questions?

A

+ Collects rich qualitative data, which helps researchers develop a better, more in-depth understanding of human behaviour.
+ Easy to replicate: since the questions are standardised, the questionnaire can be repeated and assessed for reliability.
- The data collected is qualitative, which makes it very difficult to carry out statistical tests, so it can be hard to draw firm conclusions using inferential statistics.
- Participants may respond in a socially desirable way, which means the findings are not representative of the truth.
45
Q

What are some ways of distributing questionnaires?

A

Postal questionnaires: This involves sending out questionnaires to people through the post. However, this could produce an unrepresentative sample because only people who have time will respond; this may exclude people who work or who have full-time family commitments.

Magazine and newspaper questionnaires: This involves asking the readers to send in the completed questionnaire. However, this could bring about an unrepresentative sample as only readers of that particular magazine will respond to the questionnaire. This will exclude individuals who don’t read this magazine.

46
Q

How can you design a good questionnaire?

A

CLARITY: respondent needs to know what is being asked of them.
BIAS: avoid all questions that may lead to ppts answering in a certain way.
ANALYSIS: questions need to be easy to analyse.
PILOT STUDY: carry out a small-scale version in order to make changes.
SAMPLING: use a sample that represents the population.
SEQUENCE: leave more difficult questions to the end.
FILLER QUESTIONS: irrelevant questions should be included to distract ppts, reducing demand characteristics.

47
Q

What is a variable?

A

Anything that can vary or change within an investigation

48
Q

What is the independent variable?

A

The variable altered/manipulated by the experimenter

49
Q

What is the dependent variable?

A

The variable that is measured.

50
Q

What is meant by operationalising variables?

A

Explains how a variable is measured or manipulated.

51
Q

What are extraneous variables?

A

Variables additional to the IV that should be accounted for and controlled to avoid their impact upon the DV.

52
Q

What are confounding variables?

A

An extraneous variable that hasn’t been controlled and that may therefore have affected the results, obscuring the true effect of the IV on the DV.

54
Q

How do you enable control in research?

A
  • Random allocation
  • Counterbalancing
  • Randomisation
  • Standardisation
55
Q

What is randomisation?

What is counterbalancing?

What is random allocation?

What is standardisation?

A

Randomisation refers to the use of chance when designing an investigation, for example deciding the order of conditions or of items in a materials list, in order to control the effect of extraneous variables and investigator bias.

Counterbalancing means testing participants in the conditions in different orders: half complete the conditions in one order and half in the reverse order.

Random allocation is when the researchers divide the participants and allocate them to conditions using a random method, e.g. the lottery method.

Standardisation refers to the process in which procedures used in research are kept the same.

56
Q

What are demand characteristics?

A

Demand characteristics are cues in a study that allow participants to work out its aims. Their presence suggests a high risk that participants will change their natural behaviour in line with their interpretation of the aims of the study, in turn affecting how they respond in any tasks they are set.

Participants may, for example, try to please the researcher by doing what they have guessed is expected of them.

57
Q

What are investigator effects?

A

Investigator effects occur when a researcher unintentionally or unconsciously influences the outcome of any research they are conducting. This can happen in several ways, for example through non-verbal communication: the researcher can communicate their feelings about what they are observing without realising that they have done so.

58
Q

What is an ethical issue?

A

A conflict about what is acceptable in psychological research.

59
Q

What is an ethical guideline?

A

Way of resolving the conflict.

60
Q

What are examples of ethical issues?

A

Informed consent
Deception
The right to withdraw
Protection from physical and psychological harm
Confidentiality
Privacy

61
Q

What is informed consent?

A

Consent is a basic human right: participants should be able to make an informed choice about whether to be part of the research. Yet revealing the true aims of the study may cause demand characteristics and spoil the experiment.

62
Q

What is deception?

A

Deceiving participants is unethical: information must not be withheld from participants, nor should they be misled. Deception could also have a negative impact on the reputation of the psychological community.

63
Q

What is the right to withdraw?

A

Participants have the right to withdraw at any point during the research, from beginning to end, regardless of whether they have been paid or not. However, participants leaving the experiment could hinder the validity of the findings.

64
Q

What is protection from physical and psychological harm?

A

No physical or psychological harm should happen during the study. However, it is considered ethical if the risk or harm is no greater than would be expected in ordinary life.

65
Q

What is confidentiality?

A

The Data Protection Act makes confidentiality a legal right. Published data must not allow participants to be identified.

66
Q

What is privacy?

A

Invasion of privacy is difficult to avoid in observational research: people do not expect to be observed in certain situations. However, observation may be more acceptable in public spaces.

67
Q

What is debriefing?

A

After the investigation participants should be fully informed about the nature of the research.

68
Q

What is the BPS code of practice (guideline)?

A

Suggests ways of dealing with ethical issues that could arise during research.

69
Q

What are the 4 strategies of dealing with ethical issues?

A

Punishments
Ethical guidelines
Cost-benefit analysis
Ethical committee

70
Q

What are punishments?

A

Psychologists who ignore the guidelines and behave unethically or unacceptably can be barred from research.

71
Q

What are ethical guidelines (code of conduct)?

A

Tells psychologists behaviours that are and aren’t acceptable. Gives guidance on how to deal with ethical dilemmas.

72
Q

What is cost-benefit analysis?

A

The costs of the research are judged against the benefits from the perspective of:
1) Participants
2) Society
3) Group of individuals

73
Q

What is an ethical committee?

A

An ethics committee at the institution where the research takes place must approve the study before it begins, taking into consideration how ethical issues will be dealt with and weighing up the costs and benefits.

74
Q

How do you deal with deception and what problems could arise?

A

Any deception should be approved by the ethics committee. Participants should be debriefed after the study. Problems are that ppts could feel embarrassed that they’ve been deceived.

75
Q

How do you deal with informed consent and what problems could arise?

A

Participants are asked to give their consent to take part by signing a detailed consent document. A problem is that this doesn’t guarantee that the ppts understand what they’ve let themselves in for.

76
Q

How do you deal with the protection of participants from harm and what problems could arise?

A

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, make sure their participation is brief as they get tired easily and have a limited attention span. Terminate the study if harm is suspected. Problems are that harm may not be apparent at the time of the study and only judged later with hindsight.

77
Q

How do you deal with having the right to withdraw and what problems could arise?

A

Participants should be informed that they have the right to withdraw. A problem is that ppts may feel they shouldn’t withdraw because they will spoil the study.

78
Q

How do you deal with confidentiality and what problems could arise?

A

Don’t make records of participants’ personal details. A problem is that it can still be possible to work out who the ppts were from the information that has been provided.

79
Q

How do you deal with privacy and what problems could arise?

A

Unless it is a public space, don’t study anyone without informed consent. A problem is that there is no universal agreement about what constitutes a public space.

80
Q

What is peer review?

A

Peer review is the process by which psychological research papers, before publication, are subjected to independent scrutiny by other psychologists working in a similar field who consider the research in terms of its validity, significance and originality.

Its purposes are to allocate research funding, to validate the quality and relevance of research, and to suggest improvements and amendments.

81
Q

Who is Cyril Burt?

A

A psychologist who published results from twin studies claiming that intelligence was inherited. At the time, the evidence was used to shape public policy: tests were introduced into the schooling system, designed to segregate children according to their inherited abilities. His data was later found to be fraudulent.

82
Q

What are implications of faulty research?

A

Useless= doesn’t benefit the understanding of behaviour.
Incorrect psychological treatment and therapies.
Discrimination of groups in society.
Lack of trust for psychology as a discipline.
Clearing up mistakes due to faulty data.

83
Q

What are the implications of published faulty data?

A

Lack of trust in the subject and any research findings.
Public use of trusted data, that unbeknown to them is in fact faulty.

84
Q

What’s an academic journal?

A

Typically peer-reviewed, academic journals are periodicals in which researchers publish articles on their work, most often to discuss recent research.

85
Q

What is the main purpose of peer review?

A
  1. Allocation of research funding
  2. Publication of research in scientific journals and books
  3. Assessing the research rating of university departments

Peer review is an important part of this process because it provides a way of checking the validity of the research, making a judgement about the credibility of the research and assessing the quality and appropriateness of the design and methodology. Peers are also in a position to judge the importance or significance of the research in a wider context. They can also assess how original the work is and whether it refers to relevant research by other psychologists. They can then make a recommendation as to whether the research paper should be published in its original form, rejected or revised in some way. This peer review process helps to ensure that any research paper published in a well-respected journal has integrity and can, therefore, be taken seriously by fellow researchers and by lay people.

86
Q

How has the internet affected quality and credibility of peer review?

A

With the large amount of information now available on the internet, new solutions are needed to maintain the quality of information.

87
Q

Evaluation of peer review?

A

+ Peer review is essential: without it we cannot distinguish opinion and speculation from rigorously researched data. We need a means of establishing the validity of scientific research.

+ Helps to prevent scientific fraud, as submitted work is scrutinised. It promotes the scientific process through the development and dissemination of accurate knowledge and contributes new knowledge to the field.

- Richard Smith (former editor of the British Medical Journal) commented that peer review is “slow, expensive, profligate of academic time, highly subjective, prone to bias, easily abused, poor at detecting gross defects and almost useless for detecting fraud”.
- Appropriate experts can’t always be found to review a research proposal, meaning poor research may be passed because the reviewer didn’t really understand it.
88
Q

What are the steps of peer review?

A
  1. Psychologists study behaviour and write about their results.
  2. They send their research paper to a journal editor who sends it for peer review.
  3. Independent scrutiny by other psychologists working in a similar field who read the paper and provide feedback to the editor.
  4. The work is considered in terms of its validity, significance and originality and an assessment of the appropriateness of the method and designs used.
  5. The reviewer can accept the manuscript as it is, accept it with changes, suggest the author makes revisions and resubmits, or reject it without the possibility of resubmission.
  6. The editor sends the review comments to the author, who may revise and resubmit the paper. If it does not meet the required standards it will be rejected. The editor makes the final decision on whether it is published.
  7. If the article meets editorial and peer standards it is published in a journal.
89
Q

What is meant by assessing the research ratings of university departments in peer review?

A

All university science departments are expected to conduct research that is assessed in terms of quality (the Research Excellence Framework, REF). Future funding depends on good ratings from the REF.

90
Q

What is economic psychology?

A

Seeking a better understanding of people’s behaviour in their economic lives. Also referred to as ‘behavioural economics’.

91
Q

What are reasons for why psychology contributes to the economy?

A
  • Provides solutions to social problems such as drug abuse or mental health.
  • Psychological conditions (e.g. depression) have a direct economic impact in the UK, so investing in psychotherapy has a positive impact on people and enables them to return to work.
  • Research also takes place outside universities, in settings like hospitals, businesses and government departments, often occurring without contributions from research councils.
92
Q

What are the reasons why psychology doesn’t contribute to the economy?

A
  • As psychology is so spread out it’s almost impossible to judge the economic impact of psychological research that may have been carried out decades earlier.
  • Placing an economic value on psychological research underestimates its true social value.
  • It’s difficult to calculate the economic benefits of psychology as much research is carried out due to curiosity about human and animal behaviour.
93
Q

Psychology and the economy: Attachment Research into the Role of the Father

A

Recent research has stressed the importance of multiple attachments and the role of the father in healthy psychological development. This may promote more flexible working arrangements in the family and means that modern parents are better equipped to contribute more effectively to the economy

94
Q

Psychology and the economy: Development of Treatment for Mental Illness

A

A third of all days off work are caused by mental illness. Psychological research into the causes and treatments means that patients can have their disorders diagnosed quickly. Patients have access to therapies or psychotherapeutic drugs and sufferers can manage their condition effectively, return to work and contribute to the economy.

95
Q

What is internal and external reliability?

A

External: This measures consistency from one occasion to another. e.g. the same result should be found on different days, in different labs etc.

Internal: This measures the extent to which a test or procedure is consistent within itself, e.g. all the items in a questionnaire should be measuring the same thing.

96
Q

How do you improve the reliability of observational techniques?

A

Agree upon behavioural categories

Ensure there is clarity in the behavioural categories

Ensure observers have practice e.g. pilot studies using the categories.

97
Q

What are the three ways to assess reliability in self-report techniques?

A
  • Test-retest
  • Inter observer reliability
  • Reduce ambiguity
98
Q

How is test-retest completed?

A

To measure test-retest reliability, you conduct the same test on the same group of people at two different points in time. Then you calculate the correlation between the two sets of results. A high correlation between the test scores indicates the test has good external reliability.
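
As a minimal sketch of the calculation described above (the scores are made-up illustrative data), the test-retest correlation could be computed in Python like this:

    from statistics import correlation  # available in Python 3.10+

    # Hypothetical scores for the same six people tested on two occasions
    test_1 = [12, 18, 9, 15, 20, 11]
    test_2 = [13, 17, 10, 14, 19, 12]

    # A coefficient close to +1 indicates good external (test-retest) reliability
    r = correlation(test_1, test_2)
    print(round(r, 2))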

99
Q

What is inter-interviewer reliability?

A

The answers from interviews carried out on different occasions are compared for consistency. Reliability can be assessed with one researcher interviewing on two occasions, or with two researchers using the same interview method.

100
Q

How is inter-observer reliability conducted?

A

Measures the degree of agreement between different people observing or assessing the same thing.

Here researchers observe the same behaviour independently (to avoid bias) and compare their data. If the data is similar then it is reliable.

A positive correlation coefficient closer to +1 indicates higher inter-observer reliability.
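
As an illustrative sketch (the category labels and tallies are hypothetical), two observers’ tallies per behavioural category could be compared in Python like this:

    from statistics import correlation  # available in Python 3.10+

    # Hypothetical tallies: how often each observer recorded each behavioural category
    # (order: crying, clinging, playing, turning away)
    observer_1 = [7, 5, 12, 3]
    observer_2 = [6, 5, 11, 4]

    # A coefficient close to +1 suggests good inter-observer reliability
    r = correlation(observer_1, observer_2)
    print(round(r, 2))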

101
Q

How to reduce ambiguity in self report techniques?

A

Ensure all questions are clear in what they are asking, so that participants do not have to interpret the questions in their own way. Researchers should ensure there is clarity in the questions asked.

102
Q

How to improve reliability in questionnaires?

How to improve reliability in interviews?

How to improve reliability in experiments?

How to improve reliability in observations?

A
  • Replace open-ended questions with closed questions - less ambiguous.
  • Use the same interviewer each time.
  • Train all interviewers - not to ask leading questions.
  • Use structured interviews.
  • Standardising procedures (i.e. making sure that procedures are carried out the same way each time), for instance by implementing interviewer training, and/or practice through pilot studies.
  • Operationalise behavioural categories.
  • Categories should not overlap.
  • All possible behaviours should be covered by the categories.
104
Q

What are the 2 main types of validity?

A

Internal and external

105
Q

What is internal validity?

What is external validity?

A

Internal validity refers to whether the effects observed in a study are due to the manipulation of the independent variable and not some other factor.

External validity is the extent to which you can generalise the findings of a study to other situations, people, settings, and measures. In other words, can you apply the findings of your study to a broader context?

106
Q

What are the types of external validity?

A
  • Ecological validity
  • Temporal validity
  • Population validity
  • Content validity
107
Q

What is ecological validity?

What is temporal validity?

What is population validity?

What is content validity?

A

Is a measure of how test performance predicts behaviours in real-world settings.

The extent to which findings from a research study can be generalised to other historical times.

Refers to whether you can reasonably generalise the findings from your sample to a larger group of people (the population).

The extent to which a test measures a representative sample of the subject matter or behaviour under investigation.

108
Q

What are the two types of internal validity?

A

Concurrent validity
Face validity

109
Q

What is concurrent validity?

What is face validity?

A

Assessing concurrent validity involves comparing a new test with an existing test (of the same nature) to see if they produce similar results. If both tests produce similar results, then the new test is said to have concurrent validity.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

110
Q

What are ways of improving the validity of questionnaires?

What are ways of improving the validity of qualitative methods?

What are ways of improving the validity of experiments?

What are ways of improving the validity of observations?

A
  • Anonymity- confidentiality
  • Using a lie scale - controls for social desirability.
  • Triangulation - using a number of different methods.
  • Inclusion of direct quotes demonstrates interpretative validity.
  • Using a control group.
  • Standardised procedures.
  • Single/double-blind trial.
  • Behavioural categories well defined - shouldn’t be too broad, overlapping or ambiguous.
111
Q

What is the purpose of assessing validity?

A

Trying to determine whether results are accurate and can be generalised to other settings.

112
Q

What are the features of science?

A
  • Objectivity
  • Empirical method
  • Replicability
  • Falsifiability
  • Theory construction
  • Hypothesis testing
  • Paradigms
  • Paradigm shifts
113
Q

What is objectivity?

What is the empirical method?

A

Objectivity is a feature of science, and if something is objective it is not affected by the personal feelings and experiences of the researcher. The researcher should remain value-free and unbiased when conducting their investigations.

An empirical method involves the use of objective, quantitative observation in a systematically controlled, replicable situation, in order to test or refine a theory through direct observation or experiment rather than from unfounded beliefs or arguments.

114
Q

What is replicability?

What is falsifiability?

A

It means that a study should produce the same results if repeated exactly, either by the same researcher or by another. The consistency of the data.

It is the principle that a proposition or theory could only be considered scientific if in principle it was possible to establish it as false. One of the criticisms of some branches of psychology, e.g. Freud’s theory, is that they lack falsifiability.

115
Q

What is theory construction?

What is hypothesis testing?

A

A theory is a proposed explanation for the causes of behaviour. To be scientific, a theory needs to be a logically organised set of propositions that defines events, describes relationships among events, and explains and predicts the occurrence of events. A scientific theory should also guide research by offering testable hypotheses that can be rigorously tested.

Hypothesis testing is an inferential procedure that uses sample data to evaluate the credibility of a hypothesis about a population. In other words, we want to be able to make claims about populations based on samples. This ensures the validity is being tested.

116
Q

What is a paradigm?

What are paradigm shifts?

A

A paradigm consists of the basic assumptions, ways of thinking, and methods of study that are commonly accepted by members of a discipline or group.

A paradigm shift, as identified by Thomas Kuhn (1962), is an important change in the basic concepts and experimental practices of a scientific discipline. It is a change from one way of thinking to another and is also referred to as the ‘scientific revolution’.

117
Q

What is inductive reasoning?

A

Makes generalisations from specific observations: data is gathered first, then conclusions are drawn from that data.

  • Bottom-up approach
118
Q

What is deductive reasoning?

A

Starts out with a general statement, or hypothesis, and examines the possibilities in order to reach a specific, logical conclusion.

  • Top-down approach
119
Q

What is a journal article?

A

A journal is a collection of articles that is published regularly throughout the year. Journals present the most recent research, and journal articles are written by psychologists, for psychologists. They may be published in print or online format, or both.

120
Q

What are the features of a journal article?

A
  • Abstract
  • Introduction
  • Method
  • Results
  • Discussion
  • References
121
Q

What is the abstract?

A

A summary of the study including aims, hypotheses, method, results, conclusions and implications. It allows the reader to determine if the rest of the report is worth reading.

122
Q

What is the introduction?

A

A review of previous research that is relevant to the current study. It should lead logically to your research, starting general and becoming specific, and it ends with the researcher stating their aims and hypotheses.

123
Q

What is the method?

A

A detailed description of what the researchers did. There should be enough detail for someone to precisely replicate your study

124
Q

What is the results?

A

Details of the findings which include:

  • Descriptive statistics
  • Inferential statistics
125
Q

What is the discussion?

A

A summary of the results, the relationship it has with previous research, strengths and weaknesses, the implication for theories (real-world applications), the contribution that the investigation has made and suggestions for future research.

126
Q

What is a reference?

A

The full details of any journal articles, books or websites that are mentioned.

For journals:
Author name(s), date, title of article, journal title, volume, page numbers.

For books:
Author name(s), date, title of book, place of publication, publisher.