Methodology Flashcards

1
Q

What is an environmentalist tradition?

A

A type of criminological tradition. Asserts that all people have essentially equal probabilities of running afoul of the law; what alters those probabilities are the social conditions to which each person is exposed. (nurture)

2
Q

What is biosocial tradition?

A

A type of criminological tradition. Asserts that some individuals have certain evolutionary traits that were better adapted to life in a pre-industrial age. These so-called ‘atavistic’ individuals are believed to be especially prone to persistent criminality. (nature)

3
Q

What makes a discipline scientific?

A

Scientific method. Consists of 7 characteristics:
1. Empiricism
2. Verifiability
3. Cumulativeness
4. Self-correcting
5. Determinism
6. Ethical and ideological neutrality
7. Statistical generalisability

4
Q

What is scientific method?

A

The thing that makes a discipline scientific. Consists of 7 characteristics:
1. Empiricism
2. Verifiability
3. Cumulativeness
4. Self-correcting
5. Determinism
6. Ethical and ideological neutrality
7. Statistical generalisability

5
Q

What is empiricism?

A

Empirical phenomena are those which can be sensed (heard, seen etc.). Ultimately, all scientific knowledge rests on what can be perceived.

6
Q

What is verifiability?

A

Assumes that we can use our own empirical observations to confirm or refute the empirical observations made by others, and they can do the same for us.

7
Q

What is cumulativeness?

A

Means that research findings are, in a sense, timeless, since we all build on the work of each other.

8
Q

What is self-correcting (in terms of the scientific method)?

A

When errors in observations are made, sooner or later the mistakes will be identified.

9
Q

What is determinism?

A

The assumption that any explanation given to a phenomenon must entail only empirical (natural), as opposed to supernatural, factors.

10
Q

What is ethical and ideological neutrality (in terms of the scientific method)?

A

Scientists should not allow such things as personal ethics and ideology to influence what is being empirically observed and reported.

11
Q

What is statistical generalisability?

A

The scientific method is less interested in individuals, and more interested in large samples of individuals, in order to statistically generalise about overall patterns.

12
Q

What is critical rationalism?

A

A philosophy of science proposed by Karl Popper, centred on falsification and deductive reasoning.

13
Q

What is falsification?

A

The idea that a theory can only be held as provisionally “correct” after prolonged, unsuccessful attempts to disprove it (it is never proven, merely not yet disproven).

14
Q

What is inductive reasoning?

A

The process of observing, finding patterns, making a hypothesis, and then forming a general theory from those observations.

15
Q

What is deductive reasoning?

A

The process of generating a theory, making a hypothesis, observing, and confirming or not confirming the theory based on observations.

16
Q

What is empirical research?

A

Empirical research is defined as any research where conclusions of the study are strictly drawn from concrete empirical evidence. This empirical evidence can be gathered using quantitative and qualitative research methods.

17
Q

What is observation, within empirical research?

A

Typically the first step of a methodological cycle. An idea is sparked for a hypothesis, and empirical data is gathered.

18
Q

What is induction, within empirical research?

A

Inductive reasoning is carried out to form a general conclusion from the data gathered, through observation.

19
Q

What is deduction, within empirical research?

A

Helps the researcher to deduce a conclusion from their experiment. It is based on logic and rationality, giving specific, unbiased results.

20
Q

What is testing, within empirical research?

A

The use of empirical methods to put a hypothesis to the test. Use of statistical analysis plans (outlining the analytical approach towards quantitative and qualitative data gathered) to see if results prove/disprove hypothesis.

21
Q

What is evaluation, within empirical research?

A

The researcher puts forward the data they have gathered, the supportive argument and the conclusion. They also state the limitations of their experiment and hypothesis, and suggest how others could continue their research.

22
Q

What are the steps in a criminal research project?

A
  1. Clarify research
  2. Define the study design
  3. Sampling
  4. Data collection
  5. Measuring
  6. Analysis
  7. Reporting
23
Q

What is the dark figure of crime?

A

The generally accepted notion that police data/reporting of crime statistics etc. are incomplete.

24
Q

What are the types of research questions?

A
  1. Measurement questions
  2. Descriptive questions
  3. Exploratory questions
  4. Explanatory/causal questions
  5. Evaluative questions
25
What is a measurement question?
Questions that try to find out how to measure variables, e.g. how do we measure psychopathy?
26
What is a descriptive question?
These provide a basic picture of the extent and scope of a phenomenon. They try to describe things. e.g. How can a criminal career be characterised?
27
What is an exploratory question?
These go beyond descriptions, and begin to probe for details surrounding the phenomenon. They often involve conducting in-depth interviews of people affected by some phenomenon, or exploring relationships between phenomena (but not why there is a relationship; that is explanatory). Studies may also attempt to narrow down possible causes of phenomena, but they do not attempt to identify the causes. E.g. does marriage reduce crime?
28
What are causal/explanatory questions?
These seek to identify one or more factors that actually bring about a phenomenon. What mechanisms are there that explain why a specific relationship exists? How does X happen etc.? e.g. Why do people get involved in crime?
29
What are evaluative questions?
After one or more causes of a phenomenon have been identified, scientists sometimes seek to alter the prevalence of that phenomenon. When this is done, research designed to assess how well the intervention worked is needed. E.g. what is the effect of a custodial sentence on reoffending?
30
What are the elements of a good research question?
- Clear
- Focused
- Relevant
- Testable
31
What is a variable?
Empirical phenomena that take on different values or intensities. The measurable representations of the constructs that you are interested in. Typically, these variables are things that we would turn into some kind of data file. If we want to use variables in statistical tests, we have to assign numbers to words.
32
What is a constant?
Any phenomenon that always takes on the same value or intensity.
33
What is the age-crime curve?
A given individual will usually be more criminal at some times in their life than at others (usually a gradual curve, peaking in the middle)
34
What is a conceptual definition of a variable?
Essentially what you find in the dictionary. Familiar terms are used to describe a word or phrase.
35
What is an operational definition of a variable?
These specify in empirical terms precisely what should be done to observe variations in the variable being studied, or exactly how to measure the variable. The process of making an abstract concept (the construct) measurable, as a variable.
36
What is a construct?
An abstract concept, undefined. You can apply operational and conceptual definitions to it.
37
What could the conceptual and operational definitions of crime be?
Conceptual: An act or omission that violates a law, and is punishable by the State, typically involving harm to others, society or property.
Operational: Any behaviour that is formally labelled as illegal by legislative statutes (leading to arrest, prosecution, and punishment).
38
What could the conceptual and operational definitions of victim be?
Conceptual: A person who suffers harm, injury or loss due to another's actions.
Operational: An individual recognised by the criminal justice system as having been harmed by a crime.
39
What could the conceptual and operational definitions of crime rate be?
Conceptual: The number of crimes committed in a particular area, divided by the number of inhabitants in that area.
Operational: Find the number of crimes committed over the relevant time period (official data and/or self-report), and divide this number by the total number of inhabitants. Then multiply by 100,000 for the rate per 100,000 people.
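The operational definition above amounts to a one-line calculation. A minimal Python sketch, using made-up figures (not real crime statistics):

```python
# Hypothetical figures, purely for illustration (not real statistics).
crimes = 1250          # offences recorded in the period
inhabitants = 250_000  # population of the area

# Crime rate per 100,000 inhabitants, following the operational definition.
crime_rate = crimes / inhabitants * 100_000
print(crime_rate)  # 500.0
```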
40
What are the categories of variables?
1. Demographic variables
2. Cultural variables
3. Social institutional variables
4. Behavioural and personality variables
5. Cognitive variables
41
What is a demographic variable?
These pertain to basic human characteristics like age, gender, ethnicity. They are widely used to describe human populations.
42
What is a cultural variable?
Within this category, there are 2 subcategories:
- Artificial cultural variables: the wide variety of material things characterising most human societies, such as the homes they occupy, the neighbourhoods they live in etc.
- Customary cultural variables: values and practices that most members of a particular society share, e.g. languages, day-to-day customs we perform and expect from others, rules we follow.
43
What is an artificial cultural variable?
The wide variety of material things characterising most human societies, such as the homes they occupy, the neighbourhoods they live in etc.
44
What is a customary cultural variable?
Values and practices that most members of a particular society share, e.g. languages, day-to-day customs we perform and expect from others, rules we follow.
45
What is a social institutional variable?
These have to do with the functioning of social institutions and tangible manifestations of culture: economic variables like national unemployment and inflation rates, homicide or assault rates, as well as neighbourhood (dis)organisation.
46
What is a behavioural and personality variable?
How we act as individuals. It is difficult to make a clear distinction between the two, but behavioural variables pertain to more specific acts, while personality variables pertain to acts that are much more consistent and characteristic of a particular person.
47
What is a cognitive variable?
These encompass the emotions we feel, the individual attitudes we hold, the thoughts and intellectual abilities we possess, and the mental health or illnesses we have.
48
What are the main data collection methods we can use to answer a scientific question?
1. Self-report
2. Archival data
3. Observation
49
What is self-report?
A data collection method. Asking a person questions about themselves, and receiving answers which are then studied
50
What types of self-report are there?
1. Interviews
2. Questionnaires
51
What is an interview?
A type of self report, which is a type of data collection method. Interviews involve face-to-face questions, generally qualitative data, but there can be exceptions (quantitative)
52
What is a questionnaire?
A type of self report, which is a type of data collection method. Questions given to respondents in written form- can be seen (often) as quantitative data, since there are ranking systems, but there can be exceptions (qualitative)
53
What is a crime victimisation survey?
A type of self-report (questionnaire), which is a type of data collection method. They involve asking large numbers of people if they have been criminally victimised within some specific time frame, regardless of whether they reported the incident to police.
54
What are the positives of self-report data collection methods?
- Can collect information known only to respondents
- Researcher has control over the data collection
- Includes unreported crimes
- If re-taken, trends can be tracked/predicted
- Anonymity of online surveys can prompt people to give honest answers
55
What are the negatives of self-report data collection methods?
- "Victimless" crimes are harder to include, e.g. drug dealing, gambling, and even victims of murder
- Often surveys are given to households, thus excluding crimes against commercial establishments
- No guarantee that information given by the respondents is true/valid
- Concerns over memory and misreporting
- Can be time-consuming
- Many surveys are convenience based, and the respondents are typically students
56
What does prevalence mean?
The proportion of people who are involved in the behaviour of interest (e.g. committing crimes).
57
What does incidence mean?
The number of (criminal) events that occur.
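The distinction between prevalence (how many people offend) and incidence (how many events occur) can be sketched in Python; the offence records and population size below are invented for illustration:

```python
# Each entry is one recorded offence, labelled with a made-up offender ID.
offences = ["p1", "p1", "p2", "p1", "p3", "p2"]  # 6 events by 3 people
population = 100

incidence = len(offences)                     # number of criminal events
prevalence = len(set(offences)) / population  # proportion of people offending

print(incidence)   # 6
print(prevalence)  # 0.03
```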
58
What does criminality mean?
People's varying propensity to commit crimes and other antisocial acts.
59
What is the telescoping effect?
A type of recall bias. The temporal displacement of a past event in someone's mind, specifically:
- Recent events are perceived as more distant than they are
- Distant events are perceived as more recent than they are
60
What is social desirability bias?
The tendency of respondents to give answers they believe to be more socially acceptable or favourable.
61
What should you do when designing a questionnaire?
1. Make a list of things you want to know
2. Draft different parts of the questionnaire
3. Write the items and response options
4. Check the items and response options
5. Add instructions/notes for the participants
62
What is an item?
A question or statement in a questionnaire, which you want some kind of response to.
63
What types of items are there in a questionnaire?
1. Descriptive items
2. Recall items
3. Attitudinal items
4. Projective items
64
What is a descriptive item?
A question or statement in a questionnaire, that ascertains descriptions about a person, e.g. what is your age? or what is your marital status?
65
What is a recall item?
A question or statement in a questionnaire, that asks someone to recall something from the past, e.g. have you ever been a victim of X?
66
What is an attitudinal item?
A question or statement in a questionnaire, that asks for attitudes or opinions from the respondent, e.g. what do you think of the death penalty?
67
What is a projective item?
A question or statement in a questionnaire, that asks about the future. They can sometimes be called scenarios or vignettes. E.g. What would you do if you witnessed a crime? or are you planning on buying a house in the next year?
68
What are multi-item questionnaires?
Also known as scales. Simple constructs can be assessed with one question (e.g. age or sex), but complex constructs need some kind of measurement, or instrument: a multi-item questionnaire. E.g. an item like 'If things I do upset people, it's their problem, not mine' might form part of such a scale.
69
What is a scale?
A multi-item questionnaire.
70
What is a research instrument?
Any physical thing used to collect data.
71
What are response options?
The range of answers that researchers offer subjects in connection with the questions that are asked.
72
What is coding?
The process of deciding how data will be entered into a standard format for analysis.
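As a minimal sketch of coding in practice, the mapping below turns verbal Likert-style response options into numbers for analysis; the labels and numeric codes are illustrative, not taken from any official codebook:

```python
# Illustrative coding scheme: verbal response options -> numeric codes.
likert_codes = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical raw responses from four participants.
raw_responses = ["agree", "strongly agree", "neutral", "agree"]
coded = [likert_codes[r] for r in raw_responses]
print(coded)  # [4, 5, 3, 4]
```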
73
What are the types of response options for items?
1. Open-ended
2. Fill-in-the-blank
3. End-anchored continuum
4. All-point anchored
5. Ranking
74
What are open-ended response options?
They are response options for items. They present a question/statement without providing constraints on how they are answered. E.g. how would you feel if you ended up in prison?
75
What are fill-in-the-blank response options?
They are response options for items. Respondents must write/say a word or phrase missing from a statement. E.g. I consider __ the most serious crime.
76
What are end-anchored continuum response options?
They are response options for items. Subjects are allowed to respond along a continuum with extreme opposites at each end. Demarcated linear forms run from adjective to adjective (e.g. strong - weak, or good - bad); numerical forms run from 1 to 10.
77
What are all-point anchored response options?
They are response options for items. Options range from strongly agree to strongly disagree, and are also known as Likert scales.
78
What are ranking response options?
They are response options for items. Respondents are given a list to rank in some way.
79
What should be considered when choosing a response option to use in your studies?
- The amount of coding to do later
- Try to measure variables at an interval level or higher
- Do not switch too much between types of response options
- Give subjects considerable latitude in their responses
Items should be clear, neutral and relevant, should provide useful response options, and should ask for one thing at a time.
80
What type of questions should you avoid asking in your studies?
- Rhetorical and leading questions
- Conjunctive items (asking multiple things at once)
- Yea-saying items (philosophical-sounding questions that tend to compel respondents to agree)
- Kernel-of-truth items (partially true statements, especially ones which can sometimes insult)
- Questions about behaviour not bounded by time (and sometimes place)
81
What is a discrete variable?
A variable that exists in two or more segments, regardless of how researchers decide to measure it. E.g. Sex, religion, nationality. Normally, discrete variables are measured at the nominal or ordinal level.
82
What is a continuous variable?
Does not exist in segments (as in discrete), but instead varies gradually from low to high, weak to strong etc. E.g. Age, academic ability, fear of crime. Normally, continuous variables are measured at the interval or ratio level.
83
What are secondary/tangential questions?
These break down a major question into several components, and then focus on answering each component.
84
What is a peer review?
When a journal/manuscript is sent to educated peers for review before publishing.
85
What is a blind peer review?
When a journal/manuscript is sent to educated peers for review before publishing, and the reviewers (also known as referees) are not known to the author.
86
What is archival data?
A data collection method. Official data, looking back in time, always collecting empirical data, but not always collected with a specific research project in mind. E.g. police records
87
What is secondary analysis?
The analysis of archival data
88
What is contemporary archival data?
Relatively recent data, usually since the turn of the 20th century, often compiled by agencies that still exist. E.g. police data (modern)
89
What is historic archival data?
May be centuries old, often (usually) collected by agencies that no longer exist. E.g. church or jail records.
90
What does open access archival data mean?
Archival data that is open to the public.
91
What does restricted archival data mean?
Archival data that needs permission to access it, often because the data is sensitive.
92
What are the pros of archival data?
- Cost and time efficient
- Often contains more information than researchers could collect alone
93
What are the cons of archival data?
- Can be outdated
- Data is often gathered at a group level, leading to ecological fallacies
- Differences between countries (e.g. resources, definitions, biases)
- Not all data is gathered for research purposes, so it could be incomplete
- Doesn't account for the crime funnel
94
What is a review article?
A statistically based literature review.
95
What is a meta-analysis?
A type of review article, in which findings from individual studies are pooled with other findings from similar studies. The studies should always be accompanied by a systematic review.
96
What is the crime funnel?
Of crimes committed, not all are reported, of which not all are investigated, of which not all are prosecuted, of which not all are convicted- which affects data.
97
What is ecological fallacy?
Archival data are often aggregated: they look at groups rather than individuals. Making inferences about individuals based on results from groups is an incorrect assumption; relationships that apply at the group level do not always apply at the individual level.
98
What are narrative reviews?
A method of summarising and synthesising primary studies on a topic, without a premeditated research question, often contributed to by the researcher's own experience and existing bias.
99
What are systematic reviews?
A method of summarising and synthesising primary studies on a topic. Authors have been careful to be more exhaustive than in narrative reviews. They are different from narrative reviews, because they:
- Have a protocol for literature search and data extraction
- Aim to be comprehensive
- Appraise the quality of a study
100
What is Cohen's d?
The standardised difference between two means. Usually represented in forest plots.
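One common way to compute Cohen's d divides the difference between the two group means by the pooled standard deviation. A Python sketch with made-up scores (the groups and values are purely illustrative):

```python
import math

def cohens_d(group1, group2):
    """Standardised mean difference between two groups, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator).
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Made-up scores for illustration.
treated = [4, 5, 6, 5, 4]
control = [2, 3, 2, 3, 2]
print(round(cohens_d(treated, control), 2))  # 3.39
```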
101
What is triangulation?
Putting together findings of different data sources and analysing them to gain understanding. (Different to meta and review analyses, which are more "book" orientated).
102
What is observation?
A data collection method. The investigator simply observes, without manipulating any variables.
103
What type of observation studies are there?
1. Cohort studies
2. Cross-sectional studies
3. Case-control studies
104
What is a cohort study?
A type of observation study. They describe incidence or natural history. They measure events in chronological order, so they can be used to find causes and effects. They may be prospective or retrospective.
105
What is a prospective cohort study?
A type of cohort study, which is a type of observation study. People in the sample are observed to see whether they develop the outcome of interest.
106
What are some pros and cons of prospective cohort studies?
Pros:
- Can measure causes separately from effects, so there is no debate
- A single study can examine various outcome variables
- Can calculate the effect of variables on the probability of developing the outcome
Cons:
- Subjects don't always follow up
- The rarer the condition studied, the harder it can be to determine causes and effects
107
What is a retrospective cohort study?
A type of cohort study, which is a type of observational study. They use data already collected for other purposes. People in the sample are observed to see whether they developed the outcome of interest. The cohort is followed up retrospectively.
108
What are some pros and cons of retrospective cohort studies?
Pros:
- Cheaper, since data is already collected
- Lack of bias, since the study was for different purposes
Cons:
- Unlikely that all relevant information will have been collected
- Following up with people later can lead to recall bias
109
What are confounding variables?
When two cohorts are compared, one will be exposed to the agent of interest and one will not. However, it is not always possible to control all other factors that differ between groups. This means that confounding variables occur, which are associated with both the exposure and outcome that is not the studied variable. Confounding variables are a threat to the internal validity of a study, or the extent to which its results indicate a cause-and-effect relationship between X and Y.
110
What are cross-sectional studies?
A type of observation study. Used to determine prevalence. All measurements on each person are made at one point in time, so they do not permit distinctions between cause and effect and cannot be used to infer causation. Subjects are not exposed/treated, so there are seldom ethical difficulties.
111
What are some pros and cons of cross-sectional studies?
Pros:
- Quick and cheap
- Best way to determine prevalence
- Useful at identifying associations
Cons:
- Cannot differentiate cause and effect
- Often multiple plausible explanations
- Rare conditions cannot be effectively studied
112
What are case-control studies?
A type of observation study. They compare groups retrospectively. They seek to identify possible predictors of an outcome. Used to generate hypotheses that can be studied via prospective cohort or other studies. People with the outcome of interest are matched to a control group without it. Retrospectively, the researcher determines which individuals were exposed to the agent.
113
What are some pros and cons of case-control studies?
Pros:
- Good for rare conditions
- Only option when there is a long time between exposure and outcome
- Fewer subjects required to test
- Useful for generating hypotheses for other studies
Cons:
- Restricted outcomes: the only outcomes possible are the presence or absence of the subject of study
- Big concerns surrounding bias: sampling bias, and observation and recall bias
114
What is sampling bias?
The bias within test or control groups. Sampling bias occurs when a sample used in a study or experiment is not representative of the population from which it was drawn.
115
What is observation and recall bias?
Biased assessment of (predictor) variables, their presence and significance, by the subject and/or the investigator.
116
What is latency?
A period of time between exposure to an agent and the "development of symptoms", signs or other evidence of changes associated with exposure.
117
What is odds ratio?
The ratio of the probability of an event occurring to the probability of the non-occurrence.
118
What is relative risk?
The ratio of the probability of developing the condition if exposed to a certain variable, compared with the probability if not exposed.
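Both the odds ratio and the relative risk can be computed from a simple 2x2 table of exposure against outcome; the counts below are invented for illustration:

```python
# Hypothetical 2x2 table (made-up counts):
#                outcome   no outcome
# exposed           a=20        b=80
# unexposed         c=10        d=90
a, b, c, d = 20, 80, 10, 90

# Odds of the outcome if exposed, versus odds if not exposed.
odds_ratio = (a / b) / (c / d)
# Risk (probability) of the outcome if exposed, versus risk if not exposed.
relative_risk = (a / (a + b)) / (c / (c + d))

print(round(odds_ratio, 2))     # 2.25
print(round(relative_risk, 2))  # 2.0
```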
119
What is internal validity?
The rigour with which a study has been designed and executed- that is, can the experiment be relied upon? It largely depends on study design.
120
What is external validity?
The usefulness of findings of a study with respect to other populations.
121
What is an independent variable?
A variable that is thought to cause the dependent variable.
122
What is the dependent variable?
A variable that is thought to be caused by the independent variable.
123
What are the conditions for establishing causality between variables?
1. X must occur before Y (temporally)
2. X is statistically related to Y (there has to be statistical evidence that they are related)
3. No confounding variables involved (the statistical relationship between X and Y must not be due to a third factor Z that is related to both)
124
What is an experimental design?
A research method where the researcher manipulates one or more independent variables to observe their effect on the dependent variable, usually within controlled conditions. This design helps establish cause-and-effect relationships by isolating the variable of interest and minimising the influence of confounding variables. Participants are often randomly assigned to groups.
125
What groups are there in an experimental design?
1. The experimental group
2. The control group
126
What is an experimental group?
A group within an experimental design. Comprised of participants who receive a certain intervention. The difference between the results of the experimental group and the control group will show how effective the intervention was. People are often blind to the knowledge of which group they are in, to avoid bias.
127
What is a control group?
A group within an experimental design. Comprised of participants who do not receive an intervention, or receive a different one. The difference between the results of the experimental group and the control group will show how effective the intervention was. People are often blind to the knowledge of which group they are in, to avoid bias.
128
What is random assignment?
Random determination of who is in the control group versus who is in the experimental group. If, after random assignment, you can still see similarities amongst people within certain groups, ignore it: random assignment is trustworthy enough that the groups do not need to be changed further.
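A minimal sketch of random assignment: shuffle the participants, then split them into experimental and control groups. The participant IDs are hypothetical, and the seed is fixed only so the example is reproducible:

```python
import random

# Hypothetical participant IDs.
participants = ["p01", "p02", "p03", "p04", "p05", "p06"]

rng = random.Random(42)  # seeded only for reproducibility
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled list in half: first half experimental, second half control.
half = len(shuffled) // 2
experimental = shuffled[:half]
control = shuffled[half:]
print(experimental, control)
```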
129
What are the characteristics of the classical experimental design?
1. The independent variable is manipulated by the researcher
2. Experimental and control groups are involved
3. Random assignment of participants
4. Measurement of the dependent variable before and after the independent variable is introduced
130
What is a quasi-experimental design?
In criminology, the classic experimental design often cannot be used because of practical or ethical constraints. An alternative, quasi-experimental design is possible (though most criminological research is observational). In quasi-experimental designs, the independent variable is still controlled by the researchers, but one or more of the features of the classical experimental design is not present (e.g. there may be no control group (reversal design), no random assignment, or no pretest (after-only design)).
131
What is reversal design?
An example of a quasi-experimental design.
1. The independent variable is manipulated by the researcher
2. Only an experimental group (experimental and control conditions occur within the same participants)
3. No random assignment (the same participants undergo all phases)
4. The dependent variable is repeatedly measured under experimental and control conditions, before, during and after the independent variable is introduced (rather than having two separate groups, there is one that sometimes receives the intervention and sometimes doesn't)
132
What is an after-only design?
An example of a quasi-experimental design, which skips the pretest and only measures the dependent variable after the intervention.
1. The independent variable is manipulated by the researcher
2. Experimental and control groups are involved
3. Random assignment of participants
4. Measurement of the dependent variable is taken only after the independent variable is introduced (no pretest)
133
What are some problems of reversal design experimental groups?
- Withholding/not giving every person the item or treatment can be bad, e.g. withholding medication
- No account is taken of the learning effect (i.e. that people will learn from the experiment and change accordingly during it)
134
What are some problems of the after-only experimental design?
- When you don't measure beforehand, you can't be sure of a lack of bias
- Placebo effect
- Retesting bias
- Limited ecological validity (application in the real world)
- Hawthorne effect
- Spill-over effect
135
What is retesting bias?
If you test people before, and then test them after, they may remember how they previously responded, which can influence results
136
What is the Hawthorne effect?
Knowing that you are participating in a study and being watched can influence your behaviour
137
What is spill-over effect?
When a particular intervention is introduced to a group, they may still interact with other experimental or control groups, meaning that your intervention may be received in those groups, too.
138
What is an observational design?
Refers to a type of research method where the researcher observes and records behaviour without manipulating any variables.
139
What is the hierarchy of evidence?
Top: Reviews
- Meta-analysis
- Systematic review
- Narrative review
Middle: Experiments
- Classical
- Quasi-experimental
Bottom: Observational
- Cohort
- Case-control
- Cross-sectional
140
What is a population?
A group of people who share one or more characteristics that are of interest to the researcher
141
What is a sample?
A subset of a population
142
What is a representative sample?
A sample whose members possess all characteristics in the same proportion as the population
143
What is probability sampling?
A sampling method in which:
- Each person has a known, non-zero chance (in the simplest case, the same chance) of being selected
- The sample can be considered representative
- A sampling frame is required
144
What is a sampling frame?
A list of everyone in the population
145
What is non-probability sampling?
A sampling method in which:
- Some people have a higher chance of being selected than others
- The sample cannot a priori be considered representative (unless checked after the fact)
- No sampling frame is required
- Generally seen as weaker than probability sampling; you should not use non-probability methods when you want to generalise your findings to a wider population
146
What does a priori mean?
Refers to knowledge or justification that is independent of experience. Truths can be known through reasoning rather than empirical observation.
147
What are the types of probability sampling methods?
1. Simple random sample
2. Stratified sample
3. Systematic sample
4. Cluster sample
148
What are the types of non-probability sampling methods?
1. Convenience sample
2. Quota sample
3. Snowball sample
149
What is a simple random sampling method?
Members are picked at random from the population, and everyone picked belongs to the sample. Everyone has an equal chance of being included in the sample (though this can be difficult to achieve in practice).
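A minimal sketch of simple random sampling using Python's standard library; the numbered-ID population and the sample size of 10 are hypothetical.

```python
import random

# Hypothetical sampling frame: every member of a small population, numbered 1-100.
population = list(range(1, 101))

random.seed(42)  # fixed seed so the example is reproducible
# random.sample draws without replacement; every member has an equal chance of selection.
sample = random.sample(population, k=10)
```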
150
What is a stratified sampling method?
You have a population in which some subgroups differ in size (e.g. more men than women). You split the population into strata (subpopulations) and then randomly select from each stratum; this ensures every subgroup is represented.
151
What is a disproportional stratified sampling method?
A stratified sample in which an equal number of people is drawn from each stratum. This means there is no longer an equal chance of being included: a larger share of the smaller subgroup (e.g. the women) is selected than of the larger one, relative to how many were originally available.
152
What is a proportional stratified sampling method?
You still have strata, but each subgroup is sampled in proportion to its size in the population. This gives better estimates of the actual population, but if you want to know more about a smaller subgroup, you are at a disadvantage.
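The proportional and disproportional variants can be contrasted in a short Python sketch; the 70/30 split and sample sizes are hypothetical.

```python
import random

random.seed(0)
# Hypothetical population with unequal subgroups: 70 men, 30 women.
strata = {
    "men":   [f"m{i}" for i in range(70)],
    "women": [f"w{i}" for i in range(30)],
}
population_size = sum(len(members) for members in strata.values())
sample_size = 10

# Proportional: each stratum contributes according to its share of the population.
proportional = {
    name: random.sample(members, k=round(sample_size * len(members) / population_size))
    for name, members in strata.items()
}

# Disproportional: each stratum contributes the same number, regardless of its size.
disproportional = {name: random.sample(members, k=5) for name, members in strata.items()}
```

With this split, the proportional sample contains 7 men and 3 women, while the disproportional sample contains 5 of each, so women are over-represented relative to their population share.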
153
What are strata?
Subpopulations within a population
154
What is a systematic sampling method?
The sampling frame is laid out in a row, and every kth entity is sampled. With a fixed starting point, the individual chances of being selected are not the same: if your position is not a multiple of k, you stand no chance. If you have 100 people and you need 3, you should not simply take the 3rd, 6th and 9th; spread the interval across the whole frame so that those at the end are also represented. Requirements:
- A random place to start
- No periodicity with the same frequency as k (no pattern in the frame that recurs every k entries, e.g. every kth person happening to be German, pregnant, male, or under 50)
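The mechanics can be sketched in Python; the frame of 100 entities and the interval k = 10 are hypothetical.

```python
import random

random.seed(1)
frame = list(range(1, 101))  # the sampling frame, laid out in a row
k = 10                       # sample every 10th entity

# A random starting point ensures entities at every position can be selected.
start = random.randrange(k)
sample = frame[start::k]
```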
155
What is a cluster sampling method?
Sampling of naturally occurring groups (clusters) within a population (e.g. prisons, schools, classes, sports clubs, cities). A random sample of clusters is selected, and then everyone within the selected clusters is included in the sample.
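A small Python sketch of cluster sampling; the class names and members are invented.

```python
import random

random.seed(3)
# Hypothetical naturally occurring clusters, e.g. school classes.
clusters = {
    "class_a": ["ann", "ben", "cas"],
    "class_b": ["dee", "eli"],
    "class_c": ["fay", "gus", "hal", "ivy"],
}

# Randomly select clusters, then include *everyone* in the selected clusters.
chosen = random.sample(list(clusters), k=2)
sample = [person for name in chosen for person in clusters[name]]
```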
156
What is a convenience sampling method?
Individuals are selected because they can be easily reached. An example would be that a lot of psychology studies are done using first-year psychology students. As long as you report on what your sample looks like when sharing your research results, this can be ok.
157
What is a quota sampling method?
The non-probabilistic equivalent of stratified sampling: you create subpopulations and draw a non-probability (rather than random) sample from each. A cousin of stratified sampling, except that stratified sampling is probability-based and quota sampling is not.
158
What is a snowball sampling method?
Respondents help to recruit new respondents. Used when respondents are difficult to locate, or unlikely or unwilling to participate. This can make the sample less random, since respondents know each other and are likely to share commonalities.
159
Why is it important to have a large sample size?
- Representativeness (a better representation of the wider population)
- Statistical power (enough data to detect a difference where one exists, and to conclude something meaningful)
- Precision (the larger the sample, the more precise your estimates)
160
What is a chosen sample?
Those who are selected and invited to participate in the study
161
What is the obtained sample?
Participants who participate in a study (not all who are chosen participate)
162
What is sample attrition?
The difference between the chosen sample and the obtained sample.
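The arithmetic is simple; the figures below are purely hypothetical.

```python
# Hypothetical figures: 500 people invited (chosen sample), 380 participated (obtained sample).
chosen_sample = 500
obtained_sample = 380

# Sample attrition is the difference between the chosen and obtained samples.
attrition = chosen_sample - obtained_sample
attrition_rate = attrition / chosen_sample  # often reported as a proportion
```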
163
Why can sample attrition occur?
- Unwillingness to participate
- Inability to participate
- Participants cannot be contacted
- Participants do not follow the study protocol
164
What is attrition bias?
Occurs when participants who do not (or no longer) participate differ systematically from those who do. This threatens both the internal validity (are our results measuring what we claim to measure? The study group may no longer be appropriate for the research question) and the external validity (the extent to which findings generalise to the wider population) of a study. If attrition bias occurs, you can still publish your results, as long as you qualify them as applying only to the cohort that actually participated in the study.
165
How can you minimise sample attrition?
- Notifying respondents beforehand
- Including a polite and clear introduction
- Personalising the correspondence
- Explaining the importance of your study
- Having a well-respected or well-liked sponsor
- Keeping the questionnaire short
- Making sure the questionnaire is neatly presented
- Compensating participants (though this may affect the type of people who take part)
166
What is operationalisation?
The process of making an abstract concept (the construct) measurable (as a variable) for a study.
167
What are the steps of operationalisation?
1. Identifying the construct
2. Defining the construct (conceptual definition)
3. Conceiving a method to measure the conceptual definition (operational definition)
168
What are the levels of measurement?
1. Nominal
2. Ordinal
3. Interval
4. Ratio
169
What is the nominal level of measurement?
Categories that cannot be ranked. E.g. countries, gender, eye colour, type of crime
170
What is the ordinal level of measurement?
Categories have a logical order but no quantifiable distances between them. E.g. opinions (strongly agree, agree etc.), Olympic medals, severity of crimes
171
What is the interval level of measurement?
Categories have a logical order and have equal distances between them. E.g. acidity, temperature
172
What is the ratio level of measurement?
Categories have a logical order, have equal distances between them, and there is a meaningful zero point (0 means the absence of the quantity). E.g. income, age
173
What does reliability mean?
The overall consistency of a measure. Based on the idea that our observations consist of a true value plus some measurement error.
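The "true value + measurement error" idea can be illustrated with a toy simulation; the true score of 100 and the error spread of 5 are invented.

```python
import random

random.seed(7)
# Toy model: each observation = true value + random measurement error.
true_value = 100.0
observations = [true_value + random.gauss(0, 5) for _ in range(1000)]

# A reliable instrument has small error; across many repeated
# measurements the random error tends to cancel out.
mean_observed = sum(observations) / len(observations)
```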
174
What does validity mean?
The degree to which one is measuring what is supposed to be measured (e.g. the validity of an IQ test giving valid indications on the level of intelligence)
175
How can reliability be measured?
1. Measurement error caused by differences between assessors: inter-rater reliability
2. Measurement error caused by situational factors: test-retest reliability
3. Measurement error caused by the instrument itself: split-half reliability
176
What is inter-rater reliability?
Agreement between measurements taken with the same instrument by multiple assessors. Results are cross-referenced between assessors in a table to see where conclusions agree or differ, and the reliability is then typically calculated.
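One common way to calculate inter-rater reliability is Cohen's kappa, which corrects raw agreement for agreement expected by chance (the card only says "the reliability is then calculated"; kappa is a standard choice, and the ratings below are invented).

```python
from collections import Counter

# Hypothetical ratings of the same 10 cases by two assessors.
rater_1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes"]
rater_2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "no", "yes", "yes"]

n = len(rater_1)
# Observed agreement: share of cases where both assessors give the same rating.
p_observed = sum(a == b for a, b in zip(rater_1, rater_2)) / n

# Chance agreement: expected overlap given each rater's marginal frequencies.
c1, c2 = Counter(rater_1), Counter(rater_2)
p_chance = sum(c1[cat] / n * c2[cat] / n for cat in c1)

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = (p_observed - p_chance) / (1 - p_chance)
```

Here the raters agree on 8 of 10 cases (0.8), but since chance alone predicts 0.52 agreement, kappa is a more modest 0.58.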
177
What is test-retest reliability?
Agreement between measurements taken with the same instrument on multiple occasions. Certain conditions should be kept the same, including the observers, location and time of day. Ideally, when the test is taken again, the result remains the same.
178
What is split-half reliability?
Agreement between measurements taken with the two halves of an instrument (e.g. even- vs odd-numbered items). A measure of internal consistency: the extent to which the items of an instrument measure one construct.
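A minimal sketch of split-half reliability: correlate the odd- and even-item half-scores, then apply the Spearman-Brown correction to estimate full-length reliability (the correction is standard practice, though not mentioned in the card; the respondent scores are invented).

```python
# Hypothetical scores of 6 respondents on an 8-item instrument (items scored 1-5).
scores = [
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 1, 2, 2, 1, 2, 1, 2],
    [3, 3, 4, 3, 3, 4, 3, 3],
    [5, 4, 5, 5, 4, 5, 5, 5],
    [1, 2, 1, 1, 2, 1, 2, 1],
    [3, 4, 3, 3, 4, 3, 4, 3],
]

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Split each respondent's items into odd- and even-numbered halves and total each half.
odd_totals = [sum(row[0::2]) for row in scores]
even_totals = [sum(row[1::2]) for row in scores]

# High agreement between the halves suggests the items measure one construct.
half_r = pearson(odd_totals, even_totals)

# Spearman-Brown correction: estimated reliability of the full-length instrument.
reliability = 2 * half_r / (1 + half_r)
```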
179
What is face validity?
The suitability of an instrument based on its overall appearance. If you want to measure people's criminal tendency, weighing them would appear inappropriate, though we should keep our minds open to more creative ways of measuring.
180
What is content validity?
The extent to which an instrument covers all aspects of the construct.
181
What is comparative validity?
The correspondence between scores on instruments designed to measure the same construct.
182
How can you maximise the validity of self-report data?
- Assurance of anonymity or confidentiality
- Clearly phrased questions
- Giving examples
- Non-judgemental phrasing or prefacing of questions
- Short reference periods
- 'Warm-up' questions
183
How is research communicated?
- Academic: journals, books, conferences
- Non-academic: policy briefs, government reports, social media posts, blogs, public lectures, media interviews, stakeholder meetings
184
Why is the reporting of findings important?
1. Research tries to increase our understanding of something (the problem)
2. Through this, the research seeks to improve our lives (the solution)
3. Effective communication is vital to ensure that research findings reach the people who can implement them (knowledge transfer)
185
What is a stakeholder?
People who have an interest in your research. Potential stakeholders include academics, policy makers, professionals, the target group and the general public.
185
Why is the reporting of methods important?
1. Transparency
2. Replication
186
What are the main components of scientific papers?
1. Introduction - explains the "problem" that the study tries to resolve
2. Methods - describes the actions taken (e.g. data collection, study design, analyses) to resolve the "problem"
3. Results - presents the findings from the analyses, which form the basis of the "solution" to the "problem"
4. Discussion - interprets the findings and makes recommendations for practice (the "solution") and for future research
Other components:
- References
- Conflict of interest
- Author contributions
- Acknowledgements
- Title (and subtitle)
- Authors and affiliations
- Abstract
187
What kinds of referencing are there?
- In-text citations
- Reference list
188
What circumstances can raise ethical questions within research activities?
- Involvement of people or animals
- Environmental consequences
- Impacts on society or future generations
189
What are some codes and policies that guide research ethics?
- International: the Declaration of Helsinki
- National: codes of ethics
- Local: ethics committees
190
What are some general principles of ethics within research?
- Informed consent
- No deception
- Minimising harm
- Anonymity or confidentiality
- Approval by an ethics committee
191
What is informed consent in research?
A principle of ethical research. Prospective subjects must be fully informed about the study before they agree to participate. Consent can be:
- Explicit: signed by the subject
- Implicit: e.g. completing an anonymous questionnaire
192
What is a lack of deception in research?
A principle of ethical research: deception is not allowed, whether
- by omission: giving incomplete information, or
- "orchestrated": instructions that cause subjects to believe something that is not true
(there are exceptions)
193
What are some exceptions of the prohibition of deception in research?
- Observations in public spaces (it can be difficult to contact everyone present in public)
- Ethnographic studies (observing people's behaviour by "infiltrating" communities)
- Experiments, under certain (strict) conditions (some information is given, but not the whole truth, so as not to ruin the study; participants must be fully debriefed afterwards!)
194
What is anonymity in research?
The researcher does not know who the participants are
195
What is confidentiality in research?
The researcher knows who the participants are, but removes identifying information from the dataset and report
196
How do you ensure confidentiality in research?
- Train all researchers in research ethics
- Restrict access to the participants and their data
- Remove identifying information from data files
- Store files securely (e.g. locked cabinets, encryption)
197
What do ethics committees do?
Oversee research involving human participants. They:
- are local and field-specific
- review proposals for research
- are accredited, but generally supervised
198
What is scientific misconduct?
Typically has to do with the reporting of research: "intention or gross negligence leading to fabrication of the scientific message, or a false credit or emphasis given to a scientist"
199
What are the (broad) types of scientific misconduct?
- Plagiarism
- Fabrication of data
- Falsification of data
200
What is plagiarism?
Intentionally presenting someone else's writing or ideas as your own.
201
What types of plagiarism are there?
1. Copying passages without credit
2. Ghost-writing
3. Improper authorship
202
What is fabrication?
Making up data. Includes the intentional use of references that are fake or do not support an argument.
203
What is falsification?
Misrepresenting data by manipulating results. For example, you collected real data, but you leave certain people out of your sample so that it answers the question in the way you want.