Flashcards
Epistemology
- What is the nature of knowledge?
- And when is this knowledge scientific?
In other words, what is the proper
way to study things?
Positivism and Interpretivism
Ontology
What is reality?
* Age-old philosophical debate
* How should we consider the being of things
What is social reality?
* Is there such a thing as a social reality?
* Is this reality external to us?
2 positions
→Objectivism
→Constructionism
Relation theory-research
Deductive and Inductive
CAUSE → EFFECT
Some criteria
* Plausibility: good reason why the cause leads to the effect (theory)
* Temporality: the cause comes before the effect
* Consistency: the cause invariably leads to the effect
Causality
Remember intervening effect and confounding effect
Spurious relation
A & B (mere coincidence)
Cohort
Different people are selected on the basis of a recurring
criterion (age, sex, …)
Surveying 18 year-olds in 1991, 2001, 2011, 2021, …
Panel
Single group of people is selected once and then
studied repeatedly
Surveying people who were 18 in 1991 again in 2001,
2011, 2021, …
So, if you have an idea of what happens in many
situations, what happens most or more often?
→ Some things are more likely or … probable
→ Pattern
Theory
A coherent account that makes a part of reality
understandable by identifying patterns
And how can you be sure it is not a personal interpretation?
Standardized categories free from personal viewpoint
If you think that you know how reality works,
you might be inclined to only look for confirmation
and ignore clues that suggest otherwise
VERIFICATION BIAS
Hypothesis
Concrete claim about reality, on the level of concrete observations
Conceptualization
Delineating what your theory is about specifically
Likert scale
Set of interrelated statements (indicators or items)
about a certain topic for which people can state
their level of (dis-)agreement on a scale (1-5 or
1-7 or …)
Internal reliability =
consistency of indicators
Example
Likert scale
Attitudes about importance of art
* I like art
* Art makes you a better person
* Art is interesting
* Art is important for society
→ On average, people who agree with one statement should also agree with the other statements
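Internal reliability of a Likert scale is commonly summarised with a coefficient such as Cronbach's alpha (the coefficient is not named in these notes; it is added here as a standard example). A minimal sketch with made-up agreement scores for the four art items:

```python
# Minimal sketch (made-up data): Cronbach's alpha for the four "importance of art" items.
# Rows = respondents, columns = items, values = agreement on a 1-5 scale.
import numpy as np

scores = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

print(round(cronbach_alpha(scores), 2))  # values near 1 = consistent indicators
```

The intuition matches the card above: if people who agree with one statement also agree with the others, the items covary, the variance of the summed score is large relative to the sum of the individual item variances, and alpha comes out high.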
convergent validity
How does the operationalization compare to
another operationalization of that concept?
If a concept can be studied in various ways, the results
should nonetheless be similar
Comparable to internal reliability, but on the level of
variables, rather than indicators
The validity of a measure ought to be gauged by comparing it to measures of the same concept developed through other methods
Example
Popularity of films
* could be studied by counting visitors
* or by looking at box-office revenue
* or by looking at streams
→ Similar figures should result
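A simple way to check convergent validity in practice is to correlate the alternative operationalizations directly. A minimal sketch with made-up figures for the film example:

```python
# Minimal sketch (made-up figures): three operationalizations of "popularity of films".
# If they measure the same concept, they should correlate highly across films.
import numpy as np

visitors = np.array([120_000, 80_000, 200_000, 30_000, 150_000])
box_office = np.array([1.4e6, 0.9e6, 2.3e6, 0.3e6, 1.7e6])
streams = np.array([900_000, 600_000, 1_500_000, 250_000, 1_100_000])

print(np.corrcoef(visitors, box_office)[0, 1])  # high correlation -> similar figures
print(np.corrcoef(visitors, streams)[0, 1])
```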
construct validity
Theories establish patterns between concepts,
and variables are operationalizations of concepts,
so if variables behave in the way that is theoretically expected,
they are likely to be valid operationalizations of concepts
Example
Theory says that …
higher social positions consume more exclusive art
Research finds that …
people with higher educational degree and income go more
to opera→ Opera visits is a valid variable to operationalize
exclusive art
Dependent variables (outcome variable)
- Because their variation depends on something else that is being studied.
- The category a case belongs to on this variable is the effect of belonging to a certain category on a different variable.
E.g. Racial prejudice might be regarded as the dependent variable, which is to be explained, and authoritarianism as an independent variable, which therefore has a causal influence upon prejudice.
E.g. ‘Children cry more than adults’; age (independent variable) → crying (dependent variable).
But there are also adults who cry, so…
Causality in this example is probabilistic.
Observations on dependent variables can only be the outcome of independent variables.
Independent variables (explaining variable)
- Because their variation does not depend on something else that is being studied.
- The category a case belongs to on this variable causes that case to belong to a certain category on a different variable.
E.g. Racial prejudice might be regarded as the dependent variable, which is to be explained, and authoritarianism as an independent variable, which therefore has a causal influence upon prejudice.
E.g. ‘Children cry more than adults’; age (independent variable) → crying (dependent variable).
But there are also adults who cry, so…
Causality in this example is probabilistic.
Observations on dependent variables can only be the outcome of independent variables.
Sources for research questions
Where do your research questions come from?
Why?
The research question provides a motivation for the study. It justifies why the study is relevant.
- Personal interest.
Not a very formal reason and not often mentioned.
But very important (a precondition really).
- Theoretical puzzles.
Theory identifies patterns.
But theories can be inconsistent.
For example, one view, which draws on rational choice theory, depicts street robbery as motivated by a trade-off between the desire for financial gain against the necessity to reduce the likelihood of detection. The other view of street robbery portrays it as a cultural activity from which perpetrators derived an emotional thrill and which helped to sustain a particular lifestyle.
Or seemingly unable to explain certain observations.
- Gaps in the literature.
Theory does not take into account certain situations.
The chief strategies for doing this are: spotting overlooked or under-researched areas and identifying areas of research that have not been previously examined using a particular theory or perspective.
- New developments in society.
Does the theory still hold given the new circumstances? Do we expect to find new things?
Examples might include the rise of the internet and the diffusion of new models of organisation – for example, call centres.
- Social problems.
Pressing or urgent situations call for insight.
An example might be the impact of asylum-seekers being viewed as a social problem by some sectors of society. This seems to have been one of the main
Research question – the core of studies
As research questions justify the study, they are the bridge between general theory and concrete hypotheses.
By indicating the main concepts of interest and why they are related.
Usually they can be found at the beginning of a report.
Relevance (scientific, societal).
Justified by making connections to literature.
Lead up to concrete specifics of study.
- Translated into hypotheses.
- Lead to methodological choices (because some things can be studied better in specific ways).
Types of research questions
Which are the main forms of research questions?
What? The research question indicates what it will tell about reality. It mentions the type of pattern that will be studied.
- Descriptive
What are things like?
o Characteristics
Which things go together?
o Associations between characteristics, typologies.
But often not so clear-cut.
Probabilistic
o Which things tend to go together?
Gradual
o To what extent?
E.g. What are the defining characteristics of museum visitors?
- Causal
What causes something?
What is the effect of something?
But, often more complex.
Multi-causality
o What are the causes, and what are the effects?
Gradual
o To what extent does something cause?
Probabilistic
o What generally causes?
E.g. Why do people go to museums?
- Comparative
What are the differences? Between cases, places, time periods.
But often including:
Descriptive
o In which respects do cases differ, in which respects are they similar?
Causal
o How do causes differ?
Probabilistic and gradual
o To what extent do they differ / are they similar?
E.g. How do visitors of modern art museums differ from visitors of natural science museums?
- Interpretive
How are things? How can we make sense of reality?
o Often qualitative.
E.g. How do visitors of modern art museums experience the collection?
In any case;
* Often there is more than one question in one study or there is a main question with sub-questions.
* Various types of questions can be addressed in one study (descriptive, and comparative, and causal).
* Often research questions are not explicitly stated, but they are there.
Criteria for research questions
- Clear and unambiguous.
- Researchable, and testable (for quantitative).
o This means that they should not be formulated in terms that are so abstract they cannot be converted into researchable terms.
- Have connection(s) with established theory and earlier research.
o This means that there should be a literature on which you can draw to help illuminate how your research questions should be approached.
- Links between various research (sub)questions.
o Research questions should be linked to each other. Unrelated research questions are unlikely to be acceptable since you should be developing an argument.
- Make an original contribution, except for the case of replications.
- Not too broad – not too narrow.
o Not too broad so that you’d need a massive grant to study them, not too narrow so that you cannot make a reasonably significant contribution to the area of study.
- They should indeed be questions (not hypotheses).
Examples:
* How does social media affect self-awareness in teenagers? (causal)
* How are gender stereotypes represented in press coverage of rock ‘n roll lifestyles? (descriptive/interpretive)
* How do parents deal with teenage sexuality in the Netherlands and the U.S.A.? (comparative)
Lecture 5: The Survey
- Self-administered (written).
o On paper.
o Web survey.
- Structured interview (oral).
o Face-to-face.
o By telephone.
Surveys in general
Advantages:
* Highly standardised
Information is collected in a unified way, which makes cases comparable.
o No differences in treatment between cases.
o No (or rather, less) room for interpretation.
* Efficient
Data can easily be collected on large numbers of people (or cases).
o Straightforward.
o Fast.
Disadvantages
* Can reality be captured on paper?
* Will only provide information on what you explicitly take into account (‘Don’t ask, don’t know’).
* Possible answers are limited.
* Are questions/answer categories indeed interpreted similarly?
* No nuances (often only one option can be selected and no room for explaining answers).
Priming effect
If a respondent indicates the arts are important, they will feel like they cannot afford to underestimate their participation in the arts.
There are three solutions to the priming effect:
- Keep the sequence fixed.
o So effects do not differ between respondents.
- Use more than one sequence.
o To get an idea of the effect of the sequence.
- Randomise the sequence.
o So effects for some respondents are offset by countereffects for others.
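A minimal sketch of the third solution, giving each respondent the questions in a random order so that priming effects average out across respondents (the question texts are made up for illustration):

```python
# Minimal sketch: randomise the question sequence per respondent.
import random

questions = [
    "How important are the arts to you?",            # made-up question wording
    "How often did you visit a museum last year?",
    "How often did you attend a concert last year?",
]

def questionnaire_for(respondent_id: int) -> list[str]:
    order = questions.copy()
    random.Random(respondent_id).shuffle(order)      # a reproducible random sequence per respondent
    return order

print(questionnaire_for(1))
print(questionnaire_for(2))
```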
As you will only learn what you ask, questions should be clear.
Three rules of thumb:
- Always bear in mind your research questions.
o The questions for the self-completion questionnaire or structured interview should always be geared to answering your research questions.
- What do you want to know specifically?
o Do you have a car?
(What does ‘have’ mean? Own? At your disposal?)
- How would you answer?
o Put yourself in the position of the respondent. Does it make sense to you yourself?
- Overlapping categories
Categories should be mutually exclusive, otherwise respondents won’t know what to select (a sketch of non-overlapping categories follows this list of pitfalls).
How often do you go to the cinema?
A. Less than once a year.
B. 1-5 times a year.
C. 5-10 times a year.
D. 10 or more times a year.
- Leading questions
Questions should not suggest a ‘right’ answer. Questions of the kind ‘Do you agree with the view that…?’ fall into this class of question. The obvious problem with such a question is that it is suggesting a particular reply to respondents, although invariably they do have the ability to rebut any implied answer.
E.g. Have you read any of the classic masterpieces of literature?
- Double questions
Questions should not presume an answer to a question that has not been asked.
Which political party did you vote for at the last general election?
But what if the respondent did not vote? It is better to ask two separate questions.
- Double barrelled questions
Questions should not address various topics at once.
Do you think that the arts are beneficial for their audience and society alike?
So, does the question mean ‘beneficial for the audience’ or for ‘society’?
- Technical terms/jargon
Make sure that the vast majority of respondents know what you are talking about.
How much of the GDP should the government spend on arts and culture policy?
What is GDP?
What are arts and culture policy?
- Unbalanced answer categories
If positive as well as negative answers are possible, make sure that there are as many positive as negative ones.
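Coming back to the ‘Overlapping categories’ pitfall above: a minimal sketch of how the cinema categories could be made mutually exclusive, using right-open intervals so that 5 and 10 visits each fall in exactly one category (the cut-offs and data are illustrative):

```python
# Minimal sketch (made-up data): non-overlapping answer categories for
# "How often do you go to the cinema?"
import pandas as pd

visits_per_year = pd.Series([0, 3, 5, 9, 10, 14])
categories = pd.cut(
    visits_per_year,
    bins=[0, 1, 5, 10, float("inf")],
    right=False,  # intervals are [0,1), [1,5), [5,10), [10,inf): no value fits two bins
    labels=["Less than once a year", "1-4 times", "5-9 times", "10 or more times"],
)
print(categories.value_counts())
```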
Lack of representativeness
Distribution of certain characteristics in sample does not reflect distribution in population.
Bias
- Sampling error.
o Problem in sampling procedure leading to a biased sample. This refers to errors in the findings deriving from research due to the difference between a sample and the population from which it is selected. This may occur even though probability sampling has been employed.
- Non-sampling error.
o Other problems leading to a biased sample. This refers to errors in the findings deriving from research due to the differences between the population and the sample that arise either from deficiencies in the sampling approach (e.g. an inadequate sampling frame), or from such problems as poor question wording, poor interviewing, or flawed data processing
Sampling frame
A number of units is drawn at random from the sampling frame (the complete list of units in the population).
Sampling fraction: the proportion of the population that ends up in the sample.
Stratifying:
deciding beforehand on the proportion of units of a certain category that is needed in the sample.
Two points are relevant here. First, you can conduct stratified sampling sensibly only when it is relatively easy to identify and allocate units to strata. If it is not possible or it would be very difficult to do so, stratified sampling will not be feasible. Second, you can use more than one stratifying criterion. Thus, it may be that you would want to stratify by both, for example, faculty and gender or faculty and whether students are undergraduates or postgraduates.
Stratified sampling is really feasible only when the relevant information is available.
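A minimal sketch of proportionate stratified sampling, assuming a made-up population stratified by a single criterion (faculty) and a 10% sampling fraction:

```python
# Minimal sketch (made-up data): stratified sampling by faculty.
# Within each stratum a fixed fraction of units is drawn at random.
import pandas as pd

population = pd.DataFrame({
    "student_id": range(1, 1001),
    "faculty": ["Arts"] * 400 + ["Law"] * 350 + ["Science"] * 250,
})

sampling_fraction = 0.10
sample = population.groupby("faculty").sample(frac=sampling_fraction, random_state=42)
print(sample["faculty"].value_counts())  # 40 / 35 / 25: proportions mirror the population
```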
- Multi-stage cluster sample
With cluster sampling, the primary sampling unit (the first stage of the sampling procedure) is not the units of the population to be sampled but groupings of those units. It is the latter groupings or aggregations of population units that are known as clusters.
If units are part of larger groups (higher level), sampling can be organised by randomly selecting groups, then, within those groups, randomly selecting units.
Mostly useful when samples have to be drawn from very large and widely dispersed population and organisation of administering will be very complex.
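A minimal sketch of a two-stage cluster sample, assuming made-up clusters (schools) and units (pupils): clusters are drawn at random first, and units are then drawn at random within the selected clusters:

```python
# Minimal sketch (made-up data): multi-stage (two-stage) cluster sampling.
import random

random.seed(42)
schools = {f"school_{i}": [f"pupil_{i}_{j}" for j in range(1, 51)]
           for i in range(1, 21)}                        # 20 schools with 50 pupils each

stage1 = random.sample(list(schools), k=5)               # stage 1: select 5 schools at random
stage2 = {school: random.sample(schools[school], k=10)   # stage 2: 10 pupils per selected school
          for school in stage1}

for school, pupils in stage2.items():
    print(school, pupils[:3], "...")
```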
The problems with surveys
- Inaccuracies
o Omissions (lacking categories).
o Phrasing and question sequence.
o Memory.
o Social desirability.
o Interviewer effects.
- Intensive data collection
Designing and conducting a survey takes a lot of time and money.
- Obtrusive
People know that they are filling out a survey, so they may alter their responses → reactivity.
- What about behaviour?
In a survey, you can only ask about behaviour, but what people say may differ from what people do.
Secondary data analysis
Analysis of data that was collected by someone else for another primary purpose. The utilisation of this existing data provides a viable option for researchers who may have limited time and resources.
There is a massive amount of data lying around, so why collect it yourself? There are:
1. Data from other researchers and research institutes.
2. Official statistics and administrative data.
3. Meta-analysis.
4. Big data.
Data from other researchers.
* Replication of others’ analyses from a fresh perspective.
* Data collection is expensive, so often researchers collect more data than they actually use.
Data from research institutes (such as SCP, CBS, Eurostat, …)
* Professional organisations specialised in data collection.
* Constant stream of new data.
* Often freely available (for scientific purposes).
Advantages of secondary data analysis
- Professionally collected (with loads of resources).
Data collection is expensive and requires loads of administration, so professional institutes set up organisational structure for repeated data collection.
o High quality data.
Measurement validity extensively tested.
The sampling procedures have been rigorous, in most cases resulting in samples that are as close to being representative as one is likely to achieve.
High response rate.
o Large samples.
The samples are often national samples or at least cover a wide variety of regions. The degree of geographical spread and the sample size of such data sets are invariably attained only in research that attracts quite substantial resources.
External validity.
Subgroup analysis.
o Often longitudinal (various waves).
Where similar data are collected over time, usually because certain interview questions are recycled each year, trends (such as shifting opinions or changes in behaviour) can be identified over time. With such data sets, respondents differ from year to year, so that causal inferences over time cannot be readily established, but nonetheless it is still possible to gauge trends.
o Often comparative (various countries/regions).
Cross-cultural analysis.
- Practical
Someone else is doing the collection, so you don’t have to bother with that.
o Cost and time: cheap and fast.
Secondary analysis offers the prospect of having access to good-quality data for a tiny fraction of the resources involved in carrying out a data-collection exercise yourself.
o More time for analysis.
Advanced and thorough analysis.
Reanalysis may offer new interpretations.
Data can be analysed in so many different ways that it is very unusual for the range of possible analyses to be exhausted. Several possibilities can be envisaged. A secondary analyst may decide to consider the impact of a certain variable on the relationships between variables.
Disadvantages of secondary data analysis
* Not familiar with data.
Getting to grips with how the data are structured, coded, or which categories are used.
Time consuming to understand, especially with large complex data sets.
- Complex datasets.
Large datasets are organised in complicated ways. Sometimes, the sheer volume of data can present problems with the management of the information at hand, and, again, a period of acclimatisation may be required.
E.g. Unit level, household level, regional level, country level.
- No control over data quality.
o Sampling? Phrasing? Answer categories?
Tends to be good, but what if it’s not?
o Differences between waves of longitudinal surveys and between versions of comparative surveys.
- Restricted to the variables that are available.
The absence of key variables. Because secondary analysis entails the analysis of data collected by others for their own purposes, it may be that one or more key variables are not present. You may, for example, want to examine whether a relationship between two variables holds even when one or more other variables are taken into account.
Official statistics and administrative data – Major advantages
- Cheap and fast (if you are granted access to them) since the data has already been collected.
- Often population level, not just samples. Therefore, there is the prospect of subgroup analysis as well as cross-cultural analysis.
- Unobtrusive – people don’t know that they are being studied, thus the problem of reactivity will be much less pronounced.
- Longitudinal and (sometimes) comparative since the data are compiled over many years.
Official statistics and administrative data – Major disadvantages
- Not collected for research purposes.
o Categories may differ over time (due to political decisions, etc).
o Registrations may be incomplete or sloppy.
o Sometimes not yet digitised.
Quantitative content analysis
Rather than asking people questions (in a survey), you can analyse communication they have produced by putting its elements into categories.
Content analysis quantifies the contents of ‘texts’ (whether written, spoken, or even images). This is interesting because we constantly produce communication on numerous things, so analysing that offers a wealth of information about what is on our mind as a society.
There is also a qualitative version of content analysis (ethnographic, semiotic), that tries to understand rather than quantify, but that’s not for now.
Quantitative content analysis – Research questions
Content analysis does not lend itself to just any research question.
* Descriptive / (cor-)relation.
Which art forms get most coverage in the news?
* Comparative.
How do the reactions to a particular event differ in the media in various countries, and how do they differ between tweets and newspaper articles?
* Longitudinal.
How does coverage on the environment change over time?
Quantitative content analysis – Sampling
Which type of ‘texts’ are you going to study? And which type of media?
* Print media (newspapers, articles, magazines, photos, etc).
* Online (blogs, tweets, Instagram, etc).
* Televised (news broadcasts, series, films, etc).
* Speech (interpersonal talks, public speeches, meetings, etc).
Which sample will you draw and on which basis?
* Random.
* Relevance (readership, likes, etc).
Which time frame?
* Entire series, relevant dates, relevant times, etc.
Quantitative content analysis – Coding
Which categories are relevant for you?
What are you going to count (quantify)?
* Significant actors
o Who is talking, to whom, and whom are they talking about?
* Words
o Number of words, use of specific words.
o How long are articles about a specific topic, and which words recur often? (a counting sketch follows this list)
* Subjects and themes
o What is a text about in general?
o Involves some interpretation.
* Dispositions
o Underlying values, is a text positive or negative about its subject?
o Involves even more interpretation.
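A minimal sketch of the ‘words’ option: counting recurring words across a few made-up headlines. A real study would also deal with stop words, word forms, and a proper sampling of texts:

```python
# Minimal sketch (made-up headlines): counting which words recur across texts.
import re
from collections import Counter

texts = [
    "Museum attendance rises after free admission",
    "Opera house struggles despite rising attendance",
    "Free concerts draw record attendance",
]

word_counts = Counter()
for text in texts:
    word_counts.update(re.findall(r"[a-z']+", text.lower()))

print(word_counts.most_common(5))  # e.g. 'attendance' recurs in every headline
```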
Relevant elements need to be categorised by assigning a code to them.
* Mutually exclusive codes – no overlap.
* Complete list of codes – no omissions.
* Clear instructions – no doubts for the people doing the coding.
* Clear unit of analysis – some codes may apply to the medium, others to the people mentioned, still others to specific things they have said, etc.
Coding manual
To do the coding, coders need very elaborate instructions on:
* The categories that are relevant.
* How to recognise them.
* The numeric codes corresponding to a category.
Important to ascertain consistency:
- Intra-coder
o Coding of single coder does not change over time.
Check by coding a part of the texts again.
- Inter-coder reliability
o Coding of different coders does not differ.
Check by having various coders code a common sample.
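Inter-coder reliability is often expressed as a chance-corrected agreement coefficient such as Cohen's kappa (the coefficient is not named in these notes; it is added here as a common choice). A minimal sketch for two coders who coded the same made-up sample of texts:

```python
# Minimal sketch (made-up codes): Cohen's kappa for two coders on a common sample.
from collections import Counter

coder_a = ["positive", "negative", "neutral", "positive", "positive", "negative"]
coder_b = ["positive", "negative", "positive", "positive", "neutral", "negative"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n      # raw agreement

freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum((freq_a[c] / n) * (freq_b[c] / n)                  # agreement expected by chance
               for c in set(coder_a) | set(coder_b))

kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # 1 = perfect agreement, 0 = no better than chance
```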
Advantages:
- Unobtrusive
o The ‘texts’ are not made for the purpose of research; the authors are not aware that they will be studied. No reactivity.
- Transparent
o Systematic procedures and objective assignment of codes → replicability.
- Longitudinal analysis
o A lot of texts are produced constantly, coding makes it easy to compare current with older material.
Disadvantages
- Valid?
o How trustworthy / authentic are certain texts, whose viewpoints do they represent, what are their interests?
- Representative?
o What can we learn from particular texts about society at large?
- Objective?
o Especially when interpretation is involved in coding (subjects and themes or dispositions).
Structured observation
A method for systematically observing the behaviour of individuals in terms of a schedule of categories. It is a technique in which the researcher employs explicitly formulated rules for the observations and recording of behaviour. One of its main advantages is that it allows behaviour to be observed directly, unlike in survey research, which allows behaviour only to be inferred.
What about behaviour?
You can ask people in a survey (quite common actually), but there may be a difference between what they say and what they do.
So, if you can code texts, why not observe behaviour (look, listen) and code that?
This is structured or systematic observation.
* Without participating.
* Either overt or covert.
Structured observation – Research questions
Of course, specifically for questions that concern the things that people do (not what they think or feel).
* Descriptive / (cor-)relation
What do people do, when and for how long?
* Comparative
How does behaviour differ depending on the
circumstances?
coding schedule
To avoid potential pitfalls in devising coding schemes, make sure:
- Mutually exclusive codes.
No overlap in the categories supplied for each dimension. If the categories are not mutually exclusive, coders will be unsure about how to code each item.
- Complete list of codes.
No omissions. For each dimension, all possible categories should be available to coders.
- Clear instructions.
Who is to be observed, what is to be recorded (and what is not).
- Coding system easy to operate.
Because it has to be done fast on a lot of information, clarity about the unit of analysis is crucial.
- Minimise interpretation by observer.
Although some interpretation is unavoidable.
Criteria
Reliability:
* Intra-coder reliability
Coding of single coder does not change over time.
Check by coding a part of the behaviour again.
- Inter-coder reliability
Coding of different coders does not differ.
Check by having various observers code a common sample.
Structured observation – Field Stimulation
Also known as contrived observation.
Research environment is created by the researcher and consumers are observed in the simulated situation.
Special case:
Researcher manipulates part of the setting and sees how behaviour changes as a result.
(Comparable to (quasi-)experiment).
Advantages (of structured observation):
- Directly observe what people do.
o Instead of relying on what they say they do.
- Triangulation (confirm the findings of other methods).
o Triangulation means using more than one method to collect data on the same topic. This is a way of assuring the validity of research through the use of a variety of methods to collect data on the same topic, which involves different types of samples as well as methods of data collection.
- Unobtrusive if covert → no reactivity.
o High ecological validity.
Disadvantages:
* What is the bigger picture?
* Obtrusive if overt → reactivity.
* Which categories are relevant to code behaviour?
o Often advisable to do some ‘unstructured observation’ before doing the actual data collection to finetune the method.
* Fine line between observation and interpretation.
o As only overt behaviour is studied, underlying intentions are hard to grasp.
* Take into account the context.
o Only overt behaviour is studied but take into account that this comes from somewhere.
* Fragmentary data.
- Harm to participants.
Participants should not be harmed by a study.
o Physical (pain, health threats, etc.)
o Mental (stress, loss of self-confidence, etc.)
o Interests (loss of resources, time, opportunities, etc.)
o No electric shocks.
o No permanent or lasting effects.
o Confidentiality
- Informed consent
It’s about knowing what the study is about so that participants can actively consent (or decline).
Participants should not be made to participate against their will; this has to do with interests. Often, participants are requested to give their ‘informed consent’ (by signing a form).
o Informed – knowing what the study is about.
o Consent – actively agreeing to participate.
However, very often, we can’t give full disclosure of the study, since it may affect the way participants answer questions or behave → reactivity.
So, what are they consenting to? At least they should be informed about things that would directly affect them, e.g. how long it is going to take.
- Invasion of privacy
Participants should retain the right to choose which information they want to divulge. But this is easier said than done.
o In cases where covert observation is necessary.
o People may be overprotective without understanding what a study is about.
Again, confidentiality (and anonymity) is crucial, because then at least information cannot be traced back. Or ask them for their permission afterwards: ‘You’ve been observed. Can we use this data for analysis?’ Still, it is a problem that people are very protective of their privacy, even if they don’t fully understand the benefit of the information they’ve been asked for.
- Deception
Participants should not be deceived. This is a good principle, but often it is hard to uphold; deception is very common because of reactivity. Often there is no other way to get to certain types of information:
o Especially in experiments.
o But actually in any type of study where it is not mentioned in full detail what it is about (and that is very common, because of reactivity).
If deception is used, better make sure there are good reasons and there is no alternative (‘mild deception’).
Throughout, we’re starting to notice that doing research in an ‘ethical’ way is not a clear-cut line, not black or white, right or wrong. Make sure to take an ethical stance and at least consider the various ethical implications involved in the study we are conducting.
- Deontological (idealistic)
o Certain practices should be avoided because they are unethical in themselves. In other words, this is taking a moral highroad. We don’t do certain things because they are in themselves ethically unacceptable.
- Consequentialist (pragmatic)
o Certain practices should be avoided because their consequences are harmful for (future) research. In other words, it is not so much out of a concern for the participants, but more out of a concern for science because using techniques or ‘tricks’ that might be ethically questionable can cause people to no longer trust science. People will no longer want to participate in research.
- Universalism
o Ethical principles are strict and should not be broken. If research cannot be done to the highest ethical standards, it should not be done. There are certain things that we shouldn’t do to people because they are unethical; there are very clear ‘instructions’. This is a very principled stance.
- Situation ethics
o Ethical standards should be applied, but we should also be able to study things. So, it’s more permissible. There are certain clear ethical standards that shouldn’t be transgressed, but there are things that are important to find out and it requires some transgressions of ethical boundaries.
o The end justifies the means, because some things are so important to study/understand e.g. racism, sexism, and if we can only study them properly by transgressing boundaries, then that’s the price that we should pay.
o No choice.
o As humanity, we have no choice but to do this, because it is very important.
This is also a very principled approach, but it does make some concessions, because some things can only be studied properly with some deception, invasion of privacy, or even harm.
- Ethical transgression is pervasive
o Breaking some ethical principles is basically unavoidable, and it’s hypocritical to think that it wouldn’t be. It’s impossible to study anything without transgressing ethical standards. ‘Don’t be too touchy, it’s just the way that things are done.’ Where situation ethics is still principled, this stance is more accepting of transgression.
- Anything goes (more or less)
o There is no problem with some ethical transgression (especially deception) as long as it serves a scientific purpose. If we just have a look at what multinational corporations are doing, or the media, or the government, it’s way worse than what scientists are doing. So, why should scientists take the high road? At least when scientists transgress the boundaries, it’s for the greater good. Just do what is necessary to get the job done.
- Repeated trials.
o Keep changing methods until something comes out.
o Sometimes the result concludes that the hypothesis is falsified, so the theory is being rejected. If this happens, we should change the theory. But sometimes scientists are reluctant to do this; so, they think the theory might not be wrong, maybe the way they’re studying it is wrong. So, they would change the methods repeatedly until they find the expected finding.
When findings don’t confirm expectations, blaming the methods rather than the theory (verification bias).
- Sweeping findings under the carpet (‘cherry picking’)
o Some of our findings might be in line with our expectations, but some are not. So, what are we to do? Should we accept the theory, or should we reject it on the basis of the fact that some findings are not in line? It’s very tempting to just not pay attention to what doesn’t fit our case.
Because they don’t confirm expectations and are therefore deemed ‘irrelevant’.