Lecture 9 concepts Flashcards
Experimenter bias
Refers to the unintentional influence of the experimenter’s expectations, beliefs, or preconceived notions on the outcome of a study or research experiment
Demand characteristics
Experimenters may (unintentionally) communicate their expectations or desires to participants, either through verbal or non-verbal cues, causing participants to adjust their behaviour in response to these perceived expectations.
Solutions to experimenter bias
Single-blind experiment: the participants do not know which group they are in, but the researcher does
Double-blind experiment: neither the researcher nor the participants know which group each participant is in
Construct bias
Occurs when a test measures the construct it claims to measure differently in different groups of participants (e.g., different cultural backgrounds), because the construct as operationalised by the test is valid within one cultural group but not in another. (Example: culture influences the way mental disorders are perceived and experienced, as well as the institutional context in which they are treated, which may influence diagnostic tests and clinical observations.)
Sample bias
Subsumes all sources of bias arising from differences in the characteristics of the samples being compared (e.g., differences in education or motivation) rather than from the construct being measured
Administration bias
Can result from communication problems (e.g., poor mastery of the testing language by one of the parties), interviewer characteristics (e.g., sex and cultural group), or other procedural aspects of the data collection
Item bias
A test item is biased when it is easier to answer correctly for one group of test takers than for another group with comparable overall ability.
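For illustration only (not part of the lecture material), here is a minimal sketch of one simple way to probe item bias: compare how often two groups answer an item correctly while holding overall ability constant, approximated here by the total test score. The data and column names are hypothetical.

```python
import pandas as pd

# Hypothetical item-response data: one row per test taker.
data = pd.DataFrame({
    "group":        ["A"] * 6 + ["B"] * 6,
    "total_score":  [10, 10, 15, 15, 20, 20, 10, 10, 15, 15, 20, 20],
    "item_correct": [0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1],
})

# Within each total-score band, compare the proportion answering the item correctly.
# Systematic gaps between groups at the same score level suggest the item is biased.
comparison = (data
              .groupby(["total_score", "group"])["item_correct"]
              .mean()
              .unstack("group"))
print(comparison)
```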
Salami-slicing research
‘Slicing’ research that would form one meaningful paper into several smaller papers. It is bad scientific practice, as it can distort the scientific literature by leading unsuspecting readers to believe that the data presented in each ‘slice’ come from a different subject sample. It is, however, cheaper, easier and more efficient to reuse the same data set; it also eases the pressure to publish more papers and increases citation counts.
Publication bias
Occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favour of positive results.
Many factors contribute to publication bias, such as:
1) Significosis, an inordinate focus on statistically significant results
2) Neophilia, an excessive appreciation for novelty
3) Theorrhea, a mania for new theory
In an effort to combat this problem, some journals require studies submitted for publication to pre-register (before data collection and analysis) with organizations like the Center for Open Science
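As a rough illustration (a sketch, not from the lecture), the following simulation runs many small studies of a modest true effect and then ‘publishes’ only the positive, significant ones; the published average clearly overestimates the true effect. The effect size, sample sizes, and number of studies are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2     # assumed small true difference between groups
n_per_group = 30
n_studies = 5000

all_estimates, published_estimates = [], []
for _ in range(n_studies):
    treatment = rng.normal(loc=true_effect, size=n_per_group)
    control = rng.normal(loc=0.0, size=n_per_group)
    estimate = treatment.mean() - control.mean()
    p_value = stats.ttest_ind(treatment, control).pvalue
    all_estimates.append(estimate)
    # Only "positive and significant" results make it into the literature.
    if p_value < 0.05 and estimate > 0:
        published_estimates.append(estimate)

print(f"True effect:               {true_effect:.2f}")
print(f"Mean of all studies:       {np.mean(all_estimates):.2f}")
print(f"Mean of published studies: {np.mean(published_estimates):.2f}")
# The mean of the published studies sits well above the true effect.
```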
Adversarial collaborations
A type of collaboration in which researchers holding opposing views work together in order to jointly advance knowledge of the area under dispute. It also aims to counteract political bias in academia (in psychology there are 16.8 Democrats per Republican, in sociology 43.8 Democrats per Republican, and in anthropology there are no Republicans)
Replication crisis/replicability crisis/reproducibility crisis
An ongoing methodological crisis in which the results of many scientific studies are difficult or impossible to reproduce.
Such failures undermine the credibility of theories building on them and potentially call into question substantial parts of scientific knowledge.
Reproducibility vs replication
- Reproducibility: refers to re-examining and validating the analysis of a given set of data
- Replication: repeating the experiment or study to obtain new, independent data with the goal of reaching the same or similar conclusions.
Examples of questionable research practices
1) Inaccurate referencing of ideas and concepts
2) Failing to keep accurate records of the research process
3) P-hacking: running statistical tests on a set of data until some statistically significant results arise (see the sketch after this list)
4) Incomplete reporting of relevant aspects of the study design
5) Too small samples
6) HARKing: Hypothesising After Results are Known => incorrectly reporting exploratory analyses as confirmatory
7) Collecting more - or less - data than planned, to achieve lower p-values
8) Publication bias: selective reporting of results and/or studies
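As a rough illustration of why p-hacking inflates false positives (a sketch, not part of the lecture material): if there is no true effect at all but a researcher tests many outcome variables and keeps only the smallest p-value, ‘significant’ results appear far more often than the nominal 5% of the time. The number of outcomes and the sample sizes below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_simulations = 2000
n_per_group = 30
n_outcomes = 10       # hypothetical number of outcome variables tried
false_positives = 0

for _ in range(n_simulations):
    # Both groups are drawn from the SAME distribution: there is no true effect.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    # "P-hack": run a t-test on every outcome and keep only the best p-value.
    p_values = [stats.ttest_ind(group_a[:, i], group_b[:, i]).pvalue
                for i in range(n_outcomes)]
    if min(p_values) < 0.05:
        false_positives += 1

print(f"False positive rate when p-hacking: {false_positives / n_simulations:.2f}")
# Prints roughly 0.40 instead of the nominal 0.05.
```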
Ghost authorship vs gift authorship
Ghost authorship: when individuals who made a significant contribution to a study are not credited as authors
Gift authorship: listing individuals as authors on a research paper even though they did not contribute substantially to the work, which is considered unethical. This can be done as a favour or for political or career-related reasons.
How to prevent biases in scientific research
1) Double blind procedures: both the experimenter and the participants are unaware of the specific conditions or treatments being administered
2) Standardization: establishing standardised procedures and protocols for conducting experiments can help minimise the potential for experimenter bias by reducing variability and ensuring consistent application of experimental conditions
3) Peer review: having research studies reviewed by independent peers can help identify potential sources of bias and provide feedback on the validity and reliability of the research methodology and findings
4) Replication: if the results can be replicated by other researchers using the same methodology, it increases confidence in the validity of the findings
Falsification
Involves altering or manipulating existing data or results to make them more favourable or to support a particular hypothesis. This can include selectively omitting data points, changing measurements, or adjusting figures and graphs.
Fabrication
Involves making up data or results that were never actually generated in the research.