Research Methods Flashcards
What is content analysis?
A research technique that enables the indirect study of behaviour by examining communications that people produce, such as texts, emails, TV, film and other media. The aim is to summarise and describe this communication in a systematic way so overall conclusions can be drawn.
Describe coding.
Coding is the initial stage involved in content analysis. Very large data sets are categorised into meaningful units, for example counting the number of times a word occurs to produce a quantitative value.
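As a rough illustration of how coding can produce a quantitative value, here is a minimal Python sketch that counts how often pre-chosen category words appear in a transcript. The category words and sample text are invented for illustration.

```python
# A minimal sketch of quantitative coding: counting how often each
# pre-defined category word appears in a transcript.
from collections import Counter

categories = ["anxious", "calm", "angry"]   # coding units chosen in advance (hypothetical)
transcript = "I felt anxious, then calm, then anxious again.".lower()

words = [w.strip(".,!?") for w in transcript.split()]
counts = Counter(w for w in words if w in categories)

print(counts)   # Counter({'anxious': 2, 'calm': 1})
```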
Describe thematic analysis.
Themes may emerge once data has been coded. A theme may be explicit or implicit and occurs when an idea is recurrent. Themes are often more descriptive than codes, e.g. ‘a drain on the resources of the NHS’. Once identified, they can be placed under broader categories such as ‘control’ or ‘treatment’. Once a researcher has collected a wide range of themes that cover the data being analysed, they will collect new data to check for validity. If this new research supports the previous conclusions, the researcher will write up a report, using quotes from the data analysed to illustrate each theme.
Evaluate case studies.
✅ - Rich detail that sheds light on very unusual and atypical forms of behaviour. Preferred to the superficial data collected from experiments.
✅ - Add to our understanding of normal behaviour - e.g. HM demonstrating two separate stores in the multi-store model.
✅ - Create more hypotheses for further testing, which can lead to paradigm shifts.
✅ - Ethics - participants not forced into unethical situations.
❌ - Low generalisability, reliability and control - small samples and longitudinal studies.
❌ - BIAS - researcher and family - recall of past events may be inaccurate - lowers validity.
❌ - Ethics - consent and right to withdraw and privacy.
Evaluate content analysis and thematic analysis.
✅ - circumnavigates ethical issues associated with psychological research - much content analysed is already in the public domain - no issues with consent.
✅ - High external validity - e.g. Emails and text messages
✅ - flexible - produces both quantitative and qualitative data - objective research.
✅ - Inter-rater reliability.
❌ - Indirect - means conclusions may not make sense because the data was studied outside of the context in which it was created - researcher may infer opinions and motivations that were never actually there.
❌ - Simplistic - quantitative data can lack representation of real-life activities, or the analysis may lack objectivity.
❌ - Researcher bias - although researchers are usually clear about this and often refer to it in their final reports.
Describe what a case study is.
A case study is the analysis of a specific (group of) individual(s), institution or event. These are often unique, such as a person with a rare disorder or the lead up to the 2011 London riots. They are often longitudinal. Mainly qualitative data is collected through the use of interviews, observations and questionnaires. Sometimes experimental methods, which test what the case is capable or not capable of, are used to collect quantitative data.
Define reliability.
The extent to which findings from an investigation or measuring device are consistent. A measuring device is said to be reliable if it produces consistent results every time it is used.
Describe the test-retest.
The test-retest method is used to measure the reliability of a measuring tool. It involves administering the same experiment or questionnaire to the same group of people on a different occasion.
There must be enough time between tests for the participants not to remember the aims or answers to questions, but not so much that their opinions or abilities have changed. The scores from the two tests are then correlated; if the correlation is significant, the measure is classed as having good reliability.
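As a sketch of that final correlation step, the snippet below correlates scores from the two administrations using SciPy's Pearson correlation; the scores are invented for illustration.

```python
# A minimal sketch of checking test-retest reliability: correlate the
# scores from the first and second administrations of a questionnaire.
from scipy.stats import pearsonr

time_1 = [12, 18, 9, 22, 15, 17, 11, 20]   # scores on first administration (hypothetical)
time_2 = [13, 17, 10, 21, 14, 18, 12, 19]  # same participants, weeks later (hypothetical)

r, p = pearsonr(time_1, time_2)
print(f"r = {r:.2f}, p = {p:.3f}")          # a strong, significant r suggests good reliability
```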
Describe inter-rater reliability.
Inter-rater reliability is the extent to which the observations and decisions of two researchers are the same. It is highly applicable to observations where using only one researcher may produce subjectivity bias.
To achieve this, a small-scale pilot study may be run prior to the study to see if researchers are applying behavioural categories in the same way.
The results of each researcher are then correlated; if they have a correlation of >+.80, the observers are said to have inter-rater reliability.
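A minimal sketch of that check, assuming the two observers have tallied the same behavioural categories (the tallies are invented):

```python
# Correlate two observers' tallies for the same behavioural categories
# and compare the result against the +.80 criterion.
import numpy as np

rater_a = [5, 2, 9, 4, 7, 3]   # tallies per behavioural category, observer A (hypothetical)
rater_b = [6, 2, 8, 4, 7, 2]   # tallies for the same categories, observer B (hypothetical)

r = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"r = {r:.2f}", "-> acceptable" if r > 0.80 else "-> re-pilot / retrain observers")
```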
Describe the ways in which reliability can be improved.
- On questionnaires that have low correlations, ambiguous questions which may be interpreted differently by separate participants will need rewording. For example, open questions could be replaced by closed questions with fixed answers.
- For interviews, the same researcher should be used each time. If this is not possible, then all interviewers should be trained so that one does not ask more leading questions than the other. Alternatively, more closed questions could be used as this reduces ‘free-flowing’ answers.
- For laboratory experiments, participants should be tested under the same conditions each time as being extremely tired, for example, may lower performance in one condition compared to being alert in another condition. Alternatively, counterbalancing can be used to assess for any order effects that may reduce reliability. If found, the experiment would need redesigning to reduce these effects.
- In observations, all variables must be clearly operationalised, e.g. pushing rather than aggression. Categories must be self-evident, measurable and not overlap, e.g. hugging and cuddling.
What is the split-half method?
This involves comparing answers from the first half of a questionnaire to the second half to look for a positive correlation.
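A minimal sketch of the split-half method, assuming each row is one participant's item scores on a 10-item questionnaire (all values invented): each participant's total on the first half is correlated with their total on the second half.

```python
# Split-half reliability: correlate first-half totals with second-half totals.
from scipy.stats import pearsonr

# rows = participants, columns = item scores on a 10-item questionnaire (hypothetical)
responses = [
    [3, 4, 3, 5, 2, 3, 4, 4, 3, 5],
    [1, 2, 2, 1, 1, 2, 1, 2, 2, 1],
    [5, 5, 4, 5, 5, 4, 5, 5, 4, 5],
    [2, 3, 2, 3, 3, 2, 3, 2, 3, 3],
]

half_1 = [sum(row[:5]) for row in responses]   # first half of the items
half_2 = [sum(row[5:]) for row in responses]   # second half of the items

r, _ = pearsonr(half_1, half_2)
print(f"split-half r = {r:.2f}")               # a strong positive r indicates internal consistency
```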
What is validity?
The extent to which an observed effect is genuine, has measured what it set out to measure and can be generalised beyond the situation in which it was found.
Define internal validity.
The extent to which findings are due to the manipulation of the independent variable rather than the result of another confounding variable, such as demand characteristics.
Define external validity.
The extent to which the findings of an investigation can be generalised to other settings, people or eras.
Define ecological validity.
The extent to which findings can be applied to ‘everyday’ situations. Many aspects must be considered when deciding whether findings can be generalised beyond the research setting.
Define temporal validity.
The extent to which findings from a research study can be generalised to other historical eras. It is a form of external validity. Examples of psychology that have low temporal validity include findings from Asch and Milgram or Freud’s concept of penis envy.
Define face validity.
A basic form of validity in which a measure is scrutinised to determine whether it appears to measure what it is supposed to measure. For example, does a test of anxiety look like it measures anxiety? It can be assessed by simply looking at the measuring instrument or by asking an expert to check it.
Define concurrent validity.
The extent to which a psychological measure relates to an existing psychological measure, e.g. a new intelligence test compared to the Stanford-Binet test.
List the ways of improving validity.
- In an experiment, a control group can be used to see whether changes in the DV were due to the independent variable.
- To reduce demand characteristics/participant reactivity, standardised procedures, double blind and single blind methods may be used.
- In questionnaires, lie scales or promises that answers will be kept anonymous will reduce social desirability bias.
- In observations, covertness is used to ensure participant behaviour is natural. Behavioural categories must be well operationalised to ensure that only the behaviour described is being recorded.
- In qualitative research such as case studies and interviews, triangulation is used to improve interpretive validity. This involves collecting data through many research methods such as interviews, questionnaires and observations.
- In general, to increase validity, participants should be made less aware that they are under investigation, should be placed in more natural settings, representative samples should be used, extraneous variables should be controlled for and research could be repeated at different times of the day.
What is nominal data?
Data that is represented in the form of categories; it is sometimes called categorical data. It is discrete in that one item can only appear in one of the categories.
What is ordinal data?
Ordinal data is data that is ordered in some way, for example a scale of how much you like psychology where 1 is hate and 10 is love. Ordinal data does not have equal intervals between each unit. It also lacks precision as it is based on subjective opinion rather than objective measures, e.g. IQ tests are derived from a view of what constitutes intelligence rather than any universal measure; they measure psychological constructs. Ordinal data is also called ‘unsafe’ data and because of this it is converted into ranks before being used in statistical testing.
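To illustrate that last conversion step, a minimal sketch using SciPy's rank function; the ratings are invented and a rank-based test (such as Spearman's rho) would do this conversion internally.

```python
# Converting ordinal scores into ranks before rank-based statistical testing.
from scipy.stats import rankdata

ratings = [7, 3, 9, 3, 5, 8]   # 1-10 'liking psychology' scale (hypothetical)
ranks = rankdata(ratings)      # tied scores share the mean of their ranks
print(ranks)                   # [4.  1.5 6.  1.5 3.  5. ]
```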
What is interval data?
Interval data is data based on numerical scales that include units of equal, precisely defined size. For example, a stopwatch uses a public scale of measurement that produces data in accepted units of measurement. It is the most precise and sophisticated form of data in psychology and is a necessary criterion for the use of parametric tests. It is better than ordinal data because more detail is preserved (and ordinal is ‘better’ than nominal level).
What are the subsections of a journal article?
- Title.
- Authors.
- Abstract.
- Introduction.
- Methods.
- Results.
- Discussion.
- References.
- Appendix.
Describe the abstract of a scientific report.
A section describing the key details of the report in a short summary. It includes all the major elements: the aims, the hypothesis, the methods, the results and the conclusions.
It allows researchers to read a range of abstracts to decide which studies are important enough for further investigation.
Describe the Introduction section of a scientific investigation.
A literature review which reflects on past research (relevant theories, concepts and studies) into the researcher’s chosen topic and introduces the aims and hypotheses of the investigation. It should follow a logical progression, beginning with concepts and developing into aims and hypotheses.
It shows a reader why and how you will tackle the investigation and what is original about your research.
Describe the methods section of a scientific investigation.
A detailed description of what the researcher(s) did, including:
- Design - this is clearly described with reasons given as justification.
- Sample - including the sampling method and target population. This includes information on the participants such as how many there were and demographic information.
- Apparatus/materials - this provides detail of any instruments or relevant material used.
- Procedure - a recipe-style list of what happened in the investigation, from beginning to end. This includes a verbatim record of everything that was said to the participants, such as in the briefing, standardised instructions and debriefing.
- Ethics - an explanation stating how these were addressed in the study.
This section allows a reader to decide whether they wish to skip sections and return to them later. It also gives them enough detail to replicate the study if they wish.
Describe what the results section of a scientific report is like.
A description of what the researcher(s) found, including inferential statistics (choice of statistical test, calculated and critical values, level of significance and the final outcome) and descriptive statistics (tables, graphs, charts, measures of central tendency and measures of dispersion etc.). It summarises the key findings from the study. If the researcher used qualitative methods, there is likely to be analysis of themes and categories in this section too.
This shows the reader that you have evidence to support your findings.
Describe what the discussion section of a scientific report is like.
A consideration of what the results of a research study tell us in terms of psychological theory. It involves many sections such as:
- A verbal summary of results and findings in the context of the information provided in the introduction.
- A discussion of the limitations of the study (e.g. methodological issues) and how these might be addressed in further studies.
- A consideration of wider implications of the research such as application to real life or contribution to existing psychological knowledge.
This shows a reader how your results do or do not support your research question.
Describe what the reference section of a scientific investigation is like.
A list of the sources that are referred to or quoted in the article, e.g. journal articles, books or websites, and their full details. These references must include full details, including the authors, date, title of book (in italics), place of publication, publisher and page numbers.
This provides the reader with the information needed to find the references cited if they wish.
Describe what the appendix section of a journal article is like.
A section which contains the raw data, supporting material for the experiment and any other relevant information that did not belong in the other sections.
This section allows the reader to examine the data behind the investigation/experiment without it having to appear in the results section. It also helps them to understand exactly what can and cannot be concluded.
What are the features of a science?
- Evidence based.
- Empirical methods.
- Reliability.
- Falsifiability.
- Replicability.
- Validity.
- Objectivity/Subjectivity.
- Induction.
- Deduction.
- Paradigms.
- Paradigm shift.
- Operationalised variables.
- Hypothesis testing.
- Control (Cause and effect).
- Theory construction.
- Sampling.
What are paradigms/paradigms shifts?
A paradigm is an accepted set of shared assumptions and agreed methods within a scientific discipline. Kuhn (1962) argues that unlike physics, which has the standard model of the universe at its core, psychology has too many conflicting approaches to be classed as a science. He sees it as a pre-science as there is no central set of principles at its core.
A paradigm shift occurs when a handful of researchers begin to question the accepted paradigm and, slowly but with increasing pace, more researchers join this critique. When there is too much contradictory evidence to ignore, researchers accept this new view and a shift has occurred, e.g. the shift from Newtonian theory to Einsteinian theory. A paradigm shift is the result of a scientific revolution; a significant change in the dominant, unifying theory within a scientific discipline.
What is theory construction?
A theory is a set of general laws or principles that have the ability to explain particular events or behaviours. They are simple and economical principles that reflect reality and explain regularities in behaviour. Theory construction happens inductively or hypothetico-deductively. Evidence is gathered through direct observation, and experiments are used to provide evidence that supports or contradicts the theory.
What is hypothesis testing?
Clear and precise predictions that can be scientifically tested should be able to be made on the basis of a theory. This is known as hypothesis testing. A theory should generate many hypotheses. Systematic and objective measures will be used to test a hypothesis to see if it will be supported (which strengthens the hypothesis) or refuted (which weakens the hypothesis and may mean it needs revising).
What is falsifiability?
The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue. Popper (1934) argued that all scientific theory should hold itself up for hypothesis testing and the possibility of being proven wrong. He argued that theories that had been repeatedly proven right were simply ones that had not yet been proven wrong, and that therefore they may not be true. Popper believed that in good science hypotheses are constantly challenged, but in pseudosciences hypotheses cannot be falsified. Hypotheses that survive most attempts to falsify them are the strongest. Scientists never say ‘prove’ because of the issue of falsifiability, but rather ‘suggest’. Falsifiability also explains why an alternative hypothesis is accompanied by a null hypothesis.
What is Replicability?
The extent to which scientific procedures and findings can be replicated by other researchers. To be trusted, a scientific theory must have replicability across many different contexts and circumstances. Replicability is important in determining the reliability of methods and findings but also the validity as it shows the extent to which generalisation can be applied. For replication to happen, scientists must report their investigations in as much detail as possible in order to allow other researchers to verify their methods and findings.
What is objectivity and the empirical method?
Objectivity is the minimisation of all personal bias in research so as not to distort or influence the research process. It involves researchers keeping a ‘critical distance’ during research. Methods associated with the highest levels of control (e.g. lab experiments) are believed to have the highest objectivity.
The empirical method is the use of direct observation and experience to gather evidence on which researchers build scientific theories, for example the experimental and observational methods. A theory cannot claim to be scientific unless it has been empirically tested and verified. Locke argued that all knowledge is determined by experience and sensory perception.
What is induction/deduction?
Induction involves studying in detail topics of interest in order to identify any trends in the observed data and then suggesting a possible explanation for this pattern in the form of a theory.
Deduction involves creating one or more hypotheses and using research to test these hypotheses. This research allows the researcher to accept or reject their hypotheses and carry out further research into the topic.
Define the experimental method.
Involves the manipulation of the independent variable to measure the effect on the dependent variable. Experiments may be laboratory, field, natural or quasi-experiments.
Define aims.
A general but focused statement of what the researcher intends to investigate; the purpose of the study. It is derived from previous research or theories.
What is a hypothesis?
A hypothesis is a clear, precise, testable statement that states the relationship between the variables to be investigated. It is made at the outset of the study. A directional hypothesis states the direction of the difference or relationship. A non-directional hypothesis states there will be a difference but not the direction of this difference.
What are variables?
Any ‘thing’ that can vary or change within an experiment. Variables are generally used in experiments to determine whether changes in one thing result in changes to another.
What is the independent variable?
Some aspect of the experimental situation that is manipulated by the researcher - or changes naturally - so the effect on the DV can be measured.