Research Methods - Scientific Processes Flashcards
The Aim
- All research begins with an idea or question that needs further investigation, with the first part of the research cycle being identification of the research aim, formulation of the hypothesis and operationalisation of variables
- The aim of an experiment is a general statement of the intent of the research, based on previously published research and theories
- A hypothesis is then formulated, which is a clear and testable prediction of what the researcher expects to happen in the experiment
Hypotheses
- A directional hypothesis is a clear and testable statement that predicts a specific outcome, e.g. which condition will perform better, or whether a correlation will be positive or negative
- A non-directional hypothesis is a clear and testable statement that predicts a difference or relationship but does not specify its direction - it does not say whether the outcome will be positive or negative
- A null hypothesis is a statement predicting that there will be no difference or relationship in the outcome of the experiment, and that any observed difference is due to chance or other factors
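The distinction between directional and non-directional hypotheses maps onto one-tailed and two-tailed significance tests. Below is a minimal sketch in Python using a simple permutation test; the reaction-time data and condition names are invented for illustration, not from any real study:

```python
import random

random.seed(42)  # fixed seed so the example is repeatable

# Invented reaction-time scores (ms) for two hypothetical conditions.
condition_a = [512, 498, 530, 505, 521, 517, 509, 526]
condition_b = [484, 490, 476, 503, 488, 495, 481, 499]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_p(a, b, directional, n=10_000):
    """Permutation test: shuffle the group labels many times and count how
    often a shuffled mean difference is as extreme as the observed one."""
    observed = mean(a) - mean(b)
    pooled = a + b
    count = 0
    for _ in range(n):
        random.shuffle(pooled)
        diff = mean(pooled[:len(a)]) - mean(pooled[len(a):])
        if directional:
            extreme = diff >= observed            # one-tailed: predicts a > b
        else:
            extreme = abs(diff) >= abs(observed)  # two-tailed: any difference
        count += extreme
    return count / n

# Directional hypothesis ("condition A is slower") -> one-tailed test.
p_one_tailed = permutation_p(condition_a, condition_b, directional=True)
# Non-directional hypothesis ("the conditions differ") -> two-tailed test.
p_two_tailed = permutation_p(condition_a, condition_b, directional=False)
print(p_one_tailed, p_two_tailed)
```

If the p-value falls below the chosen significance level, the null hypothesis is rejected in favour of the experimental hypothesis; the two-tailed p-value is roughly twice the one-tailed value because extreme differences in either direction count against the null.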
Experimental Design - Repeated Measures
- A repeated measures design is when the same group of participants takes part in both conditions
Strengths - - As the same participants are used in each condition, the effects of individual differences are controlled, so any changes in the DV are due to the IV and not participant variables
- Fewer participants are needed to conduct the experiment
Limitations - - The second condition can be affected by order effects such as practice effects, fatigue, or boredom
Experimental Design - Independent Group Design
- An independent group design is where participants take part in only one of the two conditions.
Strengths - - As the participants only take part in one condition they are less likely to experience order effects such as boredom, fatigue and practice effects.
- The same materials are used for both conditions, so for example harder or easier word lists would not become a confounding variable in this design
Limitations - Individual differences between the two groups will be a problem in this design. For example, if one group was more alert than the other this would systematically distort results on a quick response test.
Experimental design - Matched Pairs Design
- This is the same as the independent group design, except that the participants have been matched on key characteristics such as IQ, height, gender, age and ethnic origin
Strengths - - As the participants only take part in one condition they are less likely to experience order effects such as boredom, fatigue and practice effects.
- The same materials are used for both conditions, so for example harder or easier word lists would not become a confounding variable in this design
- Individual differences are minimised as participants are matched on important variables
Limitations - - Finding participants who match on a number of key variables is a difficult and time-consuming process. A large pool of participants is needed, making it less practical than the other designs
Pilot studies
- A small study conducted in advance of the main study. It is normally good practice in order to identify any flaws or areas for improvement, which can then be corrected before the main study. For example, questions can be checked for clarity and ambiguity.
- The pilot study is also used to check procedures for design errors and timings.
- This means that any adjustments can be made before the main study, which saves time and money.
- Pilot studies are often used to determine if the experimental design is appropriate
- When deciding on the methodology for research it is important to know what sort of results you may get, and pilot studies allow this to happen
- They are also a good way of checking that the investigation complies with the ethical guidelines of the BPS
Objectivity and the Scientific method
- When carrying out scientific research, psychologists need to make sure that they uncover the truth without contaminating it in any way. This is done through careful consideration of the following key features of the scientific process.
- In order to be truly scientific, all research evidence must be empirical, that is, based on evidence gathered through carefully controlled and tested observation and/or experiment. It must also be objective, meaning free from personal feelings, prejudices and interpretations. For psychology to be truly scientific it needs to use empirical methods and to demonstrate objectivity, replicability and falsifiability
- Popper - Observations must be made through experience, not the researcher's own views, to ensure empiricism. Variables must be fully operationalised.
Replicability
Replication is a key feature of the scientific process as it enables the researcher to look at different situations and participants to determine if the basic findings of the original study can be generalised to other participants and circumstances. This is done through repeating an investigation under the same carefully controlled conditions.
- Popper - Repeating research to check the validity of results. Methodology must be clear and detailed for repetition under the same conditions.
Falsifiability
Researchers must be able to evaluate evidence in a way that allows for the possibility that a particular theory may be proven false as well as supported. Falsifiability does not mean that a theory is false; it means that if the theory were false, it must be possible to demonstrate this through evidence.
- Popper - A theory must be empirically testable so that it could, in principle, be shown to be false. Since it is almost impossible to test a claim against every possible case, it is generally agreed that nothing can be conclusively proven, only falsified.
Theory construction
A scientific theory is constructed by bringing together ideas and definitions in a logical way to explain and describe a specific event or a relationship between events. A researcher can then use the theory to make specific predictions about the outcome of their investigation; this prediction is known as the hypothesis
Hypothesis testing
It is important for the scientific process that a hypothesis is clear and testable and that an appropriate experimental method is used to test it.
Paradigms and paradigm shifts
Within the scientific process a collective set of assumptions, concepts, values and practices is known as a paradigm. Over time the paradigm may be brought into question by further research as new ways of looking at the same information are adopted; when the dominant set of assumptions is replaced in this way, a paradigm shift has occurred.
Validity
The term validity is one of the most important concepts within scientific research as it asks whether any effect or conclusions found are genuine. Validity is broken down into two types: internal and external
Internal validity
Whether or not the test or experiment measures what it is meant to measure - researchers need to be sure that any effect or change to the dependent variable (DV) occurred as a direct result of the independent variable (IV).
Assessed in the following ways -
- Face validity –Are we measuring what we think we are measuring? In its simplest form does the research make sense?
- Concurrent validity - how well a particular test compares with a previously validated measure. For example, testing a group of students for intelligence with an established IQ test and then performing the new intelligence test a couple of days later: achieving the same results would indicate that the new test has concurrent validity.
External validity
Can the observed effect or conclusion be applied accurately to the real world? Research findings should be valid outside the research situation and could be used to explain other situations, especially “everyday” situations.
Assessed by -
- Ecological validity – can the findings be generalised to situations outside the environment created by the researcher? If the research task is similar to a real life situation it is likely that it will have high ecological validity.
- Temporal validity - how does the time period in which the research was carried out affect the findings? For example, research into attitudes carried out in the 1960s may not have the same relevance today.
Improving validity
Internal validity can be improved by carefully controlling all other variables that are not being manipulated within the experiment.
This is done by:
- using standardised instructions and procedures to make sure that conditions are the same for all the participants in the research
- eliminating demand characteristics and investigator effects, both of which affect the way a participant behaves within the experiment (known as participant reactivity)
The external validity of psychological research can be improved by making sure that experiments are set in a more natural setting and involve real-life situations, and also through using random sampling to select participants
Key terms
Demand characteristics:
Cues in the research situation that might reveal the research hypothesis
Investigator Effects:
The way in which the researcher behaves may give participants clues about the research hypothesis making them behave in a certain way
Participant Reactivity:
The way participants respond to the demands of the research situation
Random Sampling:
A method of choosing participants that gives every member of the population an equal chance of being selected
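Random sampling, as defined above, can be sketched in a few lines of Python. The population here is a hypothetical sampling frame of 100 numbered people:

```python
import random

random.seed(1)  # fixed seed so the example is repeatable

# Hypothetical sampling frame: every member of the target population, numbered 1-100.
population = list(range(1, 101))

# random.sample draws without replacement, giving every member
# an equal chance of selection and picking no one twice.
participants = random.sample(population, k=10)
print(sorted(participants))
```

Because selection is left entirely to chance, the researcher cannot (consciously or unconsciously) bias who ends up in the sample.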
Reliability
- Another key part of the scientific process is reliability, which refers to how consistent the research study or method of measure is.
- In other words, if the method was used in a similar situation again, would it produce similar results? If the answer is yes then the method is said to be reliable.
As with validity, reliability can be broken down into two types: internal and external
Internal reliability
How consistent is the measure within the research situation itself?
- Researchers need to be sure that all parts of a research study are contributing equally to what is being measured.
This is assessed by -
- Split-half method – the results of one half of the test are compared to the other half. The same or similar results displayed in both halves means that the test has internal reliability.
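The split-half method above amounts to correlating one half of the test with the other. A minimal sketch in Python, splitting into odd- and even-numbered items and adding the Spearman-Brown correction to estimate the reliability of the full-length test - all scores here are invented for illustration:

```python
import math

# Invented scores (1 = correct, 0 = incorrect) for six participants on a 10-item test.
scores = [
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0, 1, 0],
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
]

# Split each participant's test into odd-numbered and even-numbered items.
odd_half = [sum(row[0::2]) for row in scores]
even_half = [sum(row[1::2]) for row in scores]

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r_half = pearson(odd_half, even_half)
# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = (2 * r_half) / (1 + r_half)
print(round(r_half, 2), round(r_full, 2))  # 0.84 0.91
```

A high positive correlation between the two halves indicates that the items are measuring the same underlying thing, i.e. the test has internal reliability.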
External reliability
How consistent is the measure when it is repeated?
- Researchers need to be sure that if the study was repeated on the same participants over a period of time or if it was used to test others in the same situation it would prove reliable.
It is tested in the following ways -
1. Test-retest method – the participants are given the same test on two separate occasions. If the same or similar results are found then the test has external reliability.
2. Inter-observer method – the researcher compares their estimation of behaviour (rating) to the independent rating given by another observer. If their estimations are the same or similar then the test is said to have external reliability.
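The inter-observer method above can be sketched as a simple percentage-agreement check; the behaviour categories and ratings below are invented for illustration:

```python
# Invented ratings from two observers across ten observation intervals.
observer_1 = ["aggressive", "calm", "calm", "aggressive", "calm",
              "calm", "aggressive", "calm", "calm", "calm"]
observer_2 = ["aggressive", "calm", "aggressive", "aggressive", "calm",
              "calm", "aggressive", "calm", "calm", "calm"]

# Proportion of intervals on which the two observers agree.
agreements = sum(a == b for a, b in zip(observer_1, observer_2))
agreement = agreements / len(observer_1)
print(agreement)  # 0.9 - the observers agree on 9 of 10 intervals
```

A common rule of thumb is that agreement of about 0.8 or higher indicates acceptable inter-observer reliability; where agreement is lower, observers are usually retrained and the behaviour categories made clearer.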
Improving reliability
- Making sure that researchers use standardised instructions and procedures. This is achieved by making sure that they are well trained and have been given clear and consistent instructions or observation criteria, and that the collection and recording of all data is also standardised
- Ensuring that more than one measurement is taken from each participant so that an average score can be obtained. This reduces anomalous scores, which may be due to fatigue or boredom
- Conducting a pilot study to check that everything works before carrying out the full investigation
- Reducing human error by checking that the data has been recorded and interpreted correctly
Reporting psychological investigations
- The purpose of writing a psychological report is to communicate to others within the scientific community what you did, why you did it, how you did it, what you found and what you think it means.
- Reports are intended to be read by someone who knows nothing about your experiment. They will usually see the title first, then maybe read the abstract and only then read the bulk of the report.
- There is no single style which is more ‘correct’ than any other. However, there are widely accepted standards and conventions which should be followed. All the pages must be numbered, and the report must be written in the past tense and in the third person - do not say ‘I’ or ‘we’.
- The main sections of the report should be as follows, in this order: Title, Contents page, Abstract, Introduction, Procedure, Results, Discussion, References, Appendices