Research Methods Flashcards
Descriptive statistics
Measures of central tendency
Measures of dispersion
Graphs
Tables
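As a rough illustration only (the scores and variable names below are invented for the example), the common measures of central tendency and dispersion can be computed with Python's built-in statistics module:

```python
# Minimal sketch: descriptive statistics with the standard library.
import statistics

scores = [4, 7, 7, 9, 12, 15, 15, 15, 18]  # invented example scores

print(statistics.mean(scores))    # central tendency: mean
print(statistics.median(scores))  # central tendency: median
print(statistics.mode(scores))    # central tendency: mode
print(statistics.stdev(scores))   # dispersion: standard deviation
```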
Measures of dispersion
Measure of variation in a set of scores
Range
A calculation of dispersion, i.e. variation.
Calculated by subtracting the lowest score from the highest score and adding 1.
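A minimal sketch of the range calculation described above, using invented scores:

```python
# Range = highest score minus lowest score, plus 1.
scores = [3, 8, 12, 15, 21]  # invented example scores

range_value = max(scores) - min(scores) + 1
print(range_value)  # 19
```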
Content analysis definition
Content analysis is a type of observational research used to indirectly study behaviour by examining media. The aim is to summarise and describe the communication in a systematic way to make valid inferences and hence, draw conclusions.
Coding in content analysis
Coding is the initial stage of content analysis. It involves converting qualitative data into quantitative data. Coding involves counting the number of times a particular word, phrase or instance appears in order to categorise the information into meaningful units.
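A hypothetical sketch of the coding step: counting how often each pre-defined coding unit appears in a piece of media text. The transcript and category words are invented for illustration.

```python
# Count occurrences of pre-defined coding categories in a transcript.
from collections import Counter

transcript = "she said he shouted she whispered he shouted"  # invented text
categories = ["said", "shouted", "whispered"]                # invented coding units

words = transcript.lower().split()
counts = Counter(w for w in words if w in categories)
print(counts)  # Counter({'shouted': 2, 'said': 1, 'whispered': 1})
```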
Thematic analysis
Thematic analysis involves generating qualitative data. It involves identifying a theme, which refers to any idea, explicit or implicit, that is recurrent, e.g. 'men are presented as better than women'. The analysis is likely to be descriptive and supported by direct quotes to illustrate the themes. Some themes may only emerge after coding.
Content analysis strength
Able to get around ethical issues normally associated with psychological research, for example informed consent and privacy. Most material, e.g. films and TV, already exists within the public domain, so there are no issues obtaining permission.
Content analysis strength:
Communications of a more sensitive nature, e.g. text conversations or diaries, are high in external validity, and therefore the conclusions drawn are likely to generalise.
Content analysis weakness
Causality cannot be established as it merely describes the data.
Content analysis weakness
Content analysis may lack objectivity where thematic analysis is involved. This is because it is down to the subjective interpretation of the researcher and so conclusions may not be valid.
Internal reliability
The extent to which a measure is consistent within itself, i.e. whether different questions on the same test measure the same thing.
External reliability
The extent to which a measure is consistent from one use to another, i.e. does a test arrive at the same result when used on a different day?
Test-retest method
Used for questionnaires and interviews.
By assessing the same person on two separate occasions, this shows the extent to which the measure is externally reliable.
If the test is reliable, the results obtained each time should be the same or similar.
The two sets of scores should be correlated; a statistical test is used to see whether the correlation is significant. If it is, the measure is reliable.
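A sketch of the test-retest check, assuming SciPy is available. The scores are invented; a real study would use participants' questionnaire scores from the two occasions.

```python
# Correlate scores from two testing occasions and check significance.
from scipy.stats import pearsonr

occasion_1 = [12, 18, 25, 30, 22, 15, 28]  # invented scores at time 1
occasion_2 = [11, 20, 24, 31, 21, 16, 27]  # same people, time 2

r, p = pearsonr(occasion_1, occasion_2)
print(r, p)
# A strong, significant positive correlation (p below the chosen
# significance level) suggests the measure is externally reliable.
```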
Inter-observer reliability
The extent to which there is agreement between two or more observers. If behavioural categories are not properly operationalised, judgements about what to record lend themselves to subjectivity, and the records may be inconsistent.
The data of the two observers should be correlated.
Use a statistical test to assess significance.
If the correlation is significant, then inter-observer reliability is high.
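A sketch of the inter-observer check, again assuming SciPy is available. The tallies are invented: counts for each behavioural category recorded by two observers watching the same session.

```python
# Correlate the two observers' category tallies and test significance.
from scipy.stats import spearmanr

observer_a = [5, 9, 2, 7, 4]  # tallies per behavioural category, observer A
observer_b = [6, 8, 2, 7, 3]  # tallies per behavioural category, observer B

rho, p = spearmanr(observer_a, observer_b)
print(rho, p)
# A significant positive correlation indicates high inter-observer reliability.
```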
Internal validity
The extent to which a researcher has measured what they intended to measure, i.e. is the effect on the DV caused by the manipulation of the IV?
External validity
The extent to which the findings can be generalised to other settings, populations and eras, outside of the investigation.
Ecological validity
SETTING
Refers to the extent to which the findings can be generalised from one setting to another, in particular to 'everyday life'.
Temporal validity
HISTORICAL
The extent to which findings can be generalised to other historical times and eras, i.e. do they remain true over time?
Assessing validity:
Face validity
Does a test appear, 'on the face of it', to measure what it is supposed to measure, i.e. does it look like it does?
Assessing validity
Concurrent validity
The extent to which results obtained from a test are similar to, or match, those obtained from a pre-existing, well-established, similar measure.
Improvements of validity
Experiments: single blind procedure, double blind procedure, randomisation of the study design and of the selection of participants.
Questionnaires: keep respondents anonymous to avoid social desirability bias.
Observations: remain covert; this avoids participant reactivity and demand characteristics, so the behaviour recorded is natural and valid.
Type 1 error
When the null hypothesis is rejected and the alternative hypothesis accepted when it should have been the other way around (a false positive).
More likely when the level of significance (p value) is too lenient, i.e. too high.
Type 2 error
When the null hypothesis is accepted and the alternative hypothesis rejected when it should have been the other way around (a false negative).
More likely when the level of significance (p value) is too stringent, i.e. too low.
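A rough simulation (not from the flashcards, data generated at random) showing why the significance level sets the Type 1 error risk: when the null hypothesis is actually true, roughly an alpha proportion of tests still come out "significant". Lowering alpha reduces this risk, but makes Type 2 errors more likely.

```python
# Simulate many experiments where the null hypothesis is true and count
# how often it is wrongly rejected at alpha = 0.05.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 2000
false_positives = 0

for _ in range(n_experiments):
    # Both groups drawn from the same population, so the null is true.
    group_a = rng.normal(loc=50, scale=10, size=30)
    group_b = rng.normal(loc=50, scale=10, size=30)
    _, p = ttest_ind(group_a, group_b)
    if p < alpha:
        false_positives += 1  # Type 1 error: null wrongly rejected

print(false_positives / n_experiments)  # roughly 0.05
```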
Sections of a scientific report
Abstract: a short summary including the major elements (aims, hypothesis, method, procedure, results, conclusion).
Introduction: a review of past research, theories and concepts relevant to the study. The aims and hypothesis are stated.
Method: described in enough detail for the study to be replicated.
Design: experimental design, observation, interview etc., with justification.
Sample: sampling technique used, target population, how many participants.
Apparatus: materials used for assessment.
Ethics: how ethical issues were addressed.
Procedure: briefing, standardised instructions and debriefing given.
Results: summary of the key findings using descriptive statistics (tables, measures of central tendency and dispersion, graphs).
Qualitative data: themes and categories.
Discussion: what the study tells us about psychological theory, wider implications and possible improvements.
References: full referencing of the materials used.