Part 1 Flashcards
What is content analysis?
An observational study in which behaviour is observed indirectly through visual, written or verbal material.
What is the process of conducting content analysis?
Sampling the content: deciding how much of the material is needed.
Coding the data: using behavioural categories/a checklist (see the sketch below).
Representing the data: either quantitatively (counting how many times a behaviour occurs) or qualitatively (describing the behaviour).
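A minimal sketch of the quantitative coding step, assuming a hypothetical keyword checklist; the categories, keywords and transcript are all invented for illustration, and a real coding scheme would be defined by the researchers:

```python
# Hypothetical example: tally how often each behavioural category occurs
# in the sampled material, using a simple keyword checklist.
from collections import Counter

checklist = {
    "aggression": ["shout", "hit", "threat"],
    "affection": ["hug", "smile", "praise"],
}

transcript = [
    "He began to shout and made a threat.",
    "She gave him a hug and a smile.",
    "More shouting followed.",
]

tallies = Counter()
for line in transcript:
    text = line.lower()
    for category, keywords in checklist.items():
        # One tally per keyword occurrence in this line.
        tallies[category] += sum(text.count(word) for word in keywords)

print(dict(tallies))  # {'aggression': 3, 'affection': 2}
```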
Evaluate content analysis (S+L)
Strengths:
- High ecological validity, since the material analysed usually comes from real communications.
- If the source is retained, the content analysis can be replicated and therefore tested for reliability.
Weaknesses:
- Observer bias: different observers may interpret the same material differently.
- Culture bias: interpretation is influenced by the language and meanings of the observer's own culture.
How is thematic analysis conducted?
- Read the data transcript, reread it and try to understand its meaning.
- Break the data into meaningful units.
- Assign a label to each unit; these labels are subcategories.
- Combine subcategories into larger categories.
- The categories can be validated by collecting more data and applying it to them; the new data should fit the categories well (see the sketch below).
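A minimal illustration of the resulting structure, using invented units and labels; thematic analysis itself is a qualitative judgement, not an automated procedure:

```python
# Hypothetical example: meaningful units are labelled with subcategories,
# which are then combined into larger categories (themes). All invented.
units = {
    "I worry before every exam": "anticipatory anxiety",
    "My mind goes blank in the hall": "performance anxiety",
    "Revising with friends helps": "social support",
}

themes = {
    "exam stress": {"anticipatory anxiety", "performance anxiety"},
    "coping": {"social support"},
}

# Validation step: a newly collected unit should fit an existing theme well.
new_label = "anticipatory anxiety"
fits = any(new_label in subs for subs in themes.values())
print(f"new unit fits existing themes: {fits}")  # True
```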
How can observational techniques be assessed for reliability?
Inter-observer reliability: having two or more observers make separate recordings of the same focus and then comparing/cross-analysing each other's data.
- The extent to which observers agree on an observation is inter-observer reliability, calculated as a correlation coefficient for the pairs of scores. A result of +.80 or more indicates good reliability (see the formula below).
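A worked form of the statistic, assuming Pearson's r is applied to the two observers' paired tallies:

```latex
r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}
         {\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}
```

Here x_i and y_i are the two observers' scores for item i, and x̄ and ȳ are their means; r of +.80 or more counts as good agreement.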
How can the reliability of behavioural categories in observational techniques be improved?
Better operationalisation of the categories.
Clearer distinctions between categories, as some may overlap.
Observers need practice so they can categorise behaviour more quickly.
What ways can you increase the reliability of self-report techniques?
Test-retest: the same test or interview is given to the same participants on different occasions to see if they produce the same results; the two sets of scores should show a high correlation coefficient (see the sketch after this list).
Inter-interviewer reliability: comparing a person's answers from one interview with their answers to the same interviewer a week later (interviews only).
Reduce ambiguity: questions that are too vague may be interpreted differently by different respondents, so rewrite them.
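A minimal sketch of the test-retest check, using made-up questionnaire scores for ten participants across two sittings:

```python
# Hypothetical test-retest data: one questionnaire score per participant,
# administered twice a few weeks apart. All numbers are invented.
from scipy.stats import pearsonr

first_sitting  = [32, 28, 41, 35, 27, 39, 30, 44, 25, 36]
second_sitting = [30, 29, 43, 34, 25, 40, 31, 42, 27, 35]

r, p = pearsonr(first_sitting, second_sitting)
print(f"test-retest r = {r:.2f}")  # r of +.80 or more suggests reliability
```

The same correlation check underlies concurrent validity below: replace the second list with scores from a previously validated measure of the same topic.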
How can validity be assessed?
Face validity: the extent to which the measure looks, on the face of it, like it is measuring what it is intended to measure.
Concurrent validity: the current measure is compared with a previously validated measure of the same topic; the two sets of scores should show a high correlation coefficient.
How can validity be improved?
Questionnaires: revise questions so they relate more closely to the topic.
Concurrent validity: remove questions which seem irrelevant to what is being measured.
Internal/external validity: improve the research design, for example by running a pilot study or using a double-blind procedure.
What are the features of science?
Empirical methods: knowledge is gained from observable, measurable experimentation, looking at the facts through direct testing rather than belief.
Objectivity: not being influenced by the researcher's expectations; data are collected systematically and measurably, which is why a high level of control is needed.
Replicability: procedures are recorded carefully and standardised so that others can repeat the study and verify the results, increasing validity.
Theory construction: developing explanations that make sense of the facts.
Hypothesis testing: a testable, measurable hypothesis is derived from the theory; testing such hypotheses is what allows theories to be falsified.
What is falsifiability?
The possibility that a hypothesis or theory can be proven wrong; a scientific claim must be testable in a way that could, in principle, show it to be incorrect.
What are paradigms?
A shared set of assumptions, within a given period, about a subject matter and the methods appropriate to its study. Paradigms shift over time as new evidence is presented and previous assumptions are abandoned because they have been falsified.
What is a Type I error?
Occurs when a researcher rejects a null hypothesis that is actually true (a false positive), usually because the significance level was too lenient (too high, e.g. 0.10).
What is a Type II error?
Occurs when a researcher accepts a null hypothesis that should have been rejected (a false negative), usually because the significance level was too stringent (too low, e.g. 0.01).
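A minimal simulation, with made-up population values, showing that when the null hypothesis really is true, a test at the 0.05 significance level commits a Type I error on roughly 5% of runs:

```python
# Simulate many studies in which the null hypothesis is true, and count
# how often a t-test at alpha = 0.05 wrongly rejects it (Type I error).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
runs = 10_000
false_positives = 0

for _ in range(runs):
    # Null is true: the sample genuinely comes from a mean-100 population.
    sample = rng.normal(loc=100, scale=15, size=30)
    _, p = stats.ttest_1samp(sample, popmean=100)
    if p < alpha:  # rejecting a true null = Type I error
        false_positives += 1

print(f"Type I error rate: {false_positives / runs:.3f}")  # ~0.05
```

Raising alpha (e.g. to 0.10) makes Type I errors more common; lowering it (e.g. to 0.01) makes Type II errors more likely instead.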