Research Methods Flashcards (also done on paper cards!)

1
Q

What is a hypothesis?

A

A precise testable statement

2
Q

What is a variable?

A

Anything that can change within an investigation

3
Q

What is an independent variable?

A

The variable manipulated by the researcher (the variable that changes between conditions)

4
Q

What is a dependent variable?

A

The variable measured by the researcher (any changes in the DV should be caused by the IV)

5
Q

What are order effects?

A

A change in participants' performance or attitude to a task caused by the order in which the tasks are completed

6
Q

What are participant variables?

A

Differences in characteristics between participants, e.g. IQ

7
Q

What are standardised procedures?

A

Formalised procedures used for all participants in a research study

8
Q

What is random allocation?

A

An attempt to control for participant variables in an independent groups design; it ensures that each pp has the same chance of being in one condition as any other
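The allocation described above can be sketched in a few lines of Python — an illustration only, assuming numbered participants and two hypothetical conditions A and B:

```python
import random

def randomly_allocate(participants, conditions=("A", "B")):
    """Shuffle the participants, then deal them out round-robin,
    so each pp has the same chance of ending up in any condition."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

groups = randomly_allocate(range(1, 21))
print({c: len(ps) for c, ps in groups.items()})  # 10 pps per condition
```

Because the pool is shuffled before dealing, group sizes stay equal while membership is left to chance.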

9
Q

What is counterbalancing?

A

An attempt to control for order effects; half of the pps experience the conditions in one order and the other half in the opposite order
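That split can be written as a tiny Python helper — an illustration only, with hypothetical condition labels A and B:

```python
def counterbalance(participants):
    """Give the first half of the pps the conditions in order (A, B)
    and the second half the opposite order (B, A), so order effects
    are balanced across the two conditions."""
    half = len(participants) // 2
    return {p: ("A", "B") if i < half else ("B", "A")
            for i, p in enumerate(participants)}

orders = counterbalance(["pp%d" % n for n in range(1, 11)])
print(orders["pp1"], orders["pp10"])  # ('A', 'B') ('B', 'A')
```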

10
Q

What are standardised instructions?

A

Formalised instructions used for all participants in a study

11
Q

What is randomisation?

A

Use of chance in order to control for the effects of bias when designing materials and deciding the order of conditions

12
Q

What is objectivity?

A

An unbiased view taken by the researcher of the results/data collected in the study

13
Q

What is raw data?

A

Unmanipulated data (data straight from the experiment)

14
Q

What is an anomalous result?

A

A result that does not fit the pattern of the other data - an outlying result

15
Q

What is a pilot study?

A

A small-scale trial run; it will highlight extraneous variables and any other issues with the design

16
Q

What is a confederate?

A

An individual in a study who is not a real pp but has been instructed how to behave by the investigator/experimenter. A confederate can even act as the IV

17
Q

What are the types of hypothesis?

A

one tailed / directional
two tailed / non-directional
null

18
Q

What is content analysis used for?

A
Analysing qualitative data (e.g. books or emails), looking for patterns in the language, how ppl interact or how they are represented.
These patterns may be called behavioural categories or themes (in which case it is thematic analysis).
19
Q

What is the result of content analysis and how can it be a quantitative method?

A

It is a quantitative method, as it can produce numbers and percentages.
After a content analysis, the researcher may make a statement such as: “27% of HFM programs in May 2011 mentioned Lady GaGa compared to 8% in 2009.”
The counting has two purposes:
-To make the analysis more objective
-To simplify the detection of trends
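The counting step can be sketched in Python — a minimal, hypothetical example where each coded unit has already been assigned a category label:

```python
from collections import Counter

def tally_categories(coded_units):
    """Count how often each category occurs and convert the counts
    to percentages, which makes trends easier to spot."""
    counts = Counter(coded_units)
    total = len(coded_units)
    return {cat: (n, round(100 * n / total, 1))
            for cat, n in counts.items()}

# Hypothetical coding of five units from a programme transcript
coding = ["celebrity", "music", "celebrity", "sport", "celebrity"]
print(tally_categories(coding))
# {'celebrity': (3, 60.0), 'music': (1, 20.0), 'sport': (1, 20.0)}
```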

20
Q

What are the stages of content analysis?

A

As with observation, the researcher has to choose:
a sampling method, how to code the data and
how to represent the data.
But rather than actual ppl, they observe books, films, adverts, photos etc. (any artefact that people have made)

21
Q

What various sampling methods can be used with content analysis?

A

For books, e.g.: every nth page (akin to time sampling); if comparing books, should a random selection be used, or a particular type of book to compare? etc.

22
Q

What various coding for data can be used in content analysis?

A

Coding means using behavioural categories.

The researcher creates a list of behavioural categories, then counts how many times each occurs

23
Q

How can data be represented in content analysis?

A

Qualitatively and quantitatively.
You can record instances by either:
counting the number of instances (quantitative), or
describing the examples in each category (qualitative)

24
Q

What are the strengths of content analysis? (AO3)

A

Often high ecological validity: based on observations of real behaviour/communications, e.g. newspapers, books.
Often no ethical issues: the data already exists in society, so there is no need for consent.
Can be replicated easily by others if they access the same books etc., enabling researchers to check reliability.
Flexible: can produce qualitative or quantitative data, depending on what the topic requires

25
Q

What are the weaknesses of content analysis? (AO3)

A

Observer bias reduces objectivity and validity: different observers may interpret behavioural categories differently.
Ppl are studied indirectly, so the data is analysed outside the context in which it was produced and may be misinterpreted.
Often culturally biased, as interpretations of verbal or written content are affected by the language and culture of the observer and the behavioural categories used.

26
Q

What are the aims of thematic analysis?

A

Impose order on the data
Ensure the order represents the pps’ view
Ensure the order emerges from the data, not the researcher’s own bias
Summarise the data so hundreds of pages are reduced down
Enable themes to be identified and general conclusions drawn.
(There is no one set method to use)

27
Q

What is the general procedure/principles of thematic analysis? First three stages

A
  1. View the data several times; objectively try to understand the meaning and perspective of the pps. No notes made.
  2. Break the data into units; each small unit should convey meaning, e.g. sentences, phrases etc.
  3. Label/code each unit; the labels are the initial categories. Each unit may get multiple labels.
28
Q

What is the general procedure/principles of thematic analysis? 4th and 5th stage

A
  4. Combine simple labels/codes into larger categories. Identify examples of each of these larger categories.
  5. Check the data by accessing a new set of data and applying the categories. The categories should fit well if the two data sets cover similar topics.
29
Q

What are case studies?

A

Involve the detailed study of a single individual, institution or event.
Often used to study unusual behaviours, or behaviours in more detail.
Evidence-based research, gathering info from a range of methods/sources: the individual and the ppl around them.
Case studies aim to be scientific and use objective, systematic methods to provide a rich record of human experience, but are hard to generalise from

30
Q

How are case studies carried out?

A

Many techniques may be used to produce a case history. Ppl may be given:
questionnaires or interviews about their experiences; be observed during daily life; be given psychometric tests (IQ, personality etc.); or take part in experiments to test what they can/cannot do.
Normally longitudinal, following an individual or group over an extended time period

31
Q

How can findings from case studies be organised qualitatively?

A

Findings from long answers in questionnaires and interviews might be organised (using content analysis) into themes representing the pps’ emotions, abilities etc.

32
Q

How can findings from case studies be organised quantitatively?

A

Might log scores to psychometric tests or data from observations

33
Q

What are the strengths of using case studies? (AO3)

A

Produce rich/in-depth info, preferable to superficial methods (e.g. questionnaires) for getting the pps’ outlook.
Provide insights into the complex interactions of many factors (rather than an experiment looking at only one IV/DV).
Can study rare instances, e.g. children locked in basements suffering privation (Genie), which it would be unethical to create for an experiment.
Useful for generating ideas for further study, or for disproving a theory.

34
Q

What are the weaknesses of using case studies? (AO3)

A

Difficult to generalise the findings, as each case is unique.
Often rely on accounts from pps and family, which are subjective and prone to social desirability bias and memory decay, as they are retrospective.
Ethical issues, e.g. in confidentiality/consent.
Being studied over a long period of time, in many ways, may cause harm (Genie).

35
Q

What is reliability?

A

Consistency: can we depend on a measurement? If we repeat a study, measurement, test etc., can we be sure of getting the same results?
If we get different results, the method is not reliable.

36
Q

What is validity?

A

Legitimacy: does the data represent reality?
Are the findings of a study a true picture of how people behave, or an artefact of the research (e.g. due to a bias or misinterpretation)?

37
Q

How can reliability be tested in observation methods?

A

Test-retest: repeat the observation (e.g. re-watch a video recording); do we tally the same behaviours?
Inter-observer reliability: the extent to which observers agree, calculated as a correlation coefficient for pairs of scores. A result of 0.80 or more suggests good inter-observer reliability.
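The correlation coefficient for two observers' tallies can be computed with Pearson's r — a minimal Python sketch using made-up tally data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two observers' tallies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical tallies of five behaviour categories by two observers
obs1 = [4, 7, 2, 9, 5]
obs2 = [5, 6, 2, 8, 5]
r = pearson_r(obs1, obs2)
print(round(r, 2), "good" if r >= 0.80 else "poor")  # 0.96 good
```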

38
Q

How can reliability be improved in observation methods?

A

To improve inter-observer reliability we may need to make the behaviour categories clearer, non-overlapping etc.
OR give the observers practice time

39
Q

How can the reliability of self report methods be assessed?

A

Test-retest reliability: give a questionnaire to a group, then wait and give it to the same group again.
If the outcomes of both tests are similar, it is reliable.
Inter-interviewer reliability: researchers could assess reliability by interviewing the same person twice with a gap in between and comparing the responses, or by using two interviewers and assessing the consistency of the responses.

40
Q

How can reliability be improved in self report methods?

A
  • Pilot study to check interpretation, ensuring the questions are clear and unambiguous; or add more closed questions (harder to misinterpret).
  • Improving the reliability of an interview: use the same interviewer each time. The interviewer should be properly trained; using a structured interview will also improve reliability.
41
Q

How can reliability be assessed in experimental methods?

A

Check whether the method for measuring the DV is consistent (are the observations or self-report methods consistent?).
Standardised instructions and methods will aid this.

42
Q

How can reliability be improved in experimental methods?

A

Check the methods used are consistent.
The same procedure is often repeated with different pps, so it is important it is done the same way each time.
If not, we cannot compare the responses

43
Q

What are the two types of validity?

A

Internal: inside the study; are we measuring what we think we are (the effect of manipulating the IV on the DV)?
External: concerned with generalising the results beyond the research setting, e.g. to other people, places, times etc

44
Q

What factors may affect internal validity?

A
Investigator effects
Demand characteristics 
Confounding variables
Social desirability bias 
Poor behavioural categories
45
Q

What are examples of external validity?

A

Ecological validity – generalising results to real life / additional settings
Population validity – generalising results to other ppl
Temporal validity – generalising results to other historical periods

46
Q

How do we assess validity?

A

Face Validity – does the measure look like it measures what it claims to?
Concurrent Validity – comparing the method to a previously validated one.

47
Q

How can validity be improved?

A

Poor Face Validity – revise questions or behavioural categories to be more on topic.
Poor Concurrent Validity – remove irrelevant questions; look for ways to make the test more similar to ones already validated.
Poor Internal/External Validity – improve the design by using single/double blind procedures and realistic tasks, keeping results anonymous, controlling for confounding and extraneous variables etc.

48
Q

What are the key features of a science?

A

Empirical methods, objectivity, control, replicability, theory construction, hypothesis testing

49
Q

What are empirical methods?

A

Gaining info via direct sensory experience (direct observation or experiment), not from unfounded beliefs/speculation.
Enables us to make claims about the truth of a theory.
-We should make our own measurements, not rely on intuition, personal opinions or beliefs.

50
Q

What is objectivity?

A

Empirical data should always be objective,
i.e. not affected by the expectations of the researcher (their opinions or biases).
Objectivity is increased by carefully controlled methods.
However, it is arguably impossible to be truly objective, as researchers go into an experiment with an idea of what they expect to find.

51
Q

Why is control a key feature of a science?

A

We aim to control as many factors as possible in research so that we are able to make cause-and-effect statements.
The more control we have over possible confounding variables, the more likely it is that only our IV is affecting the DV and that the study is valid.

52
Q

Why is replicability a key feature of a science?

A

Replicability demonstrates the validity of a study: if we gain the same outcome again, it affirms the truth of the first result. We can be more confident in results if they are replicable.
Replicability guards against fraud and enables us to check whether a one-off (chance) result arose from something in how the research was done.
To allow repetition, scientists need to record their procedures carefully so another person can repeat and verify their results.

53
Q

What is theory construction?

A

Theory: a general model of how things fit together.
We construct theories to explain the facts we find; they help explain observations and make predictions.
Both inductive and deductive methods can be used to create theories.
Each theory represents the best knowledge we have on a topic at that time, but many are disproved later.

54
Q

What is hypothesis testing?

A

Once a theory is produced, it must be subjected to multiple tests to ascertain its validity.
We test the validity of a theory by creating hypotheses from it and testing those hypotheses with empirical methods.
We will either find replicable evidence and can assume the theory is valid, or the theory is shown to be incorrect and needs amending or replacing.

55
Q

What is the scientific method of induction?

A

Starts with observing a phenomenon:
this leads to developing a hypothesis which can be tested empirically, which can lead to new hypotheses; eventually we may construct a theory from this.

56
Q

What is the scientific method of deduction?

A

A hypothesis is developed to test a theory; research is carried out to test the hypothesis.
Assuming the research is valid and can be replicated, the theory is either supported, adjusted or rejected.
More research can be carried out on the same or another hypothesis to test the theory.

57
Q

What is falsifiability?

A

We must create hypotheses that we try to falsify. Theories that withstand repeated attempts at falsification are considered stronger. We still refrain from saying ‘this proves a theory’; rather we say ‘it offers support for it’, as the theory may simply not have been proven wrong yet.

58
Q

What are paradigms?

A

In a science, researchers hold a shared set of assumptions/methods (a paradigm).
Kuhn suggested psychology lacks a universally accepted paradigm, so should be kept separate from the natural sciences.

59
Q

What is a paradigm shift?

A

Kuhn proposed two main phases of a science.
Normal science – one theory is dominant (with occasional challenges).
Revolutionary shift – eventually disconfirming evidence accumulates until the theory can no longer be maintained and is overthrown. This triggers a paradigm shift.