Social - Methods Flashcards
what is a survey, and what can surveys include?
Explain the difference between a structured and unstructured survey:
A survey is a self-report method used to gather information.
They may include: polls, mailed questionnaires or face to face/telephone interviews
structured - has a specific set of pre-set questions rather than just a general set of aims
unstructured - only the topic or general aim is pre-set; the interviewer develops the questions as they go and asks them very informally, which results in answers which are very open.
what are the guidelines for writing a good questionnaire?
avoid complex questions or technical/emotive language and negative terms
this can be checked by doing a PILOT STUDY on a few people first to make sure the questions are clear
decide when to use open/closed ended questions - alternatives include rating scales such as the Likert scale
include filler questions - these will distract participants from the real aim
decide how you are going to ask the questions, e.g. face to face, by post etc.
consider your sample in terms of size and representativeness - sometimes the response rate is low, so you will need to be targeted
avoid bias - as in leading questions or social desirability
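The Likert scale mentioned above is typically scored by assigning numbers to agreement levels, with some items reverse-scored. A minimal sketch of that idea (the item wording and all names here are invented for illustration):

```python
# Hypothetical 5-point Likert scale ("strongly disagree" .. "strongly agree").
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def score(answer, reverse=False):
    """Score one Likert response; reverse-scored items guard against
    response bias (ticking the same column all the way down)."""
    value = SCALE[answer]
    return 6 - value if reverse else value

print(score("agree"))                # 4
print(score("agree", reverse=True))  # 2
```

Summing the per-item scores across a questionnaire then gives a single attitude score per respondent.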
what is a questionnaire?
a set of written questions given to the participant to answer on a specific topic
what is a closed ended question?
a question which gives quantitative data, like yes or no - this involves set answers and allows no expression of opinions or views
give 2 strengths of a closed ended question:
closed questions can be analysed quantitatively. For example, a survey of men and women might ask them whether they have suffered sex discrimination, using a yes/no format. This will allow the researchers to conclude that —–% of men and —-% of women report being discriminated against because of their gender. From this, trends and patterns can be plotted and analysed to see differences in gender, age, class etc., which either supports or contradicts psychological theories and studies.
- closed questions are the same for all respondents, as is the set of answers, so if the meaning is the same for everyone then RELIABILITY is maintained, replication is possible, and they are quick and usually easy to complete.
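The quantitative analysis of yes/no data described above can be sketched in a few lines of Python (the responses here are invented purely for illustration):

```python
# Hypothetical yes/no answers to "Have you suffered sex discrimination?"
responses = {
    "men":   ["no", "no", "yes", "no", "no", "no", "no", "yes", "no", "no"],
    "women": ["yes", "no", "yes", "yes", "no", "yes", "no", "yes", "yes", "no"],
}

def percent_yes(answers):
    """Percentage of respondents answering 'yes'."""
    return 100 * answers.count("yes") / len(answers)

for group, answers in responses.items():
    print(f"{group}: {percent_yes(answers):.0f}% report discrimination")
```

Because every respondent chooses from the same fixed answers, this kind of tally is trivial to compute and to compare across groups - which is exactly the strength claimed above.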
give 2 weaknesses of a closed question:
closed questions force a respondent to choose a set of answers, when they might not agree with any of the choices. If they cannot say what they want then the results will not be realistic and valid.
The choice of answers for closed questions may mean different things to different people, which would produce invalid data that is unrealistic in relation to the topic focus.
what is an open question?
it is a question which gives qualitative data and allows respondents to give their opinions, views and comments on the situation or event.
give 2 strengths of an open question:
open questions produce qualitative data involving extended answers, which tend to be more realistic and valid because major themes are identified and measured in relation to the hypothesis
open questions can be interpreted by respondents in their own way, therefore more valid data can be obtained.
give 2 weaknesses of an open question:
- open questions can be difficult to analyse because the answers are likely to be detailed and also different from one another, SO replication is not really possible.
- because the data from open questions is qualitative, averages cannot be calculated and the data cannot be displayed in tables or graphs, therefore trends and patterns cannot be found and used to support existing theories and studies.
what is an interview:
a face to face situation involving a series of questions; interviews allow the opportunity to expand on or clarify the questions.
explain the differences between the different types of interviews:
STRUCTURED INTERVIEW - has predetermined questions delivered face to face. Usually involves closed ended questions, for example: did you like the study? Yes/No
SEMI-STRUCTURED INTERVIEW - is more informal in that new questions are developed as you go along, similar to what a GP might do in a consultation. He or she starts with a set of pre-determined questions, but further questions are developed in response to your answers (also referred to as the clinical method). SO the questions are a mixture of open and closed ended. Kohlberg (1978) interviewed boys about an imaginary situation and then asked them a set of questions about the situation and their opinions on it.
UNSTRUCTURED INTERVIEWS - are very informal; they start with a topic focus or aim and are more like a conversation, with the direction set by those involved. For example, in a setting between a patient and therapist concerning marital problems, they may focus on financial issues rather than a variety of topics like pressure of children, work etc. So the questions are mainly open.
what are some issues to consider when conducting interviews?
Interview schedules must be planned out well in advance so that all areas are fully operationalised and an aim or hypothesis is established.
The data can either be recorded or written down, BUT with every format the full conversation must be fully TRANSCRIBED after the interview. This can be very time consuming.
In order for the interview to be successful and for the interviewer to gather the data they need, several steps are recommended: the schedule is seen by all those involved beforehand so that everyone is ready and there are no surprise questions; the format, especially how it will be recorded, must be agreed in advance; and the interviewee must see the full transcript of the interview afterwards and agree that it is what was said or occurred.
what is the checklist criteria which must be completed when carrying out an interview:
Have you decided whether to use a structured, unstructured or semi-structured interview?
Have you decided how to record the interview (ie, written, tape-recorded etc.)?
Have you drawn up the interview schedule?
Have you included a question for each area in which you are interested?
Have you included questions requesting necessary personal data?
Have you included an explanation, so that the interviewee knows what is expected?
Have you prepared the interviewee appropriately beforehand, including obtaining permission?
Have you prepared all the materials, such as, if appropriate, a record sheet for the answers?
Have you made sure that you will gather both qualitative and quantitative data?
give 2 strengths of interviews:
- they enable a large amount of descriptive data to be collected, which may give a better picture of what is going on in real life, so they are VALID to what is being studied
interviews give access to information which is not available through direct observation, such as what individuals think and feel about certain topics which again makes it a more valid method
give 2 weaknesses of interviews:
in interviews people often don't know what they feel or do, and therefore tend to answer a question in a way that seems most representative of 'good' behaviour; this produces social desirability, a form of bias, and reduces the validity of the results.
in interviews the analysis of the information can be subjective especially if one person is carrying out the research, they may miss important information that others would pick up because of their personal opinion i.e. researcher bias
this occurs when a respondent does not give a genuine answer, but one which depicts them in a more favourable light. So they respond to a question in a way that is seen as desirable rather than what is real. For example, very few respondents would say they agree with segregation or that heterosexuality is the only natural sexuality, because they would be seen in a negative light. However, questionnaires can have built-in 'lie detector' questions; if these are answered in a socially desirable way, the respondent's questionnaire will be dropped from the sample.
Examples of Social Studies:
Adorno et al used a questionnaire linked to prejudice; they developed the "fascism" (F) scale and found people who were more fascist were more prejudiced in their views - this suggests that personality relates to prejudice.
Burger 2009 used questionnaires in his screening process which required the individuals to complete a number of scales/questionnaires; a demographic sheet asking about age, occupation, education and ethnicity; the Interpersonal Reactivity Index; the Beck Anxiety Inventory; the Desirability of Control Scale; and the Beck Depression Inventory.
what is reliability and explain the types:
reliability is whether a study or measure can be replicated to get the same results
Researcher Reliability refers to the extent to which a researcher acts entirely consistently when gathering data for example in observations we can use inter-observer or rater-reliability.
Internal reliability refers to the consistency of the measures used in an investigation, for example whether the items within a psychometric test are consistent with one another.
External reliability refers to the consistency of a measure from one occasion to another for example, the extent to which a psychological test can generate similar results again with the same individual(s).
what is validity and explain the types:
refers to whether the research actually measures what it claims to measure.
The internal validity of an experiment is the extent to which we can be sure that changes to our dependent variable or variables are purely a product of our independent variables. This is why we control all variables other than the independent variable we are interested in (this means the environment, timings etc. will be the same in different conditions).
Experimenter bias, participant variables and demand characteristics also threaten internal validity but can be controlled by standard instructions, specific design and blind conditions.
External is the extent to which we can be sure that our results generalise from the experiment to real life and to other populations beyond the research.
Types:
Ecological validity - refers to the extent to which the environment is realistic of real life. This can include task validity, which is whether the task used was realistic of real life. For example, field experiments are likely to have better ecological validity than lab experiments because they are carried out in a real-life setting. However, to have really good ecological validity the tasks participants have to carry out also need to be similar to those we encounter in real life.
Population Validity - refers to the extent to which the results can be generalised to groups other than the sample, eg, samples of students are not valid for other populations who are not students, as in Asch's experiments.
ways to improve reliability:
The test-retest method involves administering an entire test to a participant, waiting for them to ‘forget’ the questions (which could take several months), and then readministering the test. If the results from both presentations of the test significantly positively correlate then it is a reliable test.
The split-half method involves splitting the psychological test or questionnaire into two parts after the data has been collected and comparing the results from the first half with the second half - this will determine whether items within the same test, eg, questions on a questionnaire, are consistent.
Inter-rater reliability is achieved if researchers perform consistently - their ratings or measurements are the same or similar.
Inter-observer reliability is achieved when more than one observer rates the behaviour and they take an average across raters or check one rater scale with another.
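Both checks above - comparing one rater against another, and averaging across raters - are simple to compute. A sketch with invented observation data (agreement is measured here as the proportion of matching intervals; formal work often uses a correlation or Cohen's kappa instead):

```python
# Hypothetical tallies: two observers rate the same 6 observation
# intervals on a 1-5 aggression scale.
observer_1 = [3, 4, 2, 5, 3, 4]
observer_2 = [3, 4, 3, 5, 3, 4]

# Check one rater against the other: proportion of intervals that match.
agreement = sum(a == b for a, b in zip(observer_1, observer_2)) / len(observer_1)
print(f"agreement = {agreement:.0%}")

# Or take the average across raters, smoothing out individual observer bias.
averaged = [(a + b) / 2 for a, b in zip(observer_1, observer_2)]
print("averaged ratings:", averaged)
```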
Careful design can improve reliability: the use of standardised controls, clear instructions and procedures, and pilot studies.