Midterm Flashcards
measurement process
concept, idea, or construct > measure > observe the idea empirically
quantitative researchers work deductively; qualitative researchers work inductively
measurement process, conceptualization
-the process of specifying what we mean when we use particular terms
-taking a construct/idea and giving it a conceptual (theoretical) definition; a good definition has a clear, explicit meaning
measurement process, operationalization
-translating conceptual definitions into measurable variables or indicators
validity
In social research, an indicator is valid if it measures the concept it intends to measure
content validity
assesses whether a test is representative of all aspects of the construct
face validity
considers how suitable the content of a test seems on the surface. It is similar to content validity, but face validity is a more informal, subjective assessment, hence "face value"
criterion validity
-Evaluates how closely the results of your test correspond to the results of a different, established test (the criterion).
-To evaluate criterion validity, you calculate the correlation between the results of your measurement and the results of the criterion measurement. If there is a high correlation, this gives a good indication that your test is measuring what it intends to measure.
construct validity
evaluates whether a measurement tool really represents the thing we are interested in measuring. A construct refers to a concept or characteristic that can't be directly observed, but can be measured by observing other indicators that are associated with it
reliability
consistency; the quality of a measuring instrument that produces the same values in repeated observations
test-retest reliability
measures the consistency of results when you repeat the same test on the same sample at a different time. You use it when you are measuring something that you expect to stay consistent in your sample
interrater (interobserver) reliability
interrater reliability measures the degree of agreement between different people observing or assessing the same thing.
To measure it, multiple researchers observe or rate the same thing, and then you calculate the correlation between their different sets of results. If all the researchers give similar ratings, the test has high interrater reliability
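For categorical codes, the simplest interrater statistic is percent agreement: the share of observations on which two raters give the same code. A minimal sketch with invented data, assuming two observers coding the same eight behaviors:

```python
# Hypothetical codes from two observers watching the same eight behaviors
rater_a = ["on-task", "off-task", "on-task", "on-task",
           "off-task", "on-task", "on-task", "off-task"]
rater_b = ["on-task", "off-task", "on-task", "off-task",
           "off-task", "on-task", "on-task", "off-task"]

# Count observations where both raters assigned the same code
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
print(percent_agreement)  # 7 of 8 codes match -> 0.875
```

Percent agreement is easy to compute but does not correct for chance agreement; statistics such as Cohen's kappa are used when that correction matters.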
parallel forms reliability
measures the correlation between two equivalent versions of a test. You use it when you have two different assessment tools or sets of questions designed to measure the same thing.
internal consistency
internal consistency assesses the correlation between multiple items in a test that are intended to measure the same construct.
You can calculate internal consistency without repeating the test or involving other researchers, so it's a good way of assessing reliability when you only have one data set.
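A common internal-consistency statistic is Cronbach's alpha, which compares the variance of respondents' total scores with the summed variances of the individual items. A minimal sketch with invented data, assuming three items answered by five respondents:

```python
from statistics import variance  # sample variance

# Hypothetical scores: each inner list is one item, rated by five respondents
items = [
    [3, 4, 3, 5, 4],   # item 1
    [2, 4, 3, 5, 4],   # item 2
    [3, 5, 3, 4, 4],   # item 3
]

k = len(items)
# Each respondent's total score across the three items
totals = [sum(scores) for scores in zip(*items)]

# Cronbach's alpha: (k / (k-1)) * (1 - sum(item variances) / variance(totals))
item_var = sum(variance(scores) for scores in items)
alpha = (k / (k - 1)) * (1 - item_var / variance(totals))
print(round(alpha, 2))
```

Values closer to 1 indicate that the items vary together and so plausibly measure the same construct; a common rule of thumb treats alpha above roughly 0.7 as acceptable.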
primary research
-primary research refers to research that has involved the collection of original data specific to a study
-PR is often carried out with the goal of producing new knowledge
-researchers aim to answer unanswered questions or questions that have not yet been asked
secondary research
-SR involves the summary or synthesis of data and literature that have been written by others
Concept
an abstract idea, notion, or plan