Module 5 Flashcards
Four levels of statistical measurement
Nominal
Ordinal
Interval
Ratio
Nominal Measurement
The lowest in the hierarchy of measurement scales.
Involves labelling or categorizing - the number does not indicate rank or amount, it is simply a means of identifying data.
Ordinal Measurement
Ranks events or objects on some attribute, assigning numbers to each category.
e.g., shortest to tallest, best student to worst student
Interval Measurement
Involves ranking events or variables on a scale in which the intervals between the numbers are equal and the zero value is arbitrarily set and does not have an absolute value.
e.g., the Stanford-Binet Intelligence Scale
Ratio Measurement
The highest form of measurement.
Ratio measurement has a true zero on the scale.
e.g., time, weight, height
Statistical procedures such as calculating means and standard deviations are suitable for ratio-level data.
Descriptive Statistics
Used to describe and synthesize data.
Includes: frequency distributions, measures of central tendency, measures of variability.
Frequency Distributions
A systematic listing of all the values of a variable from the lowest to the highest, with the number of times (frequency) each value was observed.
Often presented in the form of a table, graph, or frequency polygon.
Measures of central tendency
Measures to calculate an average.
Three types: mean, median, mode
Mean
The sum of a set of scores divided by the number of scores.
The most widely used measure of central tendency.
Median
The middle score.
The score of the point in a distribution above which one-half of the scores lie.
Mode
The score that occurs most frequently.
Best used with nominal data such as gender.
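The three measures of central tendency above can be computed with Python's standard `statistics` module; the score list here is a hypothetical example for illustration.

```python
import statistics

# Hypothetical set of ten exam scores
scores = [70, 75, 75, 80, 85, 85, 85, 90, 95, 100]

mean_score = statistics.mean(scores)      # sum of scores / number of scores
median_score = statistics.median(scores)  # middle value of the sorted scores
mode_score = statistics.mode(scores)      # most frequently occurring score

print(mean_score, median_score, mode_score)
```

For this data set the mean is 84, and the median and mode are both 85.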
Measures of Variability
Used to describe the dispersion or the spread of data.
Appropriate for specific kinds of measurement and types of distributions.
Several measures: range, percentile, standard deviation
Range
The difference between the highest and the lowest scores in a distribution.
Percentile
Assigns a score to a specific place within the distribution.
Describes the percentage of cases a given score exceeds.
Standard Deviation
The most commonly used measure of variability.
The average amount that each of the individual scores varies from the mean of the set of scores.
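More precisely, the standard deviation is the square root of the average squared deviation from the mean. A minimal sketch, using a small hypothetical data set:

```python
import math
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical scores; the mean is 5

# Population standard deviation: square root of the mean squared deviation
mean = sum(scores) / len(scores)
variance = sum((x - mean) ** 2 for x in scores) / len(scores)
sd = math.sqrt(variance)

print(sd)                          # 2.0 for this data set
print(statistics.pstdev(scores))   # same value via the standard library
```

(`statistics.stdev` would give the sample standard deviation instead, which divides by n - 1.)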
Bivariate Descriptive Statistics
Allow a researcher to consider two variables together and describe the relationship between the variables.
Shows a statistical relationship between variables.
e.g., correlations and crosstabulations
Correlations
Tell the researcher to what extent the variables are related.
e.g., is there a relationship between smoking and lung capacity?
Correlation coefficient (r)
An index that describes the relationship between two variables.
Possible values range from -1.00 through .00 to +1.00
Positive Correlation
Indicates that high scores on one variable are paired with high scores on the other variable and low scores on one variable are paired with low scores on the other variable.
Negative Correlation
Indicates that low scores on one variable are paired with high scores on the other variable and high scores on one variable are paired with low scores on the other variable.
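The correlation coefficient r can be computed directly from its definition (the covariance of the two variables divided by the product of their spreads). The paired values below are hypothetical, e.g., hours studied and test score:

```python
import math

# Hypothetical paired measurements
x = [1, 2, 3, 4, 5]     # e.g., hours studied
y = [52, 60, 71, 80, 92]  # e.g., test score; rises with x, so r should be strongly positive

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# Pearson r: sum of cross-products of deviations, divided by the
# product of the square roots of the sums of squared deviations
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sx = math.sqrt(sum((a - mean_x) ** 2 for a in x))
sy = math.sqrt(sum((b - mean_y) ** 2 for b in y))
r = cov / (sx * sy)

print(round(r, 3))
```

Here r comes out just under +1.00, a strong positive correlation; reversing the order of y would make it strongly negative.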
Inferential Statistics
Based on the law of probability.
Used to draw conclusions about the population on the basis of data obtained from the sample.
Purposes are to estimate the probability that the sample accurately reflects the population and to test hypotheses about the population.
Should be used when the sample is randomly selected and the measurement scale is at the interval or ratio level.
Sample
Used as a basis for making estimates of population characteristics.
Probability Samples
Selection of sample units by random selection.
Most effective means of securing representative samples.
Sampling Error
The variation in the statistical values that different samples of the population may present.
Affects the statistical probability that the sample will accurately reflect the population.
Parameter estimation
A useful way of estimating a population parameter, such as a mean, a proportion, or a difference in the mean of two groups.
Point estimation
Involves calculating a single statistic to estimate the parameter.
Confidence interval
Constructed around the point estimate.
Establishes a range of values for the population value and the probability that the population value falls within that range.
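A 95% confidence interval around a sample mean can be sketched as the point estimate plus or minus 1.96 standard errors (the normal approximation; with a small sample a t critical value would be more appropriate). The summary numbers below are hypothetical:

```python
import math

# Hypothetical sample of 25 blood pressure readings, summarized
n = 25
sample_mean = 120.0   # the point estimate
sample_sd = 10.0

se = sample_sd / math.sqrt(n)   # standard error of the mean
margin = 1.96 * se              # z = 1.96 for a 95% interval
ci = (sample_mean - margin, sample_mean + margin)

print(ci)   # (116.08, 123.92)
```

The interpretation: there is a 95% probability that the population mean falls within this range.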
Null hypothesis
States that there is no relationship between the independent and dependent variables.
Research hypothesis
The prediction that the researcher makes about what will happen in the study.
Type I Error
Occurs when the researcher states that a relationship exists when none exists.
Falsely rejecting a null hypothesis.
Type II Error
Occurs when the researcher states that a relationship does not exist when it does.
Falsely accepting a null hypothesis.
Level of significance
Set before the study begins.
The probability of making a Type I error.
Most commonly .05 and .01
If a researcher states that the results are significant at the .05 level, it means:
Results like these are due to chance factors only 5 in 100 times.
There is a 95% chance that the sample results are not due to chance factors alone, but reflect the population accurately.
The odds of such results based on chance alone are .05 or 5%.
One can be 95% confident that the results are due to a real relationship in the population.
Parametric tests
Use the sample statistic to estimate the population parameter.
Allow the researcher to study the effects of variables on one another and their interaction.
Three characteristics:
1) they focus on population parameters
2) they require measurements on at least an interval scale
3) they involve other assumptions, such as the variable being normally distributed in the population
Nonparametric tests
Require fewer assumptions than parametric tests because they are not based on population parameters and involve less restrictive assumptions about the shape of the distribution.
Usually used when the data has been measured on a nominal or ordinal scale.
Most useful when the data cannot be interpreted as interval-level measures.
Bivariate Statistical Tests
Used to analyze the relationship between two variables.
Includes: t-tests, analysis of variance, chi-squared tests, and product-moment correlation coefficients.
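As a sketch of one of these, the two-sample (pooled-variance) t statistic compares two group means relative to the pooled spread of the scores. The group data here are hypothetical:

```python
import math
import statistics

# Hypothetical interval-level scores for two independent groups
group_a = [82, 85, 88, 90, 84]
group_b = [78, 80, 75, 79, 83]

n1, n2 = len(group_a), len(group_b)
m1, m2 = statistics.mean(group_a), statistics.mean(group_b)
v1, v2 = statistics.variance(group_a), statistics.variance(group_b)  # sample variances

# Pooled variance, then the t statistic for two independent groups
sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

print(round(t, 2))  # compare against a t table with n1 + n2 - 2 degrees of freedom
```

If the computed t exceeds the tabled critical value at the chosen significance level, the null hypothesis of no difference between the group means is rejected.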
Multivariate statistical analysis
Deals with three or more variables simultaneously.
Increasing numbers of nurse researchers are using sophisticated multivariate statistical procedures to analyze their data.
Three major challenges associated with qualitative analysis:
1) the absence of systematic rules for analyzing and presenting qualitative data.
2) the enormous amount of work required to organize the data
3) difficulties in reducing the data for reporting purposes.
Template analysis style
Involves developing a template to organize the data.
Editing analysis style
The researcher acts as an interpreter who searches the data for significant segments.
Immersion/crystallization analysis style
The researcher becomes totally involved in and reflects on the data.
Intellectual processes involved in analysis of qualitative data
Comprehending
Synthesizing
Theorizing
Recontextualizing
Comprehending
The qualitative researcher attempts to make sense of the data.
Completed when data saturation is achieved.
Synthesizing
The researcher sifts through the data and tries to put the pieces together.
Recontextualizing
The theory is further developed and its applicability to other settings or groups is explored.
Theorizing
Involves systematic sorting of the data.
Develops alternative explanations of the phenomenon under study to determine their fit with the data.
Coding scheme
Method for organizing qualitative data.
Useful strategy for classifying and indexing qualitative data.
The researcher carefully reads through the data to identify underlying concepts and to identify codes.
Coding qualitative data
Involves two simultaneous activities:
1) mechanical data reduction
2) analytic categorization of data into themes
Steps involved in analysis of qualitative materials
1) A search for themes or recurring regularities is done.
2) The themes are validated to determine whether they accurately represent the phenomenon.
3) Researchers strive to put the thematic pieces together into an integrated whole.
Ethnographic analysis
Generally begins at the time the researcher enters the field.
Continually looking for patterns in the behaviour and thoughts of the participants, comparing one pattern against another, and analyzing many patterns simultaneously.
Tools to analyze data include maps, flow charts, organizational charts, and other documents.
Descriptive phenomenology
Researchers seek common patterns by identifying essential themes
Van Manen’s Method of Phenomenological Analysis
Searches for themes using the following approaches:
1) Holistic approach
2) Selective approach
3) Detailed approach
Holistic approach
Viewing the text as a whole to grasp its meanings
Selective approach
Pulling out key statements and phrases that seem essential to the experience under study.
Detailed approach
Analysing every sentence
Grounded theory
Used to summarize results and key findings from the data in the form of conceptual maps or models.
Constant comparative method
Data analysis of grounded theory.
Where the researcher simultaneously collects, codes, and analyzes data.
Two types of coding in grounded theory
Substantive coding
Theoretical coding
Types of substantive coding
Open coding
Selective coding
Open coding
the researcher is trying to capture what is going on in the data
Selective coding
The researcher codes only those variables that are related to the core variable
Theoretical coding
Involves putting the broken pieces of data back together again
T/F: The tendency for statistical values to differ from one sample to another is known as the standard error of the mean.
False
T/F: A researcher never knows whether an error has been committed in statistical decision making
True
T/F: Parametric tests make no assumptions about the shape of the distribution in the population
False
T/F: Nonparametric tests have fewer assumptions than parametric tests
True
T/F: A +0.50 correlation coefficient indicates a stronger relationship between two variables than a correlation of -0.75
False
What are the three characteristics that can completely summarize a set of data?
Shape of the distribution
Central tendency
Variability
T/F: One of the features of qualitative analysis is that a number of universal formal rules facilitate the process
False
T/F: The process of recontextualization involves sifting the data and putting pieces together
False
T/F: The grounded theory approach is applied to qualitative data after they have been gathered in the field
False
T/F: The grounded theory analyst documents assumptions, insights, and the conceptual scheme on memos once all the data has been analyzed
False
The level of measurement that classifies and ranks objects in terms of the degree to which they possess the attribute of interest.
Ordinal
A record of fluid intake, in ounces, of a postsurgical patient is an example of which level of measurement?
Ratio
Degrees such as an associate, bachelor’s, master’s, and doctorate correspond to measures on which of the following scales?
a) nominal
b) ordinal
c) interval
d) ratio
Ordinal
A group of 100 students completed a test. The mean was 85, the standard deviation was 5, and the scores were normally distributed. About how many scores fell between 80 and 90?
68 (in a normal distribution, about 68% of scores fall within one standard deviation of the mean; here 85 ± 5 spans 80 to 90)
A parameter is a characteristic of:
a) a population
b) a frequency distribution
c) a sample
d) a normal curve
A population
The measure of variability that takes into account all score values
Standard deviation
Which measure of central tendency is the most stable?
The mean
Which of the following is an example of a bivariate descriptive statistic?
a) frequency distribution
b) mean
c) range
d) correlation coefficient
Correlation coefficient
This allows the researcher to draw conclusions about a population, based on information gathered from a sample
Inferential statistics
A statistical procedure that is used to determine whether a significant difference exists between any number of group means on a dependent variable measured on an interval scale.
ANOVA
The analysis style that is sometimes referred to as manifest content analysis
Quasi-statistical style
In which of the following analysis styles does the researcher act as an interpreter who reads through data and develops a categorization scheme on the basis of meaningful segments?
a) quasi-statistical style
b) template analysis style
c) editing analysis style
d) immersion-crystallization style
Editing analysis style
Quasi-statistics is essentially a method of:
a) statistical analysis
b) validation
c) thematic generation
d) analytic induction
Validation
When does selective coding begin?
When a core variable has been identified.