Ch. 9 Using Statistics to Answer Questions Flashcards

1
Q

Descriptive vs. Inferential Statistics

A

Descriptive statistics – summarize any set of numbers so you can understand and talk about them more intelligibly and communicate their essential characteristics.

Inferential statistics – used to analyze data after an experiment has been conducted, to determine whether the independent variable (IV) had a significant effect.

SIGNIFICANCE – those instances in which the statistical results are likely to have been caused by our manipulation of the independent variable.

Variability – the spread, or dispersion, of a set of scores.

2
Q

Scales of Measurement

A

SCALES OF MEASUREMENT – The particular set of rules used in assigning a symbol to the event in question – nominal, ordinal, interval, and ratio scales.

  • How you choose to measure (i.e., which scale of measurement you use) the dependent variable (DV) directly determines the type of statistical test you can use to evaluate your data after you have completed your research project.
  • Nominal Scale – a simple classification system. Ex: categorizing the furniture in a classroom as tables or chairs, or responses to an item on a questionnaire as “agree,” “undecided,” or “disagree.” You assign the items being evaluated to mutually exclusive categories.
  • Ordinal Scale When you can rank order the events in question, but with no indication of distance between each rank.
    • Ex: we can rank the winners in a track meet, but this rank-ordering does not tell us anything about how far apart the winners were.
  • Interval Scale When you can rank order the events in question and equal intervals separate adjacent events, but there is NO true “zero = nothing” value.
    • Ex: the temperatures on a Fahrenheit thermometer:
      • forms an interval scale; rank order has been achieved
      • Distance between any two adjacent temperatures is the same, one degree.
      • Notice that the interval scale does NOT have a true zero point, however. When you reach the “zero” point on a Fahrenheit thermometer, does temperature cease to exist? No, it’s just very cold. Likewise, scores on tests such as the SAT and ACT are interval-scale measures.
      • Also, a person might get a zero on a test, but that doesn’t mean they know nothing about the material.
  • Ratio Scale – measurement takes the interval scale one step further. Like the interval scale, the ratio scale permits the rank ordering of scores with the assumption of equal intervals between them, but it also assumes the presence of a true zero point.
    • Ex: Physical measurements, such as length, the amplitude or intensity of sound or light, are ratio measurements. Zero-length means there is nothing there. Zero amplitude means there is nothing there.
    • Because of the true zero point, the ratio scale allows you to make ratio comparisons, such as “twice as much” or “half as much.”
  • The scales run from the nominal scale, which provides the least amount of information, to the ratio scale, which provides the greatest amount of information. When psychologists evaluate changes in the DV, they try to use a scale of measurement that will provide the most information; frequently they select interval scales, because most psychological measures have no true zero point, which makes sense for psychology and human interaction.
  • The scales of measurement directly determine which measure of central tendency you will use.
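To illustrate the ratio-comparison point above, here is a minimal Python sketch; the lengths and temperatures are made-up values chosen only for illustration:

```python
# Minimal sketch (made-up values): ratio comparisons ("twice as much") are
# meaningful only when the scale has a true zero point.

length_a, length_b = 10.0, 20.0          # inches: ratio scale (0 = no length)
print(length_b / length_a)               # 2.0 -> "twice as long" is meaningful

temp_a_f, temp_b_f = 20.0, 40.0          # Fahrenheit: interval scale (0 is arbitrary)
print(temp_b_f / temp_a_f)               # 2.0, but 40 F is NOT "twice as hot" as 20 F

# Converting to Kelvin (a ratio scale for temperature) shows why:
def f_to_kelvin(f):
    return (f - 32) * 5 / 9 + 273.15

print(f_to_kelvin(temp_b_f) / f_to_kelvin(temp_a_f))  # ~1.04, not 2.0
```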
3
Q

Charts

A

CHARTS – the type of chart you use will also be partially determined by the Scale of Measurement.

  • PIE CHART – (NOMINAL SCALE) all of the data (100%) is broken down into categories (e.g., Democrat or Republican) whose slices add up to 100% of the pie.
  • If you used a NOMINAL scale of measurement, then you would probably use a pie chart, a histogram, a bar graph, or a frequency polygon.
  • Histogram – presents data in terms of frequencies per category.
    • Uses a quantitative variable – quantitative categories are ones that can be numerically ordered.
  • Bar Graph – presents data in terms of frequencies per category; however, we use qualitative categories when we construct a bar graph.
    • Qualitative categories are ones that cannot be numerically ordered. Ex: single, married, divorced, and remarried are qualitative categories.
    • Because a bar graph depicts a qualitative variable, the bars do not touch. Placing a space between the bars lets the reader know that qualitative categories are being reported.
  • Frequency Polygon – mark the middle of the crosspiece (top) of each bar in a histogram and connect those points.
    • Same as a histogram, but the connected dots form the frequency polygon.
  • Line Graph – start with two axes or dimensions.
    • ORDINATE axis – vertical or y-axis (Dependent Variable)
    • ABSCISSA – horizontal or x-axis (Independent Variable = manipulated by us)
      • Plot the variable with the greatest number of levels on the abscissa to reduce the number of lines that will appear on your graph.
    • A general guideline is for the y-axis to be approximately two-thirds as tall as the x-axis is long.
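As a rough illustration of the chart types above, here is a minimal matplotlib sketch; all of the category labels and frequencies are made-up example data:

```python
# Minimal matplotlib sketch (made-up data) of the chart types above.
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 2, figsize=(8, 6))

# Pie chart: nominal categories that together account for 100% of the data.
party_labels = ["Democrat", "Republican", "Other"]
party_counts = [45, 40, 15]
axes[0, 0].pie(party_counts, labels=party_labels, autopct="%1.0f%%")
axes[0, 0].set_title("Pie chart (nominal)")

# Bar graph: qualitative categories, so the bars do not touch.
marital = ["single", "married", "divorced", "remarried"]
marital_counts = [30, 40, 20, 10]
axes[0, 1].bar(marital, marital_counts)
axes[0, 1].set_title("Bar graph (qualitative)")

# Histogram: quantitative categories, so the bars touch.
scores = [55, 60, 62, 65, 65, 68, 70, 72, 75, 75, 78, 80, 85, 90]
axes[1, 0].hist(scores, bins=5, edgecolor="black")
axes[1, 0].set_title("Histogram (quantitative)")

# Line graph: IV on the abscissa (x-axis), DV on the ordinate (y-axis).
drug_dose = [0, 5, 10, 15, 20]          # IV (manipulated by us)
mean_errors = [12, 10, 7, 5, 4]         # DV (measured)
axes[1, 1].plot(drug_dose, mean_errors, marker="o")
axes[1, 1].set_xlabel("Dose (IV, abscissa)")
axes[1, 1].set_ylabel("Mean errors (DV, ordinate)")
axes[1, 1].set_title("Line graph")

plt.tight_layout()
plt.show()
```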
4
Q

Variability

A

VARIABILITY – Range and standard deviation are two measures of variability frequently reported by psychologists.

  • Range – subtract the smallest score from the largest. The range does not provide much information; knowing the range does not tell us about the shape of the distribution.
  • Variance – think of the variance as a single number that represents the total amount of variability in the distribution. The larger the number, the greater the total spread of the scores. The variance and standard deviation are based on how much each score in the distribution deviates from the mean.
  • Normal distribution (normal curve) – The concept of the normal distribution is based on the finding that as we increase the number of scores in our sample, the more the distribution becomes bell-shaped.
    • bell curve – The majority of the scores cluster around the measure of central tendency, with fewer and fewer scores occurring as we move away from it.
    • mean, median, and mode coincide in a normal distribution.
  • Standard deviation – the square root of the variance.
    • The larger the standard deviation is, the greater the variability or spread of scores will be.
    • distances from the mean of a normal distribution can be measured in standard deviation units (SD).
    • we can compare scores from different distributions by discussing them in terms of standard deviations above or below the mean.
    • 34.13% of all the scores in all normal distributions fall between the mean and 1 SD above the mean.
    • Even though your scores, the means, and the standard deviation values differ considerably, we can determine how many SD units away from the mean each of your scores is.
      • In turn, we can compare these differences.
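A minimal NumPy sketch of range, variance, standard deviation, and SD (z) units, using a made-up set of scores:

```python
# Minimal sketch (made-up scores): range, sample variance, standard deviation,
# and distance from the mean in SD units (z-scores).
import numpy as np

scores = np.array([70, 75, 80, 80, 85, 90, 95], dtype=float)

score_range = scores.max() - scores.min()   # largest minus smallest
variance = scores.var(ddof=1)               # based on deviations from the mean
std_dev = np.sqrt(variance)                 # standard deviation = sqrt(variance)

# Expressing each score in SD units lets us compare scores
# from different distributions.
z_scores = (scores - scores.mean()) / std_dev

print(score_range, round(variance, 2), round(std_dev, 2))
print(z_scores.round(2))
```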
5
Q

Correlation

A
  • Correlation coefficient – the degree of relation between two variables. The value of a correlation coefficient can range from -1.00 to +1.00.
    • Although it is a type of descriptive statistic, the correlation coefficient is also used for predictive purposes.
    • A correlation coefficient does not have to be exactly 0 to be considered a zero correlation.
    • The existence of a perfect correlation indicates that there are no other factors present that influence the relation we are measuring. This situation rarely occurs in real life.
  • Pearson correlation coefficient (r) – appropriate to calculate when BOTH the X variable and the Y variable are interval or ratio scale measurements and the data appear to be linear.
    • Other correlation coefficients can be calculated when one or both of the variables are not interval or ratio scale measurements or when the data do not fall on a straight line.
    • The more the scores cluster close together and form a straight line, the stronger the correlation coefficient will be.
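A minimal sketch of computing a Pearson r with SciPy; the X and Y values are made up and assumed to be interval/ratio measurements with a roughly linear relation:

```python
# Minimal sketch (made-up interval/ratio data): Pearson correlation coefficient.
import numpy as np
from scipy import stats

hours_studied = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)          # X
exam_score = np.array([52, 55, 61, 60, 68, 70, 75, 80], dtype=float)     # Y

r, p_value = stats.pearsonr(hours_studied, exam_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")
# r is close to +1 here only because the made-up data fall near a straight line.
```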
6
Q

Inferential Statistics

A

An inferential statistical test can tell us whether the independent variable we manipulated had a significant effect on the behavior of the participants we tested or whether the results we obtained would have occurred by chance.

  • After you have conducted an experiment, you perform a statistical test on the data that you have gathered. The results of this test will help you decide whether the IV was effective, i.e., whether the result is significant.
  • An inferential statistical test can tell us whether the results of an experiment would occur frequently or rarely by chance.
    • Inferential statistics with small values occur frequently by chance
    • Inferential statistics with large values occur rarely by chance.
  • If the result occurs often by chance, we say that it is NOT SIGNIFICANT and conclude that our IV did not affect the DV. In this case, we would accept (i.e., CANNOT REJECT) the null hypothesis, which says that the differences between groups are due to chance (i.e., not the operation of the IV).
    • If, however, the result of our inferential statistical test occurs rarely by chance (i.e., it is significant), we can conclude that some factor other than chance is operative.
    • SIGNIFICANCE LEVEL – an event that occurs by chance alone 5 or fewer times in 100 occasions is a rare event. Thus, “.05 level of significance” – a result is considered significant if it occurs 5 or fewer times by chance in 100 replications of the experiment
    • NULL HYPOTHESIS – says the difference between two groups is DUE TO CHANCE. We want to be able to REJECT the Null Hypothesis, which would occur if the difference were at least partially due to the influence of the independent variable.
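A minimal simulation sketch of the idea that large test statistics occur rarely by chance alone; the group sizes, population values, and cutoff are assumptions chosen only for illustration:

```python
# Minimal simulation sketch: when the null hypothesis is true (no real
# difference between groups), large t values occur rarely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t_values = []
for _ in range(10_000):
    # Both groups come from the SAME population, so any difference is chance.
    group_a = rng.normal(loc=50, scale=10, size=15)
    group_b = rng.normal(loc=50, scale=10, size=15)
    t, _ = stats.ttest_ind(group_a, group_b)
    t_values.append(abs(t))

# Only about 5% of chance-alone experiments produce |t| beyond the .05 cutoff.
print((np.array(t_values) > 2.05).mean())
```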
7
Q

t-test

A

t-test – an inferential statistical test used to evaluate the difference between the means of two groups.

  • Ex: Group A will wait on customers in dressy clothes; Group B will wait on customers in sloppy clothes.
  • Is the difference you obtained large enough to be genuine, or is it just a chance happening? Merely looking at the results will not answer that question. We need to run a t-test to determine whether the difference is genuine or simply due to chance.
  • Because the two groups in our latency-to-service experiment were independent, we will use an independent-groups t-test.
  • Ex: our t value is 2.61 and the probability of obtaining this t value by chance is .021. Because the probability of this result occurring by chance is less than .05, we can conclude that the two groups differ significantly – and we can REJECT THE NULL HYPOTHESIS, knowing that the difference was unlikely to occur by chance.

You can state your experimental hypothesis in either a directional or a non-directional manner.

  • If you use the directional form, you are specifying exactly how (i.e., the direction) the results will turn out.
    • A one-tail t-test evaluates the probability of only one type of outcome – directional.
  • If we simply indicate that we expect a difference between the two groups and do not specify the exact nature of that difference, then we are using a non-directional hypothesis.
    • A two-tail t-test evaluates the probability of both possible outcomes. – non-directional
  • The probability of the result’s occurring by chance alone is split in half and distributed equally in the two tails of the distribution when a two-tail test is conducted
  • When conducting a t-test, researchers are usually interested in either possible outcome, so the two-tail test is the more common choice.
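A minimal sketch of an independent-groups t-test with SciPy; the latency-to-service times below are made up, and the `alternative` argument for a one-tail test assumes SciPy 1.6 or later:

```python
# Minimal sketch (made-up latency-to-service times, in seconds):
# independent-groups t-test.
from scipy import stats

dressy_clothes = [48, 52, 45, 50, 47, 49, 53, 46]   # Group A latencies
sloppy_clothes = [55, 60, 58, 52, 57, 61, 56, 59]   # Group B latencies

# Two-tail (non-directional) test: either group could have waited longer.
t, p = stats.ttest_ind(dressy_clothes, sloppy_clothes)
print(f"t = {t:.2f}, p = {p:.3f}")

if p < .05:
    print("Significant: reject the null hypothesis.")
else:
    print("Not significant: cannot reject the null hypothesis.")

# A one-tail (directional) test is available via the `alternative` argument,
# e.g. alternative="less" if we predicted Group A would be served faster.
```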
8
Q

Degrees of Freedom

A

Degrees of Freedom – The ability of a number in a given set to assume any value.

  • This ability is influenced by the restrictions imposed on the set of numbers. For every restriction, one number is determined and will assume a fixed or specified value.
  • Because the final value in a restricted set of data is known (fixed) once all the other values have been determined and the overall total is known, there is one degree of freedom less than the total number of observations in the set.
  • Ex: suppose ten numbers must sum to 100. The first nine numbers can assume any value; if the sum of the first nine numbers is 75, then the value of the last number is fixed at 25, so df = 10 - 1 = 9. (See the sketch below.)
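A minimal sketch of the sum-to-100 example above; the nine freely chosen numbers are arbitrary:

```python
# Minimal sketch: with one restriction (the ten numbers must sum to 100),
# the first nine numbers are free to vary but the last one is fixed,
# so df = n - 1.
required_total = 100
free_numbers = [12, 8, 5, 10, 9, 7, 11, 6, 7]       # nine freely chosen values (sum = 75)

last_number = required_total - sum(free_numbers)     # fixed at 25 by the restriction
df = len(free_numbers + [last_number]) - 1           # 10 observations, 1 restriction -> 9

print(last_number, df)   # 25 9
```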
9
Q

Type I and Type II Errors

A

Type I Error – When you determine that an experimental result is SIGNIFICANT because it occurs rarely by chance (i.e., 5 times or fewer in 100), there is always the chance that your experiment represents 1 of those 5 times in 100 when the result really did occur by chance.

  • You rejected the Null Hypothesis when you shouldn’t have (Null Hypothesis true).
  • You determined the result SIGNIFICANT when it was NOT.
    • The experimenter directly controls the probability of making a Type I error by setting the significance level.
      • Ex: you are less likely to make a Type I error with a significance level of .01 than with a significance level of .05.
      • On the other hand, the more extreme or critical you make the significance level (e.g., going from .05 to .01) to avoid a Type I error, the more likely you are to make a Type II or beta (β) error.

Type II Error – You conclude that your result occurred by chance (i.e., is not significant) when, in fact, it did not occur by chance.

  • You failed to reject the Null Hypothesis when you should have (Null Hypothesis false).
  • You determined the result was NOT SIGNIFICANT when it WAS.
    • Unlike Type I errors, Type II errors are not under the direct control of the experimenter.
    • We can indirectly cut down on Type II errors by implementing techniques that will cause our groups to differ as much as possible. For example, using a strong IV and testing larger groups of participants are two techniques that will help avoid Type II errors.
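A minimal simulation sketch showing that the Type I error rate tracks the significance level, while Type II errors shrink with a stronger effect or larger groups; the population means, SD, and group sizes are assumptions chosen only for illustration:

```python
# Minimal simulation sketch: Type I error rate is set by alpha; Type II error
# rate drops with a stronger IV effect or larger groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = .05

def rejection_rate(mean_a, mean_b, n, trials=5_000):
    """Proportion of simulated experiments in which p < alpha (null rejected)."""
    rejections = 0
    for _ in range(trials):
        a = rng.normal(mean_a, 10, n)
        b = rng.normal(mean_b, 10, n)
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / trials

# Null true (no real difference): rejection rate ~= alpha = Type I error rate.
print(rejection_rate(50, 50, n=15))

# Null false (real difference): 1 - rejection rate = Type II error rate,
# which shrinks with a stronger effect or larger samples.
print(rejection_rate(50, 58, n=15))   # some Type II errors remain
print(rejection_rate(50, 58, n=60))   # fewer Type II errors with bigger groups
```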
10
Q

Effect Size (Cohen’s d)

A

Effect size – a statistical measure that conveys information concerning the magnitude of the effect produced by the IV.

  • Significance tells us only that the IV had an effect. It does not show us the size of that effect.
  • Cohen’s d – one technique used to calculate the effect size.
  • A second technique for determining effect size is appropriate when you calculate a Pearson product-moment correlation (r):
    • r² gives you an estimate of the proportion of the variance accounted for by the correlation in question.
      • Ex: even though r = .30 is significant (p < .01) with 90 pairs of scores, this correlation accounts for only 9% (.30² = .09 = 9%) of the variance.
      • This figure means that 91% of the variability in your research results is accounted for by other variables, which indicates a rather small effect size indeed.
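A minimal sketch of computing Cohen's d from two made-up groups (pooled-SD version) and of the r² calculation above:

```python
# Minimal sketch (made-up group data): Cohen's d for a two-group design,
# plus r-squared as the proportion of variance accounted for by a correlation.
import numpy as np

group_a = np.array([48, 52, 45, 50, 47, 49, 53, 46], dtype=float)
group_b = np.array([55, 60, 58, 52, 57, 61, 56, 59], dtype=float)

# Cohen's d: mean difference divided by the pooled standard deviation.
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
                    / (n_a + n_b - 2))
d = (group_b.mean() - group_a.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")

# r-squared: a correlation of r = .30 accounts for only 9% of the variance.
r = .30
print(f"r^2 = {r ** 2:.2f}")   # 0.09 -> 91% of the variability is due to other variables
```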