Exam 2 key terms and practice Flashcards

1
Q

Probability

A

the relative frequency of an event in the long run

probability = # of events / # of possible outcomes

as a relative frequency
p(heads) = # of heads / # of outcomes = 1/2 = 50%

can find a probability by computing a z-score, then looking at the “smaller portion” column in the z-score table
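
a quick worked example (illustrative numbers, not from the card): for z = 1.00 the “smaller portion” in the z table is 0.1587, so p(score above z = 1.00) ≈ 16%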

2
Q

simple experiment

A

well-defined process that leads to a single well-defined outcome

ex. coin toss

3
Q

elementary event

A

the outcome of a simple experiment

ex. heads or tails

4
Q

event

A

set of elementary events that share predefined characteristics

ex. dice roll (odd 1,3,5 or even 2,4,6)

5
Q

mutually exclusive events

A

cannot co-occur

ex. heads or tails are mutually exclusive, the occurrence of heads precludes the occurrence of tails

6
Q

exhaustive events

A

representing all possible outcomes

ex. heads & tails

7
Q

sample space

A

set of all possible elementary events that may occur in a simple experiment

ex. a roll of a six-sided die: {1, 2, 3, 4, 5, 6}

8
Q

random sampling

A

a small, random portion of the population is selected to represent the whole, where each member has an equal probability of being chosen

9
Q

addition rule

A

if two events are mutually exclusive, the probability that one event or the other will occur can be found by adding their probabilities

p(head or tail) = p(head) + p(tail) = 1/2 + 1/2 = 1

10
Q

Multiplication Rule

A

if two events are independent, the probability that both occur together can be found by multiplying their probabilities

p(head & head)= 1/2 x 1/2 = 1/4 = 25%

11
Q

Probability from Frequency Distribution

A

p(x) = number of elementary events with that value (frequency of x) / sample size (N)
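
a small worked example (illustrative numbers): if 12 of 60 observed scores equal 4, then p(4) = 12/60 = 0.20 = 20%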

12
Q

Probability of scores vs samples

A
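
a likely intended distinction (inferred from the neighboring cards, not stated here): the probability of an individual score is found from the distribution of scores, z = (x - μ) / σ, while the probability of a sample mean is found from the sampling distribution of the mean, z = (M - μ) / σM, where σM = σ / √N
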
13
Q

Sampling Error

A

a statistical error that occurs when an analyst does not select a sample that represents the entire population of data

14
Q

Sampling Distribution

A

a probability distribution of a statistic that is obtained through repeated sampling of a specific population

15
Q

Sampling Distribution of the Mean

A

the distribution of sample means obtained through repeated sampling; it is approximately normal, and its center (mean) equals the population mean

16
Q

Standard error of the mean

A

measures how much discrepancy is likely in a sample’s mean compared with the population mean
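
as a formula (standard notation, not written on the card): σM = σ / √N; e.g., σ = 12 and N = 36 give σM = 12 / 6 = 2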

17
Q

Central Limit Theorem

A

regardless of the distribution of the original random variable, the sampling distribution of the mean associated with that random variable will be approximately normally distributed when based on a large number of cases (large N)
- rule of thumb: n > 30 will yield an approximately normal sampling distribution of the mean

18
Q

Hypothesis testing

A

a type of statistical analysis in which you put your assumptions about a population parameter to the test; it is used to decide whether sample data are consistent with a hypothesized value of that parameter
- if the sampling distribution is normal, we can calculate probabilities from a known table, which lets us make accurate inferences about the population mean

19
Q

formal steps for statistical test (4 steps)

A
  1. state hypotheses
    Ho & HA
    (given standard dev.)
  2. state statistical decision criteria
    alpha, type of test, cutoff values, draw a picture
  3. calculate the test statistic and compare it to the cutoff
  4. state the statistical conclusion
    - reject the null, or fail to reject (we reject if the test statistic falls in the critical region)
    - interpret the alternative (the population mean is or is not equal to x)
    - explain how it answers the question (ex. students are more extroverted); see the worked sketch below
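
a minimal hypothetical sketch of the four steps (all numbers invented for illustration):
  1. Ho: μ = 50, HA: μ ≠ 50 (σ = 10 given)
  2. α = .05, two-tailed z test, critical values +/- 1.96
  3. M = 53, N = 25: z = (53 - 50) / (10/√25) = 3 / 2 = 1.50
  4. 1.50 < 1.96, so fail to reject Ho; we have no evidence that the population mean differs from 50
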
20
Q

One tailed vs. Two tailed Tests

A

one-tailed test
- we want to know if the population mean is above (or below) the hypothesized value in one specific, predicted direction
- we have a directional hypothesis, so there is a single critical value in the predicted tail (for a z test at α = .05 this is 1.645, or -1.645)

two-tailed test
- we want to know if the population mean is greater or less than the hypothesized value (either direction)
- 2 critical values: one above and one below the hypothesized value
- in a z test at α = .05 the 2 critical values will be +/- 1.96 (we look at the table to find the z score)

21
Q

α value

A

the level of significance; the probability of rejecting the null hypothesis when it is actually true (i.e., of obtaining results this extreme by chance alone). The smaller this value, the more “unusual” results must be before we conclude, for example, that the sample comes from a different population than the one it is being compared to

22
Q

null hypothesis (Ho)

A

a type of statistical hypothesis that proposes that no statistical significance exists in a set of given observations

23
Q

alternative hypothesis (HA)

A

a statement used in statistical inference experiment. It is contradictory to the null hypothesis and denoted by Ha or H1. We can also say that it is simply an alternative to the null. In hypothesis testing, an alternative theory is a statement which a researcher is testing

24
Q

critical or cutoff values

A

Critical values are essentially cut-off values that define regions where the test statistic is unlikely to lie; for example, a region where the critical value is exceeded with probability α if the null hypothesis is true

α is assumed to be 0.05 if not told otherwise

25
Q

Type 1 error

A
  • the 5% left over from the 95% confidence region (the α = 0.05 significance level)

When the Ho is true, and we reject it (false alarm)

the p(reject Ho|Ho is True)= α

incorrect decision

26
Q

Type 2 error

A

when Ho is false, and we accept it (miss)

the p(retain Ho|Ho is false) = β

incorrect decision

27
Q

Power & factors that affect it

A

1-β

correct decision

p(reject Ho|Ho is false) = 1 - β

we want to limit α and maximize power (1 - β), i.e., keep β small

factors that affect power:
- the probability of a type 1 error (alpha), the a priori level of significance, and the criterion for rejecting Ho
- the sample size and variance (as sample size increases, the variance of the sampling distribution decreases. If the distribution is more narrow, then there will be less overlap between the two sampling distributions resulting in fewer type II (false negative) errors and a greater statistical power)

how to increase:
- Switch from a 2-tailed test to a 1-tailed test
- increase mean difference
- Use z distribution instead of t distribution
- decrease std dev.
- increase sample size (most practical way)

28
Q

z-statistic

A

the relative location of a sample mean within the sampling distribution of the mean
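
as a formula (standard notation, not written on the card): z = (M - μ) / σM, where σM = σ / √N; e.g., M = 104, μ = 100, σ = 15, N = 25 gives σM = 3 and z = 4 / 3 ≈ 1.33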

29
Q

assumptions for z-test

A
  • random sample (in practice we usually use convenience samples, which limits external validity)
  • independent observations (one participant’s performance can’t influence another’s); random assignment helps combat this
  • normality (a normal sampling distribution of the mean); if N > 30, the sampling distribution of the mean is approximately normal
  • population std. dev. known and unchanged by treatment
30
Q

one sample t-test

A
  • we often have a hypothesized mean value but do not know the population variance (so we use s^2 to estimate it)
  • the t distribution is single-peaked and symmetric like the standard normal distribution
  • somewhat flatter, with relatively heavier tails, for a small sample
  • as N gets larger, the t distribution approaches the standard normal distribution (i.e., the t statistic converges to z)
  • the one-sample t statistic is distributed with N - 1 degrees of freedom
  • when we use s^2 to estimate the population variance, we must first estimate the mean
31
Q

t-statistic

A

the ratio of the departure of the estimated value of a parameter from its hypothesized value to its standard error
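
for the one-sample case, a standard formula (notation assumed, not written on the card): t = (M - μ) / (s / √N), with df = N - 1; e.g., M = 52, μ = 50, s = 6, N = 9 gives t = 2 / 2 = 1.0 with df = 8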

32
Q

degrees of freedom (df)

A

the number of observations free to vary (N-1) when estimating a population parameter

we lose a degree of freedom for each of the sample means we estimate

33
Q

relationship between t and z distributions

A

the t-distribution adjusts for a natural decrease in confidence at lower sample sizes that the normal distribution does not account for.

The standard normal or z-distribution assumes that you know the population standard deviation. The t-distribution is based on the sample standard deviation.

34
Q

related samples t-test

A

- tests whether it is reasonable to believe that two sample means from related samples came from populations with the same mean (μ)

ex. roommates

repeated samples, repeated measures, matched samples, paired samples, dependent samples all refer to this same test

What makes a related sample?
- within-subjects design
- matched design
- if the two measures are not independent
(formula sketch below)
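
a standard form of the test statistic (notation assumed, not from the card): compute a difference score D for each pair, then t = MD / (sD / √n), with df = n - 1, where MD is the mean difference, sD its standard deviation, and n the number of pairs; e.g., MD = 3, sD = 4, n = 16 gives t = 3 / 1 = 3.0 with df = 15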

35
Q

Similarities and differences between repeated-measures & matched-samples

A

They are similar in that the role of individual differences in the experiment is reduced.

They differ in that there are two samples in a matched-subjects design and only one in a repeated measures study.

36
Q

Within-participants designs

A

participants participate in all treatments

ex. before & after treatment

37
Q

Independence assumption violation and how it is solved for related-samples

A
  • it assumes independent observations (one doesn’t influence the other)

solution for related samples (ex. roommates): difference scores
- now we have a single sample of difference scores
- gives N independent observations
- test mean of the difference scores

38
Q

Effect size for related sample

A

measure of the magnitude of the mean difference (Cohen’s d; formula below)

guidelines
small effect d=0.2
med. effect d=0.5
large effect d=0.8
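
a standard formula (notation assumed, not on the card): d = MD / sD, the mean of the difference scores divided by their standard deviation; e.g., MD = 3 and sD = 4 give d = 0.75, a medium-to-large effect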

39
Q

Advantages & disadvantages of within-participant designs

A

advantages:
- related samples remove individual variability from the data
- typically need fewer participants to have the same ability to reject the null hypothesis

Disadvantages
- order effects (addressed by counterbalancing)
- participant attrition

40
Q

Independent-samples t-test

A

tests whether it is reasonable to believe that two sample means from independent samples came from populations with the same mean
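
a standard form of the test statistic (notation assumed, not from the card): t = (M1 - M2) / √(sp²/n1 + sp²/n2), with df = n1 + n2 - 2, where sp² is the pooled variance estimate (see the pooled variance card)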

41
Q

Pooling Sample Variance

A
  • typically we assume that the two populations have equal variances (homogeneity of variance)
  • the best estimate of that common variance is the pooled variance estimate (a weighted average of the two sample variances)
  • each sample variance estimates the same population variance, so the two can be combined
  • one estimate used for both groups (formula below)
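
a standard formula (notation assumed, not written on the card): sp² = [(n1 - 1)s1² + (n2 - 1)s2²] / (n1 + n2 - 2); e.g., n1 = n2 = 10, s1² = 20, s2² = 30 gives sp² = (9 x 20 + 9 x 30) / 18 = 450 / 18 = 25
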
42
Q

Effect size in independent-samples t-test

A
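
a commonly used definition (an assumption, not stated on the card): Cohen’s d = (M1 - M2) / sp, the mean difference divided by the pooled standard deviation, interpreted with the same guidelines as for related samples (small 0.2, medium 0.5, large 0.8); e.g., M1 - M2 = 5 and sp = 5 give d = 1.0, a large effect
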
43
Q

Assumptions of independent-samples t-test

A
  • scores in populations are normally distributed
    a. the t-test is said to be robust to violations of the normality assumption
  • variances are homogeneous (in the populations)
    a. t-test is not robust to heterogeneity of variance unless sample sizes are equal
44
Q

Post-hoc vs. a priori power

A

post hoc
- when we think the null is true
- “it didn’t work, but we think we just didn’t have enough power”
- the data have already been analyzed
- typically concerns an effect of interest that was NOT statistically significant
- you ask yourself whether the failure to reject the null was due not to a lack of population differences, but rather to a lack of power

a priori analysis (better)
- you are planning a study and want to determine the power you might achieve with a given sample size, if group differences and variability in the dependent measure were similar to your predictions

45
Q

three characteristics of an effect

A
  1. its statistical significance
    - was the mean in group 1 statistically different from the mean for group 2 at the desired type 1 error rate?
  2. its magnitude
    - what is the size of the mean difference in the context of the variability in the dependent measure? effect size estimates like Cohen’s d attempt to get at this
    - note that most effects that are “large” in magnitude are also statistically significant. However, “small” effects may be statistically significant or not, depending on whether your sample size is large enough to give you the power necessary to declare them so.
  3. its meaningfulness
    - a new drug cures only 1 out of 1000 people relative to the standard treatment, but what if you are that 1 person? Statistically significant effects that are small may still be meaningful to someone; maybe not the researcher, but someone nonetheless.
46
Q

Why do we sometimes use small-N design?

A

large-N designs can sometimes fail to provide individual-subject validity
- individual-subject validity: the extent to which the findings for the group apply to an individual

most early research in psychology…
- small N
- data not summarized

47
Q

Reasons for small N designs

A

occasional misleading results from statistical summaries of grouped data
- failure of individual-subject validity
- e.g., discrimination learning

participants with a particular attribute are rare
- e.g., obsessive-compulsive disorder occurs in < 2% of the population
- members of a specific animal species are rare, costly, or require much time for training

practical and philosophical problems with large N

48
Q

The experimental analysis of behavior

A

operant conditioning
- behavior conditioned, in a particular environment, by consequences (e.g., reinforcement)
- primary DV -> rate of response
a. recorded cumulatively

applied behavior analysis
- contemplative (understanding) vs. technological (making change)
a. Skinner -> using science to achieve control
- controlling behavior
a. comparing Watson and Skinner
- attempt to improve society
a. who decides?
b. use of punishment for controlling self-destructive behaviors
- justified?

49
Q

Small N Designs in Applied Behavior Analysis

A

elements of a single-subject design
- importance of operational definitions (again)

withdrawal designs
- A-B-A-B design
a. treatment evaluated twice
b. experiment ends with treatment in place
- ideal pattern

multiple baseline designs
- research example
a. reducing drooling in individuals with mild mental disability
b. multiple behaviors targeted for intervention
- primary (food) and secondary (

50
Q

Case Study Designs

A

evaluating case studies
- level of detail not found elsewhere
- can serve falsification
- limited control
- external validity issues
a. an in-depth look at a representative situation
b. adds reality and meaning
c. The Diary of Anne Frank
