Asking The Research Question - Dr. Wofford Flashcards

Q

Two types of hypotheses

A
  • Research hypothesis: the researcher’s true expectation (prediction) of the results
  • Statistical hypothesis: ALWAYS expresses an absence of relationship between the independent variable (IV) and dependent variable (DV)
    • Also called the null hypothesis

Q

State the Research Problem

A

Step 2 in asking the research question:

  • Begins by determining what is known and unknown about the topic of interest
    • This is the foundation for forming a specific research question that can be answered in a single study
  • Generally begins with a broad, general clinical problem that is filtered down to a specific question
  • Based on clinical experience, clinical theory, and professional literature
Q

Type II Error: Statistical Power

A
  • Failing to reject the null hypothesis when it is indeed false (stating there is no difference when in fact there is a difference)
  • Beta = 0.20
  • Statistical power = 1 - beta

Directly related to Type II error: concluding there are no differences when really one treatment is better than the other. A false negative.

We fail to reject H0 when it is actually false (and we should reject it).

We are more lenient with beta: in PT research, we typically accept about a 20% chance of making a Type II error.

Traditional acceptable level of Beta = 0.20

Beta is used to calculate statistical power

Statistical Power = 1 – beta

0.80 = 1 – 0.20

90% power would need more subjects than 80% power
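The power/sample-size relationship can be checked numerically. A minimal sketch, assuming Python with the statsmodels package and an illustrative medium effect size (d = 0.5), showing that 90% power needs more subjects per group than 80% power:

```python
# Sketch only: sample size per group at 80% vs. 90% power
# (d = 0.5 and alpha = 0.05 are illustrative, conventional values)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# power = 1 - beta: beta = 0.20 -> power = 0.80; beta = 0.10 -> power = 0.90
n_80 = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
n_90 = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.90)

print(f"80% power: ~{n_80:.0f} subjects per group")  # ~64
print(f"90% power: ~{n_90:.0f} subjects per group")  # ~85
```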

Q

Directional Hypotheses

A

Research hypothesis: the researcher’s prediction

Statistical hypothesis (the same thing as the null hypothesis): this ALWAYS says there is no relationship between the IV and DV

_____

Can have a directional or non-directional hypothesis

A non-directional hypothesis is similar to a null hypothesis, but not exactly the same.

A directional hypothesis requires picking a side: it must state that one condition is better than the other.

We can only use a directional hypothesis if we have good evidence that one condition should be better than the other; reviewers will reject a directional hypothesis that is not well supported by prior research.
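A directional hypothesis corresponds to a one-tailed statistical test; a non-directional hypothesis corresponds to a two-tailed test. A minimal sketch, assuming Python with SciPy (1.6+ for the `alternative` argument) and made-up outcome data:

```python
# Sketch only: two-tailed (non-directional) vs. one-tailed (directional) t-test
# The data below are made up purely for illustration.
from scipy import stats

treatment = [24, 27, 30, 28, 26, 29, 31, 25]
control = [22, 25, 24, 23, 26, 21, 24, 23]

# Non-directional: "the group means differ"
t_two, p_two = stats.ttest_ind(treatment, control, alternative="two-sided")

# Directional: "the treatment mean is greater than the control mean"
t_one, p_one = stats.ttest_ind(treatment, control, alternative="greater")

print(f"two-tailed p = {p_two:.4f}")
print(f"one-tailed p = {p_one:.4f}")  # half the two-tailed p when t > 0
```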

Q

Define the Research Question

A

Step 3 in asking the research question

The question should be important, answerable, and feasible for the study

  • Important (passes the “so what?” test; innovative, i.e., new)
    • Significance and innovation
  • Answerable
    • Incorporates variables that can be defined and measured
  • Feasible
    • Researcher must have the necessary knowledge, skill, and resources to complete the study
    • Role of pilot studies as feasibility studies

Keep filtering from research problem to research question

This is a very difficult thing to do

It has to pass the “Who Cares” test.

It is supposed to add an incremental amount of knowledge to the existing base

Answerable:

We must be able to operationally define the variables: there must be some way to measure them.

Must have a reliable and valid tool to measure each variable.

What measure to use will be based on the current literature. There must be literature backing (published research) for why a certain measure was used, including evidence of its reliability and validity.

Feasibility:

Are you able to do this research, or is it beyond your capacity? It is very easy to take on too much. Pilot studies can help determine whether a study is feasible, and they can be really helpful. Pilot studies can be published (and can help secure funding), but they are not the main goal. Take feasibility studies with a grain of salt, and have more confidence in a main study than in a pilot study.

Q

Four Different Types Of Research Objectives

A

Four different types:

  1. Intent of the study is descriptive: characterize clinical phenomena or existing conditions within a population (specific aims)
  2. Intent of the study is to determine validity and/or reliability of an instrument (research objectives or purpose of the study, not hypotheses)
  3. Intent of the study is exploration of relationships and prediction of risk factors that contribute to an impaired state (hypotheses)
  4. Intent of the study is to establish a cause-and-effect relationship (hypotheses)

A correlational study (exploring relationships) likely falls under the third type.

The last type is the true experimental design.

More qualitative research tends to use the term specific aims.

How well did you know this?
1
Not at all
2
3
4
5
Perfectly
7
Q

Hypothesis (not the types)

A
  • Declarative statement that predicts the relationship between the independent and dependent variables
  • Purpose of the study is to test a hypothesis so that the researcher can either accept or reject it

A declarative statement (not a question). We are talking about experimental or quasi-experimental studies here.

Q

What are the four steps to asking the research question?

A
  1. Select a topic
  2. State the research problem
  3. Define the research question
  4. Clarify research objectives
Q

Clarify the Research Objectives

A

Step 4 in asking the research question

  • Are developed after defining the target population, research rationale, and research variables
  • May be presented as hypotheses, specific aims, purpose of the research, or the research objectives

Now we need research objectives. The rationale should already be done because it is the core of picking the research question.

Research objectives have several synonymous names: hypotheses, specific aims, purpose of the research, or research objectives. Different terms are used depending on the context.

Q

Power Analysis

A
  • Determines sample size in experimental and correlational studies
  • Determining sample size requires the following factors:
    • Alpha
    • Beta
    • Variance
    • Effect size
Q

Selecting a Topic (what guides you?)

A

Step 1 in asking the research question

  • Identified based on studying a population, type of intervention, clinical theory, or a fundamental policy issue in the profession
  • Based on experiences
Q

Type I Error: Level of Significance

A
  • Rejecting the null hypothesis when it is actually true (stating there is a difference when there is really no difference: a false positive)
  • Alpha = 0.05
  • p-value = probability that the observed difference is due to chance alone
    • Either the results are significant or not significant; p-values cannot tell you whether a difference is “highly significant”

Directly related to Type I error: rejecting H0 when H0 was actually true, i.e. a false positive.

There is always a risk of a Type I error; the question is how high a risk we are okay with. That is the alpha level.

Alpha = 0.05 means we are 95% sure we did not make a Type I error; we accept a 5% chance of making one.

0.05 is a fairly standard (albeit arbitrary) level in physical therapy research.

Alpha can never equal 0.

p stands for probability: what is the probability that what we see is due to chance? p must be below 0.05 to be statistically significant. When we see p = 0.02, it means there is a 2% chance that the observed difference is due to chance alone.

We can never say that something is “highly significant.” It is either significant or not significant; p = 0.04 is not less significant than p = 0.01. There is no trending, it is just black and white.

The alpha level is related to whether we choose a one-tailed or two-tailed test.

Type I error is a more deadly error to make than a Type II error.
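A minimal sketch, assuming Python with SciPy and made-up group data, of comparing an observed p-value to alpha; the outcome is binary, significant or not:

```python
# Sketch only: compare an observed p-value to alpha = 0.05 (data are made up)
from scipy import stats

alpha = 0.05
group_a = [68, 72, 75, 70, 74, 69, 73, 71]
group_b = [64, 66, 70, 65, 68, 63, 67, 66]

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# p = probability that the observed difference is due to chance alone.
# The decision is binary: significant or not; never "highly significant".
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0 (statistically significant)")
else:
    print(f"p = {p_value:.3f} >= {alpha}: fail to reject H0 (not significant)")
```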

Q

Variance

A
  • Variability within groups
  • Decreasing variance within a set of data will increase the statistical power
  • Ways to decrease variance include:
    • Using an experimental design
    • Controlling for sources of random measurement error
    • Increasing the size of the sample

This relates to SD in parametric statistics

Variance is the statistical concept, and standard deviation (the square root of the variance) is the way it is usually expressed.
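A minimal sketch, assuming Python with NumPy and made-up scores, of the variance/standard deviation relationship:

```python
# Sketch only: sample variance and standard deviation (scores are made up)
import numpy as np

scores = np.array([23.0, 25.5, 22.0, 27.5, 24.0, 26.0, 23.5, 25.0])

variance = np.var(scores, ddof=1)  # sample variance (n - 1 in the denominator)
sd = np.std(scores, ddof=1)        # sample standard deviation

print(f"variance = {variance:.2f}")
print(f"SD       = {sd:.2f}")
print(np.isclose(sd, np.sqrt(variance)))  # SD is the square root of variance -> True
```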

Q

Effect Size

A
  • When comparing groups, effect size is the difference between sample means
  • With correlational designs, the effect size is the degree of correlation or relationship between variables
  • “How large an effect will my treatment have?” or “How strong is the relationship between two variables?”
  • Effect size = degree to which the null hypothesis is false
  • Larger effect size = greater the difference between groups

Relates to the degree to which the H0 is false

Q

Cohen’s D Effect Sizes

A
  • Effect size = 0 = no difference in an outcome measure between groups
  • Cohen’s d effect sizes:
    • Small effect size = 0.2-0.3
    • Medium effect size = 0.5
    • Large effect size = 0.8+

These are kind of arbitrary, but it is good to know them.

Zero means no difference.

A d of 1 is a very large difference.

Whether you want a large effect size depends on what your research question is

Smaller effect size requires larger sample size
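A minimal sketch, assuming Python with NumPy and made-up group scores, of computing Cohen's d from two group means and the pooled standard deviation:

```python
# Sketch only: Cohen's d for two independent groups (scores are made up)
import numpy as np

group_a = np.array([34.0, 36.5, 38.0, 35.0, 37.5, 36.0, 39.0, 35.5])
group_b = np.array([31.0, 33.5, 32.0, 34.0, 30.5, 33.0, 31.5, 32.5])

n_a, n_b = len(group_a), len(group_b)
var_a, var_b = np.var(group_a, ddof=1), np.var(group_b, ddof=1)

# Pooled standard deviation across the two groups
pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))

d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")  # judge against ~0.2 small, ~0.5 medium, ~0.8 large
```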

Q

Sample Size

A
  • Larger the sample size = larger the statistical power
  • A priori analysis: how many subjects are needed in order to detect a significant difference for an expected effect size
  • Smaller effect size = larger the required sample
  • A significant challenge to performing an a priori analysis is the unknown effect size
  • Must make an educated guess about what effect would be considered clinically meaningful

A priori means before you started the study.

Determining effect size is a really big problem. Usually have to make an educated guess.
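A minimal sketch, assuming Python with statsmodels and conventional alpha = 0.05 / power = 0.80, of an a priori sample size calculation; the smaller the expected effect size, the larger the required sample per group:

```python
# Sketch only: a priori sample size for a two-group comparison
# (alpha = 0.05 and power = 0.80 are conventional, illustrative choices)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.8, 0.5, 0.2):  # large, medium, small expected effect sizes
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"expected effect size d = {d}: ~{n:.0f} subjects per group")

# Roughly: d = 0.8 needs ~26/group, d = 0.5 ~64/group, d = 0.2 around 390+/group
```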