Ch 2 Flashcards

1
Q

How many Americans received prefrontal lobotomies in the 1940s and 1950s?

A

About 50,000.

2
Q

Prefrontal lobotomy

A

A surgical procedure that severs the fibers connecting the frontal lobes to the thalamus.

3
Q

What are the two modes of thinking?

A

Intuitive thinking and analytical thinking.

4
Q

Intuitive thinking

A

Thinking based on first impressions, which are usually accurate. It is quick and reflexive, relies on gut hunches, and requires little mental effort. Also called System 1 thinking.

5
Q

Who coined intuitive thinking?

A

Malcolm Gladwell.

6
Q

Analytical thinking

A

Also called System 2 thinking. It is slow, reflective, and requires mental effort. It rejects gut hunches when they seem wrong.

7
Q

Who coined analytical thinking?

A

Daniel Kahneman.

8
Q

Heuristic

A

A mental shortcut or rule of thumb that helps streamline thinking and make sense of the world.

9
Q

Why does intuitive thinking's reliance on heuristics sometimes lead to mistakes?

A

Because hunches and snap judgments aren't always right.

10
Q

What is a pro of naturalistic observation?

A

It is high in external validity.

11
Q

What is a con of naturalistic observation?

A

It is low in internal validity and doesn't allow us to infer causation.

12
Q

What are the pros of case studies?

A

They can provide existence proofs, allow us to study rare or unusual phenomena, and can offer insights for later systematic testing.

13
Q

What are the cons of case studies?

A

They are typically anecdotal and don't allow us to infer causation.

14
Q

What is a pro of correlational designs?

A

They can help predict behavior.

15
Q

What is a con of correlational designs?

A

They don't allow us to infer causation.

16
Q

What are the pros of experimental designs?

A

They allow us to infer causation and are high in internal validity.

17
Q

What is a con of experimental designs?

A

They can sometimes be low in external validity.

18
Q

Naturalistic observation

A

Watching behavior in real-world settings as it unfolds naturally; helps us understand the range of behavior.

19
Q

External validity

A

The extent to which we can generalize findings to real-world settings.

20
Q

Internal validity

A

The extent to which we can draw cause-and-effect inferences from a study.

21
Q

Case study

A

A research design that examines one person or a small group of people in depth over time.

22
Q

Existence proof

A

A demonstration that a given psychological phenomenon can occur.

23
Q

Random selection

A

A procedure that ensures every person in a population has an equal chance of being chosen to participate.

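Illustration (not part of the original deck): a minimal Python sketch of random selection, using a hypothetical population of 1,000 student IDs. Every ID has an equal chance of being drawn into the sample.

    import random

    # Hypothetical population of 1,000 student IDs.
    population = list(range(1, 1001))

    # Random selection: each ID has an equal chance of being chosen.
    sample = random.sample(population, k=100)

    print(len(sample))   # 100 participants
    print(sample[:5])    # first few chosen IDs; varies from run to run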
24
Q

Reliability

A

Consistency of measurement.

25
Q

Self-report measures

A

Questionnaires that assess a variety of characteristics; related to self-reports.

26
Q

Self-reports

A

Surveys used to measure people's opinions and attitudes.

27
Q

When is random selection crucial?

A

When researchers want to generalize their results to the broader population.

28
Q

Which is usually more important: obtaining a random selection or obtaining a large sample?

A

Obtaining a random selection is usually more important than obtaining a large sample.

29
Q

Nonrandom selection

A

Can lead to misleading conclusions.

30
Q

When evaluating results from a dependent variable or measure, what two questions should we ask?

A

Is it reliable? Is it valid?

31
Q

Reliability applies to which kinds of data?

A

Interview and observational data.

32
Q

Interrater reliability

A

The extent to which different people who conduct an interview or make behavioral observations agree on the characteristics they're measuring.

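Illustration (not part of the original deck): a minimal Python sketch of one simple index of interrater reliability, percent agreement between two hypothetical observers who coded the same ten behaviors. Chance-corrected statistics such as Cohen's kappa are usually preferred, but percent agreement shows the basic idea.

    # Hypothetical codes from two observers watching the same 10 behaviors.
    rater_a = ["hit", "help", "help", "hit", "help", "hit", "help", "help", "hit", "help"]
    rater_b = ["hit", "help", "hit",  "hit", "help", "hit", "help", "help", "hit", "hit"]

    # Percent agreement: the share of observations both raters coded identically.
    agreements = sum(a == b for a, b in zip(rater_a, rater_b))
    percent_agreement = agreements / len(rater_a)

    print(f"Percent agreement: {percent_agreement:.0%}")  # 80%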
33
Q

Validity

A

The extent to which a measure assesses what it purports to measure.

34
Q

Why is reliability necessary for validity?

A

Because we need to measure something consistently before we can measure it well.

35
Q

What are the advantages of self-report measures?

A

They are easy to administer, and measures of personality traits and behaviors often work well.

36
Q

What is a disadvantage of self-report measures?

A

They assume that participants have enough insight into their personality characteristics to report them accurately, and that they are being honest.

37
Q

Response set

A

The tendency of research participants to distort their responses to questionnaire items.

38
Q

What is the converse of the halo effect called?

A

The horns effect (or pitchfork effect).

39
Q

Horns effect (pitchfork effect)

A

The effect in which ratings of one negative trait spill over to influence ratings of other negative traits.

40
Q

Correlational design

A

A research design that examines the extent to which two variables are associated.

41
Q

Can be positive, none, or negative Correlation coefficient b. 2 facts about correlations

A

2 facts about correlations

42
Q

-1.0 to 1.0. Correlation coefficients range from

A

Correlation coefficients range from

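A minimal Python sketch of how a correlation coefficient is computed; the study-hours and exam-score numbers are made-up illustration data, not from the text.

```python
# Illustrative sketch: computing a Pearson correlation coefficient by hand.
# The hours/score values below are hypothetical example data.
from math import sqrt

hours_studied = [1, 2, 3, 4, 5]
exam_scores = [55, 60, 70, 75, 90]

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

r = pearson_r(hours_studied, exam_scores)
print(round(r, 2))  # always between -1.0 and +1.0; here strongly positive
```

Python 3.10+ also ships statistics.correlation(x, y), which performs the same calculation.
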
43
Q

Scatterplot. grouping of points on a 2-D graph in which each dot represents a single person’s data

A

grouping of points on a 2-D graph in which each dot represents a single person’s data

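A minimal sketch of a scatterplot, assuming the matplotlib library is available; the points reuse the same hypothetical hours/scores from the correlation sketch above.

```python
# Illustrative sketch of a scatterplot: each dot is one (hypothetical) person's
# pair of scores. Assumes matplotlib is installed.
import matplotlib.pyplot as plt

hours_studied = [1, 2, 3, 4, 5]
exam_scores = [55, 60, 70, 75, 90]

plt.scatter(hours_studied, exam_scores)  # one dot per person
plt.xlabel("Hours studied")
plt.ylabel("Exam score")
plt.title("Each point is a single person's data")
plt.show()
```
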
44
Q

Illusory correlation

A

perception of a statistical association between two variables where none exists; a statistical mirage; forms the basis of superstitions

45
Q

Experimental designs. when done correctly, they permit cause-and-effect inferences; they also allow researchers to manipulate variables.

A

when done correctly, they permit cause-and-effect inferences; they also allow researchers to manipulate variables.

46
Q

Experiment research design characterized by random assignment of participants to conditions and manipulation of an independent variable

A

research design characterized by random assignment of participants to conditions and manipulation of an independent variable

47
Q

Random assignment and manipulation of an independent variable. What makes an experiment?

A

What makes an experiment?

48
Q

Experimental and control group. Randomly sorting participants into two groups

A

Randomly sorting participants into two groups

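A minimal sketch of random assignment using Python's random module; the participant labels P1–P20 are hypothetical.

```python
# Illustrative sketch of random assignment: randomly sorting a hypothetical
# participant list into an experimental group and a control group.
import random

participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical participants
random.shuffle(participants)                    # chance alone decides group membership

half = len(participants) // 2
experimental_group = participants[:half]  # will receive the manipulation
control_group = participants[half:]       # will not receive the manipulation

print(experimental_group)
print(control_group)
```
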
49
Q

Experimental group. the group of participants that receives the manipulation

A

the group of participants that receives the manipulation

50
Q

Control group. the group of participants that doesn’t receive manipulation

A

the group of participants that doesn’t receive manipulation

51
Q

Independent variable. variable that an experimenter manipulates

A

variable that an experimenter manipulates

52
Q

Dependent variable. variable that an experimenter measures to see whether the manipulation has an effect

A

variable that an experimenter measures to see whether the manipulation has an effect

53
Q

operational definition. When we define the dependent or independent variable for the purposes of a study, we provide an

A

When we define the dependent or independent variable for the purposes of a study, we provide an

54
Q

operational definition. a working definition of what a researcher is measuring

A

a working definition of what a researcher is measuring

55
Q

the only difference between the experimental and control groups. For an experiment to possess adequate internal validity, the level of the independent variable must be

A

For an experiment to possess adequate internal validity, the level of the independent variable must be

56
Q

Confounding variables any variable that differs between the experimental and control groups other than the independent variable

A

any variable that differs between the experimental and control groups other than the independent variable

57
Q

Ask whether it is an experiment; if it isn’t, don’t draw causal conclusions. To decide whether to infer cause-and-effect relations

A

To decide whether to infer cause-and-effect relations

58
Q

Placebo effect, nocebo effect, experimenter expectancy effect, demand characteristics. Disadvantages of the experimental design

A

Disadvantages of the experimental design

59
Q

Placebo effect improvement resulting from the mere expectation of improvement and is a powerful reminder that expectations can create reality.

A

improvement resulting from the mere expectation of improvement and is a powerful reminder that expectations can create reality.

60
Q

blind. To avoid the placebo effect, patients need to be ____.

A

To avoid the placebo effect, patients need to be ____.

61
Q

Patients in the experimental group may improve more; participants in the control group may become resentful. If the ‘blind is broken’, 2 different things can happen:

A

If the ‘blind is broken’ 2 different things can happen:

62
Q

Blind unaware of whether one is in the experimental or control group

A

unaware of whether one is in the experimental or control group

63
Q

Nocebo effect harm resulting from the mere expectation of harm

A

harm resulting from the mere expectation of harm

64
Q

Experimenter expectancy effect. when researchers’ hypotheses lead them to unintentionally bias the outcome of a study

A

when researchers’ hypotheses lead them to unintentionally bias the outcome of a study

65
Q

Double blind. when neither researchers nor participants are aware of who’s in the experimental or control group

A

when neither researchers nor participants are aware of who’s in the experimental or control group

66
Q

Demand characteristics. cues that participants pick up from a study that allow them to generate guesses regarding the researcher’s hypotheses

A

cues that participants pick up from a study that allow them to generate guesses regarding the researcher’s hypotheses

67
Q

Informed consent informing research participants of what is involved in a study before asking them to participate

A

informing research participants of what is involved in a study before asking them to participate

68
Q

Statistics application of math to describing and analyzing data

A

application of math to describing and analyzing data

69
Q

Descriptive stats and inferential stats. What are the two types of statistics?

A

What are the two types of statistics

70
Q

Descriptive stat. numerical characteristics that describe data.

A

numerical characteristics that describe data.

71
Q

Central tendency and variability. Two major types of descriptive stats

A

Two major types of Descriptive stats

72
Q

Central tendency measure of the central scores in a data set, or where the group tends to cluster.

A

measure of the central scores in a data set, or where the group tends to cluster.

73
Q

Mean, median, mode. 3 measures of central tendency

A

3 measures of Central tendency

74
Q

Mean average

A

average

75
Q

Median middle score in a data set

A

middle score in a data set

76
Q

Mode most frequent score in a data set

A

most frequent score in a data set

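A minimal sketch of the three measures of central tendency, using Python's built-in statistics module; the scores list is made up for illustration.

```python
# Illustrative sketch of mean, median, and mode on hypothetical scores.
import statistics

scores = [2, 4, 4, 5, 7, 9, 11]

print(statistics.mean(scores))    # average: 6
print(statistics.median(scores))  # middle score: 5
print(statistics.mode(scores))    # most frequent score: 4
```
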
77
Q

Variability measures of how loosely or tightly bunched scores are.

A

measures of how loosely or tightly bunched scores are.

78
Q

Range and standard deviation. Variability measures

A

Variability measures

79
Q

Range difference between the highest and lowest scores

A

difference between the highest and lowest scores

80
Q

Standard deviation. measure of variability that takes into account how far each data point is from the mean

A

measure of variability that takes into account how far each data point is from the mean

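A minimal sketch of the two variability measures on the same made-up scores; statistics.pstdev gives the population standard deviation (statistics.stdev would give the sample version).

```python
# Illustrative sketch: range and standard deviation for hypothetical scores.
import statistics

scores = [2, 4, 4, 5, 7, 9, 11]

value_range = max(scores) - min(scores)  # highest minus lowest: 11 - 2 = 9
std_dev = statistics.pstdev(scores)      # based on how far each score is from the mean

print(value_range)        # 9
print(round(std_dev, 2))  # about 2.93
```
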
81
Q

Inferential stats. math methods that allow us to determine whether we can generalize findings from our sample to the full population

A

math methods that allow us to determine whether we can generalize findings from our sample to the full population

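A minimal sketch of one common inferential statistic, an independent-samples t-test, assuming the SciPy library is installed; the group scores are hypothetical and the t-test is just one example of an inferential method.

```python
# Illustrative sketch: an inferential test asking whether a difference between
# two hypothetical groups is likely to generalize beyond this sample.
# Assumes SciPy is installed.
from scipy import stats

experimental_scores = [12, 15, 14, 16, 13, 17]  # hypothetical data
control_scores = [10, 11, 12, 9, 13, 10]

result = stats.ttest_ind(experimental_scores, control_scores)
# A small p-value suggests the group difference is unlikely to be due to chance alone.
print(round(result.statistic, 2), round(result.pvalue, 3))
```
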
82
Q

identify flaws and tell researchers how to do the study better next time.

A

One crucial task of peer reviewers is to
