Ch 2 Flashcards
How many Americans received prefrontal lobotomies in the 1940s and 1950s? About 50,000.
Prefrontal lobotomy: a surgical procedure that severs the fibers connecting the frontal lobes to the thalamus.
What are the two modes of thinking? Intuitive and analytical thinking.
Intuitive thinking: based on first impressions, which are usually accurate. It is quick and reflexive, relies on gut hunches, and requires little mental effort. Also called System 1 thinking.
Malcolm Gladwell: popularized intuitive thinking.
Analytical thinking: also called System 2 thinking. It is slow, reflective, and requires mental effort; it rejects gut hunches when they seem wrong.
Daniel Kahneman: popularized analytical thinking.
Heuristic: a mental shortcut or rule of thumb that helps streamline thinking and make sense of the world.
Why does intuitive thinking lead to mistakes? Because it relies on heuristics, and hunches and snap judgments aren't always right.
Naturalistic observation (pro): high in external validity.
Naturalistic observation (con): low in internal validity; doesn't allow us to infer causation.
Case studies (pros): can provide existence proofs; allow us to study rare or unusual phenomena; can offer insights for later systematic testing.
Case studies (cons): are typically anecdotal; don't allow us to infer causation.
Correlational design (pro): can help predict behavior.
Correlational design (con): doesn't allow us to infer causation.
Allow to infer causation High in internal validity pro for Experimental design
pro for Experimental design
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Random selection. ensures every person in a population has an equal chance of being chosen to participate
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
ensures every person in a population has an equal chance of being chosen to participate
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Reliability consistency of measurement
Random selection. ensures every person in a population has an equal chance of being chosen to participate
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
consistency of measurement
Random selection. ensures every person in a population has an equal chance of being chosen to participate
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Self- report measures. questionnaires that asses variety of characteristics; related to self-reports
Reliability consistency of measurement
Random selection. ensures every person in a population has an equal chance of being chosen to participate
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
questionnaires that asses variety of characteristics; related to self-reports
Reliability consistency of measurement
Random selection. ensures every person in a population has an equal chance of being chosen to participate
Existence proof demonstration that a given psychological phenomenon can occur
Case study research design that examines people or person in depth over time
Internal validity extent to which we can draw cause and effect inferences from a study
External validity. extent to which we can generate findings to real world settings
Naturalistic observation. Watch behavior in real world settings and unfolds naturally; Understand range of behavior
Can sometimes be low in external validity con for Experimental design
Allow to infer causation High in internal validity pro for Experimental design
Self- reports. are surveys that are used to measure people’s opinions and attitudes
Self- report measures. questionnaires that asses variety of characteristics; related to self-reports
Random selection. crucial if researchers want to generalize results to the broader population
Obtaining a random selection. usually more important than obtaining a large sample
Nonrandom selection. can lead to misleading conclusions
Reliable? Valid? the two questions to ask when evaluating results from a dependent variable or measure
Reliability. also applies to interview and observation data
Interrater reliability. extent to which different people who conduct an interview or make behavioral observations agree on the characteristics they're measuring
Validity. extent to which a measure assesses what it purports to measure
Reliability is necessary for validity because we need to measure something consistently before we can measure it well.
Easy to administer; measures of personality traits and behaviors often work well. advantage of self-reports
Assumes participants have enough insight into their personality characteristics to report them accurately, and assumes that they are being honest. disadvantage of self-reports
Response set. tendency of research participants to distort their responses to questionnaire items
Horns effect or pitchfork effect. what the converse of the halo effect is called
The horns effect or pitchfork effect. in this effect, ratings of one negative trait spill over to influence ratings of other negative traits
Correlational design. research design that examines the extent to which two variables are associated
Correlation coefficient; can be positive, zero (none), or negative. 2 facts about correlations
-1.0 to 1.0. the range of values a correlation coefficient can take
Scatterplot. grouping of points on a 2-D graph in which each dot represents a single person's data
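The correlation cards above can be made concrete with a short sketch (not from the text; the function name and data are made up for illustration) that computes a Pearson correlation coefficient by hand, showing that it always lands between -1.0 and 1.0 and can be positive or negative:

```python
# Illustrative sketch: Pearson correlation coefficient computed by hand.
import math

def pearson_r(xs, ys):
    """Correlation coefficient between two equal-length lists of scores."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Sum of cross-products of deviations from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    # Square roots of the sums of squared deviations
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# A perfect positive association gives r = 1.0 ...
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # 1.0
# ... and a perfect negative association gives r = -1.0
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # -1.0
```

Plotting each (x, y) pair as a dot would give the scatterplot the card describes; the tighter the dots cluster around a line, the closer r is to -1.0 or 1.0.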
Illusory correlation. perception of a statistical association between two variables where none exists; a statistical mirage; forms the basis of superstitions
Experimental designs. when done correctly, permit cause-and-effect inferences; also allow researchers to manipulate variables
Experiment. research design characterized by random assignment of participants to conditions and manipulation of an independent variable
Random assignment and manipulation of an independent variable. What makes an experiment?
Experimental and control group. Randomly sorting participants into two groups yields the
Experimental group. the group of participants that receives the manipulation
Control group. the group of participants that doesn’t receive manipulation
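Random assignment, as in the cards above, can be illustrated with a short Python sketch (the participant IDs and the fixed seed are made up for the example):

```python
import random

def random_assignment(participants, seed=None):
    """Randomly sort participants into experimental and control groups."""
    rng = random.Random(seed)          # seeded only so the example is repeatable
    pool = list(participants)
    rng.shuffle(pool)                  # every ordering is equally likely
    half = len(pool) // 2
    return pool[:half], pool[half:]    # (experimental group, control group)

experimental, control = random_assignment(range(1, 21), seed=42)
print(len(experimental), len(control))  # 10 10
```

Because chance alone decides each participant's group, pre-existing differences tend to even out across the two groups, which is what licenses causal inferences.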
Experimental group. the group of participants that receives the manipulation
Experimental and Control group Randomly sorting participants into two groups
Random assignment Manipulative independent variable What makes an experiment?
Experiment research design characterized by random assignment of participants to conditions and manipulation of an independent variable
the group of participants that doesn’t receive manipulation
Experimental group. the group of participants that receives the manipulation
Experimental and Control group Randomly sorting participants into two groups
Random assignment Manipulative independent variable What makes an experiment?
Experiment research design characterized by random assignment of participants to conditions and manipulation of an independent variable
Independent variable. variable that an experimenter manipulates
Dependent variable. variable that an experimenter measures to see whether the manipulation has an effect
operational definition. When we define the dependent or independent variable for the purposes of a study, we provide an
operational definition. a working definition of what a researcher is measuring
the only difference between the experimental and control groups. For an experiment to possess adequate internal validity, the level of the independent variable must be
Confounding variables. any variable that differs between the experimental and control groups other than the independent variable
Ask whether it is an experiment; if it isn’t, don’t draw causal conclusions. To decide whether to infer cause-and-effect relations
Placebo effect, nocebo effect, experimenter expectancy effect, and demand characteristics. Disadvantages of the experimental design
Placebo effect. improvement resulting from the mere expectation of improvement; a powerful reminder that expectations can create reality.
Blind. To avoid the placebo effect, patients need to be
Patients in the experimental group may improve more; participants in the control group may become resentful. If the ‘blind’ is broken, 2 different things can happen:
Blind. unaware of whether one is in the experimental or control group
Patients in experimental group may improvise more Participants in the control group may become resentful If the ‘blind is broken’ 2 different things can happen:
unaware of whether one is in the experimental or control group
Patients in experimental group may improvise more Participants in the control group may become resentful If the ‘blind is broken’ 2 different things can happen:
Nocebo effect. harm resulting from the mere expectation of harm
Experimenter expectancy effect. when researchers’ hypotheses lead them to unintentionally bias the outcome of a study
Double blind. when neither researchers nor participants are aware of who’s in the experimental or control group
Demand characteristics. cues that participants pick up from a study that allow them to generate guesses regarding the researchers’ hypotheses
Informed consent. informing research participants of what is involved in a study before asking them to participate
Statistics. application of math to describing and analyzing data
Descriptive stats, Inferential stats. What are the two types of statistics?
Descriptive stats. numerical characteristics that describe data
Central tendency, Variability. Two major types of descriptive stats
Central tendency. measure of the central scores in a data set, or where the group tends to cluster
Mean, Median, Mode. 3 measures of central tendency
Mean. average of all scores in a data set
Median. middle score in a data set
Mode. most frequent score in a data set
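The three measures of central tendency above can be illustrated with a short sketch using Python's standard library (the data set here is made up for illustration):

```python
from statistics import mean, median, mode

# Hypothetical quiz scores (sorted for readability)
scores = [70, 80, 80, 90, 100]

print(mean(scores))    # 84  — sum of scores divided by their count
print(median(scores))  # 80  — middle score of the sorted data set
print(mode(scores))    # 80  — most frequent score
```

Note that the median and mode coincide here; with skewed data (e.g. one very high score) the mean shifts while the median and mode stay put, which is why all three are reported.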
Variability. measures of how loosely or tightly bunched scores are
Range, Standard deviation. Two measures of variability
Range. difference between the highest and lowest scores
Standard deviation. measure of variability that takes into account how far each data point is from the mean
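Both variability measures can be computed in a minimal sketch (hypothetical data set; `pstdev` is the population standard deviation, which divides by N rather than N-1):

```python
from statistics import pstdev

# Hypothetical data set
scores = [2, 4, 4, 4, 5, 5, 7, 9]

# Range: highest score minus lowest score
data_range = max(scores) - min(scores)   # 9 - 2 = 7

# Standard deviation: accounts for every point's distance from the mean
sd = pstdev(scores)                      # 2.0

print(data_range, sd)
```

Unlike the range, which uses only the two extreme scores, the standard deviation reflects every data point, so a single outlier inflates the range far more than it inflates the standard deviation.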
Inferential stats. math methods that allow us to determine whether we can generalize findings from our sample to the full population
One crucial task of peer reviewers is to identify flaws and tell researchers how to do the study better next time.