Competence I: Ch 3, 4, & 21 Flashcards
Research Questions
Research questions are key elements in developing a topic area; in essence, they are questions that explore the relations among or between constructs. Ex: Does the client's level of dysfunction affect the working alliance formed in counseling?
Research Hypotheses
More specific than a research question in that it states the expected relationship between the constructs. Ex: More-dysfunctional clients will form poorer alliances in counseling than less-dysfunctional clients.
Descriptive Questions
Ask what some phenomena or events are like. Answered by collecting data through inventories, surveys, or interviews to describe events.
Difference Questions
Ask whether there are differences between groups of people, or even within individual participants. A comparison must take place in some form. Ex: between-group and within-group designs.
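For illustration, a minimal Python sketch of a between-groups difference question, assuming invented scores and using an independent-samples t-test as one common analysis:

```python
# Hedged sketch: do two hypothetical client groups differ on a measured
# outcome? All scores below are invented for illustration only.
from scipy import stats

group_a = [22, 25, 19, 24, 27, 21]  # e.g., alliance scores, treatment A
group_b = [18, 16, 20, 15, 19, 17]  # e.g., alliance scores, treatment B

# Independent-samples t-test for a between-groups design.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```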
Relationship Questions
Explore the degree to which two or more constructs are related or vary together. Typically examined with correlation or regression analyses.
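For illustration, a minimal Python sketch of a relationship question, assuming invented ratings for two constructs:

```python
# Hedged sketch: to what degree do two hypothetical constructs vary
# together? All ratings below are invented for illustration only.
from scipy import stats

dysfunction = [3, 7, 5, 9, 2, 6, 8]  # hypothetical dysfunction ratings
alliance = [8, 4, 6, 3, 9, 5, 2]     # hypothetical alliance ratings

# A Pearson correlation quantifies how strongly the constructs covary.
r, p_value = stats.pearsonr(dysfunction, alliance)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```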
Function of a testable research question/ hypothesis
- Asks a question about (1) the relationship between (2) two or more constructs that can be (3) measured in some way.
- The question should be worded clearly and unambiguously in question form.
- The research question should inquire into a relationship between two or more constructs, asking whether construct A is related to construct B (this applies mostly to difference and relationship questions; descriptive questions collect or categorize information).
- The relationship must be both examinable and measurable.
Operational Definition
Specifying the activities or operations necessary to measure a construct in a particular experiment. Provides a working definition of a phenomenon.
Uniformity Myth
We have oversimplified counseling by assuming that psychotherapeutic treatments are a standard (uniform) set of techniques, applied in a consistent (uniform) manner, by a standard (uniform) therapist, to a homogeneous (uniform) group of clients. This myth has greatly hampered progress in understanding counseling research, so researchers are encouraged to ask questions about the best types of treatments for particular types of clients across various settings. It also translates into a research design myth, the belief that one type of research is better than all others, when we should instead ask: what is the best research design for this particular problem at this time?
MAXMINCON Principle
The researcher first tries to maximize the variance of the variable(s) pertaining to the research question, second to minimize the error variance of random variables, and third to control the variance of extraneous/unwanted variables. Applies most directly to experimental research (between- or within-groups designs) but can also be applied to all research designs.
Internal Validity
The internal validity of a study is the degree of experimental control allowing researchers to make inferences about causal relationships between variables. Studies high in control use random selection of participants, random assignment to treatments, and manipulation of the IV, allowing the researcher to make inferences about causality.
External Validity
Generalizability of the results to applied settings.
Experimental Field Studies
Moderately high in both external and internal validity (labeled high and high in the table). Characterized by investigations that manipulate IVs and are conducted in real-life settings. Examine causality through random assignment to treatments and control of IVs. Moderately high in external validity; and because researchers can never exercise the same control in the field as in the laboratory, only moderately high in internal validity as well.
Experimental Laboratory Studies
Low in external validity and high in internal validity. Characterized by manipulation of IVs and conducted in laboratory settings. Low in external validity because, instead of using participants sampled directly from a population of interest, the experimenter sets up a situation that resembles a naturally occurring one. High in internal validity because the experimenter randomly assigns participants to treatments and manipulates one or more IVs, making it possible to draw inferences about causality.
Descriptive Field Studies
High in external validity, low in internal validity. Investigations that do not exercise experimental controls and are conducted in real-life settings. High in external validity because the sample can be taken directly from a population of interest. Low in internal validity because variables are studied as they occur naturally rather than being manipulated.
Descriptive Laboratory Studies
Low in both internal and external validity. Investigations that do not exercise experimental controls (randomization or manipulation of the IV) and are conducted in laboratory settings. Low in external validity because the setting can only simulate real life. Low in internal validity because of the lack of experimental control through manipulation of an IV or randomization of participants. These studies involve describing, identifying, and categorizing data and obtaining descriptive statistics. Two reasons to conduct this type of study: 1. The laboratory setting allows the researcher some control over extraneous variables. 2. It is impossible to study some phenomena in a field or real-life setting.
Experimental Controls
Randomization of subjects or manipulation of the independent variable. Related to the MAXMINCON principle: maximize the variance of the variables related to the research question, minimize the error variance of random variables, and control the variance of extraneous variables.
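For illustration, a minimal Python sketch of one experimental control, random assignment of participants to conditions (participant IDs and group sizes are invented):

```python
# Hedged sketch: randomly assign hypothetical participants to two
# conditions by shuffling the list and splitting it in half.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 invented IDs
random.shuffle(participants)  # randomize the order of assignment

half = len(participants) // 2
treatment_group = participants[:half]
control_group = participants[half:]

print("Treatment:", treatment_group)
print("Control:", control_group)
```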
Existing Body of Knowledge
Information that previous research suggests about a topic and the kinds of questions that remain unanswered. New research should add to the existing literature. The design used is also important; if a topic has been researched in a laboratory setting, new research in a field setting is helpful.
Independent Variable
The variable(s) you manipulate.
Dependent Variable
The variable(s) you measure.
Bubble Hypothesis
Suggests that doing research is similar to trying to apply a sticker to a car windshield: as one air bubble is eliminated, another pops up. Every research design is flawed. Each has different limitations and strengths, but no single design can entirely eliminate the bubble.
Evaluation Program Stakeholders
Program managers, funders, and others who have a special interest in the program, NOT the people responsible for carrying out the program. Get their input about the program evaluation, ask for their participation, and incorporate their opinions and suggestions. What questions would the stakeholders like the evaluation to answer?
Structured Observations
Require the observer to be detached from the activity; typically the observations sample the program's activities in a structured fashion. Multiple observers may code human behaviors and tally the frequencies of behaviors that occur.
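For illustration, a minimal Python sketch of tallying coded behaviors from a structured observation (the behavior codes are invented):

```python
# Hedged sketch: tally the frequency of each invented behavior code
# recorded by an observer during a sampled activity period.
from collections import Counter

observed_codes = ["question", "praise", "praise", "redirect",
                  "question", "praise", "off-task"]

frequencies = Counter(observed_codes)
for behavior, count in frequencies.most_common():
    print(f"{behavior}: {count}")
```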
Evaluating Programs via surveys, focus groups, journals and content testing
Evaluation must not only examine how the program was implemented, but also ascertain whether the anticipated effects of the program are evident and whether the program is on track to achieve the stated program goals.
- Surveys: a low-cost way to collect information from program participants. Can be used to gather pre- and post-test data; quick and easy to administer.
- Focus groups: the interaction of people as they discuss a common experience or viewpoint. Provide self-reports of how the program affected participants; program staff can also share their concerns and impressions.
- Content testing: course-style exams evaluate program participants' knowledge about a topic using a paper-and-pencil activity. The test must be highly correlated with the material presented in the workshop, and individual items must be specific and difficult enough that only workshop attendees can answer them correctly.
Calculating Per Unit Cost
1. Determine fixed costs (those over which you have no control). 2. Calculate per-unit costs: printing costs as cost per page; overhead costs calculated per month; per-person costs should include all expenses associated with individuals, such as benefits, income tax, and Social Security.
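For illustration, a minimal worked example of the per-unit arithmetic (all dollar figures are invented):

```python
# Hedged sketch: per-unit cost calculations with invented figures.
printing_invoice = 300.00   # hypothetical total printing cost
pages_printed = 6000
cost_per_page = printing_invoice / pages_printed  # $0.05 per page

annual_overhead = 14400.00  # hypothetical rent, utilities, etc.
overhead_per_month = annual_overhead / 12  # $1,200.00 per month

monthly_salary = 4000.00    # hypothetical salary
benefits_taxes = 1000.00    # benefits, income tax, Social Security
cost_per_person_month = monthly_salary + benefits_taxes

print(f"Cost per page: ${cost_per_page:.2f}")
print(f"Overhead per month: ${overhead_per_month:,.2f}")
print(f"Cost per person-month: ${cost_per_person_month:,.2f}")
```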
Program evaluator responsibility for data integrity and validity and timely collection of data
Steps must be taken to ensure that the data are clean, as error-free as possible, unbiased, and collected on time and within budget.
1. The data collection procedure should undergo pilot testing. 2. Establish a checks-and-balances system: verify that everything is filled out correctly and that tasks are completed in their entirety.
Pilot Testing
What all data collection procedures should undergo. Requires thoroughly training any data collectors or site personnel in how the data are to be collected. Face-to-face training with several opportunities for role playing and questions can eliminate confusion.
Primary Data Analysis
Identical to the analyses presented in the results sections of empirical articles. Data from each component of the evaluation are analyzed in isolation from other data. Primary analyses are also excellent ways to present data to stakeholders in some preliminary fashion, and they prevent data from becoming backlogged.
Secondary Data Analysis
Ties together the individual primary analyses to describe a component of the program. The secondary analysis moves back and forth between the program's objectives and the evidence, weaving the individual pieces of data into a holistic picture.
Triangulating the data collection process
Use multiple methods, multiple data sources, and more than one data collector/observer, with data collected over more than one period of time. This strategy reduces reliance on any particular source.
Executive Summary
No more than three pages. Gives the reader an overview of the goals of the program and indicates the services provided, the outcomes anticipated, and the extent to which these objectives were met. Should stand on its own, containing enough detail that a person can grasp the program, its purpose, and its impact by reading it alone.
Accountability & Action Research
Action research focuses on generating local rather than generalized knowledge (generalized knowledge is the aim of outcome research). It covers all data collection activities that lead to findings useful for evaluating local programs. This responsibility falls on counseling practitioners. Counselor educators need to address causes of resistance when training entry-level counselors.
Linking Evidence Based Practice with Accountability
Evidence-based practice is the product of outcome research findings, while accountability activities employ action research methods. The goal is to view evidence-based practice more broadly: it is currently treated as the product of outcome research, but it should also be considered a product of action research.
Outcome Research
The view that evidence is the product of rigorous scientific empirical studies, the domain and responsibility of trained researchers employed in university settings. Counselors are responsible for locating and using evidence-based interventions.
Action Research
Focuses on generating local knowledge; encompasses all data collection activities that lead to findings useful for evaluating local programs.
Baker, 2012
Explains that outcome research produces evidence-based practice and that action research produces accountability activities. Counselor educators are responsible both for producing outcome research and for educating counselor practitioners to consume this research and produce their own action research. Practitioners are responsible for consuming and using outcome research as well as producing their own action research.