Quantitative Flashcards

1
Q

(Wester et al., 2013)

Wester, K. L., Borders, L. D., Boul, S., & Horton, E. (2013). Research Quality: Critique of Quantitative Articles in the Journal of Counseling & Development. Journal of Counseling & Development, 91(3), 280–290. https://doi.org/10.1002/j.1556-6676.2013.00096.x

A

Purpose: to examine the quality of quantitative articles in the Journal of Counseling & Development (JCD)

Found omissions of psychometric information on instruments, effect sizes, and statistical power; Type VI and Type II errors were found.

Increases in qualitative research, but only 28% in JCD

Quality of research design: (a) conceptual basis or theoretical foundation, (b) research question, (c) sampling procedure and its connection to the research design, and (d) psychometrics of instrumentation.

Quality of data analysis: (a) type of statistic used and the appropriateness of the statistic, (b) statistical power, and (c) effect size.

Types of quantitative articles published: 75% descriptive; 7.5% experimental; 7.5% quasi-experimental; 2.5% outcome based; 2.5% secondary data analysis; the remaining 5% (2 articles) were dropped from the study due to methodological inappropriateness.

Majority of research in JCD tends to be comparative or descriptive as opposed to experimental

Validity and reliability scores for instrumentation used: 29 of 38 articles did not include previous instrument validity

  • majority of instruments used were reliable, but 5 instruments fell below the acceptable reliability threshold
  • validity not reported for 97% of instruments, and not discussed for the 9 instruments created for their studies (common practice is to report reliability with the current sample once collected)

Analysis: 21 of 191 analyses exhibited Type VI error (i.e., the statistical procedures were not connected to the research question)

Power: Type II error is the failure to reject the null hypothesis when it should be rejected; many authors did not provide specific power values, and only 13% reported power at all (although it only needs to be reported when nonsignificant results are found); see the power-analysis sketch below

  • power rarely reported in social science research
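A minimal sketch of what an a priori power analysis could look like, assuming an independent-samples t test and the statsmodels library (neither is specified in the article); the effect size, alpha, and sample sizes are hypothetical.

```python
# Hedged sketch: a priori power analysis for a two-group t test using
# statsmodels (an assumed tool; the article does not prescribe one).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a medium effect (d = .5)
# at alpha = .05 with power = .80 (all hypothetical values).
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"n per group needed: {n_per_group:.0f}")

# Achieved power if the study actually ran with 40 participants per group.
achieved = analysis.solve_power(effect_size=0.5, nobs1=40, alpha=0.05)
print(f"achieved power with n = 40/group: {achieved:.2f}")
```

Reporting numbers like these would address the finding that only 13% of the reviewed studies reported power at all.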

Effect size: needed further discussion

The field is not examining what is effective in counseling or within educational encounters; reasons are unknown but could include lack of knowledge of designs and the feasibility of conducting interventions.

Problems in instrumentation are a main reason for rejection from publication; need to include validity/reliability information

Ingredients for promoting research competencies: (a) having access to research mentors who provide guidance and advice, (b) having educational experiences in research, (c) involving students in research projects from idea inception and design to completion, (d) protecting time for research activities to occur, and (e) providing research infrastructure

2
Q

(Wester & Borders, 2014)

Wester, K. L., & Borders, L. D. (2014). Research Competencies in Counseling: A Delphi Study. Journal of Counseling & Development, 92(4), 447–458. https://doi.org/10.1002/j.1556-6676.2014.00171.x

A

Research competencies in counseling developed using the Delphi method

The common and consistent problems include sampling errors, inappropriate statistical analyses, a lack of research questions, a lack of statistical power, and a lack of validity information for instrumentation

6 domains and 6 competency components:

  1. Informed and critical thinking: have knowledge of the field, think theoretically and critically, and frame significant research questions.
  2. Steps in the research process: ability to design, implement, and interpret research; identify an appropriate method of inquiry; collect and analyze data; eliminate bias in the research process; provide results in a way that is accessible and understandable to others; and communicate research findings
  3. Ethical and professional competence: requires knowledge of relevant professional ethical codes and the ability to solve ethical problems that arise during the research process.
  4. Breadth and appreciation: knowledge and skill of the entire research process, from idea inception to dissemination
  5. Relational aspects: be collaborative and build relationships within their research teams and with individuals in the surrounding community
  6. Continual education: engage in continuing education; accept and seek feedback

Researcher-practitioner gap: researchers need to ask questions that are relevant to practitioners and accessible to them; form relationships

3
Q

(Black & Helm, 2010)

Black, L. L., & Helm, H. M. (2010). Defining Moments: The Golden Anniversary of Counselor Education and Supervision. Counselor Education and Supervision, 50(1), 2–4. https://doi.org/10.1002/j.1556-6978.2010.tb00104.x

A

golden anniversary of CES

call for more rigorous standards within CES

critically evaluate scholarship, approach with new imagination

need for high research quality, scholarship that is respected in and outside the CES profession

4
Q

(Trusty, 2011)

Trusty, J. (2011). Quantitative articles: Developing studies for publication in counseling journals. Journal of Counseling & Development, 89(3), 261–267. https://doi.org/10.1002/j.1556-6678.2011.tb00087.x

A

Focus on choosing variables and measures and on selecting statistical analysis models.

preparation of manuscripts for publication, including article organization and presentation of data

APA Publication Manual (6th ed. at the time) is useful; this article serves as a complement, expanding on studies that fall outside the publication manual and giving more specifics within it

Research parameters: conduct a full review of the literature first (counseling, psychology, and wider social science literature)

Pilot studies are helpful for finding design flaws prior to the full study

Define the research problem: filling gaps is the goal of research; look at limitations noted in published articles and at population subgroups

research problems are the driving force behind research methods

inductive- research decisions arise from data; deductive- decisions based on past findings or theory

Rejection from publication often occurs because of little contribution to the knowledge base and weaknesses in measurement

(a) Some instruments are constructed and used for research, whereas others are constructed and used for clinical or evaluation purposes; (b) often, the more established, well-researched instrument is better than one about which little is known; (c) measurement error can undermine the validity of any study; (d) new or unknown measures demand a more thorough description of the instrument in the manuscript; and (e) any measure selected should be conceptually consistent with the theory used to specify the variables and model.

using own measures requires thorough description

Method variance is a psychometric validity concept; when method variance is present, the relationship between the independent and dependent variables is artificially and spuriously strong. Use different methods to measure the IV and DV.

Design and measures used determine the analysis used; the statistic should not determine the design (ex: chi square = correlation)

Article organization- follow APA pub manual, specific items to include and when

No new data in the discussion section; often one combined discussion and implications section

Many manuscripts fall short for publication in the discussion: explain why you found what you found and the meaning of the findings, cite the introduction literature again, tie findings to counseling practice, and balance making findings meaningful without overextrapolating

Limitations: internal validity and external validity threats noted

Presentation of data: refer to the APA manual; journals have limits on tables/figures in articles; use them judiciously, as they are helpful in enhancing understanding of findings

5
Q

(Thompson, 2002)

Thompson, B. (2002). “Statistical,” “practical,” and “clinical”: How many kinds of significance do counselors need to consider? Journal of Counseling & Development, 80(1), 64–71. https://doi.org/10.1002/j.1556-6678.2002.tb00167.x

A

Types of significance and how to choose among them

statistical, practical, and clinical

Statistical significance: estimates the probability of obtaining sample results that deviate as much as or more from the values specified by the null hypothesis for the population as the actual sample results do, assuming the null hypothesis is true (worked sketch below)

  • limitations: very unlikely events may still be very important; researchers' value judgments are not integrated into p values
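A minimal worked sketch of that definition, using a one-sample t test against a hypothesized population mean; the data and the use of scipy are illustrative assumptions, not from Thompson's article.

```python
# Hedged sketch: the p value as the probability of sample results deviating
# this much or more from the null-specified population value, if the null
# hypothesis is true.
from scipy import stats

scores = [98, 103, 110, 95, 107, 101, 112, 99, 105, 108]  # hypothetical scores
t_stat, p_value = stats.ttest_1samp(scores, popmean=100)   # H0: population mean = 100
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p alone says nothing about practical or clinical importance.
```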

Practical significance: significance tests can only test ordinal data; use effect size tests; example: a statistically significant difference in IQ (1 IQ point) with no practical application

Clinical significance: making decisions about client needs (e.g., involuntary hospitalization); example: the same MMPI scores would require hospitalization under one study's criteria but not under another's; the practical implications are significant here

Kazdin defined clinical significance as referring "to the practical or applied value or importance of the effect of the intervention—that is, whether the intervention makes a real (e.g., genuine, palpable, practical, noticeable) difference in everyday life to the clients or to others with whom the client interacts"

Move from a reject/fail-to-reject model to asking "how much?" (better, more effective, etc.)

effect size is necessary for practical significance; clinical significance is much more complicated to assess than practical

Large practical effect sizes do not guarantee clinical significance, but they do increase the chances

Glass’ g or Cohen’s d

Cohen's benchmarks: .2 is small, .5 is medium, .8 is large; but Cohen argued against using these numbers too rigidly
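A minimal sketch of computing Cohen's d with a pooled standard deviation, so the benchmarks above have something concrete to attach to; the groups and scores are hypothetical, and the pooled-SD convention is one common choice rather than something prescribed by the article.

```python
# Hedged sketch: Cohen's d for two independent groups (hypothetical data).
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    # Pool the variances weighted by each group's degrees of freedom.
    pooled_sd = np.sqrt(((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1))
                        / (n1 + n2 - 2))
    return (g1.mean() - g2.mean()) / pooled_sd

treatment = [24, 27, 30, 22, 28, 26, 31, 25]  # hypothetical post-test scores
control = [21, 23, 25, 20, 24, 22, 26, 23]
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
# Interpret against .2 / .5 / .8 as rough guides, not rigid cutoffs.
```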

Corrected effect size values are likely to be smaller (shrunken) but more accurate
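One common small-sample correction that produces such a shrunken value is Hedges' adjustment to Cohen's d; the specific formula is assumed here as an illustration, since the notes do not name which correction Thompson discusses.

```latex
g \approx d\left(1 - \frac{3}{4(n_1 + n_2) - 9}\right)
```

Here n1 and n2 are the group sizes; the multiplier is slightly below 1, so g comes out smaller than d but less biased.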

p values are not sufficient; effect sizes that are correctly chosen, and often corrected, are also needed

Report and interpret effect sizes; interpretation includes an element of subjectivity
