ETHICS IN RESEARCH & TEST DEVELOPMENT Flashcards
These are the rules of conduct necessary when carrying out research.
Ethics
Ethics in Research
- informed consent
- integrity
- confidentiality
- protection: do good, do no harm
- debrief
- withdrawal from investigation
This is an ethic in research that lets participants know what they are agreeing to. It documents their agreement to take part.
informed consent
This is an ethic in research wherein investigators commit to accuracy, intellectual honesty, and truthfulness in the conduct and reporting of studies.
integrity
This is an ethic in research that is marked by protective handling of information revealed in a relationship of trust and with the expectation that it will not be divulged to anyone without permission.
confidentiality
This is an ethic in research stating that researchers must not cause distress to the participants. If there is any need to cause distress, the risk of harm must not be greater than in ordinary life.
-protection: do good, do no harm
This is an ethic in research wherein investigators inform participants of the general idea of what is being studied and why.
debrief
True or False. Debriefing may be conducted weeks after the research is done.
False. Debriefing must occur as soon as the research is done.
This is an ethic in research wherein participants are allowed to withdraw from the research whenever they wish.
withdrawal from investigation
Contents of the informed consent
- statement that it is voluntary
- purpose of the research
- all foreseeable risk and discomfort (physical/psychological)
- procedure
- benefits to society or participants
- length of time
- person to contact for queries
- subject’s right to confidentiality or withdrawal
Stages in Test Development
- Test Conceptualization
- Test Construction
- Test Tryout
- Item Analysis
- Test Revision
This phase in developing a test is where factors like the following are considered: Who will take the test? What is the test designed to measure? How will it be administered? What is the ideal format?
Test Conceptualization
In this stage of test development, the researchers write their own test questionnaire according to what they have conceptualized. This stage also takes into account which scaling method is to be utilized.
Test Construction
This stage of test development is when researchers conduct a pilot study or practice administration of the test.
Test Tryout
2 Scaling Methods
Rating scale
Likert scale
This scale is a grouping of statements on which judgments of the strength of a particular trait are indicated by the test taker.
Rating scale
It is a scaling method that uses an agree-to-disagree or approve-to-disapprove continuum.
Likert scale
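As an illustration (not part of the flashcards), a sketch of how Likert responses might be scored, assuming a hypothetical 5-point scale with one negatively worded, reverse-keyed item:

```python
# Hypothetical 5-point Likert scoring with a reverse-keyed item.
LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def score_item(response, reverse=False):
    raw = LIKERT[response]
    # 6 - raw flips a 1-5 scale (1 becomes 5, 5 becomes 1)
    return 6 - raw if reverse else raw

# One respondent's answers; the second item is negatively worded.
answers = [("agree", False), ("disagree", True), ("strongly agree", False)]
total = sum(score_item(resp, rev) for resp, rev in answers)
print(total)  # 4 + 4 + 5 = 13
```

Reverse-keying keeps a high total score meaning "more of the trait" regardless of item wording.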
Refers to a prototype of test administration.
pilot study
True or False. Pilot study must be taken by people who are not the actual targeted participants of the study.
False. Pilot study examinees must be taken from the same population the researchers are studying.
What is the ideal number of participants that should participate in a test tryout?
30 (if not, then 5-10 will do)
True or False. A good test must have clear instruction for administration, scoring, and interpretation.
True
True or False. A good test must be reliable and valid.
True
It is the consistency of the measure of a concept.
Reliability
It is the judgement whether the test measures what it intends to measure.
Validity
3 Ways to Assess Reliability
Stability
Inter-rater reliability
Internal reliability
Also called test-retest reliability. It is when the test achieves the same results over time.
Stability
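As a sketch (not from the flashcards), test-retest stability is typically assessed by correlating scores from two administrations of the same test; the scores below are hypothetical:

```python
# Test-retest (stability): Pearson correlation between two administrations.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores of 5 examinees at time 1 and time 2
time1 = [10, 14, 9, 13, 12]
time2 = [11, 15, 8, 13, 12]
print(round(pearson_r(time1, time2), 3))
```

A correlation near 1.0 indicates that examinees keep roughly the same relative standing across administrations.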
This reliability assessment answers the question, “Do the researchers agree on which observed phenomena fit the measure?”
Inter-rater reliability
It is the state wherein a measure is consistent within itself. It answers the question, “Do the indicators of a measure yield consistent results?”
Internal reliability
It is the mean of all possible split-half correlations. It is the estimate of internal consistency reliability. Used for tests with nondichotomous items.
Cronbach or Coefficient Alpha
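The definition above can be sketched in code. This is an illustrative implementation of the standard coefficient alpha formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores), applied to hypothetical Likert data:

```python
# Cronbach's (coefficient) alpha for a small hypothetical dataset.
def cronbach_alpha(scores):
    """scores: list of respondents, each a list of k item scores."""
    k = len(scores[0])

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 5-point responses: 4 respondents x 3 items
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
print(round(cronbach_alpha(data), 3))  # 0.892
```

Values closer to 1 indicate that the items yield consistent results, i.e. higher internal consistency.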
What formula is used to correct a split-half correlation so that it estimates the reliability of the full-length test?
Spearman-Brown formula
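The Spearman-Brown (prophecy) formula can be sketched as follows; the half-test correlation of 0.70 is a hypothetical example:

```python
# Spearman-Brown prophecy formula:
# r_full = (n * r) / (1 + (n - 1) * r)
# With n = 2 it steps a split-half correlation up to full-test reliability.
def spearman_brown(r, n=2):
    return (n * r) / (1 + (n - 1) * r)

# If the two halves of a test correlate at 0.70, the estimated
# reliability of the full-length test is:
print(round(spearman_brown(0.70), 3))  # 1.4 / 1.7 = 0.824
```

The correction is needed because a split-half correlation describes a test only half as long as the one actually administered, and shorter tests are less reliable.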
2 Assessments of Validity
content validity
criterion validity
This assesses whether the content is appropriate.
content validity
This assesses validity through its relationship to other measures.
criterion validity
Validity assessment that answers the question:
Does the test appear to test what it aims to test?
face validity
Validity assessment that answers the question:
Does the test relate to underlying theoretical concepts?
construct validity
Validity assessment that answers the question:
Does the test relate to an existing similar measure?
concurrent validity
Validity assessment that answers the question:
Does the test predict later performance on a related criterion?
predictive validity
The final step of test development wherein researchers will decide on what items to retain, improve, or omit. This molds the test into its final form.
Test Revision