PSY 201 Flashcards

1
Q

What is science?

A

Science is the systematic study of the natural world through observation, experimentation, and analysis to understand how things work. It involves forming hypotheses, conducting experiments, gathering evidence, and drawing conclusions based on data.

2
Q

What makes psychological research scientific?

A

Psychological research is considered scientific because it follows the principles of the scientific method, which ensures that the investigation is systematic, objective, and replicable.

3
Q

Explain 5 key characteristics of an ideal scientist

A
  1. PRECISION: Precision involves being accurate and exact in conducting experiments, measuring data, and reporting findings. An ideal scientist avoids vague or ambiguous conclusions and pays close attention to detail to ensure that findings are reliable.
  2. SKEPTICISM: Scientific skepticism is the attitude of questioning and doubting claims, especially those that are not supported by robust evidence. An ideal scientist does not accept findings at face value but critically evaluates the evidence, methodology, and logic behind them. This skepticism helps prevent the acceptance of flawed or unverified ideas and ensures that only well-supported conclusions are adopted.
  3. RELIANCE ON EMPIRICAL EVIDENCE: An ideal scientist bases their conclusions on empirical evidence rather than assumptions or theoretical speculation. This reliance ensures that scientific knowledge is grounded in observable, measurable reality and can be tested and validated.
  4. OPENNESS: Openness refers to a willingness to share findings, methods, and data with the broader scientific community and being open to critique and feedback. An ideal scientist remains transparent about their research and welcomes challenges to their work.
4
Q

4 primary rules that define and guide scientific methods of research

A
  1. EMPIRICISM: Empiricism refers to the reliance on observable, measurable evidence as the foundation for drawing conclusions. Scientific knowledge is based on data gathered from the real world through observation, experimentation, or experience, rather than on abstract reasoning or theoretical speculation.
  2. DETERMINISM: Determinism is the concept that all events are determined by previously existing causes. It suggests that everything is subject to the laws of cause and effect, leaving no room for randomness or free will.
  3. PARSIMONY: In scientific research, parsimony refers to the principle that, when presented with competing hypotheses or theories, the simplest one (i.e., the one with the fewest assumptions) is preferred. This is also known as Occam’s Razor.
  4. TESTABILITY: Testability is the property of a hypothesis or theory that allows it to be tested through experimentation or observation. A testable idea can be supported or refuted by empirical evidence. A theory is considered scientific if it can be tested and potentially falsified.
5
Q

3 main types of scientific investigation

A
  1. DESCRIPTIVE INVESTIGATION: This involves observing and describing characteristics or behaviors of a subject without influencing it. Researchers collect data about a phenomenon to understand what is happening. No manipulation of variables occurs. For example, observing and recording the distribution of different species in an ecosystem.
  2. COMPARATIVE INVESTIGATION: This type of investigation compares two or more groups or conditions to identify differences. Researchers look for correlations or contrasts between the variables being compared, but there’s no control over variables. For example, comparing plant growth in different soil types to see which one fosters better growth.
  3. EXPERIMENTAL INVESTIGATION: This investigation involves manipulating one or more independent variables to observe the effect on a dependent variable, usually under controlled conditions. Researchers actively intervene, establish a control group, and draw conclusions about cause and effect. For example, testing how different amounts of sunlight affect plant growth.
6
Q

What is a theory?

A

A theory is a well-substantiated explanation that organizes facts, principles, and findings to explain a particular phenomenon. It is based on extensive evidence and can generate predictions that can be tested (e.g., the theory of evolution).

7
Q

What is a concept?

A

A concept is an abstract idea or mental representation of something, used to simplify complex phenomena (e.g., “intelligence” or “health”). Concepts are building blocks for theories, providing the terminology and basic ideas that frame research.

8
Q

What is a construct?

A

A construct is a specifically defined concept created for scientific measurement, often used for intangible or abstract attributes (e.g., “self-esteem” or “anxiety”). Constructs are measured indirectly through indicators or scales and play a central role in testing theories.

9
Q

What is a sample?

A

A sample is a subset of individuals, items, or data points selected from a larger population.

10
Q

What is a variable?

A

A variable refers to any factor or characteristic that can change or vary within an experiment. Variables are essential because they help researchers measure relationships, test hypotheses, and draw conclusions about human behavior and mental processes.

11
Q

What is sampling bias?

A

Sampling bias occurs when a sample is collected in a way that is not representative of the population from which it is drawn, leading to skewed or inaccurate results.
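
A minimal Python sketch of sampling bias, using made-up numbers: the population contains two subgroups with different average scores, and a sampling procedure that over-reaches one subgroup produces a mean estimate that drifts away from the true population mean, while a simple random sample stays close to it. All values are hypothetical.

```python
import random
import statistics

random.seed(1)

# Hypothetical population: 70% "group A" (mean score ~50), 30% "group B" (mean score ~70).
population = ([random.gauss(50, 5) for _ in range(7_000)] +
              [random.gauss(70, 5) for _ in range(3_000)])

true_mean = statistics.mean(population)

# Representative sample: every member has the same chance of selection.
random_sample = random.sample(population, 200)

# Biased sample: the procedure mostly reaches group B
# (e.g., a convenience sample drawn where group B congregates).
biased_sample = (random.sample(population[7_000:], 150) +
                 random.sample(population[:7_000], 50))

print(f"Population mean:    {true_mean:.1f}")
print(f"Random sample mean: {statistics.mean(random_sample):.1f}")  # close to the population mean
print(f"Biased sample mean: {statistics.mean(biased_sample):.1f}")  # pulled toward group B
```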

12
Q

What is a sampling frame?

A

A sampling frame is a list or database that includes all the members of a population from which a sample is to be drawn.

13
Q

What is a sampling error?

A

A sampling error is the statistical error that arises when a sample does not truly represent the entire population.

14
Q

What is sample size?

A

Sample size refers to the number of observations or data points collected in a study or survey. It represents a subset of the population that is used to make inferences about the entire population.
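
A small Python sketch, with made-up scores, tying sample size to the previous card's idea of sampling error: the typical gap between a random sample's mean and the population mean tends to shrink as the sample grows.

```python
import random
import statistics

random.seed(42)

# Hypothetical population of 100,000 test scores.
population = [random.gauss(100, 15) for _ in range(100_000)]
population_mean = statistics.mean(population)

# Draw repeated samples of increasing size; the average sampling error
# (|sample mean - population mean|) typically shrinks as n grows.
for n in (10, 100, 1_000, 10_000):
    errors = [abs(statistics.mean(random.sample(population, n)) - population_mean)
              for _ in range(50)]
    print(f"n = {n:>6}: average sampling error = {statistics.mean(errors):.2f}")
```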

15
Q

Types of sampling (Probability sampling)

A
  1. SIMPLE RANDOM SAMPLING: Every member of the population has an equal chance of being selected. For example, drawing names from a hat to select participants for a study.
  2. SYSTEMATIC SAMPLING: Every nth member of the population is selected after a random starting point. For example, selecting every 10th person from a list of customers.
  3. STRATIFIED SAMPLING: The population is divided into subgroups (strata) based on a specific characteristic, and random samples are taken from each stratum. For example, dividing a population into age groups and randomly sampling from each age group.
  4. CLUSTER SAMPLING: The population is divided into clusters, and entire clusters are randomly selected. For example, randomly selecting entire schools from a list and surveying all students in those schools. (A short code sketch of all four methods appears below.)
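
The following is a rough Python sketch of the four probability-sampling methods above, using an invented population of 1,000 people tagged with an age group and a school; the fields, sizes, and numbers are illustrative only.

```python
import random

random.seed(0)

# Hypothetical population: 1,000 people, each tagged with an age group and a school (cluster).
population = [{"id": i,
               "age_group": random.choice(["18-29", "30-49", "50+"]),
               "school": i % 20}          # 20 schools of 50 people each
              for i in range(1_000)]

# 1. Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 100)

# 2. Systematic sampling: every 10th member after a random starting point.
start = random.randrange(10)
systematic = population[start::10]

# 3. Stratified sampling: split into age-group strata, then sample randomly within each stratum.
strata = {}
for person in population:
    strata.setdefault(person["age_group"], []).append(person)
stratified = [p for members in strata.values() for p in random.sample(members, 30)]

# 4. Cluster sampling: randomly pick whole schools and take everyone in them.
chosen_schools = random.sample(range(20), 3)
cluster = [p for p in population if p["school"] in chosen_schools]

print(len(simple), len(systematic), len(stratified), len(cluster))
```
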
16
Q

Types of sampling (Non-probability sampling)

A
  1. ACCIDENTAL OR CONVENIENCE SAMPLING: Samples are taken from a group that is easy to access or contact. For example, surveying people who pass by on a street corner.
  2. PURPOSIVE SAMPLING: The researcher uses their judgment to select participants who are believed to be representative of the population. For example, selecting experts in a field to participate in a specialized study.
  3. QUOTA SAMPLING: The researcher ensures certain characteristics are represented in the sample to meet specific quotas. For example, ensuring that a survey includes 50% males and 50% females, but selecting them based on convenience.
  4. SNOWBALL SAMPLING: Existing study subjects recruit future subjects from among their acquaintances, which is useful in hard-to-reach populations. For example, a study on a rare disease in which current patients help to find other patients to participate.
  5. CAPTIVE SAMPLING: Participants are drawn from a group that is readily available to the researcher and has little practical freedom to decline, such as students in the researcher's own class or institutionalized populations.
17
Q

What is reliability?

A

Reliability in research refers to the consistency and stability of the results obtained from a measurement or assessment tool. If a research instrument or method produces the same results under consistent conditions over multiple trials, it is considered reliable.

18
Q

Methods of establishing the reliability of a measure

A
  1. TEST-RETEST METHOD: Administer the same test to the same group of individuals at two different points in time and then correlate the scores.
  2. INTER-RATER RELIABILITY: Have multiple raters or observers evaluate the same set of items independently and then calculate the degree of agreement or correlation between their ratings.
  3. ALTERNATE/PARALLEL FORM: Develop two equivalent forms of a test that measure the same construct, administer both forms to the same group of individuals, and correlate the scores. For example, creating two different versions of a vocabulary test and giving both versions to the same group of students.
  4. SPLIT-HALF RELIABILITY: Divide the test into two halves (e.g., odd vs. even items) and correlate the scores from each half. (A short code sketch of the correlation-based methods appears below.)
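
A brief Python sketch of the correlation-based reliability checks (test-retest and split-half), using invented scores for eight participants; the pearson_r helper and all data are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical anxiety-scale totals for 8 participants.
time_1 = [12, 18, 25, 9, 30, 22, 15, 27]   # first administration
time_2 = [13, 17, 24, 11, 29, 23, 14, 26]  # same test, two weeks later

# Test-retest reliability: correlate the two administrations.
print(f"Test-retest r = {pearson_r(time_1, time_2):.2f}")

# Split-half reliability: correlate totals from odd vs. even items
# (hypothetical item-level totals for the same 8 participants).
odd_items  = [6, 9, 13, 4, 15, 11, 8, 14]
even_items = [6, 9, 12, 5, 15, 11, 7, 13]
print(f"Split-half r  = {pearson_r(odd_items, even_items):.2f}")
```
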
19
Q

What is validity?

A

Validity is the extent to which a measurement or test accurately reflects or assesses the specific concept it is intended to measure.

20
Q

Types of validity

A

• Content Validity: This checks whether a test covers all the parts of the concept it’s supposed to measure. For example, if you’re testing math skills, content validity ensures the test includes questions on all the key areas (addition, subtraction, multiplication, etc.).

• Construct Validity: This type makes sure that the test truly measures what it’s supposed to measure and not something else. For example, if you’re testing someone’s intelligence, the test should measure intelligence, not something like test-taking ability.

• Criterion-Related Validity: This checks how well the test results correlate with other measures that are supposed to assess the same thing. For example, checking whether scores on a new depression scale agree with clinicians' diagnoses or predict later treatment outcomes.

21
Q

Difference between probability sampling and non-probability sampling

A

The main difference between probability sampling and non-probability sampling lies in how samples are selected from a population.

PROBABILITY SAMPLING

  • Every member of the population has a known, non-zero chance of being selected (an equal chance in the case of simple random sampling).
  • It relies on random selection, which reduces bias and allows for generalization to the larger population.
  • Used in scientific research, surveys, and experiments where accuracy and representativeness are crucial.

NON-PROBABILITY SAMPLING

  • Members' chances of selection are unknown; selection is based on convenience or the researcher's judgment rather than chance.
  • It does not use random selection, making it more prone to bias.
  • Used in exploratory research, qualitative studies, and situations where a representative sample is not essential.
22
Q

What is an experiment?

A

An experiment is a controlled procedure carried out to test a hypothesis, observe a phenomenon, or determine cause-and-effect relationships. It typically involves manipulating one or more variables while keeping others constant to measure their effects. Experiments are commonly used in scientific research to validate theories and make discoveries.
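
A minimal Python sketch of this experimental logic: participants are randomly assigned to a caffeine or placebo condition (the independent variable), a hypothetical effect of roughly 20 ms is simulated, and the group means of the dependent variable (reaction time) are compared. All names and values are invented for illustration.

```python
import random
import statistics

random.seed(7)

# Hypothetical participant pool.
participants = list(range(40))
random.shuffle(participants)

# Random assignment to conditions (independent variable: caffeine vs. placebo).
caffeine_group = participants[:20]
placebo_group = participants[20:]

def reaction_time(condition):
    """Simulated dependent variable: reaction time in ms, with a ~20 ms benefit for caffeine."""
    base = random.gauss(300, 25)
    return base - 20 if condition == "caffeine" else base

caffeine_scores = [reaction_time("caffeine") for _ in caffeine_group]
placebo_scores = [reaction_time("placebo") for _ in placebo_group]

print(f"Mean RT, caffeine group: {statistics.mean(caffeine_scores):.0f} ms")
print(f"Mean RT, placebo group:  {statistics.mean(placebo_scores):.0f} ms")
# The difference between the group means estimates the causal effect of the manipulation.
```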

23
Q

3 main types of experiment

A
  1. Laboratory Experiments – Conducted in a controlled environment where the researcher manipulates the independent variable and controls extraneous variables. Results are highly reliable due to strict control, but they may lack real-world applicability (low ecological validity). For example, testing the effect of caffeine on reaction time in a psychology lab.
  2. Field Experiments – Conducted in a natural setting, such as a school. The researcher still manipulates the independent variable but has less control over extraneous factors. It is more realistic than a lab experiment (higher ecological validity), but the reduced control makes it harder to rule out confounding variables. For example, studying the effect of a new teaching method in a real classroom.
  3. Natural (or Quasi) Experiments – The independent variable is not manipulated by the researcher but occurs naturally, and its effects are observed. For example, studying the impact of a natural disaster on people’s mental health.