PSYC 523 - Statistics Flashcards

1
Q

ANOVA

A

Analysis of variance - a parametric statistical technique used to compare more than two experimental groups at a time (one variable with more than two independent groups). Determines whether there is a significant difference among the groups, but does not reveal where that difference lies. You compare the variation between the groups to the variation within the groups. The null hypothesis is that all group means are equal. This is better than running multiple t-tests because it controls the overall Type I error rate. **Measured using an F-ratio, which is the between-group variance divided by the within-group variance.**
Clinical example: Karen is interested in doing a research project on the amount of meditation time and how it affects anxiety. She devises an ANOVA with three groups (15 minutes, 25 minutes, and 35 minutes) and tests to see the variance of these groups and their scores on an anxiety scale.
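A minimal sketch of a study like Karen's as a one-way ANOVA; the anxiety scores are invented for illustration, and SciPy is assumed to be available.

```python
# One-way ANOVA comparing anxiety scores across three meditation-time
# groups (all numbers hypothetical).
from scipy import stats

g15 = [24, 27, 22, 25, 26]  # anxiety scores, 15-minute group
g25 = [21, 19, 23, 20, 22]  # 25-minute group
g35 = [15, 17, 14, 18, 16]  # 35-minute group

# F = between-group variance / within-group variance
f_stat, p_value = stats.f_oneway(g15, g25, g35)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F only says the groups differ somewhere; a post hoc test (e.g., Tukey's HSD) is needed to locate which groups differ.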

2
Q

Clinical v. statistical significance

A

Clinical significance refers to the meaningfulness of change in a client’s life and everyday functioning/symptom reduction (not determined by a mathematical procedure). Statistical significance is conventionally declared when p < .05, meaning the likelihood that your results are due to chance is less than 5%. Statistical significance indicates that it is unlikely you have made a Type I error (false positive). Statistical significance does not necessarily mean there is clinical significance, and vice versa.
EXAMPLE: Tony comes to therapy suffering from PTSD. He’s mentioned a new form of therapy that you’ve never heard of, so when you research it, the studies show that it has demonstrated clinical significance by reducing symptoms in PTSD patients, but the results were not statistically significant. You discuss this with Tony and decide to go ahead with the treatment.

3
Q

Construct validity

A

Part of: research design
Construct validity is the degree to which a test or study measures the qualities or constructs that it claims to measure. Two parts: Convergent validity - the test correlates highly with other tests that measure the same construct. Divergent (discriminant) validity - the test does not correlate with tests that measure other constructs. To have high construct validity, a test must correlate with measures of the same construct and NOT correlate with measures of other constructs.
Clinical example: Amy comes into your office with symptoms of depression. She says she has taken a test that shows she could have MDD. When you research this test, you see that this test shows construct validity (convergent) with other tests like the Beck Depression Inventory.

4
Q

Content validity

A

Part of: research design
Content validity is the degree to which a measure or study includes all of the facets/aspects of the construct that it is attempting to measure. Content validity cannot be measured empirically but is rather assessed through logical analysis. Validity = accuracy. Threats include construct underrepresentation (the measure doesn’t include all facets) and construct-irrelevant variance (the measure includes irrelevant aspects that influence the score). Important because all symptoms or aspects must be represented to test accurately. Clinical example: A depression scale may lack content validity if it only assesses the affective dimension of depression (emotion-related: decrease in happiness, apathy, hopelessness) but fails to take into account the behavioral dimension (sleeping more or less, eating more or less, energy changes, etc.).

5
Q

Correlation v. causation

A

Part of: research design and statistical analysis
Correlation means that a relationship exists between two variables. Can be positive or negative; coefficient will fall between -1.00 and +1.00. Correlation does not indicate causation - mediating variables may have caused relationship, can be bi-directional. Causation means that a change in one variable affects a change in the other variable. Determined via controlled experiments, when dependent variables can be isolated and extraneous variables controlled. Why: Important to be able to interpret research appropriately and not assume causation when there is none.
Clinical example: A study found that minutes spent exercising correlated with lower depression levels. This study was able to show that depression levels and exercise were correlated, but could not go so far as to claim that one causes the other.
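A hedged sketch of the exercise/depression example: Pearson's r quantifies the relationship but says nothing about its direction of cause. The data points are invented, and SciPy is assumed.

```python
# Pearson correlation between exercise minutes and depression scores
# (hypothetical data).
from scipy import stats

minutes_exercised = [0, 10, 20, 30, 45, 60]
depression_score  = [28, 25, 22, 20, 15, 12]

r, p = stats.pearsonr(minutes_exercised, depression_score)
print(f"r = {r:.2f}")  # strongly negative: more exercise, lower scores
# r alone cannot tell us whether exercise lowers depression,
# depression reduces exercise, or a third variable drives both.
```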

6
Q

Correlational research

A

Research method that examines the relationships between variables. Does not establish causal factors. Produces a correlation coefficient ranging from -1.0 to +1.0 depending on the strength/direction of the relationship between the two variables. Statistical tests include Pearson, Spearman, and point-biserial. PROS: inexpensive, produces a wealth of data, encourages future research, precursor to experiments determining causation. CONS: cannot establish causation or control for confounds.
Clinical example: Shelia’s patient Donna suffers from illness anxiety disorder. She brings Shelia an article claiming that eating out of plastic containers causes cancer. After reading the article, Shelia explains that the study referenced in the article is a correlational study, which only shows that there is a relationship between eating out of plastic containers and cancer, but it does not prove that eating out of plastic containers causes cancer.

7
Q

Cross-sectional design

A

A type of research design that samples different age groups to look at age-group differences across a dependent variable. Often conducted via surveys. Quasi-experimental (participants are selected based on age, not randomly assigned). Advantages include collection of large amounts of data in a short amount of time and low cost. Drawbacks include inability to infer causation or show changes over time.
EXAMPLE: George was looking to study the difference in peer relations and self-esteem in various age groups. He decided to use a cross-sectional design comparing 6-year-olds, 12-year-olds, 18-year-olds, and 25-year-olds.

8
Q

Dependent t-test

A

(Paired-sample t-test) Statistical analysis that compares the means of two related groups to determine whether there is a statistically significant difference between those means. Sometimes called a correlated t-test because the data are correlated. Used when the design meets the requirements for a parametric test and involves matched pairs or repeated measures, with only two conditions of the independent variable. It is called “dependent” because the subjects carry across the manipulation: they take with them personal characteristics that impact the measurement at both points, so the measurements are “dependent” on those characteristics.
Clinical example: A researcher wants to determine the effects of caffeine on memory. They administer a memory test to a group of subjects, have the subjects consume caffeine then administer another memory test. Because they used the same subjects, this is a repeated measures experiment that requires a dependent t-test during statistical analysis.
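The caffeine/memory example can be sketched as a paired-sample t-test; the scores are invented, and SciPy is assumed.

```python
# Paired (dependent) t-test: the same subjects are measured before
# and after consuming caffeine (hypothetical memory scores).
from scipy import stats

before = [12, 15, 11, 14, 13, 16]  # memory scores pre-caffeine
after  = [14, 17, 13, 15, 16, 18]  # same subjects, post-caffeine

t, p = stats.ttest_rel(after, before)
print(f"t = {t:.2f}, p = {p:.4f}")
```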

9
Q

Descriptive v. inferential

A

Descriptive statistics are used simply to describe and summarize the sample or population; they include measures of central tendency and variability (mean, standard deviation) and can be used with any type of data (experimental and non-experimental). Inferential statistics allow inferences to be made from the sample to the population. The sample must accurately reflect the population (hence the importance of random sampling). Inferential techniques include hypothesis testing and regression analysis, and they can support causal inference when applied to experimental data. The statistical results incorporate the uncertainty that is inherent in using a sample to understand an entire population. EXAMPLE: A researcher conducts a study examining the rates of test anxiety in Ivy League students. This is a descriptive study because it is concerned with describing a specific population. Because the sample cannot be generalized to represent all college students, it is not an inferential study.

10
Q

Double-blind study

A

A type of experimental design in which both the participants and the researchers are unaware of who is in the experimental condition and who is in the placebo condition. (In contrast to a single-blind study, where only the participants are unaware of who is in the experimental condition.) Double-blind studies eliminate the possibility that the researcher may somehow communicate (knowingly or unknowingly) to a participant which condition they are in, thereby contaminating the results. Example: A study testing the efficacy of a new SSRI for anxiety uses a double-blind design. Neither the experimenter nor the participants are aware of who is in the treatment group and who is receiving a placebo. This setup ensures that the experimenters do not make subtle gestures accidentally signaling who is receiving the drug and who is not, and that experimenter expectations cannot affect the study’s outcome.

11
Q

Ecological validity

A

Extent to which the experimental situation approximates the real-life situation being studied; the applicability of a study’s findings to the real world. Experiments high in ecological validity tend to be lower in control (and thus reliability) because variables are harder to control in real-world settings, but their results are more generalizable. A type of external validity. EXAMPLE: A researcher wants to study the effects of alcohol on sociability, so he administers beer to a group of subjects and has them interact with each other. To increase the study’s ecological validity, he decides to carry it out in an actual bar.

12
Q

Effect size

A

Part of: statistical analysis
A measure of the strength of a significant relationship; the proportion of variance accounted for. Indicates whether findings are weak, moderate, or strong. Also called shared variance or the coefficient of determination. Why: Quantifies the effectiveness of a particular intervention relative to some comparison; commonly used in meta-analyses. Tells us more about the meaningfulness of our statistics: how much of the variance in the outcome is explained by the difference in the variables. Used alongside the p-value. A common measure is Cohen’s d.

.75 to 1.00: substantial
.50 to .74: moderate
.25 to .49: weak
Below .25: not meaningful

Example: A researcher conducts a correlational research study on the relationship between caffeine and anxiety ratings. The study produces a correlation coefficient of 0.8, which is considered a large effect size. The effect size reflects a strong relationship between caffeine and anxiety.
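Cohen's d, mentioned on this card, can be computed by hand as the standardized mean difference between two groups. This is a minimal sketch with invented group scores.

```python
# Cohen's d: mean difference divided by the pooled standard deviation
# (hypothetical treatment and control scores).
import math

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

treatment = [20, 22, 19, 23, 21]
control   = [15, 16, 14, 17, 15]
print(round(cohens_d(treatment, control), 2))
```

By Cohen's widely used benchmarks for d, 0.2 is small, 0.5 medium, and 0.8 large.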

13
Q

Experimental research

A

Part of: research design What: An independent variable is manipulated in order to see what effect it will have on a dependent variable. Researchers try to control for any other variables (confounds) that may affect the dependent variable(s). Establishes causation rather than mere correlation, so it provides stronger evidence. Example: A researcher conducts an experimental research study to examine the relationship between caffeine intake and anxiety ratings. The study administers various levels of caffeine (the independent variable) to the no-, low-, and high-caffeine groups. The participants are then asked to report their anxiety levels (the dependent variable). Those who had more caffeine reported feeling more anxious.

14
Q

Hypothesis

A

Part of: research What: a formally stated prediction about the characteristics or appearance of variables, or the relationship between variables, that acts as a working template for a particular research study and can be tested for its accuracy. Essential to the scientific method. Hypotheses help to focus the research and bring it to a meaningful conclusion. Without hypotheses, it is impossible to test theories. EXAMPLE: A famous hypothesis in social psychology was generated from a news story, when a woman in New York City was murdered in full view of dozens of onlookers. Psychologists John Darley and Bibb Latané developed a hypothesis about the relationship between helping behavior and the number of bystanders present, and that hypothesis was subsequently supported by research. It is now known as the bystander effect.

15
Q

Independent t-test

A

Statistical analysis that compares the means of two independent groups (different groups of people). Used when scores meet requirements of parametric test, when there are independent samples, and when there are only two conditions in the independent variable (groups). Dependent variable will be interval or ratio.
Determines if there is a statistical difference between the two groups’ means. We make the assumption that if randomly selected from the same population, the groups will mimic each other; the null hypothesis is no difference between the two groups.
EXAMPLE: Tom is a student and tells his therapist that he found a study comparing test scores from students who listened to music they enjoyed prior to their exam or listened to Mozart, showing that those who listened to their favorite music did better.
Another example: comparing the memory of those who drink alcohol with that of nondrinkers.
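Tom's music study can be sketched as an independent-samples t-test; the test scores are invented, and SciPy is assumed.

```python
# Independent t-test: two different groups of students, so the
# samples are independent (hypothetical exam scores).
from scipy import stats

favorite_music = [82, 88, 85, 90, 87, 84]  # listened to music they enjoyed
mozart         = [78, 80, 75, 82, 79, 77]  # listened to Mozart

t, p = stats.ttest_ind(favorite_music, mozart)
print(f"t = {t:.2f}, p = {p:.4f}")
```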

16
Q

Internal consistency

A

Part of: research design What: A type of reliability; the extent to which different items on a test measure the same ability or trait (testing once). Measures whether several items that propose to measure the same general construct produce similar scores and are free from error. Usually measured with Cronbach’s alpha, which reflects all possible split-half configurations (values ranging from 0 to 1); also measured with split-half reliability and the KR-20.
EXAMPLE: You are doing research on a measure that will assess for bipolar disorder. You want to make sure your test questions all have internal consistency, i.e., that they produce consistent results and measure the same construct. You test using Cronbach’s alpha and find that your assessment does indeed have high internal consistency.
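Cronbach's alpha has a short closed form: k/(k-1) times (1 minus the ratio of summed item variances to total-score variance). A minimal sketch with a hypothetical 4-item scale (rows = respondents, columns = items); NumPy is assumed.

```python
# Cronbach's alpha for a small hypothetical item-response matrix.
import numpy as np

def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = [[4, 5, 4, 5],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [1, 2, 1, 1]]
alpha = cronbach_alpha(scores)
print(round(alpha, 2))  # values near 1 indicate high internal consistency
```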

17
Q

Internal validity

A

Part of: research design What: The extent to which the observed relationship between variables in a study reflects the actual relationship between the variables; the ability to draw inferences from results based on the level of control exhibited in the study. Dictated by how well controlled your study is. Controlling for confounding variables increases internal validity, as do random assignment of participants and a large sample size. Threats include attrition, test bias, historical events, and confounding variables. (Note: content, construct, face, and criterion are types of test validity, a related but distinct concept.)
EXAMPLE: Researchers investigated a new treatment for depression and wanted to be sure they had internal validity. They set strict rules for the implementation of the treatment and made sure their sample size was large.

18
Q

Interrater reliability

A

Part of: research design What: A type of reliability that measures the agreement level between independent raters. Useful with measures that are more subjective, accounting for human error in the form of distractibility, misinterpretation, or simply differences of opinion. An ethogram can serve as the key for observed behavior, and agreement is quantified with the kappa statistic. EXAMPLE: Three graduate students are performing a naturalistic observation study for a class that examines violent video games and behavior in a group of 9-year-old boys. The students rated the behavior on a scale of 1 (not aggressive) to 5 (very aggressive). However, the ratings were not consistent between the observers. The study lacked inter-rater reliability.

19
Q

Measures of central tendency

A

Part of: statistical analysis What: The tendency of the data to cluster around the middle of the values on X; provides a statistical description of the center of the distribution. Helps to summarize the main features of a data set and identify the score around which most scores fall. Three main measures are used: the mean, the median, and the mode. The mean is the arithmetic average of all scores within a data set. The mode is the most frequently occurring score. The median is the point that separates the distribution into two equal halves. The median and mode are the most resilient to outliers. Makes results easier to compare to one another.
EXAMPLE: A researcher is studying the frequency of binge eating in a group of girls suffering from binge eating disorder. To better understand the data that was gathered, they start by calculating the measures of central tendency: the most frequently occurring number of episodes in the group, the average number of episodes, and the number of episodes in the middle of the set. In other words, the mode, the median, and the mean.
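The binge-eating example can be sketched with Python's standard `statistics` module; the episode counts are invented.

```python
# Mean, median, and mode of hypothetical weekly binge-episode counts.
import statistics

episodes = [3, 5, 4, 5, 6, 5, 2, 4, 5, 3]

print(statistics.mean(episodes))    # arithmetic average
print(statistics.median(episodes))  # middle score of the sorted data
print(statistics.mode(episodes))    # most frequently occurring score
```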

20
Q

Measures of variability

A

In statistics, measures of variability describe how the scores in a distribution spread around the central tendency. Three primary measures: range, variance, and standard deviation. The range is obtained by taking the two most extreme scores and subtracting the lowest from the highest. The variance is the average squared deviation around the mean. The standard deviation is the square root of the variance and is highly useful in describing variability. Why: Helps determine which statistical analyses you can run on a data set.
EXAMPLE: A researcher is studying the frequency of binge eating in a group of girls suffering from binge eating disorder. After calculating the measures of central tendency, they decide that they want to know more about the distribution of the number of episodes, so they calculate the measures of variability: the range, variance, and standard deviation.
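Continuing the same hypothetical episode counts, the three measures of variability can be computed directly (the `statistics` module uses the sample variance, dividing by n - 1):

```python
# Range, sample variance, and sample standard deviation
# of hypothetical weekly binge-episode counts.
import statistics

episodes = [3, 5, 4, 5, 6, 5, 2, 4, 5, 3]

value_range = max(episodes) - min(episodes)  # highest minus lowest
variance = statistics.variance(episodes)     # average squared deviation
std_dev = statistics.stdev(episodes)         # square root of the variance

print(value_range, round(variance, 2), round(std_dev, 2))
```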

21
Q

Nominal/ordinal/interval/ratio measurements

A

These are the four levels of measurement seen in statistics. Nominal data are categorical: dichotomous with only two levels (such as male and female) or multi-category (such as Republican, Democrat, Independent). Ordinal data indicate order only (1st born, 2nd born); the intervals between ranks are not necessarily equal. Interval data are true score data where you know the score a person made and can tell the actual distance between individuals based on their respective scores, but the measure used to generate the score has no true zero (temperature in F or C, SAT scores). Ratio data are interval data with a true zero (age, height, weight, speed).
EXAMPLE: A researcher is creating a questionnaire to measure depression. They include nominal scale questions (“what is your gender?”) ordinal scale questions (“rank your mood today from 1-very unhappy to 5-very happy”) and ratio scale questions (“how many hours of sleep do you get on average?”)

22
Q

Normal curve

A

Part of: statistics A normal curve is a normal distribution, graphically represented by a bell-shaped curve: a frequency distribution in which most occurrences take place in the middle and taper off on either side. All measures of central tendency fall at the highest point of the curve. Symmetrical, with the extremes at the tails. Divisible into standard deviations. Important for parametric statistics.
EXAMPLE: A researcher is developing a new intelligence test. After obtaining the results, they found that the scores fell along a normal curve: most participants scored in the middle range with very few obtaining either the highest or lowest scores (scores were normally distributed).
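The "divisible into deviations" idea is the familiar 68-95-99.7 rule, which can be checked numerically (SciPy assumed):

```python
# Proportion of scores within 1, 2, and 3 standard deviations of the
# mean under a normal curve.
from scipy import stats

within = {k: stats.norm.cdf(k) - stats.norm.cdf(-k) for k in (1, 2, 3)}
for k, prop in within.items():
    print(f"within {k} SD: {prop:.4f}")
```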

23
Q

Probability

A

A mathematical statement indicating the likelihood that something will happen when a particular population is randomly sampled, symbolized by p or a percentage. The higher the p-value, the more likely it is that the phenomenon or event happened by chance. Probability is based on hard data (unlike chance); p falls between 0 and 1. A p-value of less than .05 is considered statistically significant (less than a 5% chance of a Type I error). EXAMPLE: Researchers create a study comparing the efficacy of CBT versus ACT for depression. The results show a p-value of < 0.05, which means the results are statistically significant.

24
Q

Parametric v. nonparametric statistical analyses

A

Parametric statistical analyses: inferential procedures that require certain assumptions about the distribution of scores. Usually used with scores most appropriately described by the mean. Based on symmetrical distributions (the normal bell curve). Greater statistical power and more likely to detect statistical significance than nonparametric analyses. Nonparametric statistical analyses involve inferential procedures that do not require stringent assumptions about the parameters of the raw score population represented by the sample data. Usually used with scores most appropriately described by the median or the mode. Nonparametric data have skewed distributions (not a normal curve). EXAMPLE: Researchers set up a study to determine if there is a correlation between hours of sleep per night and ratings of happiness. Because they used a very small sample, they cannot assume the data are symmetrically distributed and therefore must use a nonparametric test.

25
Q

Quasi-experimental research

A

Research design in which participants cannot be randomly assigned to conditions, usually because the variable itself might be harmful or cannot ethically be imposed (e.g., depression, smoking); as a result, the design cannot fully control threats to internal or external validity. EXAMPLE: Researchers want to conduct a study examining how opioid addiction affects depression. Because they cannot ethically assign the condition of opioid addiction to their participants, they must place the participants in groups according to whether they are already addicted or not. Opioid addiction is a quasi-experimental variable, qualifying this study as quasi-experimental research.

26
Q

Random sampling

A

In research design, the process of selecting a sample randomly from the population to better represent the entire group as a whole; that is, all members of the population being studied have an equal chance of being chosen/sampled. The mean of a random sample is a good estimate of the population mean. Helps control for confounding variables. EXAMPLE: A researcher is doing an experiment on college students and must select a sample of students from a larger population. To ensure that they are not biased in their selection of students, they assign each student a number and then randomly draw numbers to create the sample.

27
Q

Regression

A

Regression = prediction based on correlated data. Correlation tells us whether a relationship exists; regression allows us to predict based on that relationship by identifying the line of best fit. A statistical technique developed by Sir Francis Galton. If two variables are significantly correlated, then we should be able to predict one from the other. Simple linear regression predicts one variable from a single predictor; multiple regression is the same idea but utilizes multiple predictor variables. The strength of the relationship determines the amount of error in making predictions; a stronger correlation means better prediction. Produces a line of best fit: a straight line that best matches the data and can be used to predict Y given a known X.
EXAMPLE: A developmental psychologist performed a study on aggressive behavior in boys and hormone levels. Researchers performed a regression analysis on the data. Their results showed that the severity and frequency of the boys’ aggression could be accurately predicted based on the levels of testosterone.
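The aggression example can be sketched with SciPy's `linregress`; the hormone levels and aggression scores are invented.

```python
# Simple linear regression: predict aggression from testosterone
# (hypothetical data).
from scipy import stats

testosterone = [2.1, 3.0, 3.8, 4.5, 5.2, 6.0]
aggression   = [10, 14, 17, 21, 24, 28]

res = stats.linregress(testosterone, aggression)

def predict(x):
    # Line of best fit: Y' = slope * X + intercept
    return res.slope * x + res.intercept

print(f"r = {res.rvalue:.3f}, predicted aggression at 4.0: {predict(4.0):.1f}")
```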

28
Q

Sample v. population

A

Part of: research design. A sample is a relatively small subset of the population that is selected to represent the population in a study; the sample must be representative of the population being studied. A population is all members of a group; the larger group of individuals from which a sample is selected. All members of the population should have an equal chance of being selected for the sample in a randomized study in order to control for confounding variables and bias. EXAMPLE: Researchers want to conduct a study investigating how opioid addiction affects depression. As it would be nearly impossible to study every single individual with an opioid addiction, they select a sample of individuals that closely represents the whole population. To ensure that the sample is representative, they compare the sample and population means.

29
Q

Scientific methodology

A

Process of objectively establishing facts through testing and experimentation. 4 Step Process: 1. conceptualize process/problem to be studied 2. collect relevant data through research 3. analyze the data 4. draw conclusions based on analyses
APA: A set of procedures, guidelines, assumptions, and attitudes required for the organized and systematic collection, interpretation, and verification of data and the discovery of reproducible evidence, enabling laws and principles to be stated or modified.
Data should be empirical (measurable), objective, systematically gathered, controlled, and verifiable by others (study duplication); all human measures subject to error. Researchers should be uncertain, open-minded, skeptical, cautious, and ethical. EXAMPLE: A researcher wants to understand the relationship between caffeine and sociability. First, they form a hypothesis that caffeine consumption increases sociability. Next, they conduct an experiment and collect the relevant data. Then they analyze their results. Finally, they draw the conclusion that caffeine increases sociability, based on their results.

30
Q

Standard error of estimate

A

Part of: statistical analysis. What: In regression analysis, this is a measure of the accuracy of predictions made: how much the data points are spread around the regression line (the amount that actual Y scores differ from predicted Y scores). The smaller the value of the standard error of estimate, the better the fit of the regression model to the data. Also referred to as the standard error of the residuals. S, the standard error of estimate, tells us whether a model is precise enough to use for prediction. EXAMPLE: A researcher wants to find out if there is a relationship between social media usage and depression. They collect data and find that there is a positive relationship between the two variables. Next, they calculate the regression line. Then, wanting to know how accurate predictions made with the regression equation are, they calculate the standard error of estimate.
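A minimal sketch of the social-media example: fit a regression line, then compute the standard error of estimate as the spread of residuals around it. Data are invented; SciPy is assumed.

```python
# Standard error of estimate: sqrt(sum of squared residuals / (n - 2)).
import math
from scipy import stats

hours = [1, 2, 3, 4, 5, 6]          # hypothetical daily social media hours
dep   = [8, 11, 15, 16, 20, 24]     # hypothetical depression scores

res = stats.linregress(hours, dep)
predicted = [res.slope * x + res.intercept for x in hours]
residuals = [y - yhat for y, yhat in zip(dep, predicted)]

n = len(hours)
see = math.sqrt(sum(r ** 2 for r in residuals) / (n - 2))
print(round(see, 2))  # smaller values mean a tighter fit to the line
```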

31
Q

Standard error of measurement

A

Part of: statistical analysis What: A common tool in research and standardized testing that provides an estimate of how much an individual’s score would be expected to change on re-testing with the same or an equivalent form of a test; how much the observed scores are spread around the true score. Averaging scores over an infinite number of testings would yield an estimate of the true ability or knowledge (T, the true score), and the standard deviation of those scores is the SEM. The smaller the SEM, the more precise the measurement capacity of the instrument. Creates a confidence band within which a person’s true score would be expected to fall. Has an inverse relationship with the reliability coefficient (high SEM = low reliability). EXAMPLE: A researcher develops a test to measure depression, then administers it to a sample. They want to analyze the data that they gathered using statistics. They calculate the SEM, and it turns out to be low, which indicates that the measurement is fairly precise. They then decide to carry out further statistical analysis.
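The inverse relationship with reliability has a standard formula: SEM = SD x sqrt(1 - reliability). A small sketch with hypothetical IQ-style numbers:

```python
# SEM from a test's standard deviation and reliability coefficient
# (hypothetical values).
import math

sd = 15             # standard deviation of test scores
reliability = 0.91  # reliability coefficient of the test

sem = sd * math.sqrt(1 - reliability)
print(round(sem, 2))  # -> 4.5

# 68% confidence band around an observed score of 100:
low, high = 100 - sem, 100 + sem
```

Note how a perfectly reliable test (reliability = 1) would give SEM = 0: no expected change on re-testing.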

32
Q

Standard error of the difference (2 sample t-test)

A

Part of: statistical analysis What: The estimated standard deviation of the differences between the means of two independent samples; the estimate of error between the two independent groups. Takes into account group variance, group means, and group size. (The larger the sample size, the smaller the SED; a high SED means the samples are not representative of the population.) A two-sample t-test compares the means of two samples to see if they are equal or not, allowing us to evaluate the difference between the two groups. Example: A researcher conducts a study on how caffeine affects test scores. They use a t-test to compare the group means. Because they had a large sample size, the standard error of the difference was low, and the samples were therefore representative of the population.

33
Q

Standard error of the mean (single sample z-test)

A

Part of: statistical analysis What: The standard error of the mean is the average of the deviations of sample means around the population mean. Used to compare a random sample back to the population when the population mean and standard deviation are known (as opposed to a single-sample t-test, wherein we don’t know the population standard deviation). The comparison can be done before experimental manipulation to make sure the sample is representative, or it can be done after manipulation. EXAMPLE: A researcher creates a new test to measure intelligence, which they test on a random sample. Because they know the mean IQ and the SD of the population, they run a single-sample z-test using the standard error of the mean. This confirms that their sample is indeed representative of the population.
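A single-sample z-test can be computed directly from the formula z = (sample mean - population mean) / (population SD / sqrt(n)). The sample values below are invented; SciPy is assumed for the normal CDF.

```python
# Single-sample z-test with IQ-style population parameters
# (hypothetical sample mean and size).
import math
from scipy import stats

pop_mean, pop_sd = 100, 15
sample_mean, n = 104, 36

sem = pop_sd / math.sqrt(n)            # standard error of the mean
z = (sample_mean - pop_mean) / sem
p = 2 * (1 - stats.norm.cdf(abs(z)))   # two-tailed p-value
print(f"z = {z:.2f}, p = {p:.4f}")
```

Here p > .05, so this hypothetical sample mean is consistent with the population mean.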

34
Q

Standard error of the mean, estimated (single sample t-test)

A

Part of: research and statistical analysis What: The standard error of the mean is the average of the deviations of sample means around the population mean. The estimated standard error of the mean is the basis of a single-sample t-test. Single-sample t-tests compare a single random sample back to the population when the population mean is known but the population standard deviation is unknown, so the sample standard deviation is used to estimate the population standard deviation. (In contrast to a single-sample z-test, which is used when both the population mean and the population standard deviation are known.) The comparison can be done before experimental manipulation to make sure the sample is representative, or it can be done after manipulation. Example: A researcher creates a new test to measure intelligence, which they test on a random sample. Because they know the population mean but not the population standard deviation, they use the sample standard deviation to make an estimate. They run a single-sample t-test by calculating the estimated standard error of the mean, which compares the random sample back to the population.
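The same comparison, sketched with SciPy's one-sample t-test: only the population mean is supplied, and the sample's own SD does the estimating. The scores are invented.

```python
# Single-sample t-test: population mean known (100), population SD not.
from scipy import stats

scores = [103, 98, 110, 105, 99, 107, 102, 101]  # hypothetical sample
t, p = stats.ttest_1samp(scores, popmean=100)
print(f"t = {t:.2f}, p = {p:.4f}")
```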

35
Q

Standardization sample

A

Part of: statistics and test development What: A large sample of test takers who represent the population for which the test is intended; also called the norm group. The results are used to establish the normal distribution and norms. Norms are not standards of performance but serve as a frame of reference for test score interpretation. Can be an issue if standardization samples do not include culturally diverse clients. In general, increasing the standardization sample size adds to psychometric soundness. EXAMPLE: You’re a child psychologist who administers IQ tests to all of your patients. You compare each score with those of the standardization sample in order to determine whether a patient’s score is above or below average. Issues can arise for culturally diverse clients, though, as they were not always included in the standardization sample.

36
Q

Statistical significance

A

Part of: statistical analysis and hypothesis testing What: the likelihood that results did not occur by chance. The criterion of significance (p-value, or probability) is typically 0.05, meaning that you are 95% confident the obtained results are not due to chance or error (Type I: false positive; Type II: false negative). When significance is established, the null hypothesis is rejected. Important to distinguish statistical from clinical significance. EXAMPLE: A new drug for anxiety is being tested. The p-value was determined to be < 0.05, which means the test was statistically significant and the chance of a Type I error is less than 5%.

37
Q

Type I and Type II error

A

Two types of errors seen in research. Random sampling and increased sample size help avoid these errors. A Type I error occurs when a researcher rejects a null hypothesis that is actually true (false positive): detecting an effect that is not present. How to minimize/avoid Type I error: make the significance criterion more stringent, e.g., move it from 0.05 to 0.01. A Type II error occurs when a researcher fails to reject a null hypothesis that is actually false: failing to observe a difference when in truth there is one, so researchers incorrectly conclude that the independent variable(s) had no effect on the dependent variable(s). Aka “false negative.” How to minimize/avoid Type II error: increase the sample size, or relax the significance criterion (e.g., from 0.01 to 0.05) at the cost of greater Type I risk. EXAMPLE: A researcher is testing a new drug for PTSD. After reviewing the results, they concluded that the drug effectively reduced symptoms; however, the conclusion was wrong and the drug had no impact. This is an example of a Type I error because the researchers rejected a true null hypothesis of no difference between the treatment and control groups.