Data Preprocessing Flashcards
7 DATA PREPROCESSING TASKS / METHODS
- Aggregation
- Sampling
- Dimensionality Reduction
- Feature Subset Selection
- Feature Creation
- Discretization and Binarization
- Attribute Transformation
combining two or more attributes into a single attribute.
Aggregation
3 PURPOSES OF AGGREGATION
- Data Reduction
- Change of Scale
- More Stable Data
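A minimal aggregation sketch with pandas, using a made-up daily-sales table rolled up to monthly totals (both a data reduction and a change of scale):

```python
import pandas as pd

# Hypothetical daily sales records (one row per store per day).
daily = pd.DataFrame({
    "store": ["A", "A", "B", "B"],
    "month": ["2024-01", "2024-01", "2024-01", "2024-02"],
    "sales": [100.0, 150.0, 80.0, 120.0],
})

# Aggregation: combine daily rows into one row per (store, month).
monthly = daily.groupby(["store", "month"], as_index=False)["sales"].sum()
print(monthly)
```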
is the main technique employed for data selection.
Sampling
4 TYPES OF SAMPLING
- Sampling without replacement
- Sampling with replacement
- Simple Random Sampling
- Stratified Sampling
a type of sampling where, as each item is selected, it is removed from the population.
Sampling without replacement
a type of sampling where objects are not removed from the population as they are selected, so the same object can be picked more than once.
Sampling with replacement
a type of sampling where there is an equal probability of selecting any particular items.
Simple Random Sampling
a type of sampling that splits the data into several partitions, then draws random samples from each partition.
Stratified Sampling
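A minimal sketch of the sampling types above, using pandas on a small made-up DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"x": range(10), "label": ["a"] * 5 + ["b"] * 5})

# Simple random sampling without replacement: a selected row leaves the pool.
no_replacement = df.sample(n=4, replace=False, random_state=0)

# Sampling with replacement: the same row may be drawn more than once.
with_replacement = df.sample(n=4, replace=True, random_state=0)

# Stratified sampling: partition by label, then draw from each partition.
stratified = df.groupby("label").sample(n=2, random_state=0)
```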
is the number of data objects drawn into a sample.
SAMPLE SIZE
2 APPROACHES TO SAMPLE SIZE DETERMINATION
- Statistics
- Machine Learning
a determination based on the desired confidence interval for a parameter estimate, or the desired statistical power of a test.
Statistics
a determination guided by the idea that more is often better, typically judged by cross-validated accuracy.
Machine Learning
when dimensionality increases, the size of the data space grows exponentially and the data become increasingly sparse.
Curse of Dimensionality
its purpose is to avoid the curse of dimensionality.
Dimensionality Reduction
Reduces the amount of time and memory required by data mining algorithms.
Dimensionality Reduction
3 TECHNIQUES FOR DIMENSION REDUCTION
- Principal Component Analysis
- ISOMAP
- Low Dimensional Embedding
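A minimal Principal Component Analysis sketch with scikit-learn, using random data as a stand-in:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))         # 100 objects with 4 attributes

# Project onto the first two principal components.
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)      # shape (100, 2)
print(pca.explained_variance_ratio_)  # variance captured per component
```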
is another way to reduce the dimensionality of data.
Feature Subset Selection
2 TYPES OF FEATURES
- Redundant Features
- Irrelevant Features
is a type of feature that duplicates much or all of the information contained in one or more other attributes.
Redundant Features
is a type of feature that contains no information useful for the data mining task at hand.
Irrelevant Features
4 APPROACHES IN FEATURE SUBSET SELECTION
- Embedded Approach
- Filter Approach
- Brute-force Approach
- Wrapper Approach
feature selection occurs naturally as part of the data mining algorithm.
Embedded Approach
features are selected before the data mining algorithm is run.
Filter Approach
try all possible feature subsets as input to the data mining algorithm and choose the best.
Brute-force Approach
use the data mining algorithm as a black box to find the best subsets of attributes.
Wrapper Approach
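A sketch of the filter approach with scikit-learn: features are scored with a chi-squared test before any data mining algorithm runs (the scoring function and k are just one choice among many):

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)

# Keep the 2 best features according to a chi-squared score.
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)
print(selector.get_support())  # boolean mask of the retained features
```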
creates new attributes that can capture the important information in a data set much more efficiently than the original attributes.
Feature Creation
3 GENERAL METHODOLOGIES FOR FEATURE CREATION
- Feature Extraction
- Feature Construction / Feature Engineering
- Mapping Data to New Space
is a methodology in feature creation that is domain-specific.
Feature Extraction
is a methodology in feature creation that combines existing features.
Feature Construction / Feature Engineering
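A minimal feature construction sketch, assuming hypothetical mass and volume attributes that are combined into a more informative density attribute:

```python
import pandas as pd

objects = pd.DataFrame({"mass": [10.0, 20.0], "volume": [2.0, 8.0]})

# Construct a new attribute by combining existing ones.
objects["density"] = objects["mass"] / objects["volume"]
print(objects)
```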
2 WAYS OF MAPPING DATA TO NEW SPACE
- Fourier Transform
- Wavelet Transform
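A minimal Fourier transform sketch with numpy, mapping a made-up noisy signal to frequency space, where its periodic structure stands out as a peak:

```python
import numpy as np

# One second of a 7 Hz sine wave sampled 256 times, plus noise.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 7 * t) + 0.5 * rng.normal(size=t.size)

# Map to the new (frequency) space and find the dominant frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
print(freqs[np.argmax(spectrum[1:]) + 1])  # ~7.0 Hz
```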
a function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values.
Attribute Transformation
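Minimal numpy sketches of common attribute transformations (a simple function, standardization, and min-max normalization):

```python
import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])

log_x = np.log10(x)                            # simple function: x -> log10(x)
z_scores = (x - x.mean()) / x.std()            # standardization
min_max = (x - x.min()) / (x.max() - x.min())  # normalization to [0, 1]
```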
is a numerical measure of how alike two data objects are.
Similarity
is a numerical measure of how different two data objects are.
Dissimilarity
refers to a similarity or dissimilarity.
Proximity
6 MEASURES OF SIMILARITY OR DISSIMILARITY
- Euclidean Distance
- Minkowski Distance
- Mahalanobis Distance
- Cosine Similarity
- Correlation
- Rank Correlation
is a generalization of Euclidean distance.
Minkowski Distance
measures the linear relationship between two variables.
Correlation
measures the degree of similarity between two rankings.
Rank Correlation
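A sketch computing most of the listed measures on two made-up vectors with scipy and numpy (Mahalanobis distance is omitted, since it also requires the data's inverse covariance matrix):

```python
import numpy as np
from scipy.spatial.distance import cosine, euclidean, minkowski
from scipy.stats import pearsonr, spearmanr

u = np.array([1.0, 2.0, 3.0, 4.0])
v = np.array([2.0, 3.0, 5.0, 7.0])

print(euclidean(u, v))       # Euclidean distance
print(minkowski(u, v, p=3))  # Minkowski distance with r = 3
print(1 - cosine(u, v))      # cosine similarity (scipy returns the distance)
print(pearsonr(u, v)[0])     # correlation
print(spearmanr(u, v)[0])    # rank correlation
```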
describes the relative likelihood of a random variable taking a given value.
Probability Density (Function)
is a non-parametric way to estimate the probability density function of a random variable.
Kernel Density Estimation
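A minimal kernel density estimation sketch with scipy's gaussian_kde on a made-up sample:

```python
import numpy as np
from scipy.stats import gaussian_kde

sample = np.random.default_rng(0).normal(loc=0.0, scale=1.0, size=500)

# Fit a smooth, non-parametric density estimate to the sample.
kde = gaussian_kde(sample)
print(kde.evaluate([0.0, 1.0, 2.0]))  # estimated density at these points
```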
implies that the simplest approach is to divide the region into a number of rectangular cells of equal volume, and to define density as the number of points each cell contains.
Euclidean Density - Cell-Based
implies that the Euclidean density is the number of points within a specified radius of the point.
Euclidean Density - Center-Based
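A brute-force numpy sketch of center-based Euclidean density, counting the points that fall within a chosen radius of each point:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(size=(50, 2))  # 50 made-up points in the unit square
radius = 0.2

# Pairwise distances, then count neighbors within the radius.
diffs = points[:, None, :] - points[None, :, :]
dists = np.sqrt((diffs ** 2).sum(axis=-1))
density = (dists <= radius).sum(axis=1) - 1  # exclude the point itself
print(density[:5])
```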