Week 5: Data Preparation Flashcards

1
Q

What are the problems with the definition of data quality? (4)

A
  • Unmeasurable: accuracy and completeness are extremely difficult, perhaps impossible, to measure
  • Context independent: no accounting for what is important
  • Incomplete: what about interpretability, accessibility, metadata, analysis, etc.?
  • Vague: the previous definition provides no guidance towards practical improvements of the data
2
Q

How are correlation and covariance related?

A

corr(A,B) = cov(A,B) / (sd(A) * sd(B))
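As a sanity check, the identity can be verified with numpy on toy data (the values below are made up):

```python
import numpy as np

# Toy attributes, just to check the identity.
A = np.array([2.0, 4.0, 6.0, 8.0])
B = np.array([1.0, 3.0, 2.0, 5.0])

cov = np.cov(A, B)[0, 1]                              # sample covariance
corr = cov / (np.std(A, ddof=1) * np.std(B, ddof=1))  # corr = cov / (sd * sd)

# corr now matches numpy's built-in np.corrcoef(A, B)[0, 1]
```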

3
Q

How do you compute the Levenshtein similarity between strings s1 and s2 in Python?

A

lev_sim = sm.levenshtein.Levenshtein()
lev_sim.get_sim_score(s1, s2)
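If py_stringmatching is not installed, the same quantity can be computed from scratch; a minimal sketch of the edit-distance dynamic program, with the similarity normalised by the longer string's length:

```python
def levenshtein_distance(s1, s2):
    """Minimum number of insertions, deletions and substitutions."""
    m, n = len(s1), len(s2)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

def levenshtein_similarity(s1, s2):
    # Normalise to [0, 1]: identical strings score 1.0.
    if not s1 and not s2:
        return 1.0
    return 1.0 - levenshtein_distance(s1, s2) / max(len(s1), len(s2))
```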

4
Q

What are data quality issues? (7)

A
  • Noise
  • Duplicate data
  • Outliers
  • Unreliable sources
  • Inconsistent values
  • Outdated values
  • Missing values
5
Q

How do you normalise data by decimal scaling?

A

Transform the data by moving the decimal point of the values of attribute A:
v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
e.g. if the maximum absolute value of A is 986, divide each value by 1000 (j = 3)
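A minimal sketch of the procedure with toy values:

```python
import numpy as np

def decimal_scale(values):
    # Find the smallest j such that max(|v|) / 10**j < 1,
    # then divide every value by 10**j.
    max_abs = np.max(np.abs(values))
    j = 0
    while max_abs / 10 ** j >= 1:
        j += 1
    return values / 10 ** j, j

v = np.array([986.0, -120.0, 45.0])
scaled, j = decimal_scale(v)
# j == 3, so 986 -> 0.986, -120 -> -0.12, 45 -> 0.045
```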

6
Q

What is the python library for computing similarity measures?

A

from py_stringmatching import similarity_measure as sm

7
Q

What is data validation?

A

checking permitted characters
finding type-mismatched data

8
Q

What are irrelevant attributes?

A

Attributes that contain no information that is useful for the data mining task at hand

9
Q

What are the ways of handling missing data? (3)

A
  • Ignore the tuple: usually done when the class label is missing. Not effective when the percentage of missing values is large
  • Fill in the missing value manually: tedious + infeasible
  • Fill in the missing value automatically (data imputation) with: a global constant e.g. "unknown" or a new class; the attribute mean; the attribute mean for all samples belonging to the same class; or the most probable value, found through regression, inference or a decision tree
10
Q

How do you reduce data with histograms?

A
  • Divide data into buckets and store the average (or sum) for each bucket
  • Partitioning rules: equal-width (equal bucket range) and equal-frequency (equal depth: each bucket contains the same number of data points)
11
Q

What type of discretisation method is binning?

A

An unsupervised, top-down splitting method

12
Q

What are the three types of outlier detection methods?

A
  • Supervised methods: domain experts examine and label a sample of the underlying data and the sample is used for testing and training. Outlier detection modelled as a classification problem
  • Unsupervised methods: assume that normal objects are somewhat clustered. Outliers are expected to occur far away from any of the groups of normal objects
  • Semi-supervised methods: only a small set of the normal or outlier objects are labelled, but most of the data are unlabelled. The labelled normal objects together with unlabelled objects that are close by, can be used to train a model for normal objects
13
Q

What is the Python code to fill NAs in column 1 with the mean values of column 1 grouped by column 2?

A

data["column1"] = data["column1"].fillna(data.groupby("column2")["column1"].transform("mean"))
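A runnable sketch of group-wise mean imputation on a made-up frame (the column names mirror the card):

```python
import numpy as np
import pandas as pd

# Hypothetical frame: one missing value in each column2 group.
data = pd.DataFrame({
    "column1": [1.0, np.nan, 3.0, np.nan],
    "column2": ["a", "a", "b", "b"],
})

# Each NaN in column1 is replaced by the mean of column1
# within its own column2 group ("a" -> 1.0, "b" -> 3.0).
data["column1"] = data["column1"].fillna(
    data.groupby("column2")["column1"].transform("mean")
)
```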

14
Q

What is the python code for removing missing values?

A

data.dropna()

15
Q

What is univariate data?

A

data set involving only one attribute or variable

16
Q

How do you reduce data using clustering?

A

Partition data set into clusters based on similarity and store cluster representation (e.g. centroid and diameter) only

17
Q

How do you normalise data by z-score in python?

A

from sklearn.preprocessing import StandardScaler
StandardScaler().fit_transform(df)

18
Q

What are proximity based methods for outlier detection?

A
  • Assume that an object is an outlier if the nearest neighbours of the object are far away
  • Two types of proximity based methods: distance-based and density-based
19
Q

What is an outlier?

A
  • An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism
  • Outliers are data or model glitches
20
Q

What is data discretisation?

A

dividing the range of a continuous attribute into intervals

21
Q

What is the difference between labelling and scoring approaches for outlier detection?

A

Considering the output of an outlier detection algorithm
Labelling approaches: binary output - data objects are labelled either normal or outlier
Scoring approaches: continuous output - an outlier score is computed for each object, e.g. the probability of it being an outlier

22
Q

What are the steps of CRISP-DM (Cross-Industry Standard Process for Data Mining)? (6)

A

Business understanding
Data understanding
Data preparation
Modelling
Evaluation
Deployment

23
Q

What is mahalanobis distance for outlier detection?

A

Let o* be the mean vector of a multivariate dataset. The Mahalanobis distance from an object o to o* is:
MDist(o, o*) = (o - o*)^T S^(-1) (o - o*), where S is the covariance matrix
Then apply a univariate outlier detection technique, such as the Grubbs test, to the MDist values to detect outliers

24
Q

What is the time complexity of computing pairwise similarity?

A

O(n^2)

25
What is the time complexity of doing pairwise similarity in blocks with k blocks and block size n/k?
O(k(n/k)^2) = O(n^2/k)
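The k-fold saving is easy to see by counting comparisons (hypothetical n and k, purely illustrative):

```python
# Comparison counts for all-pairs matching vs k equal blocks.
def pairs(m):
    # Number of unordered pairs among m records.
    return m * (m - 1) // 2

n, k = 1000, 10
all_pairs = pairs(n)             # O(n^2): 499,500 comparisons
blocked = k * pairs(n // k)      # O(k (n/k)^2): 10 * 4,950 = 49,500
# Blocking cuts the work by roughly a factor of k.
```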
26
What similarity measures can be used for matching features? (6)
  • Difference between numerical values
  • Jaro for comparing names
  • Edit distance for typos
  • Phonetic-based
  • Jaccard for sets
  • Cosine for vectors
27
What is multivariate data?
data set involving two or more attributes or variables
28
What is data reduction?
Obtain a reduced representation of the dataset that is much smaller in volume but yet produces the same (or almost the same) analytical results
29
What are the names of 2 techniques for turning categorical data into numerical data?
label encoding, one-hot encoding
30
What are the three kinds of outliers?
global, contextual, collective
31
What are examples of data quality metrics? (5)
  • Conformance to schema: evaluate constraints on a snapshot
  • Conformance to business rules: evaluate constraints on changes in the database
  • Accuracy: perform inventory (expensive), or use a proxy (track complaints)
  • Glitches in analysis
  • Successful completion of end-to-end process
32
What are collective outliers?
A subset of data objects collectively deviates significantly from the whole data set, even if the individual data objects may not be outliers. Detecting them requires background knowledge of the relationship among the data objects, such as a distance or similarity measure on objects.
33
What is the definition of data quality? (7 parts)
  • Accuracy: the data was recorded correctly
  • Completeness: all relevant data was recorded
  • Uniqueness: entities are recorded once
  • Timeliness: the data is kept up to date
  • Consistency: the data agrees with itself
  • Believability: how much the data is trusted by users
  • Interpretability: how easily the data is understood
34
What is z-score normalisation?
Transform the data by converting the values to a common scale with an average of zero and a standard deviation of one:
v' = (v - mean(A)) / sd(A)
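The formula in numpy, on made-up attribute values:

```python
import numpy as np

A = np.array([50.0, 60.0, 70.0, 80.0, 90.0])  # toy attribute
z = (A - A.mean()) / A.std()                  # v' = (v - mean(A)) / sd(A)
# z now has mean 0 and standard deviation 1
```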
35
What ways can you handle noisy data through binning? (3)
  • Smoothing by bin means: each value in a bin is replaced by the mean value of the bin
  • Smoothing by bin medians: each value in a bin is replaced by the median value of the bin
  • Smoothing by bin boundaries: the minimum and maximum values in a given bin are identified as the bin boundaries; each bin value is then replaced by the closest boundary value
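The first two techniques can be sketched in numpy on toy, already-sorted data split into equal-depth bins of 3:

```python
import numpy as np

# Sorted toy data partitioned into equal-frequency bins of size 3.
data = np.array([4.0, 8.0, 9.0, 15.0, 21.0, 21.0, 24.0, 25.0, 26.0])
bins = data.reshape(-1, 3)

# Smoothing by bin means: every value becomes its bin's mean.
by_means = np.repeat(bins.mean(axis=1), 3)      # bins average to 7, 19, 25
# Smoothing by bin medians: every value becomes its bin's median.
by_medians = np.repeat(np.median(bins, axis=1), 3)
```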
36
What is correlation analysis for discretisation?
  • Supervised: uses class information
  • Bottom-up merge: find the best neighbouring intervals to merge
  • Initially each distinct value is an interval; chi-squared tests are performed on every pair of adjacent intervals, and those with the lowest chi-squared values are merged. Merging is performed recursively until a predefined stopping condition is satisfied
37
What is the python code for filling in missing values?
data.fillna()  # inplace=True replaces the values in the original dataframe
38
What is the maximum likelihood method for outlier detection?
Assume that the data are normally distributed and learn the parameters from the input data. An object is an outlier if it is more than 3 standard deviations from the mean, i.e. its z-score (x - mean)/sd has absolute value greater than 3.
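A small numpy sketch of the 3-standard-deviation rule, on synthetic normal data with one planted glitch:

```python
import numpy as np

# Assumed-normal sample with one injected outlier.
rng = np.random.default_rng(0)
x = rng.normal(loc=50.0, scale=5.0, size=1000)
x[0] = 120.0                        # the planted outlier

z = (x - x.mean()) / x.std()        # z-score of every point
outliers = x[np.abs(z) > 3]         # flag points more than 3 sd from the mean
```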
39
What are the disadvantages of too many or too little bin numbers for smoothing data?
  • Too many bins: won't smooth the data, will keep the noise, and requires a lot of computation
  • Too few bins: hides a lot of detail in the data
40
How can you reduce the time complexity of pairwise similarity
Blocking: divide the records into blocks, perform pairwise comparison between records in the same block only
41
What is equal width partitioning for discretisation? What are the 2 problems with it?
  • Divides the range into N intervals of equal size: uniform grid
  • If A and B are the smallest and largest values of the attribute, the width of the intervals will be W = (B - A)/N
  • The most straightforward method, but outliers may dominate the presentation
  • Skewed data is not handled well
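A toy numpy comparison showing how a single outlier distorts equal-width bins while equal-depth bins stay balanced (made-up skewed data):

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 100.0])  # skewed toy data
N = 3

# Equal-width: W = (B - A) / N; the outlier 100 leaves the middle bin empty.
A_, B_ = data.min(), data.max()
edges = A_ + (B_ - A_) / N * np.arange(N + 1)      # [1, 34, 67, 100]
width_counts, _ = np.histogram(data, bins=edges)   # [8, 0, 1]

# Equal-frequency (equal-depth): each bin gets ~the same number of points.
depth_bins = np.array_split(np.sort(data), N)      # 3 points per bin
```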
42
What is equal-depth partitioning for discretisation? What is a problem with it?
  • Divides the range into N intervals, each containing approximately the same number of samples
  • Managing categorical attributes can be tricky
43
What is data compression?
Transformations are applied to obtain a reduced or compressed representation of the original data
44
What is the chi-squared correlation test for nominal data?
Tests the hypothesis that attributes A and B are independent based on the chi-squared statistic
45
What are parametric methods for outlier detection?
Assume that the normal data is generated by a parametric distribution with parameter theta. The probability density function of the parametric distribution, f(x, theta), gives the probability that x is generated by the distribution: the smaller this value, the more likely x is an outlier.
46
What does a low local reachability density mean?
The closest cluster is far from x
47
What is a non-parametric method for outlier detection with multivariate data?
Using a histogram: graph the data as percentages; a value is an outlier if it falls within a bin containing a very small percentage of the data.
Or use kernel density estimation to estimate the probability density function of the data. For an object o, the density function f(o) gives the estimated probability that the object is generated by the stochastic process; if f(o) is low, the object is likely an outlier.
48
What is the general approach for outlier detection with multivariate data?
Transform the multivariate outlier detection task into a univariate outlier detection problem
49
What is min-max normalisation?
Transform the data from a given range [minA, maxA] to a new interval [new_minA, new_maxA] for a given attribute A:
v' = (v - minA)/(maxA - minA) * (new_maxA - new_minA) + new_minA, where v is the current value
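The formula as a small numpy helper, applied to made-up income values:

```python
import numpy as np

def min_max(v, new_min=0.0, new_max=1.0):
    # v' = (v - minA)/(maxA - minA) * (new_maxA - new_minA) + new_minA
    return (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# Toy values: the smallest maps to new_min, the largest to new_max.
A = np.array([12000.0, 73600.0, 98000.0])
scaled = min_max(A)
```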
50
What are methods for data transformation? (8)
  • Smoothing: removing noise from the data; includes binning, regression, clustering
  • Attribute/feature construction: new attributes constructed from the given ones
  • Aggregation: summary or aggregation operations applied, data cube construction
  • Normalisation: scaled to fall within a smaller, specified range; includes min-max normalisation, z-score normalisation, normalisation by decimal scaling
  • Data reformatting: e.g. Jack Wilsher -> Wilsher, J.
  • Using the same unit: e.g. inches and cm
  • Discretisation: replacing raw values of numeric attributes by interval labels or conceptual labels
  • Concept hierarchy generation: attributes such as street generalised to higher-level concepts like city
51
How do you find the correlation matrix for a dataframe in python?
df.corr()
52
What is the python code to generate a dataframe with 20 elements with 5 rows and 4 columns?
df = pd.DataFrame(np.arange(20).reshape(5, 4))
53
What is attribute subset selection?
Removing irrelevant or redundant attributes
54
What are issues with computing similarity measures? (2)
  • Similarity measures have different scales
  • Pairwise similarity between records is expensive to compute
55
When is data reduction through clustering useful and when is it not useful?
Effective if data is clustered but not if data is “smeared”
56
What is schema normalisation?
  • Schema matching: e.g. contact number vs phone
  • Compound attributes: e.g. address vs street, city, zip
57
How do you reduce data by sampling?
Obtaining a small sample s to represent the whole data set N, i.e. choosing a representative subset of the data
58
What is a statistical approach to outlier detection?
Assume that the normal data objects are generated by a stochastic process (a generative model) and that data not following the model are outliers. Learn a generative model fitting the given data set, and then identify the objects in low probability regions of the model as outliers
59
What makes data "dirty"? (2)
  • Inconsistent: containing discrepancies in codes or names
  • Intentional: e.g. disguised missing data, such as Jan 1st for all birthdays
60
What is the difference between global and local approaches to outlier detection
Global approaches: the reference set contains all other data objects
Local approaches: the reference set contains a small subset of data objects, and there is no assumption on the number of normal mechanisms
61
What type of data can you perform principal component analysis on?
numeric data only
62
What do we need in a definition of data quality? (3)
  • Reflects the use of the data
  • Leads to improvements in processes
  • Measurable (we can define metrics)
63
How do you take a sample of a dataframe with and without replacement in python?
# take a sample of 3 rows without replacement:
df.sample(3)
# take a sample of 3 rows with replacement:
df.sample(3, replace=True)
64
What is schema integration?
integrate metadata from different sources
65
What is concept hierarchy generation?
  • Some hierarchies can be automatically generated based on the analysis of the number of distinct values per attribute in the dataset
  • The attribute with the most distinct values is placed at the lowest level of the hierarchy
  • E.g. country (highest level) -> state -> city -> street (lowest level)
  • This is also a type of data smoothing
66
For multivariate data, how do you overcome the simplified assumption that data is generated by a normal distribution? What method for outlier detection can you use for this new assumption?
Assume the normal data is generated by a mixture of normal distributions. For any object o in the dataset, the probability that o is generated by the mixture is the sum of the probability density functions at o. Use the EM algorithm to learn the parameters of the model; an object is an outlier if it does not belong to any of the main groups of the data.
67
What are contextual and behavioural attributes?
contextual attributes define the context, behavioural attributes define the characteristics of the object used in outlier evaluation
68
What is principal component analysis?
  • Find a projection that captures the largest amount of variation in the data
  • We find the eigenvectors of the covariance matrix, and these eigenvectors define the new space
69
What are redundant attributes?
Attributes that duplicate much or all of the information contained in one or more other attributes
70
What is matching features?
Given two records, compute a vector of similarity scores for corresponding features
  • The score can be Boolean (match/mismatch) or a continuous value based on a specific similarity measure (distance function)
71
What is data transformation?
A function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values
72
What is the local outlier factor for outlier detection?
Quantifies the local density of a data point using a neighbourhood of size k
  • Introduces a smoothing parameter, the reachability distance: RDk(x, y) = max{k-dist(y), dist(x, y)}, where k-dist(y) is the distance between y and its k-nearest neighbour
  • The local reachability density of point x is: LRDk(x) = k / (sum of RDk(x, y) over all y in kNN(x))
  • The local outlier factor is: LOFk(x) = (sum of LRDk(y)/LRDk(x) over all y in kNN(x)) / k
  • Generally, LOF > 1 means x has a lower density than its neighbours
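A minimal numpy sketch of these formulas on made-up 2-D points (the helper name `lof_scores` is hypothetical, not from the lecture code):

```python
import numpy as np

def lof_scores(X, k):
    """Local outlier factor of each row of X (O(n^2) toy sketch)."""
    dmat = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    n = len(X)
    knns, kdist = [], np.zeros(n)
    for i in range(n):
        knn = np.argsort(dmat[i])[1:k + 1]   # skip the point itself
        knns.append(knn)
        kdist[i] = dmat[i, knn[-1]]          # k-distance of point i
    # Reachability distance RD_k(x, y) = max{k-dist(y), dist(x, y)};
    # local reachability density LRD_k(x) = k / sum_y RD_k(x, y).
    lrd = np.array([k / np.maximum(kdist[knns[i]], dmat[i, knns[i]]).sum()
                    for i in range(n)])
    # LOF_k(x): average ratio of the neighbours' LRD to x's own LRD.
    return np.array([(lrd[knns[i]] / lrd[i]).mean() for i in range(n)])

# Four clustered points and one far-away point (toy coordinates):
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [10.0, 10.0]])
scores = lof_scores(X, k=2)   # the last score is well above 1
```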
73
What does split and merge mean in discretisation?
split = top-down method
merge = bottom-up method
74
What is noise in data?
Random error or variance in a measured variable
75
What is the classification/decision tree method of discretisation?
  • Supervised: given class labels, top-down recursive splitting
  • Uses entropy to determine the split point (discretisation point)
76
What proximity based approach should you use to detect local outliers?
You must use density-based methods; distance-based methods can't detect local outliers
77
How do you compute jaro similarity between strings s1 and s2 in python?
jaro_sim = sm.jaro.Jaro()
jaro_sim.get_raw_score(s1, s2)
78
What is dimensionality reduction?
remove unimportant attributes
79
What are the 2 different methods for filling nas in python?
# fill each na with the value before it:
data.fillna(method='pad')  # or method='ffill'
# fill each na with the value after it:
data.fillna(method='bfill')  # or method='backfill'
# set a limit on the number of forward or backward fills:
data.fillna(method='pad', limit=1)
80
How do you get summary statistics such as mean using numpy as np in python?
np.mean(data)
81
What are 4 ways to handle noisy data?
  • Binning: first sort the data and partition it into equal-frequency (equi-depth) bins, then smooth by bin means, bin medians, bin boundaries, etc.
  • Regression: smooth by fitting the data to regression functions
  • Clustering: detect and remove outliers that do not belong to any of the clusters
  • Combined computer and human inspection: detect suspicious values and have a human check them
82
How can you detect/handle redundant data attributes?
Redundant attributes can be detected by correlation and covariance analysis
83
What is data integration?
Combining data from multiple sources into a coherent data store
84
What is the difference between outlier detection and novelty detection?
Novelty detection determines whether new data fits an existing data set or would be considered an outlier
85
What are non-parametric methods for outlier detection?
Don't assume an a priori statistical model; determine the model from the input data, e.g. histogram and kernel density estimation
86
When does simple random sampling have poor performance?
Simple random sampling may have poor performance in the presence of skew
87
What are challenges of outlier detection? (6)
  • Modelling normal objects and outliers properly
  • Application-specific outlier detection
  • Handling noise in outlier detection
  • Understandability
  • A data set may have multiple types of outlier
  • One object may belong to more than one type of outlier
88
What are the steps of principal component analysis?
Given N data vectors in d dimensions, find k <= d principal components that can accurately represent the data. Steps:
  • Normalise the input data so that each attribute falls within the same range
  • Compute k orthonormal (unit) vectors, i.e. principal components. These are unit vectors that each point in a direction perpendicular to the others; each input data vector is a linear combination of the k principal components
  • The principal components are sorted in order of decreasing significance or strength, and serve as a new set of axes for the data. The first axis (first-ranked principal component) shows the most variance among the data
  • Reduce the data dimensionality by eliminating the weak components, i.e. those with low variance
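The steps above can be sketched with a covariance eigendecomposition in numpy (toy points; the helper name `pca_project` is made up):

```python
import numpy as np

def pca_project(X, k):
    Xc = X - X.mean(axis=0)                 # centre each attribute
    cov = np.cov(Xc, rowvar=False)          # covariance matrix of attributes
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
    order = np.argsort(eigvals)[::-1]       # sort by decreasing variance
    components = eigvecs[:, order[:k]]      # keep the k strongest components
    return Xc @ components                  # project onto the new axes

# Points lying near the line y = x, so one component captures
# almost all of the variance.
X = np.array([[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]])
Z = pca_project(X, k=1)                     # reduced to one dimension
```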
89
What is a global outlier and what is the issue with detecting them?
An object is a global outlier (or point anomaly) if it significantly deviates from the rest of the data set
Issue: finding an appropriate measure of deviation
90
What is an advantage of sampling?
The cost of obtaining a sample is proportional to the size of the sample s, not the size of the dataset N. Therefore sampling complexity is potentially sublinear to the size of the data
91
What is numerosity reduction?
  • Replace the original data volume with alternative, smaller forms of data representation
  • Includes modelling, histograms, clustering, sampling and data cube aggregation
92
What is a contextual outlier and what is the issue with detecting them
An object is a contextual outlier (or conditional outlier) if it deviates significantly within a selected context
Issue: how to define or formulate a meaningful context
93
What is data cube aggregation?
  • Data can be aggregated: for example, if you have the sales for each quarter, create a new variable with yearly sales; the resulting dataset is smaller
  • Data cubes store multidimensional aggregated information
94
How does the distance-based approach to outlier detection work?
Judge a point based on its distance to its neighbours. Given a radius r and a percentage pi, a data point x is considered an outlier if the fraction of all other data points lying within distance r of x is less than pi.
95
What is clustering-based outlier detection?
Assume that the normal data objects belong to large and dense clusters, whereas outliers belong to small or sparse clusters
96
What are the 4 heuristic methods for selecting the subset in attribute subset selection?
  • Stepwise forward selection: start with an empty set of attributes; the best of the original attributes is determined and added to the set at each step
  • Stepwise backward elimination: start with the full set of attributes; at each step, remove the worst of the remaining attributes
  • Combination of forward selection and backward elimination: at each step the procedure adds the best attribute to the reduced set and removes the worst attribute from the initial set
  • Decision tree induction: a tree is constructed from the given data; all attributes that do not appear in the tree are considered irrelevant
97
What are 3 strategies for dimensionality reduction?
  • Principal component analysis (PCA)
  • Singular value decomposition (SVD)
  • Feature subset selection, feature creation
98
How do you compute the affine gap similarity in python?
aff = sm.affine.Affine(...)
aff.get_raw_score(s1, s2)
99
How do you add two lists A and B by element addition using numpy as np in python?
np.add(A, B)
100
What is a model-based approach to outlier detection?
Use a model to summarise the data, e.g. linear regression; data points that do not conform to the model are potential outliers
101
What is model based data reduction?
Fit a model to the data and save the model instead
102
What are discretisation methods? (5)
- Binning - Histograms - Clustering - Classification (e.g. decision trees) - Correlation
103
What is entity resolution?
Problem of identifying and linking/grouping different representations of the same real-world object
104
What is data normalisation in text?
Capitalisation, white space normalisation, correcting typos, replacing abbreviations, variations, nicknames
105
What is the curse of dimensionality?
When dimensionality increases, data becomes increasingly sparse, and density and distance between points become less meaningful
106
What are the 5 types of sampling?
  • Simple random sampling: there is an equal probability of selecting any particular item
  • Simple random sampling without replacement: once an object is selected, it is removed from the population
  • Simple random sampling with replacement: a selected object is not removed from the population
  • Cluster sampling: random sampling of clusters
  • Stratified sampling: partition the data set and draw samples from each partition proportionally, i.e. approximately the same percentage of the data; used in conjunction with skewed data