General Data Analytics Questions Flashcards

1
Q

Statistical Analysis:
What is the difference between descriptive and inferential statistics?

A

Descriptive statistics summarize the basic features of a dataset, using measures such as the mean, median, mode, and standard deviation. Inferential statistics, on the other hand, use sample data to draw conclusions or make predictions about a larger population.
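
A minimal sketch of the contrast, assuming NumPy and SciPy are available (the exam scores are invented for illustration): the first half only summarizes the observed sample, while the second half uses it to estimate the unseen population mean.

```python
import numpy as np
from scipy import stats

# Hypothetical sample of exam scores (values are made up)
sample = np.array([72, 85, 90, 66, 78, 88, 74, 81, 95, 70])

# Descriptive: summarize the sample we actually observed
print("mean:  ", np.mean(sample))
print("median:", np.median(sample))
print("stdev: ", np.std(sample, ddof=1))

# Inferential: estimate a range for the unseen population mean
ci = stats.t.interval(0.95, df=len(sample) - 1,
                      loc=np.mean(sample), scale=stats.sem(sample))
print("95% confidence interval for the population mean:", ci)
```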

2
Q

Machine Learning:
Explain the difference between supervised and unsupervised learning algorithms.

A

Supervised learning algorithms learn from labeled data, where each example is tagged with the correct answer. Unsupervised learning algorithms, on the other hand, learn from unlabeled data and must infer the structure from the input data.
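
A minimal scikit-learn sketch of the contrast (the iris dataset is just a convenient stand-in; any labeled dataset would do):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model trains on features AND their correct labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("predicted class:", clf.predict(X[:1]))

# Unsupervised: the model sees only the features and infers structure
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("inferred cluster assignments:", km.labels_[:5])
```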

3
Q

Data Visualization:
Why is it important to choose the right type of visualization for your data?

A

Choosing the right type of visualization is crucial because different types of data require different visualizations to effectively communicate insights. For example, bar charts are suitable for comparing categories, while line charts are better for showing trends over time.
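
A small matplotlib sketch of that example (the regions, months, and numbers are invented):

```python
import matplotlib.pyplot as plt

categories = ["North", "South", "East", "West"]   # hypothetical regions
sales = [120, 95, 143, 88]                        # hypothetical totals
months = ["Jan", "Feb", "Mar", "Apr", "May"]
revenue = [100, 108, 121, 118, 130]               # hypothetical trend

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.bar(categories, sales)                # bar chart: compare categories
ax1.set_title("Sales by region (comparison)")
ax2.plot(months, revenue, marker="o")     # line chart: show a trend
ax2.set_title("Revenue over time (trend)")
plt.tight_layout()
plt.show()
```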

4
Q

Big Data Technologies:
What are the main components of the Hadoop ecosystem, and how do they work together?

A

The main components of the Hadoop ecosystem include Hadoop Distributed File System (HDFS) for storage, MapReduce for processing, YARN for resource management, and various higher-level frameworks like Hive, Pig, and Spark for data processing and analysis.
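
To make the division of labor concrete, here is a word-count sketch in the Hadoop Streaming style, where HDFS holds the input, YARN schedules the containers, and a plain script supplies the map and reduce logic (the script is illustrative; a real job would be submitted through the hadoop-streaming jar with its -mapper and -reducer options):

```python
#!/usr/bin/env python3
import sys
from itertools import groupby

def mapper(lines):
    # Map phase: emit one tab-separated (word, 1) pair per word
    for line in lines:
        for word in line.split():
            print(f"{word}\t1")

def reducer(lines):
    # Hadoop sorts mapper output by key, so identical words
    # arrive at the reducer as contiguous runs
    pairs = (line.rstrip("\n").split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)(sys.stdin)
```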

5
Q

Data Mining:
Describe the process of association rule mining and provide an example of its application.

A

Association rule mining is a data mining technique used to find interesting relationships or associations between variables in large datasets. An example application is market basket analysis, where associations between products purchased together are discovered to improve product placement or promotional strategies.
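
A bare-bones sketch of the idea in plain Python (the transactions and thresholds are invented; production work would use an algorithm such as Apriori or FP-Growth):

```python
from itertools import combinations
from collections import Counter

# Hypothetical market-basket transactions
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"beer", "bread"},
    {"butter", "milk"},
    {"bread", "butter", "milk", "beer"},
]

n = len(transactions)
counts = Counter()
for t in transactions:
    for item in t:                           # single-item frequencies
        counts[frozenset([item])] += 1
    for pair in combinations(sorted(t), 2):  # pair frequencies
        counts[frozenset(pair)] += 1

# Keep rules A -> B whose support and confidence clear the thresholds
min_support, min_confidence = 0.4, 0.6
for itemset, c in counts.items():
    if len(itemset) != 2 or c / n < min_support:
        continue
    a, b = sorted(itemset)
    for lhs, rhs in ((a, b), (b, a)):
        confidence = c / counts[frozenset([lhs])]
        if confidence >= min_confidence:
            print(f"{lhs} -> {rhs}: support={c/n:.2f}, confidence={confidence:.2f}")
```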

6
Q

Data Warehousing:
What is a star schema in data warehousing, and how does it differ from a snowflake schema?

A

A star schema is a data warehousing design where a fact table is surrounded by multiple dimension tables, forming a star-like structure. In contrast, a snowflake schema is a more normalized version of the star schema where dimension tables are further normalized into sub-dimension tables.
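
A toy star schema using Python's built-in sqlite3 (all table and column names are invented for illustration); in the snowflake variant, category would move out of dim_product into its own sub-dimension table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT, month TEXT);
CREATE TABLE fact_sales  (
    product_id INTEGER REFERENCES dim_product(product_id),
    date_id    INTEGER REFERENCES dim_date(date_id),
    amount     REAL
);
INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware'), (2, 'Gizmo', 'Toys');
INSERT INTO dim_date    VALUES (1, '2024-01-05', 'Jan'), (2, '2024-02-09', 'Feb');
INSERT INTO fact_sales  VALUES (1, 1, 9.99), (2, 1, 19.99), (1, 2, 9.99);
""")

# A typical star-schema query: the fact table joined to its dimensions
for row in con.execute("""
    SELECT p.category, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    JOIN dim_date d    ON d.date_id    = f.date_id
    GROUP BY p.category, d.month
"""):
    print(row)
```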

7
Q

Predictive Analytics:
What evaluation metrics are commonly used to assess the performance of a predictive model?

A

Common evaluation metrics for predictive models include accuracy, precision, recall, F1-score, ROC-AUC, and mean squared error (MSE), depending on the nature of the problem (classification or regression) and the specific requirements of the task.
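
A quick scikit-learn sketch (the true labels, predictions, and scores below are made up):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, mean_squared_error)

# Hypothetical classification results
y_true  = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred  = [0, 1, 0, 0, 1, 1, 1, 1]
y_score = [0.2, 0.9, 0.4, 0.3, 0.8, 0.6, 0.7, 0.95]  # predicted probabilities

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("ROC-AUC:  ", roc_auc_score(y_true, y_score))

# Regression problems use a different family of metrics
print("MSE:", mean_squared_error([3.0, 2.5, 4.1], [2.8, 2.7, 3.9]))
```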

8
Q

Data Engineering:
Explain the concept of ETL (Extract, Transform, Load) in the context of data engineering.

A

ETL (Extract, Transform, Load) is a process used in data engineering to extract data from various sources, transform it into a consistent format, and load it into a target database or data warehouse for analysis. The transform step typically involves cleaning, filtering, and aggregating the data as needed.
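
A self-contained sketch of the three steps (an inline CSV stands in for a real source system, and sqlite3 for the target warehouse):

```python
import csv, io, sqlite3

# Extract: pull raw records from a source
raw = io.StringIO("name,amount\nalice, 10\nbob,\ncarol, 25\n")
rows = list(csv.DictReader(raw))

# Transform: drop rows with missing amounts, trim whitespace, cast types
clean = [(r["name"].strip(), int(r["amount"]))
         for r in rows if r["amount"].strip()]

# Load: write the cleaned records into the target database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE payments (name TEXT, amount INTEGER)")
con.executemany("INSERT INTO payments VALUES (?, ?)", clean)
print(con.execute("SELECT * FROM payments").fetchall())
```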

9
Q

Statistical Analysis:
Can you explain the concept of hypothesis testing and provide an example of how it’s used in practice?

A

Hypothesis testing is a statistical method used to make inferences about a population based on sample data. It involves formulating a null hypothesis about a population parameter and then using sample data to assess how likely the observed result would be if that hypothesis were true. An example could be testing whether the mean height of students in a school differs significantly from the national average.
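
The height example as a one-sample t-test with SciPy (the sample heights and the 170 cm national average are assumed values):

```python
import numpy as np
from scipy import stats

heights = np.array([168, 172, 165, 174, 169, 171, 166, 173, 170, 175])

# H0: the school's mean height equals the national average of 170 cm
t_stat, p_value = stats.ttest_1samp(heights, popmean=170)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Reject H0: the mean differs significantly from 170 cm")
else:
    print("Fail to reject H0 at the 5% level")
```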

10
Q

Machine Learning:
Let’s discuss the difference between classification and regression algorithms. Can you provide examples of each, along with their applications?

A

Classification algorithms are used to predict the categorical class labels of new instances based on past observations, while regression algorithms are used to predict continuous numerical values. An example of a classification algorithm is logistic regression, which can be used for email spam classification. An example of a regression algorithm is linear regression, which can be used to predict house prices based on features like square footage and number of bedrooms.
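
Both examples in miniature with scikit-learn (the features, labels, and prices are fabricated):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

# Classification: predict a category (spam = 1, not spam = 0) from a
# single made-up feature such as the count of suspicious words
X_cls = np.array([[0], [1], [2], [8], [9], [10]])
y_cls = np.array([0, 0, 0, 1, 1, 1])
clf = LogisticRegression().fit(X_cls, y_cls)
print("spam?", clf.predict([[7]]))

# Regression: predict a continuous value (price) from square footage
X_reg = np.array([[800], [1000], [1200], [1500], [2000]])
y_reg = np.array([150_000, 180_000, 210_000, 260_000, 340_000])
reg = LinearRegression().fit(X_reg, y_reg)
print("price for 1300 sq ft:", reg.predict([[1300]]))
```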

11
Q

Data Visualization:
What are some best practices for creating effective data visualizations? Can you explain the importance of storytelling in data visualization?

A

Best practices for creating effective data visualizations include choosing the right type of chart or graph to represent the data, ensuring clarity and simplicity, labeling axes appropriately, using color strategically, and providing context to aid interpretation. Storytelling in data visualization involves using visualizations to tell a compelling narrative that communicates insights and engages the audience emotionally.
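
A matplotlib sketch of those practices in action: labeled axes, a title that states the takeaway, and an annotation that points the reader at the key event in the data (all values invented):

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
signups = [120, 135, 150, 310, 330, 360]  # hypothetical values

fig, ax = plt.subplots(figsize=(7, 4))
x = range(len(months))
ax.plot(x, signups, marker="o")
ax.set_xticks(x)
ax.set_xticklabels(months)

ax.set_xlabel("Month")
ax.set_ylabel("New signups")
ax.set_title("Signups doubled after the April launch")  # the story
ax.annotate("product launch", xy=(3, 310), xytext=(1, 290),
            arrowprops={"arrowstyle": "->"})
plt.tight_layout()
plt.show()
```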

12
Q

Big Data Technologies:
Could you explain the role of Apache Spark in the big data ecosystem? How does it differ from traditional MapReduce processing?

A

Apache Spark is a fast and general-purpose distributed computing system for big data processing. It offers in-memory computing and provides APIs in multiple languages like Scala, Java, and Python. Spark is known for its ease of use and versatility, supporting various workloads such as batch processing, iterative algorithms, interactive queries, and streaming data. Unlike traditional MapReduce processing, Spark performs computations in memory, leading to faster processing times and better performance for iterative algorithms.
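
A minimal PySpark sketch (assumes a local pyspark installation; the data is invented). The cache() call is where the in-memory model shows up: the grouped result is kept in memory for reuse across actions instead of being written back to disk between jobs, as classic MapReduce would do.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("demo")
         .master("local[*]")
         .getOrCreate())

df = spark.createDataFrame(
    [("widget", 9.99), ("gadget", 19.99), ("widget", 14.99)],
    ["product", "amount"],
)

# Mark the aggregate for in-memory reuse across subsequent actions
totals = df.groupBy("product").sum("amount").cache()
totals.show()
spark.stop()
```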

13
Q

Data Mining:
Let’s talk about clustering algorithms. What are they, and how are they used in practice? Can you provide an example application of clustering?

A

Clustering algorithms are unsupervised learning techniques used to group similar data points together based on their characteristics. An example is the k-means algorithm, which partitions the data into k clusters by minimizing the distance between each data point and the centroid of its assigned cluster. An application of clustering is customer segmentation in marketing, where customers with similar purchasing behavior are grouped together for targeted campaigns.
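
A small customer-segmentation sketch with scikit-learn (the three behavioral groups are synthetic, generated just to give k-means something to find):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic customers: columns are annual spend and visits per month
customers = np.vstack([
    rng.normal([200, 2], [30, 0.5], size=(50, 2)),    # occasional buyers
    rng.normal([800, 10], [80, 1.5], size=(50, 2)),   # regulars
    rng.normal([2500, 25], [200, 3.0], size=(50, 2)), # high spenders
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print("segment sizes:", np.bincount(km.labels_))
print("segment centers (spend, visits):")
print(km.cluster_centers_.round(1))
```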

14
Q

Data Warehousing:
What are some common challenges faced in designing and implementing data warehouses? How can denormalization be beneficial in data warehousing?

A

Common challenges in designing and implementing data warehouses include data integration from multiple sources, ensuring data quality and consistency, managing metadata, scalability, and performance optimization. Denormalization in data warehousing involves combining normalized tables to reduce the number of joins needed for query processing, which can improve query performance and simplify data access.
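
A tiny pandas illustration of denormalization (tables and values invented): the dimension attributes are folded into the fact table once, so later queries need no join at all.

```python
import pandas as pd

# Normalized: fact rows reference a separate dimension table
fact = pd.DataFrame({"product_id": [1, 2, 1], "amount": [9.99, 19.99, 14.99]})
dim = pd.DataFrame({"product_id": [1, 2],
                    "name": ["Widget", "Gizmo"],
                    "category": ["Hardware", "Toys"]})

# Denormalized: pre-join the dimension attributes into a wide table
wide = fact.merge(dim, on="product_id")
print(wide.groupby("category")["amount"].sum())
```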

15
Q

Predictive Analytics:
How do decision trees work, and what advantages do they offer for predictive modeling? Can you explain the process of building and interpreting a decision tree?

A

Decision trees are a type of supervised learning algorithm used for classification and regression tasks. They work by recursively partitioning the feature space into subsets based on the value of certain features, with the goal of maximizing the purity of the resulting subsets. Decision trees offer advantages such as interpretability, ease of understanding, and handling both numerical and categorical data. The process of building a decision tree involves selecting the best split at each node based on criteria like Gini impurity or entropy and pruning the tree to avoid overfitting.
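
A short scikit-learn sketch (the iris data is just a convenient example): limiting max_depth acts as a simple pre-pruning control, and export_text prints the learned rules, which is what makes the model easy to interpret.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# Gini impurity is the default split criterion; limiting depth
# is a crude form of pruning against overfitting
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the tree as human-readable if/else rules
print(export_text(tree, feature_names=data.feature_names))
```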

16
Q

Data Engineering:
What are some key considerations when designing a data pipeline for real-time data processing? Can you discuss the differences between batch processing and stream processing?

A

When designing a data pipeline for real-time data processing, key considerations include scalability, fault tolerance, data consistency, and latency requirements. Batch processing involves processing data in fixed-size chunks or batches, suitable for scenarios where data can be collected over a period and processed periodically. Stream processing, on the other hand, involves processing data in real-time as it arrives, suitable for scenarios requiring low latency and continuous processing of data streams.
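
A toy contrast in plain Python (the click events are invented): the batch function needs the whole dataset before it can answer, while the stream function keeps running state and emits an updated answer per event, which is what keeps latency low.

```python
from typing import Iterator

events = [{"user": u, "clicks": c}
          for u, c in [("a", 1), ("b", 2), ("a", 3), ("c", 1), ("b", 2)]]

# Batch: collect everything first, then process in one pass
def batch_total(batch: list) -> int:
    return sum(e["clicks"] for e in batch)

print("batch total:", batch_total(events))

# Stream: maintain running state and emit results as events arrive
def stream_totals(stream: Iterator[dict]) -> Iterator[int]:
    total = 0
    for event in stream:
        total += event["clicks"]
        yield total

for running in stream_totals(iter(events)):
    print("running total:", running)
```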