DP-203 Microsoft Azure Data Engineering Flashcards
High-availability systems must be available the vast majority of the time. Service-level agreements (SLAs) specify your organization's availability expectations. System uptime can be expressed as three nines, four nines, or five nines, corresponding to uptimes of 99.9 percent, 99.99 percent, and 99.999 percent, respectively.
How many downtime hours per year will be covered with an SLA of 99.9% uptime?
8.76 hours
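The figure follows from the 8,760 hours in a non-leap year: downtime = hours per year × (1 − uptime). A minimal Python sketch of the arithmetic:

```python
HOURS_PER_YEAR = 365 * 24  # 8,760 hours in a non-leap year

for nines in (3, 4, 5):  # three, four, and five nines
    allowed_downtime = HOURS_PER_YEAR * 10 ** -nines  # hours per year
    print(f"{nines} nines: {allowed_downtime:.4f} hours of downtime per year")
# 3 nines: 8.7600, 4 nines: 0.8760, 5 nines: 0.0876
```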
Which of the following terms describes the final cost of owning a given technology and includes the following costs:
Hardware
Software licensing
Labor (installation, upgrades, maintenance)
Datacenter overhead (power, telecommunications, building, heating and cooling)
TCO
The total cost of ownership (TCO) is the purchase price of an asset plus its costs of operation. Assessing TCO means taking a bigger-picture look at what a product is and what its value will be over time.
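To make the idea concrete, TCO is simply the sum of the acquisition and operating cost categories listed above; the figures in this sketch are hypothetical:

```python
# Hypothetical annual cost figures (illustrative only, not real pricing).
costs = {
    "hardware": 40_000,
    "software_licensing": 15_000,
    "labor": 60_000,                # installation, upgrades, maintenance
    "datacenter_overhead": 25_000,  # power, telecoms, building, HVAC
}

tco = sum(costs.values())
print(f"Total cost of ownership: ${tco:,}")  # Total cost of ownership: $140,000
```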
Azure supports both structured and unstructured data. Unstructured data in Azure would commonly be held in which of the following?
-Azure SQL Data Warehouse
-Azure SQL Database
-Azure Cosmos DB
Azure Cosmos DB
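As an illustration of why Cosmos DB suits data without a fixed schema, here is a minimal sketch using the azure-cosmos Python SDK; the endpoint, key, and database/container names are placeholders:

```python
from azure.cosmos import CosmosClient

# Placeholder endpoint and key; in practice read these from configuration.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<account-key>")
container = client.get_database_client("appdb").get_container_client("events")

# Items in the same container need no shared schema beyond "id" and the
# container's partition key, so documents of different shapes can coexist.
container.upsert_item({"id": "1", "type": "click", "page": "/home"})
container.upsert_item({"id": "2", "type": "review", "stars": 5, "text": "Great!"})
```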
The term lift and shift is used when migrating physical or virtualized servers from an on-premises environment to a Microsoft Azure virtual machine cloud-based environment without the need to rearchitect the application.
Which of the following are benefits of carrying out a Lift and Shift operation?
Select all options that apply.
-Lower operational costs
-Higher Availability
-Take advantage of all Azure features
-Lower operational costs
-Higher Availability
The schema of which data type can be defined at query time?
Unstructured data
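Schema-on-read means the structure is applied when the data is queried rather than when it is written. A small PySpark sketch (the storage path and column names are assumptions):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# No schema was declared when these JSON files landed in the data lake;
# Spark infers one at query time (schema-on-read).
df = spark.read.json("abfss://raw@mydatalake.dfs.core.windows.net/events/")
df.printSchema()                         # the inferred, query-time schema
df.select("userId", "eventType").show()  # assumed column names
```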
Which cloud technical requirement is met by duplicating customer content for redundancy and meeting service-level agreements (SLAs) in Azure?
High availability
As data processing techniques change with technology, new roles are starting to appear. These roles provide specialized skills to help streamline the data engineering process. Which of the following have been identified as new roles in modern data projects?
-Database Administrator
-Artificial Intelligence Engineer
-Data Scientist
-Data Engineer
-Artificial Intelligence Engineer
-Data Scientist
-Data Engineer
The role of a Data Engineer includes which of the following tasks?
Select all options that apply.
-Manage, monitor, and ensure the security and privacy of data to satisfy business needs
-Work with services such as Cognitive Services, Cognitive Search, and the Bot Framework
-Use services and tools to ingest, egress, and transform data from multiple sources
-Perform advanced analytics to extract value from data
-Manage, monitor, and ensure the security and privacy of data to satisfy business needs
-Use services and tools to ingest, egress, and transform data from multiple sources
Data Engineers may sometimes perform an ETL process when processing data. The extracted data may come from many sources, including databases, files, and streams. As part of the extract process, which of the following must be supplied?
-Define the Data Source
-Define the Data
-Define the Transformation
-Define the Data Source
-Define the Data
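In code, the extract step comes down to exactly those two definitions. A sketch using pyodbc, with a placeholder connection string and table:

```python
import pyodbc

# 1. Define the data source: where the data lives and how to connect to it.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=onprem-sql01;DATABASE=Sales;Trusted_Connection=yes;"
)

# 2. Define the data: which rows and columns to extract.
cursor = conn.cursor()
cursor.execute(
    "SELECT OrderId, CustomerId, Amount FROM dbo.Orders WHERE OrderDate >= ?",
    "2024-01-01",
)

rows = cursor.fetchall()  # handed off to the transform and load stages
conn.close()
```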
In Microsoft Azure, Data Engineers will use several tools to perform ETL processes.
Which of the following tools will commonly be used to perform ETL processes in Azure?
-Azure Cosmos DB
-Azure Synapse Analytics
-Azure Data Factory
-Azure Data Factory
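Data Factory pipelines can also be triggered and monitored programmatically. A minimal sketch with the azure-mgmt-datafactory SDK; the subscription, resource group, factory, and pipeline names are placeholders, and the pipeline is assumed to already exist:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Kick off an existing pipeline run, then poll its status.
run = adf.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-etl",
    pipeline_name="CopyOrdersPipeline",
)
status = adf.pipeline_runs.get("rg-data", "adf-etl", run.run_id)
print(status.status)  # e.g. "InProgress" or "Succeeded"
```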
Which role works with Azure Cognitive Services, Cognitive Search, and the Bot Framework?
AI engineer
Which Azure data platform is commonly used to process data in an ELT framework?
-Azure Data Lake Storage
-Azure Data Factory
-Azure Databricks
-Azure Data Factory
Moving resources such as servers and services from an on-premises environment to a cloud-based solution will have a beneficial effect on which of the following?
Both capital and operational expenditure
Which of the following data processing frameworks is used by data engineers to ingest data from an on-premises database into an on-premises data warehouse?
Extract, Transform, and Load (ETL)
Extract, Transform, and Load (ETL) is a typical process for ingesting data from an on-premises database into an on-premises data warehouse.
Unstructured data differs from structured data in several ways.
Which of the following are features of Unstructured data?
Select all options that apply.
-Commonly stored in data warehouses
-Predefined format
-Native format
-Commonly stored in data lakes
-Schema-on-write
-Schema-on-read
-Schema-on-read
-Commonly stored in data lakes
-Native format