test7 Flashcards

1
Q

HOTSPOT

You are designing a data pipeline that will integrate large amounts of data from multiple on-premises Microsoft SQL Server databases into an analytics platform in Azure. The pipeline will include the following actions:

  • Database updates will be exported periodically into a staging area in Azure Blob storage.
  • Data from the blob storage will be cleansed and transformed by using a highly parallelized load process.
  • The transformed data will be loaded to a data warehouse.
  • Each batch of updates will be used to refresh an online analytical processing (OLAP) model in a managed serving layer.
  • The managed serving layer will be used by thousands of end users.

You need to implement the data warehouse and serving layers.

What should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
To implement the data warehouse:
An Apache Spark pool in Azure Synapse Analytics
An Azure Synapse Analytics dedicated SQL pool
Azure Data Lake Analytics
To implement the serving layer:
Azure Analysis Services
An Apache Spark pool in Azure Synapse Analytics
An Azure Synapse Analytics dedicated SQL pool

A

Answer Area:
To implement the data warehouse: An Azure Synapse Analytics dedicated SQL pool
To implement the serving layer: Azure Analysis Services

Explanation:

To implement the data warehouse: An Azure Synapse Analytics dedicated SQL pool

Dedicated SQL pool (formerly SQL Data Warehouse) is a massively parallel processing (MPP) database designed for enterprise data warehousing workloads. It is optimized for storing and querying large volumes of data for analytical purposes.

Scenario Alignment: The requirement is to load “transformed data” into a “data warehouse”. Dedicated SQL pool is specifically built for this purpose. It can handle large amounts of data from multiple SQL Server databases, and it is designed for the analytical queries needed for a data warehouse.
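As a hedged illustration of the load step only (not part of the question), the following sketch uses pyodbc to run a COPY INTO statement against a dedicated SQL pool; the server, database, table, and staging container names are assumptions.

    import pyodbc  # assumes the ODBC Driver 18 for SQL Server is installed

    # Placeholder connection details, not values from the scenario.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=tcp:contoso-synapse.sql.azuresynapse.net,1433;"
        "Database=SalesDW;Uid=<user>;Pwd=<password>;Encrypt=yes;"
    )
    cursor = conn.cursor()

    # COPY INTO is the dedicated SQL pool's highly parallelized load statement for data
    # staged in Blob storage (add a CREDENTIAL clause if the container is not readable
    # by the caller's identity).
    cursor.execute("""
        COPY INTO dbo.StageSales
        FROM 'https://contosostaging.blob.core.windows.net/staging/sales/'
        WITH (FILE_TYPE = 'CSV', FIRSTROW = 2)
    """)
    conn.commit()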

Why not Apache Spark pool in Azure Synapse Analytics? While Apache Spark pools in Synapse are excellent for data transformation (as mentioned in the “cleansed and transformed” step), they are not primarily designed to be the data warehouse itself for structured analytical querying and serving OLAP models. Spark is more of a data processing engine. Dedicated SQL pool is the actual data warehouse component.

Why not Azure Data Lake Analytics? Azure Data Lake Analytics is a serverless analytics service for processing data in Azure Data Lake Storage using U-SQL. It’s more suited for data transformation and exploration in a data lake environment, not as a structured data warehouse for loading transformed data and serving OLAP models.

To implement the serving layer: Azure Analysis Services

Azure Analysis Services (AAS) is a fully managed platform as a service (PaaS) that provides enterprise-grade semantic modeling capabilities in the cloud. It is specifically designed to build and deploy OLAP models (semantic models) and serve them to end-users for analysis.

Scenario Alignment: The requirement is to “refresh an online analytical processing (OLAP) model” in a “managed serving layer” that will be used by “thousands of end users”. Azure Analysis Services is the perfect fit for this. It’s designed to host and serve OLAP models efficiently, handling queries from a large number of users.

Why not an Apache Spark pool in Azure Synapse Analytics? While Spark can perform analytical processing and even some forms of data serving, it is not optimized for serving interactive OLAP models to thousands of concurrent users in the way Azure Analysis Services is. AAS is built specifically for this kind of workload, with optimized query performance for OLAP models.

Why not An Azure Synapse Analytics dedicated SQL pool? While Dedicated SQL pool can be queried directly, using it as the serving layer for an OLAP model for thousands of users is less efficient and less feature-rich than using Azure Analysis Services. AAS is designed to create semantic models on top of data warehouses (like Dedicated SQL pool) and provide optimized OLAP serving capabilities. It provides features like caching, aggregations, and semantic modeling that are crucial for efficient OLAP serving and are not directly available in Dedicated SQL pool in the same optimized manner for end-user consumption.

In summary:

For the data warehouse, Azure Synapse Analytics dedicated SQL pool is the most appropriate service due to its data warehousing capabilities. For the serving layer of an OLAP model for thousands of users, Azure Analysis Services is the ideal choice because it is specifically designed for this purpose.

Final Answer:

Answer Area:
To implement the data warehouse: An Azure Synapse Analytics dedicated SQL pool
To implement the serving layer: Azure Analysis Services

2
Q

You store web access logs data in Azure Blob Storage.

You plan to generate monthly reports from the access logs.

You need to recommend an automated process to upload the data to Azure SQL Database every month.

What should you include in the recommendation?

A. Microsoft SQL Server Migration Assistant (SSMA)
B. Data Migration Assistant (DMA)
C. AzCopy
D. Azure Data Factory

A

The correct answer is D. Azure Data Factory.

Here’s why:

Azure Data Factory (ADF): ADF is a cloud-based data integration service that allows you to create data-driven workflows for orchestrating data movement and transformation at scale. It’s specifically designed for scenarios like this, where you need to:

Connect to various data sources: ADF can connect to Azure Blob Storage to read the web access logs.

Transform the data (if needed): ADF can perform data transformations if the log data needs to be cleaned or structured before loading it into Azure SQL Database.

Load data into Azure SQL Database: ADF can connect to Azure SQL Database and load the processed data.

Schedule the process: ADF allows you to schedule the data upload process to run automatically every month.
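For illustration only, the sketch below does by hand what a scheduled ADF copy activity would automate: read a month of log blobs and load them into Azure SQL Database. The account, container, table, and connection details are assumed names.

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import ContainerClient
    import pyodbc

    logs = ContainerClient(
        account_url="https://contosologs.blob.core.windows.net",  # assumed account
        container_name="weblogs",                                 # assumed container
        credential=DefaultAzureCredential(),
    )
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};Server=tcp:contoso.database.windows.net,1433;"
        "Database=Reports;Uid=<user>;Pwd=<password>;Encrypt=yes;"
    )
    cursor = conn.cursor()

    for blob in logs.list_blobs(name_starts_with="2024/05/"):     # one month of logs
        text = logs.download_blob(blob.name).readall().decode("utf-8")
        for line in text.splitlines():
            # Minimal "transform": keep the raw line; a real pipeline would parse fields.
            cursor.execute("INSERT INTO dbo.AccessLogs (RawLine) VALUES (?)", line)
    conn.commit()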

Here’s why the other options are incorrect:

A. Microsoft SQL Server Migration Assistant (SSMA): SSMA is used for migrating databases from on-premises SQL Server to Azure SQL Database. It’s not designed for continuously uploading data from blob storage.

B. Data Migration Assistant (DMA): DMA is another tool used for migrating databases. It can help you assess and migrate databases, but it’s not designed for ongoing data ingestion.

C. AzCopy: AzCopy is a command-line utility for copying data to and from Azure Blob Storage. While you could use AzCopy to copy the log files to a temporary location, you would still need another process to parse the data and load it into Azure SQL Database. ADF provides a more comprehensive and automated solution.

3
Q

HOTSPOT –

You are planning an Azure Storage solution for sensitive data. The data will be accessed daily. The dataset is less than 10 GB.

You need to recommend a storage solution that meets the following requirements:

✑ All the data written to storage must be retained for five years.

✑ Once the data is written, the data can only be read. Modifications and deletion must be prevented.

✑ After five years, the data can be deleted, but never modified.

✑ Data access charges must be minimized.

What should you recommend? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Storage account type:
General purpose v2 with Archive access tier for blobs
General purpose v2 with Cool access tier for blobs
General purpose v2 with Hot access tier for blobs
Configuration to prevent modifications and deletions:
Container access level
Container access policy
Storage account resource lock

A

Here’s the breakdown of the correct selections:

Storage account type: General purpose v2 with Hot access tier for blobs

General purpose v2: This is the recommended storage account type for most scenarios, including storing blobs. It supports the latest features and offers competitive pricing.

Hot access tier: While the Archive access tier would seem appealing for long-term retention, it’s not appropriate for data that needs to be accessed daily. Archive is designed for rarely accessed data and has significant retrieval costs and delays. The cool tier is also inappropriate since the files are accessed daily. The hot access tier is optimized for frequently accessed data, which aligns with the requirement of daily access.

Configuration to prevent modifications and deletions: Container access policy

Container access policy: Immutable storage for Azure Blob Storage is configured through a container-level access policy, specifically a time-based retention policy. With a five-year retention policy, blobs can be created and read, but not modified or deleted until the retention period expires (write once, read many, or WORM). After the five years, the blobs can be deleted but still never modified, which matches the requirements exactly.

Here’s why the other options are incorrect:

General purpose v2 with Archive access tier for blobs: As explained above, Archive is not suitable for data that is accessed daily; it has retrieval delays and high access charges.

General purpose v2 with Cool access tier for blobs: The Cool tier carries higher data access charges than Hot, so it is a poor fit for data that is read every day.

Container access level: Container access level only controls anonymous (public) read access to a container and its blobs. It doesn’t prevent modifications or deletions.

Storage account resource lock: Resource locks (CanNotDelete or ReadOnly) apply only to the management plane. A lock prevents changes to the storage account resource itself, but it does not stop data inside the account from being modified or deleted, so it cannot enforce the write-once requirement.
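A minimal sketch of the expected write-once behavior, assuming a container named records on which a time-based retention policy has already been configured; the account and blob names are placeholders.

    from azure.core.exceptions import HttpResponseError
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import ContainerClient

    records = ContainerClient(
        account_url="https://contosodata.blob.core.windows.net",  # assumed account
        container_name="records",      # assumed: time-based retention policy already applied
        credential=DefaultAzureCredential(),
    )

    records.upload_blob("2024/ledger.csv", b"id,amount\n1,10\n")  # writing new data is allowed

    try:
        records.upload_blob("2024/ledger.csv", b"tampered", overwrite=True)  # modification
    except HttpResponseError as err:
        print("Overwrite blocked by the immutability policy:", err.status_code)

    try:
        records.delete_blob("2024/ledger.csv")                               # deletion
    except HttpResponseError as err:
        print("Delete blocked by the immutability policy:", err.status_code)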

4
Q

HOTSPOT –

You are designing a data storage solution to support reporting.

The solution will ingest high volumes of data in the JSON format by using Azure Event Hubs. As the data arrives, Event Hubs will write the data to storage. The solution must meet the following requirements:

✑ Organize data in directories by date and time.

✑ Allow stored data to be queried directly, transformed into summarized tables, and then stored in a data warehouse.

✑ Ensure that the data warehouse can store 50 TB of relational data and support between 200 and 300 concurrent read operations.

Which service should you recommend for each type of data store? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Data store for the ingested data:
Azure Blob Storage
Azure Data Lake Storage Gen2
Azure Files
Azure NetApp Files
Data store for the data warehouse:
Azure Cosmos DB Cassandra API
Azure Cosmos DB SQL API
Azure SQL Database Hyperscale
Azure Synapse Analytics dedicated SQL pools

A

Here’s the breakdown of the correct selections:

Data store for the ingested data: Azure Data Lake Storage Gen2

Azure Data Lake Storage Gen2: This is the most suitable option for storing high volumes of data in JSON format when you need to organize it by date and time and query it directly. It combines the scalability and cost-effectiveness of Azure Blob Storage with the hierarchical namespace capabilities of Hadoop Distributed File System (HDFS), allowing you to organize data into directories for efficient querying. In addition, with its tiered access you can also retain the data long-term.
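To make the directory layout concrete, here is a hedged sketch that writes a batch of JSON events under a date/time path in a Data Lake Storage Gen2 filesystem, similar to what Event Hubs Capture produces; the account, filesystem, and path format are assumptions.

    import json
    from datetime import datetime, timezone
    from azure.identity import DefaultAzureCredential
    from azure.storage.filedatalake import DataLakeServiceClient

    service = DataLakeServiceClient(
        account_url="https://contosolake.dfs.core.windows.net",   # assumed account
        credential=DefaultAzureCredential(),
    )
    raw = service.get_file_system_client("raw")                   # assumed filesystem

    events = [{"userId": 42, "action": "view", "sku": "B-100"}]
    now = datetime.now(timezone.utc)

    # Hierarchical-namespace path organized by date and time, e.g. raw/clicks/2024/05/31/13/events.json
    path = f"clicks/{now:%Y/%m/%d/%H}/events.json"
    raw.get_file_client(path).upload_data(
        "\n".join(json.dumps(e) for e in events).encode("utf-8"),
        overwrite=True,
    )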

Data store for the data warehouse: Azure Synapse Analytics dedicated SQL pools

Azure Synapse Analytics dedicated SQL pools: This is the best choice for a data warehouse that needs to store 50 TB of relational data and support 200-300 concurrent read operations. Synapse Analytics dedicated SQL pools are designed for large-scale data warehousing workloads and provide excellent performance for complex queries and reporting.

Here’s why the other options are incorrect:

Azure Blob Storage: While Blob Storage can store large amounts of data, it lacks the hierarchical namespace of Data Lake Storage Gen2, making it less suitable for organizing data into directories by date and time for efficient querying. It is also difficult to perform advanced analytics on the data when stored in this fashion.

Azure Files: Azure Files provides fully managed file shares in the cloud, but it’s not designed for ingesting and storing high volumes of data for analytical purposes. It is not suitable for reporting.

Azure NetApp Files: Azure NetApp Files provides high-performance, enterprise-grade file storage, but it’s not the right choice for data warehousing or analytical workloads. Azure NetApp Files is also considerably more expensive than Data Lake Storage.

Azure Cosmos DB Cassandra API: While Cosmos DB offers high scalability and availability, the Cassandra API is not well-suited for complex analytical queries and data warehousing.

Azure Cosmos DB SQL API: Azure Cosmos DB is a NoSQL database designed for transactional workloads. It is not an ideal choice for a data warehouse that requires relational data storage and support for a high number of concurrent read operations. It is not designed for reporting.

Azure SQL Database Hyperscale: While Azure SQL Database Hyperscale can store up to 100 TB of data, it’s primarily designed for OLTP (Online Transaction Processing) workloads. Synapse Analytics dedicated SQL pools are better optimized for analytical queries and data warehousing.

5
Q

You have an app named App1 that uses an on-premises Microsoft SQL Server database named DB1.

You plan to migrate DB1 to an Azure SQL managed instance.

You need to enable customer managed Transparent Data Encryption (TDE) for the instance. The solution must maximize encryption strength.

Which type of encryption algorithm and key length should you use for the TDE protector?

A. RSA 3072
B. AES 256
C. RSA 4096
D. RSA 2048

A

The correct answer is A. RSA 3072

Here’s why:

Customer-Managed TDE and Encryption Strength: With customer-managed TDE, the TDE protector is an asymmetric key that you store in Azure Key Vault (or a managed HSM) and that you control. The goal is to choose the strongest protector that the service supports.

Supported TDE protector keys: For Azure SQL Database and Azure SQL Managed Instance, the customer-managed TDE protector must be an RSA (or RSA-HSM) key, and the supported key sizes are 2048 and 3072 bits. Of the supported sizes, 3072 bits is the longest, so RSA 3072 maximizes encryption strength.

Here’s why the other options are not the best choice:

B. AES 256: AES is a symmetric-key algorithm. The database encryption key itself is symmetric, but the TDE protector that you manage in Key Vault must be an asymmetric RSA key, so AES 256 cannot be used as the protector.

C. RSA 4096: Although a 4096-bit RSA key would be stronger in principle, it is not a supported key size for the TDE protector, so it cannot be selected.

D. RSA 2048: RSA 2048 is supported, but RSA 3072 is stronger.

Therefore, RSA 3072 offers the greatest encryption strength among the supported options.
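A minimal sketch, assuming a Key Vault named contoso-kv and permission to create keys; it creates the 3072-bit RSA key that would then be set as the managed instance’s TDE protector.

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.keys import KeyClient

    client = KeyClient(
        vault_url="https://contoso-kv.vault.azure.net",  # assumed vault
        credential=DefaultAzureCredential(),
    )

    # Create the asymmetric key used as the TDE protector; 3072 bits is the largest supported size.
    key = client.create_rsa_key("sqlmi-tde-protector", size=3072)
    print(key.id)  # assign this key identifier to the managed instance as its TDE protector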

6
Q

You have an Azure subscription. The subscription contains a tiered app named App1 that is distributed across multiple containers hosted in Azure Container Instances.

You need to deploy an Azure Monitor monitoring solution for App1. The solution must meet the following requirements:

  • Support using synthetic transaction monitoring to monitor traffic between the App1 components.
  • Minimize development effort.

What should you include in the solution?

A. Network insights
B. Application Insights
C. Container insights
D. Log Analytics Workspace insights

A

The correct answer is B. Application Insights.

Here’s why:

Application Insights: Application Insights is an Application Performance Management (APM) service that allows you to monitor the availability, performance, and usage of your web applications. It provides a variety of features that make it well-suited for monitoring App1, including:

Synthetic transactions (web tests): Application Insights supports creating web tests (ping tests, multi-step web tests) that simulate user interactions with your application. This allows you to monitor the availability and responsiveness of your application from different locations and to monitor the traffic between the components within App1.

Auto-instrumentation: For many common application frameworks (like .NET, Java, Node.js), Application Insights can automatically collect telemetry data without requiring you to manually add code to your application. This helps minimize development effort.

Dependency tracking: Automatically detects and tracks calls between components. This helps monitor traffic and performance between the different container instances hosting App1.

Performance analysis: It shows where App1 is performing slowly or failing and gives guidance on where to fix the issues.
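A minimal sketch of the low-effort instrumentation path, assuming the azure-monitor-opentelemetry distro is installed and a valid Application Insights connection string is available; the span name is illustrative.

    from azure.monitor.opentelemetry import configure_azure_monitor
    from opentelemetry import trace

    # One call wires up trace, metric, and dependency collection for Application Insights.
    configure_azure_monitor(connection_string="InstrumentationKey=<key>;IngestionEndpoint=<endpoint>")

    tracer = trace.get_tracer(__name__)

    # Calls between App1 components show up as dependencies on the application map;
    # availability (synthetic transaction) tests are then defined in the Application
    # Insights resource against the exposed endpoints.
    with tracer.start_as_current_span("app1-frontend-to-orders"):
        pass  # call the downstream container here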

Here’s why the other options are incorrect:

A. Network insights: Network insights primarily focuses on monitoring the health and performance of your Azure network infrastructure. While useful for network-related issues, it doesn’t provide the application-level monitoring capabilities needed to monitor the traffic between App1 components or support synthetic transactions.

C. Container insights: Container insights is designed for monitoring the health and performance of your container infrastructure. It provides metrics and logs for your containers, but it doesn’t offer the same level of application-specific monitoring features as Application Insights, such as synthetic transactions or dependency tracking.

D. Log Analytics Workspace insights: Log Analytics Workspace is a powerful tool for collecting and analyzing logs from various Azure resources. While you can use it to collect and query logs from App1, it doesn’t provide the same level of application monitoring features as Application Insights, such as synthetic transactions.

7
Q

You have 12 Azure subscriptions and three projects. Each project uses resources across multiple subscriptions.

You need to use Microsoft Cost Management to monitor costs on a per project basis. The solution must minimize administrative effort.

Which two components should you include in the solution? Each correct answer presents part of the solution.

NOTE: Each correct selection is worth one point.

A. budgets
B. resource tags
C. custom role-based access control (RBAC) roles
D. management groups
E. Azure boards

A

The two components you should include in the solution are:

B. resource tags

D. management groups

Here’s why:

Resource Tags: Tags are key-value pairs that you can apply to Azure resources. By tagging resources with a project identifier (e.g., “Project: Project1”), you can then use Cost Management to group and filter costs based on those tags. This allows you to easily see how much each project is costing you across all the subscriptions they use.

Management Groups: Management groups allow you to organize your Azure subscriptions into a hierarchy. Since each project uses resources across multiple subscriptions, you can create a management group for each project and move the relevant subscriptions into that management group. This allows you to view costs at the management group level, providing a consolidated view of project costs. Management Groups also allow you to apply Policies and role-based access control across your subscriptions which can simplify management.
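As a hedged illustration of how the tags pay off, this sketch groups an exported Cost Management CSV by the project tag; the file and column names are assumptions based on a typical export.

    import pandas as pd

    costs = pd.read_csv("cost-export.csv")        # assumed Cost Management export

    # Pull the project tag (e.g. "project": "Project1") out of the Tags column and total the spend.
    costs["project"] = costs["Tags"].str.extract(r'"project"\s*:\s*"([^"]+)"')
    per_project = costs.groupby("project")["CostInBillingCurrency"].sum().sort_values(ascending=False)
    print(per_project)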

Here’s why the other options are not as suitable:

A. Budgets: Budgets are useful for setting spending limits and receiving alerts when costs exceed those limits. However, they don’t help you organize and group costs across subscriptions on a per-project basis. They act more like a “governor” or a “limit”, rather than a way to analyze costs across projects.

C. Custom role-based access control (RBAC) roles: RBAC roles are used to control access to Azure resources. While you can use RBAC to control who can manage costs, it doesn’t help you organize and group costs for reporting purposes.

E. Azure boards: Azure Boards is a service for managing work items and tasks. It’s not related to cost management or resource organization.

8
Q

HOTSPOT

You have an Azure subscription that contains multiple storage accounts.

You assign Azure Policy definitions to the storage accounts.

You need to recommend a solution to meet the following requirements:

  • Trigger on-demand Azure Policy compliance scans.
  • Raise Azure Monitor non-compliance alerts by querying logs collected by Log Analytics.

What should you recommend for each requirement? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
To trigger the compliance scans, use:
An Azure template
The Azure Command-Line Interface (CLI)
The Azure portal
To generate the non-compliance alerts, configure diagnostic settings for the:
Azure activity logs
Log Analytics workspace
Storage accounts

A

Here’s the breakdown of the correct selections:

To trigger the compliance scans, use: The Azure Command-Line Interface (CLI)

The Azure CLI provides the necessary commands to initiate on-demand policy compliance scans. Specifically, the az policy state trigger-scan command triggers a policy evaluation for a specific scope (a subscription or a resource group). Neither the Azure portal nor an Azure template exposes a way to start a scan on demand; otherwise, evaluations run on a schedule or when resources change.

To generate the non-compliance alerts, configure diagnostic settings for the: Azure activity logs

The Azure Activity Log records control-plane actions and events for your Azure resources, including Azure Policy compliance events. By creating a diagnostic setting that exports the activity log to a Log Analytics workspace, you can query those logs for non-compliant resources and build Azure Monitor alert rules on the query results.
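A hedged sketch of the alerting side, assuming the activity log is already flowing into the workspace; the workspace GUID is a placeholder, and the KQL filters on the policy audit operation that Azure Policy records in the AzureActivity table.

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # The same KQL could back a scheduled Azure Monitor alert rule.
    query = """
    AzureActivity
    | where CategoryValue == "Policy"
    | where OperationNameValue =~ "Microsoft.Authorization/policies/audit/action"
    | project TimeGenerated, ResourceGroup, _ResourceId, Properties
    """

    result = client.query_workspace(
        workspace_id="<workspace-guid>",           # placeholder
        query=query,
        timespan=timedelta(hours=24),
    )
    for table in result.tables:
        for row in table.rows:
            print(row)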

Here’s why the other options are incorrect:

An Azure template: Azure templates are used for deploying and configuring Azure resources. While you can include policy assignments in a template, the template deployment itself doesn’t trigger on-demand compliance scans for existing resources.

The Azure portal: You can view the compliance status of resources through the Azure portal, but the portal doesn’t directly offer a way to trigger on-demand scans. Azure Policy scans happen on a schedule or when a change is made to a resource.

Log Analytics workspace: Log Analytics workspace is where logs are stored and analyzed, but it doesn’t generate its own “non-compliance alerts” without the Activity Log data being sent to it first. Configuring the diagnostic settings on the activity logs enables the workspace to access the data.

Storage accounts: While storage accounts can generate their own diagnostic logs for storage-related activities, these logs don’t include policy compliance information. The compliance status is captured in the Azure Activity Log.

9
Q

HOTSPOT

You have an Azure subscription. The subscription contains 100 virtual machines that run Windows Server 2022 and have the Azure Monitor Agent installed.

You need to recommend a solution that meets the following requirements:

  • Forwards JSON-formatted logs from the virtual machines to a Log Analytics workspace
  • Transforms the logs and stores the data in a table in the Log Analytics workspace

What should you include in the recommendation? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
Answer Area
To forward the logs:
A linked storage account for the Log Analytics workspace
An Azure Monitor data collection endpoint
A service endpoint
To transform the logs and store the data:
A KQL query
A WQL query
An XPath query

A

Here’s the breakdown of the correct selections:

To forward the logs: An Azure Monitor data collection endpoint

An Azure Monitor data collection endpoint (DCE) gives the Azure Monitor Agent a dedicated ingestion endpoint to send custom logs to; collecting JSON log files from the VMs with the agent requires one. The associated data collection rule (DCR) then defines the log source on the VMs, the transformation to apply (see below), and the destination table in the Log Analytics workspace.

To transform the logs and store the data: A KQL query

Kusto Query Language (KQL) is the query language used by Azure Monitor Logs. In a data collection rule, the ingestion-time transformation is written as a KQL statement that parses the incoming JSON, extracts the relevant fields, and shapes the rows for the destination table in the Log Analytics workspace.
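A hedged sketch of the relevant data collection rule fragment, expressed here as a Python dictionary; the stream, destination, and table names are assumptions, and the exact input columns depend on the stream declaration for the JSON source.

    # The agent sends logs to the data collection endpoint; this dataFlow in the DCR
    # applies the KQL transformation before the rows land in the workspace table.
    data_flow = {
        "streams": ["Custom-AppJsonLogs"],         # stream declared for the JSON log source
        "destinations": ["centralWorkspace"],      # Log Analytics destination defined in the DCR
        "outputStream": "Custom-AppLogs_CL",       # destination table in the workspace
        "transformKql": (
            "source "
            "| extend d = parse_json(RawData) "
            "| project TimeGenerated, Level = tostring(d.level), Message = tostring(d.message)"
        ),
    }
    print(data_flow["transformKql"])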

Here’s why the other options are incorrect:

A linked storage account for the Log Analytics workspace: While linking a storage account to Log Analytics can be useful for archiving data or for certain advanced scenarios, it’s not required or used to forward logs from VMs using the Azure Monitor Agent. This is required for VM Insights.

A service endpoint: Service endpoints provide secure and direct connectivity from your virtual network to Azure service resources. While they can enhance security, they don’t play a role in forwarding logs from VMs to Log Analytics.

A WQL query: WQL (WMI Query Language) is a query language used for querying Windows Management Instrumentation (WMI). While WMI can be used to collect some data from Windows VMs, it’s not the primary way to forward logs to Log Analytics in this scenario, especially JSON logs.

An XPath query: XPath is a query language used for navigating XML documents. While you can use XPath to parse XML data, it’s not relevant for parsing JSON-formatted logs.

10
Q

You have an Azure subscription that contains an Azure Blob Storage account named store1.

You have an on-premises file server named Server1 that runs Windows Server 2016. Server1 stores 500 GB of company files.

You need to store a copy of the company files from Server1 in store1.

Which two possible Azure services achieve this goal? Each correct answer presents a complete solution.

NOTE: Each correct selection is worth one point.

A. an Azure Logic Apps integration account
B. an Azure Import/Export job
C. Azure Data Factory
D. an Azure Analysis services On-premises data gateway
E. an Azure Batch account

A

Correct Answer: B, C

B: You can use the Azure Import/Export service to securely import large amounts of data to Azure Blob storage by shipping disk drives that contain the data to an Azure datacenter. Microsoft copies the data from the drives into the target storage account (store1), which suits a one-time transfer of the 500 GB of company files.

C: Azure Data Factory can copy the files from Server1 to store1 by using a self-hosted integration runtime to reach the on-premises file system and a copy activity that writes to Blob storage. The copy can also be scheduled if the files need to be refreshed over time.
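Neither recommended service requires custom code; purely as a hedged illustration of the end state (and of what a Data Factory copy activity automates), this sketch mirrors a local folder into store1. The container name and local path are assumptions.

    from pathlib import Path
    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import ContainerClient

    store1 = ContainerClient(
        account_url="https://store1.blob.core.windows.net",
        container_name="companyfiles",             # assumed container
        credential=DefaultAzureCredential(),
    )

    # Mirror a folder from Server1 into the container, preserving relative paths.
    root = Path(r"D:\CompanyFiles")                # assumed share path on Server1
    for path in root.rglob("*"):
        if path.is_file():
            with path.open("rb") as data:
                store1.upload_blob(path.relative_to(root).as_posix(), data, overwrite=True)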

11
Q

You have an Azure subscription that contains two applications named App1 and App2. App1 is a sales processing application. When a transaction in App1 requires shipping, a message is added to an Azure Storage account queue, and then App2 listens to the queue for relevant transactions.

In the future, additional applications will be added that will process some of the shipping requests based on the specific details of the transactions.

You need to recommend a replacement for the storage account queue to ensure that each additional application will be able to read the relevant transactions.

What should you recommend?

A. one Azure Data Factory pipeline
B. multiple storage account queues
C. one Azure Service Bus queue
D. one Azure Service Bus topic

A

Correct Answer: D

A queue allows processing of a message by a single consumer. In contrast, topics and subscriptions provide a one-to-many form of communication in a publish-subscribe pattern, which is useful for scaling to large numbers of recipients. Each published message is made available to every subscription registered with the topic: the publisher sends a message to the topic, and one or more subscribers receive a copy, depending on the filter rules set on their subscriptions. This lets each future application read only the shipping transactions relevant to it.
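A minimal sketch of the topic pattern, assuming a topic named shipping with one subscription per consuming application; the connection string, property names, and subscription name are placeholders.

    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    client = ServiceBusClient.from_connection_string("<service-bus-connection-string>")

    # App1 publishes each shipping request once...
    with client.get_topic_sender(topic_name="shipping") as sender:
        sender.send_messages(
            ServiceBusMessage(b'{"orderId": 1001}', application_properties={"carrier": "contoso-express"})
        )

    # ...and App2 (and any future app, through its own filtered subscription) receives its copy.
    with client.get_subscription_receiver(topic_name="shipping", subscription_name="app2") as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)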

12
Q

You are designing a SQL database solution. The solution will include 20 databases that will be 20 GB each and have varying usage patterns.

You need to recommend a database platform to host the databases. The solution must meet the following requirements:

✑ The solution must meet a Service Level Agreement (SLA) of 99.99% uptime.

✑ The compute resources allocated to the databases must scale dynamically.

✑ The solution must have reserved capacity.

Compute charges must be minimized.

What should you include in the recommendation?

A. an elastic pool that contains 20 Azure SQL databases
B. 20 databases on a Microsoft SQL server that runs on an Azure virtual machine in an availability set
C. 20 databases on a Microsoft SQL server that runs on an Azure virtual machine
D. 20 instances of Azure SQL Database serverless

A

Correct Answer: A

An elastic pool lets the 20 databases share a single set of compute resources, so databases with varying usage patterns scale dynamically within the pool while keeping overall compute charges down. Azure SQL Database elastic pools carry a 99.99% uptime SLA, and reserved capacity can be purchased for elastic pools.

Reserved capacity also provides the flexibility to temporarily move hot databases in and out of elastic pools (within the same region and performance tier) as part of normal operations without losing the reserved capacity benefit. Azure SQL Database serverless (option D) does not support reserved capacity, and SQL Server on Azure virtual machines (options B and C) does not provide dynamic scaling or the required SLA without additional configuration and cost.

13
Q

HOTSPOT –

You have an on-premises database that you plan to migrate to Azure.

You need to design the database architecture to meet the following requirements:

✑ Support scaling up and down.

✑ Support geo-redundant backups.

✑ Support a database of up to 75 TB.

✑ Be optimized for online transaction processing (OLTP).

What should you include in the design? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
Service:
Azure SQL Database
Azure SQL Managed Instance
Azure Synapse Analytics
SQL Server on Azure Virtual Machines
Service tier:
Basic
Business Critical
General Purpose
Hyperscale
Premium
Standard

A

Here’s the breakdown of the correct selections:

Service: Azure SQL Database

Azure SQL Database is a fully managed platform as a service (PaaS) database engine. It offers automatic scaling, geo-redundant backups, and is generally optimized for OLTP workloads. Managed Instance and Synapse Analytics are not the correct choices.

Service tier: Hyperscale

Hyperscale is the Azure SQL Database service tier that supports a database of up to 100 TB (well exceeding the 75 TB requirement). It also provides rapid scaling capabilities to accommodate fluctuating workloads and offers geo-redundant backups. Azure SQL Hyperscale is designed for scaling compute and storage resources independently.

Here’s why the other options are incorrect:

Azure SQL Managed Instance: While Managed Instance offers compatibility with on-premises SQL Server, it’s generally more complex to manage than Azure SQL Database. Also, Managed Instance has limitations around the maximum database size (smaller than Hyperscale) and scaling flexibility.

Azure Synapse Analytics: Azure Synapse Analytics is optimized for analytical workloads (OLAP) and data warehousing, not OLTP. It also has different pricing and scaling models compared to Azure SQL Database.

SQL Server on Azure Virtual Machines: This is an Infrastructure-as-a-Service (IaaS) option where you manage the SQL Server installation and infrastructure yourself. It doesn’t offer the same level of managed scaling and geo-redundancy as Azure SQL Database with Hyperscale.

Basic, Standard, Premium: These service tiers for Azure SQL Database have limitations on database size (smaller than 75 TB) and scaling capabilities compared to Hyperscale.

Business Critical, General Purpose: These service tiers for Azure SQL Database do not support the 75 TB requirement. Also, they do not scale as quickly as the hyperscale tier.

14
Q

HOTSPOT –

You have an Azure subscription that contains the resources shown in the following table.

Name       | Type                          | Account Kind                   | Location
storage1   | Azure Storage account         | Storage (general purpose v1)   | East US
storage2   | Azure Storage account         | StorageV2 (general purpose v2) | East US
Workspace1 | Azure Log Analytics workspace | Not applicable                 | East US
Workspace2 | Azure Log Analytics workspace | Not applicable                 | East US
Hub1       | Azure event hub               | Not applicable                 | East US

You create an Azure SQL database named DB1 that is hosted in the East US Azure region.

To DB1, you add a diagnostic setting named Settings1. Settings1 archives SQLInsights to storage1 and sends SQLInsights to Workspace1.

For each of the following statements, select Yes if the statement is true. Otherwise, select No.

Hot Area:
Answer Area
Statements
You can add a new diagnostic setting that archives SQLInsights logs to storage2.
You can add a new diagnostic setting that sends SQLInsights logs to Workspace2.
You can add a new diagnostic setting that sends SQLInsights logs to Hub1.

A

Here’s the breakdown of the answers:

You can add a new diagnostic setting that archives SQLInsights logs to storage2.

Yes. Diagnostic settings can archive logs to any storage account within the same Azure region as the source resource (DB1 in this case). storage2 is a general purpose v2 storage account in the same region.

You can add a new diagnostic setting that sends SQLInsights logs to Workspace2.

Yes. Diagnostic settings can send logs to a Log Analytics workspace, and the workspace does not need to be in the same region as the database. A resource can also have multiple diagnostic settings (up to five), so adding a setting that targets Workspace2 is allowed.

You can add a new diagnostic setting that sends SQLInsights logs to Hub1.

Yes. Diagnostic settings can stream logs to Event Hubs, so long as the event hub exists in the same Azure region.
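A hedged sketch of what an additional setting’s payload could look like; the property names follow the Azure Monitor diagnostic settings schema, and the resource IDs are placeholders. A real deployment would submit this to the Microsoft.Insights/diagnosticSettings API, for example from an ARM/Bicep template or an SDK.

    new_setting = {
        "name": "Settings2",
        "properties": {
            "logs": [{"category": "SQLInsights", "enabled": True}],
            "storageAccountId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                                "Microsoft.Storage/storageAccounts/storage2",
            "workspaceId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                           "Microsoft.OperationalInsights/workspaces/Workspace2",
            "eventHubAuthorizationRuleId": "/subscriptions/<sub>/resourceGroups/<rg>/providers/"
                                           "Microsoft.EventHub/namespaces/<namespace>/"
                                           "authorizationRules/RootManageSharedAccessKey",
            "eventHubName": "Hub1",
        },
    }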

In summary:

Yes

Yes

Yes

14
Q

DRAG DROP

You have an Azure AD tenant that contains an administrative unit named MarketingAU. MarketingAU contains 100 users.

You create two users named User1 and User2.

You need to ensure that the users can perform the following actions in MarketingAU:

  • User1 must be able to create user accounts.
  • User2 must be able to reset user passwords.

Which role should you assign to each user? To answer, drag the appropriate roles to the correct users. Each role may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content.

NOTE: Each correct selection is worth one point.
Roles
Answer Area
Helpdesk Administrator for MarketingAU
User1:
Role
Helpdesk Administrator for the tenant
User2:
Role
User Administrator for MarketingAU
User Administrator for the tenant

A

Here’s the correct mapping of roles to users:

User1: User Administrator for MarketingAU

The User Administrator role grants the ability to create, manage, and delete user accounts. Since the requirement is to create user accounts within MarketingAU, this is the correct role to assign to User1. The scope of this role assignment needs to be MarketingAU to limit actions to only the scope of that administrative unit.

User2: Helpdesk Administrator for MarketingAU

The Helpdesk Administrator role allows resetting passwords for non-administrator users. Scoping the assignment to MarketingAU ensures that User2 can reset passwords only for users within that administrative unit.
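A hedged sketch of the scoped assignment for User1, using the Microsoft Graph role assignment endpoint; the token, object IDs, and role definition ID are placeholders.

    import requests

    headers = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}
    body = {
        "@odata.type": "#microsoft.graph.unifiedRoleAssignment",
        "roleDefinitionId": "<user-administrator-role-definition-id>",
        "principalId": "<user1-object-id>",
        "directoryScopeId": "/administrativeUnits/<marketingau-id>",  # restricts the role to the AU
    }
    resp = requests.post(
        "https://graph.microsoft.com/v1.0/roleManagement/directory/roleAssignments",
        headers=headers,
        json=body,
    )
    resp.raise_for_status()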

Here’s why the other role options are incorrect:

Helpdesk Administrator for the tenant: While the Helpdesk Administrator role does allow resetting passwords, assigning it at the tenant level would grant User2 the ability to reset passwords for all users in the Azure AD tenant, which is not the requirement.

User Administrator for the tenant: This would let User1 create and manage users across the entire tenant rather than only within MarketingAU, which grants more privilege than the requirement calls for.

15
Q

You plan to deploy an Azure SQL database that will store Personally Identifiable Information (PII).

You need to ensure that only privileged users can view the PII.

What should you include in the solution?

A. dynamic data masking
B. role-based access control (RBAC)
C. Data Discovery & Classification
D. Transparent Data Encryption (TDE)

A

Correct Answer: A

Dynamic data masking limits sensitive data exposure by masking it to non-privileged users.

Dynamic data masking helps prevent unauthorized access to sensitive data by enabling customers to designate how much of the sensitive data to reveal with minimal impact on the application layer. It’s a policy-based security feature that hides the sensitive data in the result set of a query over designated database fields, while the data in the database is not changed.
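A minimal sketch of how the masking rule and the privileged exception are defined, assuming a table dbo.Customers with an Email column and a database user named PrivilegedAnalyst; the T-SQL is issued here through pyodbc, but any SQL client works.

    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};Server=tcp:<server>.database.windows.net,1433;"
        "Database=<database>;Uid=<user>;Pwd=<password>;Encrypt=yes;"
    )
    cursor = conn.cursor()

    # Mask the PII column for everyone by default...
    cursor.execute("ALTER TABLE dbo.Customers ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()');")

    # ...and let only privileged users see the real values.
    cursor.execute("GRANT UNMASK TO PrivilegedAnalyst;")
    conn.commit()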

16
Q

You plan to deploy an app that will use an Azure Storage account.

You need to deploy the storage account. The storage account must meet the following requirements:

✑ Store the data for multiple users.

✑ Encrypt each user’s data by using a separate key.

✑ Encrypt all the data in the storage account by using customer-managed keys.

What should you deploy?

A. files in a premium file share storage account
B. blobs in a general purpose v2 storage account
C. blobs in an Azure Data Lake Storage Gen2 account
D. files in a general purpose v2 storage account

A

The correct answer is B. blobs in a general purpose v2 storage account

Here’s why:

Encryption scopes: Blob storage in a general purpose v2 account supports encryption scopes, which let you manage encryption at the container or individual blob level. You can create one encryption scope per user, protect each scope with its own key in Azure Key Vault, and write each user’s blobs under that user’s scope. This satisfies the requirement to encrypt each user’s data by using a separate key.

Customer-managed keys for the account: Independently of the per-user scopes, the storage account’s default encryption can be configured to use a customer-managed key in Azure Key Vault, satisfying the requirement that all the data in the account is encrypted by using customer-managed keys.

Here’s why the other options are not as suitable:

A. files in a premium file share storage account / D. files in a general purpose v2 storage account: Encryption scopes apply to Blob storage, not Azure Files, so per-user keys cannot be configured for file shares.

C. blobs in an Azure Data Lake Storage Gen2 account: Data Lake Storage Gen2 is aimed at analytics workloads, and encryption scope support for accounts with a hierarchical namespace is more limited; for a straightforward multi-user app, blobs in a general purpose v2 account are the intended choice.
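A minimal sketch of the per-user encryption, assuming encryption scopes named scope-user1 and scope-user2 have already been created on the account, each backed by its own customer-managed key; the account and container names are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.storage.blob import ContainerClient

    userdata = ContainerClient(
        account_url="https://<account>.blob.core.windows.net",
        container_name="userdata",                 # assumed container
        credential=DefaultAzureCredential(),
    )

    # Each user's blobs are written under a scope whose key belongs to that user.
    userdata.upload_blob("user1/profile.json", b"{}", encryption_scope="scope-user1")
    userdata.upload_blob("user2/profile.json", b"{}", encryption_scope="scope-user2")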

17
Q

HOTSPOT –

You deploy several Azure SQL Database instances.

You plan to configure the Diagnostics settings on the databases as shown in the following exhibit.
Diagnostics setting

Diagnostic setting name: Diagnostic1
Category details

log

SQLInsights
Retention (days): 90
AutomaticTuning
Retention (days): 30
QueryStoreRuntimeStatistics
Retention (days): 0
QueryStoreWaitStatistics
Retention (days): 0
Errors
Retention (days): 0
DatabaseWaitStatistics
Retention (days): 0
Timeouts
Retention (days): 0
Blocks
Retention (days): 0
Deadlocks
Retention (days): 0
metric

Basic
Retention (days): 0
Destination details

Send to Log Analytics (checked)
Subscription: Azure Pass - Sponsorship
Log Analytics workspace: sk200814 (eastus)
Archive to a storage account (checked)
Location: East US
Storage account: contoso20
Stream to an event hub (unchecked)
Use the drop-down menus to select the answer choice that completes each statement based on the information presented in the graphic.

NOTE: Each correct selection is worth one point.

Hot Area:
Answer Area
The amount of time that SQLInsights data will be stored in blob storage is [answer choice].
  30 days
  90 days
  730 days
  indefinite
The maximum amount of time that SQLInsights data can be stored in Azure Log Analytics is [answer choice].
  30 days
  90 days
  730 days
  indefinite

A

Box 1: 90 days –

The diagnostic setting archives SQLInsights to the storage account with a retention of 90 days, as shown in the exhibit, so the data is kept in blob storage for 90 days.

Box 2: 730 days –

Data in a Log Analytics workspace can be retained for interactive queries for a maximum of 730 days (two years), so that is the longest the SQLInsights data can be stored in Azure Log Analytics.
