Azure Data Engineering Certification Flashcards
resource group
grouping of your services for a project or app; controls pricing, budgets, permissions, policies
performance (Standard/Premium)
Premium: uses solid-state drives; more expensive, better performance. The performance tier changes the available access tiers and replication options.
4 Services Offered in Azure Storage
Blob: any type of unstructured data
Table: NoSQL non-relational data tables
File: similar to OneDrive/Google Drive; attaches to multiple VMs for read/write access
Queue: message storage
Attributes of Azure Storage (CAASS)
Cost Effective
Available & Durable: redundant & replication of data
Accessible: REST APIs, SDKs (software development kits), Azure CLI (command-line interface), PowerShell, Storage Explorer, AzCopy
Security: private endpoints, user access controls, HTTPS, virtual networks, encrypted data
Scalable
5 APIs of Cosmos DB
(Maggie Crosses The Street w/ Greg)
(details)
SQL: JSON format
Table: key-value pairs (think dictionary in Python)
MongoDB: JSON documents (stored as BSON)
Cassandra: wide-column data
Gremlin: graph data
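The five data models above shape the same record quite differently. A rough Python sketch of those shapes (illustrative only; these are plain dictionaries, not actual Azure SDK objects):

```python
# Illustrative only: how one "person" record might be shaped
# under each Cosmos DB data model (no Azure SDK involved).

# SQL (Core) & MongoDB: a JSON-style document
document = {"id": "1", "name": "Maggie", "city": "Seattle"}

# Table: a flat key-value entity keyed by PartitionKey + RowKey
table_entity = {"PartitionKey": "people", "RowKey": "1",
                "name": "Maggie", "city": "Seattle"}

# Cassandra: wide-column, each column stored separately per row key
wide_column = {"row_key": "1",
               "columns": {"name": "Maggie", "city": "Seattle"}}

# Gremlin: graph, vertices (nodes) plus edges between them
vertices = [{"id": "1", "label": "person", "name": "Maggie"},
            {"id": "2", "label": "city", "name": "Seattle"}]
edges = [{"from": "1", "to": "2", "label": "livesIn"}]
```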
Account Selection
For Standard performance, what account types can you select and what replication options are offered?
For Premium performance, what account types can you select and what replication options are offered?
Standard
General Purpose v1: LRS, GRS, RA-GRS
General Purpose v2: LRS, GRS, RA-GRS, ZRS, GZRS, RA-GZRS
BlobStorage: LRS, GRS, RA-GRS
Premium
General Purpose v1: LRS
General Purpose v2: LRS
BlockBlobStorage: LRS, ZRS
FileStorage: LRS, ZRS
What functions do you get with the different storage types (5)?
Differences with Premium performance?
General Purpose v1
- supports all storage types, up to 100 terabytes
- supports all blob storage types: Block, Append, Page, Hierarchical
- used for VMs & VN (virtual networks) still on classic deployment
- Premium: only page blob supported
General Purpose v2
- supports all storage types, up to 100 terabytes
- supports all blob storage types: Block, Append, Page, Hierarchical
- supports blob access tiers (Hot, Cool, Archive)
- Premium: only page blob supported
BlobStorage
- only available in Standard performance
- supports only block & append blob storage; no other storage types supported
BlockBlobStorage
- only available in Premium performance
- only supports block & append blob storage; no other storage types supported
- does not support blob access tiers (Hot, Cool, Archive)
- designed for high-performance, low-latency, interactive workloads and mapping apps (analytics/data transformation, e-commerce, quick display)
FileStorage
- only available in Premium performance
- higher performance and lower latency compared to general purpose
- IOPS bursting: up to 3x the baseline input/output operations per second
- billed based on provisioned storage up to 100 terabytes
Setting up an Account: What are the different Networking options (3)?
Public (all networks): accessible from all networks, including the internet
Public (selected networks): access limited to selected virtual networks; prevents general internet access
Private Endpoint: secured access over a private virtual network using a private IP address. Connects to on-premises or ExpressRoute connections, placing the Azure service inside your virtual network. Requires a private DNS zone for name resolution.
What is Blob Soft Delete?
a recycle bin for your blobs in case of accidental deletion; deleted blobs are retained for a configurable period
3 Blob Access Tiers & Definitions
Hot: lowest access cost, highest storage cost. Designed for current, frequently used data.
Cool: higher access cost, lower storage cost. Designed for older data not used frequently.
Archive: highest access cost, lowest storage cost. Data is offline and can take hours to retrieve. Designed for historical, very old data.
Locally Redundant Storage
Replication within a single zone in a region, across different hardware racks (also called nodes).
If the zone goes down, data is lost. This is the default replication option.

Zone Redundant Storage
Data copied across availability zones
If a region goes down, data is lost

Geo Redundant Storage
Data copied across regions to prevent loss of data in the event of a natural disaster. Generally done within the same country.
Data in the secondary region is not available to applications without a failover initiated by MS. If region A goes down, MS will initiate a failover and then your data will be available from region B.

Geo-Zone Redundant Storage
Copied across availability zones within the 1st region; copied within a single availability zone in the 2nd region.
Data in the secondary region is not available to applications without a failover initiated by MS. If region A goes down, MS will initiate a failover and then your data will be available from region B.
High availability and disaster recovery.

Read-Access Geo Redundant Storage
Data copied across regions to prevent loss of data in the event of a natural disaster. Generally done within the same country.
Without a failover, data in the secondary region is readable by users closest to that region. Data is always available for your applications to read.
High availability, disaster recovery, & immediate access in the event of a natural disaster.

Read-Access Geo-Zone Redundant Storage
Copied across availability zones within the 1st region; copied within a single availability zone in the 2nd region. Without a failover, data in the secondary region is readable by users closest to that region.
Data is always available for your applications to read.

Azure Blob Storage Types (4)
Block: up to ~4.75 terabytes per blob; composed of blocks to optimize data for uploading
Append: optimized for append operations; ideal for logs
Page: VM disks & databases; for frequent, random read/write operations
Hierarchical: allows a collection of files to be organized into a hierarchy of directories
Advantages (5) & Disadvantages (5)
Azure Blob Storage
Advantages
- designed for all types of unstructured data
- scalable
- cheap
- simple set up, no configuration
- no need for powerful computing to manage
Disadvantages
- no indexes
- no search tools
- not optimized for performance
- user responsible for replication & syncing
- requires external computing to process
Multi-model Cosmos DB
(4 General Types of NoSQL Databases)
(5 APIs per NoSQL Type)
(provide information about each API)
Document APIs
SQL (Core)
- supports a server-side programming model (stored procedures, triggers, UDFs)
- JSON documents
- SQL-like query language for NoSQL
- the default (native) API when transitioning other database systems into Azure
mongoDB
- all MongoDB SDKs can interact with the Azure API; fully compatible with MongoDB application code
- implements the MongoDB “wire” protocol
- BSON documents (binary JSON)
Key-Value API
Table
- premium offering for Azure Table storage
- not traditional SQL “table”
- rows can be of different lengths
- row value can be simple number
Wide-Column API
Cassandra
- data is stored in columns; each column is stored separately (each attribute is separated from the others; think of individual lists of columns)
- name and format of columns can vary from row to row
- compatible with current, external Cassandra deployments
- ways to interact with Cassandra
- Cassandra-based tools
- Data Explorer
- SDK: Cassandra C# driver (CassandraCSharpDriver)
Graph API
Gremlin
- entity relationships: nodes and edges
- use cases
- geospatial
- recommendation engines
- social networks
- IoT
- persists relationships at the storage layer
- no model required
Redundancies Available in Cosmos DB
Geo-Redundant
Multi-Region Write
Availability Zone
Encryption in Cosmos DB
(defaults & choices)
Default
- data is always encrypted at rest (stored data)
Choices
- Service Managed: Azure managed
- Customer Managed: User set encryption and key
Latency
(definition & mitigation)
Latency is the wait time between request and response. It is mitigated by housing the server as close as possible to the user.
Throughput
(definition, when is the amount set, in what units does Azure manage throughput & the calculation)
Throughput is the number of requests that can be processed by the database within a given timeframe. The throughput amount can be defined either at the database level or at the container level. If requests exceed the provisioned throughput, they are rate-limited and an error (HTTP 429) is returned.
Cosmos DB manages throughput in Request Units (RUs)
RU calculation: a blend of memory, CPU, and IOPS (input/output operations per second)
Container
Components (5) & Names per API (3 for each API)
Components
- Database
- Throughput
- Container ID
- Partition Key
- Analytical Store
SQL API
Database is defined as Database
Container is defined as Container
Item is defined as Document
Cassandra API
Database is defined as Keyspace
Container is defined as Table
Item is defined as Row
MongoDB API
Database is defined as Database
Container is defined as Collection
Item is defined as Document
Gremlin API
Database is defined as Database
Container is defined as Graph
Item is defined as Node or Edge
Table API
Database is not defined
Container is defined as Table
Item is defined as Item
Partitioning Definitions
- Partitioning
- Partition Keys (rem. imp. fct.)
- Logical Partition
- Physical Partition (rem. imp. fct.)
- Composite Key (add. term)
- Partition Restrictions
Partition: items in a container are divided into distinct subsets called logical partitions.
Partition Key: the value by which Azure organizes your data into logical divisions. Cannot change partition key after creation of the database or container.
Logical Partitions: subsets of your data divided by the partition key
Physical Partitions: the physical machines that house the different logical partitions. Logical partitions are never divided across multiple physical partitions.
Composite Key: multiple unique identifiers combined to create a single partition key, further subdividing data into smaller units.
Restrictions:
- Each document cannot exceed 2MB
- Each logical partition cannot exceed 20GB
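The partition-key, composite-key, and physical-partition definitions above can be sketched in Python. The hash function and partition count here are illustrative assumptions, not Cosmos DB internals:

```python
# Sketch of how a partition key maps items to logical partitions,
# and logical partitions to physical partitions. The MD5 hash and
# the partition count are illustrative, not Cosmos DB internals.
import hashlib

NUM_PHYSICAL_PARTITIONS = 4  # hypothetical

def composite_key(device_id: str, month: str) -> str:
    # A composite (synthetic) key subdivides data further than
    # either value alone would.
    return f"{device_id}-{month}"

def physical_partition(partition_key: str) -> int:
    # All items sharing a partition key form one logical partition,
    # which lives entirely on one physical partition.
    digest = hashlib.md5(partition_key.encode()).hexdigest()
    return int(digest, 16) % NUM_PHYSICAL_PARTITIONS

key = composite_key("sensor-42", "2024-01")
assert physical_partition(key) == physical_partition(key)  # deterministic
```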

Dedicated & Shared Throughput
(definitions)
When you define throughput at the database level:
Shared: throughput is shared among all containers in the database (recommended)
Dedicated: throughput reserved for a specific container; if throughput is defined at the container level, it is dedicated by default
Hot Partition
(definition)
a logical partition that runs out of RUs while other logical partitions have plenty of available RUs, caused by unevenly distributed data or traffic
(good practice: create partition keys that evenly distribute data across logical partitions)
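A quick Python sketch of why a low-cardinality partition key produces a hot partition (the order data is hypothetical):

```python
# Sketch: counting items per partition-key value shows how a
# low-cardinality key (country) concentrates data in one logical
# partition, while a high-cardinality key (order_id) spreads it out.
from collections import Counter

orders = [{"order_id": f"o{i}", "country": "US" if i % 10 else "JP"}
          for i in range(1000)]

by_country = Counter(o["country"] for o in orders)    # low cardinality
by_order_id = Counter(o["order_id"] for o in orders)  # high cardinality

# "US" absorbs 90% of items: a hot partition under /country
assert by_country["US"] == 900
# /order_id spreads load evenly: one item per logical partition
assert max(by_order_id.values()) == 1
```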
Single v. Cross Partition Queries
Single: all data for the query can be found in a single logical partition (most efficient).
Cross: the query has to look across multiple logical partitions to find its data. Also called a fan-out query.
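A minimal Python sketch of the difference (the partition layout and data are hypothetical):

```python
# Sketch: a single-partition query touches one logical partition;
# a cross-partition ("fan-out") query must scan all of them,
# which consumes more RUs.
partitions = {
    "US": [{"id": "1", "total": 30}, {"id": "2", "total": 75}],
    "JP": [{"id": "3", "total": 50}],
}

def single_partition_query(pk, predicate):
    # Partition key known: only one partition is read.
    return [item for item in partitions[pk] if predicate(item)]

def cross_partition_query(predicate):
    # No partition key: every partition is read (fan-out).
    return [item for items in partitions.values()
            for item in items if predicate(item)]

assert len(single_partition_query("US", lambda i: i["total"] > 50)) == 1
assert len(cross_partition_query(lambda i: i["total"] >= 50)) == 2
```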
High Cardinality
(definition within context of databases)
columns with values that are unique or very uncommon
Fixed Request Charge
the cost to run each query against your data
Time to Live
- the time period for which data stays active before it is automatically deleted
- set the Time to Live value under container settings
- deletion defaults to consuming only leftover RUs; if other workloads are running, Time to Live deletions will be delayed
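A minimal Python sketch of the expiry semantics (the field names `_ts` and `ttl` mirror the Cosmos DB convention; the purge logic itself is illustrative):

```python
# Sketch of time-to-live (TTL) semantics: an item expires once its
# age exceeds the TTL in seconds; expired items are purged lazily.
DEFAULT_TTL = 3600  # hypothetical container-level default, seconds

def is_expired(item, now):
    ttl = item.get("ttl", DEFAULT_TTL)  # per-item override wins
    if ttl == -1:                       # -1 means never expire
        return False
    return now - item["_ts"] > ttl

now = 10_000
fresh  = {"_ts": now - 100}               # 100 s old -> kept
stale  = {"_ts": now - 7_200}             # 2 h old   -> purged
pinned = {"_ts": now - 7_200, "ttl": -1}  # never expires

assert not is_expired(fresh, now)
assert is_expired(stale, now)
assert not is_expired(pinned, now)
```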
Cosmos DB Global Distribution
(def., paired regions, multi-region write & choices)
Definition
Data can be replicated globally and read from any selected region. Storage and throughput are copied into each selected global region.
Paired Regions
Two geographic centers with a high-speed connection. Used for disaster recovery and business continuity purposes.
Multi-Region Write (Multi-Master Write)
Users in two separate regions (ex. Japan & US) update the same data at the same time. Options:
- last write wins (must define the deciding property, ex. a timestamp)
- merge procedure (you define the procedure)
- merge procedure (not defined)
- conflicting writes are stored and you manually resolve them with a stored procedure later
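Last write wins can be sketched in a few lines of Python (the `_ts` timestamp property and the sample writes are illustrative):

```python
# Sketch of last-write-wins conflict resolution in multi-region
# write: when two regions update the same item concurrently, the
# version with the higher timestamp property (here, _ts) wins.
def resolve_last_write_wins(version_a, version_b):
    return version_a if version_a["_ts"] >= version_b["_ts"] else version_b

japan_write = {"id": "1", "qty": 5, "_ts": 1_700_000_010}
us_write    = {"id": "1", "qty": 9, "_ts": 1_700_000_020}

winner = resolve_last_write_wins(japan_write, us_write)
assert winner["qty"] == 9  # the later US write wins
```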
Automatic v. Manual Failover
(definitions & when it applies)
applies when there is only one write-enabled region
Manual: the user chooses the next write-enabled region
Automatic: failover priorities are decided in advance, before a disaster occurs
replication will automatically occur in either scenario as long as a global backup region has been identified
Consistency Levels of Cosmos DB
&
Definitions
(5)
In general, there is a trade-off between consistency and availability.
Strong: always read the most up-to-date data; no dirty reads. Highest latency, highest cost.
Bounded Staleness: dirty reads are possible only within a bounded window (at most K versions or a time interval T behind the writes).
Session: no dirty reads within a session; once the session ends, dirty reads are possible. No dirty reads for writers in the same session, but dirty reads are possible for other users.
Consistent Prefix: dirty reads are possible but updates are never seen out of order. Data is always read in write order, although it may not be the most recent data.
Eventual: responds to requests immediately, so dirty reads are possible and those reads may be out of order. Eventually everything converges to the correct data.
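A small Python sketch contrasting Consistent Prefix with the stronger/weaker levels, modeled as a lagging replica that never reorders writes (the write log is hypothetical):

```python
# Sketch: under Consistent Prefix, a replica may lag behind the
# writes but never reorders them, so every read returns an
# in-order prefix of the write log. (Under Eventual, reads could
# additionally come back out of order.)
writes = ["w1", "w2", "w3", "w4"]

def consistent_prefix_read(applied_count):
    # The replica has applied only the first N writes.
    return writes[:applied_count]

lagging_read = consistent_prefix_read(2)
assert lagging_read == ["w1", "w2"]                 # stale, but in order
assert lagging_read == writes[:len(lagging_read)]   # always a prefix
```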
Is it possible for clients to override consistency levels?
clients can set consistency levels to a lower level at connection time
(Strong is the highest consistency level)
Areas Covered in Non-Relational Portion of Exam
Azure Storage
- how to provision an account
- replication options (LRS, GRS, ZRS, GZRS, RA-GRS, RA-GZRS)
- blob storage
Data Lake
- evolution from Blob & distinctions
- security options
Cosmos DB (largest area of this portion)
- features
- multi-model
- consistency levels
- databases & containers
- throughput & request
- partitioning & horizontal scaling
- global distribution
- multi-master write
- failover
- time to live
- CLI (code to create an account)
- security
- pricing