Ch. 2 Implement and manage storage. Flashcards

1
Q

How are storage accounts managed?

A

Through the Azure Resource Manager.

2
Q

How are storage accounts authenticated and authorized?

A

Through Azure Active Directory and RBAC.

3
Q

Azure storage account

A
  • contains all of your Azure Storage data objects (blob, file shares, queues, tables, and disks)
  • provides a unique namespace for your Azure Storage data that’s accessible from anywhere in the world over HTTP and HTTPS
  • data in the storage account is durable and highly available, secure, and massively scalable
4
Q

Endpoint

A

the combination of the account name and the service endpoint

5
Q

Storage firewall

A

allows you to limit access to specific IP addresses or an IP address range

  • by limiting access to the IP address range of your company, access from other locations will be blocked
  • service endpoints are used to restrict access to specific subnets within an Azure VNet
6
Q

Where do you configure the storage firewall using Azure portal?

A
  1. open the storage account blade
  2. Click Firewalls and virtual networks
  3. Under Allow Access From, click Selected Networks to reveal the Firewall and Virtual Network settings
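
The same restriction can also be applied from the command line. A minimal Azure CLI sketch, assuming a storage account named examref in a resource group examref-rg (both placeholders):

# deny all traffic by default, then allow a specific public IP range
az storage account update --name examref --resource-group examref-rg --default-action Deny
az storage account network-rule add --account-name examref --resource-group examref-rg --ip-address 203.0.113.0/24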
7
Q

Address space for storage firewall

A

When creating a storage firewall, you must use public Internet IP address space. You cannot use IPs in the private IP address space.

8
Q

How do you access the storage account via the Internet?

A

Use the storage firewall to specify the Internet-facing source IP addresses.

9
Q

Benefits of using network service endpoints

A
  1. allows you to remove access from the public Internet and only allow traffic from a virtual network for improved security
  2. optimized routing - service endpoints create a direct network route from the virtual network to the storage service
10
Q

Configuring service endpoints

A

Step 1. create the route from the subnet to the storage service

Step 2. configure which virtual networks can access a particular storage account
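
A hedged Azure CLI sketch of both steps; the virtual network, subnet, storage account, and resource group names are placeholders:

# Step 1: enable the Microsoft.Storage service endpoint on the subnet
az network vnet subnet update --vnet-name examref-vnet --name app-subnet --resource-group examref-rg --service-endpoints Microsoft.Storage

# Step 2: allow that subnet on the storage account
az storage account network-rule add --account-name examref --resource-group examref-rg --vnet-name examref-vnet --subnet app-subnet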

11
Q

How do you enable anonymous user access in Blob storage?

A

you must change the container access level

by default no public read access is enabled for anonymous users
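
For example, a hedged Azure CLI sketch that changes an existing container's access level; the container and account names are placeholders:

# allow anonymous read access to blobs (but not listing of the container)
az storage container set-permission --name examrefcontainer --account-name examref --public-access blob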

12
Q

Blob Storage access levels

A

Private
Blob
Container

the access level is configured separately on each blob container

13
Q

Blob Storage access levels - Private

A

only the storage account owner can access the container and its blobs, no one else has access

14
Q

Blob Storage Access levels - Blob

A

only blobs within the container can be accessed anonymously

16
Q

Blob Storage access levels Container

A

blobs and their containers can be accessed anonymously

17
Q

Shared Access Signature token (SAS token)

A

a URI query string parameter that grants access to specific containers, blobs, queues, and tables

use a SAS token to grant access to a client that shouldn’t have access to the entire contents of the storage account (and storage account keys) but still requires secure authentication

grant access to a specific resource, for a specified period of time, and with a specified set of permissions

used to read and write data in users’ storage accounts

used to copy blobs or files to another storage account

only use HTTPS: because an active SAS token grants direct access to your storage account, you must use a secure connection to distribute SAS token URIs

18
Q

Types of services within a storage account

A

blobs
tables
queues
files
disks

19
Q

Blobs

A

provides a highly scalable service for storing arbitrary data objects such as text or binary data

20
Q

tables

A

provides a NoSQL-style store for storing structured data

tables in Azure storage do not require a fixed schema, different entries in the same table can have different fields

21
Q

queues

A

reliable message queuing between application components

22
Q

files

A

managed file shares that can be used by Azure VMs or on-premises servers

23
Q

disks

A

persistent storage volumes for Azure VMs, which can be attached as virtual hard disks

24
Q

Types of Storage Blobs

A
  1. Block Blobs
  2. Append Blobs
  3. Page Blobs: used to store VHD files when deploying unmanaged disks (older disk storage technology for Azure virtual machines, managed disks are recommended for new deployments)
25
Q

Configurable Options when creating a storage account

A

Performance Tier
Account Kind
Replication Option
Access Tier

26
Q

Naming storage accounts

A
  • the storage account name must be unique across all existing storage account names in Azure
  • the name must be between 3 and 24 characters and can contain only lowercase letters and numbers
27
Q

Performance tiers

A

Standard: supports all storage services (blobs, tables, files, queues, and unmanaged Azure virtual machine disks). It uses magnetic disks to provide cost-efficient and reliable storage.

Premium: designed to support workloads with greater demands on I/O and is backed by high-performance SSD disks.
With General-Purpose accounts, it supports only Page Blobs (used for unmanaged VM disks).
It also supports Block Blobs and Append Blobs with BlockBlobStorage accounts, and file shares with FileStorage accounts.

28
Q

Can the performance tier setting be changed once selected?

A

No, the setting cannot be changed at a later date.

29
Q

Replication options with premium tier

A

the Premium tier only supports LRS as a replication option for general-purpose storage accounts
it supports both LRS and ZRS for BlockBlobStorage and FileStorage accounts

30
Q

Standard Tier account kind

A
  1. Storage V2 (General-Purpose V2)
  2. Storage (General-Purpose V1)
  3. BlobStorage
31
Q

Premium Tier Account kind

A
  1. Storage V2 (General-Purpose V2)
  2. Storage (General-Purpose V1)
  3. BlockBlobStorage
  4. FileStorage
32
Q

Key points to remember.

A
  • the Blob Storage account is a specialized storage account used to store Block Blobs and Append Blobs. You can’t store Page Blobs in these accounts, therefore you can’t use them for unmanaged disks
  • Only General-Purpose V2 and Blob Storage accounts support the Hot, Cool, and Archive access tiers
  • General-Purpose V1 and Blob Storage accounts can both be upgraded to General-Purpose V2 accounts. This operation is irreversible. No other changes to the account kind are supported.
33
Q

Storage Account Replication Options

A
  1. Locally redundant storage (LRS)
  2. Zone Redundant storage (ZRS)
  3. Geographically redundant storage (GRS)
  4. Read access geographically redundant storage (RA-GRS)
  5. Geographically zone redundant storage (GZRS)
  6. Read access geographically zone redundant storage (RA-GZRS)
34
Q

Locally Redundant Storage (LRS)

A

makes 3 synchronous copies of your data within a single datacenter

available for General-Purpose or Blob Storage accounts at both Standard and Premium Performance tiers

35
Q

Zone redundant storage (ZRS)

A

makes 3 synchronous copies to 3 separate availability zones within a single region

available for General-Purpose V2 storage accounts at the Standard Performance tier only; also available for BlockBlobStorage and FileStorage accounts

36
Q

Geographically redundant storage (GRS)

A

There are 3 local copies in the same datacenter (LRS) plus 3 additional asynchronous copies in a second datacenter hundreds of miles away from the primary region. Data replication typically occurs within 15 minutes, although no SLA is provided.

available for General-Purpose or Blob Storage accounts at the Standard Performance tier only

37
Q

Read access geographically redundant storage (RA-GRS)

A

There are 3 local copies, plus 3 additional asynchronous copies to a second datacenter hundreds of miles away from the primary region as well as read-only access to the data in the secondary datacenter.

Available for General-Purpose or Blob Storage account at the Standard Performance tier only

38
Q

Geographically zone redundant storage (GZRS)

A

There are 3 synchronous copies across multiple availability zones, plus 3 additional asynchronous copies to a second datacenter hundreds of miles away from the primary region. Data replication typically occurs within 15 minutes, although there is no SLA.

39
Q

Read access geographically zone redundant storage (RA-GZRS)

A

There are 3 synchronous copies across multiple availability zones, plus 3 additional asynchronous copies to a second datacenter hundreds of miles away from the primary region, plus read-only access to the data in the secondary datacenter

Available for General-Purpose V2 storage accounts at the Standard Performance tier only

40
Q

Note on Replication options

A
  • if an entire datacenter is down then LRS would incur an outage
  • if a primary region is down both the LRS and ZRS options would incur an outage, but GRS and GZRS would provide the secondary region that takes care of requests
  • not all the replication options are available in all regions
41
Q

Specifying replication and performance tier settings

A

When creating a storage account via the Azure portal, the replication and performance tier options are specified using separate settings

When creating an account using Azure PowerShell, the Azure CLI, or via template, these settings are combined within the SKU settings

ex. to specify a Standard storage account with locally redundant storage using the Azure CLI, use --sku Standard_LRS
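
A minimal sketch of a full create command under those settings; the account name, resource group, and location are placeholders:

az storage account create --name examrefstorage --resource-group examref-rg --location eastus --kind StorageV2 --sku Standard_LRS --access-tier Hot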

42
Q

Access Tiers

A
  1. Hot
  2. Cool
  3. Archive

– access tiers apply to Blob Storage only. They do not apply to other storage services, including BlockBlobStorage accounts

43
Q

Access Tier - Hot

A
  • used to store frequently accessed objects
  • data access costs are low while storage costs are higher
44
Q

Access Tiers - Cool

A
  • used to store large amounts of data that is not accessed frequently and that is stored for at least 30 days
  • the availability SLA is lower than for the Hot tier
  • relative to the Hot tier, data access costs are higher and storage costs are lower
45
Q

Access Tiers - Archive

A
  • used for long-term storage of data that is accessed rarely, can tolerate several hours of retrieval latency, and will remain in the Archive tier for at least 180 days
  • most cost effective option for storing data, but accessing that data is more expensive than accessing data in the Hot or Cool tiers
46
Q

Note on Access Tiers

A
  • new blobs will default to the access tier that is set at the storage account level, though you can override that at the blob level by setting a different access tier, including the archive tier
  • the archive tier is not supported for ZRS, GZRS, and RA-GZRS
47
Q

Shared Access Signatures (SAS) Token

A
  • a SAS token is a way to granularly control how a client can access data in an Azure storage account
  • you can use an account-level SAS to access the account itself
    • you can control which services and resources the client can access
    • what permissions the client has
    • how long the token is valid for
48
Q

What is the simplest way to create a SAS token?

A

Through the Shared Access Signature blade in the Azure portal
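
Outside the portal, an account-level SAS can also be generated from the Azure CLI. A hedged sketch; the account name, expiry, and permissions are placeholders, and the command assumes the account key is available (for example via --account-key or the AZURE_STORAGE_KEY environment variable):

# read/list access to the Blob and File services, all resource types, HTTPS only
az storage account generate-sas --account-name examref --services bf --resource-types sco --permissions rl --expiry 2024-12-31T00:00Z --https-only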

49
Q

Using shared access signatures (SAS)

A
  • each SAS token is a query string parameter that can be appended to the full URI of the blob or other storage resource for which the SAS token was created.
  • create the SAS URI by appending the SAS token to the full URI of the blob or other storage resource
50
Q

Account-level SAS

A
  • with account-level SAS you can manage all the resources belonging to the storage account
  • you can also perform write and delete operations for all the resources (blobs, tables, etc)
  • stored access policy is not supported for account level SAS
51
Q

Using user delegation SAS

A
  • create a user delegation SAS using Azure AD credentials
  • only supported by Blob Storage
    * can grant access to containers and blobs
  • a stored access policy is not supported for a user delegation SAS
52
Q

Using shared access signatures

A
  • each SAS token is a query string parameter that can be appended to the full URI of the blob or other storage resource for which the SAS token was created
    • URI: Uniform Resource Identifier, a unique sequence of characters that identifies a logical or physical resource used by web technologies. URIs provide a means of locating and retrieving information resources on a network
  • create the SAS URI by appending the SAS token to the full URI of the blob or other storage resources

example
storage account: examrefstorage
blob container: examrefcontainer
blob path: sample-file.png

Full URI to the blob in storage is: https://examrefstorage.blob.core.windows.net/examrefcontainer/sample-file.png

The combined URI with the generated SAS token is: https://examrefstorage.blob.core.windows.net/examrefcontainer/sample-file.png?sv=2019-10-10&ss=bfqt&srt=sco&sp=rwdlacupx&se=2020-05-08T08:50:14Z&st=2020-05-08T00:50:14Z&spr=https&sig=65tNhZtj2lu0tih8HQtK7aEL9YCIpGGprZocXjiQ%2Fko%3D
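
A hedged Azure CLI sketch that would produce a SAS URL like the one above for a single blob; the expiry and permissions shown are illustrative placeholders:

az storage blob generate-sas --account-name examrefstorage --container-name examrefcontainer --name sample-file.png --permissions r --expiry 2020-05-08T08:50Z --https-only --full-uri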

53
Q

Using a stored access policy

A
  • A SAS token incorporates the access parameters (start and end time, permissions, etc.). Stored access policies allow the parameters for a SAS token to be decoupled from the token itself.
  • the access policy sets the start and end time, access permissions, etc.
  • generated SAS tokens will reference the access policy rather than having the parameters embedded
  • you can update the access policy rather than regenerating a SAS token when you want to change the parameters
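
A hedged Azure CLI sketch of this pattern; the policy, container, blob, and account names are placeholders and the expiry date is illustrative:

# create the stored access policy on the container
az storage container policy create --container-name examrefcontainer --name read-only-policy --permissions rl --expiry 2024-12-31T00:00Z --account-name examrefstorage

# generate a SAS that references the policy instead of embedding the parameters
az storage blob generate-sas --container-name examrefcontainer --name sample-file.png --policy-name read-only-policy --account-name examrefstorage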
54
Q

How many access policies can you have on a container, table, queue or file share?

A

5

55
Q

Access keys

A
  • the simplest way to authorize access to a storage account
  • with the storage account name and an access key of the Azure storage account you have full access to all data in all services within the storage account
  • applications use the storage account name and key for access to Azure Storage
56
Q

Key rolling

A
  • each storage account has 2 keys, a primary and secondary
  • If the primary key needs to be reset, then the secondary key can be used by applications
  • in this way there will be no downtime for applications that directly access storage using an access key
57
Q

Access keys and SAS tokens

A
  • rolling a storage account access key will invalidate any SAS tokens that were generated using that key
58
Q

Regenerate storage account access keys

A
  • can be regenerated in Azure portal or the command-line tools
  • in PowerShell, use the New-AzStorageAccountKey cmdlet
  • in the Azure CLI, use the az storage account keys renew command
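
For example, a hedged sketch of rolling the primary key with the Azure CLI; the account and resource group names are placeholders:

az storage account keys renew --account-name examref --resource-group examref-rg --key primary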

59
Q

Azure Key Vault

A
  • helps safeguard cryptographic keys and secrets used by cloud applications and services
  • stores authentication keys, storage account keys, data encryption keys, and certificate private keys
60
Q

hardware security modules (HSM)

A
  • protects keys in the Azure Key Vault
  • HSM keys can be generated in place or imported
61
Q

bring your own key (BYOK)

A
  • keys that are imported
62
Q

Accessing encrypted keys from Azure Key Vault

A
  • accessing and decrypting keys is performed by a developer
  • keys from the Key Vault can also be accessed from ARM templates during deployment
63
Q

Azure AD Authentication (AAD Authentication)

A
  • beneficial for large customers who want to control data access across the enterprise based on their security and compliance standards
  • Azure Blobs and Queues support AAD authentication
  • Azure Table storage is not supported with AAD authentication
  • only storage accounts created with the Azure Resource Manager deployment model support Azure AD authorization
64
Q

managed service identity (MSI)

A
  • an automatically managed identity in AAD for applications to use when connecting to resources that support AAD authentication
  • applications use managed identities to obtain AAD tokens without having to manage any credentials
65
Q

Resource scopes for blobs and queues

A
  • the scope of access for a security principal when assigning an RBAC role
  • container, queue, storage account, resource group, subscription
66
Q

Access Scope - Container

A
  • the role assignment will be applicable at the container level
  • all blobs inside the container, the container properties, and the metadata will inherit the role assignment when this scope is selected
67
Q

Access Scope - Queue

A
  • the role assignment will be applicable at the queue level
  • all messages inside the queue, queue properties, and metadata will inherit the role assignment
68
Q

Access Scope - Storage Accounts

A
  • the role assignment will be applicable at the storage account level
  • all containers, blobs, queues, and messages within the storage account will inherit the role assignment
69
Q

Access Scope - resource group

A
  • role assignment will be applicable at the resource group level
  • all the containers or queues in all the storage accounts in the resource group will inherit the role assignment when this scope is selected
70
Q

Access scope - subscription

A
  • the role assignment will be applicable at the subscription level
  • all the containers or queues in all the storage accounts in all the resource groups in the subscription will inherit the role assignment
71
Q

RBAC roles effect

A
  • RBAC role assignments can take up to 5 minutes to propagate
72
Q

What protocol does Azure use to access file shares?

A
  • SMB
73
Q

Azure Files uses what two types of identity-based authentication to access the shares?

A
  1. On-premises Active Directory Domain Services (AD DS)
  2. Azure Active Directory Domain Services (Azure AD DS)
74
Q

Configuring identity-based access for file shares using AD DS

A

Follow these steps for AD DS authentications

  1. Enable AD DS authentication on your storage account.
  2. Assign share-level access permissions to an Azure AD identity.
  3. Assign directory/file-level permissions using Windows ACLs
  4. Mount the Azure file share.
  5. Update the password of your storage account identity in AD DS.
75
Q

Azure Active Directory Domain Services (Azure AD DS) authentication and authorization

A
  • Azure AD DS joined Windows machines can access Azure file shares with Azure AD credentials over SMB.
  1. Enable Azure AD DS for your storage account.
  2. Register your storage account with AD DS and enable AD DS authentication for your Azure file shares
  3. Configure share-level permissions in order to get access to your file shares
  4. Assign granular-level permissions at the root, directory, or file level using basic and advanced Windows ACLs
  5. Mount an Azure file share from a domain-joined VM: log in to the domain-joined VM using an Azure AD identity.
  6. Grant permissions to additional users (optional)
  7. Mount the share by using your storage account key from your domain-joined VM. This enables you to configure ACLs with superuser permissions.
  8. Configure the Windows ACLs using either Windows File Explorer or icacls.
  9. You can use Windows File Explorer to grant the necessary permissions.
  10. Update the password of the AD DS identity/account that represents your storage account in an organizational unit or domain that enforces password expiration time.
76
Q

Azure Import/Export Service

A
  • allows you to ship data into or out of an Azure Storage account by physically shipping disks to an Azure datacenter
  • only used with Blob Storage and Azure Files
77
Q

Export Blob

A
  • export large volumes of data from Azure Storage to your on-premises environment by shipping the data to you on disk
  • only supports the export of blobs
78
Q

Azure Import job

A
  • allows you to import large volumes of data to Azure by shipping the data on disk to Microsoft
79
Q

WAImportExport tool

A
  • Tool used to import/export data for Azure
  • 2 versions
    • Version 1 is for Azure Blob Storage
    • Version 2 is for Azure Files
80
Q

Azure Import/Export Job tool requirements

A
  • Windows 7, Windows Server 2008 R2, or a later OS is required
  • .NET Framework 4.5.1 or later and BitLocker
  • All storage account types are supported (General-Purpose V1, General-Purpose V2, and Blob Storage)
  • Block, Page, and Append Blobs are supported for both import and export
  • Azure Files service is only supported for import jobs but not export jobs
81
Q

How many disks can a single import/export job have?

A

up to 10 drives per job; HDDs and SSDs of any size, including a mix of both

82
Q

Prepare your Drives

A
  • enter parameters such as destination storage account key, the BitLocker key, and the log directory
  • a journal file is created to contain the information necessary to restore the files on the drive to Azure Storage account (mapping a folder or file to a container)
  • Each drive used in the import job will have a unique journal file that is created by the tool
  • to add a single file to the drive and journal file, use the /srcfile parameter rather than the /srcdir parameter
83
Q

Steps to create an import into Azure job

A
  1. import data using the WAImportExport tool
  2. prepare your drives using the WAImportExport tool and copy the data to transfer to the drives
  3. create an import job through the Azure portal
  4. physically ship the disks to MS using a supported courier service with a tracking number for your package. The drives will be returned using the same package
84
Q

Azure Storage Explorer

A

a cross-platform application designed to help you quickly manage one or more Azure Storage accounts

  • it can be used with all storage services and supports Cosmos DB and Azure Data Lake Storage services
85
Q

Connecting Storage Explorer to Storage Accounts

A

After Storage Explorer is installed you can connect to Azure Storage in one of five different ways

  1. Add an Azure account
  2. Use a connection string - the connection string is obtained by opening the storage account blade in the Azure portal and clicking Access Keys
  3. Use a shared access signature URI
  4. Use a storage account name and key
  5. Attach to a local emulator
86
Q

Using Storage Explorer

A
  • you can manage each of the storage services, Blob Storage, Azure Tables, Queue Storage, and Azure Files
87
Q

Storage blob copy

A

To copy between storage accounts:

  1. Navigate to source account, select one or more files, click the Copy button in the toolbar
  2. Navigate to the destination storage account, expand the container that you want to copy to, and click Paste from the toolbar
88
Q

AZCopy

A
  • a command line utility used to perform large scale bulk transfers of data to and from Azure Storage
  • performs operations asynchronously and can run multiple transfers concurrently
  • fault-tolerant, if the operation is interrupted, it can resume from where it left off
  • can be used to copy between storage accounts
  • Storage Explorer is a graphical user interface which uses AzCopy to perform all its data transfer operations in the backend
89
Q

AzCopy Authorization

A
  • AzCopy must authenticate to Azure Storage before it runs any operations
  • run the azcopy login command and sign in
  • also supports service principal, SAS token, access key, managed identity
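
For example, a brief sketch of the Azure AD sign-in flow; the tenant ID shown is a placeholder:

# interactive Azure AD sign-in; a device code is displayed to complete authentication
azcopy login --tenant-id "00000000-0000-0000-0000-000000000000"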
90
Q

service principal

A
  • an identity within an app that allows the app to access or modify resources
  • roles can be assigned to the service principal like a user
  • use service principals with automated tools rather than allowing them to log in with a user identity
91
Q

Upload Using AzCopy

A
  • upload data to Azure Blob Storage
  • the storage account and destination container should already exist
  • the CreateUserTemplate.csv file is copied to the destcontainer

azcopy copy "CreateUserTemplate.csv" "https://examref.blob.core.windows.net/destcontainer/"

92
Q

Download with AzCopy

A
  • download data from Azure Blob Storage using AzCopy
  • the CreateUserTemplate.csv will be downloaded from the srccontainer
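
A hedged sketch of the corresponding download command; the account and container names are placeholders, and the command assumes you have already authorized AzCopy (azcopy login or a SAS appended to the URL):

azcopy copy "https://examref.blob.core.windows.net/srccontainer/CreateUserTemplate.csv" "CreateUserTemplate.csv"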
93
Q

Async blob copy

A
  • copy between storage accounts using a SAS token (the command below is truncated in the source; a sketch of a complete form follows)

azcopy copy "https://examref.blob.core.windows.net/srccontainer/[blob-path]?
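
A hedged complete form of this command; the destination account and both SAS tokens are hypothetical placeholders:

azcopy copy "https://examref.blob.core.windows.net/srccontainer/[blob-path]?<source-SAS>" "https://destaccount.blob.core.windows.net/destcontainer/[blob-path]?<destination-SAS>"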

94
Q

Sync Blob Copy

A

• the azcopy sync command does a synchronized copy between two blob containers

  • synchronizes the contents of a destination container with a source container by copying blobs if the last modified time of a blob in the destination is earlier than that of the corresponding blob in the source
95
Q

Delete Destination Flag

A
  • delete blobs in the destination container that don’t exist in the source
  • use the --delete-destination flag with the azcopy sync command
  • can be set to true, false, or prompt
    *prompt asks you to confirm each deletion, which is safer (see the sketch below)
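
A hedged sketch combining azcopy sync with the flag; the container URLs are placeholders and assume AzCopy is already authorized:

# mirror srccontainer into destcontainer, prompting before deleting anything extra in the destination
azcopy sync "https://examref.blob.core.windows.net/srccontainer" "https://examref.blob.core.windows.net/destcontainer" --delete-destination=prompt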
96
Q

Changing storage account replication mode

A
  • storage accounts can be moved freely between the LRS, GRS, and RA-GRS replication modes
  • for ZRS, GZRS, and RA-GZRS you should copy the data to a new storage account with the desired replication mode using a tool like AzCopy
    *there may be application downtime
    *you can request a live data migration via Azure Support
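
For the first case, a minimal Azure CLI sketch; the account and resource group names are placeholders:

# switch an existing account from LRS to geo-redundant storage
az storage account update --name examref --resource-group examref-rg --sku Standard_GRS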
97
Q

Blob Object Replication

A
  • provides asynchronous replication of block blobs from one storage account to another
  • blobs are replicated based on the defined replication rules
98
Q

Blob versioning

A
  • blob versioning captures the state of a blob when it is modified or deleted
  • Azure storage creates a new version ID for a blob with each change
  • object replication can be used only when blob versioning is enabled for both the source and destination storage accounts
99
Q

Blob change feed

A
  • provides all the changes to blobs and their metadata in the form of transactional logs
  • object replication can only be used if blob change feed is enabled for the source storage account
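
Both prerequisites for object replication (versioning and change feed) can be enabled together. A hedged CLI sketch; the account and resource group names are placeholders:

az storage account blob-service-properties update --account-name examref --resource-group examref-rg --enable-versioning true --enable-change-feed true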
100
Q

Benefits of using Object Replication

A
  • for large processing jobs, you can analyze the data in a single region and distribute the results to additional regions as needed, saving processing time
  • users can also read data from the replicated region, reducing latency
  • compute workloads can now process the same sets of block blobs in different regions
  • reduced costs by moving replicated data to the archive tier using Lifecycle Management policies
101
Q

Blob object replication limitations

A
  • object replication doesn’t work with the Archive tier
  • Blob snapshot and immutable snapshots are not supported
  • OR doesn’t work with accounts that have a hierarchical namespace (Azure Data Lake Storage Gen2)
  • since block blob data is replicated asynchronously, there is no SLA on when accounts are in sync; however, you can check the replication status of a blob
  • the source account can only have a max of 2 destination accounts
  • once you create a replication policy, the destination container is read-only, and you can no longer perform write operations against it
102
Q

Use Cases for Azure Files

A
  • migration of existing applications that require a file share for storage
  • shared storage of files such as web content, log files, application configuration files, or even installation media
  • replace an existing fileserver
103
Q

Connecting to Azure Files outside of Azure

A
  • since Azure Files supports SMB 3.0, it’s possible to connect directly to an Azure file share from a computer running outside of Azure
    * open outbound TCP port 445
    * you can leverage VPNs or ExpressRoute where port 445 can’t be unblocked
  • Windows 7 and Windows Server 2008 R2 do not support SMB 3.0
104
Q

Azure File Sync

A
  • extends Azure Files by allowing on-premises file servers to be extended to Azure while maintaining performance and compatibility
  • multi-site access: the ability to write files across Windows servers and Azure Files
  • cloud tiering: store only recently accessed data on local servers; the rest is tiered to Azure in a storage account
  • Azure Backup integration: back up file shares in the cloud
  • fast disaster recovery: restore file metadata immediately and recall data as needed
105
Q

Azure sync group

A
  • defines the topology for how your file synchronization will take place
  • add server endpoints, which are file servers (and paths within them) that you want the sync group to keep in sync
  • to add endpoints to the sync group, Internet Explorer Enhanced Security Configuration must be disabled before installing the agent
106
Q

cloud tiering

A
  • decreases the amount of local storage required while keeping the performance of an on-premises file server
  • stores frequently accessed (hot) files on your local server; infrequently accessed (cool) files are split into their namespace (file and folder structure) and file content
107
Q

Monitoring synchronization health

A
  • a health indicator is displayed by each of the server endpoints
    • green is healthy
  • you can see stats such as the number of files remaining, size, and any resulting errors
108
Q

Configure Azure Blob Storage

A
  • Azure Blob Storage is used for large-scale storage of arbitrary data objects, such as media files, log files, and so on
109
Q

blob containers

A
  • each storage account can have one or more blob containers and all blobs must be stored within a container
  • containers are like hard drives in that they provide a storage space for data in your storage account
  • you put blobs in a container as you would store files on a hard drive
  • blobs can be placed at the root of the container or organized into a folder hierarchy
110
Q

Blob URL

A
  • each blob has a unique URL
  • format: https://[account name].blob.core.windows.net/[container name]/[blob path and name].
111
Q

Root container

A
  • you can create a container at the root of the storage account by specifying the special name $root for the container name
  • allows you to store blobs in the root of the storage account and reference them with URLs such as: https://[account name].blob.core.windows.net/fileinroot.txt
112
Q

Blob Types

A
  1. Page Blobs
  2. Block Blobs
  3. Append Blobs
  • blobs of all 3 types can share a single container
  • the type of blob is set at creation and cannot be changed after the fact
  • ex. if a .vhd file is accidentally uploaded as a Block Blob instead of a Page Blob, the blob must be deleted and re-uploaded as a Page Blob before it can be mounted as an OS or data disk to an Azure VM
113
Q

Page Blobs

A
  • optimized for random-access read and write operations
  • used to store virtual disk (VHD) files when using unmanaged disks with Azure virtual machines
  • max Page Blob size is 8 TB
114
Q

Block Blobs

A
  • optimized for efficient uploads and downloads, for video, images, and other general-purpose file storage
  • max size is slightly more than 4.75 TB
115
Q

Append Blobs

A
  • optimized for append operations
  • updating or deleting existing blocks in the blob is not supported
  • 50,000 blocks can be added to each append blob
  • each block can be 4MB in size
  • max size 195GB
116
Q

Soft Delete

A
  • a feature that allows you to save and recover your data when blobs or blob snapshots are deleted, even in the event of an overwrite
  • normally, the default behavior when deleting a blob is that the blob is deleted and lost forever
  • the feature must be enabled on the Azure storage account, and a retention period must be set for how long the deleted data remains available
  • the max retention period for soft delete is 365 days
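
A hedged sketch of enabling it from the Azure CLI; the account name and retention period are placeholders:

az storage blob service-properties delete-policy update --account-name examref --enable true --days-retained 14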
117
Q

account-level tiering

A
  • hot, cool, archive
  • storage account blobs can coexist between three tiers within the same account
  • if any blob doesn’t have an assigned tier, it infers the access tier from the account access tier setting by default
  • such a blob’s Access Tier Inferred property is set to true
  • changing the account access tier applies to all access tier-inferred objects stored in the account that don’t have an explicit tier set
  • change the access tier in the Configuration blade or using the Lifecycle Management feature
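
For example, a hedged sketch of changing the account-level default tier via the Azure CLI; the account and resource group names are placeholders:

az storage account update --name examref --resource-group examref-rg --access-tier Cool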
118
Q

blob-level tiering

A
  • blobs can be assigned the desired access tier when you upload them to the container
  • you can change the access tier among the Hot, Cool, or Archive tiers without having to move data between the accounts
  • all requests to change tiers will take place immediately between Hot and Cool tiers
  • data in the Archive storage tier is stored offline and must be rehydrated to the Cool or Hot tier before it can be accessed, this process can take up to 15 hours
  • Use the Change Tier option to change the tier
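
A hedged Azure CLI sketch of changing a single blob’s tier; the account, container, and blob names are placeholders:

# archive one blob; it must later be rehydrated to Cool or Hot before it can be read
az storage blob set-tier --account-name examref --container-name examrefcontainer --name sample-file.png --tier Archive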
119
Q

Changing the access tier

A
  • changing the account access tier will result in tier changes for any tier-inferred blobs stored in the account that do not have an explicit tier set
120
Q

Configure blob Life Cycles Management

A
  • the lifecycle-management capability can be used to transition data to lower-access tiers automatically based on pre-configured rules
  • you can delete the data at the end of its lifecycle
  • these rules can be executed against the storage account once per day
  • specific blobs and containers can be targeted using filter sets
  • up to 100 rules can be defined
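
A hedged sketch of a minimal policy and the Azure CLI call that applies it; the rule name, prefix, day counts, account, and resource group are illustrative placeholders:

cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "age-out-logs",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": [ "blockBlob" ], "prefixMatch": [ "logs/" ] },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
EOF

az storage account management-policy create --account-name examref --resource-group examref-rg --policy @policy.json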
121
Q

What are the 3 blob subtypes?

A
  1. Base Blob
  2. Version
  3. Snapshot
122
Q

Lifecycle management effect

A
  • the policy can take up to 24 hours to go into effect and then the action can take an additional 24 hours to run
  • it can take up to 48 hours for policy actions to complete once you set up Lifecycle Management