Create long-running applications Flashcards

1
Q

Designing reliable applications

A

1) Reliability is the probability that a system functions correctly during any given period of time.
2) Azure provides built-in features that help you handle common reliability challenges such as hardware failures and transient errors.

2
Q

Fault domains and update domains

A

1) Hardware failures are unavoidable.

2) To help you cope with such failures, Azure introduces two concepts: fault domains and update domains.

3
Q

Fault domain

A

1) A fault domain is a group of resources that can fail at the same time.
2) You can distribute service instances evenly across multiple fault domains so that a single hardware failure won’t take down all service instances at once.
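For intuition, here is a minimal Python sketch of spreading instances evenly across fault domains with round-robin assignment. This is not an Azure API; the function name and counts are made up for illustration, and in practice Azure performs this placement for you.

```python
# Illustrative only: Azure places instances across fault domains for you;
# this just shows what an even, round-robin spread looks like.

def assign_fault_domains(instance_count, fault_domain_count):
    """Map each instance index to a fault domain, round-robin."""
    return {i: i % fault_domain_count for i in range(instance_count)}

# 5 instances across 3 fault domains -> domains 0, 1, 2, 0, 1
print(assign_fault_domains(5, 3))
```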

4
Q

Update domain

A

1) An update domain is a logical group of resources that can be simultaneously updated during system upgrades.
2) When Azure updates a service, it doesn’t bring down all instances at the same time.
3) Instead, it performs a rolling update via an update domain walk.
4) Service instances in different update domains are brought down group by group for updates.
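As a rough illustration (not Azure’s actual implementation; the instance names and update callback are hypothetical), an update domain walk can be sketched like this:

```python
from collections import defaultdict

def update_domain_walk(instances, update):
    """instances: {instance_name: update_domain}; update: callable applied
    to one instance. Groups are taken down and updated one update domain
    at a time, so the service is never fully offline."""
    groups = defaultdict(list)
    for name, domain in instances.items():
        groups[domain].append(name)

    for domain in sorted(groups):
        print(f"taking down update domain {domain}: {groups[domain]}")
        for name in groups[domain]:
            update(name)                      # apply the upgrade
        print(f"update domain {domain} is back online")

fleet = {"web_0": 0, "web_1": 1, "web_2": 0, "web_3": 1}
update_domain_walk(fleet, lambda name: print(f"  updating {name}"))
```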

5
Q

Transient errors

A

Transient errors are caused by temporary conditions such as network fluctuations, service overload, and request throttling. They are elusive: they happen randomly and can’t be reliably re-created. A typical way to handle a transient error is to retry the operation a few times.
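A minimal sketch of that retry approach in Python, using exponential backoff with a little jitter; `call_flaky_service` is a hypothetical operation, and real code would usually retry only on exceptions known to be transient:

```python
import random
import time

def retry(operation, attempts=3, base_delay=0.5):
    """Run an operation, retrying a few times with exponential backoff."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise                                   # out of attempts
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)                           # back off, then retry

# Usage (hypothetical operation):
# result = retry(lambda: call_flaky_service())
```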

6
Q

Loose coupling

A

1) Loose coupling enables dynamic scaling and load leveling.
2) Loosely coupled components don’t have direct dependencies on one another.
3) A failing component won’t produce a ripple effect across the entire system.
4) Components can be integrated through an Azure Service Bus queue.
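A minimal sketch of queue-based loose coupling, assuming the azure-servicebus v7 Python SDK; the connection string and queue name are placeholders:

```python
# Requires: pip install azure-servicebus (v7 SDK assumed here).
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"
QUEUE = "orders"

# Producer: the front end drops work items onto the queue and returns.
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE) as sender:
        sender.send_messages(ServiceBusMessage("process order 42"))

# Consumer: back-end workers pull messages at their own pace (load leveling).
with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_receiver(queue_name=QUEUE, max_wait_time=5) as receiver:
        for msg in receiver:
            print("handling:", str(msg))
            receiver.complete_message(msg)  # remove from the queue when done
```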

7
Q

Health monitoring

A

1) Use Azure Diagnostics and Application Insights to monitor the health of your application.

8
Q

Workload of an application

A

There are two typical workload change patterns: gradual changes (which you can handle by scaling up or out) and sudden spikes.

9
Q

Dynamic scaling

A

1) Predictable: for example, higher demand on weekends.

2) Unpredictable: for example, a news website might experience unexpected spikes when breaking news occurs.

10
Q

Two major autoscaling methods

A

1) Scheduled scaling: suitable for expected workload changes.
2) Reactive scaling: suitable for unexpected workload changes. It monitors certain system metrics and adjusts system capacity when those metrics reach certain thresholds.
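A minimal sketch of a reactive-scaling loop; the metric reader and scale callbacks are hypothetical stand-ins for whatever monitoring and management APIs you actually use, and the 70% threshold deliberately leaves headroom for provisioning latency (see the next card):

```python
import time

SCALE_OUT_AT = 70   # % CPU: scale out early to leave provisioning headroom
SCALE_IN_AT = 30    # % CPU
CHECK_EVERY = 60    # seconds

def autoscale(get_avg_cpu, scale_out, scale_in, min_instances=2, max_instances=10):
    """Poll a metric and adjust capacity when thresholds are crossed."""
    instances = min_instances
    while True:
        cpu = get_avg_cpu()
        if cpu > SCALE_OUT_AT and instances < max_instances:
            instances += 1
            scale_out(instances)          # e.g. add a role instance or VM
        elif cpu < SCALE_IN_AT and instances > min_instances:
            instances -= 1
            scale_in(instances)
        time.sleep(CHECK_EVERY)
```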

11
Q

A practical challenge of reactive scaling

A

It is the latency in provisioning new resources. Provisioning a new VM and deploying a new service instance take time, so when you design your reactive-scaling solution, you need to leave enough headroom for new resources to be brought online.

12
Q

Containers to avoid latency

A

1) Container technologies such as Docker make it possible for you to package workloads in lightweight images, which you can deploy and activate very quickly.
2) Such agility opens up new possibilities for reactive scaling by greatly reducing the provisioning latency you need to account for.
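For example, with a local Docker daemon and the docker Python package, starting an extra instance from a prebuilt image typically takes seconds rather than the minutes a new VM needs; the image name below is a placeholder:

```python
import time
import docker                      # pip install docker

client = docker.from_env()         # talks to the local Docker daemon

start = time.time()
container = client.containers.run("myapp:latest", detach=True)  # placeholder image
print(f"instance {container.short_id} started in {time.time() - start:.1f}s")
```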

13
Q

Workload partitioning

A

1) The total workload is sliced into small portions, and each portion is assigned to a number of designated instances.
2) Workload partitioning has several advantages over using homogeneous instances; among them is tenant isolation.
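A minimal sketch of workload partitioning, with hypothetical instance names: each work item (or tenant) is hashed to one of a fixed set of partitions, and each partition is served by its own designated instance group:

```python
import hashlib

PARTITIONS = 4

def partition_of(key):
    """Stable hash of a key into one of PARTITIONS slices."""
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % PARTITIONS

# Each partition is owned by a small, designated instance group.
instance_groups = {
    0: ["worker-a1", "worker-a2"],
    1: ["worker-b1", "worker-b2"],
    2: ["worker-c1", "worker-c2"],
    3: ["worker-d1", "worker-d2"],
}

print(instance_groups[partition_of("tenant-contoso")])
```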

14
Q

Tenant isolation

A

You can route workloads for a certain tenant to a designated group of instances, instead of having them randomly routed to any instance in a larger instance pool.
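A minimal sketch of such routing, with hypothetical tenant and instance names: a routing table sends isolated tenants to their own instance groups, while everyone else falls back to the shared pool:

```python
TENANT_GROUPS = {
    "contoso": ["contoso-1", "contoso-2"],   # isolated tenant
    "fabrikam": ["fabrikam-1"],
}
SHARED_POOL = ["shared-1", "shared-2", "shared-3"]

def instances_for(tenant_id):
    """Return the tenant's designated instance group, or the shared pool."""
    return TENANT_GROUPS.get(tenant_id, SHARED_POOL)

print(instances_for("contoso"))    # ['contoso-1', 'contoso-2']
print(instances_for("northwind"))  # falls back to the shared pool
```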

15
Q

New scenarios enabled by tenant isolation

A

1) per-tenant monitoring
2) tiered service offerings,
3) independent updates to different tenants.

16
Q

Dynamic partitioning

A

1) Dynamic partitioning supports dynamic scaling.
2) When a new node joins a cluster, it takes on a fair share of the workload without impacting more running nodes than is necessary.
3) Techniques such as consistent hashing are commonly used to facilitate such workload relocation.
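A minimal consistent-hashing sketch (real implementations add virtual nodes for better balance; node names are illustrative): keys map onto a hash ring, so when a node joins, only the keys that fall between the new node and its predecessor are relocated:

```python
import bisect
import hashlib

def _h(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((_h(n), n) for n in nodes)

    def add(self, node):
        bisect.insort(self._ring, (_h(node), node))

    def node_for(self, key):
        hashes = [h for h, _ in self._ring]
        i = bisect.bisect(hashes, _h(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node-1", "node-2", "node-3"])
keys = ["t1", "t2", "t3", "t4", "t5", "t6"]
before = {k: ring.node_for(k) for k in keys}
ring.add("node-4")                                  # new node joins the cluster
moved = [k for k in keys if ring.node_for(k) != before[k]]
print("keys relocated:", moved)                     # typically only a small subset
```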

17
Q

Cloud Services

A

1) Cloud Services is a Platform as a Service (PaaS) offering for building and hosting cloud-based applications.
2) It’s designed for building n-tier applications in the cloud.
3) When the application is deployed to Azure, it might become a SaaS application that is accessible to all authenticated users across the globe.

18
Q

Role endpoints

A

1) Input Endpoints,
2) Internal Endpoints, and
3) Instance Input Endpoints

19
Q

Input Endpoints

A

1) Input Endpoints are accessed openly over the Internet.
2) The Input Endpoints of a role point to an Azure-provided load balancer.
3) User requests are distributed to role instances by the load balancer.
4) The public port is what the load balancer exposes to service consumers;
5) the private port is what the load balancer uses to communicate with role instances.
6) Input Endpoints support HTTP, HTTPS, TCP, and UDP.

20
Q

Internal Endpoints

A

1) Internal Endpoints are private to the cloud service.
2) They are used for role instances to communicate with one another.
3) Internal Endpoints are not load-balanced.
4) They support HTTP, HTTPS, TCP, and ANY.

21
Q

Instance Input Endpoints

A

1) These are publicly accessible endpoints with port ranges.
2) A service consumer can directly access different instances by choosing different ports in the given range. For instance, an Instance Input Endpoint with port range from 8008 to 8012 corresponds to 5 instances, with 8008 mapped to the first one, 8009 mapped to the second one, and so on.
3) Instance Input Endpoints support TCP and UDP.

22
Q

Access cloud service

A

1) By default, all cloud services are deployed to the cloudapp.net domain.
2) For example, a service named service1 is reachable at service1.cloudapp.net.
3) http://service1.cloudapp.net will be mapped to the role that has an HTTP-based Input Endpoint at port 80.

23
Q

Cloud Services availability features

A

1) Built-in load balancer
2) Rolling updates
3) Swap deployment

24
Q

Cloud Services reliability

A

1) Automatic instance recovery
2) Built-in diagnostics
3) Integration with Application Insights
4) Multiregion deployment with Traffic Manager, which you can use to fail over from a primary site to a secondary site when needed.

25
Q

Cloud Services scalability

A

1) Planned or reactive autoscale.

2) Independent adjustment of the number of instances for each role.