Architecting for the Cloud Flashcards
Cloud computing differs from a traditional environment in the following ways:
IT assets become programmable resources
Global, available, and unlimited capacity
Higher level managed services
Security built-in
IT assets become programmable resources
On AWS, servers, databases, storage, and higher-level application components can be instantiated within seconds.
You can treat these as temporary and disposable resources, free from the inflexibility and constraints of a fixed and finite IT infrastructure.
This resets the way you approach change management, testing, reliability, and capacity planning.
Global, available, and unlimited capacity
Using the global infrastructure of AWS, you can deploy your application to the AWS Region that best meets your requirements.
For global applications, you can reduce latency to end users around the world by using the Amazon CloudFront content delivery network.
It is also much easier to operate production applications and databases across multiple data centers to achieve high availability and fault tolerance.
Higher level managed services
AWS customers also have access to a broad set of compute, storage, database, analytics, application, and deployment services.
These services are instantly available to developers and can reduce dependency on in-house specialized skills and allow organizations to deliver new solutions faster.
These services are managed by AWS, which can lower operational complexity and cost.
Security built-in
The AWS cloud provides governance capabilities that enable continuous monitoring of configuration changes to your IT resources.
Since AWS assets are programmable resources, your security policy can be formalized and embedded with the design of your infrastructure.
Design Principles
Scalability
Disposable Resources Instead of Fixed Servers
Automation
Loose Coupling
Services, Not Servers
Databases
Removing Single Points of Failure
Optimize for Cost
Scalability Design Principle
Systems that are expected to grow over time need to be built on top of a scalable architecture.
Scaling Vertically
Scaling vertically takes place through an increase in the specifications of an individual resource (e.g., upgrading a server with a larger hard drive or a faster CPU).
On Amazon EC2, this can easily be achieved by stopping an instance and resizing it to an instance type that has more RAM, CPU, IO, or networking capabilities.
Examples:
Add more CPU and/or RAM to existing instances as demand increases
Requires a restart to scale up or down
Requires scripting or automation tooling (see the sketch after this list)
Scalability limited by maximum instance size
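A minimal boto3 sketch of the resize described above, assuming a hypothetical instance ID and a target instance type chosen only for illustration:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Vertical scaling requires a stop/start cycle for the instance type change.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move to an instance type with more CPU/RAM.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])

The stop/start cycle is why vertical scaling involves a brief outage rather than being a zero-downtime operation.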
Scaling Horizontally
Scaling horizontally takes place through an increase in the number of resources (e.g., adding more hard drives to a storage array or adding more servers to support an application).
This is a great way to build Internet-scale applications that leverage the elasticity of cloud computing.
Example:
Add more instances as demand increases
No downtime required to scale up or down
Automatic using services such as AWS Auto Scaling (see the sketch after this list)
Unlimited scalability
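A minimal sketch of automatic horizontal scaling, assuming boto3 and an existing Auto Scaling group with a hypothetical name:

import boto3

autoscaling = boto3.client("autoscaling")

# Target tracking keeps average CPU near 50% by adding or removing instances.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    PolicyName="keep-cpu-at-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)

With a target tracking policy, the group adds instances when the metric rises above the target and removes them when it falls, without restarting existing instances.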
Stateless applications
A stateless application is an application that needs no knowledge of previous interactions and stores no session information.
A stateless application can scale horizontally since any request can be serviced by any of the available compute resources (e.g., EC2 instances, AWS Lambda functions).
Stateless components:
Most applications need to maintain some state information.
For example, web applications need to track whether a user is signed in so that they can present personalized content based on previous actions.
Web applications can use HTTP cookies to store information about a session at the client’s browser (e.g., items in the shopping cart).
Consider storing only a unique session identifier in an HTTP cookie and keeping more detailed user session information server-side.
DynamoDB is often used for storing session state to maintain a stateless architecture (see the sketch after this list).
For larger files a shared storage system can be used such as S3 or EFS.
Amazon SWF can be used to coordinate a multi-step workflow.
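A minimal sketch of server-side session storage in DynamoDB, assuming boto3 and a hypothetical table keyed on session_id:

import uuid
import boto3

table = boto3.resource("dynamodb").Table("user-sessions")  # hypothetical table

def create_session(user_id, cart_items):
    # Only this ID goes into the HTTP cookie; the details stay server-side.
    session_id = str(uuid.uuid4())
    table.put_item(Item={
        "session_id": session_id,   # partition key (assumed schema)
        "user_id": user_id,
        "cart": cart_items,
    })
    return session_id

def load_session(session_id):
    return table.get_item(Key={"session_id": session_id}).get("Item")

Because only the identifier travels in the cookie, any instance (or Lambda function) can load the session and serve the next request.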
Stateful components:
Databases are stateful.
Many legacy applications are stateful.
Load balancing with session affinity can be used for horizontal scaling of stateful components.
However, session affinity is not guaranteed, and existing sessions do not benefit from newly launched nodes.
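A minimal sketch of enabling session affinity (sticky sessions) on an Application Load Balancer target group, assuming boto3 and a hypothetical target group ARN:

import boto3

elbv2 = boto3.client("elbv2")

# Enable load-balancer-generated cookie stickiness so a client keeps hitting the same target.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123",  # hypothetical ARN
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "3600"},
    ],
)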
Distributed processing:
Use cases that involve processing of very large amounts of data (e.g., anything that can’t be handled by a single compute resource in a timely manner) require a distributed processing approach.
By dividing a task and its data into many small fragments of work, you can execute each of them in any of a larger set of available compute resources.
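A minimal sketch of fanning out work fragments, assuming boto3, a hypothetical worker Lambda function, and example S3 keys standing in for the fragments:

import json
import boto3

lambda_client = boto3.client("lambda")

object_keys = ["logs/part-0001.gz", "logs/part-0002.gz", "logs/part-0003.gz"]  # example fragments

# Fan out: each fragment becomes an asynchronous invocation of a worker function.
for key in object_keys:
    lambda_client.invoke(
        FunctionName="process-chunk",   # hypothetical worker function
        InvocationType="Event",         # asynchronous, fire-and-forget
        Payload=json.dumps({"s3_key": key}),
    )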
Disposable Resources Instead of Fixed Servers
Think of servers and other components as temporary resources.
Launch as many as you need and use them only for as long as you need them.
An issue with fixed, long-running servers is that of configuration drift (where change and software patches are applied over time).
This problem can be solved with the “immutable infrastructure” pattern where a server is never updated but instead is replaced with a new one as required.
Instantiating compute resources
You don’t want to manually set up new resources with their configuration and code.
Use automated, repeatable processes that avoid long lead times and are not prone to human error.
Bootstrapping:
Execute automated bootstrapping actions to modify default configurations.
This includes scripts that install software or copy data to bring that resource to a particular state.
You can parameterize configuration details that vary between different environments.
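A minimal sketch of bootstrapping at launch with a user data script, assuming boto3, a hypothetical AMI ID, and an illustrative install script:

import boto3

ec2 = boto3.client("ec2")

# The user data script runs on first boot and brings the instance to the desired state.
user_data = """#!/bin/bash
yum -y update
yum -y install httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,                # boto3 base64-encodes this automatically
)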
Golden Images:
Some resource types can be launched from a golden image.
Examples are EC2 instances, RDS instances and EBS volumes.
A golden image is a snapshot of a particular state for that resource.
Compared to bootstrapping, golden images provide faster start times and remove dependencies on configuration services or third-party repositories.
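A minimal sketch of creating and launching from a golden image, assuming boto3 and a hypothetical, already configured instance:

import boto3

ec2 = boto3.client("ec2")

# Capture a fully configured instance as an AMI (the golden image).
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # hypothetical, already configured instance
    Name="web-server-golden-2024-06-01",     # illustrative name
    Description="Baseline web server image",
)
image_id = response["ImageId"]
ec2.get_waiter("image_available").wait(ImageIds=[image_id])

# New instances launched from the AMI start with the configuration baked in.
ec2.run_instances(ImageId=image_id, InstanceType="t3.micro", MinCount=1, MaxCount=1)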
Infrastructure as Code:
AWS assets are programmable, so you can apply techniques, practices, and tools from software development to make your whole infrastructure reusable, maintainable, extensible, and testable.
Automation Design Principle
In a traditional IT infrastructure, you often must manually react to a variety of events.
When deploying on AWS there is a lot of opportunity for automation.
This improves both your system’s stability and the efficiency of your organization.
Examples of automations using AWS services include:
AWS Elastic Beanstalk – the fastest and simplest way to get an application up and running on AWS.
Amazon EC2 Auto Recovery – You can create an Amazon CloudWatch alarm that monitors an Amazon EC2 instance and automatically recovers it if it becomes impaired.
Auto Scaling – With Auto Scaling, you can maintain application availability and scale your Amazon EC2 capacity up or down automatically according to conditions you define.
Amazon CloudWatch Alarms – You can create a CloudWatch alarm that sends an Amazon Simple Notification Service (Amazon SNS) message when a particular metric goes beyond a specified threshold for a specified number of periods (see the sketch after this list).
Amazon CloudWatch Events – The CloudWatch service delivers a near real-time stream of system events that describe changes in AWS resources.
AWS OpsWorks Lifecycle events – AWS OpsWorks supports continuous configuration through lifecycle events that automatically update your instances’ configuration to adapt to environment changes.
AWS Lambda Scheduled events – These events allow you to create a Lambda function and direct AWS Lambda to execute it on a regular schedule.
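A minimal sketch of the CloudWatch alarm example above, assuming boto3, a hypothetical instance ID, and a hypothetical SNS topic ARN:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm fires when average CPU stays above 80% for two consecutive 5-minute periods
# and publishes a message to an SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # hypothetical instance
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],       # hypothetical topic ARN
)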
Loose Coupling Design Principles
As application complexity increases, a desirable attribute of an IT system is that it can be broken into smaller, loosely coupled components.
This means that IT systems should be designed in a way that reduces interdependencies—a change or a failure in one component should not cascade to other components.
Design principles include:
Well-defined interfaces – reduce interdependencies in a system by enabling interaction only through specific, technology-agnostic interfaces (e.g. RESTful APIs).
Service discovery – disparate resources must have a way of discovering each other without prior knowledge of the network topology.
Asynchronous integration – another form of loose coupling where an interaction does not need an immediate response, for example an SQS queue or a Kinesis stream (see the sketch after this list).
Graceful failure – build applications such that they handle failure in a graceful manner (reduce the impact of failure and implement retries).
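A minimal sketch of asynchronous integration through SQS, assuming boto3 and a hypothetical queue URL:

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/orders"  # hypothetical queue

# Producer: hand the work off to the queue and return immediately.
sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": "1234"}')

# Consumer (a separate process): poll, process, then delete.
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=20)
for msg in messages.get("Messages", []):
    # ... process msg["Body"] here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])

Because the producer never waits on the consumer, either side can fail, scale, or be replaced without cascading to the other.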
Services, Not Servers
With traditional IT infrastructure, organizations must build and operate a wide variety of technology components.
AWS offers a broad set of compute, storage, database, analytics, application, and deployment services that help organizations move faster and lower IT costs.
Managed services:
On AWS, there is a set of services that provide building blocks that developers can consume to power their applications.
These managed services include databases, machine learning, analytics, queuing, search, email, notifications, and more.
Serverless architectures:
Another approach that can reduce the operational complexity of running applications is that of serverless architectures.
It is possible to build both event-driven and synchronous services for mobile, web, analytics, and the Internet of Things (IoT) without managing any server infrastructure.
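A minimal sketch of a serverless request handler, assuming an AWS Lambda function behind API Gateway (proxy integration); the handler is the only code you manage, with no server infrastructure to operate:

import json

def lambda_handler(event, context):
    # Event shape assumes API Gateway proxy integration.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}"}),
    }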
Databases Design Principle
With traditional IT infrastructure, organizations were often constrained in the database and storage technologies they could use.
With AWS, these constraints are removed by managed database services that offer enterprise performance at open-source cost.
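A minimal sketch of provisioning a managed relational database, assuming boto3 and hypothetical identifiers (in practice, credentials would come from a secrets store rather than being hard-coded):

import boto3

rds = boto3.client("rds")

# Managed relational database: AWS handles patching, backups, and failover.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",           # hypothetical identifier
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,                    # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder only
    MultiAZ=True,                            # standby in a second AZ for failover
)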