Cloud Practitioner - PRACTICE 1 Flashcards
What advantages does a database administrator obtain by using the Amazon Relational Database Service (RDS)?
- RDS provides 99.99999999999% reliability and durability.
- RDS databases automatically scale based on load.
- RDS enables users to dynamically adjust CPU and RAM resources.
- RDS simplifies relational database administration tasks
RDS simplifies relational database administration tasks
Amazon RDS is a managed relational database service on which you can run several types of database software. The service is managed so this reduces the database administration tasks an administrator would normally undertake. The managed service includes hardware provisioning, database setup, patching and backups.
A Cloud Practitioner requires point-in-time recovery (PITR) for an Amazon DynamoDB table. Who is responsible for configuring and performing backups?
- AWS is responsible for both tasks.
- The customer is responsible for configuring and AWS is responsible for performing backups.
- The customer is responsible for both tasks.
- AWS is responsible for configuring and the user is responsible for performing backups.
The customer is responsible for configuring and AWS is responsible for performing backups
Point-in-time recovery (PITR) provides continuous backups of your DynamoDB table data. When enabled, DynamoDB maintains incremental backups of your table for the last 35 days, until you explicitly turn the feature off. It is the customer's responsibility to enable PITR, and AWS is responsible for actually performing the backups.
A large company is interested in avoiding long-term contracts and moving from fixed costs to variable costs. What is the value proposition of AWS for this company?
- Economies of scale
- Pay-as-you-go pricing
- Volume pricing discounts
- Automated cost optimization
Pay-as-you-go pricing
Pay-as-you-go pricing helps companies move away from fixed costs to variable costs in a model in which they only pay for what they actually use. There are no fixed term contracts with AWS so that requirement is also met.
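The fixed-versus-variable distinction can be illustrated with a quick back-of-the-envelope calculation. The rates and usage figures below are made up purely for illustration, not real AWS prices:

```python
# Hypothetical scenario: a server that is only busy 8 hours per business day.
FIXED_MONTHLY_COST = 500.00      # owning/leasing a server, paid regardless of use
ON_DEMAND_HOURLY_RATE = 0.10     # illustrative pay-as-you-go rate
HOURS_USED_PER_MONTH = 8 * 22    # 8 hours/day over ~22 business days

variable_cost = ON_DEMAND_HOURLY_RATE * HOURS_USED_PER_MONTH
print(f"Fixed cost:    ${FIXED_MONTHLY_COST:.2f}")
print(f"Variable cost: ${variable_cost:.2f}")  # pay only for the hours actually used
```

With pay-as-you-go pricing the bill tracks actual usage, so lightly used capacity no longer carries the full fixed cost.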
A customer needs to determine Total Cost of Ownership (TCO) for a workload that requires physical isolation. Which hosting model should be accounted for?
- Dedicated Hosts
- Reserved Instances
- On-Demand Instances
- Spot Instances
Dedicated Hosts
An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-VM software licenses, including Windows Server, Microsoft SQL Server, and SUSE Linux Enterprise Server. Note that Dedicated Hosts can be considered a “hosting model” as they determine the actual underlying infrastructure used to run your workload.
Which tasks can a user complete using the AWS Cost Management tools? Select TWO
- Automatically terminate AWS resources if budget thresholds are exceeded.
- Break down AWS costs by day, service, and linked AWS account.
- Create budgets and receive notifications if current or forecasted usage exceeds the budgets.
- Launch either EC2 Spot instances or On-Demand instances based on the current pricing.
- Move data stored in Amazon S3 Standard to an archiving storage class to reduce cost.
Break down AWS costs by day, service, and linked AWS account & Create budgets and receive notifications if current or forecasted usage exceeds the budgets
The AWS Cost Management tools include services, tools, and resources to organize and track cost and usage data, enhance control through consolidated billing and access permissions, enable better planning through budgeting and forecasts, and further lower costs with resource and pricing optimizations.
Which of the following AWS services are compute services? (Select TWO.)
- AWS Batch
- AWS CloudTrail
- AWS Elastic Beanstalk
- Amazon EFS
- Amazon Inspector
AWS Batch & AWS Elastic Beanstalk
AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.
Which design principles are enabled by the AWS Cloud to improve the operation of workloads? (Select TWO.)
- Minimize platform design
- Loose coupling
- Customized hardware
- Remove single points of failure
- Minimum viable product
Loose coupling & Remove single points of failure
Loose coupling is when you break systems down into smaller components that are loosely coupled together. This reduces interdependencies between system components. This is achieved in the cloud using message buses and notification and messaging services.
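A minimal sketch of the pattern: the producer below only knows about the queue, not about the consumer, so either side can be changed or scaled independently. In AWS this intermediary role is played by services such as SQS or SNS; the in-memory queue and service names here are purely illustrative:

```python
import queue

# Stand-in for a managed message bus (e.g. Amazon SQS).
message_bus = queue.Queue()

def order_service(order_id: str) -> None:
    # Publishes an event instead of calling the fulfilment component directly.
    message_bus.put({"event": "order_placed", "order_id": order_id})

def fulfilment_service() -> list:
    # Drains whatever events are waiting; can be replaced without touching the producer.
    processed = []
    while not message_bus.empty():
        processed.append(message_bus.get()["order_id"])
    return processed

order_service("A-1001")
order_service("A-1002")
print(fulfilment_service())  # ['A-1001', 'A-1002']
```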
A user is planning to launch three EC2 instances behind a single Elastic Load Balancer. The deployment should be highly available. How should the user launch the instances?
- Launch the instances across multiple Availability Zones in a single AWS Region.
- Launch the instances as EC2 Spot Instances in the same AWS Region and the same Availability Zone.
- Launch the instances in multiple AWS Regions, and use Elastic IP addresses.
- Launch the instances as EC2 Reserved Instances in the same AWS Region, but in different Availability Zones.
Launch the instances across multiple Availability Zones in a single AWS Region
To make the deployment highly available the user should launch the instances across multiple Availability Zones in a single AWS Region. Elastic Load Balancers can only serve targets in a single Region so it is not possible to deploy across Regions.
Which resource should a new user on AWS use to get help with deploying popular technologies based on AWS best practices, including architecture and deployment instructions?
- AWS CloudFormation
- AWS Artifact
- AWS Config
- AWS Quick Starts
AWS Quick Starts
Quick Starts are built by Amazon Web Services (AWS) solutions architects and partners to help you deploy popular technologies on AWS, based on AWS best practices for security and high availability. These accelerators reduce hundreds of manual procedures into just a few steps, so you can build your production environment quickly and start using it immediately. Each Quick Start includes AWS CloudFormation templates that automate the deployment and a guide that discusses the architecture and provides step-by-step deployment instructions.
A company needs to publish messages to thousands of subscribers simultaneously using a push mechanism. Which AWS service should the company use?
- AWS Step Functions
- Amazon Simple Workflow Service (SWF)
- Amazon Simple Notification Service (Amazon SNS)
- Amazon Simple Queue Service (Amazon SQS)
Amazon Simple Notification Service (Amazon SNS)
Amazon SNS is a publisher/subscriber notification service that uses a push mechanism to publish messages to multiple subscribers. Amazon SNS enables you to send messages or notifications directly to users via SMS text messages in over 200 countries, mobile push on Apple, Android, and other platforms, or email (SMTP).
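The push model can be sketched in a few lines: a topic keeps a list of subscribers and delivers every published message to all of them, so subscribers never poll. The `Topic` class below is an illustration of the pattern only, not the SNS API:

```python
# In-memory sketch of publish/subscribe with push delivery (illustrative only).
class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def publish(self, message):
        # Push: the topic delivers to every subscriber as soon as it is published.
        for deliver in self.subscribers:
            deliver(message)

received = []
topic = Topic()
topic.subscribe(lambda msg: received.append(("email", msg)))  # e.g. an email endpoint
topic.subscribe(lambda msg: received.append(("sms", msg)))    # e.g. an SMS endpoint
topic.publish("sale starts now")
print(received)  # [('email', 'sale starts now'), ('sms', 'sale starts now')]
```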
A company uses Amazon EC2 instances to run applications that are dedicated to different departments. The company needs to break out the costs of these applications and allocate them to the relevant department. The EC2 instances run in a single VPC. How can the company achieve these requirements?
- Enable billing access for IAM users and view the costs in Cost Explorer.
- Enable billing alerts through Amazon CloudWatch and Amazon SNS.
- Create tags by department on the instances and then run a cost allocation report.
- Add additional Amazon VPCs and launch each application in a separate VPC.
Create tags by department on the instances and then run a cost allocation report
The company should create cost allocation tags that specify the department and assign them to resources. These tags must be activated so they are visible in the cost allocation report. Once this is done and a monthly cost allocation report has been configured it will be easy to monitor the costs for each department.
An application uses a PostgreSQL database running on a single Amazon EC2 instance. A Cloud Practitioner has been asked to increase the availability of the database so there is automatic recovery in the case of a failure. Which tasks can the Cloud Practitioner take to meet this requirement?
- Migrate the database to Amazon RDS and enable the Multi-AZ feature.
- Configure an Elastic Load Balancer in front of the EC2 instance.
- Configure EC2 Auto Recovery to move the instance to another Region.
- Set the DeleteOnTermination value to false for the EBS root volume.
Migrate the database to Amazon RDS and enable the Multi-AZ feature
Moving the database to Amazon RDS means that the database can take advantage of the built-in Multi-AZ feature. This feature creates a standby instance in another Availability Zone and synchronously replicates to it. In the event of a failure that affects the primary database an automatic failover can occur and the database will become functional on the standby instance.
A company is launching a new website which is expected to have highly variable levels of traffic. The website will run on Amazon EC2 and must be highly available. What is the MOST cost-effective approach?
- Use the AWS CLI to launch and terminate Amazon EC2 instances to match demand.
- Create an Amazon EC2 Auto Scaling group and configure an Elastic Load Balancer.
- Determine the highest expected traffic and use an appropriate instance type.
- Launch the website using an Amazon EC2 instance running on a dedicated host.
Create an Amazon EC2 Auto Scaling group and configure an Elastic Load Balancer
The most cost-effective approach for ensuring the website is highly available on Amazon EC2 instances is to use an Auto Scaling group. This will ensure that the appropriate number of instances is always available to service the demand. An Elastic Load Balancer can be placed in front of the instances to distribute incoming connections.
Which of the following statements best describes the concept of agility in relation to cloud computing on AWS? (Select TWO.)
- The speed at which AWS rolls out new features.
- The ability to experiment quickly.
- The elimination of wasted capacity.
- The ability to automatically scale capacity.
- The speed at which AWS resources can be created.
The ability to experiment quickly & The speed at which AWS resources can be created
In a cloud computing environment, new IT resources are only a click away, which means that you reduce the time to make those resources available to your developers from weeks to just minutes. This results in a dramatic increase in agility for the organization, since the cost and time it takes to experiment and develop is significantly lower.
A company runs a batch job on an Amazon EC2 instance and it takes 6 hours to complete. The workload is expected to double in volume each month with a proportional increase in processing time. What is the most efficient cloud architecture to address the growing workload?
- Run the batch job on a larger Amazon EC2 instance type with more CPU.
- Change the Amazon EC2 volume type to a Provisioned IOPS SSD volume.
- Run the application on a bare metal Amazon EC2 instance.
- Run the batch workload in parallel across multiple Amazon EC2 instances.
Run the batch workload in parallel across multiple Amazon EC2 instances
The most efficient option is to use multiple EC2 instances and distribute the workload across them. This is an example of horizontal scaling and will allow the workload to keep growing in size without any issue and without increasing the overall processing timeframe.
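Horizontal scaling can be sketched as splitting one large batch into chunks that run in parallel: when the workload doubles, you double the workers instead of doubling the runtime. The chunking scheme and stand-in workload below are illustrative, with worker processes playing the role of separate EC2 instances:

```python
from concurrent.futures import ProcessPoolExecutor  # workers ~ separate EC2 instances

def process_chunk(chunk):
    # Stand-in for real batch work on a slice of the records.
    return sum(record * 2 for record in chunk)

def run_batch(records, workers):
    # Split the batch into roughly equal chunks and fan them out in parallel.
    size = max(1, len(records) // workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    print(run_batch(list(range(1000)), workers=4))  # 999000
```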
An individual IAM user must be granted access to an Amazon S3 bucket using a bucket policy. Which element in the S3 bucket policy should be updated to define the user account for which access will be granted?
- Action
- Principal
- Resource
- Condition
Principal
The Principal element specifies the user, account, service, or other entity that is allowed or denied access to a resource. A Principal set to * is a wildcard meaning any user; to grant access to a specific IAM user, set Principal to that user's ARN.
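For illustration, a minimal bucket policy granting read access to a single IAM user might look like the following. The account ID, user name, and bucket name are placeholders, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/example-user" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```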
A Cloud Practitioner needs a tool that can assist with viewing and managing AWS costs and usage over time. Which tool should the Cloud Practitioner use?
- AWS Budgets
- Amazon Inspector
- AWS Organizations
- AWS Cost Explorer
AWS Cost Explorer
AWS Cost Explorer has an easy-to-use interface that lets you visualize, understand, and manage your AWS costs and usage over time. AWS Cost Explorer provides you with a set of default reports that you can use as the starting place for your analysis. From there, use the filtering and grouping capabilities to dive deeper into your cost and usage data and generate custom insights.
A company plans to deploy a relational database on AWS. The IT department will perform database administration. Which service should the company use?
- Amazon EC2
- Amazon RedShift
- Amazon ElastiCache
- Amazon DynamoDB
Amazon EC2
A self-managed relational database can be installed on Amazon EC2. When using this deployment you can choose the operating system and instance type that suits your needs and then install and manage any database software you require.
A company is planning to move a number of legacy applications to the AWS Cloud. The solution must be cost-effective. Which approach should the company take?
- Migrate the applications to dedicated hosts on Amazon EC2.
- Rehost the applications on Amazon EC2 instances that are right-sized.
- Use AWS Lambda to host the legacy applications in the cloud.
- Use an Amazon S3 static website to host the legacy application code.
Rehost the applications on Amazon EC2 instances that are right-sized
The most cost-effective solution that works is to use Amazon EC2 instances that are right-sized with the most optimum instance types. Right-sizing is the process of ensuring that the instance type selected for each application provides the right amount of resources for the application.
A company must provide access to AWS resources for their employees. Which security practices should they follow? (Select TWO.)
- Enable multi-factor authentication for users.
- Create IAM policies based on least privilege principles.
- Disable password policies and management console access.
- Create IAM users in different AWS Regions.
- Create IAM Roles and apply them to IAM groups.
Enable multi-factor authentication for users & Create IAM policies based on least privilege principles
There are several security best practices for AWS IAM that are listed in the document shared below. Enabling multi-factor authentication is a best practice to require a second factor of authentication when logging in. Another best practice is to grant least privilege access when configuring users and password policies.
An Amazon Virtual Private Cloud (VPC) can include multiple…
- AWS Regions.
- Edge locations.
- Internet gateways.
- Availability Zones.
Availability Zones
An Amazon VPC includes multiple Availability Zones. Within a VPC you can create subnets in each AZ that is available in the Region and distribute your resources across these subnets for high availability.
A Cloud Practitioner anticipates an increase in application traffic at a future date and time when a sales event will take place. How can the Cloud Practitioner configure Amazon EC2 Auto Scaling to ensure the right number of Amazon EC2 instances are available ahead of the event?
- Configure predictive scaling.
- Configure a target tracking scaling policy.
- Configure a scheduled scaling policy.
- Configure a step scaling policy.
Configure a scheduled scaling policy
Scheduled scaling helps you to set up your own scaling schedule according to predictable load changes. For example, let’s say that every week the traffic to your web application starts to increase on Wednesday, remains high on Thursday, and starts to decrease on Friday. You can configure a schedule for Amazon EC2 Auto Scaling to increase capacity on Wednesday and decrease capacity on Friday.
A company is deploying an application on Amazon EC2 that requires low-latency access to application components in an on-premises data center. Which AWS service or resource can the company use to extend their existing VPC to the on-premises data center?
- Amazon Connect
- AWS Outposts
- AWS Direct Connect
- Amazon Workspaces
AWS Outposts
AWS Outposts is a fully managed service that offers the same AWS infrastructure, AWS services, APIs, and tools to virtually any data center, co-location space, or on-premises facility for a truly consistent hybrid experience. With AWS Outposts you can extend your VPC into the on-premises data center.
Which type of credential should a Cloud Practitioner use for programmatic access to AWS resources from the AWS CLI/API?
- SSL/TLS certificate
- SSH public keys
- Access keys
- User name and password
Access keys
Access keys are long-term credentials for an IAM user or the AWS account root user. You can use access keys to sign programmatic requests to the AWS CLI or AWS API (directly or using the AWS SDK).
A company is designing a new service that must align with the operational excellence pillar of the AWS Well-Architected Framework. Which design principles should the company follow? (Select TWO.)
- Anticipate failure.
- Make large-scale changes.
- Perform operations as code.
- Perform manual operations.
- Create static operational procedures.
Anticipate failure & Perform operations as code
AWS Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications and workloads.
There are five pillars, and under the operational excellence pillar the following best practices are recommended:
* Perform operations as code
* Make frequent, small, reversible changes
* Refine operations procedures frequently
* Anticipate failure
* Learn from all operational failures
A website has a global customer base and users have reported poor performance when connecting to the site. Which AWS service will improve the customer experience by reducing latency?
- AWS Direct Connect
- Amazon EC2 Auto Scaling
- Amazon CloudFront
- Amazon ElastiCache
Amazon CloudFront
Amazon CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos, applications, and APIs to customers globally with low latency and high transfer speeds, all within a developer-friendly environment.