AWS Certified Cloud Practitioner Practice Test 3 (Bonso) Flashcards
You are planning to create point-in-time backups of your Amazon EBS volumes. Which of the following are correct statements? (Select TWO.)
A.You can take EBS backups by creating Amazon Machine Images (AMIs)
B.You can create point-in-time backups through EBS snapshots
C.EBS backups are stored durably in Amazon S3
D.Instances will have to be stopped first to start the EBS backup
E.Backing up the same EBS volume will create a new backup of the whole volume
B.You can create point-in-time backups through EBS snapshots
C.EBS backups are stored durably in Amazon S3
Explanation:
You can back up the data on your Amazon EBS volumes to Amazon S3 by taking point-in-time snapshots. Snapshots are incremental backups, which means that only the blocks on the device that have changed after your most recent snapshot are saved. This minimizes the time required to create the snapshot and saves on storage costs by not duplicating data. Each snapshot contains all of the information that is needed to restore your data (from the moment when the snapshot was taken) to a new EBS volume.
Hence, the correct answers are:
- You can create point-in-time backups through EBS snapshots
- EBS backups are stored durably in Amazon S3
The option that says: You can take EBS backups by creating Amazon Machine Images (AMIs) is incorrect because EBS backup volumes are not called AMIs. An Amazon Machine Image (AMI) is a backup of an EC2 instance along with all of its EBS volumes. Since only a single volume needs to be backed up, you should create EBS Snapshots instead.
The option that says: Backing up the same EBS volume will create a new backup of the whole volume is incorrect because EBS snapshots are incremental, which means that only the latest changes are backed up when you run the backup process. It does not back up the whole volume again.
The option that says: Instances will have to be stopped first to start the EBS backup is incorrect because you can actually run EBS backup while the EC2 instance is running. This may cause some deterioration in EC2 performance, but it should not have a noticeable effect.
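The snapshot workflow described above can be sketched with the AWS CLI; the volume ID below is a placeholder, and the commands can be run while the instance stays up:

```shell
# Create an incremental, point-in-time snapshot of an EBS volume
# (snapshot data is stored durably in Amazon S3 behind the scenes)
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "Point-in-time backup of web server data volume"

# List snapshots for that volume to confirm the backup completed
aws ec2 describe-snapshots \
    --filters Name=volume-id,Values=vol-0123456789abcdef0
```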
You noticed that you cannot reach one of your EC2 web servers behind an ELB whenever you enter the DNS name of your load balancer. Which of the following should you first check to gain more insight on the issue? A.ELB Health Check B.AWS CloudTrail C.Amazon CloudWatch D.AWS Config
A.ELB Health Check
Explanation:
Your Application Load Balancer periodically sends requests to its registered targets to test their status. These tests are called health checks.
Each load balancer node routes requests only to the healthy targets in the enabled Availability Zones for the load balancer. Each load balancer node checks the health of each target, using the health check settings for the target groups with which the target is registered. After your target is registered, it must pass one health check to be considered healthy. After each health check is completed, the load balancer node closes the connection that was established for the health check.
If a target group contains only unhealthy registered targets, the load balancer nodes route requests across its unhealthy targets.
In this scenario, a possible cause why you are not able to connect to an EC2 instance behind an ELB is because the load balancer deems the instance as unhealthy. This is verified by the ELB health checks that you can see in your ELB dashboard, which determines whether an instance is healthy or not. If the instance is unhealthy under the load balancer, then it will not receive any traffic.
Hence, the correct answer is ELB Health Check.
Amazon CloudWatch is incorrect because this is just used to monitor your AWS resources and collect information in the form of logs, metrics, and events. Although this service can prove useful for investigation, it is not the first thing you should check in this scenario.
AWS CloudTrail is incorrect because this simply provides an event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Although this service can prove useful for investigation, it is not the first thing you should check in this scenario.
AWS Config is incorrect because it just continuously monitors and records your AWS resource configurations and allows you to automate the evaluation of recorded configurations against desired configurations. This service will not help you very much in your investigation of the issue.
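As a quick way to confirm the health-check theory, the target group's health can be inspected from the CLI; the target group ARN below is a placeholder:

```shell
# Show the health status of every target registered with a target group;
# an unhealthy target includes a reason code such as Target.FailedHealthChecks
aws elbv2 describe-target-health \
    --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef
```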
Which among the options below is tied to a specific Availability Zone where it was launched? A.Amazon EFS B.EBS Volume C.Amazon EC2 Key Pairs D.Amazon Machine Image
B.EBS Volume
Explanation:
An Amazon EBS volume is tied to its Availability Zone and can be attached only to instances in the same Availability Zone. Your EC2 instance is also tied to the Availability Zone where it was launched. However, its EC2 instance ID is tied to the AWS Region.
An AWS resource can be a Global, Regional, or Zonal service. A Global service covers all of the AWS Regions across the globe, while a Regional service applies to one specific Region at a time. A Regional service may or may not have the ability to replicate the same resource to another Region. Lastly, a Zonal service can only exist in one Availability Zone.
You don’t need to memorize the scope of all of the AWS services as long as you know the pattern. There are actually only a handful of services that are considered global, such as IAM, STS, Route 53, CloudFront, and WAF. Examples of Zonal services are EC2 instances and EBS volumes, which are tied to the Availability Zone where they were launched. Take note that although EBS volumes are considered a zonal service, EBS snapshots are considered regional since they are not tied to a specific Availability Zone. The rest of the services are regional in scope.
Hence, the correct answer is: EBS volume.
Amazon EC2 key pairs, Amazon EFS, and Amazon Machine Images are all incorrect because these are considered regional resources.
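The zonal nature of EBS volumes shows up directly in the CLI: creation requires an explicit Availability Zone, and attachment only works for an instance in that same AZ. IDs below are placeholders:

```shell
# An EBS volume must be created in a specific Availability Zone
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 20 \
    --volume-type gp3

# Attaching it succeeds only for an instance in the same AZ (us-east-1a here)
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf
```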
Which service in AWS is best used for data analytics and data warehousing? A.Amazon Aurora B.Amazon Redshift C.Amazon DynamoDB D.Amazon S3
B.Amazon Redshift
Explanation:
Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing Business Intelligence (BI) tools. It allows you to run complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution.
Traditional data warehouses require significant time and resources to administer, especially for large datasets. In addition, the financial cost associated with building, maintaining, and growing self-managed, on-premises data warehouses is very high. As your data grows, you have to constantly trade off which data to load into your data warehouse and which data to archive in storage so you can manage costs, keep ETL complexity low, and deliver good performance.
Amazon Redshift not only significantly lowers the cost and operational overhead of a data warehouse, but with Redshift Spectrum, also makes it easy to analyze large amounts of data in its native format without requiring you to load the data.
Hence, the correct answer is Amazon Redshift.
Amazon Aurora is not the optimal choice for data warehousing since it is a relational database designed for transactional (OLTP) workloads, not for the large-scale analytic queries that Amazon Redshift is optimized for.
Amazon DynamoDB and Amazon S3 are incorrect because these are not the best choices for business analytics databases due to the complex queries that are being performed on large sets of data. You need a storage type that is designed for this purpose, such as Redshift.
Which of the following should you set up in order to connect your AWS VPC network to your local network via an IPsec tunnel?
A.A NAT Gateway in your private subnet connected to your on-premises network
B.An on-premises NAT gateway device connected to your VPC’s Internet Gateway
C.A VPN Gateway in your VPC connected to the Customer Gateway in your on-premises network
D.VPC Peering connection between your on-premise network and VPC
C.A VPN Gateway in your VPC connected to the Customer Gateway in your on-premises network
Explanation:
An Amazon VPC VPN connection links your data center (or network) to your Amazon Virtual Private Cloud (VPC). A customer gateway is the anchor on your side of that connection. It can be a physical or software appliance. The anchor on the AWS side of the VPN connection is called a virtual private gateway.
By default, instances that you launch into an Amazon VPC can’t communicate with your own (remote) network. You can enable access to your remote network from your VPC by attaching a virtual private gateway to the VPC, creating a custom route table, updating your security group rules, creating an AWS Site-to-Site VPN (Site-to-Site VPN) connection, and configuring routing to pass traffic through the connection. Site-to-Site VPN supports Internet Protocol security (IPsec) VPN connections.
Hence, the correct answer is A VPN gateway in your VPC connected to the Customer Gateway in your on-premises network.
The option that says: VPC Peering connection between your on-premises network and VPC is incorrect because VPC Peering connects two different VPCs for inter-VPC communication. It does not connect your local network via IPsec VPN.
The option that says: A NAT gateway in your private subnet connected to your on-premises network is incorrect because a NAT Gateway is primarily used to allow EC2 instances launched in your private subnet to connect to the public Internet, while preventing external hosts from initiating connections to those instances.
The option that says: An on-premises NAT gateway device connected to your VPC’s Internet Gateway is incorrect because as mentioned above, a NAT Gateway is not a suitable service/network device to be used here.
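The gateway pairing described above can be sketched with the AWS CLI; the public IP, ASN, and resource IDs below are placeholders:

```shell
# 1. Register your on-premises device as a customer gateway
aws ec2 create-customer-gateway --type ipsec.1 \
    --public-ip 203.0.113.12 --bgp-asn 65000

# 2. Create a virtual private gateway and attach it to the VPC
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway \
    --vpn-gateway-id vgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0

# 3. Create the Site-to-Site (IPsec) VPN connection between the two
aws ec2 create-vpn-connection --type ipsec.1 \
    --customer-gateway-id cgw-0123456789abcdef0 \
    --vpn-gateway-id vgw-0123456789abcdef0
```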
You are having a hard time tracking the different background jobs with parallel or sequential steps across your distributed application system. What AWS service will simplify this task for you? A.AWS Lambda B.Amazon SQS C.Amazon Simple Workflow D.AWS CodePipeline
C.Amazon Simple Workflow
Explanation:
With Amazon SWF, you can create workflow tasks that simplify and automate operations for you. You can configure automatic retries so that SWF retries failed activity or workflow tasks under the conditions you specify. If your app’s steps take more than 500 milliseconds to complete, you need to track the state of processing, and you need to recover or retry if a task fails, Amazon SWF can help you.
Amazon SWF promotes a separation between the control flow of your background job’s stepwise logic and the actual units of work that contain your unique business logic. This allows you to separately manage, maintain, and scale “state machinery” of your application from the core business logic that differentiates it. As your business requirements change, you can easily change application logic without having to worry about the underlying state machinery, task dispatch, and flow control.
Hence, the correct answer is Amazon SWF.
Amazon SQS is incorrect because this is just a fully managed message queuing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. It does not solve the problem stated in this scenario.
AWS Lambda is incorrect because this one is basically a serverless compute service in AWS and not an orchestration service. You cannot efficiently track processes using AWS Lambda, so it’s better to use AWS SWF instead.
AWS CodePipeline is incorrect because this is simply a fully managed continuous delivery service that helps you automate your release pipelines. This is a CI/CD tool with which you can deliver application updates faster and more reliably. However, in a distributed application system, CodePipeline is not the correct service to use if you want to track and coordinate work across distributed application components.
Which of the following is the most cost-effective service to use if you want to coordinate multiple AWS services into serverless workflows? A.AWS Batch B.AWS Lambda C.AWS Step Functions D.Amazon SWF
C.AWS Step Functions
Explanation:
AWS Step Functions provides serverless orchestration for modern applications. Orchestration centrally manages a workflow by breaking it into multiple steps, adding flow logic, and tracking the inputs and outputs between the steps. As your applications execute, Step Functions maintains application state, tracking exactly which workflow step your application is in, and stores an event log of data that is passed between application components. That means that if networks fail or components hang, your application can pick up right where it left off.
Application development is faster and more intuitive with Step Functions because you can define and manage the workflow of your application independently from its business logic. Making changes to one does not affect the other. You can easily update and modify workflows in one place, without having to struggle with managing, monitoring and maintaining multiple point-to-point integrations. Step Functions frees your functions and containers from excess code, so your applications are faster to write, more resilient, and easier to maintain.
Hence, the correct answer is AWS Step Functions.
Amazon SWF is incorrect because it is just a fully-managed state tracker and task coordinator service. It does not provide serverless orchestration to multiple AWS resources.
AWS Lambda is incorrect because although this service is used for serverless computing, it does not provide a direct way to coordinate multiple AWS services into serverless workflows.
AWS Batch is incorrect because this is primarily used to efficiently run hundreds of thousands of batch computing jobs in AWS.
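A minimal illustration of such a workflow: the state machine below has a single Task state that invokes a Lambda function, then ends. The name, role, and function ARNs are all placeholders:

```shell
# Create a one-step Step Functions state machine (Amazon States Language
# definition is passed inline as JSON)
aws stepfunctions create-state-machine \
    --name order-workflow \
    --role-arn arn:aws:iam::123456789012:role/StepFunctionsExecutionRole \
    --definition '{
      "StartAt": "ProcessOrder",
      "States": {
        "ProcessOrder": {
          "Type": "Task",
          "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process-order",
          "End": true
        }
      }
    }'
```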
Which of the following is true if you store your data in AWS?
A.You are the owner of the data you store in AWS
B.Encryption is required for all data at rest and in transit
C.AWS has the right to review any data stored for potential threats
D.All data are stored durably and redundantly in different AZs
A.You are the owner of the data you store in AWS
Explanation:
AWS knows that their customers care deeply about privacy and data security. That’s why AWS gives you ownership and control over your content through simple, powerful tools that allow you to determine where your content will be stored, secure your content in transit and at rest, and manage your access to AWS services and resources for your users. They also implement responsible and sophisticated technical and physical controls that are designed to prevent unauthorized access to or disclosure of your content.
Hence, the correct answer is: You are the owner of the data you store in AWS.
The option that says: Encryption is required for all data at rest and in transit is incorrect since there is no such requirement imposed by AWS. Encryption is optional and left to the customer’s discretion.
The option that says: All data are stored durably and redundantly in different AZs is incorrect since not all data receive such treatment. It is up to the customer to decide how they want their data stored in AWS.
The option that says: AWS has the right to review any data stored for potential threats is incorrect since the customer owns their data. AWS cannot simply access customer data without permission.
Which AWS support plan includes a Concierge Support Team which will assist you with your billing and account inquiries, and work with you to implement billing and account best practices? A.Developer support plan B.Business support plan C.Enterprise support plan D.Basic support plan
C.Enterprise support plan
Explanation:
The Concierge Support Team is composed of AWS billing and account experts that specialize in working with enterprise accounts. They will quickly and efficiently assist you with your billing and account inquiries, and work with you to implement billing and account best practices so that you can focus on what matters: running your business.
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.
AWS Support offers four support plans: Basic, Developer, Business, and Enterprise. The Basic plan is free of charge and offers support for account and billing questions and service limit increases. The other plans offer an unlimited number of technical support cases with pay-by-the-month pricing and no long-term contracts, providing the level of support that meets your needs.
All AWS customers automatically have around-the-clock access to these features of the Basic support plan:
- Customer Service: one-on-one responses to account and billing questions
- Support forums
- Service health checks
- Documentation, whitepapers, and best-practice guides
Customers with an Enterprise support plan are additionally eligible for these features, which are not included in the Developer or Business plans:
- Application architecture guidance: consultative partnership supporting specific use cases and applications.
- Infrastructure event management: short-term engagement with AWS Support to get a deep understanding of your use case—and after analysis, provide architectural and scaling guidance for an event.
- Technical account manager
- White-glove case routing
- Management business reviews
- Concierge Support Team
Hence, the correct answer is the Enterprise support plan.
All other options are incorrect because the Basic, Developer and Business support plans do not include a Concierge Support Team.
You wish to host a static website of your own in AWS at a low cost. Which service should be used for this purpose? A.Amazon S3 Standard B.Amazon S3 Infrequent Access C.Amazon EC2 D.Amazon Elastic Load Balancer
A.Amazon S3 Standard
Explanation:
You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual webpages include static content. To host a static website, you configure an Amazon S3 bucket for website hosting, and then upload your website content to the bucket. This bucket must have public read access.
Amazon S3 Standard only charges you a small amount per month, which is perfect for this scenario’s needs. Hence, the correct answer is Amazon S3 Standard.
Amazon S3 Infrequent Access is incorrect because it is not the best storage choice for a static website. Since this is a website, you can expect multiple visitors a day who will access your objects frequently, and S3 Infrequent Access GET requests cost more than those of the S3 Standard storage class.
Amazon EC2 is incorrect because using this will not be as cost-effective as using Amazon S3 Standard for static website hosting. This is because there are other costs to consider when using EC2 instances, such as EBS volumes.
Amazon Elastic Load Balancer is incorrect because a standalone load balancer cannot host content. You require a source for your content, such as an EC2 instance or Lambda function, and use this source in tandem with an ELB. The purpose of an ELB is to redirect incoming traffic to only your healthy instances to increase the availability of your website.
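As a rough sketch, enabling static website hosting on a bucket takes only a few CLI calls. The bucket name is a placeholder, and the bucket must also permit public read access (for example, via a bucket policy):

```shell
# Create the bucket and enable static website hosting on it
aws s3 mb s3://my-example-static-site
aws s3 website s3://my-example-static-site \
    --index-document index.html --error-document error.html

# Upload the local site content to the bucket
aws s3 sync ./site s3://my-example-static-site
```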
You have a large number of log files that will be archived in AWS for a long time and should have a retrieval time of 12 hours or less. Which service is the most cost-effective storage class for this purpose? A.Amazon S3 Standard-IA B.Amazon EBS Cold HDD C.Amazon S3 Glacier Deep Archive D.Amazon S3 Glacier
C.Amazon S3 Glacier Deep Archive
Explanation:
S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class and supports long-term retention and digital preservation for data that may be accessed once or twice a year. It is designed for customers, particularly those in highly-regulated industries such as Financial Services, Healthcare, and the Public Sector, that retain data sets for 7-10 years or longer to meet regulatory compliance requirements.
S3 Glacier Deep Archive can also be used for backup and disaster recovery use cases, and is a cost-effective and easy-to-manage alternative to magnetic tape systems, whether they are on-premises libraries or off-premises services. S3 Glacier Deep Archive complements Amazon S3 Glacier, which is ideal for archives where data is regularly retrieved and some of the data may be needed in minutes.
All objects stored in S3 Glacier Deep Archive are replicated and stored across at least three geographically dispersed Availability Zones, designed for 99.999999999% (11 nines) of durability, and can be restored within 12 hours.
Hence, the correct answer is Amazon S3 Glacier Deep Archive.
Amazon S3 Standard-IA is incorrect because this costs more than Glacier and Glacier Deep Archive. This storage type takes into consideration that you will still need to retrieve your objects in a timely manner, although infrequently.
Amazon S3 Glacier is incorrect because the scenario requires retrieval within 12 hours, and Glacier Deep Archive can meet that retrieval window at a lower cost than the Glacier storage class.
Amazon EBS Cold HDD is incorrect because this is neither the best nor the cheapest choice for archival. You use Cold HDD for infrequently accessed workloads that require consistent throughput, and EBS volumes must be attached to EC2 instances for you to access the files stored on them.
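The archive-then-restore flow can be sketched as follows; bucket and key names are placeholders, and the Standard retrieval tier for Deep Archive objects completes within 12 hours:

```shell
# Archive a log bundle directly into the Deep Archive storage class
aws s3 cp logs-2023.tar.gz s3://my-archive-bucket/ \
    --storage-class DEEP_ARCHIVE

# Later, request a temporary restore of the object; the Standard tier
# meets the 12-hour retrieval requirement in the scenario
aws s3api restore-object \
    --bucket my-archive-bucket \
    --key logs-2023.tar.gz \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'
```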
Which of the following is the responsibility of the customer in the AWS cloud? (Select TWO.)
A.Ensuring that AWS services comply with the standards required of them
B.Managing users in their AWS account
C.Managing data stored in the AWS Resources
D.Upgrading chipsets to the latest commercially available product
E.Disposal of disk drives
B.Managing users in their AWS account
C.Managing data stored in the AWS Resources
Explanation:
Deploying workloads on Amazon Web Services (AWS) helps streamline time-to-market, increase business efficiency, and enhance user performance for many organizations. But as you capitalize on this strategy, it is important to understand your role in securing your AWS environment. Based on the AWS Shared Responsibility Model, AWS provides a data center and network architecture built to meet the requirements of the most security-sensitive organizations, while you are responsible for securing everything you build on top of this infrastructure, such as your data, user accounts, applications, and guest operating systems.
This customer/AWS shared responsibility model also extends to IT controls. Just as the responsibility to operate the IT environment is shared between AWS and its customers, so is the management, operation and verification of IT controls shared. AWS can help relieve customer burden of operating controls by managing those controls associated with the physical infrastructure deployed in the AWS environment that may previously have been managed by the customer. As every customer is deployed differently in AWS, customers can take advantage of shifting management of certain IT controls to AWS which results in a (new) distributed control environment.
Customers can then use the AWS control and compliance documentation available to them to perform their control evaluation and verification procedures as required. Below are examples of controls that are managed by AWS, AWS Customers and/or both.
Inherited Controls: Controls which a customer fully inherits from AWS.
- Physical and Environmental controls
Shared Controls: Controls which apply to both the infrastructure layer and customer layers, but in completely separate contexts or perspectives. In a shared control, AWS provides the requirements for the infrastructure and the customer must provide their own control implementation within their use of AWS services.
Examples include:
- Patch Management: AWS is responsible for patching and fixing flaws within the infrastructure, but customers are responsible for patching their guest OS and applications.
- Configuration Management: AWS maintains the configuration of its infrastructure devices, but a customer is responsible for configuring their own guest operating systems, databases, and applications.
- Awareness & Training: AWS trains AWS employees, but a customer must train their own employees.
Customer Specific: Controls which are solely the responsibility of the customer based on the application they are deploying within AWS services.
Examples include:
- Service and Communications Protection or Zone Security which may require a customer to route or zone data within specific security environments.
Hence, the correct answers are: managing users in their AWS account and managing data stored in the AWS resources.
The following options are incorrect because these tasks fall under the responsibilities of AWS:
- Disposal of disk drives
- Ensuring that AWS services comply with the standards required of them
- Upgrading chipsets to the latest commercially available product
A customer wants to send push notifications to her mobile users whenever a new patch comes out. What service is the most appropriate for this task? A.SES B.SNS C.SQS D.SMS
B.SNS
Explanation:
Amazon Simple Notification Service (SNS) is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Amazon SNS provides topics for high-throughput, push-based, many-to-many messaging. Using Amazon SNS topics, your publisher systems can fan out messages to a large number of subscriber endpoints for parallel processing, including Amazon SQS queues, AWS Lambda functions, and HTTP/S webhooks. Additionally, SNS can be used to fan out notifications to end users using mobile push, SMS, and email.
Amazon SNS uses cross availability zone message storage to provide high message durability. Running within Amazon’s proven network infrastructure and datacenters, Amazon SNS topics are available whenever your applications need them. All messages published to Amazon SNS are stored redundantly across multiple geographically separated servers and data centers. Amazon SNS reliably delivers messages to all valid AWS endpoints, such as Amazon SQS queues and AWS Lambda functions.
Amazon SNS enables message filtering and fanout to a large number of subscribers, including serverless functions, queues, and distributed systems. Additionally, Amazon SNS fans out notifications to end users via mobile push messages, SMS, and email.
Hence, SNS is the correct answer.
Amazon SES is incorrect since it is an email service. It cannot send push notifications to mobile users.
Amazon SQS is incorrect since it is a poll-based message queuing service; consumers pull messages from a queue rather than having notifications pushed to their devices.
AWS Server Migration Service (SMS) is incorrect because this is simply an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS.
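The mobile push flow described above can be sketched with the CLI. The platform application, endpoint ARN, and device token below are placeholders; in practice you would first create a platform application for your push provider (FCM, APNs, etc.):

```shell
# Register a device's push token as an SNS platform endpoint
aws sns create-platform-endpoint \
    --platform-application-arn arn:aws:sns:us-east-1:123456789012:app/GCM/my-mobile-app \
    --token "device-registration-token"

# Push a notification to the endpoint returned by the previous call
aws sns publish \
    --target-arn arn:aws:sns:us-east-1:123456789012:endpoint/GCM/my-mobile-app/0123abcd-4567-89ef-0123-456789abcdef \
    --message "A new patch is now available!"
```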
Which AWS service allows you to check if your running resources conform with the AWS best practices? A.AWS Trusted Advisor B.AWS IAM C.Amazon CloudWatch D.AWS Config
A.AWS Trusted Advisor
Explanation:
AWS Trusted Advisor is an application that draws upon best practices learned from AWS’ aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps.
Trusted Advisor includes an ever-expanding list of checks in the following five categories:
Cost Optimization – recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.
Security – identification of security settings that could make your AWS solution less secure.
Fault Tolerance – recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and over-utilized resources.
Performance – recommendations that can help to improve the speed and responsiveness of your applications.
Service Limits – recommendations that will tell you when service usage is more than 80% of the service limit.
Hence, the correct answer is AWS Trusted Advisor.
Amazon CloudWatch is incorrect because this simply provides you with data and actionable insights to monitor your applications, respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services that run on AWS and on-premises servers. It does not check whether your resources conform with the best practices.
AWS Config is incorrect because this is just a compliance monitoring tool. It makes sure your resources are in line with your set guidelines and requirements.
AWS IAM is incorrect because this is simply a security service used to manage your accounts and user permissions. It does not check whether your resources conform to the best practices.
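The check categories above can be listed programmatically; note that the AWS Support API behind this call is only available on the Business and Enterprise support plans and is served from the us-east-1 endpoint:

```shell
# List all Trusted Advisor checks with their categories
# (cost_optimizing, security, fault_tolerance, performance, service_limits)
aws support describe-trusted-advisor-checks \
    --language en --region us-east-1
```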
Which of the following security group rules are valid? (Select TWO.)
A.Inbound RDP rule with an address range as a source
B.Inbound HTTP rule with security group ID as a source
C.Outbound HTTPS rule with hostname as destination
D.Outbound MYSQL rule with IP address as source
E.Inbound TCP rule with instance ID as source
A.Inbound RDP rule with an address range as a source
B.Inbound HTTP rule with security group ID as a source
Explanation:
Security groups accept IP address, IP address range, and security group ID as either source or destination of inbound or outbound rules.
Hence, the correct answers are: Inbound HTTP rule with security group ID as source and Inbound RDP rule with an address range as source.
Inbound TCP rule with instance ID as source and Outbound HTTPS rule with hostname as destination are both incorrect because Instance IDs or hostnames are not valid values.
Outbound MYSQL rule with IP address as source is incorrect because an outbound rule specifies a destination rather than a source. Since it is outbound, you should set the allowed destination.
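The two valid rules can be expressed with the AWS CLI; the security group IDs and CIDR range below are placeholders:

```shell
# Valid: inbound RDP (TCP 3389) from an address range (CIDR) as the source
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 3389 \
    --cidr 203.0.113.0/24

# Valid: inbound HTTP (TCP 80) with another security group ID as the source
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 80 \
    --source-group sg-0fedcba9876543210
```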
Which of the following services allow you to mask downtime of your application by rerouting your traffic to healthy instances? (Select TWO.) A.AWS ELB B.VPC Route tables C.Amazon CloudFront D.AWS EC2 Auto Scaling E.Amazon Route 53
A.AWS ELB
E.Amazon Route 53
Explanation:
AWS ELB and Amazon Route 53 help mask downtime by redirecting traffic to your healthy instances and allowing failover to your secondary systems. This is achieved through a combination of different health checks, routing policies, and failover policies.
Amazon CloudFront, AWS EC2 Auto Scaling, and VPC Route Tables do not help mask downtime. Amazon CloudFront boosts your content delivery speeds through the help of edge locations found globally. EC2 Auto Scaling allows you to automatically scale the number of running instances required to handle your workloads. VPC Route Tables dictate how traffic flows inside your VPC.
How can your RDS production instances be more cost-effective when they will be used for a long period of time?
A.You can stop your RDS instances when idle to prevent AWS from charging you during this time
B.You can easily backup, terminate and restore RDS instances when you need them
C.You can avail of reserved instances to get discounts on your instance costs
D.AWS does not charge you when your RDS is idle
C.You can avail of reserved instances to get discounts on your instance costs
Explanation:
Amazon RDS Reserved Instances give you the option to reserve a DB instance for a one or three year term and in turn receive a significant discount compared to the On-Demand Instance pricing for the DB instance.
You can choose between three payment options when you purchase a Reserved Instance. With the All Upfront option, you pay for the entire Reserved Instance with one upfront payment. This option provides you with the largest discount compared to On-Demand Instance pricing. With the Partial Upfront option, you make a low upfront payment and are then charged a discounted hourly rate for the instance for the duration of the Reserved Instance term. The No Upfront option does not require any upfront payment and provides a discounted hourly rate for the duration of the term.
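The savings can be illustrated with a quick calculation. The rates below are entirely made up for the example; real RDS pricing varies by engine, Region, and instance class:

```python
# Illustrative only: 1-year On-Demand cost vs. a hypothetical Partial
# Upfront Reserved Instance. All rates are placeholders, not real pricing.
HOURS_PER_YEAR = 8760

on_demand_hourly = 0.20   # hypothetical On-Demand hourly rate
ri_upfront = 600.00       # hypothetical partial upfront payment
ri_hourly = 0.08          # hypothetical discounted hourly rate

on_demand_cost = on_demand_hourly * HOURS_PER_YEAR        # 1752.00
reserved_cost = ri_upfront + ri_hourly * HOURS_PER_YEAR   # 1300.80
savings = on_demand_cost - reserved_cost                  # 451.20
```

The longer the instance runs within the term, the more the upfront commitment pays off, which is why Reserved Instances suit long-running production databases.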
All Reserved Instance types are available for Aurora, MySQL, MariaDB, PostgreSQL, Oracle and SQL Server database engines.
Hence, the correct answer is: You can avail of reserved instances to get discounts on your instance costs.
The option that says: You can stop your RDS instances when idle to prevent AWS from charging you during this time is not the best way to save money as it entails more effort than required to do so. It is still better to opt for reserved instances for your RDS database cluster instead.
The option that says: You can easily backup, terminate, and restore RDS instances when you need them is not the best solution. There is too much effort involved.
The option that says: AWS does not charge you when your RDS is idle is incorrect. Idle time or not, once your RDS instance is running, AWS charges you for it.
What is the minimum support plan that will provide you access to all Trusted Advisor Checks? A.Enterprise B.Basic C.Developer D.Business
D.Business
Explanation:
AWS Support offers a range of plans that provide access to tools and expertise that support the success and operational health of your AWS solutions. All support plans provide 24x7 access to customer service, AWS documentation, whitepapers, and support forums. For technical support and more resources to plan, deploy, and improve your AWS environment, you can select a support plan that best aligns with your AWS use case.
It is important to know what services each support plan offers and how they will matter to your company. Both the Business and Enterprise support plans provide the full set of Trusted Advisor checks. The Developer plan only allows you access to the 7 core Trusted Advisor checks.
Since the question is asking for the minimum support plan available, the Business support plan is the correct answer.
Basic and Developer support plans are incorrect because these will only allow you access to the 7 core Trusted Advisor checks.
Enterprise support plan is incorrect because although it provides the full set of trusted advisor checks, it costs a lot more than the Business support plan.
Which of the following services will be able to reroute traffic to your secondary EC2 instances in another region during disaster recovery? A.VPC Peering B.Amazon VPC C.Amazon Route 53 D.AWS ELB
C.Amazon Route 53
Explanation:
Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. It is designed to give developers and businesses an extremely reliable and cost-effective way to route end users to Internet applications by translating names like www.i-love-boracay.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6 as well.
You can use Route 53 to check the health of your resources and only return healthy resources in response to DNS queries. There are three types of DNS failover configurations:
- Active-passive: Route 53 actively returns a primary resource. In case of failure, Route 53 returns the backup resource. Configured using a failover policy.
- Active-active: Route 53 actively returns more than one resource. In case of failure, Route 53 fails back to the healthy resource. Configured using any routing policy besides failover.
- Combination: Multiple routing policies (such as latency-based, weighted, etc.) are combined into a tree to configure more complex DNS failover.
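The active-passive case above can be sketched as a toy resolver (not the real Route 53 API): return the primary record while its health check passes, otherwise fail over to the secondary. The record names are hypothetical:

```python
# Toy model of Route 53 active-passive DNS failover. While the primary
# endpoint is healthy it is returned; when its health check fails, the
# query is answered with the secondary (disaster-recovery) endpoint.
def resolve(records, health_checks):
    primary = records["PRIMARY"]
    if health_checks.get(primary, False):
        return primary
    return records["SECONDARY"]

records = {
    "PRIMARY": "app.us-east-1.example.com",    # hypothetical primary region
    "SECONDARY": "app.us-west-2.example.com",  # hypothetical DR region
}

# Primary region is down: the answer fails over to the secondary region.
answer = resolve(records, {"app.us-east-1.example.com": False})
```

Because this decision happens at the DNS layer, it works across Regions, which a regional service like ELB cannot do on its own.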
Hence, the correct answer is: Amazon Route 53.
Both Amazon VPC and VPC Peering are incorrect because they don’t do any kind of failover rerouting procedure during a disaster.
AWS ELB is incorrect. If the ELB itself were to fail then no traffic would reach your servers at all. In the event of a regional outage, this can likely happen since ELB is a regional service, unlike Route 53 which is considered a global service. You would have to then rely on Route 53 to redirect traffic to your secondary application hosted in another region.
What service should you use in order to add user sign-up, sign-in, and access control to your mobile app with a feature that supports sign-in with social identity providers such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0?
A.AWS Directory Service
B.AWS Single Sign-On (SSO)
C.AWS Identity and Access Management (IAM)
D.Amazon Cognito
D.Amazon Cognito
Explanation:
Amazon Cognito lets you add user sign-up, sign-in, and access control to your web and mobile apps quickly and easily. Amazon Cognito scales to millions of users and supports sign-in with social identity providers, such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
Amazon Cognito provides solutions to control access to backend resources from your app. You can define roles and map users to different roles so your app can access only the resources that are authorized for each user.
With a built-in UI and easy configuration for federating identity providers, you can integrate Amazon Cognito to add user sign-in, sign-up, and access control to your app in minutes. You can customize the UI to put your company branding front and center for all user interactions.
Hence, the correct answer is Amazon Cognito.
AWS Single Sign-On (SSO) is incorrect because this is just a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. It doesn’t provide an easy way to add user sign-in, sign-up, and access control to your mobile app, unlike Amazon Cognito.
AWS Identity and Access Management (IAM) is incorrect because this just enables you to manage access to AWS services and resources securely.
AWS Directory Service is incorrect because it only enables your directory-aware workloads and AWS resources to use managed Active Directory in the AWS Cloud.
Which of the following infrastructure correlates to a VPC’s subnet? A.Region B.Server C.Edge Location D.Availability Zone
D.Availability Zone
Explanation:
A VPC spans all the Availability Zones in the region. After creating a VPC, you can add one or more subnets in each Availability Zone. Each subnet must reside entirely within one Availability Zone and cannot span zones. Availability Zones are distinct locations that are engineered to be isolated from failures in other Availability Zones. By launching instances in separate Availability Zones, you can protect your applications from the failure of a single location.
If a subnet’s traffic is routed to an Internet gateway, the subnet is known as a public subnet. If you want your instance in a public subnet to communicate with the Internet over IPv4, it must have a public IPv4 address or an Elastic IP address (IPv4).
Hence, the correct answer is Availability Zone.
Both AWS Regions and Edge locations are incorrect because subnets are not directly correlated with these two.
Server is incorrect because subnets are part of a VPC within the AWS global network. Subnets are not bound by servers in any way.
What is the best way to keep track of all activities made in your AWS account?
A.Create a multi-region trail in AWS CloudTrail
B.Set up MFA logging to know who is currently in your environment
C.Use Amazon CloudWatch Logs to log all activities
D.Use LDAP authentication on your AWS account
A.Create a multi-region trail in AWS CloudTrail
Explanation
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. Creating a multi-region trail will allow you to keep your activity records in an S3 bucket and prevent them from getting rewritten automatically.
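As a sketch, these are the parameters a multi-region trail needs; with boto3 this dict would be passed to the CloudTrail client's `create_trail(**params)`. The trail and bucket names are hypothetical, and no API call is made here so the snippet runs without AWS credentials:

```python
# Parameters for a multi-region CloudTrail trail (names are placeholders).
# IsMultiRegionTrail=True records account activity from all AWS Regions
# into the designated S3 bucket for durable retention.
params = {
    "Name": "org-activity-trail",          # hypothetical trail name
    "S3BucketName": "my-cloudtrail-logs",  # hypothetical bucket for log files
    "IsMultiRegionTrail": True,            # capture events from all Regions
    "EnableLogFileValidation": True,       # detect tampering with delivered logs
}
```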
Hence, the correct answer is: Create a multi-region trail in AWS CloudTrail.
Using Amazon Cloudwatch Logs is incorrect since this service is not related to user actions in your account. CloudWatch Logs enables you to centralize the logs from all of your systems, applications, and AWS services that you use, in a single, highly scalable service.
Setting up MFA is incorrect because it will not tell you exactly who performed what in your AWS account.
Using LDAP authentication on your AWS account is incorrect because not all companies support it. Access logging can be done on the company’s side; however, it cannot capture the actions performed within the AWS account.
In which of the following occasions should you use the Amazon SQS application system? (Select TWO.)
A.If you need to submit push notifications to your event subscribers
B.When your application requires the use of industry-standard messaging protocols for message delivery
C.When you have to automate certain tasks in your workflow
D.If you need to decouple certain parts of your system for better fault tolerance
E.If you require durable storage for your application events or messages
D.If you need to decouple certain parts of your system for better fault tolerance
E.If you require durable storage for your application events or messages
Explanation
Use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be available. SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system. Multiple copies of every message are stored redundantly across multiple availability zones so that they are available whenever needed.
If you need to submit push notifications, you should use Amazon SNS instead.
If you need to automate certain workflows in AWS, you should use Amazon SWF.
If your messaging service requires the use of certain protocols, try using Amazon MQ.
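The decoupling idea is easy to see in a minimal sketch (standard library only; real code would use boto3's SQS client). The producer keeps working even while the consumer is down, and messages simply wait in the queue until processed:

```python
# Toy stand-in for an SQS queue: producer and consumer run and fail
# independently, which is the fault-tolerance benefit of decoupling.
from collections import deque

queue = deque()  # stand-in for the SQS queue

def produce(order_id):
    queue.append({"order_id": order_id})  # send_message, in SQS terms

def consume():
    if queue:
        return queue.popleft()            # receive + delete, in SQS terms
    return None

produce(1)
produce(2)          # consumer is offline: messages simply accumulate
first = consume()   # consumer comes back and drains the queue in order
```

In real SQS the stored messages are also replicated across multiple Availability Zones, which is the durability property the second correct answer refers to.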
Which of the following services should you use to deploy and easily rollback a web application from your Git repository to your on-premises server? A.AWS System Manager B.AWS Elastic Beanstalk C.AWS OpsWorks D.AWS CloudFormation
C.AWS OpsWorks
Explanation
Enterprise environments are often a mix of cloud, on-premises data centers, and edge locations. Hybrid cloud architectures help organizations integrate their on-premises and cloud operations to support a broad spectrum of use cases using a common set of cloud services, tools, and APIs across on-premises and cloud environments.
Customers can seamlessly integrate their on-premises and cloud storage, networking, identity management, and security policies to enable use cases such as data center extension to the cloud, backup, and disaster recovery to the cloud, and hybrid data processing.
AWS offers services that integrate application deployment and management across on-premises and cloud environments for a robust hybrid architecture. Below are the following services that you can use to manage or deploy applications to your servers running on-premises:
OpsWorks – AWS OpsWorks is a configuration management service that helps customers configure and operate applications, both on-premises and in the AWS Cloud, using Chef and Puppet.
CodeDeploy – AWS CodeDeploy automates code deployments to any instance, including Amazon EC2 instances and instances running on-premises. AWS CodeDeploy makes it easier to rapidly release new features, avoids downtime during application deployment, and handles the complexity of updating applications.
Hence, the correct answer in this scenario is AWS OpsWorks.
Both AWS CloudFormation and AWS Elastic Beanstalk are incorrect because these services can only deploy applications to your AWS resources and not to the servers located in your on-premises data center.
AWS Systems Manager is incorrect because although you can remotely operate and deploy packages/scripts to your on-premises servers with this one, this service is still not suitable to be used for deploying your web application. It also doesn’t have a feature to easily rollback your deployments unlike OpsWorks. This service is primarily used to automate maintenance and deployment tasks on Amazon EC2 and on-premises instances, or automatically apply patches, updates, and configuration changes across any resource group.
You have multiple AWS accounts in your organization. How can you take advantage of this to save on your monthly costs?
A.Use AWS Organizations and enable consolidated billing
B.Migrate all resources to a central account to take advantage of volume discounts
C.Set a budget to each account to minimize spending
D.Create an IAM Group containing all the AWS accounts and enable consolidated billing
A.Use AWS Organizations and enable consolidated billing
Explanation:
AWS Organizations is an account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage. AWS Organizations includes account management and consolidated billing capabilities that enable you to better meet the budgetary, security, and compliance needs of your business. As an administrator of an organization, you can create accounts in your organization and invite existing accounts to join the organization.
You can use the consolidated billing feature in AWS Organizations to consolidate billing and payment for multiple AWS accounts. You can combine the usage across all accounts in the organization to share the volume pricing discounts and Reserved Instance discounts. This can result in a lower charge for your project, department, or company than with individual standalone accounts.
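How combining usage unlocks volume discounts can be shown with a tiered-pricing toy example. The tier boundary and rates below are made up for illustration, not actual AWS prices:

```python
# Illustrative tiered pricing: two accounts that individually stay in the
# more expensive first tier cross into a cheaper tier when their usage is
# combined under consolidated billing. Rates and tiers are hypothetical.
def storage_cost(gb):
    first_tier = min(gb, 50_000)   # first 50 TB at a higher rate
    rest = max(gb - 50_000, 0)     # usage beyond 50 TB is cheaper
    return first_tier * 0.023 + rest * 0.022

separate = storage_cost(30_000) + storage_cost(30_000)  # billed per account
combined = storage_cost(60_000)  # consolidated usage crosses into the cheap tier
```

Billed separately, neither account reaches the discount tier; consolidated, the combined 60 TB does, so the organization pays less for the same total usage.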
Hence, the correct answer is: Use AWS Organizations and enable consolidated billing.
The option that says: Create an IAM Group containing all the AWS accounts and enable consolidated billing is incorrect because IAM Groups are used for IAM Users. It is not intended to be used with other AWS accounts.
The option that says: Migrate all resources to a central account to take advantage of volume discounts is incorrect because migrating your applications would take too much time, effort and resources to pull off. You can easily combine your AWS accounts through AWS Organizations, and lessen the work that needed to be done.
The option that says: Set a budget for each account to minimize spending is incorrect because although setting a maximum budget for each account is an acceptable practice, you still cannot gain significant cost reductions in your billing. Better to use AWS Organizations and enable Consolidated Billing instead to take advantage of volume discounts.
Which of the following statements is true for AWS CloudTrail?
A.CloudTrail is disabled by default for newly created AWS Accounts
B.CloudTrail is able to capture application error logs from EC2 Instances
C.CloudTrail charges you for every management event trail created
D.When you create a trail in the AWS Management Console, the trail applies to all AWS Regions by default
D.When you create a trail in the AWS Management Console, the trail applies to all AWS Regions by default
Explanation:
AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command-line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting.
With AWS CloudTrail, simplify your compliance audits by automatically recording and storing event logs for actions made within your AWS account. Integration with Amazon CloudWatch Logs provides a convenient way to search through log data, identify out-of-compliance events, accelerate incident investigations, and expedite responses to auditor requests.
Hence, the correct answer to the question is: When you create a trail in the AWS Management Console, the trail applies to all AWS Regions by default. Alternatively, you can also limit the trail to a specific Region if you wish.
The option that says: CloudTrail is disabled by default for newly created AWS accounts is incorrect because AWS CloudTrail is now enabled by default for ALL CUSTOMERS and will provide visibility into the past seven days of account activity without the need for you to configure a trail in the service to get started.
The option that says: CloudTrail is able to capture application error logs from your EC2 is incorrect because CloudTrail actually does not capture error logs in your EC2 instances. You may instead use CloudWatch Logs for this purpose.
The option that says: CloudTrail charges you for every management event trail created is incorrect because CloudTrail does not charge you for your first management trail; only the additional management trails you create after the first one incur charges.