Practice 01 Flashcards
An online educational institute uses a three-tier web application and is using AWS X-Ray to trace data between various services. User A is experiencing latency issues using this application, and the Operations team has asked you to gather all traces for User A. Which of the following needs to be enabled to filter User A's traces from all other traces?
A. Trace Id
B. Annotations
C. Segment Id
D. Tracing header
B. Annotations
Annotations are key-value pairs indexed to use with filter expressions. In the above case, traces for a user need to be tracked, for which Annotations can be used along with a Filter expression to find all traces related to that user.
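As a sketch of the idea (the annotation key name "user" is an assumption, not part of the question), the application records the annotation with the X-Ray SDK, and the Operations team then queries with a filter expression built from that key:

```python
def user_trace_filter(user_id):
    """Build an X-Ray filter expression matching traces annotated with
    the given user id under the (hypothetical) annotation key "user"."""
    return f'annotation.user = "{user_id}"'

# In the application code, the annotation itself would be recorded with
# the X-Ray SDK, e.g.: xray_recorder.put_annotation("user", "User A")

print(user_trace_filter("User A"))  # annotation.user = "User A"
```

This filter expression can be pasted into the X-Ray console or passed to the `GetTraceSummaries` API to return only User A's traces.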
A developer working on an AWS CodeBuild project wants to override a build command as part of a build run to test a change. The developer has access to run the builds but does not have access to edit the source code or the CodeBuild project.
What process should the Developer use to override the build command?
A. Update the buildspec.yml configuration file that is part of the source code and run a new build.
B. Update the command in the Build Commands section during the build run in AWS console
C. Run the start build AWS CLI command with buildspecOverride property set to the new buildspec.yml file
D. Update the buildspec property in the StartBuild API to override the build command during the build run
C. Run the start build AWS CLI command with buildspecOverride property set to the new buildspec.yml file
Because the developer has permission to start builds, they can override the buildspec for a single run from the command line (or the StartBuild API) without modifying the project or the source code.
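A minimal sketch of the command (project name and buildspec path are illustrative placeholders):

```shell
# Override the buildspec for this single run only; the project
# configuration and the source repository stay untouched.
aws codebuild start-build \
    --project-name my-sample-project \
    --buildspec-override file://buildspec-test.yml
```

The override applies only to the build started by this command; subsequent builds use the project's configured buildspec again.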
You are using AWS SAM to define a Lambda function and configure CodeDeploy to manage deployment patterns. With the new Lambda function working as per expectation, which of the following will shift traffic from the original Lambda function to the new Lambda function in the shortest time frame?
A. Canary10Percent5Minutes
B. Linear10PercentEvery10Minutes
C. Canary10Percent15Minutes
D. Linear10PercentEvery5Minute
A. Canary10Percent5Minutes
With a Canary deployment preference, traffic is shifted in two increments. With Canary10Percent5Minutes, 10 percent of traffic is shifted in the first increment and the remaining 90 percent is shifted 5 minutes later, making it the fastest of the listed options.
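As an illustrative SAM template excerpt (function name, handler, and alias are assumptions), the deployment preference is set on the function resource:

```yaml
# Excerpt from a SAM template; resource names are illustrative.
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: app.handler
    Runtime: python3.12
    AutoPublishAlias: live
    DeploymentPreference:
      Type: Canary10Percent5Minutes   # 10% first, remaining 90% after 5 minutes
```

SAM translates this into a CodeDeploy deployment that shifts alias traffic between the old and new function versions.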
A Developer has been asked to create an AWS Elastic Beanstalk environment for a production web application that needs to handle thousands of requests. Currently, the dev environment is running a t1.micro instance.
What is the best way for the developer to provision a new production environment with an m4.large instance instead of a t1.micro?
A. Use CloudFormation to migrate the Amazon EC2 instance type from t1.micro to m4.large
B. Create a new configuration file with the instance type as m4.large and reference this file when provisioning the new environment
C. Provision a m4.large instance directly in the dev environment and deploy to the new production environment
D. Change the instance type value in the configuration file to m4.large by using the update autoscaling group CLI command
B. Create a new configuration file with the instance type as m4.large and reference this file when provisioning the new environment
Configuration options can be saved in saved configurations and in configuration files. Settings in configuration files are not applied directly to the environment and cannot be removed without modifying the configuration files and deploying a new application version.
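A minimal sketch of such a configuration file, following the standard `.ebextensions` convention (the file name is illustrative):

```yaml
# .ebextensions/instance-type.config
option_settings:
  aws:autoscaling:launchconfiguration:
    InstanceType: m4.large
```

Placing this file in the application source bundle's `.ebextensions` directory applies the instance type when the new production environment is provisioned.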
Your team has been instructed to deploy a Microservice and an ETL based application onto AWS. There is a requirement to manage the containerization of the application using Docker. Which of the following would be the ideal way to implement this with the least amount of administrative effort?
A. Use AWS OpsWorks
B. Use Elastic Container Service
C. Deploy Kubernetes on EC2 instances
D. Use the CloudFormation service
B. Use Elastic Container Service
ECS is a fully managed container orchestration service, so it requires the least administrative effort.
C is incorrect because self-managing Kubernetes on EC2 instances incurs significantly more administrative overhead.
You are developing an application that will be comprised of the following architecture:
A set of EC2 instances to process messages
These instances will be spun up by an Autoscaling group
SQS queues to hold the messages to be processed
There will be two pricing tiers: normal and premium.
How will you ensure the premium customers’ messages are given preference?
A. Create 2 Autoscaling groups, one for normal and one for premium customers
B. Create 2 sets of EC2 instances, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
D. Create 2 Elastic Load Balancers, one for normal and one for premium customers
C. Create 2 SQS queues, one for normal and one for premium customers
The application can poll the premium (high-priority) queue first and process the normal queue only when the premium queue is empty.
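The polling order can be sketched as follows; in-memory deques stand in for the two SQS queues, which in production would each be polled with `receive_message` (queue names and messages are illustrative):

```python
from collections import deque

# Stand-ins for the two SQS queues; the premium queue is always
# drained before the standard queue is touched.
premium_queue = deque(["premium-msg-1"])
standard_queue = deque(["standard-msg-1", "standard-msg-2"])

def next_message():
    """Return the next message, preferring the premium queue."""
    if premium_queue:
        return premium_queue.popleft()
    if standard_queue:
        return standard_queue.popleft()
    return None

order = []
while (msg := next_message()) is not None:
    order.append(msg)

print(order)  # ['premium-msg-1', 'standard-msg-1', 'standard-msg-2']
```

The premium message is processed first even though the standard queue holds more messages, which is exactly the preference the two-queue design provides.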
Your team has been instructed to develop a completely new solution for AWS. Currently, you have a limitation on the tools available to manage the complete lifecycle of the project. Which of the following services from AWS could help you handle all aspects of development and deployment?
A. AWS CodePipeline
B. AWS CodeBuild
C. AWS CodeCommit
D. AWS CodeStar
D. AWS CodeStar
CodeStar provides a unified interface that lets you quickly develop, build, and deploy applications on AWS, managing the complete project lifecycle in one place.
You are using S3 buckets to store images. These S3 buckets invoke a Lambda function on upload. The Lambda function creates thumbnails of the images and stores them in another S3 bucket. An AWS CloudFormation template is used to create the Lambda function with the resource type “AWS::Lambda::Function”. Which of the following attributes is the method name that Lambda calls to execute the function?
A. FunctionName
B. Layers
C. Environment
D. Handler
D. Handler
The handler is the name of the method within your code that Lambda calls to execute the function.
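An illustrative CloudFormation excerpt showing where the Handler property sits (resource, role, and bucket names are assumptions):

```yaml
# CloudFormation template excerpt; all names are illustrative.
ThumbnailFunction:
  Type: AWS::Lambda::Function
  Properties:
    FunctionName: create-thumbnails
    Runtime: python3.12
    Handler: index.create_thumbnail   # file index.py, method create_thumbnail
    Role: !GetAtt LambdaExecutionRole.Arn
    Code:
      S3Bucket: my-deployment-bucket
      S3Key: thumbnails.zip
```

For Python runtimes the Handler value takes the form `file_name.method_name`, so Lambda imports `index.py` and invokes `create_thumbnail` on each event.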
In API Gateway, when a stage variable is used as part of an HTTP integration URL, which of the following are correct ways of defining a “subdomain” and the “path”?
A. http://example.com/${<variable_name>}/prod
B. http://example.com/${stageVariables.<variable_name>}/prod
C. http://${stageVariables.<variable_name>}.example.com/dev/operation
D. http://${stageVariables}.example.com/dev/operation
E. http://${<variable_name>}.example.com/dev/operation
F. http://example.com/${stageVariables}/prod
B. http://example.com/${stageVariables.<variable_name>}/prod
C. http://${stageVariables.<variable_name>}.example.com/dev/operation
Company B is writing 10 items to a DynamoDB table every second. Each item is 15.5 KB in size. What would be the required provisioned write throughput for best performance?
A. 10
B. 160
C. 155
D. 16
B. 160
Write capacity units are measured in 1 KB increments, with the item size rounded up to the nearest 1 KB. 15.5 KB rounds up to 16 KB, so each write consumes 16 WCUs. Writing 10 items per second therefore requires 10 × 16 = 160 WCUs.
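The calculation above can be expressed as a short sketch (the function name is illustrative):

```python
import math

def required_wcu(items_per_second, item_size_kb):
    """One WCU covers one write per second of an item up to 1 KB;
    larger items consume ceil(size_kb) WCUs per write."""
    return items_per_second * math.ceil(item_size_kb)

print(required_wcu(10, 15.5))  # 160
```

The same rounding logic applies to read capacity, except reads are measured in 4 KB increments rather than 1 KB.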
A reputed pharma company has deployed its updated dealer network application on a set of EC2 instances. Using CloudWatch Logs to monitor the application logs, their IT team wishes to search for missing files or resources at particular positions in the code and report that data as a CloudWatch metric, which can then be monitored. Which of the following measures needs to be used to fulfill the requirement?
A. Set up & install cloudwatch agent on EC2 to send logs for CloudWatch to monitor
B. Create a custom role in IAM with relevant write permissions & associate them with EC2 instances. Install Cloudwatch agent on EC2 instances. Create log groups in CloudWatch Logs through the console along with CloudWatch Agent configuration file and use filters to search for 404 errors
C. Application logs cannot be monitored by CloudWatch
D. EC2 instances can directly send application logs to CloudWatch
B. Create a custom role in IAM with relevant write permissions & associate them with EC2 instances. Install Cloudwatch agent on EC2 instances. Create log groups in CloudWatch Logs through the console along with CloudWatch Agent configuration file and use filters to search for 404 errors
Installing the agent alone is not enough: the instances need an IAM role with permission to write to CloudWatch Logs, the CloudWatch agent must be configured to ship the application log files to a log group, and a metric filter on that log group (for example, matching 404 errors) turns matching log events into a CloudWatch metric that can be monitored.
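A minimal sketch of the agent configuration's logs section (the file path and log group name are illustrative); a metric filter matching a pattern such as `404` would then be created on the resulting log group:

```json
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/myapp/app.log",
            "log_group_name": "myapp-logs",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}
```

With this in place, each instance streams its application log to the `myapp-logs` group, and the metric filter publishes a count of matching 404 events that CloudWatch alarms can monitor.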
An enterprise gaming company has recently launched its new soccer game and wishes to bring scalability, availability, and better performance in terms of durability and consistency. The existing setup uses Redis but is facing latency and throughput issues. You are required to propose an upgrade or a new solution/service to meet and support more than 100 million requests per second as part of the requirements for the new game.
A. Upgrade the EC2 instances from m6in.4xlarge to m6in.24xlarge
B. Introduce Amazon MemoryDB for Redis-based architecture, bringing in ultrafast performance and Multi-AZ durability
C. Introduce a Multi-AZ setup and migrate the DB to DynamoDB using AWS Database Migration Service. Use DynamoDB Accelerator (DAX) - this will help in increasing the read performance from milliseconds to microseconds even if there are millions of requests per second.
D. Migrate the entire architecture to AWS EKS which is fully managed Kubernetes service that automatically manages the availability and scalability of the application
B. Introduce Amazon MemoryDB for Redis-based architecture, bringing in ultrafast performance and Multi-AZ durability
Since the company already has a Redis setup, migrating to Amazon MemoryDB for Redis can enhance data durability, consistency, and recoverability, because MemoryDB uses a distributed transactional log.
Mary is a Docker expert and has deployed multiple projects using AWS Cloud9 as the preferred IDE along with AWS CodeStar to streamline the CI/CD pipelines. She is currently struggling to open a new environment for a new project which involves workloads on Docker. She is unable to connect to the EC2 environment in the project VPC, which has been set up using the IPv4 CIDR block of 172.17.0.0/16. Which of the following will solve the problem?
A. Enable advanced networking for the EC2 instance that is used for AWS Cloud9
B. Configure a new VPC for the instance backing the EC2 environment using 192.168.0.0/16 CIDR block
C. Upgrade the EC2 instance backing the environment from t2.micro to t3.large and try reconnecting
D. Change the IP address range of the existing VPC to 172.17.0.0/18
B. Configure a new VPC for the instance backing the EC2 environment using 192.168.0.0/16 CIDR block
Docker uses the default bridge network on 172.17.0.0/16 for container networking. If the VPC uses the same CIDR range, IP conflicts can occur. The primary CIDR block of an existing VPC cannot be changed, so a new VPC with a non-overlapping range must be created.
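The overlap can be checked with a short standard-library sketch:

```python
import ipaddress

# Docker's default bridge network.
docker_bridge = ipaddress.ip_network("172.17.0.0/16")

# Compare the problematic VPC CIDR and the proposed replacement.
for cidr in ["172.17.0.0/16", "192.168.0.0/16"]:
    vpc = ipaddress.ip_network(cidr)
    status = "conflicts with Docker bridge" if vpc.overlaps(docker_bridge) else "ok"
    print(f"{cidr}: {status}")
```

Running this confirms that 172.17.0.0/16 collides with the Docker bridge range while 192.168.0.0/16 does not, which is why option B resolves the connection problem.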
A leading automobile dealer company that is expanding globally is facing problems ensuring a consistent state of provisioning and maintenance of environments. Their current architecture rolls out Kubernetes jobs through AWS EKS using Spot Instances to create new microservices for each environment requested by a user. However, Spot interruptions are deleting the underlying nodes and jobs are getting terminated, disrupting the entire chain of environment creation for different business units. Which combination of steps will make the provisioning process resilient? Select two options:
A. Replace Spot Instances with Reserved Instances, which will ensure that the underlying infrastructure will not get terminated
B. Integrate AWS API Gateway which will trigger Lambda functions to spin off new instances
C. Integrate AWS SNS with SQS and Dead Letter Queue which will ensure job requests are being managed, stored, and processed seamlessly. DLQ will further enhance and bring in overall consistency
D. Integrate CloudWatch monitoring along with Lambda, to spin off new instances in the event of nodes that are getting terminated.
B. Integrate AWS API Gateway which will trigger Lambda functions to spin off new instances
C. Integrate AWS SNS with SQS and Dead Letter Queue which will ensure job requests are being managed, stored, and processed seamlessly. DLQ will further enhance and bring in overall consistency
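The dead-letter-queue half of the selected answer can be sketched as an SQS redrive policy (the queue ARN and maxReceiveCount are illustrative); this JSON is supplied as the `RedrivePolicy` attribute of the main job-request queue:

```json
{
  "deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:job-requests-dlq",
  "maxReceiveCount": 5
}
```

Job requests that fail processing more than five times (for example, because a Spot node was reclaimed mid-job) move to the DLQ instead of being lost, so they can be inspected and replayed rather than silently breaking the environment-creation chain.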
The DevOps team at DevHub Inc is trying to deploy an application using AWS CodeDeploy and has also integrated the AWS Auto Scaling service, which ensures it always has the correct number of EC2 instances available to handle the load for deployments. The AWS CodeDeploy configuration has been set up in such a way that multiple deployment groups are associated with each AWS Auto Scaling group. The deployment process is running as expected, but deployments are failing. Which two of the following will help correct the problem and ensure that the application deployment on EC2 instances completes successfully?
A. Change the deployment group configuration so that only one deployment group is associated with each AWS AutoScaling group
B. Let AWS CodeDeploy configure the AutoScaling lifecycle hooks instead of using manual configuration
C. Change the timeout period of the script for the lifecycle event in the AppSpec file to 60 minutes. Because CodeDeploy has a one-hour timeout for the CodeDeploy agent to respond to pending deployments, it can take up to 60 minutes for each instance to time out.
D. Use AutoScaling notifications to keep track of terminated EC2 instances that have not been set up.
A. Change the deployment group configuration so that only one deployment group is associated with each AWS AutoScaling group
B. Let AWS CodeDeploy configure the AutoScaling lifecycle hooks instead of using manual configuration
The recommendation is not to set up or modify these lifecycle hooks manually, since CodeDeploy creates and manages them automatically.