AWS Exam Questions Flashcards
I want to be able to give my manager a billing report, how can I do this with AWS?
You can use a ‘Cost and Usage Report’; you can set this report up to deliver to S3.
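As an illustrative sketch (not part of the flashcard), a report definition can also be created programmatically with the Cost and Usage Reports API; the report name, bucket and prefix below are hypothetical.

```python
import boto3

# The Cost and Usage Reports API is only available in us-east-1.
cur = boto3.client("cur", region_name="us-east-1")

# Hypothetical report name, bucket and prefix; the bucket must already
# allow billingreports.amazonaws.com to write to it.
cur.put_report_definition(
    ReportDefinition={
        "ReportName": "monthly-billing-report",
        "TimeUnit": "DAILY",
        "Format": "textORcsv",
        "Compression": "GZIP",
        "AdditionalSchemaElements": ["RESOURCES"],
        "S3Bucket": "example-billing-bucket",
        "S3Prefix": "cur/",
        "S3Region": "us-east-1",
        "RefreshClosedReports": True,
        "ReportVersioning": "CREATE_NEW_REPORT",
    }
)
```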
I have a Java application and a MongoDB NoSQL database that stores the app's customer data. It is currently on-prem and I am migrating to AWS. I want the application to be highly available and have tracing. What options do I have?
- For the Java app, use Elastic Beanstalk, auto-scaled EC2 or Kubernetes; this meets the scaling and availability requirement.
- Use DynamoDB for a scalable and highly available backend.
- Use X-Ray for tracing (see the sketch after this list).
- Use CloudWatch Logs for log visibility.
- Use CloudWatch metrics for performance visibility.
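The flashcard's application is Java, but as a rough illustration of the tracing idea, here is a minimal Python sketch using the AWS X-Ray SDK; outside of Lambda the X-Ray daemon must be running, and the segment name is hypothetical.

```python
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (e.g. boto3, requests) so their calls
# show up as subsegments in the trace.
patch_all()

# Outside of Lambda you open and close segments yourself; inside
# Lambda the segment is created for you automatically.
segment = xray_recorder.begin_segment("customer-lookup")  # hypothetical name
try:
    # ... application work, e.g. a DynamoDB call via a patched boto3 client ...
    pass
finally:
    xray_recorder.end_segment()
```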
I want to run a batch script at 8pm to collect stats and generate a report, what options do I have?
You can use CloudWatch Events to create a scheduled rule that triggers a Lambda function.
When using CloudWatch Events, is “00 08 * * ? *” a correct cron expression?
Yes. CloudWatch Events cron expressions have six fields (minutes, hours, day-of-month, month, day-of-week, year); this one runs daily at 08:00 UTC.
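As a sketch of wiring this up (names and ARNs are hypothetical), a scheduled rule for the 8 PM batch job would use cron(0 20 * * ? *):

```python
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Hypothetical function ARN and rule name; runs daily at 8 PM UTC.
function_arn = "arn:aws:lambda:us-east-1:111122223333:function:nightly-report"
rule = events.put_rule(
    Name="nightly-report-8pm",
    ScheduleExpression="cron(0 20 * * ? *)",
    State="ENABLED",
)

# Point the rule at the Lambda function...
events.put_targets(
    Rule="nightly-report-8pm",
    Targets=[{"Id": "report-lambda", "Arn": function_arn}],
)

# ...and allow CloudWatch Events to invoke it.
lambda_client.add_permission(
    FunctionName="nightly-report",
    StatementId="allow-events-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)
```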
I have a legacy application that used a traditional load balancer. My application uses a certificate between the calling application and the LB. What options do I have?
Select a Network Load Balancer, as an Application Load Balancer is not suitable. The Network Load Balancer will pass traffic straight through to the application without touching it, and the application will see the traffic as coming from the calling application, not the LB.
I have a legacy application that used a traditional load balancer, and my application uses a certificate between the calling application and the LB; the application also uses TLS. What type of LB and protocol should I select on the LB?
A Network Load Balancer with the TLS protocol.
I have to trigger a build of application code at 2 PM each day and also send an email to the dev team. How can I trigger this build and send the email?
You can use CloudWatch Events to trigger the build by setting a cron schedule expression on a rule and adding two targets: one for the CodeBuild build and one for the SNS email.
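A minimal sketch of the two-target rule, assuming a hypothetical CodeBuild project, SNS topic and an IAM role that lets the rule start builds:

```python
import boto3

events = boto3.client("events")

events.put_rule(
    Name="daily-build-2pm",
    ScheduleExpression="cron(0 14 * * ? *)",  # 2 PM UTC daily
    State="ENABLED",
)

# Two targets on the same rule: one starts the CodeBuild project,
# one publishes to the SNS topic that emails the dev team.
events.put_targets(
    Rule="daily-build-2pm",
    Targets=[
        {
            "Id": "start-build",
            "Arn": "arn:aws:codebuild:us-east-1:111122223333:project/app-build",
            "RoleArn": "arn:aws:iam::111122223333:role/events-start-build",
        },
        {
            "Id": "notify-devs",
            "Arn": "arn:aws:sns:us-east-1:111122223333:dev-team-builds",
        },
    ],
)
```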
What are the two conditions you must meet when switching role to another account? Are the following true or false: the user must not be a root user to switch role, and the user must be granted permissions to assume the role?
Both are true.
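For illustration, assuming a role with a hypothetical ARN that the calling IAM user is permitted to assume, switching programmatically looks like this:

```python
import boto3

sts = boto3.client("sts")

# The calling IAM user (not the root user) must be allowed sts:AssumeRole
# on this role, and the role's trust policy must allow the user's account.
resp = sts.assume_role(
    RoleArn="arn:aws:iam::222233334444:role/other-account-admin",  # hypothetical
    RoleSessionName="cross-account-session",
)

creds = resp["Credentials"]
# Use the temporary credentials to work in the other account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_buckets()["Buckets"])
```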
I am using Glue. Pick what is true from the following: a) Glue contains a crawler that can connect to S3 and create metadata tables in a data catalogue; b) Glue can automatically generate Java code to extract data from the source and transform the data to a schema; c) Glue has a central metadata repository and the data can be analyzed straight away.
A and C are true; B is incorrect, because Glue generates Python or Scala code, not Java.
I have an S3 bucket and an application that writes user files to the S3 bucket. I want to keep track of how much storage each user is using and send a notification email. How can I do this? a) Iterate over the files using a Lambda function triggered on notifications, or b) write how much storage each user has used to DynamoDB?
b) is correct, as you do not want to iterate over large amounts of data on every notification.
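A minimal Lambda sketch of option b), assuming the object key starts with the user's id (e.g. user123/file.txt) and a hypothetical user_storage table keyed on user_id:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("user_storage")  # hypothetical table name


def handler(event, context):
    """Triggered by S3 object-created notifications; adds each object's
    size to the owning user's running total."""
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        size = record["s3"]["object"]["size"]
        user_id = key.split("/")[0]  # assumes keys look like <user>/<file>
        table.update_item(
            Key={"user_id": user_id},
            UpdateExpression="ADD total_bytes :s",
            ExpressionAttributeValues={":s": size},
        )
```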
You have been taking frequent snapshots and you want to restore 10 files from the volume. How can you do this?
Using the snapshots, create a new volume from a snapshot, attach and mount the volume on the existing instance, navigate to where the files are on the disk, and copy the 10 files.
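As a sketch (snapshot ID, instance ID, AZ and device name are hypothetical), the API side of the restore before mounting the volume from the OS:

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new volume from the snapshot, in the instance's AZ.
vol = ec2.create_volume(
    SnapshotId="snap-0123456789abcdef0",   # hypothetical
    AvailabilityZone="us-east-1a",
)

ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# Attach it to the running instance, then mount it from the OS and
# copy the 10 files off it.
ec2.attach_volume(
    VolumeId=vol["VolumeId"],
    InstanceId="i-0123456789abcdef0",      # hypothetical
    Device="/dev/sdf",
)
```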
I am collecting thermostat information at a rate of 1K every minute and I have 5.5M thermostats in the USA. How best can I collect this type of data? Could I use S3, and is it a good choice?
S3 is not the best choice here; it is possible, but a better choice would be Kinesis.
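A sketch of what a producer would send, assuming a hypothetical stream named thermostat-readings:

```python
import json
import boto3

kinesis = boto3.client("kinesis")

reading = {"thermostat_id": "t-000123", "temp_c": 21.4, "ts": 1700000000}

# Partitioning by thermostat id spreads the 5.5M devices across shards.
kinesis.put_record(
    StreamName="thermostat-readings",          # hypothetical
    Data=json.dumps(reading).encode("utf-8"),
    PartitionKey=reading["thermostat_id"],
)
```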
I have 999 users and I want to set up an Active Directory for use with a new AWS app. I want to be able to use my existing on-prem Active Directory with this new directory. What are my options?
You need a compatible AD, so Simple AD is not a choice here. AWS Directory Service for Microsoft Active Directory is a possible option, where you can set up a trust relationship with on-prem. AD Connector is probably the best option; it connects directly with on-prem.
I have two groups using Redshift and I want to ensure that the groups' queries do not have to wait on each other. How can I ensure this?
You can configure Redshift workload management (WLM) with a separate query queue for each group.
I have a mobile application that writes data to Redshift tables. I want to set up how the app will access the tables. How can I do this?
Set up an IAM role that allows web identity federation using OAuth.
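As an illustrative sketch, the app exchanges its OAuth/OIDC token for temporary credentials scoped by that role; the role ARN and token placeholder are hypothetical:

```python
import boto3

sts = boto3.client("sts")

# Token obtained by the mobile app from its OAuth/OIDC identity provider.
web_identity_token = "<token-from-identity-provider>"  # hypothetical placeholder

resp = sts.assume_role_with_web_identity(
    RoleArn="arn:aws:iam::111122223333:role/mobile-redshift-writer",  # hypothetical
    RoleSessionName="mobile-app-session",
    WebIdentityToken=web_identity_token,
)

creds = resp["Credentials"]  # temporary keys the app uses to reach the backend
```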
I have a MySQL DB in the EU, Asia and HQ in the US. I run an hourly report where I need the data from all regions. How best can I do this? I am using RDS.
You can set up an RDS master in each region and create read replicas of them in the HQ region in the USA.
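A sketch of creating one such cross-region replica in the HQ region (identifiers and ARNs hypothetical), using an RDS client in the US region:

```python
import boto3

# Client in the HQ (destination) region.
rds = boto3.client("rds", region_name="us-east-1")

# Replicate the EU master into the HQ region; the source is referenced
# by its ARN, and SourceRegion lets boto3 presign the cross-region call.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-eu-replica-hq",
    SourceDBInstanceIdentifier="arn:aws:rds:eu-west-1:111122223333:db:orders-eu",
    SourceRegion="eu-west-1",
)
```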
I have licences tied to the MAC address of the instance. How can I ensure the MAC address will not change?
Create an ENI and attach it to the instance; the ENI's MAC address will not change.
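A sketch (subnet, security group and instance IDs hypothetical); the ENI keeps its MAC address even if you later detach it and move it to a replacement instance:

```python
import boto3

ec2 = boto3.client("ec2")

# Create the ENI; its MAC address is fixed for the life of the ENI.
eni = ec2.create_network_interface(
    SubnetId="subnet-0123456789abcdef0",       # hypothetical
    Groups=["sg-0123456789abcdef0"],           # hypothetical
    Description="license-bound interface",
)["NetworkInterface"]

print("Licensed MAC:", eni["MacAddress"])

# Attach it as a secondary interface on the instance.
ec2.attach_network_interface(
    NetworkInterfaceId=eni["NetworkInterfaceId"],
    InstanceId="i-0123456789abcdef0",          # hypothetical
    DeviceIndex=1,
)
```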
We have a customer that uses AWS and we want to share our service running in a VPC with the customer so they can do some work. How can we share the service?
Use VPC peering to share the VPC and the service.
I have a number of EC2 instances and I want to check the logs from both the OS and Apache/IIS for security issues. How can I do this in real time?
- Install the CloudWatch Logs agent on the instances.
- Configure a Lambda function with a trigger (subscription) on the CloudWatch log group (see the sketch below).
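A minimal sketch of the Lambda side, assuming it is subscribed to the log group; CloudWatch Logs delivers the events base64-encoded and gzip-compressed:

```python
import base64
import gzip
import json


def handler(event, context):
    """Invoked by a CloudWatch Logs subscription; scans each log line
    for a simple indicator of a security issue."""
    payload = json.loads(
        gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    )
    for log_event in payload["logEvents"]:
        message = log_event["message"]
        # Hypothetical check; replace with real pattern matching/alerting.
        if "Failed password" in message or " 403 " in message:
            print("possible security issue:", message)
```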
I have an Auto Scaling group using 3 Availability Zones. One zone currently has issues and no instances are running in it; all instances are across the other zones. The failed zone comes back online. What will happen?
An AZRebalance will take place: new instances will be launched in the recovered zone, and once they are up and healthy, instances will be terminated in the other two zones until there are equal numbers in all zones.
I am using Auto Scaling. From CloudWatch I can see the Auto Scaling group launching more instances than the max and then terminating instances to bring the overall number back to the max. Why?
This happens when the Auto Scaling group is performing an AZRebalance.
Why does the Auto Scaling group create new instances before terminating old ones in an AZRebalance?
To ensure your capacity is maintained and your application does not suffer capacity issues.
An International company has deployed a multi-tier web application that relies on DynamoDB in a single region. For regulatory reasons they need disaster recovery capability in a separate region with a Recovery Time Objective of 2 hours and a Recovery Point Objective of 24 hours. They should synchronize their data on a regular basis and be able to provision the web application rapidly using CloudFormation. The objective is to minimize changes to the existing web application, control the throughput of DynamoDB used for the synchronization of data and synchronize only the modified elements. Which design would you choose to meet these requirements?
- Use EMR and write a custom script to retrieve data from DynamoDB in the current region using a SCAN operation and push it to DynamoDB in the second region. (Incorrect: no schedule or throughput control.)
- Use AWS Data Pipeline to schedule an export of the DynamoDB table to S3 in the current region once a day, then schedule another task immediately after it that imports the data from S3 to DynamoDB in the other region. (Correct: with AWS Data Pipeline the data can be copied on a schedule, with throughput control, directly to the other DynamoDB table.)
- Send each item into an SQS queue in the second region and use an Auto Scaling group behind the SQS queue to replay the writes in the second region. (Incorrect: not an automated way to replay the writes.)
I am a SaaS provider and my clients are already on AWS. I want to share my service through my VPC. How can I do this?
One could use VPC peering, but this is not the right choice, as you have many customers and IP overlap is an issue. You will want to share your service through PrivateLink, so your customers can create an endpoint in their own VPCs and access your service.
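A sketch of both sides of PrivateLink (all IDs and ARNs hypothetical): the provider publishes an endpoint service backed by an NLB, and each customer creates an interface endpoint to it from their own VPC:

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side: expose the service (which sits behind an NLB) as an
# endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    AcceptanceRequired=True,
    NetworkLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/saas-nlb/abc123"
    ],
)["ServiceConfiguration"]

# Customer side (run in the customer's own account and VPC): create an
# interface endpoint to that service name.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=svc["ServiceName"],
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)
```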
Can you enable Cost Explorer, and is it enabled by default?
By default Cost Explorer is disabled. The payer (master) account can enable Cost Explorer at the root level, which automatically enables it for all linked (member) accounts.
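Once it is enabled, the cost data can also be queried through the Cost Explorer API; a small sketch with hypothetical dates:

```python
import boto3

ce = boto3.client("ce")

# Monthly unblended cost for a hypothetical one-month window.
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

for result in resp["ResultsByTime"]:
    print(result["TimePeriod"], result["Total"]["UnblendedCost"]["Amount"])
```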
In relation to EC2, what should we be doing to ensure costs are managed?
We should be:
- Using Auto Scaling groups
I am architecting a solution and want to better understand the approximate monthly cost of my solution for my customer. What is the best option?
The AWS Simple Monthly Calculator will enable you to calculate the approximate monthly cost of your architecture.
Can I use the AWS Simple Monthly Calculator to see my in-production costs?
No! The AWS Simple Monthly Calculator only shows you what a configuration could cost, such as when you are architecting a solution. It does not show you the cost of current, running resources.
I am using EMR, should I favour spot pricing over on-demand?
Yes, for the core and task nodes.
For EBS-optimized instances, is the throughput limited, and how can you increase it if needed?
Yes, it is limited based on instance size; you can increase it by moving to a larger instance size.
If I change the size of an EBS-optimized instance, what will happen?
The throughput and IOPS will change; larger instances have higher throughput and IOPS.
Can I add a WAF web ACL to an NLB?
No, you can only attach a WAF web ACL to Layer 7 services; the supported ones are API Gateway, CloudFront and ALB.
When creating a pre-signed URL, what must I ensure so that users of the pre-signed URL have read and write permissions?
Ensure the user (the credentials) creating the URL has read-write permissions, as the pre-signed URL inherits the permissions of its signer.
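A sketch showing that the URL is signed with the caller's own credentials (bucket and key are hypothetical), so the URL can do no more than that caller can:

```python
import boto3

s3 = boto3.client("s3")

# A URL that lets the holder download the object for one hour.
download_url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "reports/q1.pdf"},  # hypothetical
    ExpiresIn=3600,
)

# A URL that lets the holder upload to the same key for one hour.
upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600,
)
```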
A customer needs governance and cost control over a number of accounts, what are the best options?
Start using AWS Organizations.
What is the component in Auto Scaling that controls scaling up or down?
It is the Auto Scaling group's scaling policy.
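For illustration, a target-tracking scaling policy on a hypothetical group, which scales out and in to hold average CPU near 50%:

```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",            # hypothetical group name
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```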