Whizlabs V Flashcards
How can you perform a blue/green deployment in Elastic Beanstalk?
Clone your current environment, or launch a new environment running the configuration you want. Deploy the new application version to the new environment. Test the new version of the environment. From the new environment’s dashboard, choose Actions, then Swap Environment URLs.
You want to deploy a Lambda function using the Serverless Application Model. After you’ve created the function, what are the next steps in the serverless deployment?
A. Create a YAML file with the deployment specifics and package that along with the function file.
B. Upload the application function file to an S3 bucket.
C. Upload the function to AWS Lambda.
D. Upload the complete package to an S3 bucket.
A & D.
Create a YAML file with the deployment specifics and package that along with the function file. AND, upload the complete package to an S3 bucket.
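The YAML file in option A is a SAM template. A minimal sketch, assuming a hypothetical handler at src/index.py (the resource name, runtime, and paths are illustrative placeholders):

```yaml
# template.yaml -- minimal SAM sketch; names and paths are placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python3.12
      CodeUri: ./src
```

Running `aws cloudformation package --template-file template.yaml --s3-bucket <bucket>` zips the function code, uploads the package to the S3 bucket (option D), and emits a transformed template ready for `aws cloudformation deploy`.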
What is AWS Step Functions?
AWS Step Functions is a web service that enables you to coordinate the components of applications and microservices using visual workflows.
You have created a Lambda step function which is generating a “ServiceException”. What is the best practice to handle this exception?
Use a Retry field with an “ErrorEquals” matcher. Within a Retrier, the “ErrorEquals” field is required (it holds an array of error names to match); all other fields are optional.
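A sketch of a Task state with such a Retry field, written in Amazon States Language (the Lambda ARN and the retry tuning values are placeholders):

```json
{
  "InvokeFunction": {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:MyFunction",
    "Retry": [
      {
        "ErrorEquals": ["Lambda.ServiceException"],
        "IntervalSeconds": 2,
        "MaxAttempts": 6,
        "BackoffRate": 2.0
      }
    ],
    "End": true
  }
}
```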
Your team is using CodeBuild for an application build. As part of the integration testing during the build phase, the application needs to access an RDS instance in a private subnet. How can you ensure this is possible?
Provide additional VPC-specific configuration information as part of your CodeBuild project.
Typically, resources in a VPC are not accessible by CodeBuild. To enable access, you must provide additional VPC-specific configuration information as part of your CodeBuild project configuration. This includes the VPC ID, the VPC subnet IDs, and the VPC security group IDs. VPC-enabled builds are then able to access resources inside your VPC.
VPC connectivity from CodeBuild builds makes it possible to:
- Run integration tests from your build against data in an Amazon RDS database that’s isolated on a private subnet.
- Query data in an ElastiCache cluster directly from tests.
- Interact with internal web services hosted on EC2, ECS, or services that use internal Elastic Load Balancing.
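A sketch of the VPC-specific section of a CodeBuild project configuration, e.g. as passed to `aws codebuild create-project` (all IDs are placeholders):

```json
{
  "vpcConfig": {
    "vpcId": "vpc-0123456789abcdef0",
    "subnets": ["subnet-0123456789abcdef0"],
    "securityGroupIds": ["sg-0123456789abcdef0"]
  }
}
```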
What does the X-Ray SDK provide?
The X-Ray SDK provides:
- Interceptors to add to your code to trace incoming HTTP requests.
- Client handlers to instrument AWS SDK clients that your application uses to call other AWS services.
- An HTTP client to use to instrument calls to other internal and external HTTP web services.
In AWS Data Pipeline, what is a data node?
In AWS Data Pipeline, a data node defines the location and type of data that a pipeline activity uses as input or output.
You’re planning on using the Data Pipeline service to transfer data from S3 to Redshift. You need to define the source and destination locations. What part of the Data Pipeline service allows you to define these locations?
Data nodes allow you to define the source and destination locations when using Data Pipeline to transfer data.
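A trimmed sketch of a pipeline definition with an S3 input node and a Redshift output node (IDs, the bucket path, and the table name are placeholders; a real definition would also need a schedule, a database object, and a RedshiftCopyActivity linking the two nodes):

```json
{
  "objects": [
    {
      "id": "InputS3Node",
      "type": "S3DataNode",
      "directoryPath": "s3://example-bucket/input/"
    },
    {
      "id": "OutputRedshiftNode",
      "type": "RedshiftDataNode",
      "tableName": "sales_history"
    }
  ]
}
```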
You are working on a system that will make use of AWS Kinesis, and it is getting data from various log sources. You are looking at creating an initial number of shards for the Kinesis stream. What can be used to calculate the initial number of shards for the Kinesis stream?
Incoming write bandwidth and outgoing read bandwidth.
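The AWS-documented sizing formula divides incoming write bandwidth by the 1 MiB/s per-shard write limit and outgoing read bandwidth by the 2 MiB/s per-shard read limit, then takes the larger result. A minimal sketch (the function name and example inputs are illustrative):

```python
import math

def initial_shard_count(avg_record_kib, records_per_sec, consumers):
    """Estimate the initial shard count for a Kinesis stream.

    number_of_shards = max(incoming_write_KiB_per_s / 1024,
                           outgoing_read_KiB_per_s / 2048)
    """
    incoming = avg_record_kib * records_per_sec  # KiB/s written to the stream
    outgoing = incoming * consumers              # KiB/s read by all consumers
    return max(math.ceil(incoming / 1024), math.ceil(outgoing / 2048))

# 2 KiB records at 1,000 records/s with 3 consuming applications:
print(initial_shard_count(2, 1000, 3))  # → 3
```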
What are two possible sources for storing Docker-based images?
Docker Hub or Elastic Container Registry.
What are some recommendations for defining secondary indexes?
Keep the number of indexes to a minimum AND avoid indexing tables that experience heavy write activity.
- Keep the number of indexes to a minimum. Don’t create secondary indexes on attributes that you don’t query often. Indexes that are seldom used contribute to increased storage and I/O costs without improving application performance.
- Avoid indexing tables that experience heavy write activity. In a data capture application, for example, the cost of the I/O operations required to maintain an index on a table with a very high write load can be significant. If you need to index data in such a table, it may be more effective to copy the data to another table that has the necessary indexes and query it there.
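Assuming the tables in question are DynamoDB tables, a sketch of the index-definition fragment this advice applies to; projecting only keys (rather than all attributes) is one way to limit an index’s storage and write-amplification cost (the index and attribute names are placeholders):

```json
{
  "GlobalSecondaryIndexes": [
    {
      "IndexName": "status-index",
      "KeySchema": [
        { "AttributeName": "status", "KeyType": "HASH" }
      ],
      "Projection": { "ProjectionType": "KEYS_ONLY" }
    }
  ]
}
```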
Your team is planning on delivering content to users by using the CloudFront service with an S3 bucket as the origin. You need to set a custom value for the amount of time objects stay in the CloudFront cache. What can be used to fulfill this requirement?
For web distributions, to control how long your objects stay in a CloudFront cache before CloudFront forwards another request to your origin, you can:
- Configure your origin to add a Cache-Control or an Expires header field to each object.
- Specify a value for minimum TTL in CloudFront cache behaviors.
- Use the default value of 24 hours.
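A sketch of the second option as a CloudFormation cache-behavior fragment (the origin ID and TTL values are placeholders):

```yaml
# Illustrative CloudFront cache behavior fragment; values are placeholders.
DefaultCacheBehavior:
  TargetOriginId: s3-origin
  ViewerProtocolPolicy: redirect-to-https
  MinTTL: 0
  DefaultTTL: 86400      # 24 hours, CloudFront's default
  MaxTTL: 31536000
  ForwardedValues:
    QueryString: false
```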
Your company has an existing Redshift cluster. The sales team currently store historical data in the cluster. There is now a requirement to ensure that all data is encrypted at rest. What do you need to do on your end?
Historically, encryption was an immutable setting that could only be enabled during the cluster launch process. As of October 2018, you can enable encryption on an unencrypted cluster and AWS will handle migrating the data to a new, encrypted cluster behind the scenes.
What can the Route 53 health checks be used for?
Route 53 health checks monitor the health and performance of your web applications, web servers and other resources. Each health check that you create can monitor one of the following:
- The health of a specified resource, such as a web server.
- The status of other health checks.
- The status of an Amazon CloudWatch alarm.
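A sketch of a health-check configuration for the first case, e.g. the HealthCheckConfig passed to `aws route53 create-health-check` (the domain, path, and thresholds are placeholders):

```json
{
  "Type": "HTTPS",
  "FullyQualifiedDomainName": "www.example.com",
  "Port": 443,
  "ResourcePath": "/health",
  "RequestInterval": 30,
  "FailureThreshold": 3
}
```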
You’re using S3 to host a static website. A bucket has been defined with the domain name, the objects have been uploaded, and static website hosting has been enabled. What could be the reason you are still not able to access the website?
The bucket must have public read access. To host a static website, you configure an Amazon S3 bucket for website hosting, and then upload your website content to the bucket. This bucket must have public read access. It is intentional that everyone in the world will have read access to this bucket.
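Public read access is typically granted with a bucket policy like the following (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
```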