Question 101
A company runs an on-premises application that is powered by a MySQL database. The company is migrating the application to AWS to increase the application's elasticity and availability.
The current architecture shows heavy read activity on the database during times of normal operation. Every 4 hours, the company's development team pulls a full export of the production database to populate a database in the staging environment. During this period, users experience unacceptable application latency. The development team is unable to use the staging environment until the procedure completes.
A solutions architect must recommend a replacement architecture that alleviates the application latency issue. The replacement architecture must also let the development team continue using the staging environment without delay.
Which solution meets these requirements?
A. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
B. Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
C. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Use the standby instance for the staging database.
D. Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Populate the staging database by implementing a backup and restore process that uses the mysqldump utility.
Answer is Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Use database cloning to create the staging database on-demand.
Aurora database cloning uses a copy-on-write protocol, so a staging clone of the production cluster can be created in minutes without exporting any data from production. This removes the export-driven latency that users experience and lets the development team use the staging environment immediately, while Multi-AZ Aurora Replicas absorb the heavy read traffic and provide elasticity and availability for the production application.
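As a rough illustration, an Aurora clone is created through a copy-on-write point-in-time restore. The sketch below uses boto3 with hypothetical cluster identifiers and instance class; the team could delete and recreate the clone every 4 hours without touching production:

import boto3

rds = boto3.client("rds")

# Clone the production cluster with copy-on-write; no data is exported.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="staging-cluster",
    SourceDBClusterIdentifier="production-cluster",
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)

# A cloned cluster needs at least one DB instance to accept connections.
rds.create_db_instance(
    DBInstanceIdentifier="staging-instance-1",
    DBClusterIdentifier="staging-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)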
Option A: Use Amazon Aurora MySQL with Multi-AZ Aurora Replicas for production. Populating the staging database with a mysqldump backup and restore still requires a full export of the production database, which causes the same unacceptable application latency.
Option C: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. Using the standby instance for the staging database does not work because a Multi-AZ standby exists only for failover in case of a primary instance failure; it cannot be read from or written to, so it cannot serve as a staging environment.
Option D: Use Amazon RDS for MySQL with a Multi-AZ deployment and read replicas for production. As with option A, populating the staging database with a mysqldump backup and restore involves taking a full export of the production database, which can cause unacceptable application latency.
Question 102
A company is designing an application where users upload small files into Amazon S3. After a user uploads a file, the file requires one-time simple processing to transform the data and save the data in JSON format for later analysis.
Each file must be processed as quickly as possible after it is uploaded. Demand will vary. On some days, users will upload a high number of files. On other days, users will upload a few files or no files.
Which solution meets these requirements with the LEAST operational overhead?
A. Configure Amazon EMR to read text files from Amazon S3. Run processing scripts to transform the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
B. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use Amazon EC2 instances to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
C. Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
D. Configure Amazon EventBridge (Amazon CloudWatch Events) to send an event to Amazon Kinesis Data Streams when a new file is uploaded. Use an AWS Lambda function to consume the event from the stream and process the data. Store the resulting JSON file in an Amazon Aurora DB cluster.
Answer is Configure Amazon S3 to send an event notification to an Amazon Simple Queue Service (Amazon SQS) queue. Use an AWS Lambda function to read from the queue and process the data. Store the resulting JSON file in Amazon DynamoDB.
AWS Lambda is a serverless computing service that allows you to run code without provisioning or managing infrastructure. When a new file is uploaded, Amazon S3 sends an event notification message to the SQS queue. The Lambda function is triggered by messages in the queue; it processes the data and stores the resulting JSON in Amazon DynamoDB.
Using a serverless solution like AWS Lambda can help to reduce operational overhead because it automatically scales to meet demand and does not require you to provision and manage infrastructure. Additionally, using an SQS queue as a buffer between the S3 event notification and the Lambda function can help to decouple the processing of the data from the uploading of the data, allowing the processing to happen asynchronously and improving the overall efficiency of the system.
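A minimal sketch of the Lambda handler for this pattern, assuming a hypothetical DynamoDB table named processed-files and a placeholder transform step:

import json
import urllib.parse

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("processed-files")  # hypothetical table


def handler(event, context):
    # Each SQS record body is an S3 event notification in JSON form.
    for sqs_record in event["Records"]:
        notification = json.loads(sqs_record["body"])
        for s3_record in notification.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(s3_record["s3"]["object"]["key"])

            # One-time simple processing: read the file and transform it.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
            item = {"fileKey": key, "payload": transform(body)}

            # Save the JSON-shaped result for later analysis.
            table.put_item(Item=item)


def transform(text):
    # Placeholder for the application's actual transformation logic.
    return {"lineCount": len(text.splitlines())}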
Question 103
An application allows users at a company's headquarters to access product data. The product data is stored in an Amazon RDS MySQL DB instance. The operations team has isolated an application performance slowdown and wants to separate read traffic from write traffic. A solutions architect needs to optimize the application's performance quickly.
What should the solutions architect recommend?
A. Change the existing database to a Multi-AZ deployment. Serve the read requests from the primary Availability Zone.
B. Change the existing database to a Multi-AZ deployment. Serve the read requests from the secondary Availability Zone.
C. Create read replicas for the database. Configure the read replicas with half of the compute and storage resources as the source database.
D. Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.
Answer is Create read replicas for the database. Configure the read replicas with the same compute and storage resources as the source database.
Creating read replicas allows the application to offload read traffic from the source database, improving its performance. The read replicas should be configured with the same compute and storage resources as the source database to ensure that they can handle the read workload effectively.
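As a sketch, a read replica sized the same as the source can be created with boto3; the instance identifiers and class below are hypothetical:

import boto3

rds = boto3.client("rds")

# Create a replica with the same instance class as the source so it can
# absorb the read workload; the application then sends its read-only
# queries to the replica's endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="product-db-replica-1",
    SourceDBInstanceIdentifier="product-db",
    DBInstanceClass="db.r6g.xlarge",
)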
Question 104
An image-processing company has a web application that users use to upload images. The application uploads the images into an Amazon S3 bucket. The company has set up S3 event notifications to publish the object creation events to an Amazon Simple Queue Service (Amazon SQS) standard queue. The SQS queue serves as the event source for an AWS Lambda function that processes the images and sends the results to users through email.
Users report that they are receiving multiple email messages for every uploaded image. A solutions architect determines that SQS messages are invoking the Lambda function more than once, resulting in multiple email messages.
What should the solutions architect do to resolve this issue with the LEAST operational overhead?
A. Set up long polling in the SQS queue by increasing the ReceiveMessage wait time to 30 seconds.
B. Change the SQS standard queue to an SQS FIFO queue. Use the message deduplication ID to discard duplicate messages.
C. Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout.
D. Modify the Lambda function to delete each message from the SQS queue immediately after the message is read before processing.
Answer is Increase the visibility timeout in the SQS queue to a value that is greater than the total of the function timeout and the batch window timeout.
In this setup, the only way users receive multiple emails is when the same image is processed more than once. That happens when a second Lambda invocation starts processing an SQS message before the previous invocation has finished with it.
Increasing the queue's visibility timeout to a value greater than the Lambda function timeout plus the batch window ensures that no other consumer can see a message before the previous invocation finishes processing it or times out (see the sketch after the option notes below).
So option C is the best answer.
Option A: Long polling only reduces empty ReceiveMessage responses; it does nothing to prevent a message from being delivered again while it is still being processed.
Option B: A FIFO queue's deduplication ID discards only duplicate messages sent within a short window. Here the messages are not duplicates at send time; they are redelivered because the visibility timeout expires, so deduplication does not help.
Option D: Deleting each message immediately after it is read means that if the Lambda function then fails or times out, the message is lost and the image is never processed.
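A minimal sketch of the fix, assuming a 30-second function timeout, a 10-second batch window, and a hypothetical queue URL:

import boto3

sqs = boto3.client("sqs")

FUNCTION_TIMEOUT = 30  # seconds; assumed Lambda function timeout
BATCH_WINDOW = 10      # seconds; assumed maximum batching window

# Keep messages invisible to other consumers for longer than the worst-case
# processing time so an in-flight image is never picked up twice.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/image-events",
    Attributes={"VisibilityTimeout": str(FUNCTION_TIMEOUT + BATCH_WINDOW + 1)},
)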
Question 105
A company is preparing to store confidential data in Amazon S3. For compliance reasons, the data must be encrypted at rest. Encryption key usage must be logged for auditing purposes. Keys must be rotated every year.
Which solution meets these requirements and is the MOST operationally efficient?
A. Server-side encryption with customer-provided keys (SSE-C)
B. Server-side encryption with Amazon S3 managed keys (SSE-S3)
C. Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation
D. Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
Answer is Server-side encryption with AWS KMS keys (SSE-KMS) with automatic rotation
SSE-KMS allows you to use keys that are managed by the AWS Key Management Service (KMS) to encrypt your data at rest. KMS is a fully managed service that makes it easy to create and control encryption keys. With automatic key rotation enabled, KMS generates new key material for the key once a year and uses it for subsequent encryption operations. This satisfies the yearly rotation requirement while keeping the operational burden on your team to a minimum.
In addition, SSE-KMS provides logging of key usage through AWS CloudTrail, which can be used for auditing purposes.
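A minimal sketch of this setup with boto3; the bucket name is hypothetical:

import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Create a customer managed key and turn on automatic yearly rotation.
key_id = kms.create_key(Description="S3 confidential data key")["KeyMetadata"]["KeyId"]
kms.enable_key_rotation(KeyId=key_id)

# Make SSE-KMS with that key the bucket's default encryption.
s3.put_bucket_encryption(
    Bucket="confidential-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": key_id,
                }
            }
        ]
    },
)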
Option A: Server-side encryption with customer-provided keys (SSE-C) would require you to manage the encryption keys yourself, which can be more operationally burdensome.
Option B: Server-side encryption with Amazon S3 managed keys (SSE-S3) does not log individual key usage to AWS CloudTrail and gives you no control over or visibility into the rotation schedule, so it cannot satisfy the auditing requirement.
Option C: Server-side encryption with AWS KMS keys (SSE-KMS) with manual rotation would require you to manually initiate the key rotation process, which can be more operationally burdensome compared to automatic rotation.
Question 106
A bicycle sharing company is developing a multi-tier architecture to track the location of its bicycles during peak operating hours. The company wants to use these data points in its existing analytics platform. A solutions architect must determine the most viable multi-tier option to support this architecture. The data points must be accessible from the REST API.
Which action meets these requirements for storing and retrieving location data?
A. Use Amazon Athena with Amazon S3.
B. Use Amazon API Gateway with AWS Lambda.
C. Use Amazon QuickSight with Amazon Redshift.
D. Use Amazon API Gateway with Amazon Kinesis Data Analytics.
Answer is Use Amazon API Gateway with Amazon Kinesis Data Analytics.
Amazon Kinesis Data Analytics ingests and processes streaming data in real time using SQL or Apache Flink, which fits a fleet of bicycles continuously reporting their locations and feeds the company's existing analytics platform. Amazon API Gateway in front of the Kinesis pipeline provides the REST API from which the data points must be accessible.
Option B is not correct because, after API Gateway invokes the Lambda function, there is no defined destination for the data. With option D, API Gateway feeds the data into the Kinesis pipeline, which can also deliver it (through Kinesis Data Firehose) to third-party destinations such as Datadog, New Relic, MongoDB, or Splunk.
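As a rough sketch of the ingestion side, a backend behind API Gateway could write each location data point into the Kinesis pipeline like this; the stream name and record shape are hypothetical:

import json

import boto3

kinesis = boto3.client("kinesis")

# One location report from a bicycle; partitioning by bike ID keeps each
# bicycle's readings ordered within a shard.
location = {"bikeId": "b-1042", "lat": 47.6097, "lon": -122.3331, "ts": 1700000000}
kinesis.put_record(
    StreamName="bike-locations",
    Data=json.dumps(location).encode("utf-8"),
    PartitionKey=location["bikeId"],
)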
Question 107
A social media company allows users to upload images to its website. The website runs on Amazon EC2 instances. During upload requests, the website resizes the images to a standard size and stores the resized images in Amazon S3. Users are experiencing slow upload requests to the website.
The company needs to reduce coupling within the application and improve website performance. A solutions architect must design the most operationally efficient process for image uploads.
Which combination of actions should the solutions architect take to meet these requirements? (Choose two.)
A. Configure the application to upload images to S3 Glacier.
B. Configure the web server to upload the original images to Amazon S3.
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a presigned URL.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.
E. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to resize uploaded images.
Answers are:
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a presigned URL.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image.
To meet the requirements of reducing coupling within the application and improving website performance, the solutions architect should consider taking the following actions:
C. Configure the application to upload images directly from each user's browser to Amazon S3 through the use of a presigned URL. This lets users upload images directly to S3 without going through the web server, which reduces the load on the web server and improves upload performance.
D. Configure S3 Event Notifications to invoke an AWS Lambda function when an image is uploaded. Use the function to resize the image. This will allow the application to resize images asynchronously, rather than having to do it synchronously during the upload request, which can improve performance.
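A minimal sketch of generating such a URL on the backend with boto3; the bucket and key are hypothetical, and the browser then uploads with an HTTP PUT to the returned URL:

import boto3

s3 = boto3.client("s3")

# The presigned URL lets the user's browser PUT the object straight to S3,
# bypassing the EC2 web server entirely.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "image-uploads", "Key": "originals/photo-123.jpg"},
    ExpiresIn=300,  # the URL is valid for five minutes
)
print(url)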
Why other options are wrong
Option A, Configuring the application to upload images to S3 Glacier, is not relevant to improving the performance of image uploads.
Option B, Configuring the web server to upload the original images to Amazon S3, keeps every upload flowing through the web server, so it would not reduce coupling within the application or improve performance.
Option E, Creating an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function on a schedule to resize uploaded images, processes images only on a fixed schedule rather than as they arrive, so it would not resize images in a timely manner and would not improve performance.
Question 108
A company has created an image analysis application in which users can upload photos and add photo frames to their images. The users upload images and metadata to indicate which photo frames they want to add to their images. The application uses a single Amazon EC2 instance and Amazon DynamoDB to store the metadata.
The application is becoming more popular, and the number of users is increasing. The company expects the number of concurrent users to vary significantly depending on the time of day and day of week. The company must ensure that the application can scale to meet the needs of the growing user base.
Which solution meets these requirements?
A. Use AWS Lambda to process the photos. Store the photos and metadata in DynamoDB.
B. Use Amazon Kinesis Data Firehose to process the photos and to store the photos and metadata.
C. Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
D. Increase the number of EC2 instances to three. Use Provisioned IOPS SSD (io2) Amazon Elastic Block Store (Amazon EBS) volumes to store the photos and metadata.
Answer is Use AWS Lambda to process the photos. Store the photos in Amazon S3. Retain DynamoDB to store the metadata.
Solution C offloads the photo processing to Lambda. Storing the photos in S3 ensures scalability and durability, while keeping the metadata in DynamoDB allows for efficient querying of the associated information.
Option A does not provide an appropriate solution for storing the photos, as DynamoDB is not suitable for storing large binary data like images (items are limited to 400 KB).
Option B is wrong because Kinesis Data Firehose is a delivery service for real-time streaming data; it is not designed for processing photos or for serving as the application's primary photo and metadata store.
Option D involves manual scaling and management of EC2 instances, which is less flexible and more labor-intensive compared to the serverless nature of Lambda. It may not efficiently handle the varying number of concurrent users and can introduce higher operational overhead.
In conclusion, option C provides the best solution for scaling the application to meet the needs of the growing user base by leveraging the scalability and durability of Lambda, S3, and DynamoDB.
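A minimal sketch of this storage split, with hypothetical bucket, table, and attribute names:

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("photo-metadata")


def store_photo(photo_bytes, photo_id, frame_choice):
    # The binary image goes to S3, which scales and stores it durably.
    key = f"photos/{photo_id}.jpg"
    s3.put_object(Bucket="photo-uploads", Key=key, Body=photo_bytes)

    # Only the small metadata record goes to DynamoDB for fast lookups.
    table.put_item(Item={"photoId": photo_id, "s3Key": key, "frame": frame_choice})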
Question 109
A company uses a popular content management system (CMS) for its corporate website. However, the required patching and maintenance are burdensome. The company is redesigning its website and wants a new solution. The website will be updated four times a year and does not need to have any dynamic content available. The solution must provide high scalability and enhanced security.
Which combination of changes will meet these requirements with the LEAST operational overhead? (Choose two.)
A. Configure Amazon CloudFront in front of the website to use HTTPS functionality.
B. Deploy an AWS WAF web ACL in front of the website to provide HTTPS functionality.
C. Create and deploy an AWS Lambda function to manage and serve the website content.
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
E. Create the new website. Deploy the website by using an Auto Scaling group of Amazon EC2 instances behind an Application Load Balancer.
Answers are:
A. Configure Amazon CloudFront in front of the website to use HTTPS functionality.
D. Create the new website and an Amazon S3 bucket. Deploy the website on the S3 bucket with static website hosting enabled.
Option A. Amazon CloudFront provides scalable content delivery with HTTPS functionality, meeting security and scalability requirements.
Option D. (using Amazon S3 with static website hosting) would provide high scalability and enhanced security with minimal operational overhead because it requires little maintenance and can automatically scale to meet increased demand.
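As a sketch, enabling static website hosting on the bucket is a one-time configuration; the bucket name is hypothetical, and CloudFront then sits in front of the bucket to serve the site over HTTPS:

import boto3

s3 = boto3.client("s3")

# The static site is re-uploaded only four times a year; S3 serves it
# with no servers to patch or maintain.
s3.put_bucket_website(
    Bucket="corporate-website",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)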
Question 110
A company has implemented a self-managed DNS solution on three Amazon EC2 instances behind a Network Load Balancer (NLB) in the us-west-2 Region. Most of the company's users are located in the United States and Europe. The company wants to improve the performance and availability of the solution. The company launches and configures three EC2 instances in the eu-west-1 Region and adds the EC2 instances as targets for a new NLB.
Which solution can the company use to route traffic to all the EC2 instances?
A. Create an Amazon Route 53 geolocation routing policy to route requests to one of the two NLBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
B. Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
C. Attach Elastic IP addresses to the six EC2 instances. Create an Amazon Route 53 geolocation routing policy to route requests to one of the six EC2 instances. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
D. Replace the two NLBs with two Application Load Balancers (ALBs). Create an Amazon Route 53 latency routing policy to route requests to one of the two ALBs. Create an Amazon CloudFront distribution. Use the Route 53 record as the distribution's origin.
Answer is Create a standard accelerator in AWS Global Accelerator. Create endpoint groups in us-west-2 and eu-west-1. Add the two NLBs as endpoints for the endpoint groups.
AWS Global Accelerator allows routing traffic to endpoints in multiple AWS Regions. It uses the AWS global network to optimize availability and performance.
Creating an accelerator with endpoint groups in us-west-2 and eu-west-1 allows traffic to be distributed across both regions.
Adding the NLBs in each region as endpoints allows the traffic to be routed to the EC2 instances behind them.
This provides improved performance and availability compared to just using Route 53 geolocation routing.
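A minimal sketch of the accelerator setup with boto3, assuming the DNS service listens on UDP port 53 and using hypothetical NLB ARNs:

import boto3

# Global Accelerator is a global service; its API endpoint is in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accel = ga.create_accelerator(Name="dns-accelerator", IpAddressType="IPV4")
listener = ga.create_listener(
    AcceleratorArn=accel["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 53, "ToPort": 53}],
)

# One endpoint group per Region, each pointing at that Region's NLB.
for region, nlb_arn in [
    ("us-west-2", "arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/dns-a/1"),
    ("eu-west-1", "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/dns-b/2"),
]:
    ga.create_endpoint_group(
        ListenerArn=listener["Listener"]["ListenerArn"],
        EndpointGroupRegion=region,
        EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
    )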
Option A: a geolocation policy routes each user to a single NLB rather than distributing traffic across both Regions, and CloudFront serves only HTTP and HTTPS content, so it cannot front a DNS service.
Option C: attaching Elastic IP addresses and managing geolocation records for six individual instances adds operational overhead, removes the load balancing the NLBs already provide, and does not use the AWS global network for performance.
Option D: ALBs and CloudFront operate at the HTTP layer, so they cannot carry the UDP and TCP traffic of a self-managed DNS solution; option B with AWS Global Accelerator requires no such replacement and less configuration.