A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world. The company must decrease latency for users who access the website.
Which solution meets these requirements MOST cost-effectively?
Replicate the S3 bucket that contains the website to all AWS Regions. Add Route 53 geolocation routing entries.
Provision accelerators in AWS Global Accelerator. Associate the supplied IP addresses with the S3 bucket. Edit the Route 53 entries to point to the IP addresses of the accelerators.
Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
Enable S3 Transfer Acceleration on the bucket. Edit the Route 53 entries to point to the new endpoint.
Answer is Add an Amazon CloudFront distribution in front of the S3 bucket. Edit the Route 53 entries to point to the CloudFront distribution.
Option A (replicating the S3 bucket to all AWS Regions) can be costly and complex, requiring replication of data across multiple Regions and managing synchronization. It may not provide a significant latency improvement compared to the CloudFront solution.
Option B (provisioning accelerators in AWS Global Accelerator) is more expensive because each accelerator incurs an hourly charge plus data transfer fees, and Global Accelerator does not support S3 buckets as endpoints (its endpoints are Application Load Balancers, Network Load Balancers, EC2 instances, and Elastic IP addresses). CloudFront already includes global edge locations and provides caching for S3 content.
Option D (enabling S3 Transfer Acceleration) can help improve upload speed to the S3 bucket but may not have a significant impact on reducing latency for website visitors.
Therefore, option C is the most cost-effective solution as it leverages CloudFront's caching and global distribution capabilities to decrease latency and improve website performance.
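As a rough illustration of the Route 53 side of this answer (not part of the original question), the website record can be pointed at the CloudFront distribution with an alias record; the hosted zone ID, record name, and CloudFront domain below are placeholder values.

    import boto3

    route53 = boto3.client("route53")

    # Placeholder values: replace with your hosted zone ID, record name, and
    # the domain name that CloudFront reports for the distribution.
    HOSTED_ZONE_ID = "Z0000000EXAMPLE"
    CLOUDFRONT_DOMAIN = "d111111abcdef8.cloudfront.net"

    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "AliasTarget": {
                        # Route 53 uses this fixed hosted zone ID for every
                        # CloudFront distribution alias target.
                        "HostedZoneId": "Z2FDTNDATAQYW2",
                        "DNSName": CLOUDFRONT_DOMAIN,
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )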
Question 122
A company runs a highly available image-processing application on Amazon EC2 instances in a single VPC. The EC2 instances run inside several subnets across multiple Availability Zones. The EC2 instances do not communicate with each other. However, the EC2 instances download images from Amazon S3 and upload images to Amazon S3 through a single NAT gateway. The company is concerned about data transfer charges.
What is the MOST cost-effective way for the company to avoid Regional data transfer charges?
Launch the NAT gateway in each Availability Zone.
Replace the NAT gateway with a NAT instance.
Deploy a gateway VPC endpoint for Amazon S3.
Provision an EC2 Dedicated Host to run the EC2 instances.
Answer is Deploy a gateway VPC endpoint for Amazon S3.
Deploying a gateway VPC endpoint for Amazon S3 is the most cost-effective way for the company to avoid Regional data transfer charges. A gateway VPC endpoint is a network gateway that allows communication between instances in a VPC and a service, such as Amazon S3, without requiring an Internet gateway or a NAT device. Data transfer between the VPC and the service through a gateway VPC endpoint is free of charge, while data transfer between the VPC and the Internet through an Internet gateway or NAT device is subject to data transfer charges. By using a gateway VPC endpoint, the company can reduce its data transfer costs by eliminating the need to transfer data through the NAT gateway to access Amazon S3. This option would provide the required connectivity to Amazon S3 and minimize data transfer charges.
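A minimal sketch of creating such a gateway endpoint with boto3, assuming a single Region and hypothetical VPC and route table IDs:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Hypothetical IDs: use the VPC and the route tables of the private
    # subnets whose instances download from and upload to Amazon S3.
    response = ec2.create_vpc_endpoint(
        VpcEndpointType="Gateway",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.s3",
        RouteTableIds=["rtb-0123456789abcdef0"],
    )
    print(response["VpcEndpoint"]["VpcEndpointId"])

The listed route tables gain a prefix-list route to Amazon S3, so the instances reach S3 without traversing the NAT gateway.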
A company stores call transcript files on a monthly basis. Users access the files randomly within 1 year of the call, but users access the files infrequently after 1 year. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than 1-year-old as quickly as possible. A delay in retrieving older files is acceptable.
Which solution will meet these requirements MOST cost-effectively?
Store individual files with tags in Amazon S3 Glacier Instant Retrieval. Query the tags to retrieve the files from S3 Glacier Instant Retrieval.
Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
Store individual files with tags in Amazon S3 Standard storage. Store search metadata for each archive in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Instant Retrieval after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.
Store individual files in Amazon S3 Standard storage. Use S3 Lifecycle policies to move the files to S3 Glacier Deep Archive after 1 year. Store search metadata in Amazon RDS. Query the files from Amazon RDS. Retrieve the files from S3 Glacier Deep Archive.
Answer is Store individual files in Amazon S3 Intelligent-Tiering. Use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after 1 year. Query and retrieve the files that are in Amazon S3 by using Amazon Athena. Query and retrieve the files that are in S3 Glacier by using S3 Glacier Select.
S3 Intelligent-Tiering stores objects in two access tiers: one tier that is optimized for frequent access and another lower-cost tier that is optimized for infrequent access. For a small monthly monitoring and automation fee per object, S3 Intelligent-Tiering monitors access patterns and moves objects that have not been accessed for 30 consecutive days to the infrequent access tier.
There are no retrieval fees in S3 Intelligent-Tiering. If an object in the infrequent access tier is accessed later, it is automatically moved back to the frequent access tier.
No additional tiering fees apply when objects are moved between access tiers within the S3 Intelligent-Tiering storage class. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability, and offers the same low latency and high throughput performance as S3 Standard.
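As a hedged sketch of part of the chosen approach, new files could be uploaded to S3 Intelligent-Tiering and a lifecycle rule could transition them to S3 Glacier Flexible Retrieval after one year; the bucket name and object key are assumptions.

    import boto3

    s3 = boto3.client("s3")

    # Hypothetical bucket and key; new transcripts land in Intelligent-Tiering.
    s3.put_object(
        Bucket="example-call-transcripts",
        Key="2024/06/call-0001.json",
        Body=b"transcript contents",
        StorageClass="INTELLIGENT_TIERING",
    )

    # After 365 days, transition objects to Glacier Flexible Retrieval
    # ("GLACIER" is the storage-class value used by lifecycle transitions).
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-call-transcripts",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "archive-after-one-year",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 365, "StorageClass": "GLACIER"}],
            }]
        },
    )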
A company runs its infrastructure on AWS and has a registered base of 700,000 users for its document management application. The company intends to create a product that converts large .pdf files to .jpg image files. The .pdf files average 5 MB in size. The company needs to store the original files and the converted files. A solutions architect must design a scalable solution to accommodate demand that will grow rapidly over time.
Which solution meets these requirements MOST cost-effectively?
Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
Save the .pdf files to Amazon DynamoDB. Use the DynamoDB Streams feature to invoke an AWS Lambda function to convert the files to .jpg format and store them back in DynamoDB.
Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic Block Store (Amazon EBS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the files to .jpg format. Save the .pdf files and the .jpg files in the EBS store.
Upload the .pdf files to an AWS Elastic Beanstalk application that includes Amazon EC2 instances, Amazon Elastic File System (Amazon EFS) storage, and an Auto Scaling group. Use a program in the EC2 instances to convert the file to .jpg format. Save the .pdf files and the .jpg files in the EBS store.
Answer is Save the .pdf files to Amazon S3. Configure an S3 PUT event to invoke an AWS Lambda function to convert the files to .jpg format and store them back in Amazon S3.
Option A is the most cost-effective solution that meets the requirements. In this solution, the .pdf files are saved to Amazon S3, which is an object storage service that is highly scalable, durable, and secure. S3 can store unlimited amounts of data at a very low cost.
The S3 PUT event triggers an AWS Lambda function to convert the .pdf files to .jpg format. Lambda is a serverless compute service that runs code in response to specific events and automatically scales to meet demand. This means that the conversion process can scale up or down as needed, without the need for manual intervention.
The converted .jpg files are then stored back in S3, which allows the company to store both the original .pdf files and the converted .jpg files in the same service. This reduces the complexity of the solution and helps to keep costs low.
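A rough sketch of such an S3-triggered Lambda function follows; the pdf2image library is an assumed choice (it needs the poppler binaries packaged in a layer or container image), and the bucket layout is hypothetical.

    import io
    from urllib.parse import unquote_plus

    import boto3
    # pdf2image is an assumed choice of conversion library, not mandated by
    # the question; any PDF renderer packaged with the function would do.
    from pdf2image import convert_from_bytes

    s3 = boto3.client("s3")

    def handler(event, context):
        # Each record in the S3 PUT event describes one newly uploaded .pdf object.
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = unquote_plus(record["s3"]["object"]["key"])
            pdf_bytes = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            for i, page in enumerate(convert_from_bytes(pdf_bytes)):
                buf = io.BytesIO()
                page.save(buf, format="JPEG")
                # Store the converted pages under a separate prefix in the same bucket.
                s3.put_object(
                    Bucket=bucket,
                    Key=f"converted/{key}.page{i}.jpg",
                    Body=buf.getvalue(),
                    ContentType="image/jpeg",
                )

Writing the .jpg files back to the same bucket works only if the S3 event notification filters on the .pdf suffix (or a dedicated prefix); otherwise the uploads would re-trigger the function.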
Option C is also a valid solution, but it may be more expensive due to the use of EC2 instances, EBS storage, and an Auto Scaling group. These resources can add additional cost, especially if the demand for the conversion service grows rapidly.
Option D is not cost-effective because it relies on EC2 instances, an Auto Scaling group, and Amazon EFS. EFS is a managed network file system that scales well, but it costs considerably more per GB than Amazon S3 and offers no benefit over S3 for storing and serving these files. The option is also internally inconsistent: it provisions EFS storage but then saves the files in an EBS store.
A company has an application that generates a large number of files, each approximately 5 MB in size. The files are stored in Amazon S3. Company policy requires the files to be stored for 4 years before they can be deleted. Immediate accessibility is always required as the files contain critical business data that is not easy to reproduce. The files are frequently accessed in the first 30 days of the object creation but are rarely accessed after the first 30 days.
Which storage solution is MOST cost-effective?
Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Glacier 30 days from object creation. Delete the files 4 years after object creation.
Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) 30 days from object creation. Delete the files 4 years after object creation.
Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Move the files to S3 Glacier 4 years after object creation.
Answer is Create an S3 bucket lifecycle policy to move files from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) 30 days from object creation. Delete the files 4 years after object creation.
Options A, B, and D have drawbacks:
Option A: Transitioning to S3 Glacier after 30 days introduces retrieval delays and retrieval costs, which conflicts with the requirement for immediate accessibility.
Option B: S3 One Zone-Infrequent Access (S3 One Zone-IA) stores data in a single Availability Zone, so the data would be lost if that Availability Zone were destroyed. That risk is not acceptable for critical business data that is not easy to reproduce.
Option D: Moving the files to S3 Glacier after 4 years keeps paying for storage that the policy allows to be deleted, adding cost and complexity without any benefit.
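A minimal lifecycle configuration for the chosen option, with a hypothetical bucket name and 1,460 days used to approximate 4 years:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-critical-files",
        LifecycleConfiguration={
            "Rules": [{
                "ID": "standard-ia-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                # Move to Standard-IA once access becomes rare (after 30 days).
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                # Delete the objects roughly 4 years after creation.
                "Expiration": {"Days": 1460},
            }]
        },
    )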
Question 126
A solutions architect is designing a new hybrid architecture to extend a company's on-premises infrastructure to AWS. The company requires a highly available connection with consistent low latency to an AWS Region. The company needs to minimize costs and is willing to accept slower traffic if the primary connection fails.
What should the solutions architect do to meet these requirements?
Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
Provision a VPN tunnel connection to a Region for private connectivity. Provision a second VPN tunnel for private connectivity and as a backup if the primary VPN connection fails.
Provision an AWS Direct Connect connection to a Region. Provision a second Direct Connect connection to the same Region as a backup if the primary Direct Connect connection fails.
Provision an AWS Direct Connect connection to a Region. Use the Direct Connect failover attribute from the AWS CLI to automatically create a backup connection if the primary Direct Connect connection fails.
Answer is Provision an AWS Direct Connect connection to a Region. Provision a VPN connection as a backup if the primary Direct Connect connection fails.
A highly available connection with consistent low latency = AWS Direct Connect
Minimize costs and accept slower traffic if the primary connection fails = VPN connection
Option B proposes using only VPN connections. VPN tunnels run over the public internet, so they cannot provide the consistent low latency and high availability that the company requires for the primary connection. Option C proposes a second Direct Connect connection as the backup; this would meet the availability and latency requirements but roughly doubles the cost, which conflicts with the requirement to minimize costs.
Option D refers to a "Direct Connect failover attribute" in the AWS CLI. No such attribute exists, so this option cannot meet the requirement.
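For the backup path, a Site-to-Site VPN could be provisioned roughly as follows; the customer gateway IP, ASN, and virtual private gateway ID are placeholders, and the sketch assumes the VPN attaches to the same virtual private gateway as the Direct Connect private virtual interface so that failover falls to BGP route preference.

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder values for the on-premises device and an existing virtual
    # private gateway that is already attached to the VPC.
    cgw = ec2.create_customer_gateway(
        Type="ipsec.1",
        PublicIp="203.0.113.12",
        BgpAsn=65000,
    )["CustomerGateway"]

    vpn = ec2.create_vpn_connection(
        Type="ipsec.1",
        CustomerGatewayId=cgw["CustomerGatewayId"],
        VpnGatewayId="vgw-0123456789abcdef0",
        # Dynamic (BGP) routing lets traffic shift to the VPN automatically
        # if the Direct Connect path is withdrawn.
        Options={"StaticRoutesOnly": False},
    )["VpnConnection"]
    print(vpn["VpnConnectionId"])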
A company runs a photo processing application that needs to frequently upload and download pictures from Amazon S3 buckets that are located in the same AWS Region. A solutions architect has noticed an increased cost in data transfer fees and needs to implement a solution to reduce these costs.
How can the solutions architect meet this requirement?
Deploy Amazon API Gateway into a public subnet and adjust the route table to route S3 calls through it.
Deploy a NAT gateway into a public subnet and attach an endpoint policy that allows access to the S3 buckets.
Deploy the application into a public subnet and allow it to route through an internet gateway to access the S3 buckets.
Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
Answer is Deploy an S3 VPC gateway endpoint into the VPC and attach an endpoint policy that allows access to the S3 buckets.
By deploying an S3 VPC gateway endpoint, the application can access the S3 buckets over a private network connection within the VPC, eliminating the need for data transfer over the internet. This can help reduce data transfer fees as well as improve the performance of the application. The endpoint policy can be used to specify which S3 buckets the application has access to.
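As an illustrative sketch, an endpoint policy limiting the gateway endpoint to the application's photo buckets could be attached like this; the endpoint ID and bucket name are placeholders.

    import json

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical policy: allow only reads and writes to the photo bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": [
                "arn:aws:s3:::photo-bucket-example",
                "arn:aws:s3:::photo-bucket-example/*",
            ],
        }],
    }

    ec2.modify_vpc_endpoint(
        VpcEndpointId="vpce-0123456789abcdef0",
        PolicyDocument=json.dumps(policy),
    )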
Option A, deploying Amazon API Gateway into a public subnet and adjusting the route table, would not address the issue of data transfer fees as the application would still be transferring data over the internet.
Option B, deploying a NAT gateway into a public subnet and attaching an endpoint policy, would not reduce data transfer fees either. A NAT gateway provides outbound internet access for instances in private subnets, so S3 traffic would still take the internet path and incur NAT gateway processing charges, and endpoint policies cannot be attached to NAT gateways.
Option C, deploying the application into a public subnet and allowing it to route through an internet gateway, would not reduce data transfer fees as the application would still be transferring data over the internet.
Question 128
A company hosts its web applications in the AWS Cloud. The company configures Elastic Load Balancers to use certificates that are imported into AWS Certificate Manager (ACM). The company's security team must be notified 30 days before the expiration of each certificate.
What should a solutions architect recommend to meet this requirement?
Add a rule in ACM to publish a custom message to an Amazon Simple Notification Service (Amazon SNS) topic every day, beginning 30 days before any certificate will expire.
Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant resource.
Use AWS Trusted Advisor to check for certificates that will expire within 30 days. Create an Amazon CloudWatch alarm that is based on Trusted Advisor metrics for check status changes. Configure the alarm to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS).
Create an Amazon EventBridge (Amazon CloudWatch Events) rule to detect any certificates that will expire within 30 days. Configure the rule to invoke an AWS Lambda function. Configure the Lambda function to send a custom alert by way of Amazon Simple Notification Service (Amazon SNS).
Answer is Create an AWS Config rule that checks for certificates that will expire within 30 days. Configure Amazon EventBridge (Amazon CloudWatch Events) to invoke a custom alert by way of Amazon Simple Notification Service (Amazon SNS) when AWS Config reports a noncompliant resource.
To get a notification that your certificate is about to expire, use one of the following methods:
Use the ACM Certificate Approaching Expiration event that ACM sends to Amazon EventBridge.
Create a custom EventBridge rule to receive email notifications when certificates are nearing the expiration date.
Use AWS Config to check for certificates that are nearing the expiration date.
Option D is incorrect:
-Lambda is not necessary. AWS services (such as Amazon EC2, Amazon S3, and Amazon CloudWatch) can publish messages to SNS topics directly to trigger event-driven workflows, so inserting a Lambda function here works against the Performance Efficiency pillar of the Well-Architected Framework. The more efficient approach is the managed AWS Config rule.
-For those who argue against option B because of cost: the Cost Optimization pillar favors option B over option D. Understanding how efficient the current architecture is relative to its goals removes unneeded expense. The goal is for the security team to be notified before expiration; letting a certificate expire would cost far more.
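A hedged sketch of option B's wiring, using the managed acm-certificate-expiration-check rule; the rule name, topic ARN, and account ID are placeholders.

    import json

    import boto3

    config = boto3.client("config")
    events = boto3.client("events")

    # Managed AWS Config rule that flags ACM certificates expiring within
    # the given number of days.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "acm-cert-expiring-within-30-days",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "ACM_CERTIFICATE_EXPIRATION_CHECK",
            },
            "InputParameters": json.dumps({"daysToExpiration": "30"}),
        }
    )

    # EventBridge rule that fires when the Config rule reports NON_COMPLIANT.
    events.put_rule(
        Name="acm-cert-expiration-alert",
        EventPattern=json.dumps({
            "source": ["aws.config"],
            "detail-type": ["Config Rules Compliance Change"],
            "detail": {
                "configRuleName": ["acm-cert-expiring-within-30-days"],
                "newEvaluationResult": {"complianceType": ["NON_COMPLIANT"]},
            },
        }),
    )

    # Deliver the alert to the security team's SNS topic.
    events.put_targets(
        Rule="acm-cert-expiration-alert",
        Targets=[{"Id": "sns", "Arn": "arn:aws:sns:us-east-1:111122223333:cert-alerts"}],
    )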
A company wants to reduce the cost of its existing three-tier web architecture. The web, application, and database servers are running on Amazon EC2 instances for the development, test, and production environments. The EC2 instances average 30% CPU utilization during peak hours and 10% CPU utilization during non-peak hours.
The production EC2 instances run 24 hours a day. The development and test EC2 instances run for at least 8 hours each day. The company plans to implement automation to stop the development and test EC2 instances when they are not in use.
Which EC2 instance purchasing solution will meet the company's requirements MOST cost-effectively?
Use Spot Instances for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
Use Spot blocks for the production EC2 instances. Use Reserved Instances for the development and test EC2 instances.
Use On-Demand Instances for the production EC2 instances. Use Spot blocks for the development and test EC2 instances.
Answer is Use Reserved Instances for the production EC2 instances. Use On-Demand Instances for the development and test EC2 instances.
Reserved Instances provide cost savings for instances that run consistently, such as the production environment in this case, while On-Demand Instances offer flexibility and are suitable for instances with variable usage patterns like the development and test environments. This combination ensures cost optimization based on the specific requirements and usage patterns described in the question.
In addition, the company can set up an automated process to start and stop the development and test EC2 instances when they are not in use, which further reduces the On-Demand cost.
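A minimal sketch of that automation: a Lambda function, invoked by an EventBridge schedule outside business hours, that stops instances carrying a hypothetical Environment tag.

    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        # Find running dev/test instances by a hypothetical Environment tag.
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:Environment", "Values": ["dev", "test"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        instance_ids = [
            i["InstanceId"] for r in reservations for i in r["Instances"]
        ]
        # Stop whatever is still running; a matching start function (or the
        # same function with a parameter) would bring them back each morning.
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)
        return {"stopped": instance_ids}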
Question 130
A survey company has gathered data for several years from areas in the United States. The company hosts the data in an Amazon S3 bucket that is 3 TB in size and growing. The company has started to share the data with a European marketing firm that has S3 buckets. The company wants to ensure that its data transfer costs remain as low as possible.
Which solution will meet these requirements?
Configure the Requester Pays feature on the company's S3 bucket.
Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.
Configure cross-account access for the marketing firm so that the marketing firm has access to the company's S3 bucket.
Configure the company's S3 bucket to use S3 Intelligent-Tiering. Sync the S3 bucket to one of the marketing firm's S3 buckets.
Answer is Configure S3 Cross-Region Replication from the company's S3 bucket to one of the marketing firm's S3 buckets.
S3 Cross-Region Replication: This feature automatically replicates objects from the source bucket (owned by the survey company) to a destination bucket (owned by the marketing firm) in a different AWS Region. The data crosses Regions once during replication; after that, the marketing firm reads its own copy within its Region, so repeated cross-Region transfer charges are avoided.
Cost efficiency: Because the marketing firm is located in Europe, replicating the data to an S3 bucket in a European Region keeps the firm's ongoing access local and minimizes cross-Region data transfer costs.
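An illustrative replication configuration for the chosen option; the bucket names, role ARN, and account ID are placeholders, both buckets must have versioning enabled, and the destination account must grant the replication role access to its bucket.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="survey-data-us",
        ReplicationConfiguration={
            # IAM role that S3 assumes to replicate into the firm's bucket.
            "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
            "Rules": [{
                "ID": "replicate-to-marketing-firm",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::marketing-firm-eu-bucket",
                    "Account": "444455556666",
                    # Hand object ownership to the destination account.
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }],
        },
    )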