SAA-C03: AWS Certified Solutions Architect - Associate


Question 111

A company has a dynamic web application hosted on two Amazon EC2 instances. The company has its own SSL certificate, which is installed on each instance to perform SSL termination.
There has been an increase in traffic recently, and the operations team determined that SSL encryption and decryption is causing the compute capacity of the web servers to reach their maximum limit.

What should a solutions architect do to increase the application's performance?
Create a new SSL certificate using AWS Certificate Manager (ACM). Install the ACM certificate on each instance.
Create an Amazon S3 bucket. Migrate the SSL certificate to the S3 bucket. Configure the EC2 instances to reference the bucket for SSL termination.
Create another EC2 instance as a proxy server. Migrate the SSL certificate to the new instance and configure it to direct connections to the existing EC2 instances.
Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.




Answer is Import the SSL certificate into AWS Certificate Manager (ACM). Create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.

To increase the application's performance, the solutions architect should import the SSL certificate into AWS Certificate Manager (ACM) and create an Application Load Balancer with an HTTPS listener that uses the SSL certificate from ACM.

Using an Application Load Balancer with an HTTPS listener allows SSL termination to happen at the load balancer layer.
The EC2 instances behind the load balancer receive only unencrypted traffic, reducing load on them.
Importing the custom SSL certificate into ACM allows the ALB to use it for HTTPS listeners.
This removes the need to install and manage SSL certificates on each EC2 instance.
ALB handles the SSL overhead and scales automatically. The EC2 fleet focuses on app logic.

Option A suggests creating a new SSL certificate using ACM, but public ACM certificates cannot be exported and installed directly on EC2 instances, and this option does nothing to offload SSL termination from the instances.
Option B suggests migrating the SSL certificate to an S3 bucket, but this approach does not provide the necessary SSL termination and load balancing functionalities.
Option C suggests creating another EC2 instance as a proxy server, but this adds unnecessary complexity and management overhead without leveraging the benefits of ALB's built-in load balancing and SSL termination capabilities.
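The two steps in the correct answer (import the certificate into ACM, attach it to an HTTPS listener) can be sketched as the request payload for the Elastic Load Balancing v2 CreateListener API. This is a minimal illustration, not a full setup: the certificate, load balancer, and target group ARNs are hypothetical placeholders.

```python
import json

# Hypothetical ARNs for illustration; real values come from the ACM import
# and the ALB / target group creation steps.
CERT_ARN = "arn:aws:acm:us-east-1:123456789012:certificate/example-id"
ALB_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc123"
TG_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/def456"

# Parameters for an elbv2 CreateListener call that terminates TLS at the ALB
# using the certificate imported into ACM; instances behind the target group
# then receive decrypted traffic.
listener_params = {
    "LoadBalancerArn": ALB_ARN,
    "Protocol": "HTTPS",
    "Port": 443,
    "Certificates": [{"CertificateArn": CERT_ARN}],
    "DefaultActions": [{"Type": "forward", "TargetGroupArn": TG_ARN}],
}

print(json.dumps(listener_params, indent=2))
```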

Question 112

A media company is evaluating the possibility of moving its systems to the AWS Cloud. The company needs at least 10 TB of storage with the maximum possible I/O performance for video processing, 300 TB of very durable storage for storing media content, and 900 TB of storage to meet requirements for archival media that is not in use anymore.

Which set of services should a solutions architect recommend to meet these requirements?
Amazon EBS for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage
Amazon EBS for maximum performance, Amazon EFS for durable data storage, and Amazon S3 Glacier for archival storage
Amazon EC2 instance store for maximum performance, Amazon EFS for durable data storage, and Amazon S3 for archival storage
Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage




Answer is Amazon EC2 instance store for maximum performance, Amazon S3 for durable data storage, and Amazon S3 Glacier for archival storage

An instance store provides temporary block-level storage for your instance. This storage is located on disks that are physically attached to the host computer. Instance store is ideal for temporary storage of information that changes frequently, such as buffers, caches, scratch data, and other temporary content. It can also be used to store temporary data that you replicate across a fleet of instances, such as a load-balanced pool of web servers.

Reference:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/InstanceStorage.html

Question 113

A company is running a multi-tier web application on premises. The web application is containerized and runs on a number of Linux hosts connected to a PostgreSQL database that contains user records. The operational overhead of maintaining the infrastructure and capacity planning is limiting the company's growth. A solutions architect must improve the application's infrastructure.

Which combination of actions should the solutions architect take to accomplish this? (Choose two.)
Migrate the PostgreSQL database to Amazon Aurora.
Migrate the web application to be hosted on Amazon EC2 instances.
Set up an Amazon CloudFront distribution for the web application content.
Set up Amazon ElastiCache between the web application and the PostgreSQL database.
Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).




Answers are:
A. Migrate the PostgreSQL database to Amazon Aurora.
E. Migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).


To improve the application's infrastructure, the solutions architect should migrate the PostgreSQL database to Amazon Aurora and migrate the web application to be hosted on AWS Fargate with Amazon Elastic Container Service (Amazon ECS).

Amazon Aurora is a fully managed, scalable, and highly available relational database service that is compatible with PostgreSQL. Migrating the database to Amazon Aurora would reduce the operational overhead of maintaining the database infrastructure and allow the company to focus on building and scaling the application.

AWS Fargate is a serverless compute engine for containers that removes the need to provision and manage the underlying EC2 instances. By using AWS Fargate with Amazon Elastic Container Service (Amazon ECS), the solutions architect can improve the scalability and efficiency of the web application and reduce the operational overhead of maintaining the underlying infrastructure.

Wrong options:
B. Migrating the web application to Amazon EC2 instances would not directly address the operational overhead and capacity planning concerns mentioned in the scenario.
C. Setting up an Amazon CloudFront distribution improves content delivery but does not directly address the operational overhead or capacity planning limitations.
D. Configuring Amazon ElastiCache improves performance but does not directly address the operational overhead or capacity planning challenges mentioned.
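The containerized web tier maps naturally onto an ECS task definition using the Fargate launch type, where CPU and memory are declared per task instead of planned per host. A minimal sketch; the family name, image URI, port, and sizes are illustrative.

```python
import json

# Illustrative ECS task definition for the Fargate launch type:
# no EC2 hosts to manage; compute is declared per task.
task_definition = {
    "family": "web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",   # required network mode for Fargate tasks
    "cpu": "512",              # 0.5 vCPU
    "memory": "1024",          # 1 GB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
            "essential": True,
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```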

Question 114

An application runs on Amazon EC2 instances across multiple Availability Zones. The instances run in an Amazon EC2 Auto Scaling group behind an Application Load Balancer. The application performs best when the CPU utilization of the EC2 instances is at or near 40%.

What should a solutions architect do to maintain the desired performance across all instances in the group?
Use a simple scaling policy to dynamically scale the Auto Scaling group.
Use a target tracking policy to dynamically scale the Auto Scaling group.
Use an AWS Lambda function to update the desired Auto Scaling group capacity.
Use scheduled scaling actions to scale up and scale down the Auto Scaling group.




Answer is Use a target tracking policy to dynamically scale the Auto Scaling group.

To maintain the desired performance across all instances in the Amazon EC2 Auto Scaling group, the solutions architect should use a target tracking policy to dynamically scale the Auto Scaling group.

A target tracking policy allows the Auto Scaling group to automatically adjust the number of EC2 instances in the group based on a target value for a metric. In this case, the target value for the CPU utilization metric could be set to 40% to maintain the desired performance of the application. The Auto Scaling group would then automatically scale the number of instances up or down as needed to maintain the target value for the metric.
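The policy described above can be sketched as the payload of a PutScalingPolicy call using the predefined average-CPU metric; the Auto Scaling group name and policy name are illustrative.

```python
import json

# Target tracking configuration for an Auto Scaling PutScalingPolicy call:
# the group adds or removes instances to keep average CPU near 40%.
policy_params = {
    "AutoScalingGroupName": "web-asg",   # illustrative group name
    "PolicyName": "keep-cpu-at-40",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingConfiguration": {
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 40.0,
    },
}

print(json.dumps(policy_params, indent=2))
```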

Reference:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scaling-target-tracking.html

Question 115

A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on-demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.

What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?
Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
Use Amazon Athena directly with Amazon S3 to run the queries as needed.
Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.




Answer is Use Amazon Athena directly with Amazon S3 to run the queries as needed.

Keywords:
- Queries will be simple and will run on-demand.
- Minimal changes to the existing architecture.

Amazon Athena is an interactive query service that makes it easy to analyze data directly in Amazon Simple Storage Service (Amazon S3) using standard SQL. With a few actions in the AWS Management Console, you can point Athena at your data stored in Amazon S3 and begin using standard SQL to run ad-hoc queries and get results in seconds.
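An on-demand analysis run can be sketched as the payload of an Athena StartQueryExecution call. This assumes a table has already been defined over the JSON logs (for example with a CREATE EXTERNAL TABLE statement or an AWS Glue crawler); the database, table, column names, and results bucket are all illustrative.

```python
import json

# Parameters for an Athena StartQueryExecution call that runs a simple
# ad-hoc SQL query directly against the JSON logs in S3.
query_params = {
    "QueryString": (
        "SELECT status, COUNT(*) AS hits "
        "FROM app_logs "                      # illustrative table over the S3 logs
        "WHERE log_date = DATE '2024-01-15' "
        "GROUP BY status"
    ),
    "QueryExecutionContext": {"Database": "logs_db"},   # illustrative database
    "ResultConfiguration": {"OutputLocation": "s3://example-athena-results/"},
}

print(json.dumps(query_params, indent=2))
```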

Reference:
https://docs.aws.amazon.com/athena/latest/ug/what-is.html

Question 116

A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. Some files are accessed frequently while other files are rarely accessed in an unpredictable pattern. The solutions architect must minimize the costs of storing and retrieving the media files.

Which storage option meets these requirements?
S3 Standard
S3 Intelligent-Tiering
S3 Standard-Infrequent Access (S3 Standard-IA)
S3 One Zone-Infrequent Access (S3 One Zone-IA)




Answer is S3 Intelligent-Tiering

The storage option that meets these requirements is B: S3 Intelligent-Tiering.

Amazon S3 Intelligent-Tiering is a storage class that automatically moves data to the most cost-effective access tier based on access patterns. By default it stores objects in two access tiers: a frequent access tier and an infrequent access tier. The frequent access tier is charged at the same rate as S3 Standard, while the infrequent access tier is charged at a lower rate.

S3 Intelligent-Tiering is a good choice for media files that are accessed in an unpredictable pattern because it automatically moves each object to the most cost-effective tier, minimizing storage and retrieval costs. It is also resilient to the loss of an Availability Zone because objects are stored redundantly across multiple Availability Zones within a Region.
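No lifecycle rule is needed to use the class: objects can be written into it directly. A minimal sketch shaped like the parameters of an S3 PutObject call, with an illustrative bucket and key.

```python
# Parameters for an S3 PutObject call that stores a media file directly
# in Intelligent-Tiering; bucket and key names are illustrative.
put_params = {
    "Bucket": "example-media-bucket",
    "Key": "videos/episode-01.mp4",
    "StorageClass": "INTELLIGENT_TIERING",
}

print(put_params)
```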

Option A, S3 Standard, is not a good choice because it does not offer the cost optimization of S3 Intelligent-Tiering.

Option C, S3 Standard-Infrequent Access (S3 Standard-IA), is not a good choice because the frequently accessed files would incur per-GB retrieval charges and minimum storage duration fees, which makes it more expensive than S3 Intelligent-Tiering under an unpredictable access pattern.

Option D, S3 One Zone-Infrequent Access (S3 One Zone-IA), is not a good choice because it is not resilient to the loss of an Availability Zone. It stores objects in a single Availability Zone, making it less durable than other storage classes.

Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/DataDurability.html

Question 117

A company is storing backup files by using Amazon S3 Standard storage. The files are accessed frequently for 1 month. However, the files are not accessed after 1 month. The company must keep the files indefinitely.

Which storage solution will meet these requirements MOST cost-effectively?
Configure S3 Intelligent-Tiering to automatically migrate objects.
Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.
Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month.
Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month.




Answer is Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.

Keywords:
- The files are accessed frequently for 1 month.
- Files are NOT accessed after 1 month.

The storage solution that will meet these requirements most cost-effectively is B: Create an S3 Lifecycle configuration to transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month.

Amazon S3 Glacier Deep Archive is a secure, durable, and extremely low-cost Amazon S3 storage class for long-term retention of data that is rarely accessed and for which retrieval times of several hours are acceptable. It is the lowest-cost storage option in Amazon S3, making it a cost-effective choice for storing backup files that are not accessed after 1 month.

You can use an S3 Lifecycle configuration to automatically transition objects from S3 Standard to S3 Glacier Deep Archive after 1 month, minimizing storage costs for files that are no longer accessed.

Option A, configuring S3 Intelligent-Tiering to automatically migrate objects, is not a good choice because the access pattern is already known in advance: its per-object monitoring charges add cost without benefit, and its tiers do not reach the low price of S3 Glacier Deep Archive.

Option C, transitioning objects from S3 Standard to S3 Standard-Infrequent Access (S3 Standard-IA) after 1 month, is not a good choice because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.

Option D, transitioning objects from S3 Standard to S3 One Zone-Infrequent Access (S3 One Zone-IA) after 1 month, is not a good choice because it is not the lowest-cost storage option and would not provide the cost benefits of S3 Glacier Deep Archive.
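The lifecycle rule behind the correct answer can be sketched as the configuration document passed to the S3 PutBucketLifecycleConfiguration API; the rule ID and prefix are illustrative.

```python
import json

# S3 Lifecycle configuration (as passed to PutBucketLifecycleConfiguration):
# after 30 days of frequent access, objects move to Glacier Deep Archive
# and remain there indefinitely at the lowest per-GB storage price.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-backups-after-1-month",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},   # illustrative key prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "DEEP_ARCHIVE"}
            ],
        }
    ]
}

print(json.dumps(lifecycle_config, indent=2))
```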

Reference:
https://aws.amazon.com/s3/storage-classes/

Question 118

A company observes an increase in Amazon EC2 costs in its most recent bill. The billing team notices unwanted vertical scaling of instance types for a couple of EC2 instances. A solutions architect needs to create a graph comparing the last 2 months of EC2 costs and perform an in-depth analysis to identify the root cause of the vertical scaling. How should the solutions architect generate the information with the LEAST operational overhead?
Use AWS Budgets to create a budget report and compare EC2 costs based on instance types.
Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.
Use graphs from the AWS Billing and Cost Management dashboard to compare EC2 costs based on instance types for the last 2 months.
Use AWS Cost and Usage Reports to create a report and send it to an Amazon S3 bucket. Use Amazon QuickSight with Amazon S3 as a source to generate an interactive graph based on instance types.




Answer is Use Cost Explorer's granular filtering feature to perform an in-depth analysis of EC2 costs based on instance types.

Cost Explorer is a tool that enables you to view and analyze your costs and usage. You can filter graphs by values such as API operation, Availability Zone, AWS service, custom cost allocation tag, instance type, and more. This makes it a powerful tool for in-depth analysis of costs.
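The same analysis is available programmatically through the Cost Explorer GetCostAndUsage API; a sketch of the request comparing two months of EC2 cost grouped by instance type, with an illustrative date range.

```python
import json

# Parameters for a Cost Explorer GetCostAndUsage call: monthly EC2 cost
# for the last 2 months, grouped by instance type to spot vertical scaling.
ce_params = {
    "TimePeriod": {"Start": "2024-01-01", "End": "2024-03-01"},  # illustrative
    "Granularity": "MONTHLY",
    "Metrics": ["UnblendedCost"],
    "Filter": {
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Elastic Compute Cloud - Compute"],
        }
    },
    "GroupBy": [{"Type": "DIMENSION", "Key": "INSTANCE_TYPE"}],
}

print(json.dumps(ce_params, indent=2))
```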

Reference:
https://www.examtopics.com/discussions/amazon/view/68306-exam-aws-certified-solutions-architect-associate-saa-c02/

Question 119

A development team runs monthly resource-intensive tests on its general purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance.

Which solution meets these requirements MOST cost-effectively?
Stop the DB instance when tests are completed. Restart the DB instance when required.
Use an Auto Scaling policy with the DB instance to automatically scale when tests are completed.
Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.
Modify the DB instance to a low-capacity instance when tests are completed. Modify the DB instance again when required.




Answer is Create a snapshot when tests are completed. Terminate the DB instance and restore the snapshot when required.

Creating a snapshot and terminating the DB instance eliminates nearly all charges between the monthly test runs: between runs the company pays only for snapshot (backup) storage, and the instance can be restored from the snapshot with the same compute and memory attributes before the next test.

Not A: a stopped DB instance is not billed for instance hours, but you still pay for its provisioned storage (and any provisioned IOPS) and automated backups, which costs more than keeping only a snapshot. Amazon RDS also automatically restarts an instance that has been stopped for 7 days, so it would not stay stopped for a whole month.
Not B: Amazon RDS does not scale a DB instance's compute with an Auto Scaling policy.
Not D: possible, but not the MOST cost-effective option; there is no need to run the DB instance at all when it is not in use.
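The monthly cycle can be sketched as the payloads of three RDS API calls (CreateDBSnapshot, DeleteDBInstance, RestoreDBInstanceFromDBSnapshot); the instance identifier, snapshot identifier, and instance class are illustrative.

```python
import json

# 1. After the 48-hour test: snapshot the instance, then delete it.
snapshot_params = {
    "DBInstanceIdentifier": "test-mysql",            # illustrative names
    "DBSnapshotIdentifier": "test-mysql-post-run",
}
delete_params = {
    "DBInstanceIdentifier": "test-mysql",
    "SkipFinalSnapshot": True,   # the snapshot above already exists
}
# 2. Before the next run: restore the same instance class from the snapshot,
#    keeping the original compute and memory attributes.
restore_params = {
    "DBInstanceIdentifier": "test-mysql",
    "DBSnapshotIdentifier": "test-mysql-post-run",
    "DBInstanceClass": "db.m5.large",   # illustrative, same class as before
}

print(json.dumps([snapshot_params, delete_params, restore_params], indent=2))
```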

Reference:
https://aws.amazon.com/rds/pricing/

Question 120

A development team needs to host a website that will be accessed by other teams. The website contents consist of HTML, CSS, client-side JavaScript, and images.

Which method is the MOST cost-effective for hosting the website?
Containerize the website and host it in AWS Fargate.
Create an Amazon S3 bucket and host the website there.
Deploy a web server on an Amazon EC2 instance to host the website.
Configure an Application Load Balancer with an AWS Lambda target that uses the Express.js framework.




Answer is Create an Amazon S3 bucket and host the website there.

A static website is a good fit for Amazon S3: HTML, CSS, client-side JavaScript, and images are all static resources, and S3 static website hosting serves them with no servers to manage.

Containerizing the website and hosting it in AWS Fargate (option A) would involve additional complexity and costs associated with managing the container environment and scaling resources. Deploying a web server on an Amazon EC2 instance (option C) would require provisioning and managing the EC2 instance, which may not be cost-effective for a static website. Configuring an Application Load Balancer with an AWS Lambda target (option D) adds unnecessary complexity and may not be the most efficient solution for hosting a static website.
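Enabling static hosting on the bucket can be sketched as the payload of an S3 PutBucketWebsite call; the bucket name is illustrative, and the bucket policy or public access settings must still allow the other teams to read the objects.

```python
import json

# Parameters for an S3 PutBucketWebsite call enabling static website hosting.
website_params = {
    "Bucket": "example-team-website",   # illustrative bucket name
    "WebsiteConfiguration": {
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
}

print(json.dumps(website_params, indent=2))
```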
