SAA-C03: AWS Certified Solutions Architect - Associate


Question 131

A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud.

Which solution will meet these requirements with the LEAST operational overhead?
Use AWS DataSync to move the data. Create a custom transformation job by using AWS Glue.
Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.




Answer is Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.

A Snowball Edge Storage Optimized device moves the 50 TB offline, so it uses none of the data center's network bandwidth, and AWS Glue is a managed service that can run the transformation job in the cloud without servers to maintain.
Option D would technically work, but running the application on an EC2 instance adds the operational overhead of managing that instance, which conflicts with the requirement for the LEAST operational overhead.
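For illustration only, a minimal AWS Glue PySpark job skeleton is sketched below; the S3 paths and the CSV/Parquet formats are hypothetical placeholders, and the real job would apply the company's weekly transformation logic in the marked section.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job boilerplate: the Glue service supplies the
# awsglue library and the JOB_NAME argument at run time.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the data set that was imported from the Snowball device into S3
# (hypothetical bucket name).
frame = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-imported-data/"]},
    format="csv",
)

# ... the weekly transformation logic would go here ...

# Write the transformed output back to S3 (hypothetical bucket name).
glue_context.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://example-transformed-data/"},
    format="parquet",
)
job.commit()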

Question 132

A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned about the overall cost of the solution.

Which storage solution meets these requirements MOST cost-effectively?
Amazon Elastic Block Store (Amazon EBS)
Amazon Elastic File System (Amazon EFS)
Amazon OpenSearch Service (Amazon Elasticsearch Service)
Amazon S3




Answer is Amazon S3

Amazon S3 is an object storage service designed to store and retrieve any amount of data from anywhere on the web. It is highly scalable, highly available, and cost-effective, which makes it an ideal choice for the 900 TB repository of text documents described in the scenario. Because S3 scales automatically and requires no capacity provisioning, it can absorb the anticipated periods of high demand, and its pay-as-you-go pricing keeps the overall cost low.
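As a small illustration of why S3 fits this pattern, the web tier can hand out presigned URLs so that downloads are served directly by S3 rather than through the EC2 instances; the bucket and key names below are hypothetical.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key; a presigned URL lets the browser fetch
# the document directly from S3, keeping download traffic off the web servers.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-docs-bucket", "Key": "reports/2024/q1.txt"},
    ExpiresIn=3600,  # link stays valid for one hour
)
print(url)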

Option A (Amazon EBS) is block storage that attaches to individual EC2 instances. A single volume cannot hold 900 TB, the data could not easily be shared across instances in multiple Availability Zones, and provisioning block storage at that scale would cost far more than S3.

Option B (Amazon EFS) is a scalable shared file system and would work technically, but its per-GB price is considerably higher than S3's, which matters at the anticipated 900 TB scale.

Option C (Amazon OpenSearch Service) is a search and analytics service, not a general-purpose object store, so it is not suitable as the primary storage solution for the text documents.

Question 133

A company has a highly dynamic batch processing job that uses many Amazon EC2 instances to complete it. The job is stateless in nature, can be started and stopped at any given time with no negative impact, and typically takes upwards of 60 minutes total to complete. The company has asked a solutions architect to design a scalable and cost-effective solution that meets the requirements of the job.

What should the solutions architect recommend?
Implement EC2 Spot Instances.
Purchase EC2 Reserved Instances.
Implement EC2 On-Demand Instances.
Implement the processing on AWS Lambda.




Answer is Implement EC2 Spot Instances.

To design a scalable and cost-effective solution for the batch processing job, the solutions architect should recommend implementing EC2 Spot Instances.

Spot can provide significant cost savings (up to 90%) compared to On-Demand.
Since the job is stateless and can be stopped/restarted anytime, the intermittent availability of Spot is not an issue.
Spot supports the same instance types as On-Demand, so optimal instance types can be chosen.
For a 60+ minute batch job, a Spot interruption is possible but relatively unlikely, and even if one occurs, the stateless job can simply be restarted.
Reserved Instances don't offer any advantage for a highly dynamic job like this.
Lambda is not a good fit because its maximum execution time is 15 minutes, well short of the 60+ minute runtime.
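As a sketch, a one-time Spot request for the batch fleet could look like the following; the AMI ID, instance type, and counts are hypothetical and would be tuned to the job.

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI for the batch job
    InstanceType="c5.large",          # hypothetical instance type
    MinCount=1,
    MaxCount=10,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # Terminate on interruption; the stateless job can simply be
            # restarted on fresh Spot capacity.
            "SpotInstanceType": "one-time",
            "InstanceInterruptionBehavior": "terminate",
        },
    },
)
print([i["InstanceId"] for i in response["Instances"]])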

Question 134

A solutions architect needs to implement a solution to reduce a company's storage costs. All the company's data is in the Amazon S3 Standard storage class. The company must keep all data for at least 25 years. Data from the most recent 2 years must be highly available and immediately retrievable.

Which solution will meet these requirements?
Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive immediately.
Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.
Use S3 Intelligent-Tiering. Activate the archiving option to ensure that data is archived in S3 Glacier Deep Archive.
Set up an S3 Lifecycle policy to transition objects to S3 One Zone-Infrequent Access (S3 One Zone-IA) immediately and to S3 Glacier Deep Archive after 2 years.




Answer is Set up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years.

By setting up an S3 Lifecycle policy to transition objects to S3 Glacier Deep Archive after 2 years, the company keeps all data for at least 25 years while data from the most recent 2 years remains highly available and immediately retrievable in the S3 Standard storage class. This optimizes storage costs by using S3 Glacier Deep Archive, the lowest-cost S3 storage class, for long-term retention.
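A minimal sketch of such a lifecycle rule with boto3 (the bucket name is hypothetical; 730 days approximates the 2-year boundary):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-reports-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "deep-archive-after-2-years",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply the rule to all objects
                "Transitions": [
                    # Objects stay in S3 Standard (highly available and
                    # immediately retrievable) for ~2 years, then move to
                    # the lowest-cost class for the remaining 23+ years.
                    {"Days": 730, "StorageClass": "DEEP_ARCHIVE"}
                ],
            }
        ]
    },
)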

Option A is incorrect because immediately transitioning objects to S3 Glacier Deep Archive would not fulfill the requirement of keeping the most recent 2 years of data highly available and immediately retrievable.

Option C is also incorrect because S3 Intelligent-Tiering archives objects based on access patterns rather than age: with the archiving option activated, an object from the most recent 2 years that has not been accessed recently could be moved to the Deep Archive tier and would no longer be immediately retrievable.

Option D is not the best choice because S3 One Zone-IA stores data in a single Availability Zone. The data would still be immediately retrievable, but it would not meet the high-availability requirement for the most recent 2 years.

Question 135

A company wants to run applications in containers in the AWS Cloud. These applications are stateless and can tolerate disruptions within the underlying infrastructure. The company needs a solution that minimizes cost and operational overhead.

What should a solutions architect do to meet these requirements?
Use Spot Instances in an Amazon EC2 Auto Scaling group to run the application containers.
Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.
Use On-Demand Instances in an Amazon EC2 Auto Scaling group to run the application containers.
Use On-Demand Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.




Answer is Use Spot Instances in an Amazon Elastic Kubernetes Service (Amazon EKS) managed node group.

Spot Instances minimize cost for containers that can tolerate disruptions within the underlying infrastructure.
Running them in an EKS managed node group removes most of the EC2 operational overhead, because AWS manages provisioning, updates, and graceful handling of Spot interruptions for the worker nodes.

Previously, customers had to run Spot Instances as self-managed worker nodes in their EKS clusters. This meant doing some heavy lifting such as building and maintaining configuration for Spot Instances in EC2 Auto Scaling groups, deploying a tool for handling Spot interruptions gracefully, deploying AMI updates, and updating the kubelet version running on their worker nodes. Now, all you need to do is supply a single parameter to indicate that a managed node group should launch Spot Instances, and provide multiple instance types that would be used by the underlying EC2 Auto Scaling group.
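A minimal sketch of that single parameter with boto3 (cluster name, subnets, node role, and instance types are all hypothetical):

import boto3

eks = boto3.client("eks")

eks.create_nodegroup(
    clusterName="example-cluster",
    nodegroupName="spot-workers",
    capacityType="SPOT",  # the one parameter that switches the group to Spot
    # Offering several instance types gives the underlying EC2 Auto Scaling
    # group more Spot capacity pools to draw from.
    instanceTypes=["m5.large", "m5a.large", "m4.large"],
    scalingConfig={"minSize": 1, "maxSize": 10, "desiredSize": 3},
    subnets=["subnet-aaa111", "subnet-bbb222"],
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
)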

Reference:
https://aws.amazon.com/blogs/containers/amazon-eks-now-supports-provisioning-and-managing-ec2-spot-instances-in-managed-node-groups/

Question 136

A company’s website provides users with downloadable historical performance reports. The website needs a solution that will scale to meet the company’s website demands globally. The solution should be cost-effective, limit the provisioning of infrastructure resources, and provide the fastest possible response time.

Which combination should a solutions architect recommend to meet these requirements?
Amazon CloudFront and Amazon S3
AWS Lambda and Amazon DynamoDB
Application Load Balancer with Amazon EC2 Auto Scaling
Amazon Route 53 with internal Application Load Balancers




Answer is Amazon CloudFront and Amazon S3

By using CloudFront, the website can leverage the global network of edge locations to cache and deliver the performance reports to users from the nearest edge location, reducing latency and providing fast response times. Amazon S3 serves as the origin for the files, where the reports are stored.
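A hedged sketch of wiring the two together with boto3 follows; the bucket domain name is hypothetical, and the cache policy ID is AWS's documented managed CachingOptimized policy.

import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # unique idempotency token
        "Comment": "Downloadable historical performance reports",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "reports-s3-origin",
                    # Hypothetical S3 bucket serving as the origin.
                    "DomainName": "example-reports-bucket.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "reports-s3-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            # AWS managed "CachingOptimized" cache policy.
            "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        },
    }
)
print(response["Distribution"]["DomainName"])  # edge-served domain name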

Option B is incorrect because AWS Lambda and Amazon DynamoDB are compute and key-value database services; neither is designed for serving large downloadable files to a global audience.

Option C is incorrect because using an Application Load Balancer with Amazon EC2 Auto Scaling may require more infrastructure provisioning and management compared to the CloudFront and S3 combination. Additionally, it may not provide the same level of global scalability and fast response times as CloudFront.

Option D is incorrect because while Amazon Route 53 is a global DNS service, it alone does not provide the caching and content delivery capabilities required for serving the downloadable reports. Internal Application Load Balancers do not address the global scalability and caching requirements specified in the scenario.

Reference:
https://aws.amazon.com/s3/

Question 137

A company runs an ecommerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones. The Auto Scaling group scales based on CPU utilization metrics. The ecommerce application stores the transaction data in a MySQL 8.0 database that is hosted on a large EC2 instance.
The database's performance degrades quickly as application load increases. The application handles more read requests than write transactions. The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability.

Which solution will meet these requirements?
Use Amazon Redshift with a single node for leader and compute functionality.
Use Amazon RDS with a Single-AZ deployment. Configure Amazon RDS to add reader instances in a different Availability Zone.
Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.
Use Amazon ElastiCache for Memcached with EC2 Spot Instances.




Answer is Use Amazon Aurora with a Multi-AZ deployment. Configure Aurora Auto Scaling with Aurora Replicas.

Amazon Aurora is a relational database engine that is compatible with MySQL and PostgreSQL. It is designed for high performance, scalability, and availability. With a Multi-AZ deployment, Aurora automatically replicates the database to a standby instance in a different Availability Zone. This provides high availability and fast failover in case of a primary instance failure.

Aurora Auto Scaling allows you to add or remove Aurora Replicas based on CPU utilization, connections, or custom metrics. This enables you to automatically scale the read capacity of the database in response to application load. Aurora Replicas are read-only instances that can offload read traffic from the primary instance. They are kept in sync with the primary instance using Aurora's distributed storage architecture, which enables low-latency updates across the replicas.
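A minimal sketch of configuring Aurora Auto Scaling through the Application Auto Scaling API (the cluster identifier and target value are hypothetical):

import boto3

autoscaling = boto3.client("application-autoscaling")

resource_id = "cluster:example-aurora-cluster"  # hypothetical cluster ID

# Register the cluster's replica count as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=15,  # Aurora supports up to 15 replicas per cluster
)

# Track average reader CPU: replicas are added as read load climbs
# and removed when it subsides.
autoscaling.put_scaling_policy(
    PolicyName="aurora-read-scaling",
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)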

Option A, using Amazon Redshift with a single node for leader and compute functionality, would not provide high availability, and Redshift is a data warehouse designed for analytics rather than for an ecommerce application's transactional workload.

Option B, using Amazon RDS with a Single-AZ deployment and configuring RDS to add reader instances in a different Availability Zone, would not provide high availability and would not automatically scale the number of reader instances in response to read workloads.

Option D, using Amazon ElastiCache for Memcached with EC2 Spot Instances, would not provide a database solution and would not meet the requirements.
