SAA-C03: AWS Certified Solutions Architect - Associate


Question 71

A company has an AWS Glue extract, transform, and load (ETL) job that runs every day at the same time. The job processes XML data that is in an Amazon S3 bucket. New data is added to the S3 bucket every day. A solutions architect notices that AWS Glue is processing all the data during each run.

What should the solutions architect do to prevent AWS Glue from reprocessing old data?
Edit the job to use job bookmarks.
Edit the job to delete data after the data is processed.
Edit the job by setting the NumberOfWorkers field to 1.
Use a FindMatches machine learning (ML) transform.




Answer is Edit the job to use job bookmarks.

Job bookmarks in AWS Glue let an ETL job track the data it has already processed so that subsequent runs skip it. This prevents AWS Glue from reprocessing old data and improves performance, because each run handles only the data added since the previous run. To use job bookmarks, the solutions architect can edit the job and enable the Job bookmark option (equivalently, set the --job-bookmark-option job parameter to job-bookmark-enable). The job then records its position after each successful run and resumes from that point on the next run.
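
As an illustrative sketch only (the job name is a placeholder, and UpdateJob overwrites the whole job definition, so in practice the remaining job fields would be copied over as well), job bookmarks can be enabled programmatically with boto3:

    import boto3

    glue = boto3.client("glue")
    job_name = "daily-xml-etl"  # hypothetical job name

    # Read the current definition so existing default arguments are preserved.
    job = glue.get_job(JobName=job_name)["Job"]
    args = dict(job.get("DefaultArguments", {}))
    args["--job-bookmark-option"] = "job-bookmark-enable"

    glue.update_job(
        JobName=job_name,
        JobUpdate={
            "Role": job["Role"],
            "Command": job["Command"],
            "DefaultArguments": args,
        },
    )

For the bookmark to advance, the ETL script itself must pass a transformation_ctx to its sources and call job.commit() at the end of a successful run.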

Reference:
https://docs.aws.amazon.com/glue/latest/dg/monitor-continuations.html

Question 72

A company has an automobile sales website that stores its listings in a database on Amazon RDS. When an automobile is sold, the listing needs to be removed from the website and the data must be sent to multiple target systems.

Which design should a solutions architect recommend?
Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.
Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) FIFO queue for the targets to consume.
Subscribe to an RDS event notification and send an Amazon Simple Queue Service (Amazon SQS) queue fanned out to multiple Amazon Simple Notification Service (Amazon SNS) topics. Use AWS Lambda functions to update the targets.
Subscribe to an RDS event notification and send an Amazon Simple Notification Service (Amazon SNS) topic fanned out to multiple Amazon Simple Queue Service (Amazon SQS) queues. Use AWS Lambda functions to update the targets.




Answer is Create an AWS Lambda function triggered when the database on Amazon RDS is updated to send the information to an Amazon Simple Queue Service (Amazon SQS) queue for the targets to consume.

RDS event notifications only cover operational events such as DB instance events, DB parameter group events, DB security group events, and DB snapshot events. What the scenario needs is to capture a data-modifying event (i.e., the delete of a sold listing), which is typically done with a native function or stored procedure in the database that invokes an AWS Lambda function. The Lambda function can then publish the change to an Amazon SQS queue for the target systems to consume; a standard queue is sufficient because strict ordering is not required, which is why this option is preferred over the FIFO variant.
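
A minimal sketch of such a Lambda handler (the queue URL environment variable and the payload shape are assumptions) could look like this:

    import json
    import os

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ["QUEUE_URL"]  # hypothetical environment variable

    def handler(event, context):
        # 'event' is assumed to be the sold-listing record passed in by the
        # database function or stored procedure that invokes this Lambda.
        sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
        return {"status": "queued"}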

Reference:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/PostgreSQL-Lambda.html

Question 73

A company recently migrated a message processing system to AWS. The system receives messages into an ActiveMQ queue running on an Amazon EC2 instance. Messages are processed by a consumer application running on Amazon EC2. The consumer application processes the messages and writes results to a MySQL database running on Amazon EC2. The company wants this application to be highly available with low operational complexity.

Which architecture offers the HIGHEST availability?
Add a second ActiveMQ server to another Availability Zone. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Replicate the MySQL database to another Availability Zone.
Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an additional consumer EC2 instance in another Availability Zone. Use Amazon RDS for MySQL with Multi-AZ enabled.
Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.




Answer is Use Amazon MQ with active/standby brokers configured across two Availability Zones. Add an Auto Scaling group for the consumer EC2 instances across two Availability Zones. Use Amazon RDS for MySQL with Multi-AZ enabled.

Option D offers the highest availability because it addresses all potential points of failure in the system:
Amazon MQ with active/standby brokers configured across two Availability Zones ensures that the message queue is available even if one Availability Zone experiences an outage.
An Auto Scaling group for the consumer EC2 instances across two Availability Zones ensures that the consumer application is able to continue processing messages even if one Availability Zone experiences an outage.
Amazon RDS for MySQL with Multi-AZ enabled ensures that the database is available even if one Availability Zone experiences an outage.

Option A addresses some potential points of failure, but it does not address the potential for the consumer application to become unavailable due to an Availability Zone outage.

Option B addresses some potential points of failure, but it does not address the potential for the database to become unavailable due to an Availability Zone outage.

Option C addresses some potential points of failure, but it does not address the potential for the consumer application to become unavailable due to an Availability Zone outage.
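
As a rough sketch (broker name, engine version, subnet IDs, and credentials are placeholders), the active/standby broker in option D maps to a single Amazon MQ API call:

    import boto3

    mq = boto3.client("mq")

    mq.create_broker(
        BrokerName="orders-broker",
        EngineType="ACTIVEMQ",
        EngineVersion="5.17.6",                    # assumed supported version
        HostInstanceType="mq.m5.large",
        DeploymentMode="ACTIVE_STANDBY_MULTI_AZ",  # standby broker in a second AZ
        PubliclyAccessible=False,
        AutoMinorVersionUpgrade=True,
        SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # one subnet per AZ
        Users=[{"Username": "app", "Password": "replace-with-a-strong-password"}],
    )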

Question 74

A company hosts a containerized web application on a fleet of on-premises servers that process incoming requests. The number of requests is growing quickly. The on-premises servers cannot handle the increased number of requests. The company wants to move the application to AWS with minimum code changes and minimum development effort.

Which solution will meet these requirements with the LEAST operational overhead?
Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.
Use two Amazon EC2 instances to host the containerized web application. Use an Application Load Balancer to distribute the incoming requests.
Use AWS Lambda with a new code that uses one of the supported languages. Create multiple Lambda functions to support the load. Use Amazon API Gateway as an entry point to the Lambda functions.
Use a high performance computing (HPC) solution such as AWS ParallelCluster to establish an HPC cluster that can process the incoming requests at the appropriate scale.




Answer is Use AWS Fargate on Amazon Elastic Container Service (Amazon ECS) to run the containerized web application with Service Auto Scaling. Use an Application Load Balancer to distribute the incoming requests.

AWS Fargate is a technology that you can use with Amazon ECS to run containers without having to manage servers or clusters of Amazon EC2 instances. With Fargate, you no longer have to provision, configure, or scale virtual machines to run containers, which keeps operational overhead low while the containerized application moves to AWS with minimal code changes.
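
For illustration (cluster, image, role, subnet, security group, and target group values below are placeholders), running the existing container on Fargate behind an Application Load Balancer could be sketched with boto3 roughly as follows:

    import boto3

    ecs = boto3.client("ecs")

    # Register a Fargate-compatible task definition for the existing container image.
    task_def = ecs.register_task_definition(
        family="web-app",
        requiresCompatibilities=["FARGATE"],
        networkMode="awsvpc",
        cpu="512",
        memory="1024",
        executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        containerDefinitions=[{
            "name": "web",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "essential": True,
        }],
    )

    # Run the tasks as a service in private subnets, registered with an ALB target group.
    ecs.create_service(
        cluster="web-cluster",
        serviceName="web-service",
        taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={"awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111", "subnet-bbbb2222"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "DISABLED",
        }},
        loadBalancers=[{
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/web/0123456789abcdef",
            "containerName": "web",
            "containerPort": 80,
        }],
    )

Service Auto Scaling would then be attached to the service's desired count so the number of tasks tracks the request load.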

Reference:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/what-is-fargate.html

Question 75

A company stores its application logs in an Amazon CloudWatch Logs log group. A new policy requires the company to store all application logs in Amazon OpenSearch Service (Amazon Elasticsearch Service) in near-real time.

Which solution will meet this requirement with the LEAST operational overhead?
Configure a CloudWatch Logs subscription to stream the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Create an AWS Lambda function. Use the log group to invoke the function to write the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).
Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.
Install and configure Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams. Configure Kinesis Data Streams to deliver the logs to Amazon OpenSearch Service (Amazon Elasticsearch Service).




Answer is Create an Amazon Kinesis Data Firehose delivery stream. Configure the log group as the delivery stream's source. Configure Amazon OpenSearch Service (Amazon Elasticsearch Service) as the delivery stream's destination.

This solution uses Amazon Kinesis Data Firehose, which is a fully managed service for streaming data to Amazon OpenSearch Service (Amazon Elasticsearch Service) and other destinations. You can configure the log group as the source of the delivery stream and Amazon OpenSearch Service as the destination. This solution requires minimal operational overhead, as Kinesis Data Firehose automatically scales and handles data delivery, transformation, and indexing.
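
As a sketch (log group name, ARNs, and the IAM role are assumptions), pointing the log group at an already-created Firehose delivery stream is a single subscription-filter call:

    import boto3

    logs = boto3.client("logs")

    logs.put_subscription_filter(
        logGroupName="/app/production",          # assumed log group name
        filterName="to-firehose",
        filterPattern="",                        # empty pattern forwards all log events
        destinationArn="arn:aws:firehose:us-east-1:123456789012:"
                       "deliverystream/logs-to-opensearch",
        roleArn="arn:aws:iam::123456789012:role/CWLtoFirehoseRole",
    )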

Option A (a CloudWatch Logs subscription streaming the logs to Amazon OpenSearch Service) would also work, but it involves setting up and managing the subscription and verifying that the logs are delivered in near-real time.

Option B (an AWS Lambda function invoked by the log group to write the logs to Amazon OpenSearch Service) would also work, but it adds the overhead of writing, managing, and scaling the Lambda function to keep up with the incoming logs.

Option D (installing the Amazon Kinesis Agent on each application server to deliver the logs to Amazon Kinesis Data Streams, which then delivers them to Amazon OpenSearch Service) requires installing and configuring the agent on every server and managing the data stream, which is the most operational overhead of the four options.

Question 76

A global company is using Amazon API Gateway to design REST APIs for its loyalty club users in the us-east-1 Region and the ap-southeast-2 Region. A solutions architect must design a solution to protect these API Gateway managed REST APIs across multiple accounts from SQL injection and cross-site scripting attacks.

Which solution will meet these requirements with the LEAST amount of administrative effort?
Set up AWS WAF in both Regions. Associate Regional web ACLs with an API stage.
Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.
Set up AWS Shield in both Regions. Associate Regional web ACLs with an API stage.
Set up AWS Shield in one of the Regions. Associate Regional web ACLs with an API stage.




Answer is Set up AWS Firewall Manager in both Regions. Centrally configure AWS WAF rules.

Using AWS Firewall Manager to centrally configure AWS WAF rules provides the least administrative effort compared to the other options.

Firewall Manager allows centralized administration of AWS WAF rules across multiple accounts and Regions. WAF rules can be defined once in Firewall Manager and automatically applied to APIs in all the required Regions and accounts.

Option A (setting up AWS WAF in both Regions and associating Regional web ACLs with each API stage) can provide the same protection, but it requires separate manual configuration in every Region and account, which means more administrative effort whenever rules need to be added or changed.
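
For context, the protection itself usually comes from the AWS managed rule groups for SQL injection and common web attacks (which cover cross-site scripting). With Firewall Manager these rules are defined once in a central WAF policy; the per-Region equivalent that Firewall Manager automates looks roughly like this sketch (names, priorities, and Region are illustrative):

    import boto3

    # REGIONAL scope is used for API Gateway REST APIs; without Firewall Manager
    # this would have to be repeated in every Region and account.
    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    wafv2.create_web_acl(
        Name="api-protection",
        Scope="REGIONAL",
        DefaultAction={"Allow": {}},
        Rules=[
            {
                "Name": "sqli",
                "Priority": 0,
                "Statement": {"ManagedRuleGroupStatement": {
                    "VendorName": "AWS", "Name": "AWSManagedRulesSQLiRuleSet"}},
                "OverrideAction": {"None": {}},
                "VisibilityConfig": {"SampledRequestsEnabled": True,
                                     "CloudWatchMetricsEnabled": True,
                                     "MetricName": "sqli"},
            },
            {
                "Name": "common-xss",
                "Priority": 1,
                "Statement": {"ManagedRuleGroupStatement": {
                    "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet"}},
                "OverrideAction": {"None": {}},
                "VisibilityConfig": {"SampledRequestsEnabled": True,
                                     "CloudWatchMetricsEnabled": True,
                                     "MetricName": "common"},
            },
        ],
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "api-protection"},
    )

The resulting web ACL still has to be associated with each API stage, which is exactly the repetitive, per-Region work that Firewall Manager removes.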

Reference:
https://docs.aws.amazon.com/waf/latest/developerguide/what-is-aws-waf.html

Question 77

A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available.

Which combination of configuration options will meet these requirements? (Choose two.)
Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.




Answers are:
A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.


Option A uses an Auto Scaling group to launch the EC2 instances in private subnets, ensuring they are not directly accessible from the public internet. The RDS Multi-AZ DB instance is also placed in private subnets, maintaining security.

Option D configures a VPC with a public subnet for the web tier, allowing customers to access the website. The private subnet provides a secure environment for the EC2 instances and the RDS DB instance. NAT gateways are used to provide internet access to the EC2 instances in the private subnet for payment processing.
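
A minimal sketch (subnet, Elastic IP allocation, and route table IDs are placeholders) of wiring one NAT gateway into a private subnet's route table:

    import boto3

    ec2 = boto3.client("ec2")

    # The NAT gateway lives in a public subnet and uses an Elastic IP allocation.
    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0aaa1111bbbb2222c",           # placeholder public subnet ID
        AllocationId="eipalloc-0123456789abcdef0",
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]

    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # The private subnet's default route points at the NAT gateway, giving the
    # EC2 instances outbound internet access for payment processing only.
    ec2.create_route(
        RouteTableId="rtb-0123456789abcdef0",          # placeholder route table ID
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )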

Question 78

A company runs an Oracle database on premises. As part of the company’s migration to AWS, the company wants to upgrade the database to the most recent available version. The company also wants to set up disaster recovery (DR) for the database. The company needs to minimize the operational overhead for normal operations and DR setup. The company also needs to maintain access to the database's underlying operating system.

Which solution will meet these requirements?
Migrate the Oracle database to an Amazon EC2 instance. Set up database replication to a different AWS Region.
Migrate the Oracle database to Amazon RDS for Oracle. Activate Cross-Region automated backups to replicate the snapshots to another AWS Region.
Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in another AWS Region.
Migrate the Oracle database to Amazon RDS for Oracle. Create a standby database in another Availability Zone.




Answer is Migrate the Oracle database to Amazon RDS Custom for Oracle. Create a read replica for the database in another AWS Region.

Amazon RDS Custom for Oracle: Amazon RDS Custom for Oracle runs Oracle databases on managed instances in the AWS Cloud while still giving the company access to the database's underlying operating system, a stated requirement that standard Amazon RDS for Oracle does not meet. RDS Custom also handles tasks such as backups, patching, and monitoring, minimizing operational overhead for normal operations.
Read Replica in Another AWS Region: Creating a read replica for the database in another AWS Region sets up disaster recovery (DR) with minimal operational overhead, because Amazon RDS manages the replication between the primary database and the replica, keeping the data consistent without extra management tasks for the DR setup.
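
As an illustrative sketch only (identifiers, instance class, and Regions are assumptions, and the replica options supported by RDS Custom for Oracle should be confirmed against the current documentation), a cross-Region read replica is requested from the destination Region by referencing the primary instance's ARN:

    import boto3

    # Call RDS in the DR Region and reference the primary instance by ARN.
    rds = boto3.client("rds", region_name="us-west-2")

    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="oracle-dr-replica",
        SourceDBInstanceIdentifier=(
            "arn:aws:rds:us-east-1:123456789012:db:oracle-primary"
        ),
        DBInstanceClass="db.m5.xlarge",
        SourceRegion="us-east-1",  # used by boto3 to presign the cross-Region request
    )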

Reference:
https://aws.amazon.com/blogs/database/part-2-implement-multi-master-replication-with-rds-custom-for-oracle-high-availability-disaster-recovery/

Question 79

A company wants to move its application to a serverless solution. The serverless solution needs to analyze existing and new data by using SQL. The company stores the data in an Amazon S3 bucket. The data requires encryption and must be replicated to a different AWS Region.

Which solution will meet these requirements with the LEAST operational overhead?
Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon Athena to query the data.
Create a new S3 bucket. Load the data into the new S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption with AWS KMS multi-Region keys (SSE-KMS). Use Amazon RDS to query the data.
Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon Athena to query the data.
Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon RDS to query the data.




Answer is Load the data into the existing S3 bucket. Use S3 Cross-Region Replication (CRR) to replicate encrypted objects to an S3 bucket in another Region. Use server-side encryption with Amazon S3 managed encryption keys (SSE-S3). Use Amazon Athena to query the data.

This option carries the least operational overhead: Amazon Athena runs standard SQL queries directly against the data in S3 and is fully serverless, server-side encryption with Amazon S3 managed keys (SSE-S3) is enabled with a single bucket setting and needs no key management, and reusing the existing bucket avoids creating and populating a new one. S3 Cross-Region Replication (CRR) then copies the encrypted objects to a bucket in another Region (a short sketch of these bucket settings follows the list below). To encrypt the objects that already exist in the bucket:
1. Enable SSE-S3 default encryption on the bucket, if it is not already enabled, so that all new or copied objects are encrypted automatically.
2. Create an S3 Inventory report for the bucket. This generates a file with metadata for every object, including its encryption status.
3. Use S3 Select or Amazon Athena to query the inventory report and filter for only the unencrypted objects.
4. Create an S3 Batch Operations copy job for the filtered objects. The copy operation re-encrypts them using the bucket's SSE-S3 configuration.
5. Monitor the job to confirm that all objects were encrypted; you can optionally delete the original unencrypted versions afterward. This approach minimizes disruption and encrypts the data without rewriting existing data or code.
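
A short sketch of the bucket-level settings involved (bucket names and the replication role are placeholders, and versioning must already be enabled on both buckets for CRR):

    import boto3

    s3 = boto3.client("s3")

    # Default encryption: every new object is encrypted with S3 managed keys (SSE-S3).
    s3.put_bucket_encryption(
        Bucket="existing-data-bucket",
        ServerSideEncryptionConfiguration={"Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]},
    )

    # Cross-Region Replication to the DR bucket in another Region.
    s3.put_bucket_replication(
        Bucket="existing-data-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
            "Rules": [{
                "ID": "replicate-to-dr-region",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {"Bucket": "arn:aws:s3:::dr-region-data-bucket"},
            }],
        },
    )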

Reference:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/batch-ops-copy-example-bucket-key.html

Question 80

A company is migrating its on-premises PostgreSQL database to Amazon Aurora PostgreSQL. The on-premises database must remain online and accessible during the migration. The Aurora database must remain synchronized with the on-premises database.

Which combination of actions must a solutions architect take to meet these requirements? (Choose two.)
Create an ongoing replication task.
Create a database backup of the on-premises database.
Create an AWS Database Migration Service (AWS DMS) replication server.
Convert the database schema by using the AWS Schema Conversion Tool (AWS SCT).
Create an Amazon EventBridge (Amazon CloudWatch Events) rule to monitor the database synchronization.




Answers are:
A. Create an ongoing replication task.
C. Create an AWS Database Migration Service (AWS DMS) replication server.


AWS Database Migration Service (AWS DMS) helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.
With AWS Database Migration Service, you can also continuously replicate data with low latency from any supported source to any supported target.
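
As a rough sketch (endpoint ARNs, instance class, and table mappings are placeholders), the two required pieces correspond to two AWS DMS API calls: a replication instance and an ongoing full-load-plus-CDC replication task:

    import json

    import boto3

    dms = boto3.client("dms")

    instance = dms.create_replication_instance(
        ReplicationInstanceIdentifier="pg-migration-instance",
        ReplicationInstanceClass="dms.t3.medium",
        AllocatedStorage=100,
        MultiAZ=False,
    )

    # "full-load-and-cdc" performs the initial copy and then keeps Aurora in sync
    # with ongoing changes while the on-premises database stays online.
    # In practice, wait for the replication instance to become available first.
    dms.create_replication_task(
        ReplicationTaskIdentifier="pg-to-aurora-ongoing",
        SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SOURCE",
        TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TARGET",
        ReplicationInstanceArn=instance["ReplicationInstance"]["ReplicationInstanceArn"],
        MigrationType="full-load-and-cdc",
        TableMappings=json.dumps({
            "rules": [{
                "rule-type": "selection",
                "rule-id": "1",
                "rule-name": "include-all",
                "object-locator": {"schema-name": "%", "table-name": "%"},
                "rule-action": "include",
            }],
        }),
    )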

Reference:
https://aws.amazon.com/dms/
https://docs.aws.amazon.com/zh_cn/dms/latest/sbs/chap-manageddatabases.postgresql-rds-postgresql.html
