SAA-C03: AWS Certified Solutions Architect - Associate


Question 1

A company uses AWS Organizations to manage multiple AWS accounts for different departments. The management account has an Amazon S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.

Which solution meets these requirements with the LEAST amount of operational overhead?
Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.




Answer is Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.

This is the solution with the least operational overhead because it does not require any additional infrastructure or configuration. AWS Organizations already tracks the organization ID of each account, so you can simply add the aws:PrincipalOrgID condition key to the S3 bucket policy and reference the organization ID. This ensures that only principals from accounts within the organization can access the S3 bucket.

Condition keys: AWS provides condition keys that you can query to provide more granular control over certain actions. The following condition keys are especially useful with AWS Organizations:

aws:PrincipalOrgID – Simplifies specifying the Principal element in a resource-based policy. This global key provides an alternative to listing all the account IDs for all AWS accounts in an organization. Instead of listing all of the accounts that are members of an organization, you can specify the organization ID in the Condition element.

aws:PrincipalOrgPaths – Use this condition key to match members of a specific organization root, an OU, or its children. The aws:PrincipalOrgPaths condition key returns true when the principal (root user, IAM user, or role) making the request is in the specified organization path. A path is a text representation of the structure of an AWS Organizations entity.
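
For illustration, a minimal sketch of such a bucket policy applied with boto3 follows; the bucket name and organization ID are placeholders, not values from the question.

import json
import boto3

# Hypothetical values for illustration only.
BUCKET = "example-project-reports-bucket"
ORG_ID = "o-exampleorgid"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOrganizationMembersOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
            # Access is allowed only when the calling principal belongs
            # to the specified organization in AWS Organizations.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": ORG_ID}},
        }
    ],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))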

Reference:
https://aws.amazon.com/blogs/security/control-access-to-aws-resources-by-using-the-aws-organization-of-iam-principals/

Question 2

A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database by using user names and passwords that are stored locally in a file. The company wants to minimize the operational overhead of credential management.

What should a solutions architect do to accomplish this goal?
Use AWS Secrets Manager. Turn on automatic rotation.
Use AWS Systems Manager Parameter Store. Turn on automatic rotation.
Create an Amazon S3 bucket to store objects that are encrypted with an AWS Key Management Service (AWS KMS) encryption key. Migrate the credential file to the S3 bucket. Point the application to the S3 bucket.
Create an encrypted Amazon Elastic Block Store (Amazon EBS) volume for each EC2 instance. Attach the new EBS volume to each EC2 instance. Migrate the credential file to the new EBS volume. Point the application to the new EBS volume.




Answer is Use AWS Secrets Manager. Turn on automatic rotation.

AWS Secrets Manager is a service that provides a secure and convenient way to store, manage, and rotate secrets. Secrets Manager can be used to store database credentials, SSH keys, and other sensitive information.
AWS Secrets Manager also supports automatic rotation, which helps minimize the operational overhead of credential management. When automatic rotation is enabled, Secrets Manager generates a new secret value and updates it on a regular schedule, so the application always retrieves current credentials.
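
As a minimal sketch (the secret name and JSON keys are assumptions, not from the question), the application could fetch the Aurora credentials from Secrets Manager at runtime instead of reading a local file:

import json
import boto3

# Hypothetical secret name for illustration only.
SECRET_ID = "prod/app/aurora-credentials"

client = boto3.client("secretsmanager")

# With rotation enabled, this call always returns the latest valid
# user name and password, so the application never stores them locally.
response = client.get_secret_value(SecretId=SECRET_ID)
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]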

Reference:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotate-secrets_turn-on-for-db.html

Question 3

A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center. The inspection server performed specific operations such as traffic flow inspection and traffic filtering. The company wants to have the same functionalities in the AWS Cloud.

Which solution will meet these requirements?
Use Amazon GuardDuty for traffic inspection and traffic filtering in the production VPC.
Use Traffic Mirroring to mirror traffic from the production VPC for traffic inspection and filtering.
Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.
Use AWS Firewall Manager to create the required rules for traffic inspection and traffic filtering for the production VPC.




Answer is Use AWS Network Firewall to create the required rules for traffic inspection and traffic filtering for the production VPC.

AWS Network Firewall is a managed network firewall service that allows you to define firewall rules to filter and inspect network traffic. You can create rules to define the traffic that should be allowed or blocked based on various criteria such as source/destination IP addresses, protocols, ports, and more. With AWS Network Firewall, you can implement traffic inspection and filtering capabilities within the production VPC, helping to protect the network traffic.

With AWS Network Firewall, you can create custom rule groups to define specific operations for traffic inspection and filtering.
It can perform deep packet inspection and filtering at the network level to enforce security policies, block malicious traffic, and allow or deny traffic based on defined rules.
By integrating AWS Network Firewall with the production VPC, you can achieve similar functionalities as the on-premises inspection server, performing traffic flow inspection and filtering.

In the context of the given scenario, AWS Network Firewall can be a suitable choice if the company wants to implement traffic inspection and filtering directly within the VPC without the need for traffic mirroring. It provides an additional layer of security by enforcing specific rules for traffic filtering, which can help protect the production environment.
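
As a hedged illustration of a custom rule group (the rule group name, capacity, and Suricata rule below are placeholder assumptions), a stateful rule group could be created with boto3 like this:

import boto3

network_firewall = boto3.client("network-firewall")

# Hypothetical Suricata-compatible rule that drops traffic to an
# example CIDR range; real rules would reflect the company's policy.
suricata_rules = (
    'drop ip any any -> 198.51.100.0/24 any '
    '(msg:"Block example range"; sid:1000001; rev:1;)'
)

network_firewall.create_rule_group(
    RuleGroupName="example-inspection-rules",
    Type="STATEFUL",
    Capacity=100,
    RuleGroup={"RulesSource": {"RulesString": suricata_rules}},
    Description="Example traffic-filtering rules for the production VPC",
)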

Question 4

A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations. The rest of the company should have only limited access.

Which solution will meet these requirements?
Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate IAM roles.
Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.
Create an AWS Glue table and crawler for the data in Amazon S3. Create an AWS Glue extract, transform, and load (ETL) job to produce reports. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.
Create an AWS Glue table and crawler for the data in Amazon S3. Use Amazon Athena Federated Query to access data within Amazon RDS for PostgreSQL. Generate reports by using Amazon Athena. Publish the reports to Amazon S3. Use S3 bucket policies to limit access to the reports.




Answer is Create an analysis in Amazon QuickSight. Connect all the data sources and create new datasets. Publish dashboards to visualize the data. Share the dashboards with the appropriate users and groups.

Keywords:
- Data lake on AWS.
- Consists of data in Amazon S3 and Amazon RDS for PostgreSQL.
- The company needs a reporting solution that provides data VISUALIZATION and includes ALL the data sources within the data lake.

Option B involves using Amazon QuickSight, which is a business intelligence tool provided by AWS for data visualization and reporting. With this option, you can connect all the data sources within the data lake, including Amazon S3 and Amazon RDS for PostgreSQL. You can create datasets within QuickSight that pull data from these sources.

The solution allows you to publish dashboards in Amazon QuickSight, which provides the required data visualization capabilities. To control access, share the dashboards with the appropriate QuickSight users and groups, granting full access only to the company's management team and limited access to the rest of the company.

A - Incorrect: QuickSight dashboards are shared with QuickSight users (Standard edition) and groups (Enterprise edition), which exist within QuickSight itself; dashboards cannot be shared directly with IAM roles.
C - Incorrect: AWS Glue ETL reports published to Amazon S3 do not provide data visualization, and this option does not cover the Amazon RDS for PostgreSQL data.
D - Incorrect: Generating reports with Amazon Athena and publishing them to Amazon S3 does not provide data visualization.
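
As a rough sketch (the account ID, dashboard ID, and group ARN are placeholders), read-only access to a published dashboard could be granted to a QuickSight group through the API:

import boto3

quicksight = boto3.client("quicksight")

# Hypothetical identifiers for illustration only.
ACCOUNT_ID = "111122223333"
DASHBOARD_ID = "data-lake-reporting"
GROUP_ARN = "arn:aws:quicksight:us-east-1:111122223333:group/default/management-team"

# Grant viewer-level dashboard permissions to the group.
quicksight.update_dashboard_permissions(
    AwsAccountId=ACCOUNT_ID,
    DashboardId=DASHBOARD_ID,
    GrantPermissions=[
        {
            "Principal": GROUP_ARN,
            "Actions": [
                "quicksight:DescribeDashboard",
                "quicksight:ListDashboardVersions",
                "quicksight:QueryDashboard",
            ],
        }
    ],
)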

Reference:
https://docs.aws.amazon.com/quicksight/latest/user/share-a-dashboard-grant-access-users.html

Question 5

A company is implementing a new business application. The application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage. A solutions architect needs to ensure that the EC2 instances can access the S3 bucket.

What should the solutions architect do to meet this requirement?
Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.
Create an IAM policy that grants access to the S3 bucket. Attach the policy to the EC2 instances.
Create an IAM group that grants access to the S3 bucket. Attach the group to the EC2 instances.
Create an IAM user that grants access to the S3 bucket. Attach the user account to the EC2 instances.




Answer is Create an IAM role that grants access to the S3 bucket. Attach the role to the EC2 instances.

Option A is the correct approach because IAM roles are designed to provide temporary credentials to AWS resources such as EC2 instances. By creating an IAM role, you can define the necessary permissions and policies that allow the EC2 instances to access the S3 bucket securely. Attaching the IAM role to the EC2 instances will automatically provide the necessary credentials to access the S3 bucket without the need for explicit access keys or secrets.

Option B is not recommended in this case because IAM policies alone cannot be directly attached to EC2 instances. Policies are usually attached to IAM users, groups, or roles.

Option C is not the most appropriate choice because IAM groups are used to manage collections of IAM users and their permissions, rather than granting access to specific resources like S3 buckets.

Option D is not the optimal solution because IAM users are intended for individual user accounts and are not the recommended approach for granting access to resources within EC2 instances.
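
A minimal sketch of this setup with boto3 follows; the role, policy, and bucket names are assumptions, not values from the question.

import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "example-app-s3-access"  # hypothetical name

# Trust policy that lets EC2 instances assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}
iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy granting access to a hypothetical document bucket.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-document-bucket/*",
        }
    ],
}
iam.put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="s3-document-access",
    PolicyDocument=json.dumps(access_policy),
)

# The instance profile is what is actually attached to the EC2 instances.
iam.create_instance_profile(InstanceProfileName=ROLE_NAME)
iam.add_role_to_instance_profile(InstanceProfileName=ROLE_NAME, RoleName=ROLE_NAME)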

Reference:
https://aws.amazon.com/blogs/security/writing-iam-policies-how-to-grant-access-to-an-amazon-s3-bucket/

Question 6

A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC. The application servers and database servers are deployed in private subnets in the same VPC. The company has deployed a third-party virtual firewall appliance from AWS Marketplace in an inspection VPC. The appliance is configured with an IP interface that can accept IP packets.
A solutions architect needs to integrate the web application with the appliance to inspect all traffic to the application before the traffic reaches the web server.

Which solution will meet these requirements with the LEAST operational overhead?
Create a Network Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
Create an Application Load Balancer in the public subnet of the application's VPC to route the traffic to the appliance for packet inspection.
Deploy a transit gateway in the inspection VPC. Configure route tables to route the incoming packets through the transit gateway.
Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.




Answer is Deploy a Gateway Load Balancer in the inspection VPC. Create a Gateway Load Balancer endpoint to receive the incoming packets and forward the packets to the appliance.

A Gateway Load Balancer (GWLB) is designed for exactly this pattern. The GWLB sits in the inspection VPC in front of the virtual appliance, and a Gateway Load Balancer endpoint in the application's VPC transparently forwards IP packets to the appliance and returns them after inspection. Only route-table entries pointing inbound traffic at the endpoint are needed, which keeps operational overhead low.

Option A is incorrect because a Network Load Balancer distributes traffic to registered targets; it does not transparently redirect all inbound traffic to an appliance in a separate inspection VPC before it reaches the web servers.
Option B is incorrect because an Application Load Balancer operates at layer 7 (HTTP/HTTPS) and cannot forward raw IP packets to the appliance's IP interface.
Option C is incorrect because, although a transit gateway can route traffic between the application VPC and the inspection VPC, building the inspection path this way requires considerably more routing configuration and ongoing management than a Gateway Load Balancer endpoint.
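
As an illustrative sketch (the service name, VPC ID, and subnet ID are placeholders), the Gateway Load Balancer endpoint in the application's VPC could be created like this; route tables are then updated to send inbound traffic through the endpoint:

import boto3

ec2 = boto3.client("ec2")

# The service name comes from the VPC endpoint service that fronts the
# Gateway Load Balancer in the inspection VPC (placeholder value below).
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    VpcId="vpc-0abc1234def567890",
    SubnetIds=["subnet-0123456789abcdef0"],
)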

Reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/scaling-network-traffic-inspection-using-aws-gateway-load-balancer/

Question 7

A company needs to review its AWS Cloud deployment to ensure that its Amazon S3 buckets do not have unauthorized configuration changes.

What should a solutions architect do to accomplish this goal?
Turn on AWS Config with the appropriate rules.
Turn on AWS Trusted Advisor with the appropriate checks.
Turn on Amazon Inspector with the appropriate assessment template.
Turn on Amazon S3 server access logging. Configure Amazon EventBridge (Amazon CloudWatch Events).




Answer is Turn on AWS Config with the appropriate rules.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. You can use AWS Config to monitor and record changes to the configuration of your Amazon S3 buckets. By turning on AWS Config and enabling the appropriate rules, you can ensure that your S3 buckets do not have unauthorized configuration changes.

AWS Trusted Advisor (Option B) is a service that provides best practice recommendations for your AWS resources, but it does not monitor or record changes to the configuration of your S3 buckets.

Amazon Inspector (Option C) is a service that helps you assess the security and compliance of your applications. While it can be used to assess the security of your S3 buckets, it does not monitor or record changes to the configuration of your S3 buckets.

Amazon S3 server access logging (Option D) enables you to log requests made to your S3 bucket. While it can help you identify changes to your S3 bucket, it does not monitor or record changes to the configuration of your S3 bucket.
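
For illustration, one of the AWS managed rules that watches S3 bucket configuration could be enabled with boto3 as sketched below (the rule name is a placeholder choice, and a configuration recorder must already be running in the account):

import boto3

config = boto3.client("config")

# S3_BUCKET_PUBLIC_READ_PROHIBITED is an AWS managed rule that flags
# buckets whose configuration allows public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED",
        },
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)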

Reference:
https://aws.amazon.com/config/

Question 8

A company is launching a new application and will display application metrics on an Amazon CloudWatch dashboard. The company's product manager needs to access this dashboard periodically. The product manager does not have an AWS account. A solutions architect must provide access to the product manager by following the principle of least privilege.

Which solution will meet these requirements?
Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.
Create an IAM user specifically for the product manager. Attach the CloudWatchReadOnlyAccess AWS managed policy to the user. Share the new login credentials with the product manager. Share the browser URL of the correct dashboard with the product manager.
Create an IAM user for the company's employees. Attach the ViewOnlyAccess AWS managed policy to the IAM user. Share the new login credentials with the product manager. Ask the product manager to navigate to the CloudWatch console and locate the dashboard by name in the Dashboards section.
Deploy a bastion server in a public subnet. When the product manager requires access to the dashboard, start the server and share the RDP credentials. On the bastion server, ensure that the browser is configured to open the dashboard URL with cached AWS credentials that have appropriate permissions to view the dashboard.




Answer is Share the dashboard from the CloudWatch console. Enter the product manager's email address, and complete the sharing steps. Provide a shareable link for the dashboard to the product manager.

Share a single dashboard and designate specific email addresses of the people who can view the dashboard. Each of these users creates their own password that they must enter to view the dashboard.

This solution allows the product manager to access the CloudWatch dashboard without requiring an AWS account or IAM user credentials. By sharing the dashboard through the CloudWatch console, you can provide direct access to the specific dashboard without granting unnecessary permissions.

With this approach, the product manager can access the dashboard periodically by simply clicking on the provided link. They will be able to view the application metrics without the need for an AWS account or IAM user credentials. This ensures that the product manager has the necessary access while adhering to the principle of least privilege by not granting unnecessary permissions or creating additional IAM users.

Reference:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch-dashboard-sharing.html

Question 9

A company is migrating applications to AWS. The applications are deployed in different accounts. The company manages the accounts centrally by using AWS Organizations. The company's security team needs a single sign-on (SSO) solution across all the company's accounts. The company must continue managing the users and groups in its on-premises self-managed Microsoft Active Directory.

Which solution will meet these requirements?
Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a one-way forest trust or a one-way domain trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.
Use AWS Directory Service. Create a two-way trust relationship with the company's self-managed Microsoft Active Directory.
Deploy an identity provider (IdP) on premises. Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console.




Answer is Enable AWS Single Sign-On (AWS SSO) from the AWS SSO console. Create a two-way forest trust to connect the company's self-managed Microsoft Active Directory with AWS SSO by using AWS Directory Service for Microsoft Active Directory.

A two-way trust is required for AWS Enterprise Apps such as Amazon Chime, Amazon Connect, Amazon QuickSight, AWS IAM Identity Center, Amazon WorkDocs, Amazon WorkMail, Amazon WorkSpaces, and the AWS Management Console. AWS Managed Microsoft AD must be able to query the users and groups in your self-managed AD.

Amazon EC2, Amazon RDS, and Amazon FSx will work with either a one-way or two-way trust.

Reference:
https://docs.aws.amazon.com/singlesignon/latest/userguide/connectonpremad.html

Question 10

A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes on its AWS resources and record a history of API calls made to these resources.

What should a solutions architect do to meet these requirements?
Use AWS CloudTrail to track configuration changes and AWS Config to record API calls.
Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.
Use AWS Config to track configuration changes and Amazon CloudWatch to record API calls.
Use AWS CloudTrail to track configuration changes and Amazon CloudWatch to record API calls.




Answer is Use AWS Config to track configuration changes and AWS CloudTrail to record API calls.

AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It provides a history of configuration changes made to your resources and can be used to track changes made to your resources over time.

AWS CloudTrail is a service that enables you to record API calls made to your AWS resources. It provides a history of API calls made to your resources, including the identity of the caller, the time of the call, the source of the call, and the response elements returned by the service.
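
As a brief sketch of how the two services divide the work (the bucket name is a placeholder), AWS Config returns the configuration history of a resource while CloudTrail returns the API calls made against it:

import boto3

# Configuration change history for a hypothetical S3 bucket (AWS Config).
config = boto3.client("config")
history = config.get_resource_config_history(
    resourceType="AWS::S3::Bucket",
    resourceId="example-document-bucket",
)

# Recent API call history involving the same bucket (AWS CloudTrail).
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "ResourceName", "AttributeValue": "example-document-bucket"}
    ],
    MaxResults=10,
)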
