Professional Data Engineer on Google Cloud Platform


Question 261

You have data stored in BigQuery. The data in the BigQuery dataset must be highly available. You need to define a storage, backup, and recovery strategy for this data that minimizes cost.

How should you configure the BigQuery table so that it has a recovery point objective (RPO) of 30 days?
Set the BigQuery dataset to be regional. In the event of an emergency, use a point-in-time snapshot to recover the data.
Set the BigQuery dataset to be regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup. In the event of an emergency, use the backup copy of the table.
Set the BigQuery dataset to be multi-regional. In the event of an emergency, use a point-in-time snapshot to recover the data.
Set the BigQuery dataset to be multi-regional. Create a scheduled query to make copies of the data to tables suffixed with the time of the backup. In the event of an emergency, use the backup copy of the table.




Answer is Set the BigQuery dataset to be multi-regional. In the event of an emergency, use a point-in-time snapshot to recover the data.

A BigQuery table snapshot preserves the contents of a table (called the base table) at a particular time. You can save a snapshot of a current table, or create a snapshot of a table as it was at any time in the past seven days. A table snapshot can have an expiration; when the configured amount of time has passed since the table snapshot was created, BigQuery deletes the table snapshot. You can query a table snapshot as you would a standard table. Table snapshots are read-only, but you can create (restore) a standard table from a table snapshot, and then you can modify the restored table.

Reference:
https://cloud.google.com/bigquery/docs/table-snapshots-intro#table_snapshots
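For illustration, a minimal sketch of creating such a snapshot and keeping it for 30 days with the BigQuery Python client (project, dataset, and table names are hypothetical):

# Create a table snapshot that expires after 30 days.
from google.cloud import bigquery

client = bigquery.Client()  # uses application default credentials

ddl = """
CREATE SNAPSHOT TABLE `my_project.my_dataset.orders_snapshot`
CLONE `my_project.my_dataset.orders`
OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
);
"""

client.query(ddl).result()  # wait for the DDL statement to finish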

Question 262

You issue a new batch job to Dataflow. The job starts successfully, processes a few elements, and then suddenly fails and shuts down.
You navigate to the Dataflow monitoring interface where you find errors related to a particular DoFn in your pipeline.

What is the most likely cause of the errors?
Job validation
Exceptions in worker code
Graph or pipeline construction
Insufficient permissions




Answer is Exceptions in worker code

The most likely cause of the errors you're experiencing in Dataflow, particularly since they are tied to a specific DoFn (the per-element processing function in an Apache Beam pipeline), is exceptions in worker code.
When a Dataflow job processes a few elements successfully before failing, it suggests that the overall job setup, permissions, and pipeline graph are likely correct, as the job was able to start and initially process data. However, if it fails during execution and the errors are associated with a specific DoFn, this points towards issues in the code that executes within the workers. This could include:
1. Runtime exceptions in the code logic of the DoFn.
2. Issues handling specific data elements that might not be correctly managed by the DoFn code (e.g., unexpected data formats, null values, etc.).
3. Resource constraints or timeouts if the DoFn performs operations that are resource-intensive or long-running.

To resolve these issues, you should:
1. Inspect the stack traces and error messages in the Dataflow monitoring interface for details on the exception.
2. Test the DoFn with a variety of data inputs, especially edge cases, to ensure robust error handling.
3. Review the resource usage and performance characteristics of the DoFn if the issue is related to resource constraints.

Reference:
https://cloud.google.com/dataflow/docs/guides/troubleshooting-your-pipeline#detect_an_exception_in_worker_code
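As a hedged illustration of making worker code robust, here is a minimal Apache Beam (Python) DoFn that catches exceptions per element and routes bad records to a dead-letter output instead of failing the job (class and tag names are hypothetical):

import json
import logging

import apache_beam as beam


class ParsePayload(beam.DoFn):
    DEAD_LETTER = 'dead_letter'

    def process(self, element):
        try:
            yield json.loads(element)  # may raise on malformed input
        except Exception as err:
            logging.warning('Bad element %r: %s', element, err)
            # Emit to a side output so the main output stays clean.
            yield beam.pvalue.TaggedOutput(self.DEAD_LETTER, element)


# Usage in a pipeline:
# results = lines | beam.ParDo(ParsePayload()).with_outputs(
#     ParsePayload.DEAD_LETTER, main='parsed')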

Question 263

You need to migrate 1 PB of data from an on-premises data center to Google Cloud. Data transfer time during the migration should take only a few hours. You want to follow Google-recommended practices to facilitate the large data transfer over a secure connection.

What should you do?
Establish a Cloud Interconnect connection between the on-premises data center and Google Cloud, and then use the Storage Transfer Service.
Use a Transfer Appliance and have engineers manually encrypt, decrypt, and verify the data.
Establish a Cloud VPN connection, start gcloud compute scp jobs in parallel, and run checksums to verify the data.
Reduce the data into 3 TB batches, transfer the data using gsutil, and run checksums to verify the data.




Answer is Establish a Cloud Interconnect connection between the on-premises data center and Google Cloud, and then use the Storage Transfer Service.

Cloud Interconnect provides a dedicated private connection between on-prem and Google Cloud for high bandwidth (up to 100 Gbps) and low latency. This facilitates large, fast data transfers.
Storage Transfer Service supports parallel data transfers over Cloud Interconnect. It can transfer petabyte-scale datasets faster by transferring objects in parallel.
Storage Transfer Service encrypts data in transit over HTTPS, and Cloud Storage encrypts data at rest by default, so the transfer is secure end to end.
It follows Google-recommended practices for large data migrations, unlike ad hoc methods such as gsutil or scp.
The other options would take too long for a 1 PB transfer (VPN capped at 3 Gbps, manual transfers) or introduce extra steps like batching and checksums. Cloud Interconnect + Storage Transfer is the recommended Google solution.

Reference:
https://cloud.google.com/storage-transfer/docs/transfer-options#:~:text=Transferring%20more%20than%201%20TB%20from%20on%2Dpremises
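A rough sketch of the transfer job, assuming the google-cloud-storage-transfer Python client and a transfer agent pool already installed on premises; project, pool, directory, and bucket names are placeholders, and exact field names may differ by library version:

from google.cloud import storage_transfer

client = storage_transfer.StorageTransferServiceClient()

job = client.create_transfer_job(
    {
        "transfer_job": {
            "project_id": "my-project",
            "status": storage_transfer.TransferJob.Status.ENABLED,
            "transfer_spec": {
                # Agents read from the on-premises filesystem and push data
                # over the Interconnect link to Cloud Storage.
                "source_agent_pool_name": "projects/my-project/agentPools/onprem-pool",
                "posix_data_source": {"root_directory": "/mnt/export/pb-dataset"},
                "gcs_data_sink": {"bucket_name": "my-migration-bucket"},
            },
        }
    }
)
print("Created transfer job:", job.name)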

Question 264

Your company wants to be able to retrieve large result sets of medical information from your current system, which has over 10 TBs in the database, and store the data in new tables for further query.
The database must have a low-maintenance architecture and be accessible via SQL.
You need to implement a cost-effective solution that can support data analytics for large result sets.

What should you do?
Use Cloud SQL, but first organize the data into tables. Use JOIN in queries to retrieve data.
Use BigQuery as a data warehouse. Set output destinations for caching large queries.
Use a MySQL cluster installed on a Compute Engine managed instance group for scalability.
Use Cloud Spanner to replicate the data across regions. Normalize the data in a series of tables.




Answer is Use BigQuery as a data warehouse. Set output destinations for caching large queries.

The key reasons why BigQuery fits the requirements:
It is a fully managed data warehouse built to scale to handle massive datasets and perform fast SQL analytics
It has a low maintenance architecture with no infrastructure to manage
SQL capabilities allow easy querying of the medical data
Setting a destination table for query output lets you store large result sets in new tables and re-query them without re-running the original query
It provides a very cost-effective solution for these large scale analytics use cases
In contrast, Cloud Spanner and Cloud SQL would not scale as cost effectively for 10TB+ data volumes. Self-managed MySQL on Compute Engine also requires more maintenance. Hence, leveraging BigQuery as a fully managed data warehouse is the optimal solution here.

Reference:
https://cloud.google.com/bigquery/docs/query-overview
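As a minimal sketch (hypothetical project, dataset, and column names), the BigQuery Python client can write a large result set to a destination table so it can be queried again without re-running the original query:

from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(
    destination="my_project.analytics.large_result_cache",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

sql = """
SELECT patient_id, diagnosis_code, admission_date
FROM `my_project.medical.records`
WHERE admission_date >= '2023-01-01'
"""

# The result set lands in the destination table for further SQL analysis.
client.query(sql, job_config=job_config).result()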

Question 265

You are designing a system that requires an ACID-compliant database. You must ensure that the system requires minimal human intervention in case of a failure.

What should you do?
Configure a Cloud SQL for MySQL instance with point-in-time recovery enabled.
Configure a Cloud SQL for PostgreSQL instance with high availability enabled.
Configure a Bigtable instance with more than one cluster.
Configure a BigQuery table with a multi-region configuration.




Answer is Configure a Cloud SQL for PostgreSQL instance with high availability enabled.

Key reasons: Cloud SQL for PostgreSQL provides full ACID compliance, unlike Bigtable which provides only atomicity and consistency guarantees.
Enabling high availability removes the need for manual failover, because Cloud SQL will automatically fail over to a standby instance if the primary goes down.
Point-in-time recovery in MySQL requires manual intervention to restore data if needed.
BigQuery does not provide transactional guarantees required for an ACID database.
Therefore, a Cloud SQL for PostgreSQL instance with high availability meets the ACID and minimal intervention requirements best. The automatic failover will ensure availability and uptime without administrative effort.

Reference:
https://cloud.google.com/sql/docs/postgres/high-availability#HA-configuration
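A hedged sketch of enabling HA at instance creation time through the Cloud SQL Admin API (using google-api-python-client; instance name, region, and tier are placeholders):

from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1beta4")

body = {
    "name": "orders-db",
    "region": "us-central1",
    "databaseVersion": "POSTGRES_15",
    "settings": {
        "tier": "db-custom-2-8192",
        "availabilityType": "REGIONAL",  # HA: automatic failover to a standby
        "backupConfiguration": {"enabled": True},
    },
}

operation = sqladmin.instances().insert(project="my-project", body=body).execute()
print("Started operation:", operation["name"])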

Question 266

You are implementing workflow pipeline scheduling using open source-based tools and Google Kubernetes Engine (GKE). You want to use a Google managed service to simplify and automate the task.
You also want to accommodate Shared VPC networking considerations.

What should you do?
Use Dataflow for your workflow pipelines. Use Cloud Run triggers for scheduling.
Use Dataflow for your workflow pipelines. Use shell scripts to schedule workflows.
Use Cloud Composer in a Shared VPC configuration. Place the Cloud Composer resources in the host project.
Use Cloud Composer in a Shared VPC configuration. Place the Cloud Composer resources in the service project.




Answer is Use Cloud Composer in a Shared VPC configuration. Place the Cloud Composer resources in the service project.

Shared VPC requires that you designate a host project to which networks and subnetworks belong and a service project, which is attached to the host project. When Cloud Composer participates in a Shared VPC, the Cloud Composer environment is in the service project.

Reference:
https://cloud.google.com/composer/docs/how-to/managing/configuring-shared-vpc
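For illustration, a minimal sketch that creates the Composer environment in the service project while pointing at the host project's Shared VPC network and subnetwork (project, network, and subnetwork names are placeholders):

import subprocess

subprocess.run(
    [
        "gcloud", "composer", "environments", "create", "etl-orchestrator",
        "--project", "service-project-id",  # Composer lives in the service project
        "--location", "us-central1",
        "--network", "projects/host-project-id/global/networks/shared-vpc",
        "--subnetwork", "projects/host-project-id/regions/us-central1/subnetworks/composer-subnet",
    ],
    check=True,
)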

Question 267

Flowlogistic is a leading logistics and supply chain provider. They help businesses throughout the world manage their resources and transport them to their final destination. The company has grown rapidly, expanding their offerings to include rail, truck, aircraft, and oceanic shipping.

Company Background
The company started as a regional trucking company, and then expanded into other logistics markets. Because they have not updated their infrastructure, managing and tracking orders and shipments has become a bottleneck. To improve operations, Flowlogistic developed proprietary technology for tracking shipments in real time at the parcel level. However, they are unable to deploy it because their technology stack, based on Apache Kafka, cannot support the processing volume. In addition, Flowlogistic wants to further analyze their orders and shipments to determine how best to deploy their resources.

Solution Concept
Flowlogistic wants to implement two concepts using the cloud:
- Use their proprietary technology in a real-time inventory-tracking system that indicates the location of their loads
- Perform analytics on all their orders and shipment logs, which contain both structured and unstructured data, to determine how best to deploy resources and which markets to expand into. They also want to use predictive analytics to learn earlier when a shipment will be delayed.

Existing Technical Environment
Flowlogistic architecture resides in a single data center:
- Databases
  8 physical servers in 2 clusters
  • SQL Server - user data, inventory, static data
  3 physical servers
  • Cassandra - metadata, tracking messages
  10 Kafka servers - tracking message aggregation and batch insert
- Application servers - customer front end, middleware for order/customs
  60 virtual machines across 20 physical servers
  • Tomcat - Java services
  • Nginx - static content
  • Batch servers
- Storage appliances
  • iSCSI for virtual machine (VM) hosts
  • Fibre Channel storage area network (FC SAN) - SQL Server storage
  • Network-attached storage (NAS) - image storage, logs, backups
- 10 Apache Hadoop/Spark servers
  • Core Data Lake
  • Data analysis workloads
- 20 miscellaneous servers
  • Jenkins, monitoring, bastion hosts

Business Requirements
- Build a reliable and reproducible environment with scaled parity of production.
- Aggregate data in a centralized Data Lake for analysis
- Use historical data to perform predictive analytics on future shipments
- Accurately track every shipment worldwide using proprietary technology
- Improve business agility and speed of innovation through rapid provisioning of new resources
- Analyze and optimize architecture for performance in the cloud
- Migrate fully to the cloud if all other requirements are met

Technical Requirements
- Handle both streaming and batch data
- Migrate existing Hadoop workloads
- Ensure architecture is scalable and elastic to meet the changing demands of the company.
- Use managed services whenever possible
- Encrypt data in flight and at rest
- Connect a VPN between the production data center and cloud environment

CEO Statement
We have grown so quickly that our inability to upgrade our infrastructure is really hampering further growth and efficiency. We are efficient at moving shipments around the world, but we are inefficient at moving data around. We need to organize our information so we can more easily understand where our customers are and what they are shipping.

CTO Statement
IT has never been a priority for us, so as our data has grown, we have not invested enough in our technology. I have a good staff to manage IT, but they are so busy managing our infrastructure that I cannot get them to do the things that really matter, such as organizing our data, building the analytics, and figuring out how to implement the CFO's tracking technology.

CFO Statement
Part of our competitive advantage is that we penalize ourselves for late shipments and deliveries. Knowing where our shipments are at all times has a direct correlation to our bottom line and profitability. Additionally, I don't want to commit capital to building out a server environment.
Flowlogistic wants to use Google BigQuery as their primary analysis system, but they still have Apache Hadoop and Spark workloads that they cannot move to BigQuery. Flowlogistic does not know how to store the data that is common to both workloads.

What should they do?
Store the common data in BigQuery as partitioned tables.
Store the common data in BigQuery and expose authorized views.
Store the common data encoded as Avro in Google Cloud Storage.
Store the common data in the HDFS storage for a Google Cloud Dataproc cluster.




Answer is Store the common data encoded as Avro in Google Cloud Storage.

Although Dataproc has connectors for Cloud Storage, Bigtable, and BigQuery, using the BigQuery connector is a little more work than using Cloud Storage or Bigtable.
The best practice when moving Apache Hadoop and Spark jobs to Dataproc is to use Cloud Storage instead of HDFS.
Data in ORC, Parquet, Avro, or any other format can then be used by different clusters or jobs, and it persists even if a cluster is terminated.
You can simply replace hdfs:// paths with gs:// paths.
BigQuery can also read data in Avro format.
So the best place to store data common to both workloads is Cloud Storage.

Reference:
https://cloud.google.com/solutions/streaming-avro-records-into-bigquery-using-dataflow
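A minimal PySpark sketch of the hdfs:// to gs:// swap (bucket, path, and column names are hypothetical, and it assumes the spark-avro package is available, as it is on Dataproc):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shipments-avro").getOrCreate()

# Previously: spark.read.format("avro").load("hdfs:///data/shipments/")
shipments = spark.read.format("avro").load("gs://flowlogistic-datalake/shipments/")

shipments.groupBy("mode").count().show()

BigQuery can load or externally query the same Avro files, so both workloads share one copy of the data.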

Question 268

(This question uses the same Flowlogistic case study as Question 267: company background, solution concept, existing technical environment, and business and technical requirements.)

Flowlogistic's management has determined that the current Apache Kafka servers cannot handle the data volume for their real-time inventory tracking system.
You need to build a new system on Google Cloud Platform (GCP) that will feed the proprietary tracking software. The system must be able to ingest data from a variety of global sources, process and query in real-time, and store the data reliably.

Which combination of GCP products should you choose?
Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage
Cloud Pub/Sub, Cloud Dataflow, and Local SSD
Cloud Pub/Sub, Cloud SQL, and Cloud Storage
Cloud Load Balancing, Cloud Dataflow, and Cloud Storage




Answer is Cloud Pub/Sub, Cloud Dataflow, and Cloud Storage

Cloud Pub/Sub ingests the tracking messages from global sources in real time, Cloud Dataflow processes the streaming data in real time, and Cloud Storage stores the processed data reliably.

Reference:
https://codelabs.developers.google.com/codelabs/cpb104-pubsub/#0
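A hedged sketch of the ingest path as an Apache Beam (Python) streaming pipeline that reads tracking messages from Pub/Sub, windows them, and writes them to Cloud Storage (topic, bucket, and window size are placeholders):

import apache_beam as beam
from apache_beam.io import fileio
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms import window

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/tracking")
        | "Decode" >> beam.Map(lambda b: b.decode("utf-8"))
        | "Window" >> beam.WindowInto(window.FixedWindows(60))  # one file set per minute
        | "Write" >> fileio.WriteToFiles(
            path="gs://flowlogistic-tracking/raw/",
            sink=lambda dest: fileio.TextSink(),
        )
    )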

Question 269

(This question uses the same Flowlogistic case study as Question 267: company background, solution concept, existing technical environment, and business and technical requirements.)

Flowlogistic's CEO wants to gain rapid insight into their customer base so his sales team can be better informed in the field. This team is not very technical, so they've purchased a visualization tool to simplify the creation of BigQuery reports. However, they've been overwhelmed by all the data in the table, and are spending a lot of money on queries trying to find the data they need. You want to solve their problem in the most cost-effective way.

What should you do?
Export the data into a Google Sheet for visualization.
Create an additional table with only the necessary columns.
Create a view on the table to present to the visualization tool.
Create identity and access management (IAM) roles on the appropriate columns, so only they appear in a query.




Answer is Create a view on the table to present to the visualization tool.

A logical view can be created with only the columns required for visualization, so queries scan less data. Creating an additional table is not the right option because the copy is static: when the original data is updated, the new table will not have the latest data. A view always reflects the current data, so it is the best option here.
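A minimal sketch with the BigQuery Python client (dataset, table, and column names are hypothetical): create a view with only the needed columns and point the visualization tool at it:

from google.cloud import bigquery

client = bigquery.Client()

view = bigquery.Table("my_project.sales_views.customer_summary")
view.view_query = """
SELECT customer_id, customer_name, region, total_shipments
FROM `my_project.warehouse.customers`
"""

# The view always reflects the latest base-table data and scans fewer columns.
client.create_table(view)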

Question 270

(This question uses the same Flowlogistic case study as Question 267: company background, solution concept, existing technical environment, and business and technical requirements.)

Flowlogistic is rolling out their real-time inventory tracking system. The tracking devices will all send package-tracking messages, which will now go to a single Google Cloud Pub/Sub topic instead of the Apache Kafka cluster. A subscriber application will then process the messages for real-time reporting and store them in Google BigQuery for historical analysis. You want to ensure the package data can be analyzed over time.

Which approach should you take?
Attach the timestamp on each message in the Cloud Pub/Sub subscriber application as they are received.
Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.
Use the NOW() function in BigQuery to record the event's time.
Use the automatically generated timestamp from Cloud Pub/Sub to order the data.




Answer is Attach the timestamp and Package ID on the outbound message from each publisher device as they are sent to Cloud Pub/Sub.

A. There is no indication that the subscriber application can do this. Moreover, due to networking issues, Pub/Sub may not receive messages in order, which would make analysis difficult.
B. This ensures you have the publishing timestamp, which gives the correct ordering of messages, and the Package ID ties each message to a shipment.
C. NOW() records when the row is processed in BigQuery, not when the event occurred, so late or out-of-order messages would get the wrong time.
D. The timestamp we are interested in is when the data was produced by the publisher, not when it was received by Pub/Sub.
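A minimal publisher-side sketch (project, topic, and field names are placeholders) that attaches the event timestamp and Package ID as message attributes when publishing to Pub/Sub:

import json
from datetime import datetime, timezone

from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "package-tracking")

event = {"package_id": "PKG-00042", "lat": 51.5, "lng": -0.12}
future = publisher.publish(
    topic_path,
    data=json.dumps(event).encode("utf-8"),
    package_id=event["package_id"],                          # message attribute
    event_timestamp=datetime.now(timezone.utc).isoformat(),  # device-side time
)
print("Published message", future.result())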
