DP-100: Designing and Implementing a Data Science Solution on Azure

Question 1

Which of the following descriptions accurately describes Azure Machine Learning?
A Python library that you can use as an alternative to common machine learning frameworks like Scikit-Learn, PyTorch, and TensorFlow.
An application for Microsoft Windows that enables you to create machine learning models by using a drag and drop interface.
A cloud-based platform for operating machine learning solutions at scale.

Answer is A cloud-based platform for operating machine learning solutions at scale.

Azure Machine Learning is an Azure service that you can use to manage data preparation, training, validation, and deployment for machine learning models. It leverages existing frameworks such as Scikit-Learn, PyTorch, and TensorFlow, and provides a cross-platform solution for operationalizing machine learning in the cloud.

Question 2

Which edition of Azure Machine Learning workspace should you provision if you only plan to use the graphical Designer tool to train machine learning models?
Enterprise
Basic


Answer is Enterprise.

The graphical Designer tool is not available in Basic edition workspaces, so you must provision an Enterprise edition workspace to use it.
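
If you prefer the SDK to the Azure portal, a minimal sketch of provisioning an Enterprise edition workspace (the names, IDs, and region are placeholders, and the sku parameter is from the SDK version contemporary with the Basic/Enterprise editions):

```python
from azureml.core import Workspace

# Placeholder names, IDs, and region; sku='enterprise' selects the
# Enterprise edition in the SDK version this question assumes.
ws = Workspace.create(name='my-workspace',
                      subscription_id='<subscription-id>',
                      resource_group='my-rg',
                      create_resource_group=True,
                      location='eastus2',
                      sku='enterprise')
```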

Question 3

You need a cloud-based development environment that you can use to run Jupyter notebooks that are stored in your workspace. The notebooks must remain in your workspace at all times.
What should you do?
Install Visual Studio Code on your local computer.
Create a Compute Instance compute target in your workspace.
Create a Training Cluster compute target in your workspace.

Answer is Create a Compute Instance compute target in your workspace.

Compute Instances provide a cloud-based development environment that supports Jupyter notebooks. The notebooks are saved in the workspace and you work on them there, so they never leave the workspace.
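
A compute instance can also be created with the SDK; a minimal sketch, assuming ws is an existing Workspace object (the instance name and VM size are illustrative):

```python
from azureml.core.compute import ComputeInstance, ComputeTarget

# Illustrative instance name and VM size; ws is an existing Workspace.
instance_config = ComputeInstance.provisioning_configuration(vm_size='STANDARD_DS3_V2')
instance = ComputeTarget.create(ws, 'my-notebook-vm', instance_config)
instance.wait_for_completion(show_output=True)
```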

Question 4

You plan to use the Workspace.from_config() method to connect to your Azure Machine Learning workspace from a Python environment on your local workstation. You have already used pip to install the azureml-sdk package.
What else should you do?
Run pip install azureml-sdk['notebooks'] to install the notebooks extra.
Download the config.json file for your workspace to the folder containing your local Python code files.
Create a Compute Instance compute target in your workspace.

Answer is Download the config.json file for your workspace to the folder containing your local Python code files.

To connect to a workspace from an environment outside of the workspace, download the config.json file for your workspace from the Azure portal. This file contains the subscription, resource group, and workspace details necessary to connect.
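
With the config.json file in place, connecting is straightforward; a minimal sketch:

```python
from azureml.core import Workspace

# Reads config.json from the current folder (or a parent folder) and
# connects to the workspace it describes.
ws = Workspace.from_config()
print(ws.name, ws.resource_group, ws.location)
```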

Question 5

You need to ingest data from a CSV file into a pipeline in Designer. What should you do?
Create a Dataset by uploading the file, and drag the dataset to the canvas.
Add a Convert to CSV module to the canvas.
Add an Enter Data Manually module to the canvas.

Answer is Create a Dataset by uploading the file, and drag the dataset to the canvas.

The recommended way to ingest data from a CSV file in Designer is to create a dataset from the file and drag that dataset onto the canvas.
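
For reference, the same dataset can be created with the SDK before you use it in Designer; a sketch assuming ws is an existing Workspace object (file and dataset names are placeholders):

```python
from azureml.core import Dataset

# Upload the CSV to the workspace's default datastore, then register it
# as a tabular dataset; file and dataset names are placeholders.
datastore = ws.get_default_datastore()
datastore.upload_files(files=['./data/sales.csv'], target_path='data/', overwrite=True)
dataset = Dataset.Tabular.from_delimited_files(path=(datastore, 'data/sales.csv'))
dataset = dataset.register(workspace=ws, name='sales-dataset')
```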

Question 6

You have created a pipeline that includes multiple modules to define a dataflow and train a model.
Now you want to run the pipeline.
What must you do first?
Add comments to each of the modules on the pipeline canvas.
Rename the pipeline to include the date and time.
Create a Training Cluster in your workspace, and select it as the compute target for the pipeline.

Answer is Create a Training Cluster in your workspace, and select it as the compute target for the pipeline.

To run a pipeline, you need a Training Cluster compute target in the workspace, and you must select it as the compute target for the pipeline.
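
A minimal sketch of creating such a cluster with the SDK, assuming ws is an existing Workspace object (the cluster name, VM size, and node count are illustrative):

```python
from azureml.core.compute import AmlCompute, ComputeTarget

# Illustrative configuration for a small training cluster.
cluster_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS2_V2',
                                                       max_nodes=2)
cluster = ComputeTarget.create(ws, 'train-cluster', cluster_config)
cluster.wait_for_completion(show_output=True)
```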

Question 7

You have created and run a pipeline to train a model using the Designer tool. Now you want to publish it as a real-time service.
What must you do first?
Create an inference pipeline from your training pipeline.
Clone the training pipeline with a different name.
Change the compute target of the training pipeline to an Azure Kubernetes Services (AKS) cluster.

Answer is Create an inference pipeline from your training pipeline.

Before you can publish a pipeline as a service, you must create an inference pipeline from the training pipeline, and modify the web service inputs, outputs, and data flow as necessary for production inferencing.

Question 8

You have published a pipeline as a real-time service on an Azure Kubernetes Services (AKS) cluster.
An application developer plans to call the service from a REST-based client.
What information does the application developer require?
The name of the inference pipeline in designer.
The endpoint URL and key for the published service.
The name of the AKS compute target in the workspace.

Answer is The endpoint URL and key for the published service.

To make a REST call to a published service, you need the service endpoint URL and authorization key.
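
A minimal sketch of such a call from Python (the endpoint URL, key, and input schema are placeholders that depend on your deployment):

```python
import json
import requests

# Placeholder endpoint URL and key, taken from the published service.
endpoint = 'http://<service-ip>/api/v1/service/<service-name>/score'
key = '<primary-key>'

headers = {'Content-Type': 'application/json',
           'Authorization': 'Bearer ' + key}  # key-based auth for the service
body = json.dumps({'data': [[0.1, 2.3, 4.5]]})  # input schema depends on the model

response = requests.post(endpoint, data=body, headers=headers)
print(response.json())
```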

Question 9

You are using the Azure Machine Learning Python SDK to write code for an experiment. You need to record metrics from each run of the experiment, and be able to retrieve them easily from each run. What should you do?
Add print statements to the experiment code to print the metrics.
Use the log methods of the Run class to record named metrics.
Save the experiment data in the outputs folder.

Answer is Use the log methods of the Run class to record named metrics.

To record named metrics in an experiment run, use the log methods of the Run class (log, log_list, log_row, and so on); the recorded metrics can then be retrieved from each run.
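
A minimal sketch of logging metrics inside an experiment script (metric names and values are illustrative):

```python
from azureml.core import Run

# Get the context of the current experiment run.
run = Run.get_context()

# Record named metrics; they can be retrieved later with run.get_metrics().
run.log('accuracy', 0.91)                       # a single numeric value
run.log_list('loss_history', [0.8, 0.5, 0.3])   # a list of values

run.complete()
```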

Question 10

You want to run a script as an experiment.
You have already created a RunConfig object to define the Python runtime context for the experiment.
What other object should you create to associate the script with the runtime context?
A ScriptRunConfig object.
A Pipeline object.
A ComputeTarget object.

Answer is A ScriptRunConfig object.

To associate a script with a RunConfig runtime context, you must use a ScriptRunConfig object, which you can then submit as an experiment.
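
A minimal sketch, assuming run_config is the RunConfig object you created, ws is an existing Workspace object, and train.py is a placeholder script name:

```python
from azureml.core import Experiment, ScriptRunConfig

# Tie the script to the runtime context defined by run_config.
script_config = ScriptRunConfig(source_directory='.',
                                script='train.py',
                                run_config=run_config)

# Submit the script as an experiment run.
experiment = Experiment(workspace=ws, name='training-experiment')
run = experiment.submit(config=script_config)
run.wait_for_completion(show_output=True)
```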
