Benefits of using Experiments in Azure Machine Learning
Azure Machine Learning is a comprehensive machine learning platform for data science teams, suitable for everything from small tests to large production systems.
A central concept in Azure Machine Learning is the 'experiment': a named container for runs, where each run executes a script that trains your model and logs key metrics.
Code example for running Experiments in Azure ML
The example below shows how to run an experiment on a remote compute instance. It assumes you have already created an Azure Machine Learning workspace and a compute instance, and that the azureml-core package is installed.
```python
from azureml.core import (
    Workspace,
    Experiment,
    Environment,
    RunConfiguration,
    ScriptRunConfig,
)

# Name of your existing compute instance
compute_name = 'my-compute'

# Names for the new experiment and environment
experiment_name = 'my-experiment'
environment_name = 'my-environment'

# Directory to run the experiment from
source_directory = '.'

# Entry script for the experiment
script_path = 'code/train.py'

# Location of the machine learning workspace
subscription_id = 'subscription-id'
resource_group = 'resource-group'
workspace_name = 'my-workspace'

# Connect to your workspace
ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
)

# Use the workspace to create an experiment
exp = Experiment(workspace=ws, name=experiment_name)

# Create an environment with the packages you need
env = Environment(name=environment_name)
for pip_package in ['numpy==1.19.2', 'pandas==1.0.5']:
    env.python.conda_dependencies.add_pip_package(pip_package)

# Create a run configuration to connect our environment and compute
run_config = RunConfiguration()
run_config.target = compute_name
run_config.environment = env

# Create a script run config to tie all the elements together
config = ScriptRunConfig(
    source_directory=source_directory,
    script=script_path,
    run_config=run_config,
)

# Submitting the experiment starts the run
run = exp.submit(config)

# Wait for completion and stream the output of the experiment as we go
run.wait_for_completion(show_output=True, wait_post_processing=True)
```
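The experiment points at `code/train.py`, which is not shown above. Below is a minimal sketch of what such a training script could look like; the mean "model", the dataset, and the metric names are placeholders for your real training code, while `Run.get_context()` and `run.log()` are the azureml-core APIs for logging metrics from inside a run. The `ImportError` fallback is only there so the sketch also runs on a machine without azureml-core installed.

```python
# Hypothetical contents of code/train.py (a sketch, not a full training script)
try:
    from azureml.core import Run
    run = Run.get_context()  # the live run when submitted via exp.submit()
except ImportError:
    class _OfflineRun:  # fallback stub so the sketch runs without azureml-core
        def log(self, name, value):
            print(f"{name} = {value}")
    run = _OfflineRun()

def train(values):
    """Fit a trivial mean 'model' and return (mean, mean squared error)."""
    mean = sum(values) / len(values)
    mse = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, mse

if __name__ == "__main__":
    data = [1.0, 2.0, 3.0, 4.0]  # stand-in for your real dataset
    mean, mse = train(data)
    run.log("mean", mean)  # logged metrics appear on the run in the studio UI
    run.log("mse", mse)
```

Because the metrics are logged through the run, they show up alongside the experiment in Azure Machine Learning studio once the run completes.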