Deploy a machine learning model to ACI in Azure Machine Learning
Deploying a machine learning model to an Azure Container Instance (ACI) allows you to make live predictions against your trained model for testing or small production systems.
Deploy to ACI or AKS in Azure ML?
ACI is a good fit for testing and small-scale production workloads: it is quick to set up and requires no cluster management. If your model needs to serve heavy traffic, or you need features such as autoscaling and high availability, deploy to Azure Kubernetes Service (AKS) instead.
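For comparison, switching a deployment from ACI to AKS mostly means swapping the deployment configuration and pointing the deployment at an existing AKS compute target. A minimal sketch, assuming an AKS cluster already attached to the workspace under the illustrative name 'my-aks' (resource sizes are assumptions, not recommendations):

```python
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice

# Reference a previously attached AKS cluster ('my-aks' is a placeholder name)
aks_target = AksCompute(ws, 'my-aks')

# Deployment configuration for an AKS-backed web service
aks_config = AksWebservice.deploy_configuration(
    cpu_cores=2,
    memory_gb=4,
    autoscale_enabled=True,    # scale replica count with load
    enable_app_insights=True,  # request-level telemetry
)
```

You would then pass `aks_config` as `deployment_config` (and `aks_target` as `deployment_target`) in the `Model.deploy` call shown later in this article.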
Deploying to ACI using the Azure ML Python SDK
This example shows how to deploy your model using the Azure Machine Learning Python package. It assumes that you have already created a machine learning workspace, registered a model, and written a scoring script to handle incoming prediction requests.
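If you have not yet written the scoring script, a minimal sketch of the init()/run() contract Azure ML expects could look like the following. The stub predictor stands in for a real model load (typically something like joblib.load from the folder Azure ML exposes via AZUREML_MODEL_DIR), and the {"data": ...} payload shape is an assumption for illustration:

```python
import json
import os

model = None


def init():
    # Called once when the container starts. Azure ML sets AZUREML_MODEL_DIR
    # to the directory containing the registered model files; real code would
    # load the model from there (e.g. with joblib). A stub predictor that sums
    # each row stands in here so the sketch is self-contained.
    global model
    model_dir = os.getenv('AZUREML_MODEL_DIR', '.')
    model = lambda rows: [sum(r) for r in rows]  # placeholder predictor


def run(raw_data):
    # Called once per request with the JSON request body as a string.
    # The return value must be JSON-serializable.
    try:
        data = json.loads(raw_data)['data']
        return {'result': model(data)}
    except Exception as e:
        return {'error': str(e)}
```

Returning the error message instead of raising keeps the service responding with a JSON body even for malformed requests, which makes client-side debugging easier.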
from azureml.core import Workspace, Environment
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Define the location of the machine learning workspace
subscription_id = 'subscription-id'
resource_group = 'resource-group'
workspace_name = 'my-workspace'

# Define our new environment name
environment_name = 'my-env'

# Our registered model name
model_name = 'my-model'

# Location of the entry script
entry_script = 'score.py'
source_directory = '.'

# Define the size and location of our deployed compute
cpu_cores = 1
memory_gb = 1
location = 'northeurope'
service_name = 'my-service'

# Connect to your workspace
ws = Workspace(
    subscription_id=subscription_id,
    resource_group=resource_group,
    workspace_name=workspace_name,
)

# Create an environment with the packages you require
env = Environment(name=environment_name)
extra_pip_packages = ['numpy==1.19.2', 'pandas==1.0.5', 'azureml-defaults']
for pip_package in extra_pip_packages:
    env.python.conda_dependencies.add_pip_package(pip_package)

# Get the registered model you want to deploy
model = Model(ws, name=model_name)

# Define configs for the deployment
inference_config = InferenceConfig(
    entry_script=entry_script, environment=env, source_directory=source_directory
)
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=cpu_cores, memory_gb=memory_gb, location=location
)

# Deploy the service
service = Model.deploy(
    workspace=ws,
    name=service_name,
    models=[model],
    inference_config=inference_config,
    deployment_config=aci_config,
    overwrite=True,
)
service.wait_for_deployment(show_output=True)
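Once wait_for_deployment completes successfully, the service exposes a scoring URI you can POST JSON to. A minimal sketch of calling it with only the standard library; the {"data": ...} payload shape is an assumption and must match whatever your scoring script's run() function expects:

```python
import json
import urllib.request


def build_payload(rows):
    # Encode the JSON body the scoring script receives as raw_data
    return json.dumps({'data': rows}).encode('utf-8')


def call_service(scoring_uri, rows):
    # POST the payload to the deployed endpoint and decode the JSON response
    req = urllib.request.Request(
        scoring_uri,
        data=build_payload(rows),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# e.g. predictions = call_service(service.scoring_uri, [[1, 2, 3]])
```

You can also test locally from the same session with `service.run(json.dumps({'data': [[1, 2, 3]]}))`, which calls the endpoint without building the HTTP request yourself.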
Related articles
Run an experiment in Azure Machine Learning
Register model in Azure Machine Learning
Set up CI/CD pipelines for machine learning in Azure DevOps