Thursday 9 January 2020

Deploying Machine Learning (ML) Model with Azure Pipeline Using Deployable Artifact from Build

We have discussed how to create a Machine Learning (ML) model as a deployable artifact in the post “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact”, which is based on Sascha Dittmann's open source MLOps repo (https://github.com/SaschaDittmann/MLOps-Lab.git) containing both the training code and the data.
Prerequisites: You have followed the instructions in the posts “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact” and “Setup MLOPS workspace using Azure DevOps pipeline”, and created a build pipeline that trains the ML model from a clone of https://github.com/SaschaDittmann/MLOps-Lab.git.
Link the build created as per the instructions in “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact” to the release pipeline.

Prerequisite steps, such as installing Python 3.6, adding the Azure CLI ML extension, and creating an ML workspace, must be completed as explained in the post “Setup MLOPS workspace using Azure DevOps pipeline”. All Azure CLI steps require an Azure service connection that uses a service principal with Contributor permissions on the Azure subscription where you want to create and use the ML workspace.
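The prerequisite CLI steps from that post can be sketched roughly as follows. These are Azure ML CLI v1 (azure-cli-ml extension) commands; the resource names match the ones used later in this post, and the workspace creation is only needed if the workspace does not already exist:

```shell
# Add the Azure CLI ML extension (CLI v1 / azure-cli-ml, as used by the commands in this post).
az extension add --name azure-cli-ml

# Create the ML workspace if it does not exist yet.
# Resource group and workspace names match the later commands in this post.
az ml workspace create \
  --workspace-name mlw-ch-demostg01 \
  --resource-group rg-ch-mldemostg01
```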

Then, using the model file in the build artifacts, register the model in the new ML workspace you created. The step outputs a metadata file, which you can save by providing a value to the --output-metadata-file argument. This file is required for the deployment in the next step, which uses the model registered here. Make sure to change the resource group and ML workspace name to the ones you use.
az ml model register -n diabetes_model --model-path sklearn_diabetes_model.pkl --experiment-name diabetes_sklearn --resource-group rg-ch-mldemostg01 --workspace-name mlw-ch-demostg01 --output-metadata-file ../metadata/deployedmodel.json

To deploy the model, you can use the output metadata file from the previous step. The inference configuration file from the repo, which was copied into the build artifacts, contains the input parameters for the model deployment. The deployment config file contains the metadata for the deployment target.
az ml model deploy --resource-group rg-ch-mldemostg01 --workspace-name mlw-ch-demostg01 --name diabetes-service-aci --model-metadata-file ../metadata/deployedmodel.json --deploy-config-file aciDeploymentConfig.yml --inference-config-file inferenceConfig.yml --overwrite
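For reference, the two configuration files follow the Azure ML CLI v1 schema. Minimal versions look roughly like this (the values below are illustrative, not the repo's exact contents):

```yaml
# aciDeploymentConfig.yml - deployment target metadata (illustrative values)
computeType: ACI
containerResourceRequirements:
  cpu: 1
  memoryInGB: 1
```

```yaml
# inferenceConfig.yml - how the model is served (illustrative values)
entryScript: score.py
runtime: python
condaFile: conda_dependencies.yml
```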

Then you can install the Python requirements as explained in the post “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact”, so that the Python-based integration tests, copied from the repo into the artifacts, can run against the deployed model.
Tests can be run using a command such as the one below.
pytest integration_test.py --doctest-modules --junitxml=junit/test-results.xml --cov=integration_test --cov-report=xml --cov-report=html --scoreurl $(az ml service show --resource-group rg-ch-mldemostg01 --workspace-name mlw-ch-demostg01 --name diabetes-service-aci --query scoringUri --output tsv)
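The integration test essentially POSTs feature rows to the scoring URI retrieved above. A minimal, self-contained sketch of such a call is shown below; the {"data": [...]} payload shape is an assumption based on a common Azure ML scoring convention, so check the entry script in the repo for the actual schema:

```python
import json
import urllib.request

def build_payload(rows):
    """Serialize feature rows into a JSON request body.
    The {"data": [...]} wrapper is an assumption; the real schema is
    defined by the entry script (score.py) deployed with the model."""
    return json.dumps({"data": rows}).encode("utf-8")

def score(scoring_uri, rows, timeout=10):
    """POST the rows to the deployed ACI service and return the parsed JSON response."""
    req = urllib.request.Request(
        scoring_uri,
        data=build_payload(rows),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))
```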

The results can be published to the release pipeline.

The above two steps are similar to the unit test execution explained in the post “Training Machine Learning (ML) Model with Azure Pipeline and Output ML Model as Deployable Artifact”.
Once the pipeline is executed, you can see that the model has been registered and deployed.

The tests are executed and the results are available in the release pipeline. Following these instructions, you can set up ML deployments to different ML workspaces in different resource groups in Azure.

