diff --git a/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb b/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb index a7dd83291..0bf2132b1 100644 --- a/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb +++ b/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb @@ -1,920 +1,935 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Classification with Deployment using a Bank Marketing Dataset**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Train](#Train)\n", - "1. [Results](#Results)\n", - "1. [Deploy](#Deploy)\n", - "1. [Test](#Test)\n", - "1. [Acknowledgements](#Acknowledgements)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "\n", - "In this example, we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the resulting model to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.\n", - "\n", - "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. \n", - "\n", - "Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).\n", - "\n", - "In this notebook you will learn how to:\n", - "1. Create an experiment using an existing workspace.\n", - "2. Configure AutoML using `AutoMLConfig`.\n", - "3. Train the model using local compute with the ONNX-compatible configuration enabled.\n", - "4. Explore the results and featurization transparency options, and save the ONNX model.\n", - "5. Run inference with the ONNX model.\n", - "6. Register the model.\n", - "7. Create a container image.\n", - "8. Create an Azure Container Instance (ACI) service.\n", - "9. Test the ACI service.\n", - "\n", - "In addition, this notebook showcases the following features:\n", - "- **Blocking** certain pipelines\n", - "- Specifying **target metrics** to indicate stopping criteria\n", - "- Handling **missing data** in the input" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup\n", - "\n", - "As part of the setup, you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import logging\n", - "\n", - "from matplotlib import pyplot as plt\n", - "import pandas as pd\n", - "import os\n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.core.dataset import Dataset\n", - "from azureml.train.automl import AutoMLConfig\n", - "from azureml.interpret import ExplanationClient" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Accessing the Azure ML workspace requires authentication with Azure.\n", - "\n", - "The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.\n", - "\n", - "If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n", - "\n", - "```\n", - "from azureml.core.authentication import InteractiveLoginAuthentication\n", - "auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n", - "ws = Workspace.from_config(auth = auth)\n", - "```\n", - "\n", - "If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n", - "\n", - "```\n", - "from azureml.core.authentication import ServicePrincipalAuthentication\n", - "auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n", - "ws = Workspace.from_config(auth = auth)\n", - "```\n", - "For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for experiment\n", - "experiment_name = 'automl-classification-bmarketing-all'\n", - "\n", - "experiment=Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Experiment Name'] = experiment.name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Create or Attach existing AmlCompute\n", - "You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. 
Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. \n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "cpu_cluster_name = \"cpu-cluster-4\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n", - " max_nodes=6)\n", - " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Load Data\n", - "\n", - "Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Training Data" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data = pd.read_csv(\"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\")\n", - "data.head()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Add missing values in 75% of the lines.\n", - "import numpy as np\n", - "\n", - "missing_rate = 0.75\n", - "n_missing_samples = int(np.floor(data.shape[0] * missing_rate))\n", - "missing_samples = np.hstack((np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool), np.ones(n_missing_samples, dtype=np.bool)))\n", - "rng = np.random.RandomState(0)\n", - "rng.shuffle(missing_samples)\n", - "missing_features = rng.randint(0, data.shape[1], n_missing_samples)\n", - "data.values[np.where(missing_samples)[0], missing_features] = np.nan" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "if not os.path.isdir('data'):\n", - " os.mkdir('data')\n", - " \n", - "# Save the train data to a csv to be uploaded to the datastore\n", - "pd.DataFrame(data).to_csv(\"data/train_data.csv\", index=False)\n", - "\n", - "ds = ws.get_default_datastore()\n", - "ds.upload(src_dir='./data', target_path='bankmarketing', overwrite=True, show_progress=True)\n", - "\n", - " \n", - "\n", - "# Upload the training data as a tabular dataset for access during training on remote compute\n", - "train_data = Dataset.Tabular.from_delimited_files(path=ds.path('bankmarketing/train_data.csv'))\n", - "label = \"y\"" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - 
"source": [ - "### Validation Data" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "validation_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv\"\n", - "validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Test Data" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv\"\n", - "test_dataset = Dataset.Tabular.from_delimited_files(test_data)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|classification or regression or forecasting|\n", - "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics:
accuracy
AUC_weighted
average_precision_score_weighted
norm_macro_recall
precision_score_weighted|\n", - "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n", - "|**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.

Allowed values for **Classification**
LogisticRegression
SGD
MultinomialNaiveBayes
BernoulliNaiveBayes
SVM
LinearSVM
KNN
DecisionTree
RandomForest
ExtremeRandomTrees
LightGBM
GradientBoosting
TensorFlowDNN
TensorFlowLinearClassifier

Allowed values for **Regression**
ElasticNet
GradientBoosting
DecisionTree
KNN
LassoLars
SGD
RandomForest
ExtremeRandomTrees
LightGBM
TensorFlowLinearRegressor
TensorFlowDNN

Allowed values for **Forecasting**
ElasticNet
GradientBoosting
DecisionTree
KNN
LassoLars
SGD
RandomForest
ExtremeRandomTrees
LightGBM
TensorFlowLinearRegressor
TensorFlowDNN
Arima
Prophet|\n", - "|**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.|\n", - "|**experiment_exit_score**| Value indicating the target for *primary_metric*.
Once the target is surpassed, the run terminates.|\n", - "|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n", - "|**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.|\n", - "|**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|\n", - "|**n_cross_validations**|Number of cross validation splits.|\n", - "|**training_data**|Input dataset, containing both features and label column.|\n", - "|**label_column_name**|The name of the label column.|\n", - "\n", - "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"experiment_timeout_hours\" : 0.3,\n", - " \"enable_early_stopping\" : True,\n", - " \"iteration_timeout_minutes\": 5,\n", - " \"max_concurrent_iterations\": 4,\n", - " \"max_cores_per_iteration\": -1,\n", - " #\"n_cross_validations\": 2,\n", - " \"primary_metric\": 'AUC_weighted',\n", - " \"featurization\": 'auto',\n", - " \"verbosity\": logging.INFO,\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'classification',\n", - " debug_log = 'automl_errors.log',\n", - " compute_target=compute_target,\n", - " experiment_exit_score = 0.9984,\n", - " blocked_models = ['KNN','LinearSVM'],\n", - " enable_onnx_compatible_models=True,\n", - " training_data = train_data,\n", - " label_column_name = label,\n", - " validation_data = validation_dataset,\n", - " **automl_settings\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True`." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output = False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Run the following cell to access previous runs. Uncomment the cell below and update the run_id." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#from azureml.train.automl.run import AutoMLRun\n", - "#remote_run = AutoMLRun(experiment=experiment, run_id='thresh else 'black')\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Delete a Web Service\n", - "\n", - "Deletes the specified web service." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "aci_service.delete()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Acknowledgements" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This Bank Marketing dataset is made available under the Creative Commons (CC0: Public Domain) License: https://creativecommons.org/publicdomain/zero/1.0/.
Any rights in individual contents of the database are licensed under the Database Contents License: https://creativecommons.org/publicdomain/zero/1.0/, and the dataset is available at: https://www.kaggle.com/janiobachmann/bank-marketing-dataset.\n", - "\n", - "_**Acknowledgements**_\n", - "This data set is originally available within the UCI Machine Learning Database: https://archive.ics.uci.edu/ml/datasets/bank+marketing\n", - "\n", - "[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Classification with Deployment using a Bank Marketing Dataset**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Train](#Train)\n", + "1. [Results](#Results)\n", + "1. [Deploy](#Deploy)\n", + "1. [Test](#Test)\n", + "1. [Acknowledgements](#Acknowledgements)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "In this example, we use the UCI Bank Marketing dataset to showcase how you can use AutoML for a classification problem and deploy the resulting model to an Azure Container Instance (ACI). The classification goal is to predict if the client will subscribe to a term deposit with the bank.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. \n", + "\n", + "Please find the ONNX-related documentation [here](https://github.com/onnx/onnx).\n", + "\n", + "In this notebook you will learn how to:\n", + "1. Create an experiment using an existing workspace.\n", + "2. Configure AutoML using `AutoMLConfig`.\n", + "3. Train the model using local compute with the ONNX-compatible configuration enabled.\n", + "4. Explore the results and featurization transparency options, and save the ONNX model.\n", + "5. Run inference with the ONNX model.\n", + "6. Register the model.\n", + "7. Create a container image.\n", + "8. Create an Azure Container Instance (ACI) service.\n", + "9. Test the ACI service.\n", + "\n", + "In addition, this notebook showcases the following features:\n", + "- **Blocking** certain pipelines\n", + "- Specifying **target metrics** to indicate stopping criteria\n", + "- Handling **missing data** in the input" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "As part of the setup, you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import logging\n", + "\n", + "from matplotlib import pyplot as plt\n", + "import pandas as pd\n", + "import os\n", + "\n", + "import azureml.core\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.core.dataset import Dataset\n", + "from azureml.train.automl import AutoMLConfig\n", + "from azureml.interpret import ExplanationClient" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Accessing the Azure ML workspace requires authentication with Azure.\n", + "\n", + "The default authentication is interactive authentication using the default tenant. Executing the `ws = Workspace.from_config()` line in the cell below will prompt for authentication the first time that it is run.\n", + "\n", + "If you have multiple Azure tenants, you can specify the tenant by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n", + "\n", + "```\n", + "from azureml.core.authentication import InteractiveLoginAuthentication\n", + "auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n", + "ws = Workspace.from_config(auth = auth)\n", + "```\n", + "\n", + "If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the `ws = Workspace.from_config()` line in the cell below with the following:\n", + "\n", + "```\n", + "from azureml.core.authentication import ServicePrincipalAuthentication\n", + "auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n", + "ws = Workspace.from_config(auth = auth)\n", + "```\n", + "For more details, see [aka.ms/aml-notebook-auth](http://aka.ms/aml-notebook-auth)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for experiment\n", + "experiment_name = \"automl-classification-bmarketing-all\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Experiment Name\"] = experiment.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create or Attach existing AmlCompute\n", + "You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g. 
AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [] + }, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "cpu_cluster_name = \"cpu-cluster-4\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load Data\n", + "\n", + "Leverage azure compute to load the bank marketing dataset as a Tabular Dataset into the dataset variable. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Training Data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data = pd.read_csv(\n", + " \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_train.csv\"\n", + ")\n", + "data.head()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Add missing values in 75% of the lines.\n", + "import numpy as np\n", + "\n", + "missing_rate = 0.75\n", + "n_missing_samples = int(np.floor(data.shape[0] * missing_rate))\n", + "missing_samples = np.hstack(\n", + " (\n", + " np.zeros(data.shape[0] - n_missing_samples, dtype=np.bool),\n", + " np.ones(n_missing_samples, dtype=np.bool),\n", + " )\n", + ")\n", + "rng = np.random.RandomState(0)\n", + "rng.shuffle(missing_samples)\n", + "missing_features = rng.randint(0, data.shape[1], n_missing_samples)\n", + "data.values[np.where(missing_samples)[0], missing_features] = np.nan" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if not os.path.isdir(\"data\"):\n", + " os.mkdir(\"data\")\n", + "# Save the train data to a csv to be uploaded to the datastore\n", + "pd.DataFrame(data).to_csv(\"data/train_data.csv\", index=False)\n", + "\n", + "ds = ws.get_default_datastore()\n", + "ds.upload(\n", + " src_dir=\"./data\", target_path=\"bankmarketing\", overwrite=True, show_progress=True\n", + ")\n", + "\n", + "\n", + "# Upload the training data as a tabular dataset for access during training on remote compute\n", + "train_data = Dataset.Tabular.from_delimited_files(\n", + " path=ds.path(\"bankmarketing/train_data.csv\")\n", + ")\n", + "label = \"y\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Validation Data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "validation_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_validate.csv\"\n", 
+ "validation_dataset = Dataset.Tabular.from_delimited_files(validation_data)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Test Data" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/bankmarketing_test.csv\"\n", + "test_dataset = Dataset.Tabular.from_delimited_files(test_data)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train\n", + "\n", + "Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|classification or regression or forecasting|\n", + "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics:
accuracy
AUC_weighted
average_precision_score_weighted
norm_macro_recall
precision_score_weighted|\n", + "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n", + "|**blocked_models** | *List* of *strings* indicating machine learning algorithms for AutoML to avoid in this run.

Allowed values for **Classification**
LogisticRegression
SGD
MultinomialNaiveBayes
BernoulliNaiveBayes
SVM
LinearSVM
KNN
DecisionTree
RandomForest
ExtremeRandomTrees
LightGBM
GradientBoosting
TensorFlowDNN
TensorFlowLinearClassifier

Allowed values for **Regression**
ElasticNet
GradientBoosting
DecisionTree
KNN
LassoLars
SGD
RandomForest
ExtremeRandomTrees
LightGBM
TensorFlowLinearRegressor
TensorFlowDNN

Allowed values for **Forecasting**
ElasticNet
GradientBoosting
DecisionTree
KNN
LassoLars
SGD
RandomForest
ExtremeRandomTrees
LightGBM
TensorFlowLinearRegressor
TensorFlowDNN
Arima
Prophet|\n", + "|**allowed_models** | *List* of *strings* indicating machine learning algorithms for AutoML to use in this run. The same values listed above for **blocked_models** are allowed for **allowed_models**.|\n", + "|**experiment_exit_score**| Value indicating the target for *primary_metric*.
Once the target is surpassed, the run terminates.|\n", + "|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n", + "|**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.|\n", + "|**featurization**| 'auto' / 'off' Indicator for whether the featurization step should be done automatically or not. Note: If the input data is sparse, featurization cannot be turned on.|\n", + "|**n_cross_validations**|Number of cross validation splits.|\n", + "|**training_data**|Input dataset, containing both features and label column.|\n", + "|**label_column_name**|The name of the label column.|\n", + "\n", + "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_settings = {\n", + " \"experiment_timeout_hours\": 0.3,\n", + " \"enable_early_stopping\": True,\n", + " \"iteration_timeout_minutes\": 5,\n", + " \"max_concurrent_iterations\": 4,\n", + " \"max_cores_per_iteration\": -1,\n", + " # \"n_cross_validations\": 2,\n", + " \"primary_metric\": \"AUC_weighted\",\n", + " \"featurization\": \"auto\",\n", + " \"verbosity\": logging.INFO,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"classification\",\n", + " debug_log=\"automl_errors.log\",\n", + " compute_target=compute_target,\n", + " experiment_exit_score=0.9984,\n", + " blocked_models=[\"KNN\", \"LinearSVM\"],\n", + " enable_onnx_compatible_models=True,\n", + " training_data=train_data,\n", + " label_column_name=label,\n", + " validation_data=validation_dataset,\n", + " **automl_settings,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Call the `submit` method on the experiment object and pass the run configuration. Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True`." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "Run the following cell to access previous runs. Uncomment the cell below and update the run_id." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# from azureml.train.automl.run import AutoMLRun\n", + "# remote_run = AutoMLRun(experiment=experiment, run_id=' thresh else \"black\",\n", + " )\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Delete a Web Service\n", + "\n", + "Deletes the specified web service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "aci_service.delete()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Acknowledgements" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This Bank Marketing dataset is made available under the Creative Commons (CC0: Public Domain) License: https://creativecommons.org/publicdomain/zero/1.0/.
Any rights in individual contents of the database are licensed under the Database Contents License: https://creativecommons.org/publicdomain/zero/1.0/, and the dataset is available at: https://www.kaggle.com/janiobachmann/bank-marketing-dataset.\n", + "\n", + "_**Acknowledgements**_\n", + "This data set is originally available within the UCI Machine Learning Database: https://archive.ics.uci.edu/ml/datasets/bank+marketing\n", + "\n", + "[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "ratanase" + } + ], + "category": "tutorial", + "compute": [ + "AML" + ], + "datasets": [ + "Bankmarketing" + ], + "deployment": [ + "ACI" + ], + "exclude_from_index": false, + "framework": [ + "None" + ], + "friendly_name": "Automated ML run with basic edition features.", + "index_order": 5, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + }, + "tags": [ + "featurization", + "explainability", + "remote_run", + "AutomatedML" ], - "metadata": { - "authors": [ - { - "name": "ratanase" - } - ], - "category": "tutorial", - "compute": [ - "AML" - ], - "datasets": [ - "Bankmarketing" - ], - "deployment": [ - "ACI" - ], - "exclude_from_index": false, - "framework": [ - "None" - ], - "friendly_name": "Automated ML run with basic edition features.", - "index_order": 5, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.7" - }, - "tags": [ - "featurization", - "explainability", - "remote_run", - "AutomatedML" - ], - "task": "Classification" - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file + "task": "Classification" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb b/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb index 5ef43e3bd..970448257 100644 --- a/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb +++ b/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.ipynb @@ -1,497 +1,483 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License."
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Classification of credit card fraudulent transactions on remote compute **_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Train](#Train)\n", - "1. [Results](#Results)\n", - "1. [Test](#Test)\n", - "1. [Acknowledgements](#Acknowledgements)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "\n", - "In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.\n", - "\n", - "This notebook is using remote compute to train the model.\n", - "\n", - "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", - "\n", - "In this notebook you will learn how to:\n", - "1. Create an experiment using an existing workspace.\n", - "2. Configure AutoML using `AutoMLConfig`.\n", - "3. Train the model using remote compute.\n", - "4. Explore the results.\n", - "5. Test the fitted model." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup\n", - "\n", - "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import logging\n", - "\n", - "from matplotlib import pyplot as plt\n", - "import pandas as pd\n", - "import os\n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.core.dataset import Dataset\n", - "from azureml.train.automl import AutoMLConfig" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for experiment\n", - "experiment_name = 'automl-classification-ccard-remote'\n", - "\n", - "experiment=Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Experiment Name'] = experiment.name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Create or Attach existing AmlCompute\n", - "A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. \n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "cpu_cluster_name = \"cpu-cluster-1\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n", - " max_nodes=6)\n", - " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Load Data\n", - "\n", - "Load the credit card dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n", - "dataset = Dataset.Tabular.from_delimited_files(data)\n", - "training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n", - "label_column_name = 'Class'" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|classification or regression|\n", - "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics:
accuracy
AUC_weighted
average_precision_score_weighted
norm_macro_recall
precision_score_weighted|\n", - "|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n", - "|**n_cross_validations**|Number of cross validation splits.|\n", - "|**training_data**|Input dataset, containing both features and label column.|\n", - "|**label_column_name**|The name of the label column.|\n", - "\n", - "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"n_cross_validations\": 3,\n", - " \"primary_metric\": 'AUC_weighted',\n", - " \"enable_early_stopping\": True,\n", - " \"max_concurrent_iterations\": 2, # This is a limit for testing purposes, please increase it as per cluster size\n", - " \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes, remove it for real use cases; otherwise it will drastically limit the ability to find the best model possible\n", - " \"verbosity\": logging.INFO,\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'classification',\n", - " debug_log = 'automl_errors.log',\n", - " compute_target = compute_target,\n", - " training_data = training_data,\n", - " label_column_name = label_column_name,\n", - " **automl_settings\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output = False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# If you need to retrieve a run that already started, use the following code\n", - "#from azureml.train.automl.run import AutoMLRun\n", - "#remote_run = AutoMLRun(experiment = experiment, run_id = '')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Results" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Widget for Monitoring Runs\n", - "\n", - "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", - "\n", - "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "widget-rundetails-sample" - ] - }, - "outputs": [], - "source": [ - "from azureml.widgets import RunDetails\n", - "RunDetails(remote_run).show()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Explain model\n", - "\n", - "Automated ML models can be explained and visualized using the SDK Explainability library.
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Analyze results\n", - "\n", - "### Retrieve the Best Model\n", - "\n", - "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run, fitted_model = remote_run.get_output()\n", - "fitted_model" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Print the properties of the model\n", - "The fitted_model is a python object and you can read the different properties of the object.\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Test the fitted model\n", - "\n", - "Now that the model is trained, split the data in the same way the data was split for training (The difference here is the data is being split locally) and then run the test data through the trained model to get the predicted values." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# convert the test data to dataframe\n", - "X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()\n", - "y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# call the predict functions on the model\n", - "y_pred = fitted_model.predict(X_test_df)\n", - "y_pred" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Calculate metrics for the prediction\n", - "\n", - "Now visualize the data on a scatter plot to show what our truth (actual) values are compared to the predicted values \n", - "from the trained model that was returned." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from sklearn.metrics import confusion_matrix\n", - "import numpy as np\n", - "import itertools\n", - "\n", - "cf =confusion_matrix(y_test_df.values,y_pred)\n", - "plt.imshow(cf,cmap=plt.cm.Blues,interpolation='nearest')\n", - "plt.colorbar()\n", - "plt.title('Confusion Matrix')\n", - "plt.xlabel('Predicted')\n", - "plt.ylabel('Actual')\n", - "class_labels = ['False','True']\n", - "tick_marks = np.arange(len(class_labels))\n", - "plt.xticks(tick_marks,class_labels)\n", - "plt.yticks([-0.5,0,1,1.5],['','False','True',''])\n", - "# plotting text value inside cells\n", - "thresh = cf.max() / 2.\n", - "for i,j in itertools.product(range(cf.shape[0]),range(cf.shape[1])):\n", - " plt.text(j,i,format(cf[i,j],'d'),horizontalalignment='center',color='white' if cf[i,j] >thresh else 'black')\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Acknowledgements" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. 
Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/, and the dataset is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n", - "\n", - "The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Universit\u00e9 Libre de Bruxelles) on big data mining and fraud detection.\n", - "More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project.\n", - "\n", - "Please cite the following works:\n", - "\n", - "Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n", - "\n", - "Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-A\u00ebl; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert Systems with Applications, 41, 10, 4915-4928, 2014, Pergamon\n", - "\n", - "Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE Transactions on Neural Networks and Learning Systems, 29, 8, 3784-3797, 2018, IEEE\n", - "\n", - "Dal Pozzolo, Andrea. Adaptive Machine learning for credit card fraud detection, ULB MLG PhD thesis (supervised by G. Bontempi)\n", - "\n", - "Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-A\u00ebl; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information Fusion, 41, 182-194, 2018, Elsevier\n", - "\n", - "Carcillo, Fabrizio; Le Borgne, Yann-A\u00ebl; Caelen, Olivier; Bontempi, Gianluca.
Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5, 4, 285-300, 2018, Springer International Publishing\n", - "\n", - "Bertrand Lebichot, Yann-A\u00ebl Le Borgne, Liyun He, Frederic Obl\u00e9, Gianluca Bontempi. Deep-Learning Domain Adaptation Techniques for Credit Cards Fraud Detection, INNSBDDL 2019: Recent Advances in Big Data and Deep Learning, pp 78-88, 2019\n", - "\n", - "Fabrizio Carcillo, Yann-A\u00ebl Le Borgne, Olivier Caelen, Frederic Obl\u00e9, Gianluca Bontempi. Combining Unsupervised and Supervised Learning in Credit Card Fraud Detection, Information Sciences, 2019" - ] - } - ], - "metadata": { - "authors": [ - { - "name": "ratanase" - } - ], - "category": "tutorial", - "compute": [ - "AML Compute" - ], - "datasets": [ - "Creditcard" - ], - "deployment": [ - "None" - ], - "exclude_from_index": false, - "file_extension": ".py", - "framework": [ - "None" - ], - "friendly_name": "Classification of credit card fraudulent transactions using Automated ML", - "index_order": 5, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.7" - }, - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Classification of credit card fraudulent transactions on remote compute**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Train](#Train)\n", + "1. [Results](#Results)\n", + "1. [Test](#Test)\n", + "1. [Acknowledgements](#Acknowledgements)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "In this example, we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.\n", + "\n", + "This notebook uses remote compute to train the model.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first, if you haven't already, to establish your connection to the AzureML Workspace. \n", + "\n", + "In this notebook you will learn how to:\n", + "1. Create an experiment using an existing workspace.\n", + "2. Configure AutoML using `AutoMLConfig`.\n", + "3. Train the model using remote compute.\n", + "4. Explore the results.\n", + "5. Test the fitted model." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "As part of the setup, you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import logging\n", + "\n", + "from matplotlib import pyplot as plt\n", + "import pandas as pd\n", + "import os\n", + "\n", + "import azureml.core\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.core.dataset import Dataset\n", + "from azureml.train.automl import AutoMLConfig" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for experiment\n", + "experiment_name = \"automl-classification-ccard-remote\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Experiment Name\"] = experiment.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create or Attach existing AmlCompute\n", + "A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "cpu_cluster_name = \"cpu-cluster-1\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load Data\n", + "\n", + "Load the credit card dataset from a csv file containing both training features and labels. 
The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using `random_split` and extract the training data for the model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n",
+ "dataset = Dataset.Tabular.from_delimited_files(data)\n",
+ "training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n",
+ "label_column_name = \"Class\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Train\n",
+ "\n",
+ "Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.\n",
+ "\n",
+ "|Property|Description|\n",
+ "|-|-|\n",
+ "|**task**|classification or regression|\n",
+ "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: 
accuracy
AUC_weighted
average_precision_score_weighted
norm_macro_recall
precision_score_weighted|\n",
+ "|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n",
+ "|**n_cross_validations**|Number of cross validation splits.|\n",
+ "|**training_data**|Input dataset, containing both features and label column.|\n",
+ "|**label_column_name**|The name of the label column.|\n",
+ "\n",
+ "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "automl_settings = {\n",
+ " \"n_cross_validations\": 3,\n",
+ " \"primary_metric\": \"average_precision_score_weighted\",\n",
+ " \"enable_early_stopping\": True,\n",
+ " \"max_concurrent_iterations\": 2, # This is a limit for testing purposes; increase it as your cluster size allows\n",
+ " \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes; remove it for real use cases, as it will drastically limit the ability to find the best model possible\n",
+ " \"verbosity\": logging.INFO,\n",
+ "}\n",
+ "\n",
+ "automl_config = AutoMLConfig(\n",
+ " task=\"classification\",\n",
+ " debug_log=\"automl_errors.log\",\n",
+ " compute_target=compute_target,\n",
+ " training_data=training_data,\n",
+ " label_column_name=label_column_name,\n",
+ " **automl_settings,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True`, and the execution will be synchronous."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "remote_run = experiment.submit(automl_config, show_output=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# If you need to retrieve a run that already started, use the following code\n",
+ "# from azureml.train.automl.run import AutoMLRun\n",
+ "# remote_run = AutoMLRun(experiment = experiment, run_id = '')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Results"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Widget for Monitoring Runs\n",
+ "\n",
+ "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n",
+ "\n",
+ "**Note:** The widget displays a link at the bottom. 
Use this link to open a web interface to explore the individual run details."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
 "tags": [
- "remote_run",
- "AutomatedML"
- ],
- "task": "Classification",
- "version": "3.6.7"
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
\ No newline at end of file
+ "widget-rundetails-sample"
+ ]
+ },
+ "outputs": [],
+ "source": [
+ "from azureml.widgets import RunDetails\n",
+ "\n",
+ "RunDetails(remote_run).show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "remote_run.wait_for_completion(show_output=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Explain model\n",
+ "\n",
+ "Automated ML models can be explained and visualized using the SDK Explainability library. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Analyze results\n",
+ "\n",
+ "### Retrieve the Best Model\n",
+ "\n",
+ "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "best_run, fitted_model = remote_run.get_output()\n",
+ "fitted_model"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Print the properties of the model\n",
+ "The fitted_model is a Python object, and you can read its different properties.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Test the fitted model\n",
+ "\n",
+ "Now that the model is trained, split the data in the same way it was split for training (the difference here is that the data is split locally) and then run the test data through the trained model to get the predicted values."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# convert the test data to dataframe\n",
+ "X_test_df = validation_data.drop_columns(\n",
+ " columns=[label_column_name]\n",
+ ").to_pandas_dataframe()\n",
+ "y_test_df = validation_data.keep_columns(\n",
+ " columns=[label_column_name], validate=True\n",
+ ").to_pandas_dataframe()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# call the predict function on the model\n",
+ "y_pred = fitted_model.predict(X_test_df)\n",
+ "y_pred"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Calculate metrics for the prediction\n",
+ "\n",
+ "Now plot a confusion matrix to compare the truth (actual) values with the predicted values \n",
+ "from the trained model that was returned."
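Beyond the confusion matrix plotted in the next cell, scalar metrics are easier to compare across runs. A minimal sketch using scikit-learn, reusing the `y_test_df` and `y_pred` objects created above (the metric selection is illustrative, not part of the original notebook):

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# On a heavily imbalanced fraud dataset, precision/recall on the positive
# class are more informative than raw accuracy.
y_true = y_test_df.values.ravel()
print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1       :", f1_score(y_true, y_pred))
```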
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.metrics import confusion_matrix\n", + "import numpy as np\n", + "import itertools\n", + "\n", + "cf = confusion_matrix(y_test_df.values, y_pred)\n", + "plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n", + "plt.colorbar()\n", + "plt.title(\"Confusion Matrix\")\n", + "plt.xlabel(\"Predicted\")\n", + "plt.ylabel(\"Actual\")\n", + "class_labels = [\"False\", \"True\"]\n", + "tick_marks = np.arange(len(class_labels))\n", + "plt.xticks(tick_marks, class_labels)\n", + "plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"False\", \"True\", \"\"])\n", + "# plotting text value inside cells\n", + "thresh = cf.max() / 2.0\n", + "for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n", + " plt.text(\n", + " j,\n", + " i,\n", + " format(cf[i, j], \"d\"),\n", + " horizontalalignment=\"center\",\n", + " color=\"white\" if cf[i, j] > thresh else \"black\",\n", + " )\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Acknowledgements" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n", + "\n", + "The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection.\n", + "More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project\n", + "\n", + "Please cite the following works:\n", + "\n", + "Andrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n", + "\n", + "Dal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon\n", + "\n", + "Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE\n", + "\n", + "Dal Pozzolo, Andrea Adaptive Machine learning for credit card fraud detection ULB MLG PhD thesis (supervised by G. Bontempi)\n", + "\n", + "Carcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier\n", + "\n", + "Carcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. 
Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing\n", + "\n", + "Bertrand Lebichot, Yann-Aël Le Borgne, Liyun He, Frederic Oblé, Gianluca Bontempi Deep-Learning Domain Adaptation Techniques for Credit Cards Fraud Detection, INNSBDDL 2019: Recent Advances in Big Data and Deep Learning, pp 78-88, 2019\n", + "\n", + "Fabrizio Carcillo, Yann-Aël Le Borgne, Olivier Caelen, Frederic Oblé, Gianluca Bontempi Combining Unsupervised and Supervised Learning in Credit Card Fraud Detection Information Sciences, 2019" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "ratanase" + } + ], + "category": "tutorial", + "compute": [ + "AML Compute" + ], + "datasets": [ + "Creditcard" + ], + "deployment": [ + "None" + ], + "exclude_from_index": false, + "file_extension": ".py", + "framework": [ + "None" + ], + "friendly_name": "Classification of credit card fraudulent transactions using Automated ML", + "index_order": 5, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + }, + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "tags": [ + "remote_run", + "AutomatedML" + ], + "task": "Classification", + "version": "3.6.7" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb index ec98657b4..83cfbbbcc 100644 --- a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb +++ b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.ipynb @@ -1,590 +1,591 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-text-dnn/auto-ml-classification-text-dnn.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Text Classification Using Deep Learning**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Data](#Data)\n", - "1. [Train](#Train)\n", - "1. [Evaluate](#Evaluate)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "This notebook demonstrates classification with text data using deep learning in AutoML.\n", - "\n", - "AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. 
Depending on the compute cluster the user provides, AutoML tried out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and Bidirectional Long-Short Term neural network (BiLSTM) when a CPU compute is used, thereby optimizing the choice of DNN for the uesr's setup.\n", - "\n", - "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n", - "\n", - "Notebook synopsis:\n", - "\n", - "1. Creating an Experiment in an existing Workspace\n", - "2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n", - "3. Registering the best model for future use\n", - "4. Evaluating the final model on a test set" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import logging\n", - "import os\n", - "import shutil\n", - "\n", - "import pandas as pd\n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.core.dataset import Dataset\n", - "from azureml.core.compute import AmlCompute\n", - "from azureml.core.compute import ComputeTarget\n", - "from azureml.core.run import Run\n", - "from azureml.widgets import RunDetails\n", - "from azureml.core.model import Model \n", - "from helper import run_inference, get_result_df\n", - "from azureml.train.automl import AutoMLConfig\n", - "from sklearn.datasets import fetch_20newsgroups" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# Choose an experiment name.\n", - "experiment_name = 'automl-classification-text-dnn'\n", - "\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace Name'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Experiment Name'] = experiment.name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Set up a compute cluster\n", - "This section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). If a cluster with this name does not exist in the user's workspace, the below code will create a new cluster. 
You can choose the parameters of the cluster as mentioned in the comments.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup - BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters since BERT featurizers usually outperform BiLSTM featurizers." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "num_nodes = 2\n", - "\n", - "# Choose a name for your cluster.\n", - "amlcompute_cluster_name = \"dnntext-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(vm_size = \"STANDARD_NC6\", # CPU for BiLSTM, such as \"STANDARD_DS12_V2\" \n", - " # To use BERT (this is recommended for best performance), select a GPU such as \"STANDARD_NC6\" \n", - " # or similar GPU option\n", - " # available in your workspace\n", - " max_nodes = num_nodes)\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Get data\n", - "For this notebook we will use 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that for accuracy improvement, more data is needed. For this notebook we provide a small-data example so that you can use this template to use with your larger sized data." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_dir = \"text-dnn-data\" # Local directory to store data\n", - "blobstore_datadir = data_dir # Blob store directory to store data in\n", - "target_column_name = 'y'\n", - "feature_column_name = 'X'\n", - "\n", - "def get_20newsgroups_data():\n", - " '''Fetches 20 Newsgroups data from scikit-learn\n", - " Returns them in form of pandas dataframes\n", - " '''\n", - " remove = ('headers', 'footers', 'quotes')\n", - " categories = [\n", - " 'rec.sport.baseball',\n", - " 'rec.sport.hockey',\n", - " 'comp.graphics',\n", - " 'sci.space',\n", - " ]\n", - "\n", - " data = fetch_20newsgroups(subset = 'train', categories = categories,\n", - " shuffle = True, random_state = 42,\n", - " remove = remove)\n", - " data = pd.DataFrame({feature_column_name: data.data, target_column_name: data.target})\n", - "\n", - " data_train = data[:200]\n", - " data_test = data[200:300] \n", - "\n", - " data_train = remove_blanks_20news(data_train, feature_column_name, target_column_name)\n", - " data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n", - " \n", - " return data_train, data_test\n", - " \n", - "def remove_blanks_20news(data, feature_column_name, target_column_name):\n", - " \n", - " data[feature_column_name] = data[feature_column_name].replace(r'\\n', ' ', regex=True).apply(lambda x: x.strip())\n", - " data = data[data[feature_column_name] != '']\n", - " \n", - " return data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Fetch data and upload to datastore for use in training" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_train, data_test = get_20newsgroups_data()\n", - "\n", - "if not os.path.isdir(data_dir):\n", - " os.mkdir(data_dir)\n", - " \n", - "train_data_fname = data_dir + '/train_data.csv'\n", - "test_data_fname = data_dir + '/test_data.csv'\n", - "\n", - "data_train.to_csv(train_data_fname, index=False)\n", - "data_test.to_csv(test_data_fname, index=False)\n", - "\n", - "datastore = ws.get_default_datastore()\n", - "datastore.upload(src_dir=data_dir, target_path=blobstore_datadir,\n", - " overwrite=True)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "train_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/train_data.csv')])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Prepare AutoML run" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook uses the blocked_models parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"experiment_timeout_minutes\": 30,\n", - " \"primary_metric\": 'AUC_weighted',\n", - " \"max_concurrent_iterations\": num_nodes, \n", - " \"max_cores_per_iteration\": -1,\n", - " \"enable_dnn\": True,\n", - " \"enable_early_stopping\": True,\n", - " \"validation_size\": 0.3,\n", - " \"verbosity\": logging.INFO,\n", - " \"enable_voting_ensemble\": False,\n", - " \"enable_stack_ensemble\": False,\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'classification',\n", - " debug_log = 'automl_errors.log',\n", - " compute_target=compute_target,\n", - " training_data=train_dataset,\n", - " label_column_name=target_column_name,\n", - " blocked_models = ['LightGBM', 'XGBoostClassifier'],\n", - " **automl_settings\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Submit AutoML Run" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_run = experiment.submit(automl_config, show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the Best Model\n", - "Below we select the best model pipeline from our iterations, use it to test on test data on the same compute cluster." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For local inferencing, you can load the model locally via. the method `remote_run.get_output()`. For more information on the arguments expected by this method, you can run `remote_run.get_output??`.\n", - "Note that when the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your MachineLearningNotebooks folder here:\n", - "MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl_env.yml\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Retrieve the best Run object\n", - "best_run = automl_run.get_best_child()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download the featurization summary JSON file locally\n", - "best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n", - "\n", - "# Render the JSON as a pandas DataFrame\n", - "with open(\"featurization_summary.json\", \"r\") as f:\n", - " records = json.load(f)\n", - "\n", - "featurization_summary = pd.DataFrame.from_records(records)\n", - "featurization_summary['Transformations'].tolist()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Registering the best model\n", - "We now register the best fitted model from the AutoML Run for use in future deployments. 
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Get results stats, extract the best model from AutoML run, download and register the resultant best model" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "summary_df = get_result_df(automl_run)\n", - "best_dnn_run_id = summary_df['run_id'].iloc[0]\n", - "best_dnn_run = Run(experiment, best_dnn_run_id)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "model_dir = 'Model' # Local folder where the model will be stored temporarily\n", - "if not os.path.isdir(model_dir):\n", - " os.mkdir(model_dir)\n", - " \n", - "best_dnn_run.download_file('outputs/model.pkl', model_dir + '/model.pkl')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Register the model\n", - "model_name = 'textDNN-20News'\n", - "model = Model.register(model_path = model_dir + '/model.pkl',\n", - " model_name = model_name,\n", - " tags=None,\n", - " workspace=ws)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Evaluate on Test Data" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now use the best fitted model from the AutoML Run to make predictions on the test set. \n", - "\n", - "Test set schema should match that of the training set." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_dataset = Dataset.Tabular.from_delimited_files(path = [(datastore, blobstore_datadir + '/test_data.csv')])\n", - "\n", - "# preview the first 3 rows of the dataset\n", - "test_dataset.take(3).to_pandas_dataframe()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_experiment = Experiment(ws, experiment_name + \"_test\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "script_folder = os.path.join(os.getcwd(), 'inference')\n", - "os.makedirs(script_folder, exist_ok=True)\n", - "shutil.copy('infer.py', script_folder)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_run = run_inference(test_experiment, compute_target, script_folder, best_dnn_run,\n", - " test_dataset, target_column_name, model_name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Display computed metrics" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_run" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "RunDetails(test_run).show()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_run.wait_for_completion()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "pd.Series(test_run.get_metrics())" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Text Classification 
Using Deep Learning**_\n",
+ "\n",
+ "## Contents\n",
+ "1. [Introduction](#Introduction)\n",
+ "1. [Setup](#Setup)\n",
+ "1. [Data](#Data)\n",
+ "1. [Train](#Train)\n",
+ "1. [Evaluate](#Evaluate)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Introduction\n",
+ "This notebook demonstrates classification with text data using deep learning in AutoML.\n",
+ "\n",
+ "AutoML highlights here include using deep neural networks (DNNs) to create embedded features from text data. Depending on the compute cluster the user provides, AutoML tries out Bidirectional Encoder Representations from Transformers (BERT) when a GPU compute is used, and a Bidirectional Long Short-Term Memory (BiLSTM) network when a CPU compute is used, thereby optimizing the choice of DNN for the user's setup.\n",
+ "\n",
+ "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
+ "\n",
+ "Notebook synopsis:\n",
+ "\n",
+ "1. Creating an Experiment in an existing Workspace\n",
+ "2. Configuration and remote run of AutoML for a text dataset (20 Newsgroups dataset from scikit-learn) for classification\n",
+ "3. Registering the best model for future use\n",
+ "4. Evaluating the final model on a test set"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Setup"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "import logging\n",
+ "import os\n",
+ "import shutil\n",
+ "\n",
+ "import pandas as pd\n",
+ "\n",
+ "import azureml.core\n",
+ "from azureml.core.experiment import Experiment\n",
+ "from azureml.core.workspace import Workspace\n",
+ "from azureml.core.dataset import Dataset\n",
+ "from azureml.core.compute import AmlCompute\n",
+ "from azureml.core.compute import ComputeTarget\n",
+ "from azureml.core.run import Run\n",
+ "from azureml.widgets import RunDetails\n",
+ "from azureml.core.model import Model\n",
+ "from helper import run_inference, get_result_df\n",
+ "from azureml.train.automl import AutoMLConfig\n",
+ "from sklearn.datasets import fetch_20newsgroups"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This sample notebook may use features that are not available in previous versions of the Azure ML SDK."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem."
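To make the Experiment/Run distinction concrete, you can list the runs already recorded under an experiment. A short sketch, assuming the `experiment` object created in the next cell (`get_runs` yields the most recent runs first):

```python
# Each Run under an Experiment is one attempt at the same prediction problem.
for run in list(experiment.get_runs())[:5]:
    print(run.id, run.get_status())
```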
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ws = Workspace.from_config()\n",
+ "\n",
+ "# Choose an experiment name.\n",
+ "experiment_name = \"automl-classification-text-dnn\"\n",
+ "\n",
+ "experiment = Experiment(ws, experiment_name)\n",
+ "\n",
+ "output = {}\n",
+ "output[\"Subscription ID\"] = ws.subscription_id\n",
+ "output[\"Workspace Name\"] = ws.name\n",
+ "output[\"Resource Group\"] = ws.resource_group\n",
+ "output[\"Location\"] = ws.location\n",
+ "output[\"Experiment Name\"] = experiment.name\n",
+ "pd.set_option(\"display.max_colwidth\", -1)\n",
+ "outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
+ "outputDf.T"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Set up a compute cluster\n",
+ "This section uses a user-provided compute cluster (named \"dnntext-cluster\" in this example). If a cluster with this name does not exist in the user's workspace, the code below will create a new cluster. You can choose the parameters of the cluster as mentioned in the comments.\n",
+ "\n",
+ "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n",
+ "\n",
+ "Whether you provide/select a CPU or GPU cluster, AutoML will choose the appropriate DNN for that setup: the BiLSTM or BERT text featurizer will be included in the candidate featurizers on CPU and GPU respectively. If your goal is to obtain the most accurate model, we recommend you use GPU clusters, since BERT featurizers usually outperform BiLSTM featurizers."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from azureml.core.compute import ComputeTarget, AmlCompute\n",
+ "from azureml.core.compute_target import ComputeTargetException\n",
+ "\n",
+ "num_nodes = 2\n",
+ "\n",
+ "# Choose a name for your cluster.\n",
+ "amlcompute_cluster_name = \"dnntext-cluster\"\n",
+ "\n",
+ "# Verify that cluster does not exist already\n",
+ "try:\n",
+ " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n",
+ " print(\"Found existing cluster, use it.\")\n",
+ "except ComputeTargetException:\n",
+ " compute_config = AmlCompute.provisioning_configuration(\n",
+ " vm_size=\"STANDARD_NC6\", # For BiLSTM, use a CPU VM size such as \"STANDARD_D2_V2\"\n",
+ " # To use BERT (this is recommended for best performance), select a GPU such as \"STANDARD_NC6\"\n",
+ " # or similar GPU option\n",
+ " # available in your workspace\n",
+ " max_nodes=num_nodes,\n",
+ " )\n",
+ " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n",
+ "\n",
+ "compute_target.wait_for_completion(show_output=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Get data\n",
+ "For this notebook we will use 20 Newsgroups data from scikit-learn. We filter the data to contain four classes and take a sample as training data. Please note that more data is needed to improve accuracy. For this notebook we provide a small-data example so that you can use it as a template for your own, larger datasets."
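For instance, to move past the fixed 200/100-row slices that the helper in the next cell uses, you could split proportionally over however much data you fetch. A sketch; `split_train_test` is a hypothetical helper, not part of the notebook:

```python
import pandas as pd


# Hypothetical replacement for the fixed slices in get_20newsgroups_data():
# an 80/20 split that scales with the amount of data fetched.
def split_train_test(data: pd.DataFrame, train_fraction: float = 0.8):
    split = int(train_fraction * len(data))
    return data[:split], data[split:]
```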
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data_dir = \"text-dnn-data\" # Local directory to store data\n",
+ "blobstore_datadir = data_dir # Blob store directory to store data in\n",
+ "target_column_name = \"y\"\n",
+ "feature_column_name = \"X\"\n",
+ "\n",
+ "\n",
+ "def get_20newsgroups_data():\n",
+ " \"\"\"Fetches 20 Newsgroups data from scikit-learn\n",
+ " Returns them in the form of pandas dataframes\n",
+ " \"\"\"\n",
+ " remove = (\"headers\", \"footers\", \"quotes\")\n",
+ " categories = [\n",
+ " \"rec.sport.baseball\",\n",
+ " \"rec.sport.hockey\",\n",
+ " \"comp.graphics\",\n",
+ " \"sci.space\",\n",
+ " ]\n",
+ "\n",
+ " data = fetch_20newsgroups(\n",
+ " subset=\"train\",\n",
+ " categories=categories,\n",
+ " shuffle=True,\n",
+ " random_state=42,\n",
+ " remove=remove,\n",
+ " )\n",
+ " data = pd.DataFrame(\n",
+ " {feature_column_name: data.data, target_column_name: data.target}\n",
+ " )\n",
+ "\n",
+ " data_train = data[:200]\n",
+ " data_test = data[200:300]\n",
+ "\n",
+ " data_train = remove_blanks_20news(\n",
+ " data_train, feature_column_name, target_column_name\n",
+ " )\n",
+ " data_test = remove_blanks_20news(data_test, feature_column_name, target_column_name)\n",
+ "\n",
+ " return data_train, data_test\n",
+ "\n",
+ "\n",
+ "def remove_blanks_20news(data, feature_column_name, target_column_name):\n",
+ "\n",
+ " data[feature_column_name] = (\n",
+ " data[feature_column_name]\n",
+ " .replace(r\"\\n\", \" \", regex=True)\n",
+ " .apply(lambda x: x.strip())\n",
+ " )\n",
+ " data = data[data[feature_column_name] != \"\"]\n",
+ "\n",
+ " return data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Fetch data and upload to datastore for use in training"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data_train, data_test = get_20newsgroups_data()\n",
+ "\n",
+ "if not os.path.isdir(data_dir):\n",
+ " os.mkdir(data_dir)\n",
+ "\n",
+ "train_data_fname = data_dir + \"/train_data.csv\"\n",
+ "test_data_fname = data_dir + \"/test_data.csv\"\n",
+ "\n",
+ "data_train.to_csv(train_data_fname, index=False)\n",
+ "data_test.to_csv(test_data_fname, index=False)\n",
+ "\n",
+ "datastore = ws.get_default_datastore()\n",
+ "datastore.upload(src_dir=data_dir, target_path=blobstore_datadir, overwrite=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "train_dataset = Dataset.Tabular.from_delimited_files(\n",
+ " path=[(datastore, blobstore_datadir + \"/train_data.csv\")]\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Prepare AutoML run"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This notebook uses the `blocked_models` parameter to exclude some models that can take a longer time to train on some text datasets. You can choose to remove models from the `blocked_models` list, but you may need to increase the `experiment_timeout_minutes` parameter value to get results."
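The opposite of `blocked_models` is also available: the `allowed_models` parameter restricts the sweep to an explicit list. A sketch of that variant, reusing the `automl_settings` dictionary defined in the next cell (the two model names are illustrative):

```python
# Hypothetical variant of the configuration below: sweep only two families.
automl_config_restricted = AutoMLConfig(
    task="classification",
    compute_target=compute_target,
    training_data=train_dataset,
    label_column_name=target_column_name,
    allowed_models=["LogisticRegression", "RandomForest"],
    **automl_settings,
)
```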
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "automl_settings = {\n",
+ " \"experiment_timeout_minutes\": 30,\n",
+ " \"primary_metric\": \"accuracy\",\n",
+ " \"max_concurrent_iterations\": num_nodes,\n",
+ " \"max_cores_per_iteration\": -1,\n",
+ " \"enable_dnn\": True,\n",
+ " \"enable_early_stopping\": True,\n",
+ " \"validation_size\": 0.3,\n",
+ " \"verbosity\": logging.INFO,\n",
+ " \"enable_voting_ensemble\": False,\n",
+ " \"enable_stack_ensemble\": False,\n",
+ "}\n",
+ "\n",
+ "automl_config = AutoMLConfig(\n",
+ " task=\"classification\",\n",
+ " debug_log=\"automl_errors.log\",\n",
+ " compute_target=compute_target,\n",
+ " training_data=train_dataset,\n",
+ " label_column_name=target_column_name,\n",
+ " blocked_models=[\"LightGBM\", \"XGBoostClassifier\"],\n",
+ " **automl_settings,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Submit AutoML Run"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "automl_run = experiment.submit(automl_config, show_output=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Displaying the run object gives you links to the visual tools in the Azure Portal. Go try them!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Retrieve the Best Model\n",
+ "Below we select the best model pipeline from our iterations and use it to test on the test data on the same compute cluster."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For local inferencing, you can load the model locally via the method `automl_run.get_output()`. For more information on the arguments expected by this method, you can run `automl_run.get_output??`.\n",
+ "Note that when the model contains BERT, this step will require pytorch and pytorch-transformers installed in your local environment. The exact versions of these packages can be found in the **automl_env.yml** file located in the local copy of your azureml-examples folder here: \"azureml-examples/python-sdk/tutorials/automl-with-azureml\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Retrieve the best Run object\n",
+ "best_run = automl_run.get_best_child()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "You can now see what text transformations are used to convert text data to features for this dataset, including deep learning transformations based on BiLSTM or Transformer (BERT is one implementation of a Transformer) models."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Download the featurization summary JSON file locally\n",
+ "best_run.download_file(\n",
+ " \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n",
+ ")\n",
+ "\n",
+ "# Render the JSON as a pandas DataFrame\n",
+ "with open(\"featurization_summary.json\", \"r\") as f:\n",
+ " records = json.load(f)\n",
+ "\n",
+ "featurization_summary = pd.DataFrame.from_records(records)\n",
+ "featurization_summary[\"Transformations\"].tolist()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Registering the best model\n",
+ "We now register the best fitted model from the AutoML Run for use in future deployments. 
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Get results stats, extract the best model from AutoML run, download and register the resultant best model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "summary_df = get_result_df(automl_run)\n", + "best_dnn_run_id = summary_df[\"run_id\"].iloc[0]\n", + "best_dnn_run = Run(experiment, best_dnn_run_id)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model_dir = \"Model\" # Local folder where the model will be stored temporarily\n", + "if not os.path.isdir(model_dir):\n", + " os.mkdir(model_dir)\n", + "\n", + "best_dnn_run.download_file(\"outputs/model.pkl\", model_dir + \"/model.pkl\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Register the model in your Azure Machine Learning Workspace. If you previously registered a model, please make sure to delete it so as to replace it with this new model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register the model\n", + "model_name = \"textDNN-20News\"\n", + "model = Model.register(\n", + " model_path=model_dir + \"/model.pkl\", model_name=model_name, tags=None, workspace=ws\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluate on Test Data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now use the best fitted model from the AutoML Run to make predictions on the test set. \n", + "\n", + "Test set schema should match that of the training set." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, blobstore_datadir + \"/test_data.csv\")]\n", + ")\n", + "\n", + "# preview the first 3 rows of the dataset\n", + "test_dataset.take(3).to_pandas_dataframe()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_experiment = Experiment(ws, experiment_name + \"_test\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "script_folder = os.path.join(os.getcwd(), \"inference\")\n", + "os.makedirs(script_folder, exist_ok=True)\n", + "shutil.copy(\"infer.py\", script_folder)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_run = run_inference(\n", + " test_experiment,\n", + " compute_target,\n", + " script_folder,\n", + " best_dnn_run,\n", + " test_dataset,\n", + " target_column_name,\n", + " model_name,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Display computed metrics" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "RunDetails(test_run).show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_run.wait_for_completion()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pd.Series(test_run.get_metrics())" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "anshirga" + 
} + ], + "compute": [ + "AML Compute" + ], + "datasets": [ + "None" + ], + "deployment": [ + "None" + ], + "exclude_from_index": false, + "framework": [ + "None" + ], + "friendly_name": "DNN Text Featurization", + "index_order": 2, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + }, + "tags": [ + "None" ], - "metadata": { - "authors": [ - { - "name": "anshirga" - } - ], - "compute": [ - "AML Compute" - ], - "datasets": [ - "None" - ], - "deployment": [ - "None" - ], - "exclude_from_index": false, - "framework": [ - "None" - ], - "friendly_name": "DNN Text Featurization", - "index_order": 2, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.7" - }, - "tags": [ - "None" - ], - "task": "Text featurization using DNNs for classification" - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file + "task": "Text featurization using DNNs for classification" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/helper.py b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/helper.py index 66a125492..90d67f83a 100644 --- a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/helper.py +++ b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/helper.py @@ -4,52 +4,65 @@ from azureml.core.run import Run -def run_inference(test_experiment, compute_target, script_folder, train_run, - test_dataset, target_column_name, model_name): +def run_inference( + test_experiment, + compute_target, + script_folder, + train_run, + test_dataset, + target_column_name, + model_name, +): inference_env = train_run.get_environment() - est = Estimator(source_directory=script_folder, - entry_script='infer.py', - script_params={ - '--target_column_name': target_column_name, - '--model_name': model_name - }, - inputs=[ - test_dataset.as_named_input('test_data') - ], - compute_target=compute_target, - environment_definition=inference_env) + est = Estimator( + source_directory=script_folder, + entry_script="infer.py", + script_params={ + "--target_column_name": target_column_name, + "--model_name": model_name, + }, + inputs=[test_dataset.as_named_input("test_data")], + compute_target=compute_target, + environment_definition=inference_env, + ) run = test_experiment.submit( - est, tags={ - 'training_run_id': train_run.id, - 'run_algorithm': train_run.properties['run_algorithm'], - 'valid_score': train_run.properties['score'], - 'primary_metric': train_run.properties['primary_metric'] - }) - - run.log("run_algorithm", run.tags['run_algorithm']) + est, + tags={ + "training_run_id": train_run.id, + "run_algorithm": train_run.properties["run_algorithm"], + "valid_score": train_run.properties["score"], + "primary_metric": train_run.properties["primary_metric"], + }, + ) + + run.log("run_algorithm", run.tags["run_algorithm"]) return run def get_result_df(remote_run): children = 
list(remote_run.get_children(recursive=True)) - summary_df = pd.DataFrame(index=['run_id', 'run_algorithm', - 'primary_metric', 'Score']) + summary_df = pd.DataFrame( + index=["run_id", "run_algorithm", "primary_metric", "Score"] + ) goal_minimize = False for run in children: - if('run_algorithm' in run.properties and 'score' in run.properties): - summary_df[run.id] = [run.id, run.properties['run_algorithm'], - run.properties['primary_metric'], - float(run.properties['score'])] - if('goal' in run.properties): - goal_minimize = run.properties['goal'].split('_')[-1] == 'min' + if "run_algorithm" in run.properties and "score" in run.properties: + summary_df[run.id] = [ + run.id, + run.properties["run_algorithm"], + run.properties["primary_metric"], + float(run.properties["score"]), + ] + if "goal" in run.properties: + goal_minimize = run.properties["goal"].split("_")[-1] == "min" summary_df = summary_df.T.sort_values( - 'Score', - ascending=goal_minimize).drop_duplicates(['run_algorithm']) - summary_df = summary_df.set_index('run_algorithm') + "Score", ascending=goal_minimize + ).drop_duplicates(["run_algorithm"]) + summary_df = summary_df.set_index("run_algorithm") return summary_df diff --git a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/infer.py b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/infer.py index cd3f8257f..28fd10b37 100644 --- a/how-to-use-azureml/automated-machine-learning/classification-text-dnn/infer.py +++ b/how-to-use-azureml/automated-machine-learning/classification-text-dnn/infer.py @@ -12,19 +12,22 @@ parser = argparse.ArgumentParser() parser.add_argument( - '--target_column_name', type=str, dest='target_column_name', - help='Target Column Name') + "--target_column_name", + type=str, + dest="target_column_name", + help="Target Column Name", +) parser.add_argument( - '--model_name', type=str, dest='model_name', - help='Name of registered model') + "--model_name", type=str, dest="model_name", help="Name of registered model" +) args = parser.parse_args() target_column_name = args.target_column_name model_name = args.model_name -print('args passed are: ') -print('Target column name: ', target_column_name) -print('Name of registered model: ', model_name) +print("args passed are: ") +print("Target column name: ", target_column_name) +print("Name of registered model: ", model_name) model_path = Model.get_model_path(model_name) # deserialize the model file back into a sklearn model @@ -32,13 +35,16 @@ run = Run.get_context() # get input dataset by name -test_dataset = run.input_datasets['test_data'] +test_dataset = run.input_datasets["test_data"] -X_test_df = test_dataset.drop_columns(columns=[target_column_name]) \ - .to_pandas_dataframe() -y_test_df = test_dataset.with_timestamp_columns(None) \ - .keep_columns(columns=[target_column_name]) \ - .to_pandas_dataframe() +X_test_df = test_dataset.drop_columns( + columns=[target_column_name] +).to_pandas_dataframe() +y_test_df = ( + test_dataset.with_timestamp_columns(None) + .keep_columns(columns=[target_column_name]) + .to_pandas_dataframe() +) predicted = model.predict_proba(X_test_df) @@ -47,11 +53,13 @@ # Use the AutoML scoring module train_labels = model.classes_ -class_labels = np.unique(np.concatenate((y_test_df.values, np.reshape(train_labels, (-1, 1))))) +class_labels = np.unique( + np.concatenate((y_test_df.values, np.reshape(train_labels, (-1, 1)))) +) classification_metrics = list(constants.CLASSIFICATION_SCALAR_SET) -scores = 
scoring.score_classification(y_test_df.values, predicted, - classification_metrics, - class_labels, train_labels) +scores = scoring.score_classification( + y_test_df.values, predicted, classification_metrics, class_labels, train_labels +) print("scores:") print(scores) diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb b/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb index 497f8b75a..edd11ff77 100644 --- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb +++ b/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb @@ -1,572 +1,585 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved. \n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/continous-retraining/auto-ml-continuous-retraining.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning \n", - "**Continuous retraining using Pipelines and Time-Series TabularDataset**\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "2. [Setup](#Setup)\n", - "3. [Compute](#Compute)\n", - "4. [Run Configuration](#Run-Configuration)\n", - "5. [Data Ingestion Pipeline](#Data-Ingestion-Pipeline)\n", - "6. [Training Pipeline](#Training-Pipeline)\n", - "7. [Publish Retraining Pipeline and Schedule](#Publish-Retraining-Pipeline-and-Schedule)\n", - "8. [Test Retraining](#Test-Retraining)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "In this example we use AutoML and Pipelines to enable contious retraining of a model based on updates to the training dataset. We will create two pipelines, the first one to demonstrate a training dataset that gets updated over time. We leverage time-series capabilities of `TabularDataset` to achieve this. The second pipeline utilizes pipeline `Schedule` to trigger continuous retraining. \n", - "Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n", - "In this notebook you will learn how to:\n", - "* Create an Experiment in an existing Workspace.\n", - "* Configure AutoML using AutoMLConfig.\n", - "* Create data ingestion pipeline to update a time-series based TabularDataset\n", - "* Create training pipeline to prepare data, run AutoML, register the model and setup pipeline triggers.\n", - "\n", - "## Setup\n", - "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import logging\n", - "\n", - "from matplotlib import pyplot as plt\n", - "import numpy as np\n", - "import pandas as pd\n", - "from sklearn import datasets\n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.train.automl import AutoMLConfig" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Accessing the Azure ML workspace requires authentication with Azure.\n", - "\n", - "The default authentication is interactive authentication using the default tenant. Executing the ws = Workspace.from_config() line in the cell below will prompt for authentication the first time that it is run.\n", - "\n", - "If you have multiple Azure tenants, you can specify the tenant by replacing the ws = Workspace.from_config() line in the cell below with the following:\n", - "```\n", - "from azureml.core.authentication import InteractiveLoginAuthentication\n", - "auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n", - "ws = Workspace.from_config(auth = auth)\n", - "```\n", - "If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the ws = Workspace.from_config() line in the cell below with the following:\n", - "```\n", - "from azureml.core.authentication import ServicePrincipalAuthentication\n", - "auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n", - "ws = Workspace.from_config(auth = auth)\n", - "```\n", - "For more details, see aka.ms/aml-notebook-auth" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "dstor = ws.get_default_datastore()\n", - "\n", - "# Choose a name for the run history container in the workspace.\n", - "experiment_name = 'retrain-noaaweather'\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Run History Name'] = experiment_name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Compute \n", - "\n", - "#### Create or Attach existing AmlCompute\n", - "\n", - "You will need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. 
Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. \n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "amlcompute_cluster_name = \"cont-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n", - " max_nodes=4)\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Run Configuration" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.runconfig import CondaDependencies, RunConfiguration\n", - "\n", - "# create a new RunConfig object\n", - "conda_run_config = RunConfiguration(framework=\"python\")\n", - "\n", - "# Set compute target to AmlCompute\n", - "conda_run_config.target = compute_target\n", - "\n", - "conda_run_config.environment.docker.enabled = True\n", - "\n", - "cd = CondaDependencies.create(pip_packages=['azureml-sdk[automl]', 'applicationinsights', 'azureml-opendatasets', 'azureml-defaults'], \n", - " conda_packages=['numpy==1.16.2'], \n", - " pin_sdk_version=False)\n", - "conda_run_config.environment.python.conda_dependencies = cd\n", - "\n", - "print('run config is ready')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Data Ingestion Pipeline \n", - "For this demo, we will use NOAA weather data from [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/). You can replace this with your own dataset, or you can skip this pipeline if you already have a time-series based `TabularDataset`.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# The name and target column of the Dataset to create \n", - "dataset = \"NOAA-Weather-DS4\"\n", - "target_column_name = \"temperature\"" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "\n", - "### Upload Data Step\n", - "The data ingestion pipeline has a single step with a script to query the latest weather data and upload it to the blob store. During the first run, the script will create and register a time-series based `TabularDataset` with the past one week of weather data. 
For each subsequent run, the script will create a partition in the blob store by querying NOAA for new weather data since the last modified time of the dataset (`dataset.data_changed_time`) and creating a data.csv file." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline, PipelineParameter\n", - "from azureml.pipeline.steps import PythonScriptStep\n", - "\n", - "ds_name = PipelineParameter(name=\"ds_name\", default_value=dataset)\n", - "upload_data_step = PythonScriptStep(script_name=\"upload_weather_data.py\", \n", - " allow_reuse=False,\n", - " name=\"upload_weather_data\",\n", - " arguments=[\"--ds_name\", ds_name],\n", - " compute_target=compute_target, \n", - " runconfig=conda_run_config)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Submit Pipeline Run" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_pipeline = Pipeline(\n", - " description=\"pipeline_with_uploaddata\",\n", - " workspace=ws, \n", - " steps=[upload_data_step])\n", - "data_pipeline_run = experiment.submit(data_pipeline, pipeline_parameters={\"ds_name\":dataset})" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_pipeline_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Training Pipeline\n", - "### Prepare Training Data Step\n", - "\n", - "Script to check if new data is available since the model was last trained. If no new data is available, we cancel the remaining pipeline steps. We need to set allow_reuse flag to False to allow the pipeline to run even when inputs don't change. We also need the name of the model to check the time the model was last trained." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import PipelineData\n", - "\n", - "# The model name with which to register the trained model in the workspace.\n", - "model_name = PipelineParameter(\"model_name\", default_value=\"noaaweatherds\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data_prep_step = PythonScriptStep(script_name=\"check_data.py\", \n", - " allow_reuse=False,\n", - " name=\"check_data\",\n", - " arguments=[\"--ds_name\", ds_name,\n", - " \"--model_name\", model_name],\n", - " compute_target=compute_target, \n", - " runconfig=conda_run_config)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core import Dataset\n", - "train_ds = Dataset.get_by_name(ws, dataset)\n", - "train_ds = train_ds.drop_columns([\"partition_date\"])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### AutoMLStep\n", - "Create an AutoMLConfig and a training step." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.train.automl import AutoMLConfig\n", - "from azureml.pipeline.steps import AutoMLStep\n", - "\n", - "automl_settings = {\n", - " \"iteration_timeout_minutes\": 10,\n", - " \"experiment_timeout_hours\": 0.25,\n", - " \"n_cross_validations\": 3,\n", - " \"primary_metric\": 'normalized_root_mean_squared_error',\n", - " \"max_concurrent_iterations\": 3,\n", - " \"max_cores_per_iteration\": -1,\n", - " \"verbosity\": logging.INFO,\n", - " \"enable_early_stopping\": True\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'regression',\n", - " debug_log = 'automl_errors.log',\n", - " path = \".\",\n", - " compute_target=compute_target,\n", - " training_data = train_ds,\n", - " label_column_name = target_column_name,\n", - " **automl_settings\n", - " )" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import PipelineData, TrainingOutput\n", - "\n", - "metrics_output_name = 'metrics_output'\n", - "best_model_output_name = 'best_model_output'\n", - "\n", - "metrics_data = PipelineData(name='metrics_data',\n", - " datastore=dstor,\n", - " pipeline_output_name=metrics_output_name,\n", - " training_output=TrainingOutput(type='Metrics'))\n", - "model_data = PipelineData(name='model_data',\n", - " datastore=dstor,\n", - " pipeline_output_name=best_model_output_name,\n", - " training_output=TrainingOutput(type='Model'))" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_step = AutoMLStep(\n", - " name='automl_module',\n", - " automl_config=automl_config,\n", - " outputs=[metrics_data, model_data],\n", - " allow_reuse=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Register Model Step\n", - "Script to register the model to the workspace. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "register_model_step = PythonScriptStep(script_name=\"register_model.py\",\n", - " name=\"register_model\",\n", - " allow_reuse=False,\n", - " arguments=[\"--model_name\", model_name, \"--model_path\", model_data, \"--ds_name\", ds_name],\n", - " inputs=[model_data],\n", - " compute_target=compute_target,\n", - " runconfig=conda_run_config)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Submit Pipeline Run" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_pipeline = Pipeline(\n", - " description=\"training_pipeline\",\n", - " workspace=ws, \n", - " steps=[data_prep_step, automl_step, register_model_step])" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_pipeline_run = experiment.submit(training_pipeline, pipeline_parameters={\n", - " \"ds_name\": dataset, \"model_name\": \"noaaweatherds\"})" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_pipeline_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Publish Retraining Pipeline and Schedule\n", - "Once we are happy with the pipeline, we can publish the training pipeline to the workspace and create a schedule to trigger on blob change. 
The schedule polls the blob store where the data is being uploaded and runs the retraining pipeline if there is a data change. A new version of the model will be registered to the workspace once the run is complete." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "pipeline_name = \"Retraining-Pipeline-NOAAWeather\"\n", - "\n", - "published_pipeline = training_pipeline.publish(\n", - " name=pipeline_name, \n", - " description=\"Pipeline that retrains AutoML model\")\n", - "\n", - "published_pipeline" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Schedule\n", - "schedule = Schedule.create(workspace=ws, name=\"RetrainingSchedule\",\n", - " pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n", - " pipeline_id=published_pipeline.id, \n", - " experiment_name=experiment_name, \n", - " datastore=dstor,\n", - " wait_for_provisioning=True,\n", - " polling_interval=1440)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Test Retraining\n", - "Here we setup the data ingestion pipeline to run on a schedule, to verify that the retraining pipeline runs as expected. \n", - "\n", - "Note: \n", - "* Azure NOAA Weather data is updated daily and retraining will not trigger if there is no new data available. \n", - "* Depending on the polling interval set in the schedule, the retraining may take some time trigger after data ingestion pipeline completes." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "pipeline_name = \"DataIngestion-Pipeline-NOAAWeather\"\n", - "\n", - "published_pipeline = training_pipeline.publish(\n", - " name=pipeline_name, \n", - " description=\"Pipeline that updates NOAAWeather Dataset\")\n", - "\n", - "published_pipeline" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Schedule\n", - "schedule = Schedule.create(workspace=ws, name=\"RetrainingSchedule-DataIngestion\",\n", - " pipeline_parameters={\"ds_name\":dataset},\n", - " pipeline_id=published_pipeline.id, \n", - " experiment_name=experiment_name, \n", - " datastore=dstor,\n", - " wait_for_provisioning=True,\n", - " polling_interval=1440)" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning \n", + "**Continuous retraining using Pipelines and Time-Series TabularDataset**\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "2. [Setup](#Setup)\n", + "3. [Compute](#Compute)\n", + "4. [Run Configuration](#Run-Configuration)\n", + "5. [Data Ingestion Pipeline](#Data-Ingestion-Pipeline)\n", + "6. [Training Pipeline](#Training-Pipeline)\n", + "7. [Publish Retraining Pipeline and Schedule](#Publish-Retraining-Pipeline-and-Schedule)\n", + "8. [Test Retraining](#Test-Retraining)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "In this example we use AutoML and Pipelines to enable contious retraining of a model based on updates to the training dataset. We will create two pipelines, the first one to demonstrate a training dataset that gets updated over time. We leverage time-series capabilities of `TabularDataset` to achieve this. The second pipeline utilizes pipeline `Schedule` to trigger continuous retraining. 
\n", + "Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n", + "In this notebook you will learn how to:\n", + "* Create an Experiment in an existing Workspace.\n", + "* Configure AutoML using AutoMLConfig.\n", + "* Create data ingestion pipeline to update a time-series based TabularDataset\n", + "* Create training pipeline to prepare data, run AutoML, register the model and setup pipeline triggers.\n", + "\n", + "## Setup\n", + "As part of the setup you have already created an Azure ML `Workspace` object. For AutoML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import logging\n", + "\n", + "from matplotlib import pyplot as plt\n", + "import numpy as np\n", + "import pandas as pd\n", + "from sklearn import datasets\n", + "\n", + "import azureml.core\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.train.automl import AutoMLConfig" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Accessing the Azure ML workspace requires authentication with Azure.\n", + "\n", + "The default authentication is interactive authentication using the default tenant. Executing the ws = Workspace.from_config() line in the cell below will prompt for authentication the first time that it is run.\n", + "\n", + "If you have multiple Azure tenants, you can specify the tenant by replacing the ws = Workspace.from_config() line in the cell below with the following:\n", + "```\n", + "from azureml.core.authentication import InteractiveLoginAuthentication\n", + "auth = InteractiveLoginAuthentication(tenant_id = 'mytenantid')\n", + "ws = Workspace.from_config(auth = auth)\n", + "```\n", + "If you need to run in an environment where interactive login is not possible, you can use Service Principal authentication by replacing the ws = Workspace.from_config() line in the cell below with the following:\n", + "```\n", + "from azureml.core.authentication import ServicePrincipalAuthentication\n", + "auth = auth = ServicePrincipalAuthentication('mytenantid', 'myappid', 'mypassword')\n", + "ws = Workspace.from_config(auth = auth)\n", + "```\n", + "For more details, see aka.ms/aml-notebook-auth" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "dstor = ws.get_default_datastore()\n", + "\n", + "# Choose a name for the run history container in the workspace.\n", + "experiment_name = \"retrain-noaaweather\"\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute \n", + "\n", + "#### Create or Attach existing AmlCompute\n", + "\n", + "You will 
need to create a compute target for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "amlcompute_cluster_name = \"cont-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Run Configuration" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.runconfig import CondaDependencies, RunConfiguration\n", + "\n", + "# create a new RunConfig object\n", + "conda_run_config = RunConfiguration(framework=\"python\")\n", + "\n", + "# Set compute target to AmlCompute\n", + "conda_run_config.target = compute_target\n", + "\n", + "conda_run_config.environment.docker.enabled = True\n", + "\n", + "cd = CondaDependencies.create(\n", + " pip_packages=[\n", + " \"azureml-sdk[automl]\",\n", + " \"applicationinsights\",\n", + " \"azureml-opendatasets\",\n", + " \"azureml-defaults\",\n", + " ],\n", + " conda_packages=[\"numpy==1.16.2\"],\n", + " pin_sdk_version=False,\n", + ")\n", + "conda_run_config.environment.python.conda_dependencies = cd\n", + "\n", + "print(\"run config is ready\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data Ingestion Pipeline \n", + "For this demo, we will use NOAA weather data from [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/). 
You can replace this with your own dataset, or you can skip this pipeline if you already have a time-series based `TabularDataset`.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# The name and target column of the Dataset to create\n", + "dataset = \"NOAA-Weather-DS4\"\n", + "target_column_name = \"temperature\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Upload Data Step\n", + "The data ingestion pipeline has a single step with a script to query the latest weather data and upload it to the blob store. During the first run, the script will create and register a time-series based `TabularDataset` with the past one week of weather data. For each subsequent run, the script will create a partition in the blob store by querying NOAA for new weather data since the last modified time of the dataset (`dataset.data_changed_time`) and creating a data.csv file." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Pipeline, PipelineParameter\n", + "from azureml.pipeline.steps import PythonScriptStep\n", + "\n", + "ds_name = PipelineParameter(name=\"ds_name\", default_value=dataset)\n", + "upload_data_step = PythonScriptStep(\n", + " script_name=\"upload_weather_data.py\",\n", + " allow_reuse=False,\n", + " name=\"upload_weather_data\",\n", + " arguments=[\"--ds_name\", ds_name],\n", + " compute_target=compute_target,\n", + " runconfig=conda_run_config,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Submit Pipeline Run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data_pipeline = Pipeline(\n", + " description=\"pipeline_with_uploaddata\", workspace=ws, steps=[upload_data_step]\n", + ")\n", + "data_pipeline_run = experiment.submit(\n", + " data_pipeline, pipeline_parameters={\"ds_name\": dataset}\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data_pipeline_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Training Pipeline\n", + "### Prepare Training Data Step\n", + "\n", + "Script to check if new data is available since the model was last trained. If no new data is available, we cancel the remaining pipeline steps. We need to set allow_reuse flag to False to allow the pipeline to run even when inputs don't change. We also need the name of the model to check the time the model was last trained." 
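For reference, the heart of `check_data.py` can be sketched as follows. This is a minimal sketch, assuming the default dataset and model names used in this notebook and that cancelling the parent pipeline run is an acceptable way to skip the remaining steps; the shipped script reads these names from its `--ds_name` and `--model_name` arguments instead of hard-coding them.

```
# Sketch of the freshness check performed by check_data.py
# (names hard-coded here for illustration; see assumptions above).
from datetime import datetime

import pytz
from azureml.core import Dataset, Run
from azureml.core.model import Model

run = Run.get_context()
ws = run.experiment.workspace

try:
    # Time the current model was registered, i.e. when it was last trained.
    last_train_time = Model(ws, "noaaweatherds").created_time
except Exception:
    # No model registered yet: treat any data as new.
    last_train_time = datetime.min.replace(tzinfo=pytz.UTC)

dataset = Dataset.get_by_name(ws, "NOAA-Weather-DS4")
if dataset.data_changed_time <= last_train_time:
    print("No new data since the model was last trained; cancelling run.")
    run.parent.cancel()  # skip the remaining pipeline steps
```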
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import PipelineData\n", + "\n", + "# The model name with which to register the trained model in the workspace.\n", + "model_name = PipelineParameter(\"model_name\", default_value=\"noaaweatherds\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data_prep_step = PythonScriptStep(\n", + " script_name=\"check_data.py\",\n", + " allow_reuse=False,\n", + " name=\"check_data\",\n", + " arguments=[\"--ds_name\", ds_name, \"--model_name\", model_name],\n", + " compute_target=compute_target,\n", + " runconfig=conda_run_config,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "\n", + "train_ds = Dataset.get_by_name(ws, dataset)\n", + "train_ds = train_ds.drop_columns([\"partition_date\"])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### AutoMLStep\n", + "Create an AutoMLConfig and a training step." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.train.automl import AutoMLConfig\n", + "from azureml.pipeline.steps import AutoMLStep\n", + "\n", + "automl_settings = {\n", + " \"iteration_timeout_minutes\": 10,\n", + " \"experiment_timeout_hours\": 0.25,\n", + " \"n_cross_validations\": 3,\n", + " \"primary_metric\": \"r2_score\",\n", + " \"max_concurrent_iterations\": 3,\n", + " \"max_cores_per_iteration\": -1,\n", + " \"verbosity\": logging.INFO,\n", + " \"enable_early_stopping\": True,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"regression\",\n", + " debug_log=\"automl_errors.log\",\n", + " path=\".\",\n", + " compute_target=compute_target,\n", + " training_data=train_ds,\n", + " label_column_name=target_column_name,\n", + " **automl_settings,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import PipelineData, TrainingOutput\n", + "\n", + "metrics_output_name = \"metrics_output\"\n", + "best_model_output_name = \"best_model_output\"\n", + "\n", + "metrics_data = PipelineData(\n", + " name=\"metrics_data\",\n", + " datastore=dstor,\n", + " pipeline_output_name=metrics_output_name,\n", + " training_output=TrainingOutput(type=\"Metrics\"),\n", + ")\n", + "model_data = PipelineData(\n", + " name=\"model_data\",\n", + " datastore=dstor,\n", + " pipeline_output_name=best_model_output_name,\n", + " training_output=TrainingOutput(type=\"Model\"),\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_step = AutoMLStep(\n", + " name=\"automl_module\",\n", + " automl_config=automl_config,\n", + " outputs=[metrics_data, model_data],\n", + " allow_reuse=False,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Register Model Step\n", + "Script to register the model to the workspace. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "register_model_step = PythonScriptStep(\n", + " script_name=\"register_model.py\",\n", + " name=\"register_model\",\n", + " allow_reuse=False,\n", + " arguments=[\n", + " \"--model_name\",\n", + " model_name,\n", + " \"--model_path\",\n", + " model_data,\n", + " \"--ds_name\",\n", + " ds_name,\n", + " ],\n", + " inputs=[model_data],\n", + " compute_target=compute_target,\n", + " runconfig=conda_run_config,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Submit Pipeline Run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_pipeline = Pipeline(\n", + " description=\"training_pipeline\",\n", + " workspace=ws,\n", + " steps=[data_prep_step, automl_step, register_model_step],\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_pipeline_run = experiment.submit(\n", + " training_pipeline,\n", + " pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_pipeline_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Publish Retraining Pipeline and Schedule\n", + "Once we are happy with the pipeline, we can publish the training pipeline to the workspace and create a schedule to trigger on blob change. The schedule polls the blob store where the data is being uploaded and runs the retraining pipeline if there is a data change. A new version of the model will be registered to the workspace once the run is complete." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pipeline_name = \"Retraining-Pipeline-NOAAWeather\"\n", + "\n", + "published_pipeline = training_pipeline.publish(\n", + " name=pipeline_name, description=\"Pipeline that retrains AutoML model\"\n", + ")\n", + "\n", + "published_pipeline" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Schedule\n", + "\n", + "schedule = Schedule.create(\n", + " workspace=ws,\n", + " name=\"RetrainingSchedule\",\n", + " pipeline_parameters={\"ds_name\": dataset, \"model_name\": \"noaaweatherds\"},\n", + " pipeline_id=published_pipeline.id,\n", + " experiment_name=experiment_name,\n", + " datastore=dstor,\n", + " wait_for_provisioning=True,\n", + " polling_interval=1440,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test Retraining\n", + "Here we setup the data ingestion pipeline to run on a schedule, to verify that the retraining pipeline runs as expected. \n", + "\n", + "Note: \n", + "* Azure NOAA Weather data is updated daily and retraining will not trigger if there is no new data available. \n", + "* Depending on the polling interval set in the schedule, the retraining may take some time trigger after data ingestion pipeline completes." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pipeline_name = \"DataIngestion-Pipeline-NOAAWeather\"\n",
+ "\n",
+ "published_pipeline = data_pipeline.publish(\n",
+ "    name=pipeline_name, description=\"Pipeline that updates NOAAWeather Dataset\"\n",
+ ")\n",
+ "\n",
+ "published_pipeline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from azureml.pipeline.core import Schedule\n",
+ "\n",
+ "schedule = Schedule.create(\n",
+ "    workspace=ws,\n",
+ "    name=\"RetrainingSchedule-DataIngestion\",\n",
+ "    pipeline_parameters={\"ds_name\": dataset},\n",
+ "    pipeline_id=published_pipeline.id,\n",
+ "    experiment_name=experiment_name,\n",
+ "    datastore=dstor,\n",
+ "    wait_for_provisioning=True,\n",
+ "    polling_interval=1440,\n",
+ ")"
+ ]
+ }
+ ],
+ "metadata": {
+ "authors": [
+ {
+ "name": "vivijay"
+ }
 ],
- "metadata": {
- "authors": [
- {
- "name": "vivijay"
- }
- ],
- "kernelspec": {
- "display_name": "Python 3.6",
- "language": "python",
- "name": "python36"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.6"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
-}
\ No newline at end of file
+ "kernelspec": {
+ "display_name": "Python 3.6 - AzureML",
+ "language": "python",
+ "name": "python3-azureml"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.6.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/check_data.py b/how-to-use-azureml/automated-machine-learning/continuous-retraining/check_data.py
index 628ea9611..aec68d422 100644
--- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/check_data.py
+++ b/how-to-use-azureml/automated-machine-learning/continuous-retraining/check_data.py
@@ -31,7 +31,7 @@
 model = Model(ws, args.model_name)
 last_train_time = model.created_time
 print("Model was last trained on {0}.".format(last_train_time))
-except Exception:
+except Exception as e:
 print("Could not get last model train time.")
 last_train_time = datetime.min.replace(tzinfo=pytz.UTC)
diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/register_model.py b/how-to-use-azureml/automated-machine-learning/continuous-retraining/register_model.py
index 4c9a34a7a..aa37ee86b 100644
--- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/register_model.py
+++ b/how-to-use-azureml/automated-machine-learning/continuous-retraining/register_model.py
@@ -25,9 +25,11 @@
 # Register model with training dataset
-model = Model.register(workspace=ws,
- model_path=args.model_path,
- model_name=args.model_name,
- datasets=datasets)
+model = Model.register(
+    workspace=ws,
+    model_path=args.model_path,
+    model_name=args.model_name,
+    datasets=datasets,
+)
 print("Registered version {0} of model {1}".format(model.version, model.name))
diff --git a/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py b/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py
index 
444dcbea0..28f30a65b 100644 --- a/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py +++ b/how-to-use-azureml/automated-machine-learning/continuous-retraining/upload_weather_data.py @@ -16,26 +16,82 @@ else: ws = run.experiment.workspace -usaf_list = ['725724', '722149', '723090', '722159', '723910', '720279', - '725513', '725254', '726430', '720381', '723074', '726682', - '725486', '727883', '723177', '722075', '723086', '724053', - '725070', '722073', '726060', '725224', '725260', '724520', - '720305', '724020', '726510', '725126', '722523', '703333', - '722249', '722728', '725483', '722972', '724975', '742079', - '727468', '722193', '725624', '722030', '726380', '720309', - '722071', '720326', '725415', '724504', '725665', '725424', - '725066'] +usaf_list = [ + "725724", + "722149", + "723090", + "722159", + "723910", + "720279", + "725513", + "725254", + "726430", + "720381", + "723074", + "726682", + "725486", + "727883", + "723177", + "722075", + "723086", + "724053", + "725070", + "722073", + "726060", + "725224", + "725260", + "724520", + "720305", + "724020", + "726510", + "725126", + "722523", + "703333", + "722249", + "722728", + "725483", + "722972", + "724975", + "742079", + "727468", + "722193", + "725624", + "722030", + "726380", + "720309", + "722071", + "720326", + "725415", + "724504", + "725665", + "725424", + "725066", +] def get_noaa_data(start_time, end_time): - columns = ['usaf', 'wban', 'datetime', 'latitude', 'longitude', 'elevation', - 'windAngle', 'windSpeed', 'temperature', 'stationName', 'p_k'] + columns = [ + "usaf", + "wban", + "datetime", + "latitude", + "longitude", + "elevation", + "windAngle", + "windSpeed", + "temperature", + "stationName", + "p_k", + ] isd = NoaaIsdWeather(start_time, end_time, cols=columns) noaa_df = isd.to_pandas_dataframe() df_filtered = noaa_df[noaa_df["usaf"].isin(usaf_list)] df_filtered.reset_index(drop=True) - print("Received {0} rows of training data between {1} and {2}".format( - df_filtered.shape[0], start_time, end_time)) + print( + "Received {0} rows of training data between {1} and {2}".format( + df_filtered.shape[0], start_time, end_time + ) + ) return df_filtered @@ -54,11 +110,12 @@ def get_noaa_data(start_time, end_time): try: ds = Dataset.get_by_name(ws, args.ds_name) end_time_last_slice = ds.data_changed_time.replace(tzinfo=None) - print("Dataset {0} last updated on {1}".format(args.ds_name, - end_time_last_slice)) + print("Dataset {0} last updated on {1}".format(args.ds_name, end_time_last_slice)) except Exception: print(traceback.format_exc()) - print("Dataset with name {0} not found, registering new dataset.".format(args.ds_name)) + print( + "Dataset with name {0} not found, registering new dataset.".format(args.ds_name) + ) register_dataset = True end_time = datetime(2021, 5, 1, 0, 0) end_time_last_slice = end_time - relativedelta(weeks=2) @@ -66,26 +123,35 @@ def get_noaa_data(start_time, end_time): train_df = get_noaa_data(end_time_last_slice, end_time) if train_df.size > 0: - print("Received {0} rows of new data after {1}.".format( - train_df.shape[0], end_time_last_slice)) - folder_name = "{}/{:04d}/{:02d}/{:02d}/{:02d}/{:02d}/{:02d}".format(args.ds_name, end_time.year, - end_time.month, end_time.day, - end_time.hour, end_time.minute, - end_time.second) + print( + "Received {0} rows of new data after {1}.".format( + train_df.shape[0], end_time_last_slice + ) + ) + folder_name = "{}/{:04d}/{:02d}/{:02d}/{:02d}/{:02d}/{:02d}".format( + args.ds_name, + end_time.year, + 
end_time.month, + end_time.day, + end_time.hour, + end_time.minute, + end_time.second, + ) file_path = "{0}/data.csv".format(folder_name) # Add a new partition to the registered dataset os.makedirs(folder_name, exist_ok=True) train_df.to_csv(file_path, index=False) - dstor.upload_files(files=[file_path], - target_path=folder_name, - overwrite=True, - show_progress=True) + dstor.upload_files( + files=[file_path], target_path=folder_name, overwrite=True, show_progress=True + ) else: print("No new data since {0}.".format(end_time_last_slice)) if register_dataset: - ds = Dataset.Tabular.from_delimited_files(dstor.path("{}/**/*.csv".format( - args.ds_name)), partition_format='/{partition_date:yyyy/MM/dd/HH/mm/ss}/data.csv') + ds = Dataset.Tabular.from_delimited_files( + dstor.path("{}/**/*.csv".format(args.ds_name)), + partition_format="/{partition_date:yyyy/MM/dd/HH/mm/ss}/data.csv", + ) ds.register(ws, name=args.ds_name) diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb index 610948ed5..8ca8cdaa7 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb @@ -1,725 +1,725 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Many Models with Backtesting - Automated ML\n", - "**_Backtest many models time series forecasts with Automated Machine Learning_**\n", - "\n", - "---" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For this notebook we are using a synthetic dataset to demonstrate the back testing in many model scenario. This allows us to check historical performance of AutoML on a historical data. To do that we step back on the backtesting period by the data set several times and split the data to train and test sets. Then these data sets are used for training and evaluation of model.
\n", - "\n", - "Thus, it is a quick way of evaluating AutoML as if it was in production. Here, we do not test historical performance of a particular model, for this see the [notebook](../forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb). Instead, the best model for every backtest iteration can be different since AutoML chooses the best model for a given training set.\n", - "![Backtesting](Backtesting.png)\n", - "\n", - "**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Prerequisites\n", - "You'll need to create a compute Instance by following the instructions in the [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1.0 Set up workspace, datastore, experiment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613003526897 - } - }, - "outputs": [], - "source": [ - "import os\n", - "\n", - "import azureml.core\n", - "from azureml.core import Workspace, Datastore\n", - "import numpy as np\n", - "import pandas as pd\n", - "\n", - "from pandas.tseries.frequencies import to_offset\n", - "\n", - "# Set up your workspace\n", - "ws = Workspace.from_config()\n", - "ws.get_details()\n", - "\n", - "# Set up your datastores\n", - "dstore = ws.get_default_datastore()\n", - "\n", - "output = {}\n", - "output[\"SDK version\"] = azureml.core.VERSION\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Default datastore name\"] = dstore.name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook is compatible with Azure ML SDK version 1.35.1 or later." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Choose an experiment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613003540729 - } - }, - "outputs": [], - "source": [ - "from azureml.core import Experiment\n", - "\n", - "experiment = Experiment(ws, \"automl-many-models-backtest\")\n", - "\n", - "print(\"Experiment name: \" + experiment.name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 2.0 Data\n", - "\n", - "#### 2.1 Data generation\n", - "For this notebook we will generate the artificial data set with two [time series IDs](https://docs.microsoft.com/en-us/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters?view=azure-ml-py). 
Then we will generate backtest folds and will upload it to the default BLOB storage and create a [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# simulate data: 2 grains - 700\n", - "TIME_COLNAME = \"date\"\n", - "TARGET_COLNAME = \"value\"\n", - "TIME_SERIES_ID_COLNAME = \"ts_id\"\n", - "\n", - "sample_size = 700\n", - "# Set the random seed for reproducibility of results.\n", - "np.random.seed(20)\n", - "X1 = pd.DataFrame(\n", - " {\n", - " TIME_COLNAME: pd.date_range(start=\"2018-01-01\", periods=sample_size),\n", - " TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),\n", - " TIME_SERIES_ID_COLNAME: \"ts_A\",\n", - " }\n", - ")\n", - "X2 = pd.DataFrame(\n", - " {\n", - " TIME_COLNAME: pd.date_range(start=\"2018-01-01\", periods=sample_size),\n", - " TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),\n", - " TIME_SERIES_ID_COLNAME: \"ts_B\",\n", - " }\n", - ")\n", - "\n", - "X = pd.concat([X1, X2], ignore_index=True, sort=False)\n", - "print(\"Simulated dataset contains {} rows \\n\".format(X.shape[0]))\n", - "X.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Now we will generate 8 backtesting folds with backtesting period of 7 days and with the same forecasting horizon. We will add the column \"backtest_iteration\", which will identify the backtesting period by the last training date." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "offset_type = \"7D\"\n", - "NUMBER_OF_BACKTESTS = 8 # number of train/test sets to generate\n", - "\n", - "dfs_train = []\n", - "dfs_test = []\n", - "for ts_id, df_one in X.groupby(TIME_SERIES_ID_COLNAME):\n", - "\n", - " data_end = df_one[TIME_COLNAME].max()\n", - "\n", - " for i in range(NUMBER_OF_BACKTESTS):\n", - " train_cutoff_date = data_end - to_offset(offset_type)\n", - " df_one = df_one.copy()\n", - " df_one[\"backtest_iteration\"] = \"iteration_\" + str(train_cutoff_date)\n", - " train = df_one[df_one[TIME_COLNAME] <= train_cutoff_date]\n", - " test = df_one[\n", - " (df_one[TIME_COLNAME] > train_cutoff_date)\n", - " & (df_one[TIME_COLNAME] <= data_end)\n", - " ]\n", - " data_end = train[TIME_COLNAME].max()\n", - " dfs_train.append(train)\n", - " dfs_test.append(test)\n", - "\n", - "X_train = pd.concat(dfs_train, sort=False, ignore_index=True)\n", - "X_test = pd.concat(dfs_test, sort=False, ignore_index=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### 2.2 Create the Tabular Data Set.\n", - "\n", - "A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n", - "\n", - "Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from Datastore.\n", - "\n", - "In this next step, we will upload the data and create a TabularDataset." 
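Before uploading, it can be worth sanity-checking the folds generated in the previous step. The sketch below prints the last training date for each backtest iteration and series, using only the names defined earlier in this notebook.

```
# Summarize the folds: last date included in each training window,
# per backtest iteration and time series ID.
fold_summary = (
    X_train.groupby(["backtest_iteration", TIME_SERIES_ID_COLNAME])[TIME_COLNAME]
    .max()
    .reset_index()
    .rename(columns={TIME_COLNAME: "last_train_date"})
)
print(fold_summary.to_string(index=False))
```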
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.data.dataset_factory import TabularDatasetFactory\n", - "\n", - "ds = ws.get_default_datastore()\n", - "# Upload saved data to the default data store.\n", - "train_data = TabularDatasetFactory.register_pandas_dataframe(\n", - " X_train, target=(ds, \"data_mm\"), name=\"data_train\"\n", - ")\n", - "test_data = TabularDatasetFactory.register_pandas_dataframe(\n", - " X_test, target=(ds, \"data_mm\"), name=\"data_test\"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3.0 Build the training pipeline\n", - "Now that the dataset, WorkSpace, and datastore are set up, we can put together a pipeline for training.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Choose a compute target\n", - "\n", - "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "\\*\\*Creation of AmlCompute takes approximately 5 minutes.**\n", - "\n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007037308 - } - }, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "\n", - "# Name your cluster\n", - "compute_name = \"backtest-mm\"\n", - "\n", - "\n", - "if compute_name in ws.compute_targets:\n", - " compute_target = ws.compute_targets[compute_name]\n", - " if compute_target and type(compute_target) is AmlCompute:\n", - " print(\"Found compute target: \" + compute_name)\n", - "else:\n", - " print(\"Creating a new compute target...\")\n", - " provisioning_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", - " )\n", - " # Create the compute target\n", - " compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n", - "\n", - " # Can poll for a minimum number of nodes and for a specific timeout.\n", - " # If no min node count is provided it will use the scale settings for the cluster\n", - " compute_target.wait_for_completion(\n", - " show_output=True, min_node_count=None, timeout_in_minutes=20\n", - " )\n", - "\n", - " # For a more detailed view of current cluster status, use the 'status' property\n", - " print(compute_target.status.serialize())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up training parameters\n", - "\n", - "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition. 
Please note, that in this case we are setting grain_column_names to be the time series ID column plus iteration, because we want to train a separate model for each time series and iteration.\n", - "\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **task** | forecasting |\n", - "| **primary_metric** | This is the metric that you want to optimize.
<br> Forecasting supports the following primary metrics:<br> normalized_root_mean_squared_error<br>
normalized_mean_absolute_error |\n", - "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", - "| **label_column_name** | The name of the label column. |\n", - "| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", - "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", - "| **time_column_name** | The name of your time column. |\n", - "| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n", - "| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n", - "| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007061544 - } - }, - "outputs": [], - "source": [ - "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", - " ManyModelsTrainParameters,\n", - ")\n", - "\n", - "partition_column_names = [TIME_SERIES_ID_COLNAME, \"backtest_iteration\"]\n", - "automl_settings = {\n", - " \"task\": \"forecasting\",\n", - " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", - " \"iteration_timeout_minutes\": 10, # This needs to be changed based on the dataset. We ask customer to explore how long training is taking before settings this value\n", - " \"iterations\": 15,\n", - " \"experiment_timeout_hours\": 0.25, # This also needs to be changed based on the dataset. For larger data set this number needs to be bigger.\n", - " \"label_column_name\": TARGET_COLNAME,\n", - " \"n_cross_validations\": 3,\n", - " \"time_column_name\": TIME_COLNAME,\n", - " \"max_horizon\": 6,\n", - " \"grain_column_names\": partition_column_names,\n", - " \"track_child_runs\": False,\n", - "}\n", - "\n", - "mm_paramters = ManyModelsTrainParameters(\n", - " automl_settings=automl_settings, partition_column_names=partition_column_names\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up many models pipeline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The process_count_per_node is based off the number of cores of the compute VM. 
The node_count will determine the number of master nodes to use, increasing the node count will speed up the training process.\n", - "\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **experiment** | The experiment used for training. |\n", - "| **train_data** | The file dataset to be used as input to the training run. |\n", - "| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long. |\n", - "| **process_count_per_node** | Process count per node, we recommend 2:1 ratio for number of cores: number of processes per node. eg. If node has 16 cores then configure 8 or less process count per node or optimal performance. |\n", - "| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n", - "\n", - "Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", - "\n", - "\n", - "training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n", - " experiment=experiment,\n", - " train_data=train_data,\n", - " compute_target=compute_target,\n", - " node_count=2,\n", - " process_count_per_node=2,\n", - " run_invocation_timeout=920,\n", - " train_pipeline_parameters=mm_paramters,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline\n", - "\n", - "training_pipeline = Pipeline(ws, steps=training_pipeline_steps)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Submit the pipeline to run\n", - "Next we submit our pipeline to run. The whole training pipeline takes about 20 minutes using a STANDARD_DS12_V2 VM with our current ParallelRunConfig setting." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_run = experiment.submit(training_pipeline)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Check the run status, if training_run is in completed state, continue to next section. Otherwise, check the portal for failures." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 4.0 Backtesting\n", - "Now that we selected the best AutoML model for each backtest fold, we will use these models to generate the forecasts and compare with the actuals." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up output dataset for inference data\n", - "Output of inference can be represented as [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object and OutputFileDatasetConfig can be registered as a dataset. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.data import OutputFileDatasetConfig\n", - "\n", - "output_inference_data_ds = OutputFileDatasetConfig(\n", - " name=\"many_models_inference_output\",\n", - " destination=(dstore, \"backtesting/inference_data/\"),\n", - ").register_on_complete(name=\"backtesting_data_ds\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For many models we need to provide the ManyModelsInferenceParameters object.\n", - "\n", - "#### ManyModelsInferenceParameters arguments\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **partition_column_names** | List of column names that identifies groups. |\n", - "| **target_column_name** | \\[Optional\\] Column name only if the inference dataset has the target. |\n", - "| **time_column_name** | Column name only if it is timeseries. |\n", - "| **many_models_run_id** | \\[Optional\\] Many models pipeline run id where models were trained. |\n", - "\n", - "#### get_many_models_batch_inference_steps arguments\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **experiment** | The experiment used for inference run. |\n", - "| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n", - "| **compute_target** | The compute target that runs the inference pipeline.|\n", - "| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n", - "| **process_count_per_node** | The number of processes per node.\n", - "| **train_run_id** | \\[Optional\\] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n", - "| **train_experiment_name** | \\[Optional\\] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n", - "| **process_count_per_node** | \\[Optional\\] The number of processes per node, by default it's 4. 
|" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", - "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", - " ManyModelsInferenceParameters,\n", - ")\n", - "\n", - "mm_parameters = ManyModelsInferenceParameters(\n", - " partition_column_names=partition_column_names,\n", - " time_column_name=TIME_COLNAME,\n", - " target_column_name=TARGET_COLNAME,\n", - ")\n", - "\n", - "inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n", - " experiment=experiment,\n", - " inference_data=test_data,\n", - " node_count=2,\n", - " process_count_per_node=2,\n", - " compute_target=compute_target,\n", - " run_invocation_timeout=300,\n", - " output_datastore=output_inference_data_ds,\n", - " train_run_id=training_run.id,\n", - " train_experiment_name=training_run.experiment.name,\n", - " inference_pipeline_parameters=mm_parameters,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline\n", - "\n", - "inference_pipeline = Pipeline(ws, steps=inference_steps)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "inference_run = experiment.submit(inference_pipeline)\n", - "inference_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 5.0 Retrieve results and calculate metrics\n", - "\n", - "The pipeline returns one file with the predictions for each times series ID and outputs the result to the forecasting_output Blob container. The details of the blob container is listed in 'forecasting_output.txt' under Outputs+logs. \n", - "\n", - "The next code snippet does the following:\n", - "1. Downloads the contents of the output folder that is passed in the parallel run step \n", - "2. Reads the parallel_run_step.txt file that has the predictions as pandas dataframe \n", - "3. Saves the table in csv format and \n", - "4. Displays the top 10 rows of the predictions" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline\n", - "\n", - "forecasting_results_name = \"forecasting_results\"\n", - "forecasting_output_name = \"many_models_inference_output\"\n", - "forecast_file = get_output_from_mm_pipeline(\n", - " inference_run, forecasting_results_name, forecasting_output_name\n", - ")\n", - "df = pd.read_csv(forecast_file, delimiter=\" \", header=None, parse_dates=[0])\n", - "df.columns = list(X_train.columns) + [\"predicted_level\"]\n", - "print(\n", - " \"Prediction has \", df.shape[0], \" rows. Here the first 10 rows are being displayed.\"\n", - ")\n", - "# Save the scv file with header to read it in the next step.\n", - "df.rename(columns={TARGET_COLNAME: \"actual_level\"}, inplace=True)\n", - "df.to_csv(os.path.join(forecasting_results_name, \"forecast.csv\"), index=False)\n", - "df.head(10)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## View metrics\n", - "We will read in the obtained results and run the helper script, which will generate metrics and create the plots of predicted versus actual values." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from assets.score import calculate_scores_and_build_plots\n", - "\n", - "backtesting_results = \"backtesting_mm_results\"\n", - "os.makedirs(backtesting_results, exist_ok=True)\n", - "calculate_scores_and_build_plots(\n", - " forecasting_results_name, backtesting_results, automl_settings\n", - ")\n", - "pd.DataFrame({\"File\": os.listdir(backtesting_results)})" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The directory contains a set of files with results:\n", - "- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains iteration identifier with the last training date as a suffix\n", - "- scores.csv contains all metrics. If data set contains several time series, the metrics are given for all combinations of time series id and iterations, as well as scores for all iterations and time series ids, which are marked as \"all_sets\"\n", - "- plots_fcst_vs_actual.pdf contains the predictions vs forecast plots for each iteration and, eash time series is saved as separate plot.\n", - "\n", - "For demonstration purposes we will display the table of metrics for one of the time series with ID \"ts0\". We will create the utility function, which will build the table with metrics." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def get_metrics_for_ts(all_metrics, ts):\n", - " \"\"\"\n", - " Get the metrics for the time series with ID ts and return it as pandas data frame.\n", - "\n", - " :param all_metrics: The table with all the metrics.\n", - " :param ts: The ID of a time series of interest.\n", - " :return: The pandas DataFrame with metrics for one time series.\n", - " \"\"\"\n", - " results_df = None\n", - " for ts_id, one_series in all_metrics.groupby(\"time_series_id\"):\n", - " if not ts_id.startswith(ts):\n", - " continue\n", - " iteration = ts_id.split(\"|\")[-1]\n", - " df = one_series[[\"metric_name\", \"metric\"]]\n", - " df.rename({\"metric\": iteration}, axis=1, inplace=True)\n", - " df.set_index(\"metric_name\", inplace=True)\n", - " if results_df is None:\n", - " results_df = df\n", - " else:\n", - " results_df = results_df.merge(\n", - " df, how=\"inner\", left_index=True, right_index=True\n", - " )\n", - " results_df.sort_index(axis=1, inplace=True)\n", - " return results_df\n", - "\n", - "\n", - "metrics_df = pd.read_csv(os.path.join(backtesting_results, \"scores.csv\"))\n", - "ts = \"ts_A\"\n", - "get_metrics_for_ts(metrics_df, ts)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Forecast vs actuals plots." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from IPython.display import IFrame\n", - "\n", - "IFrame(\"./backtesting_mm_results/plots_fcst_vs_actual.pdf\", width=800, height=300)" - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." 
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Many Models with Backtesting - Automated ML\n",
+ "**_Backtest many models time series forecasts with Automated Machine Learning_**\n",
+ "\n",
+ "---"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this notebook we use a synthetic dataset to demonstrate backtesting in the many models scenario. Backtesting lets us check the historical performance of AutoML: we step back over the data set by the backtesting period several times and split the data into train and test sets. These data sets are then used to train and evaluate a model.
\n", + "\n", + "Thus, it is a quick way of evaluating AutoML as if it was in production. Here, we do not test historical performance of a particular model, for this see the [notebook](../forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb). Instead, the best model for every backtest iteration can be different since AutoML chooses the best model for a given training set.\n", + "![Backtesting](Backtesting.png)\n", + "\n", + "**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Prerequisites\n", + "You'll need to create a compute Instance by following the instructions in the [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 1.0 Set up workspace, datastore, experiment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613003526897 } - ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "categories": [ - "how-to-use-azureml", - "automated-machine-learning" - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.9" + }, + "outputs": [], + "source": [ + "import os\n", + "\n", + "import azureml.core\n", + "from azureml.core import Workspace, Datastore\n", + "import numpy as np\n", + "import pandas as pd\n", + "\n", + "from pandas.tseries.frequencies import to_offset\n", + "\n", + "# Set up your workspace\n", + "ws = Workspace.from_config()\n", + "ws.get_details()\n", + "\n", + "# Set up your datastores\n", + "dstore = ws.get_default_datastore()\n", + "\n", + "output = {}\n", + "output[\"SDK version\"] = azureml.core.VERSION\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Default datastore name\"] = dstore.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook is compatible with Azure ML SDK version 1.35.1 or later." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Choose an experiment"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1613003540729
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from azureml.core import Experiment\n",
+ "\n",
+ "experiment = Experiment(ws, \"automl-many-models-backtest\")\n",
+ "\n",
+ "print(\"Experiment name: \" + experiment.name)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 2.0 Data\n",
+ "\n",
+ "#### 2.1 Data generation\n",
+ "For this notebook we will generate an artificial data set with two [time series IDs](https://docs.microsoft.com/en-us/python/api/azureml-automl-core/azureml.automl.core.forecasting_parameters.forecastingparameters?view=azure-ml-py). We will then generate backtest folds, upload them to the default Blob storage, and create a [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py)."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Simulate data: two time series (grains) of 700 daily observations each.\n",
+ "TIME_COLNAME = \"date\"\n",
+ "TARGET_COLNAME = \"value\"\n",
+ "TIME_SERIES_ID_COLNAME = \"ts_id\"\n",
+ "\n",
+ "sample_size = 700\n",
+ "# Set the random seed for reproducibility of results.\n",
+ "np.random.seed(20)\n",
+ "X1 = pd.DataFrame(\n",
+ "    {\n",
+ "        TIME_COLNAME: pd.date_range(start=\"2018-01-01\", periods=sample_size),\n",
+ "        TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),\n",
+ "        TIME_SERIES_ID_COLNAME: \"ts_A\",\n",
+ "    }\n",
+ ")\n",
+ "X2 = pd.DataFrame(\n",
+ "    {\n",
+ "        TIME_COLNAME: pd.date_range(start=\"2018-01-01\", periods=sample_size),\n",
+ "        TARGET_COLNAME: np.random.normal(loc=100, scale=20, size=sample_size),\n",
+ "        TIME_SERIES_ID_COLNAME: \"ts_B\",\n",
+ "    }\n",
+ ")\n",
+ "\n",
+ "X = pd.concat([X1, X2], ignore_index=True, sort=False)\n",
+ "print(\"Simulated dataset contains {} rows \\n\".format(X.shape[0]))\n",
+ "X.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we will generate 8 backtesting folds with a backtesting period of 7 days and a forecasting horizon of the same length. We will add the column \"backtest_iteration\", which identifies each backtesting period by its last training date."
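Before the full fold-generation loop in the next cell, here is a small standalone illustration (not part of the original notebook) of the cutoff arithmetic: the simulated series ends on 2019-12-01 (700 daily points from 2018-01-01), and each iteration moves the end of the data back by the 7-day offset, so the last 7 days of each window form the test set.

```python
# Standalone sketch of the cutoff stepping used by the fold-generation loop below.
import pandas as pd
from pandas.tseries.frequencies import to_offset

data_end = pd.Timestamp("2019-12-01")  # last date of the 700-point simulated series
for i in range(8):  # NUMBER_OF_BACKTESTS
    train_cutoff_date = data_end - to_offset("7D")
    print(
        f"iteration_{train_cutoff_date.date()}: "
        f"train through {train_cutoff_date.date()}, test through {data_end.date()}"
    )
    data_end = train_cutoff_date  # the next fold ends where this training set ends
```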
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "offset_type = \"7D\"\n",
+ "NUMBER_OF_BACKTESTS = 8  # number of train/test sets to generate\n",
+ "\n",
+ "dfs_train = []\n",
+ "dfs_test = []\n",
+ "for ts_id, df_one in X.groupby(TIME_SERIES_ID_COLNAME):\n",
+ "\n",
+ "    data_end = df_one[TIME_COLNAME].max()\n",
+ "\n",
+ "    for i in range(NUMBER_OF_BACKTESTS):\n",
+ "        # Step back by one backtesting period from the current end of data.\n",
+ "        train_cutoff_date = data_end - to_offset(offset_type)\n",
+ "        df_one = df_one.copy()\n",
+ "        df_one[\"backtest_iteration\"] = \"iteration_\" + str(train_cutoff_date)\n",
+ "        train = df_one[df_one[TIME_COLNAME] <= train_cutoff_date]\n",
+ "        test = df_one[\n",
+ "            (df_one[TIME_COLNAME] > train_cutoff_date)\n",
+ "            & (df_one[TIME_COLNAME] <= data_end)\n",
+ "        ]\n",
+ "        # The next fold ends where this fold's training data ends.\n",
+ "        data_end = train[TIME_COLNAME].max()\n",
+ "        dfs_train.append(train)\n",
+ "        dfs_test.append(test)\n",
+ "\n",
+ "X_train = pd.concat(dfs_train, sort=False, ignore_index=True)\n",
+ "X_test = pd.concat(dfs_test, sort=False, ignore_index=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### 2.2 Create the Tabular Data Set.\n",
+ "\n",
+ "A Datastore is a place where data can be stored and then made accessible to a compute, either by mounting or by copying the data to the compute target.\n",
+ "\n",
+ "Please refer to the [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from a Datastore.\n",
+ "\n",
+ "In this next step, we will upload the data and create a TabularDataset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from azureml.data.dataset_factory import TabularDatasetFactory\n",
+ "\n",
+ "ds = ws.get_default_datastore()\n",
+ "# Upload saved data to the default data store.\n",
+ "train_data = TabularDatasetFactory.register_pandas_dataframe(\n",
+ "    X_train, target=(ds, \"data_mm\"), name=\"data_train\"\n",
+ ")\n",
+ "test_data = TabularDatasetFactory.register_pandas_dataframe(\n",
+ "    X_test, target=(ds, \"data_mm\"), name=\"data_test\"\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 3.0 Build the training pipeline\n",
+ "Now that the dataset, Workspace, and datastore are set up, we can put together a pipeline for training.\n",
+ "\n",
+ "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Choose a compute target\n",
+ "\n",
+ "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
+ "\n",
+ "**Creation of AmlCompute takes approximately 5 minutes.**\n",
+ "\n",
+ "If an AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1613007037308
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from azureml.core.compute import ComputeTarget, AmlCompute\n",
+ "\n",
+ "# Name your cluster\n",
+ "compute_name = \"backtest-mm\"\n",
+ "\n",
+ "\n",
+ "if compute_name in ws.compute_targets:\n",
+ "    compute_target = ws.compute_targets[compute_name]\n",
+ "    if compute_target and type(compute_target) is AmlCompute:\n",
+ "        print(\"Found compute target: \" + compute_name)\n",
+ "else:\n",
+ "    print(\"Creating a new compute target...\")\n",
+ "    provisioning_config = AmlCompute.provisioning_configuration(\n",
+ "        vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n",
+ "    )\n",
+ "    # Create the compute target\n",
+ "    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n",
+ "\n",
+ "    # Can poll for a minimum number of nodes and for a specific timeout.\n",
+ "    # If no min node count is provided, it will use the scale settings for the cluster.\n",
+ "    compute_target.wait_for_completion(\n",
+ "        show_output=True, min_node_count=None, timeout_in_minutes=20\n",
+ "    )\n",
+ "\n",
+ "    # For a more detailed view of current cluster status, use the 'status' property.\n",
+ "    print(compute_target.status.serialize())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Set up training parameters\n",
+ "\n",
+ "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings, including the name of the time column, the maximum forecast horizon, and the partition column names. Please note that in this case we are setting grain_column_names to be the time series ID column plus iteration, because we want to train a separate model for each time series and iteration.\n",
+ "\n",
+ "| Property | Description|\n",
+ "| :--------------- | :------------------- |\n",
+ "| **task** | forecasting |\n",
+ "| **primary_metric** | This is the metric that you want to optimize.
Forecasting supports the following primary metrics:
normalized_root_mean_squared_error,
normalized_mean_absolute_error |\n",
+ "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n",
+ "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n",
+ "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n",
+ "| **label_column_name** | The name of the label column. |\n",
+ "| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n",
+ "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n",
+ "| **time_column_name** | The name of your time column. |\n",
+ "| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n",
+ "| **track_child_runs** | Flag to disable tracking of child runs. Only the best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n",
+ "| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {
+ "gather": {
+ "logged": 1613007061544
+ }
+ },
+ "outputs": [],
+ "source": [
+ "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n",
+ "    ManyModelsTrainParameters,\n",
+ ")\n",
+ "\n",
+ "partition_column_names = [TIME_SERIES_ID_COLNAME, \"backtest_iteration\"]\n",
+ "automl_settings = {\n",
+ "    \"task\": \"forecasting\",\n",
+ "    \"primary_metric\": \"normalized_root_mean_squared_error\",\n",
+ "    \"iteration_timeout_minutes\": 10,  # This needs to be changed based on the dataset. We ask customers to explore how long training takes before setting this value.\n",
+ "    \"iterations\": 15,\n",
+ "    \"experiment_timeout_hours\": 0.25,  # This also needs to be changed based on the dataset. For larger data sets this number needs to be bigger.\n",
+ "    \"label_column_name\": TARGET_COLNAME,\n",
+ "    \"n_cross_validations\": 3,\n",
+ "    \"time_column_name\": TIME_COLNAME,\n",
+ "    \"max_horizon\": 6,\n",
+ "    \"grain_column_names\": partition_column_names,\n",
+ "    \"track_child_runs\": False,\n",
+ "}\n",
+ "\n",
+ "mm_parameters = ManyModelsTrainParameters(\n",
+ "    automl_settings=automl_settings, partition_column_names=partition_column_names\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Set up many models pipeline"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The process_count_per_node is based on the number of cores of the compute VM. 
The node_count determines the number of compute nodes to use; increasing the node count will speed up the training process.\n",
+ "\n",
+ "| Property | Description|\n",
+ "| :--------------- | :------------------- |\n",
+ "| **experiment** | The experiment used for training. |\n",
+ "| **train_data** | The file dataset to be used as input to the training run. |\n",
+ "| **node_count** | The number of compute nodes to be used for running the user script. We recommend starting with 3 and increasing the node_count if the training time is taking too long. |\n",
+ "| **process_count_per_node** | Process count per node; we recommend a 2:1 ratio of number of cores to number of processes per node, e.g. if a node has 16 cores, configure a process count of 8 or less for optimal performance. |\n",
+ "| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n",
+ "\n",
+ "Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n",
+ "\n",
+ "\n",
+ "training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n",
+ "    experiment=experiment,\n",
+ "    train_data=train_data,\n",
+ "    compute_target=compute_target,\n",
+ "    node_count=2,\n",
+ "    process_count_per_node=2,\n",
+ "    run_invocation_timeout=920,\n",
+ "    train_pipeline_parameters=mm_parameters,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from azureml.pipeline.core import Pipeline\n",
+ "\n",
+ "training_pipeline = Pipeline(ws, steps=training_pipeline_steps)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Submit the pipeline to run\n",
+ "Next we submit our pipeline to run. The whole training pipeline takes about 20 minutes using a STANDARD_DS12_V2 VM with our current ParallelRunConfig setting."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "training_run = experiment.submit(training_pipeline)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "training_run.wait_for_completion(show_output=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Check the run status; if training_run is in a completed state, continue to the next section. Otherwise, check the portal for failures."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## 4.0 Backtesting\n",
+ "Now that we have selected the best AutoML model for each backtest fold, we will use these models to generate the forecasts and compare them with the actuals."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Set up output dataset for inference data\n",
+ "The output of inference can be represented as an [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object, and an OutputFileDatasetConfig can be registered as a dataset. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.data import OutputFileDatasetConfig\n", + "\n", + "output_inference_data_ds = OutputFileDatasetConfig(\n", + " name=\"many_models_inference_output\",\n", + " destination=(dstore, \"backtesting/inference_data/\"),\n", + ").register_on_complete(name=\"backtesting_data_ds\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For many models we need to provide the ManyModelsInferenceParameters object.\n", + "\n", + "#### ManyModelsInferenceParameters arguments\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **partition_column_names** | List of column names that identifies groups. |\n", + "| **target_column_name** | \\[Optional\\] Column name only if the inference dataset has the target. |\n", + "| **time_column_name** | Column name only if it is timeseries. |\n", + "| **many_models_run_id** | \\[Optional\\] Many models pipeline run id where models were trained. |\n", + "\n", + "#### get_many_models_batch_inference_steps arguments\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **experiment** | The experiment used for inference run. |\n", + "| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n", + "| **compute_target** | The compute target that runs the inference pipeline.|\n", + "| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n", + "| **process_count_per_node** | The number of processes per node.\n", + "| **train_run_id** | \\[Optional\\] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n", + "| **train_experiment_name** | \\[Optional\\] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n", + "| **process_count_per_node** | \\[Optional\\] The number of processes per node, by default it's 4. 
|" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", + "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", + " ManyModelsInferenceParameters,\n", + ")\n", + "\n", + "mm_parameters = ManyModelsInferenceParameters(\n", + " partition_column_names=partition_column_names,\n", + " time_column_name=TIME_COLNAME,\n", + " target_column_name=TARGET_COLNAME,\n", + ")\n", + "\n", + "inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n", + " experiment=experiment,\n", + " inference_data=test_data,\n", + " node_count=2,\n", + " process_count_per_node=2,\n", + " compute_target=compute_target,\n", + " run_invocation_timeout=300,\n", + " output_datastore=output_inference_data_ds,\n", + " train_run_id=training_run.id,\n", + " train_experiment_name=training_run.experiment.name,\n", + " inference_pipeline_parameters=mm_parameters,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Pipeline\n", + "\n", + "inference_pipeline = Pipeline(ws, steps=inference_steps)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "inference_run = experiment.submit(inference_pipeline)\n", + "inference_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5.0 Retrieve results and calculate metrics\n", + "\n", + "The pipeline returns one file with the predictions for each times series ID and outputs the result to the forecasting_output Blob container. The details of the blob container is listed in 'forecasting_output.txt' under Outputs+logs. \n", + "\n", + "The next code snippet does the following:\n", + "1. Downloads the contents of the output folder that is passed in the parallel run step \n", + "2. Reads the parallel_run_step.txt file that has the predictions as pandas dataframe \n", + "3. Saves the table in csv format and \n", + "4. Displays the top 10 rows of the predictions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline\n", + "\n", + "forecasting_results_name = \"forecasting_results\"\n", + "forecasting_output_name = \"many_models_inference_output\"\n", + "forecast_file = get_output_from_mm_pipeline(\n", + " inference_run, forecasting_results_name, forecasting_output_name\n", + ")\n", + "df = pd.read_csv(forecast_file, delimiter=\" \", header=None, parse_dates=[0])\n", + "df.columns = list(X_train.columns) + [\"predicted_level\"]\n", + "print(\n", + " \"Prediction has \", df.shape[0], \" rows. Here the first 10 rows are being displayed.\"\n", + ")\n", + "# Save the scv file with header to read it in the next step.\n", + "df.rename(columns={TARGET_COLNAME: \"actual_level\"}, inplace=True)\n", + "df.to_csv(os.path.join(forecasting_results_name, \"forecast.csv\"), index=False)\n", + "df.head(10)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## View metrics\n", + "We will read in the obtained results and run the helper script, which will generate metrics and create the plots of predicted versus actual values." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from assets.score import calculate_scores_and_build_plots\n",
+ "\n",
+ "backtesting_results = \"backtesting_mm_results\"\n",
+ "os.makedirs(backtesting_results, exist_ok=True)\n",
+ "calculate_scores_and_build_plots(\n",
+ "    forecasting_results_name, backtesting_results, automl_settings\n",
+ ")\n",
+ "pd.DataFrame({\"File\": os.listdir(backtesting_results)})"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The directory contains a set of files with results:\n",
+ "- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains the iteration identifier, with the last training date as a suffix.\n",
+ "- scores.csv contains all metrics. If the data set contains several time series, the metrics are given for every combination of time series ID and iteration; aggregate scores over all iterations and time series IDs are marked as \"all_sets\".\n",
+ "- plots_fcst_vs_actual.pdf contains the predicted versus actual plots; each combination of iteration and time series is saved as a separate plot.\n",
+ "\n",
+ "For demonstration purposes we will display the table of metrics for the time series with ID \"ts_A\". We will create a utility function that builds the table of metrics."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_metrics_for_ts(all_metrics, ts):\n",
+ "    \"\"\"\n",
+ "    Get the metrics for the time series with ID ts and return it as pandas data frame.\n",
+ "\n",
+ "    :param all_metrics: The table with all the metrics.\n",
+ "    :param ts: The ID of a time series of interest.\n",
+ "    :return: The pandas DataFrame with metrics for one time series.\n",
+ "    \"\"\"\n",
+ "    results_df = None\n",
+ "    for ts_id, one_series in all_metrics.groupby(\"time_series_id\"):\n",
+ "        if not ts_id.startswith(ts):\n",
+ "            continue\n",
+ "        iteration = ts_id.split(\"|\")[-1]\n",
+ "        # Copy the slice so the rename below does not touch all_metrics.\n",
+ "        df = one_series[[\"metric_name\", \"metric\"]].copy()\n",
+ "        df.rename({\"metric\": iteration}, axis=1, inplace=True)\n",
+ "        df.set_index(\"metric_name\", inplace=True)\n",
+ "        if results_df is None:\n",
+ "            results_df = df\n",
+ "        else:\n",
+ "            results_df = results_df.merge(\n",
+ "                df, how=\"inner\", left_index=True, right_index=True\n",
+ "            )\n",
+ "    results_df.sort_index(axis=1, inplace=True)\n",
+ "    return results_df\n",
+ "\n",
+ "\n",
+ "metrics_df = pd.read_csv(os.path.join(backtesting_results, \"scores.csv\"))\n",
+ "ts = \"ts_A\"\n",
+ "get_metrics_for_ts(metrics_df, ts)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Forecast vs actuals plots."
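Before moving on to the plots, the same scores allow one more optional view: averaging each metric for the chosen series across backtest iterations. This small follow-on (not part of the original notebook) reuses `metrics_df`, `ts`, and `get_metrics_for_ts` from the cells above:

```python
# Average each metric across all backtest iterations for the chosen time series.
per_iteration = get_metrics_for_ts(metrics_df, ts)
print(per_iteration.mean(axis=1))
```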
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import IFrame\n", + "\n", + "IFrame(\"./backtesting_mm_results/plots_fcst_vs_actual.pdf\", width=800, height=300)" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "categories": [ + "how-to-use-azureml", + "automated-machine-learning" + ], + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/update_env.yml b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/update_env.yml new file mode 100644 index 000000000..d0b193dab --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-many-models/update_env.yml @@ -0,0 +1,3 @@ +dependencies: +- pip: + - azureml-contrib-automl-pipeline-steps diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb index 64cada912..81104a4f9 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb @@ -1,719 +1,719 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License.\n", - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl-forecasting-function.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated MachineLearning\n", - "_**The model backtesting**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "2. [Setup](#Setup)\n", - "3. [Data](#Data)\n", - "4. [Prepare remote compute and data.](#prepare_remote)\n", - "5. [Create the configuration for AutoML backtesting](#train)\n", - "6. [Backtest AutoML](#backtest_automl)\n", - "7. [View metrics](#Metrics)\n", - "8. [Backtest the best model](#backtest_model)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "Model backtesting is used to evaluate its performance on historical data. To do that we step back on the backtesting period by the data set several times and split the data to train and test sets. Then these data sets are used for training and evaluation of model.
\n", - "This notebook is intended to demonstrate backtesting on a single model, this is the best solution for small data sets with a few or one time series in it. For scenarios where we would like to choose the best AutoML model for every backtest iteration, please see [AutoML Forecasting Backtest Many Models Example](../forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb) notebook.\n", - "![Backtesting](Backtesting.png)\n", - "This notebook demonstrates two ways of backtesting:\n", - "- AutoML backtesting: we will train separate AutoML models for historical data\n", - "- Model backtesting: from the first run we will select the best model trained on the most recent data, retrain it on the past data and evaluate." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import numpy as np\n", - "import pandas as pd\n", - "import shutil\n", - "\n", - "import azureml.core\n", - "from azureml.core import Experiment, Model, Workspace" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook is compatible with Azure ML SDK version 1.35.1 or later." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As part of the setup you have already created a Workspace." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "output = {}\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"SKU\"] = ws.sku\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Data\n", - "For the demonstration purposes we will simulate one year of daily data. To do this we need to specify the following parameters: time column name, time series ID column names and label column name. Our intention is to forecast for two weeks ahead." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "TIME_COLUMN_NAME = \"date\"\n", - "TIME_SERIES_ID_COLUMN_NAMES = \"time_series_id\"\n", - "LABEL_COLUMN_NAME = \"y\"\n", - "FORECAST_HORIZON = 14\n", - "FREQUENCY = \"D\"\n", - "\n", - "\n", - "def simulate_timeseries_data(\n", - " train_len: int,\n", - " test_len: int,\n", - " time_column_name: str,\n", - " target_column_name: str,\n", - " time_series_id_column_name: str,\n", - " time_series_number: int = 1,\n", - " freq: str = \"H\",\n", - "):\n", - " \"\"\"\n", - " Return the time series of designed length.\n", - "\n", - " :param train_len: The length of training data (one series).\n", - " :type train_len: int\n", - " :param test_len: The length of testing data (one series).\n", - " :type test_len: int\n", - " :param time_column_name: The desired name of a time column.\n", - " :type time_column_name: str\n", - " :param time_series_number: The number of time series in the data set.\n", - " :type time_series_number: int\n", - " :param freq: The frequency string representing pandas offset.\n", - " see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n", - " :type freq: str\n", - " :returns: the tuple of train and test data sets.\n", - " :rtype: tuple\n", - "\n", - " \"\"\"\n", - " data_train = [] # type: List[pd.DataFrame]\n", - " data_test = [] # type: List[pd.DataFrame]\n", - " data_length = train_len + test_len\n", - " for i in range(time_series_number):\n", - " X = pd.DataFrame(\n", - " {\n", - " time_column_name: pd.date_range(\n", - " start=\"2000-01-01\", periods=data_length, freq=freq\n", - " ),\n", - " target_column_name: np.arange(data_length).astype(float)\n", - " + np.random.rand(data_length)\n", - " + i * 5,\n", - " \"ext_predictor\": np.asarray(range(42, 42 + data_length)),\n", - " time_series_id_column_name: np.repeat(\"ts{}\".format(i), data_length),\n", - " }\n", - " )\n", - " data_train.append(X[:train_len])\n", - " data_test.append(X[train_len:])\n", - " train = pd.concat(data_train)\n", - " label_train = train.pop(target_column_name).values\n", - " test = pd.concat(data_test)\n", - " label_test = test.pop(target_column_name).values\n", - " return train, label_train, test, label_test\n", - "\n", - "\n", - "n_test_periods = FORECAST_HORIZON\n", - "n_train_periods = 365\n", - "X_train, y_train, X_test, y_test = simulate_timeseries_data(\n", - " train_len=n_train_periods,\n", - " test_len=n_test_periods,\n", - " time_column_name=TIME_COLUMN_NAME,\n", - " target_column_name=LABEL_COLUMN_NAME,\n", - " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAMES,\n", - " time_series_number=2,\n", - " freq=FREQUENCY,\n", - ")\n", - "X_train[LABEL_COLUMN_NAME] = y_train" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see what the training data looks like." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "X_train.tail()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Prepare remote compute and data. \n", - "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. 
A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.data.dataset_factory import TabularDatasetFactory\n", - "\n", - "ds = ws.get_default_datastore()\n", - "# Upload saved data to the default data store.\n", - "train_data = TabularDatasetFactory.register_pandas_dataframe(\n", - " X_train, target=(ds, \"data\"), name=\"data_backtest\"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You will need to create a compute target for backtesting. In this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute), you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "amlcompute_cluster_name = \"backtest-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print(\"Found existing cluster, use it.\")\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", - " )\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Create the configuration for AutoML backtesting \n", - "\n", - "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition.\n", - "\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **task** | forecasting |\n", - "| **primary_metric** | This is the metric that you want to optimize.
Forecasting supports the following primary metrics
normalized_root_mean_squared_error
normalized_mean_absolute_error |\n", - "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", - "| **label_column_name** | The name of the label column. |\n", - "| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", - "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", - "| **time_column_name** | The name of your time column. |\n", - "| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"task\": \"forecasting\",\n", - " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", - " \"iteration_timeout_minutes\": 10, # This needs to be changed based on the dataset. We ask customer to explore how long training is taking before settings this value\n", - " \"iterations\": 15,\n", - " \"experiment_timeout_hours\": 1, # This also needs to be changed based on the dataset. For larger data set this number needs to be bigger.\n", - " \"label_column_name\": LABEL_COLUMN_NAME,\n", - " \"n_cross_validations\": 3,\n", - " \"time_column_name\": TIME_COLUMN_NAME,\n", - " \"max_horizon\": FORECAST_HORIZON,\n", - " \"track_child_runs\": False,\n", - " \"grain_column_names\": TIME_SERIES_ID_COLUMN_NAMES,\n", - "}" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Backtest AutoML \n", - "First we set backtesting parameters: we will step back by 30 days and will make 5 such steps; for each step we will forecast for next two weeks." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# The number of periods to step back on each backtest iteration.\n", - "BACKTESTING_PERIOD = 30\n", - "# The number of times we will back test the model.\n", - "NUMBER_OF_BACKTESTS = 5" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To train AutoML on backtesting folds we will use the [Azure Machine Learning pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines). It will generate backtest folds, then train model for each of them and calculate the accuracy metrics. To run pipeline, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve (here, it is a forecasting), while a Run corresponds to a specific approach to the problem." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from uuid import uuid1\n", - "\n", - "from pipeline_helper import get_backtest_pipeline\n", - "\n", - "pipeline_exp = Experiment(ws, \"automl-backtesting\")\n", - "\n", - "# We will create the unique identifier to mark our models.\n", - "model_uid = str(uuid1())\n", - "\n", - "pipeline = get_backtest_pipeline(\n", - " experiment=pipeline_exp,\n", - " dataset=train_data,\n", - " # The STANDARD_DS12_V2 has 4 vCPU per node, we will set 2 process per node to be safe.\n", - " process_per_node=2,\n", - " # The maximum number of nodes for our compute is 6.\n", - " node_count=6,\n", - " compute_target=compute_target,\n", - " automl_settings=automl_settings,\n", - " step_size=BACKTESTING_PERIOD,\n", - " step_number=NUMBER_OF_BACKTESTS,\n", - " model_uid=model_uid,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Run the pipeline and wait for results." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "pipeline_run = pipeline_exp.submit(pipeline)\n", - "pipeline_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "After the run is complete, we can download the results. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "metrics_output = pipeline_run.get_pipeline_output(\"results\")\n", - "metrics_output.download(\"backtest_metrics\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## View metrics\n", - "To distinguish these metrics from the model backtest, which we will obtain in the next section, we will move the directory with metrics out of the backtest_metrics and will remove the parent folder. We will create the utility function for that." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def copy_scoring_directory(new_name):\n", - " scores_path = os.path.join(\"backtest_metrics\", \"azureml\")\n", - " directory_list = [os.path.join(scores_path, d) for d in os.listdir(scores_path)]\n", - " latest_file = max(directory_list, key=os.path.getctime)\n", - " print(\n", - " f\"The output directory {latest_file} was created on {pd.Timestamp(os.path.getctime(latest_file), unit='s')} GMT.\"\n", - " )\n", - " shutil.move(os.path.join(latest_file, \"results\"), new_name)\n", - " shutil.rmtree(\"backtest_metrics\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Move the directory and list its contents." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "copy_scoring_directory(\"automl_backtest\")\n", - "pd.DataFrame({\"File\": os.listdir(\"automl_backtest\")})" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The directory contains a set of files with results:\n", - "- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains iteration identifier with the last training date as a suffix\n", - "- scores.csv contains all metrics. 
If data set contains several time series, the metrics are given for all combinations of time series id and iterations, as well as scores for all iterations and time series id are marked as \"all_sets\"\n", - "- plots_fcst_vs_actual.pdf contains the predictions vs forecast plots for each iteration and time series.\n", - "\n", - "For demonstration purposes we will display the table of metrics for one of the time series with ID \"ts0\". Again, we will create the utility function, which will be re used in model backtesting." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def get_metrics_for_ts(all_metrics, ts):\n", - " \"\"\"\n", - " Get the metrics for the time series with ID ts and return it as pandas data frame.\n", - "\n", - " :param all_metrics: The table with all the metrics.\n", - " :param ts: The ID of a time series of interest.\n", - " :return: The pandas DataFrame with metrics for one time series.\n", - " \"\"\"\n", - " results_df = None\n", - " for ts_id, one_series in all_metrics.groupby(\"time_series_id\"):\n", - " if not ts_id.startswith(ts):\n", - " continue\n", - " iteration = ts_id.split(\"|\")[-1]\n", - " df = one_series[[\"metric_name\", \"metric\"]]\n", - " df.rename({\"metric\": iteration}, axis=1, inplace=True)\n", - " df.set_index(\"metric_name\", inplace=True)\n", - " if results_df is None:\n", - " results_df = df\n", - " else:\n", - " results_df = results_df.merge(\n", - " df, how=\"inner\", left_index=True, right_index=True\n", - " )\n", - " results_df.sort_index(axis=1, inplace=True)\n", - " return results_df\n", - "\n", - "\n", - "metrics_df = pd.read_csv(os.path.join(\"automl_backtest\", \"scores.csv\"))\n", - "ts_id = \"ts0\"\n", - "get_metrics_for_ts(metrics_df, ts_id)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Forecast vs actuals plots." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from IPython.display import IFrame\n", - "\n", - "IFrame(\"./automl_backtest/plots_fcst_vs_actual.pdf\", width=800, height=300)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Backtest the best model \n", - "\n", - "For model backtesting we will use the same parameters we used to backtest AutoML. All the models, we have obtained in the previous run were registered in our workspace. To identify the model, each was assigned a tag with the last trainig date." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "model_list = Model.list(ws, tags={\"experiment\": \"automl-backtesting\"})\n", - "model_data = {\"name\": [], \"last_training_date\": []}\n", - "for model in model_list:\n", - " if (\n", - " \"last_training_date\" not in model.tags\n", - " or \"model_uid\" not in model.tags\n", - " or model.tags[\"model_uid\"] != model_uid\n", - " ):\n", - " continue\n", - " model_data[\"name\"].append(model.name)\n", - " model_data[\"last_training_date\"].append(\n", - " pd.Timestamp(model.tags[\"last_training_date\"])\n", - " )\n", - "df_models = pd.DataFrame(model_data)\n", - "df_models.sort_values([\"last_training_date\"], inplace=True)\n", - "df_models.reset_index(inplace=True, drop=True)\n", - "df_models" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We will backtest the model trained on the most recet data." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "model_name = df_models[\"name\"].iloc[-1]\n", - "model_name" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrain the models.\n", - "Assemble the pipeline, which will retrain the best model from AutoML run on historical data." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "pipeline_exp = Experiment(ws, \"model-backtesting\")\n", - "\n", - "pipeline = get_backtest_pipeline(\n", - " experiment=pipeline_exp,\n", - " dataset=train_data,\n", - " # The STANDARD_DS12_V2 has 4 vCPU per node, we will set 2 process per node to be safe.\n", - " process_per_node=2,\n", - " # The maximum number of nodes for our compute is 6.\n", - " node_count=6,\n", - " compute_target=compute_target,\n", - " automl_settings=automl_settings,\n", - " step_size=BACKTESTING_PERIOD,\n", - " step_number=NUMBER_OF_BACKTESTS,\n", - " model_name=model_name,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Launch the backtesting pipeline." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "pipeline_run = pipeline_exp.submit(pipeline)\n", - "pipeline_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The metrics are stored in the pipeline output named \"score\". The next code will download the table with metrics." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "metrics_output = pipeline_run.get_pipeline_output(\"results\")\n", - "metrics_output.download(\"backtest_metrics\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Again, we will copy the data files from the downloaded directory, but in this case we will call the folder \"model_backtest\"; it will contain the same files as the one for AutoML backtesting." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "copy_scoring_directory(\"model_backtest\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Finally, we will display the metrics." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "model_metrics_df = pd.read_csv(os.path.join(\"model_backtest\", \"scores.csv\"))\n", - "get_metrics_for_ts(model_metrics_df, ts_id)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Forecast vs actuals plots." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from IPython.display import IFrame\n", - "\n", - "IFrame(\"./model_backtest/plots_fcst_vs_actual.pdf\", width=800, height=300)" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License.\n", + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl-forecasting-function.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated MachineLearning\n", + "_**The model backtesting**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "2. 
[Setup](#Setup)\n", + "3. [Data](#Data)\n", + "4. [Prepare remote compute and data.](#prepare_remote)\n", + "5. [Create the configuration for AutoML backtesting](#train)\n", + "6. [Backtest AutoML](#backtest_automl)\n", + "7. [View metrics](#Metrics)\n", + "8. [Backtest the best model](#backtest_model)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "Model backtesting is used to evaluate a model's performance on historical data. To do that, we step back through the data set by the backtesting period several times, each time splitting the data into train and test sets. These data sets are then used to train and evaluate the model.
\n", + "This notebook is intended to demonstrate backtesting on a single model, this is the best solution for small data sets with a few or one time series in it. For scenarios where we would like to choose the best AutoML model for every backtest iteration, please see [AutoML Forecasting Backtest Many Models Example](../forecasting-backtest-many-models/auto-ml-forecasting-backtest-many-models.ipynb) notebook.\n", + "![Backtesting](Backtesting.png)\n", + "This notebook demonstrates two ways of backtesting:\n", + "- AutoML backtesting: we will train separate AutoML models for historical data\n", + "- Model backtesting: from the first run we will select the best model trained on the most recent data, retrain it on the past data and evaluate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import numpy as np\n", + "import pandas as pd\n", + "import shutil\n", + "\n", + "import azureml.core\n", + "from azureml.core import Experiment, Model, Workspace" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook is compatible with Azure ML SDK version 1.35.1 or later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As part of the setup you have already created a Workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"SKU\"] = ws.sku\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data\n", + "For the demonstration purposes we will simulate one year of daily data. To do this we need to specify the following parameters: time column name, time series ID column names and label column name. Our intention is to forecast for two weeks ahead." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "TIME_COLUMN_NAME = \"date\"\n", + "TIME_SERIES_ID_COLUMN_NAMES = \"time_series_id\"\n", + "LABEL_COLUMN_NAME = \"y\"\n", + "FORECAST_HORIZON = 14\n", + "FREQUENCY = \"D\"\n", + "\n", + "\n", + "def simulate_timeseries_data(\n", + " train_len: int,\n", + " test_len: int,\n", + " time_column_name: str,\n", + " target_column_name: str,\n", + " time_series_id_column_name: str,\n", + " time_series_number: int = 1,\n", + " freq: str = \"H\",\n", + "):\n", + " \"\"\"\n", + " Return the time series of designed length.\n", + "\n", + " :param train_len: The length of training data (one series).\n", + " :type train_len: int\n", + " :param test_len: The length of testing data (one series).\n", + " :type test_len: int\n", + " :param time_column_name: The desired name of a time column.\n", + " :type time_column_name: str\n", + " :param time_series_number: The number of time series in the data set.\n", + " :type time_series_number: int\n", + " :param freq: The frequency string representing pandas offset.\n", + " see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n", + " :type freq: str\n", + " :returns: the tuple of train and test data sets.\n", + " :rtype: tuple\n", + "\n", + " \"\"\"\n", + " data_train = [] # type: List[pd.DataFrame]\n", + " data_test = [] # type: List[pd.DataFrame]\n", + " data_length = train_len + test_len\n", + " for i in range(time_series_number):\n", + " X = pd.DataFrame(\n", + " {\n", + " time_column_name: pd.date_range(\n", + " start=\"2000-01-01\", periods=data_length, freq=freq\n", + " ),\n", + " target_column_name: np.arange(data_length).astype(float)\n", + " + np.random.rand(data_length)\n", + " + i * 5,\n", + " \"ext_predictor\": np.asarray(range(42, 42 + data_length)),\n", + " time_series_id_column_name: np.repeat(\"ts{}\".format(i), data_length),\n", + " }\n", + " )\n", + " data_train.append(X[:train_len])\n", + " data_test.append(X[train_len:])\n", + " train = pd.concat(data_train)\n", + " label_train = train.pop(target_column_name).values\n", + " test = pd.concat(data_test)\n", + " label_test = test.pop(target_column_name).values\n", + " return train, label_train, test, label_test\n", + "\n", + "\n", + "n_test_periods = FORECAST_HORIZON\n", + "n_train_periods = 365\n", + "X_train, y_train, X_test, y_test = simulate_timeseries_data(\n", + " train_len=n_train_periods,\n", + " test_len=n_test_periods,\n", + " time_column_name=TIME_COLUMN_NAME,\n", + " target_column_name=LABEL_COLUMN_NAME,\n", + " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAMES,\n", + " time_series_number=2,\n", + " freq=FREQUENCY,\n", + ")\n", + "X_train[LABEL_COLUMN_NAME] = y_train" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's see what the training data looks like." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "X_train.tail()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Prepare remote compute and data. \n", + "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. 
A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.data.dataset_factory import TabularDatasetFactory\n", + "\n", + "ds = ws.get_default_datastore()\n", + "# Upload saved data to the default data store.\n", + "train_data = TabularDatasetFactory.register_pandas_dataframe(\n", + " X_train, target=(ds, \"data\"), name=\"data_backtest\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You will need to create a compute target for backtesting. In this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute), you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "amlcompute_cluster_name = \"backtest-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create the configuration for AutoML backtesting \n", + "\n", + "This dictionary defines the AutoML settings. For this forecasting task we need to define several settings, including the name of the time column, the forecast horizon, and the time series ID column names.\n", + "\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **task** | forecasting |\n", + "| **primary_metric** | This is the metric that you want to optimize.<br>Forecasting supports the following primary metrics<br>normalized_root_mean_squared_error<br>normalized_mean_absolute_error |\n", + "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", + "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", + "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", + "| **label_column_name** | The name of the label column. |\n", + "| **max_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", + "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", + "| **time_column_name** | The name of your time column. |\n", + "| **grain_column_names** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_settings = {\n", + " \"task\": \"forecasting\",\n", + " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", + " \"iteration_timeout_minutes\": 10, # This needs to be changed based on the dataset. We ask customers to explore how long training takes before setting this value\n", + " \"iterations\": 15,\n", + " \"experiment_timeout_hours\": 1, # This also needs to be changed based on the dataset. For larger data sets this number needs to be bigger.\n", + " \"label_column_name\": LABEL_COLUMN_NAME,\n", + " \"n_cross_validations\": 3,\n", + " \"time_column_name\": TIME_COLUMN_NAME,\n", + " \"max_horizon\": FORECAST_HORIZON,\n", + " \"track_child_runs\": False,\n", + " \"grain_column_names\": TIME_SERIES_ID_COLUMN_NAMES,\n", + "}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Backtest AutoML \n", + "First we set the backtesting parameters: we will step back by 30 days and will make 5 such steps; for each step we will forecast for the next two weeks." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# The number of periods to step back on each backtest iteration.\n", + "BACKTESTING_PERIOD = 30\n", + "# The number of times we will back test the model.\n", + "NUMBER_OF_BACKTESTS = 5" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To train AutoML on backtesting folds we will use the [Azure Machine Learning pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/concept-ml-pipelines). It will generate the backtest folds, then train a model for each of them and calculate the accuracy metrics; the short sketch below illustrates how the folds are laid out. To run the pipeline, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve (here, forecasting), while a Run corresponds to a specific approach to the problem."
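Before submitting the pipeline, here is a minimal sketch of how these two parameters carve up the training data. It only prints fold boundaries; the real fold generation happens inside `get_backtest_pipeline`, so the exact alignment there may differ slightly. The last training date follows from the simulated data above (365 daily points starting 2000-01-01, so the series ends 2000-12-30).

```python
# Sketch only: the fold boundaries implied by BACKTESTING_PERIOD and
# NUMBER_OF_BACKTESTS (defined above). The pipeline computes its own folds.
import pandas as pd

last_training_date = pd.Timestamp("2000-12-30")  # end of the simulated training data
for i in range(NUMBER_OF_BACKTESTS):
    train_end = last_training_date - pd.Timedelta(days=i * BACKTESTING_PERIOD)
    test_end = train_end + pd.Timedelta(days=FORECAST_HORIZON)
    print(
        f"Fold {i}: train through {train_end.date()}, "
        f"forecast {FORECAST_HORIZON} days through {test_end.date()}"
    )
```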
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from uuid import uuid1\n", + "\n", + "from pipeline_helper import get_backtest_pipeline\n", + "\n", + "pipeline_exp = Experiment(ws, \"automl-backtesting\")\n", + "\n", + "# We will create the unique identifier to mark our models.\n", + "model_uid = str(uuid1())\n", + "\n", + "pipeline = get_backtest_pipeline(\n", + " experiment=pipeline_exp,\n", + " dataset=train_data,\n", + " # The STANDARD_DS12_V2 has 4 vCPU per node, we will set 2 process per node to be safe.\n", + " process_per_node=2,\n", + " # The maximum number of nodes for our compute is 6.\n", + " node_count=6,\n", + " compute_target=compute_target,\n", + " automl_settings=automl_settings,\n", + " step_size=BACKTESTING_PERIOD,\n", + " step_number=NUMBER_OF_BACKTESTS,\n", + " model_uid=model_uid,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Run the pipeline and wait for results." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pipeline_run = pipeline_exp.submit(pipeline)\n", + "pipeline_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After the run is complete, we can download the results. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "metrics_output = pipeline_run.get_pipeline_output(\"results\")\n", + "metrics_output.download(\"backtest_metrics\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## View metrics\n", + "To distinguish these metrics from the model backtest, which we will obtain in the next section, we will move the directory with metrics out of the backtest_metrics and will remove the parent folder. We will create the utility function for that." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def copy_scoring_directory(new_name):\n", + " scores_path = os.path.join(\"backtest_metrics\", \"azureml\")\n", + " directory_list = [os.path.join(scores_path, d) for d in os.listdir(scores_path)]\n", + " latest_file = max(directory_list, key=os.path.getctime)\n", + " print(\n", + " f\"The output directory {latest_file} was created on {pd.Timestamp(os.path.getctime(latest_file), unit='s')} GMT.\"\n", + " )\n", + " shutil.move(os.path.join(latest_file, \"results\"), new_name)\n", + " shutil.rmtree(\"backtest_metrics\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Move the directory and list its contents." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "copy_scoring_directory(\"automl_backtest\")\n", + "pd.DataFrame({\"File\": os.listdir(\"automl_backtest\")})" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The directory contains a set of files with results:\n", + "- forecast.csv contains forecasts for all backtest iterations. The backtest_iteration column contains iteration identifier with the last training date as a suffix\n", + "- scores.csv contains all metrics. 
If the data set contains several time series, the metrics are given for every combination of time series ID and iteration; aggregate scores over all iterations and time series are marked as \"all_sets\"\n", + "- plots_fcst_vs_actual.pdf contains the forecast vs actuals plots for each iteration and time series.\n", + "\n", + "For demonstration purposes we will display the table of metrics for the time series with ID \"ts0\". Again, we will create a utility function, which will be re-used in model backtesting; a tiny synthetic illustration of its output appears a few cells below." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def get_metrics_for_ts(all_metrics, ts):\n", + " \"\"\"\n", + " Get the metrics for the time series with ID ts and return it as a pandas data frame.\n", + "\n", + " :param all_metrics: The table with all the metrics.\n", + " :param ts: The ID of a time series of interest.\n", + " :return: The pandas DataFrame with metrics for one time series.\n", + " \"\"\"\n", + " results_df = None\n", + " for ts_id, one_series in all_metrics.groupby(\"time_series_id\"):\n", + " if not ts_id.startswith(ts):\n", + " continue\n", + " iteration = ts_id.split(\"|\")[-1]\n", + " df = one_series[[\"metric_name\", \"metric\"]]\n", + " df.rename({\"metric\": iteration}, axis=1, inplace=True)\n", + " df.set_index(\"metric_name\", inplace=True)\n", + " if results_df is None:\n", + " results_df = df\n", + " else:\n", + " results_df = results_df.merge(\n", + " df, how=\"inner\", left_index=True, right_index=True\n", + " )\n", + " results_df.sort_index(axis=1, inplace=True)\n", + " return results_df\n", + "\n", + "\n", + "metrics_df = pd.read_csv(os.path.join(\"automl_backtest\", \"scores.csv\"))\n", + "ts_id = \"ts0\"\n", + "get_metrics_for_ts(metrics_df, ts_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Forecast vs actuals plots." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import IFrame\n", + "\n", + "IFrame(\"./automl_backtest/plots_fcst_vs_actual.pdf\", width=800, height=300)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Backtest the best model \n", + "\n", + "For model backtesting we will use the same parameters we used to backtest AutoML. All the models we obtained in the previous run were registered in our workspace. To identify the model, each was assigned a tag with the last training date." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model_list = Model.list(ws, tags={\"experiment\": \"automl-backtesting\"})\n", + "model_data = {\"name\": [], \"last_training_date\": []}\n", + "for model in model_list:\n", + " if (\n", + " \"last_training_date\" not in model.tags\n", + " or \"model_uid\" not in model.tags\n", + " or model.tags[\"model_uid\"] != model_uid\n", + " ):\n", + " continue\n", + " model_data[\"name\"].append(model.name)\n", + " model_data[\"last_training_date\"].append(\n", + " pd.Timestamp(model.tags[\"last_training_date\"])\n", + " )\n", + "df_models = pd.DataFrame(model_data)\n", + "df_models.sort_values([\"last_training_date\"], inplace=True)\n", + "df_models.reset_index(inplace=True, drop=True)\n", + "df_models" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We will backtest the model trained on the most recent data."
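A brief aside before picking that model: to make the shape of the `get_metrics_for_ts` output concrete, here is a tiny synthetic example. All IDs, metric names and values below are invented for illustration; the "ts0|<last training date>" format simply mirrors how scores.csv combines the series ID and the iteration.

```python
# Purely synthetic illustration: scores.csv has one row per
# (time_series_id, metric); get_metrics_for_ts pivots it into one column per
# backtest iteration. Every value here is made up.
import pandas as pd

fake_scores = pd.DataFrame(
    {
        "time_series_id": ["ts0|2000-10-31", "ts0|2000-11-30"] * 2,
        "metric_name": ["rmse", "rmse", "mape", "mape"],
        "metric": [1.10, 1.25, 4.2, 4.9],
    }
)
get_metrics_for_ts(fake_scores, "ts0")  # -> rows: metrics, columns: iterations
```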
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model_name = df_models[\"name\"].iloc[-1]\n", + "model_name" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrain the model\n", + "Assemble the pipeline, which will retrain the best model from the AutoML run on historical data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pipeline_exp = Experiment(ws, \"model-backtesting\")\n", + "\n", + "pipeline = get_backtest_pipeline(\n", + " experiment=pipeline_exp,\n", + " dataset=train_data,\n", + " # The STANDARD_DS12_V2 has 4 vCPU per node, we will set 2 process per node to be safe.\n", + " process_per_node=2,\n", + " # The maximum number of nodes for our compute is 6.\n", + " node_count=6,\n", + " compute_target=compute_target,\n", + " automl_settings=automl_settings,\n", + " step_size=BACKTESTING_PERIOD,\n", + " step_number=NUMBER_OF_BACKTESTS,\n", + " model_name=model_name,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Launch the backtesting pipeline." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "pipeline_run = pipeline_exp.submit(pipeline)\n", + "pipeline_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The metrics are stored in the pipeline output named \"results\". The next code will download the table with metrics." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "metrics_output = pipeline_run.get_pipeline_output(\"results\")\n", + "metrics_output.download(\"backtest_metrics\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Again, we will copy the data files from the downloaded directory, but in this case we will call the folder \"model_backtest\"; it will contain the same files as the one for AutoML backtesting." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "copy_scoring_directory(\"model_backtest\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, we will display the metrics." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "model_metrics_df = pd.read_csv(os.path.join(\"model_backtest\", \"scores.csv\"))\n", + "get_metrics_for_ts(model_metrics_df, ts_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before viewing the plots, we can optionally put the AutoML backtest and the model backtest metrics side by side; then we display the forecast vs actuals plots."
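This comparison is a minimal sketch, assuming `metrics_df`, `model_metrics_df`, `ts_id` and `get_metrics_for_ts` from the cells above; the `_automl` and `_model` suffixes are our own labels, applied to iteration columns that appear in both tables.

```python
# Optional: line up the AutoML backtest and the best-model backtest for the
# same time series. Rows are metrics, columns are backtest iterations.
automl_scores = get_metrics_for_ts(metrics_df, ts_id)
model_scores = get_metrics_for_ts(model_metrics_df, ts_id)
automl_scores.merge(
    model_scores, left_index=True, right_index=True, suffixes=("_automl", "_model")
)
```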
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import IFrame\n", + "\n", + "IFrame(\"./model_backtest/plots_fcst_vs_actual.pdf\", width=800, height=300)" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "category": "tutorial", + "compute": [ + "Remote" + ], + "datasets": [ + "None" ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "category": "tutorial", - "compute": [ - "Remote" - ], - "datasets": [ - "None" - ], - "deployment": [ - "None" - ], - "exclude_from_index": false, - "framework": [ - "Azure ML AutoML" - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.9" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "deployment": [ + "None" + ], + "exclude_from_index": false, + "framework": [ + "Azure ML AutoML" + ], + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb index 48056752e..81b5e8e80 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb @@ -1,714 +1,725 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "**BikeShare Demand Forecasting**\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Compute](#Compute)\n", - "1. [Data](#Data)\n", - "1. [Train](#Train)\n", - "1. [Featurization](#Featurization)\n", - "1. [Evaluate](#Evaluate)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "This notebook demonstrates demand forecasting for a bike-sharing service using AutoML.\n", - "\n", - "AutoML highlights here include built-in holiday featurization, accessing engineered feature names, and working with the `forecast` function. 
Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n", - "\n", - "Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n", - "\n", - "Notebook synopsis:\n", - "1. Creating an Experiment in an existing Workspace\n", - "2. Configuration and local run of AutoML for a time-series model with lag and holiday features \n", - "3. Viewing the engineered names for featurized data and featurization summary for all raw features\n", - "4. Evaluating the fitted model using a rolling test " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import logging\n", - "from datetime import datetime\n", - "\n", - "import azureml.core\n", - "import numpy as np\n", - "import pandas as pd\n", - "from azureml.automl.core.featurization import FeaturizationConfig\n", - "from azureml.core import Dataset, Experiment, Workspace\n", - "from azureml.train.automl import AutoMLConfig\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for the run history container in the workspace\n", - "experiment_name = \"automl-bikeshareforecasting\"\n", - "\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"SKU\"] = ws.sku\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Run History Name\"] = experiment_name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Compute\n", - "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. 
\n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your cluster.\n", - "amlcompute_cluster_name = \"bike-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print(\"Found existing cluster, use it.\")\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n", - " )\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Data\n", - "\n", - "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the bike share data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "datastore = ws.get_default_datastore()\n", - "datastore.upload_files(\n", - " files=[\"./bike-no.csv\"], target_path=\"dataset/\", overwrite=True, show_progress=True\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's set up what we know about the dataset. \n", - "\n", - "**Target column** is what we want to forecast.\n", - "\n", - "**Time column** is the time axis along which to predict." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "target_column_name = \"cnt\"\n", - "time_column_name = \"date\"" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "dataset = Dataset.Tabular.from_delimited_files(\n", - " path=[(datastore, \"dataset/bike-no.csv\")]\n", - ").with_timestamp_columns(fine_grain_timestamp=time_column_name)\n", - "\n", - "# Drop the columns 'casual' and 'registered' as these columns are a breakdown of the total and therefore a leak.\n", - "dataset = dataset.drop_columns(columns=[\"casual\", \"registered\"])\n", - "\n", - "dataset.take(5).to_pandas_dataframe().reset_index(drop=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Split the data\n", - "\n", - "The first split we make is into train and test sets. Note we are splitting on time. Data before 9/1 will be used for training, and data after and including 9/1 will be used for testing." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# select data that occurs before a specified date\n", - "train = dataset.time_before(datetime(2012, 8, 31), include_boundary=True)\n", - "train.to_pandas_dataframe().tail(5).reset_index(drop=True)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test = dataset.time_after(datetime(2012, 9, 1), include_boundary=True)\n", - "test.to_pandas_dataframe().head(5).reset_index(drop=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Forecasting Parameters\n", - "To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**time_column_name**|The name of your time column.|\n", - "|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n", - "|**country_or_region_for_holidays**|The country/region used to generate holiday features. These should be ISO 3166 two-letter country/region codes (i.e. 'US', 'GB').|\n", - "|**target_lags**|The target_lags specifies how far back we will construct the lags of the target variable.|\n", - "|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "Instantiate a AutoMLConfig object. This defines the settings and data used to run the experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|forecasting|\n", - "|**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics<br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>
normalized_mean_absolute_error\n", - "|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n", - "|**experiment_timeout_hours**|Experimentation timeout in hours.|\n", - "|**training_data**|Input dataset, containing both features and label column.|\n", - "|**label_column_name**|The name of the label column.|\n", - "|**compute_target**|The remote compute for training.|\n", - "|**n_cross_validations**|Number of cross validation splits.|\n", - "|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|\n", - "|**forecasting_parameters**|A class that holds all the forecasting related parameters.|\n", - "\n", - "This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Setting forecaster maximum horizon \n", - "\n", - "The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 14 periods (i.e. 14 days). Notice that this is much shorter than the number of days in the test set; we will need to use a rolling test to evaluate the performance on the whole test set. For more discussion of forecast horizons and guiding principles for setting them, please see the [energy demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand). " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "forecast_horizon = 14" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Convert prediction type to integer\n", - "The featurization configuration can be used to change the default prediction type from decimal numbers to integer. This customization can be used in the scenario when the target column is expected to contain whole values as the number of rented bikes per day." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "featurization_config = FeaturizationConfig()\n", - "# Force the target column, to be integer type.\n", - "featurization_config.add_prediction_transform_type(\"Integer\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Config AutoML" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", - "\n", - "forecasting_parameters = ForecastingParameters(\n", - " time_column_name=time_column_name,\n", - " forecast_horizon=forecast_horizon,\n", - " country_or_region_for_holidays=\"US\", # set country_or_region will trigger holiday featurizer\n", - " target_lags=\"auto\", # use heuristic based lag setting\n", - " freq=\"D\", # Set the forecast frequency to be daily\n", - ")\n", - "\n", - "automl_config = AutoMLConfig(\n", - " task=\"forecasting\",\n", - " primary_metric=\"normalized_root_mean_squared_error\",\n", - " featurization=featurization_config,\n", - " blocked_models=[\"ExtremeRandomTrees\"],\n", - " experiment_timeout_hours=0.3,\n", - " training_data=train,\n", - " label_column_name=target_column_name,\n", - " compute_target=compute_target,\n", - " enable_early_stopping=True,\n", - " n_cross_validations=3,\n", - " max_concurrent_iterations=4,\n", - " max_cores_per_iteration=-1,\n", - " verbosity=logging.INFO,\n", - " forecasting_parameters=forecasting_parameters,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We will now run the experiment, you can go to Azure ML portal to view the run details. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output=False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the Best Run details\n", - "Below we retrieve the best Run object from among all the runs in the experiment." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run = remote_run.get_best_child()\n", - "best_run" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Featurization\n", - "\n", - "We can look at the engineered feature names generated in time-series featurization via. the JSON file named 'engineered_feature_names.json' under the run outputs. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download the JSON file locally\n", - "best_run.download_file(\"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\")\n", - "with open(\"engineered_feature_names.json\", \"r\") as f:\n", - " records = json.load(f)\n", - "\n", - "records" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### View the featurization summary\n", - "\n", - "You can also see what featurization steps were performed on different raw features in the user data. 
For each raw feature in the user data, the following information is displayed:\n", - "\n", - "- Raw feature name\n", - "- Number of engineered features formed out of this raw feature\n", - "- Type detected\n", - "- If feature was dropped\n", - "- List of feature transformations for the raw feature" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download the featurization summary JSON file locally\n", - "best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n", - "\n", - "# Render the JSON as a pandas DataFrame\n", - "with open(\"featurization_summary.json\", \"r\") as f:\n", - " records = json.load(f)\n", - "fs = pd.DataFrame.from_records(records)\n", - "\n", - "# View a summary of the featurization \n", - "fs[[\"RawFeatureName\", \"TypeDetected\", \"Dropped\", \"EngineeredFeatureCount\", \"Transformations\"]]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Evaluate" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now use the best fitted model from the AutoML Run to make forecasts for the test set. We will do batch scoring on the test dataset which should have the same schema as training dataset.\n", - "\n", - "The scoring will run on a remote compute. In this example, it will reuse the training compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_experiment = Experiment(ws, experiment_name + \"_test\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieving forecasts from the model\n", - "To run the forecast on the remote compute we will use a helper script: forecasting_script. This script contains the utility methods which will be used by the remote estimator. We copy the script to the project folder to upload it to remote compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import shutil\n", - "\n", - "script_folder = os.path.join(os.getcwd(), \"forecast\")\n", - "os.makedirs(script_folder, exist_ok=True)\n", - "shutil.copy(\"forecasting_script.py\", script_folder)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For brevity, we have created a function called run_forecast that submits the test data to the best model determined during the training run and retrieves forecasts. The test set is longer than the forecast horizon specified at train time, so the forecasting script uses a so-called rolling evaluation to generate predictions over the whole test set. A rolling evaluation iterates the forecaster over the test set, using the actuals in the test set to make lag features as needed. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from run_forecast import run_rolling_forecast\n", - "\n", - "remote_run = run_rolling_forecast(\n", - " test_experiment, compute_target, best_run, test, target_column_name\n", - ")\n", - "remote_run" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Download the prediction result for metrics calculation\n", - "The test data with predictions are saved in artifact outputs/predictions.csv. 
You can download it and calculation some error metrics for the forecasts and vizualize the predictions vs. the actuals." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.download_file(\"outputs/predictions.csv\", \"predictions.csv\")\n", - "df_all = pd.read_csv(\"predictions.csv\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.shared import constants\n", - "from azureml.automl.runtime.shared.score import scoring\n", - "from sklearn.metrics import mean_absolute_error, mean_squared_error\n", - "from matplotlib import pyplot as plt\n", - "\n", - "# use automl metrics module\n", - "scores = scoring.score_regression(\n", - " y_test=df_all[target_column_name],\n", - " y_pred=df_all[\"predicted\"],\n", - " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", - ")\n", - "\n", - "print(\"[Test data scores]\\n\")\n", - "for key, value in scores.items():\n", - " print(\"{}: {:.3f}\".format(key, value))\n", - "\n", - "# Plot outputs\n", - "%matplotlib inline\n", - "test_pred = plt.scatter(df_all[target_column_name], df_all[\"predicted\"], color=\"b\")\n", - "test_test = plt.scatter(\n", - " df_all[target_column_name], df_all[target_column_name], color=\"g\"\n", - ")\n", - "plt.legend(\n", - " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", - ")\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For more details on what metrics are included and how they are calculated, please refer to [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics). You could also calculate residuals, like described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n", - "\n", - "\n", - "Since we did a rolling evaluation on the test set, we can analyze the predictions by their forecast horizon relative to the rolling origin. The model was initially trained at a forecast horizon of 14, so each prediction from the model is associated with a horizon value from 1 to 14. The horizon values are in a column named, \"horizon_origin,\" in the prediction set. For example, we can calculate some of the error metrics grouped by the horizon:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from metrics_helper import MAPE, APE\n", - "\n", - "df_all.groupby(\"horizon_origin\").apply(\n", - " lambda df: pd.Series(\n", - " {\n", - " \"MAPE\": MAPE(df[target_column_name], df[\"predicted\"]),\n", - " \"RMSE\": np.sqrt(\n", - " mean_squared_error(df[target_column_name], df[\"predicted\"])\n", - " ),\n", - " \"MAE\": mean_absolute_error(df[target_column_name], df[\"predicted\"]),\n", - " }\n", - " )\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To drill down more, we can look at the distributions of APE (absolute percentage error) by horizon. From the chart, it is clear that the overall MAPE is being skewed by one particular point where the actual value is of small absolute value." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "df_all_APE = df_all.assign(APE=APE(df_all[target_column_name], df_all[\"predicted\"]))\n", - "APEs = [\n", - " df_all_APE[df_all[\"horizon_origin\"] == h].APE.values\n", - " for h in range(1, forecast_horizon + 1)\n", - "]\n", - "\n", - "%matplotlib inline\n", - "plt.boxplot(APEs)\n", - "plt.yscale(\"log\")\n", - "plt.xlabel(\"horizon\")\n", - "plt.ylabel(\"APE (%)\")\n", - "plt.title(\"Absolute Percentage Errors by Forecast Horizon\")\n", - "\n", - "plt.show()" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-bike-share/auto-ml-forecasting-bike-share.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "**BikeShare Demand Forecasting**\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Compute](#Compute)\n", + "1. [Data](#Data)\n", + "1. [Train](#Train)\n", + "1. [Featurization](#Featurization)\n", + "1. [Evaluate](#Evaluate)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "This notebook demonstrates demand forecasting for a bike-sharing service using AutoML.\n", + "\n", + "AutoML highlights here include built-in holiday featurization, accessing engineered feature names, and working with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n", + "\n", + "Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n", + "\n", + "Notebook synopsis:\n", + "1. Creating an Experiment in an existing Workspace\n", + "2. Configuration and local run of AutoML for a time-series model with lag and holiday features \n", + "3. Viewing the engineered names for featurized data and featurization summary for all raw features\n", + "4. Evaluating the fitted model using a rolling test " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import logging\n", + "from datetime import datetime\n", + "\n", + "import azureml.core\n", + "import numpy as np\n", + "import pandas as pd\n", + "from azureml.automl.core.featurization import FeaturizationConfig\n", + "from azureml.core import Dataset, Experiment, Workspace\n", + "from azureml.train.automl import AutoMLConfig" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook is compatible with Azure ML SDK version 1.35.0 or later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As part of the setup you have already created a Workspace. 
To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for the run history container in the workspace\n", + "experiment_name = \"automl-bikeshareforecasting\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"SKU\"] = ws.sku\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute\n", + "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster.\n", + "amlcompute_cluster_name = \"bike-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data\n", + "\n", + "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the bike share data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." 
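To make "lazily-evaluated, immutable" concrete, here is a small sketch. It uses the `dataset` object created a couple of cells below, plus the standard `TabularDataset` methods `keep_columns` and `take`; nothing is read from storage until `to_pandas_dataframe` is called.

```python
# Sketch: TabularDataset operations only record a recipe. `dataset` is the
# object created below from bike-no.csv.
subset = dataset.keep_columns(["date", "cnt"])  # a NEW dataset; `dataset` is unchanged
preview = subset.take(3)  # still lazy: just another immutable recipe
preview.to_pandas_dataframe()  # the data is actually read only here
```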
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "datastore = ws.get_default_datastore()\n", + "datastore.upload_files(\n", + " files=[\"./bike-no.csv\"], target_path=\"dataset/\", overwrite=True, show_progress=True\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's set up what we know about the dataset. \n", + "\n", + "**Target column** is what we want to forecast.\n", + "\n", + "**Time column** is the time axis along which to predict." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "target_column_name = \"cnt\"\n", + "time_column_name = \"date\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"dataset/bike-no.csv\")]\n", + ").with_timestamp_columns(fine_grain_timestamp=time_column_name)\n", + "\n", + "# Drop the columns 'casual' and 'registered' as these columns are a breakdown of the total and therefore a leak.\n", + "dataset = dataset.drop_columns(columns=[\"casual\", \"registered\"])\n", + "\n", + "dataset.take(5).to_pandas_dataframe().reset_index(drop=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Split the data\n", + "\n", + "The first split we make is into train and test sets. Note we are splitting on time. Data before 9/1 will be used for training, and data after and including 9/1 will be used for testing." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# select data that occurs before a specified date\n", + "train = dataset.time_before(datetime(2012, 8, 31), include_boundary=True)\n", + "train.to_pandas_dataframe().tail(5).reset_index(drop=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test = dataset.time_after(datetime(2012, 9, 1), include_boundary=True)\n", + "test.to_pandas_dataframe().head(5).reset_index(drop=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Forecasting Parameters\n", + "To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**time_column_name**|The name of your time column.|\n", + "|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n", + "|**country_or_region_for_holidays**|The country/region used to generate holiday features. These should be ISO 3166 two-letter country/region codes (i.e. 'US', 'GB').|\n", + "|**target_lags**|The target_lags specifies how far back we will construct the lags of the target variable.|\n", + "|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information." 
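As a concrete mapping of the table above onto code, the sketch below constructs a `ForecastingParameters` object with the same values this notebook uses later; the real object is built in the Train section, so this one is illustrative only.

```python
# Illustrative only: how the documented parameters map onto the class.
from azureml.automl.core.forecasting_parameters import ForecastingParameters

example_parameters = ForecastingParameters(
    time_column_name="date",  # the time axis
    forecast_horizon=14,  # 14 daily periods = two weeks
    country_or_region_for_holidays="US",  # ISO 3166 code; enables holiday features
    target_lags="auto",  # heuristic lag selection
    freq="D",  # daily frequency (pandas offset alias)
)
```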
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train\n", + "\n", + "Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|forecasting|\n", + "|**primary_metric**|This is the metric that you want to optimize.<br>
Forecasting supports the following primary metrics<br> spearman_correlation<br> normalized_root_mean_squared_error<br> r2_score<br>
normalized_mean_absolute_error\n", + "|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n", + "|**experiment_timeout_hours**|Experimentation timeout in hours.|\n", + "|**training_data**|Input dataset, containing both features and label column.|\n", + "|**label_column_name**|The name of the label column.|\n", + "|**compute_target**|The remote compute for training.|\n", + "|**n_cross_validations**|Number of cross-validation splits.|\n", + "|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|\n", + "|**forecasting_parameters**|A class that holds all the forecasting-related parameters.|\n", + "\n", + "This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list, but you may need to increase the experiment_timeout_hours parameter value to get results." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Setting the forecaster's maximum horizon\n", + "\n", + "The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 14 periods (i.e. 14 days). Notice that this is much shorter than the number of days in the test set; we will need to use a rolling test to evaluate the performance on the whole test set. For more discussion of forecast horizons and guiding principles for setting them, please see the [energy demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand). " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "forecast_horizon = 14" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Convert prediction type to integer\n", + "The featurization configuration can be used to change the default prediction type from decimal numbers to integers. This customization can be used when the target column is expected to contain whole numbers, such as the number of rented bikes per day."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.featurization import FeaturizationConfig\n", + "\n", + "featurization_config = FeaturizationConfig()\n", + "# Force the target column to be of integer type.\n", + "featurization_config.add_prediction_transform_type(\"Integer\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Configure AutoML" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", + "\n", + "forecasting_parameters = ForecastingParameters(\n", + " time_column_name=time_column_name,\n", + " forecast_horizon=forecast_horizon,\n", + " country_or_region_for_holidays=\"US\", # setting country_or_region triggers the holiday featurizer\n", + " target_lags=\"auto\", # use heuristic-based lag setting\n", + " freq=\"D\", # Set the forecast frequency to be daily\n", + ")\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"forecasting\",\n", + " primary_metric=\"normalized_root_mean_squared_error\",\n", + " featurization=featurization_config,\n", + " blocked_models=[\"ExtremeRandomTrees\"],\n", + " experiment_timeout_hours=0.3,\n", + " training_data=train,\n", + " label_column_name=target_column_name,\n", + " compute_target=compute_target,\n", + " enable_early_stopping=True,\n", + " n_cross_validations=3,\n", + " max_concurrent_iterations=4,\n", + " max_cores_per_iteration=-1,\n", + " verbosity=logging.INFO,\n", + " forecasting_parameters=forecasting_parameters,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We will now run the experiment. You can go to the Azure ML portal to view the run details. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieve the Best Run details\n", + "Below we retrieve the best Run object from among all the runs in the experiment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run = remote_run.get_best_child()\n", + "best_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Featurization\n", + "\n", + "We can look at the engineered feature names generated in time-series featurization via the JSON file named 'engineered_feature_names.json' under the run outputs. Note that a number of named holiday periods are represented. We recommend that you have at least one year of data when using this feature to ensure that all yearly holidays are captured in the training featurization." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the JSON file locally\n", + "best_run.download_file(\n", + " \"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\"\n", + ")\n", + "with open(\"engineered_feature_names.json\", \"r\") as f:\n", + " records = json.load(f)\n", + "\n", + "records" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### View the featurization summary\n", + "\n", + "You can also see what featurization steps were performed on different raw features in the user data.
For each raw feature in the user data, the following information is displayed:\n", + "\n", + "- Raw feature name\n", + "- Number of engineered features formed out of this raw feature\n", + "- Type detected\n", + "- If feature was dropped\n", + "- List of feature transformations for the raw feature" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the featurization summary JSON file locally\n", + "best_run.download_file(\n", + " \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n", + ")\n", + "\n", + "# Render the JSON as a pandas DataFrame\n", + "with open(\"featurization_summary.json\", \"r\") as f:\n", + " records = json.load(f)\n", + "fs = pd.DataFrame.from_records(records)\n", + "\n", + "# View a summary of the featurization\n", + "fs[\n", + " [\n", + " \"RawFeatureName\",\n", + " \"TypeDetected\",\n", + " \"Dropped\",\n", + " \"EngineeredFeatureCount\",\n", + " \"Transformations\",\n", + " ]\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Evaluate" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now use the best fitted model from the AutoML Run to make forecasts for the test set. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.\n", + "\n", + "The scoring will run on a remote compute. In this example, it will reuse the training compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_experiment = Experiment(ws, experiment_name + \"_test\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieving forecasts from the model\n", + "To run the forecast on the remote compute, we will use a helper script: forecasting_script. This script contains the utility methods that will be used by the remote estimator. We copy the script to the project folder to upload it to the remote compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import shutil\n", + "\n", + "script_folder = os.path.join(os.getcwd(), \"forecast\")\n", + "os.makedirs(script_folder, exist_ok=True)\n", + "shutil.copy(\"forecasting_script.py\", script_folder)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For brevity, we have created a function called run_forecast that submits the test data to the best model determined during the training run and retrieves forecasts. The test set is longer than the forecast horizon specified at train time, so the forecasting script uses a so-called rolling evaluation to generate predictions over the whole test set. A rolling evaluation iterates the forecaster over the test set, using the actuals in the test set to make lag features as needed; a conceptual sketch follows.
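The internals of the remote script are not shown in this notebook. Purely as an illustration of the idea (a hypothetical simplification, not the actual `forecasting_script`), a rolling evaluation over a fitted forecaster might look like the sketch below; the `forecast` call follows the AutoML fitted model's convention of returning a tuple of predictions and transformed data.

```python
import pandas as pd

def rolling_evaluate(fitted_model, test_df, time_col, target_col, horizon):
    """Hypothetical sketch of a rolling evaluation, not the shipped script."""
    test_df = test_df.sort_values(time_col).reset_index(drop=True)
    scored_windows = []
    # Advance the forecast origin through the test set `horizon` rows at a time.
    for start in range(0, len(test_df), horizon):
        window = test_df.iloc[start : start + horizon].copy()
        # Predict the window from its features only (target withheld).
        y_pred, _ = fitted_model.forecast(window.drop(columns=[target_col]))
        window["predicted"] = y_pred
        window["horizon_origin"] = range(1, len(window) + 1)
        scored_windows.append(window)
        # The real script also feeds this window's actuals back to the
        # forecaster so later origins can use them as lag features; that
        # bookkeeping is elided here.
    return pd.concat(scored_windows, ignore_index=True)
```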
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from run_forecast import run_rolling_forecast\n", + "\n", + "remote_run = run_rolling_forecast(\n", + " test_experiment, compute_target, best_run, test, target_column_name\n", + ")\n", + "remote_run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Download the prediction result for metrics calculation\n", + "The test data with predictions are saved in artifact outputs/predictions.csv. You can download it and calculation some error metrics for the forecasts and vizualize the predictions vs. the actuals." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.download_file(\"outputs/predictions.csv\", \"predictions.csv\")\n", + "df_all = pd.read_csv(\"predictions.csv\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared import constants\n", + "from azureml.automl.runtime.shared.score import scoring\n", + "from sklearn.metrics import mean_absolute_error, mean_squared_error\n", + "from matplotlib import pyplot as plt\n", + "\n", + "# use automl metrics module\n", + "scores = scoring.score_regression(\n", + " y_test=df_all[target_column_name],\n", + " y_pred=df_all[\"predicted\"],\n", + " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", + ")\n", + "\n", + "print(\"[Test data scores]\\n\")\n", + "for key, value in scores.items():\n", + " print(\"{}: {:.3f}\".format(key, value))\n", + "\n", + "# Plot outputs\n", + "%matplotlib inline\n", + "test_pred = plt.scatter(df_all[target_column_name], df_all[\"predicted\"], color=\"b\")\n", + "test_test = plt.scatter(\n", + " df_all[target_column_name], df_all[target_column_name], color=\"g\"\n", + ")\n", + "plt.legend(\n", + " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", + ")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For more details on what metrics are included and how they are calculated, please refer to [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics). You could also calculate residuals, like described [here](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n", + "\n", + "\n", + "Since we did a rolling evaluation on the test set, we can analyze the predictions by their forecast horizon relative to the rolling origin. The model was initially trained at a forecast horizon of 14, so each prediction from the model is associated with a horizon value from 1 to 14. The horizon values are in a column named, \"horizon_origin,\" in the prediction set. 
For example, we can calculate some of the error metrics grouped by the horizon:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from metrics_helper import MAPE, APE\n", + "\n", + "df_all.groupby(\"horizon_origin\").apply(\n", + " lambda df: pd.Series(\n", + " {\n", + " \"MAPE\": MAPE(df[target_column_name], df[\"predicted\"]),\n", + " \"RMSE\": np.sqrt(\n", + " mean_squared_error(df[target_column_name], df[\"predicted\"])\n", + " ),\n", + " \"MAE\": mean_absolute_error(df[target_column_name], df[\"predicted\"]),\n", + " }\n", + " )\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To drill down more, we can look at the distributions of APE (absolute percentage error) by horizon. From the chart, it is clear that the overall MAPE is being skewed by one particular point where the actual value is of small absolute value." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "df_all_APE = df_all.assign(APE=APE(df_all[target_column_name], df_all[\"predicted\"]))\n", + "APEs = [\n", + " df_all_APE[df_all[\"horizon_origin\"] == h].APE.values\n", + " for h in range(1, forecast_horizon + 1)\n", + "]\n", + "\n", + "%matplotlib inline\n", + "plt.boxplot(APEs)\n", + "plt.yscale(\"log\")\n", + "plt.xlabel(\"horizon\")\n", + "plt.ylabel(\"APE (%)\")\n", + "plt.title(\"Absolute Percentage Errors by Forecast Horizon\")\n", + "\n", + "plt.show()" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "category": "tutorial", + "compute": [ + "Remote" + ], + "datasets": [ + "BikeShare" + ], + "deployment": [ + "None" ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "category": "tutorial", - "compute": [ - "Remote" - ], - "datasets": [ - "BikeShare" - ], - "deployment": [ - "None" - ], - "exclude_from_index": false, - "file_extension": ".py", - "framework": [ - "Azure ML AutoML" - ], - "friendly_name": "Forecasting BikeShare Demand", - "index_order": 1, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.7" - }, - "mimetype": "text/x-python", - "name": "python", - "npconvert_exporter": "python", - "pygments_lexer": "ipython3", - "tags": [ - "Forecasting" - ], - "task": "Forecasting", + "exclude_from_index": false, + "file_extension": ".py", + "framework": [ + "Azure ML AutoML" + ], + "friendly_name": "Forecasting BikeShare Demand", + "index_order": 1, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "mimetype": "text/x-python", + "name": "python", + "npconvert_exporter": "python", + "pygments_lexer": "ipython3", + "tags": [ + "Forecasting" + ], + "task": "Forecasting", + "version": 3 + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git 
a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb index fe04dbaab..52f9a955c 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb @@ -1,774 +1,785 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Forecasting using the Energy Demand Dataset**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#introduction)\n", - "1. [Setup](#setup)\n", - "1. [Data and Forecasting Configurations](#data)\n", - "1. [Train](#train)\n", - "1. [Generate and Evaluate the Forecast](#forecast)\n", - "\n", - "Advanced Forecasting\n", - "1. [Advanced Training](#advanced_training)\n", - "1. [Advanced Results](#advanced_results)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Introduction\n", - "\n", - "In this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. The goal is predict the energy demand for the next 48 hours based on historic time-series data.\n", - "\n", - "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.\n", - "\n", - "In this notebook you will learn how to:\n", - "1. Creating an Experiment using an existing Workspace\n", - "1. Configure AutoML using 'AutoMLConfig'\n", - "1. Train the model using AmlCompute\n", - "1. Explore the engineered features and results\n", - "1. Generate the forecast and compute the out-of-sample accuracy metrics\n", - "1. Configuration and remote run of AutoML for a time-series model with lag and rolling window features\n", - "1. 
Run and explore the forecast with lagging features" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Setup" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import logging\n", - "\n", - "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n", - "from matplotlib import pyplot as plt\n", - "import pandas as pd\n", - "import numpy as np\n", - "import warnings\n", - "import os\n", - "\n", - "# Squash warning messages for cleaner output in the notebook\n", - "warnings.showwarning = lambda *args, **kwargs: None\n", - "\n", - "import azureml.core\n", - "from azureml.core import Experiment, Workspace, Dataset\n", - "from azureml.train.automl import AutoMLConfig\n", - "from datetime import datetime" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for the run history container in the workspace\n", - "experiment_name = \"automl-forecasting-energydemand\"\n", - "\n", - "# # project folder\n", - "# project_folder = './sample_projects/automl-forecasting-energy-demand'\n", - "\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Run History Name\"] = experiment_name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Create or Attach existing AmlCompute\n", - "A compute target is required to execute a remote Automated ML run. \n", - "\n", - "[Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. \n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your cluster.\n", - "amlcompute_cluster_name = \"energy-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print(\"Found existing cluster, use it.\")\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", - " )\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Data\n", - "\n", - "We will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. \n", - "\n", - "With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the datatset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used training and prediction." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's set up what we know about the dataset.\n", - "\n", - "Target column is what we want to forecast.
\n", - "Time column is the time axis along which to predict.\n", - "\n", - "The other columns, \"temp\" and \"precip\", are implicitly designated as features." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "target_column_name = \"demand\"\n", - "time_column_name = \"timeStamp\"" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "dataset = Dataset.Tabular.from_delimited_files(\n", - " path=\"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv\"\n", - ").with_timestamp_columns(fine_grain_timestamp=time_column_name)\n", - "dataset.take(5).to_pandas_dataframe().reset_index(drop=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Cut off the end of the dataset due to large number of nan values\n", - "dataset = dataset.time_before(datetime(2017, 10, 10, 5))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Split the data into train and test sets" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# split into train based on time\n", - "train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)\n", - "train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# split into test based on time\n", - "test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))\n", - "test.to_pandas_dataframe().reset_index(drop=True).head(5)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Setting the maximum forecast horizon\n", - "\n", - "The forecast horizon is the number of periods into the future that the model should predict. It is generally recommend that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. \n", - "\n", - "Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecast#configure-and-run-experiment) guide.\n", - "\n", - "In this example, we set the horizon to 48 hours." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "forecast_horizon = 48" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Forecasting Parameters\n", - "To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameter we will be passing into our experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**time_column_name**|The name of your time column.|\n", - "|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n", - "|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Train\n", - "\n", - "Instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings', for this forecasting task we add the forecasting parameters to hold all the additional forecasting parameters.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|forecasting|\n", - "|**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics<br> spearman_correlation<br> normalized_root_mean_squared_error<br> r2_score<br>
normalized_mean_absolute_error|\n", - "|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found at [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n", - "|**experiment_timeout_hours**|Maximum amount of time in hours that the experiment take before it terminates.|\n", - "|**training_data**|The training data to be used within the experiment.|\n", - "|**label_column_name**|The name of the label column.|\n", - "|**compute_target**|The remote compute for training.|\n", - "|**n_cross_validations**|Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|\n", - "|**enable_early_stopping**|Flag to enble early termination if the score is not improving in the short term.|\n", - "|**forecasting_parameters**|A class holds all the forecasting related parameters.|\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", - "\n", - "forecasting_parameters = ForecastingParameters(\n", - " time_column_name=time_column_name,\n", - " forecast_horizon=forecast_horizon,\n", - " freq=\"H\", # Set the forecast frequency to be hourly\n", - ")\n", - "\n", - "automl_config = AutoMLConfig(\n", - " task=\"forecasting\",\n", - " primary_metric=\"normalized_root_mean_squared_error\",\n", - " blocked_models=[\"ExtremeRandomTrees\", \"AutoArima\", \"Prophet\"],\n", - " experiment_timeout_hours=0.3,\n", - " training_data=train,\n", - " label_column_name=target_column_name,\n", - " compute_target=compute_target,\n", - " enable_early_stopping=True,\n", - " n_cross_validations=3,\n", - " verbosity=logging.INFO,\n", - " forecasting_parameters=forecasting_parameters,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.\n", - "One may specify `show_output = True` to print currently running iterations to the console." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output=False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the Best Run details\n", - "Below we retrieve the best Run object from among all the runs in the experiment." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run = remote_run.get_best_child()\n", - "best_run" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Featurization\n", - "We can look at the engineered feature names generated in time-series featurization via. 
the JSON file named 'engineered_feature_names.json' under the run outputs. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download the JSON file locally\n", - "best_run.download_file(\"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\")\n", - "with open(\"engineered_feature_names.json\", \"r\") as f:\n", - " records = json.load(f)\n", - "\n", - "records" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### View featurization summary\n", - "You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:\n", - "\n", - "+ Raw feature name\n", - "+ Number of engineered features formed out of this raw feature\n", - "+ Type detected\n", - "+ If feature was dropped\n", - "+ List of feature transformations for the raw feature" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download the featurization summary JSON file locally\n", - "best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n", - "\n", - "# Render the JSON as a pandas DataFrame\n", - "with open(\"featurization_summary.json\", \"r\") as f:\n", - " records = json.load(f)\n", - "fs = pd.DataFrame.from_records(records)\n", - "\n", - "# View a summary of the featurization \n", - "fs[[\"RawFeatureName\", \"TypeDetected\", \"Dropped\", \"EngineeredFeatureCount\", \"Transformations\"]]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Forecasting\n", - "\n", - "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset which should have the same schema as training dataset.\n", - "\n", - "The inference will run on a remote compute. In this example, it will re-use the training compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_experiment = Experiment(ws, experiment_name + \"_inference\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retreiving forecasts from the model\n", - "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and expecuted on the remote compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from run_forecast import run_remote_inference\n", - "\n", - "remote_run_infer = run_remote_inference(\n", - " test_experiment=test_experiment,\n", - " compute_target=compute_target,\n", - " train_run=best_run,\n", - " test_dataset=test,\n", - " target_column_name=target_column_name,\n", - ")\n", - "remote_run_infer.wait_for_completion(show_output=False)\n", - "\n", - "# download the inference output file to the local machine\n", - "remote_run_infer.download_file(\"outputs/predictions.csv\", \"predictions.csv\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Evaluate\n", - "To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, included the mean absolute percentage error (MAPE). 
For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# load forecast data frame\n", - "fcst_df = pd.read_csv(\"predictions.csv\", parse_dates=[time_column_name])\n", - "fcst_df.head()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.shared import constants\n", - "from azureml.automl.runtime.shared.score import scoring\n", - "from matplotlib import pyplot as plt\n", - "\n", - "# use automl metrics module\n", - "scores = scoring.score_regression(\n", - " y_test=fcst_df[target_column_name],\n", - " y_pred=fcst_df[\"predicted\"],\n", - " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", - ")\n", - "\n", - "print(\"[Test data scores]\\n\")\n", - "for key, value in scores.items():\n", - " print(\"{}: {:.3f}\".format(key, value))\n", - "\n", - "# Plot outputs\n", - "%matplotlib inline\n", - "test_pred = plt.scatter(fcst_df[target_column_name], fcst_df[\"predicted\"], color=\"b\")\n", - "test_test = plt.scatter(\n", - " fcst_df[target_column_name], fcst_df[target_column_name], color=\"g\"\n", - ")\n", - "plt.legend(\n", - " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", - ")\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Advanced Training \n", - "We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Using lags and rolling window features\n", - "Now we will configure the target lags, that is the previous values of the target variables, meaning the prediction is no longer horizon-less. We therefore must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.\n", - "\n", - "This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the iteration_timeout_minutes parameter value to get results." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "advanced_forecasting_parameters = ForecastingParameters(\n", - " time_column_name=time_column_name,\n", - " forecast_horizon=forecast_horizon,\n", - " target_lags=12,\n", - " target_rolling_window_size=4,\n", - ")\n", - "\n", - "automl_config = AutoMLConfig(\n", - " task=\"forecasting\",\n", - " primary_metric=\"normalized_root_mean_squared_error\",\n", - " blocked_models=[\n", - " \"ElasticNet\",\n", - " \"ExtremeRandomTrees\",\n", - " \"GradientBoosting\",\n", - " \"XGBoostRegressor\",\n", - " \"ExtremeRandomTrees\",\n", - " \"AutoArima\",\n", - " \"Prophet\",\n", - " ], # These models are blocked for tutorial purposes, remove this for real use cases.\n", - " experiment_timeout_hours=0.3,\n", - " training_data=train,\n", - " label_column_name=target_column_name,\n", - " compute_target=compute_target,\n", - " enable_early_stopping=True,\n", - " n_cross_validations=3,\n", - " verbosity=logging.INFO,\n", - " forecasting_parameters=advanced_forecasting_parameters,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "advanced_remote_run = experiment.submit(automl_config, show_output=False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "advanced_remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the Best Run details" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run_lags = remote_run.get_best_child()\n", - "best_run_lags" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Advanced Results\n", - "We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_experiment_advanced = Experiment(ws, experiment_name + \"_inference_advanced\")\n", - "advanced_remote_run_infer = run_remote_inference(\n", - " test_experiment=test_experiment_advanced,\n", - " compute_target=compute_target,\n", - " train_run=best_run_lags,\n", - " test_dataset=test,\n", - " target_column_name=target_column_name,\n", - " inference_folder=\"./forecast_advanced\",\n", - ")\n", - "advanced_remote_run_infer.wait_for_completion(show_output=False)\n", - "\n", - "# download the inference output file to the local machine\n", - "advanced_remote_run_infer.download_file(\n", - " \"outputs/predictions.csv\", \"predictions_advanced.csv\"\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "fcst_adv_df = pd.read_csv(\"predictions_advanced.csv\", parse_dates=[time_column_name])\n", - "fcst_adv_df.head()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.shared import constants\n", - "from azureml.automl.runtime.shared.score import scoring\n", - "from matplotlib import pyplot as plt\n", - "\n", - "# use automl metrics module\n", - "scores = scoring.score_regression(\n", - " y_test=fcst_adv_df[target_column_name],\n", - " y_pred=fcst_adv_df[\"predicted\"],\n", - " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", - ")\n", - "\n", - "print(\"[Test data scores]\\n\")\n", - "for key, value in scores.items():\n", - " print(\"{}: {:.3f}\".format(key, value))\n", - "\n", - "# Plot outputs\n", - "%matplotlib inline\n", - "test_pred = plt.scatter(\n", - " fcst_adv_df[target_column_name], fcst_adv_df[\"predicted\"], color=\"b\"\n", - ")\n", - "test_test = plt.scatter(\n", - " fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color=\"g\"\n", - ")\n", - "plt.legend(\n", - " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", - ")\n", - "plt.show()" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Forecasting using the Energy Demand Dataset**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#introduction)\n", + "1. [Setup](#setup)\n", + "1. [Data and Forecasting Configurations](#data)\n", + "1. [Train](#train)\n", + "1. [Generate and Evaluate the Forecast](#forecast)\n", + "\n", + "Advanced Forecasting\n", + "1. [Advanced Training](#advanced_training)\n", + "1. [Advanced Results](#advanced_results)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Introduction\n", + "\n", + "In this example we use the associated New York City energy demand dataset to showcase how you can use AutoML for a simple forecasting problem and explore the results. 
The goal is to predict the energy demand for the next 48 hours based on historic time-series data.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) first, if you haven't already, to establish your connection to the AzureML Workspace.\n", + "\n", + "In this notebook you will learn how to:\n", + "1. Create an Experiment using an existing Workspace\n", + "1. Configure AutoML using 'AutoMLConfig'\n", + "1. Train the model using AmlCompute\n", + "1. Explore the engineered features and results\n", + "1. Generate the forecast and compute the out-of-sample accuracy metrics\n", + "1. Configure and remotely run AutoML for a time-series model with lag and rolling window features\n", + "1. Run and explore the forecast with lag features" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import logging\n", + "\n", + "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n", + "from matplotlib import pyplot as plt\n", + "import pandas as pd\n", + "import numpy as np\n", + "import warnings\n", + "import os\n", + "\n", + "# Squash warning messages for cleaner output in the notebook\n", + "warnings.showwarning = lambda *args, **kwargs: None\n", + "\n", + "import azureml.core\n", + "from azureml.core import Experiment, Workspace, Dataset\n", + "from azureml.train.automl import AutoMLConfig\n", + "from datetime import datetime" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook is compatible with Azure ML SDK version 1.35.0 or later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for the run history container in the workspace\n", + "experiment_name = \"automl-forecasting-energydemand\"\n", + "\n", + "# # project folder\n", + "# project_folder = './sample_projects/automl-forecasting-energy-demand'\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create or Attach existing AmlCompute\n", + "A compute target is required to execute a remote Automated ML run.
\n", + "\n", + "[Azure Machine Learning Compute](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) is a managed-compute infrastructure that allows the user to easily create a single or multi-node compute. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If the AmlCompute with that name is already in your workspace, this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster.\n", + "amlcompute_cluster_name = \"energy-cluster\"\n", + "\n", + "# Verify that the cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, using it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Data\n", + "\n", + "We will use energy consumption [data from New York City](http://mis.nyiso.com/public/P-58Blist.htm) for model training. The data is stored in a tabular format and includes energy demand and basic weather data at an hourly frequency. \n", + "\n", + "With Azure Machine Learning datasets you can keep a single copy of data in your storage, easily access data during model training, share data and collaborate with other users. Below, we will upload the dataset and create a [tabular dataset](https://docs.microsoft.com/bs-latn-ba/azure/machine-learning/service/how-to-create-register-datasets#dataset-types) to be used for training and prediction." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's set up what we know about the dataset.\n", + "\n", + "Target column is what we want to forecast.
\n", + "Time column is the time axis along which to predict.\n", + "\n", + "The other columns, \"temp\" and \"precip\", are implicitly designated as features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "target_column_name = \"demand\"\n", + "time_column_name = \"timeStamp\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "dataset = Dataset.Tabular.from_delimited_files(\n", + " path=\"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/nyc_energy.csv\"\n", + ").with_timestamp_columns(fine_grain_timestamp=time_column_name)\n", + "dataset.take(5).to_pandas_dataframe().reset_index(drop=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The NYC Energy dataset is missing energy demand values for all datetimes later than August 10th, 2017 5AM. Below, we trim the rows containing these missing values from the end of the dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Cut off the end of the dataset due to a large number of NaN values\n", + "dataset = dataset.time_before(datetime(2017, 10, 10, 5))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Split the data into train and test sets" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The first split we make is into train and test sets. Note that we are splitting on time. Data before and including August 8th, 2017 5AM will be used for training, and data after will be used for testing." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# split into train based on time\n", + "train = dataset.time_before(datetime(2017, 8, 8, 5), include_boundary=True)\n", + "train.to_pandas_dataframe().reset_index(drop=True).sort_values(time_column_name).tail(5)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# split into test based on time\n", + "test = dataset.time_between(datetime(2017, 8, 8, 6), datetime(2017, 8, 10, 5))\n", + "test.to_pandas_dataframe().reset_index(drop=True).head(5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Setting the maximum forecast horizon\n", + "\n", + "The forecast horizon is the number of periods into the future that the model should predict. It is generally recommended that users set forecast horizons to less than 100 time periods (i.e. less than 100 hours in the NYC energy example). Furthermore, **AutoML's memory use and computation time increase in proportion to the length of the horizon**, so consider carefully how this value is set. If a long horizon forecast really is necessary, consider aggregating the series to a coarser time scale. \n", + "\n", + "Learn more about forecast horizons in our [Auto-train a time-series forecast model](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-auto-train-forecast#configure-and-run-experiment) guide.\n", + "\n", + "In this example, we set the horizon to 48 hours."
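That last point about aggregation can be made concrete with plain pandas. The resample below is only a local sketch (the notebook itself keeps the hourly series); it rolls the hourly demand up to daily totals, so an equivalent 48-hour horizon would shrink to a 2-period daily horizon. The aggregation choices for "temp" and "precip" are assumptions for illustration.

```python
import pandas as pd

# Local illustration only: aggregate the hourly series to a daily one.
df = train.to_pandas_dataframe()
daily = (
    df.set_index(time_column_name)
    .resample("D")
    .agg({"demand": "sum", "temp": "mean", "precip": "mean"})
    .reset_index()
)
daily.head()
```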
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "forecast_horizon = 48" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Forecasting Parameters\n", + "To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**time_column_name**|The name of your time column.|\n", + "|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n", + "|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Train\n", + "\n", + "Instantiate an AutoMLConfig object. This config defines the settings and data used to run the experiment. We can provide extra configurations within 'automl_settings'; for this forecasting task, we add the forecasting parameters that hold all the additional forecasting-specific settings.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|forecasting|\n", + "|**primary_metric**|This is the metric that you want to optimize.<br>
Forecasting supports the following primary metrics<br> spearman_correlation<br> normalized_root_mean_squared_error<br> r2_score<br>
normalized_mean_absolute_error|\n", + "|**blocked_models**|Models in blocked_models won't be used by AutoML. All supported models can be found [here](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.constants.supportedmodels.forecasting?view=azure-ml-py).|\n", + "|**experiment_timeout_hours**|Maximum amount of time in hours that the experiment can take before it terminates.|\n", + "|**training_data**|The training data to be used within the experiment.|\n", + "|**label_column_name**|The name of the label column.|\n", + "|**compute_target**|The remote compute for training.|\n", + "|**n_cross_validations**|Number of cross-validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way.|\n", + "|**enable_early_stopping**|Flag to enable early termination if the score is not improving in the short term.|\n", + "|**forecasting_parameters**|A class that holds all the forecasting-related parameters.|\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list, but you may need to increase the experiment_timeout_hours parameter value to get results." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", + "\n", + "forecasting_parameters = ForecastingParameters(\n", + " time_column_name=time_column_name,\n", + " forecast_horizon=forecast_horizon,\n", + " freq=\"H\", # Set the forecast frequency to be hourly\n", + ")\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"forecasting\",\n", + " primary_metric=\"normalized_root_mean_squared_error\",\n", + " blocked_models=[\"ExtremeRandomTrees\", \"AutoArima\", \"Prophet\"],\n", + " experiment_timeout_hours=0.3,\n", + " training_data=train,\n", + " label_column_name=target_column_name,\n", + " compute_target=compute_target,\n", + " enable_early_stopping=True,\n", + " n_cross_validations=3,\n", + " verbosity=logging.INFO,\n", + " forecasting_parameters=forecasting_parameters,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while.\n", + "One may specify `show_output = True` to print currently running iterations to the console." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieve the Best Run details\n", + "Below we retrieve the best Run object from among all the runs in the experiment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run = remote_run.get_best_child()\n", + "best_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Featurization\n", + "We can look at the engineered feature names generated in time-series featurization via
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Featurization\n", + "We can look at the engineered feature names generated in time-series featurization via the JSON file named 'engineered_feature_names.json' under the run outputs." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the JSON file locally\n", + "best_run.download_file(\n", + " \"outputs/engineered_feature_names.json\", \"engineered_feature_names.json\"\n", + ")\n", + "with open(\"engineered_feature_names.json\", \"r\") as f:\n", + " records = json.load(f)\n", + "\n", + "records" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### View featurization summary\n", + "You can also see what featurization steps were performed on different raw features in the user data. For each raw feature in the user data, the following information is displayed:\n", + "\n", + "+ Raw feature name\n", + "+ Number of engineered features formed out of this raw feature\n", + "+ Type detected\n", + "+ Whether the feature was dropped\n", + "+ List of feature transformations for the raw feature" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the featurization summary JSON file locally\n", + "best_run.download_file(\n", + " \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n", + ")\n", + "\n", + "# Render the JSON as a pandas DataFrame\n", + "with open(\"featurization_summary.json\", \"r\") as f:\n", + " records = json.load(f)\n", + "fs = pd.DataFrame.from_records(records)\n", + "\n", + "# View a summary of the featurization\n", + "fs[\n", + " [\n", + " \"RawFeatureName\",\n", + " \"TypeDetected\",\n", + " \"Dropped\",\n", + " \"EngineeredFeatureCount\",\n", + " \"Transformations\",\n", + " ]\n", + "]" + ] + },
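+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For instance, we can sort this summary to see which raw features expanded into the most engineered features (a small illustrative sketch that assumes only the `fs` frame built above)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Sketch: rank raw features by how many engineered features they produced\n", + "fs.sort_values(\"EngineeredFeatureCount\", ascending=False)[\n", + " [\"RawFeatureName\", \"TypeDetected\", \"EngineeredFeatureCount\"]\n", + "].head()" + ] + },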
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Forecasting\n", + "\n", + "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.\n", + "\n", + "The inference will run on a remote compute. In this example, it will re-use the training compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_experiment = Experiment(ws, experiment_name + \"_inference\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieving forecasts from the model\n", + "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from run_forecast import run_remote_inference\n", + "\n", + "remote_run_infer = run_remote_inference(\n", + " test_experiment=test_experiment,\n", + " compute_target=compute_target,\n", + " train_run=best_run,\n", + " test_dataset=test,\n", + " target_column_name=target_column_name,\n", + ")\n", + "remote_run_infer.wait_for_completion(show_output=False)\n", + "\n", + "# download the inference output file to the local machine\n", + "remote_run_infer.download_file(\"outputs/predictions.csv\", \"predictions.csv\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Evaluate\n", + "To evaluate the accuracy of the forecast, we'll compare against the actual values for a selection of metrics, including the mean absolute percentage error (MAPE). For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# load forecast data frame\n", + "fcst_df = pd.read_csv(\"predictions.csv\", parse_dates=[time_column_name])\n", + "fcst_df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared import constants\n", + "from azureml.automl.runtime.shared.score import scoring\n", + "from matplotlib import pyplot as plt\n", + "\n", + "# use automl metrics module\n", + "scores = scoring.score_regression(\n", + " y_test=fcst_df[target_column_name],\n", + " y_pred=fcst_df[\"predicted\"],\n", + " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", + ")\n", + "\n", + "print(\"[Test data scores]\\n\")\n", + "for key, value in scores.items():\n", + " print(\"{}: {:.3f}\".format(key, value))\n", + "\n", + "# Plot outputs\n", + "%matplotlib inline\n", + "test_pred = plt.scatter(fcst_df[target_column_name], fcst_df[\"predicted\"], color=\"b\")\n", + "test_test = plt.scatter(\n", + " fcst_df[target_column_name], fcst_df[target_column_name], color=\"g\"\n", + ")\n", + "plt.legend(\n", + " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", + ")\n", + "plt.show()" + ] + },
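+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Since MAPE is called out above, here is a small hand-rolled check of that single metric (an illustrative sketch; it assumes the `fcst_df` frame above and skips zero-valued actuals to avoid division by zero)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Sketch: hand-rolled MAPE on the test forecasts (assumes fcst_df from above)\n", + "import numpy as np\n", + "\n", + "actuals = fcst_df[target_column_name].values\n", + "preds = fcst_df[\"predicted\"].values\n", + "mask = actuals != 0 # guard against division by zero\n", + "mape = np.mean(np.abs((actuals[mask] - preds[mask]) / actuals[mask])) * 100\n", + "print(\"MAPE: {:.2f}%\".format(mape))" + ] + },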
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Advanced Training \n", + "We did not use lags in the previous model specification. In effect, the prediction was the result of a simple regression on date, time series identifier columns and any additional features. This is often a very good prediction as common time series patterns like seasonality and trends can be captured in this manner. Such simple regression is horizon-less: it doesn't matter how far into the future we are predicting, because we are not using past data. In the previous example, the horizon was only used to split the data for cross-validation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Using lags and rolling window features\n", + "Now we will configure the target lags, that is, the previous values of the target variable. Because the model now uses past target values, the prediction is no longer horizon-less, so we must still specify the `forecast_horizon` that the model will learn to forecast. The `target_lags` keyword specifies how far back we will construct the lags of the target variable, and the `target_rolling_window_size` specifies the size of the rolling window over which we will generate the `max`, `min` and `sum` features.\n", + "\n", + "This notebook uses the blocked_models parameter to exclude some models that take a longer time to train on this dataset. You can choose to remove models from the blocked_models list but you may need to increase the experiment_timeout_hours parameter value to get results." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "advanced_forecasting_parameters = ForecastingParameters(\n", + " time_column_name=time_column_name,\n", + " forecast_horizon=forecast_horizon,\n", + " target_lags=12,\n", + " target_rolling_window_size=4,\n", + ")\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"forecasting\",\n", + " primary_metric=\"normalized_root_mean_squared_error\",\n", + " blocked_models=[\n", + " \"ElasticNet\",\n", + " \"ExtremeRandomTrees\",\n", + " \"GradientBoosting\",\n", + " \"XGBoostRegressor\",\n", + " \"AutoArima\",\n", + " \"Prophet\",\n", + " ], # These models are blocked for tutorial purposes, remove this for real use cases.\n", + " experiment_timeout_hours=0.3,\n", + " training_data=train,\n", + " label_column_name=target_column_name,\n", + " compute_target=compute_target,\n", + " enable_early_stopping=True,\n", + " n_cross_validations=3,\n", + " verbosity=logging.INFO,\n", + " forecasting_parameters=advanced_forecasting_parameters,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We now start a new remote run, this time with lag and rolling window featurization. AutoML applies featurizations in the setup stage, prior to iterating over ML models. The full training set is featurized first, followed by featurization of each of the CV splits. Lag and rolling window features introduce additional complexity, so the run will take longer than in the previous example that lacked these featurizations." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "advanced_remote_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "advanced_remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieve the Best Run details" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run_lags = advanced_remote_run.get_best_child()\n", + "best_run_lags" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Advanced Results\n", + "The model trained with lag and rolling window features uses recent values of the target to inform its predictions, so it is no longer horizon-less. We now run inference with this model on the test set and evaluate the forecasts, following the same steps as before."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_experiment_advanced = Experiment(ws, experiment_name + \"_inference_advanced\")\n", + "advanced_remote_run_infer = run_remote_inference(\n", + " test_experiment=test_experiment_advanced,\n", + " compute_target=compute_target,\n", + " train_run=best_run_lags,\n", + " test_dataset=test,\n", + " target_column_name=target_column_name,\n", + " inference_folder=\"./forecast_advanced\",\n", + ")\n", + "advanced_remote_run_infer.wait_for_completion(show_output=False)\n", + "\n", + "# download the inference output file to the local machine\n", + "advanced_remote_run_infer.download_file(\n", + " \"outputs/predictions.csv\", \"predictions_advanced.csv\"\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fcst_adv_df = pd.read_csv(\"predictions_advanced.csv\", parse_dates=[time_column_name])\n", + "fcst_adv_df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared import constants\n", + "from azureml.automl.runtime.shared.score import scoring\n", + "from matplotlib import pyplot as plt\n", + "\n", + "# use automl metrics module\n", + "scores = scoring.score_regression(\n", + " y_test=fcst_adv_df[target_column_name],\n", + " y_pred=fcst_adv_df[\"predicted\"],\n", + " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", + ")\n", + "\n", + "print(\"[Test data scores]\\n\")\n", + "for key, value in scores.items():\n", + " print(\"{}: {:.3f}\".format(key, value))\n", + "\n", + "# Plot outputs\n", + "%matplotlib inline\n", + "test_pred = plt.scatter(\n", + " fcst_adv_df[target_column_name], fcst_adv_df[\"predicted\"], color=\"b\"\n", + ")\n", + "test_test = plt.scatter(\n", + " fcst_adv_df[target_column_name], fcst_adv_df[target_column_name], color=\"g\"\n", + ")\n", + "plt.legend(\n", + " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", + ")\n", + "plt.show()" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "categories": [ + "how-to-use-azureml", + "automated-machine-learning" ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "categories": [ - "how-to-use-azureml", - "automated-machine-learning" - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.9" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb index 23ae3e425..b140752fc 100644 --- 
a/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb @@ -1,894 +1,893 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "\n", - "#### Forecasting away from training data\n", - "\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "2. [Setup](#Setup)\n", - "3. [Data](#Data)\n", - "4. [Prepare remote compute and data.](#prepare_remote)\n", - "4. [Create the configuration and train a forecaster](#train)\n", - "5. [Forecasting from the trained model](#forecasting)\n", - "6. [Forecasting away from training data](#forecasting_away)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "This notebook demonstrates the full interface of the `forecast()` function. \n", - "\n", - "The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n", - "\n", - "However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.\n", - "\n", - "Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.\n", - "\n", - "Terminology:\n", - "* forecast origin: the last period when the target value is known\n", - "* forecast periods(s): the period(s) for which the value of the target is desired.\n", - "* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.\n", - "* prediction context: `lookback` periods immediately preceding the forecast origin\n", - "\n", - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl-forecasting-function.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import pandas as pd\n", - "import numpy as np\n", - "import logging\n", - "import warnings\n", - "\n", - "import azureml.core\n", - "from azureml.core.dataset import Dataset\n", - "from pandas.tseries.frequencies import to_offset\n", - "from azureml.core.compute import AmlCompute\n", - "from azureml.core.compute import ComputeTarget\n", - "from azureml.core.runconfig import RunConfiguration\n", - "from azureml.core.conda_dependencies import CondaDependencies\n", - "\n", - "# Squash warning messages for cleaner output in the notebook\n", - "warnings.showwarning = lambda *args, **kwargs: None\n", - "\n", - "np.set_printoptions(precision=4, suppress=True, linewidth=120)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.workspace import Workspace\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.train.automl import AutoMLConfig\n", - "\n", - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for the run history container in the workspace\n", - "experiment_name = \"automl-forecast-function-demo\"\n", - "\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"SKU\"] = ws.sku\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Run History Name\"] = experiment_name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Data\n", - "For the demonstration purposes we will generate the data artificially and use them for the forecasting." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "TIME_COLUMN_NAME = \"date\"\n", - "TIME_SERIES_ID_COLUMN_NAME = \"time_series_id\"\n", - "TARGET_COLUMN_NAME = \"y\"\n", - "\n", - "\n", - "def get_timeseries(\n", - " train_len: int,\n", - " test_len: int,\n", - " time_column_name: str,\n", - " target_column_name: str,\n", - " time_series_id_column_name: str,\n", - " time_series_number: int = 1,\n", - " freq: str = \"H\",\n", - "):\n", - " \"\"\"\n", - " Return the time series of designed length.\n", - "\n", - " :param train_len: The length of training data (one series).\n", - " :type train_len: int\n", - " :param test_len: The length of testing data (one series).\n", - " :type test_len: int\n", - " :param time_column_name: The desired name of a time column.\n", - " :type time_column_name: str\n", - " :param time_series_number: The number of time series in the data set.\n", - " :type time_series_number: int\n", - " :param freq: The frequency string representing pandas offset.\n", - " see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n", - " :type freq: str\n", - " :returns: the tuple of train and test data sets.\n", - " :rtype: tuple\n", - "\n", - " \"\"\"\n", - " data_train = [] # type: List[pd.DataFrame]\n", - " data_test = [] # type: List[pd.DataFrame]\n", - " data_length = train_len + test_len\n", - " for i in range(time_series_number):\n", - " X = pd.DataFrame(\n", - " {\n", - " time_column_name: pd.date_range(\n", - " start=\"2000-01-01\", periods=data_length, freq=freq\n", - " ),\n", - " target_column_name: np.arange(data_length).astype(float)\n", - " + np.random.rand(data_length)\n", - " + i * 5,\n", - " \"ext_predictor\": np.asarray(range(42, 42 + data_length)),\n", - " time_series_id_column_name: np.repeat(\"ts{}\".format(i), data_length),\n", - " }\n", - " )\n", - " data_train.append(X[:train_len])\n", - " data_test.append(X[train_len:])\n", - " X_train = pd.concat(data_train)\n", - " y_train = X_train.pop(target_column_name).values\n", - " X_test = pd.concat(data_test)\n", - " y_test = 
X_test.pop(target_column_name).values\n", - " return X_train, y_train, X_test, y_test\n", - "\n", - "\n", - "n_test_periods = 6\n", - "n_train_periods = 30\n", - "X_train, y_train, X_test, y_test = get_timeseries(\n", - " train_len=n_train_periods,\n", - " test_len=n_test_periods,\n", - " time_column_name=TIME_COLUMN_NAME,\n", - " target_column_name=TARGET_COLUMN_NAME,\n", - " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n", - " time_series_number=2,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see what the training data looks like." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "X_train.tail()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# plot the example time series\n", - "import matplotlib.pyplot as plt\n", - "\n", - "whole_data = X_train.copy()\n", - "target_label = \"y\"\n", - "whole_data[target_label] = y_train\n", - "for g in whole_data.groupby(\"time_series_id\"):\n", - " plt.plot(g[1][\"date\"].values, g[1][\"y\"].values, label=g[0])\n", - "plt.legend()\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Prepare remote compute and data. \n", - "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# We need to save thw artificial data and then upload them to default workspace datastore.\n", - "DATA_PATH = \"fc_fn_data\"\n", - "DATA_PATH_X = \"{}/data_train.csv\".format(DATA_PATH)\n", - "if not os.path.isdir(\"data\"):\n", - " os.mkdir(\"data\")\n", - "pd.DataFrame(whole_data).to_csv(\"data/data_train.csv\", index=False)\n", - "# Upload saved data to the default data store.\n", - "ds = ws.get_default_datastore()\n", - "ds.upload(src_dir=\"./data\", target_path=DATA_PATH, overwrite=True, show_progress=True)\n", - "train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "amlcompute_cluster_name = \"fcfn-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print(\"Found existing cluster, use it.\")\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", - " )\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Create the configuration and train a forecaster \n", - "First generate the configuration, in which we:\n", - "* Set metadata columns: target, time column and time-series id column names.\n", - "* Validate our data using cross validation with rolling window method.\n", - "* Set normalized root mean squared error as a metric to select the best model.\n", - "* Set early termination to True, so the iterations through the models will stop when no improvements in accuracy score will be made.\n", - "* Set limitations on the length of experiment run to 15 minutes.\n", - "* Finally, we set the task to be forecasting.\n", - "* We apply the lag lead operator to the target value i.e. we use the previous values as a predictor for the future ones.\n", - "* [Optional] Forecast frequency parameter (freq) represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", - "\n", - "lags = [1, 2, 3]\n", - "forecast_horizon = n_test_periods\n", - "forecasting_parameters = ForecastingParameters(\n", - " time_column_name=TIME_COLUMN_NAME,\n", - " forecast_horizon=forecast_horizon,\n", - " time_series_id_column_names=[TIME_SERIES_ID_COLUMN_NAME],\n", - " target_lags=lags,\n", - " freq=\"H\", # Set the forecast frequency to be hourly\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Run the model selection and training process. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.workspace import Workspace\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.train.automl import AutoMLConfig\n", - "\n", - "\n", - "automl_config = AutoMLConfig(\n", - " task=\"forecasting\",\n", - " debug_log=\"automl_forecasting_function.log\",\n", - " primary_metric=\"normalized_root_mean_squared_error\",\n", - " experiment_timeout_hours=0.25,\n", - " enable_early_stopping=True,\n", - " training_data=train_data,\n", - " compute_target=compute_target,\n", - " n_cross_validations=3,\n", - " verbosity=logging.INFO,\n", - " max_concurrent_iterations=4,\n", - " max_cores_per_iteration=-1,\n", - " label_column_name=target_label,\n", - " forecasting_parameters=forecasting_parameters,\n", - ")\n", - "\n", - "remote_run = experiment.submit(automl_config, show_output=False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Retrieve the best model to use it further.\n", - "_, fitted_model = remote_run.get_output()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Forecasting from the trained model " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### X_train is directly followed by the X_test\n", - "\n", - "Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on daily and slower cadence typically fall into this category. Retraining the model every time benefits the accuracy because the most recent data is often the most informative.\n", - "\n", - "![Forecasting after training](forecast_function_at_train.png)\n", - "\n", - "We use `X_test` as a **forecast request** to generate the predictions." 
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Typical path: X_test is known, forecast all upcoming periods" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# The data set contains hourly data, the training set ends at 01/02/2000 at 05:00\n", - "\n", - "# These are predictions we are asking the model to make (does not contain thet target column y),\n", - "# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data\n", - "X_test" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test)\n", - "\n", - "# xy_nogap contains the predictions in the _automl_target_col column.\n", - "# Those same numbers are output in y_pred_no_gap\n", - "xy_nogap" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Confidence intervals" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Forecasting model may be used for the prediction of forecasting intervals by running ```forecast_quantiles()```. \n", - "This method accepts the same parameters as forecast()." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "quantiles = fitted_model.forecast_quantiles(X_test)\n", - "quantiles" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Distribution forecasts\n", - "\n", - "Often the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. \n", - "This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such case, the control point is usually something like \"we want the item to be in stock and not run out 99% of the time\". This is called a \"service level\". Here is how you get quantile forecasts." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# specify which quantiles you would like\n", - "fitted_model.quantiles = [0.01, 0.5, 0.95]\n", - "# use forecast_quantiles function, not the forecast() one\n", - "y_pred_quantiles = fitted_model.forecast_quantiles(X_test)\n", - "\n", - "# quantile forecasts returned in a Dataframe along with the time and time series id columns\n", - "y_pred_quantiles" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Destination-date forecast: \"just do something\"\n", - "\n", - "In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to \"destination date\". The destination date still needs to fit within the forecast horizon from training." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# We will take the destination date as a last date in the test set.\n", - "dest = max(X_test[TIME_COLUMN_NAME])\n", - "y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)\n", - "\n", - "# This form also shows how we imputed the predictors which were not given. (Not so well! 
Use with caution!)\n", - "xy_dest" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Forecasting away from training data \n", - "\n", - "Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model \"looks back\" -- uses previous values of the target -- then we somehow need to provide those values to the model.\n", - "\n", - "![Forecasting after training](forecast_function_away_from_train.png)\n", - "\n", - "The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per time-series, so each time-series can have a different forecast origin. \n", - "\n", - "The part of data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# generate the same kind of test data we trained on,\n", - "# but now make the train set much longer, so that the test set will be in the future\n", - "X_context, y_context, X_away, y_away = get_timeseries(\n", - " train_len=42, # train data was 30 steps long\n", - " test_len=4,\n", - " time_column_name=TIME_COLUMN_NAME,\n", - " target_column_name=TARGET_COLUMN_NAME,\n", - " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n", - " time_series_number=2,\n", - ")\n", - "\n", - "# end of the data we trained on\n", - "print(X_train.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].max())\n", - "# start of the data we want to predict on\n", - "print(X_away.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].min())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "There is a gap of 12 hours between end of training and beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one hour periods.) Using only `X_away` will fail without adding context data for the model to consume." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "try:\n", - " y_pred_away, xy_away = fitted_model.forecast(X_away)\n", - " xy_away\n", - "except Exception as e:\n", - " print(e)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "How should we read that eror message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! The model is attempting to forecast from the end of training data. But the requested forecast periods are past the forecast horizon. We need to provide a define `y` value to establish the forecast origin.\n", - "\n", - "We will use this helper function to take the required amount of context from the data preceding the testing data. It's definition is intentionally simplified to keep the idea in the clear." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "def make_forecasting_query(\n", - " fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback\n", - "):\n", - "\n", - " \"\"\"\n", - " This function will take the full dataset, and create the query\n", - " to predict all values of the time series from the `forecast_origin`\n", - " forward for the next `horizon` horizons. 
Context from previous\n", - " `lookback` periods will be included.\n", - "\n", - "\n", - "\n", - " fulldata: pandas.DataFrame a time series dataset. Needs to contain X and y.\n", - " time_column_name: string which column (must be in fulldata) is the time axis\n", - " target_column_name: string which column (must be in fulldata) is to be forecast\n", - " forecast_origin: datetime type the last time we (pretend to) have target values\n", - " horizon: timedelta how far forward, in time units (not periods)\n", - " lookback: timedelta how far back does the model look\n", - "\n", - " Example:\n", - "\n", - "\n", - " ```\n", - "\n", - " forecast_origin = pd.to_datetime(\"2012-09-01\") + pd.DateOffset(days=5) # forecast 5 days after end of training\n", - " print(forecast_origin)\n", - "\n", - " X_query, y_query = make_forecasting_query(data,\n", - " forecast_origin = forecast_origin,\n", - " horizon = pd.DateOffset(days=7), # 7 days into the future\n", - " lookback = pd.DateOffset(days=1), # model has lag 1 period (day)\n", - " )\n", - "\n", - " ```\n", - " \"\"\"\n", - "\n", - " X_past = fulldata[\n", - " (fulldata[time_column_name] > forecast_origin - lookback)\n", - " & (fulldata[time_column_name] <= forecast_origin)\n", - " ]\n", - "\n", - " X_future = fulldata[\n", - " (fulldata[time_column_name] > forecast_origin)\n", - " & (fulldata[time_column_name] <= forecast_origin + horizon)\n", - " ]\n", - "\n", - " y_past = X_past.pop(target_column_name).values.astype(np.float)\n", - " y_future = X_future.pop(target_column_name).values.astype(np.float)\n", - "\n", - " # Now take y_future and turn it into question marks\n", - " y_query = y_future.copy().astype(\n", - " np.float\n", - " ) # because sometimes life hands you an int\n", - " y_query.fill(np.NaN)\n", - "\n", - " print(\"X_past is \" + str(X_past.shape) + \" - shaped\")\n", - " print(\"X_future is \" + str(X_future.shape) + \" - shaped\")\n", - " print(\"y_past is \" + str(y_past.shape) + \" - shaped\")\n", - " print(\"y_query is \" + str(y_query.shape) + \" - shaped\")\n", - "\n", - " X_pred = pd.concat([X_past, X_future])\n", - " y_pred = np.concatenate([y_past, y_query])\n", - " return X_pred, y_pred" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Let's see where the context data ends - it ends, by construction, just before the testing data starts." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\n", - " X_context.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].agg(\n", - " [\"min\", \"max\", \"count\"]\n", - " )\n", - ")\n", - "print(\n", - " X_away.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].agg(\n", - " [\"min\", \"max\", \"count\"]\n", - " )\n", - ")\n", - "X_context.tail(5)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Since the length of the lookback is 3,\n", - "# we need to add 3 periods from the context to the request\n", - "# so that the model has the data it needs\n", - "\n", - "# Put the X and y back together for a while.\n", - "# They like each other and it makes them happy.\n", - "X_context[TARGET_COLUMN_NAME] = y_context\n", - "X_away[TARGET_COLUMN_NAME] = y_away\n", - "fulldata = pd.concat([X_context, X_away])\n", - "\n", - "# forecast origin is the last point of data, which is one 1-hr period before test\n", - "forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)\n", - "# it is indeed the last point of the context\n", - "assert forecast_origin == X_context[TIME_COLUMN_NAME].max()\n", - "print(\"Forecast origin: \" + str(forecast_origin))\n", - "\n", - "# the model uses lags and rolling windows to look back in time\n", - "n_lookback_periods = max(lags)\n", - "lookback = pd.DateOffset(hours=n_lookback_periods)\n", - "\n", - "horizon = pd.DateOffset(hours=forecast_horizon)\n", - "\n", - "# now make the forecast query from context (refer to figure)\n", - "X_pred, y_pred = make_forecasting_query(\n", - " fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME, forecast_origin, horizon, lookback\n", - ")\n", - "\n", - "# show the forecast request aligned\n", - "X_show = X_pred.copy()\n", - "X_show[TARGET_COLUMN_NAME] = y_pred\n", - "X_show" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Note that the forecast origin is at 17:00 for both time-series, and periods from 18:00 are to be forecast." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Now everything works\n", - "y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)\n", - "\n", - "# show the forecast aligned\n", - "X_show = xy_away.reset_index()\n", - "# without the generated features\n", - "X_show[[\"date\", \"time_series_id\", \"ext_predictor\", \"_automl_target_col\"]]\n", - "# prediction is in _automl_target_col" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Forecasting farther than the forecast horizon \n", - "When the forecast destination, or the latest date in the prediction data frame, is farther into the future than the specified forecast horizon, the `forecast()` function will still make point predictions out to the later date using a recursive operation mode. Internally, the method recursively applies the regular forecaster to generate context so that we can forecast further into the future. 
\n", - "\n", - "To illustrate the use-case and operation of recursive forecasting, we'll consider an example with a single time-series where the forecasting period directly follows the training period and is twice as long as the forecasting horizon given at training time.\n", - "\n", - "![Recursive_forecast_overview](recursive_forecast_overview_small.png)\n", - "\n", - "Internally, we apply the forecaster in an iterative manner and finish the forecast task in two interations. In the first iteration, we apply the forecaster and get the prediction for the first forecast-horizon periods (y_pred1). In the second iteraction, y_pred1 is used as the context to produce the prediction for the next forecast-horizon periods (y_pred2). The combination of (y_pred1 and y_pred2) gives the results for the total forecast periods. \n", - "\n", - "A caveat: forecast accuracy will likely be worse the farther we predict into the future since errors are compounded with recursive application of the forecaster.\n", - "\n", - "![Recursive_forecast_iter1](recursive_forecast_iter1.png)\n", - "![Recursive_forecast_iter2](recursive_forecast_iter2.png)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# generate the same kind of test data we trained on, but with a single time-series and test period twice as long\n", - "# as the forecast_horizon.\n", - "_, _, X_test_long, y_test_long = get_timeseries(\n", - " train_len=n_train_periods,\n", - " test_len=forecast_horizon * 2,\n", - " time_column_name=TIME_COLUMN_NAME,\n", - " target_column_name=TARGET_COLUMN_NAME,\n", - " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n", - " time_series_number=1,\n", - ")\n", - "\n", - "print(X_test_long.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].min())\n", - "print(X_test_long.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].max())" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# forecast() function will invoke the recursive forecast method internally.\n", - "y_pred_long, X_trans_long = fitted_model.forecast(X_test_long)\n", - "y_pred_long" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# What forecast() function does in this case is equivalent to iterating it twice over the test set as the following.\n", - "y_pred1, _ = fitted_model.forecast(X_test_long[:forecast_horizon])\n", - "y_pred_all, _ = fitted_model.forecast(\n", - " X_test_long, np.concatenate((y_pred1, np.full(forecast_horizon, np.nan)))\n", - ")\n", - "np.array_equal(y_pred_all, y_pred_long)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Confidence interval and distributional forecasts\n", - "AutoML cannot currently estimate forecast errors beyond the forecast horizon set during training, so the `forecast_quantiles()` function will return missing values for quantiles not equal to 0.5 beyond the forecast horizon. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "fitted_model.forecast_quantiles(X_test_long)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Similarly with the simple senarios illustrated above, forecasting farther than the forecast horizon in other senarios like 'multiple time-series', 'Destination-date forecast', and 'forecast away from the training data' are also automatically handled by the `forecast()` function. " - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "\n", + "#### Forecasting away from training data\n", + "\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "2. [Setup](#Setup)\n", + "3. [Data](#Data)\n", + "4. [Prepare remote compute and data.](#prepare_remote)\n", + "4. [Create the configuration and train a forecaster](#train)\n", + "5. [Forecasting from the trained model](#forecasting)\n", + "6. [Forecasting away from training data](#forecasting_away)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "This notebook demonstrates the full interface of the `forecast()` function. \n", + "\n", + "The best known and most frequent usage of `forecast` enables forecasting on test sets that immediately follows training data. \n", + "\n", + "However, in many use cases it is necessary to continue using the model for some time before retraining it. This happens especially in **high frequency forecasting** when forecasts need to be made more frequently than the model can be retrained. Examples are in Internet of Things and predictive cloud resource scaling.\n", + "\n", + "Here we show how to use the `forecast()` function when a time gap exists between training data and prediction period.\n", + "\n", + "Terminology:\n", + "* forecast origin: the last period when the target value is known\n", + "* forecast periods(s): the period(s) for which the value of the target is desired.\n", + "* lookback: how many past periods (before forecast origin) the model function depends on. The larger of number of lags and length of rolling window.\n", + "* prediction context: `lookback` periods immediately preceding the forecast origin\n", + "\n", + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/automl-forecasting-function.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Please make sure you have followed the `configuration.ipynb` notebook so that your ML workspace information is saved in the config file." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import pandas as pd\n", + "import numpy as np\n", + "import logging\n", + "import warnings\n", + "\n", + "import azureml.core\n", + "from azureml.core.dataset import Dataset\n", + "from pandas.tseries.frequencies import to_offset\n", + "from azureml.core.compute import AmlCompute\n", + "from azureml.core.compute import ComputeTarget\n", + "from azureml.core.runconfig import RunConfiguration\n", + "from azureml.core.conda_dependencies import CondaDependencies\n", + "\n", + "# Squash warning messages for cleaner output in the notebook\n", + "warnings.showwarning = lambda *args, **kwargs: None\n", + "\n", + "np.set_printoptions(precision=4, suppress=True, linewidth=120)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook is compatible with Azure ML SDK version 1.35.0 or later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.train.automl import AutoMLConfig\n", + "\n", + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for the run history container in the workspace\n", + "experiment_name = \"automl-forecast-function-demo\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"SKU\"] = ws.sku\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data\n", + "For demonstration purposes, we will generate the data artificially and use it for forecasting."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "TIME_COLUMN_NAME = \"date\"\n", + "TIME_SERIES_ID_COLUMN_NAME = \"time_series_id\"\n", + "TARGET_COLUMN_NAME = \"y\"\n", + "\n", + "\n", + "def get_timeseries(\n", + " train_len: int,\n", + " test_len: int,\n", + " time_column_name: str,\n", + " target_column_name: str,\n", + " time_series_id_column_name: str,\n", + " time_series_number: int = 1,\n", + " freq: str = \"H\",\n", + "):\n", + " \"\"\"\n", + " Return the time series of designed length.\n", + "\n", + " :param train_len: The length of training data (one series).\n", + " :type train_len: int\n", + " :param test_len: The length of testing data (one series).\n", + " :type test_len: int\n", + " :param time_column_name: The desired name of a time column.\n", + " :type time_column_name: str\n", + " :param time_series_number: The number of time series in the data set.\n", + " :type time_series_number: int\n", + " :param freq: The frequency string representing pandas offset.\n", + " see https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n", + " :type freq: str\n", + " :returns: the tuple of train and test data sets.\n", + " :rtype: tuple\n", + "\n", + " \"\"\"\n", + " data_train = [] # type: List[pd.DataFrame]\n", + " data_test = [] # type: List[pd.DataFrame]\n", + " data_length = train_len + test_len\n", + " for i in range(time_series_number):\n", + " X = pd.DataFrame(\n", + " {\n", + " time_column_name: pd.date_range(\n", + " start=\"2000-01-01\", periods=data_length, freq=freq\n", + " ),\n", + " target_column_name: np.arange(data_length).astype(float)\n", + " + np.random.rand(data_length)\n", + " + i * 5,\n", + " \"ext_predictor\": np.asarray(range(42, 42 + data_length)),\n", + " time_series_id_column_name: np.repeat(\"ts{}\".format(i), data_length),\n", + " }\n", + " )\n", + " data_train.append(X[:train_len])\n", + " data_test.append(X[train_len:])\n", + " X_train = pd.concat(data_train)\n", + " y_train = X_train.pop(target_column_name).values\n", + " X_test = pd.concat(data_test)\n", + " y_test = X_test.pop(target_column_name).values\n", + " return X_train, y_train, X_test, y_test\n", + "\n", + "\n", + "n_test_periods = 6\n", + "n_train_periods = 30\n", + "X_train, y_train, X_test, y_test = get_timeseries(\n", + " train_len=n_train_periods,\n", + " test_len=n_test_periods,\n", + " time_column_name=TIME_COLUMN_NAME,\n", + " target_column_name=TARGET_COLUMN_NAME,\n", + " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n", + " time_series_number=2,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's see what the training data looks like." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "X_train.tail()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# plot the example time series\n", + "import matplotlib.pyplot as plt\n", + "\n", + "whole_data = X_train.copy()\n", + "target_label = \"y\"\n", + "whole_data[target_label] = y_train\n", + "for g in whole_data.groupby(\"time_series_id\"):\n", + " plt.plot(g[1][\"date\"].values, g[1][\"y\"].values, label=g[0])\n", + "plt.legend()\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Prepare remote compute and data. 
\n", + "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the artificial data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# We need to save thw artificial data and then upload them to default workspace datastore.\n", + "DATA_PATH = \"fc_fn_data\"\n", + "DATA_PATH_X = \"{}/data_train.csv\".format(DATA_PATH)\n", + "if not os.path.isdir(\"data\"):\n", + " os.mkdir(\"data\")\n", + "pd.DataFrame(whole_data).to_csv(\"data/data_train.csv\", index=False)\n", + "# Upload saved data to the default data store.\n", + "ds = ws.get_default_datastore()\n", + "ds.upload(src_dir=\"./data\", target_path=DATA_PATH, overwrite=True, show_progress=True)\n", + "train_data = Dataset.Tabular.from_delimited_files(path=ds.path(DATA_PATH_X))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "amlcompute_cluster_name = \"fcfn-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create the configuration and train a forecaster \n", + "First generate the configuration, in which we:\n", + "* Set metadata columns: target, time column and time-series id column names.\n", + "* Validate our data using cross validation with rolling window method.\n", + "* Set normalized root mean squared error as a metric to select the best model.\n", + "* Set early termination to True, so the iterations through the models will stop when no improvements in accuracy score will be made.\n", + "* Set limitations on the length of experiment run to 15 minutes.\n", + "* Finally, we set the task to be forecasting.\n", + "* We apply the lag lead operator to the target value i.e. 
we use the previous values as a predictor for the future ones.\n", + "* [Optional] Forecast frequency parameter (freq) represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", + "\n", + "lags = [1, 2, 3]\n", + "forecast_horizon = n_test_periods\n", + "forecasting_parameters = ForecastingParameters(\n", + " time_column_name=TIME_COLUMN_NAME,\n", + " forecast_horizon=forecast_horizon,\n", + " time_series_id_column_names=[TIME_SERIES_ID_COLUMN_NAME],\n", + " target_lags=lags,\n", + " freq=\"H\", # Set the forecast frequency to be hourly\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Run the model selection and training process. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.train.automl import AutoMLConfig\n", + "\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"forecasting\",\n", + " debug_log=\"automl_forecasting_function.log\",\n", + " primary_metric=\"normalized_root_mean_squared_error\",\n", + " experiment_timeout_hours=0.25,\n", + " enable_early_stopping=True,\n", + " training_data=train_data,\n", + " compute_target=compute_target,\n", + " n_cross_validations=3,\n", + " verbosity=logging.INFO,\n", + " max_concurrent_iterations=4,\n", + " max_cores_per_iteration=-1,\n", + " label_column_name=target_label,\n", + " forecasting_parameters=forecasting_parameters,\n", + ")\n", + "\n", + "remote_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Retrieve the best model to use it further.\n", + "_, fitted_model = remote_run.get_output()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Forecasting from the trained model " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this section we will review the `forecast` interface for two main scenarios: forecasting right after the training data, and the more complex interface for forecasting when there is a gap (in the time sense) between training and testing data." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### X_train is directly followed by the X_test\n", + "\n", + "Let's first consider the case when the prediction period immediately follows the training data. This is typical in scenarios where we have the time to retrain the model every time we wish to forecast. Forecasts that are made on daily and slower cadence typically fall into this category. 
Retraining the model every time benefits the accuracy because the most recent data is often the most informative.\n",
+    "\n",
+    "![Forecasting after training](forecast_function_at_train.png)\n",
+    "\n",
+    "We use `X_test` as a **forecast request** to generate the predictions."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Typical path: X_test is known, forecast all upcoming periods"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# The data set contains hourly data; the training set ends at 01/02/2000 at 05:00\n",
+    "\n",
+    "# These are the predictions we are asking the model to make (the rows do not contain the target column y),\n",
+    "# for 6 periods beginning with 2000-01-02 06:00, which immediately follows the training data\n",
+    "X_test"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "y_pred_no_gap, xy_nogap = fitted_model.forecast(X_test)\n",
+    "\n",
+    "# xy_nogap contains the predictions in the _automl_target_col column.\n",
+    "# Those same numbers are output in y_pred_no_gap\n",
+    "xy_nogap"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Confidence intervals"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "The forecasting model may be used to produce forecasting intervals by running ```forecast_quantiles()```. \n",
+    "This method accepts the same parameters as `forecast()`."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "quantiles = fitted_model.forecast_quantiles(X_test)\n",
+    "quantiles"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Distribution forecasts\n",
+    "\n",
+    "Often the figure of interest is not just the point prediction, but the prediction at some quantile of the distribution. \n",
+    "This arises when the forecast is used to control some kind of inventory, for example of grocery items or virtual machines for a cloud service. In such cases, the control point is usually something like \"we want the item to be in stock and not run out 99% of the time\". This is called a \"service level\". Here is how you get quantile forecasts."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# specify which quantiles you would like\n",
+    "fitted_model.quantiles = [0.01, 0.5, 0.95]\n",
+    "# use the forecast_quantiles function, not the forecast() one\n",
+    "y_pred_quantiles = fitted_model.forecast_quantiles(X_test)\n",
+    "\n",
+    "# quantile forecasts are returned in a DataFrame along with the time and time series id columns\n",
+    "y_pred_quantiles"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "#### Destination-date forecast: \"just do something\"\n",
+    "\n",
+    "In some scenarios, the X_test is not known. The forecast is likely to be weak, because it is missing contemporaneous predictors, which we will need to impute. If you still wish to predict forward under the assumption that the last known values will be carried forward, you can forecast out to a \"destination date\". The destination date still needs to fit within the forecast horizon from training.",
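+    "\n",
+    "For intuition, a rough sketch of that constraint, using names already defined in this notebook (hourly data, so one period is one hour):\n",
+    "\n",
+    "```\n",
+    "# the farthest valid destination date is forecast_horizon periods\n",
+    "# past the last timestamp present in the training data\n",
+    "latest_training_date = max(X_train[TIME_COLUMN_NAME])\n",
+    "max_destination = latest_training_date + pd.DateOffset(hours=forecast_horizon)\n",
+    "```"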
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# We will take the destination date as the last date in the test set.\n",
+    "dest = max(X_test[TIME_COLUMN_NAME])\n",
+    "y_pred_dest, xy_dest = fitted_model.forecast(forecast_destination=dest)\n",
+    "\n",
+    "# This form also shows how we imputed the predictors which were not given. (Not so well! Use with caution!)\n",
+    "xy_dest"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## Forecasting away from training data \n",
+    "\n",
+    "Suppose we trained a model, some time passed, and now we want to apply the model without re-training. If the model \"looks back\" -- uses previous values of the target -- then we somehow need to provide those values to the model.\n",
+    "\n",
+    "![Forecasting after training](forecast_function_away_from_train.png)\n",
+    "\n",
+    "The notion of forecast origin comes into play: the forecast origin is **the last period for which we have seen the target value**. This applies per time-series, so each time-series can have a different forecast origin. \n",
+    "\n",
+    "The part of the data before the forecast origin is the **prediction context**. To provide the context values the model needs when it looks back, we pass definite values in `y_test` (aligned with corresponding times in `X_test`)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "# generate the same kind of test data we trained on,\n",
+    "# but now make the train set much longer, so that the test set will be in the future\n",
+    "X_context, y_context, X_away, y_away = get_timeseries(\n",
+    "    train_len=42,  # train data was 30 steps long\n",
+    "    test_len=4,\n",
+    "    time_column_name=TIME_COLUMN_NAME,\n",
+    "    target_column_name=TARGET_COLUMN_NAME,\n",
+    "    time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n",
+    "    time_series_number=2,\n",
+    ")\n",
+    "\n",
+    "# end of the data we trained on\n",
+    "print(X_train.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].max())\n",
+    "# start of the data we want to predict on\n",
+    "print(X_away.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].min())"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "There is a gap of 12 hours between the end of training and the beginning of `X_away`. (It looks like 13 because all timestamps point to the start of the one-hour periods.) Using only `X_away` will fail without adding context data for the model to consume."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "try:\n",
+    "    y_pred_away, xy_away = fitted_model.forecast(X_away)\n",
+    "    xy_away\n",
+    "except Exception as e:\n",
+    "    print(e)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "How should we read that error message? The forecast origin is at the last time the model saw an actual value of `y` (the target). That was at the end of the training data! The model is attempting to forecast from the end of the training data. But the requested forecast periods are past the forecast horizon. We need to provide definite `y` values to establish the forecast origin.\n",
+    "\n",
+    "We will use the helper function below to take the required amount of context from the data preceding the testing data. Its definition is intentionally simplified to keep the idea clear.",
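+    "\n",
+    "Schematically, the request we are building up pairs known context values with NaNs for the periods to be predicted (a sketch; `X_request` and the numbers are placeholders, not part of this notebook's flow):\n",
+    "\n",
+    "```\n",
+    "# y is aligned row-for-row with the forecast request X:\n",
+    "# the trailing actuals establish the forecast origin,\n",
+    "# and NaN marks every period the model should fill in\n",
+    "y_query = np.array([42.0, 43.5, 41.9, np.nan, np.nan, np.nan])\n",
+    "# y_pred_away, xy_away = fitted_model.forecast(X_request, y_query)\n",
+    "```"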
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "def make_forecasting_query(\n",
+    "    fulldata, time_column_name, target_column_name, forecast_origin, horizon, lookback\n",
+    "):\n",
+    "    \"\"\"\n",
+    "    This function will take the full dataset, and create the query\n",
+    "    to predict all values of the time series from the `forecast_origin`\n",
+    "    forward for the next `horizon` horizons. Context from the previous\n",
+    "    `lookback` periods will be included.\n",
+    "\n",
+    "    fulldata: pandas.DataFrame      a time series dataset. Needs to contain X and y.\n",
+    "    time_column_name: string        which column (must be in fulldata) is the time axis\n",
+    "    target_column_name: string      which column (must be in fulldata) is to be forecast\n",
+    "    forecast_origin: datetime type  the last time we (pretend to) have target values\n",
+    "    horizon: timedelta              how far forward, in time units (not periods)\n",
+    "    lookback: timedelta             how far back does the model look\n",
+    "\n",
+    "    Example:\n",
+    "\n",
+    "    ```\n",
+    "    # forecast 5 days after the end of training\n",
+    "    forecast_origin = pd.to_datetime(\"2012-09-01\") + pd.DateOffset(days=5)\n",
+    "    print(forecast_origin)\n",
+    "\n",
+    "    X_query, y_query = make_forecasting_query(\n",
+    "        data,\n",
+    "        TIME_COLUMN_NAME,\n",
+    "        TARGET_COLUMN_NAME,\n",
+    "        forecast_origin=forecast_origin,\n",
+    "        horizon=pd.DateOffset(days=7),  # 7 days into the future\n",
+    "        lookback=pd.DateOffset(days=1),  # model has lag 1 period (day)\n",
+    "    )\n",
+    "    ```\n",
+    "    \"\"\"\n",
+    "\n",
+    "    X_past = fulldata[\n",
+    "        (fulldata[time_column_name] > forecast_origin - lookback)\n",
+    "        & (fulldata[time_column_name] <= forecast_origin)\n",
+    "    ]\n",
+    "\n",
+    "    X_future = fulldata[\n",
+    "        (fulldata[time_column_name] > forecast_origin)\n",
+    "        & (fulldata[time_column_name] <= forecast_origin + horizon)\n",
+    "    ]\n",
+    "\n",
+    "    y_past = X_past.pop(target_column_name).values.astype(float)\n",
+    "    y_future = X_future.pop(target_column_name).values.astype(float)\n",
+    "\n",
+    "    # Now take y_future and turn it into question marks\n",
+    "    y_query = y_future.copy().astype(float)  # because sometimes life hands you an int\n",
+    "    y_query.fill(np.nan)\n",
+    "\n",
+    "    print(\"X_past is \" + str(X_past.shape) + \" - shaped\")\n",
+    "    print(\"X_future is \" + str(X_future.shape) + \" - shaped\")\n",
+    "    print(\"y_past is \" + str(y_past.shape) + \" - shaped\")\n",
+    "    print(\"y_query is \" + str(y_query.shape) + \" - shaped\")\n",
+    "\n",
+    "    X_pred = pd.concat([X_past, X_future])\n",
+    "    y_pred = np.concatenate([y_past, y_query])\n",
+    "    return X_pred, y_pred"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "Let's see where the context data ends - it ends, by construction, just before the testing data starts.",
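+    "\n",
+    "As a quick toy check of the helper defined above (the tiny frame and its values are hypothetical, unrelated to our generated data):\n",
+    "\n",
+    "```\n",
+    "toy = pd.DataFrame(\n",
+    "    {\n",
+    "        \"date\": pd.date_range(\"2000-01-01\", periods=6, freq=\"H\"),\n",
+    "        \"y\": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],\n",
+    "    }\n",
+    ")\n",
+    "X_q, y_q = make_forecasting_query(\n",
+    "    toy,\n",
+    "    \"date\",\n",
+    "    \"y\",\n",
+    "    forecast_origin=pd.Timestamp(\"2000-01-01 02:00\"),\n",
+    "    horizon=pd.DateOffset(hours=3),\n",
+    "    lookback=pd.DateOffset(hours=2),\n",
+    ")\n",
+    "# y_q is [2.0, 3.0, nan, nan, nan]: two context values, three periods to predict\n",
+    "```"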
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\n", + " X_context.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].agg(\n", + " [\"min\", \"max\", \"count\"]\n", + " )\n", + ")\n", + "print(\n", + " X_away.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].agg(\n", + " [\"min\", \"max\", \"count\"]\n", + " )\n", + ")\n", + "X_context.tail(5)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Since the length of the lookback is 3,\n", + "# we need to add 3 periods from the context to the request\n", + "# so that the model has the data it needs\n", + "\n", + "# Put the X and y back together for a while.\n", + "# They like each other and it makes them happy.\n", + "X_context[TARGET_COLUMN_NAME] = y_context\n", + "X_away[TARGET_COLUMN_NAME] = y_away\n", + "fulldata = pd.concat([X_context, X_away])\n", + "\n", + "# forecast origin is the last point of data, which is one 1-hr period before test\n", + "forecast_origin = X_away[TIME_COLUMN_NAME].min() - pd.DateOffset(hours=1)\n", + "# it is indeed the last point of the context\n", + "assert forecast_origin == X_context[TIME_COLUMN_NAME].max()\n", + "print(\"Forecast origin: \" + str(forecast_origin))\n", + "\n", + "# the model uses lags and rolling windows to look back in time\n", + "n_lookback_periods = max(lags)\n", + "lookback = pd.DateOffset(hours=n_lookback_periods)\n", + "\n", + "horizon = pd.DateOffset(hours=forecast_horizon)\n", + "\n", + "# now make the forecast query from context (refer to figure)\n", + "X_pred, y_pred = make_forecasting_query(\n", + " fulldata, TIME_COLUMN_NAME, TARGET_COLUMN_NAME, forecast_origin, horizon, lookback\n", + ")\n", + "\n", + "# show the forecast request aligned\n", + "X_show = X_pred.copy()\n", + "X_show[TARGET_COLUMN_NAME] = y_pred\n", + "X_show" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that the forecast origin is at 17:00 for both time-series, and periods from 18:00 are to be forecast." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Now everything works\n", + "y_pred_away, xy_away = fitted_model.forecast(X_pred, y_pred)\n", + "\n", + "# show the forecast aligned\n", + "X_show = xy_away.reset_index()\n", + "# without the generated features\n", + "X_show[[\"date\", \"time_series_id\", \"ext_predictor\", \"_automl_target_col\"]]\n", + "# prediction is in _automl_target_col" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Forecasting farther than the forecast horizon \n", + "When the forecast destination, or the latest date in the prediction data frame, is farther into the future than the specified forecast horizon, the `forecast()` function will still make point predictions out to the later date using a recursive operation mode. Internally, the method recursively applies the regular forecaster to generate context so that we can forecast further into the future. 
\n", + "\n", + "To illustrate the use-case and operation of recursive forecasting, we'll consider an example with a single time-series where the forecasting period directly follows the training period and is twice as long as the forecasting horizon given at training time.\n", + "\n", + "![Recursive_forecast_overview](recursive_forecast_overview_small.png)\n", + "\n", + "Internally, we apply the forecaster in an iterative manner and finish the forecast task in two interations. In the first iteration, we apply the forecaster and get the prediction for the first forecast-horizon periods (y_pred1). In the second iteraction, y_pred1 is used as the context to produce the prediction for the next forecast-horizon periods (y_pred2). The combination of (y_pred1 and y_pred2) gives the results for the total forecast periods. \n", + "\n", + "A caveat: forecast accuracy will likely be worse the farther we predict into the future since errors are compounded with recursive application of the forecaster.\n", + "\n", + "![Recursive_forecast_iter1](recursive_forecast_iter1.png)\n", + "![Recursive_forecast_iter2](recursive_forecast_iter2.png)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# generate the same kind of test data we trained on, but with a single time-series and test period twice as long\n", + "# as the forecast_horizon.\n", + "_, _, X_test_long, y_test_long = get_timeseries(\n", + " train_len=n_train_periods,\n", + " test_len=forecast_horizon * 2,\n", + " time_column_name=TIME_COLUMN_NAME,\n", + " target_column_name=TARGET_COLUMN_NAME,\n", + " time_series_id_column_name=TIME_SERIES_ID_COLUMN_NAME,\n", + " time_series_number=1,\n", + ")\n", + "\n", + "print(X_test_long.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].min())\n", + "print(X_test_long.groupby(TIME_SERIES_ID_COLUMN_NAME)[TIME_COLUMN_NAME].max())" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# forecast() function will invoke the recursive forecast method internally.\n", + "y_pred_long, X_trans_long = fitted_model.forecast(X_test_long)\n", + "y_pred_long" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# What forecast() function does in this case is equivalent to iterating it twice over the test set as the following.\n", + "y_pred1, _ = fitted_model.forecast(X_test_long[:forecast_horizon])\n", + "y_pred_all, _ = fitted_model.forecast(\n", + " X_test_long, np.concatenate((y_pred1, np.full(forecast_horizon, np.nan)))\n", + ")\n", + "np.array_equal(y_pred_all, y_pred_long)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Confidence interval and distributional forecasts\n", + "AutoML cannot currently estimate forecast errors beyond the forecast horizon set during training, so the `forecast_quantiles()` function will return missing values for quantiles not equal to 0.5 beyond the forecast horizon. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "fitted_model.forecast_quantiles(X_test_long)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Similarly with the simple senarios illustrated above, forecasting farther than the forecast horizon in other senarios like 'multiple time-series', 'Destination-date forecast', and 'forecast away from the training data' are also automatically handled by the `forecast()` function. " + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "category": "tutorial", + "compute": [ + "Remote" + ], + "datasets": [ + "None" + ], + "deployment": [ + "None" + ], + "exclude_from_index": false, + "framework": [ + "Azure ML AutoML" + ], + "friendly_name": "Forecasting away from training data", + "index_order": 3, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.8" + }, + "tags": [ + "Forecasting", + "Confidence Intervals" ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "category": "tutorial", - "compute": [ - "Remote" - ], - "datasets": [ - "None" - ], - "deployment": [ - "None" - ], - "exclude_from_index": false, - "framework": [ - "Azure ML AutoML" - ], - "friendly_name": "Forecasting away from training data", - "index_order": 3, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" - }, - "tags": [ - "Forecasting", - "Confidence Intervals" - ], - "task": "Forecasting" - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file + "task": "Forecasting" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb new file mode 100644 index 000000000..0b5681f60 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb @@ -0,0 +1,725 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-beer-remote/auto-ml-forecasting-beer-remote.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "# Automated Machine Learning\n", + "**Github DAU Forecasting**\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Data](#Data)\n", + "1. [Train](#Train)\n", + "1. 
[Evaluate](#Evaluate)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "## Introduction\n",
+    "This notebook demonstrates demand forecasting for the Github Daily Active Users dataset using AutoML.\n",
+    "\n",
+    "AutoML highlights here include using Deep Learning forecasts, Arima, Prophet, Remote Execution and Remote Inferencing, and working with the `forecast` function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.\n",
+    "\n",
+    "Make sure you have executed the [configuration](../../../configuration.ipynb) before running this notebook.\n",
+    "\n",
+    "Notebook synopsis:\n",
+    "\n",
+    "1. Creating an Experiment in an existing Workspace\n",
+    "2. Configuration and remote run of AutoML for a time-series model exploring Regression learners, Arima, Prophet and DNNs\n",
+    "3. Evaluating the fitted model using a rolling test"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "## Setup\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "outputs": [],
+   "source": [
+    "import os\n",
+    "import azureml.core\n",
+    "import pandas as pd\n",
+    "import numpy as np\n",
+    "import logging\n",
+    "import warnings\n",
+    "\n",
+    "from pandas.tseries.frequencies import to_offset\n",
+    "\n",
+    "# Squash warning messages for cleaner output in the notebook\n",
+    "warnings.showwarning = lambda *args, **kwargs: None\n",
+    "\n",
+    "from azureml.core.workspace import Workspace\n",
+    "from azureml.core.experiment import Experiment\n",
+    "from azureml.train.automl import AutoMLConfig\n",
+    "from matplotlib import pyplot as plt\n",
+    "from sklearn.metrics import mean_absolute_error, mean_squared_error\n",
+    "from azureml.train.estimator import Estimator"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "This notebook is compatible with Azure ML SDK version 1.35.0 or later."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.",
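+    "\n",
+    "For example, once the experiment exists (it is created in the next cell), every submitted attempt shows up as a run under it (a small sketch using the SDK's standard API):\n",
+    "\n",
+    "```\n",
+    "# each AutoML submission becomes a Run owned by the Experiment\n",
+    "for run in experiment.get_runs():\n",
+    "    print(run.id, run.status)\n",
+    "```"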
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for the run history container in the workspace\n", + "experiment_name = \"github-remote-cpu\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "### Using AmlCompute\n", + "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you use `AmlCompute` as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "cpu_cluster_name = \"github-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "## Data\n", + "Read Github DAU data from file, and preview data." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "Let's set up what we know about the dataset. \n", + "\n", + "**Target column** is what we want to forecast.\n", + "\n", + "**Time column** is the time axis along which to predict.\n", + "\n", + "**Time series identifier columns** are identified by values of the columns listed `time_series_id_column_names`, for example \"store\" and \"item\" if your data has multiple time series of sales, one series for each combination of store and item sold.\n", + "\n", + "**Forecast frequency (freq)** This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. 
Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.\n", + "\n", + "This dataset has only one time series. Please see the [orange juice notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales) for an example of a multi-time series dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "import pandas as pd\n", + "from pandas import DataFrame\n", + "from pandas import Grouper\n", + "from pandas import concat\n", + "from pandas.plotting import register_matplotlib_converters\n", + "\n", + "register_matplotlib_converters()\n", + "plt.figure(figsize=(20, 10))\n", + "plt.tight_layout()\n", + "\n", + "plt.subplot(2, 1, 1)\n", + "plt.title(\"Github Daily Active User By Year\")\n", + "df = pd.read_csv(\"github_dau_2011-2018_train.csv\", parse_dates=True, index_col=\"date\")\n", + "test_df = pd.read_csv(\n", + " \"github_dau_2011-2018_test.csv\", parse_dates=True, index_col=\"date\"\n", + ")\n", + "plt.plot(df)\n", + "\n", + "plt.subplot(2, 1, 2)\n", + "plt.title(\"Github Daily Active User By Month\")\n", + "groups = df.groupby(df.index.month)\n", + "months = concat([DataFrame(x[1].values) for x in groups], axis=1)\n", + "months = DataFrame(months)\n", + "months.columns = range(1, 49)\n", + "months.boxplot()\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "target_column_name = \"count\"\n", + "time_column_name = \"date\"\n", + "time_series_id_column_names = []\n", + "freq = \"D\" # Daily data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Split Training data into Train and Validation set and Upload to Datastores" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "from helper import split_fraction_by_grain\n", + "from helper import split_full_for_forecasting\n", + "\n", + "train, valid = split_full_for_forecasting(df, time_column_name)\n", + "train.to_csv(\"train.csv\")\n", + "valid.to_csv(\"valid.csv\")\n", + "test_df.to_csv(\"test.csv\")\n", + "\n", + "datastore = ws.get_default_datastore()\n", + "datastore.upload_files(\n", + " files=[\"./train.csv\"],\n", + " target_path=\"github-dataset/tabular/\",\n", + " overwrite=True,\n", + " show_progress=True,\n", + ")\n", + "datastore.upload_files(\n", + " files=[\"./valid.csv\"],\n", + " target_path=\"github-dataset/tabular/\",\n", + " overwrite=True,\n", + " show_progress=True,\n", + ")\n", + "datastore.upload_files(\n", + " files=[\"./test.csv\"],\n", + " target_path=\"github-dataset/tabular/\",\n", + " overwrite=True,\n", + " show_progress=True,\n", + ")\n", + "\n", + "from azureml.core import Dataset\n", + "\n", + "train_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"github-dataset/tabular/train.csv\")]\n", + ")\n", + "valid_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"github-dataset/tabular/valid.csv\")]\n", + ")\n", + "test_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"github-dataset/tabular/test.csv\")]\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": 
false,
+    "hidePrompt": false
+   },
+   "source": [
+    "### Setting forecaster maximum horizon \n",
+    "\n",
+    "The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 12 periods (i.e. 12 days, since this data is daily). Notice that this is much shorter than the length of the test set; we will need to use a rolling test to evaluate the performance on the whole test set. For more discussion of forecast horizons and guiding principles for setting them, please see the [energy demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand)."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "outputs": [],
+   "source": [
+    "forecast_horizon = 12"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "## Train\n",
+    "\n",
+    "Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.\n",
+    "\n",
+    "|Property|Description|\n",
+    "|-|-|\n",
+    "|**task**|forecasting|\n",
+    "|**primary_metric**|This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error|\n",
+    "|**iteration_timeout_minutes**|Time limit in minutes for each iteration.|\n",
+    "|**training_data**|Input dataset, containing both features and label column.|\n",
+    "|**label_column_name**|The name of the label column.|\n",
+    "|**enable_dnn**|Enable Forecasting DNNs|\n"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n",
+    "\n",
+    "forecasting_parameters = ForecastingParameters(\n",
+    "    time_column_name=time_column_name,\n",
+    "    forecast_horizon=forecast_horizon,\n",
+    "    freq=\"D\",  # Set the forecast frequency to be daily\n",
+    ")\n",
+    "\n",
+    "# We disable the enable_early_stopping flag to ensure the DNN model is recommended, for demonstration purposes.\n",
+    "automl_config = AutoMLConfig(\n",
+    "    task=\"forecasting\",\n",
+    "    primary_metric=\"normalized_root_mean_squared_error\",\n",
+    "    experiment_timeout_hours=1,\n",
+    "    training_data=train_dataset,\n",
+    "    label_column_name=target_column_name,\n",
+    "    validation_data=valid_dataset,\n",
+    "    verbosity=logging.INFO,\n",
+    "    compute_target=compute_target,\n",
+    "    max_concurrent_iterations=4,\n",
+    "    max_cores_per_iteration=-1,\n",
+    "    enable_dnn=True,\n",
+    "    enable_early_stopping=False,\n",
+    "    forecasting_parameters=forecasting_parameters,\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "We will now run the experiment, starting with 10 iterations of model search. The experiment can be continued for more iterations if more accurate results are required. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "outputs": [],
+   "source": [
+    "remote_run = experiment.submit(automl_config, show_output=True)"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "outputs": [],
+   "source": [
+    "# If you need to retrieve a run that already started, use the following code\n",
+    "# from azureml.train.automl.run import AutoMLRun\n",
+    "# remote_run = AutoMLRun(experiment = experiment, run_id = '')"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "hideCode": false,
+    "hidePrompt": false
+   },
+   "source": [
+    "### Retrieve the Best Model for Each Algorithm\n",
+    "Below we select the best pipeline from our iterations. The `get_output` method on the run object returns the best run and the fitted model for the last fit invocation. There are overloads on `get_output` that allow you to retrieve the best run and fitted model for any logged metric or a particular iteration.",
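+    "\n",
+    "For example (a sketch of those overloads; `metric` and `iteration` are the relevant keyword arguments):\n",
+    "\n",
+    "```\n",
+    "# best run and model according to a specific logged metric\n",
+    "run_by_metric, model_by_metric = remote_run.get_output(\n",
+    "    metric=\"normalized_root_mean_squared_error\"\n",
+    ")\n",
+    "# run and model from a particular iteration\n",
+    "run_iter, model_iter = remote_run.get_output(iteration=3)\n",
+    "```"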
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "from helper import get_result_df\n", + "\n", + "summary_df = get_result_df(remote_run)\n", + "summary_df" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "from azureml.core.run import Run\n", + "from azureml.widgets import RunDetails\n", + "\n", + "forecast_model = \"TCNForecaster\"\n", + "if not forecast_model in summary_df[\"run_id\"]:\n", + " forecast_model = \"ForecastTCN\"\n", + "\n", + "best_dnn_run_id = summary_df[\"run_id\"][forecast_model]\n", + "best_dnn_run = Run(experiment, best_dnn_run_id)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "best_dnn_run.parent\n", + "RunDetails(best_dnn_run.parent).show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "best_dnn_run\n", + "RunDetails(best_dnn_run).show()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "## Evaluate on Test Data" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "source": [ + "We now use the best fitted model from the AutoML Run to make forecasts for the test set. \n", + "\n", + "We always score on the original dataset whose schema matches the training set schema." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "\n", + "test_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"github-dataset/tabular/test.csv\")]\n", + ")\n", + "# preview the first 3 rows of the dataset\n", + "test_dataset.take(5).to_pandas_dataframe()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "compute_target = ws.compute_targets[\"github-cluster\"]\n", + "test_experiment = Experiment(ws, experiment_name + \"_test\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "import os\n", + "import shutil\n", + "\n", + "script_folder = os.path.join(os.getcwd(), \"inference\")\n", + "os.makedirs(script_folder, exist_ok=True)\n", + "shutil.copy(\"infer.py\", script_folder)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from helper import run_inference\n", + "\n", + "test_run = run_inference(\n", + " test_experiment,\n", + " compute_target,\n", + " script_folder,\n", + " best_dnn_run,\n", + " test_dataset,\n", + " valid_dataset,\n", + " forecast_horizon,\n", + " target_column_name,\n", + " time_column_name,\n", + " freq,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "RunDetails(test_run).show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from helper import run_multiple_inferences\n", + "\n", + "summary_df = run_multiple_inferences(\n", + " summary_df,\n", + " experiment,\n", + 
" test_experiment,\n", + " compute_target,\n", + " script_folder,\n", + " test_dataset,\n", + " valid_dataset,\n", + " forecast_horizon,\n", + " target_column_name,\n", + " time_column_name,\n", + " freq,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "for run_name, run_summary in summary_df.iterrows():\n", + " print(run_name)\n", + " print(run_summary)\n", + " run_id = run_summary.run_id\n", + " test_run_id = run_summary.test_run_id\n", + " test_run = Run(test_experiment, test_run_id)\n", + " test_run.wait_for_completion()\n", + " test_score = test_run.get_metrics()[run_summary.primary_metric]\n", + " summary_df.loc[summary_df.run_id == run_id, \"Test Score\"] = test_score\n", + " print(\"Test Score: \", test_score)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "hideCode": false, + "hidePrompt": false + }, + "outputs": [], + "source": [ + "summary_df" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "hide_code_all_hidden": false, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/github_dau_2011-2018_test.csv b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/github_dau_2011-2018_test.csv new file mode 100644 index 000000000..6061b0d21 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/github_dau_2011-2018_test.csv @@ -0,0 +1,455 @@ +date,count,day_of_week,month_of_year,holiday +2017-06-04,104663,6.0,5.0,0.0 +2017-06-05,155824,0.0,5.0,0.0 +2017-06-06,164908,1.0,5.0,0.0 +2017-06-07,170309,2.0,5.0,0.0 +2017-06-08,164256,3.0,5.0,0.0 +2017-06-09,153406,4.0,5.0,0.0 +2017-06-10,97024,5.0,5.0,0.0 +2017-06-11,103442,6.0,5.0,0.0 +2017-06-12,160768,0.0,5.0,0.0 +2017-06-13,166288,1.0,5.0,0.0 +2017-06-14,163819,2.0,5.0,0.0 +2017-06-15,157593,3.0,5.0,0.0 +2017-06-16,149259,4.0,5.0,0.0 +2017-06-17,95579,5.0,5.0,0.0 +2017-06-18,98723,6.0,5.0,0.0 +2017-06-19,159076,0.0,5.0,0.0 +2017-06-20,163340,1.0,5.0,0.0 +2017-06-21,163344,2.0,5.0,0.0 +2017-06-22,159528,3.0,5.0,0.0 +2017-06-23,146563,4.0,5.0,0.0 +2017-06-24,92631,5.0,5.0,0.0 +2017-06-25,96549,6.0,5.0,0.0 +2017-06-26,153249,0.0,5.0,0.0 +2017-06-27,160357,1.0,5.0,0.0 +2017-06-28,159941,2.0,5.0,0.0 +2017-06-29,156781,3.0,5.0,0.0 +2017-06-30,144709,4.0,5.0,0.0 +2017-07-01,89101,5.0,6.0,0.0 +2017-07-02,93046,6.0,6.0,0.0 +2017-07-03,144113,0.0,6.0,0.0 +2017-07-04,143061,1.0,6.0,1.0 +2017-07-05,154603,2.0,6.0,0.0 +2017-07-06,157200,3.0,6.0,0.0 +2017-07-07,147213,4.0,6.0,0.0 +2017-07-08,92348,5.0,6.0,0.0 +2017-07-09,97018,6.0,6.0,0.0 +2017-07-10,157192,0.0,6.0,0.0 +2017-07-11,161819,1.0,6.0,0.0 +2017-07-12,161998,2.0,6.0,0.0 +2017-07-13,160280,3.0,6.0,0.0 +2017-07-14,146818,4.0,6.0,0.0 +2017-07-15,93041,5.0,6.0,0.0 +2017-07-16,97505,6.0,6.0,0.0 +2017-07-17,156167,0.0,6.0,0.0 +2017-07-18,162855,1.0,6.0,0.0 +2017-07-19,162519,2.0,6.0,0.0 +2017-07-20,159941,3.0,6.0,0.0 
+2017-07-21,148460,4.0,6.0,0.0 +2017-07-22,93431,5.0,6.0,0.0 +2017-07-23,98553,6.0,6.0,0.0 +2017-07-24,156202,0.0,6.0,0.0 +2017-07-25,162503,1.0,6.0,0.0 +2017-07-26,158479,2.0,6.0,0.0 +2017-07-27,158192,3.0,6.0,0.0 +2017-07-28,147108,4.0,6.0,0.0 +2017-07-29,93799,5.0,6.0,0.0 +2017-07-30,97920,6.0,6.0,0.0 +2017-07-31,152197,0.0,6.0,0.0 +2017-08-01,158477,1.0,7.0,0.0 +2017-08-02,159089,2.0,7.0,0.0 +2017-08-03,157182,3.0,7.0,0.0 +2017-08-04,146345,4.0,7.0,0.0 +2017-08-05,92534,5.0,7.0,0.0 +2017-08-06,97128,6.0,7.0,0.0 +2017-08-07,151359,0.0,7.0,0.0 +2017-08-08,159895,1.0,7.0,0.0 +2017-08-09,158329,2.0,7.0,0.0 +2017-08-10,155468,3.0,7.0,0.0 +2017-08-11,144914,4.0,7.0,0.0 +2017-08-12,92258,5.0,7.0,0.0 +2017-08-13,95933,6.0,7.0,0.0 +2017-08-14,147706,0.0,7.0,0.0 +2017-08-15,151115,1.0,7.0,0.0 +2017-08-16,157640,2.0,7.0,0.0 +2017-08-17,156600,3.0,7.0,0.0 +2017-08-18,146980,4.0,7.0,0.0 +2017-08-19,94592,5.0,7.0,0.0 +2017-08-20,99320,6.0,7.0,0.0 +2017-08-21,145727,0.0,7.0,0.0 +2017-08-22,160260,1.0,7.0,0.0 +2017-08-23,160440,2.0,7.0,0.0 +2017-08-24,157830,3.0,7.0,0.0 +2017-08-25,145822,4.0,7.0,0.0 +2017-08-26,94706,5.0,7.0,0.0 +2017-08-27,99047,6.0,7.0,0.0 +2017-08-28,152112,0.0,7.0,0.0 +2017-08-29,162440,1.0,7.0,0.0 +2017-08-30,162902,2.0,7.0,0.0 +2017-08-31,159498,3.0,7.0,0.0 +2017-09-01,145689,4.0,8.0,0.0 +2017-09-02,93589,5.0,8.0,0.0 +2017-09-03,100058,6.0,8.0,0.0 +2017-09-04,140865,0.0,8.0,1.0 +2017-09-05,165715,1.0,8.0,0.0 +2017-09-06,167463,2.0,8.0,0.0 +2017-09-07,164811,3.0,8.0,0.0 +2017-09-08,156157,4.0,8.0,0.0 +2017-09-09,101358,5.0,8.0,0.0 +2017-09-10,107915,6.0,8.0,0.0 +2017-09-11,167845,0.0,8.0,0.0 +2017-09-12,172756,1.0,8.0,0.0 +2017-09-13,172851,2.0,8.0,0.0 +2017-09-14,171675,3.0,8.0,0.0 +2017-09-15,159266,4.0,8.0,0.0 +2017-09-16,103547,5.0,8.0,0.0 +2017-09-17,110964,6.0,8.0,0.0 +2017-09-18,170976,0.0,8.0,0.0 +2017-09-19,177864,1.0,8.0,0.0 +2017-09-20,173567,2.0,8.0,0.0 +2017-09-21,172017,3.0,8.0,0.0 +2017-09-22,161357,4.0,8.0,0.0 +2017-09-23,104681,5.0,8.0,0.0 +2017-09-24,111711,6.0,8.0,0.0 +2017-09-25,173517,0.0,8.0,0.0 +2017-09-26,180049,1.0,8.0,0.0 +2017-09-27,178307,2.0,8.0,0.0 +2017-09-28,174157,3.0,8.0,0.0 +2017-09-29,161707,4.0,8.0,0.0 +2017-09-30,110536,5.0,8.0,0.0 +2017-10-01,106505,6.0,9.0,0.0 +2017-10-02,157565,0.0,9.0,0.0 +2017-10-03,164764,1.0,9.0,0.0 +2017-10-04,163383,2.0,9.0,0.0 +2017-10-05,162847,3.0,9.0,0.0 +2017-10-06,153575,4.0,9.0,0.0 +2017-10-07,107472,5.0,9.0,0.0 +2017-10-08,116127,6.0,9.0,0.0 +2017-10-09,174457,0.0,9.0,1.0 +2017-10-10,185217,1.0,9.0,0.0 +2017-10-11,185120,2.0,9.0,0.0 +2017-10-12,180844,3.0,9.0,0.0 +2017-10-13,170178,4.0,9.0,0.0 +2017-10-14,112754,5.0,9.0,0.0 +2017-10-15,121251,6.0,9.0,0.0 +2017-10-16,183906,0.0,9.0,0.0 +2017-10-17,188945,1.0,9.0,0.0 +2017-10-18,187297,2.0,9.0,0.0 +2017-10-19,183867,3.0,9.0,0.0 +2017-10-20,173021,4.0,9.0,0.0 +2017-10-21,115851,5.0,9.0,0.0 +2017-10-22,126088,6.0,9.0,0.0 +2017-10-23,189452,0.0,9.0,0.0 +2017-10-24,194412,1.0,9.0,0.0 +2017-10-25,192293,2.0,9.0,0.0 +2017-10-26,190163,3.0,9.0,0.0 +2017-10-27,177053,4.0,9.0,0.0 +2017-10-28,114934,5.0,9.0,0.0 +2017-10-29,125289,6.0,9.0,0.0 +2017-10-30,189245,0.0,9.0,0.0 +2017-10-31,191480,1.0,9.0,0.0 +2017-11-01,182281,2.0,10.0,0.0 +2017-11-02,186351,3.0,10.0,0.0 +2017-11-03,175422,4.0,10.0,0.0 +2017-11-04,118160,5.0,10.0,0.0 +2017-11-05,127602,6.0,10.0,0.0 +2017-11-06,191067,0.0,10.0,0.0 +2017-11-07,197083,1.0,10.0,0.0 +2017-11-08,194333,2.0,10.0,0.0 +2017-11-09,193914,3.0,10.0,0.0 +2017-11-10,179933,4.0,10.0,1.0 +2017-11-11,121346,5.0,10.0,0.0 
+2017-11-12,131900,6.0,10.0,0.0 +2017-11-13,196969,0.0,10.0,0.0 +2017-11-14,201949,1.0,10.0,0.0 +2017-11-15,198424,2.0,10.0,0.0 +2017-11-16,196902,3.0,10.0,0.0 +2017-11-17,183893,4.0,10.0,0.0 +2017-11-18,122767,5.0,10.0,0.0 +2017-11-19,130890,6.0,10.0,0.0 +2017-11-20,194515,0.0,10.0,0.0 +2017-11-21,198601,1.0,10.0,0.0 +2017-11-22,191041,2.0,10.0,0.0 +2017-11-23,170321,3.0,10.0,1.0 +2017-11-24,155623,4.0,10.0,0.0 +2017-11-25,115759,5.0,10.0,0.0 +2017-11-26,128771,6.0,10.0,0.0 +2017-11-27,199419,0.0,10.0,0.0 +2017-11-28,207253,1.0,10.0,0.0 +2017-11-29,205406,2.0,10.0,0.0 +2017-11-30,200674,3.0,10.0,0.0 +2017-12-01,187017,4.0,11.0,0.0 +2017-12-02,129735,5.0,11.0,0.0 +2017-12-03,139120,6.0,11.0,0.0 +2017-12-04,205505,0.0,11.0,0.0 +2017-12-05,208218,1.0,11.0,0.0 +2017-12-06,202480,2.0,11.0,0.0 +2017-12-07,197822,3.0,11.0,0.0 +2017-12-08,180686,4.0,11.0,0.0 +2017-12-09,123667,5.0,11.0,0.0 +2017-12-10,130987,6.0,11.0,0.0 +2017-12-11,193901,0.0,11.0,0.0 +2017-12-12,194997,1.0,11.0,0.0 +2017-12-13,192063,2.0,11.0,0.0 +2017-12-14,186496,3.0,11.0,0.0 +2017-12-15,170812,4.0,11.0,0.0 +2017-12-16,110474,5.0,11.0,0.0 +2017-12-17,118165,6.0,11.0,0.0 +2017-12-18,176843,0.0,11.0,0.0 +2017-12-19,179550,1.0,11.0,0.0 +2017-12-20,173506,2.0,11.0,0.0 +2017-12-21,165910,3.0,11.0,0.0 +2017-12-22,145886,4.0,11.0,0.0 +2017-12-23,95246,5.0,11.0,0.0 +2017-12-24,88781,6.0,11.0,0.0 +2017-12-25,98189,0.0,11.0,1.0 +2017-12-26,121383,1.0,11.0,0.0 +2017-12-27,135300,2.0,11.0,0.0 +2017-12-28,136827,3.0,11.0,0.0 +2017-12-29,127700,4.0,11.0,0.0 +2017-12-30,93014,5.0,11.0,0.0 +2017-12-31,82878,6.0,11.0,0.0 +2018-01-01,86419,0.0,0.0,1.0 +2018-01-02,147428,1.0,0.0,0.0 +2018-01-03,162193,2.0,0.0,0.0 +2018-01-04,163784,3.0,0.0,0.0 +2018-01-05,158606,4.0,0.0,0.0 +2018-01-06,113467,5.0,0.0,0.0 +2018-01-07,118313,6.0,0.0,0.0 +2018-01-08,175623,0.0,0.0,0.0 +2018-01-09,183880,1.0,0.0,0.0 +2018-01-10,183945,2.0,0.0,0.0 +2018-01-11,181769,3.0,0.0,0.0 +2018-01-12,170552,4.0,0.0,0.0 +2018-01-13,115707,5.0,0.0,0.0 +2018-01-14,121191,6.0,0.0,0.0 +2018-01-15,176127,0.0,0.0,1.0 +2018-01-16,188032,1.0,0.0,0.0 +2018-01-17,189871,2.0,0.0,0.0 +2018-01-18,189348,3.0,0.0,0.0 +2018-01-19,177456,4.0,0.0,0.0 +2018-01-20,123321,5.0,0.0,0.0 +2018-01-21,128306,6.0,0.0,0.0 +2018-01-22,186132,0.0,0.0,0.0 +2018-01-23,197618,1.0,0.0,0.0 +2018-01-24,196402,2.0,0.0,0.0 +2018-01-25,192722,3.0,0.0,0.0 +2018-01-26,179415,4.0,0.0,0.0 +2018-01-27,125769,5.0,0.0,0.0 +2018-01-28,133306,6.0,0.0,0.0 +2018-01-29,194151,0.0,0.0,0.0 +2018-01-30,198680,1.0,0.0,0.0 +2018-01-31,198652,2.0,0.0,0.0 +2018-02-01,195472,3.0,1.0,0.0 +2018-02-02,183173,4.0,1.0,0.0 +2018-02-03,124276,5.0,1.0,0.0 +2018-02-04,129054,6.0,1.0,0.0 +2018-02-05,190024,0.0,1.0,0.0 +2018-02-06,198658,1.0,1.0,0.0 +2018-02-07,198272,2.0,1.0,0.0 +2018-02-08,195339,3.0,1.0,0.0 +2018-02-09,183086,4.0,1.0,0.0 +2018-02-10,122536,5.0,1.0,0.0 +2018-02-11,133033,6.0,1.0,0.0 +2018-02-12,185386,0.0,1.0,0.0 +2018-02-13,184789,1.0,1.0,0.0 +2018-02-14,176089,2.0,1.0,0.0 +2018-02-15,171317,3.0,1.0,0.0 +2018-02-16,162693,4.0,1.0,0.0 +2018-02-17,116342,5.0,1.0,0.0 +2018-02-18,122466,6.0,1.0,0.0 +2018-02-19,172364,0.0,1.0,1.0 +2018-02-20,185896,1.0,1.0,0.0 +2018-02-21,188166,2.0,1.0,0.0 +2018-02-22,189427,3.0,1.0,0.0 +2018-02-23,178732,4.0,1.0,0.0 +2018-02-24,132664,5.0,1.0,0.0 +2018-02-25,134008,6.0,1.0,0.0 +2018-02-26,200075,0.0,1.0,0.0 +2018-02-27,207996,1.0,1.0,0.0 +2018-02-28,204416,2.0,1.0,0.0 +2018-03-01,201320,3.0,2.0,0.0 +2018-03-02,188205,4.0,2.0,0.0 +2018-03-03,131162,5.0,2.0,0.0 +2018-03-04,138320,6.0,2.0,0.0 
+2018-03-05,207326,0.0,2.0,0.0 +2018-03-06,212462,1.0,2.0,0.0 +2018-03-07,209357,2.0,2.0,0.0 +2018-03-08,194876,3.0,2.0,0.0 +2018-03-09,193761,4.0,2.0,0.0 +2018-03-10,133449,5.0,2.0,0.0 +2018-03-11,142258,6.0,2.0,0.0 +2018-03-12,208753,0.0,2.0,0.0 +2018-03-13,210602,1.0,2.0,0.0 +2018-03-14,214236,2.0,2.0,0.0 +2018-03-15,210761,3.0,2.0,0.0 +2018-03-16,196619,4.0,2.0,0.0 +2018-03-17,133056,5.0,2.0,0.0 +2018-03-18,141335,6.0,2.0,0.0 +2018-03-19,211580,0.0,2.0,0.0 +2018-03-20,219051,1.0,2.0,0.0 +2018-03-21,215435,2.0,2.0,0.0 +2018-03-22,211961,3.0,2.0,0.0 +2018-03-23,196009,4.0,2.0,0.0 +2018-03-24,132390,5.0,2.0,0.0 +2018-03-25,140021,6.0,2.0,0.0 +2018-03-26,205273,0.0,2.0,0.0 +2018-03-27,212686,1.0,2.0,0.0 +2018-03-28,210683,2.0,2.0,0.0 +2018-03-29,189044,3.0,2.0,0.0 +2018-03-30,170256,4.0,2.0,0.0 +2018-03-31,125999,5.0,2.0,0.0 +2018-04-01,126749,6.0,3.0,0.0 +2018-04-02,186546,0.0,3.0,0.0 +2018-04-03,207905,1.0,3.0,0.0 +2018-04-04,201528,2.0,3.0,0.0 +2018-04-05,188580,3.0,3.0,0.0 +2018-04-06,173714,4.0,3.0,0.0 +2018-04-07,125723,5.0,3.0,0.0 +2018-04-08,142545,6.0,3.0,0.0 +2018-04-09,204767,0.0,3.0,0.0 +2018-04-10,212048,1.0,3.0,0.0 +2018-04-11,210517,2.0,3.0,0.0 +2018-04-12,206924,3.0,3.0,0.0 +2018-04-13,191679,4.0,3.0,0.0 +2018-04-14,126394,5.0,3.0,0.0 +2018-04-15,137279,6.0,3.0,0.0 +2018-04-16,208085,0.0,3.0,0.0 +2018-04-17,213273,1.0,3.0,0.0 +2018-04-18,211580,2.0,3.0,0.0 +2018-04-19,206037,3.0,3.0,0.0 +2018-04-20,191211,4.0,3.0,0.0 +2018-04-21,125564,5.0,3.0,0.0 +2018-04-22,136469,6.0,3.0,0.0 +2018-04-23,206288,0.0,3.0,0.0 +2018-04-24,212115,1.0,3.0,0.0 +2018-04-25,207948,2.0,3.0,0.0 +2018-04-26,205759,3.0,3.0,0.0 +2018-04-27,181330,4.0,3.0,0.0 +2018-04-28,130046,5.0,3.0,0.0 +2018-04-29,120802,6.0,3.0,0.0 +2018-04-30,170390,0.0,3.0,0.0 +2018-05-01,169054,1.0,4.0,0.0 +2018-05-02,197891,2.0,4.0,0.0 +2018-05-03,199820,3.0,4.0,0.0 +2018-05-04,186783,4.0,4.0,0.0 +2018-05-05,124420,5.0,4.0,0.0 +2018-05-06,130666,6.0,4.0,0.0 +2018-05-07,196014,0.0,4.0,0.0 +2018-05-08,203058,1.0,4.0,0.0 +2018-05-09,198582,2.0,4.0,0.0 +2018-05-10,191321,3.0,4.0,0.0 +2018-05-11,183639,4.0,4.0,0.0 +2018-05-12,122023,5.0,4.0,0.0 +2018-05-13,128775,6.0,4.0,0.0 +2018-05-14,199104,0.0,4.0,0.0 +2018-05-15,200658,1.0,4.0,0.0 +2018-05-16,201541,2.0,4.0,0.0 +2018-05-17,196886,3.0,4.0,0.0 +2018-05-18,188597,4.0,4.0,0.0 +2018-05-19,121392,5.0,4.0,0.0 +2018-05-20,126981,6.0,4.0,0.0 +2018-05-21,189291,0.0,4.0,0.0 +2018-05-22,203038,1.0,4.0,0.0 +2018-05-23,205330,2.0,4.0,0.0 +2018-05-24,199208,3.0,4.0,0.0 +2018-05-25,187768,4.0,4.0,0.0 +2018-05-26,117635,5.0,4.0,0.0 +2018-05-27,124352,6.0,4.0,0.0 +2018-05-28,180398,0.0,4.0,1.0 +2018-05-29,194170,1.0,4.0,0.0 +2018-05-30,200281,2.0,4.0,0.0 +2018-05-31,197244,3.0,4.0,0.0 +2018-06-01,184037,4.0,5.0,0.0 +2018-06-02,121135,5.0,5.0,0.0 +2018-06-03,129389,6.0,5.0,0.0 +2018-06-04,200331,0.0,5.0,0.0 +2018-06-05,207735,1.0,5.0,0.0 +2018-06-06,203354,2.0,5.0,0.0 +2018-06-07,200520,3.0,5.0,0.0 +2018-06-08,182038,4.0,5.0,0.0 +2018-06-09,120164,5.0,5.0,0.0 +2018-06-10,125256,6.0,5.0,0.0 +2018-06-11,194786,0.0,5.0,0.0 +2018-06-12,200815,1.0,5.0,0.0 +2018-06-13,197740,2.0,5.0,0.0 +2018-06-14,192294,3.0,5.0,0.0 +2018-06-15,173587,4.0,5.0,0.0 +2018-06-16,105955,5.0,5.0,0.0 +2018-06-17,110780,6.0,5.0,0.0 +2018-06-18,174582,0.0,5.0,0.0 +2018-06-19,193310,1.0,5.0,0.0 +2018-06-20,193062,2.0,5.0,0.0 +2018-06-21,187986,3.0,5.0,0.0 +2018-06-22,173606,4.0,5.0,0.0 +2018-06-23,111795,5.0,5.0,0.0 +2018-06-24,116134,6.0,5.0,0.0 +2018-06-25,185919,0.0,5.0,0.0 +2018-06-26,193142,1.0,5.0,0.0 
+2018-06-27,188114,2.0,5.0,0.0 +2018-06-28,183737,3.0,5.0,0.0 +2018-06-29,171496,4.0,5.0,0.0 +2018-06-30,107210,5.0,5.0,0.0 +2018-07-01,111053,6.0,6.0,0.0 +2018-07-02,176198,0.0,6.0,0.0 +2018-07-03,184040,1.0,6.0,0.0 +2018-07-04,169783,2.0,6.0,1.0 +2018-07-05,177996,3.0,6.0,0.0 +2018-07-06,167378,4.0,6.0,0.0 +2018-07-07,106401,5.0,6.0,0.0 +2018-07-08,112327,6.0,6.0,0.0 +2018-07-09,182835,0.0,6.0,0.0 +2018-07-10,187694,1.0,6.0,0.0 +2018-07-11,185762,2.0,6.0,0.0 +2018-07-12,184099,3.0,6.0,0.0 +2018-07-13,170860,4.0,6.0,0.0 +2018-07-14,106799,5.0,6.0,0.0 +2018-07-15,108475,6.0,6.0,0.0 +2018-07-16,175704,0.0,6.0,0.0 +2018-07-17,183596,1.0,6.0,0.0 +2018-07-18,179897,2.0,6.0,0.0 +2018-07-19,183373,3.0,6.0,0.0 +2018-07-20,169626,4.0,6.0,0.0 +2018-07-21,106785,5.0,6.0,0.0 +2018-07-22,112387,6.0,6.0,0.0 +2018-07-23,180572,0.0,6.0,0.0 +2018-07-24,186943,1.0,6.0,0.0 +2018-07-25,185744,2.0,6.0,0.0 +2018-07-26,183117,3.0,6.0,0.0 +2018-07-27,168526,4.0,6.0,0.0 +2018-07-28,105936,5.0,6.0,0.0 +2018-07-29,111708,6.0,6.0,0.0 +2018-07-30,179950,0.0,6.0,0.0 +2018-07-31,185930,1.0,6.0,0.0 +2018-08-01,183366,2.0,7.0,0.0 +2018-08-02,182412,3.0,7.0,0.0 +2018-08-03,173429,4.0,7.0,0.0 +2018-08-04,106108,5.0,7.0,0.0 +2018-08-05,110059,6.0,7.0,0.0 +2018-08-06,178355,0.0,7.0,0.0 +2018-08-07,185518,1.0,7.0,0.0 +2018-08-08,183204,2.0,7.0,0.0 +2018-08-09,181276,3.0,7.0,0.0 +2018-08-10,168297,4.0,7.0,0.0 +2018-08-11,106488,5.0,7.0,0.0 +2018-08-12,111786,6.0,7.0,0.0 +2018-08-13,178620,0.0,7.0,0.0 +2018-08-14,181922,1.0,7.0,0.0 +2018-08-15,172198,2.0,7.0,0.0 +2018-08-16,177367,3.0,7.0,0.0 +2018-08-17,166550,4.0,7.0,0.0 +2018-08-18,107011,5.0,7.0,0.0 +2018-08-19,112299,6.0,7.0,0.0 +2018-08-20,176718,0.0,7.0,0.0 +2018-08-21,182562,1.0,7.0,0.0 +2018-08-22,181484,2.0,7.0,0.0 +2018-08-23,180317,3.0,7.0,0.0 +2018-08-24,170197,4.0,7.0,0.0 +2018-08-25,109383,5.0,7.0,0.0 +2018-08-26,113373,6.0,7.0,0.0 +2018-08-27,180142,0.0,7.0,0.0 +2018-08-28,191628,1.0,7.0,0.0 +2018-08-29,191149,2.0,7.0,0.0 +2018-08-30,187503,3.0,7.0,0.0 +2018-08-31,172280,4.0,7.0,0.0 diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/github_dau_2011-2018_train.csv b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/github_dau_2011-2018_train.csv new file mode 100644 index 000000000..5a409ad26 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/github_dau_2011-2018_train.csv @@ -0,0 +1,2286 @@ +date,count,day_of_week,month_of_year,holiday +2011-03-01,8583,1.0,2.0,0.0 +2011-03-02,8561,2.0,2.0,0.0 +2011-03-03,8406,3.0,2.0,0.0 +2011-03-04,7921,4.0,2.0,0.0 +2011-03-05,5597,5.0,2.0,0.0 +2011-03-06,6400,6.0,2.0,0.0 +2011-03-07,8043,0.0,2.0,0.0 +2011-03-08,8666,1.0,2.0,0.0 +2011-03-09,8344,2.0,2.0,0.0 +2011-03-10,8344,3.0,2.0,0.0 +2011-03-11,8017,4.0,2.0,0.0 +2011-03-12,5756,5.0,2.0,0.0 +2011-03-13,6294,6.0,2.0,0.0 +2011-03-14,8210,0.0,2.0,0.0 +2011-03-15,8882,1.0,2.0,0.0 +2011-03-16,8849,2.0,2.0,0.0 +2011-03-17,8611,3.0,2.0,0.0 +2011-03-18,8160,4.0,2.0,0.0 +2011-03-19,6068,5.0,2.0,0.0 +2011-03-20,6485,6.0,2.0,0.0 +2011-03-21,8596,0.0,2.0,0.0 +2011-03-22,9240,1.0,2.0,0.0 +2011-03-23,9005,2.0,2.0,0.0 +2011-03-24,8653,3.0,2.0,0.0 +2011-03-25,8288,4.0,2.0,0.0 +2011-03-26,6317,5.0,2.0,0.0 +2011-03-27,6793,6.0,2.0,0.0 +2011-03-28,9369,0.0,2.0,0.0 +2011-03-29,8589,1.0,2.0,0.0 +2011-03-30,9100,2.0,2.0,0.0 +2011-03-31,9013,3.0,2.0,0.0 +2011-04-01,8439,4.0,3.0,0.0 +2011-04-02,6142,5.0,3.0,0.0 +2011-04-03,6703,6.0,3.0,0.0 +2011-04-04,9516,0.0,3.0,0.0 +2011-04-05,9736,1.0,3.0,0.0 
+2011-04-06,9370,2.0,3.0,0.0 +2011-04-07,9178,3.0,3.0,0.0 +2011-04-08,8862,4.0,3.0,0.0 +2011-04-09,6183,5.0,3.0,0.0 +2011-04-10,6798,6.0,3.0,0.0 +2011-04-11,9661,0.0,3.0,0.0 +2011-04-12,9498,1.0,3.0,0.0 +2011-04-13,9668,2.0,3.0,0.0 +2011-04-14,9651,3.0,3.0,0.0 +2011-04-15,9052,4.0,3.0,0.0 +2011-04-16,6559,5.0,3.0,0.0 +2011-04-17,6826,6.0,3.0,0.0 +2011-04-18,9243,0.0,3.0,0.0 +2011-04-19,9787,1.0,3.0,0.0 +2011-04-20,9259,2.0,3.0,0.0 +2011-04-21,9090,3.0,3.0,0.0 +2011-04-22,7812,4.0,3.0,0.0 +2011-04-23,6081,5.0,3.0,0.0 +2011-04-24,6106,6.0,3.0,0.0 +2011-04-25,7975,0.0,3.0,0.0 +2011-04-26,9656,1.0,3.0,0.0 +2011-04-27,9090,2.0,3.0,0.0 +2011-04-28,8600,3.0,3.0,0.0 +2011-04-29,9050,4.0,3.0,0.0 +2011-04-30,6073,5.0,3.0,0.0 +2011-05-01,6554,6.0,4.0,0.0 +2011-05-02,8287,0.0,4.0,0.0 +2011-05-03,9763,1.0,4.0,0.0 +2011-05-04,10105,2.0,4.0,0.0 +2011-05-05,10113,3.0,4.0,0.0 +2011-05-06,9085,4.0,4.0,0.0 +2011-05-07,6286,5.0,4.0,0.0 +2011-05-08,6674,6.0,4.0,0.0 +2011-05-09,9810,0.0,4.0,0.0 +2011-05-10,9390,1.0,4.0,0.0 +2011-05-11,10237,2.0,4.0,0.0 +2011-05-12,9630,3.0,4.0,0.0 +2011-05-13,9248,4.0,4.0,0.0 +2011-05-14,6785,5.0,4.0,0.0 +2011-05-15,7197,6.0,4.0,0.0 +2011-05-16,9794,0.0,4.0,0.0 +2011-05-17,10042,1.0,4.0,0.0 +2011-05-18,9978,2.0,4.0,0.0 +2011-05-19,10032,3.0,4.0,0.0 +2011-05-20,8662,4.0,4.0,0.0 +2011-05-21,6172,5.0,4.0,0.0 +2011-05-22,6423,6.0,4.0,0.0 +2011-05-23,10039,0.0,4.0,0.0 +2011-05-24,10487,1.0,4.0,0.0 +2011-05-25,10291,2.0,4.0,0.0 +2011-05-26,10188,3.0,4.0,0.0 +2011-05-27,8773,4.0,4.0,0.0 +2011-05-28,6323,5.0,4.0,0.0 +2011-05-29,6728,6.0,4.0,0.0 +2011-05-30,8663,0.0,4.0,1.0 +2011-05-31,10047,1.0,4.0,0.0 +2011-06-01,10183,2.0,5.0,0.0 +2011-06-02,9305,3.0,5.0,0.0 +2011-06-03,9493,4.0,5.0,0.0 +2011-06-04,6682,5.0,5.0,0.0 +2011-06-05,7043,6.0,5.0,0.0 +2011-06-06,9619,0.0,5.0,0.0 +2011-06-07,10108,1.0,5.0,0.0 +2011-06-08,10330,2.0,5.0,0.0 +2011-06-09,9792,3.0,5.0,0.0 +2011-06-10,9287,4.0,5.0,0.0 +2011-06-11,6432,5.0,5.0,0.0 +2011-06-12,6278,6.0,5.0,0.0 +2011-06-13,9515,0.0,5.0,0.0 +2011-06-14,10155,1.0,5.0,0.0 +2011-06-15,9979,2.0,5.0,0.0 +2011-06-16,9880,3.0,5.0,0.0 +2011-06-17,9855,4.0,5.0,0.0 +2011-06-18,6356,5.0,5.0,0.0 +2011-06-19,7028,6.0,5.0,0.0 +2011-06-20,10335,0.0,5.0,0.0 +2011-06-21,10383,1.0,5.0,0.0 +2011-06-22,10391,2.0,5.0,0.0 +2011-06-23,7190,3.0,5.0,0.0 +2011-06-24,9613,4.0,5.0,0.0 +2011-06-25,5890,5.0,5.0,0.0 +2011-06-26,6256,6.0,5.0,0.0 +2011-06-27,8825,0.0,5.0,0.0 +2011-06-28,10263,1.0,5.0,0.0 +2011-06-29,10628,2.0,5.0,0.0 +2011-06-30,10043,3.0,5.0,0.0 +2011-07-01,9403,4.0,6.0,0.0 +2011-07-02,6294,5.0,6.0,0.0 +2011-07-03,6485,6.0,6.0,0.0 +2011-07-04,8954,0.0,6.0,1.0 +2011-07-05,9672,1.0,6.0,0.0 +2011-07-06,10488,2.0,6.0,0.0 +2011-07-07,10199,3.0,6.0,0.0 +2011-07-08,9300,4.0,6.0,0.0 +2011-07-09,6544,5.0,6.0,0.0 +2011-07-10,6898,6.0,6.0,0.0 +2011-07-11,10087,0.0,6.0,0.0 +2011-07-12,10623,1.0,6.0,0.0 +2011-07-13,10201,2.0,6.0,0.0 +2011-07-14,9771,3.0,6.0,0.0 +2011-07-15,9339,4.0,6.0,0.0 +2011-07-16,6690,5.0,6.0,0.0 +2011-07-17,7059,6.0,6.0,0.0 +2011-07-18,10367,0.0,6.0,0.0 +2011-07-19,10123,1.0,6.0,0.0 +2011-07-20,10370,2.0,6.0,0.0 +2011-07-21,10296,3.0,6.0,0.0 +2011-07-22,9479,4.0,6.0,0.0 +2011-07-23,6667,5.0,6.0,0.0 +2011-07-24,6929,6.0,6.0,0.0 +2011-07-25,9924,0.0,6.0,0.0 +2011-07-26,10840,1.0,6.0,0.0 +2011-07-27,10588,2.0,6.0,0.0 +2011-07-28,10195,3.0,6.0,0.0 +2011-07-29,9688,4.0,6.0,0.0 +2011-07-30,6070,5.0,6.0,0.0 +2011-07-31,6858,6.0,6.0,0.0 +2011-08-01,9822,0.0,7.0,0.0 +2011-08-02,10529,1.0,7.0,0.0 +2011-08-03,10392,2.0,7.0,0.0 +2011-08-04,10498,3.0,7.0,0.0 
+2011-08-05,9775,4.0,7.0,0.0 +2011-08-06,6653,5.0,7.0,0.0 +2011-08-07,6361,6.0,7.0,0.0 +2011-08-08,10287,0.0,7.0,0.0 +2011-08-09,10742,1.0,7.0,0.0 +2011-08-10,10086,2.0,7.0,0.0 +2011-08-11,10391,3.0,7.0,0.0 +2011-08-12,9614,4.0,7.0,0.0 +2011-08-13,6835,5.0,7.0,0.0 +2011-08-14,6912,6.0,7.0,0.0 +2011-08-15,10075,0.0,7.0,0.0 +2011-08-16,10949,1.0,7.0,0.0 +2011-08-17,11041,2.0,7.0,0.0 +2011-08-18,10742,3.0,7.0,0.0 +2011-08-19,10146,4.0,7.0,0.0 +2011-08-20,6424,5.0,7.0,0.0 +2011-08-21,7248,6.0,7.0,0.0 +2011-08-22,10650,0.0,7.0,0.0 +2011-08-23,11171,1.0,7.0,0.0 +2011-08-24,11385,2.0,7.0,0.0 +2011-08-25,10968,3.0,7.0,0.0 +2011-08-26,10179,4.0,7.0,0.0 +2011-08-27,7129,5.0,7.0,0.0 +2011-08-28,7341,6.0,7.0,0.0 +2011-08-29,10953,0.0,7.0,0.0 +2011-08-30,11251,1.0,7.0,0.0 +2011-08-31,11103,2.0,7.0,0.0 +2011-09-01,11120,3.0,8.0,0.0 +2011-09-02,10610,4.0,8.0,0.0 +2011-09-03,7280,5.0,8.0,0.0 +2011-09-04,7798,6.0,8.0,0.0 +2011-09-05,10391,0.0,8.0,1.0 +2011-09-06,11625,1.0,8.0,0.0 +2011-09-07,11869,2.0,8.0,0.0 +2011-09-08,11653,3.0,8.0,0.0 +2011-09-09,10962,4.0,8.0,0.0 +2011-09-10,7616,5.0,8.0,0.0 +2011-09-11,8209,6.0,8.0,0.0 +2011-09-12,11410,0.0,8.0,0.0 +2011-09-13,12278,1.0,8.0,0.0 +2011-09-14,12162,2.0,8.0,0.0 +2011-09-15,11739,3.0,8.0,0.0 +2011-09-16,11476,4.0,8.0,0.0 +2011-09-17,7297,5.0,8.0,0.0 +2011-09-18,8467,6.0,8.0,0.0 +2011-09-19,11276,0.0,8.0,0.0 +2011-09-20,11934,1.0,8.0,0.0 +2011-09-21,12059,2.0,8.0,0.0 +2011-09-22,12279,3.0,8.0,0.0 +2011-09-23,11209,4.0,8.0,0.0 +2011-09-24,7928,5.0,8.0,0.0 +2011-09-25,8584,6.0,8.0,0.0 +2011-09-26,12586,0.0,8.0,0.0 +2011-09-27,13016,1.0,8.0,0.0 +2011-09-28,12805,2.0,8.0,0.0 +2011-09-29,12525,3.0,8.0,0.0 +2011-09-30,11612,4.0,8.0,0.0 +2011-10-01,7829,5.0,9.0,0.0 +2011-10-02,8493,6.0,9.0,0.0 +2011-10-03,11934,0.0,9.0,0.0 +2011-10-04,12469,1.0,9.0,0.0 +2011-10-05,12576,2.0,9.0,0.0 +2011-10-06,12347,3.0,9.0,0.0 +2011-10-07,11916,4.0,9.0,0.0 +2011-10-08,8281,5.0,9.0,0.0 +2011-10-09,8830,6.0,9.0,0.0 +2011-10-10,12618,0.0,9.0,1.0 +2011-10-11,13105,1.0,9.0,0.0 +2011-10-12,12897,2.0,9.0,0.0 +2011-10-13,12674,3.0,9.0,0.0 +2011-10-14,11783,4.0,9.0,0.0 +2011-10-15,8104,5.0,9.0,0.0 +2011-10-16,8805,6.0,9.0,0.0 +2011-10-17,12899,0.0,9.0,0.0 +2011-10-18,13196,1.0,9.0,0.0 +2011-10-19,13200,2.0,9.0,0.0 +2011-10-20,13142,3.0,9.0,0.0 +2011-10-21,12269,4.0,9.0,0.0 +2011-10-22,8506,5.0,9.0,0.0 +2011-10-23,9133,6.0,9.0,0.0 +2011-10-24,13230,0.0,9.0,0.0 +2011-10-25,13364,1.0,9.0,0.0 +2011-10-26,13443,2.0,9.0,0.0 +2011-10-27,11080,3.0,9.0,0.0 +2011-10-28,10718,4.0,9.0,0.0 +2011-10-29,7997,5.0,9.0,0.0 +2011-10-30,8613,6.0,9.0,0.0 +2011-10-31,12319,0.0,9.0,0.0 +2011-11-01,12598,1.0,10.0,0.0 +2011-11-02,13218,2.0,10.0,0.0 +2011-11-03,12805,3.0,10.0,0.0 +2011-11-04,12883,4.0,10.0,0.0 +2011-11-05,8569,5.0,10.0,0.0 +2011-11-06,9090,6.0,10.0,0.0 +2011-11-07,11174,0.0,10.0,0.0 +2011-11-08,14122,1.0,10.0,0.0 +2011-11-09,12036,2.0,10.0,0.0 +2011-11-10,12966,3.0,10.0,0.0 +2011-11-11,12005,4.0,10.0,1.0 +2011-11-12,8419,5.0,10.0,0.0 +2011-11-13,9036,6.0,10.0,0.0 +2011-11-14,12804,0.0,10.0,0.0 +2011-11-15,13378,1.0,10.0,0.0 +2011-11-16,12693,2.0,10.0,0.0 +2011-11-17,13360,3.0,10.0,0.0 +2011-11-18,11744,4.0,10.0,0.0 +2011-11-19,8190,5.0,10.0,0.0 +2011-11-20,9690,6.0,10.0,0.0 +2011-11-21,12145,0.0,10.0,0.0 +2011-11-22,13212,1.0,10.0,0.0 +2011-11-23,13477,2.0,10.0,0.0 +2011-11-24,12085,3.0,10.0,1.0 +2011-11-25,10505,4.0,10.0,0.0 +2011-11-26,8705,5.0,10.0,0.0 +2011-11-27,9648,6.0,10.0,0.0 +2011-11-28,13613,0.0,10.0,0.0 +2011-11-29,14272,1.0,10.0,0.0 +2011-11-30,13957,2.0,10.0,0.0 
+2011-12-01,14827,3.0,11.0,0.0 +2011-12-02,13591,4.0,11.0,0.0 +2011-12-03,9827,5.0,11.0,0.0 +2011-12-04,10540,6.0,11.0,0.0 +2011-12-05,14286,0.0,11.0,0.0 +2011-12-06,14420,1.0,11.0,0.0 +2011-12-07,13800,2.0,11.0,0.0 +2011-12-08,13077,3.0,11.0,0.0 +2011-12-09,13409,4.0,11.0,0.0 +2011-12-10,9537,5.0,11.0,0.0 +2011-12-11,9686,6.0,11.0,0.0 +2011-12-12,14003,0.0,11.0,0.0 +2011-12-13,13616,1.0,11.0,0.0 +2011-12-14,13695,2.0,11.0,0.0 +2011-12-15,13702,3.0,11.0,0.0 +2011-12-16,13328,4.0,11.0,0.0 +2011-12-17,8779,5.0,11.0,0.0 +2011-12-18,9541,6.0,11.0,0.0 +2011-12-19,13250,0.0,11.0,0.0 +2011-12-20,12924,1.0,11.0,0.0 +2011-12-21,12238,2.0,11.0,0.0 +2011-12-22,11812,3.0,11.0,0.0 +2011-12-23,10407,4.0,11.0,0.0 +2011-12-24,6600,5.0,11.0,0.0 +2011-12-25,5670,6.0,11.0,0.0 +2011-12-26,7446,0.0,11.0,1.0 +2011-12-27,9742,1.0,11.0,0.0 +2011-12-28,10019,2.0,11.0,0.0 +2011-12-29,10927,3.0,11.0,0.0 +2011-12-30,10146,4.0,11.0,0.0 +2012-01-01,6587,6.0,0.0,0.0 +2012-01-02,10254,0.0,0.0,1.0 +2012-01-03,12412,1.0,0.0,0.0 +2012-01-04,11806,2.0,0.0,0.0 +2012-01-05,13030,3.0,0.0,0.0 +2012-01-06,13081,4.0,0.0,0.0 +2012-01-07,9688,5.0,0.0,0.0 +2012-01-08,9682,6.0,0.0,0.0 +2012-01-09,12389,0.0,0.0,0.0 +2012-01-10,12888,1.0,0.0,0.0 +2012-01-11,14916,2.0,0.0,0.0 +2012-01-12,13966,3.0,0.0,0.0 +2012-01-13,13629,4.0,0.0,0.0 +2012-01-14,9862,5.0,0.0,0.0 +2012-01-15,10764,6.0,0.0,0.0 +2012-01-16,14066,0.0,0.0,1.0 +2012-01-17,14636,1.0,0.0,0.0 +2012-01-18,14308,2.0,0.0,0.0 +2012-01-19,14301,3.0,0.0,0.0 +2012-01-20,13525,4.0,0.0,0.0 +2012-01-21,10410,5.0,0.0,0.0 +2012-01-22,10384,6.0,0.0,0.0 +2012-01-23,14114,0.0,0.0,0.0 +2012-01-24,14996,1.0,0.0,0.0 +2012-01-25,14904,2.0,0.0,0.0 +2012-01-26,14957,3.0,0.0,0.0 +2012-01-27,15145,4.0,0.0,0.0 +2012-01-28,11182,5.0,0.0,0.0 +2012-01-29,11845,6.0,0.0,0.0 +2012-01-30,15747,0.0,0.0,0.0 +2012-01-31,16974,1.0,0.0,0.0 +2012-02-01,16410,2.0,1.0,0.0 +2012-02-02,15344,3.0,1.0,0.0 +2012-02-03,15275,4.0,1.0,0.0 +2012-02-04,10634,5.0,1.0,0.0 +2012-02-05,11996,6.0,1.0,0.0 +2012-02-06,13976,0.0,1.0,0.0 +2012-02-07,14838,1.0,1.0,0.0 +2012-02-08,15306,2.0,1.0,0.0 +2012-02-09,15598,3.0,1.0,0.0 +2012-02-10,14349,4.0,1.0,0.0 +2012-02-11,11061,5.0,1.0,0.0 +2012-02-12,12209,6.0,1.0,0.0 +2012-02-13,13869,0.0,1.0,0.0 +2012-02-14,15581,1.0,1.0,0.0 +2012-02-15,13850,2.0,1.0,0.0 +2012-02-16,15864,3.0,1.0,0.0 +2012-02-17,15855,4.0,1.0,0.0 +2012-02-18,11506,5.0,1.0,0.0 +2012-02-19,12713,6.0,1.0,0.0 +2012-02-20,15871,0.0,1.0,1.0 +2012-02-21,18141,1.0,1.0,0.0 +2012-02-22,18658,2.0,1.0,0.0 +2012-02-23,18336,3.0,1.0,0.0 +2012-02-24,17493,4.0,1.0,0.0 +2012-02-25,13047,5.0,1.0,0.0 +2012-02-26,13470,6.0,1.0,0.0 +2012-02-27,18588,0.0,1.0,0.0 +2012-02-28,19337,1.0,1.0,0.0 +2012-02-29,18919,2.0,1.0,0.0 +2012-03-01,16831,3.0,2.0,0.0 +2012-03-02,16858,4.0,2.0,0.0 +2012-03-03,12768,5.0,2.0,0.0 +2012-03-04,11378,6.0,2.0,0.0 +2012-03-05,17247,0.0,2.0,0.0 +2012-03-06,19299,1.0,2.0,0.0 +2012-03-07,19070,2.0,2.0,0.0 +2012-03-08,18345,3.0,2.0,0.0 +2012-03-09,17563,4.0,2.0,0.0 +2012-03-10,4558,5.0,2.0,0.0 +2012-03-11,11403,6.0,2.0,0.0 +2012-03-12,19012,0.0,2.0,0.0 +2012-03-13,19453,1.0,2.0,0.0 +2012-03-14,18612,2.0,2.0,0.0 +2012-03-15,18516,3.0,2.0,0.0 +2012-03-16,17712,4.0,2.0,0.0 +2012-03-17,12388,5.0,2.0,0.0 +2012-03-18,13136,6.0,2.0,0.0 +2012-03-19,19017,0.0,2.0,0.0 +2012-03-20,19748,1.0,2.0,0.0 +2012-03-21,19332,2.0,2.0,0.0 +2012-03-22,19193,3.0,2.0,0.0 +2012-03-23,17920,4.0,2.0,0.0 +2012-03-24,12753,5.0,2.0,0.0 +2012-03-25,13249,6.0,2.0,0.0 +2012-03-26,19124,0.0,2.0,0.0 +2012-03-27,19509,1.0,2.0,0.0 
+2012-03-28,19821,2.0,2.0,0.0 +2012-03-29,19472,3.0,2.0,0.0 +2012-03-30,18427,4.0,2.0,0.0 +2012-03-31,13115,5.0,2.0,0.0 +2012-04-01,13515,6.0,3.0,0.0 +2012-04-02,18399,0.0,3.0,0.0 +2012-04-03,19605,1.0,3.0,0.0 +2012-04-04,19252,2.0,3.0,0.0 +2012-04-05,18543,3.0,3.0,0.0 +2012-04-06,16503,4.0,3.0,0.0 +2012-04-07,12460,5.0,3.0,0.0 +2012-04-08,12448,6.0,3.0,0.0 +2012-04-09,17445,0.0,3.0,0.0 +2012-04-10,19932,1.0,3.0,0.0 +2012-04-11,20228,2.0,3.0,0.0 +2012-04-12,19756,3.0,3.0,0.0 +2012-04-13,18782,4.0,3.0,0.0 +2012-04-14,13467,5.0,3.0,0.0 +2012-04-15,14327,6.0,3.0,0.0 +2012-04-16,20054,0.0,3.0,0.0 +2012-04-17,20519,1.0,3.0,0.0 +2012-04-18,20550,2.0,3.0,0.0 +2012-04-19,20701,3.0,3.0,0.0 +2012-04-20,19581,4.0,3.0,0.0 +2012-04-21,13836,5.0,3.0,0.0 +2012-04-22,15203,6.0,3.0,0.0 +2012-04-23,21022,0.0,3.0,0.0 +2012-04-24,21531,1.0,3.0,0.0 +2012-04-25,20843,2.0,3.0,0.0 +2012-04-26,20502,3.0,3.0,0.0 +2012-04-27,19350,4.0,3.0,0.0 +2012-04-28,13435,5.0,3.0,0.0 +2012-04-29,13740,6.0,3.0,0.0 +2012-04-30,18399,0.0,3.0,0.0 +2012-05-01,18568,1.0,4.0,0.0 +2012-05-02,20450,2.0,4.0,0.0 +2012-05-03,20346,3.0,4.0,0.0 +2012-05-04,19046,4.0,4.0,0.0 +2012-05-05,13624,5.0,4.0,0.0 +2012-05-06,14067,6.0,4.0,0.0 +2012-05-07,19843,0.0,4.0,0.0 +2012-05-08,20642,1.0,4.0,0.0 +2012-05-09,20494,2.0,4.0,0.0 +2012-05-10,20582,3.0,4.0,0.0 +2012-05-11,19082,4.0,4.0,0.0 +2012-05-12,12969,5.0,4.0,0.0 +2012-05-13,13213,6.0,4.0,0.0 +2012-05-14,19891,0.0,4.0,0.0 +2012-05-15,20429,1.0,4.0,0.0 +2012-05-16,19803,2.0,4.0,0.0 +2012-05-17,18502,3.0,4.0,0.0 +2012-05-18,17863,4.0,4.0,0.0 +2012-05-19,11967,5.0,4.0,0.0 +2012-05-20,12955,6.0,4.0,0.0 +2012-05-21,19504,0.0,4.0,0.0 +2012-05-22,21177,1.0,4.0,0.0 +2012-05-23,20755,2.0,4.0,0.0 +2012-05-24,20334,3.0,4.0,0.0 +2012-05-25,18596,4.0,4.0,0.0 +2012-05-26,11896,5.0,4.0,0.0 +2012-05-27,12267,6.0,4.0,0.0 +2012-05-28,16877,0.0,4.0,1.0 +2012-05-29,20475,1.0,4.0,0.0 +2012-05-30,20843,2.0,4.0,0.0 +2012-05-31,19725,3.0,4.0,0.0 +2012-06-01,18977,4.0,5.0,0.0 +2012-06-02,12762,5.0,5.0,0.0 +2012-06-03,13811,6.0,5.0,0.0 +2012-06-04,19603,0.0,5.0,0.0 +2012-06-05,20407,1.0,5.0,0.0 +2012-06-06,20109,2.0,5.0,0.0 +2012-06-07,20065,3.0,5.0,0.0 +2012-06-08,18897,4.0,5.0,0.0 +2012-06-09,12974,5.0,5.0,0.0 +2012-06-10,13579,6.0,5.0,0.0 +2012-06-11,19795,0.0,5.0,0.0 +2012-06-12,20766,1.0,5.0,0.0 +2012-06-13,20493,2.0,5.0,0.0 +2012-06-14,20337,3.0,5.0,0.0 +2012-06-15,18872,4.0,5.0,0.0 +2012-06-16,12563,5.0,5.0,0.0 +2012-06-17,12595,6.0,5.0,0.0 +2012-06-18,19942,0.0,5.0,0.0 +2012-06-19,20901,1.0,5.0,0.0 +2012-06-20,20460,2.0,5.0,0.0 +2012-06-21,20208,3.0,5.0,0.0 +2012-06-22,18334,4.0,5.0,0.0 +2012-06-23,12188,5.0,5.0,0.0 +2012-06-24,12974,6.0,5.0,0.0 +2012-06-25,19997,0.0,5.0,0.0 +2012-06-26,21259,1.0,5.0,0.0 +2012-06-27,20474,2.0,5.0,0.0 +2012-06-28,19885,3.0,5.0,0.0 +2012-06-29,18686,4.0,5.0,0.0 +2012-06-30,12240,5.0,5.0,0.0 +2012-07-01,12825,6.0,6.0,0.0 +2012-07-02,19514,0.0,6.0,0.0 +2012-07-03,20326,1.0,6.0,0.0 +2012-07-04,18182,2.0,6.0,1.0 +2012-07-05,19268,3.0,6.0,0.0 +2012-07-06,19182,4.0,6.0,0.0 +2012-07-07,12835,5.0,6.0,0.0 +2012-07-08,13365,6.0,6.0,0.0 +2012-07-09,20486,0.0,6.0,0.0 +2012-07-10,21706,1.0,6.0,0.0 +2012-07-11,21626,2.0,6.0,0.0 +2012-07-12,21252,3.0,6.0,0.0 +2012-07-13,20151,4.0,6.0,0.0 +2012-07-14,12797,5.0,6.0,0.0 +2012-07-15,13483,6.0,6.0,0.0 +2012-07-16,20626,0.0,6.0,0.0 +2012-07-17,21534,1.0,6.0,0.0 +2012-07-18,21272,2.0,6.0,0.0 +2012-07-19,20996,3.0,6.0,0.0 +2012-07-20,19689,4.0,6.0,0.0 +2012-07-21,12728,5.0,6.0,0.0 +2012-07-22,13196,6.0,6.0,0.0 +2012-07-23,20682,0.0,6.0,0.0 
+2012-07-24,21436,1.0,6.0,0.0 +2012-07-25,20928,2.0,6.0,0.0 +2012-07-26,20682,3.0,6.0,0.0 +2012-07-27,19471,4.0,6.0,0.0 +2012-07-28,12348,5.0,6.0,0.0 +2012-07-29,13181,6.0,6.0,0.0 +2012-07-30,20472,0.0,6.0,0.0 +2012-07-31,20755,1.0,6.0,0.0 +2012-08-01,20981,2.0,7.0,0.0 +2012-08-02,20754,3.0,7.0,0.0 +2012-08-03,19474,4.0,7.0,0.0 +2012-08-04,12608,5.0,7.0,0.0 +2012-08-05,13300,6.0,7.0,0.0 +2012-08-06,20171,0.0,7.0,0.0 +2012-08-07,21381,1.0,7.0,0.0 +2012-08-08,21414,2.0,7.0,0.0 +2012-08-09,21189,3.0,7.0,0.0 +2012-08-10,20258,4.0,7.0,0.0 +2012-08-11,13126,5.0,7.0,0.0 +2012-08-12,13542,6.0,7.0,0.0 +2012-08-13,21095,0.0,7.0,0.0 +2012-08-14,21820,1.0,7.0,0.0 +2012-08-15,20412,2.0,7.0,0.0 +2012-08-16,20654,3.0,7.0,0.0 +2012-08-17,19865,4.0,7.0,0.0 +2012-08-18,13124,5.0,7.0,0.0 +2012-08-19,13500,6.0,7.0,0.0 +2012-08-20,21156,0.0,7.0,0.0 +2012-08-21,22188,1.0,7.0,0.0 +2012-08-22,22133,2.0,7.0,0.0 +2012-08-23,21972,3.0,7.0,0.0 +2012-08-24,20575,4.0,7.0,0.0 +2012-08-25,13606,5.0,7.0,0.0 +2012-08-26,14147,6.0,7.0,0.0 +2012-08-27,21513,0.0,7.0,0.0 +2012-08-28,22396,1.0,7.0,0.0 +2012-08-29,22023,2.0,7.0,0.0 +2012-08-30,22032,3.0,7.0,0.0 +2012-08-31,20667,4.0,7.0,0.0 +2012-09-01,13193,5.0,8.0,0.0 +2012-09-02,14236,6.0,8.0,0.0 +2012-09-03,19533,0.0,8.0,1.0 +2012-09-04,22529,1.0,8.0,0.0 +2012-09-05,23006,2.0,8.0,0.0 +2012-09-06,22463,3.0,8.0,0.0 +2012-09-07,21547,4.0,8.0,0.0 +2012-09-08,14061,5.0,8.0,0.0 +2012-09-09,15149,6.0,8.0,0.0 +2012-09-10,22730,0.0,8.0,0.0 +2012-09-11,23336,1.0,8.0,0.0 +2012-09-12,23521,2.0,8.0,0.0 +2012-09-13,23435,3.0,8.0,0.0 +2012-09-14,21632,4.0,8.0,0.0 +2012-09-15,14370,5.0,8.0,0.0 +2012-09-16,15122,6.0,8.0,0.0 +2012-09-17,23351,0.0,8.0,0.0 +2012-09-18,24066,1.0,8.0,0.0 +2012-09-19,23742,2.0,8.0,0.0 +2012-09-20,23585,3.0,8.0,0.0 +2012-09-21,22157,4.0,8.0,0.0 +2012-09-22,14539,5.0,8.0,0.0 +2012-09-23,15735,6.0,8.0,0.0 +2012-09-24,23613,0.0,8.0,0.0 +2012-09-25,24315,1.0,8.0,0.0 +2012-09-26,24513,2.0,8.0,0.0 +2012-09-27,23950,3.0,8.0,0.0 +2012-09-28,22489,4.0,8.0,0.0 +2012-09-29,15130,5.0,8.0,0.0 +2012-09-30,15516,6.0,8.0,0.0 +2012-10-01,22938,0.0,9.0,0.0 +2012-10-02,23758,1.0,9.0,0.0 +2012-10-03,24048,2.0,9.0,0.0 +2012-10-04,23651,3.0,9.0,0.0 +2012-10-05,22488,4.0,9.0,0.0 +2012-10-06,15261,5.0,9.0,0.0 +2012-10-07,16074,6.0,9.0,0.0 +2012-10-08,24300,0.0,9.0,1.0 +2012-10-09,26112,1.0,9.0,0.0 +2012-10-10,26118,2.0,9.0,0.0 +2012-10-11,25481,3.0,9.0,0.0 +2012-10-12,23749,4.0,9.0,0.0 +2012-10-13,16161,5.0,9.0,0.0 +2012-10-14,17196,6.0,9.0,0.0 +2012-10-15,25711,0.0,9.0,0.0 +2012-10-16,26368,1.0,9.0,0.0 +2012-10-17,26436,2.0,9.0,0.0 +2012-10-18,25588,3.0,9.0,0.0 +2012-10-19,24120,4.0,9.0,0.0 +2012-10-20,16546,5.0,9.0,0.0 +2012-10-21,17939,6.0,9.0,0.0 +2012-10-22,26790,0.0,9.0,0.0 +2012-10-23,26904,1.0,9.0,0.0 +2012-10-24,27135,2.0,9.0,0.0 +2012-10-25,26631,3.0,9.0,0.0 +2012-10-26,24735,4.0,9.0,0.0 +2012-10-27,16414,5.0,9.0,0.0 +2012-10-28,17832,6.0,9.0,0.0 +2012-10-29,26382,0.0,9.0,0.0 +2012-10-30,27051,1.0,9.0,0.0 +2012-10-31,26630,2.0,9.0,0.0 +2012-11-01,25001,3.0,10.0,0.0 +2012-11-02,24505,4.0,10.0,0.0 +2012-11-03,17411,5.0,10.0,0.0 +2012-11-04,18421,6.0,10.0,0.0 +2012-11-05,27468,0.0,10.0,0.0 +2012-11-06,28425,1.0,10.0,0.0 +2012-11-07,27405,2.0,10.0,0.0 +2012-11-08,28017,3.0,10.0,0.0 +2012-11-09,26332,4.0,10.0,0.0 +2012-11-10,18246,5.0,10.0,0.0 +2012-11-11,19133,6.0,10.0,0.0 +2012-11-12,27814,0.0,10.0,1.0 +2012-11-13,28922,1.0,10.0,0.0 +2012-11-14,28695,2.0,10.0,0.0 +2012-11-15,28078,3.0,10.0,0.0 +2012-11-16,26404,4.0,10.0,0.0 +2012-11-17,18254,5.0,10.0,0.0 
+2012-11-18,19573,6.0,10.0,0.0 +2012-11-19,28486,0.0,10.0,0.0 +2012-11-20,28976,1.0,10.0,0.0 +2012-11-21,28161,2.0,10.0,0.0 +2012-11-22,24228,3.0,10.0,1.0 +2012-11-23,22550,4.0,10.0,0.0 +2012-11-24,17484,5.0,10.0,0.0 +2012-11-25,19188,6.0,10.0,0.0 +2012-11-26,28974,0.0,10.0,0.0 +2012-11-27,29963,1.0,10.0,0.0 +2012-11-28,30244,2.0,10.0,0.0 +2012-11-29,29538,3.0,10.0,0.0 +2012-11-30,26786,4.0,10.0,0.0 +2012-12-01,19253,5.0,11.0,0.0 +2012-12-02,20778,6.0,11.0,0.0 +2012-12-03,30026,0.0,11.0,0.0 +2012-12-04,30295,1.0,11.0,0.0 +2012-12-05,30105,2.0,11.0,0.0 +2012-12-06,29559,3.0,11.0,0.0 +2012-12-07,26613,4.0,11.0,0.0 +2012-12-08,18467,5.0,11.0,0.0 +2012-12-09,20055,6.0,11.0,0.0 +2012-12-10,28579,0.0,11.0,0.0 +2012-12-11,29642,1.0,11.0,0.0 +2012-12-12,29168,2.0,11.0,0.0 +2012-12-13,28652,3.0,11.0,0.0 +2012-12-14,26568,4.0,11.0,0.0 +2012-12-15,17788,5.0,11.0,0.0 +2012-12-16,18785,6.0,11.0,0.0 +2012-12-17,27496,0.0,11.0,0.0 +2012-12-18,27723,1.0,11.0,0.0 +2012-12-19,27055,2.0,11.0,0.0 +2012-12-20,26013,3.0,11.0,0.0 +2012-12-21,23140,4.0,11.0,0.0 +2012-12-22,15245,5.0,11.0,0.0 +2012-12-23,14097,6.0,11.0,0.0 +2012-12-24,16373,0.0,11.0,0.0 +2012-12-25,13596,1.0,11.0,1.0 +2012-12-26,17465,2.0,11.0,0.0 +2012-12-27,20445,3.0,11.0,0.0 +2012-12-28,20120,4.0,11.0,0.0 +2012-12-29,16407,5.0,11.0,0.0 +2012-12-30,15777,6.0,11.0,0.0 +2012-12-31,6200,0.0,11.0,0.0 +2013-01-01,11208,1.0,0.0,1.0 +2013-01-02,22522,2.0,0.0,0.0 +2013-01-03,24859,3.0,0.0,0.0 +2013-01-04,25302,4.0,0.0,0.0 +2013-01-05,19114,5.0,0.0,0.0 +2013-01-06,19650,6.0,0.0,0.0 +2013-01-07,27504,0.0,0.0,0.0 +2013-01-08,29375,1.0,0.0,0.0 +2013-01-09,29679,2.0,0.0,0.0 +2013-01-10,29661,3.0,0.0,0.0 +2013-01-11,28997,4.0,0.0,0.0 +2013-01-12,19920,5.0,0.0,0.0 +2013-01-13,21301,6.0,0.0,0.0 +2013-01-14,30089,0.0,0.0,0.0 +2013-01-15,30936,1.0,0.0,0.0 +2013-01-16,31416,2.0,0.0,0.0 +2013-01-17,30992,3.0,0.0,0.0 +2013-01-18,29420,4.0,0.0,0.0 +2013-01-19,20790,5.0,0.0,0.0 +2013-01-20,21897,6.0,0.0,0.0 +2013-01-21,29606,0.0,0.0,1.0 +2013-01-22,31573,1.0,0.0,0.0 +2013-01-23,32344,2.0,0.0,0.0 +2013-01-24,32485,3.0,0.0,0.0 +2013-01-25,30793,4.0,0.0,0.0 +2013-01-26,21917,5.0,0.0,0.0 +2013-01-27,23032,6.0,0.0,0.0 +2013-01-28,31946,0.0,0.0,0.0 +2013-01-29,33487,1.0,0.0,0.0 +2013-01-30,33192,2.0,0.0,0.0 +2013-01-31,32722,3.0,0.0,0.0 +2013-02-01,30716,4.0,1.0,0.0 +2013-02-02,21484,5.0,1.0,0.0 +2013-02-03,22962,6.0,1.0,0.0 +2013-02-04,31284,0.0,1.0,0.0 +2013-02-05,33106,1.0,1.0,0.0 +2013-02-06,32976,2.0,1.0,0.0 +2013-02-07,32429,3.0,1.0,0.0 +2013-02-08,30524,4.0,1.0,0.0 +2013-02-09,21085,5.0,1.0,0.0 +2013-02-10,22281,6.0,1.0,0.0 +2013-02-11,30989,0.0,1.0,0.0 +2013-02-12,32543,1.0,1.0,0.0 +2013-02-13,31854,2.0,1.0,0.0 +2013-02-14,30875,3.0,1.0,0.0 +2013-02-15,29531,4.0,1.0,0.0 +2013-02-16,22299,5.0,1.0,0.0 +2013-02-17,23941,6.0,1.0,0.0 +2013-02-18,33106,0.0,1.0,1.0 +2013-02-19,35274,1.0,1.0,0.0 +2013-02-20,35265,2.0,1.0,0.0 +2013-02-21,34535,3.0,1.0,0.0 +2013-02-22,33009,4.0,1.0,0.0 +2013-02-23,23466,5.0,1.0,0.0 +2013-02-24,24903,6.0,1.0,0.0 +2013-02-25,35081,0.0,1.0,0.0 +2013-02-26,36143,1.0,1.0,0.0 +2013-02-27,35992,2.0,1.0,0.0 +2013-02-28,35284,3.0,1.0,0.0 +2013-03-01,33063,4.0,2.0,0.0 +2013-03-02,23944,5.0,2.0,0.0 +2013-03-03,25119,6.0,2.0,0.0 +2013-03-04,35777,0.0,2.0,0.0 +2013-03-05,36559,1.0,2.0,0.0 +2013-03-06,35998,2.0,2.0,0.0 +2013-03-07,35682,3.0,2.0,0.0 +2013-03-08,33619,4.0,2.0,0.0 +2013-03-09,23860,5.0,2.0,0.0 +2013-03-10,25293,6.0,2.0,0.0 +2013-03-11,36253,0.0,2.0,0.0 +2013-03-12,37391,1.0,2.0,0.0 +2013-03-13,37132,2.0,2.0,0.0 +2013-03-14,36044,3.0,2.0,0.0 
+2013-03-15,34297,4.0,2.0,0.0 +2013-03-16,24005,5.0,2.0,0.0 +2013-03-17,25836,6.0,2.0,0.0 +2013-03-18,36614,0.0,2.0,0.0 +2013-03-19,38229,1.0,2.0,0.0 +2013-03-20,38085,2.0,2.0,0.0 +2013-03-21,37290,3.0,2.0,0.0 +2013-03-22,35173,4.0,2.0,0.0 +2013-03-23,23732,5.0,2.0,0.0 +2013-03-24,26573,6.0,2.0,0.0 +2013-03-25,38095,0.0,2.0,0.0 +2013-03-26,38959,1.0,2.0,0.0 +2013-03-27,36841,2.0,2.0,0.0 +2013-03-28,35861,3.0,2.0,0.0 +2013-03-29,31458,4.0,2.0,0.0 +2013-03-30,23375,5.0,2.0,0.0 +2013-03-31,23229,6.0,2.0,0.0 +2013-04-01,32188,0.0,3.0,0.0 +2013-04-02,37574,1.0,3.0,0.0 +2013-04-03,37688,2.0,3.0,0.0 +2013-04-04,36662,3.0,3.0,0.0 +2013-04-05,35247,4.0,3.0,0.0 +2013-04-06,25579,5.0,3.0,0.0 +2013-04-07,28152,6.0,3.0,0.0 +2013-04-08,38770,0.0,3.0,0.0 +2013-04-09,39537,1.0,3.0,0.0 +2013-04-10,39099,2.0,3.0,0.0 +2013-04-11,38970,3.0,3.0,0.0 +2013-04-12,37006,4.0,3.0,0.0 +2013-04-13,25241,5.0,3.0,0.0 +2013-04-14,26604,6.0,3.0,0.0 +2013-04-15,38046,0.0,3.0,0.0 +2013-04-16,39572,1.0,3.0,0.0 +2013-04-17,39873,2.0,3.0,0.0 +2013-04-18,39338,3.0,3.0,0.0 +2013-04-19,36343,4.0,3.0,0.0 +2013-04-20,25210,5.0,3.0,0.0 +2013-04-21,26877,6.0,3.0,0.0 +2013-04-22,39663,0.0,3.0,0.0 +2013-04-23,40706,1.0,3.0,0.0 +2013-04-24,39844,2.0,3.0,0.0 +2013-04-25,38703,3.0,3.0,0.0 +2013-04-26,35427,4.0,3.0,0.0 +2013-04-27,26071,5.0,3.0,0.0 +2013-04-28,27388,6.0,3.0,0.0 +2013-04-29,37487,0.0,3.0,0.0 +2013-04-30,36940,1.0,3.0,0.0 +2013-05-01,33606,2.0,4.0,0.0 +2013-05-02,37390,3.0,4.0,0.0 +2013-05-03,35633,4.0,4.0,0.0 +2013-05-04,24228,5.0,4.0,0.0 +2013-05-05,24997,6.0,4.0,0.0 +2013-05-06,36749,0.0,4.0,0.0 +2013-05-07,37704,1.0,4.0,0.0 +2013-05-08,37857,2.0,4.0,0.0 +2013-05-09,35833,3.0,4.0,0.0 +2013-05-10,34646,4.0,4.0,0.0 +2013-05-11,24376,5.0,4.0,0.0 +2013-05-12,25378,6.0,4.0,0.0 +2013-05-13,38290,0.0,4.0,0.0 +2013-05-14,39639,1.0,4.0,0.0 +2013-05-15,38600,2.0,4.0,0.0 +2013-05-16,38360,3.0,4.0,0.0 +2013-05-17,35699,4.0,4.0,0.0 +2013-05-18,23617,5.0,4.0,0.0 +2013-05-19,24777,6.0,4.0,0.0 +2013-05-20,36164,0.0,4.0,0.0 +2013-05-21,38868,1.0,4.0,0.0 +2013-05-22,39343,2.0,4.0,0.0 +2013-05-23,38808,3.0,4.0,0.0 +2013-05-24,35952,4.0,4.0,0.0 +2013-05-25,23631,5.0,4.0,0.0 +2013-05-26,24617,6.0,4.0,0.0 +2013-05-27,33553,0.0,4.0,1.0 +2013-05-28,38933,1.0,4.0,0.0 +2013-05-29,39393,2.0,4.0,0.0 +2013-05-30,37654,3.0,4.0,0.0 +2013-05-31,36341,4.0,4.0,0.0 +2013-06-01,23781,5.0,5.0,0.0 +2013-06-02,25611,6.0,5.0,0.0 +2013-06-03,38377,0.0,5.0,0.0 +2013-06-04,39508,1.0,5.0,0.0 +2013-06-05,38949,2.0,5.0,0.0 +2013-06-06,38397,3.0,5.0,0.0 +2013-06-07,36512,4.0,5.0,0.0 +2013-06-08,24453,5.0,5.0,0.0 +2013-06-09,25513,6.0,5.0,0.0 +2013-06-10,35931,0.0,5.0,0.0 +2013-06-11,36456,1.0,5.0,0.0 +2013-06-12,36649,2.0,5.0,0.0 +2013-06-13,37838,3.0,5.0,0.0 +2013-06-14,35372,4.0,5.0,0.0 +2013-06-15,22633,5.0,5.0,0.0 +2013-06-16,23632,6.0,5.0,0.0 +2013-06-17,36996,0.0,5.0,0.0 +2013-06-18,38905,1.0,5.0,0.0 +2013-06-19,38128,2.0,5.0,0.0 +2013-06-20,37205,3.0,5.0,0.0 +2013-06-21,34488,4.0,5.0,0.0 +2013-06-22,22328,5.0,5.0,0.0 +2013-06-23,24116,6.0,5.0,0.0 +2013-06-24,37051,0.0,5.0,0.0 +2013-06-25,38924,1.0,5.0,0.0 +2013-06-26,38481,2.0,5.0,0.0 +2013-06-27,37527,3.0,5.0,0.0 +2013-06-28,35081,4.0,5.0,0.0 +2013-06-29,22609,5.0,5.0,0.0 +2013-06-30,23535,6.0,5.0,0.0 +2013-07-01,35825,0.0,6.0,0.0 +2013-07-02,37818,1.0,6.0,0.0 +2013-07-03,37797,2.0,6.0,0.0 +2013-07-04,33322,3.0,6.0,1.0 +2013-07-05,32777,4.0,6.0,0.0 +2013-07-06,22675,5.0,6.0,0.0 +2013-07-07,24558,6.0,6.0,0.0 +2013-07-08,38713,0.0,6.0,0.0 +2013-07-09,40620,1.0,6.0,0.0 +2013-07-10,42070,2.0,6.0,0.0 
+2013-07-11,41020,3.0,6.0,0.0 +2013-07-12,37346,4.0,6.0,0.0 +2013-07-13,23190,5.0,6.0,0.0 +2013-07-14,24518,6.0,6.0,0.0 +2013-07-15,38390,0.0,6.0,0.0 +2013-07-16,40149,1.0,6.0,0.0 +2013-07-17,40568,2.0,6.0,0.0 +2013-07-18,40213,3.0,6.0,0.0 +2013-07-19,38293,4.0,6.0,0.0 +2013-07-20,24090,5.0,6.0,0.0 +2013-07-21,24762,6.0,6.0,0.0 +2013-07-22,39038,0.0,6.0,0.0 +2013-07-23,40878,1.0,6.0,0.0 +2013-07-24,39600,2.0,6.0,0.0 +2013-07-25,38883,3.0,6.0,0.0 +2013-07-26,36596,4.0,6.0,0.0 +2013-07-27,23424,5.0,6.0,0.0 +2013-07-28,24364,6.0,6.0,0.0 +2013-07-29,37997,0.0,6.0,0.0 +2013-07-30,39569,1.0,6.0,0.0 +2013-07-31,39220,2.0,6.0,0.0 +2013-08-01,38151,3.0,7.0,0.0 +2013-08-02,35991,4.0,7.0,0.0 +2013-08-03,23359,5.0,7.0,0.0 +2013-08-04,24392,6.0,7.0,0.0 +2013-08-05,37880,0.0,7.0,0.0 +2013-08-06,39787,1.0,7.0,0.0 +2013-08-07,40562,2.0,7.0,0.0 +2013-08-08,39204,3.0,7.0,0.0 +2013-08-09,36126,4.0,7.0,0.0 +2013-08-10,23322,5.0,7.0,0.0 +2013-08-11,24528,6.0,7.0,0.0 +2013-08-12,37294,0.0,7.0,0.0 +2013-08-13,38848,1.0,7.0,0.0 +2013-08-14,38772,2.0,7.0,0.0 +2013-08-15,34626,3.0,7.0,0.0 +2013-08-16,34857,4.0,7.0,0.0 +2013-08-17,23932,5.0,7.0,0.0 +2013-08-18,24779,6.0,7.0,0.0 +2013-08-19,37843,0.0,7.0,0.0 +2013-08-20,38890,1.0,7.0,0.0 +2013-08-21,39298,2.0,7.0,0.0 +2013-08-22,38649,3.0,7.0,0.0 +2013-08-23,36410,4.0,7.0,0.0 +2013-08-24,23893,5.0,7.0,0.0 +2013-08-25,25183,6.0,7.0,0.0 +2013-08-26,37745,0.0,7.0,0.0 +2013-08-27,40279,1.0,7.0,0.0 +2013-08-28,40041,2.0,7.0,0.0 +2013-08-29,39814,3.0,7.0,0.0 +2013-08-30,36737,4.0,7.0,0.0 +2013-08-31,23496,5.0,7.0,0.0 +2013-09-01,24887,6.0,8.0,0.0 +2013-09-02,34734,0.0,8.0,1.0 +2013-09-03,40062,1.0,8.0,0.0 +2013-09-04,40547,2.0,8.0,0.0 +2013-09-05,39817,3.0,8.0,0.0 +2013-09-06,36795,4.0,8.0,0.0 +2013-09-07,25041,5.0,8.0,0.0 +2013-09-08,26867,6.0,8.0,0.0 +2013-09-09,40162,0.0,8.0,0.0 +2013-09-10,41282,1.0,8.0,0.0 +2013-09-11,41776,2.0,8.0,0.0 +2013-09-12,40797,3.0,8.0,0.0 +2013-09-13,39038,4.0,8.0,0.0 +2013-09-14,25547,5.0,8.0,0.0 +2013-09-15,27248,6.0,8.0,0.0 +2013-09-16,41174,0.0,8.0,0.0 +2013-09-17,41800,1.0,8.0,0.0 +2013-09-18,40673,2.0,8.0,0.0 +2013-09-19,35777,3.0,8.0,0.0 +2013-09-20,37267,4.0,8.0,0.0 +2013-09-21,25963,5.0,8.0,0.0 +2013-09-22,28105,6.0,8.0,0.0 +2013-09-23,40921,0.0,8.0,0.0 +2013-09-24,42979,1.0,8.0,0.0 +2013-09-25,42683,2.0,8.0,0.0 +2013-09-26,42336,3.0,8.0,0.0 +2013-09-27,39720,4.0,8.0,0.0 +2013-09-28,26060,5.0,8.0,0.0 +2013-09-29,29404,6.0,8.0,0.0 +2013-09-30,41805,0.0,8.0,0.0 +2013-10-01,41029,1.0,9.0,0.0 +2013-10-02,41378,2.0,9.0,0.0 +2013-10-03,40288,3.0,9.0,0.0 +2013-10-04,38966,4.0,9.0,0.0 +2013-10-05,26606,5.0,9.0,0.0 +2013-10-06,28694,6.0,9.0,0.0 +2013-10-07,42983,0.0,9.0,0.0 +2013-10-08,45969,1.0,9.0,0.0 +2013-10-09,45673,2.0,9.0,0.0 +2013-10-10,44823,3.0,9.0,0.0 +2013-10-11,42240,4.0,9.0,0.0 +2013-10-12,28719,5.0,9.0,0.0 +2013-10-13,29129,6.0,9.0,0.0 +2013-10-14,42706,0.0,9.0,1.0 +2013-10-15,45380,1.0,9.0,0.0 +2013-10-16,46301,2.0,9.0,0.0 +2013-10-17,45649,3.0,9.0,0.0 +2013-10-18,42778,4.0,9.0,0.0 +2013-10-19,28774,5.0,9.0,0.0 +2013-10-20,31296,6.0,9.0,0.0 +2013-10-21,45838,0.0,9.0,0.0 +2013-10-22,46948,1.0,9.0,0.0 +2013-10-23,46510,2.0,9.0,0.0 +2013-10-24,44514,3.0,9.0,0.0 +2013-10-25,44395,4.0,9.0,0.0 +2013-10-26,29485,5.0,9.0,0.0 +2013-10-27,31661,6.0,9.0,0.0 +2013-10-28,46946,0.0,9.0,0.0 +2013-10-29,48500,1.0,9.0,0.0 +2013-10-30,48321,2.0,9.0,0.0 +2013-10-31,46159,3.0,9.0,0.0 +2013-11-01,41112,4.0,10.0,0.0 +2013-11-02,29827,5.0,10.0,0.0 +2013-11-03,31521,6.0,10.0,0.0 +2013-11-04,47735,0.0,10.0,0.0 +2013-11-05,49358,1.0,10.0,0.0 
+2013-11-06,49622,2.0,10.0,0.0 +2013-11-07,48864,3.0,10.0,0.0 +2013-11-08,46153,4.0,10.0,0.0 +2013-11-09,31598,5.0,10.0,0.0 +2013-11-10,33505,6.0,10.0,0.0 +2013-11-11,47101,0.0,10.0,1.0 +2013-11-12,50609,1.0,10.0,0.0 +2013-11-13,48306,2.0,10.0,0.0 +2013-11-14,49673,3.0,10.0,0.0 +2013-11-15,46797,4.0,10.0,0.0 +2013-11-16,32098,5.0,10.0,0.0 +2013-11-17,34542,6.0,10.0,0.0 +2013-11-18,50981,0.0,10.0,0.0 +2013-11-19,51901,1.0,10.0,0.0 +2013-11-20,51862,2.0,10.0,0.0 +2013-11-21,51330,3.0,10.0,0.0 +2013-11-22,48100,4.0,10.0,0.0 +2013-11-23,32590,5.0,10.0,0.0 +2013-11-24,34863,6.0,10.0,0.0 +2013-11-25,49346,0.0,10.0,0.0 +2013-11-26,51549,1.0,10.0,0.0 +2013-11-27,49231,2.0,10.0,0.0 +2013-11-28,42985,3.0,10.0,1.0 +2013-11-29,39014,4.0,10.0,0.0 +2013-11-30,29927,5.0,10.0,0.0 +2013-12-01,32875,6.0,11.0,0.0 +2013-12-02,50342,0.0,11.0,0.0 +2013-12-03,52500,1.0,11.0,0.0 +2013-12-04,52398,2.0,11.0,0.0 +2013-12-05,51352,3.0,11.0,0.0 +2013-12-06,47337,4.0,11.0,0.0 +2013-12-07,32551,5.0,11.0,0.0 +2013-12-08,34756,6.0,11.0,0.0 +2013-12-09,50839,0.0,11.0,0.0 +2013-12-10,51506,1.0,11.0,0.0 +2013-12-11,50204,2.0,11.0,0.0 +2013-12-12,48640,3.0,11.0,0.0 +2013-12-13,45504,4.0,11.0,0.0 +2013-12-14,30350,5.0,11.0,0.0 +2013-12-15,32192,6.0,11.0,0.0 +2013-12-16,47571,0.0,11.0,0.0 +2013-12-17,48189,1.0,11.0,0.0 +2013-12-18,46983,2.0,11.0,0.0 +2013-12-19,44986,3.0,11.0,0.0 +2013-12-20,41717,4.0,11.0,0.0 +2013-12-21,26649,5.0,11.0,0.0 +2013-12-22,26917,6.0,11.0,0.0 +2013-12-23,36144,0.0,11.0,0.0 +2013-12-24,30015,1.0,11.0,0.0 +2013-12-25,23280,2.0,11.0,1.0 +2013-12-26,29732,3.0,11.0,0.0 +2013-12-27,32334,4.0,11.0,0.0 +2013-12-28,26369,5.0,11.0,0.0 +2013-12-29,27110,6.0,11.0,0.0 +2013-12-30,35237,0.0,11.0,0.0 +2013-12-31,12471,1.0,11.0,0.0 +2014-01-01,19103,2.0,0.0,1.0 +2014-01-02,38454,3.0,0.0,0.0 +2014-01-03,38788,4.0,0.0,0.0 +2014-01-04,31132,5.0,0.0,0.0 +2014-01-05,32334,6.0,0.0,0.0 +2014-01-06,44539,0.0,0.0,0.0 +2014-01-07,47256,1.0,0.0,0.0 +2014-01-08,47472,2.0,0.0,0.0 +2014-01-09,48662,3.0,0.0,0.0 +2014-01-10,46462,4.0,0.0,0.0 +2014-01-11,32376,5.0,0.0,0.0 +2014-01-12,34043,6.0,0.0,0.0 +2014-01-13,49000,0.0,0.0,0.0 +2014-01-14,50766,1.0,0.0,0.0 +2014-01-15,51247,2.0,0.0,0.0 +2014-01-16,51321,3.0,0.0,0.0 +2014-01-17,48280,4.0,0.0,0.0 +2014-01-18,33741,5.0,0.0,0.0 +2014-01-19,35398,6.0,0.0,0.0 +2014-01-20,48750,0.0,0.0,1.0 +2014-01-21,52079,1.0,0.0,0.0 +2014-01-22,52542,2.0,0.0,0.0 +2014-01-23,52376,3.0,0.0,0.0 +2014-01-24,48155,4.0,0.0,0.0 +2014-01-25,36337,5.0,0.0,0.0 +2014-01-26,38223,6.0,0.0,0.0 +2014-01-27,51032,0.0,0.0,0.0 +2014-01-28,52414,1.0,0.0,0.0 +2014-01-29,51673,2.0,0.0,0.0 +2014-01-30,50439,3.0,0.0,0.0 +2014-01-31,47161,4.0,0.0,0.0 +2014-02-01,33166,5.0,1.0,0.0 +2014-02-02,34890,6.0,1.0,0.0 +2014-02-03,47975,0.0,1.0,0.0 +2014-02-04,51265,1.0,1.0,0.0 +2014-02-05,51264,2.0,1.0,0.0 +2014-02-06,52288,3.0,1.0,0.0 +2014-02-07,50247,4.0,1.0,0.0 +2014-02-08,36698,5.0,1.0,0.0 +2014-02-09,38503,6.0,1.0,0.0 +2014-02-10,54226,0.0,1.0,0.0 +2014-02-11,56115,1.0,1.0,0.0 +2014-02-12,56224,2.0,1.0,0.0 +2014-02-13,55362,3.0,1.0,0.0 +2014-02-14,50776,4.0,1.0,0.0 +2014-02-15,35096,5.0,1.0,0.0 +2014-02-16,38108,6.0,1.0,0.0 +2014-02-17,53408,0.0,1.0,1.0 +2014-02-18,57332,1.0,1.0,0.0 +2014-02-19,56375,2.0,1.0,0.0 +2014-02-20,53624,3.0,1.0,0.0 +2014-02-21,53840,4.0,1.0,0.0 +2014-02-22,37965,5.0,1.0,0.0 +2014-02-23,38901,6.0,1.0,0.0 +2014-02-24,57056,0.0,1.0,0.0 +2014-02-25,58723,1.0,1.0,0.0 +2014-02-26,58317,2.0,1.0,0.0 +2014-02-27,58104,3.0,1.0,0.0 +2014-02-28,54417,4.0,1.0,0.0 +2014-03-01,37253,5.0,2.0,0.0 
+2014-03-02,39545,6.0,2.0,0.0 +2014-03-03,56281,0.0,2.0,0.0 +2014-03-04,58275,1.0,2.0,0.0 +2014-03-05,58531,2.0,2.0,0.0 +2014-03-06,58230,3.0,2.0,0.0 +2014-03-07,54381,4.0,2.0,0.0 +2014-03-08,36908,5.0,2.0,0.0 +2014-03-09,38903,6.0,2.0,0.0 +2014-03-10,57466,0.0,2.0,0.0 +2014-03-11,58201,1.0,2.0,0.0 +2014-03-12,59508,2.0,2.0,0.0 +2014-03-13,58819,3.0,2.0,0.0 +2014-03-14,54631,4.0,2.0,0.0 +2014-03-15,37045,5.0,2.0,0.0 +2014-03-16,40071,6.0,2.0,0.0 +2014-03-17,58119,0.0,2.0,0.0 +2014-03-18,60296,1.0,2.0,0.0 +2014-03-19,60348,2.0,2.0,0.0 +2014-03-20,59653,3.0,2.0,0.0 +2014-03-21,54723,4.0,2.0,0.0 +2014-03-22,38438,5.0,2.0,0.0 +2014-03-23,41116,6.0,2.0,0.0 +2014-03-24,59413,0.0,2.0,0.0 +2014-03-25,58491,1.0,2.0,0.0 +2014-03-26,57706,2.0,2.0,0.0 +2014-03-27,59629,3.0,2.0,0.0 +2014-03-28,55961,4.0,2.0,0.0 +2014-03-29,37785,5.0,2.0,0.0 +2014-03-30,40405,6.0,2.0,0.0 +2014-03-31,59279,0.0,2.0,0.0 +2014-04-01,59904,1.0,3.0,0.0 +2014-04-02,61078,2.0,3.0,0.0 +2014-04-03,60665,3.0,3.0,0.0 +2014-04-04,56880,4.0,3.0,0.0 +2014-04-05,38088,5.0,3.0,0.0 +2014-04-06,40745,6.0,3.0,0.0 +2014-04-07,59201,0.0,3.0,0.0 +2014-04-08,62373,1.0,3.0,0.0 +2014-04-09,61447,2.0,3.0,0.0 +2014-04-10,60721,3.0,3.0,0.0 +2014-04-11,57113,4.0,3.0,0.0 +2014-04-12,38778,5.0,3.0,0.0 +2014-04-13,41754,6.0,3.0,0.0 +2014-04-14,60584,0.0,3.0,0.0 +2014-04-15,62573,1.0,3.0,0.0 +2014-04-16,61692,2.0,3.0,0.0 +2014-04-17,58954,3.0,3.0,0.0 +2014-04-18,50828,4.0,3.0,0.0 +2014-04-19,37908,5.0,3.0,0.0 +2014-04-20,37161,6.0,3.0,0.0 +2014-04-21,53971,0.0,3.0,0.0 +2014-04-22,63154,1.0,3.0,0.0 +2014-04-23,64034,2.0,3.0,0.0 +2014-04-24,63013,3.0,3.0,0.0 +2014-04-25,58866,4.0,3.0,0.0 +2014-04-26,41588,5.0,3.0,0.0 +2014-04-27,46281,6.0,3.0,0.0 +2014-04-28,62917,0.0,3.0,0.0 +2014-04-29,58880,1.0,3.0,0.0 +2014-04-30,59201,2.0,3.0,0.0 +2014-05-01,51873,3.0,4.0,0.0 +2014-05-02,53187,4.0,4.0,0.0 +2014-05-03,38524,5.0,4.0,0.0 +2014-05-04,42442,6.0,4.0,0.0 +2014-05-05,59938,0.0,4.0,0.0 +2014-05-06,63226,1.0,4.0,0.0 +2014-05-07,64056,2.0,4.0,0.0 +2014-05-08,62352,3.0,4.0,0.0 +2014-05-09,58471,4.0,4.0,0.0 +2014-05-10,40697,5.0,4.0,0.0 +2014-05-11,42929,6.0,4.0,0.0 +2014-05-12,62494,0.0,4.0,0.0 +2014-05-13,63889,1.0,4.0,0.0 +2014-05-14,63489,2.0,4.0,0.0 +2014-05-15,62615,3.0,4.0,0.0 +2014-05-16,57477,4.0,4.0,0.0 +2014-05-17,38612,5.0,4.0,0.0 +2014-05-18,41834,6.0,4.0,0.0 +2014-05-19,61093,0.0,4.0,0.0 +2014-05-20,63539,1.0,4.0,0.0 +2014-05-21,63520,2.0,4.0,0.0 +2014-05-22,62947,3.0,4.0,0.0 +2014-05-23,58949,4.0,4.0,0.0 +2014-05-24,38860,5.0,4.0,0.0 +2014-05-25,41500,6.0,4.0,0.0 +2014-05-26,54191,0.0,4.0,1.0 +2014-05-27,62110,1.0,4.0,0.0 +2014-05-28,62119,2.0,4.0,0.0 +2014-05-29,57778,3.0,4.0,0.0 +2014-05-30,55360,4.0,4.0,0.0 +2014-05-31,36956,5.0,4.0,0.0 +2014-06-01,38803,6.0,5.0,0.0 +2014-06-02,57279,0.0,5.0,0.0 +2014-06-03,61545,1.0,5.0,0.0 +2014-06-04,62143,2.0,5.0,0.0 +2014-06-05,61565,3.0,5.0,0.0 +2014-06-06,56682,4.0,5.0,0.0 +2014-06-07,37230,5.0,5.0,0.0 +2014-06-08,39439,6.0,5.0,0.0 +2014-06-09,56406,0.0,5.0,0.0 +2014-06-10,60934,1.0,5.0,0.0 +2014-06-11,61030,2.0,5.0,0.0 +2014-06-12,58531,3.0,5.0,0.0 +2014-06-13,54801,4.0,5.0,0.0 +2014-06-14,36688,5.0,5.0,0.0 +2014-06-15,38911,6.0,5.0,0.0 +2014-06-16,57628,0.0,5.0,0.0 +2014-06-17,60437,1.0,5.0,0.0 +2014-06-18,59666,2.0,5.0,0.0 +2014-06-19,58550,3.0,5.0,0.0 +2014-06-20,54734,4.0,5.0,0.0 +2014-06-21,36936,5.0,5.0,0.0 +2014-06-22,40998,6.0,5.0,0.0 +2014-06-23,57817,0.0,5.0,0.0 +2014-06-24,59898,1.0,5.0,0.0 +2014-06-25,59275,2.0,5.0,0.0 +2014-06-26,58194,3.0,5.0,0.0 +2014-06-27,54687,4.0,5.0,0.0 
+2014-06-28,34376,5.0,5.0,0.0 +2014-06-29,36039,6.0,5.0,0.0 +2014-06-30,56288,0.0,5.0,0.0 +2014-07-01,57564,1.0,6.0,0.0 +2014-07-02,58226,2.0,6.0,0.0 +2014-07-03,57447,3.0,6.0,0.0 +2014-07-04,46868,4.0,6.0,1.0 +2014-07-05,31976,5.0,6.0,0.0 +2014-07-06,35625,6.0,6.0,0.0 +2014-07-07,57648,0.0,6.0,0.0 +2014-07-08,59817,1.0,6.0,0.0 +2014-07-09,58684,2.0,6.0,0.0 +2014-07-10,59610,3.0,6.0,0.0 +2014-07-11,56361,4.0,6.0,0.0 +2014-07-12,36405,5.0,6.0,0.0 +2014-07-13,37367,6.0,6.0,0.0 +2014-07-14,57220,0.0,6.0,0.0 +2014-07-15,60954,1.0,6.0,0.0 +2014-07-16,60772,2.0,6.0,0.0 +2014-07-17,58139,3.0,6.0,0.0 +2014-07-18,55605,4.0,6.0,0.0 +2014-07-19,35444,5.0,6.0,0.0 +2014-07-20,37516,6.0,6.0,0.0 +2014-07-21,58789,0.0,6.0,0.0 +2014-07-22,61115,1.0,6.0,0.0 +2014-07-23,61183,2.0,6.0,0.0 +2014-07-24,60482,3.0,6.0,0.0 +2014-07-25,56642,4.0,6.0,0.0 +2014-07-26,37052,5.0,6.0,0.0 +2014-07-27,40482,6.0,6.0,0.0 +2014-07-28,58625,0.0,6.0,0.0 +2014-07-29,60214,1.0,6.0,0.0 +2014-07-30,60244,2.0,6.0,0.0 +2014-07-31,59555,3.0,6.0,0.0 +2014-08-01,54851,4.0,7.0,0.0 +2014-08-02,34918,5.0,7.0,0.0 +2014-08-03,36852,6.0,7.0,0.0 +2014-08-04,57355,0.0,7.0,0.0 +2014-08-05,60536,1.0,7.0,0.0 +2014-08-06,60691,2.0,7.0,0.0 +2014-08-07,59387,3.0,7.0,0.0 +2014-08-08,55824,4.0,7.0,0.0 +2014-08-09,35770,5.0,7.0,0.0 +2014-08-10,38102,6.0,7.0,0.0 +2014-08-11,59054,0.0,7.0,0.0 +2014-08-12,60590,1.0,7.0,0.0 +2014-08-13,60448,2.0,7.0,0.0 +2014-08-14,58944,3.0,7.0,0.0 +2014-08-15,53160,4.0,7.0,0.0 +2014-08-16,35988,5.0,7.0,0.0 +2014-08-17,39009,6.0,7.0,0.0 +2014-08-18,59031,0.0,7.0,0.0 +2014-08-19,61664,1.0,7.0,0.0 +2014-08-20,61490,2.0,7.0,0.0 +2014-08-21,61343,3.0,7.0,0.0 +2014-08-22,58054,4.0,7.0,0.0 +2014-08-23,38573,5.0,7.0,0.0 +2014-08-24,41813,6.0,7.0,0.0 +2014-08-25,58804,0.0,7.0,0.0 +2014-08-26,61870,1.0,7.0,0.0 +2014-08-27,61716,2.0,7.0,0.0 +2014-08-28,60539,3.0,7.0,0.0 +2014-08-29,56147,4.0,7.0,0.0 +2014-08-30,36483,5.0,7.0,0.0 +2014-08-31,38402,6.0,7.0,0.0 +2014-09-01,53643,0.0,8.0,1.0 +2014-09-02,62318,1.0,8.0,0.0 +2014-09-03,63877,2.0,8.0,0.0 +2014-09-04,63233,3.0,8.0,0.0 +2014-09-05,59368,4.0,8.0,0.0 +2014-09-06,39023,5.0,8.0,0.0 +2014-09-07,40969,6.0,8.0,0.0 +2014-09-08,59558,0.0,8.0,0.0 +2014-09-09,63536,1.0,8.0,0.0 +2014-09-10,64457,2.0,8.0,0.0 +2014-09-11,64373,3.0,8.0,0.0 +2014-09-12,60704,4.0,8.0,0.0 +2014-09-13,40285,5.0,8.0,0.0 +2014-09-14,42980,6.0,8.0,0.0 +2014-09-15,63854,0.0,8.0,0.0 +2014-09-16,66603,1.0,8.0,0.0 +2014-09-17,66943,2.0,8.0,0.0 +2014-09-18,65374,3.0,8.0,0.0 +2014-09-19,61976,4.0,8.0,0.0 +2014-09-20,41540,5.0,8.0,0.0 +2014-09-21,45895,6.0,8.0,0.0 +2014-09-22,65680,0.0,8.0,0.0 +2014-09-23,65894,1.0,8.0,0.0 +2014-09-24,67516,2.0,8.0,0.0 +2014-09-25,66172,3.0,8.0,0.0 +2014-09-26,62052,4.0,8.0,0.0 +2014-09-27,40681,5.0,8.0,0.0 +2014-09-28,44507,6.0,8.0,0.0 +2014-09-29,66009,0.0,8.0,0.0 +2014-09-30,65377,1.0,8.0,0.0 +2014-10-01,64361,2.0,9.0,0.0 +2014-10-02,63192,3.0,9.0,0.0 +2014-10-03,58623,4.0,9.0,0.0 +2014-10-04,40046,5.0,9.0,0.0 +2014-10-05,42635,6.0,9.0,0.0 +2014-10-06,64289,0.0,9.0,0.0 +2014-10-07,67728,1.0,9.0,0.0 +2014-10-08,70580,2.0,9.0,0.0 +2014-10-09,68939,3.0,9.0,0.0 +2014-10-10,65565,4.0,9.0,0.0 +2014-10-11,45396,5.0,9.0,0.0 +2014-10-12,46315,6.0,9.0,0.0 +2014-10-13,68081,0.0,9.0,1.0 +2014-10-14,70462,1.0,9.0,0.0 +2014-10-15,71679,2.0,9.0,0.0 +2014-10-16,71133,3.0,9.0,0.0 +2014-10-17,66584,4.0,9.0,0.0 +2014-10-18,45259,5.0,9.0,0.0 +2014-10-19,46726,6.0,9.0,0.0 +2014-10-20,71061,0.0,9.0,0.0 +2014-10-21,74351,1.0,9.0,0.0 +2014-10-22,71496,2.0,9.0,0.0 +2014-10-23,72852,3.0,9.0,0.0 
+2014-10-24,68836,4.0,9.0,0.0 +2014-10-25,46343,5.0,9.0,0.0 +2014-10-26,51704,6.0,9.0,0.0 +2014-10-27,72386,0.0,9.0,0.0 +2014-10-28,73319,1.0,9.0,0.0 +2014-10-29,71694,2.0,9.0,0.0 +2014-10-30,73188,3.0,9.0,0.0 +2014-10-31,66606,4.0,9.0,0.0 +2014-11-01,43864,5.0,10.0,0.0 +2014-11-02,48725,6.0,10.0,0.0 +2014-11-03,72901,0.0,10.0,0.0 +2014-11-04,75637,1.0,10.0,0.0 +2014-11-05,76423,2.0,10.0,0.0 +2014-11-06,74164,3.0,10.0,0.0 +2014-11-07,71186,4.0,10.0,0.0 +2014-11-08,49033,5.0,10.0,0.0 +2014-11-09,52480,6.0,10.0,0.0 +2014-11-10,74984,0.0,10.0,0.0 +2014-11-11,76113,1.0,10.0,1.0 +2014-11-12,77768,2.0,10.0,0.0 +2014-11-13,77072,3.0,10.0,0.0 +2014-11-14,72203,4.0,10.0,0.0 +2014-11-15,50149,5.0,10.0,0.0 +2014-11-16,53584,6.0,10.0,0.0 +2014-11-17,77223,0.0,10.0,0.0 +2014-11-18,79371,1.0,10.0,0.0 +2014-11-19,79472,2.0,10.0,0.0 +2014-11-20,78357,3.0,10.0,0.0 +2014-11-21,73355,4.0,10.0,0.0 +2014-11-22,50881,5.0,10.0,0.0 +2014-11-23,55522,6.0,10.0,0.0 +2014-11-24,78109,0.0,10.0,0.0 +2014-11-25,79387,1.0,10.0,0.0 +2014-11-26,76170,2.0,10.0,0.0 +2014-11-27,67321,3.0,10.0,1.0 +2014-11-28,61368,4.0,10.0,0.0 +2014-11-29,46144,5.0,10.0,0.0 +2014-11-30,51623,6.0,10.0,0.0 +2014-12-01,77869,0.0,11.0,0.0 +2014-12-02,80799,1.0,11.0,0.0 +2014-12-03,80672,2.0,11.0,0.0 +2014-12-04,78755,3.0,11.0,0.0 +2014-12-05,74423,4.0,11.0,0.0 +2014-12-06,51209,5.0,11.0,0.0 +2014-12-07,54238,6.0,11.0,0.0 +2014-12-08,77058,0.0,11.0,0.0 +2014-12-09,79147,1.0,11.0,0.0 +2014-12-10,77471,2.0,11.0,0.0 +2014-12-11,76005,3.0,11.0,0.0 +2014-12-12,70489,4.0,11.0,0.0 +2014-12-13,47806,5.0,11.0,0.0 +2014-12-14,50486,6.0,11.0,0.0 +2014-12-15,73353,0.0,11.0,0.0 +2014-12-16,74217,1.0,11.0,0.0 +2014-12-17,72849,2.0,11.0,0.0 +2014-12-18,70140,3.0,11.0,0.0 +2014-12-19,64016,4.0,11.0,0.0 +2014-12-20,42131,5.0,11.0,0.0 +2014-12-21,45466,6.0,11.0,0.0 +2014-12-22,59804,0.0,11.0,0.0 +2014-12-23,57678,1.0,11.0,0.0 +2014-12-24,45609,2.0,11.0,0.0 +2014-12-25,34924,3.0,11.0,1.0 +2014-12-26,40747,4.0,11.0,0.0 +2014-12-27,37359,5.0,11.0,0.0 +2014-12-28,39682,6.0,11.0,0.0 +2014-12-29,53699,0.0,11.0,0.0 +2014-12-30,54029,1.0,11.0,0.0 +2014-12-31,18574,2.0,11.0,0.0 +2015-01-01,33211,3.0,0.0,1.0 +2015-01-02,48077,4.0,0.0,0.0 +2015-01-03,44563,5.0,0.0,0.0 +2015-01-04,49137,6.0,0.0,0.0 +2015-01-05,66676,0.0,0.0,0.0 +2015-01-06,66039,1.0,0.0,0.0 +2015-01-07,70055,2.0,0.0,0.0 +2015-01-08,71505,3.0,0.0,0.0 +2015-01-09,66446,4.0,0.0,0.0 +2015-01-10,49634,5.0,0.0,0.0 +2015-01-11,52346,6.0,0.0,0.0 +2015-01-12,76021,0.0,0.0,0.0 +2015-01-13,77374,1.0,0.0,0.0 +2015-01-14,78209,2.0,0.0,0.0 +2015-01-15,77896,3.0,0.0,0.0 +2015-01-16,73533,4.0,0.0,0.0 +2015-01-17,51229,5.0,0.0,0.0 +2015-01-18,54212,6.0,0.0,0.0 +2015-01-19,75243,0.0,0.0,1.0 +2015-01-20,80898,1.0,0.0,0.0 +2015-01-21,81397,2.0,0.0,0.0 +2015-01-22,80848,3.0,0.0,0.0 +2015-01-23,77202,4.0,0.0,0.0 +2015-01-24,55935,5.0,0.0,0.0 +2015-01-25,61597,6.0,0.0,0.0 +2015-01-26,79962,0.0,0.0,0.0 +2015-01-27,82207,1.0,0.0,0.0 +2015-01-28,82554,2.0,0.0,0.0 +2015-01-29,81467,3.0,0.0,0.0 +2015-01-30,76405,4.0,0.0,0.0 +2015-01-31,53052,5.0,0.0,0.0 +2015-02-01,55516,6.0,1.0,0.0 +2015-02-02,78483,0.0,1.0,0.0 +2015-02-03,80571,1.0,1.0,0.0 +2015-02-04,83041,2.0,1.0,0.0 +2015-02-05,82992,3.0,1.0,0.0 +2015-02-06,79509,4.0,1.0,0.0 +2015-02-07,54980,5.0,1.0,0.0 +2015-02-08,59201,6.0,1.0,0.0 +2015-02-09,84344,0.0,1.0,0.0 +2015-02-10,85600,1.0,1.0,0.0 +2015-02-11,84990,2.0,1.0,0.0 +2015-02-12,84056,3.0,1.0,0.0 +2015-02-13,78771,4.0,1.0,0.0 +2015-02-14,50473,5.0,1.0,0.0 +2015-02-15,55681,6.0,1.0,0.0 +2015-02-16,76934,0.0,1.0,1.0 
+2015-02-17,80882,1.0,1.0,0.0 +2015-02-18,80672,2.0,1.0,0.0 +2015-02-19,79879,3.0,1.0,0.0 +2015-02-20,77309,4.0,1.0,0.0 +2015-02-21,56256,5.0,1.0,0.0 +2015-02-22,62005,6.0,1.0,0.0 +2015-02-23,81400,0.0,1.0,0.0 +2015-02-24,84252,1.0,1.0,0.0 +2015-02-25,85804,2.0,1.0,0.0 +2015-02-26,86417,3.0,1.0,0.0 +2015-02-27,81035,4.0,1.0,0.0 +2015-02-28,57647,5.0,1.0,0.0 +2015-03-01,59286,6.0,2.0,0.0 +2015-03-02,87020,0.0,2.0,0.0 +2015-03-03,89520,1.0,2.0,0.0 +2015-03-04,90519,2.0,2.0,0.0 +2015-03-05,88078,3.0,2.0,0.0 +2015-03-06,83016,4.0,2.0,0.0 +2015-03-07,57201,5.0,2.0,0.0 +2015-03-08,60121,6.0,2.0,0.0 +2015-03-09,88330,0.0,2.0,0.0 +2015-03-10,91456,1.0,2.0,0.0 +2015-03-11,91102,2.0,2.0,0.0 +2015-03-12,90934,3.0,2.0,0.0 +2015-03-13,86003,4.0,2.0,0.0 +2015-03-14,58089,5.0,2.0,0.0 +2015-03-15,62177,6.0,2.0,0.0 +2015-03-16,90924,0.0,2.0,0.0 +2015-03-17,93210,1.0,2.0,0.0 +2015-03-18,92153,2.0,2.0,0.0 +2015-03-19,91674,3.0,2.0,0.0 +2015-03-20,86065,4.0,2.0,0.0 +2015-03-21,59532,5.0,2.0,0.0 +2015-03-22,65999,6.0,2.0,0.0 +2015-03-23,91418,0.0,2.0,0.0 +2015-03-24,94159,1.0,2.0,0.0 +2015-03-25,93458,2.0,2.0,0.0 +2015-03-26,92072,3.0,2.0,0.0 +2015-03-27,83128,4.0,2.0,0.0 +2015-03-28,57894,5.0,2.0,0.0 +2015-03-29,60676,6.0,2.0,0.0 +2015-03-30,91212,0.0,2.0,0.0 +2015-03-31,93079,1.0,2.0,0.0 +2015-04-01,90691,2.0,3.0,0.0 +2015-04-02,86589,3.0,3.0,0.0 +2015-04-03,74443,4.0,3.0,0.0 +2015-04-04,55184,5.0,3.0,0.0 +2015-04-05,54861,6.0,3.0,0.0 +2015-04-06,78892,0.0,3.0,0.0 +2015-04-07,93458,1.0,3.0,0.0 +2015-04-08,95291,2.0,3.0,0.0 +2015-04-09,93141,3.0,3.0,0.0 +2015-04-10,86853,4.0,3.0,0.0 +2015-04-11,59522,5.0,3.0,0.0 +2015-04-12,63432,6.0,3.0,0.0 +2015-04-13,91817,0.0,3.0,0.0 +2015-04-14,94974,1.0,3.0,0.0 +2015-04-15,94061,2.0,3.0,0.0 +2015-04-16,94221,3.0,3.0,0.0 +2015-04-17,88699,4.0,3.0,0.0 +2015-04-18,59654,5.0,3.0,0.0 +2015-04-19,65146,6.0,3.0,0.0 +2015-04-20,94916,0.0,3.0,0.0 +2015-04-21,97299,1.0,3.0,0.0 +2015-04-22,97751,2.0,3.0,0.0 +2015-04-23,95638,3.0,3.0,0.0 +2015-04-24,89613,4.0,3.0,0.0 +2015-04-25,61119,5.0,3.0,0.0 +2015-04-26,68408,6.0,3.0,0.0 +2015-04-27,94300,0.0,3.0,0.0 +2015-04-28,97417,1.0,3.0,0.0 +2015-04-29,95247,2.0,3.0,0.0 +2015-04-30,89512,3.0,3.0,0.0 +2015-05-01,71100,4.0,4.0,0.0 +2015-05-02,55068,5.0,4.0,0.0 +2015-05-03,59245,6.0,4.0,0.0 +2015-05-04,89677,0.0,4.0,0.0 +2015-05-05,94643,1.0,4.0,0.0 +2015-05-06,94869,2.0,4.0,0.0 +2015-05-07,93583,3.0,4.0,0.0 +2015-05-08,85836,4.0,4.0,0.0 +2015-05-09,57774,5.0,4.0,0.0 +2015-05-10,61098,6.0,4.0,0.0 +2015-05-11,92261,0.0,4.0,0.0 +2015-05-12,96912,1.0,4.0,0.0 +2015-05-13,94490,2.0,4.0,0.0 +2015-05-14,88189,3.0,4.0,0.0 +2015-05-15,84151,4.0,4.0,0.0 +2015-05-16,57518,5.0,4.0,0.0 +2015-05-17,62282,6.0,4.0,0.0 +2015-05-18,92330,0.0,4.0,0.0 +2015-05-19,96248,1.0,4.0,0.0 +2015-05-20,96061,2.0,4.0,0.0 +2015-05-21,94121,3.0,4.0,0.0 +2015-05-22,87344,4.0,4.0,0.0 +2015-05-23,56965,5.0,4.0,0.0 +2015-05-24,60744,6.0,4.0,0.0 +2015-05-25,77609,0.0,4.0,1.0 +2015-05-26,93876,1.0,4.0,0.0 +2015-05-27,95475,2.0,4.0,0.0 +2015-05-28,92911,3.0,4.0,0.0 +2015-05-29,86540,4.0,4.0,0.0 +2015-05-30,56399,5.0,4.0,0.0 +2015-05-31,59770,6.0,4.0,0.0 +2015-06-01,89681,0.0,5.0,0.0 +2015-06-02,94065,1.0,5.0,0.0 +2015-06-03,93262,2.0,5.0,0.0 +2015-06-04,89150,3.0,5.0,0.0 +2015-06-05,84240,4.0,5.0,0.0 +2015-06-06,55264,5.0,5.0,0.0 +2015-06-07,59114,6.0,5.0,0.0 +2015-06-08,89414,0.0,5.0,0.0 +2015-06-09,94342,1.0,5.0,0.0 +2015-06-10,92730,2.0,5.0,0.0 +2015-06-11,90337,3.0,5.0,0.0 +2015-06-12,82629,4.0,5.0,0.0 +2015-06-13,54393,5.0,5.0,0.0 +2015-06-14,58454,6.0,5.0,0.0 
+2015-06-15,88580,0.0,5.0,0.0 +2015-06-16,91424,1.0,5.0,0.0 +2015-06-17,91408,2.0,5.0,0.0 +2015-06-18,89458,3.0,5.0,0.0 +2015-06-19,82843,4.0,5.0,0.0 +2015-06-20,52691,5.0,5.0,0.0 +2015-06-21,57034,6.0,5.0,0.0 +2015-06-22,84455,0.0,5.0,0.0 +2015-06-23,90430,1.0,5.0,0.0 +2015-06-24,89483,2.0,5.0,0.0 +2015-06-25,88234,3.0,5.0,0.0 +2015-06-26,81883,4.0,5.0,0.0 +2015-06-27,52129,5.0,5.0,0.0 +2015-06-28,54858,6.0,5.0,0.0 +2015-06-29,86080,0.0,5.0,0.0 +2015-06-30,88498,1.0,5.0,0.0 +2015-07-01,86019,2.0,6.0,0.0 +2015-07-02,84921,3.0,6.0,0.0 +2015-07-03,72626,4.0,6.0,1.0 +2015-07-04,47682,5.0,6.0,0.0 +2015-07-05,51161,6.0,6.0,0.0 +2015-07-06,84781,0.0,6.0,0.0 +2015-07-07,89887,1.0,6.0,0.0 +2015-07-08,89657,2.0,6.0,0.0 +2015-07-09,88592,3.0,6.0,0.0 +2015-07-10,82408,4.0,6.0,0.0 +2015-07-11,52448,5.0,6.0,0.0 +2015-07-12,56396,6.0,6.0,0.0 +2015-07-13,87354,0.0,6.0,0.0 +2015-07-14,88965,1.0,6.0,0.0 +2015-07-15,88859,2.0,6.0,0.0 +2015-07-16,86788,3.0,6.0,0.0 +2015-07-17,80759,4.0,6.0,0.0 +2015-07-18,51601,5.0,6.0,0.0 +2015-07-19,55215,6.0,6.0,0.0 +2015-07-20,85913,0.0,6.0,0.0 +2015-07-21,89034,1.0,6.0,0.0 +2015-07-22,89449,2.0,6.0,0.0 +2015-07-23,89039,3.0,6.0,0.0 +2015-07-24,82762,4.0,6.0,0.0 +2015-07-25,53435,5.0,6.0,0.0 +2015-07-26,57851,6.0,6.0,0.0 +2015-07-27,87111,0.0,6.0,0.0 +2015-07-28,89813,1.0,6.0,0.0 +2015-07-29,89080,2.0,6.0,0.0 +2015-07-30,86852,3.0,6.0,0.0 +2015-07-31,80715,4.0,6.0,0.0 +2015-08-01,49693,5.0,7.0,0.0 +2015-08-02,51980,6.0,7.0,0.0 +2015-08-03,83065,0.0,7.0,0.0 +2015-08-04,87753,1.0,7.0,0.0 +2015-08-05,87047,2.0,7.0,0.0 +2015-08-06,85675,3.0,7.0,0.0 +2015-08-07,79329,4.0,7.0,0.0 +2015-08-08,50372,5.0,7.0,0.0 +2015-08-09,53900,6.0,7.0,0.0 +2015-08-10,84498,0.0,7.0,0.0 +2015-08-11,88065,1.0,7.0,0.0 +2015-08-12,88003,2.0,7.0,0.0 +2015-08-13,86159,3.0,7.0,0.0 +2015-08-14,80407,4.0,7.0,0.0 +2015-08-15,52148,5.0,7.0,0.0 +2015-08-16,55563,6.0,7.0,0.0 +2015-08-17,85716,0.0,7.0,0.0 +2015-08-18,90098,1.0,7.0,0.0 +2015-08-19,90311,2.0,7.0,0.0 +2015-08-20,89112,3.0,7.0,0.0 +2015-08-21,83607,4.0,7.0,0.0 +2015-08-22,54685,5.0,7.0,0.0 +2015-08-23,59679,6.0,7.0,0.0 +2015-08-24,87916,0.0,7.0,0.0 +2015-08-25,89785,1.0,7.0,0.0 +2015-08-26,90842,2.0,7.0,0.0 +2015-08-27,89589,3.0,7.0,0.0 +2015-08-28,84012,4.0,7.0,0.0 +2015-08-29,52998,5.0,7.0,0.0 +2015-08-30,55886,6.0,7.0,0.0 +2015-08-31,86983,0.0,7.0,0.0 +2015-09-01,91295,1.0,8.0,0.0 +2015-09-02,91046,2.0,8.0,0.0 +2015-09-03,87017,3.0,8.0,0.0 +2015-09-04,80813,4.0,8.0,0.0 +2015-09-05,54463,5.0,8.0,0.0 +2015-09-06,59864,6.0,8.0,0.0 +2015-09-07,80617,0.0,8.0,1.0 +2015-09-08,93446,1.0,8.0,0.0 +2015-09-09,94640,2.0,8.0,0.0 +2015-09-10,94089,3.0,8.0,0.0 +2015-09-11,88287,4.0,8.0,0.0 +2015-09-12,57236,5.0,8.0,0.0 +2015-09-13,61339,6.0,8.0,0.0 +2015-09-14,94100,0.0,8.0,0.0 +2015-09-15,97210,1.0,8.0,0.0 +2015-09-16,97520,2.0,8.0,0.0 +2015-09-17,95561,3.0,8.0,0.0 +2015-09-18,90210,4.0,8.0,0.0 +2015-09-19,58521,5.0,8.0,0.0 +2015-09-20,62414,6.0,8.0,0.0 +2015-09-21,96432,0.0,8.0,0.0 +2015-09-22,99956,1.0,8.0,0.0 +2015-09-23,99207,2.0,8.0,0.0 +2015-09-24,97696,3.0,8.0,0.0 +2015-09-25,90619,4.0,8.0,0.0 +2015-09-26,59733,5.0,8.0,0.0 +2015-09-27,64337,6.0,8.0,0.0 +2015-09-28,95277,0.0,8.0,0.0 +2015-09-29,99909,1.0,8.0,0.0 +2015-09-30,98496,2.0,8.0,0.0 +2015-10-01,93111,3.0,9.0,0.0 +2015-10-02,86753,4.0,9.0,0.0 +2015-10-03,58268,5.0,9.0,0.0 +2015-10-04,62592,6.0,9.0,0.0 +2015-10-05,95603,0.0,9.0,0.0 +2015-10-06,99837,1.0,9.0,0.0 +2015-10-07,100860,2.0,9.0,0.0 +2015-10-08,102409,3.0,9.0,0.0 +2015-10-09,95631,4.0,9.0,0.0 +2015-10-10,66043,5.0,9.0,0.0 
+2015-10-11,66601,6.0,9.0,0.0 +2015-10-12,98066,0.0,9.0,1.0 +2015-10-13,106570,1.0,9.0,0.0 +2015-10-14,105415,2.0,9.0,0.0 +2015-10-15,104366,3.0,9.0,0.0 +2015-10-16,97556,4.0,9.0,0.0 +2015-10-17,64064,5.0,9.0,0.0 +2015-10-18,69221,6.0,9.0,0.0 +2015-10-19,105710,0.0,9.0,0.0 +2015-10-20,108226,1.0,9.0,0.0 +2015-10-21,107216,2.0,9.0,0.0 +2015-10-22,106180,3.0,9.0,0.0 +2015-10-23,99348,4.0,9.0,0.0 +2015-10-24,67090,5.0,9.0,0.0 +2015-10-25,73283,6.0,9.0,0.0 +2015-10-26,104805,0.0,9.0,0.0 +2015-10-27,111076,1.0,9.0,0.0 +2015-10-28,110991,2.0,9.0,0.0 +2015-10-29,109068,3.0,9.0,0.0 +2015-10-30,100655,4.0,9.0,0.0 +2015-10-31,63910,5.0,9.0,0.0 +2015-11-01,67454,6.0,10.0,0.0 +2015-11-02,106405,0.0,10.0,0.0 +2015-11-03,113189,1.0,10.0,0.0 +2015-11-04,112399,2.0,10.0,0.0 +2015-11-05,112257,3.0,10.0,0.0 +2015-11-06,105629,4.0,10.0,0.0 +2015-11-07,70570,5.0,10.0,0.0 +2015-11-08,75161,6.0,10.0,0.0 +2015-11-09,110784,0.0,10.0,0.0 +2015-11-10,112978,1.0,10.0,0.0 +2015-11-11,107347,2.0,10.0,1.0 +2015-11-12,111293,3.0,10.0,0.0 +2015-11-13,104493,4.0,10.0,0.0 +2015-11-14,68039,5.0,10.0,0.0 +2015-11-15,73945,6.0,10.0,0.0 +2015-11-16,111285,0.0,10.0,0.0 +2015-11-17,115457,1.0,10.0,0.0 +2015-11-18,115393,2.0,10.0,0.0 +2015-11-19,115387,3.0,10.0,0.0 +2015-11-20,107008,4.0,10.0,0.0 +2015-11-21,71677,5.0,10.0,0.0 +2015-11-22,77702,6.0,10.0,0.0 +2015-11-23,113226,0.0,10.0,0.0 +2015-11-24,114841,1.0,10.0,0.0 +2015-11-25,109386,2.0,10.0,0.0 +2015-11-26,96620,3.0,10.0,1.0 +2015-11-27,88369,4.0,10.0,0.0 +2015-11-28,66696,5.0,10.0,0.0 +2015-11-29,74591,6.0,10.0,0.0 +2015-11-30,114424,0.0,10.0,0.0 +2015-12-01,117806,1.0,11.0,0.0 +2015-12-02,118201,2.0,11.0,0.0 +2015-12-03,117780,3.0,11.0,0.0 +2015-12-04,108975,4.0,11.0,0.0 +2015-12-05,72662,5.0,11.0,0.0 +2015-12-06,76360,6.0,11.0,0.0 +2015-12-07,113903,0.0,11.0,0.0 +2015-12-08,115911,1.0,11.0,0.0 +2015-12-09,115324,2.0,11.0,0.0 +2015-12-10,113844,3.0,11.0,0.0 +2015-12-11,105420,4.0,11.0,0.0 +2015-12-12,70442,5.0,11.0,0.0 +2015-12-13,74537,6.0,11.0,0.0 +2015-12-14,110352,0.0,11.0,0.0 +2015-12-15,111033,1.0,11.0,0.0 +2015-12-16,107508,2.0,11.0,0.0 +2015-12-17,103108,3.0,11.0,0.0 +2015-12-18,93664,4.0,11.0,0.0 +2015-12-19,60441,5.0,11.0,0.0 +2015-12-20,62608,6.0,11.0,0.0 +2015-12-21,91916,0.0,11.0,0.0 +2015-12-22,91125,1.0,11.0,0.0 +2015-12-23,84466,2.0,11.0,0.0 +2015-12-24,66672,3.0,11.0,0.0 +2015-12-25,50812,4.0,11.0,1.0 +2015-12-26,49720,5.0,11.0,0.0 +2015-12-27,57018,6.0,11.0,0.0 +2015-12-28,76983,0.0,11.0,0.0 +2015-12-29,80256,1.0,11.0,0.0 +2015-12-30,78067,2.0,11.0,0.0 +2016-01-01,46109,4.0,0.0,1.0 +2016-01-02,56771,5.0,0.0,0.0 +2016-01-03,63608,6.0,0.0,0.0 +2016-01-04,96670,0.0,0.0,0.0 +2016-01-05,102054,1.0,0.0,0.0 +2016-01-06,101968,2.0,0.0,0.0 +2016-01-07,103695,3.0,0.0,0.0 +2016-01-08,99226,4.0,0.0,0.0 +2016-01-09,68617,5.0,0.0,0.0 +2016-01-10,73313,6.0,0.0,0.0 +2016-01-11,107882,0.0,0.0,0.0 +2016-01-12,111240,1.0,0.0,0.0 +2016-01-13,111346,2.0,0.0,0.0 +2016-01-14,110350,3.0,0.0,0.0 +2016-01-15,103836,4.0,0.0,0.0 +2016-01-16,69762,5.0,0.0,0.0 +2016-01-17,73548,6.0,0.0,0.0 +2016-01-18,106252,0.0,0.0,1.0 +2016-01-19,114235,1.0,0.0,0.0 +2016-01-20,114520,2.0,0.0,0.0 +2016-01-21,113333,3.0,0.0,0.0 +2016-01-22,106865,4.0,0.0,0.0 +2016-01-23,74103,5.0,0.0,0.0 +2016-01-24,78655,6.0,0.0,0.0 +2016-01-25,114045,0.0,0.0,0.0 +2016-01-26,116293,1.0,0.0,0.0 +2016-01-27,117360,2.0,0.0,0.0 +2016-01-28,112890,3.0,0.0,0.0 +2016-01-29,110408,4.0,0.0,0.0 +2016-01-30,77881,5.0,0.0,0.0 +2016-01-31,81804,6.0,0.0,0.0 +2016-02-01,115705,0.0,1.0,0.0 +2016-02-02,117639,1.0,1.0,0.0 
+2016-02-03,118168,2.0,1.0,0.0 +2016-02-04,115485,3.0,1.0,0.0 +2016-02-05,106779,4.0,1.0,0.0 +2016-02-06,72602,5.0,1.0,0.0 +2016-02-07,73299,6.0,1.0,0.0 +2016-02-08,103308,0.0,1.0,0.0 +2016-02-09,110246,1.0,1.0,0.0 +2016-02-10,111835,2.0,1.0,0.0 +2016-02-11,112118,3.0,1.0,0.0 +2016-02-12,105677,4.0,1.0,0.0 +2016-02-13,74145,5.0,1.0,0.0 +2016-02-14,76379,6.0,1.0,0.0 +2016-02-15,111654,0.0,1.0,1.0 +2016-02-16,121528,1.0,1.0,0.0 +2016-02-17,122884,2.0,1.0,0.0 +2016-02-18,123112,3.0,1.0,0.0 +2016-02-19,117492,4.0,1.0,0.0 +2016-02-20,81509,5.0,1.0,0.0 +2016-02-21,86026,6.0,1.0,0.0 +2016-02-22,124960,0.0,1.0,0.0 +2016-02-23,128025,1.0,1.0,0.0 +2016-02-24,128860,2.0,1.0,0.0 +2016-02-25,126574,3.0,1.0,0.0 +2016-02-26,119158,4.0,1.0,0.0 +2016-02-27,81761,5.0,1.0,0.0 +2016-02-28,86421,6.0,1.0,0.0 +2016-02-29,125898,0.0,1.0,0.0 +2016-03-01,128020,1.0,2.0,0.0 +2016-03-02,130518,2.0,2.0,0.0 +2016-03-03,129859,3.0,2.0,0.0 +2016-03-04,121636,4.0,2.0,0.0 +2016-03-05,83814,5.0,2.0,0.0 +2016-03-06,86859,6.0,2.0,0.0 +2016-03-07,127229,0.0,2.0,0.0 +2016-03-08,129281,1.0,2.0,0.0 +2016-03-09,131505,2.0,2.0,0.0 +2016-03-10,126847,3.0,2.0,0.0 +2016-03-11,121670,4.0,2.0,0.0 +2016-03-12,82209,5.0,2.0,0.0 +2016-03-13,87358,6.0,2.0,0.0 +2016-03-14,129607,0.0,2.0,0.0 +2016-03-15,132397,1.0,2.0,0.0 +2016-03-16,132666,2.0,2.0,0.0 +2016-03-17,129579,3.0,2.0,0.0 +2016-03-18,120239,4.0,2.0,0.0 +2016-03-19,81427,5.0,2.0,0.0 +2016-03-20,86878,6.0,2.0,0.0 +2016-03-21,128245,0.0,2.0,0.0 +2016-03-22,130351,1.0,2.0,0.0 +2016-03-23,128611,2.0,2.0,0.0 +2016-03-24,122141,3.0,2.0,0.0 +2016-03-25,105815,4.0,2.0,0.0 +2016-03-26,78197,5.0,2.0,0.0 +2016-03-27,78675,6.0,2.0,0.0 +2016-03-28,116328,0.0,2.0,0.0 +2016-03-29,131001,1.0,2.0,0.0 +2016-03-30,133101,2.0,2.0,0.0 +2016-03-31,130283,3.0,2.0,0.0 +2016-04-01,119257,4.0,3.0,0.0 +2016-04-02,81281,5.0,3.0,0.0 +2016-04-03,87360,6.0,3.0,0.0 +2016-04-04,126389,0.0,3.0,0.0 +2016-04-05,133803,1.0,3.0,0.0 +2016-04-06,135934,2.0,3.0,0.0 +2016-04-07,134653,3.0,3.0,0.0 +2016-04-08,125221,4.0,3.0,0.0 +2016-04-09,85645,5.0,3.0,0.0 +2016-04-10,91857,6.0,3.0,0.0 +2016-04-11,136700,0.0,3.0,0.0 +2016-04-12,138801,1.0,3.0,0.0 +2016-04-13,137409,2.0,3.0,0.0 +2016-04-14,134651,3.0,3.0,0.0 +2016-04-15,125713,4.0,3.0,0.0 +2016-04-16,84789,5.0,3.0,0.0 +2016-04-17,90514,6.0,3.0,0.0 +2016-04-18,135770,0.0,3.0,0.0 +2016-04-19,140338,1.0,3.0,0.0 +2016-04-20,138994,2.0,3.0,0.0 +2016-04-21,134338,3.0,3.0,0.0 +2016-04-22,125713,4.0,3.0,0.0 +2016-04-23,85348,5.0,3.0,0.0 +2016-04-24,91963,6.0,3.0,0.0 +2016-04-25,135422,0.0,3.0,0.0 +2016-04-26,141059,1.0,3.0,0.0 +2016-04-27,138390,2.0,3.0,0.0 +2016-04-28,134493,3.0,3.0,0.0 +2016-04-29,123089,4.0,3.0,0.0 +2016-04-30,78081,5.0,3.0,0.0 +2016-05-01,80160,6.0,4.0,0.0 +2016-05-02,118508,0.0,4.0,0.0 +2016-05-03,131204,1.0,4.0,0.0 +2016-05-04,132146,2.0,4.0,0.0 +2016-05-05,123214,3.0,4.0,0.0 +2016-05-06,117566,4.0,4.0,0.0 +2016-05-07,78005,5.0,4.0,0.0 +2016-05-08,81871,6.0,4.0,0.0 +2016-05-09,127489,0.0,4.0,0.0 +2016-05-10,136121,1.0,4.0,0.0 +2016-05-11,135402,2.0,4.0,0.0 +2016-05-12,132926,3.0,4.0,0.0 +2016-05-13,123555,4.0,4.0,0.0 +2016-05-14,80533,5.0,4.0,0.0 +2016-05-15,84697,6.0,4.0,0.0 +2016-05-16,125306,0.0,4.0,0.0 +2016-05-17,135812,1.0,4.0,0.0 +2016-05-18,135197,2.0,4.0,0.0 +2016-05-19,131924,3.0,4.0,0.0 +2016-05-20,122504,4.0,4.0,0.0 +2016-05-21,79192,5.0,4.0,0.0 +2016-05-22,84851,6.0,4.0,0.0 +2016-05-23,127438,0.0,4.0,0.0 +2016-05-24,133972,1.0,4.0,0.0 +2016-05-25,131697,2.0,4.0,0.0 +2016-05-26,126174,3.0,4.0,0.0 +2016-05-27,117773,4.0,4.0,0.0 
+2016-05-28,74793,5.0,4.0,0.0 +2016-05-29,79262,6.0,4.0,0.0 +2016-05-30,113390,0.0,4.0,1.0 +2016-05-31,129636,1.0,4.0,0.0 +2016-06-01,129838,2.0,5.0,0.0 +2016-06-02,127650,3.0,5.0,0.0 +2016-06-03,119107,4.0,5.0,0.0 +2016-06-04,76582,5.0,5.0,0.0 +2016-06-05,80829,6.0,5.0,0.0 +2016-06-06,123175,0.0,5.0,0.0 +2016-06-07,128655,1.0,5.0,0.0 +2016-06-08,126728,2.0,5.0,0.0 +2016-06-09,116963,3.0,5.0,0.0 +2016-06-10,108602,4.0,5.0,0.0 +2016-06-11,73541,5.0,5.0,0.0 +2016-06-12,82245,6.0,5.0,0.0 +2016-06-13,119977,0.0,5.0,0.0 +2016-06-14,125678,1.0,5.0,0.0 +2016-06-15,125977,2.0,5.0,0.0 +2016-06-16,122900,3.0,5.0,0.0 +2016-06-17,113905,4.0,5.0,0.0 +2016-06-18,71738,5.0,5.0,0.0 +2016-06-19,74376,6.0,5.0,0.0 +2016-06-20,93499,0.0,5.0,0.0 +2016-06-21,124257,1.0,5.0,0.0 +2016-06-22,122793,2.0,5.0,0.0 +2016-06-23,120902,3.0,5.0,0.0 +2016-06-24,108118,4.0,5.0,0.0 +2016-06-25,69170,5.0,5.0,0.0 +2016-06-26,72480,6.0,5.0,0.0 +2016-06-27,115501,0.0,5.0,0.0 +2016-06-28,121523,1.0,5.0,0.0 +2016-06-29,121456,2.0,5.0,0.0 +2016-06-30,119093,3.0,5.0,0.0 +2016-07-01,107813,4.0,6.0,0.0 +2016-07-02,66427,5.0,6.0,0.0 +2016-07-03,68168,6.0,6.0,0.0 +2016-07-04,101448,0.0,6.0,1.0 +2016-07-05,114130,1.0,6.0,0.0 +2016-07-06,118196,2.0,6.0,0.0 +2016-07-07,116360,3.0,6.0,0.0 +2016-07-08,109588,4.0,6.0,0.0 +2016-07-09,68949,5.0,6.0,0.0 +2016-07-10,71387,6.0,6.0,0.0 +2016-07-11,116802,0.0,6.0,0.0 +2016-07-12,119864,1.0,6.0,0.0 +2016-07-13,120468,2.0,6.0,0.0 +2016-07-14,117523,3.0,6.0,0.0 +2016-07-15,108681,4.0,6.0,0.0 +2016-07-16,67189,5.0,6.0,0.0 +2016-07-17,71085,6.0,6.0,0.0 +2016-07-18,116616,0.0,6.0,0.0 +2016-07-19,121000,1.0,6.0,0.0 +2016-07-20,119165,2.0,6.0,0.0 +2016-07-21,117941,3.0,6.0,0.0 +2016-07-22,110570,4.0,6.0,0.0 +2016-07-23,68398,5.0,6.0,0.0 +2016-07-24,71980,6.0,6.0,0.0 +2016-07-25,116361,0.0,6.0,0.0 +2016-07-26,120986,1.0,6.0,0.0 +2016-07-27,120932,2.0,6.0,0.0 +2016-07-28,118101,3.0,6.0,0.0 +2016-07-29,110240,4.0,6.0,0.0 +2016-07-30,69022,5.0,6.0,0.0 +2016-07-31,71959,6.0,6.0,0.0 +2016-08-01,114920,0.0,7.0,0.0 +2016-08-02,120783,1.0,7.0,0.0 +2016-08-03,119825,2.0,7.0,0.0 +2016-08-04,117712,3.0,7.0,0.0 +2016-08-05,109966,4.0,7.0,0.0 +2016-08-06,67755,5.0,7.0,0.0 +2016-08-07,70693,6.0,7.0,0.0 +2016-08-08,115440,0.0,7.0,0.0 +2016-08-09,118682,1.0,7.0,0.0 +2016-08-10,119555,2.0,7.0,0.0 +2016-08-11,117924,3.0,7.0,0.0 +2016-08-12,110083,4.0,7.0,0.0 +2016-08-13,68028,5.0,7.0,0.0 +2016-08-14,69705,6.0,7.0,0.0 +2016-08-15,109543,0.0,7.0,0.0 +2016-08-16,120896,1.0,7.0,0.0 +2016-08-17,121107,2.0,7.0,0.0 +2016-08-18,119516,3.0,7.0,0.0 +2016-08-19,112999,4.0,7.0,0.0 +2016-08-20,71603,5.0,7.0,0.0 +2016-08-21,74724,6.0,7.0,0.0 +2016-08-22,120374,0.0,7.0,0.0 +2016-08-23,125253,1.0,7.0,0.0 +2016-08-24,124546,2.0,7.0,0.0 +2016-08-25,123134,3.0,7.0,0.0 +2016-08-26,115443,4.0,7.0,0.0 +2016-08-27,73510,5.0,7.0,0.0 +2016-08-28,77456,6.0,7.0,0.0 +2016-08-29,122370,0.0,7.0,0.0 +2016-08-30,128081,1.0,7.0,0.0 +2016-08-31,127520,2.0,7.0,0.0 +2016-09-01,124829,3.0,8.0,0.0 +2016-09-02,115659,4.0,8.0,0.0 +2016-09-03,71772,5.0,8.0,0.0 +2016-09-04,76164,6.0,8.0,0.0 +2016-09-05,109751,0.0,8.0,1.0 +2016-09-06,127745,1.0,8.0,0.0 +2016-09-07,128145,2.0,8.0,0.0 +2016-09-08,127996,3.0,8.0,0.0 +2016-09-09,120314,4.0,8.0,0.0 +2016-09-10,77719,5.0,8.0,0.0 +2016-09-11,81649,6.0,8.0,0.0 +2016-09-12,127325,0.0,8.0,0.0 +2016-09-13,131451,1.0,8.0,0.0 +2016-09-14,128826,2.0,8.0,0.0 +2016-09-15,120041,3.0,8.0,0.0 +2016-09-16,113989,4.0,8.0,0.0 +2016-09-17,80862,5.0,8.0,0.0 +2016-09-18,91832,6.0,8.0,0.0 +2016-09-19,131871,0.0,8.0,0.0 
+2016-09-20,138590,1.0,8.0,0.0 +2016-09-21,138146,2.0,8.0,0.0 +2016-09-22,136479,3.0,8.0,0.0 +2016-09-23,127803,4.0,8.0,0.0 +2016-09-24,81861,5.0,8.0,0.0 +2016-09-25,86861,6.0,8.0,0.0 +2016-09-26,137176,0.0,8.0,0.0 +2016-09-27,139433,1.0,8.0,0.0 +2016-09-28,140373,2.0,8.0,0.0 +2016-09-29,138011,3.0,8.0,0.0 +2016-09-30,127044,4.0,8.0,0.0 +2016-10-01,78726,5.0,9.0,0.0 +2016-10-02,82758,6.0,9.0,0.0 +2016-10-03,125866,0.0,9.0,0.0 +2016-10-04,132182,1.0,9.0,0.0 +2016-10-05,131995,2.0,9.0,0.0 +2016-10-06,132759,3.0,9.0,0.0 +2016-10-07,124588,4.0,9.0,0.0 +2016-10-08,90358,5.0,9.0,0.0 +2016-10-09,96542,6.0,9.0,0.0 +2016-10-10,135850,0.0,9.0,1.0 +2016-10-11,144073,1.0,9.0,0.0 +2016-10-12,143248,2.0,9.0,0.0 +2016-10-13,144176,3.0,9.0,0.0 +2016-10-14,134423,4.0,9.0,0.0 +2016-10-15,88312,5.0,9.0,0.0 +2016-10-16,94694,6.0,9.0,0.0 +2016-10-17,140981,0.0,9.0,0.0 +2016-10-18,150758,1.0,9.0,0.0 +2016-10-19,148760,2.0,9.0,0.0 +2016-10-20,145021,3.0,9.0,0.0 +2016-10-21,123991,4.0,9.0,0.0 +2016-10-22,90117,5.0,9.0,0.0 +2016-10-23,95498,6.0,9.0,0.0 +2016-10-24,146136,0.0,9.0,0.0 +2016-10-25,150283,1.0,9.0,0.0 +2016-10-26,149086,2.0,9.0,0.0 +2016-10-27,146600,3.0,9.0,0.0 +2016-10-28,134101,4.0,9.0,0.0 +2016-10-29,85873,5.0,9.0,0.0 +2016-10-30,91905,6.0,9.0,0.0 +2016-10-31,141022,0.0,9.0,0.0 +2016-11-01,142467,1.0,10.0,0.0 +2016-11-02,148404,2.0,10.0,0.0 +2016-11-03,149540,3.0,10.0,0.0 +2016-11-04,138040,4.0,10.0,0.0 +2016-11-05,93128,5.0,10.0,0.0 +2016-11-06,99820,6.0,10.0,0.0 +2016-11-07,150788,0.0,10.0,0.0 +2016-11-08,150053,1.0,10.0,0.0 +2016-11-09,140674,2.0,10.0,0.0 +2016-11-10,146301,3.0,10.0,0.0 +2016-11-11,132609,4.0,10.0,1.0 +2016-11-12,93843,5.0,10.0,0.0 +2016-11-13,100633,6.0,10.0,0.0 +2016-11-14,150935,0.0,10.0,0.0 +2016-11-15,156066,1.0,10.0,0.0 +2016-11-16,156273,2.0,10.0,0.0 +2016-11-17,154473,3.0,10.0,0.0 +2016-11-18,144040,4.0,10.0,0.0 +2016-11-19,95853,5.0,10.0,0.0 +2016-11-20,103220,6.0,10.0,0.0 +2016-11-21,154232,0.0,10.0,0.0 +2016-11-22,156131,1.0,10.0,0.0 +2016-11-23,149146,2.0,10.0,0.0 +2016-11-24,133080,3.0,10.0,1.0 +2016-11-25,120535,4.0,10.0,0.0 +2016-11-26,90022,5.0,10.0,0.0 +2016-11-27,100373,6.0,10.0,0.0 +2016-11-28,154971,0.0,10.0,0.0 +2016-11-29,161691,1.0,10.0,0.0 +2016-11-30,159450,2.0,10.0,0.0 +2016-12-01,157196,3.0,11.0,0.0 +2016-12-02,147743,4.0,11.0,0.0 +2016-12-03,98102,5.0,11.0,0.0 +2016-12-04,104400,6.0,11.0,0.0 +2016-12-05,156268,0.0,11.0,0.0 +2016-12-06,158169,1.0,11.0,0.0 +2016-12-07,158758,2.0,11.0,0.0 +2016-12-08,152258,3.0,11.0,0.0 +2016-12-09,142222,4.0,11.0,0.0 +2016-12-10,95665,5.0,11.0,0.0 +2016-12-11,100707,6.0,11.0,0.0 +2016-12-12,148783,0.0,11.0,0.0 +2016-12-13,152591,1.0,11.0,0.0 +2016-12-14,149908,2.0,11.0,0.0 +2016-12-15,145085,3.0,11.0,0.0 +2016-12-16,131580,4.0,11.0,0.0 +2016-12-17,84443,5.0,11.0,0.0 +2016-12-18,88845,6.0,11.0,0.0 +2016-12-19,134794,0.0,11.0,0.0 +2016-12-20,136427,1.0,11.0,0.0 +2016-12-21,131770,2.0,11.0,0.0 +2016-12-22,124751,3.0,11.0,0.0 +2016-12-23,105776,4.0,11.0,0.0 +2016-12-24,66740,5.0,11.0,0.0 +2016-12-25,60535,6.0,11.0,0.0 +2016-12-26,86775,0.0,11.0,1.0 +2016-12-27,102574,1.0,11.0,0.0 +2016-12-28,106393,2.0,11.0,0.0 +2016-12-29,105158,3.0,11.0,0.0 +2016-12-30,98098,4.0,11.0,0.0 +2016-12-31,64696,5.0,11.0,0.0 +2017-01-01,59005,6.0,0.0,0.0 +2017-01-02,95818,0.0,0.0,1.0 +2017-01-03,127728,1.0,0.0,0.0 +2017-01-04,133210,2.0,0.0,0.0 +2017-01-05,128376,3.0,0.0,0.0 +2017-01-06,125230,4.0,0.0,0.0 +2017-01-07,71521,5.0,0.0,0.0 +2017-01-08,94736,6.0,0.0,0.0 +2017-01-09,140861,0.0,0.0,0.0 +2017-01-10,145521,1.0,0.0,0.0 
+2017-01-11,145604,2.0,0.0,0.0 +2017-01-12,144985,3.0,0.0,0.0 +2017-01-13,135657,4.0,0.0,0.0 +2017-01-14,91791,5.0,0.0,0.0 +2017-01-15,97570,6.0,0.0,0.0 +2017-01-16,140046,0.0,0.0,1.0 +2017-01-17,151455,1.0,0.0,0.0 +2017-01-18,151122,2.0,0.0,0.0 +2017-01-19,149733,3.0,0.0,0.0 +2017-01-20,140506,4.0,0.0,0.0 +2017-01-21,97774,5.0,0.0,0.0 +2017-01-22,106965,6.0,0.0,0.0 +2017-01-23,147843,0.0,0.0,0.0 +2017-01-24,149039,1.0,0.0,0.0 +2017-01-25,144802,2.0,0.0,0.0 +2017-01-26,138288,3.0,0.0,0.0 +2017-01-27,127738,4.0,0.0,0.0 +2017-01-28,88164,5.0,0.0,0.0 +2017-01-29,92052,6.0,0.0,0.0 +2017-01-30,137919,0.0,0.0,0.0 +2017-01-31,143069,1.0,0.0,0.0 +2017-02-01,143529,2.0,1.0,0.0 +2017-02-02,145011,3.0,1.0,0.0 +2017-02-03,139875,4.0,1.0,0.0 +2017-02-04,101218,5.0,1.0,0.0 +2017-02-05,104585,6.0,1.0,0.0 +2017-02-06,152808,0.0,1.0,0.0 +2017-02-07,161273,1.0,1.0,0.0 +2017-02-08,162144,2.0,1.0,0.0 +2017-02-09,159440,3.0,1.0,0.0 +2017-02-10,149755,4.0,1.0,0.0 +2017-02-11,100746,5.0,1.0,0.0 +2017-02-12,106434,6.0,1.0,0.0 +2017-02-13,160474,0.0,1.0,0.0 +2017-02-14,159982,1.0,1.0,0.0 +2017-02-15,161897,2.0,1.0,0.0 +2017-02-16,164364,3.0,1.0,0.0 +2017-02-17,153956,4.0,1.0,0.0 +2017-02-18,104661,5.0,1.0,0.0 +2017-02-19,109589,6.0,1.0,0.0 +2017-02-20,158043,0.0,1.0,1.0 +2017-02-21,170265,1.0,1.0,0.0 +2017-02-22,170559,2.0,1.0,0.0 +2017-02-23,163711,3.0,1.0,0.0 +2017-02-24,154537,4.0,1.0,0.0 +2017-02-25,106039,5.0,1.0,0.0 +2017-02-26,111816,6.0,1.0,0.0 +2017-02-27,163119,0.0,1.0,0.0 +2017-02-28,165643,1.0,1.0,0.0 +2017-03-01,167480,2.0,2.0,0.0 +2017-03-02,168730,3.0,2.0,0.0 +2017-03-03,158171,4.0,2.0,0.0 +2017-03-04,106739,5.0,2.0,0.0 +2017-03-05,114464,6.0,2.0,0.0 +2017-03-06,169538,0.0,2.0,0.0 +2017-03-07,173736,1.0,2.0,0.0 +2017-03-08,168734,2.0,2.0,0.0 +2017-03-09,171452,3.0,2.0,0.0 +2017-03-10,159470,4.0,2.0,0.0 +2017-03-11,107371,5.0,2.0,0.0 +2017-03-12,114907,6.0,2.0,0.0 +2017-03-13,170043,0.0,2.0,0.0 +2017-03-14,174748,1.0,2.0,0.0 +2017-03-15,171274,2.0,2.0,0.0 +2017-03-16,172067,3.0,2.0,0.0 +2017-03-17,159312,4.0,2.0,0.0 +2017-03-18,107141,5.0,2.0,0.0 +2017-03-19,116705,6.0,2.0,0.0 +2017-03-20,173053,0.0,2.0,0.0 +2017-03-21,179270,1.0,2.0,0.0 +2017-03-22,178776,2.0,2.0,0.0 +2017-03-23,175353,3.0,2.0,0.0 +2017-03-24,155802,4.0,2.0,0.0 +2017-03-25,107862,5.0,2.0,0.0 +2017-03-26,114867,6.0,2.0,0.0 +2017-03-27,174989,0.0,2.0,0.0 +2017-03-28,177936,1.0,2.0,0.0 +2017-03-29,177053,2.0,2.0,0.0 +2017-03-30,174951,3.0,2.0,0.0 +2017-03-31,161692,4.0,2.0,0.0 +2017-04-01,111982,5.0,3.0,0.0 +2017-04-02,109185,6.0,3.0,0.0 +2017-04-03,159117,0.0,3.0,0.0 +2017-04-04,162855,1.0,3.0,0.0 +2017-04-05,176611,2.0,3.0,0.0 +2017-04-06,174519,3.0,3.0,0.0 +2017-04-07,161085,4.0,3.0,0.0 +2017-04-08,106383,5.0,3.0,0.0 +2017-04-09,112315,6.0,3.0,0.0 +2017-04-10,169584,0.0,3.0,0.0 +2017-04-11,171826,1.0,3.0,0.0 +2017-04-12,168847,2.0,3.0,0.0 +2017-04-13,160786,3.0,3.0,0.0 +2017-04-14,137040,4.0,3.0,0.0 +2017-04-15,100190,5.0,3.0,0.0 +2017-04-16,100898,6.0,3.0,0.0 +2017-04-17,152066,0.0,3.0,0.0 +2017-04-18,174171,1.0,3.0,0.0 +2017-04-19,175620,2.0,3.0,0.0 +2017-04-20,173856,3.0,3.0,0.0 +2017-04-21,160574,4.0,3.0,0.0 +2017-04-22,110084,5.0,3.0,0.0 +2017-04-23,117159,6.0,3.0,0.0 +2017-04-24,174875,0.0,3.0,0.0 +2017-04-25,179750,1.0,3.0,0.0 +2017-04-26,179115,2.0,3.0,0.0 +2017-04-27,172230,3.0,3.0,0.0 +2017-04-28,157630,4.0,3.0,0.0 +2017-04-29,99513,5.0,3.0,0.0 +2017-04-30,100849,6.0,3.0,0.0 +2017-05-01,137413,0.0,4.0,0.0 +2017-05-02,169970,1.0,4.0,0.0 +2017-05-03,173007,2.0,4.0,0.0 +2017-05-04,171814,3.0,4.0,0.0 
+2017-05-05,158556,4.0,4.0,0.0
+2017-05-06,104891,5.0,4.0,0.0
+2017-05-07,111184,6.0,4.0,0.0
+2017-05-08,167207,0.0,4.0,0.0
+2017-05-09,174139,1.0,4.0,0.0
+2017-05-10,173376,2.0,4.0,0.0
+2017-05-11,170399,3.0,4.0,0.0
+2017-05-12,159003,4.0,4.0,0.0
+2017-05-13,104441,5.0,4.0,0.0
+2017-05-14,108658,6.0,4.0,0.0
+2017-05-15,169555,0.0,4.0,0.0
+2017-05-16,174468,1.0,4.0,0.0
+2017-05-17,172630,2.0,4.0,0.0
+2017-05-18,168885,3.0,4.0,0.0
+2017-05-19,158328,4.0,4.0,0.0
+2017-05-20,101883,5.0,4.0,0.0
+2017-05-21,108279,6.0,4.0,0.0
+2017-05-22,167274,0.0,4.0,0.0
+2017-05-23,173357,1.0,4.0,0.0
+2017-05-24,170350,2.0,4.0,0.0
+2017-05-25,157737,3.0,4.0,0.0
+2017-05-26,150028,4.0,4.0,0.0
+2017-05-27,103856,5.0,4.0,0.0
+2017-05-28,99612,6.0,4.0,0.0
+2017-05-29,138303,0.0,4.0,1.0
+2017-05-30,159403,1.0,4.0,0.0
+2017-05-31,167107,2.0,4.0,0.0
+2017-06-01,165586,3.0,5.0,0.0
+2017-06-02,154671,4.0,5.0,0.0
+2017-06-03,99082,5.0,5.0,0.0
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/helper.py b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/helper.py
new file mode 100644
index 000000000..5b78e0ba4
--- /dev/null
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/helper.py
@@ -0,0 +1,183 @@
+import pandas as pd
+from azureml.core import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.train.estimator import Estimator
+from azureml.core.run import Run
+from azureml.automl.core.shared import constants
+
+
+def split_fraction_by_grain(df, fraction, time_column_name, grain_column_names=None):
+    """Group df by grain and split on the last n rows for each group."""
+    if not grain_column_names:
+        df["tmp_grain_column"] = "grain"
+        grain_column_names = ["tmp_grain_column"]
+
+    df_grouped = df.sort_values(time_column_name).groupby(
+        grain_column_names, group_keys=False
+    )
+
+    df_head = df_grouped.apply(
+        lambda dfg: dfg.iloc[: -int(len(dfg) * fraction)] if fraction > 0 else dfg
+    )
+
+    df_tail = df_grouped.apply(
+        lambda dfg: dfg.iloc[-int(len(dfg) * fraction) :] if fraction > 0 else dfg[:0]
+    )
+
+    if "tmp_grain_column" in grain_column_names:
+        for df2 in (df, df_head, df_tail):
+            df2.drop("tmp_grain_column", axis=1, inplace=True)
+
+        grain_column_names.remove("tmp_grain_column")
+
+    return df_head, df_tail
+
+
+def split_full_for_forecasting(
+    df, time_column_name, grain_column_names=None, test_split=0.2
+):
+    index_name = df.index.name
+
+    # Assumes that there isn't already a column called tmpindex
+
+    df["tmpindex"] = df.index
+
+    train_df, test_df = split_fraction_by_grain(
+        df, test_split, time_column_name, grain_column_names
+    )
+
+    train_df = train_df.set_index("tmpindex")
+    train_df.index.name = index_name
+
+    test_df = test_df.set_index("tmpindex")
+    test_df.index.name = index_name
+
+    df.drop("tmpindex", axis=1, inplace=True)
+
+    return train_df, test_df
+
+
+def get_result_df(remote_run):
+    children = list(remote_run.get_children(recursive=True))
+    summary_df = pd.DataFrame(
+        index=["run_id", "run_algorithm", "primary_metric", "Score"]
+    )
+    goal_minimize = False
+    for run in children:
+        if (
+            run.get_status().lower() == constants.RunState.COMPLETE_RUN
+            and "run_algorithm" in run.properties
+            and "score" in run.properties
+        ):
+            # Only count the completed child runs.
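+            # Each completed run contributes one column here; the frame is
+            # transposed below so runs become rows, sorted by score and
+            # de-duplicated per algorithm.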
+ summary_df[run.id] = [ + run.id, + run.properties["run_algorithm"], + run.properties["primary_metric"], + float(run.properties["score"]), + ] + if "goal" in run.properties: + goal_minimize = run.properties["goal"].split("_")[-1] == "min" + + summary_df = summary_df.T.sort_values( + "Score", ascending=goal_minimize + ).drop_duplicates(["run_algorithm"]) + summary_df = summary_df.set_index("run_algorithm") + return summary_df + + +def run_inference( + test_experiment, + compute_target, + script_folder, + train_run, + test_dataset, + lookback_dataset, + max_horizon, + target_column_name, + time_column_name, + freq, +): + model_base_name = "model.pkl" + if "model_data_location" in train_run.properties: + model_location = train_run.properties["model_data_location"] + _, model_base_name = model_location.rsplit("/", 1) + train_run.download_file( + "outputs/{}".format(model_base_name), "inference/{}".format(model_base_name) + ) + train_run.download_file("outputs/conda_env_v_1_0_0.yml", "inference/condafile.yml") + + inference_env = Environment("myenv") + inference_env.docker.enabled = True + inference_env.python.conda_dependencies = CondaDependencies( + conda_dependencies_file_path="inference/condafile.yml" + ) + + est = Estimator( + source_directory=script_folder, + entry_script="infer.py", + script_params={ + "--max_horizon": max_horizon, + "--target_column_name": target_column_name, + "--time_column_name": time_column_name, + "--frequency": freq, + "--model_path": model_base_name, + }, + inputs=[ + test_dataset.as_named_input("test_data"), + lookback_dataset.as_named_input("lookback_data"), + ], + compute_target=compute_target, + environment_definition=inference_env, + ) + + run = test_experiment.submit( + est, + tags={ + "training_run_id": train_run.id, + "run_algorithm": train_run.properties["run_algorithm"], + "valid_score": train_run.properties["score"], + "primary_metric": train_run.properties["primary_metric"], + }, + ) + + run.log("run_algorithm", run.tags["run_algorithm"]) + return run + + +def run_multiple_inferences( + summary_df, + train_experiment, + test_experiment, + compute_target, + script_folder, + test_dataset, + lookback_dataset, + max_horizon, + target_column_name, + time_column_name, + freq, +): + for run_name, run_summary in summary_df.iterrows(): + print(run_name) + print(run_summary) + run_id = run_summary.run_id + train_run = Run(train_experiment, run_id) + + test_run = run_inference( + test_experiment, + compute_target, + script_folder, + train_run, + test_dataset, + lookback_dataset, + max_horizon, + target_column_name, + time_column_name, + freq, + ) + + print(test_run) + summary_df.loc[summary_df.run_id == run_id, "test_run_id"] = test_run.id + + return summary_df diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/infer.py b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/infer.py new file mode 100644 index 000000000..7b2f1eee4 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-github-dau/infer.py @@ -0,0 +1,386 @@ +import argparse +import os + +import numpy as np +import pandas as pd + +from pandas.tseries.frequencies import to_offset +from sklearn.externals import joblib +from sklearn.metrics import mean_absolute_error, mean_squared_error + +from azureml.automl.runtime.shared.score import scoring, constants +from azureml.core import Run + +try: + import torch + + _torch_present = True +except ImportError: + _torch_present = False + + +def align_outputs( + y_predicted, + X_trans, + 
X_test, + y_test, + predicted_column_name="predicted", + horizon_colname="horizon_origin", +): + """ + Demonstrates how to get the output aligned to the inputs + using pandas indexes. Helps understand what happened if + the output's shape differs from the input shape, or if + the data got re-sorted by time and grain during forecasting. + + Typical causes of misalignment are: + * we predicted some periods that were missing in actuals -> drop from eval + * model was asked to predict past max_horizon -> increase max horizon + * data at start of X_test was needed for lags -> provide previous periods + """ + if horizon_colname in X_trans: + df_fcst = pd.DataFrame( + { + predicted_column_name: y_predicted, + horizon_colname: X_trans[horizon_colname], + } + ) + else: + df_fcst = pd.DataFrame({predicted_column_name: y_predicted}) + + # y and X outputs are aligned by forecast() function contract + df_fcst.index = X_trans.index + + # align original X_test to y_test + X_test_full = X_test.copy() + X_test_full[target_column_name] = y_test + + # X_test_full's index does not include origin, so reset for merge + df_fcst.reset_index(inplace=True) + X_test_full = X_test_full.reset_index().drop(columns="index") + together = df_fcst.merge(X_test_full, how="right") + + # drop rows where prediction or actuals are nan + # happens because of missing actuals + # or at edges of time due to lags/rolling windows + clean = together[ + together[[target_column_name, predicted_column_name]].notnull().all(axis=1) + ] + return clean + + +def do_rolling_forecast_with_lookback( + fitted_model, X_test, y_test, max_horizon, X_lookback, y_lookback, freq="D" +): + """ + Produce forecasts on a rolling origin over the given test set. + + Each iteration makes a forecast for the next 'max_horizon' periods + with respect to the current origin, then advances the origin by the + horizon time duration. The prediction context for each forecast is set so + that the forecaster uses the actual target values prior to the current + origin time for constructing lag features. + + This function returns a concatenated DataFrame of rolling forecasts. 
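+    Relies on the module-level globals time_column_name and target_column_name,
+    which are parsed from the command-line arguments further down this script.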
+ """ + print("Using lookback of size: ", y_lookback.size) + df_list = [] + origin_time = X_test[time_column_name].min() + X = X_lookback.append(X_test) + y = np.concatenate((y_lookback, y_test), axis=0) + while origin_time <= X_test[time_column_name].max(): + # Set the horizon time - end date of the forecast + horizon_time = origin_time + max_horizon * to_offset(freq) + + # Extract test data from an expanding window up-to the horizon + expand_wind = X[time_column_name] < horizon_time + X_test_expand = X[expand_wind] + y_query_expand = np.zeros(len(X_test_expand)).astype(np.float) + y_query_expand.fill(np.NaN) + + if origin_time != X[time_column_name].min(): + # Set the context by including actuals up-to the origin time + test_context_expand_wind = X[time_column_name] < origin_time + context_expand_wind = X_test_expand[time_column_name] < origin_time + y_query_expand[context_expand_wind] = y[test_context_expand_wind] + + # Print some debug info + print( + "Horizon_time:", + horizon_time, + " origin_time: ", + origin_time, + " max_horizon: ", + max_horizon, + " freq: ", + freq, + ) + print("expand_wind: ", expand_wind) + print("y_query_expand") + print(y_query_expand) + print("X_test") + print(X) + print("X_test_expand") + print(X_test_expand) + print("Type of X_test_expand: ", type(X_test_expand)) + print("Type of y_query_expand: ", type(y_query_expand)) + + print("y_query_expand") + print(y_query_expand) + + # Make a forecast out to the maximum horizon + # y_fcst, X_trans = y_query_expand, X_test_expand + y_fcst, X_trans = fitted_model.forecast(X_test_expand, y_query_expand) + + print("y_fcst") + print(y_fcst) + + # Align forecast with test set for dates within + # the current rolling window + trans_tindex = X_trans.index.get_level_values(time_column_name) + trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time) + test_roll_wind = expand_wind & (X[time_column_name] >= origin_time) + df_list.append( + align_outputs( + y_fcst[trans_roll_wind], + X_trans[trans_roll_wind], + X[test_roll_wind], + y[test_roll_wind], + ) + ) + + # Advance the origin time + origin_time = horizon_time + + return pd.concat(df_list, ignore_index=True) + + +def do_rolling_forecast(fitted_model, X_test, y_test, max_horizon, freq="D"): + """ + Produce forecasts on a rolling origin over the given test set. + + Each iteration makes a forecast for the next 'max_horizon' periods + with respect to the current origin, then advances the origin by the + horizon time duration. The prediction context for each forecast is set so + that the forecaster uses the actual target values prior to the current + origin time for constructing lag features. + + This function returns a concatenated DataFrame of rolling forecasts. 
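+    Unlike the lookback variant above, the prediction context here is drawn
+    only from the actuals within X_test/y_test.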
+ """ + df_list = [] + origin_time = X_test[time_column_name].min() + while origin_time <= X_test[time_column_name].max(): + # Set the horizon time - end date of the forecast + horizon_time = origin_time + max_horizon * to_offset(freq) + + # Extract test data from an expanding window up-to the horizon + expand_wind = X_test[time_column_name] < horizon_time + X_test_expand = X_test[expand_wind] + y_query_expand = np.zeros(len(X_test_expand)).astype(np.float) + y_query_expand.fill(np.NaN) + + if origin_time != X_test[time_column_name].min(): + # Set the context by including actuals up-to the origin time + test_context_expand_wind = X_test[time_column_name] < origin_time + context_expand_wind = X_test_expand[time_column_name] < origin_time + y_query_expand[context_expand_wind] = y_test[test_context_expand_wind] + + # Print some debug info + print( + "Horizon_time:", + horizon_time, + " origin_time: ", + origin_time, + " max_horizon: ", + max_horizon, + " freq: ", + freq, + ) + print("expand_wind: ", expand_wind) + print("y_query_expand") + print(y_query_expand) + print("X_test") + print(X_test) + print("X_test_expand") + print(X_test_expand) + print("Type of X_test_expand: ", type(X_test_expand)) + print("Type of y_query_expand: ", type(y_query_expand)) + print("y_query_expand") + print(y_query_expand) + + # Make a forecast out to the maximum horizon + y_fcst, X_trans = fitted_model.forecast(X_test_expand, y_query_expand) + + print("y_fcst") + print(y_fcst) + + # Align forecast with test set for dates within the + # current rolling window + trans_tindex = X_trans.index.get_level_values(time_column_name) + trans_roll_wind = (trans_tindex >= origin_time) & (trans_tindex < horizon_time) + test_roll_wind = expand_wind & (X_test[time_column_name] >= origin_time) + df_list.append( + align_outputs( + y_fcst[trans_roll_wind], + X_trans[trans_roll_wind], + X_test[test_roll_wind], + y_test[test_roll_wind], + ) + ) + + # Advance the origin time + origin_time = horizon_time + + return pd.concat(df_list, ignore_index=True) + + +def APE(actual, pred): + """ + Calculate absolute percentage error. + Returns a vector of APE values with same length as actual/pred. + """ + return 100 * np.abs((actual - pred) / actual) + + +def MAPE(actual, pred): + """ + Calculate mean absolute percentage error. 
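+    MAPE = mean(100 * |actual - pred| / |actual|) over the retained points.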
+ Remove NA and values where actual is close to zero + """ + not_na = ~(np.isnan(actual) | np.isnan(pred)) + not_zero = ~np.isclose(actual, 0.0) + actual_safe = actual[not_na & not_zero] + pred_safe = pred[not_na & not_zero] + return np.mean(APE(actual_safe, pred_safe)) + + +def map_location_cuda(storage, loc): + return storage.cuda() + + +parser = argparse.ArgumentParser() +parser.add_argument( + "--max_horizon", + type=int, + dest="max_horizon", + default=10, + help="Max Horizon for forecasting", +) +parser.add_argument( + "--target_column_name", + type=str, + dest="target_column_name", + help="Target Column Name", +) +parser.add_argument( + "--time_column_name", type=str, dest="time_column_name", help="Time Column Name" +) +parser.add_argument( + "--frequency", type=str, dest="freq", help="Frequency of prediction" +) +parser.add_argument( + "--model_path", + type=str, + dest="model_path", + default="model.pkl", + help="Filename of model to be loaded", +) + +args = parser.parse_args() +max_horizon = args.max_horizon +target_column_name = args.target_column_name +time_column_name = args.time_column_name +freq = args.freq +model_path = args.model_path + +print("args passed are: ") +print(max_horizon) +print(target_column_name) +print(time_column_name) +print(freq) +print(model_path) + +run = Run.get_context() +# get input dataset by name +test_dataset = run.input_datasets["test_data"] +lookback_dataset = run.input_datasets["lookback_data"] + +grain_column_names = [] + +df = test_dataset.to_pandas_dataframe() + +print("Read df") +print(df) + +X_test_df = test_dataset.drop_columns(columns=[target_column_name]) +y_test_df = test_dataset.with_timestamp_columns(None).keep_columns( + columns=[target_column_name] +) + +X_lookback_df = lookback_dataset.drop_columns(columns=[target_column_name]) +y_lookback_df = lookback_dataset.with_timestamp_columns(None).keep_columns( + columns=[target_column_name] +) + +_, ext = os.path.splitext(model_path) +if ext == ".pt": + # Load the fc-tcn torch model. + assert _torch_present + if torch.cuda.is_available(): + map_location = map_location_cuda + else: + map_location = "cpu" + with open(model_path, "rb") as fh: + fitted_model = torch.load(fh, map_location=map_location) +else: + # Load the sklearn pipeline. 
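+    # joblib restores the full fitted pipeline that AutoML saved as model.pkl.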
+    fitted_model = joblib.load(model_path)
+
+if hasattr(fitted_model, "get_lookback"):
+    lookback = fitted_model.get_lookback()
+    df_all = do_rolling_forecast_with_lookback(
+        fitted_model,
+        X_test_df.to_pandas_dataframe(),
+        y_test_df.to_pandas_dataframe().values.T[0],
+        max_horizon,
+        X_lookback_df.to_pandas_dataframe()[-lookback:],
+        y_lookback_df.to_pandas_dataframe().values.T[0][-lookback:],
+        freq,
+    )
+else:
+    df_all = do_rolling_forecast(
+        fitted_model,
+        X_test_df.to_pandas_dataframe(),
+        y_test_df.to_pandas_dataframe().values.T[0],
+        max_horizon,
+        freq,
+    )
+
+print(df_all)
+
+print("target values:::")
+print(df_all[target_column_name])
+print("predicted values:::")
+print(df_all["predicted"])
+
+# Use the AutoML scoring module
+regression_metrics = list(constants.REGRESSION_SCALAR_SET)
+y_test = np.array(df_all[target_column_name])
+y_pred = np.array(df_all["predicted"])
+scores = scoring.score_regression(y_test, y_pred, regression_metrics)
+
+print("scores:")
+print(scores)
+
+for key, value in scores.items():
+    run.log(key, value)
+
+print("Simple forecasting model")
+rmse = np.sqrt(mean_squared_error(df_all[target_column_name], df_all["predicted"]))
+print("[Test Data] \nRoot Mean squared error: %.2f" % rmse)
+mae = mean_absolute_error(df_all[target_column_name], df_all["predicted"])
+print("mean_absolute_error score: %.2f" % mae)
+print("MAPE: %.2f" % MAPE(df_all[target_column_name], df_all["predicted"]))
+
+run.log("rmse", rmse)
+run.log("mae", mae)
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/README.md b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/README.md
new file mode 100644
index 000000000..735e348d4
--- /dev/null
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/README.md
@@ -0,0 +1,94 @@
+---
+page_type: sample
+languages:
+- python
+products:
+- azure-machine-learning
+description: Tutorial showing how to solve complex machine learning time series forecasting problems at scale by using Azure Automated ML and the Hierarchical time series accelerator.
+---
+
+## Microsoft Solution Accelerator: Hierarchical Time Series Forecasting
+
+In most applications, customers need to understand their forecasts at both a macro and a micro level of the business. Whether that is predicting sales of products at different geographic locations, or understanding the expected workforce demand for different organizations at a company, the ability to train a machine learning model to intelligently forecast on hierarchical data is essential.
+
+This business pattern is common across a wide variety of industries and applicable to many real-world use cases. Below are some examples of where the hierarchical time series pattern is useful.
+
+| Industry | Scenario |
+|----------------|--------------------------------------------|
+| *Restaurant Chain* | Building demand forecasting models across thousands of restaurants and several countries. |
+| *Retail Organization* | Building workforce optimization models for thousands of stores. |
+| *Retail Organization* | Price optimization models for hundreds of thousands of products available. |
+
+
+### Technical Summary
+
+A hierarchical time series is a structure in which each of the unique series is arranged into a hierarchy based on dimensions such as geography or product type. The table below shows an example of data whose unique attributes form a hierarchy.
Our hierarchy is defined by the `product type`, such as headphones or tablets; the `product category`, which splits product types into accessories and devices; and the `region` the products are sold in. The table below demonstrates the first input of each unique series in the hierarchy.
+
+![data-table](./media/data-table.png)
+
+To further visualize this, the leaf levels of the hierarchy contain all the time series with unique combinations of attribute values. Each higher level in the hierarchy considers one fewer dimension for defining the time series and aggregates each set of `child nodes` from the lower level into a `parent node`.
+
+![hierarchy-sample](./media/hierarchy-sample-ms.PNG)
+
+> **Note:** If no unique root level exists in the data, Automated Machine Learning will create a node `automl_top_level` for users to train on or forecast totals.
+
+## Prerequisites
+
+To use this solution accelerator, all you need is access to an [Azure subscription](https://azure.microsoft.com/free/) and an [Azure Machine Learning Workspace](https://docs.microsoft.com/azure/machine-learning/how-to-manage-workspace) that you'll create below.
+
+A basic understanding of Azure Machine Learning and hierarchical time series concepts will be helpful for understanding the solution. The following resources can help introduce you to these concepts:
+
+1. [Azure Machine Learning Overview](https://azure.microsoft.com/services/machine-learning/)
+2. [Azure Machine Learning Tutorials](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup)
+3. [Azure Machine Learning Sample Notebooks on GitHub](https://github.com/Azure/azureml-examples/)
+4. [Forecasting: Principles and Practice, Hierarchical time series](https://otexts.com/fpp2/hts.html)
+
+## Getting started
+
+### 1. Set up the Compute Instance
+Please create a [Compute Instance](https://docs.microsoft.com/en-us/azure/machine-learning/concept-compute-instance#create) and clone the git repo to your workspace.
+
+### 2. Run the Notebook
+
+Once your environment is set up, go to JupyterLab and run the notebook `auto-ml-forecasting-hierarchical-timeseries.ipynb` on the Compute Instance you created. It runs through the steps outlined sequentially. By the end, you'll know how to train, score, and make predictions using the hierarchical time series model pattern on Azure Machine Learning.
+
+| Notebook | Description |
+|----------------|--------------------------------------------|
+| `auto-ml-forecasting-hierarchical-timeseries.ipynb` | Creates a pipeline to train machine learning models for the defined hierarchy and forecast at the desired hierarchy level using Automated ML. |
+
+
+![Work Flow](./media/workflow.PNG)
+
+## Key Concepts
+
+### Automated Machine Learning
+
+[Automated Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml), also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development. It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity, all while sustaining model quality.
+
+### Pipelines
+
+[Pipelines](https://docs.microsoft.com/azure/machine-learning/concept-ml-pipelines) allow you to create workflows in your machine learning projects. These workflows have a number of benefits including speed, simplicity, repeatability, and modularity. A minimal sketch of how this sample assembles and submits its training pipeline is shown below.
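+
+As a rough sketch (the names `registered_train`, `compute_target`, and `hts_parameters` are assumed to be defined as in the sample notebook), building and submitting the pipeline looks like this:
+
+```python
+from azureml.core import Experiment, Workspace
+from azureml.pipeline.core import Pipeline
+from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder
+
+ws = Workspace.from_config()
+experiment = Experiment(ws, "automl-hts")
+
+# Build the HTS training steps; the inputs here mirror the notebook's setup.
+steps = AutoMLPipelineBuilder.get_many_models_train_steps(
+    experiment=experiment,
+    train_data=registered_train,
+    compute_target=compute_target,
+    node_count=2,
+    process_count_per_node=8,
+    train_pipeline_parameters=hts_parameters,
+)
+
+# Wrap the steps in a Pipeline and run it as an experiment.
+pipeline = Pipeline(ws, steps=steps)
+run = experiment.submit(pipeline)
+run.wait_for_completion(show_output=False)
+```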
+
+### ParallelRunStep
+
+[ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) enables the parallel training of models and is commonly used for batch inferencing. This [document](https://docs.microsoft.com/azure/machine-learning/how-to-use-parallel-run-step) walks through some of the key concepts around ParallelRunStep.
+
+### Other Concepts
+
+In addition to ParallelRunStep, Pipelines, and Automated Machine Learning, you'll also be working with the following concepts: [workspace](https://docs.microsoft.com/azure/machine-learning/concept-workspace), [datasets](https://docs.microsoft.com/azure/machine-learning/concept-data#datasets), [compute targets](https://docs.microsoft.com/azure/machine-learning/concept-compute-target#train), [python script steps](https://docs.microsoft.com/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py), and [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/).
+
+## Contributing
+
+This project welcomes contributions and suggestions. To learn more, visit the [contributing](CONTRIBUTING.md) section.
+
+Most contributions require you to agree to a Contributor License Agreement (CLA)
+declaring that you have the right to, and actually do, grant us
+the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
+
+When you submit a pull request, a CLA bot will automatically determine whether you need to provide
+a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
+provided by the bot. You will only need to do this once across all repos using our CLA.
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
+contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
index e2ab133f9..ebbf8b04b 100644
--- a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
+++ b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb
@@ -1,639 +1,639 @@
 {
- "cells": [
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Copyright (c) Microsoft Corporation. All rights reserved.\n",
-    "\n",
-    "Licensed under the MIT License."
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Hierarchical Time Series - Automated ML\n", - "**_Generate hierarchical time series forecasts with Automated Machine Learning_**\n", - "\n", - "---" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For this notebook we are using a synthetic dataset portraying sales data to predict the the quantity of a vartiety of product skus across several states, stores, and product categories.\n", - "\n", - "**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Prerequisites\n", - "You'll need to create a compute Instance by following the instructions in the [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1.0 Set up workspace, datastore, experiment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613003526897 - } - }, - "outputs": [], - "source": [ - "import azureml.core\n", - "from azureml.core import Workspace, Datastore\n", - "import pandas as pd\n", - "\n", - "# Set up your workspace\n", - "ws = Workspace.from_config()\n", - "ws.get_details()\n", - "\n", - "# Set up your datastores\n", - "dstore = ws.get_default_datastore()\n", - "\n", - "output = {}\n", - "output[\"SDK version\"] = azureml.core.VERSION\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Default datastore name\"] = dstore.name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Choose an experiment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613003540729 - } - }, - "outputs": [], - "source": [ - "from azureml.core import Experiment\n", - "\n", - "experiment = Experiment(ws, \"automl-hts\")\n", - "\n", - "print(\"Experiment name: \" + experiment.name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 2.0 Data\n" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "nteract": { - "transient": { - "deleting": false - } - } - }, - "source": [ - "### Upload local csv files to datastore\n", - "You can upload your train and inference csv files to the default datastore in your workspace. \n", - "\n", - "A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n", - "Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) documentation on how to access data from Datastore." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "datastore_path = \"hts-sample\"" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "datastore = ws.get_default_datastore()\n", - "datastore" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Create the TabularDatasets \n", - "\n", - "Datasets in Azure Machine Learning are references to specific data in a Datastore. The data can be retrieved as a [TabularDatasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py). We will read in the data as a pandas DataFrame, upload to the data store and register them to your Workspace using ```register_pandas_dataframe``` so they can be called as an input into the training pipeline. We will use the inference dataset as part of the forecasting pipeline. The step need only be completed once." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007017296 - } - }, - "outputs": [], - "source": [ - "from azureml.data.dataset_factory import TabularDatasetFactory\n", - "\n", - "registered_train = TabularDatasetFactory.register_pandas_dataframe(\n", - " pd.read_csv(\"Data/hts-sample-train.csv\"),\n", - " target=(datastore, \"hts-sample\"),\n", - " name=\"hts-sales-train\",\n", - ")\n", - "registered_inference = TabularDatasetFactory.register_pandas_dataframe(\n", - " pd.read_csv(\"Data/hts-sample-test.csv\"),\n", - " target=(datastore, \"hts-sample\"),\n", - " name=\"hts-sales-test\",\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3.0 Build the training pipeline\n", - "Now that the dataset, WorkSpace, and datastore are set up, we can put together a pipeline for training.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Choose a compute target\n", - "\n", - "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "\\*\\*Creation of AmlCompute takes approximately 5 minutes.**\n", - "\n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007037308 - } - }, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "\n", - "# Name your cluster\n", - "compute_name = \"hts-compute\"\n", - "\n", - "\n", - "if compute_name in ws.compute_targets:\n", - " compute_target = ws.compute_targets[compute_name]\n", - " if compute_target and type(compute_target) is AmlCompute:\n", - " print(\"Found compute target: \" + compute_name)\n", - "else:\n", - " print(\"Creating a new compute target...\")\n", - " provisioning_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_D16S_V3\", max_nodes=20\n", - " )\n", - " # Create the compute target\n", - " compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n", - "\n", - " # Can poll for a minimum number of nodes and for a specific timeout.\n", - " # If no min node count is provided it will use the scale settings for the cluster\n", - " compute_target.wait_for_completion(\n", - " show_output=True, min_node_count=None, timeout_in_minutes=20\n", - " )\n", - "\n", - " # For a more detailed view of current cluster status, use the 'status' property\n", - " print(compute_target.status.serialize())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up training parameters\n", - "\n", - "This dictionary defines the AutoML and hierarchy settings. For this forecasting task we need to define several settings inncluding the name of the time column, the maximum forecast horizon, the hierarchy definition, and the level of the hierarchy at which to train.\n", - "\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **task** | forecasting |\n", - "| **primary_metric** | This is the metric that you want to optimize.
Forecasting supports the following primary metrics
spearman_correlation
normalized_root_mean_squared_error
r2_score
normalized_mean_absolute_error |\n", - "| **blocked_models** | Blocked models won't be used by AutoML. |\n", - "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", - "| **label_column_name** | The name of the label column. |\n", - "| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", - "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", - "| **enable_early_stopping** | Flag to enable early termination if the score is not improving in the short term. |\n", - "| **time_column_name** | The name of your time column. |\n", - "| **hierarchy_column_names** | The names of columns that define the hierarchical structure of the data from highest level to most granular. |\n", - "| **training_level** | The level of the hierarchy to be used for training models. |\n", - "| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n", - "| **time_series_id_column_name** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n", - "| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n", - "| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n", - "| **model_explainability** | Flag to disable explaining the best automated ML model at the end of all training iterations. The default is True and will block non-explainable models which may impact the forecast accuracy. For more information, see [Interpretability: model explanations in automated machine learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl). |" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007061544 - } - }, - "outputs": [], - "source": [ - "from azureml.train.automl.runtime._hts.hts_parameters import HTSTrainParameters\n", - "\n", - "model_explainability = True\n", - "\n", - "engineered_explanations = False\n", - "# Define your hierarchy. Adjust the settings below based on your dataset.\n", - "hierarchy = [\"state\", \"store_id\", \"product_category\", \"SKU\"]\n", - "training_level = \"SKU\"\n", - "\n", - "# Set your forecast parameters. 
Adjust the settings below based on your dataset.\n", - "time_column_name = \"date\"\n", - "label_column_name = \"quantity\"\n", - "forecast_horizon = 7\n", - "\n", - "\n", - "automl_settings = {\n", - " \"task\": \"forecasting\",\n", - " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", - " \"label_column_name\": label_column_name,\n", - " \"time_column_name\": time_column_name,\n", - " \"forecast_horizon\": forecast_horizon,\n", - " \"hierarchy_column_names\": hierarchy,\n", - " \"hierarchy_training_level\": training_level,\n", - " \"track_child_runs\": False,\n", - " \"pipeline_fetch_max_batch_size\": 15,\n", - " \"model_explainability\": model_explainability,\n", - " # The following settings are specific to this sample and should be adjusted according to your own needs.\n", - " \"iteration_timeout_minutes\": 10,\n", - " \"iterations\": 10,\n", - " \"n_cross_validations\": 2,\n", - "}\n", - "\n", - "hts_parameters = HTSTrainParameters(\n", - " automl_settings=automl_settings,\n", - " hierarchy_column_names=hierarchy,\n", - " training_level=training_level,\n", - " enable_engineered_explanations=engineered_explanations,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up hierarchy training pipeline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Parallel run step is leveraged to train the hierarchy. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The `process_count_per_node` is based off the number of cores of the compute VM. The node_count will determine the number of master nodes to use, increasing the node count will speed up the training process.\n", - "\n", - "* **experiment:** The experiment used for training.\n", - "* **train_data:** The tabular dataset to be used as input to the training run.\n", - "* **node_count:** The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long.\n", - "* **process_count_per_node:** Process count per node, we recommend 2:1 ratio for number of cores: number of processes per node. eg. If node has 16 cores then configure 8 or less process count per node or optimal performance.\n", - "* **train_pipeline_parameters:** The set of configuration parameters defined in the previous section. \n", - "\n", - "Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", - "\n", - "\n", - "training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n", - " experiment=experiment,\n", - " train_data=registered_train,\n", - " compute_target=compute_target,\n", - " node_count=2,\n", - " process_count_per_node=8,\n", - " train_pipeline_parameters=hts_parameters,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline\n", - "\n", - "training_pipeline = Pipeline(ws, steps=training_pipeline_steps)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Submit the pipeline to run\n", - "Next we submit our pipeline to run. 
The whole training pipeline takes about 1h 11m using a Standard_D12_V2 VM with our current ParallelRunConfig setting." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_run = experiment.submit(training_pipeline)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Check the run status, if training_run is in completed state, continue to forecasting. If training_run is in another state, check the portal for failures." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### [Optional] Get the explanations\n", - "First we need to download the explanations to the local disk." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "if model_explainability:\n", - " expl_output = training_run.get_pipeline_output(\"explanations\")\n", - " expl_output.download(\"training_explanations\")\n", - "else:\n", - " print(\n", - " \"Model explanations are available only if model_explainability is set to True.\"\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The explanations are downloaded to the \"training_explanations/azureml\" directory." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "\n", - "if model_explainability:\n", - " explanations_dirrectory = os.listdir(\n", - " os.path.join(\"training_explanations\", \"azureml\")\n", - " )\n", - " if len(explanations_dirrectory) > 1:\n", - " print(\n", - " \"Warning! The directory contains multiple explanations, only the first one will be displayed.\"\n", - " )\n", - " print(\"The explanations are located at {}.\".format(explanations_dirrectory[0]))\n", - " # Now we will list all the explanations.\n", - " explanation_path = os.path.join(\n", - " \"training_explanations\",\n", - " \"azureml\",\n", - " explanations_dirrectory[0],\n", - " \"training_explanations\",\n", - " )\n", - " print(\"Available explanations\")\n", - " print(\"==============================\")\n", - " print(\"\\n\".join(os.listdir(explanation_path)))\n", - "else:\n", - " print(\n", - " \"Model explanations are available only if model_explainability is set to True.\"\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "View the explanations on \"state\" level." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from IPython.display import display\n", - "\n", - "explanation_type = \"raw\"\n", - "level = \"state\"\n", - "\n", - "if model_explainability:\n", - " display(\n", - " pd.read_csv(\n", - " os.path.join(explanation_path, \"{}_explanations_{}.csv\").format(\n", - " explanation_type, level\n", - " )\n", - " )\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 5.0 Forecasting\n", - "For hierarchical forecasting we need to provide the HTSInferenceParameters object.\n", - "#### HTSInferenceParameters arguments\n", - "* **hierarchy_forecast_level:** The default level of the hierarchy to produce prediction/forecast on.\n", - "* **allocation_method:** \\[Optional] The disaggregation method to use if the hierarchy forecast level specified is below the define hierarchy training level.
(average historical proportions) 'average_historical_proportions'
(proportions of the historical averages) 'proportions_of_historical_average'\n", - "\n", - "#### get_many_models_batch_inference_steps arguments\n", - "* **experiment:** The experiment used for inference run.\n", - "* **inference_data:** The data to use for inferencing. It should be the same schema as used for training.\n", - "* **compute_target:** The compute target that runs the inference pipeline.\n", - "* **node_count:** The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku).\n", - "* **process_count_per_node:** The number of processes per node.\n", - "* **train_run_id:** \\[Optional] The run id of the hierarchy training, by default it is the latest successful training hts run in the experiment.\n", - "* **train_experiment_name:** \\[Optional] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline.\n", - "* **process_count_per_node:** \\[Optional] The number of processes per node, by default it's 4." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.train.automl.runtime._hts.hts_parameters import HTSInferenceParameters\n", - "\n", - "inference_parameters = HTSInferenceParameters(\n", - " hierarchy_forecast_level=\"store_id\", # The setting is specific to this dataset and should be changed based on your dataset.\n", - " allocation_method=\"proportions_of_historical_average\",\n", - ")\n", - "\n", - "steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n", - " experiment=experiment,\n", - " inference_data=registered_inference,\n", - " compute_target=compute_target,\n", - " inference_pipeline_parameters=inference_parameters,\n", - " node_count=2,\n", - " process_count_per_node=8,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline\n", - "\n", - "inference_pipeline = Pipeline(ws, steps=steps)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "inference_run = experiment.submit(inference_pipeline)\n", - "inference_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Retrieve results\n", - "\n", - "Forecast results can be retrieved through the following code. The prediction results summary and the actual predictions are downloaded the \"forecast_results\" folder" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "forecasts = inference_run.get_pipeline_output(\"forecasts\")\n", - "forecasts.download(\"forecast_results\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Resbumit the Pipeline\n", - "\n", - "The inference pipeline can be submitted with different configurations." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "inference_run = experiment.submit(\n", - " inference_pipeline, pipeline_parameters={\"hierarchy_forecast_level\": \"state\"}\n", - ")\n", - "inference_run.wait_for_completion(show_output=False)" - ] + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." 
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Hierarchical Time Series - Automated ML\n",
+    "**_Generate hierarchical time series forecasts with Automated Machine Learning_**\n",
+    "\n",
+    "---"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "For this notebook we are using a synthetic dataset portraying sales data to predict the quantity of a variety of product SKUs across several states, stores, and product categories.\n",
+    "\n",
+    "**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend setting the parallelism to a maximum of 320 runs per experiment per workspace. If users want more parallelism and increase this limit, they might encounter Too Many Requests errors (HTTP 429).**"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Prerequisites\n",
+    "You'll need to create a Compute Instance by following the instructions in the [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md)."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 1.0 Set up workspace, datastore, experiment"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "gather": {
+     "logged": 1613003526897
+    }
+   },
+   "outputs": [],
+   "source": [
+    "import azureml.core\n",
+    "from azureml.core import Workspace, Datastore\n",
+    "import pandas as pd\n",
+    "\n",
+    "# Set up your workspace\n",
+    "ws = Workspace.from_config()\n",
+    "ws.get_details()\n",
+    "\n",
+    "# Set up your datastores\n",
+    "dstore = ws.get_default_datastore()\n",
+    "\n",
+    "output = {}\n",
+    "output[\"SDK version\"] = azureml.core.VERSION\n",
+    "output[\"Subscription ID\"] = ws.subscription_id\n",
+    "output[\"Workspace\"] = ws.name\n",
+    "output[\"Resource Group\"] = ws.resource_group\n",
+    "output[\"Location\"] = ws.location\n",
+    "output[\"Default datastore name\"] = dstore.name\n",
+    "pd.set_option(\"display.max_colwidth\", -1)\n",
+    "outputDf = pd.DataFrame(data=output, index=[\"\"])\n",
+    "outputDf.T"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Choose an experiment"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "gather": {
+     "logged": 1613003540729
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.core import Experiment\n",
+    "\n",
+    "experiment = Experiment(ws, \"automl-hts\")\n",
+    "\n",
+    "print(\"Experiment name: \" + experiment.name)"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 2.0 Data\n"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {
+    "nteract": {
+     "transient": {
+      "deleting": false
+     }
+    }
+   },
+   "source": [
+    "### Upload local CSV files to datastore\n",
+    "You can upload your train and inference CSV files to the default datastore in your workspace. \n",
+    "\n",
+    "A Datastore is a place where data can be stored and then made accessible to a compute, either by mounting or by copying the data to the compute target.\n",
+    "Please refer to the [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore.datastore?view=azure-ml-py) documentation on how to access data from a Datastore."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "datastore_path = \"hts-sample\""
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "datastore = ws.get_default_datastore()\n",
+    "datastore"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Create the TabularDatasets \n",
+    "\n",
+    "Datasets in Azure Machine Learning are references to specific data in a Datastore. The data can be retrieved as a [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py). We will read the data into a pandas DataFrame, upload it to the datastore, and register it to your Workspace using ```register_pandas_dataframe``` so it can be called as an input into the training pipeline. We will use the inference dataset as part of the forecasting pipeline. The step need only be completed once."
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {
+    "gather": {
+     "logged": 1613007017296
+    }
+   },
+   "outputs": [],
+   "source": [
+    "from azureml.data.dataset_factory import TabularDatasetFactory\n",
+    "\n",
+    "registered_train = TabularDatasetFactory.register_pandas_dataframe(\n",
+    "    pd.read_csv(\"Data/hts-sample-train.csv\"),\n",
+    "    target=(datastore, \"hts-sample\"),\n",
+    "    name=\"hts-sales-train\",\n",
+    ")\n",
+    "registered_inference = TabularDatasetFactory.register_pandas_dataframe(\n",
+    "    pd.read_csv(\"Data/hts-sample-test.csv\"),\n",
+    "    target=(datastore, \"hts-sample\"),\n",
+    "    name=\"hts-sales-test\",\n",
+    ")"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "## 3.0 Build the training pipeline\n",
+    "Now that the dataset, Workspace, and datastore are set up, we can put together a pipeline for training.\n",
+    "\n",
+    "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist."
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "### Choose a compute target\n",
+    "\n",
+    "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n",
+    "\n",
+    "**Creation of AmlCompute takes approximately 5 minutes.**\n",
+    "\n",
+    "If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota."
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3.0 Build the training pipeline\n", + "Now that the dataset, Workspace, and datastore are set up, we can put together a pipeline for training.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Choose a compute target\n", + "\n", + "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "**Creation of AmlCompute takes approximately 5 minutes.**\n", + "\n", + "If an AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613007037308 } + }, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "\n", + "# Name your cluster\n", + "compute_name = \"hts-compute\"\n", + "\n", + "\n", + "if compute_name in ws.compute_targets:\n", + " compute_target = ws.compute_targets[compute_name]\n", + " if compute_target and type(compute_target) is AmlCompute:\n", + " print(\"Found compute target: \" + compute_name)\n", + "else:\n", + " print(\"Creating a new compute target...\")\n", + " provisioning_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_D16S_V3\", max_nodes=20\n", + " )\n", + " # Create the compute target\n", + " compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n", + "\n", + " # Can poll for a minimum number of nodes and for a specific timeout.\n", + " # If no min node count is provided it will use the scale settings for the cluster\n", + " compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + " )\n", + "\n", + " # For a more detailed view of current cluster status, use the 'status' property\n", + " print(compute_target.status.serialize())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up training parameters\n", + "\n", + "This dictionary defines the AutoML and hierarchy settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, the hierarchy definition, and the level of the hierarchy at which to train.\n", + "\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **task** | forecasting |\n", + "| **primary_metric** | This is the metric that you want to optimize.<br> Forecasting supports the following primary metrics: <br>spearman_correlation<br>normalized_root_mean_squared_error<br>r2_score<br>normalized_mean_absolute_error |\n", + "| **blocked_models** | Blocked models won't be used by AutoML. |\n", + "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", + "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", + "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", + "| **label_column_name** | The name of the label column. |\n", + "| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", + "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", + "| **enable_early_stopping** | Flag to enable early termination if the score is not improving in the short term. |\n", + "| **time_column_name** | The name of your time column. |\n", + "| **hierarchy_column_names** | The names of columns that define the hierarchical structure of the data, from the highest level to the most granular. |\n", + "| **training_level** | The level of the hierarchy to be used for training models. |\n", + "| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if the enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n", + "| **time_series_id_column_name** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n", + "| **track_child_runs** | Flag to disable tracking of child runs. Only the best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n", + "| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training; this helps reduce throttling when training at large scale. |\n", + "| **model_explainability** | Flag to disable explaining the best automated ML model at the end of all training iterations. The default is True, which blocks non-explainable models and may impact the forecast accuracy. For more information, see [Interpretability: model explanations in automated machine learning](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability-automl). |" + ] + },
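+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The optional exit criteria and blocked models from the table can be added to the settings dictionary defined in the next cell before it is passed to HTSTrainParameters. A minimal sketch; the blocked model name here is only illustrative:\n", + "\n", + "```\n", + "automl_settings[\"blocked_models\"] = [\"ExtremeRandomTrees\"]\n", + "automl_settings[\"experiment_timeout_hours\"] = 0.25\n", + "```" + ] + },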
+ { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613007061544 } + }, + "outputs": [], + "source": [ + "from azureml.train.automl.runtime._hts.hts_parameters import HTSTrainParameters\n", + "\n", + "model_explainability = True\n", + "\n", + "engineered_explanations = False\n", + "# Define your hierarchy. Adjust the settings below based on your dataset.\n", + "hierarchy = [\"state\", \"store_id\", \"product_category\", \"SKU\"]\n", + "training_level = \"SKU\"\n", + "\n", + "# Set your forecast parameters. Adjust the settings below based on your dataset.\n", + "time_column_name = \"date\"\n", + "label_column_name = \"quantity\"\n", + "forecast_horizon = 7\n", + "\n", + "\n", + "automl_settings = {\n", + " \"task\": \"forecasting\",\n", + " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", + " \"label_column_name\": label_column_name,\n", + " \"time_column_name\": time_column_name,\n", + " \"forecast_horizon\": forecast_horizon,\n", + " \"hierarchy_column_names\": hierarchy,\n", + " \"hierarchy_training_level\": training_level,\n", + " \"track_child_runs\": False,\n", + " \"pipeline_fetch_max_batch_size\": 15,\n", + " \"model_explainability\": model_explainability,\n", + " # The following settings are specific to this sample and should be adjusted according to your own needs.\n", + " \"iteration_timeout_minutes\": 10,\n", + " \"iterations\": 10,\n", + " \"n_cross_validations\": 2,\n", + "}\n", + "\n", + "hts_parameters = HTSTrainParameters(\n", + " automl_settings=automl_settings,\n", + " hierarchy_column_names=hierarchy,\n", + " training_level=training_level,\n", + " enable_engineered_explanations=engineered_explanations,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up hierarchy training pipeline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A parallel run step is leveraged to train the hierarchy. To configure the ParallelRunConfig, you will need to determine the appropriate number of workers and nodes for your use case. The `process_count_per_node` is based on the number of cores of the compute VM. The node_count determines the number of compute nodes to use; increasing the node count will speed up the training process.\n", + "\n", + "* **experiment:** The experiment used for training.\n", + "* **train_data:** The tabular dataset to be used as input to the training run.\n", + "* **node_count:** The number of compute nodes to be used for running the user script. We recommend starting with 3 and increasing the node_count if the training time is taking too long.\n", + "* **process_count_per_node:** The process count per node. We recommend a 2:1 ratio of cores to processes; e.g. if a node has 16 cores, configure a process count of 8 or fewer per node for optimal performance.\n", + "* **train_pipeline_parameters:** The set of configuration parameters defined in the previous section. \n", + "\n", + "Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", + "\n", + "\n", + "training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n", + " experiment=experiment,\n", + " train_data=registered_train,\n", + " compute_target=compute_target,\n", + " node_count=2,\n", + " process_count_per_node=8,\n", + " train_pipeline_parameters=hts_parameters,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Pipeline\n", + "\n", + "training_pipeline = Pipeline(ws, steps=training_pipeline_steps)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Submit the pipeline to run\n", + "Next we submit our pipeline to run. The whole training pipeline takes about one hour using a STANDARD_D16S_V3 VM with our current ParallelRunConfig settings." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_run = experiment.submit(training_pipeline)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_run.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Check the run status: if training_run is in a completed state, continue to forecasting. If training_run is in any other state, check the portal for failures." + ] + },
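+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "A minimal sketch of checking the status programmatically, using the standard Run API:\n", + "\n", + "```\n", + "# Expect 'Completed' before moving on to forecasting\n", + "print(training_run.get_status())\n", + "```" + ] + },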
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### [Optional] Get the explanations\n", + "First we need to download the explanations to the local disk." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if model_explainability:\n", + " expl_output = training_run.get_pipeline_output(\"explanations\")\n", + " expl_output.download(\"training_explanations\")\n", + "else:\n", + " print(\n", + " \"Model explanations are available only if model_explainability is set to True.\"\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The explanations are downloaded to the \"training_explanations/azureml\" directory." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "\n", + "if model_explainability:\n", + " explanations_directory = os.listdir(\n", + " os.path.join(\"training_explanations\", \"azureml\")\n", + " )\n", + " if len(explanations_directory) > 1:\n", + " print(\n", + " \"Warning! The directory contains multiple explanations, only the first one will be displayed.\"\n", + " )\n", + " print(\"The explanations are located at {}.\".format(explanations_directory[0]))\n", + " # Now we will list all the explanations.\n", + " explanation_path = os.path.join(\n", + " \"training_explanations\",\n", + " \"azureml\",\n", + " explanations_directory[0],\n", + " \"training_explanations\",\n", + " )\n", + " print(\"Available explanations\")\n", + " print(\"==============================\")\n", + " print(\"\\n\".join(os.listdir(explanation_path)))\n", + "else:\n", + " print(\n", + " \"Model explanations are available only if model_explainability is set to True.\"\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "View the explanations at the \"state\" level." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import display\n", + "\n", + "explanation_type = \"raw\"\n", + "level = \"state\"\n", + "\n", + "if model_explainability:\n", + " display(\n", + " pd.read_csv(\n", + " os.path.join(\n", + " explanation_path, \"{}_explanations_{}.csv\".format(explanation_type, level)\n", + " )\n", + " )\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5.0 Forecasting\n", + "For hierarchical forecasting we need to provide the HTSInferenceParameters object.\n", + "#### HTSInferenceParameters arguments\n", + "* **hierarchy_forecast_level:** The default level of the hierarchy at which to produce the prediction/forecast.\n", + "* **allocation_method:** \[Optional] The disaggregation method to use if the specified hierarchy forecast level is below the defined hierarchy training level.<br>
(average historical proportions) 'average_historical_proportions'<br>(proportions of the historical averages) 'proportions_of_historical_average'\n", + "\n", + "#### get_many_models_batch_inference_steps arguments\n", + "* **experiment:** The experiment used for the inference run.\n", + "* **inference_data:** The data to use for inferencing. It should have the same schema as the training data.\n", + "* **compute_target:** The compute target that runs the inference pipeline.\n", + "* **node_count:** The number of compute nodes to be used for running the user script. We recommend starting with the number of cores per node (varies by compute SKU).\n", + "* **process_count_per_node:** \[Optional] The number of processes per node; by default it's 4.\n", + "* **train_run_id:** \[Optional] The run id of the hierarchy training; by default, it is the latest successful HTS training run in the experiment.\n", + "* **train_experiment_name:** \[Optional] The train experiment that contains the train pipeline. This is only needed when the train pipeline is not in the same experiment as the inference pipeline." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.train.automl.runtime._hts.hts_parameters import HTSInferenceParameters\n", + "\n", + "inference_parameters = HTSInferenceParameters(\n", + " hierarchy_forecast_level=\"store_id\", # The setting is specific to this dataset and should be changed based on your dataset.\n", + " allocation_method=\"proportions_of_historical_average\",\n", + ")\n", + "\n", + "steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n", + " experiment=experiment,\n", + " inference_data=registered_inference,\n", + " compute_target=compute_target,\n", + " inference_pipeline_parameters=inference_parameters,\n", + " node_count=2,\n", + " process_count_per_node=8,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Pipeline\n", + "\n", + "inference_pipeline = Pipeline(ws, steps=steps)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "inference_run = experiment.submit(inference_pipeline)\n", + "inference_run.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieve results\n", + "\n", + "Forecast results can be retrieved through the following code. The prediction results summary and the actual predictions are downloaded to the forecast_results folder." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "forecasts = inference_run.get_pipeline_output(\"forecasts\")\n", + "forecasts.download(\"forecast_results\")" + ] + },
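+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To see what was downloaded, you can walk the folder. A minimal sketch; the exact file layout inside forecast_results depends on the pipeline output:\n", + "\n", + "```\n", + "import os\n", + "\n", + "for root, _, files in os.walk(\"forecast_results\"):\n", + "    for name in files:\n", + "        print(os.path.join(root, name))\n", + "```" + ] + },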
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Resubmit the Pipeline\n", + "\n", + "The inference pipeline can be resubmitted with different configurations, for example, a different hierarchy_forecast_level."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "inference_run = experiment.submit(\n", + " inference_pipeline, pipeline_parameters={\"hierarchy_forecast_level\": \"state\"}\n", + ")\n", + "inference_run.wait_for_completion(show_output=False)" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "categories": [ + "how-to-use-azureml", + "automated-machine-learning" + ], + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.8" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/data-table.png b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/data-table.png new file mode 100644 index 000000000..193d9c06b Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/data-table.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/deploy-button.png b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/deploy-button.png new file mode 100644 index 000000000..e81f2c1c5 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/deploy-button.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/food-chain.PNG b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/food-chain.PNG new file mode 100644 index 000000000..7f46d6d9f Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/food-chain.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/hierarchy-sample-ms.PNG b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/hierarchy-sample-ms.PNG new file mode 100644 index 000000000..82bb14f7f Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/hierarchy-sample-ms.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/retail-org-2.PNG b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/retail-org-2.PNG new file mode 100644 index 000000000..70f48b053 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/retail-org-2.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/retail-org.PNG b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/retail-org.PNG new file mode 100644 index 000000000..0b4f84b6f Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/retail-org.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/workflow.PNG 
b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/workflow.PNG new file mode 100644 index 000000000..7f161548d Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/media/workflow.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/update_env.yml b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/update_env.yml new file mode 100644 index 000000000..d0b193dab --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/update_env.yml @@ -0,0 +1,3 @@ +dependencies: +- pip: + - azureml-contrib-automl-pipeline-steps diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/README.md b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/README.md new file mode 100644 index 000000000..681528e33 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/README.md @@ -0,0 +1,122 @@ +--- +page_type: sample +languages: +- python +products: +- azure-machine-learning +description: Tutorial showing how to solve complex machine learning time series forecasting problems at scale by using Azure Automated ML and the Many Models solution accelerator. +--- + +![Many Models Solution Accelerator Banner](images/mmsa.png) +# Many Models Solution Accelerator + + + +In the real world, many problems can be too complex to be solved by a single machine learning model. Whether that be predicting sales for each individual store, building a predictive maintenance model for hundreds of oil wells, or tailoring an experience to individual users, building a model for each instance can lead to improved results on many machine learning problems. + +This pattern is very common across a wide variety of industries and applicable to many real-world use cases. Below are some examples we have seen where this pattern is being used. + +- Energy and utility companies building predictive maintenance models for thousands of oil wells, hundreds of wind turbines or hundreds of smart meters + +- Retail organizations building workforce optimization models for thousands of stores, campaign promotion propensity models, and price optimization models for the hundreds of thousands of products they sell + +- Restaurant chains building demand forecasting models across thousands of restaurants + +- Banks and financial institutions building cash replenishment models for ATMs and personalized models for individuals + +- Enterprises building revenue forecasting models at each division level + +- Document management companies building text analytics and legal document search models for each state + +Azure Machine Learning (AML) makes it easy to train, operate, and manage hundreds or even thousands of models. This repo will walk you through the end-to-end process of creating a many models solution from training to scoring to monitoring. + +## Prerequisites + +To use this solution accelerator, all you need is access to an [Azure subscription](https://azure.microsoft.com/free/) and an [Azure Machine Learning Workspace](https://docs.microsoft.com/azure/machine-learning/how-to-manage-workspace) that you'll create below. + +While it's not required, a basic understanding of Azure Machine Learning will be helpful for understanding the solution. The following resources can help introduce you to AML: +
+1. [Azure Machine Learning Overview](https://azure.microsoft.com/services/machine-learning/) +2. [Azure Machine Learning Tutorials](https://docs.microsoft.com/azure/machine-learning/tutorial-1st-experiment-sdk-setup) +3. [Azure Machine Learning Sample Notebooks on Github](https://github.com/Azure/azureml-examples) + +## Getting started + +### 1. Deploy Resources + +Start by deploying the resources to Azure. The button below will deploy Azure Machine Learning and its related resources: + + + + + +### 2. Configure Development Environment + +Next you'll need to configure your [development environment](https://docs.microsoft.com/azure/machine-learning/how-to-configure-environment) for Azure Machine Learning. We recommend using a [Compute Instance](https://docs.microsoft.com/azure/machine-learning/how-to-configure-environment#compute-instance) as it's the fastest way to get up and running. + +### 3. Run Notebooks + +Once your development environment is set up, run through the Jupyter Notebooks sequentially, following the steps outlined. By the end, you'll know how to train, score, and make predictions using the many models pattern on Azure Machine Learning. + +![Sequence of Notebooks](./images/mmsa-overview.png) + + +## Contents + +In this repo, you'll train and score a forecasting model for each orange juice brand and for each store at a (simulated) grocery chain. By the end, you'll have forecasted sales by using up to 11,973 models to predict sales for the next few weeks. + +The data used in this sample is simulated based on the [Dominick's Orange Juice Dataset](http://www.cs.unitn.it/~taufer/QMMA/L10-OJ-Data.html#(1)), sales data from a Chicago area grocery store. + + + +### Using Automated ML to train the models: + +The [`auto-ml-forecasting-many-models.ipynb`](./auto-ml-forecasting-many-models.ipynb) notebook is a guided solution accelerator that demonstrates the steps from data preparation to model training and forecasting with the trained models, as well as operationalizing the solution. + +## How-to-videos + +Watch these how-to videos for a step-by-step walk-through of the many models solution accelerator to learn how to set up your models using Automated ML. + +### Automated ML + +[![Watch the video](https://media.giphy.com/media/dWUKfameudyNGRnp1t/giphy.gif)](https://channel9.msdn.com/Shows/Docs-AI/Building-Large-Scale-Machine-Learning-Forecasting-Models-using-Azure-Machine-Learnings-Automated-ML) + +## Key concepts + +### ParallelRunStep + +[ParallelRunStep](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep?view=azure-ml-py) enables the parallel training of models and is commonly used for batch inferencing. This [document](https://docs.microsoft.com/azure/machine-learning/how-to-use-parallel-run-step) walks through some of the key concepts around ParallelRunStep. + +### Pipelines + +[Pipelines](https://docs.microsoft.com/azure/machine-learning/concept-ml-pipelines) allow you to create workflows in your machine learning projects. These workflows have a number of benefits including speed, simplicity, repeatability, and modularity. + +### Automated Machine Learning + +[Automated Machine Learning](https://docs.microsoft.com/azure/machine-learning/concept-automated-ml), also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of machine learning model development.
It allows data scientists, analysts, and developers to build ML models with high scale, efficiency, and productivity, all while sustaining model quality. + +### Other Concepts + +In addition to ParallelRunStep, Pipelines and Automated Machine Learning, you'll also be working with the following concepts: [workspace](https://docs.microsoft.com/azure/machine-learning/concept-workspace), [datasets](https://docs.microsoft.com/azure/machine-learning/concept-data#datasets), [compute targets](https://docs.microsoft.com/azure/machine-learning/concept-compute-target#train), [python script steps](https://docs.microsoft.com/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep?view=azure-ml-py), and [Azure Open Datasets](https://azure.microsoft.com/services/open-datasets/). + +## Contributing + +This project welcomes contributions and suggestions. To learn more, visit the [contributing](../../../CONTRIBUTING.md) section. + +Most contributions require you to agree to a Contributor License Agreement (CLA) +declaring that you have the right to, and actually do, grant us +the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. + +When you submit a pull request, a CLA bot will automatically determine whether you need to provide +a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions +provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or +contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb index 75caf8596..686b8aeb2 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/auto-ml-forecasting-many-models.ipynb @@ -1,746 +1,746 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Many Models - Automated ML\n", - "**_Generate many models time series forecasts with Automated Machine Learning_**\n", - "\n", - "---" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For this notebook we are using a synthetic dataset portraying sales data to predict the quantity of a vartiety of product SKUs across several states, stores, and product categories.\n", - "\n", - "**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend to set the parallelism to maximum of 320 runs per experiment per workspace. 
If users want to have more parallelism and increase this limit they might encounter Too Many Requests errors (HTTP 429).**" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Prerequisites\n", - "You'll need to create a compute Instance by following the instructions in the [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md)." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 1.0 Set up workspace, datastore, experiment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613003526897 - } - }, - "outputs": [], - "source": [ - "import azureml.core\n", - "from azureml.core import Workspace, Datastore\n", - "import pandas as pd\n", - "\n", - "# Set up your workspace\n", - "ws = Workspace.from_config()\n", - "ws.get_details()\n", - "\n", - "# Set up your datastores\n", - "dstore = ws.get_default_datastore()\n", - "\n", - "output = {}\n", - "output[\"SDK version\"] = azureml.core.VERSION\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Default datastore name\"] = dstore.name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Choose an experiment" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613003540729 - } - }, - "outputs": [], - "source": [ - "from azureml.core import Experiment\n", - "\n", - "experiment = Experiment(ws, \"automl-many-models\")\n", - "\n", - "print(\"Experiment name: \" + experiment.name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 2.0 Data\n", - "\n", - "This notebook uses simulated orange juice sales data to walk you through the process of training many models on Azure Machine Learning using Automated ML. \n", - "\n", - "The time series data used in this example was simulated based on the University of Chicago's Dominick's Finer Foods dataset which featured two years of sales of 3 different orange juice brands for individual stores. The full simulated dataset includes 3,991 stores with 3 orange juice brands each thus allowing 11,973 models to be trained to showcase the power of the many models pattern.\n", - "\n", - " \n", - "In this notebook, two datasets will be created: one with all 11,973 files and one with only 10 files that can be used to quickly test and debug. For each dataset, you'll be walked through the process of:\n", - "\n", - "1. Registering the blob container as a Datastore to the Workspace\n", - "2. Registering a tabular dataset to the Workspace" - ] - }, - { - "cell_type": "markdown", - "metadata": { - "nteract": { - "transient": { - "deleting": false - } - } - }, - "source": [ - "### 2.1 Data Preparation\n", - "The OJ data is available in the public blob container. The data is split to be used for training and for inferencing. For the current dataset, the data was split on time column ('WeekStarting') before and after '1992-5-28' .\n", - "\n", - "The container has\n", - "
    \n", - "
  1. 'oj-data-tabular' and 'oj-inference-tabular' folders that contains training and inference data respectively for the 11,973 models.
  2. \n", - "
  3. It also has 'oj-data-small-tabular' and 'oj-inference-small-tabular' folders that has training and inference data for 10 models.
  4. \n", - "
\n", - "\n", - "To create the [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) needed for the ParallelRunStep, you first need to register the blob container to the workspace." - ] - }, - { - "cell_type": "markdown", - "metadata": { - "nteract": { - "transient": { - "deleting": false - } - } - }, - "source": [ - " To use your own data, put your own data in a blobstore folder. As shown it can be one file or multiple files. We can then register datastore using that blob as shown below.\n", - " \n", - "

How sample data in blob store looks like

\n", - "\n", - "['oj-data-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)
\n", - "![image-4.png](mm-1.png)\n", - "\n", - "['oj-inference-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n", - "![image-3.png](mm-2.png)\n", - "\n", - "['oj-data-small-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n", - "\n", - "![image-5.png](mm-3.png)\n", - "\n", - "['oj-inference-small-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n", - "![image-6.png](mm-4.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### 2.2 Register the blob container as DataStore\n", - "\n", - "A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n", - "\n", - "Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from Datastore.\n", - "\n", - "In this next step, we will be registering blob storage as datastore to the Workspace." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core import Datastore\n", - "\n", - "# Please change the following to point to your own blob container and pass in account_key\n", - "blob_datastore_name = \"automl_many_models\"\n", - "container_name = \"automl-sample-notebook-data\"\n", - "account_name = \"automlsamplenotebookdata\"\n", - "\n", - "oj_datastore = Datastore.register_azure_blob_container(\n", - " workspace=ws,\n", - " datastore_name=blob_datastore_name,\n", - " container_name=container_name,\n", - " account_name=account_name,\n", - " create_if_not_exists=True,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### 2.3 Using tabular datasets \n", - "\n", - "Now that the datastore is available from the Workspace, [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) can be created. Datasets in Azure Machine Learning are references to specific data in a Datastore. 
We are using TabularDataset, so that users who have their data which can be in one or many files (*.parquet or *.csv) and have not split up data according to group columns needed for training, can do so using out of box support for 'partiion_by' feature of TabularDataset shown in section 5.0 below." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007017296 - } - }, - "outputs": [], - "source": [ - "from azureml.core import Dataset\n", - "\n", - "ds_name_small = \"oj-data-small-tabular\"\n", - "input_ds_small = Dataset.Tabular.from_delimited_files(\n", - " path=oj_datastore.path(ds_name_small + \"/\"), validate=False\n", - ")\n", - "\n", - "inference_name_small = \"oj-inference-small-tabular\"\n", - "inference_ds_small = Dataset.Tabular.from_delimited_files(\n", - " path=oj_datastore.path(inference_name_small + \"/\"), validate=False\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 3.0 Build the training pipeline\n", - "Now that the dataset, WorkSpace, and datastore are set up, we can put together a pipeline for training.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Choose a compute target\n", - "\n", - "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "\\*\\*Creation of AmlCompute takes approximately 5 minutes.**\n", - "\n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007037308 - } - }, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "\n", - "# Name your cluster\n", - "compute_name = \"mm-compute\"\n", - "\n", - "\n", - "if compute_name in ws.compute_targets:\n", - " compute_target = ws.compute_targets[compute_name]\n", - " if compute_target and type(compute_target) is AmlCompute:\n", - " print(\"Found compute target: \" + compute_name)\n", - "else:\n", - " print(\"Creating a new compute target...\")\n", - " provisioning_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_D16S_V3\", max_nodes=20\n", - " )\n", - " # Create the compute target\n", - " compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n", - "\n", - " # Can poll for a minimum number of nodes and for a specific timeout.\n", - " # If no min node count is provided it will use the scale settings for the cluster\n", - " compute_target.wait_for_completion(\n", - " show_output=True, min_node_count=None, timeout_in_minutes=20\n", - " )\n", - "\n", - " # For a more detailed view of current cluster status, use the 'status' property\n", - " print(compute_target.status.serialize())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up training parameters\n", - "\n", - "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition.\n", - "\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **task** | forecasting |\n", - "| **primary_metric** | This is the metric that you want to optimize.
Forecasting supports the following primary metrics
spearman_correlation
normalized_root_mean_squared_error
r2_score
normalized_mean_absolute_error |\n", - "| **blocked_models** | Blocked models won't be used by AutoML. |\n", - "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", - "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", - "| **label_column_name** | The name of the label column. |\n", - "| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", - "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", - "| **enable_early_stopping** | Flag to enable early termination if the score is not improving in the short term. |\n", - "| **time_column_name** | The name of your time column. |\n", - "| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n", - "| **time_series_id_column_name** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n", - "| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n", - "| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n", - "| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "gather": { - "logged": 1613007061544 - } - }, - "outputs": [], - "source": [ - "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", - " ManyModelsTrainParameters,\n", - ")\n", - "\n", - "partition_column_names = [\"Store\", \"Brand\"]\n", - "automl_settings = {\n", - " \"task\": \"forecasting\",\n", - " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", - " \"iteration_timeout_minutes\": 10, # This needs to be changed based on the dataset. 
We ask customer to explore how long training is taking before settings this value\n", - " \"iterations\": 15,\n", - " \"experiment_timeout_hours\": 0.25,\n", - " \"label_column_name\": \"Quantity\",\n", - " \"n_cross_validations\": 3,\n", - " \"time_column_name\": \"WeekStarting\",\n", - " \"drop_column_names\": \"Revenue\",\n", - " \"max_horizon\": 6,\n", - " \"grain_column_names\": partition_column_names,\n", - " \"track_child_runs\": False,\n", - "}\n", - "\n", - "mm_paramters = ManyModelsTrainParameters(\n", - " automl_settings=automl_settings, partition_column_names=partition_column_names\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up many models pipeline" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The process_count_per_node is based off the number of cores of the compute VM. The node_count will determine the number of master nodes to use, increasing the node count will speed up the training process.\n", - "\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **experiment** | The experiment used for training. |\n", - "| **train_data** | The file dataset to be used as input to the training run. |\n", - "| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with 3 and increase the node_count if the training time is taking too long. |\n", - "| **process_count_per_node** | Process count per node, we recommend 2:1 ratio for number of cores: number of processes per node. eg. If node has 16 cores then configure 8 or less process count per node or optimal performance. |\n", - "| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n", - "\n", - "Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", - "\n", - "\n", - "training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n", - " experiment=experiment,\n", - " train_data=input_ds_small,\n", - " compute_target=compute_target,\n", - " node_count=2,\n", - " process_count_per_node=8,\n", - " run_invocation_timeout=920,\n", - " train_pipeline_parameters=mm_paramters,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline\n", - "\n", - "training_pipeline = Pipeline(ws, steps=training_pipeline_steps)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Submit the pipeline to run\n", - "Next we submit our pipeline to run. The whole training pipeline takes about 40m using a STANDARD_D16S_V3 VM with our current ParallelRunConfig setting." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_run = experiment.submit(training_pipeline)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "training_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Check the run status, if training_run is in completed state, continue to forecasting. If training_run is in another state, check the portal for failures." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 5.0 Publish and schedule the train pipeline (Optional)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 5.1 Publish the pipeline\n", - "\n", - "Once you have a pipeline you're happy with, you can publish a pipeline so you can call it programmatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# published_pipeline = training_pipeline.publish(name = 'automl_train_many_models',\n", - "# description = 'train many models',\n", - "# version = '1',\n", - "# continue_on_step_failure = False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 7.2 Schedule the pipeline\n", - "You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n", - "\n", - "# training_pipeline_id = published_pipeline.id\n", - "\n", - "# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n", - "# recurring_schedule = Schedule.create(ws, name=\"automl_training_recurring_schedule\",\n", - "# description=\"Schedule Training Pipeline to run on the first day of every month\",\n", - "# pipeline_id=training_pipeline_id,\n", - "# experiment_name=experiment.name,\n", - "# recurrence=recurrence)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 6.0 Forecasting" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set up output dataset for inference data\n", - "Output of inference can be represented as [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object and OutputFileDatasetConfig can be registered as a dataset. 
" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.data import OutputFileDatasetConfig\n", - "\n", - "output_inference_data_ds = OutputFileDatasetConfig(\n", - " name=\"many_models_inference_output\", destination=(dstore, \"oj/inference_data/\")\n", - ").register_on_complete(name=\"oj_inference_data_ds\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For many models we need to provide the ManyModelsInferenceParameters object.\n", - "\n", - "#### ManyModelsInferenceParameters arguments\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **partition_column_names** | List of column names that identifies groups. |\n", - "| **target_column_name** | \\[Optional] Column name only if the inference dataset has the target. |\n", - "| **time_column_name** | \\[Optional] Column name only if it is timeseries. |\n", - "| **many_models_run_id** | \\[Optional] Many models run id where models were trained. |\n", - "\n", - "#### get_many_models_batch_inference_steps arguments\n", - "| Property | Description|\n", - "| :--------------- | :------------------- |\n", - "| **experiment** | The experiment used for inference run. |\n", - "| **inference_data** | The data to use for inferencing. It should be the same schema as used for training.\n", - "| **compute_target** | The compute target that runs the inference pipeline.|\n", - "| **node_count** | The number of compute nodes to be used for running the user script. We recommend to start with the number of cores per node (varies by compute sku). |\n", - "| **process_count_per_node** | The number of processes per node.\n", - "| **train_run_id** | \\[Optional\\] The run id of the hierarchy training, by default it is the latest successful training many model run in the experiment. |\n", - "| **train_experiment_name** | \\[Optional\\] The train experiment that contains the train pipeline. This one is only needed when the train pipeline is not in the same experiement as the inference pipeline. |\n", - "| **process_count_per_node** | \\[Optional\\] The number of processes per node, by default it's 4. 
|" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", - "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", - " ManyModelsInferenceParameters,\n", - ")\n", - "\n", - "mm_parameters = ManyModelsInferenceParameters(\n", - " partition_column_names=[\"Store\", \"Brand\"],\n", - " time_column_name=\"WeekStarting\",\n", - " target_column_name=\"Quantity\",\n", - ")\n", - "\n", - "inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n", - " experiment=experiment,\n", - " inference_data=inference_ds_small,\n", - " node_count=2,\n", - " process_count_per_node=8,\n", - " compute_target=compute_target,\n", - " run_invocation_timeout=300,\n", - " output_datastore=output_inference_data_ds,\n", - " train_run_id=training_run.id,\n", - " train_experiment_name=training_run.experiment.name,\n", - " inference_pipeline_parameters=mm_parameters,\n", - ")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.pipeline.core import Pipeline\n", - "\n", - "inference_pipeline = Pipeline(ws, steps=inference_steps)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "inference_run = experiment.submit(inference_pipeline)\n", - "inference_run.wait_for_completion(show_output=False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Retrieve results\n", - "\n", - "The forecasting pipeline forecasts the orange juice quantity for a Store by Brand. The pipeline returns one file with the predictions for each store and outputs the result to the forecasting_output Blob container. The details of the blob container is listed in 'forecasting_output.txt' under Outputs+logs. \n", - "\n", - "The following code snippet:\n", - "1. Downloads the contents of the output folder that is passed in the parallel run step \n", - "2. Reads the parallel_run_step.txt file that has the predictions as pandas dataframe and \n", - "3. Displays the top 10 rows of the predictions" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline\n", - "\n", - "forecasting_results_name = \"forecasting_results\"\n", - "forecasting_output_name = \"many_models_inference_output\"\n", - "forecast_file = get_output_from_mm_pipeline(\n", - " inference_run, forecasting_results_name, forecasting_output_name\n", - ")\n", - "df = pd.read_csv(forecast_file, delimiter=\" \", header=None)\n", - "df.columns = [\n", - " \"Week Starting\",\n", - " \"Store\",\n", - " \"Brand\",\n", - " \"Quantity\",\n", - " \"Advert\",\n", - " \"Price\",\n", - " \"Revenue\",\n", - " \"Predicted\",\n", - "]\n", - "print(\n", - " \"Prediction has \", df.shape[0], \" rows. Here the first 10 rows are being displayed.\"\n", - ")\n", - "df.head(10)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## 7.0 Publish and schedule the inference pipeline (Optional)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 7.1 Publish the pipeline\n", - "\n", - "Once you have a pipeline you're happy with, you can publish a pipeline so you can call it programmatically later on. 
See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# published_pipeline_inf = inference_pipeline.publish(name = 'automl_forecast_many_models',\n", - "# description = 'forecast many models',\n", - "# version = '1',\n", - "# continue_on_step_failure = False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### 7.2 Schedule the pipeline\n", - "You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain or forecast models every month or based on another trigger such as data drift." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n", - "\n", - "# forecasting_pipeline_id = published_pipeline.id\n", - "\n", - "# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n", - "# recurring_schedule = Schedule.create(ws, name=\"automl_forecasting_recurring_schedule\",\n", - "# description=\"Schedule Forecasting Pipeline to run on the first day of every week\",\n", - "# pipeline_id=forecasting_pipeline_id,\n", - "# experiment_name=experiment.name,\n", - "# recurrence=recurrence)" - ] } - ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "categories": [ - "how-to-use-azureml", - "automated-machine-learning" - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Many Models - Automated ML\n", + "**_Generate many models time series forecasts with Automated Machine Learning_**\n", + "\n", + "---" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For this notebook we are using a synthetic dataset portraying sales data to predict the quantity of a variety of product SKUs across several states, stores, and product categories.\n", + "\n", + "**NOTE: There are limits on how many runs we can do in parallel per workspace, and we currently recommend setting the parallelism to a maximum of 320 runs per experiment per workspace. If you increase the parallelism beyond this limit, you may encounter Too Many Requests errors (HTTP 429).**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Prerequisites\n", + "You'll need to create a Compute Instance by following the instructions in [EnvironmentSetup.md](../Setup_Resources/EnvironmentSetup.md)." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 1.0 Set up workspace, datastore, experiment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613003526897 } - ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "categories": [ - "how-to-use-azureml", - "automated-machine-learning" - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.8" + }, + "outputs": [], + "source": [ + "import azureml.core\n", + "from azureml.core import Workspace, Datastore\n", + "import pandas as pd\n", + "\n", + "# Set up your workspace\n", + "ws = Workspace.from_config()\n", + "ws.get_details()\n", + "\n", + "# Set up your datastores\n", + "dstore = ws.get_default_datastore()\n", + "\n", + "output = {}\n", + "output[\"SDK version\"] = azureml.core.VERSION\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Default datastore name\"] = dstore.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Choose an experiment" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613003540729 + } + }, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "\n", + "experiment = Experiment(ws, \"automl-many-models\")\n", + "\n", + "print(\"Experiment name: \" + experiment.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 2.0 Data\n", + "\n", + "This notebook uses simulated orange juice sales data to walk you through the process of training many models on Azure Machine Learning using Automated ML. \n", + "\n", + "The time series data used in this example was simulated based on the University of Chicago's Dominick's Finer Foods dataset which featured two years of sales of 3 different orange juice brands for individual stores. The full simulated dataset includes 3,991 stores with 3 orange juice brands each thus allowing 11,973 models to be trained to showcase the power of the many models pattern.\n", + "\n", + " \n", + "In this notebook, two datasets will be created: one with all 11,973 files and one with only 10 files that can be used to quickly test and debug. For each dataset, you'll be walked through the process of:\n", + "\n", + "1. Registering the blob container as a Datastore to the Workspace\n", + "2. Registering a tabular dataset to the Workspace" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } + } + }, + "source": [ + "### 2.1 Data Preparation\n", + "The OJ data is available in the public blob container. The data is split to be used for training and for inferencing. For the current dataset, the data was split on time column ('WeekStarting') before and after '1992-5-28' .\n", + "\n", + "The container has\n", + "
    \n", + "
  1. 'oj-data-tabular' and 'oj-inference-tabular' folders that contain training and inference data, respectively, for the 11,973 models.\n", +
  2. 'oj-data-small-tabular' and 'oj-inference-small-tabular' folders that contain training and inference data for 10 models.\n", +
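For reference, the train/inference split described above is just a cutoff on the time column. Here is a minimal pandas sketch of reproducing that split on your own data; the file name `oj_sales.csv` is hypothetical, since the public container already ships pre-split data:

```
import pandas as pd

# Hypothetical local file; the public container already provides pre-split data.
df = pd.read_csv("oj_sales.csv", parse_dates=["WeekStarting"])

cutoff = pd.Timestamp("1992-05-28")
train_df = df[df["WeekStarting"] <= cutoff]     # rows up to the cutoff -> training
inference_df = df[df["WeekStarting"] > cutoff]  # later rows -> inference
```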
\n", + "\n", + "To create the [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) needed for the ParallelRunStep, you first need to register the blob container to the workspace." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "nteract": { + "transient": { + "deleting": false + } } + }, + "source": [ + " To use your own data, put it in a folder in a blob store. As shown, the data can be in one file or multiple files. We can then register a datastore using that blob container, as shown below.\n", + " \n", + "

What the sample data in the blob store looks like

\n", + "\n", + "['oj-data-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)
\n", + "![image-4.png](mm-1.png)\n", + "\n", + "['oj-inference-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n", + "![image-3.png](mm-2.png)\n", + "\n", + "['oj-data-small-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n", + "\n", + "![image-5.png](mm-3.png)\n", + "\n", + "['oj-inference-small-tabular'](https://ms.portal.azure.com/#blade/Microsoft_Azure_Storage/ContainerMenuBlade/overview/storageAccountId/%2Fsubscriptions%2F102a16c3-37d3-48a8-9237-4c9b1e8e80e0%2FresourceGroups%2FAutoMLSampleNotebooksData%2Fproviders%2FMicrosoft.Storage%2FstorageAccounts%2Fautomlsamplenotebookdata/path/automl-sample-notebook-data/etag/%220x8D84EAA65DE50B7%22/defaultEncryptionScope/%24account-encryption-key/denyEncryptionScopeOverride//defaultId//publicAccessVal/Container)\n", + "![image-6.png](mm-4.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### 2.2 Register the blob container as DataStore\n", + "\n", + "A Datastore is a place where data can be stored that is then made accessible to a compute either by means of mounting or copying the data to the compute target.\n", + "\n", + "Please refer to [Datastore](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.core.datastore(class)?view=azure-ml-py) documentation on how to access data from Datastore.\n", + "\n", + "In this next step, we will be registering blob storage as datastore to the Workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Datastore\n", + "\n", + "# Please change the following to point to your own blob container and pass in account_key\n", + "blob_datastore_name = \"automl_many_models\"\n", + "container_name = \"automl-sample-notebook-data\"\n", + "account_name = \"automlsamplenotebookdata\"\n", + "\n", + "oj_datastore = Datastore.register_azure_blob_container(\n", + " workspace=ws,\n", + " datastore_name=blob_datastore_name,\n", + " container_name=container_name,\n", + " account_name=account_name,\n", + " create_if_not_exists=True,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### 2.3 Using tabular datasets \n", + "\n", + "Now that the datastore is available from the Workspace, [TabularDataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py) can be created. Datasets in Azure Machine Learning are references to specific data in a Datastore. 
We are using TabularDataset so that users whose data is in one or many files (*.parquet or *.csv), and who have not yet split it up by the group columns needed for training, can do so using the out-of-the-box 'partition_by' feature of TabularDataset (a minimal sketch follows the compute-target section below)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613007017296 + } + }, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "\n", + "ds_name_small = \"oj-data-small-tabular\"\n", + "input_ds_small = Dataset.Tabular.from_delimited_files(\n", + "    path=oj_datastore.path(ds_name_small + \"/\"), validate=False\n", + ")\n", + "\n", + "inference_name_small = \"oj-inference-small-tabular\"\n", + "inference_ds_small = Dataset.Tabular.from_delimited_files(\n", + "    path=oj_datastore.path(inference_name_small + \"/\"), validate=False\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 3.0 Build the training pipeline\n", + "Now that the dataset, Workspace, and datastore are set up, we can put together a pipeline for training.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Choose a compute target\n", + "\n", + "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "**Creation of AmlCompute takes approximately 5 minutes.**\n", + "\n", + "If the AmlCompute with that name is already in your workspace, this code will skip the creation process. As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read this [article](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-manage-quotas) on the default limits and how to request more quota." + ] + },
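As a reference for the `partition_by` support mentioned in section 2.3 above, here is a minimal sketch. It assumes your unpartitioned files live under a hypothetical `oj/raw/` folder on the default datastore (`dstore` from section 1.0); `partition_by` writes one partition per group back to the datastore and returns a partitioned TabularDataset:

```
from azureml.core import Dataset

# Load the unpartitioned data (one or many *.csv files) from a hypothetical folder.
full_ds = Dataset.Tabular.from_delimited_files(path=dstore.path("oj/raw/"))

# Repartition by the grouping columns used for many models training.
partitioned_ds = full_ds.partition_by(
    partition_keys=["Store", "Brand"],
    target=(dstore, "oj/partitioned/"),
    name="oj_partitioned",
)
```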
+ { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613007037308 + } + }, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "\n", + "# Name your cluster\n", + "compute_name = \"mm-compute\"\n", + "\n", + "\n", + "if compute_name in ws.compute_targets:\n", + "    compute_target = ws.compute_targets[compute_name]\n", + "    if compute_target and type(compute_target) is AmlCompute:\n", + "        print(\"Found compute target: \" + compute_name)\n", + "else:\n", + "    print(\"Creating a new compute target...\")\n", + "    provisioning_config = AmlCompute.provisioning_configuration(\n", + "        vm_size=\"STANDARD_D16S_V3\", max_nodes=20\n", + "    )\n", + "    # Create the compute target\n", + "    compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)\n", + "\n", + "    # Can poll for a minimum number of nodes and for a specific timeout.\n", + "    # If no min node count is provided it will use the scale settings for the cluster\n", + "    compute_target.wait_for_completion(\n", + "        show_output=True, min_node_count=None, timeout_in_minutes=20\n", + "    )\n", + "\n", + "    # For a more detailed view of current cluster status, use the 'status' property\n", + "    print(compute_target.status.serialize())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up training parameters\n", + "\n", + "This dictionary defines the AutoML and many models settings. For this forecasting task we need to define several settings including the name of the time column, the maximum forecast horizon, and the partition column name definition.\n", + "\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **task** | forecasting |\n", + "| **primary_metric** | This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score,
normalized_mean_absolute_error |\n", + "| **blocked_models** | Blocked models won't be used by AutoML. |\n", + "| **iteration_timeout_minutes** | Maximum amount of time in minutes that the model can train. This is optional but provides customers with greater control on exit criteria. |\n", + "| **iterations** | Number of models to train. This is optional but provides customers with greater control on exit criteria. |\n", + "| **experiment_timeout_hours** | Maximum amount of time in hours that the experiment can take before it terminates. This is optional but provides customers with greater control on exit criteria. |\n", + "| **label_column_name** | The name of the label column. |\n", + "| **forecast_horizon** | The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly). Periods are inferred from your data. |\n", + "| **n_cross_validations** | Number of cross validation splits. Rolling Origin Validation is used to split time-series in a temporally consistent way. |\n", + "| **enable_early_stopping** | Flag to enable early termination if the score is not improving in the short term. |\n", + "| **time_column_name** | The name of your time column. |\n", + "| **enable_engineered_explanations** | Engineered feature explanations will be downloaded if enable_engineered_explanations flag is set to True. By default it is set to False to save storage space. |\n", + "| **time_series_id_column_name** | The column names used to uniquely identify timeseries in data that has multiple rows with the same timestamp. |\n", + "| **track_child_runs** | Flag to disable tracking of child runs. Only best run is tracked if the flag is set to False (this includes the model and metrics of the run). |\n", + "| **pipeline_fetch_max_batch_size** | Determines how many pipelines (training algorithms) to fetch at a time for training, this helps reduce throttling when training at large scale. |\n", + "| **partition_column_names** | The names of columns used to group your models. For timeseries, the groups must not split up individual time-series. That is, each group must contain one or more whole time-series. |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "gather": { + "logged": 1613007061544 + } + }, + "outputs": [], + "source": [ + "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", + " ManyModelsTrainParameters,\n", + ")\n", + "\n", + "partition_column_names = [\"Store\", \"Brand\"]\n", + "automl_settings = {\n", + " \"task\": \"forecasting\",\n", + " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", + " \"iteration_timeout_minutes\": 10, # This needs to be changed based on the dataset. 
We ask customers to explore how long training takes before setting this value\n", + "    \"iterations\": 15,\n", + "    \"experiment_timeout_hours\": 0.25,\n", + "    \"label_column_name\": \"Quantity\",\n", + "    \"n_cross_validations\": 3,\n", + "    \"time_column_name\": \"WeekStarting\",\n", + "    \"drop_column_names\": \"Revenue\",\n", + "    \"max_horizon\": 6,\n", + "    \"grain_column_names\": partition_column_names,\n", + "    \"track_child_runs\": False,\n", + "}\n", + "\n", + "mm_parameters = ManyModelsTrainParameters(\n", + "    automl_settings=automl_settings, partition_column_names=partition_column_names\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up many models pipeline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The parallel run step is leveraged to train multiple models at once. To configure the ParallelRunConfig you will need to determine the appropriate number of workers and nodes for your use case. The process_count_per_node is based on the number of cores of the compute VM. The node_count determines the number of compute nodes to use; increasing the node count will speed up the training process.\n", + "\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **experiment** | The experiment used for training. |\n", + "| **train_data** | The file dataset to be used as input to the training run. |\n", + "| **node_count** | The number of compute nodes to be used for running the user script. We recommend starting with 3 and increasing the node_count if the training time is taking too long. |\n", + "| **process_count_per_node** | Process count per node; we recommend a 2:1 ratio of number of cores to number of processes per node, e.g. if a node has 16 cores, configure 8 or fewer processes per node for optimal performance. |\n", + "| **train_pipeline_parameters** | The set of configuration parameters defined in the previous section. |\n", + "\n", + "Calling this method will create a new aggregated dataset which is generated dynamically on pipeline execution." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", + "\n", + "\n", + "training_pipeline_steps = AutoMLPipelineBuilder.get_many_models_train_steps(\n", + "    experiment=experiment,\n", + "    train_data=input_ds_small,\n", + "    compute_target=compute_target,\n", + "    node_count=2,\n", + "    process_count_per_node=8,\n", + "    run_invocation_timeout=920,\n", + "    train_pipeline_parameters=mm_parameters,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Pipeline\n", + "\n", + "training_pipeline = Pipeline(ws, steps=training_pipeline_steps)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Submit the pipeline to run\n", + "Next we submit our pipeline to run. The whole training pipeline takes about 40 minutes using a STANDARD_D16S_V3 VM with our current ParallelRunConfig settings." + ] + },
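Optionally, once `training_run` exists (it is created in the next cell), you can watch progress from the notebook instead of only blocking on completion. A small sketch, assuming the azureml-widgets package is installed in the notebook environment:

```
# Live widget view of the pipeline steps (requires azureml-widgets).
from azureml.widgets import RunDetails

RunDetails(training_run).show()

# Or poll the status programmatically.
print(training_run.get_status())
```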
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_run = experiment.submit(training_pipeline)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Check the run status: if training_run is in a completed state, continue to forecasting; if it is in any other state, check the portal for failures." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 5.0 Publish and schedule the train pipeline (Optional)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 5.1 Publish the pipeline\n", + "\n", + "Once you have a pipeline you're happy with, you can publish it so you can call it programmatically later on. See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# published_pipeline = training_pipeline.publish(name = 'automl_train_many_models',\n", + "# description = 'train many models',\n", + "# version = '1',\n", + "# continue_on_step_failure = False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 5.2 Schedule the pipeline\n", + "You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain models every month or based on another trigger such as data drift." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n", + "\n", + "# training_pipeline_id = published_pipeline.id\n", + "\n", + "# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n", + "# recurring_schedule = Schedule.create(ws, name=\"automl_training_recurring_schedule\",\n", + "# description=\"Schedule Training Pipeline to run on the first day of every month\",\n", + "# pipeline_id=training_pipeline_id,\n", + "# experiment_name=experiment.name,\n", + "# recurrence=recurrence)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 6.0 Forecasting" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set up output dataset for inference data\n", + "The output of inference can be represented as an [OutputFileDatasetConfig](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.output_dataset_config.outputdatasetconfig?view=azure-ml-py) object, and an OutputFileDatasetConfig can be registered as a dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.data import OutputFileDatasetConfig\n", + "\n", + "output_inference_data_ds = OutputFileDatasetConfig(\n", + "    name=\"many_models_inference_output\", destination=(dstore, \"oj/inference_data/\")\n", + ").register_on_complete(name=\"oj_inference_data_ds\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For many models we need to provide the ManyModelsInferenceParameters object.\n", + "\n", + "#### ManyModelsInferenceParameters arguments\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **partition_column_names** | List of column names that identify groups. |\n", + "| **target_column_name** | \[Optional] Column name only if the inference dataset has the target. |\n", + "| **time_column_name** | \[Optional] Column name only if it is timeseries. |\n", + "| **many_models_run_id** | \[Optional] Many models run id where models were trained. |\n", + "\n", + "#### get_many_models_batch_inference_steps arguments\n", + "| Property | Description|\n", + "| :--------------- | :------------------- |\n", + "| **experiment** | The experiment used for the inference run. |\n", + "| **inference_data** | The data to use for inferencing. It should have the same schema as used for training. |\n", + "| **compute_target** | The compute target that runs the inference pipeline. |\n", + "| **node_count** | The number of compute nodes to be used for running the user script. We recommend starting with the number of cores per node (varies by compute SKU). |\n", + "| **train_run_id** | \[Optional] The run id of the hierarchy training; by default it is the latest successful training many model run in the experiment. |\n", + "| **train_experiment_name** | \[Optional] The train experiment that contains the train pipeline. This is only needed when the train pipeline is not in the same experiment as the inference pipeline. |\n", + "| **process_count_per_node** | \[Optional] The number of processes per node, by default it's 4. |" + ] + },
+ { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.automl.pipeline.steps import AutoMLPipelineBuilder\n", + "from azureml.train.automl.runtime._many_models.many_models_parameters import (\n", + "    ManyModelsInferenceParameters,\n", + ")\n", + "\n", + "mm_parameters = ManyModelsInferenceParameters(\n", + "    partition_column_names=[\"Store\", \"Brand\"],\n", + "    time_column_name=\"WeekStarting\",\n", + "    target_column_name=\"Quantity\",\n", + ")\n", + "\n", + "inference_steps = AutoMLPipelineBuilder.get_many_models_batch_inference_steps(\n", + "    experiment=experiment,\n", + "    inference_data=inference_ds_small,\n", + "    node_count=2,\n", + "    process_count_per_node=8,\n", + "    compute_target=compute_target,\n", + "    run_invocation_timeout=300,\n", + "    output_datastore=output_inference_data_ds,\n", + "    train_run_id=training_run.id,\n", + "    train_experiment_name=training_run.experiment.name,\n", + "    inference_pipeline_parameters=mm_parameters,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core import Pipeline\n", + "\n", + "inference_pipeline = Pipeline(ws, steps=inference_steps)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "inference_run = experiment.submit(inference_pipeline)\n", + "inference_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieve results\n", + "\n", + "The forecasting pipeline forecasts the orange juice quantity for a Store by Brand. The pipeline returns one file with the predictions for each store and outputs the result to the forecasting_output Blob container. The details of the blob container are listed in 'forecasting_output.txt' under Outputs+logs. \n", + "\n", + "The following code snippet:\n", + "1. Downloads the contents of the output folder that is passed in the parallel run step \n", + "2. Reads the parallel_run_step.txt file that has the predictions into a pandas DataFrame, and \n", + "3. Displays the top 10 rows of the predictions" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.contrib.automl.pipeline.steps.utilities import get_output_from_mm_pipeline\n", + "\n", + "forecasting_results_name = \"forecasting_results\"\n", + "forecasting_output_name = \"many_models_inference_output\"\n", + "forecast_file = get_output_from_mm_pipeline(\n", + "    inference_run, forecasting_results_name, forecasting_output_name\n", + ")\n", + "df = pd.read_csv(forecast_file, delimiter=\" \", header=None)\n", + "df.columns = [\n", + "    \"Week Starting\",\n", + "    \"Store\",\n", + "    \"Brand\",\n", + "    \"Quantity\",\n", + "    \"Advert\",\n", + "    \"Price\",\n", + "    \"Revenue\",\n", + "    \"Predicted\",\n", + "]\n", + "print(\n", + "    \"Prediction has \", df.shape[0], \" rows. The first 10 rows are displayed below.\"\n", + ")\n", + "df.head(10)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 7.0 Publish and schedule the inference pipeline (Optional)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 7.1 Publish the pipeline\n", + "\n", + "Once you have a pipeline you're happy with, you can publish it so you can call it programmatically later on. 
See this [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-create-your-first-pipeline#publish-a-pipeline) for additional information on publishing and calling pipelines." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# published_pipeline_inf = inference_pipeline.publish(name = 'automl_forecast_many_models',\n", + "# description = 'forecast many models',\n", + "# version = '1',\n", + "# continue_on_step_failure = False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### 7.2 Schedule the pipeline\n", + "You can also [schedule the pipeline](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-schedule-pipelines) to run on a time-based or change-based schedule. This could be used to automatically retrain or forecast models every month or based on another trigger such as data drift." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# from azureml.pipeline.core import Schedule, ScheduleRecurrence\n", + "\n", + "# forecasting_pipeline_id = published_pipeline.id\n", + "\n", + "# recurrence = ScheduleRecurrence(frequency=\"Month\", interval=1, start_time=\"2020-01-01T09:00:00\")\n", + "# recurring_schedule = Schedule.create(ws, name=\"automl_forecasting_recurring_schedule\",\n", + "# description=\"Schedule Forecasting Pipeline to run on the first day of every week\",\n", + "# pipeline_id=forecasting_pipeline_id,\n", + "# experiment_name=experiment.name,\n", + "# recurrence=recurrence)" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "categories": [ + "how-to-use-azureml", + "automated-machine-learning" + ], + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.8" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/01_userfilesupdate.PNG b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/01_userfilesupdate.PNG new file mode 100644 index 000000000..6b46a9c02 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/01_userfilesupdate.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/Flow_map.png b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/Flow_map.png new file mode 100644 index 000000000..e895d0bcc Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/Flow_map.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/ai show.gif b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/ai show.gif new file mode 100644 index 000000000..98d280ae0 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/ai show.gif differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/computes_view.png 
b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/computes_view.png new file mode 100644 index 000000000..634ab83cb Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/computes_view.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/create_notebook_vm.png b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/create_notebook_vm.png new file mode 100644 index 000000000..59f632920 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/create_notebook_vm.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/mmsa-overview.png b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/mmsa-overview.png new file mode 100644 index 000000000..d95817c80 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/mmsa-overview.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/mmsa.png b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/mmsa.png new file mode 100644 index 000000000..2e0f12f7e Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/mmsa.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/terminal.png b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/terminal.png new file mode 100644 index 000000000..d0d342db8 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/images/terminal.png differ diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-many-models/update_env.yml b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/update_env.yml new file mode 100644 index 000000000..d0b193dab --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/forecasting-many-models/update_env.yml @@ -0,0 +1,3 @@ +dependencies: +- pip: + - azureml-contrib-automl-pipeline-steps diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb index d41a93bc0..6f4967b9c 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.ipynb @@ -1,834 +1,844 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Orange Juice Sales Forecasting**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#introduction)\n", - "1. [Setup](#setup)\n", - "1. [Compute](#compute)\n", - "1. 
[Data](#data)\n", - "1. [Train](#train)\n", - "1. [Forecast](#forecast)\n", - "1. [Operationalize](#operationalize)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.\n", - "\n", - "Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n", - "\n", - "The examples in the following code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import logging\n", - "\n", - "import azureml.core\n", - "import pandas as pd\n", - "from azureml.automl.core.featurization import FeaturizationConfig\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.train.automl import AutoMLConfig\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for the run history container in the workspace\n", - "experiment_name = \"automl-ojforecasting\"\n", - "\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"SKU\"] = ws.sku\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Run History Name\"] = experiment_name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Compute\n", - "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "#### Creation of AmlCompute takes approximately 5 minutes. 
\n", - "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "amlcompute_cluster_name = \"oj-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print(\"Found existing cluster, use it.\")\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_D12_V2\", max_nodes=6\n", - " )\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Data\n", - "You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "time_column_name = \"WeekStarting\"\n", - "data = pd.read_csv(\"dominicks_OJ.csv\", parse_dates=[time_column_name])\n", - "\n", - "# Drop the columns 'logQuantity' as it is a leaky feature.\n", - "data.drop(\"logQuantity\", axis=1, inplace=True)\n", - "\n", - "data.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also include the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. \n", - "\n", - "The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. 
To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series: " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "time_series_id_column_names = [\"Store\", \"Brand\"]\n", - "nseries = data.groupby(time_series_id_column_names).ngroups\n", - "print(\"Data contains {0} individual time-series.\".format(nseries))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For demonstration purposes, we extract sales time-series for just a few of the stores:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "use_stores = [2, 5, 8]\n", - "data_subset = data[data.Store.isin(use_stores)]\n", - "nseries = data_subset.groupby(time_series_id_column_names).ngroups\n", - "print(\"Data subset contains {0} individual time-series.\".format(nseries))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Data Splitting\n", - "We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "n_test_periods = 20\n", - "\n", - "\n", - "def split_last_n_by_series_id(df, n):\n", - " \"\"\"Group df by series identifiers and split on last n rows for each group.\"\"\"\n", - " df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time\n", - " time_series_id_column_names, group_keys=False\n", - " )\n", - " df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])\n", - " df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])\n", - " return df_head, df_tail\n", - "\n", - "\n", - "train, test = split_last_n_by_series_id(data_subset, n_test_periods)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Upload data to datastore\n", - "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace), is paired with the storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." 
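As a quick illustration of those lazily-evaluated, immutable operations (using the `train_dataset` registered in the next cell): each call below returns a new TabularDataset, and no data is actually read until `to_pandas_dataframe()` runs.

```
# Chain lazy operations; nothing is loaded until to_pandas_dataframe().
preview = train_dataset.keep_columns(
    ["WeekStarting", "Store", "Brand", "Quantity"]
).take(5)
preview.to_pandas_dataframe()
```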
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.data.dataset_factory import TabularDatasetFactory\n", - "\n", - "datastore = ws.get_default_datastore()\n", - "train_dataset = TabularDatasetFactory.register_pandas_dataframe(\n", - "    train, target=(datastore, \"dataset/\"), name=\"dominicks_OJ_train\"\n", - ")\n", - "test_dataset = TabularDatasetFactory.register_pandas_dataframe(\n", - "    test, target=(datastore, \"dataset/\"), name=\"dominicks_OJ_test\"\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Create dataset for training" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "train_dataset.to_pandas_dataframe().tail()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Modeling\n", - "\n", - "For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:\n", - "* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span \n", - "* Impute missing values in the target (via forward-fill) and feature columns (using median column values) \n", - "* Create features based on time series identifiers to enable fixed effects across different series\n", - "* Create time-based features to assist in learning seasonal patterns\n", - "* Encode categorical variables to numeric quantities\n", - "\n", - "In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking to train multiple models for different time-series, please see the many-models notebook.\n", - "\n", - "You are almost ready to start an AutoML training job. First, we need to separate the target column from the rest of the DataFrame: " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "target_column_name = \"Quantity\"" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Customization\n", - "\n", - "The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:\n", - "\n", - "1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, can be treated as Categorical columns which need to be converted to categorical, while some can be treated as epoch timestamps which need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.\n", - "2. Transformer parameters update: Currently supports parameter changes for Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all missing values with 0. To achieve this behavior, these columns need to be configured as constant imputation with `fill_value` 0.\n", - "3. Drop columns: Columns to drop from being featurized. These usually are the columns which are leaky or the columns that contain no useful data." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "sample-featurizationconfig-remarks" - ] - }, - "outputs": [], - "source": [ - "featurization_config = FeaturizationConfig()\n", - "# Force the CPWVOL5 feature to be numeric type.\n", - "featurization_config.add_column_purpose(\"CPWVOL5\", \"Numeric\")\n", - "# Fill missing values in the target column, Quantity, with zeros.\n", - "featurization_config.add_transformer_params(\n", - "    \"Imputer\", [\"Quantity\"], {\"strategy\": \"constant\", \"fill_value\": 0}\n", - ")\n", - "# Fill missing values in the INCOME column with median value.\n", - "featurization_config.add_transformer_params(\n", - "    \"Imputer\", [\"INCOME\"], {\"strategy\": \"median\"}\n", - ")\n", - "# Fill missing values in the Price column with forward fill (last value carried forward).\n", - "featurization_config.add_transformer_params(\"Imputer\", [\"Price\"], {\"strategy\": \"ffill\"})" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Forecasting Parameters\n", - "To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.\n", - "\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**time_column_name**|The name of your time column.|\n", - "|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n", - "|**time_series_id_column_names**|This optional parameter represents the column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined or incorrectly defined, time series identifiers will be created automatically if they exist.|\n", - "|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information.|" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. 
Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.\n", - "\n", - "For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given or incorrectly given, AutoML automatically creates time_series_id columns if they exist. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.\n", - "\n", - "The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n", - "\n", - "We note here that AutoML can sweep over two types of time-series models:\n", - "* Models that are trained for each series such as ARIMA and Facebook's Prophet.\n", - "* Models trained across multiple time-series using a regression approach.\n", - "\n", - "In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. \n", - "\n", - "\n", - "Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.\n", - "\n", - "Here is a summary of AutoMLConfig parameters used for training the OJ model:\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|forecasting|\n", - "|**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score,
normalized_mean_absolute_error\n", - "|**experiment_timeout_hours**|Experimentation timeout in hours.|\n", - "|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|\n", - "|**training_data**|Input dataset, containing both features and label column.|\n", - "|**label_column_name**|The name of the label column.|\n", - "|**compute_target**|The remote compute for training.|\n", - "|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|\n", - "|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|\n", - "|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|\n", - "|**debug_log**|Log file path for writing debugging information|\n", - "|**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|\n", - "|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", - "\n", - "forecasting_parameters = ForecastingParameters(\n", - " time_column_name=time_column_name,\n", - " forecast_horizon=n_test_periods,\n", - " freq=\"W-THU\", # Set the forecast frequency to be weekly (start on each Thursday)\n", - ")\n", - "\n", - "automl_config = AutoMLConfig(\n", - " task=\"forecasting\",\n", - " debug_log=\"automl_oj_sales_errors.log\",\n", - " primary_metric=\"normalized_mean_absolute_error\",\n", - " experiment_timeout_hours=0.25,\n", - " training_data=train_dataset,\n", - " label_column_name=target_column_name,\n", - " compute_target=compute_target,\n", - " enable_early_stopping=True,\n", - " featurization=featurization_config,\n", - " n_cross_validations=3,\n", - " verbosity=logging.INFO,\n", - " max_cores_per_iteration=-1,\n", - " forecasting_parameters=forecasting_parameters,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.\n", - "Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output=False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the Best Run details\n", - "Below we retrieve the best Run object from among all the runs in the experiment." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run = remote_run.get_best_child()\n", - "model_name = best_run.properties[\"model_name\"]\n", - "best_run" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Transparency\n", - "\n", - "View updated featurization summary" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Download the featurization summary JSON file locally\n", - "best_run.download_file(\"outputs/featurization_summary.json\", \"featurization_summary.json\")\n", - "\n", - "# Render the JSON as a pandas DataFrame\n", - "with open(\"featurization_summary.json\", \"r\") as f:\n", - "    records = json.load(f)\n", - "fs = pd.DataFrame.from_records(records)\n", - "\n", - "# View a summary of the featurization \n", - "fs[[\"RawFeatureName\", \"TypeDetected\", \"Dropped\", \"EngineeredFeatureCount\", \"Transformations\"]]" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Forecast\n", - "\n", - "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.\n", - "\n", - "The inference will run on a remote compute. In this example, it will re-use the training compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_experiment = Experiment(ws, experiment_name + \"_inference\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieving forecasts from the model\n", - "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from run_forecast import run_remote_inference\n", - "\n", - "remote_run_infer = run_remote_inference(\n", - "    test_experiment=test_experiment,\n", - "    compute_target=compute_target,\n", - "    train_run=best_run,\n", - "    test_dataset=test_dataset,\n", - "    target_column_name=target_column_name,\n", - ")\n", - "remote_run_infer.wait_for_completion(show_output=False)\n", - "\n", - "# download the forecast file to the local machine\n", - "remote_run_infer.download_file(\"outputs/predictions.csv\", \"predictions.csv\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Evaluate\n", - "\n", - "To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). 
For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n", - "\n", - "We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# load forecast data frame\n", - "fcst_df = pd.read_csv(\"predictions.csv\", parse_dates=[time_column_name])\n", - "fcst_df.head()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.shared import constants\n", - "from azureml.automl.runtime.shared.score import scoring\n", - "from matplotlib import pyplot as plt\n", - "\n", - "# use automl scoring module\n", - "scores = scoring.score_regression(\n", - " y_test=fcst_df[target_column_name],\n", - " y_pred=fcst_df[\"predicted\"],\n", - " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", - ")\n", - "\n", - "print(\"[Test data scores]\\n\")\n", - "for key, value in scores.items():\n", - " print(\"{}: {:.3f}\".format(key, value))\n", - "\n", - "# Plot outputs\n", - "%matplotlib inline\n", - "test_pred = plt.scatter(fcst_df[target_column_name], fcst_df[\"predicted\"], color=\"b\")\n", - "test_test = plt.scatter(\n", - " fcst_df[target_column_name], fcst_df[target_column_name], color=\"g\"\n", - ")\n", - "plt.legend(\n", - " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", - ")\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Operationalize" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "_Operationalization_ means getting the model into the cloud so that other can run it after you close the notebook. We will create a docker running on Azure Container Instances with the model." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "description = \"AutoML OJ forecaster\"\n", - "tags = None\n", - "model = remote_run.register_model(\n", - " model_name=model_name, description=description, tags=tags\n", - ")\n", - "\n", - "print(remote_run.model_id)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Develop the scoring script\n", - "\n", - "For the deployment we need a function which will run the forecast on serialized data. It can be obtained from the best_run." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "script_file_name = \"score_fcast.py\"\n", - "best_run.download_file(\"outputs/scoring_file_v_1_0_0.py\", script_file_name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Deploy the model as a Web Service on Azure Container Instance" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.model import InferenceConfig\n", - "from azureml.core.webservice import AciWebservice\n", - "from azureml.core.webservice import Webservice\n", - "from azureml.core.model import Model\n", - "\n", - "inference_config = InferenceConfig(\n", - " environment=best_run.get_environment(), entry_script=script_file_name\n", - ")\n", - "\n", - "aciconfig = AciWebservice.deploy_configuration(\n", - " cpu_cores=2,\n", - " memory_gb=4,\n", - " tags={\"type\": \"automl-forecasting\"},\n", - " description=\"Automl forecasting sample service\",\n", - ")\n", - "\n", - "aci_service_name = \"automl-oj-forecast-01\"\n", - "print(aci_service_name)\n", - "aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n", - "aci_service.wait_for_deployment(True)\n", - "print(aci_service.state)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "aci_service.get_logs()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Call the service" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "\n", - "X_query = test.copy()\n", - "X_query.pop(target_column_name)\n", - "# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.\n", - "X_query[time_column_name] = X_query[time_column_name].astype(str)\n", - "# The Service object accept the complex dictionary, which is internally converted to JSON string.\n", - "# The section 'data' contains the data frame in the form of dictionary.\n", - "sample_quantiles = [0.025, 0.975]\n", - "test_sample = json.dumps(\n", - " {\"data\": X_query.to_dict(orient=\"records\"), \"quantiles\": sample_quantiles}\n", - ")\n", - "response = aci_service.run(input_data=test_sample)\n", - "# translate from networkese to datascientese\n", - "try:\n", - " res_dict = json.loads(response)\n", - " y_fcst_all = pd.DataFrame(res_dict[\"index\"])\n", - " y_fcst_all[time_column_name] = pd.to_datetime(\n", - " y_fcst_all[time_column_name], unit=\"ms\"\n", - " )\n", - " y_fcst_all[\"forecast\"] = res_dict[\"forecast\"]\n", - " y_fcst_all[\"prediction_interval\"] = res_dict[\"prediction_interval\"]\n", - "except:\n", - " print(res_dict)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "y_fcst_all.head()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Delete the web service if desired" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "serv = Webservice(ws, \"automl-oj-forecast-01\")\n", - "serv.delete() # don't do it accidentally" - ] - } - ], - "metadata": { - "authors": [ - { - "name": "jialiu" - } - ], - "category": "tutorial", - "celltoolbar": "Raw Cell Format", - "compute": [ - "Remote" - ], - "datasets": [ - "Orange Juice Sales" - ], - "deployment": [ - "Azure Container Instance" - ], - "exclude_from_index": false, - 
"framework": [ - "Azure ML AutoML" - ], - "friendly_name": "Forecasting orange juice sales with deployment", - "index_order": 1, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.9" - }, + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-orange-juice-sales/auto-ml-forecasting-orange-juice-sales.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Orange Juice Sales Forecasting**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#introduction)\n", + "1. [Setup](#setup)\n", + "1. [Compute](#compute)\n", + "1. [Data](#data)\n", + "1. [Train](#train)\n", + "1. [Forecast](#forecast)\n", + "1. [Operationalize](#operationalize)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "In this example, we use AutoML to train, select, and operationalize a time-series forecasting model for multiple time-series.\n", + "\n", + "Make sure you have executed the [configuration notebook](../../../configuration.ipynb) before running this notebook.\n", + "\n", + "The examples in the follow code samples use the University of Chicago's Dominick's Finer Foods dataset to forecast orange juice sales. Dominick's was a grocery chain in the Chicago metropolitan area." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import logging\n", + "\n", + "import azureml.core\n", + "import pandas as pd\n", + "from azureml.automl.core.featurization import FeaturizationConfig\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.train.automl import AutoMLConfig" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This notebook is compatible with Azure ML SDK version 1.35.0 or later." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem. 
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for the run history container in the workspace\n", + "experiment_name = \"automl-ojforecasting\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"SKU\"] = ws.sku\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute\n", + "You will need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "amlcompute_cluster_name = \"oj-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_D12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data\n", + "You are now ready to load the historical orange juice sales data. We will load the CSV file into a plain pandas DataFrame; the time column in the CSV is called _WeekStarting_, so it will be specially parsed into the datetime type." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "time_column_name = \"WeekStarting\"\n", + "data = pd.read_csv(\"dominicks_OJ.csv\", parse_dates=[time_column_name])\n", + "\n", + "# Drop the column 'logQuantity' as it is a leaky feature.\n", + "data.drop(\"logQuantity\", axis=1, inplace=True)\n", + "\n", + "data.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Each row in the DataFrame holds a quantity of weekly sales for an OJ brand at a single store. The data also includes the sales price, a flag indicating if the OJ brand was advertised in the store that week, and some customer demographic information based on the store location. For historical reasons, the data also includes the logarithm of the sales quantity. The Dominick's grocery data is commonly used to illustrate econometric modeling techniques where logarithms of quantities are generally preferred. \n", + "\n", + "The task is now to build a time-series model for the _Quantity_ column. It is important to note that this dataset is comprised of many individual time-series - one for each unique combination of _Store_ and _Brand_. To distinguish the individual time-series, we define the **time_series_id_column_names** - the columns whose values determine the boundaries between time-series: " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "time_series_id_column_names = [\"Store\", \"Brand\"]\n", + "nseries = data.groupby(time_series_id_column_names).ngroups\n", + "print(\"Data contains {0} individual time-series.\".format(nseries))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For demonstration purposes, we extract sales time-series for just a few of the stores:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "use_stores = [2, 5, 8]\n", + "data_subset = data[data.Store.isin(use_stores)]\n", + "nseries = data_subset.groupby(time_series_id_column_names).ngroups\n", + "print(\"Data subset contains {0} individual time-series.\".format(nseries))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Data Splitting\n", + "We now split the data into a training and a testing set for later forecast evaluation. The test set will contain the final 20 weeks of observed sales for each time-series. The splits should be stratified by series, so we use a group-by statement on the time series identifier columns." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "n_test_periods = 20\n", + "\n", + "\n", + "def split_last_n_by_series_id(df, n):\n", + " \"\"\"Group df by series identifiers and split on last n rows for each group.\"\"\"\n", + " df_grouped = df.sort_values(time_column_name).groupby( # Sort by ascending time\n", + " time_series_id_column_names, group_keys=False\n", + " )\n", + " df_head = df_grouped.apply(lambda dfg: dfg.iloc[:-n])\n", + " df_tail = df_grouped.apply(lambda dfg: dfg.iloc[-n:])\n", + " return df_head, df_tail\n", + "\n", + "\n", + "train, test = split_last_n_by_series_id(data_subset, n_test_periods)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Upload data to datastore\n", + "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with a storage account, which contains the default data store. We will use it to upload the train and test data and create [tabular datasets](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training and testing. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into a tabular representation." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.data.dataset_factory import TabularDatasetFactory\n", + "\n", + "datastore = ws.get_default_datastore()\n", + "train_dataset = TabularDatasetFactory.register_pandas_dataframe(\n", + " train, target=(datastore, \"dataset/\"), name=\"dominicks_OJ_train\"\n", + ")\n", + "test_dataset = TabularDatasetFactory.register_pandas_dataframe(\n", + " test, target=(datastore, \"dataset/\"), name=\"dominicks_OJ_test\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create dataset for training" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "train_dataset.to_pandas_dataframe().tail()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Modeling\n", + "\n", + "For forecasting tasks, AutoML uses pre-processing and estimation steps that are specific to time-series. AutoML will undertake the following pre-processing steps:\n", + "* Detect time-series sample frequency (e.g. hourly, daily, weekly) and create new records for absent time points to make the series regular. A regular time series has a well-defined frequency and has a value at every sample point in a contiguous time span \n", + "* Impute missing values in the target (via forward-fill) and feature columns (using median column values) \n", + "* Create features based on time series identifiers to enable fixed effects across different series\n", + "* Create time-based features to assist in learning seasonal patterns\n", + "* Encode categorical variables to numeric quantities\n", + "\n", + "In this notebook, AutoML will train a single, regression-type model across **all** time-series in a given training set. This allows the model to generalize across related series. If you're looking for training multiple models for different time-series, please see the many-models notebook.\n", + "\n", + "You are almost ready to start an AutoML training job. 
First, we need to separate the target column from the rest of the DataFrame: " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "target_column_name = \"Quantity\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Customization\n", + "\n", + "The featurization customization in forecasting is an advanced feature in AutoML which allows our customers to change the default forecasting featurization behaviors and column types through `FeaturizationConfig`. The supported scenarios include:\n", + "\n", + "1. Column purposes update: Override the feature type for the specified column. Currently supports DateTime, Categorical and Numeric. This customization can be used when the type of a column does not correctly reflect its purpose. Some numerical columns, for instance, should be treated as Categorical columns and need to be converted to categorical, while others represent epoch timestamps and need to be converted to datetime. To tell our SDK to correctly preprocess these columns, a configuration needs to be added with the columns and their desired types.\n", + "2. Transformer parameters update: Currently supports parameter changes for the Imputer only. Users can customize imputation methods. The supported imputing methods for the target column are constant and ffill (forward fill). The supported imputing methods for feature columns are mean, median, most frequent, constant and ffill (forward fill). This customization can be used when our customers know which imputation methods fit the input data best. For instance, some datasets use NaN to represent 0, in which case the correct behavior is to impute all missing values with 0. To achieve this behavior, these columns need to be configured for constant imputation with `fill_value` 0.\n", + "3. Drop columns: Columns to drop from being featurized. These are usually columns that are leaky or that contain no useful data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { "tags": [ - "None" - ], - "task": "Forecasting" + "sample-featurizationconfig-remarks" + ] + }, + "outputs": [], + "source": [ + "featurization_config = FeaturizationConfig()\n", + "# Force the CPWVOL5 feature to be numeric type.\n", + "featurization_config.add_column_purpose(\"CPWVOL5\", \"Numeric\")\n", + "# Fill missing values in the target column, Quantity, with zeros.\n", + "featurization_config.add_transformer_params(\n", + " \"Imputer\", [\"Quantity\"], {\"strategy\": \"constant\", \"fill_value\": 0}\n", + ")\n", + "# Fill missing values in the INCOME column with median value.\n", + "featurization_config.add_transformer_params(\n", + " \"Imputer\", [\"INCOME\"], {\"strategy\": \"median\"}\n", + ")\n", + "# Fill missing values in the Price column with forward fill (last value carried forward).\n", + "featurization_config.add_transformer_params(\"Imputer\", [\"Price\"], {\"strategy\": \"ffill\"})" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Forecasting Parameters\n", + "To define forecasting parameters for your experiment training, you can leverage the ForecastingParameters class. The table below details the forecasting parameters we will be passing into our experiment.\n", + "\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**time_column_name**|The name of your time column.|\n", + "|**forecast_horizon**|The forecast horizon is how many periods forward you would like to forecast. 
This integer horizon is in units of the timeseries frequency (e.g. daily, weekly).|\n", + "|**time_series_id_column_names**|The column names used to uniquely identify the time series in data that has multiple rows with the same timestamp. If the time series identifiers are not defined, the data set is assumed to be one time series.|\n", + "|**freq**|Forecast frequency. This optional parameter represents the period with which the forecast is desired, for example, daily, weekly, yearly, etc. Use this parameter for the correction of time series containing irregular data points or for padding of short time series. The frequency needs to be a pandas offset alias. Please refer to [pandas documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects) for more information." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train\n", + "\n", + "The [AutoMLConfig](https://docs.microsoft.com/en-us/python/api/azureml-train-automl-client/azureml.train.automl.automlconfig.automlconfig?view=azure-ml-py) object defines the settings and data for an AutoML training job. Here, we set necessary inputs like the task type, the number of AutoML iterations to try, the training data, and cross-validation parameters.\n", + "\n", + "For forecasting tasks, there are some additional parameters that can be set in the `ForecastingParameters` class: the name of the column holding the date/time, the timeseries id column names, and the maximum forecast horizon. A time column is required for forecasting, while the time_series_id is optional. If time_series_id columns are not given, AutoML assumes that the whole dataset is a single time-series. We also pass a list of columns to drop prior to modeling. The _logQuantity_ column is completely correlated with the target quantity, so it must be removed to prevent a target leak.\n", + "\n", + "The forecast horizon is given in units of the time-series frequency; for instance, the OJ series frequency is weekly, so a horizon of 20 means that a trained model will estimate sales up to 20 weeks beyond the latest date in the training data for each series. In this example, we set the forecast horizon to the number of samples per series in the test set (n_test_periods). Generally, the value of this parameter will be dictated by business needs. For example, a demand planning application that estimates the next month of sales should set the horizon according to suitable planning time-scales. Please see the [energy_demand notebook](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand) for more discussion of forecast horizon.\n", + "\n", + "We note here that AutoML can sweep over two types of time-series models:\n", + "* Models that are trained for each series such as ARIMA and Facebook's Prophet.\n", + "* Models trained across multiple time-series using a regression approach.\n", + "\n", + "In the first case, AutoML loops over all time-series in your dataset and trains one model (e.g. AutoArima or Prophet, as the case may be) for each series. This can result in long runtimes to train these models if there are a lot of series in the data. One way to mitigate this problem is to fit models for different series in parallel if you have multiple compute cores available. To enable this behavior, set the `max_cores_per_iteration` parameter in your AutoMLConfig as shown in the example in the next cell. 
\n", + "\n", + "\n", + "Finally, a note about the cross-validation (CV) procedure for time-series data. AutoML uses out-of-sample error estimates to select a best pipeline/model, so it is important that the CV fold splitting is done correctly. Time-series can violate the basic statistical assumptions of the canonical K-Fold CV strategy, so AutoML implements a [rolling origin validation](https://robjhyndman.com/hyndsight/tscv/) procedure to create CV folds for time-series data. To use this procedure, you just need to specify the desired number of CV folds in the AutoMLConfig object. It is also possible to bypass CV and use your own validation set by setting the *validation_data* parameter of AutoMLConfig.\n", + "\n", + "Here is a summary of AutoMLConfig parameters used for training the OJ model:\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|forecasting|\n", + "|**primary_metric**|This is the metric that you want to optimize.
Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, and 
normalized_mean_absolute_error\n", + "|**experiment_timeout_hours**|Experimentation timeout in hours.|\n", + "|**enable_early_stopping**|If early stopping is on, training will stop when the primary metric is no longer improving.|\n", + "|**training_data**|Input dataset, containing both features and label column.|\n", + "|**label_column_name**|The name of the label column.|\n", + "|**compute_target**|The remote compute for training.|\n", + "|**n_cross_validations**|Number of cross-validation folds to use for model/pipeline selection|\n", + "|**enable_voting_ensemble**|Allow AutoML to create a Voting ensemble of the best performing models|\n", + "|**enable_stack_ensemble**|Allow AutoML to create a Stack ensemble of the best performing models|\n", + "|**debug_log**|Log file path for writing debugging information|\n", + "|**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*.|\n", + "|**max_cores_per_iteration**|Maximum number of cores to utilize per iteration. A value of -1 indicates all available cores should be used" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.forecasting_parameters import ForecastingParameters\n", + "\n", + "forecasting_parameters = ForecastingParameters(\n", + " time_column_name=time_column_name,\n", + " forecast_horizon=n_test_periods,\n", + " time_series_id_column_names=time_series_id_column_names,\n", + " freq=\"W-THU\", # Set the forecast frequency to be weekly (start on each Thursday)\n", + ")\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"forecasting\",\n", + " debug_log=\"automl_oj_sales_errors.log\",\n", + " primary_metric=\"normalized_mean_absolute_error\",\n", + " experiment_timeout_hours=0.25,\n", + " training_data=train_dataset,\n", + " label_column_name=target_column_name,\n", + " compute_target=compute_target,\n", + " enable_early_stopping=True,\n", + " featurization=featurization_config,\n", + " n_cross_validations=3,\n", + " verbosity=logging.INFO,\n", + " max_cores_per_iteration=-1,\n", + " forecasting_parameters=forecasting_parameters,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can now submit a new training run. Depending on the data and number of iterations this operation may take several minutes.\n", + "Information from each iteration will be printed to the console. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run = experiment.submit(automl_config, show_output=False)" + ] }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieve the Best Run details\n", + "Below we retrieve the best Run object from among all the runs in the experiment." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run = remote_run.get_best_child()\n", + "model_name = best_run.properties[\"model_name\"]\n", + "best_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Transparency\n", + "\n", + "View updated featurization summary" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the featurization summary JSON file locally\n", + "best_run.download_file(\n", + " \"outputs/featurization_summary.json\", \"featurization_summary.json\"\n", + ")\n", + "\n", + "# Render the JSON as a pandas DataFrame\n", + "with open(\"featurization_summary.json\", \"r\") as f:\n", + " records = json.load(f)\n", + "fs = pd.DataFrame.from_records(records)\n", + "\n", + "# View a summary of the featurization\n", + "fs[\n", + " [\n", + " \"RawFeatureName\",\n", + " \"TypeDetected\",\n", + " \"Dropped\",\n", + " \"EngineeredFeatureCount\",\n", + " \"Transformations\",\n", + " ]\n", + "]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Forecast\n", + "\n", + "Now that we have retrieved the best pipeline/model, it can be used to make predictions on test data. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.\n", + "\n", + "The inference will run on a remote compute. In this example, it will re-use the training compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_experiment = Experiment(ws, experiment_name + \"_inference\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieving forecasts from the model\n", + "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To produce predictions on the test set, we need to know the feature values at all dates in the test set. This requirement is somewhat reasonable for the OJ sales data since the features mainly consist of price, which is usually set in advance, and customer demographics which are approximately constant for each store over the 20 week forecast horizon in the testing data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from run_forecast import run_remote_inference\n", + "\n", + "remote_run_infer = run_remote_inference(\n", + " test_experiment=test_experiment,\n", + " compute_target=compute_target,\n", + " train_run=best_run,\n", + " test_dataset=test_dataset,\n", + " target_column_name=target_column_name,\n", + ")\n", + "remote_run_infer.wait_for_completion(show_output=False)\n", + "\n", + "# download the forecast file to the local machine\n", + "remote_run_infer.download_file(\"outputs/predictions.csv\", \"predictions.csv\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Evaluate\n", + "\n", + "To evaluate the accuracy of the forecast, we'll compare against the actual sales quantities for some select metrics, including the mean absolute percentage error (MAPE). 
For more metrics that can be used for evaluation after training, please see [supported metrics](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#regressionforecasting-metrics), and [how to calculate residuals](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-understand-automated-ml#residuals).\n", + "\n", + "We'll add predictions and actuals into a single dataframe for convenience in calculating the metrics." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# load forecast data frame\n", + "fcst_df = pd.read_csv(\"predictions.csv\", parse_dates=[time_column_name])\n", + "fcst_df.head()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared import constants\n", + "from azureml.automl.runtime.shared.score import scoring\n", + "from matplotlib import pyplot as plt\n", + "\n", + "# use automl scoring module\n", + "scores = scoring.score_regression(\n", + " y_test=fcst_df[target_column_name],\n", + " y_pred=fcst_df[\"predicted\"],\n", + " metrics=list(constants.Metric.SCALAR_REGRESSION_SET),\n", + ")\n", + "\n", + "print(\"[Test data scores]\\n\")\n", + "for key, value in scores.items():\n", + " print(\"{}: {:.3f}\".format(key, value))\n", + "\n", + "# Plot outputs\n", + "%matplotlib inline\n", + "test_pred = plt.scatter(fcst_df[target_column_name], fcst_df[\"predicted\"], color=\"b\")\n", + "test_test = plt.scatter(\n", + " fcst_df[target_column_name], fcst_df[target_column_name], color=\"g\"\n", + ")\n", + "plt.legend(\n", + " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", + ")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Operationalize" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "_Operationalization_ means getting the model into the cloud so that others can run it after you close the notebook. We will create a Docker container running on Azure Container Instances with the model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "description = \"AutoML OJ forecaster\"\n", + "tags = None\n", + "model = remote_run.register_model(\n", + " model_name=model_name, description=description, tags=tags\n", + ")\n", + "\n", + "print(remote_run.model_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Develop the scoring script\n", + "\n", + "For the deployment, we need a function that will run the forecast on serialized data. It can be obtained from the best_run." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "script_file_name = \"score_fcast.py\"\n", + "best_run.download_file(\"outputs/scoring_file_v_1_0_0.py\", script_file_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Deploy the model as a Web Service on Azure Container Instance" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import InferenceConfig\n", + "from azureml.core.webservice import AciWebservice\n", + "from azureml.core.webservice import Webservice\n", + "from azureml.core.model import Model\n", + "\n", + "inference_config = InferenceConfig(\n", + " environment=best_run.get_environment(), entry_script=script_file_name\n", + ")\n", + "\n", + "aciconfig = AciWebservice.deploy_configuration(\n", + " cpu_cores=2,\n", + " memory_gb=4,\n", + " tags={\"type\": \"automl-forecasting\"},\n", + " description=\"Automl forecasting sample service\",\n", + ")\n", + "\n", + "aci_service_name = \"automl-oj-forecast-01\"\n", + "print(aci_service_name)\n", + "aci_service = Model.deploy(ws, aci_service_name, [model], inference_config, aciconfig)\n", + "aci_service.wait_for_deployment(True)\n", + "print(aci_service.state)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "aci_service.get_logs()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Call the service" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "\n", + "X_query = test.copy()\n", + "X_query.pop(target_column_name)\n", + "# We have to convert datetime to string, because Timestamps cannot be serialized to JSON.\n", + "X_query[time_column_name] = X_query[time_column_name].astype(str)\n", + "# The service object accepts a complex dictionary, which is internally converted to a JSON string.\n", + "# The section 'data' contains the data frame in the form of a dictionary.\n", + "sample_quantiles = [0.025, 0.975]\n", + "test_sample = json.dumps(\n", + " {\"data\": X_query.to_dict(orient=\"records\"), \"quantiles\": sample_quantiles}\n", + ")\n", + "response = aci_service.run(input_data=test_sample)\n", + "# translate from networkese to datascientese\n", + "try:\n", + " res_dict = json.loads(response)\n", + " y_fcst_all = pd.DataFrame(res_dict[\"index\"])\n", + " y_fcst_all[time_column_name] = pd.to_datetime(\n", + " y_fcst_all[time_column_name], unit=\"ms\"\n", + " )\n", + " y_fcst_all[\"forecast\"] = res_dict[\"forecast\"]\n", + " y_fcst_all[\"prediction_interval\"] = res_dict[\"prediction_interval\"]\n", + "except Exception:\n", + " # If parsing failed, print the raw response for debugging.\n", + " print(response)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "y_fcst_all.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Delete the web service if desired" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "serv = Webservice(ws, \"automl-oj-forecast-01\")\n", + "serv.delete() # don't do it accidentally" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "jialiu" + } + ], + "category": "tutorial", + "celltoolbar": "Raw Cell Format", + "compute": [ + "Remote" + ], + "datasets": [ + "Orange Juice Sales" + ], + "deployment": [ + "Azure Container Instance" + ], + "exclude_from_index": false, + 
"framework": [ + "Azure ML AutoML" + ], + "friendly_name": "Forecasting orange juice sales with deployment", + "index_order": 1, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + }, + "tags": [ + "None" + ], + "task": "Forecasting" + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb index 2e773fbdf..a5f18de72 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb @@ -1,494 +1,494 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/1_determine_experiment_settings.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In this notebook we will explore the univaraite time-series data to determine the settings for an automated ML experiment. We will follow the thought process depicted in the following diagram:
\n", - "![Forecasting after training](figures/univariate_settings_map_20210408.jpg)\n", - "\n", - "The objective is to answer the following questions:\n", - "\n", - "
    \n", - "
  1. Is there a seasonal pattern in the data?
  2. \n", - "
      \n", - "
    • Importance: If we are able to detect regular seasonal patterns, the forecast accuracy may be improved by extracting these patterns and including them as features into the model.
    • \n", - "
    \n", - "
  3. Is the data stationary?
  4. \n", - "
      \n", - "
    • Importance: In the absense of features that capture trend behavior, ML models (regression and tree based) are not well equiped to predict stochastic trends. Working with stationary data solves this problem.
    • \n", - "
    \n", - "
  5. Is there a detectable auto-regressive pattern in the stationary data?
  6. \n", - "
      \n", - "
    • Importance: The accuracy of ML models can be improved if serial correlation is modeled by including lags of the dependent/target varaible as features. Including target lags in every experiment by default will result in a regression in accuracy scores if such setting is not warranted.
    • \n", - "
    \n", - "
\n", - "\n", - "The answers to these questions will help determine the appropriate settings for the automated ML experiment.\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import warnings\n", - "import pandas as pd\n", - "\n", - "from statsmodels.graphics.tsaplots import plot_acf, plot_pacf\n", - "import matplotlib.pyplot as plt\n", - "from pandas.plotting import register_matplotlib_converters\n", - "\n", - "register_matplotlib_converters() # fixes the future warning issue\n", - "\n", - "from helper_functions import unit_root_test_wrapper\n", - "from statsmodels.tools.sm_exceptions import InterpolationWarning\n", - "\n", - "warnings.simplefilter(\"ignore\", InterpolationWarning)\n", - "\n", - "\n", - "# set printing options\n", - "pd.set_option(\"display.max_columns\", 500)\n", - "pd.set_option(\"display.width\", 1000)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# load data\n", - "main_data_loc = \"data\"\n", - "train_file_name = \"S4248SM144SCEN.csv\"\n", - "\n", - "TARGET_COLNAME = \"S4248SM144SCEN\"\n", - "TIME_COLNAME = \"observation_date\"\n", - "COVID_PERIOD_START = \"2020-03-01\"\n", - "\n", - "df = pd.read_csv(os.path.join(main_data_loc, train_file_name))\n", - "df[TIME_COLNAME] = pd.to_datetime(df[TIME_COLNAME], format=\"%Y-%m-%d\")\n", - "df.sort_values(by=TIME_COLNAME, inplace=True)\n", - "df.set_index(TIME_COLNAME, inplace=True)\n", - "df.head(2)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# plot the entire dataset\n", - "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", - "ax.plot(df)\n", - "ax.title.set_text(\"Original Data Series\")\n", - "locs, labels = plt.xticks()\n", - "plt.xticks(rotation=45)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "The graph plots the alcohol sales in the United States. Because the data is trending, it can be difficult to see cycles, seasonality or other interestng behaviors due to the scaling issues. For example, if there is a seasonal pattern, which we will discuss later, we cannot see them on the trending data. In such case, it is worth plotting the same data in first differences." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# plot the entire dataset in first differences\n", - "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", - "ax.plot(df.diff().dropna())\n", - "ax.title.set_text(\"Data in first differences\")\n", - "locs, labels = plt.xticks()\n", - "plt.xticks(rotation=45)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In the previous plot we observe that the data is more volatile towards the end of the series. This period coincides with the Covid-19 period, so we will exclude it from our experiment. Since in this example there are no user-provided features it is hard to make an argument that a model trained on the less volatile pre-covid data will be able to accurately predict the covid period." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 1. Seasonality\n", - "\n", - "#### Questions that need to be answered in this section:\n", - "1. Is there a seasonality?\n", - "2. If it's seasonal, does the data exhibit a trend (up or down)?\n", - "\n", - "It is hard to visually detect seasonality when the data is trending. 
The reason is that the scale of seasonal fluctuations is dwarfed by the range of the trend in the data. One way to deal with this is to de-trend the data by taking the first differences. We will discuss this in more detail in the next section." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# plot the entire dataset in first differences\n", - "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", - "ax.plot(df.diff().dropna())\n", - "ax.title.set_text(\"Data in first differences\")\n", - "locs, labels = plt.xticks()\n", - "plt.xticks(rotation=45)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "For the next plot, we will exclude the Covid period again. We will also shorten the length of data because plotting a very long time series may prevent us from seeing seasonal patterns, if there are any, because the plot may look like a random walk." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# remove COVID period\n", - "df = df[:COVID_PERIOD_START]\n", - "\n", - "# plot the entire dataset in first differences\n", - "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", - "ax.plot(df[\"2015-01-01\":].diff().dropna())\n", - "ax.title.set_text(\"Data in first differences\")\n", - "locs, labels = plt.xticks()\n", - "plt.xticks(rotation=45)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

**Conclusion**
\n", - "\n", - "Visual examination does not suggest clear seasonal patterns. We will set the STL_TYPE = None, and we will move to the next section that examines stationarity. \n", - "\n", - "\n", - "Say, we are working with a different data set that shows clear patterns of seasonality, we have several options for setting the settings:is hard to say which option will work best in your case, hence you will need to run both options to see which one results in more accurate forecasts. \n", - "
    \n", - "
  1. If the data does not appear to be trending, set DIFFERENCE_SERIES=False, TARGET_LAGS=None and STL_TYPE = \"season\"
  2. \n", - "
  3. If the data appears to be trending, consider one of the following two settings:\n", - "
      \n", - "
        \n", - "
      1. DIFFERENCE_SERIES=True, TARGET_LAGS=None and STL_TYPE = \"season\", or
      2. \n", - "
      3. DIFFERENCE_SERIES=False, TARGET_LAGS=None and STL_TYPE = \"trend_season\"
      4. \n", - "
      \n", - "
    • In the first case, by taking first differences we are removing stochastic trend, but we do not remove seasonal patterns. In the second case, we do not remove the stochastic trend and it can be captured by the trend component of the STL decomposition. It is hard to say which option will work best in your case, hence you will need to run both options to see which one results in more accurate forecasts.
    • \n", - "
    \n", - "
" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 2. Stationarity\n", - "If the data does not exhibit seasonal patterns, we would like to see if the data is non-stationary. Particularly, we want to see if there is a clear trending behavior. If such behavior is observed, we would like to first difference the data and examine the plot of an auto-correlation function (ACF) known as correlogram. If the data is seasonal, differencing it will not get rid off the seasonality and this will be shown on the correlogram as well.\n", - "\n", - "
    \n", - "
  • Question: What is stationarity and how to we detect it?
  • \n", - "
      \n", - "
    • This is a fairly complex topic. Please read the following link for a high level discussion on this subject.
    • \n", - "
    • Simply put, we are looking for scenario when examining the time series plots the mean of the series is roughly the same, regardless which time interval you pick to compute it. Thus, trending and seasonal data are examples of non-stationary series.
    • \n", - "
    \n", - "
\n", - "\n", - "\n", - "
    \n", - "
  • Question: Why do want to work with stationary data?
  • \n", - "
      \n", - "
    • In the absence of features that capture stochastic trends, the ML models that use (deterministic) time based features (hour of the day, day of the week, month of the year, etc) cannot capture such trends, and will over or under predict depending on the behavior of the time series. By working with stationary data, we eliminate the need to predict such trends, which improves the forecast accuracy. Classical time series models such as Arima and Exponential Smoothing handle non-stationary series by design and do not need such transformations. By differencing the data we are still able to run the same family of models.
    • \n", - "
    \n", - "
\n", - "\n", - "#### Questions that need to be answered in this section:\n", - "
    \n", - "
  1. Is the data stationary?
  2. \n", - "
  3. Does the stationarized data (either the original or the differenced series) exhibit a clear auto-regressive pattern?
  4. \n", - "
\n", - "\n", - "To answer the first question, we run a series of tests (we call them unit root tests)." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# unit root tests\n", - "test = unit_root_test_wrapper(df[TARGET_COLNAME])\n", - "print(\"---------------\", \"\\n\")\n", - "print(\"Summary table\", \"\\n\", test[\"summary\"], \"\\n\")\n", - "print(\"Is the {} series stationary?: {}\".format(TARGET_COLNAME, test[\"stationary\"]))\n", - "print(\"---------------\", \"\\n\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "In the previous cell, we ran a series of unit root tests. The summary table contains the following columns:\n", - "
    \n", - "
  • test_name is the name of the test.\n", - "
      \n", - "
    • ADF: Augmented Dickey-Fuller test
    • \n", - "
    • KPSS: Kwiatkowski-Phillips\u00e2\u20ac\u201cSchmidt\u00e2\u20ac\u201cShin test
    • \n", - "
    • PP: Phillips-Perron test\n", - "
    • ADF GLS: Augmented Dickey-Fuller using generalized least squares method
    • \n", - "
    • AZ: Andrews-Zivot test
    • \n", - "
    \n", - "
  • statistic: test statistic
  • \n", - "
  • crit_val: critical value of the test statistic
  • \n", - "
  • p_val: p-value of the test statistic. If the p-val is less than 0.05, the null hypothesis is rejected.
  • \n", - "
  • stationary: is the series stationary based on the test result?
  • \n", - "
  • Null hypothesis: what is being tested. Notice, some test such as ADF and PP assume the process has a unit root and looks for evidence to reject this hypothesis. Other tests, ex.g: KPSS, assumes the process is stationary and looks for evidence to reject such claim.\n", - "
\n", - "\n", - "Each of the tests shows that the original time series is non-stationary. The final decision is based on the majority rule. If, there is a split decision, the algorithm will claim it is stationary. We run a series of tests because each test by itself may not be accurate. In many cases when there are conflicting test results, the user needs to make determination if the series is stationary or not.\n", - "\n", - "Since we found the series to be non-stationary, we will difference it and then test if the differenced series is stationary." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# unit root tests\n", - "test = unit_root_test_wrapper(df[TARGET_COLNAME].diff().dropna())\n", - "print(\"---------------\", \"\\n\")\n", - "print(\"Summary table\", \"\\n\", test[\"summary\"], \"\\n\")\n", - "print(\"Is the {} series stationary?: {}\".format(TARGET_COLNAME, test[\"stationary\"]))\n", - "print(\"---------------\", \"\\n\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Four out of five tests show that the series in first differences is stationary. Notice that this decision is not unanimous. Next, let's plot the original series in first-differences to illustrate the difference between non-stationary (unit root) process vs the stationary one." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# plot original and stationary data\n", - "fig = plt.figure(figsize=(10, 10))\n", - "ax1 = fig.add_subplot(211)\n", - "ax1.plot(df[TARGET_COLNAME], \"-b\")\n", - "ax2 = fig.add_subplot(212)\n", - "ax2.plot(df[TARGET_COLNAME].diff().dropna(), \"-b\")\n", - "ax1.title.set_text(\"Original data\")\n", - "ax2.title.set_text(\"Data in first differences\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "If you were asked a question \"What is the mean of the series before and after 2008?\", for the series titled \"Original data\" the mean values will be significantly different. This implies that the first moment of the series (in this case, it is the mean) is time dependent, i.e., mean changes depending on the interval one is looking at. Thus, the series is deemed to be non-stationary. On the other hand, for the series titled \"Data in first differences\" the means for both periods are roughly the same. Hence, the first moment is time invariant; meaning it does not depend on the interval of time one is looking at. In this example it is easy to visually distinguish between stationary and non-stationary data. Often this distinction is not easy to make, therefore we rely on the statistical tests described above to help us make an informed decision. " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "

**Conclusion**
\n", - "Since we found the original process to be non-stationary (contains unit root), we will have to model the data in first differences. As a result, we will set the DIFFERENCE_SERIES parameter to True." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# 3 Check if there is a clear autoregressive pattern\n", - "We need to determine if we should include lags of the target variable as features in order to improve forecast accuracy. To do this, we will examine the ACF and partial ACF (PACF) plots of the stationary series. In our case, it is a series in first diffrences.\n", - "\n", - "
    \n", - "
  • Question: What is an Auto-regressive pattern? What are we looking for?
  • \n", - "
      \n", - "
    • We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. For a more detailed explanation of ACF and PACF please refer to the appendix at the end of this notebook. For illustration purposes, let's examine the ACF/PACF profiles of the simulated data that follows a second order auto-regressive process, abbreviated as an AR(2).
    • \n", - "
    • \n", - "
      \n", - " The lag order is on the x-axis while the auto- and partial-correlation coefficients are on the y-axis. Vertical lines that are outside the shaded area represent statistically significant lags. Notice, the ACF function decays to zero and the PACF shows 2 significant spikes (we ignore the first spike for lag 0 in both plots since the linear relationship of any series with itself is always 1).
    • \n", - "
    \n", - "
      " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
        \n", - "
      • Question: What do I do if I observe an auto-regressive behavior?
      • \n", - "
          \n", - "
        • If such behavior is observed, we might improve the forecast accuracy by enabling the target lags feature in AutoML. There are a few options of doing this
        • \n", - "
            \n", - "
          1. Set the target lags parameter to 'auto', or
          2. \n", - "
          3. Specify the list of lags you want to include. Ex.g: target_lags = [1,2,5]
          4. \n", - "
          \n", - "
        \n", - "
        \n", - "
      • Next, let's examine the ACF and PACF plots of the stationary target variable (depicted below). Here, we do not see a decay in the ACF, instead we see a decay in PACF. It is hard to make an argument the the target variable exhibits auto-regressive behavior.
      • \n", - "
      " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Plot the ACF/PACF for the series in differences\n", - "fig, ax = plt.subplots(1, 2, figsize=(10, 5))\n", - "plot_acf(df[TARGET_COLNAME].diff().dropna().values.squeeze(), ax=ax[0])\n", - "plot_pacf(df[TARGET_COLNAME].diff().dropna().values.squeeze(), ax=ax[1])\n", - "plt.show()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
**Conclusion**\n", - "\n", - "Since we do not see a clear indication of an AR(p) process, we will not be using target lags and will set the TARGET_LAGS parameter to None." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
**AutoML Experiment Settings**\n", - "\n", - "Based on the analysis performed, we should try the following settings for the AutoML experiment and use them in the \"2_run_experiment\" notebook:\n", - "\n", - "- STL_TYPE=None\n", - "- DIFFERENCE_SERIES=True\n", - "- TARGET_LAGS=None" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Appendix: ACF, PACF and Lag Selection\n", - "To select the lag order, we examine the ACF and partial ACF (PACF) plots of the differenced series.\n", - "\n", - "
        \n", - "
      • Question: What is the ACF?
      • \n", - "
          \n", - "
        • To understand the ACF, first let's look at the correlation coefficient $\\rho_{xz}$\n", - " \\begin{equation}\n", - " \\rho_{xz} = \\frac{\\sigma_{xz}}{\\sigma_{x} \\sigma_{zy}}\n", - " \\end{equation}\n", - "
        • \n", - " where $\\sigma_{xzy}$ is the covariance between two random variables $X$ and $Z$; $\\sigma_x$ and $\\sigma_z$ is the variance for $X$ and $Z$, respectively. The correlation coefficient measures the strength of linear relationship between two random variables. This metric can take any value from -1 to 1.
        • \n", - "
          \n", - "
        • The auto-correlation coefficient $\\rho_{Y_{t} Y_{t-k}}$ is the time series equivalent of the correlation coefficient, except instead of measuring linear association between two random variables $X$ and $Z$, it measures the strength of a linear relationship between a random variable $Y_t$ and its lag $Y_{t-k}$ for any positive interger value of $k$.
        • \n", - "
          \n", - "
        • To visualize the ACF for a particular lag, say lag 2, plot the second lag of a series $y_{t-2}$ on the x-axis, and plot the series itself $y_t$ on the y-axis. The autocorrelation coefficient is the slope of the best fitted regression line and can be interpreted as follows. A one unit increase in the lag of a variable one period ago leads to a $\\rho_{Y_{t} Y_{t-2}}$ units change in the variable in the current period. This interpreation can be applied to any lag.
        • \n", - "
          \n", - "
        • In the interpretation posted above we need to be careful not to confuse the word \"leads\" with \"causes\" since these are not the same thing. We do not know the lagged value of the varaible causes it to change. Afterall, there are probably many other features that may explain the movement in $Y_t$. All we are trying to do in this section is to identify situations when the variable contains the strong auto-regressive components that needs to be included in the model to improve forecast accuracy.
        • \n", - "
        \n", - "
      " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
        \n", - "
      • Question: What is the PACF?
      • \n", - "
          \n", - "
        • When describing the ACF we essentially running a regression between a partigular lag of a series, say, lag 4, and the series itself. What this implies is the regression coefficient for lag 4 captures the impact of everything that happens in lags 1, 2 and 3. In other words, if lag 1 is the most important lag and we exclude it from the regression, naturally, the regression model will assign the importance of the 1st lag to the 4th one. Partial auto-correlation function fixes this problem since it measures the contribution of each lag accounting for the information added by the intermediary lags. If we were to illustrate ACF and PACF for the fourth lag using the regression analogy, the difference is a follows: \n", - " \\begin{align}\n", - " Y_{t} &= a_{0} + a_{4} Y_{t-4} + e_{t} \\\\\n", - " Y_{t} &= b_{0} + b_{1} Y_{t-1} + b_{2} Y_{t-2} + b_{3} Y_{t-3} + b_{4} Y_{t-4} + \\varepsilon_{t} \\\\\n", - " \\end{align}\n", - "
        • \n", - "
          \n", - "
        • \n", - " Here, you can think of $a_4$ and $b_{4}$ as the auto- and partial auto-correlation coefficients for lag 4. Notice, in the second equation we explicitely accounting for the intermediate lags by adding them as regrerssors.\n", - "
        • \n", - "
        \n", - "
      " - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "
        \n", - "
      • Question: Auto-regressive pattern? What are we looking for?
      • \n", - "
          \n", - "
        • We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. Let's examine the ACF/PACF profiles of the same simulated AR(2) shown in Section 3, and check if the ACF/PACF explanation are refelcted in these plots.
        • \n", - "
        • \n", - "
        • The autocorrelation coefficient for the 3rd lag is 0.6, which can be interpreted that a one unit increase in the value of the target varaible three periods ago leads to 0.6 units increase in the current period. However, the PACF plot shows that the partial autocorrealtion coefficient is zero (from a statistical point of view since it lies within the shaded region). This is happening because the 1st and 2nd lags are good predictors of the target variable. Ommiting these two lags from the regression results in the misleading conclusion that the third lag is a good prediciton.
        • \n", - "
          \n", - "
        • This is why it is important to examine both the ACF and the PACF plots when tring to determine the auto regressive order for the variable in question.
        • \n", - "
        \n", - "
      " - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/1_determine_experiment_settings.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this notebook we will explore the univaraite time-series data to determine the settings for an automated ML experiment. We will follow the thought process depicted in the following diagram:
      \n", + "![Forecasting after training](figures/univariate_settings_map_20210408.jpg)\n", + "\n", + "The objective is to answer the following questions:\n", + "\n", + "
        \n", + "
      1. Is there a seasonal pattern in the data?
      2. \n", + "
          \n", + "
        • Importance: If we are able to detect regular seasonal patterns, the forecast accuracy may be improved by extracting these patterns and including them as features into the model.
        • \n", + "
        \n", + "
      3. Is the data stationary?
      4. \n", + "
          \n", + "
        • Importance: In the absense of features that capture trend behavior, ML models (regression and tree based) are not well equiped to predict stochastic trends. Working with stationary data solves this problem.
        • \n", + "
        \n", + "
      5. Is there a detectable auto-regressive pattern in the stationary data?
      6. \n", + "
          \n", + "
        • Importance: The accuracy of ML models can be improved if serial correlation is modeled by including lags of the dependent/target varaible as features. Including target lags in every experiment by default will result in a regression in accuracy scores if such setting is not warranted.
        • \n", + "
        \n", + "
      \n", + "\n", + "The answers to these questions will help determine the appropriate settings for the automated ML experiment.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import warnings\n", + "import pandas as pd\n", + "\n", + "from statsmodels.graphics.tsaplots import plot_acf, plot_pacf\n", + "import matplotlib.pyplot as plt\n", + "from pandas.plotting import register_matplotlib_converters\n", + "\n", + "register_matplotlib_converters() # fixes the future warning issue\n", + "\n", + "from helper_functions import unit_root_test_wrapper\n", + "from statsmodels.tools.sm_exceptions import InterpolationWarning\n", + "\n", + "warnings.simplefilter(\"ignore\", InterpolationWarning)\n", + "\n", + "\n", + "# set printing options\n", + "pd.set_option(\"display.max_columns\", 500)\n", + "pd.set_option(\"display.width\", 1000)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# load data\n", + "main_data_loc = \"data\"\n", + "train_file_name = \"S4248SM144SCEN.csv\"\n", + "\n", + "TARGET_COLNAME = \"S4248SM144SCEN\"\n", + "TIME_COLNAME = \"observation_date\"\n", + "COVID_PERIOD_START = \"2020-03-01\"\n", + "\n", + "df = pd.read_csv(os.path.join(main_data_loc, train_file_name))\n", + "df[TIME_COLNAME] = pd.to_datetime(df[TIME_COLNAME], format=\"%Y-%m-%d\")\n", + "df.sort_values(by=TIME_COLNAME, inplace=True)\n", + "df.set_index(TIME_COLNAME, inplace=True)\n", + "df.head(2)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# plot the entire dataset\n", + "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", + "ax.plot(df)\n", + "ax.title.set_text(\"Original Data Series\")\n", + "locs, labels = plt.xticks()\n", + "plt.xticks(rotation=45)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The graph plots the alcohol sales in the United States. Because the data is trending, it can be difficult to see cycles, seasonality or other interestng behaviors due to the scaling issues. For example, if there is a seasonal pattern, which we will discuss later, we cannot see them on the trending data. In such case, it is worth plotting the same data in first differences." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# plot the entire dataset in first differences\n", + "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", + "ax.plot(df.diff().dropna())\n", + "ax.title.set_text(\"Data in first differences\")\n", + "locs, labels = plt.xticks()\n", + "plt.xticks(rotation=45)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the previous plot we observe that the data is more volatile towards the end of the series. This period coincides with the Covid-19 period, so we will exclude it from our experiment. Since in this example there are no user-provided features it is hard to make an argument that a model trained on the less volatile pre-covid data will be able to accurately predict the covid period." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# 1. Seasonality\n", + "\n", + "#### Questions that need to be answered in this section:\n", + "1. Is there a seasonality?\n", + "2. If it's seasonal, does the data exhibit a trend (up or down)?\n", + "\n", + "It is hard to visually detect seasonality when the data is trending. 
The reason is that the scale of seasonal fluctuations is dwarfed by the range of the trend in the data. One way to deal with this is to de-trend the data by taking first differences. We will discuss this in more detail in the next section." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# plot the entire dataset in first differences\n", + "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", + "ax.plot(df.diff().dropna())\n", + "ax.title.set_text(\"Data in first differences\")\n", + "locs, labels = plt.xticks()\n", + "plt.xticks(rotation=45)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For the next plot, we will exclude the Covid period again. We will also shorten the length of the data, because plotting a very long time series may prevent us from seeing seasonal patterns, if there are any; the plot may simply look like a random walk." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# remove COVID period\n", + "df = df[:COVID_PERIOD_START]\n", + "\n", + "# plot the entire dataset in first differences\n", + "fig, ax = plt.subplots(figsize=(6, 2), dpi=180)\n", + "ax.plot(df[\"2015-01-01\":].diff().dropna())\n", + "ax.title.set_text(\"Data in first differences\")\n", + "locs, labels = plt.xticks()\n", + "plt.xticks(rotation=45)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
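Before drawing a conclusion from the plots alone, a seasonal decomposition can make any regular pattern explicit. A minimal sketch using `statsmodels` (the `period=12` is our assumption of an annual cycle in monthly data, not something this notebook fixes):

```python
from statsmodels.tsa.seasonal import seasonal_decompose

# period=12 is an assumption: probe for an annual cycle in monthly data
decomposition = seasonal_decompose(df[TARGET_COLNAME].dropna(), model="additive", period=12)
decomposition.plot()
plt.show()
```

A flat, noisy seasonal panel would support the conclusion below; a stable, repeating shape would argue for the STL options discussed next.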
**Conclusion**\n", + "\n", + "Visual examination does not suggest clear seasonal patterns. We will set STL_TYPE = None, and we will move to the next section, which examines stationarity.\n", + "\n", + "Suppose we were instead working with a data set that showed clear patterns of seasonality. In that case we would have several options for the settings:\n", + "
        \n", + "
      1. If the data does not appear to be trending, set DIFFERENCE_SERIES=False, TARGET_LAGS=None and STL_TYPE = \"season\"
      2. \n", + "
      3. If the data appears to be trending, consider one of the following two settings:\n", + "
          \n", + "
            \n", + "
          1. DIFFERENCE_SERIES=True, TARGET_LAGS=None and STL_TYPE = \"season\", or
          2. \n", + "
          3. DIFFERENCE_SERIES=False, TARGET_LAGS=None and STL_TYPE = \"trend_season\"
          4. \n", + "
          \n", + "
        • In the first case, by taking first differences we are removing stochastic trend, but we do not remove seasonal patterns. In the second case, we do not remove the stochastic trend and it can be captured by the trend component of the STL decomposition. It is hard to say which option will work best in your case, hence you will need to run both options to see which one results in more accurate forecasts.
        • \n", + "
        \n", + "
      " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# 2. Stationarity\n", + "If the data does not exhibit seasonal patterns, we would like to see if the data is non-stationary. Particularly, we want to see if there is a clear trending behavior. If such behavior is observed, we would like to first difference the data and examine the plot of an auto-correlation function (ACF) known as correlogram. If the data is seasonal, differencing it will not get rid off the seasonality and this will be shown on the correlogram as well.\n", + "\n", + "
        \n", + "
      • Question: What is stationarity and how to we detect it?
      • \n", + "
          \n", + "
        • This is a fairly complex topic. Please read the following link for a high level discussion on this subject.
        • \n", + "
        • Simply put, we are looking for scenario when examining the time series plots the mean of the series is roughly the same, regardless which time interval you pick to compute it. Thus, trending and seasonal data are examples of non-stationary series.
        • \n", + "
        \n", + "
      \n", + "\n", + "\n", + "
        \n", + "
      • Question: Why do want to work with stationary data?
      • \n", + "
          \n", + "
        • In the absence of features that capture stochastic trends, the ML models that use (deterministic) time based features (hour of the day, day of the week, month of the year, etc) cannot capture such trends, and will over or under predict depending on the behavior of the time series. By working with stationary data, we eliminate the need to predict such trends, which improves the forecast accuracy. Classical time series models such as Arima and Exponential Smoothing handle non-stationary series by design and do not need such transformations. By differencing the data we are still able to run the same family of models.
        • \n", + "
        \n", + "
\n", + "\n", + "#### Questions that need to be answered in this section:\n", + "1. Is the data stationary?\n", + "2. Does the stationarized data (either the original or the differenced series) exhibit a clear auto-regressive pattern?
      \n", + "\n", + "To answer the first question, we run a series of tests (we call them unit root tests)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# unit root tests\n", + "test = unit_root_test_wrapper(df[TARGET_COLNAME])\n", + "print(\"---------------\", \"\\n\")\n", + "print(\"Summary table\", \"\\n\", test[\"summary\"], \"\\n\")\n", + "print(\"Is the {} series stationary?: {}\".format(TARGET_COLNAME, test[\"stationary\"]))\n", + "print(\"---------------\", \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the previous cell, we ran a series of unit root tests. The summary table contains the following columns:\n", + "
        \n", + "
      • test_name is the name of the test.\n", + "
          \n", + "
        • ADF: Augmented Dickey-Fuller test
        • \n", + "
        • KPSS: Kwiatkowski-Phillips–Schmidt–Shin test
        • \n", + "
        • PP: Phillips-Perron test\n", + "
        • ADF GLS: Augmented Dickey-Fuller using generalized least squares method
        • \n", + "
        • AZ: Andrews-Zivot test
        • \n", + "
        \n", + "
      • statistic: test statistic
      • \n", + "
      • crit_val: critical value of the test statistic
      • \n", + "
      • p_val: p-value of the test statistic. If the p-val is less than 0.05, the null hypothesis is rejected.
      • \n", + "
      • stationary: is the series stationary based on the test result?
      • \n", + "
      • Null hypothesis: what is being tested. Notice, some test such as ADF and PP assume the process has a unit root and looks for evidence to reject this hypothesis. Other tests, ex.g: KPSS, assumes the process is stationary and looks for evidence to reject such claim.\n", + "
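For intuition, a minimal sketch of running two of these tests directly with `statsmodels` (the notebook's own wrapper, `unit_root_test_wrapper` in `helper_functions.py`, combines five tests and may aggregate them differently):

```python
from statsmodels.tsa.stattools import adfuller, kpss

series = df[TARGET_COLNAME].dropna()

adf_stat, adf_pval, *_ = adfuller(series)              # H0: series has a unit root
kpss_stat, kpss_pval, *_ = kpss(series, nlags="auto")  # H0: series is stationary

print("ADF  p-value: {:.3f} (stationary if p < 0.05)".format(adf_pval))
print("KPSS p-value: {:.3f} (stationary if p >= 0.05)".format(kpss_pval))
```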
      \n", + "\n", + "Each of the tests shows that the original time series is non-stationary. The final decision is based on the majority rule. If, there is a split decision, the algorithm will claim it is stationary. We run a series of tests because each test by itself may not be accurate. In many cases when there are conflicting test results, the user needs to make determination if the series is stationary or not.\n", + "\n", + "Since we found the series to be non-stationary, we will difference it and then test if the differenced series is stationary." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# unit root tests\n", + "test = unit_root_test_wrapper(df[TARGET_COLNAME].diff().dropna())\n", + "print(\"---------------\", \"\\n\")\n", + "print(\"Summary table\", \"\\n\", test[\"summary\"], \"\\n\")\n", + "print(\"Is the {} series stationary?: {}\".format(TARGET_COLNAME, test[\"stationary\"]))\n", + "print(\"---------------\", \"\\n\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Four out of five tests show that the series in first differences is stationary. Notice that this decision is not unanimous. Next, let's plot the original series in first-differences to illustrate the difference between non-stationary (unit root) process vs the stationary one." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# plot original and stationary data\n", + "fig = plt.figure(figsize=(10, 10))\n", + "ax1 = fig.add_subplot(211)\n", + "ax1.plot(df[TARGET_COLNAME], \"-b\")\n", + "ax2 = fig.add_subplot(212)\n", + "ax2.plot(df[TARGET_COLNAME].diff().dropna(), \"-b\")\n", + "ax1.title.set_text(\"Original data\")\n", + "ax2.title.set_text(\"Data in first differences\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you were asked a question \"What is the mean of the series before and after 2008?\", for the series titled \"Original data\" the mean values will be significantly different. This implies that the first moment of the series (in this case, it is the mean) is time dependent, i.e., mean changes depending on the interval one is looking at. Thus, the series is deemed to be non-stationary. On the other hand, for the series titled \"Data in first differences\" the means for both periods are roughly the same. Hence, the first moment is time invariant; meaning it does not depend on the interval of time one is looking at. In this example it is easy to visually distinguish between stationary and non-stationary data. Often this distinction is not easy to make, therefore we rely on the statistical tests described above to help us make an informed decision. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
**Conclusion**\n", + "\n", + "Since we found the original process to be non-stationary (it contains a unit root), we will have to model the data in first differences. As a result, we will set the DIFFERENCE_SERIES parameter to True." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# 3. Check if there is a clear autoregressive pattern\n", + "We need to determine if we should include lags of the target variable as features in order to improve forecast accuracy. To do this, we will examine the ACF and partial ACF (PACF) plots of the stationary series; in our case, it is the series in first differences.\n", + "\n", + "
        \n", + "
      • Question: What is an Auto-regressive pattern? What are we looking for?
      • \n", + "
          \n", + "
        • We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. For a more detailed explanation of ACF and PACF please refer to the appendix at the end of this notebook. For illustration purposes, let's examine the ACF/PACF profiles of the simulated data that follows a second order auto-regressive process, abbreviated as an AR(2).
        • \n", + "
        • \n", + "
          \n", + " The lag order is on the x-axis while the auto- and partial-correlation coefficients are on the y-axis. Vertical lines that are outside the shaded area represent statistically significant lags. Notice, the ACF function decays to zero and the PACF shows 2 significant spikes (we ignore the first spike for lag 0 in both plots since the linear relationship of any series with itself is always 1).
        • \n", + "
        \n", + "
          " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
            \n", + "
          • Question: What do I do if I observe an auto-regressive behavior?
          • \n", + "
              \n", + "
            • If such behavior is observed, we might improve the forecast accuracy by enabling the target lags feature in AutoML. There are a few options of doing this
            • \n", + "
                \n", + "
              1. Set the target lags parameter to 'auto', or
              2. \n", + "
              3. Specify the list of lags you want to include. Ex.g: target_lags = [1,2,5]
              4. \n", + "
              \n", + "
            \n", + "
            \n", + "
          • Next, let's examine the ACF and PACF plots of the stationary target variable (depicted below). Here, we do not see a decay in the ACF, instead we see a decay in PACF. It is hard to make an argument the the target variable exhibits auto-regressive behavior.
          • \n", + "
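For concreteness, the two options as they would appear in the `time_series_settings` dictionary used by the companion run-experiment notebook (only `target_lags` differs):

```python
time_series_settings = {
    "time_column_name": TIME_COLNAME,
    "forecast_horizon": FORECAST_HORIZON,
    "target_lags": "auto",       # let AutoML choose the lag order
    # "target_lags": [1, 2, 5],  # or pass an explicit list of lags
}
```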
          " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Plot the ACF/PACF for the series in differences\n", + "fig, ax = plt.subplots(1, 2, figsize=(10, 5))\n", + "plot_acf(df[TARGET_COLNAME].diff().dropna().values.squeeze(), ax=ax[0])\n", + "plot_pacf(df[TARGET_COLNAME].diff().dropna().values.squeeze(), ax=ax[1])\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
**Conclusion**\n", + "\n", + "Since we do not see a clear indication of an AR(p) process, we will not be using target lags and will set the TARGET_LAGS parameter to None." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
**AutoML Experiment Settings**\n", + "\n", + "Based on the analysis performed, we should try the following settings for the AutoML experiment and use them in the \"2_run_experiment\" notebook:\n", + "\n", + "- STL_TYPE=None\n", + "- DIFFERENCE_SERIES=True\n", + "- TARGET_LAGS=None\n", + "
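Equivalently, as the parameter cell of `2_run_experiment`:

```python
# settings suggested by the analysis in this notebook
DIFFERENCE_SERIES = True
TARGET_LAGS = None
STL_TYPE = None
```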
          " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Appendix: ACF, PACF and Lag Selection\n", + "To do this, we will examine the ACF and partial ACF (PACF) plots of the differenced series. \n", + "\n", + "
            \n", + "
          • Question: What is the ACF?
          • \n", + "
              \n", + "
            • To understand the ACF, first let's look at the correlation coefficient $\\rho_{xz}$\n", + " \\begin{equation}\n", + " \\rho_{xz} = \\frac{\\sigma_{xz}}{\\sigma_{x} \\sigma_{zy}}\n", + " \\end{equation}\n", + "
            • \n", + " where $\\sigma_{xzy}$ is the covariance between two random variables $X$ and $Z$; $\\sigma_x$ and $\\sigma_z$ is the variance for $X$ and $Z$, respectively. The correlation coefficient measures the strength of linear relationship between two random variables. This metric can take any value from -1 to 1.
            • \n", + "
              \n", + "
            • The auto-correlation coefficient $\\rho_{Y_{t} Y_{t-k}}$ is the time series equivalent of the correlation coefficient, except instead of measuring linear association between two random variables $X$ and $Z$, it measures the strength of a linear relationship between a random variable $Y_t$ and its lag $Y_{t-k}$ for any positive interger value of $k$.
            • \n", + "
              \n", + "
            • To visualize the ACF for a particular lag, say lag 2, plot the second lag of a series $y_{t-2}$ on the x-axis, and plot the series itself $y_t$ on the y-axis. The autocorrelation coefficient is the slope of the best fitted regression line and can be interpreted as follows. A one unit increase in the lag of a variable one period ago leads to a $\\rho_{Y_{t} Y_{t-2}}$ units change in the variable in the current period. This interpreation can be applied to any lag.
            • \n", + "
              \n", + "
            • In the interpretation posted above we need to be careful not to confuse the word \"leads\" with \"causes\" since these are not the same thing. We do not know the lagged value of the varaible causes it to change. Afterall, there are probably many other features that may explain the movement in $Y_t$. All we are trying to do in this section is to identify situations when the variable contains the strong auto-regressive components that needs to be included in the model to improve forecast accuracy.
            • \n", + "
            \n", + "
          " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
            \n", + "
          • Question: What is the PACF?
          • \n", + "
              \n", + "
            • When describing the ACF we essentially running a regression between a partigular lag of a series, say, lag 4, and the series itself. What this implies is the regression coefficient for lag 4 captures the impact of everything that happens in lags 1, 2 and 3. In other words, if lag 1 is the most important lag and we exclude it from the regression, naturally, the regression model will assign the importance of the 1st lag to the 4th one. Partial auto-correlation function fixes this problem since it measures the contribution of each lag accounting for the information added by the intermediary lags. If we were to illustrate ACF and PACF for the fourth lag using the regression analogy, the difference is a follows: \n", + " \\begin{align}\n", + " Y_{t} &= a_{0} + a_{4} Y_{t-4} + e_{t} \\\\\n", + " Y_{t} &= b_{0} + b_{1} Y_{t-1} + b_{2} Y_{t-2} + b_{3} Y_{t-3} + b_{4} Y_{t-4} + \\varepsilon_{t} \\\\\n", + " \\end{align}\n", + "
            • \n", + "
              \n", + "
            • \n", + " Here, you can think of $a_4$ and $b_{4}$ as the auto- and partial auto-correlation coefficients for lag 4. Notice, in the second equation we explicitely accounting for the intermediate lags by adding them as regrerssors.\n", + "
            • \n", + "
            \n", + "
          " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
            \n", + "
          • Question: Auto-regressive pattern? What are we looking for?
          • \n", + "
              \n", + "
            • We are looking for a classical profiles for an AR(p) process such as an exponential decay of an ACF and a the first $p$ significant lags of the PACF. Let's examine the ACF/PACF profiles of the same simulated AR(2) shown in Section 3, and check if the ACF/PACF explanation are refelcted in these plots.
            • \n", + "
            • \n", + "
            • The autocorrelation coefficient for the 3rd lag is 0.6, which can be interpreted that a one unit increase in the value of the target varaible three periods ago leads to 0.6 units increase in the current period. However, the PACF plot shows that the partial autocorrealtion coefficient is zero (from a statistical point of view since it lies within the shaded region). This is happening because the 1st and 2nd lags are good predictors of the target variable. Ommiting these two lags from the regression results in the misleading conclusion that the third lag is a good prediciton.
            • \n", + "
              \n", + "
            • This is why it is important to examine both the ACF and the PACF plots when tring to determine the auto regressive order for the variable in question.
            • \n", + "
            \n", + "
          " + ] + } + ], + "metadata": { + "authors": [ + { + "name": "vlbejan" + } ], - "metadata": { - "authors": [ - { - "name": "vlbejan" - } - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.9" - } + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb index 7483fec05..91ffc5440 100644 --- a/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb +++ b/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-run-experiment.ipynb @@ -1,593 +1,593 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/2_run_experiment.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Running AutoML experiments\n", - "\n", - "See the `auto-ml-forecasting-univariate-recipe-experiment-settings` notebook on how to determine settings for seasonal features, target lags and whether the series needs to be differenced or not. To make experimentation user-friendly, the user has to specify several parameters: DIFFERENCE_SERIES, TARGET_LAGS and STL_TYPE. Once these parameters are set, the notebook will generate correct transformations and settings to run experiments, generate forecasts, compute inference set metrics and plot forecast vs actuals. It will also convert the forecast from first differences to levels (original units of measurement) if the DIFFERENCE_SERIES parameter is set to True before calculating inference set metrics.\n", - "\n", - "
          \n", - "\n", - "The output generated by this notebook is saved in the `experiment_output`folder." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Setup" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import os\n", - "import logging\n", - "import pandas as pd\n", - "import numpy as np\n", - "\n", - "import azureml.automl.runtime\n", - "from azureml.core.compute import AmlCompute\n", - "from azureml.core.compute import ComputeTarget\n", - "import matplotlib.pyplot as plt\n", - "from helper_functions import ts_train_test_split, compute_metrics\n", - "\n", - "import azureml.core\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.train.automl import AutoMLConfig\n", - "\n", - "\n", - "# set printing options\n", - "np.set_printoptions(precision=4, suppress=True, linewidth=100)\n", - "pd.set_option(\"display.max_columns\", 500)\n", - "pd.set_option(\"display.width\", 1000)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "As part of the setup you have already created a **Workspace**. You will also need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "amlcompute_cluster_name = \"recipe-cluster\"\n", - "\n", - "found = False\n", - "# Check if this compute target already exists in the workspace.\n", - "cts = ws.compute_targets\n", - "if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == \"AmlCompute\":\n", - " found = True\n", - " print(\"Found existing compute target.\")\n", - " compute_target = cts[amlcompute_cluster_name]\n", - "\n", - "if not found:\n", - " print(\"Creating a new compute target...\")\n", - " provisioning_config = AmlCompute.provisioning_configuration(\n", - " vm_size=\"STANDARD_D2_V2\", max_nodes=6\n", - " )\n", - "\n", - " # Create the cluster.\\n\",\n", - " compute_target = ComputeTarget.create(\n", - " ws, amlcompute_cluster_name, provisioning_config\n", - " )\n", - "\n", - "print(\"Checking cluster status...\")\n", - "# Can poll for a minimum number of nodes and for a specific timeout.\n", - "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", - "compute_target.wait_for_completion(\n", - " show_output=True, min_node_count=None, timeout_in_minutes=20\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Data\n", - "\n", - "Here, we will load the data from the csv file and drop the Covid period." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "main_data_loc = \"data\"\n", - "train_file_name = \"S4248SM144SCEN.csv\"\n", - "\n", - "TARGET_COLNAME = \"S4248SM144SCEN\"\n", - "TIME_COLNAME = \"observation_date\"\n", - "COVID_PERIOD_START = (\n", - " \"2020-03-01\" # start of the covid period. 
To be excluded from evaluation.\n", - ")\n", - "\n", - "# load data\n", - "df = pd.read_csv(os.path.join(main_data_loc, train_file_name))\n", - "df[TIME_COLNAME] = pd.to_datetime(df[TIME_COLNAME], format=\"%Y-%m-%d\")\n", - "df.sort_values(by=TIME_COLNAME, inplace=True)\n", - "\n", - "# remove the Covid period\n", - "df = df.query('{} <= \"{}\"'.format(TIME_COLNAME, COVID_PERIOD_START))" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Set parameters\n", - "\n", - "The first set of parameters is based on the analysis performed in the `auto-ml-forecasting-univariate-recipe-experiment-settings` notebook. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# set parameters based on the settings notebook analysis\n", - "DIFFERENCE_SERIES = True\n", - "TARGET_LAGS = None\n", - "STL_TYPE = None" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Next, define additional parameters to be used in the AutoML config class.\n", - "\n", - "
            \n", - "
          • FORECAST_HORIZON: The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 12 periods (i.e. 12 quarters). For more discussion of forecast horizons and guiding principles for setting them, please see the energy demand notebook . \n", - "
          • \n", - "
          • TIME_SERIES_ID_COLNAMES: The names of columns used to group a timeseries. It can be used to create multiple series. If time series identifier is not defined, the data set is assumed to be one time-series. This parameter is used with task type forecasting. Since we are working with a single series, this list is empty.\n", - "
          • \n", - "
          • BLOCKED_MODELS: Optional list of models to be blocked from consideration during model selection stage. At this point we want to consider all ML and Time Series models.\n", - "
              \n", - "
            • See the following link for a list of supported Forecasting models
            • \n", - "
            \n", - "
          • \n", - "
          \n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# set other parameters\n", - "FORECAST_HORIZON = 12\n", - "TIME_SERIES_ID_COLNAMES = []\n", - "BLOCKED_MODELS = []" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "To run AutoML, you also need to create an **Experiment**. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# choose a name for the run history container in the workspace\n", - "if isinstance(TARGET_LAGS, list):\n", - " TARGET_LAGS_STR = (\n", - " \"-\".join(map(str, TARGET_LAGS)) if (len(TARGET_LAGS) > 0) else None\n", - " )\n", - "else:\n", - " TARGET_LAGS_STR = TARGET_LAGS\n", - "\n", - "experiment_desc = \"diff-{}_lags-{}_STL-{}\".format(\n", - " DIFFERENCE_SERIES, TARGET_LAGS_STR, STL_TYPE\n", - ")\n", - "experiment_name = \"alcohol_{}\".format(experiment_desc)\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output[\"SDK version\"] = azureml.core.VERSION\n", - "output[\"Subscription ID\"] = ws.subscription_id\n", - "output[\"Workspace\"] = ws.name\n", - "output[\"SKU\"] = ws.sku\n", - "output[\"Resource Group\"] = ws.resource_group\n", - "output[\"Location\"] = ws.location\n", - "output[\"Run History Name\"] = experiment_name\n", - "pd.set_option(\"display.max_colwidth\", -1)\n", - "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", - "print(outputDf.T)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# create output directory\n", - "output_dir = \"experiment_output/{}\".format(experiment_desc)\n", - "if not os.path.exists(output_dir):\n", - " os.makedirs(output_dir)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# difference data and test for unit root\n", - "if DIFFERENCE_SERIES:\n", - " df_delta = df.copy()\n", - " df_delta[TARGET_COLNAME] = df[TARGET_COLNAME].diff()\n", - " df_delta.dropna(axis=0, inplace=True)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# split the data into train and test set\n", - "if DIFFERENCE_SERIES:\n", - " # generate train/inference sets using data in first differences\n", - " df_train, df_test = ts_train_test_split(\n", - " df_input=df_delta,\n", - " n=FORECAST_HORIZON,\n", - " time_colname=TIME_COLNAME,\n", - " ts_id_colnames=TIME_SERIES_ID_COLNAMES,\n", - " )\n", - "else:\n", - " df_train, df_test = ts_train_test_split(\n", - " df_input=df,\n", - " n=FORECAST_HORIZON,\n", - " time_colname=TIME_COLNAME,\n", - " ts_id_colnames=TIME_SERIES_ID_COLNAMES,\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Upload files to the Datastore\n", - "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the bike share data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "df_train.to_csv(\"train.csv\", index=False)\n", - "df_test.to_csv(\"test.csv\", index=False)\n", - "\n", - "datastore = ws.get_default_datastore()\n", - "datastore.upload_files(\n", - " files=[\"./train.csv\"],\n", - " target_path=\"uni-recipe-dataset/tabular/\",\n", - " overwrite=True,\n", - " show_progress=True,\n", - ")\n", - "datastore.upload_files(\n", - " files=[\"./test.csv\"],\n", - " target_path=\"uni-recipe-dataset/tabular/\",\n", - " overwrite=True,\n", - " show_progress=True,\n", - ")\n", - "\n", - "from azureml.core import Dataset\n", - "\n", - "train_dataset = Dataset.Tabular.from_delimited_files(\n", - " path=[(datastore, \"uni-recipe-dataset/tabular/train.csv\")]\n", - ")\n", - "test_dataset = Dataset.Tabular.from_delimited_files(\n", - " path=[(datastore, \"uni-recipe-dataset/tabular/test.csv\")]\n", - ")\n", - "\n", - "# print the first 5 rows of the Dataset\n", - "train_dataset.to_pandas_dataframe().reset_index(drop=True).head(5)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Config AutoML" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "time_series_settings = {\n", - " \"time_column_name\": TIME_COLNAME,\n", - " \"forecast_horizon\": FORECAST_HORIZON,\n", - " \"target_lags\": TARGET_LAGS,\n", - " \"use_stl\": STL_TYPE,\n", - " \"blocked_models\": BLOCKED_MODELS,\n", - " \"time_series_id_column_names\": TIME_SERIES_ID_COLNAMES,\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(\n", - " task=\"forecasting\",\n", - " debug_log=\"sample_experiment.log\",\n", - " primary_metric=\"normalized_root_mean_squared_error\",\n", - " experiment_timeout_minutes=20,\n", - " iteration_timeout_minutes=5,\n", - " enable_early_stopping=True,\n", - " training_data=train_dataset,\n", - " label_column_name=TARGET_COLNAME,\n", - " n_cross_validations=5,\n", - " verbosity=logging.INFO,\n", - " max_cores_per_iteration=-1,\n", - " compute_target=compute_target,\n", - " **time_series_settings,\n", - ")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "We will now run the experiment, you can go to Azure ML portal to view the run details." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output=False)\n", - "remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the best model\n", - "Below we select the best model from all the training iterations using get_output method." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run, fitted_model = remote_run.get_output()\n", - "fitted_model.steps" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Inference\n", - "\n", - "We now use the best fitted model from the AutoML Run to make forecasts for the test set. We will do batch scoring on the test dataset which should have the same schema as training dataset.\n", - "\n", - "The inference will run on a remote compute. In this example, it will re-use the training compute." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "test_experiment = Experiment(ws, experiment_name + \"_inference\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Retrieving forecasts from the model\n", - "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script`, which is uploaded and executed on the remote compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from run_forecast import run_remote_inference\n", - "\n", - "remote_run = run_remote_inference(\n", - " test_experiment=test_experiment,\n", - " compute_target=compute_target,\n", - " train_run=best_run,\n", - " test_dataset=test_dataset,\n", - " target_column_name=TARGET_COLNAME,\n", - ")\n", - "remote_run.wait_for_completion(show_output=False)\n", - "\n", - "remote_run.download_file(\"outputs/predictions.csv\", f\"{output_dir}/predictions.csv\")" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Download the prediction result for metrics calculation\n", - "The test data with predictions is saved in the artifact `outputs/predictions.csv`. We will use it to calculate accuracy metrics and visualize predictions versus actuals." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "X_trans = pd.read_csv(f\"{output_dir}/predictions.csv\", parse_dates=[TIME_COLNAME])\n", - "X_trans.head()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# convert forecast in differences to levels\n", - "def convert_fcst_diff_to_levels(fcst, yt, df_orig):\n", - " \"\"\"Convert forecast from first differences to levels.\"\"\"\n", - " fcst = fcst.reset_index(drop=False, inplace=False)\n", - " fcst[\"predicted_level\"] = fcst[\"predicted\"].cumsum()\n", - " fcst[\"predicted_level\"] = fcst[\"predicted_level\"].astype(float) + float(yt)\n", - " # merge actuals\n", - " out = pd.merge(\n", - " fcst, df_orig[[TIME_COLNAME, TARGET_COLNAME]], on=[TIME_COLNAME], how=\"inner\"\n", - " )\n", - " out.rename(columns={TARGET_COLNAME: \"actual_level\"}, inplace=True)\n", - " return out" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "if DIFFERENCE_SERIES:\n", - " # convert forecast in differences to the levels\n", - " INFORMATION_SET_DATE = max(df_train[TIME_COLNAME])\n", - " YT = df.query(\"{} == @INFORMATION_SET_DATE\".format(TIME_COLNAME))[TARGET_COLNAME]\n", - "\n", - " fcst_df = convert_fcst_diff_to_levels(fcst=X_trans, yt=YT, df_orig=df)\n", - "else:\n", - " fcst_df = X_trans.copy()\n", - " fcst_df[\"actual_level\"] = y_test\n", - " fcst_df[\"predicted_level\"] = y_predictions\n", - "\n", - "del X_trans" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Calculate metrics and save output" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# compute metrics\n", - "metrics_df = compute_metrics(fcst_df=fcst_df, metric_name=None, ts_id_colnames=None)\n", - "# save output\n", - "metrics_file_name = \"{}_metrics.csv\".format(experiment_name)\n", - "fcst_file_name = \"{}_forecast.csv\".format(experiment_name)\n", - "plot_file_name = 
\"{}_plot.pdf\".format(experiment_name)\n", - "\n", - "metrics_df.to_csv(os.path.join(output_dir, metrics_file_name), index=True)\n", - "fcst_df.to_csv(os.path.join(output_dir, fcst_file_name), index=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Generate and save visuals" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "plot_df = df.query('{} > \"2010-01-01\"'.format(TIME_COLNAME))\n", - "plot_df.set_index(TIME_COLNAME, inplace=True)\n", - "fcst_df.set_index(TIME_COLNAME, inplace=True)\n", - "\n", - "# generate and save plots\n", - "fig, ax = plt.subplots(dpi=180)\n", - "ax.plot(plot_df[TARGET_COLNAME], \"-g\", label=\"Historical\")\n", - "ax.plot(fcst_df[\"actual_level\"], \"-b\", label=\"Actual\")\n", - "ax.plot(fcst_df[\"predicted_level\"], \"-r\", label=\"Forecast\")\n", - "ax.legend()\n", - "ax.set_title(\"Forecast vs Actuals\")\n", - "ax.set_xlabel(TIME_COLNAME)\n", - "ax.set_ylabel(TARGET_COLNAME)\n", - "locs, labels = plt.xticks()\n", - "\n", - "plt.setp(labels, rotation=45)\n", - "plt.savefig(os.path.join(output_dir, plot_file_name))" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/forecasting-recipes-univariate/2_run_experiment.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Running AutoML experiments\n", + "\n", + "See the `auto-ml-forecasting-univariate-recipe-experiment-settings` notebook on how to determine settings for seasonal features, target lags and whether the series needs to be differenced or not. To make experimentation user-friendly, the user has to specify several parameters: DIFFERENCE_SERIES, TARGET_LAGS and STL_TYPE. Once these parameters are set, the notebook will generate correct transformations and settings to run experiments, generate forecasts, compute inference set metrics and plot forecast vs actuals. It will also convert the forecast from first differences to levels (original units of measurement) if the DIFFERENCE_SERIES parameter is set to True before calculating inference set metrics.\n", + "\n", + "
          \n", + "\n", + "The output generated by this notebook is saved in the `experiment_output`folder." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import logging\n", + "import pandas as pd\n", + "import numpy as np\n", + "\n", + "import azureml.automl.runtime\n", + "from azureml.core.compute import AmlCompute\n", + "from azureml.core.compute import ComputeTarget\n", + "import matplotlib.pyplot as plt\n", + "from helper_functions import ts_train_test_split, compute_metrics\n", + "\n", + "import azureml.core\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.train.automl import AutoMLConfig\n", + "\n", + "\n", + "# set printing options\n", + "np.set_printoptions(precision=4, suppress=True, linewidth=100)\n", + "pd.set_option(\"display.max_columns\", 500)\n", + "pd.set_option(\"display.width\", 1000)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As part of the setup you have already created a **Workspace**. You will also need to create a [compute target](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-set-up-training-targets#amlcompute) for your AutoML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "amlcompute_cluster_name = \"recipe-cluster\"\n", + "\n", + "found = False\n", + "# Check if this compute target already exists in the workspace.\n", + "cts = ws.compute_targets\n", + "if amlcompute_cluster_name in cts and cts[amlcompute_cluster_name].type == \"AmlCompute\":\n", + " found = True\n", + " print(\"Found existing compute target.\")\n", + " compute_target = cts[amlcompute_cluster_name]\n", + "\n", + "if not found:\n", + " print(\"Creating a new compute target...\")\n", + " provisioning_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_D2_V2\", max_nodes=6\n", + " )\n", + "\n", + " # Create the cluster.\\n\",\n", + " compute_target = ComputeTarget.create(\n", + " ws, amlcompute_cluster_name, provisioning_config\n", + " )\n", + "\n", + "print(\"Checking cluster status...\")\n", + "# Can poll for a minimum number of nodes and for a specific timeout.\n", + "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", + "compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Data\n", + "\n", + "Here, we will load the data from the csv file and drop the Covid period." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "main_data_loc = \"data\"\n", + "train_file_name = \"S4248SM144SCEN.csv\"\n", + "\n", + "TARGET_COLNAME = \"S4248SM144SCEN\"\n", + "TIME_COLNAME = \"observation_date\"\n", + "COVID_PERIOD_START = (\n", + " \"2020-03-01\" # start of the covid period. 
To be excluded from evaluation.\n", + ")\n", + "\n", + "# load data\n", + "df = pd.read_csv(os.path.join(main_data_loc, train_file_name))\n", + "df[TIME_COLNAME] = pd.to_datetime(df[TIME_COLNAME], format=\"%Y-%m-%d\")\n", + "df.sort_values(by=TIME_COLNAME, inplace=True)\n", + "\n", + "# remove the Covid period\n", + "df = df.query('{} <= \"{}\"'.format(TIME_COLNAME, COVID_PERIOD_START))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Set parameters\n", + "\n", + "The first set of parameters is based on the analysis performed in the `auto-ml-forecasting-univariate-recipe-experiment-settings` notebook. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# set parameters based on the settings notebook analysis\n", + "DIFFERENCE_SERIES = True\n", + "TARGET_LAGS = None\n", + "STL_TYPE = None" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, define additional parameters to be used in the AutoML config class.\n", + "\n", + "
            \n", + "
          • FORECAST_HORIZON: The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 12 periods (i.e. 12 quarters). For more discussion of forecast horizons and guiding principles for setting them, please see the energy demand notebook . \n", + "
          • \n", + "
          • TIME_SERIES_ID_COLNAMES: The names of columns used to group a timeseries. It can be used to create multiple series. If time series identifier is not defined, the data set is assumed to be one time-series. This parameter is used with task type forecasting. Since we are working with a single series, this list is empty.\n", + "
          • \n", + "
          • BLOCKED_MODELS: Optional list of models to be blocked from consideration during model selection stage. At this point we want to consider all ML and Time Series models.\n", + "
              \n", + "
            • See the following link for a list of supported Forecasting models
            • \n", + "
            \n", + "
          • \n", + "
          \n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# set other parameters\n", + "FORECAST_HORIZON = 12\n", + "TIME_SERIES_ID_COLNAMES = []\n", + "BLOCKED_MODELS = []" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To run AutoML, you also need to create an **Experiment**. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# choose a name for the run history container in the workspace\n", + "if isinstance(TARGET_LAGS, list):\n", + " TARGET_LAGS_STR = (\n", + " \"-\".join(map(str, TARGET_LAGS)) if (len(TARGET_LAGS) > 0) else None\n", + " )\n", + "else:\n", + " TARGET_LAGS_STR = TARGET_LAGS\n", + "\n", + "experiment_desc = \"diff-{}_lags-{}_STL-{}\".format(\n", + " DIFFERENCE_SERIES, TARGET_LAGS_STR, STL_TYPE\n", + ")\n", + "experiment_name = \"alcohol_{}\".format(experiment_desc)\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"SDK version\"] = azureml.core.VERSION\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"SKU\"] = ws.sku\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "print(outputDf.T)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# create output directory\n", + "output_dir = \"experiment_output/{}\".format(experiment_desc)\n", + "if not os.path.exists(output_dir):\n", + " os.makedirs(output_dir)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# difference data and test for unit root\n", + "if DIFFERENCE_SERIES:\n", + " df_delta = df.copy()\n", + " df_delta[TARGET_COLNAME] = df[TARGET_COLNAME].diff()\n", + " df_delta.dropna(axis=0, inplace=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# split the data into train and test set\n", + "if DIFFERENCE_SERIES:\n", + " # generate train/inference sets using data in first differences\n", + " df_train, df_test = ts_train_test_split(\n", + " df_input=df_delta,\n", + " n=FORECAST_HORIZON,\n", + " time_colname=TIME_COLNAME,\n", + " ts_id_colnames=TIME_SERIES_ID_COLNAMES,\n", + " )\n", + "else:\n", + " df_train, df_test = ts_train_test_split(\n", + " df_input=df,\n", + " n=FORECAST_HORIZON,\n", + " time_colname=TIME_COLNAME,\n", + " ts_id_colnames=TIME_SERIES_ID_COLNAMES,\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Upload files to the Datastore\n", + "The [Machine Learning service workspace](https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-workspace) is paired with the storage account, which contains the default data store. We will use it to upload the bike share data and create [tabular dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset?view=azure-ml-py) for training. A tabular dataset defines a series of lazily-evaluated, immutable operations to load data from the data source into tabular representation." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "df_train.to_csv(\"train.csv\", index=False)\n", + "df_test.to_csv(\"test.csv\", index=False)\n", + "\n", + "datastore = ws.get_default_datastore()\n", + "datastore.upload_files(\n", + " files=[\"./train.csv\"],\n", + " target_path=\"uni-recipe-dataset/tabular/\",\n", + " overwrite=True,\n", + " show_progress=True,\n", + ")\n", + "datastore.upload_files(\n", + " files=[\"./test.csv\"],\n", + " target_path=\"uni-recipe-dataset/tabular/\",\n", + " overwrite=True,\n", + " show_progress=True,\n", + ")\n", + "\n", + "from azureml.core import Dataset\n", + "\n", + "train_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"uni-recipe-dataset/tabular/train.csv\")]\n", + ")\n", + "test_dataset = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, \"uni-recipe-dataset/tabular/test.csv\")]\n", + ")\n", + "\n", + "# print the first 5 rows of the Dataset\n", + "train_dataset.to_pandas_dataframe().reset_index(drop=True).head(5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Configure AutoML" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "time_series_settings = {\n", + " \"time_column_name\": TIME_COLNAME,\n", + " \"forecast_horizon\": FORECAST_HORIZON,\n", + " \"target_lags\": TARGET_LAGS,\n", + " \"use_stl\": STL_TYPE,\n", + " \"blocked_models\": BLOCKED_MODELS,\n", + " \"time_series_id_column_names\": TIME_SERIES_ID_COLNAMES,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"forecasting\",\n", + " debug_log=\"sample_experiment.log\",\n", + " primary_metric=\"normalized_root_mean_squared_error\",\n", + " experiment_timeout_minutes=20,\n", + " iteration_timeout_minutes=5,\n", + " enable_early_stopping=True,\n", + " training_data=train_dataset,\n", + " label_column_name=TARGET_COLNAME,\n", + " n_cross_validations=5,\n", + " verbosity=logging.INFO,\n", + " max_cores_per_iteration=-1,\n", + " compute_target=compute_target,\n", + " **time_series_settings,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We will now run the experiment. You can go to the Azure ML portal to view the run details." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run = experiment.submit(automl_config, show_output=False)\n", + "remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieve the Best Run details\n", + "Below we retrieve the best Run object from among all the runs in the experiment." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run = remote_run.get_best_child()\n", + "best_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Inference\n", + "\n", + "We now use the best fitted model from the AutoML Run to make forecasts for the test set. We will do batch scoring on the test dataset, which should have the same schema as the training dataset.\n", + "\n", + "The inference will run on a remote compute. In this example, it will re-use the training compute."
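+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before scoring, it can be useful to confirm that the test set really does share the training schema. The next cell is a small illustrative sketch of such a check; it is not part of the original workflow and only assumes the `df_train`/`df_test` frames created earlier." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Sketch: the fitted model expects the training schema at scoring time,\n", + "# so the test set must expose the same columns as the training set.\n", + "assert list(df_train.columns) == list(df_test.columns), \"train/test schema mismatch\"\n", + "print(\"Schemas match:\", list(df_train.columns))"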
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "test_experiment = Experiment(ws, experiment_name + \"_inference\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieving forecasts from the model\n", + "We have created a function called `run_forecast` that submits the test data to the best model determined during the training run and retrieves forecasts. This function uses a helper script `forecasting_script` which is uploaded and executed on the remote compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from run_forecast import run_remote_inference\n", + "\n", + "remote_run = run_remote_inference(\n", + " test_experiment=test_experiment,\n", + " compute_target=compute_target,\n", + " train_run=best_run,\n", + " test_dataset=test_dataset,\n", + " target_column_name=TARGET_COLNAME,\n", + ")\n", + "remote_run.wait_for_completion(show_output=False)\n", + "\n", + "remote_run.download_file(\"outputs/predictions.csv\", f\"{output_dir}/predictions.csv\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Download the prediction result for metrics calculation\n", + "The test data with predictions are saved in the artifact `outputs/predictions.csv`. We will use it to calculate accuracy metrics and visualize predictions versus actuals." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "X_trans = pd.read_csv(f\"{output_dir}/predictions.csv\", parse_dates=[TIME_COLNAME])\n", + "X_trans.head()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# convert forecast in differences to levels\n", + "def convert_fcst_diff_to_levels(fcst, yt, df_orig):\n", + " \"\"\"Convert forecast from first differences to levels.\"\"\"\n", + " fcst = fcst.reset_index(drop=False, inplace=False)\n", + " fcst[\"predicted_level\"] = fcst[\"predicted\"].cumsum()\n", + " fcst[\"predicted_level\"] = fcst[\"predicted_level\"].astype(float) + float(yt)\n", + " # merge actuals\n", + " out = pd.merge(\n", + " fcst, df_orig[[TIME_COLNAME, TARGET_COLNAME]], on=[TIME_COLNAME], how=\"inner\"\n", + " )\n", + " out.rename(columns={TARGET_COLNAME: \"actual_level\"}, inplace=True)\n", + " return out" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if DIFFERENCE_SERIES:\n", + " # convert forecast in differences to the levels\n", + " INFORMATION_SET_DATE = max(df_train[TIME_COLNAME])\n", + " YT = df.query(\"{} == @INFORMATION_SET_DATE\".format(TIME_COLNAME))[TARGET_COLNAME]\n", + "\n", + " fcst_df = convert_fcst_diff_to_levels(fcst=X_trans, yt=YT, df_orig=df)\n", + "else:\n", + " fcst_df = X_trans.copy()\n", + " # predictions.csv is assumed to contain the actuals (TARGET_COLNAME)\n", + " # and the point forecast in the \"predicted\" column\n", + " fcst_df[\"actual_level\"] = fcst_df[TARGET_COLNAME]\n", + " fcst_df[\"predicted_level\"] = fcst_df[\"predicted\"]\n", + "\n", + "del X_trans" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Calculate metrics and save output" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# compute metrics\n", + "metrics_df = compute_metrics(fcst_df=fcst_df, metric_name=None, ts_id_colnames=None)\n", + "# save output\n", + "metrics_file_name = \"{}_metrics.csv\".format(experiment_name)\n", + "fcst_file_name = \"{}_forecast.csv\".format(experiment_name)\n", + "plot_file_name = 
\"{}_plot.pdf\".format(experiment_name)\n", + "\n", + "metrics_df.to_csv(os.path.join(output_dir, metrics_file_name), index=True)\n", + "fcst_df.to_csv(os.path.join(output_dir, fcst_file_name), index=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Generate and save visuals" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plot_df = df.query('{} > \"2010-01-01\"'.format(TIME_COLNAME))\n", + "plot_df.set_index(TIME_COLNAME, inplace=True)\n", + "fcst_df.set_index(TIME_COLNAME, inplace=True)\n", + "\n", + "# generate and save plots\n", + "fig, ax = plt.subplots(dpi=180)\n", + "ax.plot(plot_df[TARGET_COLNAME], \"-g\", label=\"Historical\")\n", + "ax.plot(fcst_df[\"actual_level\"], \"-b\", label=\"Actual\")\n", + "ax.plot(fcst_df[\"predicted_level\"], \"-r\", label=\"Forecast\")\n", + "ax.legend()\n", + "ax.set_title(\"Forecast vs Actuals\")\n", + "ax.set_xlabel(TIME_COLNAME)\n", + "ax.set_ylabel(TARGET_COLNAME)\n", + "locs, labels = plt.xticks()\n", + "\n", + "plt.setp(labels, rotation=45)\n", + "plt.savefig(os.path.join(output_dir, plot_file_name))" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "vlbejan" + } ], - "metadata": { - "authors": [ - { - "name": "vlbejan" - } - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.9" - } - }, - "nbformat": 4, - "nbformat_minor": 4 -} \ No newline at end of file + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.9" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/README.md b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/README.md new file mode 100644 index 000000000..9aea82592 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/README.md @@ -0,0 +1,18 @@ +--- +page_type: sample +languages: +- python +products: +- azure-machine-learning +description: Notebook showing how to use Azure Machine Learning pipelines to do Batch Predictions with an Image Classification model trained using AutoML. 
+--- + +# Batch Scoring with an Image Classification Model +- Dataset: Toy dataset with images of products found in a fridge + - **[Jupyter Notebook](auto-ml-image-classification-multiclass-batch-scoring.ipynb)** + - register an Image Classification Multi-Class model already trained using AutoML + - create an Inference Dataset + - provision compute targets and create a Batch Scoring script + - use ParallelRunStep to do batch scoring + - build, run, and publish a pipeline + - enable a REST endpoint for the pipeline diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/auto-ml-image-classification-multiclass-batch-scoring.ipynb b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/auto-ml-image-classification-multiclass-batch-scoring.ipynb new file mode 100644 index 000000000..bfee26b91 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/auto-ml-image-classification-multiclass-batch-scoring.ipynb @@ -0,0 +1,950 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License.\n", + "\n", + "# Batch Predictions for an Image Classification model trained using AutoML\n", + "In this notebook, we go over how you can use [Azure Machine Learning pipelines](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-pipeline-batch-scoring-classification) to run a batch scoring image classification job.\n", + "\n", + "**Please note:** For this notebook you can use an existing image classification model trained using AutoML for Images or use the simple model training we included below for convenience. For detailed instructions on how to train an image classification model with AutoML, please refer to the official [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models) and to the [image classification multiclass notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "Please follow the [\"Setup a new conda environment\"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#3-setup-a-new-conda-environment) instructions to get started." 
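+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If your environment is on an older SDK than the one this notebook was written against, the commented cell below sketches the upgrade suggested by the version check that follows (run it, then restart the kernel)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Optional sketch: upgrade the Azure ML SDK, then restart the kernel.\n", + "# !pip install --upgrade azureml-sdk"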
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import azureml.core\n", + "\n", + "print(\"This notebook was created using version 1.35.0 of the Azure ML SDK.\")\n", + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK.\")\n", + "assert (\n", + " azureml.core.VERSION >= \"1.35\"\n", + "), \"Please upgrade the Azure ML SDK by running '!pip install --upgrade azureml-sdk' then restart the kernel.\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## You will perform the following tasks:\n", + "\n", + "* Register a Model already trained using AutoML for Image Classification.\n", + "* Create an Inference Dataset.\n", + "* Provision compute targets and create a Batch Scoring script.\n", + "* Use ParallelRunStep to do batch scoring.\n", + "* Build, run, and publish a pipeline.\n", + "* Enable a REST endpoint for the pipeline." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Workspace setup\n", + "\n", + "An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#workspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.\n", + "\n", + "Create an Azure ML Workspace within your Azure subscription or load an existing workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "\n", + "ws = Workspace.from_config()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Workspace default datastore is used to store inference input images and outputs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def_data_store = ws.get_default_datastore()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute target setup\n", + "You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model." 
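+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you are unsure which GPU SKUs are available in your workspace region, the sketch below lists the supported VM sizes and keeps the GPU-backed ones. This is illustrative only and assumes the `ws` object created above." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute\n", + "\n", + "# Sketch: list the VM sizes supported in the workspace region and\n", + "# keep those that have at least one GPU.\n", + "gpu_sizes = [s[\"name\"] for s in AmlCompute.supported_vmsizes(ws) if s.get(\"gpus\", 0) > 0]\n", + "print(gpu_sizes[:10])"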
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute, ComputeTarget\n", + "\n", + "cluster_name = \"gpu-cluster-nc6\"\n", + "\n", + "try:\n", + " compute_target = ws.compute_targets[cluster_name]\n", + " print(\"Found existing compute target.\")\n", + "except KeyError:\n", + " print(\"Creating a new compute target...\")\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"Standard_NC6\",\n", + " idle_seconds_before_scaledown=600,\n", + " min_nodes=0,\n", + " max_nodes=4,\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", + "# Can poll for a minimum number of nodes and for a specific timeout.\n", + "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", + "compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train an Image Classification model\n", + "\n", + "In this section we will do a quick model training run to use for the batch scoring. For a detailed example on how to train an image classification model, please refer to the official [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models) or to the [image classification multiclass notebook](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb). If you already have a model trained in the same workspace, you can skip to section [\"Create data objects\"](#Create-data-objects)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Experiment Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "\n", + "experiment_name = \"automl-image-batchscoring\"\n", + "experiment = Experiment(ws, name=experiment_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Download dataset with input Training Data\n", + "\n", + "All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE)."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import urllib\n", + "from zipfile import ZipFile\n", + "\n", + "# download data\n", + "download_url = \"https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip\"\n", + "data_file = \"./fridgeObjects.zip\"\n", + "urllib.request.urlretrieve(download_url, filename=data_file)\n", + "\n", + "# extract files\n", + "with ZipFile(data_file, \"r\") as zip:\n", + " print(\"extracting files...\")\n", + " zip.extractall()\n", + " print(\"done\")\n", + "# delete zip file\n", + "os.remove(data_file)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Convert the downloaded data to JSONL" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import os\n", + "\n", + "src = \"./fridgeObjects/\"\n", + "train_validation_ratio = 5\n", + "\n", + "# Retrieving the default datastore that was automatically created when we set up the workspace\n", + "workspaceblobstore = ws.get_default_datastore().name\n", + "\n", + "# Path to the training and validation files\n", + "train_annotations_file = os.path.join(src, \"train_annotations.jsonl\")\n", + "validation_annotations_file = os.path.join(src, \"validation_annotations.jsonl\")\n", + "\n", + "# sample json line dictionary\n", + "json_line_sample = {\n", + " \"image_url\": \"AmlDatastore://\"\n", + " + workspaceblobstore\n", + " + \"/\"\n", + " + os.path.basename(os.path.dirname(src)),\n", + " \"label\": \"\",\n", + "}\n", + "\n", + "index = 0\n", + "# Scan each subdirectory and generate a jsonl line per image\n", + "with open(train_annotations_file, \"w\") as train_f:\n", + " with open(validation_annotations_file, \"w\") as validation_f:\n", + " for className in os.listdir(src):\n", + " subDir = src + className\n", + " if not os.path.isdir(subDir):\n", + " continue\n", + " # Scan each subdirectory\n", + " print(\"Parsing \" + subDir)\n", + " for image in os.listdir(subDir):\n", + " json_line = dict(json_line_sample)\n", + " json_line[\"image_url\"] += f\"/{className}/{image}\"\n", + " json_line[\"label\"] = className\n", + "\n", + " if index % train_validation_ratio == 0:\n", + " # validation annotation\n", + " validation_f.write(json.dumps(json_line) + \"\\n\")\n", + " else:\n", + " # train annotation\n", + " train_f.write(json.dumps(json_line) + \"\\n\")\n", + " index += 1" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Upload the JSONL file and images to Datastore" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Retrieving the default datastore that was automatically created when we set up the workspace\n", + "ds = ws.get_default_datastore()\n", + "ds.upload(src_dir=\"./fridgeObjects\", target_path=\"fridgeObjects\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Create and register datasets in workspace" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "from azureml.data import DataType\n", + "\n", + "# get existing training dataset\n", + "training_dataset_name = \"fridgeObjectsTrainingDataset\"\n", + "if training_dataset_name in ws.datasets:\n", + " training_dataset = ws.datasets.get(training_dataset_name)\n", + " print(\"Found the training dataset\", 
training_dataset_name)\n", + "else:\n", + " # create training dataset\n", + " training_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"fridgeObjects/train_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " training_dataset = training_dataset.register(\n", + " workspace=ws, name=training_dataset_name\n", + " )\n", + "# get existing validation dataset\n", + "validation_dataset_name = \"fridgeObjectsValidationDataset\"\n", + "if validation_dataset_name in ws.datasets:\n", + " validation_dataset = ws.datasets.get(validation_dataset_name)\n", + " print(\"Found the validation dataset\", validation_dataset_name)\n", + "else:\n", + " # create validation dataset\n", + " validation_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"fridgeObjects/validation_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " validation_dataset = validation_dataset.register(\n", + " workspace=ws, name=validation_dataset_name\n", + " )\n", + "print(\"Training dataset name: \" + training_dataset.name)\n", + "print(\"Validation dataset name: \" + validation_dataset.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Submit training 1 training run with default hyperparameters" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import GridParameterSampling, choice\n", + "\n", + "image_config_vit = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_CLASSIFICATION,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " hyperparameter_sampling=GridParameterSampling({\"model_name\": choice(\"vitb16r224\")}),\n", + " iterations=1,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(image_config_vit)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create data objects\n", + "\n", + "When building pipelines, `Dataset` objects are used for reading data from workspace datastores, and `PipelineData` objects are used for transferring intermediate data between pipeline steps.\n", + "\n", + "This batch scoring example only uses one pipeline step, but in use-cases with multiple steps, the typical flow will include:\n", + "\n", + "1. Using `Dataset` objects as inputs to fetch raw data, performing some transformations, then output a `PipelineData` object. \n", + "1. Use the previous step's `PipelineData` **output object** as an **input object**, repeated for subsequent steps.\n", + "\n", + "For this scenario you create `Dataset` objects corresponding to the datastore directories for the input images. You also create a `PipelineData` object for the batch scoring output data. An object reference in the `outputs` array becomes available as an **input** for a subsequent pipeline step, for scenarios where there is more than one step. 
In this case we are just going to build a single-step pipeline.\n", + "\n", + "It is assumed that an image classification training run was already performed in this workspace and the files are already in the datastore. If this is not the case, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models) to learn how to train an image classification model with AutoML.\n", + "\n", + "All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.dataset import Dataset\n", + "from azureml.pipeline.core import PipelineData\n", + "\n", + "input_images = Dataset.File.from_files((def_data_store, \"fridgeObjects/**/*.jpg\"))\n", + "\n", + "output_dir = PipelineData(name=\"scores\", datastore=def_data_store)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, we need to register the input datasets for batch scoring with the workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "input_images = input_images.register(\n", + " workspace=ws, name=\"fridgeObjects_scoring_images\", create_new_version=True\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Retrieve the environment and metrics from the training run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.experiment import Experiment\n", + "from azureml.core import Run\n", + "\n", + "experiment_name = \"automl-image-batchscoring\"\n", + "# If your model was not trained with this notebook, replace the id below\n", + "# with the run id of the child training run (i.e., the one ending with HD_0)\n", + "training_run_id = automl_image_run.id + \"_HD_0\"\n", + "exp = Experiment(ws, experiment_name)\n", + "training_run = Run(exp, training_run_id)\n", + "\n", + "# The below will give only the requested metric\n", + "metrics = training_run.get_metrics(\"accuracy\")\n", + "best_metric = max(metrics[\"accuracy\"])\n", + "print(\"best_metric:\", best_metric)\n", + "\n", + "# Retrieve the training environment\n", + "env = training_run.get_environment()\n", + "print(env)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Register model with metric and environment tags\n", + "\n", + "Now you register the model to your workspace, which allows you to easily retrieve it in the pipeline process. In the `register_model()` call, the `model_name` parameter is the key you use to locate your model throughout the SDK.\n", + "Tag the model with the metrics and the environment used to train the model."
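+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As an aside, tags are what make the registered model easy to find again. Once the next cell has registered the model, a query like the sketch below (illustrative, not part of the original flow) filters registered models by the environment-name tag." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import Model\n", + "\n", + "# Sketch: query registered models by tag value; this returns an empty\n", + "# list until the registration cell below has been run.\n", + "for m in Model.list(ws, tags=[[\"env_name\", env.name]]):\n", + " print(m.name, m.version, m.tags)"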
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import Model\n", + "\n", + "tags = dict()\n", + "tags[\"accuracy\"] = best_metric\n", + "tags[\"env_name\"] = env.name\n", + "tags[\"env_version\"] = env.version\n", + "\n", + "model_name = \"fridgeObjectsClassifier\"\n", + "model = training_run.register_model(\n", + " model_name=model_name, model_path=\"train_artifacts\", tags=tags\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# List the models from the workspace\n", + "models = Model.list(ws, name=model_name, latest=True)\n", + "print(model.name)\n", + "print(model.tags)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Write a scoring script" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To do the scoring, you create a batch scoring script `batch_scoring.py` and write it to the scripts folder in the current directory. The script takes a minibatch of input images, applies the classification model, and outputs the predictions to a results file.\n", + "\n", + "The script `batch_scoring.py` takes the following parameters, which get passed from the `ParallelRunStep` that you create later:\n", + "\n", + "- `--model_name`: the name of the model being used\n", + "\n", + "While creating the batch scoring script, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. This will help identify the right model settings to be used in the batch scoring script's `init()` method while loading the model.\n", + "Note: The batch scoring script we generate in the subsequent step is different from the scoring script generated by the training runs shown in the screenshot below. We refer to it just to identify the right model settings to be used in the batch scoring script.\n", + "\n", + "![Training run outputs](ui_outputs.PNG \"Training run outputs\")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# View the batch scoring script. Use the model settings as appropriate for your model.\n", + "with open(\"./scripts/batch_scoring.py\", \"r\") as f:\n", + " print(f.read())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Build and run the pipeline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create the parallel-run configuration to wrap the inference script\n", + "Create the pipeline run configuration specifying the script, environment configuration, and parameters. Specify the compute target you already attached to your workspace as the target of execution of the script. This will set the run configuration of the ParallelRunStep we will define next.\n", + "\n", + "Refer to this [site](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines/parallel-run) for more details on the ParallelRunStep class of Azure Machine Learning Pipelines."
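+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For orientation, the entry script wrapped by this configuration follows the standard `ParallelRunStep` contract, sketched below (bodies elided; the actual `batch_scoring.py` was printed above):\n", + "\n", + "```\n", + "def init():\n", + "    # one-time setup per worker process, e.g. parse arguments and load the model\n", + "    ...\n", + "\n", + "def run(mini_batch):\n", + "    # called once per mini-batch of input files; return one result per item\n", + "    ...\n", + "```"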
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.steps import ParallelRunConfig\n", + "\n", + "parallel_run_config = ParallelRunConfig(\n", + " environment=env,\n", + " entry_script=\"batch_scoring.py\",\n", + " source_directory=\"scripts\",\n", + " output_action=\"append_row\",\n", + " append_row_file_name=\"parallel_run_step.txt\",\n", + " mini_batch_size=\"20\", # Num files to process in one call\n", + " error_threshold=1,\n", + " compute_target=compute_target,\n", + " process_count_per_node=2,\n", + " node_count=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create the pipeline step\n", + "\n", + "A pipeline step is an object that encapsulates everything you need for running a pipeline, including:\n", + "\n", + "* environment and dependency settings\n", + "* the compute resource to run the pipeline on\n", + "* input and output data, and any custom parameters\n", + "* reference to a script to run during the step\n", + "\n", + "There are multiple classes that inherit from the parent class [`PipelineStep`](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/?view=azure-ml-py) to assist with building a step using certain frameworks and stacks. In this example, you use the [`ParallelRunStep`](https://docs.microsoft.com/en-us/python/api/azureml-contrib-pipeline-steps/azureml.contrib.pipeline.steps.parallelrunstep?view=azure-ml-py) class to define your step logic using a scoring script. `ParallelRunStep` executes the script in a distributed fashion.\n", + "\n", + "The pipelines infrastructure uses the `ArgumentParser` class to pass parameters into pipeline steps. For example, in the code below, the first argument `--model_name` is given the property identifier `model_name`. In the `init()` function, this property is accessed using `Model.get_model_path(args.model_name)`.\n", + "\n", + "Note: The pipeline in this tutorial only has one step and writes the output to a file, but for multi-step pipelines, you also use `ArgumentParser` to define a directory to write output data for input to subsequent steps. See the [notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) for an example of passing data between multiple pipeline steps using the `ArgumentParser` design pattern." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.steps import ParallelRunStep\n", + "from datetime import datetime\n", + "\n", + "parallel_step_name = \"batchscoring-\" + datetime.now().strftime(\"%Y%m%d%H%M\")\n", + "\n", + "arguments = [\"--model_name\", model_name]\n", + "\n", + "# Specify the inference batch_size; otherwise the default value is used.\n", + "# (This is different from the mini_batch_size above.)\n", + "# NOTE: Large batch sizes may result in OOM errors.\n", + "# arguments = arguments + [\"--batch_size\", \"20\"]\n", + "\n", + "batch_score_step = ParallelRunStep(\n", + " name=parallel_step_name,\n", + " inputs=[input_images.as_named_input(\"input_images\")],\n", + " output=output_dir,\n", + " arguments=arguments,\n", + " parallel_run_config=parallel_run_config,\n", + " allow_reuse=False,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For a list of all classes for different step types, see the [steps package](https://docs.microsoft.com/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Run the pipeline\n", + "\n", + "Now you run the pipeline. First create a `Pipeline` object with your workspace reference and the pipeline step you created. The `steps` parameter is an array of steps, and in this case, there is only one step for batch scoring. To build pipelines with multiple steps, you place the steps in order in this array.\n", + "\n", + "Next use the `Experiment.submit()` function to submit the pipeline for execution. The `wait_for_completion` function will output logs during the pipeline build process, which allows you to see current progress.\n", + "\n", + "Note: The first pipeline run takes roughly **15 minutes**, as all dependencies must be downloaded, a Docker image is created, and the Python environment is provisioned/created. Running it again takes significantly less time as those resources are reused. However, total run time depends on the workload of your scripts and processes running in each pipeline step."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "from azureml.pipeline.core import Pipeline\n", + "\n", + "pipeline = Pipeline(workspace=ws, steps=[batch_score_step])\n", + "pipeline_run = Experiment(ws, \"batch_scoring_automl_image\").submit(pipeline)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# This will output information about the pipeline run, including the link to the details page in the portal.\n", + "pipeline_run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Wait for the run to complete and show the output log in the console\n", + "pipeline_run.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Download and review output" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import tempfile\n", + "import os\n", + "\n", + "batch_run = pipeline_run.find_step_run(batch_score_step.name)[0]\n", + "batch_output = batch_run.get_output_data(output_dir.name)\n", + "\n", + "target_dir = tempfile.mkdtemp()\n", + "batch_output.download(local_path=target_dir)\n", + "result_file = os.path.join(\n", + " target_dir, batch_output.path_on_datastore, parallel_run_config.append_row_file_name\n", + ")\n", + "print(result_file)\n", + "\n", + "# Print the first five lines of the output\n", + "with open(result_file) as f:\n", + " for x in range(5):\n", + " print(next(f))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Choose a random file for visualization" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import random\n", + "import json\n", + "\n", + "with open(result_file, \"r\") as f:\n", + " contents = f.readlines()\n", + "rand_file = contents[random.randrange(len(contents))]\n", + "prediction = json.loads(rand_file)\n", + "print(prediction[\"filename\"])\n", + "print(prediction[\"probs\"])\n", + "print(prediction[\"labels\"])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Download the image file from the datastore\n", + "path = (\n", + " \"fridgeObjects\"\n", + " + \"/\"\n", + " + prediction[\"filename\"].split(\"/\")[-2]\n", + " + \"/\"\n", + " + prediction[\"filename\"].split(\"/\")[-1]\n", + ")\n", + "path_on_datastore = def_data_store.path(path)\n", + "single_image_ds = Dataset.File.from_files(path=path_on_datastore, validate=False)\n", + "image = single_image_ds.download()[0]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import matplotlib.pyplot as plt\n", + "import matplotlib.image as mpimg\n", + "from PIL import Image\n", + "import numpy as np\n", + "import json\n", + "\n", + "IMAGE_SIZE = (18, 12)\n", + "plt.figure(figsize=IMAGE_SIZE)\n", + "img_np = mpimg.imread(image)\n", + "img = Image.fromarray(img_np.astype(\"uint8\"), \"RGB\")\n", + "x, y = img.size\n", + "\n", + "fig, ax = plt.subplots(1, figsize=(15, 15))\n", + "# Display the image\n", + "ax.imshow(img_np)\n", + "\n", + "label_index = np.argmax(prediction[\"probs\"])\n", + "label = prediction[\"labels\"][label_index]\n", + "conf_score = prediction[\"probs\"][label_index]\n", + "\n", + "display_text = \"{} 
({})\".format(label, round(conf_score, 3))\n", + "print(display_text)\n", + "\n", + "color = \"red\"\n", + "plt.text(30, 30, display_text, color=color, fontsize=30)\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Publish and run from REST endpoint" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Run the following code to publish the pipeline to your workspace. In your workspace in the portal, you can see metadata for the pipeline including run history and durations. You can also run the pipeline manually from the portal.\n", + "\n", + "Additionally, publishing the pipeline enables a REST endpoint to rerun the pipeline from any HTTP library on any platform." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "published_pipeline = pipeline_run.publish_pipeline(\n", + " name=\"automl-image-batch-scoring\",\n", + " description=\"Batch scoring using Automl for Image\",\n", + " version=\"1.0\",\n", + ")\n", + "\n", + "published_pipeline" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To run the pipeline from the REST endpoint, you first need an OAuth2 Bearer-type authentication header. This example uses interactive authentication for illustration purposes, but for most production scenarios requiring automated or headless authentication, use service principal authentication as [described in this notebook](https://aka.ms/pl-restep-auth).\n", + "\n", + "Service principal authentication involves creating an **App Registration** in **Azure Active Directory**, generating a client secret, and then granting your service principal **role access** to your machine learning workspace. You then use the [`ServicePrincipalAuthentication`](https://docs.microsoft.com/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication?view=azure-ml-py) class to manage your auth flow.\n", + "\n", + "Both `InteractiveLoginAuthentication` and `ServicePrincipalAuthentication` inherit from `AbstractAuthentication`, and in both cases you use the `get_authentication_header()` function in the same way to fetch the header." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.authentication import InteractiveLoginAuthentication\n", + "\n", + "interactive_auth = InteractiveLoginAuthentication()\n", + "auth_header = interactive_auth.get_authentication_header()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Get the REST url from the `endpoint` property of the published pipeline object. You can also find the REST url in your workspace in the portal. Build an HTTP POST request to the endpoint, specifying your authentication header. Additionally, add a JSON payload object with the experiment name and the batch size parameter. As a reminder, the `process_count_per_node` is passed through to `ParallelRunStep` because you defined it is defined as a `PipelineParameter` object in the step configuration.\n", + "\n", + "Make the request to trigger the run. Access the `Id` key from the response dictionary to get the value of the run id." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "\n", + "rest_endpoint = published_pipeline.endpoint\n", + "response = requests.post(\n", + " rest_endpoint,\n", + " headers=auth_header,\n", + " json={\n", + " \"ExperimentName\": \"batch_scoring\",\n", + " \"ParameterAssignments\": {\"process_count_per_node\": 2},\n", + " },\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "try:\n", + " response.raise_for_status()\n", + "except Exception:\n", + " raise Exception(\n", + " \"Received bad response from the endpoint: {}\\n\"\n", + " \"Response Code: {}\\n\"\n", + " \"Headers: {}\\n\"\n", + " \"Content: {}\".format(\n", + " rest_endpoint, response.status_code, response.headers, response.content\n", + " )\n", + " )\n", + "run_id = response.json().get(\"Id\")\n", + "print(\"Submitted pipeline run: \", run_id)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Use the run id to monitor the status of the new run. This will take another 10-15 min to run and will look similar to the previous pipeline run, so if you don't need to see another pipeline run, you can skip watching the full output." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.pipeline.core.run import PipelineRun\n", + "\n", + "published_pipeline_run = PipelineRun(ws.experiments[\"batch_scoring\"], run_id)\n", + "published_pipeline_run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Wait for the run to complete and show the output log in the console\n", + "published_pipeline_run.wait_for_completion(show_output=True)" + ] + } + ], + "metadata": { + "authors": [ + { + "name": [ + "sanpil", + "trmccorm", + "pansav" + ] + } + ], + "categories": [ + "tutorials" + ], + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.8.8" + }, + "metadata": { + "interpreter": { + "hash": "0f25b6eb4724eea488a4edd67dd290abce7d142c09986fc811384b5aebc0585a" + } + }, + "msauthor": "trbye" + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/scripts/batch_scoring.py b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/scripts/batch_scoring.py new file mode 100644 index 000000000..7b6ad55d4 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/scripts/batch_scoring.py @@ -0,0 +1,69 @@ +# Copyright (c) Microsoft. All rights reserved. +# Licensed under the MIT license. 
+ +import os +import argparse +import json + +from azureml.core.model import Model +from azureml.automl.core.shared import logging_utilities + +try: + from azureml.automl.dnn.vision.common.logging_utils import get_logger + from azureml.automl.dnn.vision.common.model_export_utils import ( + load_model, + run_inference_batch, + ) + from azureml.automl.dnn.vision.classification.inference.score import ( + _score_with_model, + ) + from azureml.automl.dnn.vision.common.utils import _set_logging_parameters +except ImportError: + from azureml.contrib.automl.dnn.vision.common.logging_utils import get_logger + from azureml.contrib.automl.dnn.vision.common.model_export_utils import ( + load_model, + run_inference_batch, + ) + from azureml.contrib.automl.dnn.vision.classification.inference.score import ( + _score_with_model, + ) + from azureml.contrib.automl.dnn.vision.common.utils import _set_logging_parameters + +TASK_TYPE = "image-classification" +logger = get_logger("azureml.automl.core.scoring_script_images") + + +def init(): + global model + global batch_size + + # Set up logging + _set_logging_parameters(TASK_TYPE, {}) + + parser = argparse.ArgumentParser( + description="Retrieve model_name and batch_size from arguments." + ) + parser.add_argument("--model_name", dest="model_name", required=True) + parser.add_argument("--batch_size", dest="batch_size", type=int, required=False) + args, _ = parser.parse_known_args() + + batch_size = args.batch_size + + model_path = os.path.join(Model.get_model_path(args.model_name), "model.pt") + print(model_path) + + try: + logger.info("Loading model from path: {}.".format(model_path)) + model_settings = {} + model = load_model(TASK_TYPE, model_path, **model_settings) + logger.info("Loading successful.") + except Exception as e: + logging_utilities.log_traceback(e, logger) + raise + + +def run(mini_batch): + logger.info("Running inference.") + result = run_inference_batch(model, mini_batch, _score_with_model, batch_size) + logger.info("Finished inferencing.") + return result diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/ui_outputs.PNG b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/ui_outputs.PNG new file mode 100644 index 000000000..605103d30 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass-batch-scoring/ui_outputs.PNG differ diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/README.md b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/README.md new file mode 100644 index 000000000..bd6fc07d9 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/README.md @@ -0,0 +1,15 @@ +--- +page_type: sample +languages: +- python +products: +- azure-machine-learning +description: Notebook showing how to use AutoML for training an Image Classification Multi-Class model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. 
+--- + +# Image Classification Multi-Class using AutoML for Images +- Dataset: Toy dataset with images of products found in a fridge + - **[Jupyter Notebook](auto-ml-image-classification-multiclass.ipynb)** + - train an Image Classification Multi-Class model using AutoML + - tune hyperparameters of the model to optimize model performance + - deploy the model to use in inference scenarios diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb new file mode 100644 index 000000000..5c9ff5942 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/auto-ml-image-classification-multiclass.ipynb @@ -0,0 +1,744 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License.\n", + "\n", + "# Training an Image Classification Multi-Class model using AutoML\n", + "In this notebook, we go over how you can use AutoML for training an Image Classification Multi-Class model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. For detailed information please refer to the [documentation of AutoML for Images](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![img](example_image_classification_multiclass_predictions.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "Please follow the [\"Setup a new conda environment\"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#3-setup-a-new-conda-environment) instructions to get started." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import azureml.core\n", + "\n", + "print(\"This notebook was created using version 1.35.0 of the Azure ML SDK.\")\n", + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK.\")\n", + "assert (\n", + " azureml.core.VERSION >= \"1.35\"\n", + "), \"Please upgrade the Azure ML SDK by running '!pip install --upgrade azureml-sdk' then restart the kernel.\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Workspace setup\n", + "In order to train and deploy models in Azure ML, you will first need to set up a workspace.\n", + "\n", + "An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#workspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. 
In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.\n", + "\n", + "Create an Azure ML Workspace within your Azure subscription or load an existing workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "\n", + "ws = Workspace.from_config()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute target setup\n", + "You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute, ComputeTarget\n", + "\n", + "cluster_name = \"gpu-cluster-nc6\"\n", + "\n", + "try:\n", + " compute_target = ws.compute_targets[cluster_name]\n", + " print(\"Found existing compute target.\")\n", + "except KeyError:\n", + " print(\"Creating a new compute target...\")\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"Standard_NC6\",\n", + " idle_seconds_before_scaledown=600,\n", + " min_nodes=0,\n", + " max_nodes=4,\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", + "# Can poll for a minimum number of nodes and for a specific timeout.\n", + "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", + "compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Experiment Setup\n", + "Create an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#experiments) in your workspace to track your model training runs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "\n", + "experiment_name = \"automl-image-multiclass\"\n", + "experiment = Experiment(ws, name=experiment_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Dataset with input Training Data\n", + "\n", + "In order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that you have exported from a [Data Labeling](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-data) project, or create a new Tabular Dataset with your labeled training data." 
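+ ,
+ "\n",
+ "For instance, if you have already exported labels from a Data Labeling project as an Azure ML dataset, a minimal sketch for retrieving it by name looks like this (the dataset name is a hypothetical placeholder):\n",
+ "\n",
+ "```python\n",
+ "from azureml.core import Dataset\n",
+ "\n",
+ "# Hypothetical name: substitute the dataset exported from your own labeling project\n",
+ "labeled_dataset = Dataset.get_by_name(ws, name=\"my-labeling-project-export\")\n",
+ "```"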
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In this notebook, we use a toy dataset called Fridge Objects, which consists of 134 images of beverage containers from 4 classes {can, carton, milk bottle, water bottle}, photographed against different backgrounds.\n",
+ "\n",
+ "All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).\n",
+ "\n",
+ "We first download and unzip the data locally."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import urllib.request\n",
+ "from zipfile import ZipFile\n",
+ "\n",
+ "# download data\n",
+ "download_url = \"https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/fridgeObjects.zip\"\n",
+ "data_file = \"./fridgeObjects.zip\"\n",
+ "urllib.request.urlretrieve(download_url, filename=data_file)\n",
+ "\n",
+ "# extract files\n",
+ "with ZipFile(data_file, \"r\") as zf:\n",
+ "    print(\"extracting files...\")\n",
+ "    zf.extractall()\n",
+ "    print(\"done\")\n",
+ "# delete zip file\n",
+ "os.remove(data_file)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This is a sample image from this dataset:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.display import Image\n",
+ "\n",
+ "sample_image = \"./fridgeObjects/milk_bottle/99.jpg\"\n",
+ "Image(filename=sample_image)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Convert the downloaded data to JSONL\n",
+ "In this example, the fridge object dataset is stored in a directory. There are four different folders inside:\n",
+ "\n",
+ "- /water_bottle\n",
+ "- /milk_bottle\n",
+ "- /carton\n",
+ "- /can\n",
+ "\n",
+ "This is the most common data format for multiclass image classification. Each folder title corresponds to the image label for the images contained inside.\n",
+ "\n",
+ "In order to use this data to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset), we first need to convert it to the required JSONL format. Please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).\n",
+ "\n",
+ "The following script creates two .jsonl files (one for training and one for validation) in the parent folder of the dataset, with a train/validation split that sends 20% of the data to the validation file.\n",
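+ "\n",
+ "For illustration, each line produced by the script below should look roughly like the following (assuming your workspace's default datastore is named `workspaceblobstore`; the script resolves the actual name at runtime):\n",
+ "\n",
+ "```json\n",
+ "{\"image_url\": \"AmlDatastore://workspaceblobstore/fridgeObjects/can/1.jpg\", \"label\": \"can\"}\n",
+ "```"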
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "import os\n",
+ "\n",
+ "src = \"./fridgeObjects/\"\n",
+ "train_validation_ratio = 5\n",
+ "\n",
+ "# Retrieve the default datastore that was automatically created when we set up the workspace\n",
+ "workspaceblobstore = ws.get_default_datastore().name\n",
+ "\n",
+ "# Paths to the training and validation files\n",
+ "train_annotations_file = os.path.join(src, \"train_annotations.jsonl\")\n",
+ "validation_annotations_file = os.path.join(src, \"validation_annotations.jsonl\")\n",
+ "\n",
+ "# Sample JSON line dictionary\n",
+ "json_line_sample = {\n",
+ "    \"image_url\": \"AmlDatastore://\"\n",
+ "    + workspaceblobstore\n",
+ "    + \"/\"\n",
+ "    + os.path.basename(os.path.dirname(src)),\n",
+ "    \"label\": \"\",\n",
+ "}\n",
+ "\n",
+ "index = 0\n",
+ "# Scan each subdirectory and generate a JSONL line per image\n",
+ "with open(train_annotations_file, \"w\") as train_f:\n",
+ "    with open(validation_annotations_file, \"w\") as validation_f:\n",
+ "        for className in os.listdir(src):\n",
+ "            subDir = src + className\n",
+ "            if not os.path.isdir(subDir):\n",
+ "                continue\n",
+ "            # Scan each subdirectory\n",
+ "            print(\"Parsing \" + subDir)\n",
+ "            for image in os.listdir(subDir):\n",
+ "                json_line = dict(json_line_sample)\n",
+ "                json_line[\"image_url\"] += f\"/{className}/{image}\"\n",
+ "                json_line[\"label\"] = className\n",
+ "\n",
+ "                if index % train_validation_ratio == 0:\n",
+ "                    # validation annotation\n",
+ "                    validation_f.write(json.dumps(json_line) + \"\\n\")\n",
+ "                else:\n",
+ "                    # train annotation\n",
+ "                    train_f.write(json.dumps(json_line) + \"\\n\")\n",
+ "                index += 1"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Upload the JSONL file and images to Datastore\n",
+ "In order to use the data for training in Azure ML, we upload it to our Azure ML Workspace via a [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#datasets-and-datastores). The datastore provides a mechanism for you to upload/download data and interact with it from your remote compute targets. It is an abstraction over Azure Storage."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Retrieve the default datastore that was automatically created when we set up the workspace\n",
+ "ds = ws.get_default_datastore()\n",
+ "ds.upload(src_dir=\"./fridgeObjects\", target_path=\"fridgeObjects\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "from azureml.data import DataType\n", + "\n", + "# get existing training dataset\n", + "training_dataset_name = \"fridgeObjectsTrainingDataset\"\n", + "if training_dataset_name in ws.datasets:\n", + " training_dataset = ws.datasets.get(training_dataset_name)\n", + " print(\"Found the training dataset\", training_dataset_name)\n", + "else:\n", + " # create training dataset\n", + " training_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"fridgeObjects/train_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " training_dataset = training_dataset.register(\n", + " workspace=ws, name=training_dataset_name\n", + " )\n", + "# get existing validation dataset\n", + "validation_dataset_name = \"fridgeObjectsValidationDataset\"\n", + "if validation_dataset_name in ws.datasets:\n", + " validation_dataset = ws.datasets.get(validation_dataset_name)\n", + " print(\"Found the validation dataset\", validation_dataset_name)\n", + "else:\n", + " # create validation dataset\n", + " validation_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"fridgeObjects/validation_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " validation_dataset = validation_dataset.register(\n", + " workspace=ws, name=validation_dataset_name\n", + " )\n", + "print(\"Training dataset name: \" + training_dataset.name)\n", + "print(\"Validation dataset name: \" + validation_dataset.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#model-agnostic-hyperparameters) for more details.\n", + "\n", + "This is what the training dataset looks like:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_dataset.to_pandas_dataframe()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Configuring your AutoML run for image tasks\n", + "AutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model as well as perform a sweep across the hyperparameter space to generate an optimal model. Parameters for configuring your AutoML Image run are specified using the `AutoMLImageConfig` - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-your-experiment-settings) for the details on the parameters that can be used and their values." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When using AutoML for image tasks, you need to specify the model algorithms using the `model_name` parameter. You can either specify a single model or choose to sweep over multiple models. 
Please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms.\n",
+ "\n",
+ "### Using default hyperparameter values for the specified algorithm\n",
+ "Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values for a given model to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This allows an iterative approach: with multiple models and multiple hyperparameters for each (as we showcase in the next section), the search space grows exponentially, and you need more iterations to find optimal configurations.\n",
+ "\n",
+ "If you wish to use the default hyperparameter values for a given algorithm (say `vitb16r224`), you can specify the config for your AutoML Image runs as follows:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from azureml.automl.core.shared.constants import ImageTask\n",
+ "from azureml.train.automl import AutoMLImageConfig\n",
+ "from azureml.train.hyperdrive import GridParameterSampling, choice\n",
+ "\n",
+ "image_config_vit = AutoMLImageConfig(\n",
+ "    task=ImageTask.IMAGE_CLASSIFICATION,\n",
+ "    compute_target=compute_target,\n",
+ "    training_data=training_dataset,\n",
+ "    validation_data=validation_dataset,\n",
+ "    hyperparameter_sampling=GridParameterSampling({\"model_name\": choice(\"vitb16r224\")}),\n",
+ "    iterations=1,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Submitting an AutoML run for Computer Vision tasks\n",
+ "Once you've created the config settings for your run, you can submit an AutoML run using the config in order to train a vision model using your training dataset."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "automl_image_run = experiment.submit(image_config_vit)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "automl_image_run.wait_for_completion(wait_post_processing=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Hyperparameter sweeping for your AutoML models for computer vision tasks\n",
+ "In this example, we use the AutoMLImageConfig to train an Image Classification model using the following model algorithms: `seresnext`, `resnest50`, `vitb16r224`, and `vits16r224`.\n",
+ "\n",
+ "When using AutoML for Images, you can perform a hyperparameter sweep over a defined parameter space to find the optimal model. In this example, we sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, number_of_epochs, layers_to_freeze, etc., to generate a model with the optimal 'accuracy'. If hyperparameter values are not specified, then default values are used for the specified algorithm.\n",
+ "\n",
+ "We use Random Sampling to pick samples from this parameter space and try a total of 10 iterations with these different samples, running 2 iterations at a time on our compute target, which has been previously set up using 4 nodes. 
Please note that the more parameters the space has, the more iterations you need to find optimal models.\n", + "\n", + "We leverage the Bandit early termination policy which will terminate poor performing configs (those that are not within 20% slack of the best performing config), thus significantly saving compute resources.\n", + "\n", + "For more details on model and hyperparameter sweeping, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling\n", + "from azureml.train.hyperdrive import choice, uniform\n", + "\n", + "parameter_space = {\n", + " \"learning_rate\": uniform(0.001, 0.01),\n", + " \"model\": choice(\n", + " {\n", + " \"model_name\": choice(\"vitb16r224\", \"vits16r224\"),\n", + " \"number_of_epochs\": choice(15, 30),\n", + " },\n", + " {\n", + " \"model_name\": choice(\"seresnext\", \"resnest50\"),\n", + " \"layers_to_freeze\": choice(0, 2),\n", + " },\n", + " ),\n", + "}\n", + "\n", + "tuning_settings = {\n", + " \"iterations\": 10,\n", + " \"max_concurrent_iterations\": 2,\n", + " \"hyperparameter_sampling\": RandomParameterSampling(parameter_space),\n", + " \"early_termination_policy\": BanditPolicy(\n", + " evaluation_interval=2, slack_factor=0.2, delay_evaluation=6\n", + " ),\n", + "}\n", + "\n", + "automl_image_config = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_CLASSIFICATION,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " **tuning_settings,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(automl_image_config)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main `automl_image_run` from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of this HyperDrive parent run. 
Alternatively, here below you can see directly the HyperDrive parent run and navigate to its 'Child runs' tab:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Run\n", + "\n", + "hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + \"_HD\")\n", + "hyperdrive_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Register the optimal vision model from the AutoML run\n", + "Once the run completes, we can register the model that was created from the best run (configuration that resulted in the best primary metric)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register the model from the best run\n", + "\n", + "best_child_run = automl_image_run.get_best_child()\n", + "model_name = best_child_run.properties[\"model_name\"]\n", + "model = best_child_run.register_model(\n", + " model_name=model_name, model_path=\"outputs/model.pt\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Deploy model as a web service\n", + "Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service)). Please note that ACI only supports small models under 1 GB in size. For testing larger models or for the high-scale production stage, we recommend using AKS.\n", + "In this tutorial, we will deploy the model as a web service in AKS." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You will need to first create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AksCompute\n", + "from azureml.exceptions import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster\n", + "aks_name = \"aks-cpu-mc\"\n", + "# Check to see if the cluster already exists\n", + "try:\n", + " aks_target = ComputeTarget(workspace=ws, name=aks_name)\n", + " print(\"Found existing compute target\")\n", + "except ComputeTargetException:\n", + " print(\"Creating a new compute target...\")\n", + " # Provision AKS cluster with a CPU machine\n", + " prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n", + " # Create the cluster\n", + " aks_target = ComputeTarget.create(\n", + " workspace=ws, name=aks_name, provisioning_configuration=prov_config\n", + " )\n", + " aks_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#update-inference-configuration), that describes how to set up the web-service containing your model. You can use the scoring script and the environment from the training run in your inference config.\n", + "\n", + "Note: To change the model's settings, open the downloaded scoring script and modify the model_settings variable before deploying the model." 
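+ ,
+ "\n",
+ "As a hedged sketch of that edit (the exact keys available depend on the model that won the sweep, and the values below are placeholders rather than recommendations), the variable inside the downloaded `score.py` might be changed like this:\n",
+ "\n",
+ "```python\n",
+ "# Placeholder values: override inference-time model settings before deploying\n",
+ "model_settings = {\"valid_resize_size\": 288, \"valid_crop_size\": 256}\n",
+ "```"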
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import InferenceConfig\n", + "\n", + "best_child_run.download_file(\n", + " \"outputs/scoring_file_v_1_0_0.py\", output_file_path=\"score.py\"\n", + ")\n", + "environment = best_child_run.get_environment()\n", + "inference_config = InferenceConfig(entry_script=\"score.py\", environment=environment)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can then deploy the model as an AKS web service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Deploy the model from the best run as an AKS web service\n", + "from azureml.core.webservice import AksWebservice\n", + "from azureml.core.model import Model\n", + "\n", + "aks_config = AksWebservice.deploy_configuration(\n", + " autoscale_enabled=True, cpu_cores=1, memory_gb=5, enable_app_insights=True\n", + ")\n", + "\n", + "aks_service = Model.deploy(\n", + " ws,\n", + " models=[model],\n", + " inference_config=inference_config,\n", + " deployment_config=aks_config,\n", + " deployment_target=aks_target,\n", + " name=\"automl-image-test-cpu-mc\",\n", + " overwrite=True,\n", + ")\n", + "aks_service.wait_for_deployment(show_output=True)\n", + "print(aks_service.state)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test the web service\n", + "Finally, let's test our deployed web service to predict new images. You can pass in any image. In this case, we'll use a random image from the dataset and pass it to the scoring URI." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "\n", + "# URL for the web service\n", + "scoring_uri = aks_service.scoring_uri\n", + "\n", + "# If the service is authenticated, set the key or token\n", + "key, _ = aks_service.get_keys()\n", + "\n", + "sample_image = \"./test_image.jpg\"\n", + "\n", + "# Load image data\n", + "data = open(sample_image, \"rb\").read()\n", + "\n", + "# Set the content type\n", + "headers = {\"Content-Type\": \"application/octet-stream\"}\n", + "\n", + "# If authentication is enabled, set the authorization header\n", + "headers[\"Authorization\"] = f\"Bearer {key}\"\n", + "\n", + "# Make the request and display the response\n", + "resp = requests.post(scoring_uri, data, headers=headers)\n", + "print(resp.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize predictions\n", + "Now that we have scored a test image, we can visualize the prediction for this image" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import matplotlib.pyplot as plt\n", + "import matplotlib.image as mpimg\n", + "from PIL import Image\n", + "import numpy as np\n", + "import json\n", + "\n", + "IMAGE_SIZE = (18, 12)\n", + "plt.figure(figsize=IMAGE_SIZE)\n", + "img_np = mpimg.imread(sample_image)\n", + "img = Image.fromarray(img_np.astype(\"uint8\"), \"RGB\")\n", + "x, y = img.size\n", + "\n", + "fig, ax = plt.subplots(1, figsize=(15, 15))\n", + "# Display the image\n", + "ax.imshow(img_np)\n", + "\n", + "prediction = json.loads(resp.text)\n", + "label_index = np.argmax(prediction[\"probs\"])\n", + "label = prediction[\"labels\"][label_index]\n", + "conf_score = prediction[\"probs\"][label_index]\n", + "\n", + "display_text = \"{} 
({})\".format(label, round(conf_score, 3))\n", + "print(display_text)\n", + "\n", + "color = \"red\"\n", + "plt.text(30, 30, display_text, color=color, fontsize=30)\n", + "\n", + "plt.show()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10" + }, + "nteract": { + "version": "nteract-front-end@1.0.0" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/example_image_classification_multiclass_predictions.jpg b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/example_image_classification_multiclass_predictions.jpg new file mode 100644 index 000000000..11f1a3476 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/example_image_classification_multiclass_predictions.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/test_image.jpg b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/test_image.jpg new file mode 100644 index 000000000..d0b0d0fb9 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-classification-multiclass/test_image.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/README.md b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/README.md new file mode 100644 index 000000000..3d83b819b --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/README.md @@ -0,0 +1,15 @@ +--- +page_type: sample +languages: +- python +products: +- azure-machine-learning +description: Notebook showing how to use AutoML for training an Image Classification Multi-Label model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. +--- + +# Image Classification Multi-Label using AutoML for Images +- Dataset: Toy dataset with images of products found in a fridge + - **[Jupyter Notebook](auto-ml-image-classification-multilabel.ipynb)** + - train an Image Classification Multi-Label model using AutoML + - tune hyperparameters of the model to optimize model performance + - deploy the model to use in inference scenarios diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/auto-ml-image-classification-multilabel.ipynb b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/auto-ml-image-classification-multilabel.ipynb new file mode 100644 index 000000000..2e1e10a5c --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/auto-ml-image-classification-multilabel.ipynb @@ -0,0 +1,742 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. 
All rights reserved.\n", + "\n", + "Licensed under the MIT License.\n", + "\n", + "# Training an Image Classification Multi-Label model using AutoML\n", + "In this notebook, we go over how you can use AutoML for training an Image Classification Multi-Label model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. For detailed information please refer to the [documentation of AutoML for Images](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![img](example_image_classification_multilabel_predictions.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "Please follow the [\"Setup a new conda environment\"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#3-setup-a-new-conda-environment) instructions to get started." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import azureml.core\n", + "\n", + "print(\"This notebook was created using version 1.35.0 of the Azure ML SDK.\")\n", + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK.\")\n", + "assert (\n", + " azureml.core.VERSION >= \"1.35\"\n", + "), \"Please upgrade the Azure ML SDK by running '!pip install --upgrade azureml-sdk' then restart the kernel.\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Workspace setup\n", + "In order to train and deploy models in Azure ML, you will first need to set up a workspace.\n", + "\n", + "An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#workspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.\n", + "\n", + "Create an Azure ML Workspace within your Azure subscription or load an existing workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "\n", + "ws = Workspace.from_config()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute target setup\n", + "You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. 
We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute, ComputeTarget\n", + "\n", + "cluster_name = \"gpu-cluster-nc6\"\n", + "\n", + "try:\n", + " compute_target = ws.compute_targets[cluster_name]\n", + " print(\"Found existing compute target.\")\n", + "except KeyError:\n", + " print(\"Creating a new compute target...\")\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"Standard_NC6\",\n", + " idle_seconds_before_scaledown=600,\n", + " min_nodes=0,\n", + " max_nodes=4,\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", + "# Can poll for a minimum number of nodes and for a specific timeout.\n", + "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", + "compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Experiment Setup\n", + "Create an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#experiments) in your workspace to track your model training runs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "\n", + "experiment_name = \"automl-image-classification-multilabel\"\n", + "experiment = Experiment(ws, name=experiment_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Dataset with input Training Data\n", + "\n", + "In order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that you have exported from a [Data Labeling](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-data) project, or create a new Tabular Dataset with your labeled training data." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this notebook, we use a toy dataset called Fridge Objects, which consists of 128 images of 4 labels of beverage container {can, carton, milk bottle, water bottle} photos taken on different backgrounds. It also includes a labels file in .csv format. This is one of the most common data formats for Image Classification Multi-Label: one csv file that contains the mapping of labels to a folder of images.\n", + "\n", + "All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).\n", + "\n", + "We first download and unzip the data locally." 
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import urllib.request\n",
+ "from zipfile import ZipFile\n",
+ "\n",
+ "# download data\n",
+ "download_url = \"https://cvbp-secondary.z19.web.core.windows.net/datasets/image_classification/multilabelFridgeObjects.zip\"\n",
+ "data_file = \"./multilabelFridgeObjects.zip\"\n",
+ "urllib.request.urlretrieve(download_url, filename=data_file)\n",
+ "\n",
+ "# extract files\n",
+ "with ZipFile(data_file, \"r\") as zf:\n",
+ "    print(\"extracting files...\")\n",
+ "    zf.extractall()\n",
+ "    print(\"done\")\n",
+ "# delete zip file\n",
+ "os.remove(data_file)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "This is a sample image from this dataset:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from IPython.display import Image\n",
+ "\n",
+ "sample_image = \"./multilabelFridgeObjects/images/56.jpg\"\n",
+ "Image(filename=sample_image)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Convert the downloaded data to JSONL\n",
+ "In this example, the fridge object dataset is annotated in a CSV file in which each line corresponds to one image and maps its filename to its labels. Since this is a multi-label classification problem, each image can be associated with multiple labels. In order to use this data to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset), we first need to convert it to the required JSONL format. Please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).\n",
+ "\n",
+ "The following script creates two .jsonl files (one for training and one for validation) in the parent folder of the dataset, with a train/validation split that sends 20% of the data to the validation file.\n",
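+ "\n",
+ "For illustration, each line produced by the script below should look roughly like the following (assuming your workspace's default datastore is named `workspaceblobstore`; note that `label` is a list here, since an image can carry several labels):\n",
+ "\n",
+ "```json\n",
+ "{\"image_url\": \"AmlDatastore://workspaceblobstore/multilabelFridgeObjects/images/56.jpg\", \"label\": [\"can\", \"carton\"]}\n",
+ "```"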
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "import os\n",
+ "\n",
+ "src = \"./multilabelFridgeObjects\"\n",
+ "train_validation_ratio = 5\n",
+ "\n",
+ "# Retrieve the default datastore that was automatically created when we set up the workspace\n",
+ "workspaceblobstore = ws.get_default_datastore().name\n",
+ "\n",
+ "# Path to the labels file\n",
+ "labelFile = os.path.join(src, \"labels.csv\")\n",
+ "\n",
+ "# Paths to the training and validation files\n",
+ "train_annotations_file = os.path.join(src, \"train_annotations.jsonl\")\n",
+ "validation_annotations_file = os.path.join(src, \"validation_annotations.jsonl\")\n",
+ "\n",
+ "# Sample JSON line dictionary\n",
+ "json_line_sample = {\n",
+ "    \"image_url\": \"AmlDatastore://\" + workspaceblobstore + \"/multilabelFridgeObjects\",\n",
+ "    \"label\": [],\n",
+ "}\n",
+ "\n",
+ "# Read each annotation and convert it to a JSONL line\n",
+ "with open(train_annotations_file, \"w\") as train_f:\n",
+ "    with open(validation_annotations_file, \"w\") as validation_f:\n",
+ "        with open(labelFile, \"r\") as labels:\n",
+ "            for i, line in enumerate(labels):\n",
+ "                # Skip the header line and any empty lines\n",
+ "                if i == 0 or len(line.strip()) == 0:\n",
+ "                    continue\n",
+ "                line_split = line.strip().split(\",\")\n",
+ "                if len(line_split) != 2:\n",
+ "                    print(\"Skipping the invalid line: {}\".format(line))\n",
+ "                    continue\n",
+ "                json_line = dict(json_line_sample)\n",
+ "                json_line[\"image_url\"] += f\"/images/{line_split[0]}\"\n",
+ "                json_line[\"label\"] = line_split[1].strip().split(\" \")\n",
+ "\n",
+ "                if i % train_validation_ratio == 0:\n",
+ "                    # validation annotation\n",
+ "                    validation_f.write(json.dumps(json_line) + \"\\n\")\n",
+ "                else:\n",
+ "                    # train annotation\n",
+ "                    train_f.write(json.dumps(json_line) + \"\\n\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Upload the JSONL file and images to Datastore\n",
+ "In order to use the data for training in Azure ML, we upload it to our Azure ML Workspace via a [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#datasets-and-datastores). The datastore provides a mechanism for you to upload/download data and interact with it from your remote compute targets. It is an abstraction over Azure Storage."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Retrieve the default datastore that was automatically created when we set up the workspace\n",
+ "ds = ws.get_default_datastore()\n",
+ "ds.upload(src_dir=\"./multilabelFridgeObjects\", target_path=\"multilabelFridgeObjects\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "from azureml.data import DataType\n", + "\n", + "# get existing training dataset\n", + "training_dataset_name = \"multilabelFridgeObjectsTrainingDataset\"\n", + "if training_dataset_name in ws.datasets:\n", + " training_dataset = ws.datasets.get(training_dataset_name)\n", + " print(\"Found the training dataset\", training_dataset_name)\n", + "else:\n", + " # create training dataset\n", + " training_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"multilabelFridgeObjects/train_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " training_dataset = training_dataset.register(\n", + " workspace=ws, name=training_dataset_name\n", + " )\n", + "# get existing validation dataset\n", + "validation_dataset_name = \"multilabelFridgeObjectsValidationDataset\"\n", + "if validation_dataset_name in ws.datasets:\n", + " validation_dataset = ws.datasets.get(validation_dataset_name)\n", + " print(\"Found the validation dataset\", validation_dataset_name)\n", + "else:\n", + " # create validation dataset\n", + " validation_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"multilabelFridgeObjects/validation_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " validation_dataset = validation_dataset.register(\n", + " workspace=ws, name=validation_dataset_name\n", + " )\n", + "print(\"Training dataset name: \" + training_dataset.name)\n", + "print(\"Validation dataset name: \" + validation_dataset.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#model-agnostic-hyperparameters) for more details.\n", + "\n", + "This is what the training dataset looks like:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_dataset.to_pandas_dataframe()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Configuring your AutoML run for image tasks\n", + "AutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model as well as perform a sweep across the hyperparameter space to generate an optimal model. Parameters for configuring your AutoML Image run are specified using the `AutoMLImageConfig` - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-your-experiment-settings) for the details on the parameters that can be used and their values." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When using AutoML for image tasks, you need to specify the model algorithms using the `model_name` parameter. You can either specify a single model or choose to sweep over multiple models. 
Please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms.\n", + "\n", + "### Using default hyperparameter values for the specified algorithm\n", + "Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values for a given model to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This allows an iterative approach, as with multiple models and multiple hyperparameters for each (as we showcase in the next section), the search space grows exponentially, and you need more iterations to find optimal configurations.\n", + "\n", + "If you wish to use the default hyperparameter values for a given algorithm (say `vitb16r224`), you can specify the config for your AutoML Image runs as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import GridParameterSampling, choice\n", + "\n", + "image_config_vit = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_CLASSIFICATION_MULTILABEL,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " hyperparameter_sampling=GridParameterSampling({\"model_name\": choice(\"vitb16r224\")}),\n", + " iterations=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Submitting an AutoML run for Computer Vision tasks\n", + "Once you've created the config settings for your run, you can submit an AutoML run using the config in order to train a vision model using your training dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(image_config_vit)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Hyperparameter sweeping for your AutoML models for computer vision tasks\n", + "In this example, we use the AutoMLImageConfig to train an Image Classification model using the `vitb16r224` and `seresnext` model algorithms.\n", + "\n", + "When using AutoML for Images, you can perform a hyperparameter sweep over a defined parameter space to find the optimal model. In this example, we sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, grad_accumulation_step, valid_resize_size, etc., to generate a model with the optimal 'accuracy'. If hyperparameter values are not specified, then default values are used for the specified algorithm.\n", + "\n", + "We use Random Sampling to pick samples from this parameter space and try a total of 10 iterations with these different samples, running 2 iterations at a time on our compute target, which has been previously set up using 4 nodes. 
Please note that the more parameters the space has, the more iterations you need to find optimal models.\n", + "\n", + "We leverage the Bandit early termination policy which will terminate poor performing configs (those that are not within 20% slack of the best performing config), thus significantly saving compute resources.\n", + "\n", + "For more details on model and hyperparameter sweeping, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling\n", + "from azureml.train.hyperdrive import choice, uniform\n", + "\n", + "parameter_space = {\n", + " \"learning_rate\": uniform(0.005, 0.05),\n", + " \"model\": choice(\n", + " {\n", + " \"model_name\": choice(\"vitb16r224\"),\n", + " \"number_of_epochs\": choice(15, 30),\n", + " \"grad_accumulation_step\": choice(1, 2),\n", + " },\n", + " {\n", + " \"model_name\": choice(\"seresnext\"),\n", + " # model-specific, valid_resize_size should be larger or equal than valid_crop_size\n", + " \"valid_resize_size\": choice(288, 320, 352),\n", + " \"valid_crop_size\": choice(224, 256), # model-specific\n", + " \"train_crop_size\": choice(224, 256), # model-specific\n", + " },\n", + " ),\n", + "}\n", + "\n", + "tuning_settings = {\n", + " \"iterations\": 10,\n", + " \"max_concurrent_iterations\": 2,\n", + " \"hyperparameter_sampling\": RandomParameterSampling(parameter_space),\n", + " \"early_termination_policy\": BanditPolicy(\n", + " evaluation_interval=2, slack_factor=0.2, delay_evaluation=6\n", + " ),\n", + "}\n", + "\n", + "automl_image_config = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_CLASSIFICATION_MULTILABEL,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " **tuning_settings,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(automl_image_config)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main `automl_image_run` from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of this HyperDrive parent run. 
Alternatively, here below you can see directly the HyperDrive parent run and navigate to its 'Child runs' tab:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Run\n", + "\n", + "hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + \"_HD\")\n", + "hyperdrive_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Register the optimal vision model from the AutoML run\n", + "Once the run completes, we can register the model that was created from the best run (configuration that resulted in the best primary metric)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register the model from the best run\n", + "\n", + "best_child_run = automl_image_run.get_best_child()\n", + "model_name = best_child_run.properties[\"model_name\"]\n", + "model = best_child_run.register_model(\n", + " model_name=model_name, model_path=\"outputs/model.pt\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Deploy model as a web service\n", + "Once you have your trained model, you can deploy the model on Azure. You can deploy your trained model as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service)). Please note that ACI only supports small models under 1 GB in size. For testing larger models or for the high-scale production stage, we recommend using AKS.\n", + "In this tutorial, we will deploy the model as a web service in AKS." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You will need to first create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AksCompute\n", + "from azureml.exceptions import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster\n", + "aks_name = \"aks-cpu-ml\"\n", + "# Check to see if the cluster already exists\n", + "try:\n", + " aks_target = ComputeTarget(workspace=ws, name=aks_name)\n", + " print(\"Found existing compute target\")\n", + "except ComputeTargetException:\n", + " print(\"Creating a new compute target...\")\n", + " # Provision AKS cluster with a CPU machine\n", + " prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n", + " # Create the cluster\n", + " aks_target = ComputeTarget.create(\n", + " workspace=ws, name=aks_name, provisioning_configuration=prov_config\n", + " )\n", + " aks_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#update-inference-configuration), that describes how to set up the web-service containing your model. You can use the scoring script and the environment from the training run in your inference config.\n", + "\n", + "Note: To change the model's settings, open the downloaded scoring script and modify the model_settings variable before deploying the model." 
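+ ,
+ "\n",
+ "As a small, purely illustrative aid, once the next cell has downloaded `score.py` you can locate the `model_settings` variable before editing it:\n",
+ "\n",
+ "```python\n",
+ "# Print every line of the downloaded scoring script that mentions model_settings\n",
+ "with open(\"score.py\") as f:\n",
+ "    for line in f:\n",
+ "        if \"model_settings\" in line:\n",
+ "            print(line.rstrip())\n",
+ "```"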
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import InferenceConfig\n", + "\n", + "best_child_run.download_file(\n", + " \"outputs/scoring_file_v_1_0_0.py\", output_file_path=\"score.py\"\n", + ")\n", + "environment = best_child_run.get_environment()\n", + "inference_config = InferenceConfig(entry_script=\"score.py\", environment=environment)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can then deploy the model as an AKS web service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Deploy the model from the best run as an AKS web service\n", + "from azureml.core.webservice import AksWebservice\n", + "from azureml.core.model import Model\n", + "\n", + "aks_config = AksWebservice.deploy_configuration(\n", + " autoscale_enabled=True, cpu_cores=1, memory_gb=5, enable_app_insights=True\n", + ")\n", + "\n", + "aks_service = Model.deploy(\n", + " ws,\n", + " models=[model],\n", + " inference_config=inference_config,\n", + " deployment_config=aks_config,\n", + " deployment_target=aks_target,\n", + " name=\"automl-image-test-cpu-ml\",\n", + " overwrite=True,\n", + ")\n", + "aks_service.wait_for_deployment(show_output=True)\n", + "print(aks_service.state)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test the web service\n", + "Finally, let's test our deployed web service to predict new images. You can pass in any image. In this case, we'll use a random image from the dataset and pass it to the scoring URI." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "from IPython.display import Image\n", + "\n", + "# URL for the web service\n", + "scoring_uri = aks_service.scoring_uri\n", + "\n", + "# If the service is authenticated, set the key or token\n", + "key, _ = aks_service.get_keys()\n", + "\n", + "sample_image = \"./test_image.jpg\"\n", + "\n", + "# Load image data\n", + "data = open(sample_image, \"rb\").read()\n", + "\n", + "# Set the content type\n", + "headers = {\"Content-Type\": \"application/octet-stream\"}\n", + "\n", + "# If authentication is enabled, set the authorization header\n", + "headers[\"Authorization\"] = f\"Bearer {key}\"\n", + "\n", + "# Make the request and display the response\n", + "resp = requests.post(scoring_uri, data, headers=headers)\n", + "print(resp.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize predictions\n", + "Now that we have scored a test image, we can visualize the predictions for this image" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import matplotlib.pyplot as plt\n", + "import matplotlib.image as mpimg\n", + "from PIL import Image\n", + "import json\n", + "\n", + "IMAGE_SIZE = (18, 12)\n", + "plt.figure(figsize=IMAGE_SIZE)\n", + "img_np = mpimg.imread(sample_image)\n", + "img = Image.fromarray(img_np.astype(\"uint8\"), \"RGB\")\n", + "x, y = img.size\n", + "\n", + "fig, ax = plt.subplots(1, figsize=(15, 15))\n", + "# Display the image\n", + "ax.imshow(img_np)\n", + "\n", + "prediction = json.loads(resp.text)\n", + "score_threshold = 0.5\n", + "\n", + "label_offset_x = 30\n", + "label_offset_y = 30\n", + "for index, score in enumerate(prediction[\"probs\"]):\n", + " if score > score_threshold:\n", + " 
label = prediction[\"labels\"][index]\n", + " display_text = \"{} ({})\".format(label, round(score, 3))\n", + " print(display_text)\n", + "\n", + " color = \"red\"\n", + " plt.text(label_offset_x, label_offset_y, display_text, color=color, fontsize=30)\n", + " label_offset_y += 30\n", + "plt.show()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10" + }, + "nteract": { + "version": "nteract-front-end@1.0.0" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/example_image_classification_multilabel_predictions.jpg b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/example_image_classification_multilabel_predictions.jpg new file mode 100644 index 000000000..36d356389 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/example_image_classification_multilabel_predictions.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/test_image.jpg b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/test_image.jpg new file mode 100644 index 000000000..be5973bf5 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-classification-multilabel/test_image.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/README.md b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/README.md new file mode 100644 index 000000000..74b4c00d5 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/README.md @@ -0,0 +1,15 @@ +--- +page_type: sample +languages: +- python +products: +- azure-machine-learning +description: Notebook showing how to use AutoML for training an Instance Segmentation model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. +--- + +# Instance Segmentation using AutoML for Images +- Dataset: Toy dataset with images of products found in a fridge + - **[Jupyter Notebook](auto-ml-image-instance-segmentation.ipynb)** + - train an Instance Segmentation model using AutoML + - tune hyperparameters of the model to optimize model performance + - deploy the model to use in inference scenarios diff --git a/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/auto-ml-image-instance-segmentation.ipynb b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/auto-ml-image-instance-segmentation.ipynb new file mode 100644 index 000000000..a98270104 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/auto-ml-image-instance-segmentation.ipynb @@ -0,0 +1,769 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. 
All rights reserved.\n", + "\n", + "Licensed under the MIT License.\n", + "\n", + "# Training an Instance Segmentation model using AutoML\n", + "In this notebook, we go over how you can use AutoML for training an Instance Segmentation model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. For detailed information please refer to the [documentation of AutoML for Images](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![img](example_instance_segmentation_predictions.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "Please follow the [\"Setup a new conda environment\"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#3-setup-a-new-conda-environment) instructions to get started." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import azureml.core\n", + "\n", + "print(\"This notebook was created using version 1.35.0 of the Azure ML SDK.\")\n", + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK.\")\n", + "assert (\n", + " azureml.core.VERSION >= \"1.35\"\n", + "), \"Please upgrade the Azure ML SDK by running '!pip install --upgrade azureml-sdk' then restart the kernel.\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Additional environment setup\n", + "You will need to install these additional packages below to run this notebook:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%pip install \"scikit-image==0.17.2\" \"simplification==0.5.1\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Workspace setup\n", + "In order to train and deploy models in Azure ML, you will first need to set up a workspace.\n", + "\n", + "An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#workspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.\n", + "\n", + "Create an Azure ML Workspace within your Azure subscription or load an existing workspace." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "\n", + "ws = Workspace.from_config()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute target setup\n", + "You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute, ComputeTarget\n", + "\n", + "cluster_name = \"gpu-cluster-nc6\"\n", + "\n", + "try:\n", + " compute_target = ws.compute_targets[cluster_name]\n", + " print(\"Found existing compute target.\")\n", + "except KeyError:\n", + " print(\"Creating a new compute target...\")\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"Standard_NC6\",\n", + " idle_seconds_before_scaledown=600,\n", + " min_nodes=0,\n", + " max_nodes=4,\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", + "# Can poll for a minimum number of nodes and for a specific timeout.\n", + "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", + "compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Experiment Setup\n", + "Create an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#experiments) in your workspace to track your model training runs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "\n", + "experiment_name = \"automl-image-instance-segmentation\"\n", + "experiment = Experiment(ws, name=experiment_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Dataset with input Training Data\n", + "\n", + "In order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that you have exported from a [Data Labeling](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-data) project, or create a new Tabular Dataset with your labeled training data." 
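If such a labeled dataset is already registered in the workspace (for example, one exported from a Data Labeling project), you can retrieve it by name instead of rebuilding it. A minimal sketch, where `fridge-items-labeled` is a hypothetical registration name:

```python
from azureml.core import Dataset

# Retrieve a previously registered labeled dataset by name.
# "fridge-items-labeled" is a hypothetical name used purely for illustration.
labeled_dataset = Dataset.get_by_name(ws, name="fridge-items-labeled")
print(labeled_dataset.name, labeled_dataset.version)
```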
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this notebook, we use a toy dataset called Fridge Objects, which contains 128 images of 4 classes of beverage containers {can, carton, milk bottle, water bottle}, photographed against different backgrounds.\n", + "\n", + "All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).\n", + "\n", + "We first download and unzip the data locally." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import urllib\n", + "from zipfile import ZipFile\n", + "\n", + "# download data\n", + "download_url = \"https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjectsMask.zip\"\n", + "data_file = \"./odFridgeObjectsMask.zip\"\n", + "urllib.request.urlretrieve(download_url, filename=data_file)\n", + "\n", + "# extract files\n", + "with ZipFile(data_file, \"r\") as zip:\n", + " print(\"extracting files...\")\n", + " zip.extractall()\n", + " print(\"done\")\n", + "# delete zip file\n", + "os.remove(data_file)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This is a sample image from this dataset:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import Image\n", + "\n", + "Image(filename=\"./odFridgeObjectsMask/images/31.jpg\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Convert the downloaded data to JSONL\n", + "In this example, the fridge object dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file records where its corresponding image file is located, along with the bounding boxes and the object labels. In order to use this data to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset), we first need to convert it to the required JSONL format. Please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).\n", + "\n", + "The following script creates two .jsonl files (one for training and one for validation) in the parent folder of the dataset. The train / validation ratio corresponds to 20% of the data going into the validation file."
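To make the target format concrete, here is a sketch of a single JSON line in the converted file, mirroring the fields the converter populates (`image_url`, `image_details`, and one `label` entry per object with a normalized `polygon`); the concrete values are illustrative:

```python
# Illustrative shape of one line in train_annotations.jsonl; the field names
# match those written by jsonl_converter.py, while the values are made up.
# Polygon coordinates are normalized to [0, 1] by image width and height.
sample_json_line = {
    "image_url": "AmlDatastore://workspaceblobstore/odFridgeObjectsMask/images/31.jpg",
    "image_details": {"format": "jpg", "width": 499, "height": 666},
    "label": [
        {
            "label": "carton",
            "bbox": "null",  # the converter stores the literal string "null" here
            "isCrowd": 0,
            "polygon": [[0.31, 0.12, 0.45, 0.12, 0.45, 0.53, 0.31, 0.53]],
        }
    ],
}
```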
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# The jsonl_converter below relies on scikit-image and simplification.\n", + "# If you don't have them installed, install them before converting data by running this cell.\n", + "%pip install \"scikit-image==0.17.2\" \"simplification==0.5.1\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from jsonl_converter import convert_mask_in_VOC_to_jsonl\n", + "\n", + "data_path = \"./odFridgeObjectsMask/\"\n", + "convert_mask_in_VOC_to_jsonl(data_path, ws)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Upload the JSONL file and images to Datastore\n", + "In order to use the data for training in Azure ML, we upload it to our Azure ML Workspace via a [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#datasets-and-datastores). The datastore provides a mechanism for you to upload/download data and interact with it from your remote compute targets. It is an abstraction over Azure Storage." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Retrieving the default datastore that was automatically created when we set up the workspace\n", + "ds = ws.get_default_datastore()\n", + "ds.upload(src_dir=\"./odFridgeObjectsMask\", target_path=\"odFridgeObjectsMask\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "from azureml.data import DataType\n", + "\n", + "# get existing training dataset\n", + "training_dataset_name = \"odFridgeObjectsMaskTrainingDataset\"\n", + "if training_dataset_name in ws.datasets:\n", + " training_dataset = ws.datasets.get(training_dataset_name)\n", + " print(\"Found the training dataset\", training_dataset_name)\n", + "else:\n", + " # create training dataset\n", + " training_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"odFridgeObjectsMask/train_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " training_dataset = training_dataset.register(\n", + " workspace=ws, name=training_dataset_name\n", + " )\n", + "# get existing validation dataset\n", + "validation_dataset_name = \"odFridgeObjectsMaskValidationDataset\"\n", + "if validation_dataset_name in ws.datasets:\n", + " validation_dataset = ws.datasets.get(validation_dataset_name)\n", + " print(\"Found the validation dataset\", validation_dataset_name)\n", + "else:\n", + " # create validation dataset\n", + " validation_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"odFridgeObjectsMask/validation_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " validation_dataset = validation_dataset.register(\n", + " workspace=ws, name=validation_dataset_name\n", + " )\n", + "print(\"Training dataset name: \" + training_dataset.name)\n", + "print(\"Validation dataset name: \" + validation_dataset.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#model-agnostic-hyperparameters) for more details.\n", + "\n", + "This is what the training dataset looks like:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_dataset.to_pandas_dataframe()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Configuring your AutoML run for image tasks\n", + "AutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model as well as perform a sweep across the hyperparameter space to generate an optimal model. Parameters for configuring your AutoML Image run are specified using the `AutoMLImageConfig` - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-your-experiment-settings) for the details on the parameters that can be used and their values." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When using AutoML for image tasks, you need to specify the model algorithms using the `model_name` parameter. You can either specify a single model or choose to sweep over multiple models. 
Please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Using default hyperparameter values for the specified algorithm\n", + "Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values for a given model to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This allows an iterative approach: with multiple models and multiple hyperparameters for each (as we showcase in the next section), the search space grows exponentially, and you need more iterations to find optimal configurations.\n", + "\n", + "If you wish to use the default hyperparameter values for a given algorithm (say `maskrcnn_resnet50_fpn`), you can specify the config for your AutoML Image runs as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import GridParameterSampling, choice\n", + "\n", + "image_config_maskrcnn = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_INSTANCE_SEGMENTATION,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " hyperparameter_sampling=GridParameterSampling(\n", + " {\"model_name\": choice(\"maskrcnn_resnet50_fpn\")}\n", + " ),\n", + " iterations=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Submitting an AutoML run for Computer Vision tasks\n", + "Once you've created the config settings for your run, you can submit an AutoML run using the config in order to train a vision model using your training dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(image_config_maskrcnn)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Hyperparameter sweeping for your AutoML models for computer vision tasks\n", + "In this example, we use the AutoMLImageConfig to train an Instance Segmentation model using `maskrcnn_resnet50_fpn`, which is pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains over 200K labeled images with over 80 label categories.\n", + "\n", + "When using AutoML for Images, you can perform a hyperparameter sweep over a defined parameter space to find the optimal model. In this example, we sweep over the hyperparameters for each algorithm, choosing from a range of values for learning_rate, optimizer, etc., to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for the specified algorithm.\n", + "\n", + "We use Random Sampling to pick samples from this parameter space and try a total of 10 iterations with these different samples, running 2 iterations at a time on our compute target, which has been previously set up using 4 nodes.
Please note that the more parameters the space has, the more iterations you need to find optimal models.\n", + "\n", + "We leverage the Bandit early termination policy, which terminates poorly performing configs (those that are not within 20% slack of the best performing config), thus significantly saving compute resources.\n", + "\n", + "For more details on model and hyperparameter sweeping, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling\n", + "from azureml.train.hyperdrive import choice, uniform\n", + "\n", + "parameter_space = {\n", + " \"model_name\": choice(\"maskrcnn_resnet50_fpn\"),\n", + " \"learning_rate\": uniform(0.0001, 0.001),\n", + " #'warmup_cosine_lr_warmup_epochs': choice(0, 3),\n", + " \"optimizer\": choice(\"sgd\", \"adam\", \"adamw\"),\n", + " \"min_size\": choice(600, 800),\n", + "}\n", + "\n", + "tuning_settings = {\n", + " \"iterations\": 10,\n", + " \"max_concurrent_iterations\": 2,\n", + " \"hyperparameter_sampling\": RandomParameterSampling(parameter_space),\n", + " \"early_termination_policy\": BanditPolicy(\n", + " evaluation_interval=2, slack_factor=0.2, delay_evaluation=6\n", + " ),\n", + "}\n", + "\n", + "automl_image_config = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_INSTANCE_SEGMENTATION,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " **tuning_settings,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(automl_image_config)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI from the 'Child runs' tab of the main `automl_image_run` above; that child run is the HyperDrive parent run, and its own 'Child runs' tab lists the individual configurations that were tried.
Alternatively, you can fetch the HyperDrive parent run directly, as shown below, and navigate to its 'Child runs' tab:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Run\n", + "\n", + "hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + \"_HD\")\n", + "hyperdrive_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Register the optimal vision model from the AutoML run\n", + "Once the run completes, we can register the model that was created from the best run (the configuration that resulted in the best primary metric)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register the model from the best run\n", + "\n", + "best_child_run = automl_image_run.get_best_child()\n", + "model_name = best_child_run.properties[\"model_name\"]\n", + "model = best_child_run.register_model(\n", + " model_name=model_name, model_path=\"outputs/model.pt\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Deploy model as a web service\n", + "Once you have your trained model, you can deploy it on Azure. You can deploy your trained model as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service)). Please note that ACI only supports small models under 1 GB in size. For testing larger models, or for high-scale production use, we recommend AKS.\n", + "In this tutorial, we will deploy the model as a web service in AKS." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You will first need to create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AksCompute\n", + "from azureml.exceptions import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster\n", + "aks_name = \"aks-cpu-is\"\n", + "# Check to see if the cluster already exists\n", + "try:\n", + " aks_target = ComputeTarget(workspace=ws, name=aks_name)\n", + " print(\"Found existing compute target\")\n", + "except ComputeTargetException:\n", + " print(\"Creating a new compute target...\")\n", + " # Provision AKS cluster with a CPU machine\n", + " prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n", + " # Create the cluster\n", + " aks_target = ComputeTarget.create(\n", + " workspace=ws, name=aks_name, provisioning_configuration=prov_config\n", + " )\n", + " aks_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#update-inference-configuration), which describes how to set up the web service containing your model. You can use the scoring script and the environment from the training run in your inference config.\n", + "\n", + "Note: To change the model's settings, open the downloaded scoring script and modify the model_settings variable before deploying the model."
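As a hedged sketch of the kind of edit described above: for a Mask R-CNN model the settings could include names like `min_size` or `box_score_thresh` (the former appears in the sweep earlier in this notebook), but treat these as assumptions and check the downloaded script for the keys it actually defines:

```python
# Hypothetical sketch: adjust the model_settings dict inside the downloaded
# score.py before building the InferenceConfig in the next cell. The keys are
# assumptions based on the Mask R-CNN hyperparameters used earlier; they may
# not match your scoring script exactly.
model_settings = {
    "min_size": 800,          # image size the model rescales to at inference
    "box_score_thresh": 0.5,  # only keep detections scoring above this value
}
```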
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import InferenceConfig\n", + "\n", + "best_child_run.download_file(\n", + " \"outputs/scoring_file_v_1_0_0.py\", output_file_path=\"score.py\"\n", + ")\n", + "environment = best_child_run.get_environment()\n", + "inference_config = InferenceConfig(entry_script=\"score.py\", environment=environment)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can then deploy the model as an AKS web service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Deploy the model from the best run as an AKS web service\n", + "from azureml.core.webservice import AksWebservice\n", + "from azureml.core.model import Model\n", + "\n", + "aks_config = AksWebservice.deploy_configuration(\n", + " autoscale_enabled=True, cpu_cores=1, memory_gb=5, enable_app_insights=True\n", + ")\n", + "\n", + "aks_service = Model.deploy(\n", + " ws,\n", + " models=[model],\n", + " inference_config=inference_config,\n", + " deployment_config=aks_config,\n", + " deployment_target=aks_target,\n", + " name=\"automl-image-test-cpu-is\",\n", + " overwrite=True,\n", + ")\n", + "aks_service.wait_for_deployment(show_output=True)\n", + "print(aks_service.state)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test the web service\n", + "Finally, let's test our deployed web service to predict new images. You can pass in any image. In this case, we'll use a random image from the dataset and pass it to the scoring URI." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "\n", + "# URL for the web service\n", + "scoring_uri = aks_service.scoring_uri\n", + "\n", + "# If the service is authenticated, set the key or token\n", + "key, _ = aks_service.get_keys()\n", + "\n", + "sample_image = \"./test_image.jpg\"\n", + "\n", + "# Load image data\n", + "data = open(sample_image, \"rb\").read()\n", + "\n", + "# Set the content type\n", + "headers = {\"Content-Type\": \"application/octet-stream\"}\n", + "\n", + "# If authentication is enabled, set the authorization header\n", + "headers[\"Authorization\"] = f\"Bearer {key}\"\n", + "\n", + "# Make the request and display the response\n", + "resp = requests.post(scoring_uri, data, headers=headers)\n", + "print(resp.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize predictions\n", + "Now that we have scored a test image, we can visualize the predictions for this image" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import matplotlib.pyplot as plt\n", + "import matplotlib.image as mpimg\n", + "import matplotlib.patches as patches\n", + "from matplotlib.lines import Line2D\n", + "from PIL import Image\n", + "import numpy as np\n", + "import json\n", + "\n", + "IMAGE_SIZE = (18, 12)\n", + "plt.figure(figsize=IMAGE_SIZE)\n", + "img_np = mpimg.imread(sample_image)\n", + "img = Image.fromarray(img_np.astype(\"uint8\"), \"RGB\")\n", + "x, y = img.size\n", + "\n", + "fig, ax = plt.subplots(1, figsize=(15, 15))\n", + "# Display the image\n", + "ax.imshow(img_np)\n", + "\n", + "# draw box and label for each detection\n", + "detections = json.loads(resp.text)\n", + "for detect in detections[\"boxes\"]:\n", + " label = 
detect[\"label\"]\n", + " box = detect[\"box\"]\n", + " polygon = detect[\"polygon\"]\n", + " conf_score = detect[\"score\"]\n", + " if conf_score > 0.6:\n", + " ymin, xmin, ymax, xmax = (\n", + " box[\"topY\"],\n", + " box[\"topX\"],\n", + " box[\"bottomY\"],\n", + " box[\"bottomX\"],\n", + " )\n", + " topleft_x, topleft_y = x * xmin, y * ymin\n", + " width, height = x * (xmax - xmin), y * (ymax - ymin)\n", + " print(\n", + " \"{}: [{}, {}, {}, {}], {}\".format(\n", + " detect[\"label\"],\n", + " round(topleft_x, 3),\n", + " round(topleft_y, 3),\n", + " round(width, 3),\n", + " round(height, 3),\n", + " round(conf_score, 3),\n", + " )\n", + " )\n", + "\n", + " color = np.random.rand(3) #'red'\n", + " rect = patches.Rectangle(\n", + " (topleft_x, topleft_y),\n", + " width,\n", + " height,\n", + " linewidth=2,\n", + " edgecolor=color,\n", + " facecolor=\"none\",\n", + " )\n", + "\n", + " ax.add_patch(rect)\n", + " plt.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20)\n", + "\n", + " polygon_np = np.array(polygon[0])\n", + " polygon_np = polygon_np.reshape(-1, 2)\n", + " polygon_np[:, 0] *= x\n", + " polygon_np[:, 1] *= y\n", + " poly = patches.Polygon(polygon_np, True, facecolor=color, alpha=0.4)\n", + " ax.add_patch(poly)\n", + " poly_line = Line2D(\n", + " polygon_np[:, 0],\n", + " polygon_np[:, 1],\n", + " linewidth=2,\n", + " marker=\"o\",\n", + " markersize=8,\n", + " markerfacecolor=color,\n", + " )\n", + " ax.add_line(poly_line)\n", + "plt.show()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10" + }, + "nteract": { + "version": "nteract-front-end@1.0.0" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/example_instance_segmentation_predictions.jpg b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/example_instance_segmentation_predictions.jpg new file mode 100644 index 000000000..a47c45add Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/example_instance_segmentation_predictions.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/jsonl_converter.py b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/jsonl_converter.py new file mode 100644 index 000000000..5a7a2baa6 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/jsonl_converter.py @@ -0,0 +1,213 @@ +import argparse +import os +import json +import numpy as np +import PIL.Image as Image +import xml.etree.ElementTree as ET + +from simplification.cutil import simplify_coords +from skimage import measure + + +def convert_mask_to_polygon( + mask, + max_polygon_points=100, + score_threshold=0.5, + max_refinement_iterations=25, + edge_safety_padding=1, +): + """Convert a numpy mask to a polygon outline in normalized coordinates. 
+ + :param mask: Pixel mask, where each pixel has an object (float) score in [0, 1], of shape [1, height, width] + :type mask: numpy.ndarray + :param max_polygon_points: Maximum number of (x, y) coordinate pairs in the polygon + :type max_polygon_points: int + :param score_threshold: Score cutoff for considering a pixel as part of an object. + :type score_threshold: float + :param max_refinement_iterations: Maximum number of times to refine the polygon + trying to reduce the number of pixels to meet max polygon points. + :type max_refinement_iterations: int + :param edge_safety_padding: Number of pixels to pad the mask with + :type edge_safety_padding: int + :return: normalized polygon coordinates + :rtype: list of list + """ + # Convert to numpy bitmask + mask = mask[0] + mask_array = np.array((mask > score_threshold), dtype=np.uint8) + image_shape = mask_array.shape + + # Pad the mask to avoid errors at the edge of the mask + embedded_mask = np.zeros( + ( + image_shape[0] + 2 * edge_safety_padding, + image_shape[1] + 2 * edge_safety_padding, + ), + dtype=np.uint8, + ) + embedded_mask[ + edge_safety_padding : image_shape[0] + edge_safety_padding, + edge_safety_padding : image_shape[1] + edge_safety_padding, + ] = mask_array + + # Find Image Contours + contours = measure.find_contours(embedded_mask, 0.5) + simplified_contours = [] + + for contour in contours: + + # Iteratively reduce polygon points, if necessary + if max_polygon_points is not None: + simplify_factor = 0 + while ( + len(contour) > max_polygon_points + and simplify_factor < max_refinement_iterations + ): + contour = simplify_coords(contour, simplify_factor) + simplify_factor += 1 + + # Convert to [x, y, x, y, ....] coordinates and correct for padding + unwrapped_contour = [0] * (2 * len(contour)) + unwrapped_contour[::2] = np.ceil(contour[:, 1]) - edge_safety_padding + unwrapped_contour[1::2] = np.ceil(contour[:, 0]) - edge_safety_padding + + simplified_contours.append(unwrapped_contour) + + return _normalize_contour(simplified_contours, image_shape) + + +def _normalize_contour(contours, image_shape): + + height, width = image_shape[0], image_shape[1] + + for contour in contours: + contour[::2] = [x * 1.0 / width for x in contour[::2]] + contour[1::2] = [y * 1.0 / height for y in contour[1::2]] + + return contours + + +def binarise_mask(mask_fname): + + mask = Image.open(mask_fname) + mask = np.array(mask) + # instances are encoded as different colors + obj_ids = np.unique(mask) + # first id is the background, so remove it + obj_ids = obj_ids[1:] + + # split the color-encoded mask into a set of binary masks + binary_masks = mask == obj_ids[:, None, None] + return binary_masks + + +def parsing_mask(mask_fname): + + # For this particular dataset, initially each mask was merged (based on binary mask of each object) + # in the order of the bounding boxes described in the corresponding PASCAL VOC annotation file. + # Therefore, we have to extract each binary mask which is in the order of objects in the annotation file.
+ # https://github.com/microsoft/computervision-recipes/blob/master/utils_cv/detection/dataset.py + binary_masks = binarise_mask(mask_fname) + polygons = [] + for bi_mask in binary_masks: + + if len(bi_mask.shape) == 2: + bi_mask = bi_mask[np.newaxis, :] + polygon = convert_mask_to_polygon(bi_mask) + polygons.append(polygon) + + return polygons + + +def convert_mask_in_VOC_to_jsonl(base_dir, workspace): + + src = base_dir + train_validation_ratio = 5 + + # Retrieving the default datastore that was automatically created when we set up the workspace + workspaceblobstore = workspace.get_default_datastore().name + + # Path to the annotations + annotations_folder = os.path.join(src, "annotations") + mask_folder = os.path.join(src, "segmentation-masks") + + # Path to the training and validation files + train_annotations_file = os.path.join(src, "train_annotations.jsonl") + validation_annotations_file = os.path.join(src, "validation_annotations.jsonl") + + # sample json line dictionary + json_line_sample = { + "image_url": "AmlDatastore://" + + workspaceblobstore + + "/" + + os.path.basename(os.path.dirname(src)) + + "/" + + "images", + "image_details": {"format": None, "width": None, "height": None}, + "label": [], + } + + # Read each annotation and convert it to jsonl line + with open(train_annotations_file, "w") as train_f: + with open(validation_annotations_file, "w") as validation_f: + for i, filename in enumerate(os.listdir(annotations_folder)): + if filename.endswith(".xml"): + print("Parsing " + os.path.join(src, filename)) + + root = ET.parse( + os.path.join(annotations_folder, filename) + ).getroot() + + width = int(root.find("size/width").text) + height = int(root.find("size/height").text) + # convert mask into polygon + mask_fname = os.path.join(mask_folder, filename[:-4] + ".png") + polygons = parsing_mask(mask_fname) + + labels = [] + for index, obj in enumerate(root.findall("object")): + name = obj.find("name").text + isCrowd = int(obj.find("difficult").text) + labels.append( + { + "label": name, + "bbox": "null", + "isCrowd": isCrowd, + "polygon": polygons[index], + } + ) + + # build the jsonl file + image_filename = root.find("filename").text + _, file_extension = os.path.splitext(image_filename) + json_line = dict(json_line_sample) + json_line["image_url"] = ( + json_line["image_url"] + "/" + image_filename + ) + json_line["image_details"]["format"] = file_extension[1:] + json_line["image_details"]["width"] = width + json_line["image_details"]["height"] = height + json_line["label"] = labels + + if i % train_validation_ratio == 0: + # validation annotation + validation_f.write(json.dumps(json_line) + "\n") + else: + # train annotation + train_f.write(json.dumps(json_line) + "\n") + else: + print("Skipping unknown file: {}".format(filename)) + + +if __name__ == "__main__": + parser = argparse.ArgumentParser(allow_abbrev=False) + parser.add_argument( + "--data_path", + type=str, + help="the directory containing images, annotations, and masks", + ) + + args, remaining_args = parser.parse_known_args() + data_path = args.data_path + + # convert_mask_in_VOC_to_jsonl requires a workspace to resolve the default + # datastore name, so load it from the local config before converting. + from azureml.core import Workspace + + ws = Workspace.from_config() + convert_mask_in_VOC_to_jsonl(data_path, ws) diff --git a/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/test_image.jpg b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/test_image.jpg new file mode 100644 index 000000000..a20619469 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-instance-segmentation/test_image.jpg differ diff --git
a/how-to-use-azureml/automated-machine-learning/image-object-detection/README.md b/how-to-use-azureml/automated-machine-learning/image-object-detection/README.md new file mode 100644 index 000000000..5827dfb91 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-object-detection/README.md @@ -0,0 +1,15 @@ +--- +page_type: sample +languages: +- python +products: +- azure-machine-learning +description: Notebook showing how to use AutoML for training an Object Detection model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. +--- + +# Object Detection using AutoML for Images +- Dataset: Toy dataset with images of products found in a fridge + - **[Jupyter Notebook](auto-ml-image-object-detection.ipynb)** + - train an Object Detection model using AutoML + - tune hyperparameters of the model to optimize model performance + - deploy the model to use in inference scenarios diff --git a/how-to-use-azureml/automated-machine-learning/image-object-detection/auto-ml-image-object-detection.ipynb b/how-to-use-azureml/automated-machine-learning/image-object-detection/auto-ml-image-object-detection.ipynb new file mode 100644 index 000000000..3e644d018 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-object-detection/auto-ml-image-object-detection.ipynb @@ -0,0 +1,835 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License.\n", + "\n", + "# Training an Object Detection model using AutoML\n", + "In this notebook, we go over how you can use AutoML for training an Object Detection model. We will use a small dataset to train the model, demonstrate how you can tune hyperparameters of the model to optimize model performance and deploy the model to use in inference scenarios. For detailed information please refer to the [documentation of AutoML for Images](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![img](example_object_detection_predictions.jpg)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important:** This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/en-us/support/legal/preview-supplemental-terms/)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Environment Setup\n", + "Please follow the [\"Setup a new conda environment\"](https://github.com/Azure/azureml-examples/tree/main/python-sdk/tutorials/automl-with-azureml#3-setup-a-new-conda-environment) instructions to get started." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import azureml.core\n", + "\n", + "print(\"This notebook was created using version 1.35.0 of the Azure ML SDK.\")\n", + "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK.\")\n", + "assert (\n", + " azureml.core.VERSION >= \"1.35\"\n", + "), \"Please upgrade the Azure ML SDK by running '!pip install --upgrade azureml-sdk' then restart the kernel.\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Workspace setup\n", + "In order to train and deploy models in Azure ML, you will first need to set up a workspace.\n", + "\n", + "An [Azure ML Workspace](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#workspace) is an Azure resource that organizes and coordinates the actions of many other Azure resources to assist in executing and sharing machine learning workflows. In particular, an Azure ML Workspace coordinates storage, databases, and compute resources providing added functionality for machine learning experimentation, deployment, inference, and the monitoring of deployed models.\n", + "\n", + "Create an Azure ML Workspace within your Azure subscription or load an existing workspace." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.workspace import Workspace\n", + "\n", + "ws = Workspace.from_config()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compute target setup\n", + "You will need to provide a [Compute Target](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#computes) that will be used for your AutoML model training. AutoML models for image tasks require [GPU SKUs](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) such as the ones from the NC, NCv2, NCv3, ND, NDv2 and NCasT4 series. We recommend using the NCsv3-series (with v100 GPUs) for faster training. Using a compute target with a multi-GPU VM SKU will leverage the multiple GPUs to speed up training. Additionally, setting up a compute target with multiple nodes will allow for faster model training by leveraging parallelism, when tuning hyperparameters for your model." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import AmlCompute, ComputeTarget\n", + "\n", + "cluster_name = \"gpu-cluster-nc6\"\n", + "\n", + "try:\n", + " compute_target = ws.compute_targets[cluster_name]\n", + " print(\"Found existing compute target.\")\n", + "except KeyError:\n", + " print(\"Creating a new compute target...\")\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"Standard_NC6\",\n", + " idle_seconds_before_scaledown=600,\n", + " min_nodes=0,\n", + " max_nodes=4,\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cluster_name, compute_config)\n", + "# Can poll for a minimum number of nodes and for a specific timeout.\n", + "# If no min_node_count is provided, it will use the scale settings for the cluster.\n", + "compute_target.wait_for_completion(\n", + " show_output=True, min_node_count=None, timeout_in_minutes=20\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Experiment Setup\n", + "Create an [Experiment](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#experiments) in your workspace to track your model training runs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Experiment\n", + "\n", + "experiment_name = \"automl-image-object-detection\"\n", + "experiment = Experiment(ws, name=experiment_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Dataset with input Training Data\n", + "\n", + "In order to generate models for computer vision, you will need to bring in labeled image data as input for model training in the form of an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset). You can either use a dataset that you have exported from a [Data Labeling](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-label-data) project, or create a new Tabular Dataset with your labeled training data." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In this notebook, we use a toy dataset called Fridge Objects, which consists of 128 images of 4 classes of beverage containers {can, carton, milk bottle, water bottle}, photographed against different backgrounds.\n", + "\n", + "All images in this notebook are hosted in [this repository](https://github.com/microsoft/computervision-recipes) and are made available under the [MIT license](https://github.com/microsoft/computervision-recipes/blob/master/LICENSE).\n", + "\n", + "We first download and unzip the data locally."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import urllib\n", + "from zipfile import ZipFile\n", + "\n", + "# download data\n", + "download_url = \"https://cvbp-secondary.z19.web.core.windows.net/datasets/object_detection/odFridgeObjects.zip\"\n", + "data_file = \"./odFridgeObjects.zip\"\n", + "urllib.request.urlretrieve(download_url, filename=data_file)\n", + "\n", + "# extract files\n", + "with ZipFile(data_file, \"r\") as zip:\n", + " print(\"extracting files...\")\n", + " zip.extractall()\n", + " print(\"done\")\n", + "# delete zip file\n", + "os.remove(data_file)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This is a sample image from this dataset:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import Image\n", + "\n", + "Image(filename=\"./odFridgeObjects/images/31.jpg\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Convert the downloaded data to JSONL\n", + "In this example, the fridge object dataset is annotated in Pascal VOC format, where each image corresponds to an xml file. Each xml file records where its corresponding image file is located, along with the bounding boxes and the object labels. In order to use this data to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset), we first need to convert it to the required JSONL format. Please refer to the [documentation on how to prepare datasets](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-prepare-datasets-for-automl-images).\n", + "\n", + "The following script creates two .jsonl files (one for training and one for validation) in the parent folder of the dataset. The train / validation ratio corresponds to 20% of the data going into the validation file."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import os\n", + "import xml.etree.ElementTree as ET\n", + "\n", + "src = \"./odFridgeObjects/\"\n", + "train_validation_ratio = 5\n", + "\n", + "# Retrieving the default datastore that was automatically created when we set up the workspace\n", + "workspaceblobstore = ws.get_default_datastore().name\n", + "\n", + "# Path to the annotations\n", + "annotations_folder = os.path.join(src, \"annotations\")\n", + "\n", + "# Path to the training and validation files\n", + "train_annotations_file = os.path.join(src, \"train_annotations.jsonl\")\n", + "validation_annotations_file = os.path.join(src, \"validation_annotations.jsonl\")\n", + "\n", + "# sample json line dictionary\n", + "json_line_sample = {\n", + " \"image_url\": \"AmlDatastore://\"\n", + " + workspaceblobstore\n", + " + \"/\"\n", + " + os.path.basename(os.path.dirname(src))\n", + " + \"/\"\n", + " + \"images\",\n", + " \"image_details\": {\"format\": None, \"width\": None, \"height\": None},\n", + " \"label\": [],\n", + "}\n", + "\n", + "# Read each annotation and convert it to jsonl line\n", + "with open(train_annotations_file, \"w\") as train_f:\n", + " with open(validation_annotations_file, \"w\") as validation_f:\n", + " for i, filename in enumerate(os.listdir(annotations_folder)):\n", + " if filename.endswith(\".xml\"):\n", + " print(\"Parsing \" + os.path.join(src, filename))\n", + "\n", + " root = ET.parse(os.path.join(annotations_folder, filename)).getroot()\n", + "\n", + " width = int(root.find(\"size/width\").text)\n", + " height = int(root.find(\"size/height\").text)\n", + "\n", + " labels = []\n", + " for obj in root.findall(\"object\"):\n", + " name = obj.find(\"name\").text\n", + " xmin = obj.find(\"bndbox/xmin\").text\n", + " ymin = obj.find(\"bndbox/ymin\").text\n", + " xmax = obj.find(\"bndbox/xmax\").text\n", + " ymax = obj.find(\"bndbox/ymax\").text\n", + " isCrowd = int(obj.find(\"difficult\").text)\n", + " labels.append(\n", + " {\n", + " \"label\": name,\n", + " \"topX\": float(xmin) / width,\n", + " \"topY\": float(ymin) / height,\n", + " \"bottomX\": float(xmax) / width,\n", + " \"bottomY\": float(ymax) / height,\n", + " \"isCrowd\": isCrowd,\n", + " }\n", + " )\n", + " # build the jsonl file\n", + " image_filename = root.find(\"filename\").text\n", + " _, file_extension = os.path.splitext(image_filename)\n", + " json_line = dict(json_line_sample)\n", + " json_line[\"image_url\"] = json_line[\"image_url\"] + \"/\" + image_filename\n", + " json_line[\"image_details\"][\"format\"] = file_extension[1:]\n", + " json_line[\"image_details\"][\"width\"] = width\n", + " json_line[\"image_details\"][\"height\"] = height\n", + " json_line[\"label\"] = labels\n", + "\n", + " if i % train_validation_ratio == 0:\n", + " # validation annotation\n", + " validation_f.write(json.dumps(json_line) + \"\\n\")\n", + " else:\n", + " # train annotation\n", + " train_f.write(json.dumps(json_line) + \"\\n\")\n", + " else:\n", + " print(\"Skipping unknown file: {}\".format(filename))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Convert annotation file from COCO to JSONL\n", + "If you want to try a dataset in COCO format, the script below shows how to convert it to `jsonl` format. The file \"odFridgeObjects_coco.json\" contains the annotation information for the `odFridgeObjects` dataset."
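For orientation, COCO annotation files follow the general structure sketched below, with pixel-space `[x, y, width, height]` bounding boxes that the converter maps to the normalized JSONL fields; the concrete values here are illustrative rather than taken from "odFridgeObjects_coco.json":

```python
# Illustrative structure of a COCO annotation file; all values are made up.
# Note bbox is [x, y, width, height] in pixels, unlike the normalized JSONL format.
coco_sample = {
    "images": [{"id": 1, "file_name": "31.jpg", "width": 499, "height": 666}],
    "annotations": [
        {"id": 1, "image_id": 1, "category_id": 2, "bbox": [100.0, 120.0, 80.0, 200.0]}
    ],
    "categories": [
        {"id": 1, "name": "can"},
        {"id": 2, "name": "carton"},
        {"id": 3, "name": "milk_bottle"},
        {"id": 4, "name": "water_bottle"},
    ],
}
```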
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Generate jsonl file from coco file\n", + "!python coco2jsonl.py \\\n", + "--input_coco_file_path \"./odFridgeObjects_coco.json\" \\\n", + "--output_dir \"./odFridgeObjects\" --output_file_name \"odFridgeObjects_from_coco.jsonl\" \\\n", + "--task_type \"ObjectDetection\" \\\n", + "--base_url \"AmlDatastore://workspaceblobstore/odFridgeObjects/images/\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Visualize bounding boxes\n", + "Please refer to the \"Visualize data\" section in the following [tutorial](https://docs.microsoft.com/en-us/azure/machine-learning/tutorial-auto-train-image-models#visualize-data) to see how to easily visualize your ground truth bounding boxes before starting to train." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Upload the JSONL file and images to Datastore\n", + "In order to use the data for training in Azure ML, we upload it to our Azure ML Workspace via a [Datastore](https://docs.microsoft.com/en-us/azure/machine-learning/concept-azure-machine-learning-architecture#datasets-and-datastores). The datastore provides a mechanism for you to upload/download data and interact with it from your remote compute targets. It is an abstraction over Azure Storage." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Retrieving the default datastore that was automatically created when we set up the workspace\n", + "ds = ws.get_default_datastore()\n", + "ds.upload(src_dir=\"./odFridgeObjects\", target_path=\"odFridgeObjects\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, we need to create an [AzureML Tabular Dataset](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabulardataset) from the data we uploaded to the Datastore. We create one dataset for training and one for validation."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Dataset\n", + "from azureml.data import DataType\n", + "\n", + "# get existing training dataset\n", + "training_dataset_name = \"odFridgeObjectsTrainingDataset\"\n", + "if training_dataset_name in ws.datasets:\n", + " training_dataset = ws.datasets.get(training_dataset_name)\n", + " print(\"Found the training dataset\", training_dataset_name)\n", + "else:\n", + " # create training dataset\n", + " training_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"odFridgeObjects/train_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " training_dataset = training_dataset.register(\n", + " workspace=ws, name=training_dataset_name\n", + " )\n", + "# get existing validation dataset\n", + "validation_dataset_name = \"odFridgeObjectsValidationDataset\"\n", + "if validation_dataset_name in ws.datasets:\n", + " validation_dataset = ws.datasets.get(validation_dataset_name)\n", + " print(\"Found the validation dataset\", validation_dataset_name)\n", + "else:\n", + " # create validation dataset\n", + " validation_dataset = Dataset.Tabular.from_json_lines_files(\n", + " path=ds.path(\"odFridgeObjects/validation_annotations.jsonl\"),\n", + " set_column_types={\"image_url\": DataType.to_stream(ds.workspace)},\n", + " )\n", + " validation_dataset = validation_dataset.register(\n", + " workspace=ws, name=validation_dataset_name\n", + " )\n", + "print(\"Training dataset name: \" + training_dataset.name)\n", + "print(\"Validation dataset name: \" + validation_dataset.name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Validation dataset is optional. If no validation dataset is specified, by default 20% of your training data will be used for validation. You can control the percentage using the `split_ratio` argument - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#model-agnostic-hyperparameters) for more details.\n", + "\n", + "This is what the training dataset looks like:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_dataset.to_pandas_dataframe()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Configuring your AutoML run for image tasks\n", + "AutoML allows you to easily train models for Image Classification, Object Detection & Instance Segmentation on your image data. You can control the model algorithm to be used, specify hyperparameter values for your model as well as perform a sweep across the hyperparameter space to generate an optimal model. Parameters for configuring your AutoML Image run are specified using the `AutoMLImageConfig` - please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-your-experiment-settings) for the details on the parameters that can be used and their values." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When using AutoML for image tasks, you need to specify the model algorithms using the `model_name` parameter. You can either specify a single model or choose to sweep over multiple models. 
Please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#configure-model-algorithms-and-hyperparameters) for the list of supported model algorithms." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Using default hyperparameter values for the specified algorithm\n", + "Before doing a large sweep to search for the optimal models and hyperparameters, we recommend trying the default values for a given model to get a first baseline. Next, you can explore multiple hyperparameters for the same model before sweeping over multiple models and their parameters. This allows an iterative approach: with multiple models and multiple hyperparameters for each (as we showcase in the next section), the search space grows exponentially, and you need more iterations to find optimal configurations.\n", + "\n", + "If you wish to use the default hyperparameter values for a given algorithm (say `yolov5`), you can specify the config for your AutoML Image runs as follows:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import GridParameterSampling, choice\n", + "\n", + "image_config_yolov5 = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_OBJECT_DETECTION,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " hyperparameter_sampling=GridParameterSampling({\"model_name\": choice(\"yolov5\")}),\n", + " iterations=1,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Submitting an AutoML run for Computer Vision tasks\n", + "Once you've created the config settings for your run, you can submit the run to train a vision model on your training dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(image_config_yolov5)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Hyperparameter sweeping for your AutoML models for computer vision tasks\n", + "\n", + "In this example, we use the AutoMLImageConfig to train an Object Detection model using `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains over 200K labeled images with over 80 label categories.\n", + "\n", + "When using AutoML for Images, you can perform a hyperparameter sweep over a defined parameter space to find the optimal model. In this example, we sweep over the hyperparameters for each algorithm, choosing from a range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for the specified algorithm.\n", + "\n", + "We use Random Sampling to pick samples from this parameter space and try a total of 10 iterations with these different samples, running 2 iterations at a time on our compute target, which was previously set up with 4 nodes. 
Please note that the more parameters the space has, the more iterations you need to find optimal models.\n", + "\n", + "We use the Bandit early termination policy, which terminates poorly performing configurations (those not within 20% slack of the best-performing configuration), significantly saving compute resources.\n", + "\n", + "For more details on model and hyperparameter sweeping, please refer to the [documentation](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-tune-hyperparameters)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared.constants import ImageTask\n", + "from azureml.train.automl import AutoMLImageConfig\n", + "from azureml.train.hyperdrive import BanditPolicy, RandomParameterSampling\n", + "from azureml.train.hyperdrive import choice, uniform\n", + "\n", + "parameter_space = {\n", + " \"model\": choice(\n", + " {\n", + " \"model_name\": choice(\"yolov5\"),\n", + " \"learning_rate\": uniform(0.0001, 0.01),\n", + " \"model_size\": choice(\"small\", \"medium\"), # model-specific\n", + " #'img_size': choice(640, 704, 768), # model-specific; might need GPU with large memory\n", + " },\n", + " {\n", + " \"model_name\": choice(\"fasterrcnn_resnet50_fpn\"),\n", + " \"learning_rate\": uniform(0.0001, 0.001),\n", + " \"optimizer\": choice(\"sgd\", \"adam\", \"adamw\"),\n", + " \"min_size\": choice(600, 800), # model-specific\n", + " #'warmup_cosine_lr_warmup_epochs': choice(0, 3),\n", + " },\n", + " ),\n", + "}\n", + "\n", + "tuning_settings = {\n", + " \"iterations\": 10,\n", + " \"max_concurrent_iterations\": 2,\n", + " \"hyperparameter_sampling\": RandomParameterSampling(parameter_space),\n", + " \"early_termination_policy\": BanditPolicy(\n", + " evaluation_interval=2, slack_factor=0.2, delay_evaluation=6\n", + " ),\n", + "}\n", + "\n", + "automl_image_config = AutoMLImageConfig(\n", + " task=ImageTask.IMAGE_OBJECT_DETECTION,\n", + " compute_target=compute_target,\n", + " training_data=training_dataset,\n", + " validation_data=validation_dataset,\n", + " **tuning_settings,\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run = experiment.submit(automl_image_config)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_image_run.wait_for_completion(wait_post_processing=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. The 'Child runs' tab in the UI of the main `automl_image_run` above contains the HyperDrive parent run; opening the 'Child runs' tab of that parent run shows every configuration that was tried. 
Alternatively, you can fetch the HyperDrive parent run directly, as shown below, and navigate to its 'Child runs' tab:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core import Run\n", + "\n", + "hyperdrive_run = Run(experiment=experiment, run_id=automl_image_run.id + \"_HD\")\n", + "hyperdrive_run" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Register the optimal vision model from the AutoML run\n", + "Once the run completes, we can register the model that was created from the best run (the configuration that resulted in the best primary metric)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register the model from the best run\n", + "\n", + "best_child_run = automl_image_run.get_best_child()\n", + "model_name = best_child_run.properties[\"model_name\"]\n", + "model = best_child_run.register_model(\n", + " model_name=model_name, model_path=\"outputs/model.pt\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Deploy model as a web service\n", + "Once you have a trained model, you can deploy it as a web service on Azure Container Instances ([ACI](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-container-instance)) or Azure Kubernetes Service ([AKS](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-azure-kubernetes-service)). Please note that ACI only supports small models under 1 GB in size. For testing larger models, or for high-scale production use, we recommend AKS.\n", + "In this tutorial, we will deploy the model as a web service in AKS." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You will first need to create an AKS compute cluster or use an existing AKS cluster. You can use either GPU or CPU VM SKUs for your deployment cluster." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AksCompute\n", + "from azureml.exceptions import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster\n", + "aks_name = \"aks-cpu-od\"\n", + "# Check to see if the cluster already exists\n", + "try:\n", + " aks_target = ComputeTarget(workspace=ws, name=aks_name)\n", + " print(\"Found existing compute target\")\n", + "except ComputeTargetException:\n", + " print(\"Creating a new compute target...\")\n", + " # Provision AKS cluster with a CPU machine\n", + " prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n", + " # Create the cluster\n", + " aks_target = ComputeTarget.create(\n", + " workspace=ws, name=aks_name, provisioning_configuration=prov_config\n", + " )\n", + " aks_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Next, you will need to define the [inference configuration](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-auto-train-image-models#update-inference-configuration), which describes how to set up the web service containing your model. You can use the scoring script and the environment from the training run in your inference config.\n", + "\n", + "Note: To change the model's settings, open the downloaded scoring script and modify the `model_settings` variable before deploying the model."
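, + "\n", + "For instance, the scoring script for an object detection model typically exposes a dictionary along these lines (the key and value below are purely illustrative - check the downloaded `score.py` for the exact settings it defines):\n", + "```\n", + "# hypothetical example: only return boxes with confidence above 0.4\n", + "model_settings = {\"box_score_thresh\": 0.4}\n", + "```"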
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import InferenceConfig\n", + "\n", + "best_child_run.download_file(\n", + " \"outputs/scoring_file_v_1_0_0.py\", output_file_path=\"score.py\"\n", + ")\n", + "environment = best_child_run.get_environment()\n", + "inference_config = InferenceConfig(entry_script=\"score.py\", environment=environment)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You can then deploy the model as an AKS web service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Deploy the model from the best run as an AKS web service\n", + "from azureml.core.webservice import AksWebservice\n", + "from azureml.core.model import Model\n", + "\n", + "aks_config = AksWebservice.deploy_configuration(\n", + " autoscale_enabled=True, cpu_cores=1, memory_gb=5, enable_app_insights=True\n", + ")\n", + "\n", + "aks_service = Model.deploy(\n", + " ws,\n", + " models=[model],\n", + " inference_config=inference_config,\n", + " deployment_config=aks_config,\n", + " deployment_target=aks_target,\n", + " name=\"automl-image-test-cpu-od\",\n", + " overwrite=True,\n", + ")\n", + "aks_service.wait_for_deployment(show_output=True)\n", + "print(aks_service.state)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test the web service\n", + "Finally, let's test the deployed web service by making predictions on new images. You can pass in any image. In this case, we'll use a sample image from the dataset and pass it to the scoring URI." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import requests\n", + "\n", + "# URL for the web service\n", + "scoring_uri = aks_service.scoring_uri\n", + "\n", + "# If the service is authenticated, set the key or token\n", + "key, _ = aks_service.get_keys()\n", + "\n", + "sample_image = \"./test_image.jpg\"\n", + "\n", + "# Load the raw image bytes\n", + "with open(sample_image, \"rb\") as f:\n", + " data = f.read()\n", + "\n", + "# Set the content type\n", + "headers = {\"Content-Type\": \"application/octet-stream\"}\n", + "\n", + "# If authentication is enabled, set the authorization header\n", + "headers[\"Authorization\"] = f\"Bearer {key}\"\n", + "\n", + "# Make the request and display the response\n", + "resp = requests.post(scoring_uri, data, headers=headers)\n", + "print(resp.text)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Visualize detections\n", + "Now that we have scored a test image, we can visualize the bounding boxes for this image." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "import matplotlib.pyplot as plt\n", + "import matplotlib.image as mpimg\n", + "import matplotlib.patches as patches\n", + "from PIL import Image\n", + "import numpy as np\n", + "import json\n", + "\n", + "IMAGE_SIZE = (18, 12)\n", + "plt.figure(figsize=IMAGE_SIZE)\n", + "img_np = mpimg.imread(sample_image)\n", + "img = Image.fromarray(img_np.astype(\"uint8\"), \"RGB\")\n", + "x, y = img.size\n", + "\n", + "fig, ax = plt.subplots(1, figsize=(15, 15))\n", + "# Display the image\n", + "ax.imshow(img_np)\n", + "\n", + "# Draw a box and label for each detection\n", + "detections = json.loads(resp.text)\n", + "for detect in detections[\"boxes\"]:\n", + " label = detect[\"label\"]\n", + " box = detect[\"box\"]\n", + " 
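# each detection has a label, a confidence score, and box coordinates\n", + " # normalized to [0, 1], which are scaled back to pixels further below\n", + " 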
conf_score = detect[\"score\"]\n", + " if conf_score > 0.6:\n", + " ymin, xmin, ymax, xmax = (\n", + " box[\"topY\"],\n", + " box[\"topX\"],\n", + " box[\"bottomY\"],\n", + " box[\"bottomX\"],\n", + " )\n", + " topleft_x, topleft_y = x * xmin, y * ymin\n", + " width, height = x * (xmax - xmin), y * (ymax - ymin)\n", + " print(\n", + " \"{}: [{}, {}, {}, {}], {}\".format(\n", + " detect[\"label\"],\n", + " round(topleft_x, 3),\n", + " round(topleft_y, 3),\n", + " round(width, 3),\n", + " round(height, 3),\n", + " round(conf_score, 3),\n", + " )\n", + " )\n", + "\n", + " color = np.random.rand(3) #'red'\n", + " rect = patches.Rectangle(\n", + " (topleft_x, topleft_y),\n", + " width,\n", + " height,\n", + " linewidth=3,\n", + " edgecolor=color,\n", + " facecolor=\"none\",\n", + " )\n", + "\n", + " ax.add_patch(rect)\n", + " plt.text(topleft_x, topleft_y - 10, label, color=color, fontsize=20)\n", + "plt.show()" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10" + }, + "nteract": { + "version": "nteract-front-end@1.0.0" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/image-object-detection/coco2jsonl.py b/how-to-use-azureml/automated-machine-learning/image-object-detection/coco2jsonl.py new file mode 100644 index 000000000..df8882958 --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-object-detection/coco2jsonl.py @@ -0,0 +1,127 @@ +import json +import os +import sys +import argparse + +# Define Converters + + +class CocoToJSONLinesConverter: + def convert(self): + raise NotImplementedError + + +class BoundingBoxConverter(CocoToJSONLinesConverter): + def __init__(self, coco_data): + self.json_lines_data = [] + self.categories = {} + self.coco_data = coco_data + self.image_id_to_data_index = {} + for i in range(0, len(coco_data["images"])): + self.json_lines_data.append({}) + self.json_lines_data[i]["image_url"] = "" + self.json_lines_data[i]["image_details"] = {} + self.json_lines_data[i]["label"] = [] + for i in range(0, len(coco_data["categories"])): + self.categories[coco_data["categories"][i]["id"]] = coco_data["categories"][ + i + ]["name"] + + def _populate_image_url(self, index, coco_image): + self.json_lines_data[index]["image_url"] = coco_image["file_name"] + self.image_id_to_data_index[coco_image["id"]] = index + + def _populate_image_details(self, index, coco_image): + file_name = coco_image["file_name"] + self.json_lines_data[index]["image_details"]["format"] = file_name[ + file_name.rfind(".") + 1 : + ] + self.json_lines_data[index]["image_details"]["width"] = coco_image["width"] + self.json_lines_data[index]["image_details"]["height"] = coco_image["height"] + + def _populate_bbox_in_label(self, label, annotation, image_details): + # if bbox comes as normalized, skip normalization. 
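+ # COCO bboxes are [x, y, width, height] in pixel units; if every value is + # below 1.5, we assume the file already stores normalized coordinates.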
+ if max(annotation["bbox"]) < 1.5: + width = 1 + height = 1 + else: + width = image_details["width"] + height = image_details["height"] + label["topX"] = annotation["bbox"][0] / width + label["topY"] = annotation["bbox"][1] / height + label["bottomX"] = (annotation["bbox"][0] + annotation["bbox"][2]) / width + label["bottomY"] = (annotation["bbox"][1] + annotation["bbox"][3]) / height + + def _populate_label(self, annotation): + index = self.image_id_to_data_index[annotation["image_id"]] + image_details = self.json_lines_data[index]["image_details"] + label = {"label": self.categories[annotation["category_id"]]} + self._populate_bbox_in_label(label, annotation, image_details) + self._populate_isCrowd(label, annotation) + self.json_lines_data[index]["label"].append(label) + + def _populate_isCrowd(self, label, annotation): + if "iscrowd" in annotation.keys(): + label["isCrowd"] = annotation["iscrowd"] + + def convert(self): + for i in range(0, len(self.coco_data["images"])): + self._populate_image_url(i, self.coco_data["images"][i]) + self._populate_image_details(i, self.coco_data["images"][i]) + for i in range(0, len(self.coco_data["annotations"])): + self._populate_label(self.coco_data["annotations"][i]) + return self.json_lines_data + + +if __name__ == "__main__": + # Parse arguments that are passed into the script + parser = argparse.ArgumentParser() + parser.add_argument("--input_coco_file_path", type=str, required=True) + parser.add_argument("--output_dir", type=str, required=True) + parser.add_argument("--output_file_name", type=str, required=True) + parser.add_argument( + "--task_type", + type=str, + required=True, + choices=["ObjectDetection"], + default="ObjectDetection", + ) + parser.add_argument("--base_url", type=str, default=None) + + args = parser.parse_args() + + input_coco_file_path = args.input_coco_file_path + output_dir = args.output_dir + output_file_path = output_dir + "/" + args.output_file_name + task_type = args.task_type + base_url = args.base_url + + def read_coco_file(coco_file): + with open(coco_file) as f_in: + return json.load(f_in) + + def write_json_lines(converter, filename, base_url=None): + json_lines_data = converter.convert() + with open(filename, "w") as outfile: + for json_line in json_lines_data: + if base_url is not None: + image_url = json_line["image_url"] + json_line["image_url"] = ( + base_url + image_url[image_url.rfind("/") + 1 :] + ) + json.dump(json_line, outfile, separators=(",", ":")) + outfile.write("\n") + print(f"Conversion completed. 
Converted {len(json_lines_data)} lines.") + + coco_data = read_coco_file(input_coco_file_path) + + print("Converting for {}".format(task_type)) + + # Defined in azureml.contrib.dataset.labeled_dataset.LabeledDatasetTask.OBJECT_DETECTION.value + if task_type == "ObjectDetection": + converter = BoundingBoxConverter(coco_data) + write_json_lines(converter, output_file_path, base_url) + + else: + print("ERROR: Invalid Task Type") + pass diff --git a/how-to-use-azureml/automated-machine-learning/image-object-detection/example_object_detection_predictions.jpg b/how-to-use-azureml/automated-machine-learning/image-object-detection/example_object_detection_predictions.jpg new file mode 100644 index 000000000..26e6ba28e Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-object-detection/example_object_detection_predictions.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-object-detection/odFridgeObjects_coco.json b/how-to-use-azureml/automated-machine-learning/image-object-detection/odFridgeObjects_coco.json new file mode 100644 index 000000000..9c0f8374a --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/image-object-detection/odFridgeObjects_coco.json @@ -0,0 +1,5837 @@ +{ + "images": [ + { + "file_name": "1.jpg", + "height": 666, + "width": 499, + "id": "1" + }, + { + "file_name": "2.jpg", + "height": 666, + "width": 499, + "id": "2" + }, + { + "file_name": "3.jpg", + "height": 666, + "width": 499, + "id": "3" + }, + { + "file_name": "4.jpg", + "height": 666, + "width": 499, + "id": "4" + }, + { + "file_name": "5.jpg", + "height": 666, + "width": 499, + "id": "5" + }, + { + "file_name": "6.jpg", + "height": 666, + "width": 499, + "id": "6" + }, + { + "file_name": "7.jpg", + "height": 666, + "width": 499, + "id": "7" + }, + { + "file_name": "8.jpg", + "height": 666, + "width": 499, + "id": "8" + }, + { + "file_name": "9.jpg", + "height": 666, + "width": 499, + "id": "9" + }, + { + "file_name": "10.jpg", + "height": 666, + "width": 499, + "id": "10" + }, + { + "file_name": "11.jpg", + "height": 666, + "width": 499, + "id": "11" + }, + { + "file_name": "12.jpg", + "height": 666, + "width": 499, + "id": "12" + }, + { + "file_name": "13.jpg", + "height": 666, + "width": 499, + "id": "13" + }, + { + "file_name": "14.jpg", + "height": 666, + "width": 499, + "id": "14" + }, + { + "file_name": "15.jpg", + "height": 666, + "width": 499, + "id": "15" + }, + { + "file_name": "16.jpg", + "height": 666, + "width": 499, + "id": "16" + }, + { + "file_name": "17.jpg", + "height": 666, + "width": 499, + "id": "17" + }, + { + "file_name": "18.jpg", + "height": 666, + "width": 499, + "id": "18" + }, + { + "file_name": "19.jpg", + "height": 666, + "width": 499, + "id": "19" + }, + { + "file_name": "20.jpg", + "height": 666, + "width": 499, + "id": "20" + }, + { + "file_name": "21.jpg", + "height": 666, + "width": 499, + "id": "21" + }, + { + "file_name": "22.jpg", + "height": 666, + "width": 499, + "id": "22" + }, + { + "file_name": "23.jpg", + "height": 666, + "width": 499, + "id": "23" + }, + { + "file_name": "24.jpg", + "height": 666, + "width": 499, + "id": "24" + }, + { + "file_name": "25.jpg", + "height": 666, + "width": 499, + "id": "25" + }, + { + "file_name": "26.jpg", + "height": 666, + "width": 499, + "id": "26" + }, + { + "file_name": "27.jpg", + "height": 666, + "width": 499, + "id": "27" + }, + { + "file_name": "28.jpg", + "height": 666, + "width": 499, + "id": "28" + }, + { + "file_name": "29.jpg", + "height": 666, + "width": 499, 
+ "id": "29" + }, + { + "file_name": "30.jpg", + "height": 666, + "width": 499, + "id": "30" + }, + { + "file_name": "31.jpg", + "height": 666, + "width": 499, + "id": "31" + }, + { + "file_name": "32.jpg", + "height": 666, + "width": 499, + "id": "32" + }, + { + "file_name": "33.jpg", + "height": 666, + "width": 499, + "id": "33" + }, + { + "file_name": "34.jpg", + "height": 666, + "width": 499, + "id": "34" + }, + { + "file_name": "35.jpg", + "height": 666, + "width": 499, + "id": "35" + }, + { + "file_name": "36.jpg", + "height": 666, + "width": 499, + "id": "36" + }, + { + "file_name": "37.jpg", + "height": 666, + "width": 499, + "id": "37" + }, + { + "file_name": "38.jpg", + "height": 666, + "width": 499, + "id": "38" + }, + { + "file_name": "39.jpg", + "height": 666, + "width": 499, + "id": "39" + }, + { + "file_name": "40.jpg", + "height": 666, + "width": 499, + "id": "40" + }, + { + "file_name": "41.jpg", + "height": 666, + "width": 499, + "id": "41" + }, + { + "file_name": "42.jpg", + "height": 666, + "width": 499, + "id": "42" + }, + { + "file_name": "43.jpg", + "height": 666, + "width": 499, + "id": "43" + }, + { + "file_name": "44.jpg", + "height": 666, + "width": 499, + "id": "44" + }, + { + "file_name": "45.jpg", + "height": 666, + "width": 499, + "id": "45" + }, + { + "file_name": "46.jpg", + "height": 666, + "width": 499, + "id": "46" + }, + { + "file_name": "47.jpg", + "height": 666, + "width": 499, + "id": "47" + }, + { + "file_name": "48.jpg", + "height": 666, + "width": 499, + "id": "48" + }, + { + "file_name": "49.jpg", + "height": 666, + "width": 499, + "id": "49" + }, + { + "file_name": "50.jpg", + "height": 666, + "width": 499, + "id": "50" + }, + { + "file_name": "51.jpg", + "height": 666, + "width": 499, + "id": "51" + }, + { + "file_name": "52.jpg", + "height": 666, + "width": 499, + "id": "52" + }, + { + "file_name": "53.jpg", + "height": 666, + "width": 499, + "id": "53" + }, + { + "file_name": "54.jpg", + "height": 666, + "width": 499, + "id": "54" + }, + { + "file_name": "55.jpg", + "height": 666, + "width": 499, + "id": "55" + }, + { + "file_name": "56.jpg", + "height": 666, + "width": 499, + "id": "56" + }, + { + "file_name": "57.jpg", + "height": 666, + "width": 499, + "id": "57" + }, + { + "file_name": "58.jpg", + "height": 666, + "width": 499, + "id": "58" + }, + { + "file_name": "59.jpg", + "height": 666, + "width": 499, + "id": "59" + }, + { + "file_name": "60.jpg", + "height": 666, + "width": 499, + "id": "60" + }, + { + "file_name": "61.jpg", + "height": 666, + "width": 499, + "id": "61" + }, + { + "file_name": "62.jpg", + "height": 666, + "width": 499, + "id": "62" + }, + { + "file_name": "63.jpg", + "height": 666, + "width": 499, + "id": "63" + }, + { + "file_name": "64.jpg", + "height": 666, + "width": 499, + "id": "64" + }, + { + "file_name": "65.jpg", + "height": 666, + "width": 499, + "id": "65" + }, + { + "file_name": "66.jpg", + "height": 666, + "width": 499, + "id": "66" + }, + { + "file_name": "67.jpg", + "height": 666, + "width": 499, + "id": "67" + }, + { + "file_name": "68.jpg", + "height": 666, + "width": 499, + "id": "68" + }, + { + "file_name": "69.jpg", + "height": 666, + "width": 499, + "id": "69" + }, + { + "file_name": "70.jpg", + "height": 666, + "width": 499, + "id": "70" + }, + { + "file_name": "71.jpg", + "height": 666, + "width": 499, + "id": "71" + }, + { + "file_name": "72.jpg", + "height": 666, + "width": 499, + "id": "72" + }, + { + "file_name": "73.jpg", + "height": 666, + "width": 499, + "id": "73" + }, + { + 
"file_name": "74.jpg", + "height": 666, + "width": 499, + "id": "74" + }, + { + "file_name": "75.jpg", + "height": 666, + "width": 499, + "id": "75" + }, + { + "file_name": "76.jpg", + "height": 666, + "width": 499, + "id": "76" + }, + { + "file_name": "77.jpg", + "height": 666, + "width": 499, + "id": "77" + }, + { + "file_name": "78.jpg", + "height": 666, + "width": 499, + "id": "78" + }, + { + "file_name": "79.jpg", + "height": 666, + "width": 499, + "id": "79" + }, + { + "file_name": "80.jpg", + "height": 666, + "width": 499, + "id": "80" + }, + { + "file_name": "81.jpg", + "height": 666, + "width": 499, + "id": "81" + }, + { + "file_name": "82.jpg", + "height": 666, + "width": 499, + "id": "82" + }, + { + "file_name": "83.jpg", + "height": 666, + "width": 499, + "id": "83" + }, + { + "file_name": "84.jpg", + "height": 666, + "width": 499, + "id": "84" + }, + { + "file_name": "85.jpg", + "height": 666, + "width": 499, + "id": "85" + }, + { + "file_name": "86.jpg", + "height": 666, + "width": 499, + "id": "86" + }, + { + "file_name": "87.jpg", + "height": 666, + "width": 499, + "id": "87" + }, + { + "file_name": "88.jpg", + "height": 666, + "width": 499, + "id": "88" + }, + { + "file_name": "89.jpg", + "height": 666, + "width": 499, + "id": "89" + }, + { + "file_name": "90.jpg", + "height": 666, + "width": 499, + "id": "90" + }, + { + "file_name": "91.jpg", + "height": 666, + "width": 499, + "id": "91" + }, + { + "file_name": "92.jpg", + "height": 666, + "width": 499, + "id": "92" + }, + { + "file_name": "93.jpg", + "height": 666, + "width": 499, + "id": "93" + }, + { + "file_name": "94.jpg", + "height": 666, + "width": 499, + "id": "94" + }, + { + "file_name": "95.jpg", + "height": 666, + "width": 499, + "id": "95" + }, + { + "file_name": "96.jpg", + "height": 666, + "width": 499, + "id": "96" + }, + { + "file_name": "97.jpg", + "height": 666, + "width": 499, + "id": "97" + }, + { + "file_name": "98.jpg", + "height": 666, + "width": 499, + "id": "98" + }, + { + "file_name": "99.jpg", + "height": 666, + "width": 499, + "id": "99" + }, + { + "file_name": "100.jpg", + "height": 666, + "width": 499, + "id": "100" + }, + { + "file_name": "101.jpg", + "height": 666, + "width": 499, + "id": "101" + }, + { + "file_name": "102.jpg", + "height": 666, + "width": 499, + "id": "102" + }, + { + "file_name": "103.jpg", + "height": 666, + "width": 499, + "id": "103" + }, + { + "file_name": "104.jpg", + "height": 666, + "width": 499, + "id": "104" + }, + { + "file_name": "105.jpg", + "height": 666, + "width": 499, + "id": "105" + }, + { + "file_name": "106.jpg", + "height": 666, + "width": 499, + "id": "106" + }, + { + "file_name": "107.jpg", + "height": 666, + "width": 499, + "id": "107" + }, + { + "file_name": "108.jpg", + "height": 666, + "width": 499, + "id": "108" + }, + { + "file_name": "109.jpg", + "height": 666, + "width": 499, + "id": "109" + }, + { + "file_name": "110.jpg", + "height": 666, + "width": 499, + "id": "110" + }, + { + "file_name": "111.jpg", + "height": 666, + "width": 499, + "id": "111" + }, + { + "file_name": "112.jpg", + "height": 666, + "width": 499, + "id": "112" + }, + { + "file_name": "113.jpg", + "height": 666, + "width": 499, + "id": "113" + }, + { + "file_name": "114.jpg", + "height": 666, + "width": 499, + "id": "114" + }, + { + "file_name": "115.jpg", + "height": 666, + "width": 499, + "id": "115" + }, + { + "file_name": "116.jpg", + "height": 666, + "width": 499, + "id": "116" + }, + { + "file_name": "117.jpg", + "height": 666, + "width": 499, + "id": "117" + }, + { 
+ "file_name": "118.jpg", + "height": 666, + "width": 499, + "id": "118" + }, + { + "file_name": "119.jpg", + "height": 666, + "width": 499, + "id": "119" + }, + { + "file_name": "120.jpg", + "height": 666, + "width": 499, + "id": "120" + }, + { + "file_name": "121.jpg", + "height": 666, + "width": 499, + "id": "121" + }, + { + "file_name": "122.jpg", + "height": 666, + "width": 499, + "id": "122" + }, + { + "file_name": "123.jpg", + "height": 666, + "width": 499, + "id": "123" + }, + { + "file_name": "124.jpg", + "height": 666, + "width": 499, + "id": "124" + }, + { + "file_name": "125.jpg", + "height": 666, + "width": 499, + "id": "125" + }, + { + "file_name": "126.jpg", + "height": 666, + "width": 499, + "id": "126" + }, + { + "file_name": "127.jpg", + "height": 666, + "width": 499, + "id": "127" + }, + { + "file_name": "128.jpg", + "height": 666, + "width": 499, + "id": "128" + } + ], + "type": "instances", + "annotations": [ + { + "area": 46766, + "iscrowd": 0, + "bbox": [ + 100, + 173, + 133, + 348 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "1", + "id": 1 + }, + { + "area": 32918, + "iscrowd": 0, + "bbox": [ + 247, + 192, + 108, + 301 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "2", + "id": 2 + }, + { + "area": 28500, + "iscrowd": 0, + "bbox": [ + 259, + 231, + 124, + 227 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "3", + "id": 3 + }, + { + "area": 58000, + "iscrowd": 0, + "bbox": [ + 245, + 119, + 144, + 399 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "4", + "id": 4 + }, + { + "area": 44132, + "iscrowd": 0, + "bbox": [ + 39, + 278, + 373, + 117 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "5", + "id": 5 + }, + { + "area": 30380, + "iscrowd": 0, + "bbox": [ + 125, + 316, + 244, + 123 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "6", + "id": 6 + }, + { + "area": 39195, + "iscrowd": 0, + "bbox": [ + 86, + 298, + 334, + 116 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "7", + "id": 7 + }, + { + "area": 60514, + "iscrowd": 0, + "bbox": [ + 47, + 280, + 382, + 157 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "8", + "id": 8 + }, + { + "area": 41538, + "iscrowd": 0, + "bbox": [ + 80, + 157, + 128, + 321 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "9", + "id": 9 + }, + { + "area": 23520, + "iscrowd": 0, + "bbox": [ + 299, + 220, + 95, + 244 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "9", + "id": 10 + }, + { + "area": 44278, + "iscrowd": 0, + "bbox": [ + 86, + 102, + 130, + 337 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "10", + "id": 11 + }, + { + "area": 33744, + "iscrowd": 0, + "bbox": [ + 150, + 377, + 295, + 113 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "10", + "id": 12 + }, + { + "area": 56518, + "iscrowd": 0, + "bbox": [ + 56, + 148, + 153, + 366 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "11", + "id": 13 + }, + { + "area": 39406, + "iscrowd": 0, + "bbox": [ + 328, + 180, + 121, + 322 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "11", + "id": 14 + }, + { + "area": 53067, + "iscrowd": 0, + "bbox": [ + 51, + 107, + 146, + 360 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "12", + "id": 15 + }, + { + "area": 44764, + 
"iscrowd": 0, + "bbox": [ + 94, + 402, + 360, + 123 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "12", + "id": 16 + }, + { + "area": 50410, + "iscrowd": 0, + "bbox": [ + 89, + 121, + 141, + 354 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "13", + "id": 17 + }, + { + "area": 20370, + "iscrowd": 0, + "bbox": [ + 273, + 278, + 104, + 193 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "13", + "id": 18 + }, + { + "area": 76368, + "iscrowd": 0, + "bbox": [ + 62, + 323, + 343, + 221 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "14", + "id": 19 + }, + { + "area": 18564, + "iscrowd": 0, + "bbox": [ + 320, + 268, + 101, + 181 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "14", + "id": 20 + }, + { + "area": 42828, + "iscrowd": 0, + "bbox": [ + 95, + 140, + 128, + 331 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "15", + "id": 21 + }, + { + "area": 33499, + "iscrowd": 0, + "bbox": [ + 289, + 248, + 138, + 240 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "15", + "id": 22 + }, + { + "area": 30520, + "iscrowd": 0, + "bbox": [ + 120, + 185, + 108, + 279 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "16", + "id": 23 + }, + { + "area": 38420, + "iscrowd": 0, + "bbox": [ + 127, + 379, + 225, + 169 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "16", + "id": 24 + }, + { + "area": 43400, + "iscrowd": 0, + "bbox": [ + 95, + 156, + 123, + 349 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "17", + "id": 25 + }, + { + "area": 34384, + "iscrowd": 0, + "bbox": [ + 228, + 196, + 111, + 306 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "17", + "id": 26 + }, + { + "area": 31414, + "iscrowd": 0, + "bbox": [ + 65, + 180, + 112, + 277 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "18", + "id": 27 + }, + { + "area": 40014, + "iscrowd": 0, + "bbox": [ + 141, + 375, + 341, + 116 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "18", + "id": 28 + }, + { + "area": 45666, + "iscrowd": 0, + "bbox": [ + 122, + 150, + 128, + 353 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "19", + "id": 29 + }, + { + "area": 29056, + "iscrowd": 0, + "bbox": [ + 284, + 286, + 127, + 226 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "19", + "id": 30 + }, + { + "area": 68482, + "iscrowd": 0, + "bbox": [ + 74, + 358, + 352, + 193 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "20", + "id": 31 + }, + { + "area": 24600, + "iscrowd": 0, + "bbox": [ + 324, + 247, + 119, + 204 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "20", + "id": 32 + }, + { + "area": 47696, + "iscrowd": 0, + "bbox": [ + 4, + 339, + 270, + 175 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "21", + "id": 33 + }, + { + "area": 54812, + "iscrowd": 0, + "bbox": [ + 157, + 275, + 283, + 192 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "21", + "id": 34 + }, + { + "area": 67144, + "iscrowd": 0, + "bbox": [ + 22, + 276, + 307, + 217 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "22", + "id": 35 + }, + { + "area": 31647, + "iscrowd": 0, + "bbox": [ + 314, + 258, + 136, + 230 + ], + "category_id": 1, + 
"ignore": 0, + "segmentation": [], + "image_id": "22", + "id": 36 + }, + { + "area": 51379, + "iscrowd": 0, + "bbox": [ + 49, + 282, + 268, + 190 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "23", + "id": 37 + }, + { + "area": 37260, + "iscrowd": 0, + "bbox": [ + 255, + 328, + 206, + 179 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "23", + "id": 38 + }, + { + "area": 45108, + "iscrowd": 0, + "bbox": [ + 142, + 156, + 125, + 357 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "24", + "id": 39 + }, + { + "area": 28785, + "iscrowd": 0, + "bbox": [ + 241, + 187, + 100, + 284 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "24", + "id": 40 + }, + { + "area": 36652, + "iscrowd": 0, + "bbox": [ + 101, + 171, + 118, + 307 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "25", + "id": 41 + }, + { + "area": 24080, + "iscrowd": 0, + "bbox": [ + 209, + 363, + 214, + 111 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "25", + "id": 42 + }, + { + "area": 21721, + "iscrowd": 0, + "bbox": [ + 10, + 383, + 202, + 106 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "26", + "id": 43 + }, + { + "area": 25662, + "iscrowd": 0, + "bbox": [ + 221, + 351, + 272, + 93 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "26", + "id": 44 + }, + { + "area": 39390, + "iscrowd": 0, + "bbox": [ + 35, + 207, + 129, + 302 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "27", + "id": 45 + }, + { + "area": 39440, + "iscrowd": 0, + "bbox": [ + 191, + 168, + 115, + 339 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "27", + "id": 46 + }, + { + "area": 27392, + "iscrowd": 0, + "bbox": [ + 346, + 295, + 127, + 213 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "27", + "id": 47 + }, + { + "area": 40467, + "iscrowd": 0, + "bbox": [ + 121, + 110, + 122, + 328 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "28", + "id": 48 + }, + { + "area": 24442, + "iscrowd": 0, + "bbox": [ + 285, + 243, + 120, + 201 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "28", + "id": 49 + }, + { + "area": 49476, + "iscrowd": 0, + "bbox": [ + 89, + 415, + 371, + 132 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "28", + "id": 50 + }, + { + "area": 28776, + "iscrowd": 0, + "bbox": [ + 123, + 180, + 108, + 263 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "29", + "id": 51 + }, + { + "area": 36270, + "iscrowd": 0, + "bbox": [ + 285, + 143, + 116, + 309 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "29", + "id": 52 + }, + { + "area": 30680, + "iscrowd": 0, + "bbox": [ + 148, + 413, + 235, + 129 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "29", + "id": 53 + }, + { + "area": 29670, + "iscrowd": 0, + "bbox": [ + 64, + 177, + 114, + 257 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "30", + "id": 54 + }, + { + "area": 20944, + "iscrowd": 0, + "bbox": [ + 324, + 257, + 111, + 186 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "30", + "id": 55 + }, + { + "area": 50895, + "iscrowd": 0, + "bbox": [ + 43, + 434, + 376, + 134 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "30", + "id": 56 + }, + { + 
"area": 19530, + "iscrowd": 0, + "bbox": [ + 112, + 193, + 89, + 216 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "31", + "id": 57 + }, + { + "area": 27538, + "iscrowd": 0, + "bbox": [ + 215, + 181, + 97, + 280 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "31", + "id": 58 + }, + { + "area": 27216, + "iscrowd": 0, + "bbox": [ + 343, + 327, + 125, + 215 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "31", + "id": 59 + }, + { + "area": 24786, + "iscrowd": 0, + "bbox": [ + 55, + 167, + 101, + 242 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "32", + "id": 60 + }, + { + "area": 24500, + "iscrowd": 0, + "bbox": [ + 162, + 218, + 97, + 249 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "32", + "id": 61 + }, + { + "area": 27776, + "iscrowd": 0, + "bbox": [ + 331, + 334, + 127, + 216 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "32", + "id": 62 + }, + { + "area": 25250, + "iscrowd": 0, + "bbox": [ + 72, + 150, + 100, + 249 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "33", + "id": 63 + }, + { + "area": 35802, + "iscrowd": 0, + "bbox": [ + 192, + 229, + 116, + 305 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "33", + "id": 64 + }, + { + "area": 15180, + "iscrowd": 0, + "bbox": [ + 324, + 246, + 91, + 164 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "33", + "id": 65 + }, + { + "area": 25500, + "iscrowd": 0, + "bbox": [ + 80, + 149, + 101, + 249 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "34", + "id": 66 + }, + { + "area": 18300, + "iscrowd": 0, + "bbox": [ + 186, + 271, + 99, + 182 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "34", + "id": 67 + }, + { + "area": 42108, + "iscrowd": 0, + "bbox": [ + 339, + 226, + 131, + 318 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "34", + "id": 68 + }, + { + "area": 45560, + "iscrowd": 0, + "bbox": [ + 58, + 222, + 135, + 334 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "35", + "id": 69 + }, + { + "area": 15308, + "iscrowd": 0, + "bbox": [ + 205, + 292, + 88, + 171 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "35", + "id": 70 + }, + { + "area": 18144, + "iscrowd": 0, + "bbox": [ + 306, + 200, + 83, + 215 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "35", + "id": 71 + }, + { + "area": 50568, + "iscrowd": 0, + "bbox": [ + 29, + 257, + 146, + 343 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "36", + "id": 72 + }, + { + "area": 42900, + "iscrowd": 0, + "bbox": [ + 173, + 180, + 129, + 329 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "36", + "id": 73 + }, + { + "area": 19712, + "iscrowd": 0, + "bbox": [ + 308, + 220, + 87, + 223 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "36", + "id": 74 + }, + { + "area": 65296, + "iscrowd": 0, + "bbox": [ + 32, + 196, + 175, + 370 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "37", + "id": 75 + }, + { + "area": 24534, + "iscrowd": 0, + "bbox": [ + 244, + 212, + 93, + 260 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "37", + "id": 76 + }, + { + "area": 17595, + "iscrowd": 0, + "bbox": [ + 340, + 220, + 84, + 206 + ], + 
"category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "37", + "id": 77 + }, + { + "area": 63318, + "iscrowd": 0, + "bbox": [ + 26, + 191, + 172, + 365 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "38", + "id": 78 + }, + { + "area": 37022, + "iscrowd": 0, + "bbox": [ + 193, + 334, + 213, + 172 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "38", + "id": 79 + }, + { + "area": 17458, + "iscrowd": 0, + "bbox": [ + 326, + 207, + 85, + 202 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "38", + "id": 80 + }, + { + "area": 65520, + "iscrowd": 0, + "bbox": [ + 9, + 198, + 181, + 359 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "39", + "id": 81 + }, + { + "area": 17920, + "iscrowd": 0, + "bbox": [ + 227, + 232, + 79, + 223 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "39", + "id": 82 + }, + { + "area": 19800, + "iscrowd": 0, + "bbox": [ + 333, + 186, + 87, + 224 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "39", + "id": 83 + }, + { + "area": 65124, + "iscrowd": 0, + "bbox": [ + 17, + 335, + 267, + 242 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "40", + "id": 84 + }, + { + "area": 17577, + "iscrowd": 0, + "bbox": [ + 244, + 215, + 80, + 216 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "40", + "id": 85 + }, + { + "area": 19272, + "iscrowd": 0, + "bbox": [ + 344, + 172, + 87, + 218 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "40", + "id": 86 + }, + { + "area": 71002, + "iscrowd": 0, + "bbox": [ + 195, + 307, + 270, + 261 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "41", + "id": 87 + }, + { + "area": 21762, + "iscrowd": 0, + "bbox": [ + 134, + 176, + 92, + 233 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "41", + "id": 88 + }, + { + "area": 19270, + "iscrowd": 0, + "bbox": [ + 256, + 127, + 81, + 234 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "41", + "id": 89 + }, + { + "area": 33840, + "iscrowd": 0, + "bbox": [ + 65, + 264, + 119, + 281 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "42", + "id": 90 + }, + { + "area": 53265, + "iscrowd": 0, + "bbox": [ + 144, + 308, + 264, + 200 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "42", + "id": 91 + }, + { + "area": 21160, + "iscrowd": 0, + "bbox": [ + 337, + 175, + 91, + 229 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "42", + "id": 92 + }, + { + "area": 44220, + "iscrowd": 0, + "bbox": [ + 96, + 250, + 133, + 329 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "43", + "id": 93 + }, + { + "area": 37356, + "iscrowd": 0, + "bbox": [ + 124, + 127, + 131, + 282 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "43", + "id": 94 + }, + { + "area": 34770, + "iscrowd": 0, + "bbox": [ + 281, + 184, + 113, + 304 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "43", + "id": 95 + }, + { + "area": 45188, + "iscrowd": 0, + "bbox": [ + 19, + 188, + 142, + 315 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "44", + "id": 96 + }, + { + "area": 29744, + "iscrowd": 0, + "bbox": [ + 203, + 231, + 103, + 285 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "44", + "id": 97 + 
}, + { + "area": 29568, + "iscrowd": 0, + "bbox": [ + 344, + 266, + 111, + 263 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "44", + "id": 98 + }, + { + "area": 32301, + "iscrowd": 0, + "bbox": [ + 93, + 205, + 110, + 290 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "45", + "id": 99 + }, + { + "area": 44756, + "iscrowd": 0, + "bbox": [ + 198, + 171, + 133, + 333 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "45", + "id": 100 + }, + { + "area": 24735, + "iscrowd": 0, + "bbox": [ + 306, + 238, + 96, + 254 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "45", + "id": 101 + }, + { + "area": 32592, + "iscrowd": 0, + "bbox": [ + 66, + 177, + 111, + 290 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "46", + "id": 102 + }, + { + "area": 66642, + "iscrowd": 0, + "bbox": [ + 161, + 149, + 173, + 382 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "46", + "id": 103 + }, + { + "area": 25620, + "iscrowd": 0, + "bbox": [ + 307, + 316, + 121, + 209 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "46", + "id": 104 + }, + { + "area": 33900, + "iscrowd": 0, + "bbox": [ + 77, + 190, + 112, + 299 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "47", + "id": 105 + }, + { + "area": 18988, + "iscrowd": 0, + "bbox": [ + 202, + 303, + 100, + 187 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "47", + "id": 106 + }, + { + "area": 64032, + "iscrowd": 0, + "bbox": [ + 290, + 157, + 183, + 347 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "47", + "id": 107 + }, + { + "area": 22914, + "iscrowd": 0, + "bbox": [ + 68, + 301, + 113, + 200 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "48", + "id": 108 + }, + { + "area": 60214, + "iscrowd": 0, + "bbox": [ + 175, + 140, + 160, + 373 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "48", + "id": 109 + }, + { + "area": 38430, + "iscrowd": 0, + "bbox": [ + 308, + 182, + 121, + 314 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "48", + "id": 110 + }, + { + "area": 35568, + "iscrowd": 0, + "bbox": [ + 21, + 425, + 311, + 113 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "49", + "id": 111 + }, + { + "area": 12015, + "iscrowd": 0, + "bbox": [ + 228, + 306, + 88, + 134 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "49", + "id": 112 + }, + { + "area": 50868, + "iscrowd": 0, + "bbox": [ + 308, + 176, + 161, + 313 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "49", + "id": 113 + }, + { + "area": 41454, + "iscrowd": 0, + "bbox": [ + 54, + 168, + 140, + 293 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "50", + "id": 114 + }, + { + "area": 32508, + "iscrowd": 0, + "bbox": [ + 59, + 410, + 300, + 107 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "50", + "id": 115 + }, + { + "area": 22425, + "iscrowd": 0, + "bbox": [ + 353, + 338, + 114, + 194 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "50", + "id": 116 + }, + { + "area": 44092, + "iscrowd": 0, + "bbox": [ + 21, + 202, + 145, + 301 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "51", + "id": 117 + }, + { + "area": 16275, + "iscrowd": 0, + "bbox": [ + 199, + 341, 
+ 92, + 174 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "51", + "id": 118 + }, + { + "area": 28355, + "iscrowd": 0, + "bbox": [ + 361, + 235, + 106, + 264 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "51", + "id": 119 + }, + { + "area": 45743, + "iscrowd": 0, + "bbox": [ + 29, + 194, + 148, + 306 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "52", + "id": 120 + }, + { + "area": 17088, + "iscrowd": 0, + "bbox": [ + 209, + 337, + 95, + 177 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "52", + "id": 121 + }, + { + "area": 25132, + "iscrowd": 0, + "bbox": [ + 369, + 261, + 102, + 243 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "52", + "id": 122 + }, + { + "area": 22967, + "iscrowd": 0, + "bbox": [ + 47, + 333, + 118, + 192 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "53", + "id": 123 + }, + { + "area": 49288, + "iscrowd": 0, + "bbox": [ + 172, + 319, + 243, + 201 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "53", + "id": 124 + }, + { + "area": 20202, + "iscrowd": 0, + "bbox": [ + 362, + 225, + 90, + 221 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "53", + "id": 125 + }, + { + "area": 17576, + "iscrowd": 0, + "bbox": [ + 42, + 354, + 103, + 168 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "54", + "id": 126 + }, + { + "area": 23674, + "iscrowd": 0, + "bbox": [ + 122, + 374, + 177, + 132 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "54", + "id": 127 + }, + { + "area": 42340, + "iscrowd": 0, + "bbox": [ + 325, + 224, + 145, + 289 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "54", + "id": 128 + }, + { + "area": 25220, + "iscrowd": 0, + "bbox": [ + 118, + 222, + 96, + 259 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "55", + "id": 129 + }, + { + "area": 20790, + "iscrowd": 0, + "bbox": [ + 185, + 316, + 104, + 197 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "55", + "id": 130 + }, + { + "area": 54880, + "iscrowd": 0, + "bbox": [ + 251, + 147, + 159, + 342 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "55", + "id": 131 + }, + { + "area": 29298, + "iscrowd": 0, + "bbox": [ + 22, + 252, + 113, + 256 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "56", + "id": 132 + }, + { + "area": 51984, + "iscrowd": 0, + "bbox": [ + 163, + 181, + 151, + 341 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "56", + "id": 133 + }, + { + "area": 21660, + "iscrowd": 0, + "bbox": [ + 345, + 328, + 113, + 189 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "56", + "id": 134 + }, + { + "area": 31860, + "iscrowd": 0, + "bbox": [ + 69, + 230, + 117, + 269 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "57", + "id": 135 + }, + { + "area": 32963, + "iscrowd": 0, + "bbox": [ + 167, + 313, + 276, + 118 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "57", + "id": 136 + }, + { + "area": 21384, + "iscrowd": 0, + "bbox": [ + 283, + 315, + 107, + 197 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "57", + "id": 137 + }, + { + "area": 20460, + "iscrowd": 0, + "bbox": [ + 44, + 332, + 109, + 185 + ], + "category_id": 1, + "ignore": 0, + 
"segmentation": [], + "image_id": "58", + "id": 138 + }, + { + "area": 15808, + "iscrowd": 0, + "bbox": [ + 175, + 231, + 75, + 207 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "58", + "id": 139 + }, + { + "area": 55040, + "iscrowd": 0, + "bbox": [ + 297, + 194, + 171, + 319 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "58", + "id": 140 + }, + { + "area": 52456, + "iscrowd": 0, + "bbox": [ + 48, + 322, + 315, + 165 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "59", + "id": 141 + }, + { + "area": 22781, + "iscrowd": 0, + "bbox": [ + 182, + 318, + 108, + 208 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "59", + "id": 142 + }, + { + "area": 19040, + "iscrowd": 0, + "bbox": [ + 332, + 201, + 84, + 223 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "59", + "id": 143 + }, + { + "area": 51958, + "iscrowd": 0, + "bbox": [ + 30, + 296, + 312, + 165 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "60", + "id": 144 + }, + { + "area": 23353, + "iscrowd": 0, + "bbox": [ + 203, + 360, + 192, + 120 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "60", + "id": 145 + }, + { + "area": 19314, + "iscrowd": 0, + "bbox": [ + 312, + 170, + 86, + 221 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "60", + "id": 146 + }, + { + "area": 48960, + "iscrowd": 0, + "bbox": [ + 34, + 260, + 305, + 159 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "61", + "id": 147 + }, + { + "area": 44520, + "iscrowd": 0, + "bbox": [ + 144, + 331, + 264, + 167 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "61", + "id": 148 + }, + { + "area": 20240, + "iscrowd": 0, + "bbox": [ + 376, + 241, + 109, + 183 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "61", + "id": 149 + }, + { + "area": 26429, + "iscrowd": 0, + "bbox": [ + 67, + 255, + 106, + 246 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "62", + "id": 150 + }, + { + "area": 17836, + "iscrowd": 0, + "bbox": [ + 191, + 333, + 97, + 181 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "62", + "id": 151 + }, + { + "area": 42280, + "iscrowd": 0, + "bbox": [ + 332, + 179, + 139, + 301 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "62", + "id": 152 + }, + { + "area": 22321, + "iscrowd": 0, + "bbox": [ + 59, + 269, + 100, + 220 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "63", + "id": 153 + }, + { + "area": 13833, + "iscrowd": 0, + "bbox": [ + 172, + 326, + 86, + 158 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "63", + "id": 154 + }, + { + "area": 21315, + "iscrowd": 0, + "bbox": [ + 253, + 235, + 86, + 244 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "63", + "id": 155 + }, + { + "area": 38808, + "iscrowd": 0, + "bbox": [ + 351, + 202, + 131, + 293 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "63", + "id": 156 + }, + { + "area": 44243, + "iscrowd": 0, + "bbox": [ + 40, + 200, + 150, + 292 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "64", + "id": 157 + }, + { + "area": 19624, + "iscrowd": 0, + "bbox": [ + 182, + 259, + 87, + 222 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "64", + "id": 158 + }, + { + 
"area": 13770, + "iscrowd": 0, + "bbox": [ + 291, + 317, + 84, + 161 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "64", + "id": 159 + }, + { + "area": 25351, + "iscrowd": 0, + "bbox": [ + 369, + 227, + 100, + 250 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "64", + "id": 160 + }, + { + "area": 48513, + "iscrowd": 0, + "bbox": [ + 47, + 181, + 156, + 308 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "65", + "id": 161 + }, + { + "area": 14520, + "iscrowd": 0, + "bbox": [ + 192, + 304, + 87, + 164 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "65", + "id": 162 + }, + { + "area": 24832, + "iscrowd": 0, + "bbox": [ + 274, + 211, + 96, + 255 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "65", + "id": 163 + }, + { + "area": 25334, + "iscrowd": 0, + "bbox": [ + 376, + 247, + 105, + 238 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "65", + "id": 164 + }, + { + "area": 26001, + "iscrowd": 0, + "bbox": [ + 19, + 239, + 106, + 242 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "66", + "id": 165 + }, + { + "area": 36608, + "iscrowd": 0, + "bbox": [ + 96, + 200, + 127, + 285 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "66", + "id": 166 + }, + { + "area": 12000, + "iscrowd": 0, + "bbox": [ + 217, + 314, + 79, + 149 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "66", + "id": 167 + }, + { + "area": 20774, + "iscrowd": 0, + "bbox": [ + 386, + 254, + 93, + 220 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "66", + "id": 168 + }, + { + "area": 26312, + "iscrowd": 0, + "bbox": [ + 71, + 198, + 103, + 252 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "67", + "id": 169 + }, + { + "area": 38645, + "iscrowd": 0, + "bbox": [ + 149, + 160, + 130, + 294 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "67", + "id": 170 + }, + { + "area": 21160, + "iscrowd": 0, + "bbox": [ + 234, + 392, + 183, + 114 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "67", + "id": 171 + }, + { + "area": 20460, + "iscrowd": 0, + "bbox": [ + 356, + 217, + 92, + 219 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "67", + "id": 172 + }, + { + "area": 16300, + "iscrowd": 0, + "bbox": [ + 22, + 341, + 99, + 162 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "68", + "id": 173 + }, + { + "area": 21736, + "iscrowd": 0, + "bbox": [ + 110, + 249, + 87, + 246 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "68", + "id": 174 + }, + { + "area": 52052, + "iscrowd": 0, + "bbox": [ + 191, + 365, + 285, + 181 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "68", + "id": 175 + }, + { + "area": 17622, + "iscrowd": 0, + "bbox": [ + 382, + 271, + 88, + 197 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "68", + "id": 176 + }, + { + "area": 37089, + "iscrowd": 0, + "bbox": [ + 107, + 109, + 116, + 316 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "69", + "id": 177 + }, + { + "area": 20250, + "iscrowd": 0, + "bbox": [ + 272, + 248, + 89, + 224 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "69", + "id": 178 + }, + { + "area": 20930, + "iscrowd": 0, + "bbox": [ + 361, + 332, + 114, + 181 
+ ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "69", + "id": 179 + }, + { + "area": 64581, + "iscrowd": 0, + "bbox": [ + 47, + 357, + 308, + 208 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "69", + "id": 180 + }, + { + "area": 38560, + "iscrowd": 0, + "bbox": [ + 21, + 378, + 240, + 159 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "70", + "id": 181 + }, + { + "area": 56240, + "iscrowd": 0, + "bbox": [ + 60, + 305, + 295, + 189 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "70", + "id": 182 + }, + { + "area": 17640, + "iscrowd": 0, + "bbox": [ + 201, + 162, + 89, + 195 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "70", + "id": 183 + }, + { + "area": 19264, + "iscrowd": 0, + "bbox": [ + 361, + 280, + 111, + 171 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "70", + "id": 184 + }, + { + "area": 72615, + "iscrowd": 0, + "bbox": [ + 68, + 340, + 308, + 234 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "71", + "id": 185 + }, + { + "area": 17201, + "iscrowd": 0, + "bbox": [ + 9, + 273, + 102, + 166 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "71", + "id": 186 + }, + { + "area": 16856, + "iscrowd": 0, + "bbox": [ + 236, + 171, + 85, + 195 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "71", + "id": 187 + }, + { + "area": 21922, + "iscrowd": 0, + "bbox": [ + 365, + 210, + 96, + 225 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "71", + "id": 188 + }, + { + "area": 18360, + "iscrowd": 0, + "bbox": [ + 1, + 318, + 101, + 179 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "72", + "id": 189 + }, + { + "area": 40690, + "iscrowd": 0, + "bbox": [ + 106, + 178, + 129, + 312 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "72", + "id": 190 + }, + { + "area": 23296, + "iscrowd": 0, + "bbox": [ + 239, + 203, + 90, + 255 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "72", + "id": 191 + }, + { + "area": 27195, + "iscrowd": 0, + "bbox": [ + 372, + 245, + 110, + 244 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "72", + "id": 192 + }, + { + "area": 17200, + "iscrowd": 0, + "bbox": [ + 50, + 312, + 99, + 171 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "73", + "id": 193 + }, + { + "area": 29606, + "iscrowd": 0, + "bbox": [ + 168, + 180, + 112, + 261 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "73", + "id": 194 + }, + { + "area": 20430, + "iscrowd": 0, + "bbox": [ + 278, + 206, + 89, + 226 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "73", + "id": 195 + }, + { + "area": 27930, + "iscrowd": 0, + "bbox": [ + 118, + 418, + 265, + 104 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "73", + "id": 196 + }, + { + "area": 31024, + "iscrowd": 0, + "bbox": [ + 51, + 418, + 276, + 111 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "74", + "id": 197 + }, + { + "area": 37812, + "iscrowd": 0, + "bbox": [ + 96, + 170, + 137, + 273 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "74", + "id": 198 + }, + { + "area": 22204, + "iscrowd": 0, + "bbox": [ + 233, + 193, + 90, + 243 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + 
"image_id": "74", + "id": 199 + }, + { + "area": 20202, + "iscrowd": 0, + "bbox": [ + 344, + 310, + 110, + 181 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "74", + "id": 200 + }, + { + "area": 31857, + "iscrowd": 0, + "bbox": [ + 86, + 205, + 122, + 258 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "75", + "id": 201 + }, + { + "area": 44548, + "iscrowd": 0, + "bbox": [ + 36, + 402, + 258, + 171 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "75", + "id": 202 + }, + { + "area": 12920, + "iscrowd": 0, + "bbox": [ + 297, + 323, + 84, + 151 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "75", + "id": 203 + }, + { + "area": 21024, + "iscrowd": 0, + "bbox": [ + 386, + 268, + 95, + 218 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "75", + "id": 204 + }, + { + "area": 32574, + "iscrowd": 0, + "bbox": [ + 32, + 247, + 121, + 266 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "76", + "id": 205 + }, + { + "area": 39087, + "iscrowd": 0, + "bbox": [ + 110, + 157, + 128, + 302 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "76", + "id": 206 + }, + { + "area": 60680, + "iscrowd": 0, + "bbox": [ + 57, + 393, + 295, + 204 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "76", + "id": 207 + }, + { + "area": 18656, + "iscrowd": 0, + "bbox": [ + 348, + 300, + 105, + 175 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "76", + "id": 208 + }, + { + "area": 19530, + "iscrowd": 0, + "bbox": [ + 47, + 209, + 92, + 209 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "77", + "id": 209 + }, + { + "area": 34602, + "iscrowd": 0, + "bbox": [ + 29, + 331, + 236, + 145 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "77", + "id": 210 + }, + { + "area": 12972, + "iscrowd": 0, + "bbox": [ + 261, + 293, + 93, + 137 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "77", + "id": 211 + }, + { + "area": 69402, + "iscrowd": 0, + "bbox": [ + 209, + 352, + 268, + 257 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "77", + "id": 212 + }, + { + "area": 23607, + "iscrowd": 0, + "bbox": [ + 22, + 371, + 182, + 128 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "78", + "id": 213 + }, + { + "area": 33824, + "iscrowd": 0, + "bbox": [ + 196, + 220, + 111, + 301 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "78", + "id": 214 + }, + { + "area": 23040, + "iscrowd": 0, + "bbox": [ + 291, + 281, + 95, + 239 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "78", + "id": 215 + }, + { + "area": 15717, + "iscrowd": 0, + "bbox": [ + 369, + 338, + 92, + 168 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "78", + "id": 216 + }, + { + "area": 33330, + "iscrowd": 0, + "bbox": [ + 53, + 391, + 201, + 164 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "79", + "id": 217 + }, + { + "area": 26496, + "iscrowd": 0, + "bbox": [ + 55, + 326, + 191, + 137 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "79", + "id": 218 + }, + { + "area": 42826, + "iscrowd": 0, + "bbox": [ + 238, + 160, + 132, + 321 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "79", + "id": 219 + }, + { + "area": 27904, + "iscrowd": 
0, + "bbox": [ + 337, + 229, + 108, + 255 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "79", + "id": 220 + }, + { + "area": 22852, + "iscrowd": 0, + "bbox": [ + 44, + 337, + 115, + 196 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "80", + "id": 221 + }, + { + "area": 34352, + "iscrowd": 0, + "bbox": [ + 142, + 228, + 112, + 303 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "80", + "id": 222 + }, + { + "area": 48422, + "iscrowd": 0, + "bbox": [ + 245, + 192, + 141, + 340 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "80", + "id": 223 + }, + { + "area": 30705, + "iscrowd": 0, + "bbox": [ + 349, + 266, + 114, + 266 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "80", + "id": 224 + }, + { + "area": 17430, + "iscrowd": 0, + "bbox": [ + 24, + 336, + 104, + 165 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "81", + "id": 225 + }, + { + "area": 25900, + "iscrowd": 0, + "bbox": [ + 106, + 244, + 99, + 258 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "81", + "id": 226 + }, + { + "area": 20025, + "iscrowd": 0, + "bbox": [ + 265, + 268, + 88, + 224 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "81", + "id": 227 + }, + { + "area": 43071, + "iscrowd": 0, + "bbox": [ + 340, + 217, + 146, + 292 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "81", + "id": 228 + }, + { + "area": 15876, + "iscrowd": 0, + "bbox": [ + 29, + 323, + 107, + 146 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "82", + "id": 229 + }, + { + "area": 21528, + "iscrowd": 0, + "bbox": [ + 248, + 256, + 91, + 233 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "82", + "id": 230 + }, + { + "area": 50080, + "iscrowd": 0, + "bbox": [ + 327, + 200, + 159, + 312 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "82", + "id": 231 + }, + { + "area": 33136, + "iscrowd": 0, + "bbox": [ + 1, + 437, + 303, + 108 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "82", + "id": 232 + }, + { + "area": 43566, + "iscrowd": 0, + "bbox": [ + 53, + 212, + 136, + 317 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "83", + "id": 233 + }, + { + "area": 14696, + "iscrowd": 0, + "bbox": [ + 177, + 335, + 87, + 166 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "83", + "id": 234 + }, + { + "area": 26190, + "iscrowd": 0, + "bbox": [ + 254, + 246, + 96, + 269 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "83", + "id": 235 + }, + { + "area": 25544, + "iscrowd": 0, + "bbox": [ + 338, + 281, + 102, + 247 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "83", + "id": 236 + }, + { + "area": 41580, + "iscrowd": 0, + "bbox": [ + 88, + 150, + 134, + 307 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "84", + "id": 237 + }, + { + "area": 13524, + "iscrowd": 0, + "bbox": [ + 209, + 283, + 91, + 146 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "84", + "id": 238 + }, + { + "area": 28140, + "iscrowd": 0, + "bbox": [ + 293, + 185, + 104, + 267 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "84", + "id": 239 + }, + { + "area": 32032, + "iscrowd": 0, + "bbox": [ + 96, + 412, + 285, + 111 + ], + "category_id": 3, + 
"ignore": 0, + "segmentation": [], + "image_id": "84", + "id": 240 + }, + { + "area": 61060, + "iscrowd": 0, + "bbox": [ + 26, + 195, + 214, + 283 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "85", + "id": 241 + }, + { + "area": 23205, + "iscrowd": 0, + "bbox": [ + 211, + 214, + 90, + 254 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "85", + "id": 242 + }, + { + "area": 22892, + "iscrowd": 0, + "bbox": [ + 287, + 245, + 96, + 235 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "85", + "id": 243 + }, + { + "area": 19224, + "iscrowd": 0, + "bbox": [ + 367, + 317, + 107, + 177 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "85", + "id": 244 + }, + { + "area": 71095, + "iscrowd": 0, + "bbox": [ + 9, + 238, + 294, + 240 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "86", + "id": 245 + }, + { + "area": 16000, + "iscrowd": 0, + "bbox": [ + 272, + 312, + 99, + 159 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "86", + "id": 246 + }, + { + "area": 28749, + "iscrowd": 0, + "bbox": [ + 355, + 238, + 110, + 258 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "86", + "id": 247 + }, + { + "area": 38259, + "iscrowd": 0, + "bbox": [ + 65, + 429, + 326, + 116 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "86", + "id": 248 + }, + { + "area": 59648, + "iscrowd": 0, + "bbox": [ + 2, + 235, + 255, + 232 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "87", + "id": 249 + }, + { + "area": 13140, + "iscrowd": 0, + "bbox": [ + 229, + 303, + 89, + 145 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "87", + "id": 250 + }, + { + "area": 19008, + "iscrowd": 0, + "bbox": [ + 311, + 231, + 95, + 197 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "87", + "id": 251 + }, + { + "area": 36542, + "iscrowd": 0, + "bbox": [ + 157, + 409, + 301, + 120 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "87", + "id": 252 + }, + { + "area": 30480, + "iscrowd": 0, + "bbox": [ + 24, + 260, + 119, + 253 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "88", + "id": 253 + }, + { + "area": 46631, + "iscrowd": 0, + "bbox": [ + 113, + 239, + 210, + 220 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "88", + "id": 254 + }, + { + "area": 14670, + "iscrowd": 0, + "bbox": [ + 289, + 302, + 89, + 162 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "88", + "id": 255 + }, + { + "area": 25602, + "iscrowd": 0, + "bbox": [ + 362, + 207, + 101, + 250 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "88", + "id": 256 + }, + { + "area": 55536, + "iscrowd": 0, + "bbox": [ + 31, + 159, + 155, + 355 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "89", + "id": 257 + }, + { + "area": 20352, + "iscrowd": 0, + "bbox": [ + 173, + 312, + 105, + 191 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "89", + "id": 258 + }, + { + "area": 28886, + "iscrowd": 0, + "bbox": [ + 258, + 192, + 100, + 285 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "89", + "id": 259 + }, + { + "area": 29304, + "iscrowd": 0, + "bbox": [ + 342, + 229, + 110, + 263 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "89", + "id": 260 + 
}, + { + "area": 38776, + "iscrowd": 0, + "bbox": [ + 33, + 228, + 130, + 295 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "90", + "id": 261 + }, + { + "area": 41856, + "iscrowd": 0, + "bbox": [ + 136, + 183, + 127, + 326 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "90", + "id": 262 + }, + { + "area": 18180, + "iscrowd": 0, + "bbox": [ + 251, + 322, + 100, + 179 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "90", + "id": 263 + }, + { + "area": 27864, + "iscrowd": 0, + "bbox": [ + 340, + 250, + 107, + 257 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "90", + "id": 264 + }, + { + "area": 69360, + "iscrowd": 0, + "bbox": [ + 43, + 384, + 407, + 169 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "91", + "id": 265 + }, + { + "area": 25648, + "iscrowd": 0, + "bbox": [ + 113, + 169, + 111, + 228 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "91", + "id": 266 + }, + { + "area": 12495, + "iscrowd": 0, + "bbox": [ + 211, + 275, + 104, + 118 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "91", + "id": 267 + }, + { + "area": 19686, + "iscrowd": 0, + "bbox": [ + 309, + 200, + 101, + 192 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "91", + "id": 268 + }, + { + "area": 25185, + "iscrowd": 0, + "bbox": [ + 200, + 292, + 114, + 218 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "92", + "id": 269 + }, + { + "area": 18715, + "iscrowd": 0, + "bbox": [ + 122, + 120, + 94, + 196 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "92", + "id": 270 + }, + { + "area": 14760, + "iscrowd": 0, + "bbox": [ + 292, + 152, + 89, + 163 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "92", + "id": 271 + }, + { + "area": 46704, + "iscrowd": 0, + "bbox": [ + 67, + 307, + 335, + 138 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "92", + "id": 272 + }, + { + "area": 39650, + "iscrowd": 0, + "bbox": [ + 109, + 163, + 121, + 324 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "93", + "id": 273 + }, + { + "area": 23142, + "iscrowd": 0, + "bbox": [ + 222, + 285, + 113, + 202 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "93", + "id": 274 + }, + { + "area": 14130, + "iscrowd": 0, + "bbox": [ + 305, + 151, + 89, + 156 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "93", + "id": 275 + }, + { + "area": 43335, + "iscrowd": 0, + "bbox": [ + 91, + 296, + 320, + 134 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "93", + "id": 276 + }, + { + "area": 54756, + "iscrowd": 0, + "bbox": [ + 18, + 202, + 161, + 337 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "94", + "id": 277 + }, + { + "area": 30672, + "iscrowd": 0, + "bbox": [ + 163, + 242, + 107, + 283 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "94", + "id": 278 + }, + { + "area": 19136, + "iscrowd": 0, + "bbox": [ + 260, + 335, + 103, + 183 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "94", + "id": 279 + }, + { + "area": 25704, + "iscrowd": 0, + "bbox": [ + 351, + 264, + 101, + 251 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "94", + "id": 280 + }, + { + "area": 64566, + "iscrowd": 0, + "bbox": [ + 104, + 
238, + 305, + 210 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "95", + "id": 281 + }, + { + "area": 85250, + "iscrowd": 0, + "bbox": [ + 86, + 244, + 340, + 249 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "96", + "id": 282 + }, + { + "area": 90207, + "iscrowd": 0, + "bbox": [ + 86, + 245, + 350, + 256 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "97", + "id": 283 + }, + { + "area": 87235, + "iscrowd": 0, + "bbox": [ + 45, + 236, + 364, + 238 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "98", + "id": 284 + }, + { + "area": 41400, + "iscrowd": 0, + "bbox": [ + 152, + 249, + 224, + 183 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "99", + "id": 285 + }, + { + "area": 46886, + "iscrowd": 0, + "bbox": [ + 130, + 271, + 237, + 196 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "100", + "id": 286 + }, + { + "area": 46872, + "iscrowd": 0, + "bbox": [ + 65, + 289, + 371, + 125 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "101", + "id": 287 + }, + { + "area": 51054, + "iscrowd": 0, + "bbox": [ + 139, + 253, + 200, + 253 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "102", + "id": 288 + }, + { + "area": 29750, + "iscrowd": 0, + "bbox": [ + 144, + 260, + 174, + 169 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "103", + "id": 289 + }, + { + "area": 29684, + "iscrowd": 0, + "bbox": [ + 175, + 253, + 180, + 163 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "104", + "id": 290 + }, + { + "area": 52578, + "iscrowd": 0, + "bbox": [ + 142, + 257, + 253, + 206 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "105", + "id": 291 + }, + { + "area": 52224, + "iscrowd": 0, + "bbox": [ + 103, + 272, + 255, + 203 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "106", + "id": 292 + }, + { + "area": 30820, + "iscrowd": 0, + "bbox": [ + 107, + 272, + 133, + 229 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "107", + "id": 293 + }, + { + "area": 23408, + "iscrowd": 0, + "bbox": [ + 261, + 274, + 132, + 175 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "107", + "id": 294 + }, + { + "area": 29870, + "iscrowd": 0, + "bbox": [ + 35, + 314, + 289, + 102 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "108", + "id": 295 + }, + { + "area": 24112, + "iscrowd": 0, + "bbox": [ + 313, + 281, + 136, + 175 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "108", + "id": 296 + }, + { + "area": 34438, + "iscrowd": 0, + "bbox": [ + 97, + 272, + 133, + 256 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "109", + "id": 297 + }, + { + "area": 22550, + "iscrowd": 0, + "bbox": [ + 225, + 293, + 204, + 109 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "109", + "id": 298 + }, + { + "area": 33300, + "iscrowd": 0, + "bbox": [ + 47, + 308, + 299, + 110 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "110", + "id": 299 + }, + { + "area": 26368, + "iscrowd": 0, + "bbox": [ + 331, + 205, + 127, + 205 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "110", + "id": 300 + }, + { + "area": 39162, + "iscrowd": 0, + "bbox": [ + 98, + 176, + 121, + 320 + ], + "category_id": 4, + 
"ignore": 0, + "segmentation": [], + "image_id": "111", + "id": 301 + }, + { + "area": 65048, + "iscrowd": 0, + "bbox": [ + 220, + 130, + 172, + 375 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "111", + "id": 302 + }, + { + "area": 39360, + "iscrowd": 0, + "bbox": [ + 29, + 415, + 327, + 119 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "112", + "id": 303 + }, + { + "area": 56724, + "iscrowd": 0, + "bbox": [ + 272, + 145, + 162, + 347 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "112", + "id": 304 + }, + { + "area": 29500, + "iscrowd": 0, + "bbox": [ + 82, + 128, + 117, + 249 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "113", + "id": 305 + }, + { + "area": 77172, + "iscrowd": 0, + "bbox": [ + 89, + 333, + 353, + 217 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "113", + "id": 306 + }, + { + "area": 31868, + "iscrowd": 0, + "bbox": [ + 74, + 114, + 123, + 256 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "114", + "id": 307 + }, + { + "area": 87290, + "iscrowd": 0, + "bbox": [ + 56, + 330, + 405, + 214 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "114", + "id": 308 + }, + { + "area": 40119, + "iscrowd": 0, + "bbox": [ + 229, + 118, + 128, + 310 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "115", + "id": 309 + }, + { + "area": 58828, + "iscrowd": 0, + "bbox": [ + 131, + 365, + 307, + 190 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "115", + "id": 310 + }, + { + "area": 37820, + "iscrowd": 0, + "bbox": [ + 65, + 169, + 121, + 309 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "116", + "id": 311 + }, + { + "area": 72354, + "iscrowd": 0, + "bbox": [ + 242, + 132, + 185, + 388 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "116", + "id": 312 + }, + { + "area": 54040, + "iscrowd": 0, + "bbox": [ + 140, + 167, + 139, + 385 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "117", + "id": 313 + }, + { + "area": 30694, + "iscrowd": 0, + "bbox": [ + 240, + 190, + 102, + 297 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "117", + "id": 314 + }, + { + "area": 23816, + "iscrowd": 0, + "bbox": [ + 196, + 200, + 103, + 228 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "118", + "id": 315 + }, + { + "area": 46280, + "iscrowd": 0, + "bbox": [ + 88, + 410, + 355, + 129 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "118", + "id": 316 + }, + { + "area": 35160, + "iscrowd": 0, + "bbox": [ + 139, + 116, + 119, + 292 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "119", + "id": 317 + }, + { + "area": 46170, + "iscrowd": 0, + "bbox": [ + 128, + 389, + 341, + 134 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "119", + "id": 318 + }, + { + "area": 34691, + "iscrowd": 0, + "bbox": [ + 150, + 139, + 112, + 306 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "120", + "id": 319 + }, + { + "area": 55769, + "iscrowd": 0, + "bbox": [ + 149, + 355, + 256, + 216 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "120", + "id": 320 + }, + { + "area": 33744, + "iscrowd": 0, + "bbox": [ + 36, + 294, + 303, + 110 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + 
"image_id": "121", + "id": 321 + }, + { + "area": 43155, + "iscrowd": 0, + "bbox": [ + 169, + 371, + 314, + 136 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "121", + "id": 322 + }, + { + "area": 38936, + "iscrowd": 0, + "bbox": [ + 31, + 296, + 313, + 123 + ], + "category_id": 4, + "ignore": 0, + "segmentation": [], + "image_id": "122", + "id": 323 + }, + { + "area": 47742, + "iscrowd": 0, + "bbox": [ + 171, + 373, + 326, + 145 + ], + "category_id": 3, + "ignore": 0, + "segmentation": [], + "image_id": "122", + "id": 324 + }, + { + "area": 25773, + "iscrowd": 0, + "bbox": [ + 107, + 288, + 120, + 212 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "123", + "id": 325 + }, + { + "area": 69706, + "iscrowd": 0, + "bbox": [ + 243, + 112, + 181, + 382 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "123", + "id": 326 + }, + { + "area": 26334, + "iscrowd": 0, + "bbox": [ + 50, + 263, + 125, + 208 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "124", + "id": 327 + }, + { + "area": 106304, + "iscrowd": 0, + "bbox": [ + 122, + 186, + 351, + 301 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "124", + "id": 328 + }, + { + "area": 23617, + "iscrowd": 0, + "bbox": [ + 86, + 371, + 208, + 112 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "125", + "id": 329 + }, + { + "area": 68816, + "iscrowd": 0, + "bbox": [ + 279, + 111, + 183, + 373 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "125", + "id": 330 + }, + { + "area": 19360, + "iscrowd": 0, + "bbox": [ + 111, + 206, + 109, + 175 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "126", + "id": 331 + }, + { + "area": 99470, + "iscrowd": 0, + "bbox": [ + 102, + 285, + 342, + 289 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "126", + "id": 332 + }, + { + "area": 40200, + "iscrowd": 0, + "bbox": [ + 103, + 300, + 149, + 267 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "127", + "id": 333 + }, + { + "area": 67968, + "iscrowd": 0, + "bbox": [ + 229, + 71, + 176, + 383 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "127", + "id": 334 + }, + { + "area": 14151, + "iscrowd": 0, + "bbox": [ + 134, + 257, + 88, + 158 + ], + "category_id": 1, + "ignore": 0, + "segmentation": [], + "image_id": "128", + "id": 335 + }, + { + "area": 70485, + "iscrowd": 0, + "bbox": [ + 244, + 139, + 184, + 380 + ], + "category_id": 2, + "ignore": 0, + "segmentation": [], + "image_id": "128", + "id": 336 + } + ], + "categories": [ + { + "supercategory": "none", + "id": 1, + "name": "can" + }, + { + "supercategory": "none", + "id": 2, + "name": "carton" + }, + { + "supercategory": "none", + "id": 3, + "name": "milk_bottle" + }, + { + "supercategory": "none", + "id": 4, + "name": "water_bottle" + } + ] +} \ No newline at end of file diff --git a/how-to-use-azureml/automated-machine-learning/image-object-detection/test_image.jpg b/how-to-use-azureml/automated-machine-learning/image-object-detection/test_image.jpg new file mode 100644 index 000000000..a20619469 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/image-object-detection/test_image.jpg differ diff --git a/how-to-use-azureml/automated-machine-learning/image-object-detection/yolo_onnx_preprocessing_utils.py 
b/how-to-use-azureml/automated-machine-learning/image-object-detection/yolo_onnx_preprocessing_utils.py
new file mode 100644
index 000000000..caacff924
--- /dev/null
+++ b/how-to-use-azureml/automated-machine-learning/image-object-detection/yolo_onnx_preprocessing_utils.py
@@ -0,0 +1,327 @@
+import cv2
+import numpy as np
+import torch
+import time
+import torchvision
+from PIL import Image
+from typing import Any, Dict, List
+
+
+def letterbox(
+    img,
+    new_shape=(640, 640),
+    color=(114, 114, 114),
+    auto=True,
+    scaleFill=False,
+    scaleup=True,
+):
+    """Resize image to a 32-pixel-multiple rectangle
+    https://github.com/ultralytics/yolov3/issues/232
+
+    :param img: an image
+    :type img: <class 'numpy.ndarray'>
+    :param new_shape: target shape in [height, width]
+    :type new_shape: <class 'int'> or <class 'tuple'>
+    :param color: color for pad area
+    :type color: <class 'tuple'>
+    :param auto: minimum rectangle
+    :type auto: bool
+    :param scaleFill: stretch the image without pad
+    :type scaleFill: bool
+    :param scaleup: scale up
+    :type scaleup: bool
+    :return: letterbox image, scale ratio, padded area in (width, height) on each side
+    :rtype: <class 'numpy.ndarray'>, <class 'tuple'>, <class 'tuple'>
+    """
+    shape = img.shape[:2]  # current shape [height, width]
+    if isinstance(new_shape, int):
+        new_shape = (new_shape, new_shape)
+
+    # Scale ratio (new / old)
+    r = min(new_shape[0] / shape[0], new_shape[1] / shape[1])
+    if not scaleup:  # only scale down, do not scale up (for better test mAP)
+        r = min(r, 1.0)
+
+    # Compute padding
+    ratio = r, r  # width, height ratios
+    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
+    dw, dh = new_shape[1] - new_unpad[0], new_shape[0] - new_unpad[1]  # wh padding
+    if auto:  # minimum rectangle
+        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
+    elif scaleFill:  # stretch
+        dw, dh = 0.0, 0.0
+        new_unpad = (new_shape[1], new_shape[0])
+        ratio = new_shape[1] / shape[1], new_shape[0] / shape[0]  # width, height ratios
+
+    dw /= 2  # divide padding into 2 sides
+    dh /= 2
+
+    if shape[::-1] != new_unpad:  # resize
+        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
+    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
+    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
+    img = cv2.copyMakeBorder(
+        img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color
+    )  # add border
+    return img, ratio, (dw, dh)
+
+
+def clip_coords(boxes, img_shape):
+    """Clip xyxy bounding boxes in place to the image shape (height, width)
+
+    :param boxes: bbox
+    :type boxes: <class 'torch.Tensor'>
+    :param img_shape: image shape
+    :type img_shape: <class 'tuple'>: (height, width)
+    """
+    boxes[:, 0].clamp_(0, img_shape[1])  # x1
+    boxes[:, 1].clamp_(0, img_shape[0])  # y1
+    boxes[:, 2].clamp_(0, img_shape[1])  # x2
+    boxes[:, 3].clamp_(0, img_shape[0])  # y2
+
+
+def unpad_bbox(boxes, img_shape, pad):
+    """Correct bbox coordinates by removing the padded area from letterbox image
+
+    :param boxes: bbox absolute coordinates from prediction
+    :type boxes: <class 'torch.Tensor'>
+    :param img_shape: image shape
+    :type img_shape: <class 'tuple'>: (height, width)
+    :param pad: pad used in letterbox image for inference
+    :type pad: <class 'tuple'>: (width, height)
+    :return: (unpadded) image height and width
+    :rtype: <class 'tuple'>: (height, width)
+    """
+    dw, dh = pad
+    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
+    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
+    img_width = img_shape[1] - (left + right)
+    img_height = img_shape[0] - (top + bottom)
+
+    if boxes is not None:
+        boxes[:, 0] -= left
+        boxes[:, 1] -= top
+        boxes[:, 2] -= left
+        boxes[:, 3] -= top
+        clip_coords(boxes, (img_height, img_width))
+
+    return img_height, img_width
+
+
+def _convert_to_rcnn_output(output, height, width, pad):
+    # output: nx6 (x1, y1, x2, y2, conf, cls)
+    rcnn_label: Dict[str, List[Any]] = {"boxes": [], "labels": [], "scores": []}
+
+    # Adjust bbox to effective image bounds
+    img_height, img_width = unpad_bbox(
+        output[:, :4] if output is not None else None, (height, width), pad
+    )
+
+    if output is not None:
+        rcnn_label["boxes"] = output[:, :4]
+        rcnn_label["labels"] = output[:, 5:6].long()
+        rcnn_label["scores"] = output[:, 4:5]
+
+    return rcnn_label, (img_height, img_width)
+
+
+def xywh2xyxy(x):
+    """Convert nx4 boxes from [x, y, w, h] to [x1, y1, x2, y2] where xy1=top-left, xy2=bottom-right
+
+    :param x: bbox coordinates in [x center, y center, w, h]
+    :type x: <class 'numpy.ndarray'> or torch.Tensor
+    :return: new bbox coordinates in [x1, y1, x2, y2]
+    :rtype: <class 'numpy.ndarray'> or torch.Tensor
+    """
+    y = torch.zeros_like(x) if isinstance(x, torch.Tensor) else np.zeros_like(x)
+    y[:, 0] = x[:, 0] - x[:, 2] / 2  # top left x
+    y[:, 1] = x[:, 1] - x[:, 3] / 2  # top left y
+    y[:, 2] = x[:, 0] + x[:, 2] / 2  # bottom right x
+    y[:, 3] = x[:, 1] + x[:, 3] / 2  # bottom right y
+    return y
+
+
+def box_iou(box1, box2):
+    """Return intersection-over-union (Jaccard index) of boxes.
+    Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
+    https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py
+
+    :param box1: bbox in (Tensor[N, 4]), N for multiple bboxes and 4 for the box coordinates
+    :type box1: <class 'torch.Tensor'>
+    :param box2: bbox in (Tensor[M, 4]), M is for multiple bboxes
+    :type box2: <class 'torch.Tensor'>
+    :return: iou of box1 to box2 in (Tensor[N, M]), the NxM matrix containing the pairwise
+        IoU values for every element in boxes1 and boxes2
+    :rtype: <class 'torch.Tensor'>
+    """
+
+    def box_area(box):
+        # box = 4xn
+        return (box[2] - box[0]) * (box[3] - box[1])
+
+    area1 = box_area(box1.t())
+    area2 = box_area(box2.t())
+
+    # inter(N,M) = (rb(N,M,2) - lt(N,M,2)).clamp(0).prod(2)
+    inter = (
+        (
+            torch.min(box1[:, None, 2:], box2[:, 2:])
+            - torch.max(box1[:, None, :2], box2[:, :2])
+        )
+        .clamp(0)
+        .prod(2)
+    )
+    return inter / (
+        area1[:, None] + area2 - inter
+    )  # iou = inter / (area1 + area2 - inter)
+
+
+def non_max_suppression(
+    prediction,
+    conf_thres=0.1,
+    iou_thres=0.6,
+    multi_label=False,
+    merge=False,
+    classes=None,
+    agnostic=False,
+):
+    """Performs per-class Non-Maximum Suppression (NMS) on inference results
+
+    :param prediction: predictions
+    :type prediction: <class 'torch.Tensor'>
+    :param conf_thres: confidence threshold
+    :type conf_thres: float
+    :param iou_thres: IoU threshold
+    :type iou_thres: float
+    :param multi_label: whether to allow multiple labels per box
+    :type multi_label: bool
+    :param merge: Merge NMS (boxes merged using weighted mean)
+    :type merge: bool
+    :param classes: specific target classes to keep
+    :type classes: <class 'list'>
+    :param agnostic: whether to run class-agnostic NMS
+    :type agnostic: bool
+    :return: detections with shape: nx6 (x1, y1, x2, y2, conf, cls)
+    :rtype: <class 'list'>
+    """
+    if prediction.dtype is torch.float16:
+        prediction = prediction.float()  # to FP32
+
+    nc = prediction[0].shape[1] - 5  # number of classes
+    xc = prediction[..., 4] > conf_thres  # candidates
+
+    # min_wh = 2
+    max_wh = 4096  # (pixels) maximum box width and height
+    max_det = 300  # maximum number of detections per image
+    time_limit = 10.0  # seconds to quit after
+    redundant = True  # require redundant detections
+    if multi_label and nc < 2:
+        multi_label = False  # multiple labels per box (adds 0.5ms/img)
+
+    t = time.time()
+    output = [None] * prediction.shape[0]
+    for xi, x in enumerate(prediction):  # image index, image inference
+        # Apply constraints
+        # x[((x[..., 2:4] < min_wh) | (x[..., 2:4] > max_wh)).any(1), 4] = 0  # width-height
+        x = x[xc[xi]]  # confidence
+
+        # If none remain, process next image
+        if not x.shape[0]:
+            continue
+
+        # Compute conf
+        x[:, 5:] *= x[:, 4:5]  # conf = obj_conf * cls_conf
+
+        # Box (center x, center y, width, height) to (x1, y1, x2, y2)
+        box = xywh2xyxy(x[:, :4])
+
+        # Detections matrix nx6 (xyxy, conf, cls)
+        if multi_label:
+            i, j = (x[:, 5:] > conf_thres).nonzero().t()
+            x = torch.cat((box[i], x[i, j + 5, None], j[:, None].float()), 1)
+        else:  # best class only
+            conf, j = x[:, 5:].max(1, keepdim=True)
+            x = torch.cat((box, conf, j.float()), 1)[conf.view(-1) > conf_thres]
+
+        # Filter by class
+        if classes:
+            x = x[(x[:, 5:6] == torch.tensor(classes, device=x.device)).any(1)]
+
+        # Apply finite constraint
+        # if not torch.isfinite(x).all():
+        #     x = x[torch.isfinite(x).all(1)]
+
+        # If none remain, process next image
+        n = x.shape[0]  # number of boxes
+        if not n:
+            continue
+
+        # Sort by confidence
+        # x = x[x[:, 4].argsort(descending=True)]
+
+        # Batched NMS
+        c = x[:, 5:6] * (0 if agnostic else max_wh)  # classes
+        boxes, scores = x[:, :4] + c, x[:, 4]  # boxes (offset by class), scores
+        i = torchvision.ops.boxes.nms(boxes, scores, iou_thres)
+        if i.shape[0] > max_det:  # limit detections
+            i = i[:max_det]
+        if merge and (1 < n < 3e3):
+            try:  # update boxes as boxes(i,4) = weights(i,n) * boxes(n,4)
+                iou = box_iou(boxes[i], boxes) > iou_thres  # iou matrix
+                weights = iou * scores[None]  # box weights
+                x[i, :4] = torch.mm(weights, x[:, :4]).float() / weights.sum(
+                    1, keepdim=True
+                )  # merged boxes
+                if redundant:
+                    i = i[iou.sum(1) > 1]  # require redundancy
+            except Exception:  # possible CUDA error https://github.com/ultralytics/yolov3/issues/1139
+                print(
+                    "[WARNING: possible CUDA error ({} {} {} {})]".format(
+                        x, i, x.shape, i.shape
+                    )
+                )
+                pass
+
+        output[xi] = x[i]
+        if (time.time() - t) > time_limit:
+            break  # time limit exceeded
+
+    return output
+
+
+def _read_image(ignore_data_errors: bool, image_url: str, use_cv2: bool = False):
+    try:
+        if use_cv2:
+            # cv2 can return None in some error cases
+            img = cv2.imread(image_url)  # BGR
+            if img is None:
+                print("cv2.imread returned None")
+            return img
+        else:
+            image = Image.open(image_url).convert("RGB")
+            return image
+    except Exception as ex:
+        if ignore_data_errors:
+            msg = "Exception occurred when trying to read the image. This file will be ignored."
+            print(msg)
+        else:
+            print(str(ex))  # print() takes no has_pii argument; log the error text only
+        return None
+
+
+def preprocess(image_url, img_size=640):
+    img0 = _read_image(
+        ignore_data_errors=False, image_url=image_url, use_cv2=True
+    )  # cv2.imread(image_url) # BGR
+    if img0 is None:
+        # Keep the same arity as the success path so callers can always unpack two values
+        return None, None
+
+    img, ratio, pad = letterbox(img0, new_shape=img_size, auto=False, scaleup=False)
+
+    # Convert
+    img = img[:, :, ::-1].transpose(2, 0, 1)  # BGR to RGB, to 3x640x640
+    img = np.ascontiguousarray(img)
+    np_image = np.expand_dims(img, axis=0)  # add a batch dimension
+    np_image = np_image.astype(np.float32) / 255.0
+    return np_image, pad
diff --git a/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb b/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb index b6302f391..4ca569e63 100644 --- a/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb +++ b/how-to-use-azureml/automated-machine-learning/local-run-classification-credit-card-fraud/auto-ml-classification-credit-card-fraud-local.ipynb @@ -1,864 +1,899 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/classification-credit-card-fraud/auto-ml-classification-credit-card-fraud.png)" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Classification of fraudulent credit card transactions with a local run**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Train](#Train)\n", - "1. [Results](#Results)\n", - "1. [Test](#Tests)\n", - "1. [Explanation](#Explanation)\n", - "1. [Acknowledgements](#Acknowledgements)" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "\n", - "In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. The goal is to predict if a credit card transaction is considered a fraudulent charge.\n", - "\n", - "This notebook uses local machine compute to train the model.\n", - "\n", - "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", - "\n", - "In this notebook you will learn how to:\n", - "1. Create an experiment using an existing workspace.\n", - "2. Configure AutoML using `AutoMLConfig`.\n", - "3. Train the model.\n", - "4. Explore the results.\n", - "5. Test the fitted model.\n", - "6. Explore the model's explanation and feature importance in the Azure portal.\n", - "7. Create an AKS cluster, deploy the AutoML scoring model and the explainer model as a web service to AKS, and consume the web service."
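Before moving on to the next notebook: to show how the pieces of `yolo_onnx_preprocessing_utils.py` above fit together at inference time, here is a minimal usage sketch. Everything outside the module itself is an assumption: `model.onnx` is a hypothetical path to a YOLOv5-style ONNX export with a single raw output of shape `(batch, num_boxes, 5 + num_classes)`, and the network input is 640x640 (the `preprocess` default); `test_image.jpg` is the sample image added in this PR.

```python
import onnxruntime
import torch

from yolo_onnx_preprocessing_utils import preprocess, non_max_suppression, _convert_to_rcnn_output

session = onnxruntime.InferenceSession("model.onnx")  # hypothetical model path
input_name = session.get_inputs()[0].name

# Letterbox + normalize: a 1x3x640x640 float32 array plus the letterbox padding
np_image, pad = preprocess("test_image.jpg")

# Raw YOLO predictions, assumed shape (batch, num_boxes, 5 + num_classes)
raw_output = session.run(None, {input_name: np_image})[0]

# Per-class NMS: one nx6 tensor (x1, y1, x2, y2, conf, cls) per image, or None
detections = non_max_suppression(torch.from_numpy(raw_output), conf_thres=0.5, iou_thres=0.5)

# RCNN-style dict with the letterbox padding removed (640x640 is the padded size)
rcnn_output, (height, width) = _convert_to_rcnn_output(detections[0], 640, 640, pad)
print(rcnn_output["boxes"], rcnn_output["labels"], rcnn_output["scores"])
```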
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup\n", - "\n", - "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import logging\n", - "\n", - "from matplotlib import pyplot as plt\n", - "import pandas as pd\n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.core.dataset import Dataset\n", - "from azureml.train.automl import AutoMLConfig\n", - "from azureml.interpret import ExplanationClient" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# choose a name for experiment\n", - "experiment_name = 'automl-classification-ccard-local'\n", - "\n", - "experiment=Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Experiment Name'] = experiment.name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Load Data\n", - "\n", - "Load the credit card dataset from a CSV file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n", - "dataset = Dataset.Tabular.from_delimited_files(data)\n", - "training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n", - "label_column_name = 'Class'" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|classification or regression|\n", - "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n", - "|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n", - "|**n_cross_validations**|Number of cross-validation splits.|\n", - "|**training_data**|Input dataset, containing both features and label column.|\n", - "|**label_column_name**|The name of the label column.|\n", - "\n", - "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"n_cross_validations\": 3,\n", - " \"primary_metric\": 'AUC_weighted',\n", - " \"experiment_timeout_hours\": 0.25, # This is a time limit for testing purposes; remove it for real use cases, as it drastically limits the ability to find the best model possible\n", - " \"verbosity\": logging.INFO,\n", - " \"enable_stack_ensemble\": False\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'classification',\n", - " debug_log = 'automl_errors.log',\n", - " training_data = training_data,\n", - " label_column_name = label_column_name,\n", - " **automl_settings\n", - " )" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while.\n", - "In this example, we specify `show_output = True` to print currently running iterations to the console." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "local_run = experiment.submit(automl_config, show_output = True)" - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# If you need to retrieve a run that already started, use the following code\n", - "#from azureml.train.automl.run import AutoMLRun\n", - "#local_run = AutoMLRun(experiment = experiment, run_id = '')" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Results" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Widget for Monitoring Runs\n", - "\n", - "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", - "\n", - "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.widgets import RunDetails\n", - "RunDetails(local_run).show()" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Analyze results\n", - "\n", - "#### Retrieve the Best Model\n", - "\n", - "Below we select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
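For example (a sketch of those overloads; the metric name and iteration number here are illustrative):

```python
# Best run and fitted model according to a specific logged metric
best_run_auc, fitted_model_auc = local_run.get_output(metric='AUC_weighted')

# Run and fitted model from a particular child iteration
run_iter3, model_iter3 = local_run.get_output(iteration=3)
```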
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run, fitted_model = local_run.get_output()\n", - "fitted_model" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Print the properties of the model\n", - "The fitted_model is a Python object, and you can read its different properties.\n" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Tests\n", - "\n", - "Now that the model is trained, split the data in the same way it was split for training (the difference here is that the data is split locally), and then run the test data through the trained model to get the predicted values." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# convert the test data to dataframe\n", - "X_test_df = validation_data.drop_columns(columns=[label_column_name]).to_pandas_dataframe()\n", - "y_test_df = validation_data.keep_columns(columns=[label_column_name], validate=True).to_pandas_dataframe()" - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# call the predict function on the model\n", - "y_pred = fitted_model.predict(X_test_df)\n", - "y_pred" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Calculate metrics for the prediction\n", - "\n", - "Now visualize the results as a confusion matrix to show how the truth (actual) values compare to the predicted values \n", - "from the trained model that was returned." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from sklearn.metrics import confusion_matrix\n", - "import numpy as np\n", - "import itertools\n", - "\n", - "cf = confusion_matrix(y_test_df.values, y_pred)\n", - "plt.imshow(cf, cmap=plt.cm.Blues, interpolation='nearest')\n", - "plt.colorbar()\n", - "plt.title('Confusion Matrix')\n", - "plt.xlabel('Predicted')\n", - "plt.ylabel('Actual')\n", - "class_labels = ['False','True']\n", - "tick_marks = np.arange(len(class_labels))\n", - "plt.xticks(tick_marks, class_labels)\n", - "plt.yticks([-0.5, 0, 1, 1.5], ['', 'False', 'True', ''])\n", - "# plotting text value inside cells\n", - "thresh = cf.max() / 2.\n", - "for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n", - " plt.text(j, i, format(cf[i, j], 'd'), horizontalalignment='center', color='white' if cf[i, j] > thresh else 'black')\n", - "plt.show()" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Explanation\n", - "In this section, we will show how to compute model explanations and visualize the explanations using the azureml-interpret package. We will also show how to run the AutoML model and the explainer model by deploying them as an AKS web service.\n", - "\n", - "Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n", - "\n", - "### Run the explanation\n", - "#### Download the engineered feature importance from artifact store\n", - "You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the engineered features."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "client = ExplanationClient.from_run(best_run)\n", - "engineered_explanations = client.download_model_explanation(raw=False)\n", - "print(engineered_explanations.get_feature_importance_dict())\n", - "print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:\\n\" + best_run.get_portal_url())" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Download the raw feature importance from artifact store\n", - "You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use the Azure portal URL to view the dashboard visualization of the feature importance values of the raw features." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "raw_explanations = client.download_model_explanation(raw=True)\n", - "print(raw_explanations.get_feature_importance_dict())\n", - "print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:\\n\" + best_run.get_portal_url())" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Retrieve any other AutoML model from training" - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "automl_run, fitted_model = local_run.get_output(metric='accuracy')" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Set up the model explanations for AutoML models\n", - "Using automl_setup_model_explanations, the fitted_model can generate the following, which will be used for getting the engineered explanations:\n", - "\n", - "1. Featurized data from the train/test samples\n", - "2. The engineered feature name lists\n", - "3. The classes in your labeled column, in classification scenarios\n", - "\n", - "The automl_explainer_setup_obj contains all the structures from the above list." - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "X_train = training_data.drop_columns(columns=[label_column_name])\n", - "y_train = training_data.keep_columns(columns=[label_column_name], validate=True)\n", - "X_test = validation_data.drop_columns(columns=[label_column_name])" - ] - },
- { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations\n", - "\n", - "automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, \n", - " X_test=X_test, y=y_train, \n", - " task='classification',\n", - " automl_run=automl_run)" - ] - },
- { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Initialize the Mimic Explainer for feature importance\n", - "For explaining the AutoML models, use the MimicWrapper from the azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace, and a surrogate model to explain the AutoML model (fitted_model here). The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded."
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.interpret.mimic_wrapper import MimicWrapper\n", - "explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator,\n", - " explainable_model=automl_explainer_setup_obj.surrogate_model, \n", - " init_dataset=automl_explainer_setup_obj.X_transform, run=automl_explainer_setup_obj.automl_run,\n", - " features=automl_explainer_setup_obj.engineered_feature_names, \n", - " feature_maps=[automl_explainer_setup_obj.feature_map],\n", - " classes=automl_explainer_setup_obj.classes,\n", - " explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Use Mimic Explainer for computing and visualizing engineered feature importance\n", - "The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Compute the engineered explanations\n", - "engineered_explanations = explainer.explain(['local', 'global'], eval_dataset=automl_explainer_setup_obj.X_test_transform)\n", - "print(engineered_explanations.get_feature_importance_dict())\n", - "print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Use Mimic Explainer for computing and visualizing raw feature importance\n", - "The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Compute the raw explanations\n", - "raw_explanations = explainer.explain(['local', 'global'], get_raw=True,\n", - " raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n", - " eval_dataset=automl_explainer_setup_obj.X_test_transform,\n", - " raw_eval_dataset=automl_explainer_setup_obj.X_test_raw)\n", - "print(raw_explanations.get_feature_importance_dict())\n", - "print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Initialize the scoring Explainer, save and upload it for later use in scoring explanation" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer\n", - "import joblib\n", - "\n", - "# Initialize the ScoringExplainer\n", - "scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map])\n", - "\n", - "# Pickle scoring explainer locally to './scoring_explainer.pkl'\n", - "scoring_explainer_file_name = 'scoring_explainer.pkl'\n", - "with open(scoring_explainer_file_name, 'wb') as stream:\n", - " joblib.dump(scoring_explainer, stream)\n", - "\n", - "# Upload the scoring explainer to the automl run\n", - "automl_run.upload_file('outputs/scoring_explainer.pkl', scoring_explainer_file_name)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)\n", - "\n", - "We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Register trained automl model present in the 'outputs' folder in the artifacts\n", - "original_model = automl_run.register_model(model_name='automl_model', \n", - " model_path='outputs/model.pkl')\n", - "scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer',\n", - " model_path='outputs/scoring_explainer.pkl')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Create the conda dependencies for setting up the service\n", - "\n", - "We need to download the conda dependencies using the automl_run object." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.automl.core.shared import constants\n", - "from azureml.core.environment import Environment\n", - "\n", - "automl_run.download_file(constants.CONDA_ENV_FILE_PATH, 'myenv.yml')\n", - "myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n", - "myenv" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Write the Entry Script\n", - "Write the script that will be used to predict on your model" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%writefile score.py\n", - "import joblib\n", - "import pandas as pd\n", - "from azureml.core.model import Model\n", - "from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations\n", - "\n", - "\n", - "def init():\n", - " global automl_model\n", - " global scoring_explainer\n", - "\n", - " # Retrieve the path to the model file using the model name\n", - " # Assume original model is named original_prediction_model\n", - " automl_model_path = Model.get_model_path('automl_model')\n", - " scoring_explainer_path = Model.get_model_path('scoring_explainer')\n", - "\n", - " automl_model = joblib.load(automl_model_path)\n", - " scoring_explainer = joblib.load(scoring_explainer_path)\n", - "\n", - "\n", - "def run(raw_data):\n", - " data = pd.read_json(raw_data, orient='records') \n", - " # Make prediction\n", - " predictions = automl_model.predict(data)\n", - " # Setup for inferencing explanations\n", - " automl_explainer_setup_obj = automl_setup_model_explanations(automl_model,\n", - " X_test=data, task='classification')\n", - " # Retrieve model explanations for engineered explanations\n", - " engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform)\n", - " # Retrieve model explanations for raw explanations\n", - " raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True)\n", - " # You can return any data type as long as it is JSON-serializable\n", - " return {'predictions': predictions.tolist(),\n", - " 'engineered_local_importance_values': engineered_local_importance_values,\n", - " 'raw_local_importance_values': raw_local_importance_values}\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Create the InferenceConfig \n", - "Create the inference config that will be used when deploying the model" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.model import InferenceConfig\n", - "\n", - "inf_config = InferenceConfig(entry_script='score.py', environment=myenv)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Provision the AKS Cluster\n", - "This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AksCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your cluster.\n", - "aks_name = 'scoring-explain'\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " aks_target = ComputeTarget(workspace=ws, name=aks_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " prov_config = AksCompute.provisioning_configuration(vm_size='STANDARD_D3_V2')\n", - " aks_target = ComputeTarget.create(workspace=ws, \n", - " name=aks_name,\n", - " provisioning_configuration=prov_config)\n", - "aks_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Deploy web service to AKS" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Set the web service configuration (using default here)\n", - "from azureml.core.webservice import AksWebservice\n", - "from azureml.core.model import Model\n", - "\n", - "aks_config = AksWebservice.deploy_configuration()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "aks_service_name ='model-scoring-local-aks'\n", - "\n", - "aks_service = Model.deploy(workspace=ws,\n", - " name=aks_service_name,\n", - " models=[scoring_explainer_model, original_model],\n", - " inference_config=inf_config,\n", - " deployment_config=aks_config,\n", - " deployment_target=aks_target)\n", - "\n", - "aks_service.wait_for_deployment(show_output = True)\n", - "print(aks_service.state)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### View the service logs" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "aks_service.get_logs()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Consume the web service using run method to do the scoring and explanation of scoring.\n", - "We test the web sevice by passing data. Run() method retrieves API keys behind the scenes to make sure that call is authenticated." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Serialize the first row of the test data into json\n", - "X_test_json = X_test_df[:1].to_json(orient='records')\n", - "print(X_test_json)\n", - "\n", - "# Call the service to get the predictions and the engineered and raw explanations\n", - "output = aks_service.run(X_test_json)\n", - "\n", - "# Print the predicted value\n", - "print('predictions:\\n{}\\n'.format(output['predictions']))\n", - "# Print the engineered feature importances for the predicted value\n", - "print('engineered_local_importance_values:\\n{}\\n'.format(output['engineered_local_importance_values']))\n", - "# Print the raw feature importances for the predicted value\n", - "print('raw_local_importance_values:\\n{}\\n'.format(output['raw_local_importance_values']))\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Clean up\n", - "Delete the service." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "aks_service.delete()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Acknowledgements" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n", - "\n", - "\n", - "The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Universit\u00c3\u0192\u00c2\u00a9 Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project\n", - "Please cite the following works: \n", - "\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00a2\tAndrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n", - "\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00a2\tDal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon\n", - "\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00a2\tDal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE\n", - "o\tDal Pozzolo, Andrea Adaptive Machine learning for credit card fraud detection ULB MLG PhD thesis (supervised by G. Bontempi)\n", - "\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00a2\tCarcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-A\u00c3\u0192\u00c2\u00abl; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier\n", - "\u00c3\u00a2\u00e2\u201a\u00ac\u00c2\u00a2\tCarcillo, Fabrizio; Le Borgne, Yann-A\u00c3\u0192\u00c2\u00abl; Caelen, Olivier; Bontempi, Gianluca. Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing" - ] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Classification of credit card fraudulent transactions with local run **_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Train](#Train)\n", + "1. [Results](#Results)\n", + "1. [Test](#Tests)\n", + "1. [Explanation](#Explanation)\n", + "1. [Acknowledgements](#Acknowledgements)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "In this example we use the associated credit card dataset to showcase how you can use AutoML for a simple classification problem. 
The goal is to predict whether a credit card transaction is fraudulent.\n", + "\n", + "This notebook uses local compute to train the model.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", + "\n", + "In this notebook you will learn how to:\n", + "1. Create an experiment using an existing workspace.\n", + "2. Configure AutoML using `AutoMLConfig`.\n", + "3. Train the model.\n", + "4. Explore the results.\n", + "5. Test the fitted model.\n", + "6. Explore a model's explanation and view feature importance in the Azure portal.\n", + "7. Create an AKS cluster, deploy the AutoML scoring model and the explainer model to AKS as a web service, and consume the web service." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import logging\n", + "\n", + "from matplotlib import pyplot as plt\n", + "import pandas as pd\n", + "\n", + "import azureml.core\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.core.dataset import Dataset\n", + "from azureml.train.automl import AutoMLConfig\n", + "from azureml.interpret import ExplanationClient" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# choose a name for experiment\n", + "experiment_name = \"automl-classification-ccard-local\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Experiment Name\"] = experiment.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load Data\n", + "\n", + "Load the credit card dataset from a CSV file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using `random_split` and extract the training data for the model."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/creditcard.csv\"\n", + "dataset = Dataset.Tabular.from_delimited_files(data)\n", + "training_data, validation_data = dataset.random_split(percentage=0.8, seed=223)\n", + "label_column_name = \"Class\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train\n", + "\n", + "Instantiate an `AutoMLConfig` object. This defines the settings and data used to run the experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|classification or regression|\n", + "|**primary_metric**|This is the metric that you want to optimize. Classification supports the following primary metrics: <br><i>accuracy</i><br><i>AUC_weighted</i><br><i>average_precision_score_weighted</i><br><i>norm_macro_recall</i><br><i>precision_score_weighted</i>|\n", + "|**enable_early_stopping**|Stop the run if the metric score is not showing improvement.|\n", + "|**n_cross_validations**|Number of cross validation splits.|\n", + "|**training_data**|Input dataset, containing both features and label column.|\n", + "|**label_column_name**|The name of the label column.|\n", + "\n", + "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_settings = {\n", + "    \"n_cross_validations\": 3,\n", + "    \"primary_metric\": \"average_precision_score_weighted\",\n", + "    \"experiment_timeout_hours\": 0.25, # Time limit for testing purposes only; remove it for real use cases, as it drastically limits the ability to find the best model\n", + "    \"verbosity\": logging.INFO,\n", + "    \"enable_stack_ensemble\": False,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + "    task=\"classification\",\n", + "    debug_log=\"automl_errors.log\",\n", + "    training_data=training_data,\n", + "    label_column_name=label_column_name,\n", + "    **automl_settings,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations, this can run for a while.\n", + "In this example, we specify `show_output = True` to print currently running iterations to the console." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "local_run = experiment.submit(automl_config, show_output=True)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# If you need to retrieve a run that already started, use the following code\n", + "# from azureml.train.automl.run import AutoMLRun\n", + "# local_run = AutoMLRun(experiment = experiment, run_id = '')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Results" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Widget for Monitoring Runs\n", + "\n", + "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", + "\n", + "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.widgets import RunDetails\n", + "\n", + "RunDetails(local_run).show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Analyze results\n", + "\n", + "#### Retrieve the Best Model\n", + "\n", + "Below we select the best pipeline from our iterations. The `get_output` method on `local_run` returns the best run and the fitted model for the last invocation. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
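+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "As a quick, hedged sketch of those overloads (the metric name and iteration index below are illustrative; any metric logged by your run, and any iteration index that exists in it, would work):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Illustrative only: retrieve the run/model that scored best on another\n", + "# logged metric, or the model from one specific iteration of this run.\n", + "# run_for_metric, model_for_metric = local_run.get_output(metric=\"norm_macro_recall\")\n", + "# run_for_iteration, model_for_iteration = local_run.get_output(iteration=0)"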
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run, fitted_model = local_run.get_output()\n", + "fitted_model" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Print the properties of the model\n", + "The `fitted_model` is a Python object, and you can read its different properties.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Tests\n", + "\n", + "Now that the model is trained, split the data in the same way it was split for training (the difference here is that the data is split locally), and then run the test data through the trained model to get the predicted values." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# convert the test data to dataframe\n", + "X_test_df = validation_data.drop_columns(\n", + "    columns=[label_column_name]\n", + ").to_pandas_dataframe()\n", + "y_test_df = validation_data.keep_columns(\n", + "    columns=[label_column_name], validate=True\n", + ").to_pandas_dataframe()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# call the predict functions on the model\n", + "y_pred = fitted_model.predict(X_test_df)\n", + "y_pred" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Calculate metrics for the prediction\n", + "\n", + "Now visualize the results as a confusion matrix to show how the truth (actual) values compare to the predicted values \n", + "from the trained model that was returned." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.metrics import confusion_matrix\n", + "import numpy as np\n", + "import itertools\n", + "\n", + "cf = confusion_matrix(y_test_df.values, y_pred)\n", + "plt.imshow(cf, cmap=plt.cm.Blues, interpolation=\"nearest\")\n", + "plt.colorbar()\n", + "plt.title(\"Confusion Matrix\")\n", + "plt.xlabel(\"Predicted\")\n", + "plt.ylabel(\"Actual\")\n", + "class_labels = [\"False\", \"True\"]\n", + "tick_marks = np.arange(len(class_labels))\n", + "plt.xticks(tick_marks, class_labels)\n", + "plt.yticks([-0.5, 0, 1, 1.5], [\"\", \"False\", \"True\", \"\"])\n", + "# plotting text value inside cells\n", + "thresh = cf.max() / 2.0\n", + "for i, j in itertools.product(range(cf.shape[0]), range(cf.shape[1])):\n", + "    plt.text(\n", + "        j,\n", + "        i,\n", + "        format(cf[i, j], \"d\"),\n", + "        horizontalalignment=\"center\",\n", + "        color=\"white\" if cf[i, j] > thresh else \"black\",\n", + "    )\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Explanation\n", + "In this section, we will show how to compute model explanations and visualize the explanations using the azureml-interpret package. We will also show how to run the AutoML model and the explainer model by deploying them as an AKS web service.\n", + "\n", + "Besides retrieving an existing model explanation for an AutoML model, you can also explain your AutoML model with different test data. The following steps will allow you to compute and visualize engineered feature importance based on your test data.\n", + "\n", + "### Run the explanation\n", + "#### Download the engineered feature importance from artifact store\n", + "You can use ExplanationClient to download the engineered feature explanations from the artifact store of the best_run.
You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "client = ExplanationClient.from_run(best_run)\n", + "engineered_explanations = client.download_model_explanation(raw=False)\n", + "print(engineered_explanations.get_feature_importance_dict())\n", + "print(\n", + " \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n", + " + best_run.get_portal_url()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Download the raw feature importance from artifact store\n", + "You can use ExplanationClient to download the raw feature explanations from the artifact store of the best_run. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "raw_explanations = client.download_model_explanation(raw=True)\n", + "print(raw_explanations.get_feature_importance_dict())\n", + "print(\n", + " \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n", + " + best_run.get_portal_url()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Retrieve any other AutoML model from training" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_run, fitted_model = local_run.get_output(metric=\"accuracy\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Setup the model explanations for AutoML models\n", + "The fitted_model can generate the following which will be used for getting the engineered explanations using automl_setup_model_explanations:-\n", + "\n", + "1. Featurized data from train samples/test samples\n", + "2. Gather engineered name lists\n", + "3. Find the classes in your labeled column in classification scenarios\n", + "\n", + "The automl_explainer_setup_obj contains all the structures from above list." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "X_train = training_data.drop_columns(columns=[label_column_name])\n", + "y_train = training_data.keep_columns(columns=[label_column_name], validate=True)\n", + "X_test = validation_data.drop_columns(columns=[label_column_name])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.train.automl.runtime.automl_explain_utilities import (\n", + " automl_setup_model_explanations,\n", + ")\n", + "\n", + "automl_explainer_setup_obj = automl_setup_model_explanations(\n", + " fitted_model,\n", + " X=X_train,\n", + " X_test=X_test,\n", + " y=y_train,\n", + " task=\"classification\",\n", + " automl_run=automl_run,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Initialize the Mimic Explainer for feature importance\n", + "For explaining the AutoML models, use the MimicWrapper from azureml-interpret package. The MimicWrapper can be initialized with fields in automl_explainer_setup_obj, your workspace and a surrogate model to explain the AutoML model (fitted_model here). 
The MimicWrapper also takes the automl_run object where engineered explanations will be uploaded." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.interpret.mimic_wrapper import MimicWrapper\n", + "\n", + "explainer = MimicWrapper(\n", + " ws,\n", + " automl_explainer_setup_obj.automl_estimator,\n", + " explainable_model=automl_explainer_setup_obj.surrogate_model,\n", + " init_dataset=automl_explainer_setup_obj.X_transform,\n", + " run=automl_explainer_setup_obj.automl_run,\n", + " features=automl_explainer_setup_obj.engineered_feature_names,\n", + " feature_maps=[automl_explainer_setup_obj.feature_map],\n", + " classes=automl_explainer_setup_obj.classes,\n", + " explainer_kwargs=automl_explainer_setup_obj.surrogate_model_params,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Use Mimic Explainer for computing and visualizing engineered feature importance\n", + "The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the generated engineered features. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Compute the engineered explanations\n", + "engineered_explanations = explainer.explain(\n", + " [\"local\", \"global\"], eval_dataset=automl_explainer_setup_obj.X_test_transform\n", + ")\n", + "print(engineered_explanations.get_feature_importance_dict())\n", + "print(\n", + " \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n", + " + automl_run.get_portal_url()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Use Mimic Explainer for computing and visualizing raw feature importance\n", + "The explain() method in MimicWrapper can be called with the transformed test samples to get the feature importance for the original features in your data. You can also use azure portal url to view the dash board visualization of the feature importance values of the original/raw features." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Compute the raw explanations\n", + "raw_explanations = explainer.explain(\n", + " [\"local\", \"global\"],\n", + " get_raw=True,\n", + " raw_feature_names=automl_explainer_setup_obj.raw_feature_names,\n", + " eval_dataset=automl_explainer_setup_obj.X_test_transform,\n", + " raw_eval_dataset=automl_explainer_setup_obj.X_test_raw,\n", + ")\n", + "print(raw_explanations.get_feature_importance_dict())\n", + "print(\n", + " \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n", + " + automl_run.get_portal_url()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Initialize the scoring Explainer, save and upload it for later use in scoring explanation" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer\n", + "import joblib\n", + "\n", + "# Initialize the ScoringExplainer\n", + "scoring_explainer = TreeScoringExplainer(\n", + " explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]\n", + ")\n", + "\n", + "# Pickle scoring explainer locally to './scoring_explainer.pkl'\n", + "scoring_explainer_file_name = \"scoring_explainer.pkl\"\n", + "with open(scoring_explainer_file_name, \"wb\") as stream:\n", + " joblib.dump(scoring_explainer, stream)\n", + "\n", + "# Upload the scoring explainer to the automl run\n", + "automl_run.upload_file(\"outputs/scoring_explainer.pkl\", scoring_explainer_file_name)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Deploying the scoring and explainer models to a web service to Azure Kubernetes Service (AKS)\n", + "\n", + "We use the TreeScoringExplainer from azureml.interpret package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register trained automl model present in the 'outputs' folder in the artifacts\n", + "original_model = automl_run.register_model(\n", + " model_name=\"automl_model\", model_path=\"outputs/model.pkl\"\n", + ")\n", + "scoring_explainer_model = automl_run.register_model(\n", + " model_name=\"scoring_explainer\", model_path=\"outputs/scoring_explainer.pkl\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Create the conda dependencies for setting up the service\n", + "\n", + "We need to download the conda dependencies using the automl_run object." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.core.shared import constants\n", + "from azureml.core.environment import Environment\n", + "\n", + "automl_run.download_file(constants.CONDA_ENV_FILE_PATH, \"myenv.yml\")\n", + "myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n", + "myenv" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Write the Entry Script\n", + "Write the script that will be used to predict on your model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%writefile score.py\n", + "import joblib\n", + "import pandas as pd\n", + "from azureml.core.model import Model\n", + "from azureml.train.automl.runtime.automl_explain_utilities import (\n", + " automl_setup_model_explanations,\n", + ")\n", + "\n", + "\n", + "def init():\n", + " global automl_model\n", + " global scoring_explainer\n", + "\n", + " # Retrieve the path to the model file using the model name\n", + " # Assume original model is named original_prediction_model\n", + " automl_model_path = Model.get_model_path(\"automl_model\")\n", + " scoring_explainer_path = Model.get_model_path(\"scoring_explainer\")\n", + "\n", + " automl_model = joblib.load(automl_model_path)\n", + " scoring_explainer = joblib.load(scoring_explainer_path)\n", + "\n", + "\n", + "def run(raw_data):\n", + " data = pd.read_json(raw_data, orient=\"records\")\n", + " # Make prediction\n", + " predictions = automl_model.predict(data)\n", + " # Setup for inferencing explanations\n", + " automl_explainer_setup_obj = automl_setup_model_explanations(\n", + " automl_model, X_test=data, task=\"classification\"\n", + " )\n", + " # Retrieve model explanations for engineered explanations\n", + " engineered_local_importance_values = scoring_explainer.explain(\n", + " automl_explainer_setup_obj.X_test_transform\n", + " )\n", + " # Retrieve model explanations for raw explanations\n", + " raw_local_importance_values = scoring_explainer.explain(\n", + " automl_explainer_setup_obj.X_test_transform, get_raw=True\n", + " )\n", + " # You can return any data type as long as it is JSON-serializable\n", + " return {\n", + " \"predictions\": predictions.tolist(),\n", + " \"engineered_local_importance_values\": engineered_local_importance_values,\n", + " \"raw_local_importance_values\": raw_local_importance_values,\n", + " }" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Create the InferenceConfig \n", + "Create the inference config that will be used when deploying the model" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.model import InferenceConfig\n", + "\n", + "inf_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Provision the AKS Cluster\n", + "This is a one time setup. You can reuse this cluster for multiple deployments after it has been created. If you delete the cluster or the resource group that contains it, then you would have to recreate it." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AksCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster.\n", + "aks_name = \"scoring-explain\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + "    aks_target = ComputeTarget(workspace=ws, name=aks_name)\n", + "    print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + "    prov_config = AksCompute.provisioning_configuration(vm_size=\"STANDARD_D3_V2\")\n", + "    aks_target = ComputeTarget.create(\n", + "        workspace=ws, name=aks_name, provisioning_configuration=prov_config\n", + "    )\n", + "aks_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Deploy web service to AKS" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Set the web service configuration (using default here)\n", + "from azureml.core.webservice import AksWebservice\n", + "from azureml.core.model import Model\n", + "\n", + "aks_config = AksWebservice.deploy_configuration()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "aks_service_name = \"model-scoring-local-aks\"\n", + "\n", + "aks_service = Model.deploy(\n", + "    workspace=ws,\n", + "    name=aks_service_name,\n", + "    models=[scoring_explainer_model, original_model],\n", + "    inference_config=inf_config,\n", + "    deployment_config=aks_config,\n", + "    deployment_target=aks_target,\n", + ")\n", + "\n", + "aks_service.wait_for_deployment(show_output=True)\n", + "print(aks_service.state)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### View the service logs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "aks_service.get_logs()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Consume the web service using the run method to get predictions and scoring explanations.\n", + "We test the web service by passing it data. The run() method retrieves the API keys behind the scenes to make sure the call is authenticated." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Serialize the first row of the test data into json\n", + "X_test_json = X_test_df[:1].to_json(orient=\"records\")\n", + "print(X_test_json)\n", + "\n", + "# Call the service to get the predictions and the engineered and raw explanations\n", + "output = aks_service.run(X_test_json)\n", + "\n", + "# Print the predicted value\n", + "print(\"predictions:\\n{}\\n\".format(output[\"predictions\"]))\n", + "# Print the engineered feature importances for the predicted value\n", + "print(\n", + "    \"engineered_local_importance_values:\\n{}\\n\".format(\n", + "        output[\"engineered_local_importance_values\"]\n", + "    )\n", + ")\n", + "# Print the raw feature importances for the predicted value\n", + "print(\n", + "    \"raw_local_importance_values:\\n{}\\n\".format(output[\"raw_local_importance_values\"])\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Clean up\n", + "Delete the service."
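+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Optionally (this step is an addition for illustration, not part of the original walkthrough), once the service has been deleted in the next cell you can also remove the AKS compute target, if the cluster was created only for this notebook." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Optional extra clean-up sketch: remove the AKS cluster as well, after the\n", + "# service below has been deleted. Uncomment only if nothing else uses it.\n", + "# aks_target.delete()"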
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "aks_service.delete()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Acknowledgements" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This Credit Card fraud Detection dataset is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0/. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/ and is available at: https://www.kaggle.com/mlg-ulb/creditcardfraud\n", + "\n", + "\n", + "The dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group (http://mlg.ulb.ac.be) of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available on https://www.researchgate.net/project/Fraud-detection-5 and the page of the DefeatFraud project\n", + "Please cite the following works: \n", + "•\tAndrea Dal Pozzolo, Olivier Caelen, Reid A. Johnson and Gianluca Bontempi. Calibrating Probability with Undersampling for Unbalanced Classification. In Symposium on Computational Intelligence and Data Mining (CIDM), IEEE, 2015\n", + "•\tDal Pozzolo, Andrea; Caelen, Olivier; Le Borgne, Yann-Ael; Waterschoot, Serge; Bontempi, Gianluca. Learned lessons in credit card fraud detection from a practitioner perspective, Expert systems with applications,41,10,4915-4928,2014, Pergamon\n", + "•\tDal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca. Credit card fraud detection: a realistic modeling and a novel learning strategy, IEEE transactions on neural networks and learning systems,29,8,3784-3797,2018,IEEE\n", + "o\tDal Pozzolo, Andrea Adaptive Machine learning for credit card fraud detection ULB MLG PhD thesis (supervised by G. Bontempi)\n", + "•\tCarcillo, Fabrizio; Dal Pozzolo, Andrea; Le Borgne, Yann-Aël; Caelen, Olivier; Mazzer, Yannis; Bontempi, Gianluca. Scarff: a scalable framework for streaming credit card fraud detection with Spark, Information fusion,41, 182-194,2018,Elsevier\n", + "•\tCarcillo, Fabrizio; Le Borgne, Yann-Aël; Caelen, Olivier; Bontempi, Gianluca. 
Streaming active learning strategies for real-life credit card fraud detection: assessment and visualization, International Journal of Data Science and Analytics, 5,4,285-300,2018,Springer International Publishing" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "ratanase" + } + ], + "category": "tutorial", + "compute": [ + "Local" + ], + "datasets": [ + "creditcard" + ], + "deployment": [ + "None" + ], + "exclude_from_index": true, + "file_extension": ".py", + "framework": [ + "None" + ], + "friendly_name": "Classification of credit card fraudulent transactions using Automated ML", + "index_order": 5, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + }, + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "tags": [ + "local_run", + "AutomatedML" ], - "metadata": { - "authors": [ - { - "name": "ratanase" - } - ], - "category": "tutorial", - "compute": [ - "Local" - ], - "datasets": [ - "creditcard" - ], - "deployment": [ - "None" - ], - "exclude_from_index": true, - "file_extension": ".py", - "framework": [ - "None" - ], - "friendly_name": "Classification of credit card fraudulent transactions using Automated ML", - "index_order": 5, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.7" - }, - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "tags": [ - "local_run", - "AutomatedML" - ], - "task": "Classification", - "version": "3.6.7" - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file + "task": "Classification", + "version": "3.6.7" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/metrics/binary-classification-metric-and-confidence-interval.ipynb b/how-to-use-azureml/automated-machine-learning/metrics/binary-classification-metric-and-confidence-interval.ipynb new file mode 100644 index 000000000..c6886bf3c --- /dev/null +++ b/how-to-use-azureml/automated-machine-learning/metrics/binary-classification-metric-and-confidence-interval.ipynb @@ -0,0 +1,698 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Copyright (c) Microsoft Corporation. All rights reserved.\n", + "\n", + "Licensed under the MIT License." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/metrics/binary-classification-metric-and-confidence-interval.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**New metric features in Azure AutoML**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Train](#Train)\n", + "1. [Results](#Results)\n", + "1. [Test](#Test)\n", + "1. 
[Acknowledgements](#Acknowledgements)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "In this example notebook we use the sklearn datasets, [digits](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html) and [boston](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html), to help you get familiar with binary classification metrics and confidence intervals. The goal is to learn how to use these features through the examples. \n", + "\n", + "This notebook uses remote compute to train the model.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", + "\n", + "In this notebook you will learn how to:\n", + "1. Have binary classification metrics calculated for AutoML runs\n", + "2. Find binary classification metrics in the UI and retrieve the values through code\n", + "3. Have confidence intervals calculated for both classification and regression AutoML runs\n", + "4. Find confidence intervals in the UI and retrieve the values through code" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import logging\n", + "\n", + "import pandas as pd\n", + "import os\n", + "\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.core.dataset import Dataset\n", + "from azureml.train.automl import AutoMLConfig" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "experiment_name = \"metrics-new-feature-test\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Experiment Name\"] = experiment.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Create or Attach existing AmlCompute\n", + "A compute target is required to execute the Automated ML run. In this tutorial, you create AmlCompute as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "#### Creation of AmlCompute takes approximately 5 minutes. \n", + "If an AmlCompute with that name is already in your workspace, this code will skip the creation process.\n", + "As with other Azure services, there are limits on certain resources (e.g.
AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "cpu_cluster_name = \"cpu-cluster-1\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=6\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Data" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load Data\n", + "\n", + "We load datasets from sklearn and save to local files to register them to workspace.\n", + "\n", + "For classification, we use [digits dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html#sklearn.datasets.load_digits)\n", + "\n", + "For regression, we use [boston dataset](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html#sklearn.datasets.load_boston)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "import sklearn.datasets\n", + "\n", + "\n", + "def load_classification_data():\n", + " if os.path.exists(\"./data/digits.csv\"):\n", + " print(\"Find downloaded dataset. 
Loading\")\n", + " else:\n", + " print(\"Downloading dataset\")\n", + " os.makedirs(\"./data\", exist_ok=True)\n", + " classification_dataset = sklearn.datasets.load_digits()\n", + " X = classification_dataset[\"data\"]\n", + " y = classification_dataset[\"target\"]\n", + " full_data = np.concatenate([X, y.reshape(-1, 1)], axis=1).astype(\"int\")\n", + " columns = [\"feature_{}\".format(i) for i in range(X.shape[1])] + [\"label\"]\n", + " full_data = pd.DataFrame(data=full_data, columns=columns)\n", + " full_data.to_csv(\"./data/digits.csv\", index=False)\n", + " print(\"Dataset downloaded\")\n", + " ws = Workspace.from_config()\n", + " datastore = ws.get_default_datastore()\n", + " datastore.upload(\n", + " src_dir=\"./data\", target_path=\"data/new-metric-features/\", overwrite=True\n", + " )\n", + " data = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, (\"data/new-metric-features/digits.csv\"))]\n", + " )\n", + " train, test = data.random_split(percentage=0.8, seed=101)\n", + " validation, test = test.random_split(percentage=0.5, seed=47)\n", + " return train, validation, test, np.arange(10), \"label\"\n", + "\n", + "\n", + "(\n", + " digit_train,\n", + " digit_validation,\n", + " digit_test,\n", + " labels,\n", + " label_column_name,\n", + ") = load_classification_data()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Binary Classification Metrics\n", + "\n", + "In this section we will explain how to set parameters for AutoML runs to have binary classification metrics calculated.\n", + "\n", + "## Binary Classification Metrics\n", + "Binary classification metrics will be calculated for AutoML in two cases:\n", + "1. There are exactly two classes.\n", + "2. parameter `positive_label` in `AutoMLConfig` is specified as an existing class.\n", + "\n", + "When a `positive_label` is specified for multiclass classification tasks, all other classes will all be treated the negative class when calculating the binary classification metrics.\n", + "\n", + "When there are exactly two classes, `np.unique()` will be used to sort the classes and the class with larger index will be used as the positive class. However, we would recommend always specify a `positive_label` when you want to calculate binary classification metrics to make sure that it is calculated for the correct class. In the example below, we use class `4` as the positive class." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_settings = {\n", + "    \"primary_metric\": \"AUC_weighted\",\n", + "    \"enable_early_stopping\": True,\n", + "    \"max_concurrent_iterations\": 6,\n", + "    \"experiment_timeout_hours\": 0.25,\n", + "    \"verbosity\": logging.INFO,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + "    task=\"classification\",\n", + "    debug_log=\"automl_errors.log\",\n", + "    compute_target=compute_target,\n", + "    training_data=digit_train,\n", + "    validation_data=digit_validation,\n", + "    label_column_name=label_column_name,\n", + "    positive_label=4, # specify the positive class with this parameter\n", + "    **automl_settings\n", + ")\n", + "\n", + "classification_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "classification_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Find Binary Metrics in the UI\n", + "\n", + "After training, you can click the link above to visit the page of this run. You can find all training runs under the `Child runs` tab:\n", + "\n", + "![](imgs/child-runs.png)\n", + "\n", + "Then, under the `Metrics` tab, you can find some metric names that end with `_binary`. They are the binary classification metrics with the specified positive class.\n", + "\n", + "![](imgs/binary-metrics.png)\n", + "\n", + "## Retrieve Binary Metrics with Code\n", + "\n", + "You can also retrieve the metric values for any training run in code. The returned value will be a dictionary with the structure `{name: value}`. The example below retrieves the metrics of the best trained model." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run, fitted_model = classification_run.get_output()\n", + "training_metrics = best_run.get_metrics()\n", + "training_metrics[\"AUC_binary\"]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "With data downloaded, you can also calculate the binary classification metrics with other classes as the positive class. \n", + "\n", + "To calculate metrics in code, you will need to import Azure AutoML's scoring modules and specify the value of `positive_label` as desired.
See example code below:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.automl.runtime.shared.score import constants, scoring\n", + "\n", + "test_df = digit_test.to_pandas_dataframe()\n", + "y_test = test_df[label_column_name]\n", + "test_df = test_df.drop(columns=[label_column_name])\n", + "y_pred_proba = fitted_model.predict_proba(test_df)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "for positive_label in range(10):\n", + " metrics = scoring.score_classification(\n", + " y_test,\n", + " y_pred_proba,\n", + " constants.CLASSIFICATION_SCALAR_SET,\n", + " labels,\n", + " labels,\n", + " positive_label=positive_label,\n", + " )\n", + " print(\n", + " \"AUC_binary for label {} is {:.4f}\".format(\n", + " positive_label, metrics[\"AUC_binary\"]\n", + " )\n", + " )" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Wrong Value of `positive_label` Fails the Run\n", + "\n", + "The value of `positive_label` passed into `AutoMLConfig` must be exactly the same as it is in the dataset. If you passed in a `positive_label` that cannot be found in the training dataset, the run will fail. See the example below, where the correct value `4` is replaced by its string version, `'4'`" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_settings = {\n", + " \"primary_metric\": \"AUC_weighted\",\n", + " \"enable_early_stopping\": True,\n", + " \"max_concurrent_iterations\": 6,\n", + " \"experiment_timeout_hours\": 0.25,\n", + " \"verbosity\": logging.INFO,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"classification\",\n", + " debug_log=\"automl_errors.log\",\n", + " compute_target=compute_target,\n", + " training_data=digit_train,\n", + " validation_data=digit_validation,\n", + " label_column_name=label_column_name,\n", + " positive_label=\"4\", # replace the correct integer value with its string version\n", + " **automl_settings\n", + ")\n", + "\n", + "classification_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "classification_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Confidence Interval\n", + "\n", + "We calculate confidence intervals for metrics by doing bootstrap and we give conservative estimates. Like binary classification metrics, you can find the confidence intervals in UI, and also retrieve them with codes. \n", + "\n", + "To calculate confidence intervals in AutoML runs, we need to pass two other parameters to `AutoMLConfig`:\n", + "1. `enable_metric_confidence = True` to tell the run to calculate confidence interval\n", + "2. `test_data` to activate a test run, as confidence intervals will only be calculated for test runs.\n", + "\n", + "Currently, if the task is classification, only primary metrics will have their confidence intervals logged with the run. To get confidence intervals for other metrics, you can use codes. We will provide examples below." 
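+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "For intuition, here is a minimal sketch of a percentile bootstrap on synthetic predictions. This shows the general technique only; AutoML's own estimator is more conservative and differs in detail:"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "import numpy as np\n",
+        "from sklearn.metrics import accuracy_score\n",
+        "\n",
+        "rng = np.random.default_rng(0)\n",
+        "y_true = rng.integers(0, 2, size=200)\n",
+        "# Flip ~20% of the labels to simulate an imperfect classifier.\n",
+        "y_pred = np.where(rng.random(200) < 0.8, y_true, 1 - y_true)\n",
+        "\n",
+        "# Resample (truth, prediction) pairs with replacement and recompute the metric.\n",
+        "scores = []\n",
+        "for _ in range(1000):\n",
+        "    idx = rng.integers(0, len(y_true), size=len(y_true))\n",
+        "    scores.append(accuracy_score(y_true[idx], y_pred[idx]))\n",
+        "lower, upper = np.percentile(scores, [2.5, 97.5])\n",
+        "print(\"95% CI for accuracy: [{:.3f}, {:.3f}]\".format(lower, upper))"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "The run below asks AutoML to compute such intervals by setting `enable_metric_confidence=True` and supplying `test_data`."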
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "automl_settings = {\n",
+        "    \"primary_metric\": \"AUC_weighted\",\n",
+        "    \"enable_early_stopping\": True,\n",
+        "    \"max_concurrent_iterations\": 6,\n",
+        "    \"experiment_timeout_hours\": 0.25,\n",
+        "    \"verbosity\": logging.INFO,\n",
+        "}\n",
+        "\n",
+        "automl_config = AutoMLConfig(\n",
+        "    task=\"classification\",\n",
+        "    debug_log=\"automl_errors.log\",\n",
+        "    compute_target=compute_target,\n",
+        "    training_data=digit_train,\n",
+        "    validation_data=digit_validation,\n",
+        "    test_data=digit_test,  # if you only have a test set, you can pass validation set here, instead of at validation_data\n",
+        "    label_column_name=label_column_name,\n",
+        "    enable_metric_confidence=True,\n",
+        "    **automl_settings\n",
+        ")\n",
+        "\n",
+        "classification_run = experiment.submit(automl_config, show_output=False)\n",
+        "classification_run.wait_for_completion(show_output=False)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "## Find Confidence Interval in UI\n",
+        "\n",
+        "To locate the confidence intervals in the UI, we must first find the run that produced the best model, as only the best model is evaluated on the test set. To do so, click the link above for the AutoML run and go to the `Models` tab. The model listed at the top is the one with the best performance:\n",
+        "\n",
+        "![](imgs/best-model.png)\n",
+        "\n",
+        "Then, for this best model, go to its `Child runs` tab and click the run labeled `Test model`:\n",
+        "\n",
+        "![](imgs/test-run.png)\n",
+        "\n",
+        "For this test run, under the `Metrics` tab, you can find some metrics whose names end with `extras`. By switching `View as` from `Chart` to `Table`, you can find the confidence intervals for those metrics.\n",
+        "\n",
+        "![](imgs/confidence-intervals.png)\n",
+        "\n",
+        "## Find Confidence Interval with Code\n",
+        "\n",
+        "You can retrieve the `Run` object for the test run with the following code, and get the confidence intervals from its metrics."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "best_run, fitted_model = classification_run.get_output()\n",
+        "test_run = next(best_run.get_children(type=\"automl.model_test\"))\n",
+        "test_run.wait_for_completion(show_output=False, wait_post_processing=True)\n",
+        "test_metrics = test_run.get_metrics()"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "CIs = {\"metric_name\": [], \"lower_ci_95\": [], \"upper_ci_95\": [], \"value\": []}\n",
+        "\n",
+        "for key, ci in test_metrics.items():\n",
+        "    if key.endswith(\"extras\"):\n",
+        "        CIs[\"metric_name\"].append(key[:-7])  # remove \"_extras\" to get metric name\n",
+        "        for ci_key, ci_value in ci.items():\n",
+        "            CIs[ci_key].append(ci_value)\n",
+        "\n",
+        "pd.DataFrame(CIs)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "Alternatively, you can retrieve the best model, run inference yourself, and get confidence intervals for all metrics. However, since the confidence interval calculation involves a large number of bootstrap iterations, it will take some time."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "test_df = digit_test.to_pandas_dataframe()\n",
+        "y_test = test_df[label_column_name]\n",
+        "test_df = test_df.drop(columns=[label_column_name])\n",
+        "y_pred_proba = fitted_model.predict_proba(test_df)"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "from azureml.automl.runtime._ml_engine.classification_ml_engine import (\n",
+        "    evaluate_classifier,\n",
+        ")\n",
+        "\n",
+        "test_metrics = evaluate_classifier(\n",
+        "    y_test,\n",
+        "    y_pred_proba,\n",
+        "    constants.CLASSIFICATION_SCALAR_SET,\n",
+        "    labels,\n",
+        "    labels,\n",
+        "    enable_metric_confidence=True,\n",
+        ")"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "CIs = {\"metric_name\": [], \"lower_ci_95\": [], \"upper_ci_95\": [], \"value\": []}\n",
+        "\n",
+        "for key, ci in test_metrics.items():\n",
+        "    if key.endswith(\"extras\"):\n",
+        "        CIs[\"metric_name\"].append(key[:-7])  # remove \"_extras\" to get metric name\n",
+        "        for ci_key, ci_value in ci.items():\n",
+        "            CIs[ci_key].append(ci_value)\n",
+        "\n",
+        "pd.DataFrame(CIs)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "## Confidence Interval for Regression\n",
+        "\n",
+        "Confidence intervals are also supported for regression runs, and in that case all of them can be found in the UI by following the same steps as for a classification run. Below we provide example code for a regression run, a screenshot of the confidence intervals, and code to retrieve them."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "def load_regression_data():\n",
+        "    if os.path.exists(\"./data/boston.csv\"):\n",
+        "        print(\"Find downloaded dataset. 
Loading\")\n", + " else:\n", + " print(\"Downloading dataset\")\n", + " os.makedirs(\"./data\", exist_ok=True)\n", + " regression_data = sklearn.datasets.load_boston()\n", + " X = regression_data[\"data\"]\n", + " y = regression_data[\"target\"]\n", + " full_data = np.concatenate([X, y.reshape(-1, 1)], axis=1)\n", + " columns = [\"feature_{}\".format(i) for i in range(X.shape[1])] + [\"label\"]\n", + " full_data = pd.DataFrame(data=full_data, columns=columns)\n", + " full_data.to_csv(\"./data/boston.csv\", index=False)\n", + " print(\"Dataset downloaded\")\n", + " ws = Workspace.from_config()\n", + " datastore = ws.get_default_datastore()\n", + " datastore.upload(\n", + " src_dir=\"./data\", target_path=\"data/new-metric-features/\", overwrite=True\n", + " )\n", + " data = Dataset.Tabular.from_delimited_files(\n", + " path=[(datastore, (\"data/new-metric-features/boston.csv\"))]\n", + " )\n", + " train, test = data.random_split(percentage=0.8, seed=101)\n", + " validation, test = test.random_split(percentage=0.5, seed=47)\n", + " return train, validation, test, \"label\"\n", + "\n", + "\n", + "boston_train, boston_validation, boston_test, label_column_name = load_regression_data()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "automl_settings = {\n", + " \"primary_metric\": \"normalized_root_mean_squared_error\",\n", + " \"enable_early_stopping\": True,\n", + " \"max_concurrent_iterations\": 6,\n", + " \"experiment_timeout_hours\": 0.25,\n", + " \"verbosity\": logging.INFO,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"regression\",\n", + " debug_log=\"automl_errors.log\",\n", + " compute_target=compute_target,\n", + " training_data=boston_train,\n", + " validation_data=boston_validation,\n", + " test_data=boston_test, # if you only have a test set, you can pass validation set here, instead of at validation_data\n", + " label_column_name=label_column_name,\n", + " enable_metric_confidence=True,\n", + " **automl_settings\n", + ")\n", + "\n", + "regression_run = experiment.submit(automl_config, show_output=False)\n", + "regression_run.wait_for_completion(show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run, fitted_model = regression_run.get_output()\n", + "test_run = next(best_run.get_children(type=\"automl.model_test\"))\n", + "test_run.wait_for_completion(show_output=False, wait_post_processing=True)\n", + "test_metrics = test_run.get_metrics()\n", + "\n", + "CIs = {\"metric_name\": [], \"lower_ci_95\": [], \"upper_ci_95\": [], \"value\": []}\n", + "\n", + "for key, ci in test_metrics.items():\n", + " if key.endswith(\"extras\"):\n", + " CIs[\"metric_name\"].append(key[:-7]) # remove \"_extras\" to get metric name\n", + " for ci_key, ci_value in ci.items():\n", + " CIs[ci_key].append(ci_value)\n", + "\n", + "pd.DataFrame(CIs)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![](imgs/regression-confidence-interval.png)" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "lifengwei" + } + ], + "category": "tutorial", + "compute": [ + "AML Compute" + ], + "datasets": [ + "Digits", + "Boston" + ], + "deployment": [ + "None" + ], + "exclude_from_index": false, + "file_extension": ".py", + "framework": [ + "None" + ], + "friendly_name": "New metric features in Azure AutoML", + "index_order": 5, + "interpreter": { + "hash": 
"cc0892e042a269bcf4aec58f0c86eb5e2be478ff7be4e5f6b2680e2af1718f2e" + }, + "kernelspec": { + "display_name": "Python 3.7.0 64-bit ('pypi': conda)", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.0" + }, + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "tags": [ + "remote_run", + "AutomatedML" + ], + "task": "Classification", + "version": "3.6.7" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/best-model.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/best-model.png new file mode 100644 index 000000000..2c80cea22 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/best-model.png differ diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/binary-metrics.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/binary-metrics.png new file mode 100644 index 000000000..6185ecee3 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/binary-metrics.png differ diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/child-runs.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/child-runs.png new file mode 100644 index 000000000..3bfa294e0 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/child-runs.png differ diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/confidence-intervals.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/confidence-intervals.png new file mode 100644 index 000000000..692173b4c Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/confidence-intervals.png differ diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/regression-confidence-interval.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/regression-confidence-interval.png new file mode 100644 index 000000000..31f8d7c61 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/regression-confidence-interval.png differ diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/test-run-id.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/test-run-id.png new file mode 100644 index 000000000..7a8ec3b7a Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/test-run-id.png differ diff --git a/how-to-use-azureml/automated-machine-learning/metrics/imgs/test-run.png b/how-to-use-azureml/automated-machine-learning/metrics/imgs/test-run.png new file mode 100644 index 000000000..7eb314912 Binary files /dev/null and b/how-to-use-azureml/automated-machine-learning/metrics/imgs/test-run.png differ diff --git a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb index e033ee4ab..d74d14fb4 100644 --- a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb +++ 
b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/auto-ml-regression-explanation-featurization.ipynb @@ -1,910 +1,947 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/regression-car-price-model-explaination-and-featurization/auto-ml-regression.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Regression with Aml Compute**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Data](#Data)\n", - "1. [Train](#Train)\n", - "1. [Results](#Results)\n", - "1. [Test](#Test)\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "In this example we use the Hardware Performance Dataset to showcase how you can use AutoML for a simple regression problem. The Regression goal is to predict the performance of certain combinations of hardware parts.\n", - "After training AutoML models for this regression data set, we show how you can compute model explanations on your remote compute using a sample explainer script.\n", - "\n", - "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", - "\n", - "In this notebook you will learn how to:\n", - "1. Create an `Experiment` in an existing `Workspace`.\n", - "2. Instantiating AutoMLConfig with FeaturizationConfig for customization\n", - "3. Train the model using remote compute.\n", - "4. Explore the results and featurization transparency options\n", - "5. Setup remote compute for computing the model explanations for a given AutoML model.\n", - "6. Start an AzureML experiment on your remote compute to compute explanations for an AutoML model.\n", - "7. Download the feature importance for engineered features and visualize the explanations for engineered features on azure portal. \n", - "8. Download the feature importance for raw features and visualize the explanations for raw features on azure portal. \n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup\n", - "\n", - "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import json\n", - "import logging\n", - "\n", - "from matplotlib import pyplot as plt\n", - "import numpy as np\n", - "import pandas as pd\n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.automl.core.featurization import FeaturizationConfig\n", - "from azureml.train.automl import AutoMLConfig\n", - "from azureml.core.dataset import Dataset" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# Choose a name for the experiment.\n", - "experiment_name = 'automl-regression-hardware-explain'\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace Name'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Experiment Name'] = experiment.name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Create or Attach existing AmlCompute\n", - "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create `AmlCompute` as your training compute resource.\n", - "\n", - "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", - "\n", - "**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", - "\n", - "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your cluster.\n", - "amlcompute_cluster_name = \"hardware-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n", - " max_nodes=4)\n", - " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Setup Training and Test Data for AutoML experiment\n", - "\n", - "Load the hardware dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. We also register the datasets in your workspace using a name so that these datasets may be accessed from the remote compute." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data = 'https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv'\n", - "\n", - "dataset = Dataset.Tabular.from_delimited_files(data)\n", - "\n", - "# Split the dataset into train and test datasets\n", - "train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n", - "\n", - "\n", - "# Register the train dataset with your workspace\n", - "train_data.register(workspace = ws, name = 'machineData_train_dataset',\n", - " description = 'hardware performance training data',\n", - " create_new_version=True)\n", - "\n", - "# Register the test dataset with your workspace\n", - "test_data.register(workspace = ws, name = 'machineData_test_dataset', description = 'hardware performance test data', create_new_version=True)\n", - "\n", - "label =\"ERP\"\n", - "\n", - "train_data.to_pandas_dataframe().head()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|classification, regression or forecasting|\n", - "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics:
          spearman_correlation
          normalized_root_mean_squared_error
          r2_score
          normalized_mean_absolute_error|\n", - "|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n", - "|**enable_early_stopping**| Flag to enble early termination if the score is not improving in the short term.|\n", - "|**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*. Note: If the input data is sparse, featurization cannot be turned on.|\n", - "|**n_cross_validations**|Number of cross validation splits.|\n", - "|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n", - "|**label_column_name**|(sparse) array-like, shape = [n_samples, ], targets values.|" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Customization\n", - "\n", - "Supported customization includes:\n", - "\n", - "1. Column purpose update: Override feature type for the specified column.\n", - "2. Transformer parameter update: Update parameters for the specified transformer. Currently supports Imputer and HashOneHotEncoder.\n", - "3. Drop columns: Columns to drop from being featurized.\n", - "4. Block transformers: Allow/Block transformers to be used on featurization process." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Create FeaturizationConfig object using API calls" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "sample-featurizationconfig-remarks2" - ] - }, - "outputs": [], - "source": [ - "featurization_config = FeaturizationConfig()\n", - "featurization_config.blocked_transformers = ['LabelEncoder']\n", - "#featurization_config.drop_columns = ['MMIN']\n", - "featurization_config.add_column_purpose('MYCT', 'Numeric')\n", - "featurization_config.add_column_purpose('VendorName', 'CategoricalHash')\n", - "#default strategy mean, add transformer param for for 3 columns\n", - "featurization_config.add_transformer_params('Imputer', ['CACH'], {\"strategy\": \"median\"})\n", - "featurization_config.add_transformer_params('Imputer', ['CHMIN'], {\"strategy\": \"median\"})\n", - "featurization_config.add_transformer_params('Imputer', ['PRP'], {\"strategy\": \"most_frequent\"})\n", - "#featurization_config.add_transformer_params('HashOneHotEncoder', [], {\"number_of_bits\": 3})" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "sample-featurizationconfig-remarks3" - ] - }, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"enable_early_stopping\": True, \n", - " \"experiment_timeout_hours\" : 0.25,\n", - " \"max_concurrent_iterations\": 4,\n", - " \"max_cores_per_iteration\": -1,\n", - " \"n_cross_validations\": 5,\n", - " \"primary_metric\": 'normalized_root_mean_squared_error',\n", - " \"verbosity\": logging.INFO\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'regression',\n", - " debug_log = 'automl_errors.log',\n", - " compute_target=compute_target,\n", - " featurization=featurization_config,\n", - " training_data = train_data,\n", - " label_column_name = label,\n", - " **automl_settings\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Call the `submit` method on the experiment object and pass the run configuration. 
Execution of local runs is synchronous. Depending on the data and the number of iterations this can run for a while.\n", - "In this example, we specify `show_output = True` to print currently running iterations to the console." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output = False)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Run the following cell to access previous runs. Uncomment the cell below and update the run_id." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "#from azureml.train.automl.run import AutoMLRun\n", - "#remote_run = AutoMLRun(experiment=experiment, run_id='>', automl_run.experiment.name) # your experiment name.\n", - "content = content.replace('<>', automl_run.id) # Run-id of the AutoML run for which you want to explain the model.\n", - "content = content.replace('<>', 'ERP') # Your target column name\n", - "content = content.replace('<>', 'regression') # Training task type\n", - "# Name of your training dataset register with your workspace\n", - "content = content.replace('<>', 'machineData_train_dataset') \n", - "# Name of your test dataset register with your workspace\n", - "content = content.replace('<>', 'machineData_test_dataset')\n", - "\n", - "# Write sample file into your script folder.\n", - "with open(script_file_name, 'w') as cefw:\n", - " cefw.write(content)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Create conda configuration for model explanations experiment from automl_run object" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.runconfig import RunConfiguration\n", - "\n", - "# create a new RunConfig object\n", - "conda_run_config = RunConfiguration(framework=\"python\")\n", - "\n", - "# Set compute target to AmlCompute\n", - "conda_run_config.target = compute_target\n", - "conda_run_config.environment.docker.enabled = True\n", - "\n", - "# specify CondaDependencies obj\n", - "conda_run_config.environment.python.conda_dependencies = automl_run.get_environment().python.conda_dependencies" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Submit the experiment for model explanations\n", - "Submit the experiment with the above `run_config` and the sample script for computing explanations." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Now submit a run on AmlCompute for model explanations\n", - "from azureml.core.script_run_config import ScriptRunConfig\n", - "\n", - "script_run_config = ScriptRunConfig(source_directory=script_folder,\n", - " script='train_explainer.py',\n", - " run_config=conda_run_config)\n", - "\n", - "run = experiment.submit(script_run_config)\n", - "\n", - "# Show run details\n", - "run" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%%time\n", - "# Shows output of the run on stdout.\n", - "run.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Feature importance and visualizing explanation dashboard\n", - "In this section we describe how you can download the explanation results from the explanations experiment and visualize the feature importance for your AutoML model on the azure portal." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Download engineered feature importance from artifact store\n", - "You can use *ExplanationClient* to download the engineered feature explanations from the artifact store of the *automl_run*. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.interpret import ExplanationClient\n", - "client = ExplanationClient.from_run(automl_run)\n", - "engineered_explanations = client.download_model_explanation(raw=False, comment='engineered explanations')\n", - "print(engineered_explanations.get_feature_importance_dict())\n", - "print(\"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Download raw feature importance from artifact store\n", - "You can use *ExplanationClient* to download the raw feature explanations from the artifact store of the *automl_run*. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "raw_explanations = client.download_model_explanation(raw=True, comment='raw explanations')\n", - "print(raw_explanations.get_feature_importance_dict())\n", - "print(\"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\" + automl_run.get_portal_url())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Operationalize\n", - "In this section we will show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n", - "\n", - "### Register the AutoML model and the scoring explainer\n", - "We use the *TreeScoringExplainer* from *azureml-interpret* package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. \n", - "In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# Register trained automl model present in the 'outputs' folder in the artifacts\n", - "original_model = automl_run.register_model(model_name='automl_model', \n", - " model_path='outputs/model.pkl')\n", - "scoring_explainer_model = automl_run.register_model(model_name='scoring_explainer',\n", - " model_path='outputs/scoring_explainer.pkl')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Create the conda dependencies for setting up the service\n", - "We need to create the conda dependencies comprising of the *azureml* packages using the training environment from the *automl_run*." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "conda_dep = automl_run.get_environment().python.conda_dependencies\n", - "\n", - "with open(\"myenv.yml\",\"w\") as f:\n", - " f.write(conda_dep.serialize_to_string())\n", - "\n", - "with open(\"myenv.yml\",\"r\") as f:\n", - " print(f.read())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### View your scoring file" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "with open(\"score_explain.py\",\"r\") as f:\n", - " print(f.read())" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Deploy the service\n", - "In the cell below, we deploy the service using the conda file and the scoring file from the previous steps. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.model import InferenceConfig\n", - "from azureml.core.webservice import AciWebservice\n", - "from azureml.core.model import Model\n", - "from azureml.core.environment import Environment\n", - "\n", - "aciconfig = AciWebservice.deploy_configuration(cpu_cores=2, \n", - " memory_gb=2, \n", - " tags={\"data\": \"Machine Data\", \n", - " \"method\" : \"local_explanation\"}, \n", - " description='Get local explanations for Machine test data')\n", - "\n", - "myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n", - "inference_config = InferenceConfig(entry_script=\"score_explain.py\", environment=myenv)\n", - "\n", - "# Use configs and models generated above\n", - "service = Model.deploy(ws, 'model-scoring', [scoring_explainer_model, original_model], inference_config, aciconfig)\n", - "service.wait_for_deployment(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### View the service logs" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "service.get_logs()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Inference using some test data\n", - "Inference using some test data to see the predicted value from autml model, view the engineered feature importance for the predicted value and raw feature importance for the predicted value." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "if service.state == 'Healthy':\n", - " X_test = test_data.drop_columns([label]).to_pandas_dataframe()\n", - " # Serialize the first row of the test data into json\n", - " X_test_json = X_test[:1].to_json(orient='records')\n", - " print(X_test_json)\n", - " # Call the service to get the predictions and the engineered and raw explanations\n", - " output = service.run(X_test_json)\n", - " # Print the predicted value\n", - " print(output['predictions'])\n", - " # Print the engineered feature importances for the predicted value\n", - " print(output['engineered_local_importance_values'])\n", - " # Print the raw feature importances for the predicted value\n", - " print(output['raw_local_importance_values'])" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Delete the service\n", - "Delete the service once you have finished inferencing." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "service.delete()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Test" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# preview the first 3 rows of the dataset\n", - "\n", - "test_data = test_data.to_pandas_dataframe()\n", - "y_test = test_data['ERP'].fillna(0)\n", - "test_data = test_data.drop('ERP', 1)\n", - "test_data = test_data.fillna(0)\n", - "\n", - "\n", - "train_data = train_data.to_pandas_dataframe()\n", - "y_train = train_data['ERP'].fillna(0)\n", - "train_data = train_data.drop('ERP', 1)\n", - "train_data = train_data.fillna(0)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "y_pred_train = fitted_model.predict(train_data)\n", - "y_residual_train = y_train - y_pred_train\n", - "\n", - "y_pred_test = fitted_model.predict(test_data)\n", - "y_residual_test = y_test - y_pred_test" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%matplotlib inline\n", - "from sklearn.metrics import mean_squared_error, r2_score\n", - "\n", - "# Set up a multi-plot chart.\n", - "f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n", - "f.suptitle('Regression Residual Values', fontsize = 18)\n", - "f.set_figheight(6)\n", - "f.set_figwidth(16)\n", - "\n", - "# Plot residual values of training set.\n", - "a0.axis([0, 360, -100, 100])\n", - "a0.plot(y_residual_train, 'bo', alpha = 0.5)\n", - "a0.plot([-10,360],[0,0], 'r-', lw = 3)\n", - "a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n", - "a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)),fontsize = 12)\n", - "a0.set_xlabel('Training samples', fontsize = 12)\n", - "a0.set_ylabel('Residual Values', fontsize = 12)\n", - "\n", - "# Plot residual values of test set.\n", - "a1.axis([0, 90, -100, 100])\n", - "a1.plot(y_residual_test, 'bo', alpha = 0.5)\n", - "a1.plot([-10,360],[0,0], 'r-', lw = 3)\n", - "a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n", - "a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)),fontsize = 12)\n", - "a1.set_xlabel('Test samples', fontsize = 12)\n", - "a1.set_yticklabels([])\n", - "\n", - "plt.show()" - ] - }, - { - 
"cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%matplotlib inline\n", - "test_pred = plt.scatter(y_test, y_pred_test, color='')\n", - "test_test = plt.scatter(y_test, y_test, color='g')\n", - "plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n", - "plt.show()" - ] - } - ], - "metadata": { - "authors": [ - { - "name": "anshirga" - } - ], - "categories": [ - "how-to-use-azureml", - "automated-machine-learning" - ], - "category": "tutorial", - "compute": [ - "AML" - ], - "datasets": [ - "MachineData" - ], - "deployment": [ - "ACI" - ], - "exclude_from_index": false, - "framework": [ - "None" - ], - "friendly_name": "Automated ML run with featurization and model explainability.", - "index_order": 5, - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.7" - }, + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Regression with Aml Compute**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Data](#Data)\n", + "1. [Train](#Train)\n", + "1. [Results](#Results)\n", + "1. [Test](#Test)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "In this example we use the Hardware Performance Dataset to showcase how you can use AutoML for a simple regression problem. The Regression goal is to predict the performance of certain combinations of hardware parts.\n", + "After training AutoML models for this regression data set, we show how you can compute model explanations on your remote compute using a sample explainer script.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", + "\n", + "In this notebook you will learn how to:\n", + "1. Create an `Experiment` in an existing `Workspace`.\n", + "2. Instantiating AutoMLConfig with FeaturizationConfig for customization\n", + "3. Train the model using remote compute.\n", + "4. Explore the results and featurization transparency options\n", + "5. Setup remote compute for computing the model explanations for a given AutoML model.\n", + "6. Start an AzureML experiment on your remote compute to compute explanations for an AutoML model.\n", + "7. Download the feature importance for engineered features and visualize the explanations for engineered features on azure portal. \n", + "8. Download the feature importance for raw features and visualize the explanations for raw features on azure portal. \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import json\n", + "import logging\n", + "\n", + "from matplotlib import pyplot as plt\n", + "import numpy as np\n", + "import pandas as pd\n", + "\n", + "import azureml.core\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "\n", + "from azureml.automl.core.featurization import FeaturizationConfig\n", + "from azureml.train.automl import AutoMLConfig\n", + "from azureml.core.dataset import Dataset" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# Choose a name for the experiment.\n", + "experiment_name = \"automl-regression-hardware-explain\"\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace Name\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Experiment Name\"] = experiment.name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create or Attach existing AmlCompute\n", + "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you create `AmlCompute` as your training compute resource.\n", + "\n", + "> Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.\n", + "\n", + "**Creation of AmlCompute takes approximately 5 minutes.** If the AmlCompute with that name is already in your workspace this code will skip the creation process.\n", + "\n", + "As with other Azure services, there are limits on certain resources (e.g. AmlCompute) associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota." 
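+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "If you are unsure which VM sizes are available in your workspace's region, you can list them first. This is an optional check; `AmlCompute.supported_vmsizes` returns a list of dictionaries describing each size:"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "from azureml.core.compute import AmlCompute\n",
+        "\n",
+        "# Print a few of the VM sizes this workspace's region supports,\n",
+        "# e.g. to validate the vm_size chosen in the next cell.\n",
+        "for vm in AmlCompute.supported_vmsizes(workspace=ws)[:5]:\n",
+        "    print(vm)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "The cell below then creates the `hardware-cluster` compute target, or attaches to it if it already exists."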
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your cluster.\n", + "amlcompute_cluster_name = \"hardware-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n", + " )\n", + " compute_target = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Setup Training and Test Data for AutoML experiment\n", + "\n", + "Load the hardware dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. We also register the datasets in your workspace using a name so that these datasets may be accessed from the remote compute." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv\"\n", + "\n", + "dataset = Dataset.Tabular.from_delimited_files(data)\n", + "\n", + "# Split the dataset into train and test datasets\n", + "train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n", + "\n", + "\n", + "# Register the train dataset with your workspace\n", + "train_data.register(\n", + " workspace=ws,\n", + " name=\"machineData_train_dataset\",\n", + " description=\"hardware performance training data\",\n", + " create_new_version=True,\n", + ")\n", + "\n", + "# Register the test dataset with your workspace\n", + "test_data.register(\n", + " workspace=ws,\n", + " name=\"machineData_test_dataset\",\n", + " description=\"hardware performance test data\",\n", + " create_new_version=True,\n", + ")\n", + "\n", + "label = \"ERP\"\n", + "\n", + "train_data.to_pandas_dataframe().head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train\n", + "\n", + "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|classification, regression or forecasting|\n", + "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics:
          spearman_correlation
          normalized_root_mean_squared_error
          r2_score
normalized_mean_absolute_error|\n",
+        "|**experiment_timeout_hours**| Maximum amount of time in hours that all iterations combined can take before the experiment terminates.|\n",
+        "|**enable_early_stopping**| Flag to enable early termination if the score is not improving in the short term.|\n",
+        "|**featurization**| 'auto' / 'off' / FeaturizationConfig Indicator for whether featurization step should be done automatically or not, or whether customized featurization should be used. Setting this enables AutoML to perform featurization on the input to handle *missing data*, and to perform some common *feature extraction*. Note: If the input data is sparse, featurization cannot be turned on.|\n",
+        "|**n_cross_validations**|Number of cross validation splits.|\n",
+        "|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n",
+        "|**label_column_name**|(sparse) array-like, shape = [n_samples, ], target values.|"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "## Customization\n",
+        "\n",
+        "Supported customization includes:\n",
+        "\n",
+        "1. Column purpose update: Override the feature type for the specified column.\n",
+        "2. Transformer parameter update: Update parameters for the specified transformer. Currently supports Imputer and HashOneHotEncoder.\n",
+        "3. Drop columns: Columns to drop from being featurized.\n",
+        "4. Block transformers: Allow/block transformers to be used in the featurization process."
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "Create a FeaturizationConfig object using API calls:"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
       "tags": [
-    "featurization",
-    "explainability",
-    "remote_run",
-    "AutomatedML"
-   ],
-   "task": "Regression"
-  },
- "nbformat": 4,
- "nbformat_minor": 2
-}
\ No newline at end of file
+          "sample-featurizationconfig-remarks2"
+        ]
+      },
+      "outputs": [],
+      "source": [
+        "featurization_config = FeaturizationConfig()\n",
+        "featurization_config.blocked_transformers = [\"LabelEncoder\"]\n",
+        "# featurization_config.drop_columns = ['MMIN']\n",
+        "featurization_config.add_column_purpose(\"MYCT\", \"Numeric\")\n",
+        "featurization_config.add_column_purpose(\"VendorName\", \"CategoricalHash\")\n",
+        "# default strategy mean, add transformer param for 3 columns\n",
+        "featurization_config.add_transformer_params(\"Imputer\", [\"CACH\"], {\"strategy\": \"median\"})\n",
+        "featurization_config.add_transformer_params(\n",
+        "    \"Imputer\", [\"CHMIN\"], {\"strategy\": \"median\"}\n",
+        ")\n",
+        "featurization_config.add_transformer_params(\n",
+        "    \"Imputer\", [\"PRP\"], {\"strategy\": \"most_frequent\"}\n",
+        ")\n",
+        "# featurization_config.add_transformer_params('HashOneHotEncoder', [], {\"number_of_bits\": 3})"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {
+        "tags": [
+          "sample-featurizationconfig-remarks3"
+        ]
+      },
+      "outputs": [],
+      "source": [
+        "automl_settings = {\n",
+        "    \"enable_early_stopping\": True,\n",
+        "    \"experiment_timeout_hours\": 0.25,\n",
+        "    \"max_concurrent_iterations\": 4,\n",
+        "    \"max_cores_per_iteration\": -1,\n",
+        "    \"n_cross_validations\": 5,\n",
+        "    \"primary_metric\": \"normalized_root_mean_squared_error\",\n",
+        "    \"verbosity\": logging.INFO,\n",
+        "}\n",
+        "\n",
+        "automl_config = AutoMLConfig(\n",
+        "    task=\"regression\",\n",
+        "    debug_log=\"automl_errors.log\",\n",
+        "    compute_target=compute_target,\n",
+        "    featurization=featurization_config,\n",
+        "    training_data=train_data,\n",
+        "    label_column_name=label,\n",
+        "    **automl_settings,\n",
+        ")"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "Call the `submit` method on the experiment object and pass the run configuration. Depending on the data and the number of iterations this can run for a while.\n",
+        "In this example, we specify `show_output = False`, so the currently running iterations are not printed to the console."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "remote_run = experiment.submit(automl_config, show_output=False)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "Run the following cell to access previous runs. Uncomment the cell below and update the run_id."
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "# from azureml.train.automl.run import AutoMLRun\n",
+        "# remote_run = AutoMLRun(experiment=experiment, run_id='>\", automl_run.experiment.name\n",
+        ") # your experiment name.\n",
+        "content = content.replace(\n",
+        "    \"<>\", automl_run.id\n",
+        ") # Run-id of the AutoML run for which you want to explain the model.\n",
+        "content = content.replace(\"<>\", \"ERP\")  # Your target column name\n",
+        "content = content.replace(\"<>\", \"regression\")  # Training task type\n",
+        "# Name of your training dataset registered with your workspace\n",
+        "content = content.replace(\"<>\", \"machineData_train_dataset\")\n",
+        "# Name of your test dataset registered with your workspace\n",
+        "content = content.replace(\"<>\", \"machineData_test_dataset\")\n",
+        "\n",
+        "# Write sample file into your script folder.\n",
+        "with open(script_file_name, \"w\") as cefw:\n",
+        "    cefw.write(content)"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "#### Create the conda configuration for the model explanations experiment from the automl_run object"
+      ]
+    },
+    {
+      "cell_type": "code",
+      "execution_count": null,
+      "metadata": {},
+      "outputs": [],
+      "source": [
+        "from azureml.core.runconfig import RunConfiguration\n",
+        "from azureml.core.conda_dependencies import CondaDependencies\n",
+        "import pkg_resources\n",
+        "\n",
+        "# create a new RunConfig object\n",
+        "conda_run_config = RunConfiguration(framework=\"python\")\n",
+        "\n",
+        "# Set compute target to AmlCompute\n",
+        "conda_run_config.target = compute_target\n",
+        "conda_run_config.environment.docker.enabled = True\n",
+        "\n",
+        "# specify CondaDependencies obj\n",
+        "conda_run_config.environment.python.conda_dependencies = (\n",
+        "    automl_run.get_environment().python.conda_dependencies\n",
+        ")"
+      ]
+    },
+    {
+      "cell_type": "markdown",
+      "metadata": {},
+      "source": [
+        "#### Submit the experiment for model explanations\n",
+        "Submit the experiment with the above `run_config` and the sample script for computing explanations."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Now submit a run on AmlCompute for model explanations\n", + "from azureml.core.script_run_config import ScriptRunConfig\n", + "\n", + "script_run_config = ScriptRunConfig(\n", + " source_directory=script_folder,\n", + " script=\"train_explainer.py\",\n", + " run_config=conda_run_config,\n", + ")\n", + "\n", + "run = experiment.submit(script_run_config)\n", + "\n", + "# Show run details\n", + "run" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%%time\n", + "# Shows output of the run on stdout.\n", + "run.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Feature importance and visualizing explanation dashboard\n", + "In this section we describe how you can download the explanation results from the explanations experiment and visualize the feature importance for your AutoML model on the azure portal." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Download engineered feature importance from artifact store\n", + "You can use *ExplanationClient* to download the engineered feature explanations from the artifact store of the *automl_run*. You can also use azure portal url to view the dash board visualization of the feature importance values of the engineered features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.interpret import ExplanationClient\n", + "\n", + "client = ExplanationClient.from_run(automl_run)\n", + "engineered_explanations = client.download_model_explanation(\n", + " raw=False, comment=\"engineered explanations\"\n", + ")\n", + "print(engineered_explanations.get_feature_importance_dict())\n", + "print(\n", + " \"You can visualize the engineered explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n", + " + automl_run.get_portal_url()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Download raw feature importance from artifact store\n", + "You can use *ExplanationClient* to download the raw feature explanations from the artifact store of the *automl_run*. You can also use azure portal url to view the dash board visualization of the feature importance values of the raw features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "raw_explanations = client.download_model_explanation(\n", + " raw=True, comment=\"raw explanations\"\n", + ")\n", + "print(raw_explanations.get_feature_importance_dict())\n", + "print(\n", + " \"You can visualize the raw explanations under the 'Explanations (preview)' tab in the AutoML run at:-\\n\"\n", + " + automl_run.get_portal_url()\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Operationalize\n", + "In this section we will show how you can operationalize an AutoML model and the explainer which was used to compute the explanations in the previous section.\n", + "\n", + "### Register the AutoML model and the scoring explainer\n", + "We use the *TreeScoringExplainer* from *azureml-interpret* package to create the scoring explainer which will be used to compute the raw and engineered feature importances at the inference time. 
\n", + "In the cell below, we register the AutoML model and the scoring explainer with the Model Management Service." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Register trained automl model present in the 'outputs' folder in the artifacts\n", + "original_model = automl_run.register_model(\n", + " model_name=\"automl_model\", model_path=\"outputs/model.pkl\"\n", + ")\n", + "scoring_explainer_model = automl_run.register_model(\n", + " model_name=\"scoring_explainer\", model_path=\"outputs/scoring_explainer.pkl\"\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Create the conda dependencies for setting up the service\n", + "We need to create the conda dependencies comprising of the *azureml* packages using the training environment from the *automl_run*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "conda_dep = automl_run.get_environment().python.conda_dependencies\n", + "\n", + "with open(\"myenv.yml\", \"w\") as f:\n", + " f.write(conda_dep.serialize_to_string())\n", + "with open(\"myenv.yml\", \"r\") as f:\n", + " print(f.read())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### View your scoring file" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "with open(\"score_explain.py\", \"r\") as f:\n", + " print(f.read())" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Deploy the service\n", + "In the cell below, we deploy the service using the conda file and the scoring file from the previous steps. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.webservice import Webservice\n", + "from azureml.core.model import InferenceConfig\n", + "from azureml.core.webservice import AciWebservice\n", + "from azureml.core.model import Model\n", + "from azureml.core.environment import Environment\n", + "\n", + "aciconfig = AciWebservice.deploy_configuration(\n", + " cpu_cores=2,\n", + " memory_gb=2,\n", + " tags={\"data\": \"Machine Data\", \"method\": \"local_explanation\"},\n", + " description=\"Get local explanations for Machine test data\",\n", + ")\n", + "\n", + "myenv = Environment.from_conda_specification(name=\"myenv\", file_path=\"myenv.yml\")\n", + "inference_config = InferenceConfig(entry_script=\"score_explain.py\", environment=myenv)\n", + "\n", + "# Use configs and models generated above\n", + "service = Model.deploy(\n", + " ws,\n", + " \"model-scoring\",\n", + " [scoring_explainer_model, original_model],\n", + " inference_config,\n", + " aciconfig,\n", + ")\n", + "service.wait_for_deployment(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### View the service logs" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "service.get_logs()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Inference using some test data\n", + "Inference using some test data to see the predicted value from autml model, view the engineered feature importance for the predicted value and raw feature importance for the predicted value." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "if service.state == \"Healthy\":\n", + " X_test = test_data.drop_columns([label]).to_pandas_dataframe()\n", + " # Serialize the first row of the test data into json\n", + " X_test_json = X_test[:1].to_json(orient=\"records\")\n", + " print(X_test_json)\n", + " # Call the service to get the predictions and the engineered and raw explanations\n", + " output = service.run(X_test_json)\n", + " # Print the predicted value\n", + " print(output[\"predictions\"])\n", + " # Print the engineered feature importances for the predicted value\n", + " print(output[\"engineered_local_importance_values\"])\n", + " # Print the raw feature importances for the predicted value\n", + " print(output[\"raw_local_importance_values\"])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Delete the service\n", + "Delete the service once you have finished inferencing." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "service.delete()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Convert the datasets to pandas dataframes, split out the 'ERP' label and fill missing values\n", + "\n", + "test_data = test_data.to_pandas_dataframe()\n", + "y_test = test_data[\"ERP\"].fillna(0)\n", + "test_data = test_data.drop(columns=\"ERP\")\n", + "test_data = test_data.fillna(0)\n", + "\n", + "\n", + "train_data = train_data.to_pandas_dataframe()\n", + "y_train = train_data[\"ERP\"].fillna(0)\n", + "train_data = train_data.drop(columns=\"ERP\")\n", + "train_data = train_data.fillna(0)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "y_pred_train = fitted_model.predict(train_data)\n", + "y_residual_train = y_train - y_pred_train\n", + "\n", + "y_pred_test = fitted_model.predict(test_data)\n", + "y_residual_test = y_test - y_pred_test" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "from sklearn.metrics import mean_squared_error, r2_score\n", + "\n", + "# Set up a multi-plot chart.\n", + "f, (a0, a1) = plt.subplots(\n", + " 1, 2, gridspec_kw={\"width_ratios\": [1, 1], \"wspace\": 0, \"hspace\": 0}\n", + ")\n", + "f.suptitle(\"Regression Residual Values\", fontsize=18)\n", + "f.set_figheight(6)\n", + "f.set_figwidth(16)\n", + "\n", + "# Plot residual values of training set.\n", + "a0.axis([0, 360, -100, 100])\n", + "a0.plot(y_residual_train, \"bo\", alpha=0.5)\n", + "a0.plot([-10, 360], [0, 0], \"r-\", lw=3)\n", + "a0.text(\n", + " 16,\n", + " 170,\n", + " \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_train, y_pred_train))),\n", + " fontsize=12,\n", + ")\n", + "a0.text(\n", + " 16, 140, \"R2 score = {0:.2f}\".format(r2_score(y_train, y_pred_train)), fontsize=12\n", + ")\n", + "a0.set_xlabel(\"Training samples\", fontsize=12)\n", + "a0.set_ylabel(\"Residual Values\", fontsize=12)\n", + "\n", + "# Plot residual values of test set.\n", + "a1.axis([0, 90, -100, 100])\n", + "a1.plot(y_residual_test, \"bo\", alpha=0.5)\n", + "a1.plot([-10, 360], [0, 0], \"r-\", lw=3)\n", + "a1.text(\n", + " 5,\n", + " 170,\n", + " \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_test, y_pred_test))),\n", + " fontsize=12,\n", + ")\n", + "a1.text(5, 140, \"R2 score = 
{0:.2f}\".format(r2_score(y_test, y_pred_test)), fontsize=12)\n", + "a1.set_xlabel(\"Test samples\", fontsize=12)\n", + "a1.set_yticklabels([])\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "test_pred = plt.scatter(y_test, y_pred_test, color=\"\")\n", + "test_test = plt.scatter(y_test, y_test, color=\"g\")\n", + "plt.legend(\n", + " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", + ")\n", + "plt.show()" + ] + } + ], + "metadata": { + "authors": [ + { + "name": "anshirga" + } + ], + "categories": [ + "how-to-use-azureml", + "automated-machine-learning" + ], + "category": "tutorial", + "compute": [ + "AML" + ], + "datasets": [ + "MachineData" + ], + "deployment": [ + "ACI" + ], + "exclude_from_index": false, + "framework": [ + "None" + ], + "friendly_name": "Automated ML run with featurization and model explainability.", + "index_order": 5, + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.7" + }, + "tags": [ + "featurization", + "explainability", + "remote_run", + "AutomatedML" + ], + "task": "Regression" + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/score_explain.py b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/score_explain.py index 25e48cdf8..3451fb7ea 100644 --- a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/score_explain.py +++ b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/score_explain.py @@ -1,7 +1,9 @@ import pandas as pd import joblib from azureml.core.model import Model -from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations +from azureml.train.automl.runtime.automl_explain_utilities import ( + automl_setup_model_explanations, +) def init(): @@ -11,8 +13,8 @@ def init(): # Retrieve the path to the model file using the model name # Assume original model is named original_prediction_model - automl_model_path = Model.get_model_path('automl_model') - scoring_explainer_path = Model.get_model_path('scoring_explainer') + automl_model_path = Model.get_model_path("automl_model") + scoring_explainer_path = Model.get_model_path("scoring_explainer") automl_model = joblib.load(automl_model_path) scoring_explainer = joblib.load(scoring_explainer_path) @@ -20,17 +22,24 @@ def init(): def run(raw_data): # Get predictions and explanations for each data point - data = pd.read_json(raw_data, orient='records') + data = pd.read_json(raw_data, orient="records") # Make prediction predictions = automl_model.predict(data) # Setup for inferencing explanations - automl_explainer_setup_obj = automl_setup_model_explanations(automl_model, - X_test=data, task='regression') + automl_explainer_setup_obj = automl_setup_model_explanations( + automl_model, X_test=data, task="regression" + ) # Retrieve model explanations for engineered explanations - engineered_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform) + engineered_local_importance_values = 
scoring_explainer.explain( + automl_explainer_setup_obj.X_test_transform + ) # Retrieve model explanations for raw explanations - raw_local_importance_values = scoring_explainer.explain(automl_explainer_setup_obj.X_test_transform, get_raw=True) + raw_local_importance_values = scoring_explainer.explain( + automl_explainer_setup_obj.X_test_transform, get_raw=True + ) # You can return any data type as long as it is JSON-serializable - return {'predictions': predictions.tolist(), - 'engineered_local_importance_values': engineered_local_importance_values, - 'raw_local_importance_values': raw_local_importance_values} + return { + "predictions": predictions.tolist(), + "engineered_local_importance_values": engineered_local_importance_values, + "raw_local_importance_values": raw_local_importance_values, + } diff --git a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/train_explainer.py b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/train_explainer.py index 61e200017..9750ee658 100644 --- a/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/train_explainer.py +++ b/how-to-use-azureml/automated-machine-learning/regression-explanation-featurization/train_explainer.py @@ -10,11 +10,13 @@ from azureml.core.run import Run from azureml.interpret.mimic_wrapper import MimicWrapper from azureml.interpret.scoring.scoring_explainer import TreeScoringExplainer -from azureml.train.automl.runtime.automl_explain_utilities import automl_setup_model_explanations, \ - automl_check_model_if_explainable +from azureml.train.automl.runtime.automl_explain_utilities import ( + automl_setup_model_explanations, + automl_check_model_if_explainable, +) -OUTPUT_DIR = './outputs/' +OUTPUT_DIR = "./outputs/" os.makedirs(OUTPUT_DIR, exist_ok=True) # Get workspace from the run context @@ -22,63 +24,77 @@ ws = run.experiment.workspace # Get the AutoML run object from the experiment name and the workspace -experiment = Experiment(ws, '<<experiment_name>>') -automl_run = Run(experiment=experiment, run_id='<<run_id>>') +experiment = Experiment(ws, "<<experiment_name>>") +automl_run = Run(experiment=experiment, run_id="<<run_id>>") # Check if this AutoML model is explainable if not automl_check_model_if_explainable(automl_run): - raise Exception("Model explanations are currently not supported for " + automl_run.get_properties().get( - 'run_algorithm')) + raise Exception( + "Model explanations are currently not supported for " + + automl_run.get_properties().get("run_algorithm") + ) # Download the best model from the artifact store -automl_run.download_file(name=MODEL_PATH, output_file_path='model.pkl') +automl_run.download_file(name=MODEL_PATH, output_file_path="model.pkl") # Load the AutoML model into memory -fitted_model = joblib.load('model.pkl') +fitted_model = joblib.load("model.pkl") # Get the train dataset from the workspace -train_dataset = Dataset.get_by_name(workspace=ws, name='<<train_dataset_name>>') +train_dataset = Dataset.get_by_name(workspace=ws, name="<<train_dataset_name>>") # Drop the labeled column to get the training set. -X_train = train_dataset.drop_columns(columns=['<<target_column_name>>']) -y_train = train_dataset.keep_columns(columns=['<<target_column_name>>'], validate=True) +X_train = train_dataset.drop_columns(columns=["<<target_column_name>>"]) +y_train = train_dataset.keep_columns(columns=["<<target_column_name>>"], validate=True) # Get the test dataset from the workspace -test_dataset = Dataset.get_by_name(workspace=ws, name='<<test_dataset_name>>') +test_dataset = Dataset.get_by_name(workspace=ws, name="<<test_dataset_name>>") # Drop the labeled column to get the testing set. 
-X_test = test_dataset.drop_columns(columns=['<<target_column_name>>']) +X_test = test_dataset.drop_columns(columns=["<<target_column_name>>"]) # Setup the class for explaining the AutoML models -automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, '<<task>>', - X=X_train, X_test=X_test, - y=y_train, - automl_run=automl_run) +automl_explainer_setup_obj = automl_setup_model_explanations( + fitted_model, "<<task>>", X=X_train, X_test=X_test, y=y_train, automl_run=automl_run +) # Initialize the Mimic Explainer -explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel, - init_dataset=automl_explainer_setup_obj.X_transform, - run=automl_explainer_setup_obj.automl_run, - features=automl_explainer_setup_obj.engineered_feature_names, - feature_maps=[automl_explainer_setup_obj.feature_map], - classes=automl_explainer_setup_obj.classes) +explainer = MimicWrapper( + ws, + automl_explainer_setup_obj.automl_estimator, + LGBMExplainableModel, + init_dataset=automl_explainer_setup_obj.X_transform, + run=automl_explainer_setup_obj.automl_run, + features=automl_explainer_setup_obj.engineered_feature_names, + feature_maps=[automl_explainer_setup_obj.feature_map], + classes=automl_explainer_setup_obj.classes, +) # Compute the engineered explanations -engineered_explanations = explainer.explain(['local', 'global'], tag='engineered explanations', - eval_dataset=automl_explainer_setup_obj.X_test_transform) +engineered_explanations = explainer.explain( + ["local", "global"], + tag="engineered explanations", + eval_dataset=automl_explainer_setup_obj.X_test_transform, +) # Compute the raw explanations -raw_explanations = explainer.explain(['local', 'global'], get_raw=True, tag='raw explanations', - raw_feature_names=automl_explainer_setup_obj.raw_feature_names, - eval_dataset=automl_explainer_setup_obj.X_test_transform, - raw_eval_dataset=automl_explainer_setup_obj.X_test_raw) +raw_explanations = explainer.explain( + ["local", "global"], + get_raw=True, + tag="raw explanations", + raw_feature_names=automl_explainer_setup_obj.raw_feature_names, + eval_dataset=automl_explainer_setup_obj.X_test_transform, + raw_eval_dataset=automl_explainer_setup_obj.X_test_raw, +) print("Engineered and raw explanations computed successfully") # Initialize the ScoringExplainer -scoring_explainer = TreeScoringExplainer(explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map]) +scoring_explainer = TreeScoringExplainer( + explainer.explainer, feature_maps=[automl_explainer_setup_obj.feature_map] +) # Pickle scoring explainer locally -with open('scoring_explainer.pkl', 'wb') as stream: +with open("scoring_explainer.pkl", "wb") as stream: joblib.dump(scoring_explainer, stream) # Upload the scoring explainer to the automl run -automl_run.upload_file('outputs/scoring_explainer.pkl', 'scoring_explainer.pkl') +automl_run.upload_file("outputs/scoring_explainer.pkl", "scoring_explainer.pkl") diff --git a/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb b/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb index 0109754de..2dcbf2798 100644 --- a/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb +++ b/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.ipynb @@ -1,477 +1,470 @@ { - "cells": [ - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Copyright (c) Microsoft Corporation. All rights reserved.\n", - "\n", - "Licensed under the MIT License."
- ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "![Impressions](https://PixelServer20190423114238.azurewebsites.net/api/impressions/MachineLearningNotebooks/how-to-use-azureml/automated-machine-learning/regression/auto-ml-regression.png)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "# Automated Machine Learning\n", - "_**Regression with Aml Compute**_\n", - "\n", - "## Contents\n", - "1. [Introduction](#Introduction)\n", - "1. [Setup](#Setup)\n", - "1. [Data](#Data)\n", - "1. [Train](#Train)\n", - "1. [Results](#Results)\n", - "1. [Test](#Test)\n", - "\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Introduction\n", - "In this example we use the Hardware Performance Dataset to showcase how you can use AutoML for a simple regression problem. The Regression goal is to predict the performance of certain combinations of hardware parts.\n", - "\n", - "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", - "\n", - "In this notebook you will learn how to:\n", - "1. Create an `Experiment` in an existing `Workspace`.\n", - "2. Configure AutoML using `AutoMLConfig`.\n", - "3. Train the model using local compute.\n", - "4. Explore the results.\n", - "5. Test the best fitted model." - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Setup\n", - "\n", - "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "import logging\n", - "\n", - "from matplotlib import pyplot as plt\n", - "import numpy as np\n", - "import pandas as pd\n", - " \n", - "\n", - "import azureml.core\n", - "from azureml.core.experiment import Experiment\n", - "from azureml.core.workspace import Workspace\n", - "from azureml.core.dataset import Dataset\n", - "from azureml.train.automl import AutoMLConfig" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "print(\"This notebook was created using version 1.38.0 of the Azure ML SDK\")\n", - "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "ws = Workspace.from_config()\n", - "\n", - "# Choose a name for the experiment.\n", - "experiment_name = 'automl-regression'\n", - "\n", - "experiment = Experiment(ws, experiment_name)\n", - "\n", - "output = {}\n", - "output['Subscription ID'] = ws.subscription_id\n", - "output['Workspace'] = ws.name\n", - "output['Resource Group'] = ws.resource_group\n", - "output['Location'] = ws.location\n", - "output['Run History Name'] = experiment_name\n", - "pd.set_option('display.max_colwidth', -1)\n", - "outputDf = pd.DataFrame(data = output, index = [''])\n", - "outputDf.T" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Using AmlCompute\n", - "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you use `AmlCompute` as your training compute resource." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.core.compute import ComputeTarget, AmlCompute\n", - "from azureml.core.compute_target import ComputeTargetException\n", - "\n", - "# Choose a name for your CPU cluster\n", - "cpu_cluster_name = \"reg-cluster\"\n", - "\n", - "# Verify that cluster does not exist already\n", - "try:\n", - " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", - " print('Found existing cluster, use it.')\n", - "except ComputeTargetException:\n", - " compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS12_V2',\n", - " max_nodes=4)\n", - " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", - "\n", - "compute_target.wait_for_completion(show_output=True)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Data\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Load Data\n", - "Load the hardware dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv\"\n", - "dataset = Dataset.Tabular.from_delimited_files(data)\n", - "\n", - "# Split the dataset into train and test datasets\n", - "train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n", - "\n", - "label = \"ERP\"\n" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Train\n", - "\n", - "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n", - "\n", - "|Property|Description|\n", - "|-|-|\n", - "|**task**|classification, regression or forecasting|\n", - "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics:
          spearman_correlation
          normalized_root_mean_squared_error
          r2_score
          normalized_mean_absolute_error|\n", - "|**n_cross_validations**|Number of cross validation splits.|\n", - "|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n", - "|**label_column_name**|(sparse) array-like, shape = [n_samples, ], targets values.|\n", - "\n", - "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": { - "tags": [ - "automlconfig-remarks-sample" - ] - }, - "outputs": [], - "source": [ - "automl_settings = {\n", - " \"n_cross_validations\": 3,\n", - " \"primary_metric\": 'normalized_root_mean_squared_error',\n", - " \"enable_early_stopping\": True, \n", - " \"experiment_timeout_hours\": 0.3, #for real scenarios we reccommend a timeout of at least one hour \n", - " \"max_concurrent_iterations\": 4,\n", - " \"max_cores_per_iteration\": -1,\n", - " \"verbosity\": logging.INFO,\n", - "}\n", - "\n", - "automl_config = AutoMLConfig(task = 'regression',\n", - " compute_target = compute_target,\n", - " training_data = train_data,\n", - " label_column_name = label,\n", - " **automl_settings\n", - " )" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "Call the `submit` method on the experiment object and pass the run configuration. Execution of remote runs is asynchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run = experiment.submit(automl_config, show_output = False)" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "# If you need to retrieve a run that already started, use the following code\n", - "#from azureml.train.automl.run import AutoMLRun\n", - "#remote_run = AutoMLRun(experiment = experiment, run_id = '')" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Results" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Widget for Monitoring Runs\n", - "\n", - "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", - "\n", - "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "from azureml.widgets import RunDetails\n", - "RunDetails(remote_run).show() " - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "remote_run.wait_for_completion()" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "### Retrieve the Best Model\n", - "\n", - "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*." 
- ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "best_run, fitted_model = remote_run.get_output()\n", - "print(best_run)\n", - "print(fitted_model)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Best Model Based on Any Other Metric\n", - "Show the run and the model that has the smallest `root_mean_squared_error` value (which turned out to be the same as the one with largest `spearman_correlation` value):" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "lookup_metric = \"root_mean_squared_error\"\n", - "best_run, fitted_model = remote_run.get_output(metric = lookup_metric)\n", - "print(best_run)\n", - "print(fitted_model)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "#### Model from a Specific Iteration\n", - "Show the run and the model from the third iteration:" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "iteration = 3\n", - "third_run, third_model = remote_run.get_output(iteration = iteration)\n", - "print(third_run)\n", - "print(third_model)" - ] - }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Test" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "y_test = test_data.keep_columns('ERP').to_pandas_dataframe()\n", - "test_data = test_data.drop_columns('ERP').to_pandas_dataframe()\n", - "\n", - "\n", - "y_train = train_data.keep_columns('ERP').to_pandas_dataframe()\n", - "train_data = train_data.drop_columns('ERP').to_pandas_dataframe()\n" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "y_pred_train = fitted_model.predict(train_data)\n", - "y_residual_train = y_train.values - y_pred_train\n", - "\n", - "y_pred_test = fitted_model.predict(test_data)\n", - "y_residual_test = y_test.values - y_pred_test" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%matplotlib inline\n", - "from sklearn.metrics import mean_squared_error, r2_score\n", - "\n", - "# Set up a multi-plot chart.\n", - "f, (a0, a1) = plt.subplots(1, 2, gridspec_kw = {'width_ratios':[1, 1], 'wspace':0, 'hspace': 0})\n", - "f.suptitle('Regression Residual Values', fontsize = 18)\n", - "f.set_figheight(6)\n", - "f.set_figwidth(16)\n", - "\n", - "# Plot residual values of training set.\n", - "a0.axis([0, 360, -100, 100])\n", - "a0.plot(y_residual_train, 'bo', alpha = 0.5)\n", - "a0.plot([-10,360],[0,0], 'r-', lw = 3)\n", - "a0.text(16,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_train, y_pred_train))), fontsize = 12)\n", - "a0.text(16,140,'R2 score = {0:.2f}'.format(r2_score(y_train, y_pred_train)),fontsize = 12)\n", - "a0.set_xlabel('Training samples', fontsize = 12)\n", - "a0.set_ylabel('Residual Values', fontsize = 12)\n", - "\n", - "# Plot residual values of test set.\n", - "a1.axis([0, 90, -100, 100])\n", - "a1.plot(y_residual_test, 'bo', alpha = 0.5)\n", - "a1.plot([-10,360],[0,0], 'r-', lw = 3)\n", - "a1.text(5,170,'RMSE = {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred_test))), fontsize = 12)\n", - "a1.text(5,140,'R2 score = {0:.2f}'.format(r2_score(y_test, y_pred_test)),fontsize = 12)\n", - "a1.set_xlabel('Test samples', fontsize = 12)\n", - "a1.set_yticklabels([])\n", - "\n", - "plt.show()" - ] - }, - { - "cell_type": 
"code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [ - "%matplotlib inline\n", - "test_pred = plt.scatter(y_test, y_pred_test, color='')\n", - "test_test = plt.scatter(y_test, y_test, color='g')\n", - "plt.legend((test_pred, test_test), ('prediction', 'truth'), loc='upper left', fontsize=8)\n", - "plt.show()" - ] - }, - { - "cell_type": "code", - "execution_count": null, - "metadata": {}, - "outputs": [], - "source": [] - } + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Automated Machine Learning\n", + "_**Regression with Aml Compute**_\n", + "\n", + "## Contents\n", + "1. [Introduction](#Introduction)\n", + "1. [Setup](#Setup)\n", + "1. [Data](#Data)\n", + "1. [Train](#Train)\n", + "1. [Results](#Results)\n", + "1. [Test](#Test)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "In this example we use the Hardware Performance Dataset to showcase how you can use AutoML for a simple regression problem. The Regression goal is to predict the performance of certain combinations of hardware parts.\n", + "\n", + "If you are using an Azure Machine Learning Compute Instance, you are all set. Otherwise, go through the [configuration](../../../configuration.ipynb) notebook first if you haven't already to establish your connection to the AzureML Workspace. \n", + "\n", + "In this notebook you will learn how to:\n", + "1. Create an `Experiment` in an existing `Workspace`.\n", + "2. Configure AutoML using `AutoMLConfig`.\n", + "3. Train the model using local compute.\n", + "4. Explore the results.\n", + "5. Test the best fitted model." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup\n", + "\n", + "As part of the setup you have already created an Azure ML `Workspace` object. For Automated ML you will need to create an `Experiment` object, which is a named object in a `Workspace` used to run experiments." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import logging\n", + "\n", + "from matplotlib import pyplot as plt\n", + "import numpy as np\n", + "import pandas as pd\n", + "\n", + "\n", + "import azureml.core\n", + "from azureml.core.experiment import Experiment\n", + "from azureml.core.workspace import Workspace\n", + "from azureml.core.dataset import Dataset\n", + "from azureml.train.automl import AutoMLConfig" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This sample notebook may use features that are not available in previous versions of the Azure ML SDK." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ws = Workspace.from_config()\n", + "\n", + "# Choose a name for the experiment.\n", + "experiment_name = \"automl-regression\"\n", + "\n", + "experiment = Experiment(ws, experiment_name)\n", + "\n", + "output = {}\n", + "output[\"Subscription ID\"] = ws.subscription_id\n", + "output[\"Workspace\"] = ws.name\n", + "output[\"Resource Group\"] = ws.resource_group\n", + "output[\"Location\"] = ws.location\n", + "output[\"Run History Name\"] = experiment_name\n", + "pd.set_option(\"display.max_colwidth\", -1)\n", + "outputDf = pd.DataFrame(data=output, index=[\"\"])\n", + "outputDf.T" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Using AmlCompute\n", + "You will need to create a [compute target](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture#compute-target) for your AutoML run. In this tutorial, you use `AmlCompute` as your training compute resource." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.core.compute import ComputeTarget, AmlCompute\n", + "from azureml.core.compute_target import ComputeTargetException\n", + "\n", + "# Choose a name for your CPU cluster\n", + "cpu_cluster_name = \"reg-cluster\"\n", + "\n", + "# Verify that cluster does not exist already\n", + "try:\n", + " compute_target = ComputeTarget(workspace=ws, name=cpu_cluster_name)\n", + " print(\"Found existing cluster, use it.\")\n", + "except ComputeTargetException:\n", + " compute_config = AmlCompute.provisioning_configuration(\n", + " vm_size=\"STANDARD_DS12_V2\", max_nodes=4\n", + " )\n", + " compute_target = ComputeTarget.create(ws, cpu_cluster_name, compute_config)\n", + "\n", + "compute_target.wait_for_completion(show_output=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Data\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Load Data\n", + "Load the hardware dataset from a csv file containing both training features and labels. The features are inputs to the model, while the training labels represent the expected output of the model. Next, we'll split the data using random_split and extract the training data for the model. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "data = \"https://automlsamplenotebookdata.blob.core.windows.net/automl-sample-notebook-data/machineData.csv\"\n", + "dataset = Dataset.Tabular.from_delimited_files(data)\n", + "\n", + "# Split the dataset into train and test datasets\n", + "train_data, test_data = dataset.random_split(percentage=0.8, seed=223)\n", + "\n", + "label = \"ERP\"" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Train\n", + "\n", + "Instantiate an `AutoMLConfig` object to specify the settings and data used to run the experiment.\n", + "\n", + "|Property|Description|\n", + "|-|-|\n", + "|**task**|classification, regression or forecasting|\n", + "|**primary_metric**|This is the metric that you want to optimize. Regression supports the following primary metrics:
          spearman_correlation
          normalized_root_mean_squared_error
          r2_score
          normalized_mean_absolute_error|\n", + "|**n_cross_validations**|Number of cross validation splits.|\n", + "|**training_data**|(sparse) array-like, shape = [n_samples, n_features]|\n", + "|**label_column_name**|(sparse) array-like, shape = [n_samples, ], target values.|\n", + "\n", + "**_You can find more information about primary metrics_** [here](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-configure-auto-train#primary-metric)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "tags": [ + "automlconfig-remarks-sample" + ] + }, + "outputs": [], + "source": [ + "automl_settings = {\n", + " \"n_cross_validations\": 3,\n", + " \"primary_metric\": \"r2_score\",\n", + " \"enable_early_stopping\": True,\n", + " \"experiment_timeout_hours\": 0.3, # for real scenarios we recommend a timeout of at least one hour\n", + " \"max_concurrent_iterations\": 4,\n", + " \"max_cores_per_iteration\": -1,\n", + " \"verbosity\": logging.INFO,\n", + "}\n", + "\n", + "automl_config = AutoMLConfig(\n", + " task=\"regression\",\n", + " compute_target=compute_target,\n", + " training_data=train_data,\n", + " label_column_name=label,\n", + " **automl_settings,\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Call the `submit` method on the experiment object and pass the run configuration. Execution of remote runs is asynchronous. Depending on the data and the number of iterations this can run for a while. Validation errors and current status will be shown when setting `show_output=True` and the execution will be synchronous." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run = experiment.submit(automl_config, show_output=False)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# If you need to retrieve a run that already started, use the following code\n", + "# from azureml.train.automl.run import AutoMLRun\n", + "# remote_run = AutoMLRun(experiment = experiment, run_id = '')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Results" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Widget for Monitoring Runs\n", + "\n", + "The widget will first report a \"loading\" status while running the first iteration. After completing the first iteration, an auto-updating graph and table will be shown. The widget will refresh once per minute, so you should see the graph update as child runs complete.\n", + "\n", + "**Note:** The widget displays a link at the bottom. Use this link to open a web interface to explore the individual run details." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from azureml.widgets import RunDetails\n", + "\n", + "RunDetails(remote_run).show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "remote_run.wait_for_completion()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Retrieve the Best Model\n", + "\n", + "Below we select the best pipeline from our iterations. The `get_output` method returns the best run and the fitted model. The Model includes the pipeline and any pre-processing. Overloads on `get_output` allow you to retrieve the best run and fitted model for *any* logged metric or for a particular *iteration*."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "best_run, fitted_model = remote_run.get_output()\n", + "print(best_run)\n", + "print(fitted_model)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Best Model Based on Any Other Metric\n", + "Show the run and the model that has the smallest `root_mean_squared_error` value (which turned out to be the same as the one with largest `spearman_correlation` value):" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "lookup_metric = \"root_mean_squared_error\"\n", + "best_run, fitted_model = remote_run.get_output(metric=lookup_metric)\n", + "print(best_run)\n", + "print(fitted_model)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Model from a Specific Iteration\n", + "Show the run and the model from the third iteration:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "iteration = 3\n", + "third_run, third_model = remote_run.get_output(iteration=iteration)\n", + "print(third_run)\n", + "print(third_model)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Test" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "y_test = test_data.keep_columns(\"ERP\").to_pandas_dataframe()\n", + "test_data = test_data.drop_columns(\"ERP\").to_pandas_dataframe()\n", + "\n", + "\n", + "y_train = train_data.keep_columns(\"ERP\").to_pandas_dataframe()\n", + "train_data = train_data.drop_columns(\"ERP\").to_pandas_dataframe()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "y_pred_train = fitted_model.predict(train_data)\n", + "y_residual_train = y_train.values - y_pred_train\n", + "\n", + "y_pred_test = fitted_model.predict(test_data)\n", + "y_residual_test = y_test.values - y_pred_test" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "from sklearn.metrics import mean_squared_error, r2_score\n", + "\n", + "# Set up a multi-plot chart.\n", + "f, (a0, a1) = plt.subplots(\n", + " 1, 2, gridspec_kw={\"width_ratios\": [1, 1], \"wspace\": 0, \"hspace\": 0}\n", + ")\n", + "f.suptitle(\"Regression Residual Values\", fontsize=18)\n", + "f.set_figheight(6)\n", + "f.set_figwidth(16)\n", + "\n", + "# Plot residual values of training set.\n", + "a0.axis([0, 360, -100, 100])\n", + "a0.plot(y_residual_train, \"bo\", alpha=0.5)\n", + "a0.plot([-10, 360], [0, 0], \"r-\", lw=3)\n", + "a0.text(\n", + " 16,\n", + " 170,\n", + " \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_train, y_pred_train))),\n", + " fontsize=12,\n", + ")\n", + "a0.text(\n", + " 16, 140, \"R2 score = {0:.2f}\".format(r2_score(y_train, y_pred_train)), fontsize=12\n", + ")\n", + "a0.set_xlabel(\"Training samples\", fontsize=12)\n", + "a0.set_ylabel(\"Residual Values\", fontsize=12)\n", + "\n", + "# Plot residual values of test set.\n", + "a1.axis([0, 90, -100, 100])\n", + "a1.plot(y_residual_test, \"bo\", alpha=0.5)\n", + "a1.plot([-10, 360], [0, 0], \"r-\", lw=3)\n", + "a1.text(\n", + " 5,\n", + " 170,\n", + " \"RMSE = {0:.2f}\".format(np.sqrt(mean_squared_error(y_test, y_pred_test))),\n", + " fontsize=12,\n", + ")\n", + "a1.text(5, 140, \"R2 score = {0:.2f}\".format(r2_score(y_test, y_pred_test)), 
fontsize=12)\n", + "a1.set_xlabel(\"Test samples\", fontsize=12)\n", + "a1.set_yticklabels([])\n", + "\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "test_pred = plt.scatter(y_test, y_pred_test, color=\"b\")\n", + "test_test = plt.scatter(y_test, y_test, color=\"g\")\n", + "plt.legend(\n", + " (test_pred, test_test), (\"prediction\", \"truth\"), loc=\"upper left\", fontsize=8\n", + ")\n", + "plt.show()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "authors": [ + { + "name": "ratanase" + } ], - "metadata": { - "authors": [ - { - "name": "ratanase" - } - ], - "categories": [ - "how-to-use-azureml", - "automated-machine-learning" - ], - "kernelspec": { - "display_name": "Python 3.6", - "language": "python", - "name": "python36" - }, - "language_info": { - "codemirror_mode": { - "name": "ipython", - "version": 3 - }, - "file_extension": ".py", - "mimetype": "text/x-python", - "name": "python", - "nbconvert_exporter": "python", - "pygments_lexer": "ipython3", - "version": "3.6.2" - } - }, - "nbformat": 4, - "nbformat_minor": 2 -} \ No newline at end of file + "categories": [ + "how-to-use-azureml", + "automated-machine-learning" + ], + "kernelspec": { + "display_name": "Python 3.6 - AzureML", + "language": "python", + "name": "python3-azureml" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.6.2" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb b/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb index 3b37c55ec..6610d9003 100644 --- a/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb +++ b/how-to-use-azureml/deployment/onnx/onnx-inference-facial-expression-recognition-deploy.ipynb @@ -70,7 +70,7 @@ "\n", "import urllib.request\n", "\n", - "onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.tar.gz?raw=true\"\n", + "onnx_model_url = \"https://github.com/onnx/models/blob/master/vision/body_analysis/emotion_ferplus/model/emotion-ferplus-7.tar.gz?raw=true\"\n", "\n", "urllib.request.urlretrieve(onnx_model_url, filename=\"emotion-ferplus-7.tar.gz\")\n", "\n", diff --git a/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb b/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb index 7d481129a..33b8bccee 100644 --- a/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb +++ b/how-to-use-azureml/deployment/onnx/onnx-inference-mnist-deploy.ipynb @@ -70,7 +70,7 @@ "\n", "import urllib.request\n", "\n", - "onnx_model_url = \"https://github.com/onnx/models/blob/main/vision/classification/mnist/model/mnist-7.tar.gz?raw=true\"\n", + "onnx_model_url = \"https://github.com/onnx/models/blob/master/vision/classification/mnist/model/mnist-7.tar.gz?raw=true\"\n", "\n", "urllib.request.urlretrieve(onnx_model_url, filename=\"mnist-7.tar.gz\")" ] diff --git a/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/scripts/prepdata/cleanse.py 
b/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/scripts/prepdata/cleanse.py index bdbfb465d..0da693cca 100644 --- a/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/scripts/prepdata/cleanse.py +++ b/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/scripts/prepdata/cleanse.py @@ -5,6 +5,17 @@ import os from azureml.core import Run + +def get_dict(dict_str): + pairs = dict_str.strip("{}").split(r'\;') + new_dict = {} + for pair in pairs: + key, value = pair.strip().split(":") + new_dict[key.strip().strip("'")] = value.strip().strip("'") + + return new_dict + + print("Cleans the input data") # Get the input green_taxi_data. To learn more about how to access dataset in your script, please @@ -12,6 +23,7 @@ run = Run.get_context() raw_data = run.input_datasets["raw_data"] + parser = argparse.ArgumentParser("cleanse") parser.add_argument("--output_cleanse", type=str, help="cleaned taxi data directory") parser.add_argument("--useful_columns", type=str, help="useful columns to keep") @@ -26,8 +38,8 @@ # These functions ensure that null data is removed from the dataset, # which will help increase machine learning model accuracy. -useful_columns = eval(args.useful_columns.replace(';', ',')) -columns = eval(args.columns.replace(';', ',')) +useful_columns = [s.strip().strip("'") for s in args.useful_columns.strip("[]").split(r'\;')] +columns = get_dict(args.columns) new_df = (raw_data.to_pandas_dataframe() .dropna(how='all') diff --git a/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb b/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb index 0b7c279ed..a061085b0 100644 --- a/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb +++ b/how-to-use-azureml/ml-frameworks/chainer/train-hyperparameter-tune-deploy-with-chainer/train-hyperparameter-tune-deploy-with-chainer.ipynb @@ -254,7 +254,6 @@ "- conda-forge\n", "dependencies:\n", "- python=3.6.2\n", - "- pip=21.3.1\n", "- pip:\n", " - azureml-defaults\n", " - azureml-opendatasets\n", diff --git a/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb b/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb index 3da5e83a1..9a37c9de3 100644 --- a/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb +++ b/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb @@ -431,7 +431,6 @@ "- conda-forge\n", "dependencies:\n", "- python=3.6.2\n", - "- pip=21.3.1\n", "- pip:\n", " - h5py<=2.10.0\n", " - azureml-defaults\n", diff --git a/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb b/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb index 99e21ff64..360f1ea85 100644 --- a/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb 
+++ b/how-to-use-azureml/ml-frameworks/pytorch/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.ipynb @@ -262,7 +262,6 @@ "- conda-forge\n", "dependencies:\n", "- python=3.6.2\n", - "- pip=21.3.1\n", "- pip:\n", " - azureml-defaults\n", " - torch==1.6.0\n",
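As a closing note on the `cleanse.py` change above: the new `get_dict` helper parses the `--columns` argument, which arrives as a brace-wrapped string of `'key':'value'` pairs separated by the two-character sequence `\;`, replacing the earlier `eval` call. A minimal round-trip sketch (the input string below is a made-up example, not taken from the actual pipeline arguments):

```python
def get_dict(dict_str):
    # Copied from the cleanse.py change above: parse "{'k':'v'\;'k2':'v2'}" strings.
    pairs = dict_str.strip("{}").split(r"\;")
    new_dict = {}
    for pair in pairs:
        key, value = pair.strip().split(":")
        new_dict[key.strip().strip("'")] = value.strip().strip("'")
    return new_dict

# Hypothetical example input in the escaped-semicolon format cleanse.py receives:
columns_arg = r"{'vendor':'vendorID'\;'pickup_weekday':'day_of_week'}"
assert get_dict(columns_arg) == {"vendor": "vendorID", "pickup_weekday": "day_of_week"}
```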