AutoSeries 2020

Overview

Rank  Team               Code
1     Denis Vorotyntsev  https://github.com/DenisVorotyntsev/AutoSeries
2     DeepBlueAI         https://github.com/DeepBlueAI/AutoSeries
3     DeepWisdom         https://github.com/DeepWisdom/AutoSeries2019
Web Search and Data Mining (WSDM) 2020 will be held in Houston, Texas, USA from February 03 to 07, 2020. AutoSeries is one of the competitions of the main conference, provided by 4Paradigm and ChaLearn.
Machine learning has achieved remarkable success in time series related tasks, e.g., classification, regression and clustering. For time series regression, ML methods show powerful predictive performance. In practice, however, it is very difficult to switch between different datasets without human effort. To address this problem, Automated Machine Learning (AutoML) explores automatic pipelines that train an effective ML model for a given task. Since its proposal, AutoML has been explored in various applications, and a series of AutoML competitions, e.g., the AutoML Track at KDD Cup, Automated Natural Language Processing (AutoNLP) and Automated Computer Vision (AutoCV), have been organized by 4Paradigm, Inc. and ChaLearn (sponsored by Google and Microsoft). These competitions have drawn a lot of attention from both academic researchers and industrial practitioners.

In this challenge, we propose the Automated Time Series Regression (AutoSeries) competition, which aims at automated solutions for time series regression tasks. The challenge is restricted to multivariate regression problems, which come from different time series domains, including air quality, sales, work presence, city traffic, etc. Datasets are tabular data whose columns fall into the following types: ID_Key features, one Main_Timestamp feature, other features, and the Label/Regression_Value. The combination of ID_Key features identifies a variable (a time series). There is only one Main_Timestamp feature, which indicates the primary timestamp of the data table. The Label/Regression_Value is the target to be predicted. The provided solutions are expected to discover various kinds of information, e.g. time series correlation, when only raw data (time series features) and meta information are provided.

There are three kinds of datasets: public datasets, feedback datasets (corresponding to the Feedback Phase) and private datasets (corresponding to the Private Phase). Public datasets are provided to the participants for developing AutoSeries solutions offline. Afterward, solutions are evaluated on the feedback datasets without human intervention. The results on the private datasets determine the final ranking.

This first AutoSeries competition focuses on time series regression, which poses new challenges to the participants, as listed below:

- How to automatically discover various kinds of information from other variables besides the time feature?
- How to automatically extract useful features from different time series datasets?
- How to automatically handle relations among different time series?
- How to automatically handle both long and short time series data?
- How to automatically design effective neural network structures?

Additionally, participants should also consider:

- How to automatically and efficiently select an appropriate machine learning model and its hyper-parameters?
- How to make the solution more generic, i.e., applicable to unseen datasets?
- How to keep the computational and memory cost acceptable?

Platform

Participants should log in to our platform to start the challenge. Please follow the instructions in the platform [Get Started] section to get access to the data, learn the data format and submission interface, and download the starting kit.

Dataset

This page describes the datasets used in the AutoSeries challenge. 12 datasets are prepared for this competition. 2 public datasets, which can be downloaded, are provided to the participants so that they can develop their AutoSeries solutions offline. Another 5 feedback datasets are provided to evaluate the public leaderboard scores of their AutoSeries solutions. Afterward, solutions are evaluated on 5 private datasets without human intervention. Note that this challenge is restricted to time series regression problems, which come from different domains.

Data

Each dataset contains 5 files: train.data, test.data, test.solution, test_time.data, and info.yaml.

train.data

This is the training data, including the target variable (regression target). Its column types can be read from info.yaml.
There are 3 data types of features, indicated by "num", "str", and "timestamp", respectively:
• num: numerical feature, a real value
• str: string or categorical features
• timestamp: time feature, an integer that indicates the UNIX timestamp
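
For illustration, a UNIX timestamp column can be converted to readable datetimes with pandas (a minimal sketch; the column name and values are hypothetical, and seconds resolution is assumed):

    import pandas as pd

    # Hypothetical table with a UNIX-epoch "timestamp" column (seconds).
    df = pd.DataFrame({"timestamp": [1574370618, 1574374218]})

    # Convert the integer epoch seconds to pandas datetimes.
    df["timestamp"] = pd.to_datetime(df["timestamp"], unit="s")
    print(df["timestamp"].iloc[0])  # 2019-11-21 21:10:18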

test.data

This is the test data, including the target variable (regression target). Its column types can be read from info.yaml.

test.solution

This is the test solution (extracted from test.data).

test_time.data

This file contains the unique test timestamps (the deduplicated primary timestamps extracted from test.data).

info.yaml

For every dataset, we provide an info.yaml file that contains the important information (meta data).

Here we give details about info.yaml:
• time_budget: the time budgets for the different methods of the user model
• schema: stores the data type of each column
• is_multivariate: whether there are multiple time series
• is_relative_time: DEPRECATED, not used in this challenge
• primary_timestamp: the main timestamp column (UNIX timestamp)
• primary_id: a list of column names that uniquely identifies a time series. Note that if is_multivariate is False, this will be an empty list.
• label: the regression target column

Example:

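A hypothetical info.yaml for a multivariate dataset is sketched below, parsed with PyYAML; all field values (budgets, column names, label) are invented for illustration and differ per dataset:

    import yaml  # requires PyYAML

    # A hypothetical info.yaml (illustrative values only).
    EXAMPLE_INFO = """
    time_budget:
      train: 1200
      predict: 600
      update: 600
      save: 120
      load: 120
    is_multivariate: true
    is_relative_time: false
    primary_timestamp: timestamp
    primary_id:
      - station_id
    label: pm2.5
    schema:
      station_id: str
      timestamp: timestamp
      temperature: num
      pm2.5: num
    """

    info = yaml.safe_load(EXAMPLE_INFO)
    print(info["primary_id"])             # ['station_id']: one series per station
    print(info["schema"]["temperature"])  # 'num'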


Rules

This challenge has three phases. The participants are provided with public datasets, which can be downloaded, so that they can develop their AutoSeries solutions offline. Then, the code is uploaded to the platform and participants receive immediate feedback on the performance of their method on another 5 feedback datasets. After the Feedback Phase terminates, there is a Check Phase, where participants are allowed to submit their code once on the private datasets in order to debug. Participants cannot read detailed logs, but they can see whether their code reports errors. Last, in the Private Phase, participants' solutions are evaluated on 5 private datasets. The ranking in the Private Phase counts towards determining the winners.

Submitted code is trained and tested automatically, without any human intervention. Code submitted in the Feedback (or Private) Phase runs on all 5 feedback (or private) datasets in parallel, on separate compute workers, each with its own time budgets.

Running process

The flow diagram of the running process is shown in Figure 1.




Figure 1. The flow diagram of the ingestion program.

The procedure in Figure 1 can be described as follows:

  1. The user program trains a model on the training dataset and saves the model.
  2. In each prediction iteration, the ingestion program loads the model and sends it the test samples with the next timestamp; the user program predicts their labels.
  3. In the next prediction iteration, the samples with the previous timestamp and their true labels are also sent to the user program.
  4. After each iteration, the user program may choose to update the model. After updating, it may continue predicting or update the model again.
  5. The procedure ends when all test samples have been predicted.
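
This loop can be sketched in Python as follows (a simplified illustration only: the real ingestion program ships with the starting kit, additionally enforces time budgets, and all names here, including the predict/update signatures, are assumptions; the Model interface is described in a later section):

    # Simplified sketch of the ingestion loop (steps 1-5 above).
    def run_ingestion(model, train_df, test_df, pred_timestamps, ts_col, model_dir):
        model.train(train_df)                       # step 1: initial training
        model.save(model_dir)
        history = None                              # previous batch, labels revealed
        for t in pred_timestamps:                   # unique, ordered test timestamps
            model.load(model_dir)
            batch = test_df[test_df[ts_col] == t]   # samples to predict now
            # steps 2-3: predict the new batch; the previous batch together
            # with its true labels (history) is handed over at the same time.
            predictions, update_flag = model.predict(batch, history)
            while update_flag:                      # step 4: optional update(s)
                update_flag = model.update(train_df,
                                           test_df[test_df[ts_col] < t])
                model.save(model_dir)
            history = batch                         # labels for t are revealed next
        # step 5: the loop ends once every timestamp has been predicted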

Figure 2 illustrates the predict method. Here \(X^{t}, Y^{t}\) are the samples and true labels of the test dataset at timestamp \(t\), \(\tilde Y^t\) are the predicted labels, and \(\mathbb{I}_{update}^t\) indicates whether the user program chooses to update the model after step \(t\).

Figure 3 shows the update method. In this sub-procedure, the user program can update the model with the training data and all historical data of the test dataset.



Figure 2. The predict method. X, Y are samples and labels.



Figure 3. The update method.

Interface of the user program

(For more details, please check the ingestion program in the starting kit.)

The participant should implement a Model class in their model.py file. The interface and its description can be found in Figure 4. In the Model class, 6 methods should be defined: __init__, train, predict, update, save, and load (a minimal skeleton is sketched after the list below).

  • __init__: in this method the user receives meta information about the dataset and the timestamps of the test dataset. test_timestamp is the main timestamp column of the test dataset; the samples carrying these timestamps (or a subset of them) need to be predicted, and pred_timestamp holds the deduplicated timestamps to be predicted.
  • train: train the model.
  • predict: predict the samples at the next timestamp in the test dataset. You should NOT update the model in this method.
  • update: update the model.
  • save: save the model.
  • load: load the model.
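
A bare-bones skeleton of such a Model class might look as follows (a sketch under assumptions: argument names and return conventions are illustrative, and the authoritative interface is defined by the ingestion program in the starting kit and Figure 4):

    import os
    import pickle

    class Model:
        def __init__(self, info, test_timestamp, pred_timestamp):
            # info: parsed info.yaml contents; test_timestamp: the main
            # timestamp column of the test set; pred_timestamp: its
            # deduplicated values (the timestamps to be predicted).
            self.info = info
            self.pred_timestamp = pred_timestamp
            self.estimator = None

        def train(self, train_data):
            # Fit an estimator on the training table; the only place
            # where initial training may happen.
            ...

        def predict(self, pred_batch, new_history):
            # Predict labels for the samples at the next timestamp.
            # Must NOT retrain here; return predictions plus a flag
            # asking the ingestion program to call update() first.
            update_flag = False
            return [0.0] * len(pred_batch), update_flag

        def update(self, train_data, test_history_data):
            # Optionally refit on training data plus the revealed test
            # history; return True to be called again, False to resume.
            return False

        def save(self, model_dir):
            # Persist the fitted estimator between ingestion steps.
            with open(os.path.join(model_dir, "model.pkl"), "wb") as f:
                pickle.dump(self.estimator, f)

        def load(self, model_dir):
            with open(os.path.join(model_dir, "model.pkl"), "rb") as f:
                self.estimator = pickle.load(f)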

Important remarks about competition rules:

  • Participants can only use data passed by the ingestion program.
  • Participants can only train the model in the train method, and update the model in the update method.



Figure 4. The interface of user program. There are 6 methods that should be defined: __init__, train, predict, update, save and load.

Time budgets

train, predict, update, save and load all run under limited time budgets. The budgets for each dataset can be found in its info.yaml. For train, save and load, each call of the method has its own time budget. For predict and update, all calls of a method share one time budget, i.e. the total running time of all calls of predict cannot exceed the time budget of predict, and similarly for update.
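
For intuition, a shared budget can be accounted for as follows (purely illustrative: the actual enforcement is done by the ingestion program, and the budget values are invented):

    import time

    budgets = {"predict": 600.0, "update": 600.0}  # hypothetical values (seconds)
    spent = {"predict": 0.0, "update": 0.0}

    def call_with_shared_budget(name, func, *args):
        # Every call of `name` draws from the same shared budget.
        start = time.time()
        result = func(*args)
        spent[name] += time.time() - start
        if spent[name] > budgets[name]:
            raise TimeoutError(f"total {name} time exceeded its shared budget")
        return result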

Metrics

For each dataset, we compute the Root-Mean-Square Error (RMSE) as the evaluation metric for this competition. Participants are ranked by RMSE per dataset. After the RMSE is computed for all 5 datasets, the overall ranking is used as the final score for evaluation and appears on the leaderboard. It is computed by averaging the ranks (among all participants) of the RMSE obtained on the 5 datasets.
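
For reference, with \(n\) test samples, ground-truth labels \(y_i\) and predictions \(\tilde{y}_i\), the per-dataset metric is

\[ \mathrm{RMSE} = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \tilde{y}_i \right)^2 } \]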

More details about submission and evaluation can be found on the platform [Get Started - Evaluation].

Terms & Conditions

Please find the challenge rules on the platform website [Get Started - Challenge Rules].

Prizes

  • 1st Place: $2,500
  • 2nd Place: $1,500
  • 3rd Place: $1,000

Timeline

UTC time

  • Nov 21st, 2019, 15:59: Beginning of the Feedback Phase, release of the public datasets. Participants can start submitting code and obtain immediate feedback on the leaderboard.
  • Dec 23rd, 2019, 15:59: Deadline for real personal identification.
  • Dec 30th, 2019, 15:59: End of the Feedback Phase.
  • Dec 30th, 2019, 16:00: Beginning of the Check Phase.
  • Jan 02nd, 2020, 15:59: Submission deadline of the Check Phase.
  • Jan 03rd, 2020, 15:59: End of the Check Phase.
  • Jan 03rd, 2020, 16:00: Beginning of the Final Phase.
  • Jan 05th, 2020, 16:00: Submission deadline of the Final Phase.
  • Jan 06th, 2020, 16:00: End of the Final Phase.

Note that the CodaLab platform uses UTC time. Please pay attention to the time descriptions elsewhere on this page so as not to mistake the time points for each phase of the competition.

About

Please contact the organizers if you have any problem concerning this challenge.

Sponsors

- 4Paradigm

- ChaLearn

- Microsoft


Advisors

- Wei-Wei Tu, 4Paradigm Inc., China (Coordinator, Platform Administrator, Data Provider, Baseline Provider, Sponsor), tuweiwei@4paradigm.com

- Isabelle Guyon, Université Paris-Saclay, France / ChaLearn, USA (Advisor, Platform Administrator), guyon@chalearn.org

- Qiang Yang, Hong Kong University of Science and Technology, Hong Kong, China (Advisor, Sponsor), qyang@cse.ust.hk

Committee (alphabetical order)

- Chenshuo Liu, 4Paradigm Inc., China (Admin), liuchenshuo@4paradigm.com

- Ling Yue, 4Paradigm Inc., China (Admin), yueling@4paradigm.com

- Shouxiang Liu, 4Paradigm Inc., China (Admin), liushouxiang@4paradigm.com

- Xiawei Guo, 4Paradigm Inc., China (Admin), guoxiawei@4paradigm.com

- Zhen Xu, 4Paradigm Inc., China (Admin), xuzhen@4paradigm.com

Organization Institutes

- 4Paradigm Inc.

- ChaLearn




About AutoML

Previous AutoML Challenges:

- First AutoML Challenge

- AutoML@PAKDD2018

- AutoML@NeurIPS2018

- AutoML@PAKDD2019

- AutoML@KDDCUP2019

- AutoCV@IJCNN2019

- AutoCV2@ECML PKDD2019

- AutoNLP@WAIC2019

- AutoSpeech@ACML2019

About 4Paradigm Inc.

Founded in early 2015, 4Paradigm is one of the world's leading AI technology and service providers for industrial applications. 4Paradigm's flagship product, the AI Prophet, is an AI development platform that enables enterprises to effortlessly build their own AI applications and thereby significantly increase their operational efficiency. Using the AI Prophet, a company can develop a data-driven "AI Core System", which can largely be regarded as a second core system next to the traditional transaction-oriented Core Banking System (IBM Mainframe) often found in banks. Beyond this, 4Paradigm has successfully developed more than 100 AI solutions for use in various settings such as finance, telecommunication and internet applications. These solutions include, but are not limited to, smart pricing, real-time anti-fraud systems, precision marketing, and personalized recommendation. While 4Paradigm can completely change the paradigm by which an organization uses its data, its scope of services does not stop there. 4Paradigm uses state-of-the-art machine learning technologies and practical experience to bring together a team of experts ranging from scientists to architects. This team has successfully built China's largest machine learning system and the world's first commercial deep learning system. With its core team pioneering the research of "Transfer Learning", 4Paradigm takes the lead in this area and, as a result, has drawn great attention from worldwide tech giants.

About ChaLearn

ChaLearnis a non-profit organization with vast experience in the organization of academic challenges. ChaLearn is interested in all aspects of challenge organization, including data gathering procedures, evaluation protocols, novel challenge scenarios (e.g., competitions), training for challenge organizers, challenge analytics, resultdissemination and, ultimately, advancing the state-of-the-art through challenges.