The Taub Faculty of Computer Science Events and Talks
Wednesday, 13.04.2022, 08:30
Multivariate time series forecasting differs from univariate forecasting in that it models the dependencies between the different time series in order to produce more precise forecasts. Despite achieving better results in the multivariate setting, classical and deep learning multivariate models are not scalable: their total number of parameters grows quadratically with the number of time series. In this paper we present a novel paradigm for adapting deep learning models to high-dimensional multivariate time series forecasting. Our paradigm adapts any deep learning model into a scalable, reduced, and distributed model, focusing on the parameters necessary to capture the essential dependencies in the data. We leverage the most meaningful dependencies through a cluster-based decomposition of the data, while still learning global dependencies between all the time series via a small global component. A third component efficiently models the common trends of the different time series via a univariate model. Our model achieves state-of-the-art results on smaller datasets and especially on larger ones, while being more memory- and time-efficient.
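The parameter saving behind the three-component decomposition described above can be illustrated with a minimal linear sketch. All names and sizes here (N, k, r, the AR(1) trend coefficient) are hypothetical choices for illustration, not the talk's actual architecture: a full linear one-step forecaster needs an N x N weight matrix (quadratic in the number of series), while a block-per-cluster model plus a low-rank global component plus one shared univariate model stays far smaller.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000   # number of time series (hypothetical)
k = 50     # cluster size (hypothetical)
r = 8      # rank of the small global component (hypothetical)
n_clusters = N // k

# Full multivariate linear one-step forecaster: O(N^2) parameters.
full_params = N * N

# Decomposed forecaster, one term per component of the paradigm:
#  1) cluster components: one k x k model per cluster of series,
#  2) small global component: low-rank factors of sizes N x r and r x N,
#  3) shared univariate trend model (here a single AR(1) coefficient).
cluster_params = n_clusters * k * k
global_params = 2 * N * r
univariate_params = 1
reduced_params = cluster_params + global_params + univariate_params

print(full_params, reduced_params)  # 1_000_000 vs 66_001

# One-step forecast combining the three components.
x = rng.standard_normal(N)  # last observation of each series
W_clusters = [rng.standard_normal((k, k)) * 0.01 for _ in range(n_clusters)]
U = rng.standard_normal((N, r)) * 0.01
V = rng.standard_normal((r, N)) * 0.01
a = 0.9  # shared AR(1) coefficient for the common trend

y = np.concatenate([W @ x[i * k:(i + 1) * k]       # per-cluster term
                    for i, W in enumerate(W_clusters)])
y += U @ (V @ x)                                   # global low-rank term
y += a * x                                         # univariate trend term
```

With these illustrative sizes the decomposed model uses about 66k parameters instead of a million, while each of the three terms still contributes to every forecast.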