Generative Pre-Trained Diffusion Paradigm for Zero-Shot Time Series Forecasting
arXiv preprint arXiv:2406.02212, 2024
In recent years, generative pre-trained paradigms such as Large Language Models (LLMs) and Large Vision Models (LVMs) have achieved revolutionary advances and widespread real-world application. In particular, pre-trained LLM-based temporal models have demonstrated superior generalization and robustness compared to earlier deep models, showcasing the potential of generative pre-trained paradigms as foundation models for time series. However, these LLM-based works focus mainly on cross-modal research, i.e., leveraging the language capabilities of LLMs in time series contexts. Although they achieve impressive performance, they still suffer from concept drift caused by differences in data distribution and from inflexibility caused by dimensional misalignment. To this end, inspired by recent work on LVMs, we reconsider the paradigm of time series modeling. In this paper, we comprehensively explore, for the first time, the effectiveness and superiority of the Generative Pre-trained Diffusion (GPD) paradigm for real-world multivariate time series forecasting (TSF). Specifically, to avoid performance bias introduced by sophisticated networks, we propose a straightforward MLP diffusion network for unconditional modeling of time series. We then employ a zero-shot, tuning-free method to predict (generate) future data using historical data as prompts. Because the GPD paradigm is established directly on the time series modality, it avoids concept drift and enables flexible forecasting of arbitrary lengths. We demonstrate that GPD achieves overall performance and generalization comparable to current SOTA LLM-based and deep-model paradigms on mainstream benchmarks and a variety of TSF tasks. Extensive experiments validate the potential of the GPD paradigm and its value for future related research.
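To make the described pipeline concrete, below is a minimal sketch of the idea as stated in the abstract: an unconditional DDPM-style diffusion model with a plain MLP denoiser over fixed-length time-series windows, and zero-shot, tuning-free forecasting by re-imposing the noised history ("history as prompt") at each reverse step, in the style of diffusion inpainting. The abstract does not give implementation details, so everything here (names like MLPDenoiser and zero_shot_forecast, the noise schedule, the inpainting-style conditioning) is an illustrative assumption, not the paper's actual method.

```python
# Hypothetical sketch of the GPD idea: unconditional diffusion over windows,
# zero-shot forecasting by overwriting the history slots at every reverse step.
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative product \bar{alpha}_t


class MLPDenoiser(nn.Module):
    """Plain MLP that predicts the added noise for a whole window (illustrative)."""
    def __init__(self, window: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, window),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Concatenate a normalized timestep as a simple conditioning feature.
        t_feat = (t.float() / T).unsqueeze(-1)
        return self.net(torch.cat([x_t, t_feat], dim=-1))


@torch.no_grad()
def zero_shot_forecast(model: nn.Module, history: torch.Tensor, horizon: int) -> torch.Tensor:
    """Sample a full window with ancestral DDPM sampling, keeping the observed
    history fixed (at the matching noise level) so only the future is generated."""
    hist_len = history.shape[-1]
    window = hist_len + horizon
    x = torch.randn(history.shape[0], window)
    for t in reversed(range(T)):
        tt = torch.full((x.shape[0],), t, dtype=torch.long)
        eps = model(x, tt)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        x = mean + torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else mean
        # "History as prompt": overwrite the past slots with a copy of the known
        # history noised to the same level as the current sample.
        if t > 0:
            noised_hist = torch.sqrt(alpha_bars[t - 1]) * history + \
                          torch.sqrt(1.0 - alpha_bars[t - 1]) * torch.randn_like(history)
            x[:, :hist_len] = noised_hist
        else:
            x[:, :hist_len] = history
    return x[:, hist_len:]  # return only the generated future segment
```

Under these assumptions, the denoiser is trained purely unconditionally on windows of the target length, and all conditioning happens at sampling time, which is what keeps the procedure zero-shot and tuning-free and allows the forecast horizon to be chosen flexibly.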