[1] BOX G E P, JENKINS G M, REINSEL G C, et al. Time series analysis: forecasting and control[M]. New York: John Wiley & Sons, 2015.
[2] FRANCQ C, ZAKOIAN J M. GARCH models: structure, statistical inference and financial applications[M]. New York: John Wiley & Sons, 2019.
[3] CAO L, GU Q. Dynamic support vector machines for nonstationary time series forecasting[J]. Intelligent Data Analysis, 2002, 6(1): 67-83.
[4] KUMAR M, THENMOZHI M. Forecasting stock index movement: a comparison of support vector machines and random forest[J]. SSRN Electronic Journal, 2006. DOI: 10.2139/ssrn.876544.
[5] 梁宏涛, 刘硕, 杜军威, 等. 深度学习应用于时序预测研究综述[J]. 计算机科学与探索, 2023, 17(6): 1285-1300.
LIANG H T, LIU S, DU J W, et al. Review of deep learning applied to time series prediction[J]. Journal of Frontiers of Computer Science and Technology, 2023, 17(6): 1285-1300.
[6] LEE M C, CHANG J W, HUNG J C, et al. Exploring the effectiveness of deep neural networks with technical analysis applied to stock market prediction[J]. Computer Science and Information Systems, 2021, 18(2): 401-418.
[7] LU W, LI J, WANG J, et al. A CNN-BiLSTM-AM method for stock price prediction[J]. Neural Computing and Applications, 2021, 33: 4741-4753.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Advances in Neural Information Processing Systems 30, Long Beach, Dec 4-9, 2017: 5998-6008.
[9] ZHOU H, ZHANG S, PENG J, et al. Informer: beyond efficient transformer for long sequence time-series forecasting[C]//Proceedings of the 2021 AAAI Conference on Artificial Intelligence. Menlo Park: AAAI, 2021: 11106-11115.
[10] WU H, XU J, WANG J, et al. Autoformer: decomposition transformers with auto-correlation for long-term series forecasting[C]//Advances in Neural Information Processing Systems 34, Dec 6-14, 2021: 22419-22430.
[11] DING Q, WU S, SUN H, et al. Hierarchical multi-scale Gaussian transformer for stock movement prediction[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence, Yokohama, Jan 7-15, 2021: 4640-4646.
[12] DAIYA D, LIN C. Stock movement prediction and portfolio management via multimodal learning with transformer[C]//Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing. Piscataway: IEEE, 2021: 3305-3309.
[13] KIRKPATRICK J, PASCANU R, RABINOWITZ N, et al. Overcoming catastrophic forgetting in neural networks[J]. Proceedings of the National Academy of Sciences, 2017, 114(13): 3521-3526.
[14] ALJUNDI R, KELCHTERMANS K, TUYTELAARS T. Task-free continual learning[C]//Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2019: 11254-11263.
[15] BUZZEGA P, BOSCHINI M, PORRELLO A, et al. Dark experience for general continual learning: a strong, simple baseline[C]//Advances in Neural Information Processing Systems 33, Dec 6-12, 2020: 15920-15930.
[16] MALLYA A, LAZEBNIK S. PackNet: adding multiple tasks to a single network by iterative pruning[C]//Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition. Washington: IEEE Computer Society, 2018: 7765-7773.
[17] WANG X, HAN M. Online sequential extreme learning machine with kernels for nonstationary time series prediction[J]. Neurocomputing, 2014, 145: 90-97.
[18] YU H, DAI Q. DWE-IL: a new incremental learning algorithm for non-stationary time series prediction via dynamically weighting ensemble learning[J]. Applied Intelligence, 2022, 52(1): 174-194.
[19] WANG H, LI M, YUE X. IncLSTM: incremental ensemble LSTM model towards time series data[J]. Computers & Electrical Engineering, 2021, 92: 107156.
[20] WOO G, LIU C, SAHOO D, et al. CoST: contrastive learning of disentangled seasonal-trend representations for time series forecasting[EB/OL]. [2023-11-12]. https://arxiv.org/abs/2202.01575.
[21] HUANG T, CHEN P, ZHANG J, et al. A transferable time series forecasting service using deep transformer model for online systems[C]//Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering. Piscataway: IEEE, 2022: 1-12.
[22] IOFFE S, SZEGEDY C. Batch normalization: accelerating deep network training by reducing internal covariate shift[C]//Proceedings of the 32nd International Conference on Machine Learning, Lille, Jul 6-11, 2015: 448-456.
[23] WU Y, HE K. Group normalization[C]//Proceedings of the 15th European Conference on Computer Vision. Cham: Springer, 2018: 3-19.