Improving Stock Price Prediction with GAN-Based Data Augmentation

Julisa Bana Abraham

Abstract


Stock prices are among the most studied time-series data because predicting them is deemed profitable, yet they remain difficult to forecast: the data are non-linear, non-parametric, non-stationary, and chaotic. Deep learning is one of the methods most recently applied to stock price prediction. Although deep learning performs well on a variety of problems, it must be trained on large amounts of data or it will overfit. This paper proposes a scheme for training a prediction model on stock price time-series data augmented with series generated by a GAN. Evaluation shows that the model trained on augmented data performs better on the AMZN dataset, with 24.47% lower RMSE and 30.27% lower MAE than training on the real data alone, and on the FB dataset, with 15.84% lower RMSE and 13.88% lower MAE. On the GOOG dataset, however, the change is not significant: RMSE is only 0.52% lower, and MAE even increases slightly, by 2.62%.
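The abstract compares models by the relative reduction in RMSE and MAE. As a minimal sketch of how such figures are computed (the metric values below are purely illustrative, not taken from the paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between actual and predicted prices."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error between actual and predicted prices."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def pct_reduction(baseline, augmented):
    """Relative reduction (%) of an error metric versus the baseline model."""
    return 100.0 * (baseline - augmented) / baseline

# Hypothetical errors: model trained on real data only vs. on augmented data
rmse_real, rmse_aug = 4.70, 3.55
print(f"RMSE reduction: {pct_reduction(rmse_real, rmse_aug):.2f}%")
```

A negative `pct_reduction` value would indicate the metric got worse, as the abstract reports for MAE on the GOOG dataset.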

Keywords


GAN; Time-series Forecasting; Data Augmentation; Deep Learning






DOI: http://dx.doi.org/10.24014/ijaidm.v4i1.10740
