
Introduction

 

Due to the almost “unpredictable” nature of the stock market, predicting stock prices is one of the most challenging problems in the financial services industry. In the financial literature, a stock price is treated as a stochastic process and modeled with geometric Brownian motion (GBM), which is expressed as the following stochastic differential equation (SDE):

dS(t) = a\,S(t)\,dt + b\,S(t)\,dW(t)

Here S(t) is the price of the stock at time t, a is the percentage drift, b is the percentage volatility, and W(t) is a Brownian motion. The equation has a nice closed-form solution:

S(t) = S(0)\,\exp\!\left(\left(a - \tfrac{b^2}{2}\right)t + b\,W(t)\right)

However, for real problems, the percentage volatility b can hardly be obtained, making it extremely difficult to model the dynamics of a stock or predict its trends.
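To make the GBM model concrete, the closed-form solution above can be used to simulate price paths once the drift and volatility are fixed. Below is a minimal NumPy sketch; the drift, volatility, and horizon values are made-up illustrative numbers, not estimates from real data.

```python
import numpy as np

def simulate_gbm(s0, a, b, n_days, n_paths, seed=0):
    """Simulate GBM paths via S(t) = S(0) * exp((a - b**2/2) * t + b * W(t))."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / 252.0                              # one trading day, in years
    # Brownian increments: W(t + dt) - W(t) ~ Normal(0, dt)
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_days))
    w = np.cumsum(dw, axis=1)                     # W(t) sampled on a daily grid
    t = dt * np.arange(1, n_days + 1)
    return s0 * np.exp((a - 0.5 * b ** 2) * t + b * w)

# Hypothetical parameters: 10% annual drift, 30% annual volatility
paths = simulate_gbm(s0=100.0, a=0.10, b=0.30, n_days=252, n_paths=5)
print(paths[:, -1])                               # simulated prices after one year
```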


As a compromise, we use neural network models to predict stock price trends. As a data-driven approach, neural networks treat stock prices as time series and use historical data to learn the model parameters and make predictions for the future. Neural networks also offer a good diversity of model architectures, so they are good candidates for ensemble learning, which usually produces more accurate predictions than any single model.


In this blog, we discuss building an ensemble prediction system consisting of neural network models to predict the trends of 542 technology companies’ stocks. We also show how to train the prediction system in parallel by utilizing the HPC (high-performance computing) power of an Intel® Xeon cluster.

 



Ensemble Model for Predicting a Bundle of Stocks


The prices of different stocks do not evolve independently of each other; they influence one another in ways that are very hard to characterize exactly. We can approximately model the dynamics of the stock prices with the following system:

S_i(t) = F_i\big(S_1(t-1), \ldots, S_1(t-k), \;\ldots,\; S_n(t-1), \ldots, S_n(t-k)\big), \quad i = 1, 2, \ldots, n

Here k is the maximum lag used for the predictors. In this blog, we explore using neural network models to approximate F_1, F_2, ..., F_n. As shown in Figure 1, we first train three different neural network models (a multi-layer perceptron (MLP), a convolutional neural network (CNN), and a long short-term memory (LSTM) network) and then perform ensemble forecasting via majority voting, i.e., we predict that the price of a stock will increase (or decrease) if at least two of the three models predict so.
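To make this setup concrete, the sketch below shows one way to build the lagged predictors for the system above and to combine three trained models by majority voting. The function names and the assumption that each model outputs the probability of a price increase are illustrative choices, not the exact code used in our experiments.

```python
import numpy as np

def make_lagged_features(prices, k):
    """prices: array of shape (T, n) holding n stock prices over T days.
    Returns X of shape (T - k, n * k), where each row contains the k most
    recent prices of every stock, and y of shape (T - k, n), where 1 means
    the next-day price went up and 0 means it went down (or stayed flat)."""
    T, n = prices.shape
    X = np.stack([prices[i:i + k].ravel() for i in range(T - k)])
    y = (prices[k:] > prices[k - 1:-1]).astype(int)
    return X, y

def majority_vote(p_mlp, p_cnn, p_lstm):
    """Each argument holds predicted probabilities that a price will rise.
    A stock is predicted to rise if at least 2 of the 3 models say so."""
    votes = (np.stack([p_mlp, p_cnn, p_lstm]) > 0.5).sum(axis=0)
    return (votes >= 2).astype(int)
```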


Figure 1: Training ensemble model for stock price trends prediction on multiple nodes with Intel® Xeon processors




Low Latency Prediction with Intel® Xeon Scalable Processors


Latency is critical for stock trading. An investment portfolio on the stock market needs to be adjusted frequently to hedge risk (to minimize the probability of loss). The hedging strategy needs to be executed at high frequency with very low latency, so training and prediction must finish within very short time periods.


We leverage the HPC power of Intel® Xeon processors to build a low-latency neural network system for real-time prediction of stock price trends. The system in Figure 1 was tested on the Zenith supercomputer in the Dell EMC HPC and AI Innovation Lab, which consists of 422 Dell EMC PowerEdge C6420 servers, each with 2 Intel® Xeon Scalable Gold 6148 processors. Each of these processors has 20 physical cores.


In Figure 1, the MLP, CNN, and LSTM models can be trained simultaneously with multiple processes (e.g., 40 processes) on 3 Zenith nodes. Training with multiple processes was performed with Uber’s Horovod framework. We trained the models with historical data from the past 200 consecutive trading days, using a batch size of 8 per process. We tested over 20 trading days, giving 10,840 (20 × 542) trends to predict; the ensemble system predicted about 56% of them correctly.
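The sketch below illustrates the multi-process training pattern with Horovod’s Keras API. The model definition and data are placeholders, and the learning-rate scaling and epoch count are example choices rather than our exact configuration; in practice each process would read its own shard of the historical price data.

```python
import numpy as np
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()                                    # one Horovod process per MPI rank

# Placeholder data; in a real run each process reads its own shard of the
# lagged stock-price features.
n_features = 32
x_train = np.random.rand(1000, n_features).astype("float32")
y_train = np.random.randint(0, 2, size=(1000, 1)).astype("float32")

model = tf.keras.Sequential([                 # placeholder; swap in MLP/CNN/LSTM
    tf.keras.layers.Dense(64, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Scale the learning rate with the number of processes and let Horovod
# average gradients across them.
opt = tf.keras.optimizers.Adam(0.001 * hvd.size())
opt = hvd.DistributedOptimizer(opt)
model.compile(loss="binary_crossentropy", optimizer=opt, metrics=["accuracy"])

# Keep all workers' weights in sync at the start of training.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# Batch size 8 per process, as in our experiments.
model.fit(x_train, y_train, batch_size=8, epochs=10,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```

A script like this would typically be launched with Horovod’s launcher (e.g., horovodrun -np 40 python train.py) or through mpirun under the cluster’s scheduler.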


Figure 2: Training time comparison for 1 process and 40 processes.

Figure 2 compares the training time costs for 1 process and 40 processes. As shown, distributed training over multiple processes reduces training time by about 10x. The speed-up is especially significant for the LSTM model, because it has more parameters and usually takes longer to train than the other models. With this prediction system, the latency is less than 20 seconds. A stock trading system can take advantage of this low latency to respond to changes and fluctuations in the market more quickly, which in the end helps optimize the investment portfolio so that the risk of loss is lower.




Summary


While deep neural networks have had great success in areas like computer vision, natural language processing, health care, and weather forecasting, it is worthwhile to further explore their applications in the financial industry. Since most operations in the financial industry (e.g., stock trading and options trading) are time-sensitive and require low latency, training prediction models in parallel is a necessity when financial companies integrate artificial intelligence (AI) applications into their business. HPC clusters with Intel® Xeon Scalable processors are good infrastructure choices for building such low-latency AI systems.



Introduction

 

Time series are a very important type of data in the financial services industry. Interest rates, stock prices, exchange rates, and option prices are good examples of this type of data. Time series forecasting plays a critical role when financial institutions design investment strategies and make decisions. Traditionally, statistical models such as SMA (simple moving average), SES (simple exponential smoothing), and ARIMA (autoregressive integrated moving average) have been widely used to perform time series forecasting tasks.
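For reference, these classical baselines take only a few lines with pandas and statsmodels; the short series, window size, and ARIMA order below are arbitrary illustrative values.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing
from statsmodels.tsa.arima.model import ARIMA

# A toy series standing in for a daily interest rate
series = pd.Series([0.010, 0.011, 0.012, 0.011, 0.013, 0.012, 0.014, 0.015,
                    0.016, 0.015, 0.017, 0.016, 0.018, 0.017, 0.019, 0.020])

sma = series.rolling(window=3).mean()                      # simple moving average
ses = SimpleExpSmoothing(series).fit().forecast(1)         # simple exponential smoothing
arima = ARIMA(series, order=(1, 1, 1)).fit().forecast(1)   # ARIMA one-step forecast
```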

 

Neural networks are promising alternatives, as they are more robust for such regression problems due to their flexibility in model architecture (e.g., there are many hyperparameters that we can tune, such as the number of layers, the number of neurons, and the learning rate). Recently, applications of neural network models to time series forecasting have been gaining more and more attention from the statistics and data science communities.

 

In this blog, we will first discuss some basic properties that a machine learning model must have to perform financial services tasks. Then we will design our model based on these requirements and show how to train the model in parallel on an HPC cluster with Intel® Xeon processors.

 

 

Requirements from Financial Institutions

 

High accuracy and low latency are two important properties that financial services institutions expect from a quality time series forecasting model.

 

High Accuracy  A high level of accuracy in the forecasting model helps companies lower the risk of losing money on investments. Neural networks are believed to be good at capturing the dynamics in time series and hence yield more accurate predictions. There are many hyperparameters in the model, so data scientists and quantitative researchers can tune them to obtain the optimal model. Moreover, the data science community believes that ensemble learning tends to improve prediction accuracy significantly. The flexibility of the model architecture provides us with a good variety of member models for ensemble learning.

 

Low Latency  Operations in financial services are time-sensitive. For example, high-frequency trading usually requires models to finish training and prediction within very short time periods. For deep neural network models, low latency can be achieved by distributed training with Horovod or distributed TensorFlow. Intel® Xeon multi-core processors, coupled with Intel’s MKL-optimized TensorFlow, prove to be a good infrastructure option for such distributed training.
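When many training processes share one Xeon node, it also helps to control how many cores each TensorFlow process and the underlying MKL/OpenMP runtime may use. The sketch below shows one common way to do this with recent TensorFlow versions; the thread counts and affinity settings are illustrative and should be tuned to the number of processes per node.

```python
import os

# Limit MKL/OpenMP threads per training process (illustrative values for
# running many processes on a 40-core node).
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["KMP_BLOCKTIME"] = "0"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

import tensorflow as tf

# Match TensorFlow's own thread pools to the per-process core budget.
tf.config.threading.set_intra_op_parallelism_threads(1)
tf.config.threading.set_inter_op_parallelism_threads(1)
```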

 

With these requirements in mind, we propose an ensemble learning model as in Figure 1, which is a combination of MLP (multi-layer perceptron), CNN (convolutional neural network), and LSTM (long short-term memory) models. Because the architecture topologies of MLP, CNN, and LSTM are quite different, the ensemble model has a good variety of members, which helps reduce the risk of overfitting and produces more reliable predictions. The member models are trained at the same time over multiple nodes with Intel® Xeon processors. If more models need to be integrated, we simply add more nodes into the system so that the overall training time stays short. With neural network models and the HPC power of Intel® Xeon processors, this system meets the requirements of financial services institutions.
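A minimal Keras sketch of the three member architectures is shown below; the layer sizes and the input window length are illustrative placeholders rather than the exact configuration used in our experiments.

```python
import tensorflow as tf

window, n_series = 20, 1        # illustrative: 20 past time steps of 1 series

def build_mlp():
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(window, n_series)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

def build_cnn():
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                               input_shape=(window, n_series)),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1),
    ])

def build_lstm():
    return tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(window, n_series)),
        tf.keras.layers.Dense(1),
    ])

members = [build_mlp(), build_cnn(), build_lstm()]
for m in members:
    m.compile(optimizer="adam", loss="mae")
```

Because the three topologies process the input window in very different ways, their errors tend to be only weakly correlated, which is what makes combining their forecasts useful.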


Figure 1: Training high accuracy ensemble model on HPC cluster with Intel® Xeon processors

 

 

Fast Training with Intel® Xeon Scalable Processors

 

Our tests used Dell EMC’s Zenith supercomputer, which consists of 422 Dell EMC PowerEdge C6420 nodes, each with 2 Intel® Xeon Scalable Gold 6148 processors. Figure 2 shows an example of the time-to-train for the MLP, CNN, and LSTM models with different numbers of processes. The data set used is the 10-Year Treasury Inflation-Indexed Security data. For this example, running distributed training with 40 processes is the most efficient, primarily because this time series is small and the neural network models we used did not have many layers. With this setting, model training can finish within 10 seconds, much faster than training the models on a single processor with only a few cores, which typically takes more than one minute. Regarding accuracy, the ensemble model can predict this interest rate with an MAE (mean absolute error) of less than 0.0005. Typical values for this interest rate are around 0.01, so the relative error is less than 5%.
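For completeness, the accuracy figures above follow the usual MAE definition; a short check of the relative-error claim with illustrative numbers looks like this.

```python
import numpy as np

y_true = np.array([0.0102, 0.0098, 0.0101])     # illustrative rate values
y_pred = np.array([0.0105, 0.0095, 0.0100])     # illustrative forecasts
mae = np.mean(np.abs(y_pred - y_true))          # mean absolute error
rel_error = mae / np.mean(np.abs(y_true))       # relative to the typical rate level
print(mae, rel_error)
```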


Figure 2: Training time comparison (Each of the models is trained on Intel® Xeon processors within one node)

 

 

Conclusion

 

With both high accuracy and low latency being critical for time series forecasting in financial services, neural network models trained in parallel on Intel® Xeon Scalable processors stand out as very promising options for financial institutions. And as financial institutions need to train more complicated models to forecast many time series with high accuracy at the same time, the need for parallel processing will only grow.