
Stock Trading Strategies Based on Deep Reinforcement Learning.

  • Academic Journal
  • Li, Yawei1 (AUTHOR)
    Liu, Peipei2 (AUTHOR)
    Wang, Ze2 (AUTHOR)
  • Scientific Programming. 3/1/2022, p1-15. 15p.
  • Article
  • The purpose of stock market investment is to obtain more profits. In recent years, an increasing number of researchers have tried to implement stock trading based on machine learning. Facing the complex stock market, how to obtain effective information from multisource data and implement dynamic trading strategies is difficult. To solve these problems, this study proposes a new deep reinforcement learning model to implement stock trading, analyzes the stock market through stock data, technical indicators and candlestick charts, and learns dynamic trading strategies. Fusing the features of different data sources extracted by the deep neural network as the state of the stock market, the agent in reinforcement learning makes trading decisions on this basis. Experiments on the Chinese stock market dataset and the S&P 500 stock market dataset show that our trading strategy can obtain higher profits compared with other trading strategies. [ABSTRACT FROM AUTHOR]

Stock Trading Strategies Based on Deep Reinforcement Learning 

1. Introduction


Stock trading is the process of buying and selling stocks to obtain investment profit. The key to stock trading is to make the right trading decisions at the right times, that is, to develop a suitable trading strategy [[1]]. In recent years, many studies have used machine learning methods to predict stock trends or prices in order to implement stock trading. However, long-duration prediction of the price or trend of a stock is not reliable, and a trading strategy based on stock price prediction is static [[2]]. The stock market is affected by many factors [[4]–[6]], such as changes in investor psychology, company policies, natural disasters, and emergencies, so stock prices fluctuate greatly. Compared with a static trading strategy, a dynamic trading strategy can make trading decisions dynamically according to the changes of the stock market, which gives it a greater advantage.

At present, an increasing number of studies implement dynamic trading strategies based on deep reinforcement learning. Reinforcement learning, which has gained increasing attention since AlphaZero defeated human players [[7]], has the ability of independent learning and decision-making and has been successfully applied in game playing [[8]], autonomous driving [[10]], and helicopter control [[12]]. Reinforcement learning solves sequential decision-making problems, so it can be applied to stock trading to learn dynamic trading strategies. Nevertheless, reinforcement learning lacks the ability to perceive the environment. The combination of deep learning and reinforcement learning (i.e., deep reinforcement learning) solves this problem and has more advantages, since it combines the decision-making ability of reinforcement learning with the perception ability of deep learning.

One of the challenges when implementing stock trading based on deep reinforcement learning is the correct analysis of the state of the stock market. Financial data is nonlinear and unstable. Most of the existing studies on stock trading based on deep reinforcement learning analyze the stock market through stock data [[13]–[15]]. However, there is noise in stock data, which affects the final analysis results. Technical indicators can reflect the changes in the stock market from different perspectives and reduce the influence of noise [[16]]. There are studies that convert financial data into two-dimensional images for analyzing the stock market [[18]–[22]]. Different data sources reflect the changes in the stock market from different perspectives. Compared with the analysis of the stock market based on a single data source, multisource data can integrate the characteristics of different data sources, which is more conducive to the analysis of the stock market. However, the fusion of multisource data is difficult.

To analyze the stock market more deeply and learn the optimal dynamic trading strategy, this study proposes a deep reinforcement learning model and integrates multisource data to implement stock trading. Through the analysis of stock data, technical indicators, and candlestick charts, we obtain a deeper feature representation of the stock market, which is conducive to learning the optimal trading strategy. Besides, the setting of the reward function in reinforcement learning cannot be ignored. In stock trading, investment risk should be considered alongside returns, and the two should be balanced reasonably. The Sharpe ratio (SR) represents the profit that can be obtained under a certain level of risk [[23]]. In this study, the reward function takes investment risk into consideration and combines SR and profit rate (PR) to promote the learning of optimal trading strategies.

To verify the effectiveness of the trading strategy learned by our proposed model, we compare it with other trading strategies on practical trading data. For stocks with different trends, our trading strategy obtains higher PR and SR and shows better robustness. In addition, we conduct ablation experiments, and the results show that the trading strategy learned by analyzing the stock market with multisource data is better than those learned from a single data source. The main contributions of this paper are as follows:

(i) A new deep reinforcement learning model is proposed to implement stock trading; it integrates stock data, technical indicators, and candlestick charts to analyze the stock market, which is more helpful for learning the optimal dynamic trading strategy.

(ii) A new reward function is proposed. In this study, investment risk is taken into account, and the sum of SR and PR is taken as the reward function.

(iii) The experimental results show that the trading strategy learned from the deep reinforcement learning model proposed in this paper can obtain better profits for stocks with different trends.

2. Related Work

In recent years, a large number of machine learning methods have been applied to stock trading. Investors make trading decisions based on their judgment of the stock market; however, owing to the many influencing factors, they cannot always make correct trading decisions in time as the stock market changes. Compared with traditional trading strategies, machine learning methods can learn trading strategies by analyzing information related to the stock market and can discover profit patterns that people are unaware of, without requiring professional financial knowledge, which gives them an advantage.

Some studies implement stock trading with deep learning methods, usually by predicting the future trend or price of a stock [[24]–[27]]. In the financial field, deep learning methods are used for stock price prediction because they can capture temporal characteristics in financial data [[28]]; Chen et al. [[30]] analyzed 2D images transformed from financial data through a convolutional neural network (CNN) to classify the future price trend of stocks. When implementing stock trading with a deep learning method, the higher the prediction accuracy, the more helpful it is for the trading decision; conversely, when the prediction deviates greatly from the actual situation, it leads to faulty trading decisions. In addition, the trading strategy implemented by such methods is static and cannot be adjusted in time according to the changes in the stock market.

Reinforcement learning can be used to implement stock trading through self-learning and autonomous decision-making. Chakole et al. [[31]] used the Q-learning algorithm [[32]] to find the optimal trading strategy, in which the unsupervised k-means method and the candlestick chart were, respectively, used to represent the state of the stock market. Deng et al. [[33]] proposed the Deep Direct Reinforcement Learning model and added fuzzy learning, which is the first attempt to combine deep learning and reinforcement learning in the field of financial transactions. Wu et al. [[34]] proposed a long short-term memory based (LSTM-based) agent that could perceive stock market conditions and trade automatically by analyzing stock data and technical indicators. Lei et al. [[35]] proposed the time-driven feature-aware jointly deep reinforcement learning model (TFJ-DRL), which combines a gated recurrent unit (GRU) and the policy gradient algorithm to implement stock trading. Lee et al. [[36]] proposed the HW_LSTM_RL structure, which first uses wavelet transforms to remove noise from stock data and then analyzes the data with deep reinforcement learning to make trading decisions.

Existing studies on stock trading based on deep reinforcement learning mostly analyze the stock market through a single data source. In this study, we propose a new deep reinforcement learning model to implement stock trading, and analyze the state of the stock market through stock data, technical indicators, and candlestick charts. In our proposed model, firstly, different deep neural networks are used to extract the features of different data sources. Secondly, the features of different data sources are fused. Finally, reinforcement learning makes trading decisions according to the fused features and continuously optimizes trading strategies according to the profits. The setting of reward function in reinforcement learning cannot be ignored. In this study, the SR is added to the reward function setting, and the investment risk is taken into account while considering the profits.

3. Methods

We propose a new deep reinforcement learning model and implement stock trading by analyzing the stock market with multisource data. In this section, first, we introduce the overall deep reinforcement learning model, then the feature extraction process of different data sources is described in detail. Finally, the specific application of reinforcement learning in stock trading is introduced.

3.1. The Overall Structure of the Model

Implementing stock trading based on deep reinforcement learning and correctly analyzing the state of the stock market is more conducive to learning the optimal dynamic trading strategy. To obtain the deeper feature representation of the stock market state and learn the optimal dynamic trading strategy, we fuse the features of stock data, technical indicators, and candlestick charts. Figure 1 shows the overall structure of the model.

Graph: Figure 1 The overall structure of the model. $v_i$ represents the candlestick chart feature, $v_d$ represents the feature of the stock data and technical indicators, and the feature vector obtained by concatenating these two feature vectors is used as the input of the two fully connected (FC) layers. In this paper, FC layers are used to construct the dueling DQN network; the two FC streams represent the advantage function $A(s, a)$ and the state value function $V(s)$ in dueling DQN. The final Q value is obtained by adding the outputs of the two functions.

The deep reinforcement learning model we propose can be divided into two modules: a deep learning module that extracts features from the different data sources and a reinforcement learning module that makes trading decisions. Candlestick chart features are extracted by the CNN and a bidirectional long short-term memory (BiLSTM) network; stock data and technical indicators are used as the input of an LSTM network for feature extraction. After the features of the different data sources are extracted, they are concatenated to implement feature fusion; the fused features can be regarded as the state of the stock market, and the reinforcement learning module makes trading decisions on this basis. In addition, the algorithms used in the reinforcement learning module are Dueling DQN [[37]] and Double DQN [[38]].

3.2. Deep Learning Module

The purpose of this study is to obtain a deeper feature representation of the stock market state through the fusion of multisource data in order to learn the optimal dynamic trading strategy. Although raw stock data can reflect changes in the stock market, they contain considerable noise. To reduce the impact of noise and perceive the changes in the stock market more objectively and accurately, relevant technical indicators are used as one of the data sources for analyzing the stock market in this study. Candlestick charts reflect the changes in the stock market from yet another perspective, so this paper also fuses the features of the candlestick charts with those of the stock data and technical indicators.

3.3. Stock Data and Technical Indicator Feature Extraction

Due to the noise in stock data, we use relevant technical indicators to reduce its impact. The technical indicators reflect the changes in the stock market from different perspectives. In this paper, stock data and technical indicators are used as inputs to the LSTM network to better capture the main trends of stocks. The raw stock data we use include the opening price, closing price, high price, low price, and trading volume. The technical indicators used in this paper are MACD, EMA, DIFF, DEA, KDJ, BIAS, RSI, and WILLR. The indicators are calculated with mathematical formulas based on stock prices and trading volumes [[39]], as reported in Table 1 (an illustrative computation sketch follows the table).

Table 1 The list of used technical indicators.

Technical indicator | Indicator description
MACD | Moving average convergence/divergence
EMA | Exponential moving average
DIFF | Diff
DEA | Difference exponential average
KDJ | Stochastics
BIAS | Bias
RSI | Relative strength index
WILLR | Williams %R
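
For concreteness, the sketch below shows how a few of the indicators in Table 1 could be computed with pandas. The parameter choices (12/26/9 for the MACD family, 14-day windows for RSI and WILLR, a 12-day window for BIAS) are conventional defaults and are not specified in the paper.

```python
import pandas as pd

def add_indicators(df: pd.DataFrame) -> pd.DataFrame:
    """Append several of the technical indicators from Table 1 to an OHLCV frame.

    Expects columns Open, High, Low, Close, Volume; window lengths are
    conventional defaults, not values taken from the paper.
    """
    close = df["Close"]

    # Exponential moving averages (EMA)
    df["EMA12"] = close.ewm(span=12, adjust=False).mean()
    df["EMA26"] = close.ewm(span=26, adjust=False).mean()

    # MACD family: DIFF is the fast/slow EMA difference, DEA is its 9-day EMA,
    # and MACD is the (scaled) gap between the two.
    df["DIFF"] = df["EMA12"] - df["EMA26"]
    df["DEA"] = df["DIFF"].ewm(span=9, adjust=False).mean()
    df["MACD"] = 2 * (df["DIFF"] - df["DEA"])

    # 14-day relative strength index (RSI)
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(14).mean()
    loss = (-delta.clip(upper=0)).rolling(14).mean()
    df["RSI"] = 100 - 100 / (1 + gain / loss)

    # BIAS: percentage deviation of the close from its 12-day moving average
    ma12 = close.rolling(12).mean()
    df["BIAS"] = (close - ma12) / ma12 * 100

    # Williams %R over a 14-day window
    high14 = df["High"].rolling(14).max()
    low14 = df["Low"].rolling(14).min()
    df["WILLR"] = (high14 - close) / (high14 - low14) * -100

    return df
```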

To facilitate subsequent calculations, we perform missing-value processing on the stock data. First, the stock data are cleaned, and missing values are filled with 0. In addition, the input of the neural network must be real-valued, so we also replace NaNs in the stock data and technical indicators with 0. Data with different value ranges may cause gradient explosion during neural network training [[42]]. To prevent this problem, we normalize the stock data and technical indicators; normalization transforms the data to a fixed interval. In this work, each dimension of the stock data and technical indicators is normalized so that the data fall into the range [0, 1]. The normalization formula is as follows:

$$X_{\mathrm{norm}} = \frac{X - X_{\min}}{X_{\max} - X_{\min}}, \tag{1}$$

where $X$ represents the original data, $X_{\min}$ and $X_{\max}$ represent the minimum and maximum values of the original data, respectively, and $X_{\mathrm{norm}}$ represents the normalized data. The neural network structure for extracting features from the stock data and technical indicators is shown in Figure 2. The LSTM network is a variant of the recurrent neural network (RNN), and its unit structure is shown in Figure 3; it alleviates the vanishing and exploding gradient problems that arise when training on long sequences. In the LSTM network, $f$, $i$, and $o$ denote the forget gate, the input gate, and the output gate, respectively. The forget gate removes information from the cell state, the input gate adds information to the cell state, and the output gate decides what the next hidden state should be. $C_t$ is the state of the memory cell at time $t$; $\tilde{C}_t$ is the candidate state of the memory cell at time $t$; $\sigma$ and $\tanh$ are the sigmoid and tanh activation functions, respectively; $W$ and $b$ represent the weight matrices and bias vectors, respectively; $x_t$ is the input vector and $h_t$ is the output vector. In this paper, $x_t$ is the concatenation of the stock data and technical indicators at time $t$. $x_t$ and the remaining quantities are computed as follows:

<mtd rowspan="6">(2)<msub>xt</msub>=open,low,close,...,MACD,RSI,Willr<msub>it</msub>=σ<msub>Wi</msub>·<msub>ht−1</msub>,<msub>xt</msub>+<msub>bi</msub><msub>Ct</msub>˜</mover>=tanh<msub>Wc</msub>·<msub>ht−1</msub>,<msub>xt</msub>+<msub>bc</msub><msub>Ct</msub>=<msub>ft</msub>∗<msub>Ct−1</msub>+<msub>it</msub>∗<msub>Ct</msub>˜</mover><msub>ot</msub>=σ<msub>Wo</msub><msub>ht−1</msub>,<msub>xt</msub>+<msub>bo</msub><msub>ht</msub>=<msub>ot</msub>∗tanh<msub>Ct</msub>.

Graph: Figure 2 The network structure for extracting features of stock data and technical indicators.

Graph: Figure 3 LSTM network unit structure.

In the entire process of feature extraction of stock data and technical indicators, the stock data and technical indicators are first cleaned and normalized. Then, the normalized data are used as the input of the LSTM network for feature extraction. Finally, the final feature is obtained by feature extraction through the two-layer LSTM network.
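
A minimal PyTorch sketch of this two-layer LSTM feature extractor is given below. The input width (the 5 price/volume fields plus the technical indicator columns), the 30-day window, and the use of the last time step's output as the feature $v_d$ are assumptions; only the hidden size of 128 is quoted from Section 4.4.

```python
import torch
import torch.nn as nn

class IndicatorEncoder(nn.Module):
    """Two-layer LSTM over the normalized stock-data/indicator window (Figure 2)."""

    def __init__(self, n_features: int = 13, hidden_size: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, n_features), e.g. a 30-day sliding window of the
        # raw price/volume fields and the technical indicators, each min-max
        # normalized to [0, 1] as in equation (1).
        out, _ = self.lstm(x)
        return out[:, -1, :]   # v_d: output of the last time step

v_d = IndicatorEncoder()(torch.rand(8, 30, 13))   # -> shape (8, 128)
```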

3.4. Candlestick Chart Feature Extraction

To extract more informative features, in this study, historical stock data are transformed into candlestick charts. A candlestick chart contains more than the candlesticks themselves and can be divided into two parts: the upper part shows the candlesticks and the moving averages of the closing price, and the lower part shows the trading volume histogram and its moving average. Generally, a candlestick consists of a body, an upper shadow, and a lower shadow. The body is the difference between the closing price and the opening price of the stock during the trading session, as shown in Figure 4. If the opening price is lower than the closing price, the price is rising; this kind of candle is called a bullish candle, and its color is red. If the opening price is higher than the closing price, the price has fallen, and the color of the candlestick is green. For a bullish candlestick, the upper shadow is the difference between the high price and the close price, and the lower shadow is the difference between the low price and the open price. For a bearish candlestick, the upper shadow is the difference between the high price and the open price, and the lower shadow is the difference between the low price and the close price. The trading interval of a candlestick can range from one minute to one month; the candlestick charts in this study are based on days.

Graph: Figure 4 Candlestick representation.
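
The paper does not say which tool renders the charts; the sketch below uses the mplfinance package as one plausible choice, with red bullish and green bearish candles as described above. The moving-average windows and figure style are illustrative.

```python
import mplfinance as mpf
import pandas as pd

def save_candlestick(df: pd.DataFrame, path: str) -> None:
    """Render a daily candlestick chart with closing-price moving averages on
    top and the volume histogram below, then save it as an image.

    df needs a DatetimeIndex and Open/High/Low/Close/Volume columns.
    """
    colors = mpf.make_marketcolors(up="red", down="green", volume="in")
    style = mpf.make_mpf_style(marketcolors=colors)
    mpf.plot(df, type="candle", mav=(5, 10, 20), volume=True,
             style=style, savefig=path)

# Example: one chart per 30-day sliding window of an OHLCV frame `ohlcv`
# save_candlestick(ohlcv.iloc[t - 30:t], f"charts/{code}_{t}.png")
```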

The network structure for extracting candlestick chart features is shown in Figure 5. In this study, we first obtain the features of the candlestick chart through three layers of convolution and pooling, then reshape the resulting feature map into a sequence and feed it to the BiLSTM network for further feature extraction, and finally obtain the final features.

Graph: Figure 5 The network structure for extracting features of a candlestick chart. $v_i$ denotes the final features.
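
A rough PyTorch sketch of this CNN + BiLSTM encoder follows. The channel counts, kernel sizes, and the way the convolutional feature map is turned into a sequence (averaging over the height dimension so that each image column becomes one time step) are assumptions rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ChartEncoder(nn.Module):
    """CNN + BiLSTM encoder for candlestick chart images (Figure 5)."""

    def __init__(self, hidden_size: int = 128):
        super().__init__()
        # Three convolution + pooling stages, as described in the text.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.bilstm = nn.LSTM(input_size=64, hidden_size=hidden_size,
                              batch_first=True, bidirectional=True)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # img: (batch, 3, H, W) chart image, e.g. resized from 390 x 290
        fmap = self.cnn(img)              # (batch, 64, H/8, W/8)
        seq = fmap.mean(dim=2)            # collapse the height axis
        seq = seq.permute(0, 2, 1)        # (batch, W/8, 64): one step per column
        out, _ = self.bilstm(seq)
        return out[:, -1, :]              # v_i: (batch, 2 * hidden_size)
```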

3.5. Reinforcement Learning Module

Reinforcement learning involves an agent, an environment, states, actions, and rewards. The agent chooses an action according to the environmental state and receives an immediate reward every time it does so. The agent constantly adjusts its strategy according to the reward values in order to obtain the largest cumulative reward. For example, in stock trading, if the trading action selected by the agent gains a profit, the agent receives a positive reward; in contrast, if there is a loss after choosing a trading action, the agent receives a negative reward. Rewards push the agent toward correct actions in future choices. Most previous works used the trading profit as the immediate reward to optimize trading strategies. However, this only considers the change in profit after each trading action and ignores investment risk. In this paper, we take investment risk into consideration. SR is an indicator for evaluating transactions that is used to balance profitability and risk: it is the expected return minus the risk-free rate, divided by the standard deviation of the return. A reward function that considers both investment risk and return change is more advantageous than one based on return change alone, and this is confirmed by our experiments, so the immediate reward is set to the sum of PR and SR. The specific formulas are as follows:

<mtd rowspan="4">(3)SR=E<msub>RP</msub>−<msub>Rf</msub><msub>σP</msub>,PR=<msub>Pt</msub>−<msub>Pt−1</msub><msub>Pt−1</msub>,r=SR+PR,<msub>Rt</msub>=<mstyle displaystyle="true"><munderover>∑k=0T</munderover>γk<msub>rt+k+1</msub></mstyle>,

where $P_t$ is the sum of the assets owned by the investor at time $t$, $E(R_P)$ is the expected portfolio return, $R_f$ is the risk-free rate, $\sigma_P$ is the standard deviation of the return, $R_t$ is the cumulative reward, and $\gamma \in [0, 1]$ is the discount factor.
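
The snippet below is one plausible reading of equation (3): PR is the latest single-step return of the portfolio, and SR is estimated from the daily returns observed so far in the episode. The risk-free rate of zero and the estimation window are assumptions.

```python
import numpy as np

def step_reward(portfolio_values, risk_free_rate=0.0):
    """Immediate reward r = SR + PR in the spirit of equation (3).

    portfolio_values: asset values P_0 ... P_t observed up to the current step.
    """
    values = np.asarray(portfolio_values, dtype=float)
    returns = values[1:] / values[:-1] - 1.0

    pr = returns[-1]                              # PR = (P_t - P_{t-1}) / P_{t-1}
    excess = returns - risk_free_rate
    sr = excess.mean() / (excess.std() + 1e-8)    # SR = (E[R_P] - R_f) / sigma_P
    return sr + pr
```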

In this paper, we combine the Double DQN algorithm and the Dueling DQN algorithm, both of which are improved algorithms based on the DQN algorithm. In value-based deep reinforcement learning algorithms, actions are selected according to the Q value. The DQN algorithm maintains two networks with the same structure, a main network and a target network; initially their parameters are identical, and during training they are updated in different ways. Under the ε-greedy strategy of the DQN algorithm, the agent chooses, with high probability, the action corresponding to the maximum Q value, which causes the Q value to be overestimated.

Compared with the DQN algorithm, the Dueling DQN algorithm changes the calculation of the Q value by introducing a state value function $V(s)$ and an advantage function $A(s, a)$. The value function $V(s)$ evaluates the quality of the state, and the advantage function $A(s, a)$ evaluates the quality of choosing action $a$ in state $s$. The Q value is calculated as follows:

$$Q\left(s, a; \theta, \alpha, \beta\right) = A\left(s, a; \theta, \alpha\right) + V\left(s; \theta, \beta\right), \tag{4}$$

where $\alpha$ and $\beta$ represent the parameters of the advantage function $A(s, a)$ and the state value function $V(s)$, respectively, and $\theta$ represents the other parameters of the deep reinforcement learning model.
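
A minimal dueling head over the fused state vector (the concatenation of $v_i$ and $v_d$) might look as follows. The hidden width and the mean-advantage subtraction from Wang et al. [[37]] are implementation choices, since equation (4) writes the combination simply as a sum.

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling Q-value head over the fused state feature (equation (4))."""

    def __init__(self, state_dim: int, n_actions: int = 3, hidden: int = 128):
        super().__init__()
        # Two fully connected streams: state value V(s) and advantage A(s, a).
        self.value = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_actions))

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        v = self.value(state)          # (batch, 1)
        a = self.advantage(state)      # (batch, n_actions)
        # Subtracting the mean advantage (as in [37]) keeps V and A identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```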

Double DQN changes the calculation of the Q value of the target network and solves the Q value overestimation problem in the DQN algorithm, which can be combined with the Dueling DQN algorithm to improve the overall performance of the model. The formula for calculating the Q value of the target network in the Double DQN algorithm is as follows:

$$Y_t = r_{t+1} + \gamma\, Q\!\left(s_{t+1}, \operatorname*{arg\,max}_{a} Q\left(s_{t+1}, a; \theta_t\right); \theta_t'\right), \tag{5}$$

where $\theta$ and $\theta'$ represent the parameters of the main network and the target network, respectively.

The loss function is the mean square error of the Q value of the main network and the target network. The formula is shown as follows:

$$L(\theta) = E\left[\left(Y_t - Q(s, a; \theta)\right)^{2}\right]. \tag{6}$$
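
The Double DQN target and this loss can be written compactly as below; the discount factor and batch layout are illustrative, and main_net / target_net stand for the two Q networks described above.

```python
import torch
import torch.nn.functional as F

def double_dqn_loss(main_net, target_net, batch, gamma=0.99):
    """Double DQN target (equation (5)) and MSE loss (equation (6))."""
    states, actions, rewards, next_states, dones = batch

    # Q value of the actions actually taken, from the main network.
    q = main_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    with torch.no_grad():
        # Action selection by the main network, evaluation by the target network.
        next_actions = main_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        y = rewards + gamma * next_q * (1.0 - dones)   # Y_t

    return F.mse_loss(q, y)
```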

In this study, we analyze the stock market from stock data, technical indicators, and candlestick charts and fuse the features of the different data sources to obtain a representation of the stock market state, which helps the agent learn the optimal dynamic trading strategy. The trading action is $a_t \in \{\text{long}, \text{neutral}, \text{short}\} = \{1, 0, -1\}$, where long, neutral, and short represent buy, hold, and sell, respectively. When the trading action is long, cash is converted into stock as far as possible, and when the trading action is short, all shares are sold for cash. In addition, transaction costs in stock trading cannot be ignored; high-frequency trading results in higher costs, and the transaction cost in this paper is 0.1% of the stock value [[40]] (a simplified bookkeeping sketch follows). The full trading process is shown in Algorithm 1.
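
A simplified sketch of this action bookkeeping, assuming an all-in buy and an all-out sell with the 0.1% fee charged on each trade (the authors' exact accounting is not published):

```python
def apply_action(action, cash, shares, price, fee=0.001):
    """Execute a trading action with a 0.1% transaction cost.

    action: 1 = long (buy with all available cash), 0 = neutral (hold),
    -1 = short (sell all held shares).
    """
    if action == 1 and cash > 0:            # long: convert cash into stock
        bought = cash / (price * (1 + fee))
        shares += bought
        cash = 0.0
    elif action == -1 and shares > 0:       # short: sell all shares for cash
        cash += shares * price * (1 - fee)
        shares = 0.0
    return cash, shares                     # neutral: nothing changes
```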

Algorithm 1: Stock trading with the proposed deep reinforcement learning model.

Input: stock data, technical indicators, candlestick charts
Initialize the experience replay memory D to capacity C
Initialize the main Q network with random weights θ
Initialize the target Q network with θ′ = θ
for episode = 1 to N do
    for t = 1 to T do
        Fuse the features extracted by the deep learning module to obtain the environment state s_t
        With probability ε choose a random action a_t; otherwise select a_t = argmax_a Q(s_t, a; θ)
        Get the reward r_t and the next state s_{t+1}
        Store the transition (s_t, a_t, r_t, s_{t+1}) in D
        if t mod n = 0 then
            Sample a minibatch of transitions (s_t, a_t, r_t, s_{t+1}) randomly from D
            Set Y_t = r_t if the state s_{t+1} is terminal, and Y_t = r_t + γ Q(s_{t+1}, argmax_a Q(s_{t+1}, a; θ_t); θ′_t) otherwise
            Train the main network with the loss L(θ) = E[(Y_t − Q(s, a; θ))^2]
            Update the target network parameters θ′ = θ every N steps
        end if
    end for
end for
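
Read as ordinary Python, Algorithm 1 amounts to a replay-buffer training loop such as the sketch below, which reuses the double_dqn_loss helper sketched in Section 3.5. Apart from the 200 episodes and ε = 0.8 quoted in Section 4.4, the hyperparameters and the env interface (reset() returning a state tensor, step() returning state, reward, done) are assumptions.

```python
import random
from collections import deque

import torch

def train(main_net, target_net, env, episodes=200, epsilon=0.8,
          gamma=0.99, batch_size=32, sync_every=100, capacity=10_000):
    """Compact rendering of Algorithm 1 with a Double/Dueling DQN agent."""
    memory = deque(maxlen=capacity)                       # experience replay D
    optimizer = torch.optim.Adam(main_net.parameters(), lr=1e-4)
    target_net.load_state_dict(main_net.state_dict())     # theta' = theta
    steps = 0

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:                  # epsilon-greedy exploration
                action = random.randrange(3)
            else:
                with torch.no_grad():
                    action = main_net(state.unsqueeze(0)).argmax(dim=1).item()

            next_state, reward, done = env.step(action)
            memory.append((state, action, reward, next_state, done))
            state, steps = next_state, steps + 1

            if len(memory) >= batch_size:                  # train once the buffer is warm
                s, a, r, s2, d = zip(*random.sample(memory, batch_size))
                batch = (torch.stack(s),
                         torch.tensor(a, dtype=torch.long),
                         torch.tensor(r, dtype=torch.float32),
                         torch.stack(s2),
                         torch.tensor(d, dtype=torch.float32))
                loss = double_dqn_loss(main_net, target_net, batch, gamma)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

            if steps % sync_every == 0:                    # periodic target-network sync
                target_net.load_state_dict(main_net.state_dict())
```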

4. Experiment and Results

This section mainly introduces the dataset, evaluation metrics, comparison methods, implementation details, and experimental result analysis.

4.1. Datasets

In this study, we verify the dynamic trading strategy learned by the proposed model on datasets of Chinese stocks and S&P 500 stocks and compare it with other trading strategies. The dataset covers January 2012 to January 2021: the training period ranges from January 2012 to December 2018, and the testing period ranges from January 2019 to January 2021. The stock data include the daily open price, high price, low price, close price, and trading volume, as shown in Table 2.

Table 2 Stock data structure example.

Date | Open | High | Low | Close | Volume
... | ... | ... | ... | ... | ...
2018/4/5 | 177.01 | 185.49 | 175.75 | 91.98 | 2749769
2018/4/6 | 185.30 | 190.95 | 184.08 | 94.13 | 2079350
2018/4/7 | 189.06 | 194.00 | 186.98 | 94.28 | 2565820
2018/4/8 | 192.03 | 196.00 | 188.20 | 93.98 | 1271339
... | ... | ... | ... | ... | ...

4.2. Metrics

The evaluation indicators used in this paper are PR, the annualized rate of return (AR), SR, and maximum drawdown (MDD). The details are as follows (a small computation sketch follows the list):

(i) PR refers to the difference between the assets owned at the end of the stock transaction and the original assets divided by the original assets.

(ii) AR is the ratio of the profits to the principal of the investment period of one year. The formula is defined as follows:

$$\mathrm{AR} = \frac{\text{total profits}}{\text{principal}} \times \frac{365}{\text{trading days}} \times 100. \tag{7}$$

(iii) SR is a standardized comprehensive evaluation index, which can consider both profits and risks at the same time to eliminate the adverse impact of risk factors on performance evaluation.

(iv) MDD refers to the maximum losses that can be borne during trading, and the lower the value, the better the performance of the trading strategy.
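
A plain computation of the four metrics from a daily equity curve could look as follows; the paper's exact evaluation code is not published, so this is only a sketch of the definitions above.

```python
import numpy as np

def evaluate(portfolio_values, risk_free_rate=0.0):
    """Return PR, AR, SR, and MDD for a daily equity curve (one value per trading day)."""
    v = np.asarray(portfolio_values, dtype=float)
    returns = v[1:] / v[:-1] - 1.0

    pr = (v[-1] - v[0]) / v[0] * 100                       # profit rate, %
    ar = (v[-1] - v[0]) / v[0] * 365 / len(v) * 100        # annualized, equation (7)
    sr = (returns.mean() - risk_free_rate) / (returns.std() + 1e-8)

    running_max = np.maximum.accumulate(v)
    mdd = ((running_max - v) / running_max).max() * 100    # maximum drawdown, %
    return pr, ar, sr, mdd
```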

4.3. Baselines

(i) Buy and Hold (B&H) [[41]] refers to constructing a portfolio according to a predetermined asset allocation ratio and maintaining this portfolio over the holding period without changing the asset allocation. The B&H strategy is a passive investment strategy.

(ii) Based on the Q-learning algorithm, two models are proposed to implement stock trading [[31]]. The two models perceive the stock market environment in different ways: model 1 analyzes the stock market through the k-means method, and model 2 analyzes the stock market through a candlestick chart. The experimental results in [[31]] show that model 1 performs better than model 2, so we only compare with model 1. In model 1, the number of clusters n is set to 3, 6, and 9, and we compare against each setting.

(iii) An LSTM-based agent is proposed to learn the temporal relationship between data and implement stock trading [[34]], and the effects of different combinations of technical indicators on the trading strategy are verified. In this paper, we compare only with the group of technical indicators that gives the best results.

(iv) The time-driven feature-aware jointly deep reinforcement learning (TFJ-DRL) model [[35]] uses a GRU to extract temporal features of stock data and technical indicators and implements stock trading through the policy gradient algorithm.

(v) HW_LSTM_RL [[36]] is a structure that combines wavelet transformation and deep reinforcement learning for stock trading and is a relatively new method.

4.4. Implementation Details

This study implements stock trading based on deep reinforcement learning and fuses the features of stock data, technical indicators, and candlestick charts as the state of the stock market. The LSTM network that extracts features from the stock data and technical indicators has a hidden layer of size 128, and the size of the candlestick chart is 390 × 290 pixels. In the process of learning the optimal trading strategy, an episode is the trading period from January 2012 to December 2018, and the number of training episodes is 200. In the ε-greedy strategy of reinforcement learning, ε = 0.8. The length of the sliding time window is set to 30 days, and the learning rate is 0.0001.
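
For reference, the hyperparameters quoted in this subsection can be collected in one place; anything not listed here (optimizer, batch size, target-network update frequency) is left to the assumptions made in the earlier sketches.

```python
CONFIG = {
    "lstm_hidden_size": 128,                  # hidden layer of the LSTM extractor
    "chart_size": (390, 290),                 # candlestick chart size in pixels
    "episodes": 200,                          # training episodes
    "epsilon": 0.8,                           # epsilon-greedy parameter
    "window_days": 30,                        # sliding time window length
    "learning_rate": 1e-4,
    "train_period": ("2012-01", "2018-12"),
    "test_period": ("2019-01", "2021-01"),
    "transaction_cost": 0.001,                # 0.1% of stock value (Section 3.5)
}
```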

4.5. Experimental Results

4.5.1. Comparative Experiment on the Chinese Stock Dataset

We select 10 Chinese stocks with different trends for comparative experiments, and the initial amount is 100,000 Chinese Yuan (CNY). The results of the experiment are shown in Tables 3–6. We select three stocks with different trend changes to further demonstrate the PR changes, as shown in Figure 6.

Table 3 PR comparison of different methods in the Chinese stock market dataset.

PR (%)
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
002460 | 365.62 | −50.66 | −19.48 | −68.11 | 133.29 | 379.32 | 257.68 | 418.12
601101 | −18.93 | −14.38 | −18.07 | −19.77 | −18.79 | −28.59 | −25.09 | −9.28
600746 | 2.49 | 16.17 | 7.54 | 31.55 | 2.85 | 0.14 | 4.05 | 32.30
600316 | 438.22 | 472.61 | 480.32 | 524.03 | 492.31 | 502.73 | 536.28 | 595.93
600028 | −19.22 | 20.87 | 22.35 | 19.56 | 18.70 | 12.49 | 21.38 | 23.94
600900 | 26.21 | 28.25 | 22.11 | 25.67 | 9.44 | 19.63 | 27.14 | 24.35
002129 | 189.41 | 193.78 | 163.50 | 173.92 | 176.40 | 178.05 | 190.31 | 196.67
600704 | −2.91 | 10.89 | 16.54 | 12.34 | 9.91 | 15.26 | 18.28 | 22.98
600377 | 0.58 | 6.09 | 3.82 | 10.41 | 11.12 | 6.73 | 9.41 | 14.05
300122 | 285.45 | 268.52 | 259.87 | 298.91 | 275.26 | 290.35 | 301.48 | 309.67
Average | 126.69 | 52.68 | 50.62 | 53.69 | 111.05 | 137.61 | 134.09 | 162.87

Table 4 AR comparison of different methods in the Chinese stock market dataset.

AR (%)
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
002460 | 67.25 | −51.75 | −4.72 | −100 | 45.64 | 68.09 | 59.14 | 70.37
601101 | −4.73 | −4.63 | −7.73 | −7.70 | −4.64 | −11.62 | −8.97 | 0.95
600746 | 11.58 | 13.31 | 9.47 | 15.52 | 21.76 | 16.59 | 12.30 | 28.18
600316 | 72.53 | 74.23 | 74.72 | 76.08 | 68.28 | 67.12 | 72.29 | 79.83
600028 | 5.32 | 5.21 | 6.81 | 4.69 | 6.99 | 5.72 | 6.33 | 7.39
600900 | 12.35 | 13.82 | 10.47 | 12.04 | 5.83 | 11.36 | 12.74 | 11.69
002129 | 53.14 | 54.26 | 48.12 | 50.33 | 49.41 | 51.42 | 54.02 | 58.90
600704 | 2.32 | 7.43 | 10.06 | 8.49 | 9.32 | 9.95 | 11.47 | 13.28
600377 | 1.73 | 3.11 | 1.63 | 5.84 | 6.42 | 2.58 | 4.23 | 7.70
300122 | 60.18 | 57.12 | 55.84 | 61.40 | 60.52 | 62.47 | 59.24 | 65.39
Average | 28.17 | 17.21 | 20.47 | 12.67 | 26.95 | 28.37 | 28.28 | 34.37

Table 5 SR comparison of different methods in the Chinese stock market dataset.

SR
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
002460 | 1.72 | −1.01 | −0.08 | −1.56 | 1.04 | 1.75 | 1.47 | 1.83
601101 | −0.13 | −0.21 | −0.41 | −0.32 | −0.13 | −0.31 | −0.24 | 0.03
600746 | 0.25 | 0.39 | 0.29 | 0.62 | 0.49 | 0.30 | 0.41 | 0.52
600316 | 1.77 | 1.79 | 1.83 | 1.89 | 1.65 | 1.73 | 1.92 | 2.00
600028 | 0.23 | 0.21 | 0.34 | 0.19 | 0.35 | 0.22 | 0.27 | 0.49
600900 | 0.74 | 0.81 | 0.67 | 0.72 | 0.34 | 0.68 | 0.71 | 0.70
002129 | 1.23 | 1.25 | 1.18 | 1.16 | 1.20 | 1.12 | 1.22 | 1.34
600704 | 0.08 | 0.35 | 0.41 | 0.39 | 0.38 | 0.40 | 0.43 | 0.50
600377 | 0.10 | 0.22 | 0.15 | 0.35 | 0.39 | 0.24 | 0.30 | 0.46
300122 | 1.64 | 1.67 | 1.52 | 1.69 | 1.48 | 1.61 | 1.70 | 1.78
Average | 0.76 | 0.55 | 0.59 | 0.51 | 0.72 | 0.77 | 0.82 | 0.97

Table 6 MDD comparison of different methods in the Chinese stock market dataset.

MDD (%)
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
002460 | 41.32 | 60.70 | 50.50 | 69.37 | 56.68 | 48.05 | 41.32 | 41.32
601101 | 56.98 | 42.17 | 39.50 | 43.90 | 56.29 | 59.38 | 57.80 | 46.44
600746 | 42.58 | 44.56 | 40.65 | 36.01 | 52.47 | 53.53 | 39.68 | 49.84
600316 | 25.36 | 26.71 | 26.59 | 26.83 | 28.91 | 30.12 | 34.40 | 25.12
600028 | 50.03 | 53.17 | 51.77 | 56.82 | 17.20 | 43.95 | 23.39 | 15.58
600900 | 16.19 | 15.29 | 19.40 | 16.38 | 19.95 | 17.23 | 18.34 | 17.42
002129 | 35.48 | 35.21 | 38.76 | 34.92 | 35.48 | 42.83 | 32.57 | 30.41
600704 | 25.77 | 26.91 | 25.08 | 26.96 | 30.79 | 36.24 | 24.16 | 23.37
600377 | 19.76 | 19.84 | 20.62 | 18.26 | 20.31 | 23.48 | 18.22 | 15.45
300122 | 38.07 | 38.28 | 40.96 | 36.05 | 42.60 | 39.12 | 38.55 | 34.89
Average | 35.15 | 36.28 | 35.58 | 36.55 | 36.07 | 39.39 | 32.84 | 29.98

Graph: Figure 6 Changes in the price and PR of stocks with different trends. (a) 002460. (b) 002460. (c) 600746. (d) 600746. (e) 601101. (f) 601101.


The traditional B&H strategy is a passive trading strategy: it has an advantage for stocks with rising prices but does not perform well for stocks with large price fluctuations or downward trends. It can be seen from Figure 6 that for stock 002460, which has an upward trend, the B&H strategy obtains a high PR, while for the other two stocks, 601101 and 600746, with different trends, the PR it obtains is worse than that of the other trading strategies. Trading strategies learned with the Q-learning algorithm are dynamic; compared with the traditional B&H strategy, the strategies learned by model 1 can in most cases obtain higher PR, AR, and SR and lower MDD for stocks with different trends. Nonetheless, reinforcement learning alone lacks the ability to perceive the environment, so compared with strategies learned by deep reinforcement learning models, the strategies learned by model 1 have no obvious advantage. LSTM-based, TFJ-DRL, and HW_LSTM_RL are all methods based on deep reinforcement learning, although each analyzes a relatively single data source; compared with the traditional B&H strategy and the strategies learned by model 1, the strategies learned by these methods obtain more profit for stocks with different trends. The experimental results show that the dynamic trading strategy learned by our proposed model performs best: as Tables 3–6 show, for stocks with different trends, the strategy learned by the proposed model achieves the best evaluation indicators in most cases. On the whole, the average PR of our trading strategy is 162.87%, the average AR is 34.37%, the average SR is 0.97, and the average MDD is 29.98%, which are better than those of the other trading strategies.

4.5.2. Comparative Experiment on the S&P 500 Stock Dataset

To further verify the performance of the trading strategy learned by the proposed model, we select 10 stocks with different trends from the S&P 500 dataset and compare our strategy with the other trading strategies; the initial capital is 100,000 USD. The results of the experiment are shown in Tables 7–10. We also select three stocks with different trends and show their price changes and PR changes in detail, as shown in Figure 7.

Table 7 PR comparison of different methods in the S&P 500 market dataset.

PR (%)
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
AMD | 296.09 | 323.08 | 315.36 | 337.61 | 301.28 | 283.45 | 336.72 | 352.67
AAL | −55.78 | −11.73 | −75.31 | −28.66 | −60.89 | −70.49 | −18.58 | −3.69
BIO | 122.72 | 136.15 | 116.40 | 129.37 | 128.24 | 133.61 | 140.27 | 143.67
BLK | 79.56 | 83.41 | 75.94 | 84.28 | 74.25 | 78.97 | 82.62 | 85.39
TSLA | 1060.14 | 1167.90 | 1005.81 | 986.43 | 958.22 | 1024.26 | 1130.22 | 1184.59
AAPL | 216.41 | 218.39 | 206.28 | 224.83 | 197.34 | 209.41 | 213.70 | 227.61
GOOGL | 54.76 | 22.87 | 34.83 | 61.24 | 39.29 | 50.87 | 52.97 | 62.47
IBM | 0.41 | 2.46 | 5.51 | −2.31 | 18.25 | 8.50 | 4.18 | 28.30
HST | −15.05 | −10.81 | −18.92 | −14.06 | −17.51 | −14.32 | −10.63 | −3.47
PG | 47.52 | 49.84 | 50.06 | 46.61 | 40.94 | 44.72 | 50.25 | 52.04
Average | 180.68 | 198.16 | 171.60 | 182.53 | 167.94 | 174.90 | 198.17 | 212.96

Table 8 AR comparison of different methods in the S&P 500 market dataset.

AR (%)
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
AMD | 63.19 | 65.01 | 64.23 | 67.49 | 63.13 | 54.20 | 69.41 | 72.26
AAL | −13.27 | −0.04 | −48.11 | −13.73 | −20.70 | −35.88 | 6.82 | 32.25
BIO | 38.92 | 41.57 | 36.93 | 39.61 | 28.29 | 37.43 | 40.57 | 42.73
BLK | 31.10 | 31.48 | 29.85 | 32.17 | 25.30 | 31.72 | 32.96 | 33.20
TSLA | 99.24 | 100.03 | 97.15 | 96.09 | 84.61 | 94.83 | 96.25 | 103.85
AAPL | 51.27 | 52.28 | 49.54 | 53.23 | 46.49 | 50.30 | 52.81 | 55.71
GOOGL | 23.87 | 22.04 | 14.19 | 25.57 | 19.11 | 22.86 | 22.97 | 25.94
IBM | 5.06 | 5.12 | 6.16 | 2.59 | 12.58 | 8.30 | 5.65 | 16.34
HST | 4.16 | 4.69 | 3.73 | 4.01 | 5.04 | 4.92 | 6.44 | 8.92
PG | 20.56 | 20.87 | 21.65 | 18.02 | 19.83 | 21.41 | 20.36 | 24.14
Average | 32.41 | 34.30 | 27.53 | 32.51 | 28.37 | 29.01 | 35.42 | 41.43

Table 9 SR comparison of different methods in the S&P 500 market dataset.

SR
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
AMD | 1.55 | 1.64 | 1.59 | 1.69 | 1.42 | 1.30 | 1.61 | 1.73
AAL | −0.16 | 0 | −0.56 | −0.54 | −0.24 | −0.36 | 0.12 | 0.42
BIO | 1.29 | 1.32 | 1.14 | 1.30 | 1.27 | 1.12 | 1.22 | 1.36
BLK | 0.98 | 0.92 | 0.83 | 1.16 | 0.85 | 0.96 | 1.06 | 1.24
TSLA | 2.08 | 2.11 | 1.87 | 1.79 | 1.69 | 1.92 | 2.13 | 2.21
AAPL | 1.76 | 1.78 | 1.62 | 1.81 | 1.68 | 1.74 | 1.71 | 1.85
GOOGL | 0.85 | 0.56 | 0.93 | 0.92 | 0.71 | 0.81 | 0.86 | 1.11
IBM | 0.16 | 0.19 | 0.23 | 0.10 | 0.42 | 0.28 | 0.21 | 0.55
HST | 0.08 | 0.09 | 0.05 | 0.07 | 0.05 | 0.07 | 0.11 | 0.16
PG | 0.89 | 0.91 | 0.93 | 0.75 | 0.65 | 0.90 | 0.82 | 0.98
Average | 0.95 | 0.95 | 0.86 | 0.91 | 0.85 | 0.87 | 0.99 | 1.16

Table 10 MDD comparison of different methods in the S&P 500 market dataset.

MDD (%)
Stock | B&H [41] | Model 1 [31] (n = 3) | Model 1 [31] (n = 6) | Model 1 [31] (n = 9) | LSTM-based [34] | TFJ-DRL [35] | HW_LSTM_RL [36] | Ours
AMD | 34.28 | 34.06 | 34.82 | 33.95 | 35.47 | 39.05 | 30.41 | 32.11
AAL | 74.72 | 30.20 | 82.01 | 41.15 | 77.64 | 76.10 | 48.31 | 34.06
BIO | 19.72 | 19.23 | 21.65 | 19.28 | 25.83 | 22.64 | 20.16 | 18.09
BLK | 42.27 | 45.21 | 49.32 | 38.75 | 50.28 | 48.45 | 40.91 | 36.27
TSLA | 33.73 | 31.37 | 36.04 | 38.61 | 41.29 | 30.73 | 35.20 | 27.06
AAPL | 20.37 | 21.38 | 23.09 | 19.71 | 22.74 | 21.29 | 18.43 | 16.51
GOOGL | 32.41 | 43.69 | 38.62 | 44.41 | 20.22 | 30.84 | 22.52 | 24.01
IBM | 38.96 | 33.68 | 31.71 | 32.65 | 31.60 | 30.37 | 24.62 | 30.55
HST | 52.05 | 47.26 | 49.25 | 47.81 | 59.18 | 48.20 | 44.36 | 43.08
PG | 23.14 | 23.27 | 27.18 | 29.02 | 33.38 | 30.43 | 25.58 | 20.06
Average | 37.17 | 32.93 | 39.37 | 34.53 | 39.76 | 37.81 | 31.05 | 28.18

Graph: Figure 7 Changes in the price and PR of stocks with different trends. (a) GOOGL. (b) GOOGL. (c) IBM. (d) IBM. (e) AAL. (f) AAL.


For stocks with different trends in the S&P 500, our trading strategy also performs better. It can be seen from Tables 7–10 that, compared with other trading strategies, our trading strategy obtains higher PR, AR, and SR and lower MDD; on the whole, our average PR reaches 212.96% and our average SR reaches 1.16, clearly better than the other trading strategies.

To further verify the performance of our proposed model in different stock markets, we conducted a Mann–Whitney U test on the profit rates of the 20 stocks selected from the Chinese stock market and the S&P 500 stock market. The result, p = 0.677 > 0.05, indicates that there is no significant difference between the returns obtained by our model in the Chinese stock market and the S&P 500 stock market, that is, the model generalizes well across the two markets.
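
The test can be reproduced in outline with scipy by feeding it the "Ours" PR columns of Tables 3 and 7; the printed p-value is only indicative, since it depends on implementation details of the test.

```python
from scipy.stats import mannwhitneyu

# Profit rates (%) of the "Ours" strategy: 10 Chinese stocks (Table 3)
# versus 10 S&P 500 stocks (Table 7).
cn_pr = [418.12, -9.28, 32.30, 595.93, 23.94, 24.35, 196.67, 22.98, 14.05, 309.67]
sp_pr = [352.67, -3.69, 143.67, 85.39, 1184.59, 227.61, 62.47, 28.30, -3.47, 52.04]

stat, p_value = mannwhitneyu(cn_pr, sp_pr, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")   # p > 0.05: no significant difference
```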

4.5.3. Reward Function Comparison Experiment

In this section, we configure the reward function with and without the SR term and select two stocks from the Chinese stock market and two from the S&P 500 for comparison experiments. The stocks selected from the Chinese stock market are 600746 and 601101, and the stocks selected from the S&P 500 are GOOGL and IBM; the training period ranges from January 2012 to December 2018, and the testing period ranges from January 2019 to January 2021. The experimental results are shown in Table 11.

Table 11 Comparison results of the reward function experiment.

The first four metric columns correspond to the reward function with SR, the last four to the reward function without SR.

Stock | PR (%) | AR (%) | SR | MDD (%) | PR (%) | AR (%) | SR | MDD (%)
600746 | 595.93 | 28.18 | 0.52 | 28.18 | 528.31 | 25.72 | 0.46 | 33.39
601101 | −9.28 | 0.95 | 0.03 | 46.44 | −14.99 | −2.31 | −0.06 | 48.54
GOOGL | 62.47 | 25.94 | 1.11 | 24.01 | 52.18 | 22.90 | 0.95 | 29.40
IBM | 28.30 | 16.34 | 0.55 | 30.55 | 24.39 | 14.28 | 0.40 | 35.20

It can be seen from the experimental results in Table 11 that, compared with the reward function without the SR term, the reward function that contains the SR term leads to trading strategies with better overall performance. Unlike most existing deep reinforcement learning approaches to algorithmic trading, which take the profit rate alone as the reward, this study takes investment risk into account and uses the sum of SR and profit rate as the reward function; the learned trading strategy obtains higher PR, AR, and SR and lower MDD.

4.5.4. Ablation Experiments

In this section, to verify the effectiveness of multisource data fusion, we conduct an ablation experiment. Three groups of comparative experiments are carried out, all of which are based on deep reinforcement learning to implement stock trading. The first group analyzes the stock market through stock data and technical indicators; the second group analyzes the stock market through a candlestick chart; and the third group analyzes the stock market through stock data, technical indicators, and candlestick chart. We select the trading data of GOOGL stock from January 2012 to January 2021 as the dataset for this section, in which January 2012 to December 2018 is the training data, and January 2019 to January 2021 is the test data. The comparison results are shown in Figure 8 and Table 12.

Graph: Figure 8 Profit curves of the ablation experiment on stock GOOGL.

Table 12 Comparison results of ablation experiments on stock GOOGL.

Group | PR (%) | AR (%) | SR | MDD (%)
Group one | 44.74 | 22.36 | 0.95 | 27.35
Group two | 54.76 | 23.70 | 0.98 | 25.61
Group three | 62.47 | 25.94 | 1.11 | 24.01

The experimental results show that compared with the trading strategies learned in the first two groups, the trading strategies learned in the third group can obtain higher PR, SR, AR, and lower MDD. This also proves that the analysis of multisource data can obtain a deeper feature representation of the stock market, which is more conducive to learning the optimal trading strategy.

5. Conclusion

Correct analysis of the stock market state is one of the challenges when implementing stock trading based on deep reinforcement learning. In this research, we analyze multisource data with deep reinforcement learning to implement stock trading. Stock data, technical indicators, and candlestick charts reflect the changes in the stock market from different perspectives; we use different deep neural networks to extract the features of each data source and then fuse them, and the fused features are more helpful for learning the optimal dynamic trading strategy.

It can be concluded from the experimental results that the trading strategies learned based on the deep reinforcement learning method can be dynamically adjusted according to the stock market changes and have more advantages. Compared with other trading strategies, our trading strategy has better performance for stocks with different trends, and the average SR value is the highest, which means that under the same risk, our trading strategy can get more profits. However, textual information such as investor comments and news events has an impact on the fluctuations of stock prices and cannot be ignored. It is important to obtain informative data from relevant texts for stock trading. In future research, we will consider different text information and train more stable trading strategies.

Data Availability

The experimental data in this article can be downloaded from Yahoo Finance (https://finance.yahoo.com/).

Conflicts of Interest

The authors declare no conflicts of interest.

Acknowledgments

This research was funded by the National Natural Science Foundation of China (Grant nos. 61972227 and 61902217), the Natural Science Foundation of Shandong Province (Grant nos. ZR2019MF051, ZR2020MF037, and ZR2019BF043), the NSFC-Zhejiang Joint Fund of the Integration of Informatization and Industrialization (Grant no. U1909210), Key Research and Development Project of Shandong Province (Grant nos. 2019GGX101007 and 2019GSF109112), and the Science and Technology Plan for Young Talents in Colleges and Universities of Shandong Province (Grant no. 2020KJN007).

REFERENCES

1 Li Y., Zheng W., Zheng Z. Deep robust reinforcement learning for practical algorithmic trading. IEEE Access. 2019; 7, 108014-108022, 10.1109/access.2019.2932789, 2-s2.0-85071105095

2 Fischer T., Krauss C. Deep learning with long short-term memory networks for financial market predictions. European Journal of Operational Research. 2018; 270(02): 654-669, 10.1016/j.ejor.2017.11.054, 2-s2.0-85039970639

3 Ding X., Zhang Y., Liu T. Deep learning for event-driven stock predictionProceedings of the Twenty-fourth international joint conference on artificial intelligence. 2015. Buenos Aires Argentina, 2327-2333

4 Carta S., Corriga A., Ferreira A., Podda A. S., Recupero D. R. A multi-layer and multi-ensemble stock trader using deep learning and deep reinforcement learning. Applied Intelligence. 2021; 51(8): 889-905, 10.1007/s10489-020-01839-5

5 Chia R., Lim S. Y., Ong P. K., Teh S. F. Pre and post Chinese new year holiday effects: Evidence from Hong Kong stock market. The Singapore Economic. 2015; 60(04): 1-14, 10.1142/s021759081550023x, 2-s2.0-84942199587

6 Huang Q., Wang T., Tao D., Li X. Biclustering learning of trading rules. IEEE Transactions on Cybernetics. 2014; 45(20): 2287-2298

7 Silver D., Schrittwieser J., Simonyan K., Antonoglou I, Huang A, Guez A, Hubert T, Baker L, Lai M, Bolton A, Chen Y, Lillicrap T, Hui F, Sifre L, van den Driessche G, Graepel T, Hassabis D. Mastering the game of go without human knowledge. Nature. 2017; 550(7076): 354-359, 10.1038/nature24270, 2-s2.0-85031918331

8 Silver D., Hubert T., Schrittwieser J., Antonoglou I., Lai M., Guez A., Lanctot M., Sifre L., Kumaran D., Graepel T., Lillicrap T., Simonyan K., Hassabis D. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science. 2018; 362(6419): 1140-1144, 10.1126/science.aar6404, 2-s2.0-85057740644

9 Mnih V., Kavukcuoglu K., Silver D., Rusu A. A., Veness J., Bellemare M. G., Graves A., Riedmiller M., Fidjeland A. K., Ostrovski G., Petersen S., Beattie C., Sadik A., Antonoglou I., King H., Kumaran D., Wierstra D., Legg S., Hassabis D. Human-level control through deep reinforcement learning. Nature. 2015; 518(7540): 529-533, 10.1038/nature14236, 2-s2.0-84924051598

10 Gu S., Holly E., Lillicrap T., Levine S. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA). June 2017. Singapore, 3389-3396, 10.1109/icra.2017.7989385, 2-s2.0-85027967014

11 Wolf P., Hubschneider C., Weber M., Bauer A. Learning how to drive in a real world simulation with deep q-networks. Proceedings of the IEEE Intelligent Vehicles Symposium (IV). June 2017. Los Angeles, CA, USA, 244-250, 10.1109/ivs.2017.7995727, 2-s2.0-85028061461

12 Ng A. Y., Kim H. J., Jordan M. I., Sastry S. Autonomous helicopter flight via reinforcement learning. Proceedings of the Conference and Workshop on Neural Information Processing Systems, vol. 16. 2003

13 Bao W., Yue J., Rao Y. A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLoS One. 2017; 12(7): e0180944, 10.1371/journal.pone.0180944, 2-s2.0-85024502828

14 Chen K., Zhou Y., Dai F. A LSTM-based method for stock returns prediction: a case study of China stock market. Proceedings of the IEEE International Conference on Big Data. Nov 2015. Santa Clara, CA, USA, 2823-2824, 10.1109/bigdata.2015.7364089, 2-s2.0-84963729019

15 Karaoglu S., Arpaci U., Ayvaz S. A deep learning approach for optimization of systematic signal detection in financial trading systems with big data. International Journal of Intelligent Systems and Applications in Engineering. 2017; 2017(Special Issue): 31-36, 10.18201/ijisae.2017specialissue31421

16 Neely C. J., Rapach D. E., Tu J., Zhou G. Forecasting the equity risk premium: the role of technical indicators. Management Science. 2014; 60(7): 1772-1791, 10.1287/mnsc.2013.1838, 2-s2.0-84897701069

17 Gorgulho A., Rui N., Horta N. Applying a GA kernel on optimizing technical analysis rules for stock picking and portfolio composition. Expert Systems with Applications. 2011; 38(11): 14072-14085, 10.1016/j.eswa.2011.04.216, 2-s2.0-79960027306

18 Sezer O. B., Ozbayoglu A. M. Algorithmic financial trading with deep convolutional neural networks: time series to image conversion approach. Applied Soft Computing. 2018; 70, 525-538, 10.1016/j.asoc.2018.04.024, 2-s2.0-85048331794

19 Kim T., Kim H. Y., Montoya A. Forecasting stock prices with a feature fusion LSTM-CNN model using different representations of the same data. PLoS ONE. 2019; 14(2): e0212320, 10.1371/journal.pone.0212320, 2-s2.0-85061557918

20 Gudelek M. U., Boluk S. A., Ozbayoglu A. M. A deep learning based stock trading model with 2-D CNN trend detection. Proceedings of the IEEE Symposium Series on Computational Intelligence. 2017. Honolulu, HI, USA, 1-8

21 Tsantekidis A., Passalis N., Tefas A. Forecasting stock prices from the limit order book using convolutional neural networks. Proceedings of the IEEE 19th Conference on Business Informatics, vol. 1. 2017. Thessaloniki, Greece, 7-12, 10.1109/cbi.2017.23, 2-s2.0-85029406625

22 Chen Y. Y., Chen W. L., Huang S. H. Developing arbitrage strategy in high-frequency pairs trading with filterbank CNN algorithm. Proceedings of the IEEE International Conference on Agents. 2018. Singapore, 113-116, 10.1109/agents.2018.8459920, 2-s2.0-85054528724

23 Sharpe W. F. The sharpe ratio. Journal of Portfolio Management. 1994; 21(1): 49-58

24 Liu S., Zhang C., Ma J. CNN-LSTM neural network model for quantitative strategy analysis in stock markets. Neural Information Processing. 2017; 2017, 198-206, 10.1007/978-3-319-70096-0_21, 2-s2.0-85035078272

25 Yan Y., Yang D. A stock trend forecast algorithm based on deep neural networks. Scientific Programming. 2021; 2021, 7510641

26 Tran D. T., Magris M., Kanniainen J. Tensor representation in high-frequency financial data for price change prediction. Proceedings of the IEEE Symposium Series on Computational Intelligence. 2017. Honolulu, HI, USA, 1-7

27 Dixon M., Klabjan D., Bang J. H. Classification-based financial markets prediction using deep neural networks. Algorithmic Finance. 2017; 6(3-4): 67-77, 10.3233/af-170176, 2-s2.0-85039961935

28 Long J., Chen Z., He W. An integrated framework of deep learning and knowledge graph for prediction of stock price trend: an application in Chinese stock exchange market. Applied Soft Computing. 2020; 91

29 Lee S. W., Kim H. Y. Stock market forecasting with super-high dimensional time-series data using ConvLSTM, trend sampling, and specialized data augmentation. Expert Systems with Applications. 2020; 161, 10.1016/j.eswa.2020.113704

30 Chen J. F., Chen W. L., Huang C.-P., Huang S.-H., Chen A.-P. Financial time-series data analysis using deep convolutional neural networks. Proceedings of the 2016 17th International Conference on Cloud Computing and Big Data. 2016. Macau, China, 87-92, 10.1109/ccbd.2016.027, 2-s2.0-85027469207

31 Chakole J. B., Kolhe M. S., Mahapurush G. D. A Q-learning agent for automated trading in equity stock markets. Expert Systems with Applications. 2021; 163, 10.1016/j.eswa.2020.113761

32 Watkins C. J. C. H. Learning from delayed rewards. 1989

33 Deng Y., Bao F., Kong Y. Deep direct reinforcement learning for financial signal representation and trading. IEEE Transactions on Neural Networks and Learning Systems. 2016; 28(3): 653-664

34 Wu J., Wang C., Xiong L. Quantitative trading on stock market based on deep reinforcement learning. Proceedings of the International Joint Conference on Neural Networks. 2019. Budapest, Hungary, 1-8, 10.1109/ijcnn.2019.8851831, 2-s2.0-85073249061

35 Lei K., Zhang B., Li Y. Time-driven feature-aware jointly deep reinforcement learning for financial signal representation and algorithmic trading. Expert Systems with Applications. 2020; 140, 10.1016/j.eswa.2019.112872

36 Lee J., Koh H., Choe H. J. Learning to trade in financial time series using high-frequency through wavelet transformation and deep reinforcement learning. Applied Intelligence. 2021

37 Wang Z., Schaul T., Hessel M. Dueling network architectures for deep reinforcement learning. Proceedings of the International Conference on Machine Learning. 2016

38 Van Hasselt H., Guez A., Silver D. Deep reinforcement learning with double q-learning. Proceedings of the AAAI Conference on Artificial Intelligence. 2016

39 Wu X., Chen H., Wang J. Adaptive stock trading strategies with deep reinforcement learning methods. Information Sciences. 2020

40 Théate T., Ernst D. An application of deep reinforcement learning to algorithmic trading. Expert Systems with Applications. 2020; 173

41 Chan E. Algorithmic trading: winning strategies and their rationale. 2013; 625; John Wiley & Sons

42 Hussain A. J., Knowles A., Lisboa P. J. G., El-Deredy W. Financial time series prediction using polynomial pipelined neural networks. Expert Systems with Applications. 2008; 35(3): 1186-1199, 10.1016/j.eswa.2007.08.038, 2-s2.0-44949210424

By Yawei Li; Peipei Liu and Ze Wang


1School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China
2School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan 250014, China
ISSN: 1058-9244
DOI: 10.1155/2022/4698656
