Pricing Stocks With Artificial Intelligence






Many of you may be wondering how analysts derive their price targets, and if you think you already know, I’m afraid to say that there’s a new spin being added to things that will likely change the asset pricing sphere for the foreseeable future. We’re currently working in alliance with CoGo Data Solutions to curate a variety of predictive models with the intention of setting accurate price targets, and here’s how we’re going about it.

Artificial Neural Networks, or ANNs for short, are a type of artificial intelligence model that aims to replicate the human brain, using an iterative learning process to correlate and decipher new information on its way to a final output.

ANN architecture compared with a biological neuron (ResearchGate – Sharifi, 2020)

Above is a comparison of a brain’s neuron with the architecture of an ANN. The input node mimics the dendrites; in practical terms, this is where the analyst feeds in his or her pricing variables. From there, a series of hidden layers passes the information forward and identifies meaningful patterns. The final stage is the output node, which mimics the axon; the output here is the stock price we’re trying to determine, along with an indication of how reliable the model’s forecasts are.
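To make that input-to-hidden-to-output flow concrete, here is a minimal sketch of such a feedforward network in Python with Keras. The feature count, layer sizes, and placeholder data are illustrative assumptions, not our production setup.

```python
# Minimal feedforward ANN sketch (illustrative only, not the production model).
# Inputs: a handful of hypothetical pricing variables; output: a single price estimate.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data: rows = observations, columns = pricing variables
# (e.g. earnings yield, rate spread, sector momentum); y = target prices.
X = np.random.rand(500, 5)
y = np.random.rand(500, 1)

model = keras.Sequential([
    keras.Input(shape=(5,)),               # input node: the analyst's pricing variables
    layers.Dense(32, activation="relu"),   # hidden layers learn non-linear patterns
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                       # output node: the estimated stock price
])

model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)
```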

Traditional Asset Pricing

To understand an ANN’s value-add, we first need to look at traditional asset-pricing models. The Fama-French 5-factor model (figure below) and Ross’ Arbitrage Pricing Theory (APT) model (figure below) have often been regarded as the “gold standard” for asset pricing in recent years. The former explains a stock’s excess return with a linear formula built on market, size, value, profitability, and investment factors, alongside the stock-specific risk loading (also known as the Beta). The latter judges the stock’s prospects in relation to macroeconomic variables by testing the stock’s Beta relative to economic cycles. Historically, these two models have been applied on a case-by-case basis, and a large amount of subjectivity has usually surrounded price-target decision-making.

Fama-French 5 Factor Model (ResearchGate – Mateus, 2017)

APT Model (Slide Player)
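For reference, and using the usual textbook notation, the two models shown above can be written as follows:

```latex
% Fama-French 5-factor model: excess return explained by five factors
R_{it} - R_{Ft} = \alpha_i + \beta_i (R_{Mt} - R_{Ft}) + s_i\,\mathrm{SMB}_t
                + h_i\,\mathrm{HML}_t + r_i\,\mathrm{RMW}_t + c_i\,\mathrm{CMA}_t + \varepsilon_{it}

% Arbitrage Pricing Theory: expected return as a linear function of k macro factors
E(R_i) = R_f + \beta_{i1} F_1 + \beta_{i2} F_2 + \dots + \beta_{ik} F_k
```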

The prominent issues with these models are that they’re rigid and don’t account for a broad enough range of variables. They also don’t account for changes in the underlying variables: market sentiment may, for instance, ignore financial variables and instead price stocks on alternative factors such as COVID stringency or weather patterns (e.g., for agricultural stocks). This is where ANNs are valuable, because we can build a model that incorporates a range of financial and non-financial variables to forecast a price target with far more flexibility. But let’s delve into the modeling process first before explaining why we consider an ANN method more reliable.

LSTM Neural Networks

The Architecture

LSTM stands for Long Short-Term Memory and is a type of ANN. LSTMs are extremely useful for time-series analysis and, in particular, stock forecasts. The best way to explain how the method works is to look at its architecture.

LSTM Architecture (Springer – Pawar et al., 2018)

Again, the input node is where the initial variables are fed in. The variables then pass through a series of hidden layers in a feedforward process. A backpropagation process follows, in which the hidden layers pass error information back through the network to curate the best possible output variable, i.e., the forecasted stock price. Another value-add of the LSTM network is its forget gate, which allows the network to discard noise instead of treating it as true signal. The best way to contextualize the forget gate is to look at the 2008 housing crisis and the 2020 pandemic outbreak, both of which triggered radical monetary policy. The LSTM will consider these events, but it accounts for their “outlier effect” and thus holds better predictive ability in the long run, because markets generally revert to strong-form efficiency shortly after extreme events.
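As a rough illustration of this architecture, below is a minimal LSTM sketch in Python with Keras. The window length, layer sizes, and placeholder data are assumptions for demonstration, not the model we’re building with CoGo Data Solutions.

```python
# Minimal LSTM sketch for a daily price series (illustrative assumptions throughout).
# Each training sample is a 60-day window of features; the target is the next price.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

timesteps, n_features = 60, 5                       # hypothetical window and variable count
X = np.random.rand(1000, timesteps, n_features)     # placeholder input sequences
y = np.random.rand(1000, 1)                         # placeholder next-step prices

model = keras.Sequential([
    keras.Input(shape=(timesteps, n_features)),
    # Each LSTM cell carries input, output, and forget gates; the forget gate is what
    # lets the network down-weight noise such as one-off shock periods.
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(32),
    layers.Dense(1),                                # forecasted stock price
])

model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=25, batch_size=32, verbose=0)
```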

How do we determine Reliability?

We need to determine specific correlations before the input variables are fed into the model. This stage doesn’t determine the total effect these variables will have on the forecasted price; however, the relationships will most likely hold in some shape or form, which allows us to streamline a better predictive model with fewer input variables to consider. Below is a Principal Component Analysis (PCA) that we recently ran, together with CoGo Data Solutions, on JPMorgan (NYSE:JPM) stock.

Principal Component Analysis of JPM input variables (CoGo Data Solutions, with variables from Yahoo Finance, St. Louis Fed, and Duke Library)
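For readers who want to reproduce this kind of screen, here is a minimal PCA sketch in Python with scikit-learn. The file name and feature columns are hypothetical stand-ins, not the actual CoGo Data Solutions dataset.

```python
# PCA sketch on a hypothetical JPM feature table (file and columns are assumptions).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# df would hold candidate input variables, e.g. rates, spreads, volumes, macro series.
df = pd.read_csv("jpm_features.csv")          # hypothetical file of numeric features
scaled = StandardScaler().fit_transform(df)   # PCA is scale-sensitive, so standardize first

pca = PCA(n_components=2)
components = pca.fit_transform(scaled)

# Explained variance shows how much information the first two components retain;
# the loadings show how each variable contributes, as on the biplot above.
print(pca.explained_variance_ratio_)
print(pd.DataFrame(pca.components_, columns=df.columns, index=["PC1", "PC2"]))
```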

I’ll explain the exact findings of the plotted variables in a follow-up article on JPMorgan stock. The important takeaway here is that the chart displays the grouping of correlations and their magnitude (as seen on the contribution gauge). The other notable things to look at for forecasting reliability are the residuals of the output (Root Mean Square Error, or RMSE) and the variance in the output variable explained by the input variables (R-Squared).

RMSE: A residual has no upper bound, but a value between 0.2 and 0.5 or below is usually considered good.
R-Squared: Values fall between 0 and 1; an R-Squared of 0.75 or above is usually considered to indicate a well-explained target price.

Source: CFA Institute
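In practice, both checks can be computed on a held-out test set in a few lines. The sketch below uses scikit-learn with placeholder numbers; note that the 0.2 to 0.5 RMSE rule of thumb implicitly assumes prices scaled to a common range.

```python
# Sketch of the two reliability checks on a held-out test set (values are placeholders).
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([150.0, 152.5, 149.0, 155.0])   # placeholder actual prices
y_pred = np.array([151.0, 151.8, 150.2, 154.1])   # placeholder model forecasts

rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # lower is better
r2 = r2_score(y_true, y_pred)                        # closer to 1 is better

print(f"RMSE: {rmse:.3f}")
print(f"R-squared: {r2:.3f}")
```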

Again, this topic will be covered extensively in follow-up articles on JPMorgan stock, but as the reader, you need to understand how we determine the accuracy of our forecasts.

Testing and Training

I’ve mentioned on a few occasions now that we’re working on a predictive model for JPMorgan stock. I’d like to share where we currently stand.

Predictive model for JPM stock (CoGo Data Solutions, with variables from Yahoo Finance, St. Louis Fed, and Duke Library)

The chart above is the predictive model, which is about three weeks away from optimization, as the feature engineering still needs to be improved. The green phase is the initial feed-in data, which covers 80% of the analyzed horizon. The remaining 20% was used as the testing period, and the forecast period is one year. The model trains its memory on the green region and tests its forecasts on the blue region; once the testing pattern is optimized, we expect to predict a 1-year price target with roughly 90% reliability, assuming the market holds a semi-strong form of efficiency.
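A bare-bones sketch of that chronological 80/20 split looks as follows; the file name is a hypothetical placeholder, and the real pipeline adds feature engineering on top.

```python
# Sketch of the 80/20 chronological split described above (no shuffling for time series).
import numpy as np

prices = np.loadtxt("jpm_daily_closes.csv")   # hypothetical single-column price history
split = int(len(prices) * 0.8)

train = prices[:split]    # "green" region: the model trains its memory here
test = prices[split:]     # "blue" region: forecasts are checked against this hold-out

# Once test-set error (RMSE) and fit (R-squared) are acceptable, the model is re-fit on
# the full history and rolled forward to produce the one-year price target.
```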

Potential Drawbacks

Readers must understand that this method of pricing stocks isn’t a unicorn in itself, meaning it can’t be used in isolation. A successful data pipeline of influencing variables needs to be constructed, which requires solid knowledge of the underlying industry, business, and stock, as well as the financial theory behind them. Because these models are sensitive to noise, forecasts can easily end up less accurate than simply eyeballing returns when the input variables aren’t carefully curated.

Final Word

Artificial Neural Networks have come to prominence in the stock market lately as analysts increasingly embrace modern technology. The process is more reliable than the human brain alone and will likely yield better results. However, it has to be said that feature engineering requires high-level acumen and is ultimately the make-or-break factor for predicting stock prices accurately.



