Dr. Seth Huang is not only an AI researcher, but also an investor in computational finance applications. He has successfully applied general machine learning algorithms to pattern recognition and model selection in financial modeling, and continues to invest in quantitative trading funds and research talent.
Opinions expressed by MioTech Insights contributors are their own.
ENHANCED DATA ANALYTICS
Artificial Intelligence is not a master key. AI is an arsenal. Advanced AI scientists use a family of deep learning models covering at least 10 different subfields, such as natural language processing, visual recognition, self-learning machines, and combinations of various techniques. The field is data-driven, context-driven, and expert-driven.
“Artificial Intelligence is not a master key. AI is an arsenal.”
For computer vision, you may have an object detection model and a facial detection model, and you can then use advanced visual recognition neural nets, such as residual networks, for further recognition design. These can be used to detect traffic, accidents, oil tankers, or the number of cars in a store’s parking lot.
Advanced visual recognition neural nets used to better detect different vehicles on a construction site
Two aspects make advanced methods powerful. First, a deep neural net can ingest 10 to 1,000 times more information than conventional models. In a multivariate regression or factor-investing model, one may use 10 or at most 100 factors. With deep learning, one can construct a market model with 10,000 factors, combining public data, language, news, and photos. You can include various data types in different sequences and feed all of them into a single neural net model. AI can therefore incorporate many times more information to make better assessments.
Much like a photograph, where more pixels give a clearer picture, the larger a model’s dataset, the better its assessment
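The point about combining many heterogeneous data types into one model can be sketched very simply: unlike a 10-factor regression, a deep net simply takes one long concatenated feature vector. The factor names and numbers below are illustrative, not real data.

```python
# Minimal sketch: heterogeneous data sources flattened into one feature
# vector, the way a single deep net can ingest far more inputs than a
# 10-100-factor regression. All values are illustrative.

def build_feature_vector(price_factors, news_sentiment, image_features):
    """Concatenate numeric factors, text-derived sentiment scores, and
    image-derived features into one input vector for a single model."""
    return list(price_factors) + list(news_sentiment) + list(image_features)

price_factors = [0.02, -0.01, 1.3]   # e.g. momentum, value, quality factors
news_sentiment = [0.7, -0.2]         # e.g. scores from an NLP model
image_features = [12.0, 3.5]         # e.g. cars counted in parking-lot photos

x = build_feature_vector(price_factors, news_sentiment, image_features)
print(len(x))  # 7 features feeding one model; real systems scale to thousands
```

In practice the individual pieces would come from separate sub-networks (language, vision) feeding a shared model, but the principle of one joint input is the same.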
Second, the “level of abstraction” in deep learning is vastly superior. In finance data analytics, information is either immediately priced in or mostly just noise, so a high level of abstraction is especially important. A recent advance of mine combines a knowledge graph with news sentiment: one can study how various “entities” relate to each other and greatly expand the amount of information included in financial models.
There is also a rise in reinforcement learning applications. Deep reinforcement learning is the core of AlphaGo. A reinforcement learner is a self-learner, experimenting with various actions to optimize certain tasks, such as reducing costs or maximizing website stickiness. Google has begun using this method to manage server room electricity usage and achieved more than a 30% reduction in costs. This kind of model has to be used carefully, as it is notoriously unstable, but the problem can be mitigated by a technique called an “ensemble net.” Lastly, deep reinforcement learning combined with an LSTM can be used to optimize trade executions, which are much simpler tasks than price forecasts.
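The stabilizing idea behind an ensemble is easy to show in miniature: average several independently trained estimators so no single unstable learner dominates. The "estimators" below are random placeholders standing in for trained value networks, not a real RL agent.

```python
import random

# Sketch of the ensemble idea: average the outputs of several
# independently trained (here, randomly perturbed) value estimators.
# The ensemble's error is never worse than the worst single member's.

def make_estimator(seed):
    """Placeholder for one trained model; bias stands in for training noise."""
    rng = random.Random(seed)
    bias = rng.uniform(-0.5, 0.5)
    return lambda state: state * 0.1 + bias

def ensemble_value(estimators, state):
    """Average member predictions; variance shrinks as members are added."""
    return sum(est(state) for est in estimators) / len(estimators)

estimators = [make_estimator(seed) for seed in range(20)]
print(round(ensemble_value(estimators, state=5.0), 3))
```

Because the biases partially cancel, the averaged estimate stays closer to the noise-free value (here 0.5) than the most-perturbed member does.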
REDUCED PORTFOLIO RISK
AI can yield a higher Sharpe ratio through forecasts on “semi-stationary processes.” In finance, we mostly deal with return, risk, and global correlation. Returns are non-stationary, meaning past patterns have high noise content and do not predict the future well. Volatility and correlation are relatively easier to forecast, and when you can control the risk, you can manage the returns through leverage.
"AI can yield a higher Sharpe ratio through forecasts on 'semi-stationary processes.'"
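"Control the risk, manage the returns through leverage" is the logic of volatility targeting, which can be sketched in a few lines. The target, forecast, and leverage cap below are illustrative numbers, and the volatility forecast is taken as given rather than produced by a model.

```python
# Sketch of volatility targeting: scale the position so that forecast
# portfolio volatility hits a chosen target, capped for safety.

def volatility_target_leverage(target_vol, forecast_vol, max_leverage=3.0):
    """Leverage that scales forecast volatility up/down to the target."""
    return min(target_vol / forecast_vol, max_leverage)

target_vol = 0.10    # desired annualized volatility, 10%
forecast_vol = 0.05  # model's volatility forecast, 5%

lev = volatility_target_leverage(target_vol, forecast_vol)
print(lev)  # 2.0: double the position to reach the risk target
```

The better the volatility forecast, the tighter the realized risk clusters around the target, which is exactly where a stronger AI forecast feeds through into a higher Sharpe ratio.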
There are several ways to reduce portfolio risk, but the two best choices are stronger risk and correlation forecasts for instruments, and hedging. The first is highly dependent on the investment thesis and the rebalancing time; I have filed patents on it. Correlation is hard to forecast because of computational costs. The second way is to buy hedging products: selling calls, buying puts, or shorting index futures. All of these options are expensive, and they get a lot more expensive during market turbulence. A better way is to use more creative products such as credit default swaps, or to buy fixed-income products to pay for the hedging.
REDUCED OPERATIONAL COSTS
Cost reduction comprises 90% of practical AI applications. It is less sexy, but it is the true motivator for wide adoption.
For traditional execution tasks, the sell-side trading floor might be very empty in the near future. Trade execution and the traditional trader-client relationship will be the first to be replaced by AI-driven trade systems. Steve Cohen and Tudor Investment are both developing algorithms to replace traders. This can save tens of millions of dollars per year, and sometimes a 10% reduction in cost can be a 30% increase in profit margin. Practically speaking, several methods within deep learning, such as deep reinforcement learning, can learn to optimize execution, i.e., minimize trading costs for clients.
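The "10% cost cut, 30% profit jump" arithmetic holds whenever costs are three times profit, which is easy to verify. The revenue and cost figures below are illustrative, not taken from any firm.

```python
# Worked check: a 10% cost reduction becomes a 30% profit increase
# whenever costs are three times profit. Figures are illustrative.

revenue, cost = 100.0, 75.0
profit_before = revenue - cost        # 25.0

cost_after = cost * 0.90              # 10% cost reduction
profit_after = revenue - cost_after   # 32.5

increase = (profit_after - profit_before) / profit_before
print(round(increase, 2))  # 0.3, i.e. a 30% jump in profit
```

The leverage of cost cuts on profit is largest exactly where margins are thin, which is why execution-cost optimization is such an attractive AI target.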
For investment banks, an equity research desk is a large and necessary cost center. It was traditionally part of critical client services. I am personally more old-fashioned and do believe in the merit of solid fundamental research, such as management visits and on-site due diligence.
But in modern times, the research desk has broad coverage, meaning one person has to cover numerous companies, sometimes across industries. Analysts are therefore unable to cover companies in depth and resort to publicly available metrics, earnings forecasts, management announcements, and so on. This is why there has been a greater herding effect over the past 15-20 years: most analysts give very similar ratings with very similar price ranges. When people use similar models on similar data, they arrive at similar conclusions.
Publicly available data is causing a herd mentality amongst analysts
One good example is Tencent’s performance from early 2018 until now. 80-90% of analysts recommended a buy or strong-buy rating while Tencent’s price dropped by over 40%. There was a major theme, China’s restriction on new mobile gaming licenses, which is still hard for AI to understand and model. There is also a systemic incentive challenge within sell-side equity research that may limit an analyst’s freedom to properly gauge a firm’s performance. I believe AI can be used together with experienced analysts to improve that.
A brilliant AI system always has a brilliant human creator. AI’s core is human ingenuity. Singularity is when we create a machine more intelligent than ourselves, and the machine learns to become smarter without human interference. That level of technology simply does not exist. Therefore, a sentiment analysis system has to start with the questions “what are we creating it for, and how will the users interact with it?”
"A brilliant AI system always has a brilliant human creator. "
Building an AI system is like building an F1 race car. A Ferrari can run faster than all humans, but it is not learning to get faster by itself. This means AI-driven sentiment analysis is human-dependent. It is human-experience dependent. It is human-mistake dependent. Mistakes are so valuable that the entire recent evolution of AI techniques mainly relies on creating new tools to fix past mistakes and problems in model training.
In sentiment analysis, a recent trend is relationship mapping, which I believe MioTech specializes in. It is called a “knowledge graph” in the ML community, and it is similar to creating a family tree for industries. We construct how objects and entities are linked to each other, and it is one of the biggest advancements of the last few years.
You can extract “entities” such as “Apple” and “iPhone” from the same news article and recognize their relationship. For now, the best knowledge graphs are still hand-crafted by humans.
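At its core, a knowledge graph is a set of subject-relation-object triples that can be queried in both directions. A toy sketch in that spirit, with hand-crafted entities and relations that are purely illustrative:

```python
# Toy knowledge graph as subject-relation-object triples, in the spirit
# of linking "Apple" and "iPhone" from the same article. The entities
# and relations are illustrative and hand-crafted, as the text notes.

triples = [
    ("Apple", "manufactures", "iPhone"),
    ("Foxconn", "assembles", "iPhone"),
    ("Apple", "competes_with", "Samsung"),
]

def related_entities(graph, entity):
    """Every entity linked to `entity` by any relation, in either direction."""
    out = set()
    for subj, _rel, obj in graph:
        if subj == entity:
            out.add(obj)
        elif obj == entity:
            out.add(subj)
    return out

print(sorted(related_entities(triples, "iPhone")))  # ['Apple', 'Foxconn']
```

A production graph would hold millions of triples with typed relations and confidence scores, but the query pattern, walking edges outward from an entity mentioned in the news, is the same.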
The next question is “how do people use it?” I think tracking the global sentiment for all related entities is a useful tool. You can track all solar panel companies, so that when good or bad news hits, you can gauge the level of impact on each company. Some impacts may be delayed (American news at 6 PM has not been priced in at 9 AM in Asia), creating an alpha opportunity.
Knowledge Graph provides greater clarity in interpreting the interdependence of markets, companies, financial instruments, and individuals.
Aside from knowledge graphs, for most financial institutions and asset management companies, I see the true value of AI in the practical question, “can we replicate the best traders’ decisions and automate them?”
An expert trader’s experience has strong value. It may come from studying yield curves, volatility surfaces, and market volatility, as well as message exchanges on Bloomberg. A great trader requires years of training to internalize knowledge and logic. Since the study of financial products is essentially the study of crowd behavior proxies, it is critical to learn how traders think and then quantify the thought process. The reverse does not work.
I believe AI’s true value is in full integration, expert teaching, and the eventual execution. When AlphaGo was first designed, it was trained with a European champion. One pitfall in today’s AI design is taking an advanced technique and simply throwing data at it. All non-linear modeling techniques fall under the umbrella of curve fitting, and curve fitting is correlation detection, not causation detection. One simple example: you see a crowd at 6 PM all walking to the subway station, but that does not mean they are related or heading to the same place. It is a serious mistake to equate correlation with an actual statistical relationship, or worse, a causal relationship.
"Curve fitting is correlation detection, not causation detection."
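The subway-crowd point can be demonstrated numerically: two series that share nothing but a common trend will show a near-perfect correlation. The series below are synthetic and deterministic, named only for illustration.

```python
import math

# Two series with no causal link, both carrying the same upward trend
# (the "crowd walking to the subway at 6 PM"): curve fitting reports a
# strong correlation anyway. Data are synthetic and deterministic.

n = 50
ice_cream_sales = [10 + 0.5 * t + math.sin(t) for t in range(n)]
electricity_use = [200 + 3.0 * t + math.cos(2 * t) for t in range(n)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(ice_cream_sales, electricity_use)
print(round(r, 3))  # close to 1.0, despite no causal relationship
```

Detrending or differencing the series before measuring correlation is the standard defense, which is precisely the "logic and causation" step a human expert imposes on noisy data.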
A general rule of thumb: if you know the data noise is high (such as the 90-95% in finance), you should weigh the logic and causation in the data more heavily. The higher the noise ratio, the more valuable human expertise becomes. Modeling is not about how to detect a relationship but about learning which spurious relationships to discard.
OPTIMAL FORECAST PORTFOLIO CONSTRUCTION
Within the financial industry, based on the conversations and consultations I have had with global investment banks and asset management companies, I foresee the rise of AI-driven risk premia.
"I foresee the rise of AI-driven risk premia."
Given the current trend of lower alpha and near-zero fees, the asset management industry is going through major consolidation. When a company manages billions of dollars, it inevitably focuses a big part of its effort on risk premia. There are many discussions of market timing and themes, but at the end of the day, people are asking, “can we produce market returns with lower risk or downside control?”
The first disadvantage of AI-driven sentiment products is the potential time lag between sentiment generation and reward. It may take hours, days, or even months to complete an investment cycle.
Second, advanced sentiment signals may have 60-65% accuracy, which is enough to power a very strong hedge fund. But as a product, a 35-40% error rate may invite aggressive attacks, not to mention that users may not apply the sentiments properly.
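Why 60-65% accuracy is powerful becomes clear with a little arithmetic: under a symmetric win/loss payoff, the expected edge per signal is 2p − 1. The symmetric-payoff assumption is illustrative; real trades rarely pay off symmetrically.

```python
# Expected profit per unit risked for a signal with hit rate p,
# assuming (illustratively) a symmetric win/loss payoff of 1 unit.

def expected_edge(hit_rate, win=1.0, loss=1.0):
    """Expected profit per unit risked: p*win - (1-p)*loss."""
    return hit_rate * win - (1.0 - hit_rate) * loss

for p in (0.50, 0.60, 0.65):
    print(p, round(expected_edge(p), 2))
```

A coin flip (50%) earns nothing, while 60-65% accuracy earns 0.20-0.30 units per unit risked before costs; a substantial edge for a fund, yet one in three signals is still visibly wrong to a client reading them individually.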
One dilemma for a third-party data analytics company is that it cannot control whether the client will use the tool properly, or even whether the tool is helpful. For example, it is relatively easy to map relationships, create a knowledge graph, and then monitor the news, but it is impossible to know how much is already priced in. An analyst has no control over how a trader will use the information, and an analytics provider does not know whether it will receive any credit.
There are best practices for using AI in portfolio construction. I still believe a human-led data science team and a trading team working together will produce the best result. For portfolio construction, the interaction has to be continuous and total, and the AI team has to take part in every step to ensure the true value of the AI algorithms is realized.
The views expressed above reflect those of the authors and are not necessarily the views of MioTech.