
ML, ML on the wall, will this merger be successful after all?

In this article, MioTech speaks with Dr. Matthias Buehlmaier, Program Director of HKU’s Bachelor of Business Administration (IBGM), on his award-winning research paper on whether the media can predict the success of corporate acquisitions and mergers using machine learning.

Dr. Matthias Buehlmaier, Program Director of HKU’s Bachelor of Business Administration (IBGM)

2018-12-19

Please tell me about your paper on “Financial Media, Price Discovery, and Merger Arbitrage”.

Mergers and acquisitions (M&A) play a huge role in the corporate finance world and are often an efficient business growth strategy. In 2017, the global M&A market was strong, with announced transaction volumes reaching USD 3.7 trillion. M&A has a major impact on shareholder wealth, firm value, and stock return performance. While most of the terms are negotiated prior to announcement, the success of a takeover is not guaranteed thereafter. A takeover could fail because it cannot obtain shareholder approval, regulatory approval, or sufficient financing. Financial media can convey information about these reasons, and the dissemination of the M&A deal in the financial media itself could be another reason for failure. Since there is a wealth of articles following an M&A announcement, there could be useful data points in the press for predicting the likelihood of deal completion.

This paper and a related study of mine develop and empirically confirm a theory that explains how the media predicts takeover outcomes after a takeover announcement. The study does not look at press content before the M&A announcement, to avoid look-ahead bias; moreover, from a finance perspective, that information would not be tradable. The paper shows that positive media content about the acquirer predicts takeover success.

Why did you decide to conduct this research paper? What was the importance of it?

I have had a long-standing interest in the role the financial press plays in the finance sector. It seemed important to me, especially since the industry is rich in data. You have CRSP for stock prices and Compustat for accounting data, and there is a wealth of reports on transactions, holdings, securities tax lots, and so on. Now third-party market data sources such as Thomson Reuters and S&P, all of which are valuable and could have important implications for financial outcomes, are gaining importance.

When I embarked on this research paper, there was hardly anyone focusing on text analysis for mergers and acquisitions. The financial media had so far received little attention as a potential determinant of takeover success. I realised that if you’re looking at something that no one else is looking at, you could find these pockets of market inefficiencies and exploit them. And that’s why I chose to look into financial media and its influence on the outcomes of takeover deals and how it can make the M&A market more efficient.


How did you go about executing this research paper?

There’s a saying that rings true for this research process: when you build a boat, the first boat you build is for your enemy, the second boat is for your friend, and the third boat you build is for yourself. Especially in 2011, when programs weren’t as sophisticated as they are today, building the model for this analysis required many iterations and a lot of trial and error.

It began with the time-consuming, labor-intensive steps of acquiring the data, cleaning it, and making it amenable to our analysis.

The second step involved picking a package from a software library that could work with our dataset. Back then, options were limited, and the package we initially used in R began to show signs of inefficiency as our sample grew to over 130,000 news articles: each new dataset meant copying the entire text corpus again.

The third step was about managing the necessary computing power. I had to move from my own local computer to a computer cluster (several computers hooked up together) to get enough power. During this time, Hadoop had just launched, and I believe I was one of the first few to use Hadoop specifically for finance. As a result, I rewrote the code from scratch three times for a distributed computing environment.
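The map/reduce pattern behind Hadoop can be sketched in miniature with the classic word-count example. This is a hypothetical illustration in plain Python, not the author's actual code, and the three sample "articles" are invented; on a Hadoop cluster the map and reduce phases would run in parallel across machines:

```python
from collections import Counter
from functools import reduce

# Hypothetical mini-corpus standing in for the 130,000 news articles.
articles = [
    "acquirer raises offer and shareholders approve",
    "regulator blocks merger and shareholders object",
    "offer approved and merger completed",
]

def map_phase(article):
    # Map: emit a count of 1 per word occurrence in a single article.
    return Counter(article.split())

def reduce_phase(a, b):
    # Reduce: merge partial counts produced by different workers.
    return a + b

word_counts = reduce(reduce_phase, map(map_phase, articles))
print(word_counts["merger"])  # "merger" appears in two of the three articles
```

The same map and reduce functions scale from this toy list to a distributed corpus, which is exactly why the paradigm suited a growing news dataset.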

Hadoop MapReduce is legacy now, and people have moved on to Spark. R and Python have different philosophies: they say R is written by statisticians, whilst Python is written by computer scientists. The two overlap a great deal in data science, and which one to use is often only a matter of personal preference. Both R and Python can do big data, machine learning, and statistics, but in my opinion Python has more of a focus on machine learning and big data, while R has more of a traditional statistics background. I believe it helps to be well versed in both R and Python for text analysis.

After analyzing 130,000 news articles, what did you find?

Our paper set out to answer two questions: how important the media is for the likelihood of deal completion, and why rational target shareholders pay attention to the news even though they are fully aware that the media can be manipulated by insiders such as the acquirer’s management. Across the over 130,000 articles on the 1,200 takeover attempts we analysed, we found that positive media content about the acquirer predicts takeover success. Conversely, negative content predicts failure. Shareholders do pay attention to the news, and the information they obtain influences whether they approve the deal.
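As a toy illustration of this finding, one could imagine scoring media tone with word lists and thresholding the result. This is a hypothetical sketch with invented word lists, not the paper's actual method, which fits a full statistical model to article text:

```python
# Hypothetical positive/negative word lists; the paper estimates media
# tone from article text with a richer statistical model.
POSITIVE = {"approve", "approved", "support", "synergy", "completed"}
NEGATIVE = {"oppose", "blocked", "lawsuit", "withdraw", "failed"}

def media_tone(articles):
    """Net tone: (positive hits - negative hits) / total hits."""
    pos = sum(w in POSITIVE for a in articles for w in a.lower().split())
    neg = sum(w in NEGATIVE for a in articles for w in a.lower().split())
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

def predict_completion(articles, threshold=0.0):
    # Positive net tone about the acquirer -> predict the deal completes.
    return media_tone(articles) > threshold

print(predict_completion(["Shareholders approve the merger",
                          "Analysts support the deal"]))  # True
```

The threshold stands in for the decision rule a fitted model would learn; the directional logic (positive acquirer coverage predicting completion, negative coverage predicting failure) mirrors the paper's headline result.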

Furthermore, in a separate study, we found that the media only plays a role in a separating equilibrium, since that is the only equilibrium in which the media signal carries information useful to target shareholders. You might ask: if individuals can lie, manipulate, or spin the news, should we still pay attention to them? The answer is yes. You can look at this in two ways, theoretically and empirically. Theoretically, the idea is that companies, out of their own interest, share positive news, whilst negative news isn’t shared in the first place. There is little incentive for companies to lie: it is costly and legally risky to falsify takeover announcements.

Empirically speaking, it’s important to consider whether we can learn something from the data even if people lie. Beyond the general positivity or negativity of the reporting, the data can also reveal information about the acquirer. To illustrate, suppose hypothetically that the Fed has been putting out announcements that there will be no interest rate hikes, but a month later it raises rates anyway. That in itself is a signal you can extract from the data. If there is a pattern, whether it is the truth or lying in a predictable way, machine learning can pick it up. In relation to this study, it can predict merger outcomes to some degree and tell you something about the behavioral patterns of acquirers.

With more sophisticated AI solutions today, how will this impact the investment landscape?

Compared to when I first started this research, text analysis is widely used now. No one questions it, and you don’t have to justify why you’re conducting research on text analysis for mergers and acquisitions. But I see this causing fierce competition. It was not easy then, and it’s not easy now. With larger datasets and more sophisticated, widely available technologies, markets are likely to become more efficient. If everyone is trading on the same data-driven signals, no one has an edge. It’s going to be harder and harder to make money on it.

I understand that this year you’ve been teaching the world’s first course on text analysis and NLP in finance and fintech. Why is this important?

Finance graduates already face a competitive job market, and the best way forward is to find ways to differentiate yourself. While not everyone will have to write programs in the future, it’s good to have some understanding of programming.

If you look at the way trading floors have transformed over the years, you’ll realise that automation has reduced the number of humans needed to enter, process, and monitor trades. To be frank, it’s a miracle it has taken this long. With the help of automation, trading has seen reduced transaction costs and greater liquidity. On top of that, you have quantitative investing and machine learning techniques that can help traders and portfolio managers make better investment decisions.

One can’t fully judge and understand these machine learning techniques if one hasn’t done at least a little bit of programming. Some fields might be less affected, but if you’re working in a remotely quantitative area of finance it will become increasingly difficult to advance one’s career without at least basic programming knowledge. That is why I am excited to introduce this new course to students at HKU. This course is a reflection of the growing importance of machine learning and artificial intelligence in the financial space.
