How Successful Investors Are Using AI to Get ESG Data: A Quick Guide
November 16, 2022 • 5 min read
Environmental, social, and governance (ESG) data.
It’s a valuable tool that’s become a standard measurement in sustainable finance for corporate stakeholders.
However, growing demand for accurate and timely ESG data in investment decision-making and the broader ESG finance field also makes it difficult to obtain.
If you’re reading this, you likely use ESG data regularly and want to improve your data or your insights into it. Or you’re new to ESG data and want to understand it better, including how to get accurate and timely data using AI. Whatever your reason, we’ve got you covered.
But before we dive into these points, let’s cover a quick history of ESG.
Who created ESG (plus when and why)
Kofi Annan, former United Nations Secretary-General, invited a group of financial institutions to develop policies and guidance on how to better incorporate ESG issues in securities brokerage services, asset management, and associated research functions. In 2004, this joint initiative published “Who Cares Wins: The Global Compact Connecting Financial Markets to a Changing World,” a report that the UN later shared in the 2006 United Nations Principles for Responsible Investment (PRI) report. It was the first time ESG criteria were incorporated into companies’ financial performance evaluations.
Stronger, more resilient, and sustainable
According to the “Who Cares Wins” report, the contributors were convinced that “in a more globalized, interconnected, and competitive world, the way that environmental, social, and corporate governance issues are managed is part of companies’ overall management quality needed to compete successfully.” The report goes on to state that “Companies that perform better with regard to these issues can increase shareholder value by, for example, properly managing [ESG risks], anticipating regulatory action or accessing new markets, while at the same time contributing to the sustainable development of the societies in which they operate.”
The cohort believed ESG issues could significantly affect a company’s reputation and brand, an essential part of its value. And as the report puts it, “Endorsing institutions are convinced that a better consideration of environmental, social, and governance factors will ultimately contribute to stronger and more resilient investment markets, as well as contribute to the sustainable development of societies.”
As a standard measurement, ESG becomes a way for companies to demonstrate accountability, trust, and transparency in their ESG goals to appeal to customers, employees, and investors. But how is this data produced and seen?
Where does ESG data come from?
Besides implementing ESG principles and policies, companies are asked to provide information and reports on related performance in a consistent and standardized format. This ESG reporting includes identifying and communicating key challenges and value drivers through normal investor relations communication channels. Companies are also encouraged to mention ESG information in their annual reports.
As you might notice in this scenario, ESG data comes primarily from the very companies we want to evaluate. See a conflict here?
Today’s ESG data challenges
At their core, ESG metrics capture a company’s performance on a given ESG issue. When this aim is achieved, investors can use the data to evaluate and hold companies accountable for their ESG performance. But how would you know whether ESG data accurately captures a firm’s performance?
ESG measuring, data, and how companies report them are inconsistent.
Lack of benchmarking transparency undermines the reliability of peer performance ranking.
ESG data providers deal with “data gaps” differently, and their gap-filling approaches could lead to significant discrepancies.
Interpretation differences among ESG data providers are considerable and are growing with the quantity of data becoming publicly available.
“Although 92% of S&P companies were reporting ESG metrics by the end of 2020, according to a 2020 BlackRock survey of clients, 53% of global respondents cited ‘poor quality or availability of ESG data and analytics’ and another 33% cited ‘poor quality of sustainability investment reporting’ as the two biggest barriers to adopting sustainable investing.” — Deloitte
Artificial intelligence (AI) to meet rising ESG data demands
Even as investors consider ESG one of many major market factors, sourcing and analyzing data remains a problem. “The absence of standardized ESG datasets and reporting methodologies makes it difficult for issuers to disclose meaningful information on sustainability,” according to a post on WorldQuant.
But despite these limitations, demand for ESG investing continues to grow. For instance, in its 2021 Key Findings, RBC Global Asset Management found that 75% of its 800-plus institutional investor respondents had integrated ESG principles into their investment approach, up from 67% in 2017.
Machine learning helps meet this demand. For instance, advances in natural language processing (NLP) have made it possible to extract unstructured data from web sources, like news, blogs, forums, and social media, to gain timely and accurate ESG insights. This alternative data has been integral for seeing an entity’s ESG controversies or events in near real time, providing a unique perspective on ESG data and filling data gaps more accurately.
How to get ESG data using natural language processing (NLP)
NLP algorithms can read billions of news articles, posts, and other text-based web documents. They categorize the extracted data and can determine positive and negative sentiment, producing potential predictive indicators. Investors and researchers can use NLP to mine keywords and categories of underlying data to evaluate portfolio companies or see their exposure to ESG factors.
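As a toy illustration of the idea, the sketch below tags an article with ESG categories and a crude sentiment count. The keyword lists and scoring are invented for this example; production systems rely on trained models rather than fixed word lists.

```python
# Toy keyword lists, invented for this example.
ESG_KEYWORDS = {
    "environmental": {"pollution", "emissions", "waste"},
    "social": {"labor", "diversity", "safety"},
    "governance": {"fraud", "bribery", "audit"},
}
NEGATIVE = {"fined", "scandal", "lawsuit"}
POSITIVE = {"improved", "award", "reduced"}

def classify(article):
    """Tag an article with ESG categories and a crude sentiment count."""
    words = set(article.lower().split())
    categories = [c for c, kws in ESG_KEYWORDS.items() if words & kws]
    sentiment = len(words & POSITIVE) - len(words & NEGATIVE)
    return {"categories": categories, "sentiment": sentiment}

result = classify("Regulator fined the firm over emissions fraud")
# Flags both an environmental and a governance category, with negative sentiment.
```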
Some ESG rating agencies are now integrating or outsourcing NLP-derived datasets into their processes to extrapolate ESG scores. Likewise, investment firms, like asset managers, are incorporating NLP-enhanced web data into risk management, especially when looking into private-equity-type assets. Many are meeting their needs with NLP companies, such as SESAMm and others.
SESAMm excels at extracting ESG-related data for two main reasons. First, it draws on one of the largest data collections available to extract from (its data lake). Second, its NLP machine-learning algorithms are specially tuned to key indicators.
1. SESAMm’s massive data lake
What makes SESAMm’s data lake unique and ideal for investment research and advanced analytics? SESAMm’s data lake is:
Broad and large
Includes more than 100 languages
Updated in near real time
Including data since 2008, the data lake consists of more than four million data sources made up of more than 20 billion articles, forums, and messages, such as professional news sites, blogs, and social media, increasing by an average of six million per day. The data lake is also updated hourly to give investors near real-time insights into their investment interests.
Moreover, the coverage is global, with 40% of the sources in English (U.S. and international) and 60% in multiple languages, including Japanese, Chinese, and Eastern European. We select and curate these sources to maximize coverage of both public and private companies, focusing on quality, quantity, and frequency to ensure a consistently high input value.
SESAMm’s developers tune the machine-learning algorithms for key indicators such as mention volume, sentiment and emotion analysis, ESG, and SDG. Additionally, they optimize the structure and schema for efficient SQL queries.
For example, our knowledge graph, a digital representation of a network of real-world entities, puts the schema in context through semantic metadata and linking, providing a framework for analytics, data integration, sharing, and unification. In other words, we map and label the concepts, entities, and events and connect and identify their relationships for quick and accurate recall.
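As a rough sketch of the mapping idea, a knowledge graph can be thought of as linked records that resolve any mention (a brand, an executive, a nickname) back to its parent entity. The entities, attributes, and `resolve` helper below are invented for illustration; SESAMm’s actual graph schema is proprietary and far richer.

```python
# Invented entities and attributes, for illustration only.
knowledge_graph = {
    "Tesla": {
        "type": "company",
        "ticker": "TSLA",
        "executives": ["Elon Musk"],
        "brands": ["Model 3", "Powerwall"],
    },
}

def resolve(mention):
    """Map a mention (company name, executive, or brand) to its company."""
    for entity, attrs in knowledge_graph.items():
        if mention == entity or mention in attrs["executives"] + attrs["brands"]:
            return entity
    return None

resolve("Powerwall")  # a product mention maps back to the parent company
```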
SESAMm, a leading provider of Big Data and Artificial Intelligence technology for investment managers, has been recognized with the Best of Show Award at Finovate Europe 2022, which took place on March 22nd and 23rd in London. The award was granted to SESAMm following a demonstration delivered by CEO and Co-founder Sylvain Forté, during which he showcased the company's marquee product TextReveal®.
"Finovate Europe represents a unique opportunity for best-in-class Fintech companies to showcase their innovations in front of leading institutions. It was great to demonstrate our product in front of an elite audience and win the Best of Show award," said Sylvain Forté, CEO of SESAMm. "We are proud to say that this event was a big success for SESAMm, judging by the level of interest in our technology and its applications to the current ESG topic."
SESAMm is a fintech company that specializes in Big Data and Artificial Intelligence. Through its product, TextReveal®, the company provides analytics and investment signals to finance and corporate professionals by analyzing over 17 billion web articles and messages using natural language processing and machine learning. TextReveal® is a ready-to-use alternative data platform; its NLP (Natural Language Processing) powered engine provides daily sentiment and ESG data mapped to public and private companies to fuel investment strategies.
Finovate Europe, one of the most awaited annual events, sheds light on innovative fintech startups and helps them gain more recognition. It brings together over 1,000 senior finance and tech experts, including “demoers” and insightful speakers.
"We love to see companies like SESAMm join us at Finovate demonstrating their cutting-edge technologies. It really underscores our commitment to provide a platform to promote innovative startups in the financial ecosystem," said Greg Palmer, VP of Finovate. "Congrats to the SESAMm team for winning Best of Show; it’s clear they really resonated with our audience!"
SESAMm's successful appearance at Finovate Europe once more confirms the great reception the company is getting in the industry, as just a few weeks ago, it was announced that SESAMm was the recipient of the HFM award for Best use of Artificial Intelligence.
TextReveal® Streams emphasizes SESAMm's goal to provide future investors with the accurate and necessary data to make decisions accordingly. Find out more here.
About SESAMm:
SESAMm is a leading company in alternative data and artificial intelligence, delivering data-driven insights and investment analytics to global investment firms and corporates. It owns a proprietary data lake with 13 years of history, containing over 17 billion articles publicly sourced from more than 4 million sources (blogs, forums, social networks, etc.). This represents 10 to 100 times more information than that of its competitors.
At the RBI Innovation Summit in November 2023, SESAMm's CEO, Sylvain Forté, and Suleiman Arabiat, Senior Investment Manager at Elevator Ventures, shared an interview about the intersection of artificial intelligence and ESG data analytics. This conversation highlighted SESAMm's commitment to revolutionizing how ESG data is analyzed and utilized in the financial sector.
Sylvain Forté, SESAMm's CEO and co-founder, illustrated the company's impact in detecting ESG controversies using advanced AI. By processing billions of documents, SESAMm offers a unique capability to identify environmental, social, and governance issues that influence companies. This cutting-edge approach is particularly important for private equity firms, asset managers, banks, and corporations, providing them with critical data for informed decision-making.
The interview dove into the essence of ESG – encompassing environmental, social, and governance topics – and its growing importance in regulatory frameworks worldwide. SESAMm’s AI-driven technology scans online content in over 100 languages, from major media publications to niche NGO websites, to detect and alert clients about potential controversies.
Forté shared the birth of SESAMm, tracing back to 2014 when the initial idea burgeoned from a passion for AI and its application in text analysis. This nascent idea evolved into a specialized focus on ESG controversy analysis, aligning with the increasing regulatory emphasis on sustainable investment strategies.
One of the major challenges SESAMm faced was maintaining focus while leveraging its complex technology platform for the right use cases. This journey led the company to tailor its technology for end business users, in line with its growth and scalability goals. As it continues to expand, particularly in the US market and the private equity sector, SESAMm remains committed to enhancing its offerings in asset management and exploring partnerships in the fintech space. This journey reflects a fusion of technological innovation and dedication to sustainable investment practices, signaling a transformative era in ESG data analytics powered by AI.
To gain deeper insights into how SESAMm is shaping the future of ESG data analytics with AI, watch the full interview between SESAMm's CEO, Sylvain Forté, and Suleiman Arabiat at the RBI Innovation Summit.
SESAMm’s AI Technology Reveals ESG Insights
Discover unparalleled insights into ESG controversies, risks, and opportunities across industries. Learn more about how SESAMm can help you analyze millions of private and public companies using AI-powered text analysis tools.
Financial and ESG insights begin with big data coupled with data science.
At SESAMm, our artificial intelligence (AI) and natural language processing (NLP) platform analyzes text in billions of web-based articles and messages. It generates investment insights and ESG analysis used in systematic trading, fundamental research, risk management, and sustainability analysis.
This technology enables a more quantitative approach to leveraging the value of web data that is less prone to human bias. It addresses a growing need in public and private investment sectors for robust, timely, and granular sentiment and environmental, social, and governance (ESG) data. This article will outline how the data is derived and illustrate its effectiveness and predictive value.
Content coverage and ESG data collection
The genesis of SESAMm’s process is the high-quality content that comprises its data lake, the source from which it draws its insights. SESAMm scans over four million data sources rigorously selected and curated to maximize coverage of both public and private companies. Three guiding criteria—quality, quantity, and frequency—ensure a consistently high input value.
Every day the system adds millions of articles to the 16 billion already in the data lake, going back to 2008. The coverage is global, with 40% of the sources in English (the U.S. and international) and 60% in multiple languages. The data lake, expanding every month, comprises over 4 million sources, including professional news sites, blogs, social media, and discussion forums.
The following tables illustrate SESAMm’s data lake distribution (Q1 2022):
Respect for personal privacy figures highly in the data gathering process. We don’t capture personal data, like personally identifiable information (PII), and respect all website terms of service and global data handling and privacy laws. SESAMm’s data also doesn’t contain any material non-public information (MNPI).
Deriving financial signals and ESG performance indicators
SESAMm’s new TextReveal® Streams platform applies NLP and AI expertise to process the premium quality content gathered in its data lake. This complex process involves named entity recognition (NER) and disambiguation (NED)—the process of identifying entities and distinguishing like-named entities using contextual analysis—and mapping the complex interrelationships between tens of thousands of public and private entities, connecting companies, products, and brands by supply chain, location, or competitive relationship.
Process representation for NER and NED
Using SESAMm’s TextReveal Streams, this wealth of information is filtered to focus on four crucial contexts for systematic data processing, risk management, and alpha discovery:
Sentiment covering major global indices: world equities (and Small Caps, Emerging), U.S. 3000, Europe 600, KOSPI 50, Japan 500, Japan 225
Sentiment covering all assets and derivatives traded on the Euronext exchange
Private company sentiment on more than 25,000 private companies
ESG risks covering 90 major environmental, social, and governance risk categories for the entire company universe, which includes more than 10,000 public and more than 25,000 private companies with worldwide coverage
TextReveal Streams data sets and assessments are used by financial institutions, rating agencies, and the financial services sector, such as hedge funds (quantitative and fundamental) and asset managers, to optimize trade timing and identify new sustainable investment opportunities. Private equity deal and credit teams also use the data for deal sourcing and due diligence. Private equity ESG teams use it to manage initiatives like portfolio company environmental, social, and governance risk and reporting.
Methodology and technology for processing unstructured data
NLP workflow, from data extraction to granular insight aggregation
Data is continually extracted from an expanding universe of over four million sources daily. As it enters the system, it is time-stamped, tagged, indexed, and stored in our data lake to update a point-in-time history extending from 2008 to the present. The source material is then transformed from raw, unstructured text data into conformed, interconnected, machine-readable data with a precise topic.
NLP workflow for TextReveal Streams
Mapping relationships between entities with the Knowledge Graph
At the heart of the text analytics process is SESAMm’s proprietary Knowledge Graph, a vast map connecting and integrating over 70 million related entities and their keywords. It’s essentially a cross-referenced dictionary of keywords, relating each organization to its brands, products, associated executives, names, nicknames, and their exchange identifiers in the case of public companies.
Entities within the Knowledge Graph are updated weekly and tagged to ensure changes are correctly tracked. The CEO of a company today, for example, may not be the CEO tomorrow, and brands may be bought and sold, changing the parent company with each sale. Weekly updates within the Knowledge Graph ensure the system is aware of these changes.
Named entity disambiguation (named entity recognition plus entity linking) is one of the NLP techniques used to identify named entities in text sources using the entities mapped within the Knowledge Graph universe.
At SESAMm, NED identifies named entities based on their context and usage. Text referencing “Elon,” for example, could refer indirectly to Tesla through its CEO or to a university in North Carolina. Only the context allows us to differentiate, and NED considers that context when classifying entities. This method is superior to simple pattern matching, which limits the number of possible matches, requires frequent manual adjustments, and cannot distinguish identically named entities.
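A minimal sketch of context-based disambiguation for the “Elon” example follows. The candidate entities and their context keywords are invented, and bag-of-words overlap stands in for the embedding similarity a real NED system would use.

```python
# Invented candidate entities and context vocabularies.
CANDIDATES = {
    "Tesla (company)": {"ceo", "electric", "cars", "stock", "musk"},
    "Elon University": {"campus", "students", "carolina", "degree"},
}

def disambiguate(sentence):
    """Pick the candidate whose context keywords best overlap the sentence."""
    words = set(sentence.lower().replace(",", "").split())
    return max(CANDIDATES, key=lambda c: len(words & CANDIDATES[c]))

disambiguate("Elon said the company would ship more electric cars")
# "electric" and "cars" overlap the Tesla context, so Tesla wins.
```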
SESAMm uses three other NLP tools to identify entities and create actionable insights. These are lemmatization, embeddings, and similarity. Each is explained in more detail below.
Analyzing the morphology of words with lemmatization
News articles, blog posts, and social media discussions reference organizations and associated entities in various forms and functions. Lemmatization seeks to standardize these references so the system knows they mean the same thing.
For example, “Tesla,” “his firm,” “the company,” and “it” are all noun phrases that can appear in a single article and refer to a single entity. Even where the reference is apparent, it can take different forms. For example, “Tesla” and “Teslas” both refer to the same entity but have slightly different meanings (semantics) and shapes (morphology).
The lemmatization process standardizes reference shape (morphology) to facilitate identification and aggregation. Lemmatization is a more sophisticated process than stemming, which truncates words to their stem and sometimes deletes information.
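The contrast between the two processes can be sketched in a few lines. The lemma table and suffix rules below are toy stand-ins; real systems use full morphological analyzers.

```python
# Toy lemma table, invented for this example.
LEMMAS = {"teslas": "tesla", "companies": "company", "better": "good"}

def lemmatize(token):
    """Map a word form to its dictionary form, preserving meaning."""
    return LEMMAS.get(token.lower(), token.lower())

def stem(token):
    """Crude suffix stripping, which can delete information."""
    for suffix in ("ies", "es", "s"):
        if token.lower().endswith(suffix):
            return token.lower()[: -len(suffix)]
    return token.lower()

assert lemmatize("companies") == "company"  # meaning preserved
assert stem("companies") == "compan"        # truncation loses the word form
```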
Encoding context and meaning with word embedding
In NLP, an embedding is a numerical representation of a word that enables its manifold contextual meanings to be calculated relationally. Embeddings are typically real-valued vectors with hundreds of dimensions that encode the contexts in which words appear and, thus, also encode their meanings. Because they are vectors in a predefined vector space, they can be compared, scaled, added, and subtracted. A classic example: subtracting the vector for man from the vector for king and adding the vector for woman yields a vector close to that of queen.
Vectorized representation of embeddings
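The king/queen analogy can be sketched with toy three-dimensional vectors. The dimensions and values below are invented for illustration; real embeddings have hundreds of dimensions learned from text.

```python
# Toy embeddings: dimensions loosely read as (royal, male, female).
vec = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.8, 0.1],
    "woman": [0.1, 0.1, 0.8],
}

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

# king - man + woman lands on (approximately) queen
result = add(sub(vec["king"], vec["man"]), vec["woman"])
```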
Using embedding is key to analyzing how words change meaning depending on context and understanding the subtle differences between words that refer to the same concept: synonyms. For example, the words business, company, enterprise, and firm can all refer to the same thing if the context is “organizations.” But they represent different things and even different parts of speech if the context changes.
In the phrase, “[Tesla] will be by far the largest firm by market value ever to join the S&P,” for example, one could replace the word firm with company or enterprise without affecting the meaning significantly. Contrast that with “a firm handshake,” where a similar substitution would render the phrase meaningless.
Also, words referring to the same concept can emphasize slightly different aspects of the concept or imply specific qualities. For example, an enterprise might be assumed to be larger or to have more components than a firm. Embeddings enable machines to make these subtle distinctions.
One advantage of using embedding is that it’s practical because it’s empirically testable. In other words, we can look at actual usage to determine what a word means.
Another advantage is that embeddings are computationally tractable. This understanding of a word’s definition allows us to transform words into computation objects to programmatically examine the contexts in which they appear and, thus, derive their meaning.
As lemmatization is an improvement on stemming, embeddings improve techniques such as one-hot encoding, which is close to the common conception of a definition as a single entry in a dictionary.
SESAMm uses the global vectors for word representation (GloVe) algorithm to generate embeddings. It’s an unsupervised learning algorithm that begins by examining how frequently each word in a text corpus co-occurs with other words in the same corpus. The result is an embedding that encapsulates the word and its context together, allowing SESAMm to identify specific words in a list and different forms of the listed words and unlisted synonyms.
GloVe is an extension of recent approaches to vector representation, combining the global statistics of matrix factorization techniques like latent semantic analysis (LSA) with the local context-based learning of word2vec. The result is an unsupervised algorithm that performs well at capturing meaning and demonstrating it on tasks like calculating analogies and identifying synonyms.
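The first step of a GloVe-style pipeline, counting how often words co-occur within a sliding window, can be sketched as follows. This is only the statistics-gathering stage, not the full weighted least-squares training GloVe performs on those counts.

```python
from collections import Counter

def cooccurrences(corpus, window=2):
    """Count (word, context-word) pairs within a sliding window."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.lower().split()
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if i != j:
                    counts[(word, tokens[j])] += 1
    return counts

corpus = ["the firm reported strong earnings",
          "the company reported weak earnings"]
counts = cooccurrences(corpus)
# "firm" and "company" share context words ("the", "reported"),
# which is what lets GloVe place them near each other in vector space.
```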
BERT is another algorithm SESAMm uses to generate embeddings. BERT produces word representations that are dynamically informed by the words around them. Developed by Google, it’s a transformer-based machine-learning model, meaning it doesn’t process an input sequence token by token but instead takes the entire sequence as input in one go. This is a significant improvement over sequential recurrent neural network (RNN) based models because it can be accelerated by graphics processing units (GPUs).
SESAMm uses BERT for multilingual NLP of its extensive foreign-language text because the model was pretrained on a large library of unlabeled data extracted from Wikipedia in over 100 languages. BERT was trained on two objectives: predicting masked words from their context, and next-sentence prediction, in which it learns whether a candidate sentence plausibly follows a given one. Through this training, BERT learned contextual embeddings for words. Thanks to this comprehensive pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks.
Linking words, sentences, and topics with cosine similarity
Cosine similarity computed on mean-centered vectors is identical to the correlation coefficient, which highlights another element of the computational tractability of the embeddings approach: it makes it easy to compare words and contexts for similarity.
Converting words to vector representations means we can quickly and easily compare word similarity by comparing the angle between two vectors. This angle is a function of the projection of one vector onto another. It can identify similar, opposite, or wholly unrelated vectors, which allows us to compute the similarity of the underlying word that the vector represents.
Two vectors aligned in the same orientation will have a similarity measurement of 1, while two orthogonal vectors have a similarity of 0. If two vectors are diametrically opposed, the similarity measurement is -1. In practice, negative similarities are rare, so we clip negative values to 0.
Vectorized representation of cosine similarities
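The similarity values described above follow directly from the definition. A minimal implementation, including the clipping of negative values to 0 mentioned earlier:

```python
import math

def cosine_similarity(a, b, clip=True):
    """Cosine of the angle between vectors a and b, optionally clipped at 0."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    sim = dot / norm
    return max(sim, 0.0) if clip else sim

assert cosine_similarity([1, 0], [1, 0]) == 1.0   # aligned
assert cosine_similarity([1, 0], [0, 1]) == 0.0   # orthogonal
assert cosine_similarity([1, 0], [-1, 0]) == 0.0  # opposed, clipped to 0
```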
Cosine similarity measures whether two words, sentences, or corpora are close to one another in vector space or “about” the same thing in semantic space. To answer the question, “Is this sentence referencing company X?” we embed the sentence using the process described above and compute the cosine similarity between the sentence and the embedded company profile. Analogously, we compute similarities between sentences and the ESG topics SESAMm monitors by taking the maximum similarity between a sentence and each embedded keyword associated with an ESG topic.
These similarities allow us to identify whether a sentence references fraud, tax avoidance, pollution, or any other ESG risk topic among the more than 90 that SESAMm tracks across the web.
Similarities within ESG topics combine with word counts to resolve the recall and precision problem. Word counts are precise because if a word is identified within a context, then that context, by construction, references the topic.
The virtue of using these NLP techniques is that even if a given keyword list does not include every possible combination of words that a person might use to discuss a topic, relevant entities missed by the word-count process will be identified through vector similarity.
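This two-path logic, exact keyword matches for precision plus embedding similarity for recall, can be sketched as below. The topic keywords, threshold, and `similarity_fn` stand-in are invented for illustration.

```python
# Invented topic keywords and threshold, for illustration only.
TOPIC_KEYWORDS = {"pollution", "emissions", "spill"}
SIMILARITY_THRESHOLD = 0.7

def references_topic(sentence, similarity_fn):
    """Combine a precise keyword path with a high-recall similarity path."""
    words = set(sentence.lower().split())
    # Precise path: an exact keyword match settles it by construction.
    if words & TOPIC_KEYWORDS:
        return True
    # Recall path: embedding similarity catches unlisted phrasings.
    return similarity_fn(sentence) >= SIMILARITY_THRESHOLD
```

In production, `similarity_fn` would be the cosine similarity between the sentence embedding and the topic's embedded keywords; here it is injected so the control flow is visible.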
This is the power of SESAMm’s NLP expertise. We can scan many lifetimes’ worth of data in seconds to find the concepts you explicitly ask for and the concepts relevant to your search but that you did not think of yourself.
Sentiment analysis with deep learning and neural networks
Once we’ve identified the concepts and contexts of interest in all the forms they appear, we analyze the context to determine the speakers’ attitudes.
We use sentiment classification models to score a sentence with three possible outcomes: negative, neutral, or positive. The current classification models are based on deep learning AI technologies. Specifically, we stack convolutional neural networks with word embeddings and Bayesian-optimized hyperparameters (parameters not learned during training). This architecture improves accuracy and enables fast shipping of production-ready models for a given language. We also produce state-of-the-art frameworks with architecture variations enabling multilingual capabilities, such as transformers and universal sentence encoders.
Condensing information and extracting insights with daily aggregation
Similarities, embedded word counts, and sentiment are state-of-the-art tools for processing unstructured text data. The same tools are effective cross-linguistically.
Once the information has been extracted from millions of data points, it’s aggregated and condensed into actionable insights.
First, all entities referenced directly or indirectly within an article are identified. Then, sentence-level references are aggregated to obtain an article-level perspective, and finally, all relevant articles are aggregated to gain an entity-level view for that day.
In this way, reams of data are compressed into several metrics to provide a daily aggregate view for each entity, highlighting trends at a sentence, article, and entity-level comparable over a multi-year history.
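A simplified sketch of that sentence-to-article-to-entity roll-up follows, using a plain mean at each level; the actual weighting scheme isn't specified here, so this is purely illustrative.

```python
from statistics import mean

def article_score(sentence_scores):
    """Aggregate sentence-level scores into one article-level score."""
    return mean(sentence_scores)

def daily_entity_score(articles):
    """Aggregate article-level scores into one entity-level score for the day."""
    return mean(article_score(scores) for scores in articles)

# Two articles mentioning the same entity on the same day,
# with per-sentence sentiment in {-1, 0, +1}:
day = [[-1, 0, -1], [0, 1]]
score = daily_entity_score(day)  # slightly negative overall
```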
ESG analysis use cases
SESAMm’s TextReveal Streams is used in various investment domains, from asset selection to alpha generation and risk management. Systematic hedge funds track retail interest in real time to identify investment opportunities and protect their existing positions. In the Private Equity industry, equity and credit-deal teams use the data in various ways, from monitoring consumer perspectives via forums and customer reviews for evaluating deal prospects to estimating due diligence risks, all to help make investment decisions. Dedicated teams use our data for monitoring portfolio companies for ESG red flags that conventional ESG reporting might miss.
Below are two examples of how aggregated TextReveal Streams data can be used to help identify investment risk and opportunity.
LFIS Capital: ESG signals for equity trading
ESG controversies can significantly impact asset prices in the short term, and intangible assets, including a company’s ESG rating, are now estimated to account for 90% of its market value.
Working in partnership with LFIS Capital (LFIS), a quantitative asset manager and structured investment solutions provider, SESAMm developed machine learning and NLP algorithms that could analyze ESG keywords in articles, blogs, and social media, to generate a daily ESG score specific to each stock, which is part of the TextReveal Streams’ platform’s core functionality.
The results were promising when these scores were incorporated into a simulated strategy for trading stocks in the Stoxx600 ESG-X index.
A simulated long-only strategy running between 2015 and 2020, using the signals, delivered a 7.9% annualized return, 2.9% higher than the benchmark for similar annualized volatility (17.3% vs. 17.1%). The information ratio of the strategy was greater than 1, with a tracking error of 2.8%. Results for the previous three years were compelling, reflecting the growing interest and news flow around ESG themes.
Researchers also backtested a hypothetical long-short strategy for all stocks in the Stoxx600 ESG-X index with a market cap of over $7.5bn. This investment strategy delivered a Sharpe ratio of approximately 1 with annualized returns and volatility of 6.1% and 5.9%, respectively, between 2015 and 2020. Like the long-only strategy, returns were particularly robust over the three years up to 2020: +6.0% in 2018, +7.3% in 2019, and +11.3% in 2020.
Finally, a simulated “130/30” ESG strategy that combined 100% of the long-only ESG strategy and 30% of the long-short ESG strategy delivered a 10.8% annualized return, 5.8% higher than that of the Stoxx600 ESG-X index. Annualized volatility was similar at 16.9% vs. 17.1%. The strategy experienced a tracking error of 3.8% and an information ratio of over 1.5, with a consistent outperformance each year.
Disclaimer: Past performance is not an indicator of future results. Theoretical calculations are provided for illustrative purposes only. The investment theme illustrations presented herein do not represent transactions currently implemented in any fund or product managed by LFIS.
Wirecard: ESG sentiment and volume as predictive indicators
The Wirecard scandal broke on June 21, 2020, when newswires carried the story that the major German payment processor had filed for bankruptcy after admitting that €1.9 billion ($2.3 billion) of purported escrow deposits did not exist.
Could SESAMm’s TextReveal Streams platform have provided investors with an early warning that the scandal was about to break?
The following chart derived from the platform shows how key ESG metrics, including ESG scores (volumes) and ESG scores (sentiment), reacted to the news.
An analysis of the charts pinpoints a shallow rise in the ESG scores (volumes) time series in the early part of June before the eruption on June 21.
The ESG scores (sentiment) metric also shows a steady increase in negative sentiment for governance, the most relevant of the three ESG factors regarding the scandal.
How key ESG metrics, including ESG scores (volumes) and ESG scores (sentiment), reacted to the Wirecard scandal news.
Additionally, before the crash, governance was the most negative of the three ESG factors most of the time. This was especially the case from late March to early April, and then before the scandal in early June, negative governance sentiment diverged higher from the other two.
The rate-of-change of negative governance sentiment as it rose and peaked in early June before the scandal broke was also extremely high, perhaps providing the basis for an early warning signal.
Portfolio managers who had been keeping an eye on the reputational slide in Governance for Wirecard may have decided the company was at high risk of a negative controversy emerging, giving them cause to drop the stock before the event.
While SESAMm’s ESG scores don’t provide a hard-and-fast early warning signal, they can serve as the basis for a data-driven, rules-based portfolio management approach that helps investors avoid high-risk candidates like Wirecard.
SESAMm takes on ESG data challenges
SESAMm’s NLP and AI tools analyze over four million data sources daily to identify thousands of public and private companies and their related products, brands, identifiers, and nicknames, turning reams of unstructured text into structured and actionable data.
SESAMm’s TextReveal Streams platform can be used in many quantitative, quantamental, and ESG investment use cases. TextReveal is a solution that allows you to fully leverage NLP-driven insights and receive high-quality results through data streams, modular API and dashboard visualization, and signals and alerts.
Learn how SESAMm can support you in your investment decision-making and request a demo today.
To request a demo or for access to the full SESAMm Wirecard or LFIS reports, contact us here: