The ESG Scorecard: A Deep Dive into the U.S. Private Equity Landscape
December 3, 2025
The ESG landscape for U.S. private equity firms is increasingly defined by systemic governance pressure and rising social and environmental scrutiny. Governance issues at firms such as Blackstone, KKR, Thoma Bravo, TPG, and Francisco Partners primarily concern deal processes, disclosure practices, and investor protection. These concerns encompass settlements related to pension mismanagement, Department of Justice actions regarding pre-merger filings, and lawsuits and shareholder investigations examining the fairness of take-private transactions and stock buybacks. On the social side, exposure is driven largely by portfolio companies and political positioning. Housing and tenant-rights disputes sit alongside allegations of labor abuses, child labor, and unsafe conditions. Environmental concerns are increasingly prominent, with major firms facing criticism for their fossil fuel exposure, their impact on climate change, and associated lobbying efforts.
What are the most pressing ESG challenges currently facing U.S. private equity firms? Read on to find out.
Blackstone: Governance Pressure, Social Backlash, and Climate Criticism
Blackstone is facing a wide range of ESG controversies. Governance challenges include a $227.5 million settlement related to Kentucky pension mismanagement, a $590 million lawsuit involving SPAC Recovery Co. that alleges a fraudulent scheme, and SEC fines tied to off-channel communications failures. On the social front, the firm has drawn criticism for political spending that heavily favors right-leaning candidates, child-labor incidents, and recurring safety violations at portfolio companies. Housing-related concerns also persist, with tenant protests over rent and eviction practices and university movements calling for divestment from Blackstone-linked real estate funds. Environmentally, Blackstone continues to be targeted by climate activists for its fossil fuel exposure and its perceived contribution to escalating climate risks.
TextReveal’s web data analysis of over five million public and private companies is essential for keeping tabs on ESG investment risks. To learn more about how you can analyze web data or to request a demo, reach out to one of our representatives.
The AI field is growing, and for good or ill, people are doing more than talking about it; they’re using it more than ever. Despite this increased use, however, I’ve noticed that some people’s perception of AI swings between dismissiveness and inflated expectations.
One case in particular: in 2021, Gartner placed natural language processing (NLP) at the peak of inflated expectations on its hype cycle. As a result, many anticipated a potential “winter of AI,” so to speak. Yet 2022 showed us that we haven’t even touched the true value AI could deliver.
Will there be a “winter of AI,” and are expectations bloated?
No, I don’t think so. As the past year has shown us, AI still has more to offer, a pocket of value that we have yet to see. I believe that while many people now accept that AI will be a transformative force—thanks to the fast democratization of large language models—our society hasn’t yet fully considered the actual changes it will make by lowering the barrier to access intelligence globally.
Progress in image generation, analysis, and computer vision—think autonomous driving—has advanced by leaps and bounds in the past year, and so has progress in NLP, particularly in the natural language understanding (NLU) and natural language generation (NLG) aspects. We’re at a tipping point that will likely transform our world in the same way the internet has.
Tipping point for AI
Today, we’re seeing the development of natural language processing through large language models, most visibly with the emergence of ChatGPT, built on OpenAI’s GPT-3.5 model.
An astounding fact: ChatGPT surpassed one million users within a week of launching, a feat no other tech product had achieved in so short a time frame. But the adoption rate is only part of the story.
This advance has profoundly affected creative jobs because this might be the first time a generative AI system can create high-quality content. Since its public release, users have tapped ChatGPT to do everything from generating basic reports and ideas to writing lectures and producing code.
With a high adoption rate comes great opportunity. Any startup seeing this level of success could become the most funded project ever. And then there’s revenue: OpenAI could generate one billion dollars in revenue by 2024, according to a Reuters report.
On the other side of the coin, however, advances in generative AI bring greater risks. For example, with AI assistance, hackers can develop more sophisticated phishing campaigns—attacks based on social engineering.
This image was generated with the assistance of DALL-E 2 by OpenAI with the prompt: An oil painting in classical style of an artificial intelligence holding the whole world in its hand. Realistic.
Competition, specificity, and focus for AI advancement
Despite the risks, we still haven’t seen what’s yet to come with generative AI. GPT-4, for instance, is rumored to launch in 2023. I believe it will be a massive improvement over GPT-3, which is already mind-blowing.
And on the point of NLG and these large language models, there’s a lot that’s feasible in process automation. For context, creative content gets the most attention; it’s the area that makes the most headlines. But I would also watch advancements in technical content and automated code generation, for example.
Process automation
Because of today’s AI advancements, it’s now possible for tools like ChatGPT to generate near-ready-to-use source code. That means instead of only being fun to play around with, these are becoming enterprise tools, making it possible for developers to automate technical tasks at scale.
NLP—specifically natural language understanding, which SESAMm works on—is not untouched by these applications. Many of these large language models can perform zero-shot learning, which means NLU tasks can be performed without task-specific training, a huge advance for this industry. However, zero-shot learning is insufficient for many advanced sentiment and ESG analysis tasks. We still need additional data sets to fine-tune the models for a specific purpose.
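To make the distinction concrete, here is a deliberately simplified Python sketch of the idea behind zero-shot classification: a text is scored against plain-language label descriptions rather than against labels seen during training. This is not SESAMm’s pipeline, and real systems use large pretrained models for the similarity step; the word-overlap scoring, label names, and example sentence below are all invented for illustration.

```python
# Toy illustration of zero-shot classification: score a text against
# natural-language label descriptions instead of trained label classes.
# Word overlap stands in for a pretrained model's semantic similarity.

def tokenize(text: str) -> set[str]:
    """Lowercase a text and split it into a set of word tokens."""
    return set(text.lower().replace(",", " ").replace(".", " ").split())

def zero_shot_classify(text: str, label_descriptions: dict[str, str]) -> str:
    """Return the label whose description overlaps most with the text."""
    tokens = tokenize(text)
    scores = {
        label: len(tokens & tokenize(desc))
        for label, desc in label_descriptions.items()
    }
    return max(scores, key=scores.get)

labels = {
    "environmental": "pollution emissions climate waste energy environment",
    "social": "employees labor community safety human rights",
    "governance": "board disclosure audit shareholder compliance",
}

print(zero_shot_classify(
    "The company was fined over toxic waste and emissions violations.",
    labels,
))  # "environmental" wins on word overlap
```

The point of the toy is that no labeled training examples are needed, only label descriptions; the fine-tuning step the paragraph describes would replace this crude similarity with a model adapted on task-specific data.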
What does this mean for the natural language generation sector? Many startups—especially those built around chatbots—have folded, some just in Q4 of 2022. ChatGPT’s success means it has solved and replaced the need for many of them; basically, anything in B2C content creation has struggled and will continue to struggle.
Defensive edge
Otherwise, things are looking good in our sector. For example, at SESAMm, we’re focused on what I call “last-mile AI.” In our specific business application, you can’t bypass the need for a data set because we’re trying to attain a precise result for specific, often risk-related applications. Pretrained large language models like GPT-3 and BERT can get you most of the way there, and that’s fine for general purposes. But for “last-mile AI” applications, there’s a lot you can’t do without additional work.
And here lies what I think is one of SESAMm’s defensive edges: the “last-mile AI.”
Instead of finding ways to protect its algorithms, the AI business community would do better to defend its use cases because the algorithm’s value will decrease progressively. In contrast, the value of a use case’s purpose and the data set used to achieve the use case will grow.
Competitive edge
Computing power and the resources it takes to train large language models remain a challenge for organizations like OpenAI. It takes electricity, heat dissipation, and money to train these models, and AI has an environmental impact. So far, we’ve justified this cost in the name of optimization—meaning we put in the extra cost upfront so that the resulting efficiency will offset or reduce it later—but it’s still a cost to incur.
AI companies, especially those in the NLG space, will do well to find their competitive edges, areas optimized for a specific purpose like “last-mile AI.” Companies like OpenAI will likely continue to optimize their models for quicker responses but don’t necessarily have the problem of solving for a specific use case.
At SESAMm, for instance, a big challenge and expertise we developed in-house is inference time—or how quickly we can apply the model to an article or an individual sentence. Because we’re processing so much live content, the more time it takes to process—milliseconds multiplied by a billion—the more costly it is.
Our data lake currently holds over 20 billion articles, messages, etc., from over 14 years, and we add 10 million more daily. That’s a lot of content to analyze. But we make it so our clients can access the data within seconds.
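A back-of-envelope calculation makes the inference-time point concrete. The per-item latencies below are illustrative assumptions, not SESAMm’s actual figures:

```python
# Back-of-envelope inference cost: why shaving milliseconds matters when
# a model runs over billions of documents. All numbers are illustrative
# assumptions, not SESAMm's actual figures.

def total_compute_hours(num_items: int, ms_per_item: float) -> float:
    """Total single-core compute time, in hours, to process num_items."""
    return num_items * ms_per_item / 1000 / 3600

BILLION = 1_000_000_000
slow = total_compute_hours(BILLION, ms_per_item=10.0)  # 10 ms per item
fast = total_compute_hours(BILLION, ms_per_item=1.0)   # 1 ms per item

print(f"10 ms/item over 1B items: {slow:,.0f} compute-hours")
print(f" 1 ms/item over 1B items: {fast:,.0f} compute-hours")
```

At 10 ms per item, a billion items cost roughly 2,800 compute-hours; cutting inference to 1 ms reduces that tenfold, which is why milliseconds multiplied by billions dominate the cost of serving results within seconds.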
The need to optimize models for fast inference and adapt to deep industry-specific use cases will remain one of the key reasons companies will have to continue re-training their own models. That doesn’t mean large language models don’t add value here. Their open-source versions simply become an impressive building block for any NLP application and accelerate the rate of innovation and productivity in the whole field.
My summary thoughts on AI for 2023
When Google open-sourced BERT in November 2018, we quipped that it must have been a joke: the model was so big that no one could put it into production. Many companies didn’t have the computing capabilities to do anything with it at the time. Now we do.
This year, Google did it again, releasing a model even bigger than GPT-3. Of course, almost no one besides Google can put that model into production today. But my point is that there will always be computing, resource, and other challenges to making AI advancements. That’s why I think AI companies must focus on defensive and competitive edges.
Regardless of the challenges, I see the NLU space being massively improved by large language models. I see improvements as we incorporate these models today compared to deep-learning models trained from scratch a few years ago. I also see a significant decrease in the amount of data needed to fine-tune results, letting us reach and focus on the final client use case more quickly.
From a natural language generation perspective, I believe large language models will transform the world. And I’m really excited about this era because this transformation supports my deepest purpose: leveraging AI to accelerate innovative decision-making. We do this by giving decision-makers access to technology that analyzes research content, news, and discussions. And if we increase the rate of innovation or the quality of decision-making by 10% globally, the impact could be huge for all industries: healthcare, finance, fashion, you name it. Industry leaders can make better ESG and SDG choices that will affect our world on a grander scale.
2023 will be an exciting time for AI, specifically for NLG and NLU. Of course, we’ll continue to see AI innovations. But more importantly, leaders will have better insights to make better decisions, creators will create more—and more complex—content, and overall, the applications will become more specific to solving the needs of particular use cases.
Here’s to the new era of AI in 2023. Cheers!
About SESAMm
SESAMm is a leading NLP technology company serving global investment firms, corporations, and investors, such as private equity firms, hedge funds, and other asset management firms. SESAMm provides datasets and NLP capabilities through TextReveal® to generate alternative data for use cases, such as ESG and SDG, sentiment, private equity due diligence, corporate studies, and more. With access to SESAMm’s massive data lake, comprised of 20 billion articles and messages and growing, its clients can make better investment decisions.
On October 1st, SESAMm hosted its second annual “SESAMm Day” in Paris at the EY Impact Lab. The evening kicked off with the Paris 2043 immersive experience, an eye-opening scenario of Paris in 2043 should climate commitments fail. Designed to spark forward-looking discussions, the experience set the stage for a full evening of insight, exchange, and networking among peers across private equity, asset management, banking, and consulting.
The event also featured a dynamic 45-minute panel discussion moderated by Sylvain Forté, CEO of SESAMm, with three distinguished panelists:
Dr. Julia Haake, Head of ESG Rating Agency at EthiFinance
Elsa Couteaud, CSR Director at Praemia
Abigail Arellano Sanchez, Sustainability Project Manager and Data Specialist at Natixis Investment Managers
From Data to Decisions
The conversation opened with a critical challenge facing the industry: transforming abundant ESG data into actionable insights. The panelists discussed their approaches to filtering signal from noise, focusing on how controversy alerts, automated reports, and ESG scores inform actual investment, financing, and rating decisions. The key, they emphasized, lies in establishing clear hierarchies and methodologies to prevent information overload.
Addressing ESG Skepticism
The panel also addressed the growing concerns around greenwashing, regulatory complexity, and “ESG fatigue.” An interesting linguistic shift emerged during the discussion: one panelist noted that the term “sustainability” is increasingly preferred over “ESG” in job titles. In contrast, another panelist pointed out that 10–15 years ago the trend was reversed, as the industry moved from “sustainability” to “ESG.”
While European skepticism focuses less on ESG fundamentals and more on complexity and costs, particularly with regulations like Omnibus, the panelists acknowledged growing operational fatigue. For example, the speakers highlighted that teams are demanding more pragmatism and concrete action over the burden of reporting.
The Future of ESG Data and Themes
Looking forward, the panel identified several emerging priorities:
Climate adaptation is taking center stage, with physical risks such as heatwaves and flooding becoming increasingly impossible to ignore. The future of ESG data will be increasingly forward-looking and predictive.
New themes are reshaping the ESG landscape. Biodiversity remains a work in progress requiring significant development. Responsible AI has emerged as both an ESG theme and a transformative force for the industry itself. Even defense has become a consideration as an exclusion or inclusion criterion, raising questions about which other exclusion themes might emerge or fade.
Supply chain risks are gaining prominence, particularly in emerging markets where local taxonomies and data remain scarce. One panelist shared that accessing reliable data on emissions and physical risks in these regions is challenging, with insurance data often providing the most qualified information.
A Call to Action
The panel concluded with a pragmatic vision: rather than being overwhelmed by ever-increasing data, the focus should shift toward enhancing real-world impact and climate resilience. With reputational and financial risks mounting, the message is clear: it's time to move from reporting to action.
Closing Reflections
SESAMm Day 2025 closed on a high note, with participants continuing the conversation over networking drinks. We extend our warm thanks to everyone who joined us, and in particular to our panelists for sharing their perspectives and making the evening both insightful and engaging.
SESAMm’s AI Technology Reveals ESG Insights
Discover unparalleled insights into ESG controversies, risks, and opportunities across industries. Learn more about how SESAMm can help you analyze millions of private and public companies using AI-powered text analysis tools.
Over the past decade, many organizations have improved their carbon footprints by switching to recyclable and biodegradable packaging, eliminating single-use plastics, planting trees, and reducing their greenhouse gas emissions. However, some businesses looking to boost their eco-friendly image without committing to serious changes or addressing environmental issues have turned to false green marketing. We call this “greenwashing.”
Defining Concepts
What is Greenwashing?
Greenwashing is a practice businesses use to represent themselves as more sustainable than they truly are. Greenpeace and the Environmental Protection Agency define greenwashing as making false or misleading claims about the environmental benefits of a product, service, technology, or company practice. Greenwashing typically involves companies spending more money on advertising and marketing than on implementing sustainable business practices that minimize environmental impact. These false green claims can deceive consumers into believing that a product or company is more environmentally friendly than it is, leading to increased sales and profits. As a result, false advertising, misleading initiatives, and groundless claims have increased green investors’ exposure to risks: potential lawsuits from activist groups, image deterioration, and heavy losses on invested assets.
Greenwashing Mentions Over Time
In recent years, new concepts have emerged alongside greenwashing:
Greenwashing, Greenhushing, and Greenwishing Mentions Over Time
Greenhushing refers to a company’s refusal to publicize ESG information. The company may fear pushback from stakeholders who would find its sustainability efforts lacking or from investors who believe ESG undermines returns.
Greenwishing, or unintentional greenwashing, describes a practice where a company hopes to meet certain sustainability commitments but simply does not have the means to do so.
High-Profile Greenwashing Case Studies
When talking about greenwashing, the usual suspects are the oil and gas industry, the food and beverage sector, and other environmentally impactful industries. However, the financial industry has also been embroiled in its own greenwashing controversies.
It’s challenging to produce an accurate assessment of environmental, social, and governance (ESG) factors, which creates opportunities for companies to hide ineffective and fake green initiatives. According to Regtank, the main challenges to detecting greenwashing include:
Lack of reporting standards – There’s no universal set of standards for ESG compliance.
Lack of transparency – Companies often don’t disclose the specifics of their “green campaigns,” making it hard for investors and consumers to verify their claims.
Limited consumer awareness – Misleading marketing can exploit consumers’ eco-consciousness and brand loyalty, reducing scrutiny of false green claims.
These gaps lead to inaccurate ESG data and scores, allowing greenwashers to avoid accountability. Ultimately, detecting greenwashing requires careful scrutiny of company claims and a deep understanding of their supply chains and operations.
How Artificial Intelligence Detects Greenwashing
As greenwashing practices become more common, activist investors, journalists, and the general public are using social media, news outlets, and blogs to highlight false claims. Artificial intelligence (AI) has become an invaluable tool in the early detection of greenwashing by analyzing vast amounts of public data.
At SESAMm, we use generative AI and LLMs to identify greenwashing risks across billions of web-based articles. Our data lake covers over 25 billion articles in more than 100 languages from four million news sources, blogs, social media platforms, and forums, analyzing data on five million public and private companies. Through our AI platform, we generate reliable, timely, and comprehensive insights to detect greenwashing, monitor ESG controversies, and identify related risks.
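As a deliberately simplified illustration of the kind of text screening involved, the sketch below flags sentences that pair green buzzwords with vague qualifiers and contain no concrete figure, one crude signal an analyst might use when triaging claims. This is not SESAMm’s production approach; the word lists and example sentences are invented.

```python
import re

# Toy greenwashing screen: flag sentences that mix green buzzwords with
# vague qualifiers and contain no number (a proxy for the absence of a
# verifiable target). Word lists are illustrative, not a production model.

GREEN_TERMS = {"green", "sustainable", "eco-friendly", "carbon-neutral"}
VAGUE_TERMS = {"committed", "aims", "believes", "strives", "journey"}

def flag_vague_green_claims(text: str) -> list[str]:
    """Return sentences combining green and vague terms with no figures."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        words = set(re.findall(r"[a-z-]+", sentence.lower()))
        has_number = bool(re.search(r"\d", sentence))
        if words & GREEN_TERMS and words & VAGUE_TERMS and not has_number:
            flagged.append(sentence.strip())
    return flagged

sample = ("We are committed to a sustainable future. "
          "We will cut Scope 1 emissions 40% by 2030.")
print(flag_vague_green_claims(sample))
# Only the first sentence is flagged; the second states a measurable target.
```

A production system replaces this keyword heuristic with language models that understand context across many languages, but the underlying question is the same: is the green claim specific and verifiable, or vague and unsubstantiated?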
The Corporate Sustainability Reporting Directive (CSRD) significantly strengthens the requirements for companies to substantiate their sustainability commitments. By mandating standardized and detailed ESG disclosures, it directly addresses the practice of greenwashing, where companies exaggerate their environmental credentials in marketing without meaningful follow-through. Under the CSRD, companies can no longer rely on vague or selectively presented data—any gaps or inconsistencies in their sustainability claims will be exposed in public filings, making greenwashing much riskier. This means an end to cherry-picked data and a shift toward more comprehensive, comparable, and verifiable ESG performance for investors and stakeholders.
The Corporate Sustainability Due Diligence Directive (CSDDD), if it stands, further reinforces these efforts by obligating companies to go beyond marketing statements and prove they’re actively managing environmental and human rights impacts throughout their supply chains. This directive closes loopholes that greenwashing often exploits, such as highlighting only direct operations while ignoring supplier practices. By requiring due diligence on environmental impacts across the value chain, the CSDDD aims to turn sustainability from a branding exercise into a legal and operational priority. If real supply chain actions don’t support a company’s green claims, it could face legal action and reputational damage.
Looking Ahead
Looking ahead, greenwashing will continue to face intense scrutiny from regulators, investors, and the public. With evolving regulatory frameworks like CSRD and CSDDD, the pressure is on for companies to ensure genuine environmental responsibility—not just green advertising. At SESAMm, we believe that the combination of regulatory rigor and advanced AI technologies will play a critical role in uncovering false green claims and supporting investors in navigating ESG risks with greater transparency and accountability.