When Luxury Supply Chains Break Down: What the Loro Piana Case Reveals
July 24, 2025 • 5 min read
Luxury brand Loro Piana, owned by LVMH, has been placed under one-year judicial administration by an Italian court after a labor exploitation investigation uncovered serious abuses in its supply chain. According to Reuters, workers at a subcontracted factory were paid as little as €4 per hour and subjected to 90-hour workweeks, often living inside the premises. One worker was reportedly attacked after requesting unpaid wages, requiring 45 days of medical treatment. The case highlights the growing scrutiny of labor conditions in Italy's fashion manufacturing sector, especially among high-end labels. Loro Piana is now the fifth luxury brand placed under court supervision for supplier-related violations, joining Dior, Armani, Valentino, and Alviero Martini.
A Complicated Web of Subcontracting
What sets this case apart is the complexity of the supply chain. Loro Piana did not contract directly with the workshop where the violations occurred. Instead, it worked through two front companies, both of which lacked actual manufacturing capacity. These intermediaries then subcontracted the work to a network of unregistered or poorly monitored producers. All the firms involved in this chain have been swept up in the investigation.
This multi-tier outsourcing structure made it difficult to detect violations and raises questions about accountability. The Milan court noted that Loro Piana "culpably failed" to supervise its partners, prioritizing cost and output over due diligence.
Why It Matters
Luxury brands trade on trust and exclusivity. Consumers expect not just quality, but integrity, especially regarding sourcing. When serious labor violations are revealed, the reputational risks extend far beyond one product or supplier. They affect brand credibility, investor confidence, and long-term consumer loyalty.
This incident also reinforces a trend: regulators are increasingly willing to intervene when voluntary monitoring fails. Judicial administration isn’t just symbolic; it’s a legally binding oversight mechanism aimed at forcing systemic change.
The Path Forward
For fashion brands, this is a clear signal that supply chain governance must go deeper. That includes mapping indirect suppliers, improving transparency around subcontracting, and enforcing ethical standards at every level. Simply trusting the next link in the chain is no longer enough.
In a sector built on craftsmanship and heritage, safeguarding those values behind the scenes is just as important as what ends up on the runway.
SESAMm’s AI Technology Reveals ESG Insights
Discover unparalleled insights into ESG controversies, risks, and opportunities across industries. Learn more about how SESAMm can help you analyze millions of private and public companies using AI-powered text analysis tools.
We are excited to announce the launch of SESAMm’s proprietary Controversy Exposure Score (CES), a new score designed to transform how ESG and finance professionals assess risks. The CES offers a dynamic, real-time view of a company's exposure to ESG controversies, enabling fast, informed decision-making.
What is the Controversy Exposure Score (CES)?
The CES is a continuously updated score ranging from 1 to 100 that reflects a company's or project's evolving exposure to ESG controversies. Leveraging SESAMm's proprietary Intensity and Volume Scores, the CES captures both the severity and frequency of ESG incidents, allowing stakeholders to monitor and understand risks as they develop. The example below compares the CES for Renault and Stellantis based on their respective ESG controversies: Renault has had fewer high-intensity events, which results in a lower, more stable CES than Stellantis's.
Chart: Renault CES vs. Stellantis CES over time.
How Does It Work?
The CES is powered by state-of-the-art Large Language Models (LLMs) that filter and analyze content from our data lake containing over 25 billion articles. Two main components impact the score’s value:
Intensity Score: Measures the severity of each ESG incident, considering its impact on a company's reputational, stakeholder, financial, and legal standing. This score is derived from a Large Language Model (LLM) fine-tuned by SESAMm's experts and trained on thousands of human-annotated events.
Volume Score: Assesses the number of articles associated with an event, calculated using a short-term rolling window. To ensure accuracy, the Volume Score is normalized against the average article volume concerning the company and relevant ESG topics over the past year, reducing potential bias.
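The exact formula behind the CES is proprietary, but the general idea of combining severity, volume, and recency into a bounded score can be sketched as follows. The half-life decay, the multiplicative weighting, and the squashing into a 1-100 range are all illustrative assumptions, not SESAMm's actual methodology:

```python
import math

def controversy_exposure_score(events, half_life_days=30.0):
    """events: list of (days_ago, intensity, volume) tuples, with
    intensity and volume each normalized to [0, 1]."""
    exposure = 0.0
    for days_ago, intensity, volume in events:
        decay = 0.5 ** (days_ago / half_life_days)  # older events fade out
        exposure += intensity * volume * decay
    # Squash accumulated exposure into the 1-100 range.
    return 1 + 99 * (1 - math.exp(-exposure))

# A recent severe, widely covered event dominates an older minor one.
score = controversy_exposure_score([(2, 0.9, 0.8), (45, 0.4, 0.3)])
assert 1 <= score <= 100
```

Under this sketch, a company with no recent incidents drifts toward the minimum score, while a burst of severe, heavily covered events pushes the score up quickly, matching the "dynamic, real-time view" described above.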
With the CES, you can:
Track ESG controversy trends: Evaluate how a company's risk exposure has evolved. The CES is updated daily, ensuring that users have the most current data at their fingertips.
Benchmark companies against their peers: Compare a company's risk exposure to that of its peers for a comprehensive view of its relative risk.
Ready to Transform Your ESG Analysis?
For more information on how the Controversy Exposure Score can help you make smarter, data-driven decisions and to see it in action, request a demo.
Reach out to SESAMm
TextReveal’s web data analysis of over five million public and private companies is essential for keeping tabs on ESG investment risks. To learn more about how you can analyze web data or to request a demo, reach out to one of our representatives.
Alternative Data | Sentiment Analysis | Strategic Insights
The macroeconomic environment is moving quickly—inflationary pressures, war in Europe, political instability, and plenty of other topics to make a trader's head spin. While there’s an abundance of structured macro data, it's much more difficult to extract value from unstructured text on the web.
However, with the right partner and the right tools, it doesn't need to be difficult to rein in this complexity. But more on that later.
More data, more problems
We're obviously in the information age; we have more data within reach than ever before. And you'd think that more data would make it easier to find consistent relationships between the macro economy and price returns. The hard truth is that it doesn't.
Of course, you have access to comprehensive historical information, but developing economically intuitive, worthwhile systematic strategies from historical data alone is challenging. Even extensive historical data can be incomplete, missing the nuance of the theme you're examining.
Nowcasting for more complete, current data
Nowcasting—a contraction of the words now and forecasting—is the prediction of the present and the near future using data from the recent past as an economic indicator. Nowcasting models can be applied in real time as a proxy for official measures, such as monitoring the state of the economy, themes, or sectors: food, transportation, energy, and so on.
For example, you could look into what's being said about supply chain disruption for semiconductors. How is the topic trending across industries or the broader public? And how positively or negatively is that topic perceived over time? This information helps give financial data context and direction, a way to predict what happens next. So where do you turn to for reliable, timely nowcasting data?
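As a toy illustration of the idea, a nowcast-style attention signal can be as simple as comparing recent mention volume for a theme to a longer-run baseline. The counts and window lengths below are made up for illustration; a production nowcast would draw on a far richer feature set:

```python
def nowcast_signal(daily_counts, short_window=7, long_window=28):
    """Return the ratio of recent attention to the longer-run baseline.
    Values above 1 mean the theme is trending above its recent norm."""
    recent = daily_counts[-short_window:]
    baseline = daily_counts[-long_window:]
    return (sum(recent) / len(recent)) / (sum(baseline) / len(baseline))

# Hypothetical daily counts of articles mentioning semiconductor
# supply chain disruption: attention doubles in the final week.
counts = [40] * 21 + [80] * 7
signal = nowcast_signal(counts)  # 1.6: the theme is trending upward
```

A signal like this, refreshed daily from web text, is the kind of proxy a nowcasting model can consume alongside official statistics.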
A platform built for nowcasting
At SESAMm, we have a flexible, adaptable, and modular platform to nowcast pretty much any macroeconomic theme: inflation, supply chains, unemployment, and everything in between. If you can gauge it, we can find data on it.
How do we do this? Our natural language processing (NLP) platform makes sense of all available news, articles, and forums on the web. Currently, there are more than 20 billion articles in our data lake, and it's growing by millions daily. And because we update our data lake multiple times a day, you can read nowcasted macroeconomic indicators in near real time.
This flexible approach to building themes goes way beyond off-the-shelf sentiment feeds, and you can adapt to new, emerging factors on the fly.
Use case: inflation insights
With the TextReveal® API and Dashboards, you can generate custom proprietary inflation requests—or pull existing queries by country and sector—and use this data in your nowcasting models (Figure 1).
Figure 1: TextReveal's dashboard highlights inflation topics on the web and associated sectors.
In this example extracted from our API (Figure 2), the number of sources mentioning inflation is relatively stable until 2021, when it starts to increase rapidly, in anticipation of actual inflation readings.
Figure 2: The number of sources mentioning inflation increases in early 2021.
As you can see, you can track a topic and map it to various segments, creating a signal that accurately follows that theme over time. But that's not all. Macro teams can inject their expertise into building these queries, too. So if they have specific ideas on keywords and themes to capture—for example, inflation in Brazil—they have complete control over them.
Ultimately, you can break down the data by volume, sentiment, sector, language, or country. Do you want to know what the Japanese market makes of rising inflation in the U.S.? With SESAMm's platform, you can slice the data in different ways to find out.
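A minimal sketch of this kind of slicing is below. The record layout and field names are hypothetical, not TextReveal's actual schema; the point is simply aggregating mention volume and average sentiment per dimension value:

```python
from collections import defaultdict

def slice_mentions(records, dimension):
    """Aggregate mention volume and mean sentiment per dimension value."""
    buckets = defaultdict(lambda: {"volume": 0, "sentiment_sum": 0.0})
    for rec in records:
        bucket = buckets[rec[dimension]]
        bucket["volume"] += 1
        bucket["sentiment_sum"] += rec["sentiment"]
    return {key: {"volume": b["volume"],
                  "mean_sentiment": b["sentiment_sum"] / b["volume"]}
            for key, b in buckets.items()}

# Hypothetical mentions of U.S. inflation, tagged by source country.
records = [
    {"country": "JP", "sentiment": -0.4},
    {"country": "JP", "sentiment": -0.2},
    {"country": "US", "sentiment": 0.1},
]
by_country = slice_mentions(records, "country")  # "JP" volume 2, mean ~ -0.3
```

Swapping `"country"` for `"sector"` or `"language"` gives the other cuts described above without changing the aggregation logic.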
Results with transparency
All the data that informs the results of your queries is available for scrutiny. Say you want to understand why a topic or theme is trending one way or the other, or the sentiment isn't what you expect. You can drill down to the source articles to see why (Figure 3).
Figure 3: An example of source articles affecting sentiment score.
Use case: predictive signals for macro factors and commodities
As part of an asset allocation strategy, we worked with a client to build indicators reflecting the tone of Federal Reserve communications around the federal funds rate and to test what those indicators could predict.
Figure 4: The Fed tone indicator successfully anticipated the major changes in the fed rates—a reduction during the COVID-19 crisis and a rise in 2022.
In Figure 4, the language we uncover becomes increasingly dovish, as shown by the blue line, the aggregate of the hawkish and dovish indicators. This shift precedes the fall in interest rates at the start of COVID-19. Then, ahead of the recent inflationary period, the indicators spike well before interest rates move up. Of course, tone isn't the only factor, and it's not 100% predictive, but it does reflect future movements. With inflation at an eight-year high, the indicator points to continuing inflationary pressure and rising interest rates.
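To make the hawkish/dovish aggregation concrete, here is a deliberately naive keyword-counting sketch. The word lists and scoring are illustrative assumptions; the Fed tone indicator discussed above is built with SESAMm's NLP pipeline, not hand-written lists:

```python
# Illustrative keyword sets; real tone models are learned, not hand-coded.
HAWKISH = {"tighten", "hike", "inflationary", "overheating"}
DOVISH = {"accommodative", "easing", "stimulus", "cut"}

def tone_score(text):
    """Return (hawkish - dovish) / total matched terms, in [-1, 1].
    Positive values lean hawkish, negative values lean dovish."""
    words = text.lower().split()
    hawk = sum(w in HAWKISH for w in words)
    dove = sum(w in DOVISH for w in words)
    total = hawk + dove
    return 0.0 if total == 0 else (hawk - dove) / total

tone_score("The committee favors easing and further stimulus")  # -1.0
tone_score("Rates may hike to curb inflationary pressure")      # 1.0
```

Averaging such scores over a rolling window of Fed-related articles yields a time series in the spirit of the blue line in Figure 4.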
Nowcasting and forecasting with TextReveal
With TextReveal, you can nowcast any macro theme by building expert-driven queries and predictive forecasting signals to get insights into volume, sentiment, and more.
If you want to find data relationships that accurately reflect economic trends and macro themes to what's happening online in near real time with a high degree of control, reach out for a demo.
Researching and analyzing investment opportunities can be challenging for asset management professionals, including private equity and hedge fund portfolio managers, researchers, and analysts, because, of course, you want to make sure you're a good steward of your clients' investments.
And when you find and source data, such as traditional or alternative data, you also want to make sure it's reliable and that the methods used to gather it are tried and true.
This article aims to give you an inside look into SESAMm's knowledge graph—one of the key reasons SESAMm's NLP-derived alternative data is reliable and trusted. We'll explain what a knowledge graph is, why it's important, how it works, and what makes SESAMm's knowledge graph unique.
What is a knowledge graph?
A knowledge graph is a digital representation of a network of real-world entities and the foundation of a search engine or question-answering service. This structured data model puts data in context through linking and semantic metadata, providing a framework for data integration, analytics, unification, and sharing. In other words, it's like a map and legend: the legend labels the concepts, entities, and events, and the map connects and identifies their relationships. These details are stored in a graph database and visualized as a graph representation, hence the term knowledge graph.
Fun fact: The term knowledge graph gained popularity after Google used it in 2012 to name its semantic network.
Two types of knowledge graphs
There are two general types of knowledge graphs: open and private. Open knowledge graphs are available to the public; they're created and maintained by organizations such as Wikidata, DBpedia, and Yago. Private knowledge graphs are often used only by the organizations that create them, like Google, WolframAlpha, Facebook, and SESAMm (of course). Some organizations offer theirs for a fee or subscription, such as Crunchbase and OpenCorporates.
Why a knowledge graph is important
Knowledge graphs are important because they give us a model for seeing how everything relates from a big-picture view, creating new knowledge. Their benefits include:
Incorporating disparate data sources and avoiding data silos
From a data science and artificial intelligence (AI) perspective, knowledge graphs provide machine-readable details, adding context and depth to data-driven AI techniques such as machine learning. Using knowledge graphs and machine learning models together improves system accuracy and extends the range of machine learning capabilities for better explainability and trustworthiness.
How a knowledge graph works
The core of a knowledge graph is its knowledge model: a collection of interconnected descriptions of concepts, entities, events, and relationships known as an ontology. This model provides a framework for statements. Each statement consists of a subject, predicate, and object (Figure 1), known as a triple, and each subject or object is represented only once in the context of the other subjects and their relationships. For example, in the simple sentence "The boy kicks the ball," the boy is the subject, kicks is the predicate, and the ball is the object.
Figure 1: Apple is the subject, chief executive officer is the predicate, and Tim Cook is the object.
Likewise, each statement consists of three components: nodes, edges, and labels. A node, or vertex, represents an entity, which can be anything existing in the real world, such as a person, company, or object. For instance, in this example (Figure 2), Barack Obama is the subject node, Malia and Sasha are object nodes, and the edges, or relationships, are labeled father and sibling, respectively.
Figure 2: How the relationships between nodes can be labeled.
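The subject-predicate-object statements above map directly onto a tiny in-memory triple store. This sketch uses the Barack Obama example from Figure 2; a real knowledge graph would live in a graph database rather than a Python list:

```python
# Each statement is a (subject, predicate, object) triple.
triples = [
    ("Barack Obama", "father_of", "Malia"),
    ("Barack Obama", "father_of", "Sasha"),
    ("Malia", "sibling_of", "Sasha"),
]

def objects(triples, subject, predicate):
    """Query the graph: all objects linked to subject by predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

objects(triples, "Barack Obama", "father_of")  # ["Malia", "Sasha"]
objects(triples, "Malia", "sibling_of")        # ["Sasha"]
```

Even at this scale, the triple form makes relationship queries trivial, which is why the same shape underpins graph databases and RDF stores.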
What makes SESAMm's knowledge graph unique?
SESAMm combines open and private datasets with custom, curated information to create our proprietary knowledge graph. The result is a vast map connecting and integrating over 70 million related entities and their keywords. It relates each organization to its brands, products, associated executives, names, nicknames, and, for public companies, exchange identifiers, drawing on a data repository of more than 18 billion articles and messages that continues to grow.
The knowledge graph is updated regularly
Entities within the knowledge graph are updated weekly and tagged to ensure we correctly track their changes. For instance, the CEO of a company today might not be its CEO tomorrow. And brands might be bought and sold, changing the parent company with each sale. So, weekly updates within the knowledge graph ensure the system is aware of these changes.
NLP-driven accuracy
At SESAMm, named entity disambiguation (NED), a natural language processing (NLP) technique, identifies named entities based on their context and usage. Text referencing "Elon," for example, could refer indirectly to Tesla through its CEO or to a university in North Carolina. Only the context allows us to differentiate, and NED takes that context into account when classifying entities. This method is superior to simple pattern matching, which limits the number of possible matches, requires frequent manual adjustments, and can't distinguish homonyms.
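A deliberately naive version of context-based disambiguation can be sketched by scoring each candidate entity on keyword overlap with the surrounding text. The candidate list and keyword sets are illustrative assumptions; production NED relies on learned models, not hand-written lists:

```python
# Hypothetical candidate entities for the mention "Elon", each with
# illustrative context keywords.
CANDIDATES = {
    "Tesla (via CEO)": {"tesla", "car", "ev", "spacex", "ceo"},
    "Elon University": {"university", "campus", "students", "carolina"},
}

def disambiguate(context_words):
    """Pick the candidate whose keywords best overlap the context."""
    scores = {name: len(keywords & context_words)
              for name, keywords in CANDIDATES.items()}
    return max(scores, key=scores.get)

disambiguate({"new", "ev", "model", "ceo"})       # "Tesla (via CEO)"
disambiguate({"students", "campus", "carolina"})  # "Elon University"
```

The same mention resolves to different entities purely because of the words around it, which is the core idea pattern matching alone cannot capture.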
SESAMm uses three other NLP tools to identify entities and create actionable insights: lemmatization, embeddings, and similarity. Lemmatization normalizes a word into its base form, or lemma, to help identify and aggregate entities. Embeddings assign each word or entity a numerical vector, helping the system analyze how words change meaning depending on context and capture the subtle differences between words that refer to the same concept. Similarity measures whether two words, sentences, or objects are close to one another in meaning.
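The similarity step is typically cosine similarity between embedding vectors. The three-dimensional vectors below are toy values chosen for illustration; real embeddings come from a trained language model and have hundreds of dimensions:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

v_car = [0.9, 0.1, 0.3]
v_automobile = [0.85, 0.15, 0.35]  # near-synonym: vector points the same way
v_banana = [0.05, 0.9, 0.1]        # unrelated concept

# Words with similar meaning score higher than unrelated ones.
assert cosine_similarity(v_car, v_automobile) > cosine_similarity(v_car, v_banana)
```

Because cosine similarity ignores vector length, it compares direction only, which is what makes it a useful proxy for closeness of meaning.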
SESAMm tailored its knowledge graph to find, extract, and analyze data about public or private entities, which isn't readily available from the web or standard rating firms. This unique implementation of a knowledge graph provides insights to give you an edge when researching, analyzing, and submitting recommendations to the portfolio manager or clients.
SESAMm's premier platform, TextReveal®, allows you to fully leverage NLP-driven insights and receive high-quality results through data streams, a modular API, dashboard visualization, and signals and alerts. It's well suited to many quantitative, quantamental, and ESG investment use cases.
Learn how SESAMm can support you in your investment decision-making and request a demo today.