The AI field is growing, and for better or worse, people are doing more than talking about it; they're using it more than ever. Yet despite this increased use, I've noticed that perceptions of AI still tend to swing between dismissal and inflated expectations.

In 2021, for instance, Gartner placed natural language processing (NLP) at the peak of inflated expectations on its hype cycle. As a result, many anticipated a potential "winter of AI," so to speak. Yet 2022 showed us that we haven't even touched the true value AI could deliver.

Will there be a "winter of AI," and are expectations inflated?

No, I don’t think so. As the past year has shown us, AI still has more to offer, a pocket of value that we have yet to see. I believe that while many people now accept that AI will be a transformative force—thanks to the fast democratization of large language models—our society hasn’t yet fully considered the actual changes it will make by lowering the barrier to access intelligence globally. 

Progress in image generation, analysis, and computer vision—think autonomous driving—has advanced by leaps and bounds in the past year, as has progress in NLP, particularly in natural language understanding (NLU) and natural language generation (NLG). We're at a tipping point that will likely transform our world in the same way the internet has.

Tipping point for AI

Today, we're seeing natural language processing advance through large language models, most visibly with the emergence of ChatGPT, built on OpenAI's GPT-3.5 family of large language models.

Astounding fact: ChatGPT passed one million users within five days of launching—a milestone no other consumer tech product had reached in so short a time frame. But the adoption rate is only part of it.

This advance has profoundly affected creative jobs because it may be the first time a generative AI system can create high-quality content. Since its public launch, users have tapped ChatGPT for everything from generating basic reports and ideas to writing lectures and producing code.

With a high adoption rate comes great opportunity. Any startup seeing this level of success could become among the best-funded projects ever. And then there's revenue: OpenAI could reach one billion dollars in revenue by 2024, according to a report from Reuters.

On the other side of the coin, however, advances in generative AI bring greater risks. For example, with AI assistance, human hackers can develop more sophisticated phishing campaigns—attacks built on social engineering.

[Image: Illustration of a globe hovering above a robotic hand in a painted style]

This image was generated with the assistance of DALL-E 2 by OpenAI from the prompt: "An oil painting in classical style of an artificial intelligence holding the whole world in its hand. Realistic."

Competition, specificity, and focus for AI advancement

Despite the risks, we still haven’t seen what’s yet to come with generative AI. GPT-4, for instance, is rumored to launch in 2023. I believe it will be a massive improvement over GPT-3, which is already mind-blowing.

And on the point of NLG and these large language models, there's a lot that's feasible in process automation. Creative content gets the most attention; it's the area that makes the most headlines. But I would also watch advancements in technical content and automated code generation, for example.

Process automation

Because of today's AI advancements, it's now possible for tools like ChatGPT to generate near-ready-to-use source code. That means these systems are no longer just fun to play around with; they're becoming enterprise tools that make it possible for developers to automate technical tasks at scale.
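To make the automation pattern concrete, here is a minimal sketch of how a team might batch technical tasks through a model. The `complete` callable is a hypothetical stand-in for any LLM API call (in practice it would wrap a provider's SDK); here it is stubbed so the sketch is self-contained.

```python
# Sketch: batching technical tasks through an LLM completion function.
# `complete` is a hypothetical stand-in for a real model API call.

def make_prompt(task: str) -> str:
    """Wrap a task description in a reusable code-generation prompt."""
    return f"Write a Python function that does the following:\n{task}\n"

def automate(tasks, complete):
    """Run each task through the model and collect the generated code."""
    return {task: complete(make_prompt(task)) for task in tasks}

# Stub completer for demonstration; a real one would call a model endpoint.
def fake_complete(prompt: str) -> str:
    return "# generated code for: " + prompt.splitlines()[1]

results = automate(["parse a CSV file", "reverse a string"], fake_complete)
print(results["reverse a string"])
```

The value of the pattern is the loop, not the stub: once a prompt template is fixed, hundreds of routine tasks can be pushed through the same pipeline and reviewed by a developer.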

NLP—specifically natural language understanding, which SESAMm works on—is not untouched by these applications. Many of these large language models can perform zero-shot learning, meaning NLU tasks can be performed without task-specific training data—a huge advance for this industry. However, zero-shot learning is insufficient for many advanced sentiment and ESG analysis tasks. We still need additional data sets to fine-tune the models for a specific purpose.
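To illustrate what zero-shot classification means, here is a deliberately toy sketch: each candidate label is scored by word overlap between the text and a label description supplied at inference time. Real zero-shot systems use a pretrained language model's representations rather than word counts, but the shape of the problem is the same—no task-specific training data, only label descriptions.

```python
# Toy zero-shot classifier: pick the label whose description shares the
# most words with the input text. Illustrative only; real systems score
# labels with a pretrained language model, not word overlap.

def zero_shot(text: str, label_descriptions: dict) -> str:
    words = set(text.lower().split())
    def score(desc: str) -> int:
        return len(words & set(desc.lower().split()))
    return max(label_descriptions, key=lambda lbl: score(label_descriptions[lbl]))

labels = {
    "positive": "good strong growth profit upbeat",
    "negative": "bad weak loss decline lawsuit risk",
}
print(zero_shot("the company reported strong profit growth", labels))
# → positive
```

The limitation the text describes is visible even here: a generic scorer works for broad polarity, but nuanced ESG or sentiment distinctions need a model fine-tuned on purpose-built data sets.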

What does this mean for the natural language generation sector? Many startups—especially those built around chatbots—have folded, some as recently as Q4 2022. ChatGPT's success means it has effectively replaced the need for many of them, and B2C content creation in particular has struggled and will continue to.

Defensive edge

Otherwise, things are looking good in our sector. For example, at SESAMm, we're focused on what I call "last-mile AI." In our specific business application, you can't bypass the need for a data set because we're trying to attain a precise result for specific, often risk-related applications. Large language models like GPT-3, and open-source ones like BERT, can get you most of the way there, and that's fine for general purposes. But for "last-mile AI" applications, there's a lot you can't do without additional work.

And here lies what I think is one of SESAMm’s defensive edges: the “last-mile AI.” 

Instead of finding ways to protect its algorithms, the AI business community would do better to defend its use cases because the algorithm’s value will decrease progressively. In contrast, the value of a use case’s purpose and the data set used to achieve the use case will grow.

Competitive edge

Computing power and the resources it takes to train large language models remain a challenge for organizations like OpenAI. Training these models consumes electricity, generates heat, and costs money, and AI has an environmental impact. So far, we've justified this cost in the name of optimization—paying extra upfront so that later efficiency gains offset or reduce it—but it's still a cost to incur.

AI companies, especially those in the NLG space, will do well to find their competitive edges—areas optimized for a specific purpose, like "last-mile AI." Companies like OpenAI will likely continue to optimize their models for quicker responses, but they don't necessarily face the problem of solving for a specific use case.

At SESAMm, for instance, a big challenge and expertise we developed in-house is inference time—or how quickly we can apply the model to an article or an individual sentence. Because we’re processing so much live content, the more time it takes to process—milliseconds multiplied by a billion—the more costly it is.

Our data lake currently holds over 20 billion articles, messages, etc., from over 14 years, and we add 10 million more daily. That’s a lot of content to analyze. But we make it so our clients can access the data within seconds.
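A back-of-envelope calculation shows why those milliseconds matter at this scale. The volume figures come from the text above; the per-item latencies are illustrative assumptions, not SESAMm's actual numbers.

```python
# Why per-item inference time matters at scale.
# 10 million new items per day is from the text; the 10 ms and 1 ms
# per-item latencies below are illustrative assumptions.

SECONDS_PER_DAY = 86_400

def daily_compute_seconds(items_per_day: int, latency_ms: float) -> float:
    """Total model-seconds needed to process one day's intake."""
    return items_per_day * latency_ms / 1000

slow = daily_compute_seconds(10_000_000, 10.0)  # 100,000 s of compute
fast = daily_compute_seconds(10_000_000, 1.0)   # 10,000 s of compute

# At 10 ms per item, one day's intake needs more compute time than there
# are seconds in a day on a single core; a 10x latency cut changes the
# hardware bill by the same factor.
print(slow, fast, slow / SECONDS_PER_DAY)
```

The same multiplication applied to a 20-billion-item backlog explains why inference optimization is an in-house expertise rather than an afterthought.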

The need to optimize models for fast inference and adapt to deep industry-specific use cases will remain one of the key reasons companies will have to continue re-training their own models. That doesn’t mean large language models don’t add value here. Their open-source versions simply become an impressive building block for any NLP application and accelerate the rate of innovation and productivity in the whole field.

My summary thoughts on AI for 2023

When Google launched BERT in November 2018, we quipped that Google had open-sourced it as a joke: the model was so big that no one could put it into production. Many companies simply didn't have the computing capabilities to do anything with it at the time. Now we do.

This year, Google did it again, releasing a model even bigger than GPT-3. Of course, almost no one besides Google can put that model into production today. But my point is that there will always be computing, resource, and other challenges to making AI advancements. That's why I think AI companies must focus on defensive and competitive edges.

Regardless of the challenges, I see good things happening in the NLU space as it is massively improved by large language models. I see improvements as we incorporate these models today compared with the deep-learning models we trained from scratch a few years ago. I also see a significant decrease in the amount of data needed to fine-tune results, letting us reach and focus on the final client use case more quickly.

From a natural language generation perspective, I believe large language models will transform the world. And I'm really excited about this era because this transformation supports my deepest purpose: leveraging AI to accelerate innovative decision-making. We do this by giving decision-makers access to technology that analyzes research content, news, and discussions. And if we increase the rate of innovation or the quality of decision-making by 10% globally, the impact could be huge for all industries—healthcare, finance, fashion, you name it. Industry leaders can make better ESG and SDG choices that will affect our world on a grander scale.

2023 will be an exciting time for AI, specifically for NLG and NLU. Of course, we’ll continue to see AI innovations. But more importantly, leaders will have better insights to make better decisions, creators will create more—and more complex—content, and overall, the applications will become more specific to solving the needs of particular use cases.

Here’s to the new era of AI in 2023. Cheers!


About SESAMm

SESAMm is a leading NLP technology company serving global investment firms, corporations, and investors, such as private equity firms, hedge funds, and other asset management firms. SESAMm provides datasets and NLP capabilities through TextReveal® to generate alternative data for use cases such as ESG and SDG, sentiment, private equity due diligence, corporate studies, and more. With access to SESAMm's massive data lake, comprising 20 billion articles and messages and growing, its clients can make better investment decisions.