How AI21’s new tool reduces LLM hallucinations

No one really knows why AI systems built on large language models hallucinate, but they do.

This flaw has led some organizations to ban the use of artificial intelligence. A majority of leaders have expressed concern about risks related to factual accuracy, according to a KPMG survey of 225 executives at US companies with annual revenues exceeding $1 billion. The survey found 90 percent had “moderate to very significant” concerns about the risks of using generative AI and about how to mitigate those risks, Forbes reported in April. Sixty percent also said they were probably still two years away from their first generative AI solution.

AI21 Labs hopes to ease these concerns with a new engine called Contextual Answers. Released Wednesday, the tool significantly reduces hallucinations, according to Tal Delbari, who led the AI21 team that created it.

“We don’t really understand all the inner workings of these big language models. It’s almost like magic,” Delbari said. “But what we do know is that when we train these models — and it’s true for the large language models of AI21 Labs, OpenAI, Anthropic, and all these players — the main part of model training isn’t about making sure the responses or outputs of the models are correct.”

It’s a language model, not a knowledge model, as ethicist, author, and philosophy professor Reid Blackman told Rev4. It’s trained to predict the next word in a sentence, Delbari explained, so it will try to generate a sentence that looks structurally and grammatically correct, but the AI doesn’t understand the concept of factuality.

“If a customer asked about a specific website’s return policy, the company wants the model to generate a well-founded, truthful, and fair answer based on the specific policy and not based on the general case,” Delbari said. “This is why we started this technology.”

How the AI21 engine handles hallucinations

Contextual Answers deals with hallucinations in two ways. First, it runs on Jurassic-2, AI21’s large language model. Delbari’s group trained a specific variant of Jurassic-2 on business domains such as finance, medicine, insurance, pharmaceuticals, and retail.

“We train the model with document triplets: documents, questions, and answers that come specifically from those documents. This is part of the technique we use to make sure the model learns that, in this specific task, it should only retrieve information from a single document or document library,” he said.
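The triplet idea can be pictured as a dataset where every training record ties a question and its answer to the one document that supports them. This is a hypothetical sketch of what such records might look like; the field names and JSONL layout are illustrative assumptions, not AI21’s actual training schema.

```python
# Illustrative sketch of a document-question-answer triplet dataset.
# Field names here are assumptions, not AI21's real format.
import json

triplets = [
    {
        "document": "Returns are accepted within 30 days of delivery "
                    "with the original receipt.",
        "question": "How long do customers have to return an item?",
        "answer": "Customers may return items within 30 days of delivery.",
    },
]

# Serialize to JSONL, a common on-disk format for fine-tuning datasets.
jsonl = "\n".join(json.dumps(t) for t in triplets)
print(jsonl)
```

The point of the pairing is that, at training time, the answer is only ever correct relative to its own document, which pushes the model toward retrieving from the supplied corpus rather than from whatever it memorized during pretraining.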

Some organizations have attempted a similar process with models already pretrained on the Internet, he said, but that approach is flawed. For one thing, when a model is trained on new information, it tends to forget what it previously learned, he explained. Organizations using open source projects like LangChain have also tried similar approaches, he said. These approaches “aren’t great for fighting hallucinations, and the organization still needs to invest in AI professionals, NLP (natural language processing) experts, and engineers,” Delbari added. “With our solution, it’s just plug and play. You don’t need to do any engineering work to implement this architecture.”

Another key difference between Contextual Answers and large Internet-trained language models like GPT is the context window: in many such models, the window for input is between 8,000 and 32,000 tokens, roughly the same number of words. This can be a problem for organizations, which may have a single document of 50 pages or more. Contextual Answers supports documents of any length and any number of documents, Delbari said.
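A back-of-envelope calculation shows why a fixed window pinches at around 50 pages. The words-per-page and words-per-token ratios below are common rules of thumb, not figures from AI21:

```python
# Rough estimate of how many tokens a document of a given page count needs.
# Both constants are rule-of-thumb assumptions, not AI21 figures.
WORDS_PER_PAGE = 500     # typical single-spaced page
WORDS_PER_TOKEN = 0.75   # common heuristic for English text

def estimated_tokens(pages: int) -> int:
    """Estimate the token count for a document with the given page count."""
    return round(pages * WORDS_PER_PAGE / WORDS_PER_TOKEN)

for pages in (10, 50, 100):
    tokens = estimated_tokens(pages)
    fits = tokens <= 32_000
    print(f"{pages:>3} pages ~ {tokens:>6} tokens; fits in 32K window: {fits}")
```

Under these assumptions a 50-page document comes out to roughly 33,000 tokens, just over a 32K window, so anything longer has to be chunked or summarized before a conventional model can even read it.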

With Contextual Answers, organizations train the model on their own document library via AI21’s website or an API, he said.
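An API integration of this kind typically amounts to an authenticated JSON POST carrying the user’s question. The sketch below builds such a request with the standard library only; the endpoint URL, field names, and header layout are placeholder assumptions, not AI21’s documented API.

```python
# Hypothetical sketch of preparing a request to a contextual-answers style
# REST API. The URL and payload fields are placeholders, not AI21's real API.
import json
import urllib.request

API_URL = "https://api.example.com/v1/contextual-answers"  # placeholder

def build_request(question: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) an authenticated JSON POST request."""
    payload = json.dumps({"question": question}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("What is the return policy?", api_key="YOUR_KEY")
print(req.get_method(), req.full_url)
# Sending would be: urllib.request.urlopen(req) -- omitted here since the
# endpoint above is a placeholder.
```

In the vendor-hosted setup the document library lives on the provider’s side, so the client only ever ships questions and receives grounded answers.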

A backup plan

Second, AI21 has added filters and classifiers to detect hallucinations and either remove them or ask the main model to regenerate its output until no hallucinations remain.
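The generate-check-regenerate loop described here can be sketched in a few lines. Both model functions below are toy stand-ins, not AI21’s components; the structure is the point:

```python
# Minimal sketch of a "generate, check, regenerate" loop: a detector screens
# each draft, and the generator is re-run until a draft passes or the retry
# budget runs out. Both callables are stubs standing in for real models.
from typing import Callable, Optional

def answer_with_filter(
    generate: Callable[[str], str],
    is_hallucination: Callable[[str], bool],
    question: str,
    max_retries: int = 3,
) -> Optional[str]:
    """Return the first draft that passes the filter, or None."""
    for _ in range(max_retries):
        draft = generate(question)
        if not is_hallucination(draft):
            return draft
    return None  # give up rather than return an ungrounded answer

# Toy stand-ins: the first draft is flagged, the second passes.
drafts = iter(["made-up claim", "grounded answer"])
result = answer_with_filter(
    generate=lambda q: next(drafts),
    is_hallucination=lambda text: "made-up" in text,
    question="What is the return policy?",
)
print(result)  # → grounded answer
```

Returning nothing when every draft fails is a deliberate design choice: for a grounded-answers product, refusing to answer is usually preferable to answering wrongly.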

“From a single document or from millions of internal documents, whatever corpus of information you load into the model, we put a whole architecture of models around it to make sure the model doesn’t go off [the] rails,” he said.

It’s still possible to trigger hallucinations, but a user has to work very hard to do so, he added.

“Our models are much more faithful to the actual data, and it’s very rare that they say something that isn’t true,” Delbari said. When companies see that hallucinations are nearly zero, it encourages them to put these technologies into production, he added.

There are two ways organizations can deploy Contextual Answers: as SaaS running on AWS, or in the organization’s own virtual private cloud. The latter ensures that data never leaves the organization’s virtual walls, he said.

Contextual Answers and coding

But what about coding? The tool isn’t currently optimized to help write code, but the roadmap includes a plan to train it to do so, Delbari said.

For those interested, AI21 Labs’ hackathon partner lablab.ai has posted a tutorial on building Contextual Answers apps. It was written for a hackathon held when the tool was still in beta and document sizes were limited.



Image Source : thenewstack.io
