Why AI sometimes gets it wrong — and big strides to address it

Technically, hallucinations are “ungrounded” content, which means a model has changed the data it’s been given or added additional information not contained in it.
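
As a rough illustration of the idea, the toy check below flags output sentences that have little word overlap with the source text they were supposed to be grounded in. It is only a sketch: the source and summary strings are invented, and real groundedness detection relies on far more capable models than simple word overlap.

```python
# Naive illustration of what "ungrounded" means in practice: flag
# output sentences whose words have little support in the source text.
# Toy example only; production groundedness detection uses far more
# sophisticated methods than word overlap.

import re

def ungrounded_sentences(source: str, output: str, threshold: float = 0.6) -> list[str]:
    """Return output sentences with low word overlap against the source."""
    source_words = set(re.findall(r"\w+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", output.strip()):
        words = set(re.findall(r"\w+", sentence.lower()))
        if not words:
            continue
        support = len(words & source_words) / len(words)
        if support < threshold:  # little of this sentence comes from the source
            flagged.append(sentence)
    return flagged

source = "The patient was prescribed 10 mg of lisinopril daily for hypertension."
output = ("The patient was prescribed 10 mg of lisinopril daily. "
          "The medication was discontinued after an allergic reaction.")
print(ungrounded_sentences(source, output))
# The second sentence is flagged: it adds information not contained in the source.
```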

There are times when hallucinations are beneficial, like when users want AI to create a science fiction story or provide unconventional ideas on everything from architecture to coding. But many organizations building AI assistants need them to deliver reliable, grounded information in scenarios like medical summarization and education, where accuracy is critical.

That’s why Microsoft has created a comprehensive set of tools to help address ungroundedness, drawing on expertise from developing its own AI products like Microsoft Copilot.

Company engineers spent months grounding Copilot’s model with Bing search data through retrieval-augmented generation (RAG), a technique that adds extra knowledge to a model without retraining it. Bing’s answers, index and ranking data help Copilot deliver more accurate and relevant responses, along with citations that let users look up and verify information.
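
The sketch below shows the basic shape of retrieval-augmented generation: retrieve relevant text, place it in the prompt alongside the question, and have the model answer from that text with citations. It is a minimal illustration, not Copilot’s pipeline; the in-memory corpus, keyword retriever and call_model() stub are hypothetical stand-ins for a real search index and model endpoint.

```python
# Minimal retrieval-augmented generation (RAG) sketch, not Copilot's
# actual pipeline. The tiny in-memory corpus and keyword retriever are
# stand-ins for a real search index; call_model() is a hypothetical
# placeholder for a language model API.

CORPUS = [
    {"url": "https://example.com/eiffel", "text": "The Eiffel Tower is 330 metres tall."},
    {"url": "https://example.com/paris", "text": "Paris is the capital of France."},
]

def retrieve(question: str, top_k: int = 2) -> list[dict]:
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc: len(q_words & set(doc["text"].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, docs: list[dict]) -> str:
    """Ground the model in retrieved text and ask for verifiable citations."""
    sources = "\n".join(f"[{i + 1}] {d['url']}: {d['text']}" for i, d in enumerate(docs))
    return (
        "Answer using only the numbered sources below and cite them. "
        "If the answer is not in the sources, say so.\n\n"
        f"{sources}\n\nQuestion: {question}\nAnswer:"
    )

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM endpoint; the knowledge
    # arrives through the prompt, so no retraining is needed.
    return f"(model response grounded in a prompt of {len(prompt)} characters)"

def answer(question: str) -> str:
    docs = retrieve(question)              # 1. fetch fresh, relevant data
    prompt = build_prompt(question, docs)  # 2. put that data in the prompt
    return call_model(prompt)              # 3. model reasons over the data

if __name__ == "__main__":
    print(answer("How tall is the Eiffel Tower?"))
```

Because the knowledge arrives through the prompt at query time, the underlying model never needs to be retrained when the data changes.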

“The model is amazing at reasoning over information, but we don’t think it should be the source of the answer,” says Sarah Bird, Microsoft’s chief product officer of Responsible AI. “We think data should be the source of the answer, so the first step for us in solving the problem was to bring fresh, high-quality, accurate data to the model.”
