
AI chatbots need to learn to ask for help, says Microsoft

AI chatbots must learn to ask for help, according to Vik Singh, a vice president at Microsoft. Here's what the head of Microsoft's Copilot development teams had to say about the limits and prospects of AI.

I know I don't know… everything.

This is what Vik Singh, vice president of Microsoft, hopes for AI chatbots. Microsoft is one of the companies that has bet most heavily on this new technological frontier, investing in OpenAI, the company behind ChatGPT, which it has integrated into its search engine, Bing.

Since last year, Microsoft, Google and their competitors have been rapidly deploying generative AI applications such as ChatGPT, which produce various types of content on demand and give users the illusion of omniscience. An illusion indeed: at the moment the main problem with generative artificial intelligence is its rate of errors, the so-called "hallucinations", in the answers it returns.

To make the technology more reliable, last June Microsoft also proposed a system that corrects generated text in real time whenever it finds different or more detailed information online about the topic it has answered.
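
The article does not describe how such a system works internally, but the general "check-and-correct" pattern it alludes to can be sketched in a few lines. The sketch below is an illustrative assumption only, not Microsoft's actual system: the claim splitting, the naive word-overlap check and the 0.5 threshold are all hypothetical choices made for the example.

```python
# Hypothetical sketch of a check-and-correct loop: compare each claim in a
# generated answer against retrieved reference text and flag unsupported ones.
# This is NOT Microsoft's system; every name and threshold here is illustrative.

def is_supported(claim: str, reference: str, min_overlap: float = 0.5) -> bool:
    """Naive grounding check: fraction of the claim's words found in the reference."""
    words = {w.lower().strip(".,") for w in claim.split()}
    if not words:
        return True
    hits = sum(1 for w in words if w in reference.lower())
    return hits / len(words) >= min_overlap

def correct_answer(claims: list[str], reference: str) -> list[str]:
    """Keep claims supported by the reference, mark the rest for revision."""
    corrected = []
    for claim in claims:
        if is_supported(claim, reference):
            corrected.append(claim)
        else:
            corrected.append(f"[needs revision against sources] {claim}")
    return corrected

if __name__ == "__main__":
    reference = "Copilot is Microsoft's AI assistant, integrated into its software."
    claims = ["Copilot is Microsoft's AI assistant.",
              "Copilot was launched in 1998."]
    print("\n".join(correct_answer(claims, reference)))
```

In a real deployment the overlap heuristic would be replaced by a retrieval step and a model-based consistency check, but the control flow, generate, verify against sources, revise, is the part the article is pointing at.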

Despite progress, these models continue to “hallucinate” or invent answers.

So while generative AI tools will save companies a lot of time and money, according to the Microsoft executive these models still have to learn when to admit that they don't have all the answers.

All the details.

THE “HUMAN” FACTOR OF ARTIFICIAL INTELLIGENCE ERRORS

Since the artificial intelligence phenomenon exploded, the uses of AI models have multiplied, but so have the errors, known in the jargon as "hallucinations".

"To be honest, the thing that's really missing today is an AI model that raises its hand and says 'Hey, I'm not sure, I need help,'" Singh explained in an interview with AFP. The executive joined Microsoft last January and this summer took over management of the teams developing Copilot, Microsoft's artificial intelligence assistant. Singh's teams are working to integrate Copilot directly into the tech giant's software and to make it more autonomous.

For Singh, errors and hallucinations are an important problem to solve: enterprise customers cannot afford for their AI systems to go astray, even occasionally.

WORKING ON CHATBOTS THAT ADMIT WHAT THEY DON'T KNOW

As AFP recalls, Marc Benioff, CEO of Salesforce, recently said that many of his customers are increasingly frustrated by the inconsistencies of Microsoft's Copilot.

Singh pointed out that "really smart people" are working on ways for a chatbot to admit that it doesn't know the correct answer and to ask for help. According to the Microsoft executive, a more humble model would be incredibly useful: even if it had to defer to a human 50% of the time, it would still save "a lot of money."
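
The idea Singh describes, a model that "raises its hand" and hands the question to a person when it isn't sure, amounts to a confidence-threshold escalation. The sketch below is a minimal, assumed illustration of that pattern; the confidence score, the 0.5 threshold and the function names are hypothetical and say nothing about how Copilot is actually built.

```python
# Illustrative sketch of confidence-based escalation to a human, assuming the
# model exposes some confidence score in [0, 1]. Not Copilot's real design.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelReply:
    answer: str
    confidence: float  # assumed to be normalized to [0, 1]

def answer_or_escalate(
    question: str,
    model: Callable[[str], ModelReply],
    ask_human: Callable[[str], str],
    threshold: float = 0.5,
) -> str:
    """Return the model's answer only when it is confident enough;
    otherwise 'raise a hand' and defer to a human reviewer."""
    reply = model(question)
    if reply.confidence >= threshold:
        return reply.answer
    return ask_human(question)

if __name__ == "__main__":
    # A stand-in model that is not confident, so the query is escalated.
    fake_model = lambda q: ModelReply(answer="42", confidence=0.3)
    human = lambda q: f"[escalated to a human] {q}"
    print(answer_or_escalate("What is the refund policy?", fake_model, human))
```

Even a crude gate like this captures the economics Singh mentions: every query answered automatically above the threshold is one a person does not have to handle.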


This is a machine translation from the Italian of a post published on Start Magazine at the URL https://www.startmag.it/innovazione/i-chatbot-ai-devono-imparare-a-chiedere-aiuto-parola-di-microsoft/ on Wed, 04 Sep 2024 13:24:53 +0000.