As technology develops at a rapid pace, artificial intelligence (AI) has become an indispensable part of our lives. But while AI brings convenience, it can also create unexpected problems. Recently, Google ran into exactly such a problem.
Google AI Overview Feature Encounters Issues
In mid-May, at its annual I/O developer conference, Google launched a new feature called "AI Overview," aiming to integrate AI into its world-leading search engine. The feature debuted in the US market, and its core idea is that when users search, the first thing they see is no longer a list of web links but a summary organized by AI.
The rollout soon hit trouble. Screenshots of a series of absurd answers produced by AI Overview began circulating on social media. Google initially defended the feature, saying AI Overview is usually accurate and had been extensively tested beforehand. But on Friday, local time, Liz Reid, the head of Google's search business, admitted that some strange, inaccurate, or unhelpful AI search results had indeed appeared.
Urgent Fixes and Technical Improvements
Faced with these issues, Google acted quickly, making "more than ten technical improvements and updates" to the AI system to improve the accuracy and reliability of answers for certain queries, especially health-related ones. Reid said additional "trigger restrictions" have also been added to AI Overview.
Even after the updates were announced, Google's continued testing kept surfacing problems. AI Overview is meant to give users authoritative answers to what they are looking for without making them click through a ranked list of website links, but the feature clearly still has significant flaws.
AI Experts' Warnings
Some AI experts have long warned Google against handing its search results over to AI-generated answers. They argue this could perpetuate bias and misinformation, and endanger people seeking help in emergencies. Large language models work by predicting, based on the data they were trained on, which words best answer the question a person asks; this leads them to sometimes fabricate information, a phenomenon known as "AI hallucination."
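The prediction mechanism described above can be illustrated with a toy sketch (a tiny bigram model, not anything resembling Google's actual system): the model only learns which word statistically tends to follow which, so it can produce fluent text that is factually wrong.

```python
from collections import defaultdict, Counter

# Toy training corpus: the model sees only these words, nothing about truth.
corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of france is paris ."
).split()

# Count bigram frequencies: which word tends to follow which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word."""
    return bigrams[word].most_common(1)[0][0]

def generate(start, length=6):
    """Greedily chain next-word predictions from a starting word."""
    words = [start]
    for _ in range(length - 1):
        words.append(predict_next(words[-1]))
    return " ".join(words)

# "paris" follows "is" more often than "madrid" in the corpus, so the
# model confidently produces a fluent but false statement about Spain:
print(generate("spain", 3))  # → "spain is paris"
```

The false output is not a lookup error; it is the direct result of always choosing the most probable next word, which is the essence of the "hallucination" problem the experts describe.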
Google's Response
Reid said that AI Overview generally does not "hallucinate" or fabricate information the way other large language model products do, because it is more closely integrated with Google's traditional search engine and only displays content supported by the most authoritative or relevant web pages.
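The grounding idea Reid describes can be sketched in miniature (a hypothetical illustration, with made-up page names and an assumed allowlist of trusted sources, not Google's implementation): candidate statements are only surfaced if a trusted retrieved page directly supports them.

```python
# Hypothetical retrieved pages: one authoritative, one not.
retrieved_pages = {
    "nps.gov/grca": "The Grand Canyon is 277 miles long and up to 18 miles wide.",
    "random-forum": "I heard the Grand Canyon was carved by aliens last year.",
}

# Assumed allowlist of authoritative sources (an assumption of this sketch).
trusted_sources = {"nps.gov/grca"}

def grounded_claims(candidate_claims):
    """Keep only claims directly supported by a trusted retrieved page."""
    supported = []
    for claim in candidate_claims:
        for source, text in retrieved_pages.items():
            if source in trusted_sources and claim in text:
                supported.append((claim, source))
    return supported

claims = ["277 miles long", "carved by aliens"]
print(grounded_claims(claims))  # → [('277 miles long', 'nps.gov/grca')]
```

The unsupported claim is dropped rather than shown, which is the behavior Reid attributes to AI Overview; her list of remaining failure modes (misread queries, misread nuance, thin information) are precisely the cases such filtering cannot catch.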
She wrote: "When AI Overview makes mistakes, it is usually for other reasons: such as misunderstanding the query, misunderstanding the nuances of language on the web, or not having a lot of useful information."
Conclusion
Chirag Shah, a computer scientist and professor at the University of Washington, pointed out that information retrieval is Google's core business and should not be hastily handed over to AI models. He warned: "Even when the AI isn't making things up, it can still pass misinformation on to users, and if Google does this, it could produce quite bad results."
This incident is a reminder that while we enjoy the convenience AI brings, we also need to stay alert to its potential risks. Google's rapid fixes are a positive start, but maturing and improving AI technology will still take sustained, collective effort.