Strange and potentially dangerous advice, apparently generated by Google's new AI summary feature, has been circulating all over social media. The company continues to defend the tool as a "high quality" search experience.
Google has added an artificial intelligence (AI) tool to its search engine, but according to numerous social media and press reports, the new feature has advised users to eat rocks, add glue to their pizzas, and clean their washing machines with chlorine gas.
In one especially egregious instance, the AI appeared to suggest jumping off the Golden Gate Bridge to a user who searched "I'm feeling depressed."
The experimental "AI Overviews" feature uses Google's Gemini AI model to scour the web and summarize search results. At its I/O developer conference on May 14, Google said the feature had been rolled out to some users in the U.S. ahead of a worldwide launch planned for later this year.
But the tool has already drawn widespread ire on social media, with users reporting that AI Overviews sometimes generated summaries based on joke Reddit posts and articles from the satirical website The Onion.
"You can also add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness," AI Overviews said in response to a question about pizza, according to a screenshot shared on X. The response appears to trace back to a joke comment posted on Reddit a decade ago.
Other false claims include that Barack Obama is a Muslim, that Founding Father John Adams graduated from the University of Wisconsin 21 times, that a dog has played in the NBA, NHL, and NFL, and that users should eat one rock a day to aid digestion.
Live Science was unable to independently verify the posts. In response to questions about how widespread the erroneous results are, Google representatives said in a statement that the examples seen were "generally very uncommon queries, and aren't representative of most people's experiences."
"The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web," the statement said. "We conducted extensive testing before launching this new experience to ensure AI Overviews meet our high bar for quality. Where there have been violations of our policies, we've taken action, and we're also using these isolated examples as we continue to refine our systems overall."
This is not the first time generative AI models have been caught fabricating information, a phenomenon known as "hallucination." In one notable example, ChatGPT invented a sexual harassment scandal and named a real law professor as the perpetrator, citing fabricated news reports as evidence.