Artificial intelligence is replacing conventional search – Bing has already integrated ChatGPT-based technology, and Google plans to launch its own Bard in the coming weeks. However, the Bard presentation made clear that artificial intelligence cannot yet be trusted: the chatbot stated a false fact during its first public demonstration by Google representatives.
Google uses the AI-powered chatbot to generate text summaries of search results, so users no longer need to follow links and read through pages of text themselves. The chatbot does it for them and gives a specific answer to their question.
However, a demonstration of Google Bard in action, distributed by Google to promote the launch of the new feature, shows the search chatbot giving a wrong answer to the question posed.
The question was technical and had one correct answer. The error will raise further questions about the accuracy of search engines and the AI-generated responses they give to people's questions.
The user entered the search query "what new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?".
NASA's James Webb Space Telescope (JWST) launched in December 2021, and scientists have since used it to discover several new planets outside the solar system.
One of Bard's responses stated: "JWST took the first photographs of a planet outside of our solar system."
But that's not true. The first-ever image of a planet outside the solar system was taken in 2004 with the European Southern Observatory's Very Large Telescope (VLT) in Chile. The exoplanet it photographed, 2M1207b, is about five times the mass of Jupiter and lies roughly 170 light-years from Earth.
A Google spokesperson said: "This highlights the importance of the rigorous testing process we are launching this week with the Trusted Tester program. We will combine external feedback with our own internal testing to ensure that Bard's responses meet a high bar for quality, safety, and groundedness in real-world information."
NIX Solutions notes that experts have long expressed concerns about the inaccuracies produced by artificial intelligence systems, which can be difficult for people to notice.
OpenAI, the maker of the ChatGPT chatbot that competes with Google Bard, has been open about the limitations of its technology and has acknowledged that the bot can sometimes write plausible-sounding but incorrect or nonsensical answers to people's questions.