Shares of Google’s parent company Alphabet plummeted nearly 8% after the search giant’s new artificial intelligence-powered chatbot ‘Bard’ gave a wrong answer during a presentation designed to show off its capabilities.
During a demonstration of Bard posted to Twitter, the AI chatbot was asked a relatively simple question: “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?”
As is the nature of chatbots, Bard confidently spat out the following answer: “James Webb Space Telescope took the very first pictures of a planet outside our solar system.”
Unfortunately for Google, and for what was supposed to be a seamless debut of Bard’s powers, the first direct images of a planet outside our solar system actually came from the ‘Very Large Telescope’ in 2004, according to NASA, not the JWST as Bard claimed.
The promotional tweet featuring Bard’s wrong answer was published on February 6 and had amassed more than one million views at the time of writing. The post is still live and remains embedded in Google’s blog post announcing Bard’s release.
Astronomer Bruce Macintosh called out the chatbot’s error in a comment underneath the ad.
In a move that could be entirely unrelated, the livestream of the showcase event has since been wiped from YouTube.
Alphabet’s share price suffers following Bard gaffe
Alphabet shares (NASDAQ: GOOGL) fell 7.6% in afternoon trading on Wall Street, putting the company on track for its biggest intraday loss since October 26 last year. The drop wiped roughly US$100 billion from Alphabet’s total market value in a single day.
The falling stock price is down to more than just Bard’s telescope gaffe. A number of prominent figures in the tech world, such as Bloomberg Intelligence analyst Mandeep Singh, have called Bard’s release “underwhelming”, with the general consensus being that Bard feels very much like it was ‘panic-launched’ in response to rival OpenAI’s superior AI assistant, ChatGPT.
Bard’s errors reveal deeper problems with AI
Bard’s wrong answer is most likely the result of Google’s chatbot being trained on incorrect reporting about the topic, which it then reproduced as fact. While this isn’t necessarily the end of the world for Bard or Google, it does raise questions about the ethics of releasing large-language-model-based chatbots into the world of everyday users.
Public trust in AI chatbots is eerily high, most likely bolstered by news of OpenAI’s viral ChatGPT acing exams from prestigious business schools and passing coding interviews at Google. But the problem with placing high trust in a tool that will happily deliver wrong answers with serene confidence is that it hyper-accelerates the spread of misinformation.
If we consider for a moment the volume of false information already peddled across the internet by humans, the potential for largely untested AI technology to add further layers of complexity to this problem really cannot be overstated.
Regardless, the AI race looks like it’s only just getting started. The day after Google first unveiled Bard, Microsoft announced that it would be releasing a new version of its search engine ‘Bing’ enhanced by the powers of ChatGPT. Microsoft claims that OpenAI’s technology will be leveraged in Bing to help users quickly summarise web pages, synthesise a wide range of sources, and compose emails.