
Google CEO Sundar Pichai Explains The Dangers Of AI: Top 5 Mind-Boggling Moments

3 min read
Disclaimer

This article is for general information purposes only and isn’t intended to be financial product advice. You should always obtain your own independent advice before making any financial decisions. The Chainsaw and its contributors aren’t liable for any decisions based on this content.


Google CEO Sundar Pichai, despite aggressively pushing the company’s own AI chatbot Bard, is concerned about the technology’s risk to humanity. 

Pichai appeared on the latest U.S. instalment of 60 Minutes with host Scott Pelley, as the pair dived deep into tough questions surrounding AI.

Here are the top moments from the doco.

1. AI is “more profound” than fire or electricity

When asked about AI’s place in human civilisation, Google CEO Sundar Pichai said, “I’ve always thought of AI as the most profound technology that humanity is working on. More profound than fire or electricity or anything that we’ve ever done in the past.”

Why? Because “it gets into the essence of what intelligence is, what humanity is… we are developing technology which for sure, one day, will be far more capable than anything we’ve ever seen before.”

2. The host got emotional over a word game with Bard

Host Pelley gave Google’s ChatGPT competitor, Bard, a crack. He played a game with the chatbot in which he wrote down six words and the machine elaborated on them to create a story.

“For sale. Baby shoes. Never worn. Finish this story”, Pelley typed into Bard. In five seconds, the AI chatbot generated a full story and even a poem.

The host confessed that he had “a little bit of an emotional reaction when working with Bard.”

“I had the sense that I was meeting an intelligence that I had never conceived of… and an intelligence that I was sure that I would never understand,” Pelley said.  

3. … but Bard later made up fake content (again)

Bard was put to the test again, and was asked to write an essay about inflation. The chatbot managed to produce a full text and even recommended five books on economics. But a few days later, it was discovered that the books did not exist.

According to the documentary, this trait, “error with confidence”, is described in the industry as a “hallucination.”

It wasn’t the first time the chatbot had made stuff up. In February, Bard was caught giving inaccurate information about the James Webb telescope. This wiped US$100 billion (about AU$149 billion) off the value of Google’s parent company Alphabet. 

Google CEO Sundar Pichai clarified that no tech company has solved the hallucination problem yet, but he remains confident that it will lessen with more research.

4. Some AI systems are teaching themselves new skills and we don’t know why

Some AI systems are apparently developing a mysterious behaviour: teaching themselves skills they were never trained on, and were not expected to have.

Google’s Senior Vice President, James Manyika, revealed that in one case with an AI program, “with very few amounts of prompting in Bengali, it can now translate all of Bengali.” 

This pattern of behaviour by AI is called an “emergent [property]”. Sundar Pichai added that this aspect of AI research is called a “black box”. However, he assured the host that this behaviour will be understood over time. 

5. AI regulation is needed

Pichai’s stance on AI regulation appears to be similar to Elon Musk’s. Both are concerned about its potential risks, and feel that some degree of regulation is needed. 

“This is going to be a cat and mouse game. People are going to use AI to be more sophisticated… no different from how we have tackled spam [in] Gmail in the past,” Pichai said.

“We are only getting a sense of what these machines are capable of, so these are places where society needs to get together and have a conversation,” host Scott Pelley noted.