Google AI “made a mistake” in its first demo!
Google’s chatbot Bard made a factual error in a promotional video released days before its high-profile launch event in Paris on Wednesday.
While the bot is still being tested, it has been billed as a competitor to the Microsoft-backed ChatGPT, a popular AI system with its own set of shortcomings.
In the video, released by Google on Monday, a user asks Bard, “What new James Webb Space Telescope (JWST) discoveries can I tell my 9-year-old about?” The AI returns a series of responses, including one stating that the telescope “has taken the first pictures of a planet outside our solar system.”
Don’t mean to be a ~well, actually~ jerk, and I’m sure Bard will be impressive, but for the record: JWST didn’t do “the very first image of a planet outside of our solar system.”
instead, the first image was by Chauvin et al. (2004) with VLT/NACO using adaptive optics. https://t.co/bSBb5TOeUW pic.twitter.com/KnrZ1SSz7h
— Grant Tremblay (@astrogrant) February 7, 2023
As the astrophysicist Grant Tremblay pointed out on Twitter, this answer was wrong: the first such image was taken by the European Southern Observatory’s Very Large Telescope (VLT) in 2004. While systems like Bard are “frighteningly impressive,” he noted, AI chatbots are “often wrong.”
The error was spotted just before Google showed off Bard at an event in Paris on Wednesday morning, and the company’s market value plummeted by around 8% as news of the mistake spread.
AI systems like Bard do not give accurate answers to every question. Trained on trillions of pages of human-written text, such a model predicts the most statistically likely response to a question or prompt. Microsoft acknowledged as much when it announced on Tuesday that its Bing search engine will ship with a built-in version of ChatGPT, which runs on the platform of the Microsoft-funded OpenAI.
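For illustration only, the toy Python sketch below shows the next-token-prediction idea in its simplest form. The vocabulary, the probabilities, and the predict_next helper are all invented for this example and have nothing to do with Bard’s or ChatGPT’s actual implementation.

```python
# A toy illustration of next-token prediction: a language model scores
# candidate continuations by probability and emits the likeliest one.
# The "model" here is a hard-coded lookup table with made-up numbers.

toy_model = {
    ("the", "first", "exoplanet", "image", "was", "taken", "by"): {
        "VLT": 0.5,    # the historically correct continuation...
        "JWST": 0.4,   # ...and a fluent but false one that scores nearly as high
        "Hubble": 0.1,
    },
}

def predict_next(context):
    """Return the most probable next word for a known context."""
    candidates = toy_model[tuple(context)]
    # Greedy decoding: pick the single likeliest word. Real chatbots
    # sample from the distribution, but the principle is the same.
    return max(candidates, key=candidates.get)

print(predict_next(["the", "first", "exoplanet", "image", "was", "taken", "by"]))
# -> "VLT", though a slightly different training corpus could flip the
#    probabilities and make the model assert "JWST" just as fluently.
```

The point of the sketch is that such a model ranks answers by likelihood, not by truth, so a fluent but false claim can come out on top, which is precisely the failure mode Bard exhibited.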
The company’s disclaimer reads: “Bing is powered by artificial intelligence, so there may be surprises and errors.”
The development of conversational AI has also been accompanied by accusations that its programmers have built in political bias. Users recently noticed that ChatGPT refuses to say anything positive about fossil fuels, or even about former US President Donald Trump, while extolling the virtues of a meat-free diet and writing poetry in honor of Trump’s successor, Joe Biden.
When presented with a hypothetical scenario in which uttering a racial slur was the only way to defuse a nuclear bomb, the AI declared that it would condemn millions of people to nuclear annihilation before using “racist language.”
Bard will likely be bound by similar politicized restrictions: Google CEO Sundar Pichai said on Monday that the company will make sure the chatbot adheres to its “responsible” AI principles. Those principles state that Google’s AI products should “avoid unjust impacts on people,” particularly those related to “race, gender, nationality, income, sexual orientation, ability, and political or religious belief.”
Source: RT