In Paris, Google demonstrated its new ChatGPT competitor, Bard, alongside new AI-powered features coming to its other apps, but made one critical mistake in a Twitter post.
Google showcased Bard in a short presentation highlighting its AI advancements. The firm recently announced that Bard, its competitor to OpenAI’s ChatGPT, will roll out to testers over the next few weeks before coming to the public.
As Microsoft begins to roll out ChatGPT to Bing, the fight over who has the better AI is well underway, with presumably only a fraction of Bard’s capabilities on display today.
Bard AI presented an incorrect answer in Twitter advert
During a demonstration of Bard posted to Twitter, the engine was asked: “What new discoveries from the JWST can I tell my 9 year old about?”
Unfortunately for Google, Bard answered that “JWST took the very first pictures of a planet outside our solar system”. In fact, according to NASA, the first direct images of an exoplanet came from the VLT in 2004, not from JWST as Bard stated. The error could stem from incorrect reporting that Bard had seen and interpreted as correct.
In a seemingly unrelated move, the stream of the entire showcase has since been pulled from the internet. Following the event, shares in the company fell by around 6% as of the time of writing.
Google Bard AI features explained
In the demo, Bard is asked what new car to buy. It responds with a full list of pros and cons, and is then asked to plan a good road trip for that car.
More intriguing, however, is the inclusion of Bard in actual Google search results, outside the chat-style window. Here, a question is answered using information presumably gleaned from the web, with conventional results pushed further down the page in case you want to look into a particular source.
Google is calling this “NORA”, or “No One Right Answer”: the AI gives you a rundown, and the search results below provide further avenues for research.
Google Lens & Multisearch
Oddly, despite being the main event, Bard was sandwiched between presentations about how Google is using Lens and AI together.
Google Lens is now used 10 billion times a month, allowing users to take photos of things to get search results. Lens will now be able to initiate something called “Multisearch”.
This uses AI to help you find, for instance, a shirt in a different color or use the tag “near me” to locate where a particular cake is served.
Google Lens will also be able to scan images you’ve already taken, and in the future will use images from mobile search results as a jumping-off point.
Maps to get Lens updates
In a new use of augmented reality, Lens will be able to scan a location and overlay information in AR. You can then tap a location from this point-of-view to bring up the usual information pages provided by Google Search.
Generative & Responsible AI
Last year, Google demonstrated how six images of a shoe can be turned into a complex, 360-degree representation through ‘generative AI’. This branch of AI essentially fills in the gaps to construct a 3D model for users to view.
Generative AI will now be available as an API for developers, as well as slowly being rolled out to other Google products as a way to construct views into buildings, restaurants, and the like through images already taken.
As artificial intelligence gets closer to being more than a disguised machine learning algorithm, Google is beginning to work with creatives and other bodies to establish Responsible AI rules.
Google also shows off artistic accreditation skills
Towards the end, Google also showed off how it is using AI to ensure proper credit is given within bodies of scientific and artistic work. Old documents have been scanned and paired with corresponding papers or research to highlight women’s roles in science and culture.
Google will also implement AI in Woolaroo, its effort to save languages from extinction, which combines with Lens to help you quickly learn the word for various objects.