Before Google’s latest I/O event could even begin, the tech giant was already trying to stake a claim for its new AI. As music artist Dan Deacon played trippy, chiptune-esque electronic vocals for the gathered crowd, the company displayed the work of its latest text-to-video AI model in a psychedelic slideshow featuring people in office settings, a trip through space, and mutating ducks surrounded by mumbling lips.
The spectacle was all to show that Google is bringing out the big guns for 2023, taking its biggest swing yet in the ongoing AI fight. At Google I/O, the company displayed a host of new internal and public-facing AI deployments. CEO Sundar Pichai said the company wanted to make AI helpful “for everyone.”
What this really means is simple: Google plans to stick some form of AI into an ever-increasing number of the user-facing software products and apps on its docket.
Google’s sequel to its language model, called PaLM 2
Google unveiled its latest large language model that’s supposed to kick its AI chatbots and other text-based services into high gear. The company said this new LLM is trained on more than 100 languages and has strong coding, mathematics, and creative writing capabilities.
The company said PaLM 2 comes in four sizes, with the smallest, “Gecko,” compact enough to run on mobile devices. Pichai showed off one example in which a user asked PaLM 2 to add comments to a line of code for another developer while translating them into Korean. PaLM 2 will be available in preview sometime later this year.
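To give a flavor of what that demo described, here is a hypothetical sketch, not Google’s actual demo output: the function and comments below are invented for illustration, with the explanatory comment rendered in Korean for a Korean-speaking colleague.

```python
# Hypothetical illustration of the PaLM 2 demo: the model annotates existing
# code with explanatory comments translated into Korean. (English glosses are
# included for readability; the function itself is an invented example.)

def moving_average(values, window):
    # 슬라이딩 윈도우 구간의 합을 윈도우 크기로 나누어 이동 평균을 계산합니다.
    # (Computes the moving average by dividing each sliding-window sum
    # by the window size.)
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average([1, 2, 3, 4, 5], 3))  # -> [2.0, 3.0, 4.0]
```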
PaLM 2 is a continuation of 2022’s PaLM and the PaLM-E multimodal model released in March. In addition, Google’s limited-access, medically focused LLM called Med-PaLM 2 is supposed to accurately answer medical questions. Google claimed this system achieved 85% accuracy on the US Medical Licensing Examination. Though there are still enormous ethical questions to work out with AI in the medical field, Pichai said Google hopes Med-PaLM will be able to identify medical diagnoses by looking at images like X-rays.
Google’s Bard upgrades plus AI in search
Bard, Google’s “experiment” in conversational AI, has gotten some major upgrades. It’s now running on PaLM 2 and has a dark mode, but beyond that, Sissie Hsiao, the VP for Google Assistant and Bard, said the team has been making “rapid improvements” to Bard’s capabilities. She specifically cited its ability to write and debug code, since it’s now trained on more than 20 programming languages.
Google announced it’s finally removing the waitlist for its Bard AI, letting it out into the open in more than 180 countries. It should also work in both Japanese and Korean, and Hsiao said it should support around 40 languages “soon.”
Hsiao used an example in which Bard writes a Python script that makes a specific move in a game of chess. The AI can also explain parts of its code and suggest additions to the codebase.
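For a sense of what that kind of script might look like, here is a minimal sketch, not Bard’s actual output, assuming the third-party python-chess library:

```python
# A minimal sketch of the kind of chess-move script the Bard demo described.
# Assumes the third-party python-chess package (pip install chess); this is an
# illustration, not Bard's actual response.
import chess

board = chess.Board()     # standard starting position
board.push_san("e4")      # play 1. e4, given in standard algebraic notation

print(board)              # ASCII rendering of the updated board
print("Legal replies:", [board.san(m) for m in board.legal_moves])
```

The code-explanation ability Hsiao described would, in effect, walk through lines like these and describe what each call does.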
Bard can also integrate directly with Gmail, Sheets, and Docs, exporting text straight into those programs. Bard also uses Google Search to include images and descriptions in its responses, though it doesn’t yet seem capable of citing where it’s drawing its sources or images from.
Bard is also gaining connections to third-party apps, including Adobe’s Firefly AI image generator.