Google, a subsidiary of Alphabet Inc., showcased an updated version of its core search product that incorporates more artificial intelligence (AI) into its responses. The move aims to dispel concerns about Google losing ground to Microsoft’s Bing search, powered by OpenAI.
While Google already has the Bard chatbot as a competitor to OpenAI’s ChatGPT, the new search update emphasizes the distinction between using traditional Google search for finding information and using Bard for creative collaboration.
The updated Google search, called the Search Generative Experience, retains the familiar search bar on the homepage. The difference lies in the answers: when the new Google determines that generative AI can effectively respond to a query, it displays the AI-generated response at the top of the search results page, with traditional web links below. For instance, a search for the weather in San Francisco will yield an eight-day forecast, while a query about outfit suggestions for the city will produce a detailed AI-generated response.
Additionally, users will have the option to enter a “conversational mode” that remembers their previous questions, allowing for easier follow-up inquiries. Conversational mode is not designed to mimic a chatbot with a personality; instead, it aims to refine search results. Unlike Bard and ChatGPT, its responses will not use personal pronouns.
Although the Search Generative Experience is not yet available to users, it will be accessible to U.S. consumers in the coming weeks through a waitlist. During this trial phase, Google will assess the quality, speed, and cost of search results. In contrast, Bard is now accessible in 180 countries and territories without a waitlist, and Google plans to expand its language support to encompass 40 languages.
Through these updates, Google seeks to leverage AI to enhance its search capabilities while maintaining a distinction between traditional information-seeking searches and the creative collaboration facilitated by Bard. The company aims to provide users with more comprehensive and contextually relevant search results, bolstering its position in the search engine market.
YouTube on Thursday announced a new feature on its short-form video platform Shorts, called Dream Screen, which enables users to create unique videos using AI tools.
YouTube CEO Neal Mohan, during the company’s live event “Made on YouTube,” revealed that users can create an AI-generated video or image background in YouTube Shorts simply by typing in a description of the desired background.
Mohan demonstrated the feature by typing in “a panda drinking coffee” and showing how the generated background appears on screen.
The company offered further examples as well, such as underwater castles and other dreamlike scenes, including dragons and sci-fi moonrises.
Mohan expressed his belief that the technology will enable more people to publish on YouTube without feeling as though they need a whole production studio or a thorough understanding of YouTube analytics, TechCrunch reported.
A screengrab from a demonstration video on YouTube’s blog shows the “panda drinking coffee” background presented by the company’s CEO, Neal Mohan. — YouTube/Blog/File
The Shorts platform currently averages over 70 billion daily views, up from 50 billion in January, and the company anticipates that these figures will rise even higher with AI.
“At YouTube, we want to make it easier for everyone to feel like they can create and we believe generative AI will make that possible,” said Mohan.
The feature is currently being made available to a small group of artists and is expected to go live early next year.
According to YouTube, in the future, the tool will allow users to enter ideas for how to alter or remix their content in order to create entirely new and unique videos.
Researchers on Earth are waiting to receive the largest asteroid sample ever returned from space, delivered by Nasa’s OSIRIS-REx probe, in hopes of better understanding the evolution of the solar system and the asteroids that could strike our planet in the future.
The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) will release a sample of the asteroid Bennu weighing an estimated 8.8 ounces, which is expected to touch down in Utah.
The event will be live-streamed starting at 10am ET Sunday.
The capsule is expected to enter Earth’s atmosphere at 10:42am ET at a speed of about 27,650 miles per hour (44,498 kilometres per hour) and land 13 minutes later.
The probe, launched in 2016, will continue its journey through the solar system to collect further information about the asteroid Apophis.
The samples may help scientists gain more insight into the origins and evolution of the solar system, as asteroids are “leftovers” from those early days 4.5 billion years ago.
The analysis will also shed more light on Bennu itself, which has a small chance of striking Earth in the future.
Bennu, a rubble-pile asteroid shaped like a spinning top, is about one-third of a mile (500 metres) wide and composed of rocks held together by gravity.
During sample collection, OSIRIS-REx plunged 1.6 feet (0.5 metres) into the asteroid’s loosely packed surface.
After saying goodbye to Bennu in May 2021, Nasa’s probe has been on its way to Earth, circling the sun twice so it can fly by Earth at the right time to release the sample.
The capsule will land within an area of 36 miles by 8.5 miles on the Defense Department’s Utah Test and Training Range.
Sandra Freund, OSIRIS-REx program manager at Lockheed Martin Space, said: “Parachutes will deploy to slow the capsule to a gentle touchdown at 11 miles per hour, and recovery teams will be standing by to retrieve the capsule once it is safe to do so.”
Details about the sample, after it has undergone initial processing, will be revealed through a Nasa broadcast from the Johnson Space Center on October 11.
According to scientists, carbonaceous asteroids such as Bennu crashed into Earth early in the planet’s formation, delivering elements such as water.
“We’re looking for clues as to why Earth is a habitable world — this rare jewel in outer space that has oceans and has a protective atmosphere,” said Dante Lauretta, OSIRIS-REx principal investigator at the University of Arizona in Tucson.
“We think all of those materials were brought by these carbon-rich asteroids very early in our planetary system formation.”
“We believe that we’re bringing back that kind of material, literally maybe representatives of the seeds of life that these asteroids delivered at the beginning of our planet that led to this amazing biosphere, biological evolution and to us being here today,” Lauretta added.
Alphabet Inc’s Google announced on Tuesday that Bard, its generative artificial intelligence, is being equipped with the capability to fact-check responses and analyse users’ personal Google data, CNN reported.
This move is part of Google’s efforts to keep up with the popularity of ChatGPT.
The debut of ChatGPT, a chatbot developed by Microsoft-backed OpenAI, last year triggered a competitive race within the tech industry to provide consumers with access to generative AI technology.
At launch, ChatGPT became the fastest-growing consumer application in history, and it currently ranks among the top 30 websites globally.
However, Bard hasn’t experienced the same level of success.
In August, it received 183 million visits, which is only 13% of what ChatGPT received, according to Similarweb, a website analytics firm.
To make headway in the rapidly evolving AI landscape, Google is introducing Bard Extensions, allowing users to import their data from other Google products.
For example, users can request Bard to search their files in Google Drive or provide a summary of their Gmail inbox.
For now, Bard users will only be able to pull information in from Google apps, but Google is working with external companies to connect their applications to Bard in the future, Google senior product director Jack Krawczyk said.
Another new feature in Bard seeks to alleviate a nagging problem for generative AI: inaccurate responses known as “hallucinations”.
Bard users will be able to see which parts of Bard’s answers agree with Google search results and which differ from them.
“We are presenting (Bard) in a way that it admits when it’s not confident,” Krawczyk said, explaining that the intention is to build users’ trust in generative AI by holding Bard accountable.
A third new feature allows users to invite others into Bard conversations.