In a major development, Nasa has found evidence of the hard landing of the Japanese lunar lander HAKUTO-R Mission 1, which crashed on the moon’s surface a month earlier, CBS News reported on Wednesday.
The Japanese moon lander, designed by the company ispace, was launched on December 11, 2022, and was to land in the moon’s Atlas crater on April 25.
The ispace team said in a news release that the “lander’s descent speed rapidly increased as it approached the moon. It then lost contact with Mission Control.”
“Based on this, it has been determined that there is a high probability that the lander eventually made a hard landing on the Moon’s surface,” the company said.
On April 26, Nasa’s Lunar Reconnaissance Orbiter, a robotic spacecraft whose cameras provide topographic maps of the lunar surface, captured 10 images around the landing site.
Nasa’s Lunar Reconnaissance Orbiter Camera (LROC) can be seen in this picture. — Nasa/File
Those 10 images, together with one taken before the landing attempt, helped the scientists operating the spacecraft begin searching for the Japanese lander in a 28-by-25-mile region.
The team identified what Nasa called “an unusual surface change” near where the lander was supposed to end up.
The photo by Nasa’s orbiter shows “four prominent pieces of debris” and several changes in the lunar surface, including some changes that could indicate a small crater or pieces of the lander.
In a statement, Nasa said, “the photos are just the first step in the process. The site will be further analysed over the coming months.”
According to the US space agency, the orbiter will make further observations of the site in different lighting conditions and from other angles.
Despite the crash, ispace is aiming to launch further moon missions.
Seen to the right of the iconic Vehicle Assembly Building at NASA’s Kennedy Space Center in Florida, a crane positions the Orion crew access arm so it can be attached to the mobile launcher. — Nasa/File
Takeshi Hakamada, founder and CEO of ispace, told CBS News ahead of the mission that the company’s goal is to help develop a lunar economy and create infrastructure that will augment Nasa’s Artemis programme and make it easier to access the surface of the moon.
Under the company’s lunar programme, another lander is set to carry a rover to the moon in 2024, and a third mission is currently in preparation.
Hakamada said that, if possible, the goal is to set up “high-frequency transportation to the lunar surface to support scientific, exploration, and technology demonstration missions.”
“We are planning to offer frequent missions to the surface. After 2025, we plan to offer two to three missions per year,” said the CEO.
YouTube on Thursday announced a new feature on its short-form video platform Shorts, called Dream Screen, which enables users to create unique videos using AI tools.
YouTube CEO Neal Mohan, during the company’s live event “Made on YouTube,” revealed that users can use the AI feature to create an AI-generated video or image in YouTube Shorts by simply typing in the desired background.
Mohan demonstrated how this works by typing in “a panda drinking coffee” to show how the generated background appears on screen.
The company offered further examples as well, such as underwater castles or scenes you might have dreamed about, like dragons or sci-fi moonrises.
Mohan expressed his belief that the technology will enable more people to publish on YouTube without feeling as though they need a whole production studio or a thorough understanding of YouTube analytics, TechCrunch reported.
This screengrab from a demonstration video from YouTube’s blog shows a panda drinking coffee, as demonstrated by the company’s CEO, Neal Mohan. — YouTube/Blog/File
The Shorts platform currently averages over 70 billion daily views, up from 50 billion in January, and the company anticipates that these figures will rise even higher with AI.
“At YouTube, we want to make it easier for everyone to feel like they can create and we believe generative AI will make that possible,” said Mohan.
The feature is currently being rolled out to a small group of artists and is expected to go live more broadly early next year.
According to YouTube, in the future, the tool will allow users to enter ideas for how to alter or remix their content in order to create entirely new and unique videos.
Researchers on Earth are waiting to receive the largest asteroid sample ever returned from space, delivered by Nasa’s OSIRIS-REx probe, as astronomers seek a better understanding of the evolution of the solar system and of the alien rocks that could strike our planet in the future.
The Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) will release a capsule containing an estimated 8.8 ounces of material from the asteroid Bennu, which is expected to touch down in Utah.
The event will be live-streamed starting at 10am ET Sunday.
The capsule is likely to enter Earth’s atmosphere at 10:42am ET, with a speed of about 27,650 miles per hour (44,498 kilometres per hour), landing 13 minutes later.
The probe, launched in 2016, will continue its journey through the solar system to collect further information about another asteroid, Apophis.
The samples may help scientists gain more insight into the origins and evolution of the solar system, as asteroids are the “leftovers” from its formation 4.5 billion years ago.
The analysis will also offer more insight into Bennu, which has a small chance of hitting Earth in the future.
OSIRIS-REx surveyed Bennu, a rubble-pile asteroid shaped like a spinning top that is about one-third of a mile (500 metres) wide and composed of rocks held together by gravity.
During the sample collection, the spacecraft plunged 1.6 feet (0.5 metres) into the asteroid’s loosely packed surface.
After saying goodbye to Bennu in May 2021, Nasa’s probe has been on its way to Earth, circling the sun twice so it can fly by Earth at the right time to release the sample.
The capsule will land within an area of 36 miles by 8.5 miles on the Defense Department’s Utah Test and Training Range.
Sandra Freund, OSIRIS-REx program manager at Lockheed Martin Space, said: “Parachutes will deploy to slow the capsule to a gentle touchdown at 11 miles per hour, and recovery teams will be standing by to retrieve the capsule once it is safe to do so.”
Details about the sample, after it undergoes the necessary processing, will be revealed in a Nasa broadcast from the Johnson Space Center on October 11.
According to scientists, carbonaceous asteroids such as Bennu crashed into Earth early in its formation, delivering ingredients such as water.
“We’re looking for clues as to why Earth is a habitable world — this rare jewel in outer space that has oceans and has a protective atmosphere,” said Dante Lauretta, OSIRIS-REx principal investigator at the University of Arizona in Tucson.
“We think all of those materials were brought by these carbon-rich asteroids very early in our planetary system formation.”
“We believe that we’re bringing back that kind of material, literally maybe representatives of the seeds of life that these asteroids delivered at the beginning of our planet that led to this amazing biosphere, biological evolution and to us being here today,” Lauretta added.
Alphabet Inc’s Google announced on Tuesday that Bard, its generative artificial intelligence, is being equipped with the capability to fact-check responses and analyse users’ personal Google data, CNN reported.
This move is part of Google’s efforts to keep up with the popularity of ChatGPT.
The debut of ChatGPT, a chatbot developed by Microsoft-backed OpenAI, last year triggered a competitive race within the tech industry to provide consumers with access to generative AI technology.
At the time, ChatGPT became the fastest-growing consumer application in history and currently ranks among the top 30 websites globally.
However, Bard hasn’t experienced the same level of success.
In August, it received 183 million visits, which is only 13% of what ChatGPT received, according to Similarweb, a website analytics firm.
To make headway in the rapidly evolving AI landscape, Google is introducing Bard Extensions, allowing users to import their data from other Google products.
For example, users can request Bard to search their files in Google Drive or provide a summary of their Gmail inbox.
For now, Bard users will only be able to pull information in from Google apps, but Google is working with external companies to connect their applications to Bard in the future, Google senior product director Jack Krawczyk said.
Another new feature in Bard seeks to alleviate a nagging problem for generative AI: inaccurate responses known as “hallucinations”.
Bard users will be able to see which parts of Bard’s answers agree with Google search results and which differ from them.
“We are presenting (Bard) in a way that it admits when it’s not confident,” Krawczyk said, explaining that the intention is to build users’ trust in generative AI by holding Bard accountable.
A third new feature allows users to invite others into Bard conversations.