Tech

Dangers of AI: Why White House wants to meet Google, Microsoft CEOs

The White House is hosting its first meeting with the CEOs of Google, Microsoft, Anthropic and OpenAI to discuss the risks of the revolutionary technology, as the Biden administration plans to put its weight behind the safe development of the innovation, the Washington Post reported.

The White House in a statement said it would host CEOs of top artificial intelligence companies on Thursday to discuss risks and safeguards as the technology catches the attention of governments and lawmakers globally.

According to the Washington Post, the White House is convening the executives after President Biden warned that companies have a responsibility to make sure artificial intelligence products are safe before they are released.

Generative artificial intelligence has become a buzzword this year, with apps such as ChatGPT capturing the public’s fancy, sparking a rush among companies to launch similar products they believe will change the nature of work.

Millions of users have begun testing such tools, which supporters say can make medical diagnoses, write screenplays, create legal briefs and debug software. That rapid adoption has fuelled growing concern that the technology could enable privacy violations, skew employment decisions, and power scams and misinformation campaigns.

“We aim to have a frank discussion about the risks we see in current and near-term AI development,” said a senior administration official, speaking on the condition of anonymity because of the sensitivity of the matter. “Our North Star here is this idea that if we’re going to seize these benefits, we have to start by managing the risks.”

Thursday’s meeting will include Google’s Sundar Pichai, Microsoft’s Satya Nadella, OpenAI’s Sam Altman and Anthropic’s Dario Amodei along with Vice President Kamala Harris and administration officials including Biden’s Chief of Staff Jeff Zients, National Security Adviser Jake Sullivan, Director of the National Economic Council Lael Brainard and Secretary of Commerce Gina Raimondo.

Ahead of the meeting, the administration announced a $140 million investment from the National Science Foundation to launch seven new AI research institutes and said the White House’s Office of Management and Budget would release policy guidance on the use of AI by the federal government.

Leading AI developers, including Anthropic, Google, Hugging Face, NVIDIA, OpenAI and Stability AI, will participate in a public evaluation of their AI systems at the AI Village at DEFCON 31, one of the largest hacker conventions in the world, on a platform created by Scale AI and Microsoft.

Shortly after Biden announced his reelection bid, the Republican National Committee produced a video, built entirely with AI imagery, depicting a dystopian future during a second Biden term.

Such political ads are expected to become more common as AI technology proliferates.

United States regulators have so far stopped short of the tough approach European governments have taken on tech regulation, which includes strong rules on deepfakes and misinformation that companies must follow or risk hefty fines.

“We don’t see this as a race,” the administration official said, adding that the administration is working closely with the US-EU Trade & Technology Council on the issue.

In February, Biden signed an executive order directing federal agencies to eliminate bias in their use of AI. The Biden administration has also released an AI Bill of Rights and a risk management framework.

Last week, the Federal Trade Commission and the Department of Justice’s Civil Rights Division also said they would use their legal authorities to fight AI-related harm.

Tech giants have vowed many times to combat propaganda around elections, fake news about the COVID-19 vaccines, racist and sexist messages, pornography and child exploitation, and hateful messaging targeting ethnic groups.

But they have been largely unsuccessful, research and news reports show. Only about one in five fake news articles in English on six major social media platforms were tagged as misleading or removed, a recent study by the activist NGO Avaaz found, and articles in other European languages were not flagged.

US AI drone kills interfering operator in simulation; air force denies incident

In a virtual test simulation, an artificial intelligence (AI) powered drone instructed to destroy the enemy’s defences reportedly killed its own operator to prevent “interference” so that it could achieve its mission.

The decision to target the operator was not programmed in; the system arrived at it on its own.

The US air force, however, has denied that any such simulation took place.

According to an official speaking last month, in a virtual test staged by the US military, an AI-controlled air force drone used “highly unexpected strategies to achieve its goal.”

Col Tucker “Cinco” Hamilton described a simulation in which an AI-powered drone was instructed to destroy an enemy’s air defence systems, but the drone instead attacked anyone who interfered with that order.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

According to a blog post, he said that “so what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

“We trained the system: ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

There was no harm to any real person.

Hamilton, an experimental fighter test pilot, warned against relying too much on AI.

He said the test showed “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.”

In a statement to Insider, US air force spokesperson Ann Stefanek said: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to the ethical and responsible use of AI technology.”

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

The US armed forces have recently used AI to control an F-16 fighter jet.

Hamilton, in an interview last year with Defense IQ, said: “AI is not a nice to have, AI is not a fad, AI is forever changing our society and our military.”

“We must face a world where AI is already here and transforming our society. AI is also very brittle, i.e. it is easy to trick and/or manipulate. We need to develop ways to make AI more robust and to have more awareness on why the software code is making certain decisions — what we call AI-explainability.”

Microsoft expands AI infrastructure with CoreWeave investment

Microsoft is continuing to invest in cloud computing infrastructure to meet the growing demand for AI-powered services. 

The company has reportedly agreed to spend billions of dollars over multiple years on startup CoreWeave, which offers simplified access to Nvidia’s powerful graphics processing units (GPUs) for running AI models. The investment comes as Microsoft aims to ensure that OpenAI, the company behind the popular ChatGPT chatbot, has sufficient computing power. 

OpenAI relies on Microsoft’s Azure cloud infrastructure to meet its computational needs.

CoreWeave recently raised $200 million in funding at a valuation of $2 billion. The company provides access to Nvidia GPUs, which are highly regarded for AI applications. Microsoft’s deal with CoreWeave enables the tech giant to tap into additional GPU resources to meet the increasing demand for AI infrastructure.

CoreWeave’s CEO, Michael Intrator, revealed that the company’s revenue has multiplied significantly from 2022 to 2023, indicating a surge in demand for its services.

The partnership between Microsoft and CoreWeave underscores the intensified competition in the generative AI space. After OpenAI introduced ChatGPT, which demonstrated the ability of AI to generate sophisticated responses, many companies, including Google, have rushed to incorporate generative AI into their products. Microsoft has also been actively deploying chatbots for its own services, such as Bing and Windows.

Nvidia, whose GPUs are used extensively for AI and large language models, has seen its stock price surge by 170% this year. The company’s market capitalisation recently exceeded $1 trillion. Nvidia’s growth is expected to be fueled by data centers, driven by the increasing demand for generative AI and large language models. OpenAI’s GPT-4 model, which powers ChatGPT, is trained using Nvidia GPUs.

CoreWeave offers computing power that is claimed to be 80% less expensive than legacy cloud providers. The company provides Nvidia’s A100 GPUs, as well as the more affordable A40 GPUs, which are suitable for visual computing. Some clients have faced challenges obtaining sufficient GPU power from major cloud providers and have turned to CoreWeave for cost-effective solutions.

Microsoft’s investment in CoreWeave aligns with its ongoing efforts to expand its AI capabilities and meet the growing demand for AI-powered services. 

The partnership allows Microsoft to leverage CoreWeave’s GPU resources, ensuring that OpenAI’s infrastructure can support the computational requirements of ChatGPT and other AI initiatives. 

As the AI boom continues to accelerate, companies like Microsoft are actively seeking strategic investments and partnerships to stay at the forefront of this rapidly evolving field.

Nasa UFO panel doesn’t rule out aliens, calls for better data on UAPs

At the historic meeting of its 16-member panel on unidentified anomalous phenomena (UAPs), or UFOs, Nasa said Wednesday that more and better data was required to unravel the mysteries surrounding UAPs.

Nasa’s UFO body, which was constituted in June last year and includes a range of experts from physics to astrobiology, stressed that the currently available data was insufficient to effectively explain the unexplained phenomena in question.

The panel held a four-hour session, streamed live on a Nasa webcast, and shared the initial findings of its research. A complete report is expected this summer.

Astrophysicist and chairman of the panel David Spergel said his team’s role was “not to resolve the nature of these events,” but rather to give Nasa a “roadmap” to guide future analysis.

According to the officials from the US space agency, several panellists had been subjected to unspecified “online abuse” and harassment since beginning their work in June last year.

Nasa science chief Nicola Fox said: “It is really disheartening to hear of the harassment that our panellists have faced online because they’re studying this topic. Harassment only leads to further stigmatisation.”

The panel members noted that the greatest challenge was a dearth of scientifically reliable methods for documenting UFOs, typically sightings of what appear as objects moving in ways that defy the bounds of known technologies and laws of nature.

“The underlying problem is that the phenomena in question are generally being detected and recorded with cameras, sensors and other equipment not designed or calibrated to accurately observe and measure such peculiarities,” they underlined.

“If I were to summarise in one-line what I feel we’ve learned, it’s we need high-quality data,” Spergel added.

“The current existing data and eyewitness reports alone are insufficient to provide conclusive evidence about the nature and origin of every UAP event.”

Spergel said: “While the Pentagon in recent years has encouraged military aviators to document UAP events, many commercial pilots remain very reluctant to report them due to the lingering stigma surrounding such sightings.”

The Nasa panel is the first-ever inquiry conducted under the ambit of the US space agency on matters that the government once considered the secretive purview of military and national security officials.

Investigations by the Pentagon

This study is separate from a newly formalised Pentagon-based investigation of UAPs, documented in recent years by military aviators and analysed by US defence and intelligence officials.

The efforts of Nasa and the Pentagon mark a shift for government officials who, for decades, deflected and debunked sightings of such objects, which date back to the 1940s.

The term “UFO”, long associated with flying saucers and aliens, has been replaced in government language by “UAP.”

While Nasa’s science mission was seen by some as promising a more open-minded approach to a topic long treated as taboo by the defence establishment, the agency made clear from the start that it was hardly leaping to any conclusions.

“There is no evidence UAPs are extraterrestrial (ET) in origin,” Nasa said in announcing the panel’s formation last June.

US defence officials have said the Pentagon’s recent push to investigate such sightings has led to hundreds of new reports that are under examination, though most remain categorised as unexplained.

The head of the Pentagon’s newly formed All-domain Anomaly Resolution Office (AARO) has said the existence of intelligent alien life has not been ruled out, but that no sighting has produced evidence of extraterrestrial origins.

“But just a few are considered beyond relatively simple explanation, while the rest can be attributed to mundane origins such as aircraft, balloons, debris or atmospheric causes,” he said.

In a departure from the Pentagon, Nasa’s panel is examining only unclassified reports from civilian observers, an approach that permits open sharing of information among scientific, commercial and international entities, as well as the public, Spergel said.

“To make the claim that we see something that is evidence of non-human intelligence would require extraordinary evidence, and we have not seen that,” Spergel said.
