OpenAI trained ChatGPT to lie: Elon Musk

Twitter and SpaceX CEO Elon Musk has criticised Microsoft-backed artificial intelligence (AI) startup OpenAI, the creator of ChatGPT, accusing it of training the chatbot to “lie”.

Speaking in an interview with Fox News aired on Monday, he announced that he would launch his own AI platform, which he calls “TruthGPT”, to “challenge the offerings from Microsoft and Google”.

While accusing OpenAI of “training the AI to lie”, he said: “OpenAI has now become a closed source, ‘for-profit’ organisation closely allied with Microsoft”.

During the interview, he also accused Google co-founder Larry Page of not taking AI safety seriously.

“I’m going to start something which I call ‘TruthGPT’, or a maximum truth-seeking AI that tries to understand the nature of the universe,” the tech billionaire said during the interview.

TruthGPT might be the best path to safety, as it would be unlikely to annihilate humans, noted Musk, who is also the CEO of Tesla.

He added: “It’s simply starting late. But I will try to create a third option.”

The Twitter CEO has been recruiting AI researchers from Alphabet’s Google to start a rival project to OpenAI, Reuters reported, citing sources.

In March, he registered a company in Nevada named X.AI Corp. that listed him as the sole director and Jared Birchall, the managing director of Musk’s family office, as secretary.

‘AI risks humanity’

The development follows an open letter from technology executives and AI researchers, including Musk, calling for a six-month pause on building systems more powerful than OpenAI’s GPT-4.

The letter said that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one — not even their creators — can understand, predict, or reliably control.”

“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” said the letter.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”, read the letter, which Musk signed alongside others calling for a halt to the rapid development of AI technology.

Musk also cautioned about the human-like technology, saying that AI is “more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production”, according to the excerpts.

“It has the potential of civilizational destruction,” Musk added.

He went on to say: “For example, a super-intelligent AI can write incredibly well and potentially manipulate public opinion.”

In a tweet over the weekend, he said that he had met with former US President Barack Obama during his presidency and told him that Washington needed to encourage AI regulation.

In 2015, Musk co-founded OpenAI but stepped down from the board in 2018.

Explaining his reasons in a tweet, he said: “Tesla was competing for some of the same people as OpenAI [and] I didn’t agree with some of what OpenAI team wanted to do.”

Zindagi Trust gets featured on Meta website for transforming Pakistan’s education system

KARACHI: In Pakistan, where a staggering 28 million-plus children are out of school and education infrastructure suffers widespread neglect, the non-profit organisation Zindagi Trust is dedicated to revolutionising the education system.

Founded in 2003 by renowned Pakistani singer Shehzad Roy, the trust works to provide quality education to underprivileged children and reform government schools in Pakistan through pilot projects at model schools and advocacy with the government.

For its success in reaching and engaging supporters as an early adopter of WhatsApp Channels, Zindagi Trust has been featured on Meta’s website as a case study for government and charities.

The Trust is notably the first non-profit organisation from Pakistan to receive this recognition.

Capitalising on the popularity of the Meta-owned messaging app WhatsApp, Zindagi Trust set out to reach new audiences, raise awareness, and facilitate fundraising.

It launched a WhatsApp Channel, using it to spotlight initiatives that extend beyond its model schools and impact government schools nationwide.

Zindagi Trust saw a significant surge in followers, a 7% increase in donations, and increased reach across its social ecosystem.

Speaking to Geo.tv, Zindagi Trust’s Senior Marketing & Resource Development Manager Faiq Ahmed said that WhatsApp Channels have significantly advanced the trust’s objectives by providing a direct, interactive platform for communicating with people passionate about education and child protection.

Talking about collaboration with the government sector, Faiq said that the trust’s advocacy initiatives, carried out with government support, have left an indelible mark on Pakistan, catalysing groundbreaking changes nationwide.

“Through collaboration and perseverance, we continue to shape a brighter future for the children of Pakistan, not only in the education sector but also in areas vital to the well-being of our society,” he added. 

Facebook and Instagram full of child predators, lawsuit alleges

Meta’s social media platforms Facebook and Instagram have become fertile ground for child predators and paedophiles, New Mexico’s Attorney General Raul Torrez alleged in a lawsuit.

Torrez’s office conducted its investigation using fake accounts posing as minors and found that these accounts were sent ‘solicitations’ and explicit content.

The lawsuit seeks court-ordered changes to protect minors, asserting that Meta has neglected voluntary actions to address these issues effectively.

In its response, Meta defended its efforts to root out predators. However, New Mexico’s investigation found a higher prevalence of exploitative material on Facebook and Instagram than on adult content platforms.

Attorney General Torrez underscored the platforms’ unsafe nature for children, describing them as hotspots for predators to engage in illicit activities.

While US law shields platforms from content liability, the lawsuit argues that Meta’s algorithms actively promote sexually exploitative material, transforming the platforms into a marketplace for child predators.

The lawsuit accuses Meta of misleading users about platform safety, violating laws prohibiting deceptive practices, and creating an unsafe product.

Moreover, the lawsuit targets Facebook founder Mark Zuckerberg personally, alleging that he publicly championed child safety while steering the company in the opposite direction.

In response, Meta reiterated its commitment to combating child exploitation, emphasizing its use of technology and collaborations with law enforcement to address these concerns.

Meta finally launches end-to-end encryption on Messenger

Meta announced Thursday that it is finally implementing end-to-end encryption for one-on-one conversations and calls on Messenger, delivering on a long-standing commitment.

The company states that when end-to-end encryption is enabled, the only people who can view the contents of a message sent through Messenger are the sender and the recipient.
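
Meta has said its rollout builds on the Signal protocol; the short Python sketch below is only a hypothetical illustration of the general end-to-end idea using the PyNaCl library, not Meta’s actual code. The point it shows is that keys live only on the two devices, so a server relaying the message never sees readable text.

    # Minimal sketch of end-to-end encryption with PyNaCl (illustrative only,
    # not Meta's implementation).
    from nacl.public import PrivateKey, Box

    # Each participant generates a key pair on their own device; only the
    # public halves are ever exchanged.
    sender = PrivateKey.generate()
    recipient = PrivateKey.generate()

    # The sender encrypts with their private key plus the recipient's public key.
    ciphertext = Box(sender, recipient.public_key).encrypt(b"See you at 6?")

    # Only the recipient's private key can recover the message; any server
    # relaying the ciphertext sees only opaque bytes.
    plaintext = Box(recipient, sender.public_key).decrypt(ciphertext)
    assert plaintext == b"See you at 6?"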

Messenger’s encrypted chat function was initially made available as an opt-in feature in 2016. However, following a protracted legal dispute, end-to-end encrypted messages and calls are now the default for two-person conversations.

“This has taken years to deliver because we’ve taken our time to get this right,” Loredana Crisan, vice president of Messenger, said in a statement shared with The Verge.

“Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up.”

A representational picture of Messenger’s new feature. — Meta

Crisan states that encrypted chats will not compromise Messenger features like themes and custom reactions. However, it may “take some time” for all chats to switch to default encryption.

The end-to-end encryption for group chats is still opt-in. Additionally, Instagram messages are still not encrypted by default, but Meta expects this to happen “shortly after” the rollout of default private Messenger chats.

Meta CEO Mark Zuckerberg announced in 2019 that the company planned to move toward encrypted ephemeral messages across its messaging apps, according to The Verge.

“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” he wrote in a Facebook post. “This is the future I hope we will help bring about.”

With encryption enabled by default, most Messenger chats should remain unreadable to Meta, which also prevents the company from providing the data to law enforcement.

Last year, a 17-year-old from Nebraska and her mother faced criminal charges for illegal abortion after police obtained their Messenger chat history.

Critics of encryption argue that it makes it harder to identify bad actors on messaging apps like WhatsApp.
