

TikTok announces feature that will limit teens’ screen time



TikTok has unveiled a set of new features aimed at reducing screen time and improving the well-being of its younger users.

Every TikTok user under the age of 18 will soon be subject to a default daily screen time limit of 60 minutes, The Verge reported. Teenagers who hit the cap must enter a passcode to continue watching. Users can turn the feature off entirely, but if they do and still use TikTok for more than 100 minutes a day, they’ll be prompted to set a new limit.

TikTok says that during the first month of testing, these reminders increased use of its screen time management tools by 234%. Teens will also receive a weekly inbox message summarising their screen time, so younger users can see how much time they spend in the app and must actively choose to exceed the recommended limit.

To determine the length of the time limit, TikTok says it consulted specialists from the Digital Wellbeing Lab at Boston Children’s Hospital and reviewed recent academic research.

“While there’s no collectively-endorsed position on how much screen time is ‘too much’, or even the impact of screen time more broadly, we recognise that teens typically require extra support as they start to explore the online world independently,” said Cormac Keenan, Head of Trust and Safety at TikTok, in a statement.

The 60-minute limit will also apply to children under the age of 13 who use “TikTok for Younger Users.” In that case, once the limit is reached, a parent or guardian must enter an existing passcode to permit 30 minutes of additional watch time on the account.

Four new features are also being added to Family Pairing, TikTok’s parental controls that let a parent or guardian link their TikTok account to a younger user’s account. Custom limits can now vary by day of the week (or be extended more broadly over school holidays).

Family Pairing users also gain access to TikTok’s screen time dashboard, which shows how much time a child has spent in the app, how often it was opened, and a breakdown of daytime versus nighttime use. A new “Mute Notifications” setting will soon let parents schedule when app notifications are muted on their children’s accounts. Push notifications are already muted automatically at 9 PM for users aged 13 to 15, and at 10 PM for those aged 16 to 17.

Finally, TikTok says it is working on new content filters that will let parents screen out videos containing words or hashtags they don’t want their kids to see. The company will develop the feature over the coming weeks with “parents, youth, and civil society organisations.”

Beyond Family Pairing, TikTok announced that some of these settings will “soon” be broadly available to all accounts, letting any user schedule muted notifications and set custom screen time limits for each day of the week. A new sleep reminder feature lets users pick a time to be reminded to close the app and go to bed.

TikTok has not specified a release date for the new features.


Facebook and Instagram full of predators for children, alleges lawsuit




Meta’s social media platforms, Facebook and Instagram, have become fertile ground for child predators and paedophiles, alleges a lawsuit filed by New Mexico’s Attorney General, Raul Torrez.

Torrez’s office conducted its investigation using fake accounts posing as minors, and found that these decoy accounts were sent ‘solicitations’ and explicit content.

The lawsuit seeks court-ordered changes to protect minors, asserting that Meta has neglected voluntary actions to address these issues effectively.

In its response, Meta defended its efforts to root out predators. However, New Mexico’s investigation found a higher prevalence of exploitative material on Facebook and Instagram than on adult content platforms.

Attorney General Torrez underscored the platforms’ unsafe nature for children, describing them as hotspots for predators to engage in illicit activities.

While US law shields platforms from content liability, the lawsuit argues that Meta’s algorithms actively promote sexually exploitative material, transforming the platforms into a marketplace for child predators.

The lawsuit accuses Meta of misleading users about platform safety, violating laws prohibiting deceptive practices, and creating an unsafe product.

Moreover, the lawsuit targets Facebook founder Mark Zuckerberg personally, alleging that he publicly championed child safety while steering the company in the opposite direction.

In response, Meta reiterated its commitment to combating child exploitation, emphasizing its use of technology and collaborations with law enforcement to address these concerns.



Meta finally launches end-to-end encryption on Messenger




Meta announced Thursday that it is finally implementing end-to-end encryption for one-on-one conversations and calls on Messenger, delivering on a long-standing commitment.

The company states that when end-to-end encryption is enabled, the only people who can view the contents of a message sent through Messenger are the sender and the recipient.

Messenger’s encrypted chat function first launched as an opt-in feature in 2016. Now, following a protracted legal dispute, end-to-end encrypted messages and calls are the default for two-person conversations.

“This has taken years to deliver because we’ve taken our time to get this right,” Loredana Crisan, vice president of Messenger, said in a statement shared with The Verge.

“Our engineers, cryptographers, designers, policy experts and product managers have worked tirelessly to rebuild Messenger features from the ground up.”

A representational picture of Messenger’s new feature. — Meta

Crisan states that encrypted chats will not compromise Messenger features like themes and custom reactions. However, it may “take some time” for all chats to switch to default encryption.

The end-to-end encryption for group chats is still opt-in. Additionally, Instagram messages are still not encrypted by default, but Meta expects this to happen “shortly after” the rollout of default private Messenger chats.

Meta CEO Mark Zuckerberg announced in 2019 that the company planned to move toward encrypted ephemeral messages across its messaging apps, according to The Verge.

“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever,” he wrote in a Facebook post. “This is the future I hope we will help bring about.”

With encryption enabled by default, Meta itself cannot read most Messenger chats, which also prevents the company from handing over their contents to law enforcement.

Last year, a 17-year-old from Nebraska and her mother faced criminal charges for illegal abortion after police obtained their Messenger chat history.

Critics of encryption argue that it makes it harder to identify bad actors on messaging apps like WhatsApp.



Elon Musk poised to challenge OpenAI, targets $1bn for his AI startup




Tesla chief Elon Musk’s artificial intelligence venture, xAI, is making waves in the AI world, aiming to raise a substantial $1 billion to compete head-on with OpenAI’s widely-used ChatGPT technology. 

According to recent filings with the US Securities and Exchange Commission, xAI has already raised $134.7 million and is working toward the billion-dollar mark.

The filing signals Musk’s commitment to raising the entire sum, hinting that he may already have secured deals to reach this ambitious target.

Musk recently showcased “Grok,” a chatbot similar to ChatGPT, trained on data from X (previously Twitter), which he acquired for $44 billion last year.

Musk initiated xAI in July, recruiting top researchers from OpenAI, Google DeepMind, Tesla, and the University of Toronto. He expressed that the company’s goal is to “understand the true nature of the universe.”

Since the rise of OpenAI’s ChatGPT a year ago, there has been intense competition among tech giants like Microsoft, Google, Meta, and startups such as Anthropic and Stability AI. Earlier this year, OpenAI reportedly secured commitments of an astounding $13 billion from Microsoft.

Musk’s fundraising efforts coincide with a tumultuous period at OpenAI, as CEO Sam Altman’s return after a brief dismissal has led to delays in the company’s anticipated share sale. Reports suggest the sale, which would value OpenAI between $80 billion and $90 billion, faced hindrances due to internal disruptions.
