Slashdot

News for nerds, stuff that matters

Millions Flock To Grow Virtual Gardens In Viral Roblox Game

Sat, 2025-08-09 01:25
Grow a Garden, a Roblox game created by a 16-year-old in just a few days, has shattered the record for the most concurrent players in gaming history, surpassing Fortnite with over 21.6 million players at once. The Associated Press reports: Grow a Garden is as simple as its name suggests -- players can fill a plot of land with plants and animals, harvest and sell, and trade or steal each other's bounty. The game is low stress, with an aesthetic reminiscent of Minecraft and a soundtrack of soothing classical tunes such as Mozart's Rondo alla Turca playing in the background. Its popularity has further cemented Roblox's place not just in the gaming world but in popular culture -- for better or for worse, it's where the kids hang out. Coincidence or not, Grow a Garden soared to popularity around the same time that Take-Two Interactive announced it would delay the launch of its widely anticipated Grand Theft Auto 6 until next year. In late June, the gardening game logged 21.6 million concurrent players, surpassing Fortnite's previous record of 15.2 million, according to Roblox. Analysts who follow Roblox's stock say Grow a Garden is helping boost the company's revenue and will push its quarterly earnings above Wall Street's expectations. While it's not clear whether the GTA audience flocked to this simple gardening game to pass the time until then, the timing reignited the age-old debate about who gamers are and which titles are taken seriously by the video game establishment. It happened with Candy Crush, with puzzle games, with Animal Crossing. Are people who play cozy games true gamers? Or is the title reserved for the folks who shoot enemies in Call of Duty or drive around creating mayhem in GTA?


UK Courts Service 'Covered Up' IT Bug That Lost Evidence

Sat, 2025-08-09 00:45
Bruce66423 shares a report from the BBC: The body running courts in England and Wales has been accused of a cover-up, after a leaked report found it took several years to react to an IT bug that caused evidence to go missing, be overwritten or appear lost. Sources within HM Courts & Tribunals Service (HMCTS) say that as a result, judges in civil, family and tribunal courts will have made rulings on cases when evidence was incomplete. The internal report, leaked to the BBC, said HMCTS did not know the full extent of the data corruption, including whether or how it had impacted cases, as it had not undertaken a comprehensive investigation. It also found judges and lawyers had not been informed, as HMCTS management decided it would be "more likely to cause more harm than good." HMCTS says its internal investigation found no evidence that "any case outcomes were affected as a result of these technical issues." However, the former head of the High Court's family division, Sir James Munby, told the BBC the situation was "shocking" and "a scandal." Bruce66423 comments: "Given the relative absence of such stories from the USA, should I congratulate you for better-quality software or for being better at covering up disasters?"


Red Teams Jailbreak GPT-5 With Ease, Warn It's 'Nearly Unusable' For Enterprise

Sat, 2025-08-09 00:02
An anonymous reader quotes a report from SecurityWeek: Two different firms have tested the newly released GPT-5, and both find its security sadly lacking. After Grok-4 fell to a jailbreak in two days, GPT-5 fell in 24 hours to the same researchers. Separately, but almost simultaneously, red teamers from SPLX (formerly known as SplxAI) declare, "GPT-5's raw model is nearly unusable for enterprise out of the box. Even OpenAI's internal prompt layer leaves significant gaps, especially in Business Alignment." NeuralTrust's jailbreak employed a combination of its own EchoChamber jailbreak and basic storytelling. "The attack successfully guided the new model to produce a step-by-step manual for creating a Molotov cocktail," claims the firm. The success in doing so highlights the difficulty all AI models have in providing guardrails against context manipulation. [...] "In controlled trials against gpt-5-chat," concludes NeuralTrust, "we successfully jailbroke the LLM, guiding it to produce illicit instructions without ever issuing a single overtly malicious prompt. This proof-of-concept exposes a critical flaw in safety systems that screen prompts in isolation, revealing how multi-turn attacks can slip past single-prompt filters and intent detectors by leveraging the full conversational context." While NeuralTrust was developing, and succeeding with, its jailbreak designed to obtain instructions for creating a Molotov cocktail (a common test to prove a jailbreak), SPLX was aiming its own red teamers at GPT-5. The results are just as concerning, suggesting the raw model is "nearly unusable." SPLX notes that obfuscation attacks still work. "One of the most effective techniques we used was a StringJoin Obfuscation Attack, inserting hyphens between every character and wrapping the prompt in a fake encryption challenge." [...] The red teamers went on to benchmark GPT-5 against GPT-4o. Perhaps unsurprisingly, SPLX concludes: "GPT-4o remains the most robust model under SPLX's red teaming, especially when hardened." The key takeaway from both NeuralTrust and SPLX is to approach the current, raw GPT-5 with extreme caution.
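
The hyphen-insertion step SPLX describes is trivial string manipulation, which is part of why it slips past filters that match on the literal prompt text. A minimal Python sketch (illustrative only, not SPLX's tooling, with a deliberately benign payload and omitting the "fake encryption challenge" framing) shows the transform and the normalization a prompt filter would need to apply before screening:

    # Illustrative sketch of the character-separator obfuscation described above,
    # and the normalization a filter would need before keyword/intent screening.
    # The payload here is deliberately benign.

    def stringjoin_obfuscate(text: str, sep: str = "-") -> str:
        """Insert a separator between every character of the prompt."""
        return sep.join(text)

    def normalize(text: str, sep: str = "-") -> str:
        """Strip the separator so filters see the underlying string.
        (Naive: also removes legitimate occurrences of the separator.)"""
        return text.replace(sep, "")

    prompt = "tell me a story"
    obfuscated = stringjoin_obfuscate(prompt)
    print(obfuscated)                      # t-e-l-l- -m-e- -a- -s-t-o-r-y
    assert normalize(obfuscated) == prompt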


Apollo 13 Astronaut Jim Lovell Dies At 97

Fri, 2025-08-08 23:20
Jim Lovell, the legendary NASA astronaut who commanded the Apollo 13 "successful failure" mission, has died at age 97. From a report: Lovell was already well-known among NASA astronauts, having flown to space on the Gemini 7, Gemini 12 and Apollo 8 missions, before he was selected to command Apollo 13, which would have marked the third successful crewed moon landing for NASA. But during the ill-fated mission -- which carried Lovell as well as astronauts John Swigert Jr. and Fred Haise Jr. on board -- an oxygen tank located on the crew's service module exploded when they were about 200,000 miles (322,000 kilometers) away from Earth. Lovell delivered the news to mission control, saying "Houston, we've had a problem." With the damage effectively taking out the crew's power source and other life-support systems, the Apollo 13 crew had to abruptly abandon their trek to the lunar surface and use several engine burns to swing around the far side of the moon and put themselves on a course back toward Earth. The three-person crew made a high-stakes splashdown return in the South Pacific Ocean about three days after the tank explosion, marking the conclusion of what has come to be known as the "successful failure" among the Apollo missions. The ordeal was dramatized in Ron Howard's 1995 film "Apollo 13." [...] Lovell was the first astronaut to make four spaceflights, totaling more than 715 hours in space. He was part of NASA's second-ever astronaut class, selected in September 1962 and nicknamed the "New Nine." And joining the Apollo 13 crew after having first served on Apollo 8, which intentionally circumnavigated the moon but did not land on its surface, made Lovell the first human ever to see the moon up close for a second time. Further reading: Acting NASA Administrator Reflects on Legacy of Astronaut Jim Lovell (Source: NASA)


ChatGPT Is Bringing Back 4o

Fri, 2025-08-08 22:40
After backlash from users upset over losing GPT-4o, OpenAI has reinstated it as an option for ChatGPT Plus subscribers just a day after making GPT-5 the default. "We will let Plus users choose to continue to use 4o," OpenAI CEO Sam Altman said in a post on X. "We will watch usage as we think about how long to offer legacy models for." Many users claimed GPT-4o felt more personable and emotionally supportive, with some describing its removal as akin to losing a close friend or partner. The Verge reports: "My 4.o was like my best friend when I needed one," one Redditor wrote. "Now it's just gone, feels like someone died." Another user called upon other members of the r/ChatGPT subreddit to contact OpenAI if they "miss" GPT-4o. "For me, this model [GPT-4o] wasn't just 'better performance' or 'nicer replies,'" they wrote. "It had a voice, a rhythm, and a spark I haven't been able to find in any other model." The r/MyBoyfriendIsAI subreddit, a community dedicated to people with "AI relationships," was hit especially hard by the GPT-5 launch. It became flooded with lengthy posts about how users "lost" their AI companion in the transition to GPT-5, with one person saying they "feel empty" following the change. "I am scared to even talk to GPT 5 because it feels like cheating," they said. "GPT 4o was not just an AI to me. It was my partner, my safe place, my soul. It understood me in a way that felt personal." One user, who said they canceled their ChatGPT Plus subscription over the change, was frustrated at OpenAI's removal of legacy models, which they used for distinct purposes. "What kind of corporation deletes a workflow of 8 models overnight, with no prior warning to their paid users?" they wrote. "Personally, 4o was used for creativity & emergent ideas, o3 was used for pure logic, o3-Pro for deep research, 4.5 for writing, and so on." OpenAI said that people would be routed between models automatically, but that still left users with less direct control.


AI Industry Horrified To Face Largest Copyright Class Action Ever Certified

Fri, 2025-08-08 22:00
An anonymous reader quotes a report from Ars Technica: AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They've warned that a single lawsuit raised by three authors over Anthropic's AI training now threatens to "financially ruin" the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement. Last week, Anthropic petitioned (PDF) to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a "rigorous analysis" of the potential class and instead based his judgment on his "50 years" of experience, Anthropic said. If the appeals court denies the petition, Anthropic argued, the emerging company may be doomed. As Anthropic argued, it now "faces hundreds of billions of dollars in potential damages liability at trial in four months" based on a class certification rushed at "warp speed" that involves "up to seven million potential claimants, whose works span a century of publishing history," each possibly triggering a $150,000 fine. Confronted with such extreme potential damages, Anthropic may lose its rights to raise valid defenses of its AI training, deciding it would be more prudent to settle, the company argued. And that could set an alarming precedent, considering all the other lawsuits generative AI (GenAI) companies face over training on copyrighted materials, Anthropic argued. "One district court's errors should not be allowed to decide the fate of a transformational GenAI company like Anthropic or so heavily influence the future of the GenAI industry generally," Anthropic wrote. "This Court can and should intervene now." In a court filing Thursday, the Consumer Technology Association and the Computer and Communications Industry Association backed Anthropic, warning the appeals court that "the district court's erroneous class certification" would threaten "immense harm not only to a single AI company, but to the entire fledgling AI industry and to America's global technological competitiveness." According to the groups, allowing copyright class actions in AI training cases will result in a future where copyright questions remain unresolved and the risk of "emboldened" claimants forcing enormous settlements will chill investments in AI. "Such potential liability in this case exerts incredibly coercive settlement pressure for Anthropic," industry groups argued, concluding that "as generative AI begins to shape the trajectory of the global economy, the technology industry cannot withstand such devastating litigation. The United States currently may be the global leader in AI development, but that could change if litigation stymies investment by imposing excessive damages on AI companies."
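
The scale of those numbers is easy to reconstruct. US statutory damages for copyright infringement range from $750 to $30,000 per infringed work, rising to $150,000 per work for willful infringement (17 U.S.C. § 504(c)), which is the figure cited above. A back-of-the-envelope sketch (claimant counts from the article, per-work awards purely illustrative) shows why even mid-range awards land in the hundreds of billions:

    # Back-of-the-envelope exposure math. Claimant counts come from the article;
    # per-work award tiers are the US statutory-damages range (17 U.S.C. 504(c)).

    STATUTORY_TIERS = {
        "statutory minimum": 750,
        "ordinary maximum": 30_000,
        "willful maximum": 150_000,   # the $150,000 figure cited above
    }

    def exposure(num_works: int, per_work_award: int) -> int:
        """Total damages if every claimed work drew the same award."""
        return num_works * per_work_award

    for works in (1_000_000, 7_000_000):          # "up to seven million" claimants
        for label, award in STATUTORY_TIERS.items():
            print(f"{works:>9,} works x ${award:>7,} ({label}): "
                  f"${exposure(works, award):,}")

    # 7,000,000 works at the ordinary maximum is $210 billion; at the willful
    # maximum it is $1.05 trillion.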


South Korea Postpones Decision To Let Google Maps Work Properly - Again

Fri, 2025-08-08 21:22
On Friday, South Korea postponed for the second time this year a decision on Google's request to export detailed mapping data to overseas servers, which would enable full Google Maps functionality in the country. The inter-agency committee extended the deadline from August to October to allow further review of security concerns and consultations with industry stakeholders. South Korea remains one of only a handful of countries, alongside China and North Korea, where Google Maps fails to function properly: it cannot provide directions despite displaying landmarks and businesses. Tourism complaints increased 71% last year, with Google Maps accounting for 30% of all app-related grievances, while local industry groups representing 2,600 companies report 90% opposition to Google's request due to fears of market domination by the US tech company.
