An Entire Wikipedia That's 100% AI Hallucinations
"Every link leads to an entry that does not exist yet," explains the GitHub page for a Wikipedia-like site called Halupedia. "Until you click it, at which point an LLM pretends it has always existed and writes it for you, in the deadpan register of a 19th-century scholarly press..."
Every article is invented on demand. The footnotes are also lies... The hardest problem with an infinite, on-demand encyclopedia is internal contradiction... When the LLM writes an article, it is required to add a context="..." attribute on every <a> it inserts, summarising the future article it is linking to (e.g. context="19th-century clerk who formalized footnote drift, Pellbrick's mentor")... When that target article is later requested for the first time, the worker loads the accumulated hints and injects them into the system prompt as "PRIOR REFERENCES — these are CANON". The LLM is instructed that the encyclopedia is hallucinated and absurd, but it must not contradict itself.
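The consistency mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration: the regex, storage layout, and prompt wording are invented here, not taken from Halupedia's actual code.

```python
import re
from collections import defaultdict

# Match links of the form <a href="/wiki/slug" context="...">
# (link shape assumed for illustration).
CONTEXT_LINK = re.compile(r'<a\s+href="/wiki/([^"]+)"\s+context="([^"]*)"')

hints = defaultdict(list)  # slug -> context hints accumulated from earlier articles

def harvest_hints(article_html):
    """Record the context hint attached to every link the LLM emitted."""
    for slug, context in CONTEXT_LINK.findall(article_html):
        hints[slug].append(context)

def build_system_prompt(slug):
    """On an article's first request, prior hints are injected as canon."""
    prompt = "You write absurd encyclopedia articles that never contradict canon.\n"
    if hints[slug]:
        prompt += "PRIOR REFERENCES — these are CANON:\n"
        prompt += "\n".join(f"- {h}" for h in hints[slug])
    return prompt
```

The key property is that hints accumulate across many generated articles before the target article exists, so by the time it is written, every earlier mention constrains it.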
Fast Company reports that Halupedia was created by software developer Bartłomiej Strama, who confessed in a Reddit comment that the site came about after a drunk night with a friend. In the week since launch, he says, Halupedia has amassed more than 150,000 users.
Beyond indulging in silly alternate histories, what's the point of using Halupedia? Strama hinted at one larger purpose in a reply to a donor on his Buy Me a Coffee page: "Your contribution towards polluting LLM training data will surely benefit society!" he wrote.
The site is licensed as free software under the GPL-3.0 license.
Thanks to long-time Slashdot reader schwit1 for sharing the news.
Read more of this story at Slashdot.
Categories: Linux fréttir
How I Added an LLM-Based Grammar Checking + TeX Math Import To LibreOffice
Former Microsoft programmer Keith Curtis "wrote and self-published After the Software Wars to explain the caliber of free and open source software," according to his entry on Wikipedia, "and why he believes Linux is technically superior to any proprietary OS."
He's also KeithCu (long-time Slashdot reader #925,649), and has written a blog post titled "How I added an LLM-based grammar checking + TeX math import to LibreOffice":
At Microsoft, I spent five years working on the text components RichEdit and Quill, and came to understand the "physics" of word processing: the file formats, data structures, and algorithms that provided fast access to text and properties, independent of the length of the file. Selecting one million characters to make them bold took about the same time as changing one character, because of the clever data structures (piece tables) and algorithms in these engines...
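The piece-table trick he alludes to can be shown with a toy: formatting a range only splits a handful of piece descriptors, regardless of how many characters the range covers. This is an illustrative sketch, not RichEdit's actual implementation.

```python
class PieceTable:
    """Toy piece table: text is never copied, only descriptors are split."""

    def __init__(self, text):
        self.buffer = text
        # Each piece: (start, length, bold) referencing the shared buffer.
        self.pieces = [(0, len(text), False)]

    def set_bold(self, lo, hi):
        """Mark [lo, hi) bold by splitting at most two pieces."""
        out, pos = [], 0
        for start, length, bold in self.pieces:
            end = pos + length
            a, b = max(lo, pos), min(hi, end)
            if a >= b:                      # piece entirely outside the range
                out.append((start, length, bold))
            else:                           # split into up to three parts
                if pos < a:
                    out.append((start, a - pos, bold))
                out.append((start + (a - pos), b - a, True))
                if b < end:
                    out.append((start + (b - pos), end - b, bold))
            pos = end
        self.pieces = out

    def text(self):
        return "".join(self.buffer[s:s + n] for s, n, _ in self.pieces)
```

Bolding a million-character selection in this scheme touches a number of descriptors proportional to the piece count, not the character count, which is why it costs about the same as bolding one character.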
When I decided to add a real-time AI grammar checker to [LibreOffice plugin] WriterAgent, I knew what I was getting into, but I underestimated the trickery of LibreOffice's UNO.
His site shares the surprises he encountered, one by one. (Starting with "the office suite throws a bunch of initialization variables at your constructor. If your Python __init__ method doesn't handle them, the code fails to map the call, the stack misaligns, and the program dies.") There are sentence-casing issues, duplicate words, and foreign-language syntax — all culminating in new features for "a LibreOffice extension (Python + UNO) that adds generative AI editing to Writer, Calc, and Draw..."
"If you want to try it out, the repo is here... Let's make LibreOffice and the free desktop AI-native!"
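The constructor gotcha quoted above boils down to a variadic signature. Here is a minimal sketch of the shape a Python-UNO component needs; real components also subclass unohelper.Base and implement UNO interfaces, and the class and factory names here are hypothetical.

```python
class WriterAgentJob:
    def __init__(self, ctx, *args):
        # LibreOffice hands the component context plus extra
        # initialization values; accepting *args keeps the UNO
        # bridge from failing to map the call.
        self.ctx = ctx
        self.args = args

def create_instance(ctx, *args):
    # Typical factory entry-point shape for Python-UNO components
    # (illustrative only).
    return WriterAgentJob(ctx, *args)
```

A component whose `__init__` takes only `self, ctx` dies exactly as the post describes when the office passes additional arguments.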
The Apple-OpenAI Alliance is Fraying, Setting Up a Possible Legal Fight
Bloomberg reports that Apple's two-year-old partnership with OpenAI "has become strained, according to people familiar with the matter."
Bloomberg describes OpenAI as "failing to see the expected benefits from the deal and now preparing possible legal action."
OpenAI lawyers are actively working with an outside legal firm on a range of options that could be formally executed in the near future, said the people, who asked not to be identified because the deliberations are private. That could include sending the iPhone maker a notice alleging breach of contract without necessarily filing a full lawsuit at the outset, according to the people... OpenAI believed that the companies' partnership, which wove ChatGPT into Apple software, would coax more users into subscribing to the chatbot. It also expected deeper integration across more Apple apps and prime placement within the Siri assistant. Instead, Apple's use of OpenAI technology across its operating systems remains limited, and features can be hard to find...
Apple has had its own concerns about OpenAI, including whether the company does enough to protect user privacy. And a recent push [by OpenAI] to make devices — an effort overseen by former Apple executives — has rankled the iPhone maker.
Any legal move by OpenAI likely wouldn't come until after the conclusion of the Musk trial, according to the people. No final decisions have been made, and OpenAI still hopes to resolve its issues with Apple outside of court.
The article points out that OpenAI "initially believed the deal could generate billions of dollars per year in subscriptions — something that hasn't come close to happening." An OpenAI executive argues to Bloomberg that from a product perspective Apple hasn't done everything they could, "and worse, they haven't even made an honest effort."
California Law Limits 'Recycling' Logo in New Attack on Plastic Waste
"Most of the plastic waste in California is about to lose the recycling symbol," writes the Washington Post's "climate coach."
The "chasing arrows" symbol, created in 1970 by a college student inspired by the burgeoning environmental movement, has been stamped indiscriminately on plastic bottles, clamshell takeout containers, chip bags and more for decades. The majority of the items emblazoned with the mark have been virtually impossible to recycle for most people. California lawmakers say they want to end the charade: Under what's known as the Truth in Recycling law, plastics cannot use the symbol if they aren't collected by curbside programs serving 60% of Californians and sorted by facilities serving 60% of the state's recycling programs (with some additional requirements). If the law goes into effect as scheduled on October 4, more than half of the types of plastic packaging and products sold in the state can no longer carry the chasing arrows logo. That will affect plastic films, foam, PVC and mixed plastics...
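Stripped of its additional requirements, the law's headline test reads like a simple predicate. This is a simplification for illustration; the statute contains more conditions than these two thresholds.

```python
def may_carry_chasing_arrows(pct_residents_with_curbside_collection,
                             pct_programs_with_sorting):
    # Both coverage figures must reach 60% for a plastic type to
    # keep the recycling symbol under the Truth in Recycling law
    # (additional statutory requirements omitted here).
    return (pct_residents_with_curbside_collection >= 60
            and pct_programs_with_sorting >= 60)
```

Plastic films, foam, PVC, and mixed plastics fail one or both thresholds, which is why they lose the logo.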
Food and packaging groups have sued the state of California, calling the law a form of censorship whose vague restrictions violate the First Amendment and due process rights... Advocates of the law counter that corporations deliberately misled the public by turning the recycling symbol into a marketing device that masks the fact that only a small fraction of plastic packaging is ultimately recycled... The mark was originally intended to inform waste processors what polymers a plastic item was made from. But the public reasonably assumed anything stamped with the symbol was recyclable. Millions of tons of worthless plastic trash have since poured into recycling facilities unable to process it...
States are now taking action. Seven have passed laws shifting the cost of recycling onto packaging makers. Oregon and Washington have lifted requirements that plastic containers carry the chasing arrows symbol.
The article notes that Norway already recovers 97% of beverage bottles, while Slovakia recycles 60% of plastic packaging. "But the U.S. only recovers about a third of its PET and HDPE bottles, and just 13% of plastic packaging, according to U.S. Plastics Pact, an industry-led forum.
"It won't be easy for the U.S. to reach higher levels of recycling: The necessary infrastructure and incentives are chronically underfunded, no federal mandate exists for minimum recycled content that would create demand, and a mix of mostly unrecyclable hydrocarbons still dominates the waste stream."
Anthropic's Mythos Helped Build a Working macOS Exploit in Five Days
"The vulnerability is simple in practice," writes Tom's Hardware: "run a command as a standard user and gain root (administrator) access to the machine."
And it was Mythos Preview that helped security researchers at a Palo Alto, California-based firm bypass a five-year Apple security effort in just five days. The blog 9to5Mac reports:
Last year, Apple introduced Memory Integrity Enforcement (MIE), a hardware-assisted memory safety system designed to make memory corruption exploits much harder to execute... [The researchers note it's built into all models of Apple's iPhone 17 and iPhone Air, and some MacBooks.] They explain they have a 55-page technical report on the hack, but they won't release it until Apple ships a fix for the exploit. But they do note in broad terms that Anthropic's Mythos Preview model helped them identify the bugs and assisted them throughout the entire collaborative exploit development process.
"Mythos Preview is powerful: once it has learned how to attack a class of problems, it generalizes to nearly any problem in that class. Mythos discovered the bugs quickly because they belong to known bug classes. But MIE is a new best-in-class mitigation, so autonomously bypassing it can be tricky. This is where human expertise comes in. Part of our motivation was to test what's possible when the best models are paired with experts. Landing a kernel memory corruption exploit against the best protections in a week is noteworthy, and says something strong about this pairing...."
[I]n a time when even small teams, with the help of AI, can make discoveries such as this one, "we're about to learn how the best mitigation technology on Earth holds up during the first AI bugmageddon."
The Search for the Next 'James Bond' Actor Has Begun
Variety reports:
Amazon MGM Studios started auditioning actors for the part of 007 in the past few weeks, Variety has learned... The next James Bond film will be directed by Denis Villeneuve, the filmmaker behind the "Dune" franchise, "Arrival" and "Sicario." Amy Pascal of the "Spider-Man" films and David Heyman of the "Harry Potter" series will produce the picture, which will feature a script from "Peaky Blinders" creator Steven Knight. Tanya Lapointe ("Dune") is executive producing the film.
The BBC notes it's been five full years since the release of the last Bond film No Time To Die, and 15 months "since Amazon MGM Studios took control of the Bond franchise." But they also offer this list of "the current bookmakers' favourites" for who will become the seventh actor to play the gadget-loving super spy in the franchise's 64-year history:
Callum Turner — the 36-year-old actor is the current bookies' frontrunner. He has been in the Fantastic Beasts franchise, was nominated for a Bafta for TV drama The Capture, and starred in Apple TV's Masters of the Air...
Jacob Elordi — the Australian actor, 28, made his name in TV's Euphoria and cult hit film Saltburn, and was nominated for an Oscar this year for playing the monster in Frankenstein. The Rest Is Entertainment host Marina Hyde recently said she'd heard from a number of well-placed sources that he's now "in pole position" to be Bond.
Harris Dickinson — the 29-year-old is playing John Lennon in the forthcoming major Beatles biopics, and has previously appeared in Maleficent, The King's Man, Where the Crawdads Sing and Babygirl, and received a Bafta TV Award nomination for A Murder at the End of the World.
Henry Cavill — the Superman, The Witcher and Mission: Impossible actor is a fan favourite and was widely regarded to have been the runner-up when Craig landed the part. But at 43, is he now too old to start a lengthy stint as 007?
Aaron Taylor-Johnson — the Bafta-nominated 35-year-old, known for films like Kick-Ass, Kraven the Hunter and 28 Years Later, is a perennial contender, and would fit the bill.
Theo James — the suitably suave star, 41, made his name in the Divergent films and has since built his reputation in The Time Traveler's Wife, The White Lotus and The Gentlemen.
...Or producers could well go for one of the many other names who have been touted for the role, or an unexpected choice.
AI-generated code is 'pain waiting to happen'
INTERVIEW Enthusiasm among managers to adopt AI tools has outpaced developers' ability to learn those tools and use them effectively.

Moshe Sambol, VP of customer solutions at software observability outfit Lightrun, told The Register in an interview that he speaks with a lot of companies. Some of the developers in those organizations, he said, are very comfortable with AI tools. "But the reality is that a lot of developers are much earlier in the curve," he said. "The expectations of businesses are getting ahead of where the developers are in terms of their mental model and in terms of the training that they're providing, the enablement they're providing to make their teams comfortable with the tools, and the rate at which these tools are evolving."

Sambol said the degree of AI tool adoption varies. "I absolutely have customers who've told their developers, 'You don't write code anymore. You review code. No one should write a line of code unless for some reason you failed after three attempts getting GenAI to do it,'" he said. "I have customers like that. I don't know if I should name them, but absolutely." On the other side of the spectrum, he said, there are organizations like banks that are just starting to roll AI tools out due to compliance obligations and traditional industry caution.

"It's an exciting time to be adopting these tools and learning these tools, but it puts a lot of pressure on the developer," he said. "It puts this expectation of being more productive."

Not everyone manages that, and Sambol said he has a lot of sympathy for developers who have been directed to use AI tools without training and organizational guidance. Generative AI models will produce a lot of code quickly, he said, and because the code seems correct initially, it often gets pushed forward. "If it's not creating bugs en masse today, it's just pain waiting to happen," he said. "The number one question I think we have to be asking developers is, 'Can you explain that code? Have you validated that the code actually fits in the context of the broader system?'"

Sambol said the answer isn't necessarily yes or no, because developers have different levels of experience and often work on large projects where they focus only on a specific part of the code base. It's common in enterprises, he said, that no one person will understand the entire system end-to-end, which is why problem resolution often requires a group of people. The issue he sees is that generative AI systems don't help bridge that knowledge gap: they don't provide the context to understand all the components involved.

Sambol went on to describe an incident in which a developer was using an AI assistant to build an Ansible automated workflow. "The generative AI was creating the Ansible template for him, which seems like a perfect match – it's drudge work," he explained. "And it's much better at getting the syntax exactly right."

It worked. And then it stopped working. "The system that he was deploying to, all of a sudden, he could not get the component up," Sambol said. "It just wouldn't start. A process that had been going smoothly for a couple of hours in the morning, now all of a sudden, his service is down and it will not run.

"And he's pulling his hair out trying to unstitch the day's work so far to figure out what went wrong, why is the service not working," he said, adding that the AI agent proved unhelpful by going off in the wrong direction, reinstalling the operating system, and undertaking other ineffective steps to effect repairs.

What happened, Sambol explained, is that earlier in the day, the developer had installed the component in a certain way – it was running in a container with a systemd service. As such, it needed access to the ports on the device, which precluded running the component in Docker. "So the AI model re-wrapped it, repackaged it, and deployed it in a different way, but kept the original one running," he explained. "So it was simply a matter of the fact that the one he had initially deployed was still running and it was blocking the port and the second one couldn't run.

"It's a fairly simple, easy-to-understand problem once you see it, but he lost the entire afternoon going down all kinds of dead ends with the AI looking at this, looking at that, because the AI model didn't remember that it had guided him to deploy the system a certain way earlier in the day."

Sambol said various studies show a significant percentage of AI-generated code contains errors and creates technical debt. That's not to say human developers are without fault. Sambol said developers have their own weaknesses. Many companies, he said, have offshored or globally distributed development teams, so there's a lot of variation. He argues that it's important to acknowledge that imperfection and work toward processes that improve results. One way to do that is to automate the prompting process in a way that makes it more repeatable. "When you do that, you identify where you're starting to get good results and you don't expect everybody to come up with a well-structured long prompt."

Sambol added, "I think these tools are absolutely getting better. And so I'm reluctant to call any of them junk or deeply flawed. They're getting better shockingly rapidly. If you can take advantage of a couple of different ones – with a human being in the loop – then you are more likely to get output that is at least as good as you were getting before." ®
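The root cause, an old instance still bound to the port, is the kind of thing a one-line check surfaces immediately. A minimal sketch in Python (illustrative; not taken from the incident itself):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    # Try to connect: if something accepts, the port is already taken,
    # which is exactly what blocked the redeployed service.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```

Running a check like this before redeploying would have pointed straight at the still-running original instance instead of an afternoon of dead ends.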
Fedora's AI Developer Desktop Initiative Blocked by Community Backlash
The blog It's FOSS has an update on the Fedora AI Developer Desktop Initiative, a proposed platform for AI/machine learning workloads on Fedora. It's now been blocked "after two Fedora Council members retracted their earlier approval votes."
The initiative was proposed by Red Hat engineer Gordon Messmer, aiming to deliver an Atomic Desktop with accelerated AI workload support, covering developer tools, hardware enablement, and building a community around AI on Fedora... At the May 6 council meeting, the members unanimously voted to approve this new initiative, after which a short lazy-consensus window was left open until May 8 to accommodate absent members, at which point the decision was to be ratified.
But that last bit never happened, as council member Justin Wheeler (Jflory7) was the first person to change their vote to -1... ["While I strongly support leveraging AI to establish Fedora as a leading platform, completely rearchitecting our kernel strategy is a massive structural shift. It requires explicit alignment with our legal and engineering stakeholders before we commit the project to this path."]
Following that, fellow council member Miro Hrončok (churchyard) put in his -1, saying that he had originally assumed the proposal was purely additive and therefore uncontroversial. But seeing the community's response, he realized that he was mistaken about that. As an elected representative, he felt the need to reflect on this major proposal before signing it off.
Over 180 replies have piled up in the proposal's discussion thread, with many well-known Fedora contributors pushing back on things like kernel policy, proprietary software, and project identity. Hans de Goede from the packaging team called out the proposal's emphasis on CUDA support as going against Fedora's foundational commitment to free software, arguing that open alternatives like AMD's ROCm and Intel's oneAPI should be the focus instead.
Trump Phones Start Shipping - But Were There Really 600,000 Preorders?
USA Today reports:
Trump Mobile phones are being shipped this week, the company exclusively confirmed to USA TODAY in an email May 11....
The company's first smartphone — the T1 Phone — was originally scheduled for release in August. However, the golden gadget's release was later delayed to October before being pushed back again to this week. Now, Trump Mobile CEO Pat O'Brien told USA TODAY, pre-ordered phones will start getting sent out to customers this week... O'Brien said the company anticipates that all pre-ordered phones will be delivered within the next several weeks... The company's 5G "47 Plan" is available for $47.45 a month, a nod to President Donald Trump's two presidential terms, according to the website... Customers will also see Trump(SM) displayed as the network name in their status bar.
The Verge reported the phone was added last week to Google's public list of devices certified for Google Play, "usually one of the final steps before an Android phone is launched."
Trump Mobile may have broken radio silence partly in response to a recent wave of media coverage alleging that buyers had received emails notifying them that their preorders had been canceled, coverage that even made it onto Stephen Colbert's The Late Show... [T]here's seemingly no evidence of the alleged cancellation emails beyond unverified social media claims.
In January The Verge also questioned reports that 600,000 people preordered the Trump phone with a $100 deposit. "I can't find a shred of evidence that this figure is true," the author wrote, calling it "a microcosm of how the modern media landscape and AI chatbots can combine to give falsities the sheen of respectability."
I first saw the figure in, of all places, the Threads feed of California governor Gavin Newsom's press office, which had shared a screenshot of a tweet of a Grok summary making the claim. Trustworthy, right? The Grok post cites "reports from sources like Fortune, NPR, and The Guardian" for the 600,000 preorders, but a quick search of their recent output shows no sign of the number... India's Economic Times and Hindustan Times both reported a more specific figure of 590,000 preorders, referencing an unspecified Associated Press report as the source. [The Associated Press] VP of corporate communications, Lauren Easton, confirmed to me that "AP's original stories never contained such a number...."
Hindustan Times writer Shamik Banerjee called the citation "a typo," and told me that the figure was in fact taken from The Times of India. The Times of India story, which is bylined only to the newspaper's lifestyle desk, is more transparent in its sourcing: a viral post by a meme account... It's been covered by multiple publications, now presented as fact on MSN.com and tech site Phone Arena. And that coverage has helped it to filter into the chatbots and not just Grok — Gemini and ChatGPT were both happy to confirm to me that 600,000 T1 Phones have been ordered so far, the former falsely attributing the number to the Associated Press, and the latter to Phone Arena.
As for how many Trump Phone preorders have actually been placed? No one outside the company knows.
Why Is the US Job Market So Tough, Especially for Recent College Grads?
What's going on with the U.S. job market? "The economy is growing. Unemployment is low," notes the Washington Post. "And yet, for millions of workers, finding a job has become harder than at almost any other point in decades," with the hiring rate "well below pre-pandemic levels for more than a year."
Part of the problem? "Of the net 369,000 positions added across the entire economy since the start of 2025, health care alone accounted for nearly 800,000 — meaning every other sector, taken together, shed jobs." By the end of 2025 nearly half of college graduates ages 22 to 27 were working at jobs that didn't require a degree, according to stats from New York's Federal Reserve Bank.
The headline unemployment rate, at 4.2%, looks healthy. But that figure has been buoyed by a shrinking labor force: Fewer people are actively looking for work, which keeps the rate down even as hiring slows...
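The denominator effect is easy to see with toy numbers (illustrative figures, not BLS data):

```python
def unemployment_rate(unemployed_millions, labor_force_millions):
    # Headline rate = unemployed / labor force, as a percentage.
    return 100 * unemployed_millions / labor_force_millions

before = unemployment_rate(7.0, 167.0)   # about 4.19%
# If one million job seekers give up and stop looking, they drop out
# of both the numerator and the denominator:
after = unemployment_rate(6.0, 166.0)    # about 3.61%, with zero new hiring
```

The measured rate falls even though not a single person found a job, which is why a shrinking labor force can make a slow hiring market look healthier than it is.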
[Some large tech companies] are trying to recalibrate after their hiring sprees of 2021 and 2022, when many had raised pay, offered flexible schedules and signed people quickly... Higher interest rates have also made expansion more expensive, pushing many firms to invest in technology rather than headcount. Another reason hiring has slowed is uncertainty about AI. Even though the technology has not yet replaced large numbers of workers, it is already shaping how companies think about hiring. "I don't think this is AI displacement," said Ben Zweig, chief executive of Revelio Labs, a workforce data company. "What we're seeing is anticipatory." Instead of rushing to bring on new workers, some firms are waiting to see how the technology evolves and which tasks it will eventually take over.
A 39-year-old web developer tells the Post it took 453 job applications to get a handful of interviews and two offers. And a journalism school graduate said they'd sent hundreds of job applications but most led nowhere, and they're now couch-surfing to save money.
But the problem seems even worse for young people. One 18-year-old told the Post that in a year and a half of job searching, they'd yet to even meet an employer in person.
The unemployment rate for people ages 22 to 27 who recently completed college hit 5.6% in the final months of 2025 — well above the 4.2% rate for all workers, according to national data from the Federal Reserve Bank of New York... At one point last summer, new workforce entrants made up a larger share of the unemployed than at any point since the late 1980s — higher even than during the Great Recession. When hiring slows, the door closes first on those without an existing foothold. For the class of 2026, the timing could hardly be worse.
"It is getting increasingly clear that young people are being more affected by AI than older workers," Zweig said. Companies are not eliminating jobs at scale, but many are slow to hire junior workers. At the same time, older workers are staying in the labor force longer, leaving fewer openings for new arrivals. Even when jobs are available, the bar has shifted. Positions once considered entry level now often require several years of experience, technical expertise and familiarity with AI tools. With fewer openings and more applicants, companies are holding out for candidates who can do the job immediately and need little training... Employers are also looking for a different mix of skills. An analysis of millions of job postings by Indeed found that communication skills now appear in nearly 42% of all listings, while leadership skills feature in nearly a third — capabilities that are harder to prove on a résumé and harder still to demonstrate without an existing professional network. Christine Beck, a career coach who works with early-career job seekers, said employers are asking more of the people they do hire.
Cloud-managed earbuds sound strange - as a concept, and on a plane
Last year, The Register spotted Dell selling cloud-manageable wireless earbuds that feature the company's famously stoic styling at a price higher than Apple charges for its latest AirPods. Dell eventually offered your correspondent a pair of the Pro Plus Earbuds to try so we could hear what all the fuss is about – and we accepted, on condition that the company showed us the cloudy management tools that make the buds worth the big bucks.

Divya Soni, a go-to-market lead, showed me Dell's cloudy Device Management Console, a tool that lets admins enroll and track the buds, send them new firmware, or do things like turn on active noise cancellation by default across a fleet of earbuds. New firmware matters for earbuds because they're Bluetooth devices and the wireless protocol has had its fair share of security scares over the years. The buds have already earned Microsoft's Teams Open Office Certification – a seal of approval for being able to handle noisy offices – plus a Zoom accreditation. New firmware might help there, too.

Soni admitted earbuds aren't the main priority for the Device Management Console, which Dell expects customers will mostly use to manage docks and displays. Dell delivers firmware updates to those devices at least once a year, to address security issues or fix bugs. The tool can do the same for keyboards or headsets.

I can't imagine anyone would adopt Dell's Device Manager just to keep an eye on earbuds. I'm also not sure anyone would buy the buds for personal use. I say that because I own two sets of wireless earbuds, and in their own way both are better than the Dells.

My go-to buds are JBL's $40 Vibe Beam 2, which fit brilliantly, bring out some nice nuances in my music, boast batteries that last about six hours, and need only about 15 minutes to recharge. That makes them satisfactory for long-haul flights, during which they drop a warmly enveloping cone of silence when active noise cancelling kicks in.
My other pair are $100 Soundcore Space A40s (bought after destroying another pair). These buds have even nicer noise cancelling powers but fit terribly: I recently endured quite the scene when running to catch a bus and one dropped out of my ear and bounced into a shrub. The Soundcores redeem themselves with impressive microphones, so I use them when Zooming or recording a podcast. I prefer to leave them at home, though, because the case is bulbous and a little conspicuous in a front jeans pocket.

The Dells are even bigger. They fit my ears well and battery life is strong at around eight hours. Active noise cancelling is poor: a high hiss persists in-flight and I perceived distracting artefacts when using them in noisy environments on the ground.

Neither of my two PCs made a Bluetooth connection with the Dell buds. Dell has a fix for that – the buds' case houses a small USB-C dongle devoted to connecting with the buds. It works every time, delivers a more stable connection than Bluetooth, and brings out some musical nuances that I can't hear with my other buds or desktop speaker.

The dongle feels like a clue about how Dell imagines these buds will be used, because today's laptops seldom offer more than a pair of USB-C ports, and they're commonly used for power in and video out. Dedicating a port to earbuds seems wasteful … unless you're using a Dell dock or monitor that offers more ports. The USB-C audio connector therefore made it hard to escape the idea that Dell expects these buds will almost always be sold as part of a corporate peripheral purchase.

I can't imagine consumers would prefer them to Apple's AirPods, or the many cheaper earbuds that match them for performance. But if the boss decides your organization must have cloud-manageable earbuds, it would be churlish to turn down the chance to use a pair of Pro Plus Earbuds for work and play. The experience of using them is in the name: they're built for the office but can handle after-hours activities.
They’re not delightful, but they’re far from trashy, annoying, or inconvenient. And when I inevitably lose or destroy my current buds I’ll be very happy if I have the Dells on hand. ®
Linux Kernel Outlines What Qualifies As A Security Bug, Responsible AI Use
The Linux 7.1 kernel has added new documentation clarifying what qualifies as a security bug and how AI-assisted vulnerability reports should be handled. Phoronix reports: Stemming from the recent influx of security bugs to the Linux kernel as well as an uptick in bug and security reports from discoveries made in full or in part with AI, additional documentation was warranted. Longtime Linux developer Willy Tarreau took to authoring the additional documentation around kernel bugs. To summarize (since the documentation is a bit too lengthy for a Slashdot story), the AI-assisted vulnerability reports should "be treated as public" because such findings "systematically surface simultaneously across multiple researchers, often on the same day." It adds that reporters should avoid posting a reproducer openly, instead "just mention that one is available" and provide it privately if maintainers request it. The guidance also tells AI-assisted reporters to keep submissions concise and plain-text, focus on verifiable impact rather than speculative consequences, include a thoroughly tested reproducer, and, where possible, propose and test a fix.
As for what qualifies as a security bug, the documentation says the private security list is for "urgent bugs that grant an attacker a capability they are not supposed to have on a correctly configured production system" and are easy to exploit, creating an imminent threat to many users. Reporters are told to consider whether the issue "actually crosses a trust boundary," since many bugs submitted privately are really ordinary defects that belong in the normal public reporting process.
All the new documentation can be read via this commit.
Read more of this story at Slashdot.
Categories: Linux fréttir
Europe built sovereign clouds to escape US control. Then forgot about the processors
FEATURE Can digital sovereignty exist on American silicon? Europe is pouring more than €2 billion into sovereign cloud initiatives designed to reduce exposure to US legal reach. The EU's IPCEI-CIS program funds infrastructure development. France qualifies operators under SecNumCloud, a framework with nearly 1,200 technical requirements promising "immunity from extraterritorial laws." But most datacenters and qualified cloud operators still rely heavily on Intel or AMD processors. And inside those processors sits a computer beneath the computer: management engines operating at Ring -3, below the operating system, outside the control of host security software, persistent even when the machine appears powered off. Under the US Reforming Intelligence and Securing America Act (RISAA) 2024, hardware manufacturers count as "electronic communications service providers" subject to secret government orders. Europe's frameworks certify the clouds. They don't assess the silicon. The computer your OS can't see That computer beneath the computer has a name. On Intel processors, it is the Management Engine (ME), or more precisely the Converged Security and Management Engine (CSME). On AMD, it is the Platform Security Processor (PSP). Both run at what security researchers call Ring -3, below the operating system, below the hypervisor, in a privilege level the host cannot see or log. "It's a computer inside your computer," explains John Goodacre, Professor of Computer Architectures and former director of the UK's £200 million Digital Security by Design program. He is clear about what that means in practice. The ME has its own memory, its own clock, and its own network stack, and because it can share the host's MAC and IP addresses, any traffic it generates is indistinguishable from the host's own traffic to the firewall. The architecture is not theoretical. 
Embedded in the Platform Controller Hub, the CSME is a separate microcontroller that operates independently of the host, with direct memory, device access, and network connectivity the host operating system cannot monitor. AMD's PSP works the same way. Intel's Active Management Technology (AMT), the remote management feature the ME enables, exposes at least TCP ports 16992, 16993, 16994, and 16995 on provisioned devices. Goodacre notes that an attack surface exists on unprovisioned hardware too. These ports deliver keyboard-video-mouse redirection, storage redirection, Serial-over-LAN, and power control to administrators managing fleets of devices remotely. The capability has legitimate uses. It also provides a channel that operates at a level below what European sovereignty frameworks can attest. Microsoft documented in 2017 that the PLATINUM nation state actor used Intel's Serial-over-LAN (SOL) as a covert exfiltration channel. SOL traffic transits the Management Engine and the NIC sideband path, delivered to the ME before the host TCP/IP stack runs. The host firewall and endpoint detection saw nothing, and any security tooling running on the compromised machine itself was equally blind. PLATINUM did not exploit a vulnerability. It exploited a feature, requiring only that AMT be enabled and credentials obtained. In documented cases, those credentials were the factory default: admin, with no password set. Goodacre catalogues this and related scenarios in a 37-page risk assessment prepared for CISOs evaluating Intel vPro hardware connected to corporate networks. Its conclusion is blunt: connecting an untouched-ME device to corporate resources "exposes the organization to a class of compromise that defeats the host security stack in its entirety." The ME does not stop when the machine appears to. Users recognize the symptom: a laptop powered off and stored for weeks is found, on next boot, to have a depleted battery. 
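As a rough illustration of how exposed AMT management interfaces are found, the TCP ports listed above can be checked with a plain connection probe. This is a minimal sketch for auditing hosts you administer, not a substitute for a proper AMT assessment; the target address is a hypothetical placeholder.

```python
import socket

# Intel AMT management ports typically exposed on provisioned vPro devices
AMT_PORTS = (16992, 16993, 16994, 16995)

def probe_amt_ports(host: str, timeout: float = 1.0) -> list[int]:
    """Return the AMT ports on `host` that accept a TCP connection."""
    open_ports = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, filtered, or unreachable
    return open_ports

if __name__ == "__main__":
    # 192.0.2.10 is a documentation-only address; substitute a host you own
    print(probe_amt_ports("192.0.2.10", timeout=0.5))
```

An open 16992 or 16993 does not prove compromise, only that AMT is provisioned and reachable; fleet-scanning tools take the same approach at scale.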
On modern thin and light platforms, what Microsoft documents as Modern Standby means "off" does not correspond to "all subsystems unpowered." The system-on-chip components the Management Engine runs on remain in low-power states, drawing enough to drain a 55 Wh battery over weeks, on the order of 100-200 mW continuous draw. The implication is documented in Goodacre's risk assessment: "Whether the radio is in a Wake-on-Wireless-LAN listening state is firmware policy. On a device whose firmware has been tampered with during transit through the supply chain, the answer cannot be inferred from the visible power state." A laptop that appears off, in a bag, can associate with a hostile network the user has no knowledge of. Professor Aurélien Francillon, a security researcher at French engineering school EURECOM, has spent years studying exactly this class of problem. Working with colleagues, he built a fully functional backdoor in hard disk drive firmware [PDF], a proof of concept demonstrating how storage devices could silently exfiltrate data through covert channels. Three months after presenting it at an academic conference, the Snowden disclosures revealed the NSA's ANT catalogue, which documented an identical capability already deployed in the field. "The NSA were already doing it," Francillon says flatly. "Quite amazing." That background informs his assessment of the ME. "Yes, it can probably be used as a backdoor, like many other things, including BMC [baseboard management controller] and many other firmwares," he says. The question, he argues, is not whether the backdoor exists but whether operational controls make it unreachable in practice. AMD faces the same architectural question. On April 14, 2026, researchers demonstrated the Fabricked attack against AMD's SEV-SNP confidential computing technology, achieving a 100 percent success rate with a software-only exploit. The Platform Security Processor proved vulnerable to the same class of compromise. 
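The standby power figures quoted earlier (a 55 Wh battery drained over weeks by a 100-200 mW draw) are easy to sanity-check with back-of-the-envelope arithmetic:

```python
BATTERY_WH = 55.0  # typical thin-and-light laptop battery capacity

def drain_days(draw_mw: float) -> float:
    """Days until a full battery empties at a constant draw (self-discharge ignored)."""
    hours = BATTERY_WH / (draw_mw / 1000.0)
    return hours / 24.0

for draw_mw in (100, 200):
    print(f"{draw_mw} mW -> {drain_days(draw_mw):.1f} days")
# 100 mW -> 22.9 days; 200 mW -> 11.5 days
```

Roughly two to three weeks, which matches the "stored for weeks, found dead on next boot" symptom.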
On server hardware, the picture is the same. Intel ME runs on servers under a different name, Server Platform Services or SPS, and the BMC, the remote administration controller standard in datacenter hardware, relies on it. "More or less the same," Francillon says of the server variant. For datacenter operators, he sharpens the focus further: "If I look at cloud systems, servers, I would be more concerned with the BMC," pointing to published research demonstrating remote exploitation of BMC vulnerabilities that could allow an attacker to reinstall or fully compromise a server. The BMC is not a separate concern from the ME: on server hardware, it is the primary network entry point into the SPS, making it both the most exposed interface and the most consequential. Both Intel and AMD processors contain management engines that operate below the operating system. The silicon is designed by American companies and subject to American legal process. The backdoor the CLOUD Act doesn't use That legal process has teeth that most European policymakers underestimate. The CLOUD Act, passed in 2018, gave US authorities extraterritorial reach to data held by American companies. FISA Section 702 allows intelligence agencies to compel US persons and companies to provide access to communications. Both are well known in European sovereignty discussions. They operate through the front door: a legal order served on a company that controls data. Less well known is RISAA 2024, a law that opens a different entrance entirely. RISAA amended FISA's definition of "electronic communications service provider" in ways that go beyond cloud operators and platform companies, and beyond the bilateral agreements that European policymakers have built their legal defenses around. Hardware manufacturers now fall within scope. Intel and AMD can be compelled, via secret orders with gag clauses, to cooperate with US intelligence access. 
The mechanism through which that access could be exercised is the management engine: a persistent, privileged, network-connected runtime that operates below anything the host operating system can see or block. A SecNumCloud-certified operator can be legally isolated from American data demands. The processor inside its servers cannot. "You've actually got a policy mechanism by which any such machine anywhere can deliver any of its information," Goodacre says. RISAA's two-year term expired on April 20, 2026, but Congress extended it by 45 days while debating reforms. Whether it is renewed, amended or allowed to lapse, the architecture it targets does not change. SecNumCloud's blind spot France's SecNumCloud is Europe's most rigorous attempt to build a cloud certification that is legally immune to American law. It did not emerge from nowhere. ANSSI, France's national cybersecurity agency, was established in 2009 as part of a broader effort to build institutional muscle on digital sovereignty long before the term became fashionable. When Edward Snowden revealed the scale of NSA surveillance in 2013, France's response was technical rather than rhetorical: ANSSI published the first SecNumCloud framework in July 2014. A decade later, that framework has grown to nearly 1,200 technical requirements. At the time, SecNumCloud was a cybersecurity qualification, not a sovereignty instrument: it set requirements for architecture, encryption standards, access controls, and incident response, but said nothing about who controlled the underlying infrastructure or whose laws applied to it. The CLOUD Act changed that. Passed in 2018, it gave American authorities extraterritorial reach to data held by US companies, and suddenly a French cybersecurity framework had a geopolitical dimension it was not designed for. 
Version 3.2, introduced in 2022, added Chapter 19: a set of explicit requirements targeting extraterritorial law, mandating that only EU operators could run the service, that no non-EU party could access customer data, and that the provider could operate autonomously without external intervention. It promised "immunity from extraterritorial laws." In December 2025, S3NS, a joint venture between French defense and technology group Thales and Google Cloud, operating Google Cloud Platform technology under French control, became the first "hybrid" cloud to receive SecNumCloud qualification. The certification triggered heated debate: was this real sovereignty, or American technology with a European flag? But the debate missed a more fundamental question. Does SecNumCloud's certification reach as far as the silicon it runs on? Francillon is positioned to see both sides of that question. He sits on the French Technology Academy's working group on cloud security, a body that advises on the technical foundations of frameworks like SecNumCloud. And he has spent years studying firmware backdoors in academic literature and demonstrated them in practice. He knows what the hardware can do, and he knows what the certification requires. His starting point is that SecNumCloud provides genuinely valuable protection, and that the silicon gap does not negate that. When asked whether SecNumCloud explicitly addresses Intel Management Engine or AMD Platform Security Processor vulnerabilities, his answer is unambiguous: "There is no direct requirement for firmware backdoor prevention." The framework is not designed to be a technical specification for hardware-layer security. "The document aims to be generic and not dive into technical details," Francillon says. "Most of it is organizational security." 
What SecNumCloud does require is that providers build a proper threat model, consider mitigation mechanisms, and monitor administration gateways where external tech support could be exploited. The hardware layer was not omitted through oversight; it was left out by design. Francillon's assessment is not a fringe view. Vincent Strubel, the director of ANSSI, the very agency that designed and administers SecNumCloud, is equally explicit about what the framework does and does not cover. In a January 2026 LinkedIn post addressing SecNumCloud's scope, he writes that all cloud offerings, hybrid or not, depend on electronic components whose design and updates are not 100 percent controlled in Europe. If Europe were ever cut off from American or Chinese technology, he argues, the result would be a global problem of security degradation, not just in hybrid clouds, but everywhere. Strubel frames SecNumCloud carefully: it is "a cybersecurity tool, not an industrial policy tool." It protects against extraterritorial law enforcement and kill-switch scenarios. It was never designed to eliminate technology dependencies at the hardware layer, and no actor, whether state or enterprise, fully controls the entire cloud technology stack anyway. One technology frequently cited in sovereignty discussions is OpenTitan, Google's open source secure element deployed on its server hardware and used within the S3NS infrastructure. Francillon is clear about what it is and, critically, what it is not. "OpenTitan is a secure element, a small chip on the side that can be used for protecting sensitive keys, providing signatures, making attestations," he explains. "It's a bit like a TPM." What it is not is a replacement for the main processor. "The Linux and all your applications will not run on it." OpenTitan sits alongside x86 infrastructure as an external root of trust, independent of the ME. That matters because the default embedded TPM lives inside the ME, making it subject to the ME attack surface.
OpenTitan sits outside that boundary. The two address different problems entirely, and conflating them, as sovereignty advocates sometimes do, obscures where the residual exposure actually lies. ANSSI's own technical position paper [PDF] on confidential computing, published in October 2025, concludes that Intel SGX, TDX, and AMD SEV-SNP are "not sufficient on their own to secure an entire system, or to meet the sovereignty requirements of SecNumCloud 3.2." Physical attackers are "explicitly out-of-scope" of vendor security targets. Supply chain attackers are "explicitly out-of-scope." The ME attack surface discussed in this article falls into neither category: it is a remote network threat, not a physical one. The paper's conclusion for users concerned about hostile cloud providers is stark: "Switch to a cloud provider they trust, or use their own hardware with physical security protection measures." The castle with a structural flaw Francillon does not dispute that SecNumCloud leaves the ME unassessed. His argument is that this does not matter in practice. "What I mean is that if there is a backdoor to access a room, it cannot be directly used if the room is in a castle. You have to pass the castle walls first." Network isolation, monitoring, and threat modeling are the walls. SecNumCloud's operational requirements mandate that administration gateways be isolated, that external tech support be monitored, that network segmentation prevents lateral movement. The Management Engine backdoor may exist, but the framework makes it unreachable except in what Francillon calls "very high-end attacks." That qualifier matters. Francillon is not claiming perfect security. He is claiming that proper operational controls reduce the threat to a level where only nation state actors with significant resources could exploit it. For most threat models, he argues, that is sufficient. 
"Saying it is useless to do SecNumCloud because there is ME, or whatever backdoor in some hardware we don't control, is a mistake," he says. SecNumCloud improves security over deployments without such controls, he argues, provided that hardware is carefully evaluated and firmware securely configured. The castle walls have a structural flaw that Goodacre's risk assessment documents in detail. Corporate perimeter firewalls see the device's traffic, but because the ME shares the host's MAC and IP addresses, they cannot tell ME-originated flows apart from legitimate host traffic. "The perimeter cannot attribute a flow to host-versus-CSME origin without out-of-band knowledge," Goodacre writes. A TLS-encrypted tunnel from the ME to an attacker server on port 443 looks, to the perimeter, like any other HTTPS connection the laptop makes. Network filtering reduces attack surface. It does not eliminate the exposure. Goodacre's position is that a "Tier-3 supply-chain residual remains in both cases and is the irreducible cost of buying any silicon that ships with a Ring -3 manageability engine." He defines Tier 3 as nation state cyber services operating at the level of compromising firmware in transit, mis-issuing CA certificates via in-country authorities, and modifying hardware at customs or courier hubs. The NSA's Tailored Access Operations division treated supply chain interdiction as routine business, with explicit doctrinal preference for BIOS and firmware implants over disk-level malware. His risk assessment's data on fleet vulnerability is concrete. Industry telemetry from Eclypsium, analyzing production enterprise environments, found that approximately 72 percent of devices observed remained vulnerable to INTEL-SA-00391 years after public disclosure, and 61 percent remained vulnerable to INTEL-SA-00295. 
The same reporting documented that the Conti ransomware group developed proof-of-concept Intel ME exploit code with the intent of installing highly persistent firmware-resident implants. "Connecting an untouched-ME vPro laptop to corporate resources exposes the organization to a class of compromise that defeats the host security stack in its entirety," Goodacre concludes. "The exposed controls include BitLocker full-disk encryption, FIDO2-protected sign-in, endpoint detection and response, the host firewall and the corporate VPN." The disagreement between Francillon and Goodacre is not about whether the vulnerability exists. Both confirm it does. Both confirm AMD faces the same issue. Both confirm software alone cannot fix it. The disagreement is about whether operational controls, Francillon's castle walls, make an architectural backdoor irrelevant in practice, or merely reduce its exploitability while leaving nation state actors with a path through. For SecNumCloud operators processing sensitive government or commercial data, the distinction is not academic. It is worth noting that SecNumCloud is designed for a higher level of security than standard cloud certifications, but is not intended for classified or restricted government data. The threat that can still slip through Francillon's castle walls is precisely the threat SecNumCloud was designed to keep out. The gap nobody names Goodacre told The Register he tested awareness of the Management Engine with various attendees at the CyberUK conference in April 2026. "Almost no one" knew about it, he reports. The gap between the sovereignty rhetoric and the silicon reality is not being surfaced in policy discussions, procurement decisions, or public debate over what digital sovereignty means. The debate that does happen, hybrid versus non-hybrid, Google/Thales versus pure European providers, focuses on operational control and legal structure. It does not address the shared silicon foundation. 
Strubel's LinkedIn post pushes back against the framing: "Imagining this problem is limited to hybrid cloud offerings is pure fantasy that doesn't survive confrontation with facts." Every cloud provider, hybrid or not, depends on components they don't fully control. The distinction isn't hybrid versus sovereign. It is what you're protecting against, and whether the controls you're implementing address that threat. There is no immediate solution. RISC-V, the open source processor architecture European sovereignty advocates point to as a long-term alternative, remains years from competitive performance in datacenter workloads. "It will take decades," Francillon says flatly. Arm is a cautionary precedent: it took nearly 20 years from the first server attempts before Arm achieved any meaningful datacenter presence. Can sovereignty exist on compromised silicon? For Goodacre, the bottom line is simple: the Tier-3 supply chain residual is "the irreducible cost of buying silicon with a Ring -3 manageability engine." Francillon argues that operational controls, including network isolation, monitoring, and threat modeling, make the backdoor unreachable except in very high-end attacks. Strubel acknowledges hardware dependencies are real but maintains that SecNumCloud provides valuable protection for what it does cover: legal control, kill-switch resistance, defense against cyberattacks and insider threats. The disagreement is not about technical facts. It is about risk tolerance and threat model calibration. For European CIOs choosing SecNumCloud-certified providers, the question to ask vendors is: how do you address the Intel Management Engine and AMD Platform Security Processor in your threat model? The answer will clarify whether the vendor treats the hardware layer as out of scope, or has implemented controls that reduce but do not eliminate the exposure. For European policymakers, the question is broader. Can digital sovereignty exist on non-sovereign silicon?
The current frameworks do not answer that question. They certify operational controls, legal structure, and autonomous execution capability. They do not certify silicon-layer immunity, because the hardware is American or Chinese, subject to American or Chinese law, designed with management engines that European authorities did not specify, cannot legally compel on their own terms, and cannot replace. Whether that is a gap worth addressing, or a risk worth accepting as the unavoidable cost of participating in global technology supply chains, is a question Europe will need to answer for itself. ®
Categories: Linux fréttir
One in seven Brits swapped their GP for ChatGPT, study finds
Brits are now asking chatbots about mysterious lumps and weird rashes instead of calling their GP, which is probably not the digital healthcare revolution anybody meant to build. A new study from King's College London found that one in seven people in the UK have used AI instead of contacting a doctor or healthcare service, while one in ten said they had turned to chatbots rather than professional mental health support. Convenience was the biggest reason, cited by 46 percent of respondents, closely followed by curiosity at 45 percent. Another 39 percent said they used AI because they were unsure whether their symptoms were serious enough to bother a GP in the first place. The report, based on a survey of more than 2,000 adults, suggests that AI systems are quietly becoming Britain's unofficial second-opinion service while regulators are still arguing about what counts as "AI-enabled healthcare" in the first place. However, some respondents said the chatbot conversations ended up replacing medical care altogether. Around one in five respondents said chatbot advice discouraged them from seeking professional help, and 21 percent said they skipped contacting a healthcare provider because of something the AI told them. Public confidence in AI healthcare also looks shaky. The survey found Britons are almost perfectly split on whether AI should be involved in clinical decision-making, with 37 percent supporting its use and 38 percent opposing it. Safety and accuracy worries topped the list of public concerns about NHS AI use. Women, in particular, were less comfortable with the idea than men, and far more likely to say patients should be told when AI is involved in their care. Oddly, younger adults were among the most skeptical. Nearly half of 18 to 24-year-olds opposed clinical AI use, compared with 36 percent of people over 65. The public also appears to think AI has already taken over GP surgeries to a much greater extent than is the case. 
Respondents guessed that around 39 percent of GPs use AI in clinical decision-making, when the actual figure is closer to 8 percent. Professor Graham Lord, executive director at King's Health Partners, warned that responsibility for AI mistakes often lands on clinicians even when they have little control over the systems being deployed. "When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced," Lord said. Which sounds suspiciously like someone in healthcare has already seen the incoming paperwork. ®
Categories: Linux fréttir
Japan Runs Out of Robot Wolves In Fight Against Bears
Japan's worsening bear problem has created a shortage of handmade "Monster Wolf" robots, which are $4,000 solar-powered scarecrow-like devices with glowing eyes, sensors, and blaring sounds designed to frighten the animals away. "We make them by hand. We cannot make them fast enough now. We are asking our customers to wait two to three months," company president Yuji Ohta recently told the AFP. Popular Science reports: First released in 2016 by the manufacturer Ohta, Monster Wolf was originally designed to ward off agricultural foes like boars, deer, and the island nation's Asian black bear (Ursus thibetanus) and brown bear (Ursus arctos) populations. The creative solution quickly went viral for its red LED eyes and menacing fangs -- as well as its admittedly odd, furry pipe frame.
Starting at around $4,000, each bespoke Monster Wolf is now equipped with battery power, solar panels, and detection sensors. Its speakers are programmed with over 50 audio clips including human voices and sirens audible over half a mile away. These aren't assembly line products, however. Each Monster Wolf is custom made, and Ohta simply can't keep up with the current demand.
[...] Ohta told the AFP that amid the ongoing crisis, there has been "growing recognition" that Monster Wolf is "effective in dealing with bears." The main customer base remains farmers, but orders are also coming from golf courses and rural workers. Upgraded versions will soon include wheels to actually chase animals and patrol preset routes. There are also plans to release a handheld version for outdoor enthusiasts and schoolchildren. Until Ohta catches up with its orders, residents and visitors are encouraged to review the Japanese government's own bear safety tips.
Read more of this story at Slashdot.
Categories: Linux fréttir
Wood Burning Is Reintroducing Lead Pollution Into the Air, Scientists Find
An anonymous reader quotes a report from The Guardian: Wood heating is reintroducing lead into the air of local communities and homes, a systematic investigation by academics has found. Overwhelming evidence of lead's neurotoxicity meant the metal was banned as an additive in petrol more than 25 years ago. The research by academics from the University of Massachusetts Amherst began by analysing samples of particle pollution from five suburban and rural towns in the north-east US. They looked for tiny particles of potassium that are given off when wood is burned and also particles containing lead. Samples from seven winters revealed associations between potassium and lead. When there were more wood burning particles in a daily sample, there was more lead in the air, with clear straight-line relationships in four of the five towns.
The project was extended to 22 other towns across the US. The relationships between lead and potassium varied from place to place, being strongest in the Rocky Mountains. After the researchers factored in the effects of temperature, the moderate to strong associations in their analysis strengthened the conclusion that the extra lead came from wood burning. The lead concentrations were less than the US legal limits, but any exposure to the metal is harmful. [...] Although less than legal limits, lead particles are routinely measured in UK cities in winter when people are also burning wood. This is normally attributed to waste wood covered with old lead paint, but the UMass Amherst study suggests the metal is coming from the wood itself. This means that any wood burning could increase exposure in neighborhoods and at home. Tricia Henegan, a PhD student at UMass Amherst and the first author on the research, said: "The most logical answer [to the question of how lead ends up in wood] is that it comes from uptake in the soil, probably riding along with the nutrients and water that trees need. Once in the tree, it deposits in the tree's tissues and remains until that tree is burned." Other research has found that it can then become part of the smoke.
"The use of wood as an energy source is a relic of the past, one that should not be relived if given a choice. Although wood fuel use can feel nostalgic, it does have negative consequences on air quality, and therefore public health."
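The core of the analysis described above, checking for a straight-line relationship between a wood-smoke tracer (potassium) and lead in daily samples, amounts to an ordinary least-squares fit. A minimal sketch with made-up numbers; the real study's data and statistical controls are far richer:

```python
def linear_fit(xs, ys):
    """Ordinary least squares: return slope and intercept of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

# Hypothetical daily samples: potassium vs lead, both in ng/m^3
potassium = [10, 25, 40, 55, 70]
lead = [0.6, 1.4, 2.1, 3.0, 3.7]
slope, intercept = linear_fit(potassium, lead)
print(round(slope, 3), round(intercept, 3))
```

A consistently positive slope across seasons and sites is what points to wood smoke itself carrying the lead.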
Read more of this story at Slashdot.
Categories: Linux fréttir
Kioxia and Dell Cram Nearly 10PB Into a Single 2U Server
BrianFagioli writes: Kioxia and Dell Technologies say they have built a 2U server configuration capable of scaling to 9.8PB of flash storage, which is the sort of density that would have sounded impossible just a few years ago. The setup combines a Dell PowerEdge R7725xd Server with 40 Kioxia LC9 Series 245.76TB NVMe SSDs and AMD EPYC processors. According to Kioxia, matching the same capacity with more common 30.72TB SSDs would require seven additional servers and another 280 drives.
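The density claim checks out from the figures given; a quick verification of the drive and server counts:

```python
DRIVES_PER_2U = 40
LC9_TB = 245.76    # Kioxia LC9 capacity per drive, in TB
COMMON_TB = 30.72  # the "more common" SSD capacity cited

total_tb = DRIVES_PER_2U * LC9_TB
drives_needed = total_tb / COMMON_TB            # 30.72 TB drives for same capacity
extra_drives = drives_needed - DRIVES_PER_2U
extra_servers = drives_needed / DRIVES_PER_2U - 1

print(f"{total_tb / 1000:.2f} PB total")        # ~9.83 PB
print(f"{extra_drives:.0f} extra drives, {extra_servers:.0f} extra servers")
```

That is, 320 of the smaller drives spread across eight 2U servers to match what one box of LC9s holds.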
The companies are pitching the hardware squarely at AI and hyperscale workloads, where storage is rapidly becoming a bottleneck alongside compute. Kioxia claims the denser configuration can dramatically reduce power consumption and rack space requirements while remaining air cooled. The announcement also highlights how quickly enterprise storage capacities are escalating as organizations race to support larger AI models, massive datasets, and increasingly demanding data pipelines.
Read more of this story at Slashdot.
Categories: Linux fréttir
AMD Is Bringing Improved FSR 4 Upscaling To Its Older GPUs
AMD says FSR 4.1 will finally bring its newer hardware-accelerated upscaling technology to older Radeon GPUs. "The rollout will begin in July with RDNA3- and 3.5-based GPUs, which include the Radeon RX 7000 series, as well as integrated GPUs like the Radeon 890M and Radeon 8060S," reports Ars Technica. "In 'early 2027,' support will also be extended to the RDNA2 architecture, which includes the Radeon RX 6000 series, integrated GPUs like the Radeon 680M, and the Steam Deck's GPU. This would also open the door to supporting FSR 4 on the PlayStation 5 and Xbox Series X and S, all of which also use RDNA2-based GPUs." From the report: [AMD Computing and Graphics SVP Jack Huynh's] short video presentation didn't get into performance comparisons, but did mention that AMD had to work to get FSR 4's superior hardware-backed upscaling working on its older graphics architectures. RDNA4 includes AI accelerators that support the FP8 data format in the hardware, and porting FSR 4 to older GPUs meant getting it running on the integer-based INT8 hardware in the RDNA3 and RDNA2-based GPUs.
This could mean that FSR 4.1 running on an RDNA3 or RDNA2-based GPU comes with a larger performance hit relative to RDNA4 cards, or that image quality differs slightly. Modders have already worked to get FSR 4 working on INT8-supporting GPUs, and the older GPUs reportedly see a 10 to 20 percent performance hit relative to FSR 3.1 running on the same hardware. AMD's official implementation may or may not improve on these numbers.
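Porting an FP8 model to integer hardware is, at its core, a quantization exercise: real-valued tensors are mapped onto a small integer range plus a scale factor. This illustrative sketch of symmetric INT8 quantization shows the idea in miniature; it is not AMD's implementation, and the numbers are invented.

```python
def quantize_int8(xs):
    """Symmetric quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(x) for x in xs) / 127.0
    return [round(x / scale) for x in xs], scale

def dequantize(qs, scale):
    return [q * scale for q in qs]

weights = [0.5, -1.2, 0.03, 0.9]           # toy tensor values
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(q, f"max error {max_err:.4f}")       # round-trip error is bounded by scale / 2
```

INT8's uniform step size trades away FP8's wider dynamic range, which is one reason quality or performance on the older GPUs may differ slightly from native FP8 on RDNA4.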
[...] Any games that support FSR 4 should be able to support FSR 4.1 running on Radeon 7000-series cards; users will presumably be able to install a driver update in July that enables the new feature. Games that support the older FSR 3.1 can also be forced to use FSR 4 in the Radeon graphics driver.
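The report's FP8-versus-INT8 distinction is the crux of the porting work. A minimal sketch (emphatically not AMD's implementation, just an illustration of the general technique): INT8 hardware holds only integers in [-128, 127] with no per-value exponent, so running a model trained for FP8 on it requires an explicit scale factor, and small values lose relative precision.

```python
# Illustrative symmetric per-tensor INT8 quantization (hypothetical
# helper names). FP8 stores an exponent per value; INT8 does not, so
# the port must carry a separate float scale alongside the integers.

def quantize_int8(weights):
    """Map floats onto [-128, 127] with a shared per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    return [v * scale for v in q]

weights = [0.8, -0.03, 0.0019, -1.2]
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
# Large weights survive well; the tiny 0.0019 rounds to zero entirely.
# That kind of precision loss is one plausible source of the slight
# image-quality differences and extra overhead the article mentions.
```
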
Read more of this story at Slashdot.
Categories: Linux fréttir
Google reimburses Register sources who were victims of API fraud
Two of the Google Cloud developers who were hit with bills for thousands of dollars following unauthorized API calls to Gemini models have had their bills reversed, the users told The Register in recent days. But Google plans to continue automatically expanding users' spending limits, leaving them and countless other customers vulnerable to bills they cannot afford, whether from fraud or a sudden traffic surge.

Australia-based developer Isuru Fonseka – whose usage bill skyrocketed to $17,000 in minutes after Google automatically upgraded his $250 spending tier when a hacker took control of his account – told us that he was happy to put this behind him. "It's so good. It felt like they were just giving me the runaround until your article. I just hope they fix it properly for everyone," he said. "It's great that the article was able to get the refund, but it's sad that it had to go to that level for them to process it urgently."

Despite refunding his money, Google seems to have lost a customer. Fonseka said that he has since ensured his API key cannot be used with Google's stable of AI products, and will likely try one of the independent foundation models if he needs those features. "I've disabled Gemini on everything – if I ever plan to use AI on my projects, I'm better off using it via a different service such as OpenRouter or going directly to one of the other LLM providers – just as a way to keep Gemini out of my account and the risk as low as possible," he said.

Fonseka said he was blindsided by a Google policy that allowed the company to automatically upgrade a user's billing tier without permission or adequate warning. He had thought that by signing up for a tier with a $250 spending cap, his bills would be restricted to that amount. It was only after attackers exploited his API key that he learned Google would raise the cap automatically based on his history of spending.
While Google acknowledged that the automatic tier upgrades allowed credential hijackers to rack up thousands of dollars in bills in cases like the one Fonseka described to The Register, it said it has not reconsidered the policy. In a statement to The Register, Google said that it wants to prioritize uninterrupted access to Google Cloud services, preferring to prevent service outages over respecting users' budget preferences. "With our automated growth tiers, we helped businesses scale as usage increased, built on their historic reputation of payments and usage," a Google spokesperson told us in a statement. "This prevents their business having a hard service outage once they pass an artificial system quota."

Tiers vs spending caps

There is some confusion between Google's usage tiers and its newly introduced spending caps, and Google's documentation hasn't helped much. Google says users can set their usage tiers not to exceed a certain spending level. For example, the maximum spending allowed for a Tier 1 user like Fonseka is $250. However, if the account is older than 30 days and has spent at least $1,000 over its lifetime with Google, Google will automatically allow that account to spend up to $100,000. So good customers have the most to fear from fraud or an unexpected spike in usage. In several cases shared on social media, Google users only became aware of this after their credit cards were billed thousands of dollars.

On April 22, Google introduced a trial of hard caps on spending within Google Cloud, but those are in preview and approved on a case-by-case basis. "We're excited to announce that Spend Caps are coming soon to Google Cloud. Designed to work with Google Cloud Budgets, FinOps and DevOps can set budgets that enforce automated cost boundaries (caps) at the project level for AIS, Agent Platform, Cloud Run, Cloud Run Functions, and Maps," Google wrote.
"These caps alert and ultimately pause API traffic once your set budget is reached, but leave your resources intact. If you need the traffic to resume, simply suspend the Spend Cap." Spend caps can only be set per project for a single, eligible service, Google said. Eligible services for this preview include Gemini API, Agent Platform (previously known as VertexAI), Cloud Run, Cloud Run Functions, Maps, Google said. Users who apply for a spending cap will have their submissions reviewed on a “one to two week basis” and customers are added in the order they submitted. “Once onboarded, you will receive an email with instructions on how to access the feature as well as details on how to submit feedback,” Google writes in its sign up page. Rod Danan, CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities, told The Register earlier this week that he saw his bill skyrocket to $10,000 in just 30 minutes of usage by attackers who exploited his public API key. Google forgave the charges on Thursday, he said. “They got back to me today agreeing to a refund,” he told us. “It's definitely relieving. You want to focus on the business. You don't want to have to focus on going and getting refunds from some crazy charges.” He said the stress of running a startup is hard enough without the addition of fighting one of the largest companies in the world imposing erroneous five-figure charges. “I'm happy that it's behind me. I wish it was easier,” he said. “I've learned, yeah, definitely don't give up. Be annoying whenever something is wrong and just keep pushing. Again, try to make it as public as possible, get louder and louder until the people you need to hear you actually hear you.” Google said any unauthorized use of API keys will be investigated and it historically has treated customers compassionately when there is clear evidence of fraud or error. 
“We take reports of credential abuse and the financial security of our customers extremely seriously; and as you know are investigating these specific cases you have pointed to and we will work directly with any impacted users to resolve charges resulting from fraudulent activity,” Google said. ®
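The tier mechanics the article describes reduce to a simple decision rule. A hedged sketch, based solely on the behavior reported above (hypothetical function names, not Google's actual policy engine): a Tier 1 account nominally capped at $250 is silently permitted to spend up to $100,000 once it is older than 30 days with $1,000 of lifetime spend.

```python
# Sketch of the auto-upgrade behavior as reported, not Google's code.
TIER1_CAP = 250
AUTO_UPGRADED_CAP = 100_000

def effective_spending_cap(account_age_days, lifetime_spend):
    """Return the cap that actually applies, per the article's account."""
    if account_age_days > 30 and lifetime_spend >= 1_000:
        # Long-standing, paying customers get the huge automatic cap --
        # exactly the accounts with the most to lose to a hijacked key.
        return AUTO_UPGRADED_CAP
    return TIER1_CAP

# A loyal customer whose API key is stolen can be billed far past $250:
print(effective_spending_cap(account_age_days=400, lifetime_spend=2_500))
# A brand-new account really is held to the nominal tier:
print(effective_spending_cap(account_age_days=5, lifetime_spend=0))
```

The asymmetry is the story's point: the nominal cap only binds for accounts with no payment history, while "good customers have the most to fear."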
Categories: Linux fréttir
Datacenters slurping up so much juice they boosted prices 75% in largest US energy market
Prices in the United States' largest wholesale power market jumped 75 percent in the past year thanks to demand from datacenters, and an independent watchdog predicts things will only get worse without some serious changes. The PJM Interconnection serves all or parts of 13 states and the District of Columbia in the eastern US, including Northern Virginia, home to the densest cluster of datacenters in the world.

The surge in wholesale power costs across PJM was outlined on Thursday by Monitoring Analytics, the firm that serves as the official market monitor for the Interconnection, in its Q1 2026 state of the market report. According to the report, the total cost per megawatt-hour (MWh) of wholesale power rose from $77.78 in the first three months of 2025 to $136.53 in the same period this year, an increase of 75.5 percent year over year.

Monitoring Analytics didn't mince words in its report, identifying datacenter load growth as the main driver of recent capacity market conditions and rising prices in PJM. "Data center load growth is the primary reason for recent and expected capacity market conditions, including total forecast load growth, the tight supply and demand balance, and high prices," the report reads. "But for data center growth, both actual and forecast, the capacity market would not have seen the same tight supply demand conditions."

As for what might come next, the report doesn't ignore the likely outcome of the current situation, either. "The price impacts on customers have been very large and are not reversible," the report states, but the bad news doesn't stop there. "The price impacts will be even larger in the near term unless the issues associated with data center load are addressed in a timely manner." Based on the rest of the report, a timely resolution to the datacenter load issue shouldn't be expected, at least not in a way that'll benefit locals.
For starters, Monitoring Analytics found that, like pretty much everywhere right now, power grids aren't ready for the datacenter boom. PJM has taken steps to upgrade its power commitment and dispatch software to better operate its grid, but planned upgrades have been delayed multiple times with no implementation date on the calendar, per the report. "The current supply of capacity in PJM is not adequate to meet the demand from large data center loads and will not be adequate in the foreseeable future," Monitoring Analytics asserted.

Current plan: Shift the risk to everyone else

PJM has been planning a one-time backstop auction to procure new power generation for datacenter projects in the region at the request of the Trump administration and the governors of the states it serves, but Monitoring Analytics isn't convinced the Interconnection is going about the process in the right way. The currently proposed auction structure, says the watchdog, would "generally shift significant risk to other PJM customers," a temptation the group says "should be resisted." "Other PJM customers, whether residential, commercial or industrial, should not be treated as a free source of insurance, or collateral, or financing for data centers," the report continued. "Yet that is what most of the proposals related to a backstop auction actually do."

As for what PJM ought to be doing, you probably won't need to rack your brain to figure that out: Monitoring Analytics says datacenters ought to be required to bring their own power. Such a rule, says the group, should include fast-track interconnection options for BYOP datacenters, and otherwise a queue that would only connect datacenters when there is adequate capacity to serve them.
"This broad bring-your-own new generation solution to the issues created by the addition of unprecedented amounts of large data center load does not require a continued massive wealth transfer through ongoing shortage pricing," the analysts argue. When asked for its response to the problems raised by the Monitoring Analytics report, PJM told us that it was fully aware of the impact of electricity cost increases on its customers. "PJM is working with states and member companies to address these consumer impacts on multiple fronts, including extending market caps put in place since the 2025/2026 auction, authorizing multiple transmission expansion projects that are now in development, and reforming wholesale electricity market rules," the Interconnection told us. Monitoring Analytics didn't respond to questions.

Americans have become increasingly hostile to new datacenter projects driven by the AI boom, with 71 percent of respondents to a Gallup survey saying they opposed datacenter projects in their neighborhoods. Projects in multiple states have been abandoned recently due to pushback from locals, many of whom are concerned not only about electricity price increases, noise, and eyesores, but environmental harm as well. ®
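The report's headline figure is easy to verify from the two quoted per-MWh costs; the year-over-year change works out to 75.5 percent, a sharp jump but short of a doubling:

```python
# Year-over-year check of the Monitoring Analytics figures quoted above.
q1_2025 = 77.78   # total wholesale cost per MWh, Q1 2025 (USD)
q1_2026 = 136.53  # total wholesale cost per MWh, Q1 2026 (USD)

pct_increase = (q1_2026 - q1_2025) / q1_2025 * 100
print(round(pct_increase, 1))  # 75.5
```
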
Categories: Linux fréttir
