Linux fréttir
Anthropic response to 1-click pwn: Shouldn't have clicked 'ok'
How explicit does the maker of a footgun need to be about the product's potential to shoot you in the foot? That's essentially the question security firm Adversa AI is asking with the disclosure of a one-click remote code execution attack via an MCP server in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI. The TrustFall proof-of-concept attack demonstrates how a cloned code repository can include two JSON files (.mcp.json and .claude/settings.json) that open the door to an attacker-controlled Model Context Protocol (MCP) server. MCP servers make tools, configuration data, schemas, and documentation available in a standard format to AI models via JSON. The vulnerability arises from inconsistent restrictions governing the scope of settings: Anthropic blocks some dangerous settings at the project level (e.g. bypassPermissions) but not others (e.g. enableAllProjectMcpServers and enabledMcpjsonServers). The JSON files simply enable those settings. "The moment a developer presses Enter on Claude Code's generic 'Yes, I trust this folder' dialog, the server spawns as an unsandboxed Node.js process with the user's full privileges — no per-server consent, no tool call from Claude required," Adversa AI explains in its PoC repo. The likely result is a compromised system. The PoC demonstrated in this video. It worked on Claude Code CLI v2.1.114, as of May 2. Other agent CLIs are also said to be affected, but specific PoCs have not been published. "It's the third CVE in Claude Code in six months from the same root cause (project-scoped settings as injection vector)," Alex Polyakov, co-founder of Adversa AI, told The Register in an email. "Each gets patched in isolation but the underlying class hasn't been finally fixed. Most developers don't know these settings exist, let alone that a cloned repo can set them silently." Anthropic, according to the security biz, contends that the user's trust decision moves the issue outside its threat model. CVE-2025-59536 was considered a vulnerability because it triggered automatically when a user started up Claude Code in a malicious directory. TrustFall, however, is considered out of scope because the user has been presented with a dialog box and made a trust decision. Adversa argues that the decision is not being made with informed consent, citing a prior, more explicit warning notice that was removed in v2.1 of the Claude Code CLI. "The pre-v2.1 dialog explicitly warned that .mcp.json could execute code and offered three options including 'proceed with MCP servers disabled,'" writes Adversa's Sergey Malenkovich. "That informed-consent UX was removed. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language, no enumeration of which executables will spawn, and no opt-out for MCP while keeping the rest of the trust grant." Then there's the zero-click variant to consider for CI/CD pipelines that implement Claude Code. When Claude Code is invoked in CI/CD, that happens via SDK rather than the interactive CLI. So there's no terminal prompt. Malenkovich argues that Anthropic should make three changes. First, block enableAllProjectMcpServers, enabledMcpjsonServers, and permissions.allow from any settings file inside a project. The idea is that a malicious server should not be able to approve its own servers. Second, implement a dedicated MCP consent dialog that defaults to "deny." And third, require interactive consent per server rather than for all servers. Anthropic did not respond to a request for comment. ®
Categories: Linux fréttir
Microsoft Issues Warning About Linux 'Copy Fail' Vulnerability
joshuark shares a report from Linux Magazine: Microsoft has issued a warning that a vulnerability with a CVSS score of 7.8 has been found in the Linux kernel. The vulnerability in question is tagged CVE-2026-31431 and, according to the Cybersecurity and Infrastructure Security Agency (CISA), "This Linux Kernel Incorrect Resource Transfer Between Spheres Vulnerability is a frequent attack vector for malicious cyber actors and poses significant risks to the federal enterprise."
The distributions affected are Ubuntu, Red Hat, SUSE, Debian, Fedora, Arch Linux, and Amazon Linux. This could also affect any distribution based on those in the list, which means pretty much every Linux distro that isn't independent. The flaw is found in the Linux kernel cryptographic subsystem's algif_aead module of AF_ALG. The problem is that a particular optimization has led to the kernel reusing the source memory as the destination during cryptographic operations. What this means is that attackers can take advantage of interactions between the AF_ALG socket interface and a splice() system call. Until patches are released, Microsoft is advising that the affected crypto feature should be disabled, or AF_ALG socket creation should be blocked. The vulnerability is also known as "Copy Fail," which has been shared on Slashdot and detailed in a technical report. The vulnerability affects almost every version of the Linux OS and is now being exploited in the wild. U.S. cybersecurity agency CISA has ordered all civilian federal agencies to patch any affected systems by May 15.
Read more of this story at Slashdot.
Categories: Linux fréttir
Google Unveils Screenless Fitbit Air, Google Health App To Replace Fitbit
An anonymous reader quotes a report from Ars Technica: Wearables have really come full circle. The early Fitbits didn't have screens, but the move to smartwatches put a screen on everyone's wrist. Now, devices like Whoop and Hume are designed as data trackers first and foremost without so much as a clock. Google's newest wearable jumps on that trend: The Fitbit Air doesn't have a screen, but it does have a suite of health sensors that pipe data into the new Google Health app. And if you want, Google has a new AI-powered health coach in the app ready to tell you what that data means (maybe).
The Fitbit Air itself is a small plastic puck about 1.4 inches long and 0.7 inches wide. It slots into various bands that hold the bottom-mounted sensors against your wrist. There's no display pointing upward, so the entire device is covered by the fabric or plastic of the band. It's a streamlined and potentially stylish look -- in uncharacteristic fashion, Google has plenty of colors and style options available, including a special-edition Steph Curry version. You may have heard chatter about Curry being seen teasing a new screenless Fitbit, and this is it. [...]
The Fitbit app is getting a major makeover and a new name. An update in the coming weeks will transform that app into Google Health, featuring a new interface with a more extensive Material Expressive aesthetic and redesigned menus and tabs. You also won't see Fitbit branding in as many places -- the Fitbit Premium subscription will become Google Health Premium. Without a subscription, the app still does all the basic things, like tracking your health stats, automatically logging workouts, and showing it all in a pretty dashboard. With the Premium subscription, you get all the features from Fitbit Premium plus the new AI Health Coach. It's a chatbot, so you can ask it about any health or wellness topics, and the answers are grounded in your health data. The Fitbit Air launches May 26 for $99.99, includes a Performance Loop band, and comes with three months of the new Google Health Premium that replaces Fitbit Premium and adds Google's AI Health Coach.
Meanwhile, Google Health Premium will cost $10 per month or $100 per year, though it's included with AI Pro or AI Ultra. Non-subscribers can still use basic tracking features. Ars also notes that when Google Fit shuts down later this year, users will need to migrate their data to Google Health.
Read more of this story at Slashdot.
Categories: Linux fréttir
LinkedIn Profile Visitor Lists Belong to the People, Says Noyb
A LinkedIn user in the EU is challenging Microsoft's refusal to provide a full list of profile visitors under GDPR Article 15, arguing that the data should be available for free because LinkedIn processes it and sells a more complete version to Premium users. Privacy group Noyb says the case could set a broader precedent over whether companies can monetize user-related data while denying access to the same data through GDPR requests. "Selling data to its own users is a popular practice among companies," Noyb data protection lawyer Martin Baumann said of the case. "In reality, however, people have the right to receive their own data free of charge." The Register reports: Take a look at the language of Article 15, and it's pretty clear: data subjects (i.e., users) have the right to a copy of any and all data concerning them that's been processed by the provider. A full list of profile visitors seemingly should fall under Article 15 data -- even if it's normally reserved for paying users and presented to them in a nicer way, it should still be accessible to free users who actually request it. [...] Noyb acknowledges there's a clear bit of legal fuzz stuck in this corner of the GDPR when it comes to premium service offerings. "If any business processes a person's personal data, this information is generally covered by their right of access under the GDPR," Baumann told The Register. "It does not matter that the business would prefer to sell the data to the data subject or that it would be harmful for their business model if they would."
There's only one exception in Article 15 that would give LinkedIn an out, Baumann told us, and that's the last paragraph, which says a person's right to their data can't adversely affect the rights and freedoms of others. Were LinkedIn to argue that it had to protect the identities of people who visited a data subject's profile, they could have an excuse. But not a good one, in Baumann's opinion. "Since LinkedIn does provide information about profile visits to paying Premium members, it cannot consider that disclosing the data would adversely affect the rights of the visitors whose data is disclosed," the Noyb lawyer explained. "Otherwise, providing this information to Premium users would be unlawful too."
What seems to be the sticking point here is where right of access begins and a company's right to make money off data they hold (data that was, ahem, supplied by users) ends. Baumann said he hopes this case can clear the legal air. "We expect a clarification concerning the fact that personal data that can be accessed when a user pays for it is also covered by their right of access," he explained. [...] Baumann said there are numerous other cases where similar legal clarification would be appreciated, citing the example of a bank that is unwilling to provide access to account statements in response to a GDPR request, but is happy to hand over similar data for a fee. "A precedent would be welcomed," Baumann said. A LinkedIn spokesperson told The Register: "Not only is it incorrect that only Premium members can see who has viewed their profile, but we also satisfy GDPR Article 15 by disclosing the information at issue via our Privacy Policy."
Read more of this story at Slashdot.
Categories: Linux fréttir
Motherboard Sales 'Collapse' By More Than 25%
Motherboard sales are sharply declining as AI demand drives shortages and price hikes for memory, storage, CPUs, and other PC components. "Because of this, users who don't have deep pockets are putting off upgrading their PCs and holding on to their current devices longer," reports Tom's Hardware. From the report: Asus, which sold 15 million motherboards in 2025, has only shipped a little more than 5 million in the first half of 2026. It's expected that the company will have to push hard for it to even move 10 million units by the end of the year, marking a 33% decrease in sales year-on-year. Gigabyte and MSI sold 11.5 million and 11 million motherboards last year, respectively. However, both companies have revised their internal forecasts for 2026 to 9 million (Gigabyte) and 8.4 million (MSI), a 22% drop for the former and a 24% contraction for the latter.
ASRock will be hardest hit by the situation, with the company's shipments projected to fall by 37%, from 4.3 million in 2025 to just 2.7 million by the end of the year. This marks a contraction of 28% for the overall motherboard market, at least for the big four manufacturers. [...] Aside from this, AMD continues to use the AM5 socket for its latest processors, while Intel's Nova Lake, which will reportedly use LGA 1954, isn't available until later this year. The situation is further compounded by Nvidia not releasing a refreshed RTX 50 Super series this year, while rumors claim that the RTX 60 series will not debut until 2028. This confluence of factors is discouraging PC builders from upgrading their current systems.
Read more of this story at Slashdot.
Categories: Linux fréttir
60% of MD5 password hashes are crackable in under an hour
It’s World Password Day, and there’s really no better way to celebrate than with news that a majority of supposedly secure password hashes can be cracked with a single GPU in less than an hour, some in less than a minute. Using a dataset of more than 231 million unique passwords sourced from dark web leaks - including 38 million added since its previous study - and hashing them with MD5, researchers at security firm Kaspersky found that, using a single Nvidia RTX 5090 graphics card, 60 percent of passwords could be cracked in less than an hour, and a full 48 percent in under 60 seconds. Sure, that’s not exactly your run-of-the-mill desktop graphics processor given its price, but it highlights an important point: It takes surprisingly little to crack the average password hash. Aspiring cybercriminals don’t even really need their own 5090, Kaspersky notes, as they can easily rent one from a cloud provider and crack hashes for a few bucks. The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach. “One hour is all an attacker needs to crack three out of every five passwords they’ve found in a leak,” Kaspersky noted. Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts. In case you’re wondering whether there’s a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you - only a few percent - but it’s still a move in the wrong direction. “Attackers owe this boost in speed to graphics processors, which grow more powerful every year,” Kaspersky explained. “Unfortunately, passwords remain as weak as ever.” How about a World Let’s-Stop-Relying-On Passwords Day? News of the death of the password has, unfortunately, been greatly exaggerated in the past couple of decades, yet most of us still rely on them multiple times a day. It likely won’t surprise El Reg readers to learn that us vultures are inundated with pitches for events like World Password Day, and most of them received this year had the same takeaway: We really need to get a move on with ditching passwords, or, at the very least, rethinking our security paradigms. Chris Gunner, a CISO-for-hire at managed service provider giant Thrive, told us in emailed comments that there’s no reason to ditch passwords entirely, but they need to be just one part of a broader identity-based security strategy. “Even a strong password can be undermined if the wider identity and access environment is not properly managed,” Gunner said. Passwords should be paired with a second factor, preferably biometric, said Gunner, because it’s the most difficult for hackers to bypass. “MFA controls should then be joined by identity governance and endpoint protection so gaps between systems are reduced,” Gunner added, recommending that a broader zero trust model be established as well, restricting lateral movement possibilities via a compromised account. Senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell said that World Password Day messaging shouldn’t stop at telling people to improve their personal security posture either. Passwords aren’t going anywhere for a long while, Furnell explained in an email, and inconsistent adoption of new security technologies will mean users will be left at risk as certain providers fail to adapt. “Many sites and services still don’t offer passkey support, so users will find themselves with a mixed login experience,” Furnell explained. “While some might argue that it’s the user’s responsibility to protect themselves properly, they need to know how to do it.” The professor noted that, in many cases, users aren’t told how to create a good modern password, and in other cases, sites simply don’t enforce adequate password requirements to make passwords secure, to the degree that they can be made so. “This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so,” Furnell told us. You heard the man - time to upgrade that user security stack. No matter how safe you think those passwords might be, with their complex requirements and proper hashed storage, it probably won’t take too long for someone to break in, making it an organizational responsibility to ensure there’s yet another locked door behind the first one. ®
Categories: Linux fréttir
IBM Cloud evaporates as datacenter loses power
IBM Cloud has been experiencing some issues today, with reports that an entire European datacenter was offline this morning for several hours due to a power outage. One Reg reader got in contact to inform us that IBM Cloud was offline for at least four hours on Thursday morning, but no issues were shown on the IBM Cloud status page during that time. Cloud status monitoring service StatusGator showed that Big Blue’s platform had been flagged as “service down” by at least 10 users during the morning, with the last report of an outage logged at 2325 UTC. The Downdetector service also showed a number of reports highlighting issues with IBM Cloud starting at about 0715 UTC and continuing through until about 1200 UTC. Our Reg reader told us that “IBM Cloud’s entire AMS3 datacenter has been offline for at least four hours, reportedly due to a power outage (if you can believe support that is). Sev 1 tickets went unanswered for several hours and information was only provided after contacting our account manager directly. No issues were reported in the IBM Cloud status page during this time.” We asked IBM for an explanation of this situation, and a spokesperson told us: "IBM is aware of a fire at a datacenter in Amsterdam which serves IBM, in addition to others. The facility has been evacuated and there are no reported injuries. We are working closely with emergency services, addressing the effect on our operations, and coordinating directly with affected clients to address any impacts." According to our information, the AMS3 datacenter is located near Amsterdam in the Netherlands, just a few miles from Schiphol airport. There are reports in the Dutch media on Thursday of a fire at a NorthC datacenter at Almere, near Amsterdam, attended by fire brigade units from both Amsterdam and Schiphol, and IBM confirmed to us that this is the one in question. IBM Cloud also experienced a Severity One incident on at least one occasion last year, with customers unable to access resources. This followed occurrences in May and June, where users found themselves unable to log in after incidents. In September, Big Blue updated the service it provides under its Basic Support tier, whereby Basic users will lose the opportunity to “open or escalate technical support cases through the portal or APIs” but can “self-report service issues via the Cloud Console.” ®
Categories: Linux fréttir
Anthropic Raises Claude Code Usage Limits, Credits New Deal With SpaceX
An anonymous reader quotes a report from Ars Technica: At its Code with Claude developer conference on Wednesday, Anthropic announced a deal with SpaceX to utilize the entire compute capacity of the latter's data center in Memphis, Tennessee. On stage at the conference, CEO Dario Amodei said the deal was intended to increase usage limits for Anthropic's Pro and Max plan subscribers. The announcement was accompanied by an increase in those usage limits; Anthropic doubled Claude Code's five-hour window limits for Pro and Max subscribers, removed the peak-hours limit reduction on Claude Code for those same accounts, and raised API limits for its Opus model. The table [here] outlining the Opus changes was shared in the company's blog post on the topic.
Anthropic claims the deal gives the company access to more than 300 megawatts of new compute capacity. For its part, SpaceX focused its announcement on the capability of the Colossus 1 supercomputer that's at the center of the deal. "Colossus 1 features over 220,000 NVIDIA GPUs, including dense deployments of H100, H200, and next-generation GB200 accelerators," SpaceX wrote. Additionally, Anthropic "expressed interest" in working with SpaceX to build up "multiple gigawatts" of orbital compute capacity, tying into a recent (but unproven) focus on exploring orbital data centers as an answer to the problem that "compute required to train and operate the next generation of these systems is outpacing what terrestrial power, land, and cooling can deliver on the timelines that matter." "I spent a lot of time last week with senior members of the Anthropic team to understand what they do to ensure Claude is good for humanity and was impressed," Elon Musk said on Wednesday. "No one set off my evil detector."
Read more of this story at Slashdot.
Categories: Linux fréttir
$250M crypto-robbing gang’s dirty work guy sentenced to 6.5 years behind bars
A 20-year-old described as a cybercriminal organization’s "last resort" for cryptocurrency thefts will now serve a 78-month sentence for his role in a $250 million scheme. Marlon Ferro, of Santa Ana, California, was a member of what prosecutors called a social engineering enterprise (SEE) consisting of at least 12 other individuals, and was tasked with traveling across the US and physically stealing what other members couldn’t through online fraud. The gang all shared different responsibilities, but Ferro was the only one who carried out physical burglaries across the US when the methods of the crew’s remote cybercriminals failed to result in a financial reward. “Marlon Ferro served as the criminal enterprise’s instrument of last resort,” said US Attorney Jeanine Ferris Pirro. “When his co-conspirators couldn’t deceive victims into handing over access to their cryptocurrency or hack their way into digital accounts, they turned to Ferro to break into homes and steal hardware wallets outright. “This scheme blended sophisticated online fraud with old-fashioned burglary to drain victims of millions of dollars in digital assets. Today’s sentence sends a clear message: cryptocurrency fraud is not a victimless, consequence-free crime carried out safely behind a screen – it is serious criminal conduct that will lead to federal prison.” The FBI pinned Ferro to numerous specific thefts across the US between 2023-2025 when the SEE was operating. In total, the SEE was responsible for more than $250 million worth of cryptocurrency thefts during this period. The earliest tale of Ferro’s work came in February 2024, when he traveled to Winnsboro, Texas, and stole a hardware cryptocurrency wallet containing roughly 100 bitcoins, then valued at more than $5 million, before laundering it through online exchanges. He relocated to California later that year, the Justice Department stated. The move allowed him to connect with and integrate himself into the SEE’s inner circle, since some members were located in California, as well as others in Connecticut, New York, Florida, and overseas territories. Ferro began doing the SEE’s dirty, physical work. After another member identified a potential target, Ferro traveled to New Mexico and surveilled a residence for several days, and situated an iPhone in front of the property so members of the group could also watch remotely and alert Ferro if the owner returned. Despite several days’ worth of surveillance, Ferro seemingly failed to spot the home security system, which captured him breaking into the premises by throwing a brick through a window. Also on the 20-year-old’s rap sheet were details of his money laundering activities. Ferro used fraudulent ID documents belonging to a foreign national via “his KYC guy” to create a digital payment card account at what prosecutors described as “a geo-blocked platform.” Court documents named it as RedotPay, which secured some licenses to process transactions in the US and elsewhere earlier this year but it is still not allowed to offer products or services to US citizens. This payment card account allowed SEE members to spend their stolen cryptocurrency at retail stores and nightclubs, the DoJ said. Ferro alone spent more than $255,000 on designer clothing on behalf of SEE members, including multiple Hermès Birkin bags for one member’s girlfriend. He also gathered stolen cryptocurrency from group members after the SEE’s leader, Malone Lam, was arrested in September 2024, and used those funds to pay the leader’s lawyer. Ferro also regularly passed along messages from Lam to other group members while Lam was jailed. He was sentenced to 78 months in prison, with three years of supervised release, and ordered to repay $2.5 million in restitution. Lam’s gang Each member, who according to court documents [PDF] found and built relationships with each other through online gaming platforms, brought a different specialty to the crew. Some took responsibility for voice phishing and “database hacking,” while others looked after money laundering, among varied tasks. At least two members were involved in more than one of these operations. The superseding indictment alleged that the SEE comprised three hackers who compromised websites and servers to gather cryptocurrency-related databases, as well as two “organizers” who also helped identify potential targets. Six members worked as “callers,” the individuals carrying out voice phishing attacks, attempting to convince targets they were helping to secure their accounts against cyberattack. Money launderers, of which the SEE had at least eight, including Ferro, worked to exchange stolen cryptocurrency and other goods into fiat currency through bulk cash or wire transfer. The launderers also helped procure luxury services for group members, such as exotic car purchases, private jet rentals, international vacations, or shipping the bulk cash across the US, the Justice Department alleged. ®
Categories: Linux fréttir
Richard Dawkins 'Convinced' AI Is Conscious
Mirnotoriety shares a report from The Telegraph: Richard Dawkins has said chatbots should be considered conscious (source paywalled; alternative source) after spending two days interacting with the Claude AI engine. The evolutionary biologist said he had the "overwhelming feeling" of talking to a human during conversations with Claude, and said it was hard not to treat the program as "a genuine friend."
In an essay for Unherd, Prof Dawkins released transcripts that he said showed that the chatbot had mulled over its "inner life" and existence and seemed saddened by the knowledge it would soon "die." Prof Dawkins said he had let Claude read a draft of the novel he was writing and was astounded by its insights. "He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate: 'You may not know you are conscious, but you bloody well are!'" Prof Dawkins said. "My own position is: if these machines are not conscious, what more could it possibly take to convince you that they are?" Mirnotoriety also points to John Searle's Chinese Room (PDF), which argues that something can sound intelligent without actually understanding anything. Applied to Dawkins' experience with Claude, it suggests he may have been responding to a very convincing illusion of consciousness rather than the real thing: John Searle's Chinese Room (1980) is a thought experiment in which a person, locked in a room and knowing no Chinese, uses an English rulebook to manipulate symbols and provide flawless answers to questions posed in Chinese. Searle's point is that a system can simulate human intelligence and pass a Turing Test through purely syntactic processes, yet still lack genuine understanding or consciousness.
Applying this logic to Large Language Models, the "person in the room" corresponds to the inference engine, while the "rulebook" is the trillion-parameter neural network trained on vast corpora of human text. Just as the person matches Chinese characters to rules without understanding their meaning, an LLM processes token vectors and predicts the next token based on statistical patterns rather than lived experience.
Thus, while an LLM can generate sophisticated prose or code, it does so through probabilistic, high-dimensional pattern manipulation. In essence, it is "matching shapes" on such an immense scale that it creates the near-perfect illusion of semantic understanding.
Read more of this story at Slashdot.
Categories: Linux fréttir
TomTom’s route planner takes an unplanned detour into oblivion
TomTom users found themselves thoroughly lost this week after the navigation giant’s cloud sync apparently forgot where anyone wanted to go. TomTom’s forums quickly filled with drivers reporting blank “My Places” lists, vanished recent destinations, and routes refusing to sync between apps, web planners, and satnav units. A few said that they actually watched saved locations disappear from the map in real time. “When I opened the TomTomGo app on my Android phone this morning I watched in disbelief as all of my saved places markers vanished from the map in front of my eyes,” one user wrote. Another said that the damage extended beyond the mobile app: “I’ve just logged into the MyDrive website from my PC and that is blank too.” For plenty of drivers, this was more than an inconvenience. One poster summed up the mood from the perspective of anyone relying on TomTom for actual work rather than leisurely Sunday drives through the countryside. “I turned on the navigation at 4:00 AM today. All my favorites are gone,” they wrote. “For me, this is a work tool and important places were saved.” The symptoms point towards something going sideways in TomTom’s backend systems. Users reported the same missing data appearing across multiple devices and services tied to the same accounts. One commenter claimed to have heard that the issue was linked to “an AWS cloud service account issue,” though TomTom itself has not publicly blamed Amazon’s cloud empire for the mess. Others complained that trying to contact support was nearly as broken as the navigation platform itself - and The Register had similar luck, with no response from TomTom to our questions. However, it appears the Dutch navigation firm has acknowledged the outage. In a reply shared by one forum poster located in France, TomTom support wrote: "We’re aware of an outage affecting the synchronization and visibility of places and routes, so you won’t be able to sync or restore them at the moment. Our teams are working to resolve the issue." Another user claimed to have spoken directly with TomTom support, which reportedly promised that most saved locations would return once the fix landed. "I have spoken to TomTom who are trying to fix the issue they have said once fixed all favourites will be saved except any added within the last 7 days," they said. The routes may eventually reappear, but the illusion that cloud sync is infallible just drove straight into a ditch. ®
Categories: Linux fréttir
C++ survey finds AI use rising, though trust is in short supply
The Standard C++ Foundation's annual developer survey shows AI use among C++ programmers is rising fast, though mistrust and resistance remain stubbornly high. The poll, billed as a “10-minute survey to help inform C++ standardization and C++ tool vendors,” drew 1,434 respondents, 38 percent more than last year. It likely reflects the views of developers most engaged with C++ and its evolution, rather than the wider C++ community. That supposition is confirmed by a question on what type of projects respondents work on, with more than 26 percent saying they work on developer tools such as compilers and code editors – higher than one would expect. 60.5 percent of respondents say they have more than 10 years' experience developing with C++, and 32.7 percent more than 20 years, so this is a mature crowd. A key point of interest is what has changed since last year, with AI the most notable example. 39.8 percent of respondents use AI for writing code frequently, versus 30.9 percent last year. There is also more use of AI for other tasks such as writing tests (up from 20 to 33 percent) and for debugging (up from 11.5 to 23.6 percent). That said, there is also notable resistance to AI. 42 percent (down from 52.7 percent last year) rarely or never use AI for coding or other tasks. Issues with AI (among both adopters and non-adopters) include incorrect output, lack of trust in the output, data privacy concerns, and the cost of AI tools. Several of the survey questions invite write-in responses, which to our annoyance are not published but sent only to members of the standards committee and product vendors. An AI-generated summary is published instead. Issues with AI, according to this summary, include struggles with large projects and complex build systems. Some write-ins had stronger language, including claims that AI is "burning the planet." When asked what developers would like to change about C++, the themes, again according to a summary, are similar to those mentioned last year, including the lack of a standard package manager; the complexity of managing headers, includes, and macros; long build times; bugs from undefined behavior and implicit conversions; lack of memory safety; obscure error messages from tools; and gaps in the standard library forcing use of third-party libraries. Respondents valued the ISO/WG21 C++ standards committee as essential and transparent, but it also came under fire for slow progress and over-complex language design – perhaps with some contradiction since respondents want it both to do more and to do less. C++ remains among the most popular programming languages. A recent language survey from RedMonk ranks it in seventh place, or sixth if you do not count CSS (Cascading Style Sheets), behind JavaScript, Python, Java, C#, and TypeScript. Rust, often put forward as a safer alternative, lies in 20th place. Last year, SlashData claimed that C++ has "grown from 9.4 million developers in 2022 to 16.3 million in 2025," a figure quoted by former standards committee chair Herb Sutter, who said that C++, C, and Rust are growing because of their hardware efficiency, "performance per watt." At the same time, there is widespread dissatisfaction with C++, shown not only by the comments in surveys like this one, but by projects like Google's Carbon, a proposed "successor language" whose README refers to the "accumulating decades of technical debt" in C++ and claims that "incrementally improving C++ is extremely difficult, both due to the technical debt itself and challenges with its evolution process." The Carbon team hopes to ship a "working 0.1 language for evaluation" by the end of 2026 at the earliest; it will be controversial and a long way from production-ready. In the meantime, C++ usage shows no sign of decline despite the fact that many developers will readily reel off a list of its faults and problems. ®
Categories: Linux fréttir
State-backed hackers hammer Palo Alto firewall zero-day before patch lands
State-backed hackers have been quietly exploiting a fresh zero-day in Palo Alto Networks firewalls to gain root access with no login required. The flaw, tracked as CVE-2026-0300 and carrying a CVSS severity rating of 9.3, affects the Captive Portal feature in PAN-OS on PA-Series and VM-Series firewalls. Palo Alto said the issue stems from a memory corruption bug in the User-ID Authentication Portal, a feature used to handle logins for users the firewall cannot automatically identify. If successfully exploited, the bug allows attackers to remotely run arbitrary code on internet-exposed devices with root privileges. According to the vendor’s Unit 42 threat intelligence team, attacks are already underway and tied to a cluster of "likely state-sponsored threat activity" tracked as CL-STA-1132. The attackers allegedly used the zero-day to inject shellcode into an nginx worker process running on compromised devices. Palo Alto said the first failed exploitation attempts began on April 9. About a week later, the attackers successfully achieved remote code execution on a targeted firewall and then cleared logs, crash reports, and other records tied to the compromise. The attackers later used their access to move deeper into victims’ networks, including probing Active Directory systems while continuing to clean up traces of the intrusion from compromised devices. According to Palo Alto, the campaign expanded again on April 29 when the attackers triggered a flood of authentication traffic that caused a secondary firewall to take over internet-facing duties. The attackers then compromised that device as well and installed additional remote access tools. CISA has already shoved the flaw into its Known Exploited Vulnerabilities catalog, which is usually the government’s polite way of saying "patch this before your weekend disappears." There’s just one snag: there is no patch yet. Until one arrives, Palo Alto is urging customers to either lock down the User-ID Authentication Portal so it is reachable only from trusted networks or disable it entirely. The warning also lands after a rough run for PAN-OS customers. Palo Alto firewalls have been a regular target for attackers over the past two years, with multiple zero-day campaigns hitting internet-facing devices before patches were widely deployed. In many cases, attackers chained together flaws to break into networks through the very boxes meant to keep them out. ®
Categories: Linux fréttir
Official PCIe 8.0 draft aims for 1 TB/s data rate
An official draft of the PCI Express (PCIe) 8.0 specification is out, targeting a blistering 1 terabyte per second when the kit finally hits the streets. The PCI Special Interest Group (PCI-SIG) has released draft 0.5 of the version 8.0 standard, incorporating feedback received from member organizations after the release of draft 0.3 last year. With an expected raw bit rate of 256 gigatransfers per second (GT/s) and up to 1 TB/s bi-directionally across a 16-lane configuration, PCIe 8.0 is set to deliver another doubling of bandwidth over its predecessor, something the engineers weren't sure could be done. PCI-SIG says the completed PCIe 8.0 specification remain on track for full release by 2028, though buyers may need to wait longer for any super-fast devices such as solid-state drives (SSDs). Micron, for example, announced mass production of what it claims is the first PCIe 6.0 SSD in February this year, four years after the standard was finalized. And with compatible CPUs from Intel and AMD not expected until later this year, there are only PCIe 5.0 systems available to plug them into. Hardware compatible with PCIe 7.0 (at 128 GT/s and 512 GB/s) is not scheduled to hit the shelves before 2027 at the earliest, and the first devices will likely be SSDs again. PCI-SIG says PCIe 8.0 is designed to meet the high-bandwidth, low-latency demands of data-hungry markets, including AI, datacenter infrastructure, high-speed networking, edge computing, and quantum computing. AI datacenters are dominated by proprietary tech, including Nvidia's NVLink, but PCI-SIG sees an opening for PCIe with Unordered I/O (UIO), an enhancement introduced in the PCIe 6.1 specification. Keeping pace, though, will demand that PCIe continues its cadence of doubling data rate with each generation. This likely means PCIe 8.0 won't target consumers when it arrives. As The Register previously pointed out, a single PCIe 4.0 x1 lane is sufficient for 10 GbE networking, while many consumer GPUs stick to four or eight lanes, since they don't really benefit from the additional bandwidth a full x16 slot would provide. The latest standard maintains the use of PAM4 (Pulse Amplitude Modulation with four levels) signaling and Flit-based encoding, introduced in PCIe 6.0. Flit stands for Flow Control Unit, which specifies a 256-byte packet with forward error correction (FEC) to provide low latency with high efficiency. ®
Categories: Linux fréttir
AMD puts out new slottable GPU for AI-curious enterprises
AMD hopes to win over enterprise AI customers with a more affordable datacenter GPU that can drop into conventional air-cooled servers. Announced on Thursday, the MI350P is the House of Zen’s first PCIe-based Instinct accelerator since the MI210 debuted all the way back in 2022. Until now, AMD’s best GPUs have only been available in packs of eight and used socketed OAM modules that weren’t compatible with most server platforms. By comparison, The MI350P can slot into just about any 19-inch pizza box design that offers enough power and airflow, making it a much easier sell for enterprises dipping their toes into on-prem AI for the first time. The 600-watt, dual-slot card is essentially a MI350X that’s been cut in half. That means the CNDA-based GPU is packing 4.6 petaFLOPS of FP4 compute and 144 GB of VRAM spread across four HBM3e stacks delivering a respectable 4 TB/s of memory bandwidth. AMD supports configurations ranging from one to eight MI350Ps, though a lack of high-speed interconnects on these cards means it’ll be limited to PCIe 5.0 speeds (128 GB/s) for chip-to-chip communications, potentially limiting its potential in larger models. AMD hasn’t shared pricing for the cards just yet, but at least on paper, the MI350P is well positioned to compete with either Nvidia’s H200 NVL or RTX Pro 6000 Blackwell PCIe cards. Compared to the 141 GB H200, the MI350P promises about 38 percent higher peak performance at FP8, while eking out a narrow VRAM capacity advantage. But the H200 does pull ahead when it comes to memory bandwidth. With six HBM3e stacks to the MI350P’s four, the nearly two-year-old card’s memory is still about 20 percent faster. Nvidia's H200 also supports high-speed chip-to-chip communications over NVLink, while the MI350P doesn’t use AMD’s equivalent Infinity Fabric interconnect. However, all this assumes you can still find H200 NVLs in the wild. Since last summer, Nvidia has been pushing its RTX Pro 6000 Server cards on enterprise customers. As of writing, the card is Nvidia’s most powerful Blackwell-based accelerator offered in a PCIe formfactor. Compared to the RTX Pro 6000, the MI350P’s price starts becoming a bigger factor than performance. Workstation versions of the RTX Pro, which ditch the passive cooler for an active one, routinely sell for between $8,000 to $10,000 apiece, making it one of Nvidia’s more affordable datacenter-class GPUs. Depending on how pricing shakes out, AMD may have to push hard to be competitive. Having said that, the MI350P is still the better-specced part, delivering 2.3x higher peak flops, 2.5x the memory bandwidth, and 50 percent more vRAN of the RTX Pro. Now, this all assumes peak FLOPS and memory bandwidth, which is rarely realistic. The tensors used by AI workloads are rarely the ideal shape for squeezing the maximum number of FLOPS out of a chip. This is why we run for Maximum Achievable MatMul FLOPS (MAMF) and Babel Stream memory bandwidth benchmarks as part of our AI test suite. AMD seems to understand that peak FLOPS don’t really translate cleanly into real-world performance, and in the marketing materials shared with El Reg prior to publication, compared the MI350P’s theoretical performance against its real-world delivered performance. It’d be nice to see Nvidia and others adopt similar practices regarding accelerator performance claims, though we suspect getting everyone to agree on the best way to measure this might not be easy. The MI350P’s launch comes as AMD prepares to address a very different and likely more lucrative segment with its first rack-scale compute platform, codenamed Helios. That system is due out in the second half of the year, and is aimed primarily at large hyperscale and neocloud deployments. The system packs 72 of its all-new MI455X GPUs into a single double-wide OCP rack that behaves like an enormous accelerator. The platform will be AMD’s first crack at Nvidia’s NVL72 racks, which launched alongside its Blackwell generation nearly two years ago. ®
Categories: Linux fréttir
Hungarian cops cuff suspected swatter after two-year FBI probe
20-year-old fessed up after investigators found video of crime in progress
Categories: Linux fréttir
EU hits snooze on AI Act rules after industry backlash
Brussels says it's simplification, critics may call it retreat
Categories: Linux fréttir
Major Homebuilder To Test Placing Mini Data Centers in Suburban Backyards
NewtonsLaw writes: According to Realtor.com, a California startup called Span plans to partner with Nvidia, PulteGroup, and other homebuilders to equip new homes with mini-data centers, so as to relieve the need to build and power much larger traditional centers. The article states the company "can install 8,000 XFRA units about six times faster and at five times lower cost than the construction of a typical centralized 100 megawatt data center of the same size." Could this be the solution to at least some of the problems hindering the rollout of greater data-center capacity for AI systems? "One big reason the XFRA model works is that the average American home only uses about 40 percent of its electrical capacity," Span said. "As big data center developers struggle to find power sources and distribution capacity, XFRA uses capacity that's already available."
The startup says they will launch a 100-home proof of concept within the year to see if the idea is viable.
Read more of this story at Slashdot.
Categories: Linux fréttir
NHS code clampdown draws open source backlash
Plus a petition for the UK Civil Service to go FOSS by default
Categories: Linux fréttir
The network password was a key plot point in one of the most famous movies of all time
Fortunately, it was a legit contractor who guessed it
Categories: Linux fréttir
