The Register
Mozilla boasts Mythos boosted Firefox bug cull
Mozilla fixed 423 Firefox security bugs in April, a repair rate more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Anthropic's ballyhooed Mythos Preview model found 271 of these in Firefox 150.

Now, a trio of technical types has come forward to provide a bit more detail about what Mythos (and its less storied sibling Opus 4.6) actually found. But they also highlight something that may matter more than the model: the agentic harness – the middleware mediating between AI and the end user.

Brian Grinstead, Firefox distinguished engineer, Christian Holler, Firefox tech lead, and Frederik Braun, head of the Firefox security team, observe that over the past few months, AI-generated security reports have gone from slop to rather more tasty. They attribute the transformation to better models and the development of better ways of harnessing those models – steering them in a way that increases the ratio of signal to noise.

But they also appear to be aware that there's some skepticism in the security community about Mythos. So they've decided to publicize selected wins in an effort to encourage others to jump aboard the AI bug remediation train.

"Ordinarily we keep detailed bug reports private for several months after shipping fixes and issuing security advisories, largely as a precaution to protect any users who, for whatever reason, were slow to update to the latest version of Firefox," they said. "Given the extraordinary level of interest in this topic and the urgency of action needed throughout the software ecosystem, we’ve made the calculated decision to unhide a small sample of the reports behind the fixes we recently shipped."

The post links to a dozen Firefox bugs with varying degrees of severity.
The list includes, for example, a 20-year-old heap use-after-free bug (high severity) that a web page could trigger using the XSLTProcessor DOM API without any user interaction. Many of these bugs are sandbox escapes, they note, which are difficult to find using techniques like fuzzing. AI analysis, they say, helps provide broader security coverage. And they add that it has helped validate prior browser hardening work designed to prevent prototype pollution attacks – audit logs showed AI models making unsuccessful exploitation attempts using this technique.

Following Anthropic's announcement of Project Glasswing – a program for companies to gain early access to Mythos because it's touted as too dangerous for public release – security experts expressed skepticism.

For example, Davi Ottenheimer, president of security consultancy flyingpenguin, wrote in an April 13 blog post, "The supposedly huge Anthropic 'step change' appears to be little more than a rounding error. The threat narrative so far appears to be ALL marketing and no real results. The Glasswing consortium is regulatory capture dressed up poorly as restraint."

He subsequently ran a test in which he strapped Anthropic's lesser models Sonnet 4.6 and Haiku 4.5 into a harness called Wirken with an auditing skill called Lyrik. The result was eight findings in two minutes at a cost of about $0.75, Ottenheimer claims, noting that two of the eight matched bugs Mythos had identified.

Other security folk have also reported that bug hunting and exploit development can be quite productive with off-the-shelf models like Opus 4.6, which among other virtues costs about a fifth as much as Mythos.

In an email to The Register, Ottenheimer said, "There's a fundamental philosophical failure in the Mozilla post. A reading and a measurement are not the same thing. I don't see a measurement, but they seem to want us to believe we're looking at one.

"When they give us the 'behind the scenes math' it's circular, a trick.
'Mythos found 271 bugs' is what Mythos found, not what other tools could not find against the same code. Why leave it as an assumption if it can be proven?"

Ottenheimer said Mozilla advocates that every project adopt a similar approach without proving the merits of that approach. "It's like saying if you don't drink Coca-Cola, you can't run a mile under six minutes, because that's what a guy sponsored by Coca-Cola just did," he said. "The bar moves on rhetoric, marketing, not proper evidence. That is the capture crew again."

He notes that the merits of Mythos might be more convincing if Mozilla had reported they couldn't do this work without Mythos. And since they're not saying that, he suggests, it's worth asking why there's no transparent comparison of Mythos to other models. He points to Mozilla's admission that Opus 4.6 was already identifying "an impressive amount of previously unknown vulnerabilities."

"Mozilla never quantifies what Opus 4.6 [did] before saying what Mythos added," he said. "So 271 attributed to Mythos doesn't fit the analysis. And there's a deeper reveal when they say 'we dramatically improved our techniques for harnessing these models.' The improvement may be entirely in the harness, not as much in the model. This maps to my own experience. A nail gun has advantages over the hammer, yet without being in the right hands the outputs are as bad or worse." ®
Categories: Linux fréttir
Dyna Software's AI assistant promises to massage your toughest ServiceNow configs
If you're a ServiceNow customer, you can stop waiting for developers to help you on a project. Dyna Software, an eight-year-old ServiceNow Elite Build Partner based in Calgary, Alberta, has launched Platform Copilot, billed as the first agentic AI tool that lets business users, not just developers, configure and build on the ServiceNow platform using natural language.

Dyna Software CEO Ron Browning showed it off at ServiceNow’s Knowledge 2026 event in Las Vegas this week, telling The Register that he draws a sharp distinction between what the platform vendor offers and what his company built. "A lot of things today are still focused on enabling developers, as opposed to really enabling business," Browning said. He noted that most AI-assisted tools in the ServiceNow ecosystem still require a developer in the loop to translate business requirements into technical configurations – the bottleneck Dyna Software set out to eliminate.

Platform Copilot connects to a customer's ServiceNow development instance and reads the existing schema and configuration details. When a business analyst or process consultant describes what they want in plain language, or uploads an image of a legacy form, the tool generates a wireframe model, validates the proposed changes against the instance's actual environment, and then builds the configuration. Browning said the tool can handle roughly 80 percent of the enhancement work that typically flows through ServiceNow development teams.

“The goal that I really have, to be honest, is a situation where you could have a business person literally just fill in a form that says ‘I need this. I want it to be this. Here are my parameters,’ hit send. And that just goes directly into Platform Copilot,” he said. “And then basically, the next step, you've got it built, and you're ready to move it over.
And technical folks didn't really have to be involved at all.”

The "instance-aware" design, meaning it is built for the user's own ServiceNow instance, is central to Dyna Software's pitch. Generic AI coding tools like Anthropic's Claude or OpenAI's Codex can generate ServiceNow configurations, but they produce generic output unless a developer manually supplies environment-specific parameters, Browning said. Platform Copilot pulls those parameters automatically, which Browning said prevents the kind of conflicts and technical debt that can otherwise plague large ServiceNow deployments.

He pointed to an early use case with a partner in Australia that needed to migrate more than 200 catalog items from a legacy system into ServiceNow. Under a traditional approach, that project could stretch close to a year. With Platform Copilot, a business analyst uploaded images of the legacy forms, reviewed generated wireframes in minutes, made adjustments, and pushed production-ready configurations without developer intervention.

Government agencies represent another target market. Browning described a common scenario: a backlog of PDF forms that need to be digitized into a ServiceNow portal, with estimated timelines stretching to two years. Platform Copilot compresses that timeline by automating the dozens of discrete configuration changes that even a simple form requires across the ServiceNow platform.

The company built Platform Copilot on top of its existing flagship product, Guardrails, an on-platform DevOps toolset used by some top ServiceNow customers to manage customizations and protect against upgrade failures. That foundation gives Platform Copilot its understanding of how to build configurations that comply with ServiceNow’s best practices and avoid downstream conflicts.
Dyna Software, which recently achieved elite partner status with ServiceNow, abandoned two earlier versions of the product to skip ahead to what it calls version four, a decision Browning attributed to rapid advances in LLMs over the past eight months. "We ended up basically scrapping our v3 lane and focusing on our v4, simply because it ended up surpassing in terms of what our outcomes and goals were," Browning said. "Things like Anthropic, OpenAI, in the last probably eight months, they have gone lightning speed in terms of what you can actually do with them."

Browning acknowledged limits to what users can do without a DevOps team. Complex application builds that require extensive custom coding or external system integrations remain better suited to developer-led work with traditional AI coding assistants, he said. Platform Copilot targets the high-volume, repetitive configuration work that clogs ServiceNow backlogs, such as catalog items, workflows, forms, and agent configurations.

"Developers are not really going to go away completely,” he said. “There's going to be need for really smart systems architects and capable developers. But the ones that are doing grunt work and non-glamorous stuff, I do believe that's going to get phased out."

Platform Copilot entered open beta on May 5, with full commercial availability targeted for July 2026. Browning said pricing is set to allow a low barrier to entry and follows a usage-based consumption model with a $100 minimum credit purchase and no subscription commitment. ®
Fake IT workers rented laptops to Nork scammers, got prison time
Playing host to company laptops used by North Korean scammers posing as American IT workers might earn you a cut of the cash Pyongyang siphons from US firms, but as two more suckers have learned, it also means taking the fall when the FBI figures out what’s going on.

Matthew Isaac Knoot, from Nashville, Tennessee, and Erick Ntekereze Prince, of New York, were each sentenced to 18 months in prison in separate cases, the Justice Department reported Wednesday. Prince and Knoot will also face three years and one year of supervised release, respectively, after their prison terms.

While the cases were different, the crimes were largely the same: Knoot misrepresented himself as an American IT worker, while Prince posed as a company offering IT services performed by Americans. Both won jobs to perform IT work for US-based companies, and both provided space for company-owned laptops in their home or office, where remote access software was installed to allow North Koreans to work from overseas while appearing to be located in the States.

According to the DoJ, the pair generated more than $1.2 million in fraudulent revenue for North Korea, some of which was paid to them for their participation in the scheme. Knoot reportedly earned $15,100, which he will have to pay back as restitution to the companies and to the government; Prince will have to give back approximately $89,000 he got from Kim Jong Un's government.

Between them, Prince and Knoot forced the nearly 70 US companies they victimized to spend $1.5 million to audit and remediate their devices, systems, and networks to eliminate all traces of the Nork intruders.

The pair are the latest to find themselves facing the wrath of the Justice Department for enabling North Korea’s fake IT worker scheme, which has been wildly successful. According to the most recent data from earlier this year, North Korean IT worker schemes are raking in more than $500 million a year for the Kim regime.
That number doesn’t include any monetary value of data stolen from those organizations, either.

These scams have broadened their reach, too. Once confined to the realm of big tech, they’ve also been found in the healthcare, finance, and professional services spaces, as all present ripe opportunities for harvesting valuable data along with scoring money for the government.

Knoot and Prince got off easy compared to some of the previous folks sentenced for aiding North Korea’s schemes, though. Kejia Wang and Zhenxing Wang were jailed for a combined 200 months when sentenced last month, though to be fair their operation was larger, their takes greater, and their targets more prominent.

Regardless of the time handed down, the FBI said that the latest sentences should serve as a reminder that helping North Korea run its IT worker scam isn’t a good idea no matter how much they offer to pay.

“These cases should leave no doubt that Americans who choose to facilitate these schemes will be identified and held accountable,” FBI cyber division assistant director Brett Leatherman wrote in the announcement. “Hosting laptops for DPRK IT workers is a federal crime which directly impacts our national security, and these sentences should serve as a warning to anyone considering it.” ®
Anthropic response to 1-click pwn: Shouldn't have clicked 'ok'
How explicit does the maker of a footgun need to be about the product's potential to shoot you in the foot? That's essentially the question security firm Adversa AI is asking with the disclosure of a one-click remote code execution attack via an MCP server in Claude Code, Gemini CLI, Cursor CLI, and Copilot CLI.

The TrustFall proof-of-concept attack demonstrates how a cloned code repository can include two JSON files (.mcp.json and .claude/settings.json) that open the door to an attacker-controlled Model Context Protocol (MCP) server. MCP servers make tools, configuration data, schemas, and documentation available in a standard format to AI models via JSON.

The vulnerability arises from inconsistent restrictions governing the scope of settings: Anthropic blocks some dangerous settings at the project level (e.g. bypassPermissions) but not others (e.g. enableAllProjectMcpServers and enabledMcpjsonServers). The JSON files simply enable those settings.

"The moment a developer presses Enter on Claude Code's generic 'Yes, I trust this folder' dialog, the server spawns as an unsandboxed Node.js process with the user's full privileges — no per-server consent, no tool call from Claude required," Adversa AI explains in its PoC repo. The likely result is a compromised system.

The PoC is demonstrated in a video, and worked on Claude Code CLI v2.1.114 as of May 2. Other agent CLIs are also said to be affected, but specific PoCs have not been published.

"It's the third CVE in Claude Code in six months from the same root cause (project-scoped settings as injection vector)," Alex Polyakov, co-founder of Adversa AI, told The Register in an email. "Each gets patched in isolation but the underlying class hasn't been finally fixed. Most developers don't know these settings exist, let alone that a cloned repo can set them silently."

Anthropic, according to the security biz, contends that the user's trust decision moves the issue outside its threat model.
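For illustration only, here is a hypothetical sketch of the two project-scoped files the PoC describes. The setting names (enableAllProjectMcpServers, enabledMcpjsonServers) come from the disclosure; the server name, script path, and exact field layout are invented placeholders, not verified Claude Code schema. A repo's .mcp.json might define an innocuous-looking server:

```json
{
  "mcpServers": {
    "docs-helper": {
      "command": "node",
      "args": ["./tools/docs-helper.js"]
    }
  }
}
```

while a checked-in .claude/settings.json flips the auto-approval switches so the server starts without per-server consent:

```json
{
  "enableAllProjectMcpServers": true,
  "enabledMcpjsonServers": ["docs-helper"]
}
```

Once the folder is trusted, the referenced script would run with the user's full privileges – which is the crux of Adversa's complaint.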
CVE-2025-59536 was considered a vulnerability because it triggered automatically when a user started up Claude Code in a malicious directory. TrustFall, however, is considered out of scope because the user has been presented with a dialog box and made a trust decision.

Adversa argues that the decision is not being made with informed consent, citing a prior, more explicit warning notice that was removed in v2.1 of the Claude Code CLI. "The pre-v2.1 dialog explicitly warned that .mcp.json could execute code and offered three options including 'proceed with MCP servers disabled,'" writes Adversa's Sergey Malenkovich. "That informed-consent UX was removed. The current dialog defaults to 'Yes, I trust this folder' with no MCP-specific language, no enumeration of which executables will spawn, and no opt-out for MCP while keeping the rest of the trust grant."

Then there's the zero-click variant to consider for CI/CD pipelines that implement Claude Code. When Claude Code is invoked in CI/CD, that happens via SDK rather than the interactive CLI, so there's no terminal prompt.

Malenkovich argues that Anthropic should make three changes. First, block enableAllProjectMcpServers, enabledMcpjsonServers, and permissions.allow from any settings file inside a project – the idea being that a malicious repo should not be able to approve its own MCP servers. Second, implement a dedicated MCP consent dialog that defaults to "deny." And third, require interactive consent per server rather than for all servers.

Anthropic did not respond to a request for comment. ®
60% of MD5 password hashes are crackable in under an hour
It’s World Password Day, and there’s really no better way to celebrate than with news that a majority of supposedly secure password hashes can be cracked with a single GPU in less than an hour, some in less than a minute.

Using a dataset of more than 231 million unique passwords sourced from dark web leaks - including 38 million added since its previous study - and hashing them with MD5, researchers at security firm Kaspersky found that, using a single Nvidia RTX 5090 graphics card, 60 percent of passwords could be cracked in less than an hour, and a full 48 percent in under 60 seconds.

Sure, that’s not exactly your run-of-the-mill desktop graphics processor given its price, but it highlights an important point: It takes surprisingly little to crack the average password hash. Aspiring cybercriminals don’t even really need their own 5090, Kaspersky notes, as they can easily rent one from a cloud provider and crack hashes for a few bucks.

The bottom line is that passwords protected only by fast hashing algorithms such as MD5 are no longer safe if attackers obtain them in a data breach. “One hour is all an attacker needs to crack three out of every five passwords they’ve found in a leak,” Kaspersky noted.

Much of the reason password hashes have become so easy to crack is password predictability. Per Kaspersky, its analysis of more than 200 million exposed passwords revealed common patterns that attackers can use to optimize cracking algorithms, significantly reducing the time needed to guess the character combinations that grant access to target accounts.

In case you’re wondering whether there’s a trend to compare this to, Kaspersky ran a prior iteration of this study in 2024, and bad news: Passwords are actually a bit easier to crack in 2026 than they were a couple of years ago. Not by much, mind you - only a few percent - but it’s still a move in the wrong direction.
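To see why fast hashes fall so quickly, here's a minimal sketch of dictionary-style cracking in Python. The "leaked" hash and tiny wordlist are invented for illustration; real attackers run the same loop billions of times per second on GPU hardware against multi-million-entry wordlists.

```python
import hashlib

# Toy "leaked" MD5 hash of a weak password (invented for illustration).
leaked_hash = hashlib.md5(b"sunshine1").hexdigest()

# A tiny stand-in for the huge wordlists built from breach dumps.
wordlist = ["password", "123456", "qwerty", "sunshine1", "letmein"]

def crack(target_hex, candidates):
    """Return the candidate whose MD5 digest matches target_hex, or None."""
    for pw in candidates:
        if hashlib.md5(pw.encode()).hexdigest() == target_hex:
            return pw
    return None

print(crack(leaked_hash, wordlist))  # prints: sunshine1
```

MD5 is fast by design, which is exactly the problem: nothing in that loop slows an attacker down, so the only defense left is an unguessable password.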
“Attackers owe this boost in speed to graphics processors, which grow more powerful every year,” Kaspersky explained. “Unfortunately, passwords remain as weak as ever.”

How about a World Let’s-Stop-Relying-On Passwords Day?

News of the death of the password has, unfortunately, been greatly exaggerated in the past couple of decades, yet most of us still rely on them multiple times a day. It likely won’t surprise El Reg readers to learn that us vultures are inundated with pitches for events like World Password Day, and most of them received this year had the same takeaway: We really need to get a move on with ditching passwords, or, at the very least, rethinking our security paradigms.

Chris Gunner, a CISO-for-hire at managed service provider giant Thrive, told us in emailed comments that there’s no reason to ditch passwords entirely, but they need to be just one part of a broader identity-based security strategy. “Even a strong password can be undermined if the wider identity and access environment is not properly managed,” Gunner said.

Passwords should be paired with a second factor, preferably biometric, said Gunner, because biometrics are the most difficult for hackers to bypass. “MFA controls should then be joined by identity governance and endpoint protection so gaps between systems are reduced,” Gunner added, recommending that a broader zero trust model be established as well, restricting lateral movement possibilities via a compromised account.

Senior IEEE member and University of Nottingham cybersecurity professor Steven Furnell said that World Password Day messaging shouldn’t stop at telling people to improve their personal security posture either. Passwords aren’t going anywhere for a long while, Furnell explained in an email, and inconsistent adoption of new security technologies will mean users will be left at risk as certain providers fail to adapt.
“Many sites and services still don’t offer passkey support, so users will find themselves with a mixed login experience,” Furnell explained. “While some might argue that it’s the user’s responsibility to protect themselves properly, they need to know how to do it.”

The professor noted that, in many cases, users aren’t told how to create a good modern password, and in other cases, sites simply don’t enforce adequate password requirements to make passwords secure, to the degree that they can be made so.

“This World Password Day, the main message ought not to be to the users, who often have no choice but to use passwords anyway, but to the sites and providers that are requiring them to do so,” Furnell told us.

You heard the man - time to upgrade that user security stack. No matter how safe you think those passwords might be, with their complex requirements and proper hashed storage, it probably won’t take too long for someone to break in, making it an organizational responsibility to ensure there’s yet another locked door behind the first one. ®
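The storage side of that organizational responsibility comes down to replacing fast hashes with deliberately slow, salted key-derivation functions. A rough Python comparison using the standard library's PBKDF2 (the iteration count here is a commonly recommended ballpark, not a mandate, and timings vary by machine):

```python
import hashlib
import os
import time

password = b"sunshine1"
salt = os.urandom(16)  # a unique random salt per user defeats precomputed tables

# One fast MD5 digest...
start = time.perf_counter()
hashlib.md5(password).hexdigest()
fast = time.perf_counter() - start

# ...versus one deliberately slow PBKDF2 derivation (600,000 SHA-256 rounds).
start = time.perf_counter()
derived = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
slow = time.perf_counter() - start

# Each guess now costs a large, tunable multiple of a bare hash,
# turning an hour-long cracking run into an impractical one.
print(f"MD5: {fast:.6f}s, PBKDF2: {slow:.3f}s")
```

The tunable work factor is the point: as GPUs get faster, the iteration count can be raised to keep each guess expensive.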
IBM Cloud evaporates as datacenter loses power
IBM Cloud has been experiencing some issues today, with reports that an entire European datacenter was offline this morning for several hours due to a power outage.

One Reg reader got in contact to inform us that IBM Cloud was offline for at least four hours on Thursday morning, but no issues were shown on the IBM Cloud status page during that time.

Cloud status monitoring service StatusGator showed that Big Blue’s platform had been flagged as “service down” by at least 10 users during the morning, with the last report of an outage logged at 2325 UTC. The Downdetector service also showed a number of reports highlighting issues with IBM Cloud starting at about 0715 UTC and continuing through until about 1200 UTC.

Our Reg reader told us that “IBM Cloud’s entire AMS3 datacenter has been offline for at least four hours, reportedly due to a power outage (if you can believe support that is). Sev 1 tickets went unanswered for several hours and information was only provided after contacting our account manager directly. No issues were reported in the IBM Cloud status page during this time.”

We asked IBM for an explanation of this situation, and a spokesperson told us: "IBM is aware of a fire at a datacenter in Amsterdam which serves IBM, in addition to others. The facility has been evacuated and there are no reported injuries. We are working closely with emergency services, addressing the effect on our operations, and coordinating directly with affected clients to address any impacts."

According to our information, the AMS3 datacenter is located near Amsterdam in the Netherlands, just a few miles from Schiphol airport. There are reports in the Dutch media on Thursday of a fire at a NorthC datacenter at Almere, near Amsterdam, attended by fire brigade units from both Amsterdam and Schiphol, and IBM confirmed to us that this is the one in question.
IBM Cloud also experienced a Severity One incident on at least one occasion last year, with customers unable to access resources. This followed occurrences in May and June, where users found themselves unable to log in after incidents. In September, Big Blue updated the service it provides under its Basic Support tier, whereby Basic users will lose the opportunity to “open or escalate technical support cases through the portal or APIs” but can “self-report service issues via the Cloud Console.” ®
$250M crypto-robbing gang’s dirty work guy sentenced to 6.5 years behind bars
A 20-year-old described as a cybercriminal organization’s "last resort" for cryptocurrency thefts will now serve a 78-month sentence for his role in a $250 million scheme.

Marlon Ferro, of Santa Ana, California, was a member of what prosecutors called a social engineering enterprise (SEE) consisting of at least 12 other individuals, and was tasked with traveling across the US and physically stealing what other members couldn’t through online fraud. The gang members all had different responsibilities, but Ferro was the only one who carried out physical burglaries across the US when the methods of the crew’s remote cybercriminals failed to result in a financial reward.

“Marlon Ferro served as the criminal enterprise’s instrument of last resort,” said US Attorney Jeanine Ferris Pirro. “When his co-conspirators couldn’t deceive victims into handing over access to their cryptocurrency or hack their way into digital accounts, they turned to Ferro to break into homes and steal hardware wallets outright.

“This scheme blended sophisticated online fraud with old-fashioned burglary to drain victims of millions of dollars in digital assets. Today’s sentence sends a clear message: cryptocurrency fraud is not a victimless, consequence-free crime carried out safely behind a screen – it is serious criminal conduct that will lead to federal prison.”

The FBI pinned Ferro to numerous specific thefts across the US between 2023 and 2025, when the SEE was operating. In total, the SEE was responsible for more than $250 million worth of cryptocurrency thefts during this period.

The earliest tale of Ferro’s work came in February 2024, when he traveled to Winnsboro, Texas, and stole a hardware cryptocurrency wallet containing roughly 100 bitcoins, then valued at more than $5 million, before laundering the proceeds through online exchanges. He relocated to California later that year, the Justice Department stated.
The move allowed him to connect with and integrate himself into the SEE’s inner circle, since some members were located in California, as well as others in Connecticut, New York, Florida, and overseas territories. Ferro began doing the SEE’s dirty, physical work.

After another member identified a potential target, Ferro traveled to New Mexico and surveilled a residence for several days, and situated an iPhone in front of the property so members of the group could also watch remotely and alert Ferro if the owner returned. Despite several days’ worth of surveillance, Ferro seemingly failed to spot the home security system, which captured him breaking into the premises by throwing a brick through a window.

Also on the 20-year-old’s rap sheet were details of his money laundering activities. Ferro used fraudulent ID documents belonging to a foreign national via “his KYC guy” to create a digital payment card account at what prosecutors described as “a geo-blocked platform.” Court documents named it as RedotPay, which secured some licenses to process transactions in the US and elsewhere earlier this year but is still not allowed to offer products or services to US citizens.

This payment card account allowed SEE members to spend their stolen cryptocurrency at retail stores and nightclubs, the DoJ said. Ferro alone spent more than $255,000 on designer clothing on behalf of SEE members, including multiple Hermès Birkin bags for one member’s girlfriend.

He also gathered stolen cryptocurrency from group members after the SEE’s leader, Malone Lam, was arrested in September 2024, and used those funds to pay the leader’s lawyer. Ferro also regularly passed along messages from Lam to other group members while Lam was jailed.

He was sentenced to 78 months in prison, with three years of supervised release, and ordered to repay $2.5 million in restitution.
Lam’s gang

The members, who according to court documents [PDF] found and built relationships with one another through online gaming platforms, each brought a different specialty to the crew. Some took responsibility for voice phishing and “database hacking,” while others looked after money laundering, among varied tasks. At least two members were involved in more than one of these operations.

The superseding indictment alleged that the SEE comprised three hackers who compromised websites and servers to gather cryptocurrency-related databases, as well as two “organizers” who also helped identify potential targets. Six members worked as “callers,” the individuals carrying out voice phishing attacks, attempting to convince targets they were helping to secure their accounts against cyberattack.

Money launderers, of which the SEE had at least eight, including Ferro, worked to exchange stolen cryptocurrency and other goods into fiat currency through bulk cash or wire transfer. The launderers also helped procure luxury services for group members, such as exotic car purchases, private jet rentals, international vacations, or shipping the bulk cash across the US, the Justice Department alleged. ®
TomTom’s route planner takes an unplanned detour into oblivion
TomTom users found themselves thoroughly lost this week after the navigation giant’s cloud sync apparently forgot where anyone wanted to go.

TomTom’s forums quickly filled with drivers reporting blank “My Places” lists, vanished recent destinations, and routes refusing to sync between apps, web planners, and satnav units. A few said that they actually watched saved locations disappear from the map in real time.

“When I opened the TomTomGo app on my Android phone this morning I watched in disbelief as all of my saved places markers vanished from the map in front of my eyes,” one user wrote. Another said that the damage extended beyond the mobile app: “I’ve just logged into the MyDrive website from my PC and that is blank too.”

For plenty of drivers, this was more than an inconvenience. One poster summed up the mood from the perspective of anyone relying on TomTom for actual work rather than leisurely Sunday drives through the countryside. “I turned on the navigation at 4:00 AM today. All my favorites are gone,” they wrote. “For me, this is a work tool and important places were saved.”

The symptoms point towards something going sideways in TomTom’s backend systems. Users reported the same missing data appearing across multiple devices and services tied to the same accounts. One commenter claimed to have heard that the issue was linked to “an AWS cloud service account issue,” though TomTom itself has not publicly blamed Amazon’s cloud empire for the mess.

Others complained that trying to contact support was nearly as broken as the navigation platform itself - and The Register had similar luck, with no response from TomTom to our questions.

However, it appears the Dutch navigation firm has acknowledged the outage. In a reply shared by one forum poster located in France, TomTom support wrote: "We’re aware of an outage affecting the synchronization and visibility of places and routes, so you won’t be able to sync or restore them at the moment.
Our teams are working to resolve the issue." Another user claimed to have spoken directly with TomTom support, which reportedly promised that most saved locations would return once the fix landed. "I have spoken to TomTom who are trying to fix the issue they have said once fixed all favourites will be saved except any added within the last 7 days," they said. The routes may eventually reappear, but the illusion that cloud sync is infallible just drove straight into a ditch. ®
Categories: Linux fréttir
C++ survey finds AI use rising, though trust is in short supply
The Standard C++ Foundation's annual developer survey shows AI use among C++ programmers is rising fast, though mistrust and resistance remain stubbornly high. The poll, billed as a “10-minute survey to help inform C++ standardization and C++ tool vendors,” drew 1,434 respondents, 38 percent more than last year. It likely reflects the views of developers most engaged with C++ and its evolution, rather than the wider C++ community. That supposition is borne out by a question on what type of projects respondents work on, with more than 26 percent saying they work on developer tools such as compilers and code editors – higher than one would expect. 60.5 percent of respondents say they have more than 10 years' experience developing with C++, and 32.7 percent more than 20 years, so this is a mature crowd. A key point of interest is what has changed since last year, with AI the most notable example. 39.8 percent of respondents frequently use AI for writing code, versus 30.9 percent last year. There is also more use of AI for other tasks such as writing tests (up from 20 to 33 percent) and for debugging (up from 11.5 to 23.6 percent). That said, there is also notable resistance to AI. 42 percent (down from 52.7 percent last year) rarely or never use AI for coding or other tasks. Issues with AI (among both adopters and non-adopters) include incorrect output, lack of trust in the output, data privacy concerns, and the cost of AI tools. Several of the survey questions invite write-in responses, which to our annoyance are not published but sent only to members of the standards committee and product vendors. An AI-generated summary is published instead. Issues with AI, according to this summary, include struggles with large projects and complex build systems. Some write-ins had stronger language, including claims that AI is "burning the planet." 
When asked what developers would like to change about C++, the themes, again according to a summary, are similar to those mentioned last year, including the lack of a standard package manager; the complexity of managing headers, includes, and macros; long build times; bugs from undefined behavior and implicit conversions; lack of memory safety; obscure error messages from tools; and gaps in the standard library forcing use of third-party libraries. Respondents valued the ISO/WG21 C++ standards committee as essential and transparent, but it also came under fire for slow progress and over-complex language design – perhaps with some contradiction since respondents want it both to do more and to do less. C++ remains among the most popular programming languages. A recent language survey from RedMonk ranks it in seventh place, or sixth if you do not count CSS (Cascading Style Sheets), behind JavaScript, Python, Java, C#, and TypeScript. Rust, often put forward as a safer alternative, lies in 20th place. Last year, SlashData claimed that C++ has "grown from 9.4 million developers in 2022 to 16.3 million in 2025," a figure quoted by former standards committee chair Herb Sutter, who said that C++, C, and Rust are growing because of their hardware efficiency, "performance per watt." At the same time, there is widespread dissatisfaction with C++, shown not only by the comments in surveys like this one, but by projects like Google's Carbon, a proposed "successor language" whose README refers to the "accumulating decades of technical debt" in C++ and claims that "incrementally improving C++ is extremely difficult, both due to the technical debt itself and challenges with its evolution process." The Carbon team hopes to ship a "working 0.1 language for evaluation" by the end of 2026 at the earliest; it will be controversial and a long way from production-ready. 
In the meantime, C++ usage shows no sign of decline despite the fact that many developers will readily reel off a list of its faults and problems. ®
Categories: Linux fréttir
State-backed hackers hammer Palo Alto firewall zero-day before patch lands
State-backed hackers have been quietly exploiting a fresh zero-day in Palo Alto Networks firewalls to gain root access with no login required. The flaw, tracked as CVE-2026-0300 and carrying a CVSS severity rating of 9.3, affects the Captive Portal feature in PAN-OS on PA-Series and VM-Series firewalls. Palo Alto said the issue stems from a memory corruption bug in the User-ID Authentication Portal, a feature used to handle logins for users the firewall cannot automatically identify. If successfully exploited, the bug allows attackers to remotely run arbitrary code on internet-exposed devices with root privileges. According to the vendor’s Unit 42 threat intelligence team, attacks are already underway and tied to a cluster of "likely state-sponsored threat activity" tracked as CL-STA-1132. The attackers allegedly used the zero-day to inject shellcode into an nginx worker process running on compromised devices. Palo Alto said the first failed exploitation attempts began on April 9. About a week later, the attackers successfully achieved remote code execution on a targeted firewall and then cleared logs, crash reports, and other records tied to the compromise. The attackers later used their access to move deeper into victims’ networks, including probing Active Directory systems while continuing to clean up traces of the intrusion from compromised devices. According to Palo Alto, the campaign expanded again on April 29 when the attackers triggered a flood of authentication traffic that caused a secondary firewall to take over internet-facing duties. The attackers then compromised that device as well and installed additional remote access tools. CISA has already shoved the flaw into its Known Exploited Vulnerabilities catalog, which is usually the government’s polite way of saying "patch this before your weekend disappears." There’s just one snag: there is no patch yet. 
Until one arrives, Palo Alto is urging customers to either lock down the User-ID Authentication Portal so it is reachable only from trusted networks or disable it entirely. The warning also lands after a rough run for PAN-OS customers. Palo Alto firewalls have been a regular target for attackers over the past two years, with multiple zero-day campaigns hitting internet-facing devices before patches were widely deployed. In many cases, attackers chained together flaws to break into networks through the very boxes meant to keep them out. ®
Categories: Linux fréttir
Official PCIe 8.0 draft aims for 1 TB/s data rate
An official draft of the PCI Express (PCIe) 8.0 specification is out, targeting a blistering 1 terabyte per second when the kit finally hits the streets. The PCI Special Interest Group (PCI-SIG) has released draft 0.5 of the version 8.0 standard, incorporating feedback received from member organizations after the release of draft 0.3 last year. With an expected raw bit rate of 256 gigatransfers per second (GT/s) and up to 1 TB/s bi-directionally across a 16-lane configuration, PCIe 8.0 is set to deliver another doubling of bandwidth over its predecessor, something the engineers weren't sure could be done. PCI-SIG says the completed PCIe 8.0 specification remains on track for full release by 2028, though buyers may need to wait longer for any super-fast devices such as solid-state drives (SSDs). Micron, for example, announced mass production of what it claims is the first PCIe 6.0 SSD in February this year, four years after the standard was finalized. And with compatible CPUs from Intel and AMD not expected until later this year, there are only PCIe 5.0 systems available to plug them into. Hardware compatible with PCIe 7.0 (at 128 GT/s and 512 GB/s) is not scheduled to hit the shelves before 2027 at the earliest, and the first devices will likely be SSDs again. PCI-SIG says PCIe 8.0 is designed to meet the high-bandwidth, low-latency demands of data-hungry markets, including AI, datacenter infrastructure, high-speed networking, edge computing, and quantum computing. AI datacenters are dominated by proprietary tech, including Nvidia's NVLink, but PCI-SIG sees an opening for PCIe with Unordered I/O (UIO), an enhancement introduced in the PCIe 6.1 specification. Keeping pace, though, will demand that PCIe continues its cadence of doubling data rate with each generation. This likely means PCIe 8.0 won't target consumers when it arrives. 
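The headline figures fall out of straightforward arithmetic on the raw signaling rate: each transfer carries one bit per lane, so divide by eight for bytes, multiply by the lane count, and double for simultaneous traffic in both directions. A back-of-envelope sketch in Python (this deliberately ignores Flit and FEC framing overhead, so real-world throughput lands a little lower):

```python
def pcie_bidir_gbps(gt_per_s, lanes=16):
    """Aggregate bidirectional bandwidth in GB/s for a PCIe link.

    One transfer carries one bit per lane, so divide by 8 for bytes,
    then double for simultaneous traffic in both directions.
    """
    per_direction = gt_per_s * lanes / 8
    return 2 * per_direction

# Raw rates per lane: each generation doubles the previous one.
for gen, rate in [("6.0", 64), ("7.0", 128), ("8.0", 256)]:
    print(f"PCIe {gen} x16: {pcie_bidir_gbps(rate):.0f} GB/s bidirectional")
```

Run it and PCIe 8.0's 256 GT/s across 16 lanes comes out at 1,024 GB/s bidirectional – the 1 TB/s in the headline – with PCIe 7.0's 128 GT/s yielding the 512 GB/s quoted above.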
As The Register previously pointed out, a single PCIe 4.0 x1 lane is sufficient for 10 GbE networking, while many consumer GPUs stick to four or eight lanes, since they don't really benefit from the additional bandwidth a full x16 slot would provide. The latest standard maintains the use of PAM4 (Pulse Amplitude Modulation with four levels) signaling and Flit-based encoding, introduced in PCIe 6.0. Flit stands for Flow Control Unit, which specifies a 256-byte packet with forward error correction (FEC) to provide low latency with high efficiency. ®
Categories: Linux fréttir
AMD puts out new slottable GPU for AI-curious enterprises
AMD hopes to win over enterprise AI customers with a more affordable datacenter GPU that can drop into conventional air-cooled servers. Announced on Thursday, the MI350P is the House of Zen’s first PCIe-based Instinct accelerator since the MI210 debuted all the way back in 2022. Until now, AMD’s best GPUs have only been available in packs of eight and used socketed OAM modules that weren’t compatible with most server platforms. By comparison, the MI350P can slot into just about any 19-inch pizza box design that offers enough power and airflow, making it a much easier sell for enterprises dipping their toes into on-prem AI for the first time. The 600-watt, dual-slot card is essentially a MI350X that’s been cut in half. That means the CDNA-based GPU is packing 4.6 petaFLOPS of FP4 compute and 144 GB of VRAM spread across four HBM3e stacks delivering a respectable 4 TB/s of memory bandwidth. AMD supports configurations ranging from one to eight MI350Ps, though a lack of high-speed interconnects on these cards means they’ll be limited to PCIe 5.0 speeds (128 GB/s) for chip-to-chip communications, which could hold them back on larger models. AMD hasn’t shared pricing for the cards just yet, but at least on paper, the MI350P is well positioned to compete with either Nvidia’s H200 NVL or RTX Pro 6000 Blackwell PCIe cards. Compared to the 141 GB H200, the MI350P promises about 38 percent higher peak performance at FP8, while eking out a narrow VRAM capacity advantage. But the H200 does pull ahead when it comes to memory bandwidth. With six HBM3e stacks to the MI350P’s four, the nearly two-year-old card’s memory is still about 20 percent faster. Nvidia's H200 also supports high-speed chip-to-chip communications over NVLink, while the MI350P doesn’t use AMD’s equivalent Infinity Fabric interconnect. However, all this assumes you can still find H200 NVLs in the wild. Since last summer, Nvidia has been pushing its RTX Pro 6000 Server cards on enterprise customers. 
At the time of writing, the card is Nvidia’s most powerful Blackwell-based accelerator offered in a PCIe form factor. Compared to the RTX Pro 6000, the MI350P’s price becomes a bigger factor than performance. Workstation versions of the RTX Pro, which ditch the passive cooler for an active one, routinely sell for between $8,000 and $10,000 apiece, making it one of Nvidia’s more affordable datacenter-class GPUs. Depending on how pricing shakes out, AMD may have to push hard to be competitive. Having said that, the MI350P is still the better-specced part, delivering 2.3x higher peak FLOPS, 2.5x the memory bandwidth, and 50 percent more VRAM than the RTX Pro. Now, this all assumes peak FLOPS and memory bandwidth, which are rarely achieved in practice. The tensors used by AI workloads are rarely the ideal shape for squeezing the maximum number of FLOPS out of a chip. This is why we run Maximum Achievable MatMul FLOPS (MAMF) and BabelStream memory bandwidth benchmarks as part of our AI test suite. AMD seems to understand that peak FLOPS don’t really translate cleanly into real-world performance, and in the marketing materials shared with El Reg prior to publication, it compared the MI350P’s theoretical performance against its real-world delivered performance. It’d be nice to see Nvidia and others adopt similar practices regarding accelerator performance claims, though we suspect getting everyone to agree on the best way to measure this might not be easy. The MI350P’s launch comes as AMD prepares to address a very different and likely more lucrative segment with its first rack-scale compute platform, codenamed Helios. That system is due out in the second half of the year, and is aimed primarily at large hyperscale and neocloud deployments. The system packs 72 of its all-new MI455X GPUs into a single double-wide OCP rack that behaves like an enormous accelerator. 
The platform will be AMD’s first crack at Nvidia’s NVL72 racks, which launched alongside its Blackwell generation nearly two years ago. ®
Categories: Linux fréttir
Hungarian cops cuff suspected swatter after two-year FBI probe
20-year-old fessed up after investigators found video of crime in progress
Categories: Linux fréttir
EU hits snooze on AI Act rules after industry backlash
Brussels says it's simplification, critics may call it retreat
Categories: Linux fréttir
NHS code clampdown draws open source backlash
Plus a petition for the UK Civil Service to go FOSS by default
Categories: Linux fréttir
The network password was a key plot point in one of the most famous movies of all time
Fortunately, it was a legit contractor who guessed it
Categories: Linux fréttir
Chrome silently installs a 4 GB local LLM on your computer
You did remember to opt out of AI, didn't you?
Categories: Linux fréttir
Home Office seeks three CTOs to keep borders, passports, and core IT ticking
Roles span eGates, passports, visas, asylum applications, and enterprise services – yours for up to £105K
Categories: Linux fréttir
Minister gives Palantir's NHS platform a clean bill of health
£330M contract defended as value for money despite concerns over IP and lock-in
Categories: Linux fréttir
Neocloud IREN buys OpenStack champion Mirantis
Former bitcoin miner plans to build an easier cloudy AI on ramp while remaining a friend to FOSS
Categories: Linux fréttir
