TheRegister

Articles from www.theregister.com
Updated: 1 hour 6 min ago

To gain root access at this company, all an intruder had to do was ask nicely

1 hour 16 min ago
PWNED Welcome once again to PWNED, the column where we help you prepare for security success by studying others' embarrassing failures. Today's terrible tale involves individuals letting their guard down to do right by a company executive - never a smart move. Have a story about someone leaving a gaping hole in their network? Share it with us at pwned@sitpub.com. Anonymity is available upon request.

Our sad story comes from Brandon Dixon, who currently serves as CTO and co-founder of AI security firm Ent. In a prior life, however, Dixon was a penetration tester for hire, and he saw some things that made all my remaining hairs stand on end just hearing about them.

During one pentesting assignment, Dixon tried to find out how easy it would be to steal someone's account using social engineering. The answer: barely an inconvenience. Dixon telephoned IT security and pretended that he was the head of security who had lost his password. When they asked him challenge questions, he said he had forgotten the answers to those also. Then he gave them the password he wanted to use over the phone and they did a reset for him. After that, he was able to get into the network and do whatever he wanted there.

There's so much that's obviously wrong here that it's hard to know where to begin with our lesson-taking. The IT support agents should not have taken Dixon's word that he was the security manager, especially after he failed the challenge questions, and should have denied his request to reset the password. They were probably thinking "this guy is an executive and we don't want to piss him off" rather than "we have procedures that everyone must follow."

The other problem here is that the IT department entered Dixon's suggested password for him over the phone. First of all, the IT department should have sent a password reset to the real employee's email or phone number. Second of all, it's piss-poor security for anyone other than the user themselves to know a user's password. And I say this as someone who used to work for a company where, if you had a problem, the IT support people would ask for your password via chat.

Dixon also shared another story about social engineering from a time when he consulted for a pharmaceutical company. Members of the competition would call sales and marketing reps, pretend they were coworkers, and then extract information about upcoming drugs. This would allow competitors to know what was coming and how to respond to it. To help solve the problem, Dixon instituted a system where real employees had to give a secret password at the beginning of a conversation. "I built a system called 'Chal-Resp,' short for 'challenge-response,' that generated word pairings so a user could validate they were speaking with an actual employee," he told The Register. "The caller would need to say the word and the end-user would need to respond with the proper challenge; only employees had access."

What both of Dixon's stories have in common is proof that humans are eager to please and be helpful. But healthy suspicion is the root of infosec, so it behooves us all to be a little less helpful to strangers in the workplace. ®
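Dixon didn't share the implementation, but the scheme is simple enough to sketch. Here's a minimal, hypothetical Python version of the word-pairing idea he describes - the word lists and function names are ours, not Chal-Resp's:

```python
import secrets

# Hypothetical word lists; a real deployment would rotate these regularly
# and distribute them only through an authenticated internal channel.
CHALLENGE_WORDS = ["harbor", "copper", "meadow", "lantern"]
RESPONSE_WORDS = ["anchor", "kettle", "willow", "beacon"]

def generate_pairings():
    """Pair each challenge word with a randomly chosen response word."""
    responses = RESPONSE_WORDS.copy()
    secrets.SystemRandom().shuffle(responses)
    return dict(zip(CHALLENGE_WORDS, responses))

def verify(pairings, challenge, response):
    """True only if the caller's challenge maps to the expected response."""
    return pairings.get(challenge) == response

if __name__ == "__main__":
    pairings = generate_pairings()            # distributed to employees out of band
    challenge = secrets.choice(CHALLENGE_WORDS)
    print(verify(pairings, challenge, pairings[challenge]))  # True
    print(verify(pairings, challenge, "mangoes"))            # False
```

The security lives entirely in keeping the current pairings off the phone line and inside the company, which is also the scheme's weakness: anyone who has seen this week's list can impersonate an employee.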
Categories: Linux fréttir

AI models are getting better at replacing cybersecurity pros on certain tasks

1 hour 49 min ago
The UK AI Security Institute (AISI) has found that frontier models are quickly becoming more efficient when asked to do some cybersecurity work. AISI measures this with its "time window benchmark for cybersecurity," which estimates how much work an AI can do compared to a human. The benchmark yields findings such as: Claude Sonnet 4.5 can do, about 80 percent of the time, what a human cybersecurity expert can do in 16 minutes, given a budget of 2.5 million tokens.

AISI has found the human-comparable task time – 16 minutes in this instance – is growing, fast. If tokens flowed freely instead of being arbitrarily capped, AI models might do better still.

In February 2026, AISI internally reduced the expected task-time doubling period from 8 to 4.7 months, based on progress made since late 2024. With the release of Anthropic Mythos Preview and OpenAI GPT-5.5, AISI has once again had to compress its projected doubling period.

"In February 2026, we estimated that frontier models' 80 percent-reliability cyber time horizon had doubled every 4.7 months since reasoning models emerged in late 2024, given a 2.5M token limit," the AISI said in a post on Wednesday. "This was around half our November 2025 doubling time estimate, which was 8 months for both 50 percent and 80 percent reliability. Claude Mythos Preview and GPT-5.5 have since significantly outperformed this trend."

The recalculated doubling time estimate, given what Mythos Preview and GPT-5.5 can do, is even shorter than 4.7 months. AISI does not cite a specific value, but the organization points to similar time horizon estimates based on measurements of a broader skillset, software engineering, made by non-profit AI research house METR. "Their results imply a consistent doubling time of 4.2 months on software tasks since late 2024," AISI said, noting that with the latest Mythos Preview checkpoint (model update), it's closer to 4 months.

Note that the time window benchmark is not a broad assessment of capabilities – AISI is not saying frontier models are becoming twice as capable by all measures. It's a narrow assessment based on the time it takes people to accomplish security tasks.

Citing a different metric, AISI says the latest Mythos Preview checkpoint solved a 32-step simulated corporate network attack called "The Last Ones" in six of 10 attempts and managed to complete a previously unsolved challenge, a seven-step industrial control system attack called "Cooling Tower," in three of 10 attempts. As a point of comparison, when Opus 4.6 was evaluated in February 2026, it completed a maximum of 22 of 32 steps for The Last Ones. That model managed to reach milestone 6, which involves reverse-engineering a Windows service binary to access encrypted credentials, escalating privileges via token impersonation, and recovering a cryptographic key to access a command-and-control management service.

"Frontier AI's autonomous cyber and software capability is advancing quickly: the length of cyber tasks that frontier models can complete autonomously has doubled on the order of months, not years," AISI concludes. "What this evidence does not tell us is how the pace of progress will evolve, when AI will reach any particular capability threshold, or how these capabilities will translate against defended, real-world systems."

The curl project offers one data point on the real-world implications of the latest frontier models: Mythos managed to find just one confirmed vulnerability in its codebase. But watch this space. ®
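AISI's extrapolation is easy to reproduce on the back of an envelope. A quick sketch, assuming the figures quoted above (a 16-minute horizon and a doubling period of roughly 4.2 months) and assuming, as AISI itself cautions you shouldn't, that the trend simply continues:

```python
def projected_horizon_minutes(current_minutes, doubling_months, months_ahead):
    """Extrapolate the 80 percent-reliability task horizon, assuming it keeps
    doubling at a constant rate - AISI's own caveat is that it may not."""
    return current_minutes * 2 ** (months_ahead / doubling_months)

# Starting from a 16-minute horizon with a 4.2-month doubling time,
# one year out the horizon would be roughly 16 * 2^(12/4.2) ≈ 116 minutes.
for months in (6, 12, 24):
    print(months, round(projected_horizon_minutes(16, 4.2, months)))
```

Run as written, that prints horizons of roughly 43 minutes at six months, just under two hours at a year, and around 14 hours at two years - which is the whole point of arguing over whether the doubling period is 4, 4.2, or 4.7 months.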
Categories: Linux fréttir

Tencent admits GPUs only pay for themselves when powering personalized ads

3 hours 35 min ago
Chinese web giant Tencent struggles to earn a return on investment from GPUs – unless it uses them to power its advertising business.

"If we buy GPUs and we deploy them into our ad tech, then that's a relatively short-cycle investment," said Chief Strategy Officer James Mitchell during the company's Q1 2026 earnings call. "The GPUs yield better targeting, higher click-through rates and higher revenue and profit on a pretty accelerated basis," he said.

But the company views GPUs powering work on its Hunyuan foundation model as "important for our franchise." Mitchell said Tencent is comfortable with this situation. "There's been many products within Tencent … that went through lengthy incubation periods where they had no return on investment, but we were confident in the franchise value creation," he said. "And then over time, they had more lengthy harvesting periods where we've been able to drive very healthy returns on that sunk investment." He predicted that AI will go through the same cycle.

But Tencent is struggling to make the wheel turn because it's only had enough GPUs to power its own services, leaving its public cloud without enough accelerators to rent to customers. Mitchell said Chinese manufacturers will soon fill the gap. "As the supply of China design GPUs progressively ramps up, then we'll be remedying that situation," he said.

Chief financial officer Shek Hon Lo weighed in with an observation that two factors made it hard for Tencent to get all the GPUs it wants: US sanctions, and "limited fab capacity within China." "That's now being addressed because the China designed ASICs are seeing more supply from fabs within China as well as more supply from fabs in neighboring countries," he said.

But Tencent still expects GPU procurement to be harder than buying CPUs, as Lo said the company has "very long-term" deals with CPU vendors. "We've been a big customer for Intel and AMD for many years," he said. "We've been progressively growing our volume with them for many years, and we believe we will continue to progressively grow our volume for many years to come." That remark will be cause for celebration at the US companies, which have watched other hyperscalers invest heavily in custom Arm silicon.

Tencent posted another strong quarter, with revenue of RMB196.5 billion ($28.9 billion) representing 12 percent growth. The company's Weixin and QQ messaging apps have 1.95 billion combined monthly users. Tencent has tweaked their mobile apps "to act as communication interfaces for controlling AI agents, allowing users to orchestrate agents from mobile for complicated task execution on PC and cloud."

Tencent's Western rivals Google and Meta haven't yet built similar apps. And they don't experience the same hardware acquisition problems Tencent faces. ®
Categories: Linux fréttir

Cisco to fire 4,000 staff and generously give them free training – on Cisco

4 hours 43 min ago
Cisco will make around five percent of staff redundant and has generously offered them free Cisco training for a year once they're gone.

CEO Chuck Robbins broke the news in a Wednesday blog post titled "Our Path Forward" that opens "Today we announced our Q3 FY26 earnings with record revenue of $15.8 billion, up 12 percent year over year, and double-digit top and bottom-line growth. The ELT [executive leadership team] and I could not be prouder of the growth you have all delivered for Cisco." That growth included net income growing 35 percent to $3.4 billion.

Yet Robbins' pride was not sufficient for all Cisco staff to keep their jobs. The CEO said the layoffs are necessary because "The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest." For Cisco that means "reducing roles in some areas" and also "making clear, strategic investments – particularly in silicon, optics, security, and in our employees' use of AI across the company."

On Thursday, US time, close to 4,000 unlucky Cisco staff will be shown the door. Robbins said Cisco will help its soon-to-be-former workers find their next gig, and that the company's efforts to do so have a 75 percent success rate. "We are also committed to continued personalized learning and will provide one year of access to all Cisco U courses and certifications, covering AI, Security, Networking, and more," he added.

Cisco made two big rounds of layoffs in 2024, one of which ejected seven percent of staff and the other of which saw Cisco fire five percent of employees. The restructures appear not to have slowed the company down: Robbins said product orders in Q3 rose 35 percent year over year – a figure that encapsulates a 105 percent year-over-year surge in revenue from hyperscalers and more modest 18 percent growth from other buyers. Robbins said Cisco has already scored $5.3 billion of AI infrastructure sales this year, and forecast full-year sales of $9 billion – 4.5 times its haul from last year.

More prosaic products, like Wi-Fi kit, also grew fast as sales rose 40 percent. The company hopes to keep that cash flowing by building wireless kit that uses less memory. "You'll see products that'll become orderable in Q4 that'll actually require 50 percent less memory," Robbins said, with the design work to make that possible an example of the "20-plus programs that we've put into place that are active to reduce the memory utilization across the portfolio." Cisco is doing that even though the rising price of memory and storage has not dented its margins, an outcome that execs attributed to supply chain management efforts.

Glasswing to lift security sales

Later in the earnings call, Robbins revealed that Cisco is participating in Anthropic's Project Glasswing and using the Mythos model to test its code. The CEO said another impact of Anthropic's bug-finding AI will be to accelerate plans to replace security appliances once other vendors' use of Mythos finds flaws that are hard to fix. "I actually think while there will be a security opportunity, there's going to most likely be a lot of focus from our customers on modernizing their infrastructure so that they don't have this risk from technology that just can't be patched," Robbins said.
Robbins said Cisco may have won an order or two from customers who were already close to replacing old security kit “and Mythos pushed them over the edge.” But he said Cisco didn’t receive “any meaningful orders in Q3 as a result of Mythos, but that could change in the future as we continue to work with customers.” ®
Categories: Linux fréttir

Welcome to the vulnpocalypse, as vendors use AI to find bugs and patches multiply like rabbits

Wed, 2026-05-13 23:27
The vulnpocalypse has begun.

Palo Alto Networks usually finds five vulnerabilities a month, but on Wednesday said it scanned its entire codebase using the latest frontier models, including Anthropic's Mythos, and found 75 security holes, covered in 26 CVEs. This comes a day after Microsoft said it used its new agentic bug hunting system called MDASH to find 17 vulnerabilities across its products - on a record-setting Patch Tuesday that saw Redmond disclose a whopping 30 critical CVEs. Plus, last week Mozilla said it fixed 423 Firefox bugs in April, which is more than five times the 76 fixes issued in March and almost 20 times its monthly average of 21.5 last year. The browser maker previously said Mythos found 271 flaws in Firefox 150.

It shouldn't be all that shocking. Security vendors have long warned about attackers using AI, and how this means defenders need to operate at AI speed to protect their own networks and systems (aka buying their AI-infused products). Now that models have become really good at finding bugs in code, security shops are using AI to scan their own software, hopefully to uncover and fix flaws before the baddies do. And this trickles down to two things: more patches, and more work for admins.

Zero Day Initiative's chief vuln finder Dustin Childs agrees with this assessment. "At first, yes, this means more patches and thus more work for admins," he told The Register. "The goal over time would be to eliminate as many as possible, and, over time, that monthly number goes down." What will make this whole AI bug hunting season "really painful," he continued, is if the patches don't work or - worse yet - break things. "Many customers don't trust patches as it is, so if AI-related patches break things, they are less likely to apply as time goes on," Childs added. "This will be true even if AI only finds the bugs and doesn't make the patches."

Bug hunting on steroids

This isn't to say security companies should avoid AI to find and fix flaws. "All vendors should use what tools they have to find and remediate bugs before they are exploited in the wild," Childs said. "Ideally, they would find the bugs before they even ship, but I'm not holding my breath for that to happen."

Both Microsoft and Palo Alto Networks (PAN) are part of Anthropic's Project Glasswing, which means they are among the select group of entities allowed to test Mythos, the much-hyped LLM, to find security holes in their own products. Palo Alto Networks began testing Mythos on April 7, and has since continued using the LLM and other frontier models, including Claude Opus 4.7 and OpenAI's GPT-5.5-Cyber, according to product manager Lee Klarich.

"Today, we released our May 'Patch Wednesday' security advisories," Klarich said in a Wednesday blog, adding that "this is the first time where the majority of findings were the result of frontier AI models scanning our code." The LLMs scanned over 130 Palo Alto Networks products and platforms and, as noted above, found 75 issues, covered in 26 CVEs. None of these bugs are under exploitation, and as of Wednesday the company has fixed all bugs in its SaaS-delivered products and coded patches for all customer-operated products.
Maybe 5 months before 'AI-driven exploits the new norm'

"We intend to fix every vulnerability we find before advanced AI capabilities become widely available to adversaries," Klarich said in his blog, adding that his company expects "a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits start to become the new norm."

A day earlier, Microsoft said its new multi-model agentic scanning harness (codename MDASH) helped researchers find 16 new vulnerabilities across the Windows networking and authentication stack, as disclosed in May's Patch Tuesday event. This included four critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service. "Unlike single-model approaches, the harness orchestrates more than 100 specialized AI agents across an ensemble of frontier and distilled models to discover, debate, and prove exploitable bugs end-to-end," Microsoft VP of agentic security Taesoo Kim said in a Tuesday blog.

Tom Gallagher, VP of engineering at Microsoft Security Response Center, admitted that "this month's release sits on the larger side of a hotpatch month." Gallagher said he expects AI-assisted bug hunting to increase Patch Tuesday releases as both Microsoft and third-party researchers use these tools to boost vulnerability discovery. And yes, all of this ultimately means more patches and more work.

More patches = more work

"Finding bugs has always been the cheap end of the pipeline," Luta Security CEO Katie Moussouris told The Register. "Triage, disclosure, building patches that do not break production, and getting customers to deploy them is the expensive end, and nobody has funded it for this volume." Moussouris helped convince Redmond's top brass that Microsoft needed a bug bounty program in 2013, and three years later started her own bug bounty consultancy. She noted Palo Alto Networks' staggering jump in CVEs this month. "Multiply that across every vendor and the bottleneck becomes admins and vulnerability management teams," Moussouris said.

And she also stressed that people should be using these new models to find vulnerabilities. "It is exactly what defenders should be doing," Moussouris said. "Both PAN and Microsoft landed on the same answer: no single model catches everything. PAN ran Claude Mythos, Claude Opus 4.7, and GPT-5.5-Cyber because each finds bugs the others miss," she added. "Microsoft orchestrates over 100 specialized agents across multiple models. Add threat intel and codebase context, and Microsoft rediscovered 96 percent of five years of confirmed bugs in a critical Windows component. The asymmetry is temporary, PAN puts adversary parity at three to five months, so any vendor not scanning their own code now is letting someone else find their bugs first." ®
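Moussouris's observation that "no single model catches everything" is essentially an argument for ensembling scan results. A toy sketch of that idea - the findings and model names below are invented placeholders, not PAN's or Microsoft's actual tooling:

```python
from collections import defaultdict

# Toy stand-in results; in practice each entry would come from a different
# model or agent harness scanning the same codebase.
TOY_FINDINGS = {
    "model-a": {("parser.c", 118, "heap overflow"), ("auth.c", 42, "missing check")},
    "model-b": {("auth.c", 42, "missing check"), ("net.c", 300, "use after free")},
    "model-c": {("auth.c", 42, "missing check")},
}

def ensemble_scan(findings_by_model):
    """Union findings across models and count how many models flagged each one.
    Multi-model hits are usually the cheapest to confirm, so triage them first."""
    votes = defaultdict(set)
    for model, findings in findings_by_model.items():
        for finding in findings:
            votes[finding].add(model)
    return sorted(votes.items(), key=lambda kv: len(kv[1]), reverse=True)

for finding, models in ensemble_scan(TOY_FINDINGS):
    print(len(models), "model(s):", finding)
```

The union is what drives the patch volume Moussouris is warning about: every model contributes findings the others missed, and somebody still has to triage all of them.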
Categories: Linux fréttir

AWS to Quick admins: The access control didn't work, but you weren't using it anyway, so what's the problem?

Wed, 2026-05-13 22:56
Most users put up with AWS the way you put up with the DMV. I say this with love, but it's hard to disagree that the UI is awful. The console is a UX time capsule if time capsules weren't allowed to ever look like other time capsules. The pricing pages were designed by someone who hates you personally, and you accept all of it because the one thing AWS has historically gotten right is the boring, important stuff. The security model. The IAM language no one likes, but everyone trusts. The boundary between your account and someone else's. Get that wrong, and the whole bargain collapses.

So when Fog Security disclosed an authorization bypass in Amazon Quick on May 12 (that's the BI service formerly known as QuickSight, briefly known as Quick Suite, and now apparently just Quick, but check back next week) and AWS responded with a statement claiming "no customer data was at risk," it's fair to ask which definition of customer data they're using. Because it isn't an obvious one, and it certainly isn't mine.

What Fog found

Fog reports that when an Amazon Quick administrator (which is an absolutely devastating personal insult) uses "custom permissions" to explicitly deny access to AI Chat Agents, the UI correctly hides the feature. Great! Awesome! I sure wish to hell I could do that with S3 buckets to which I do not have access! Notably, there's no other way for an admin to do this - it's custom permissions or naught.

The API, however, was perfectly willing to keep answering chat requests for any user in the account who knew how to send them. Fog's proof-of-concept was a non-admin asking the agent "Tell me about mangoes" from a session that was, on paper, locked out of the agent entirely. The agent told them about mangoes.

AWS deployed the fix between March 11 and March 12, eight days after Fog reported it via HackerOne. So far, so coordinated. Seriously, for a company of this scale, that's underpants-outside-the-pants superhero speed. Good for you; gold star.

What came next

Where this gets uncomfortable is the response. AWS classified the severity as "none." It issued no customer notification. It published no advisory. After Fog disclosed the HackerOne report and published a blog post, AWS provided a statement to Fog Security reading, in full: "We appreciate Fog Security's coordinated disclosure. This issue was addressed in March 2026. No customer data was at risk and there is no customer action required. As always, customers can contact AWS Support with any questions or concerns about the security of their account."

Take that sentence apart and see how much work "no customer data was at risk" is doing. Amazon Quick is described on its own product page as an AI assistant that "connects Slack, Microsoft Teams and Outlook, CRMs, databases, and documents in one place" and "grounds every answer in your real business data." The default chat agent, which is automatically and annoyingly provisioned the instant Quick is enabled whether the customer wants those AI features or not, is the front end for that data. It is the whole point of the front end for that data.

Now consider the actual scenario AWS just patched. An administrator at, say, a regulated bank (an unregulated bank is called "a criminal enterprise that hasn't been caught yet") configures custom permissions denying chat agent access to a large group of users. Maybe those users are contractors. Maybe they're in a business unit that isn't cleared for AI tools.
Maybe the bank's compliance posture flat-out prohibits shadow AI usage on top of internal data. Until two months ago, every one of those users could send an HTTP request directly to the agent endpoint and get a response. Fog asked about mangoes because they're a security firm doing a clean disclosure, not a malicious insider. A malicious insider would not have asked about mangoes.

The question to AWS, with no rhetoric attached: In what sense was customer data not at risk? Either the chat agent doesn't actually have access to the data the product page says it does (in which case the marketing department has some serious splainin' to do) or unauthorized users could query an agent wired into customer data, in which case "customer data was at risk" is the correct English-language description of the situation.

AWS clarifies, and says the quiet part out loud

After this story started circulating, AWS offered a follow-up comment that I sincerely appreciate, because it's so much more honest than the first one. Per a hounded-looking AWS spokesperson: "The researcher was using the Admin Control capability that no customers were actively using when the server side validation was not present."

Reading that twice doesn't help. Let me translate. AWS is saying: Yes, the server-side authorization check was missing. Yes, an authenticated user in your Quick account could bypass the only access control mechanism the service offers. The reason this is fine, apparently, is that no real customer had bothered to configure that access control during the window when it didn't work.

Um ... what?

The defense isn't "the bug wasn't real," which you could be forgiven for hearing in AWS's first statement. The defense also isn't "the bug couldn't have done what Fog says it could have done," which is the even stronger implication of their first statement. The defense is "the access control didn't enforce what we said it did, but luckily nobody was relying on it." This is the corporate-comms equivalent of "the lock on the front door didn't work, but nobody had locked it anyway, so why are you upset?"

It's also a surprisingly specific telemetry claim. AWS is asserting that they know zero customers had configured custom permissions to deny chat agent access during the exposure window. That's a confident thing to say, and an even more interesting thing to volunteer as a defense, because it doubles as a withering review of Quick's access management model: the only knob the service provides for this purpose, the one AWS's own documentation explicitly tells administrators to use, has zero recorded uptake.

The same follow-up also pointed back to the HackerOne thread to demonstrate that AWS told Fog throughout the disclosure window that "user-based authorization remained enforced." Translation: you needed authenticated credentials in the same Quick account to exploit this. Yes. That's intra-account scope, which Fog documented in their writeup, and which is precisely the scope in which custom permissions are supposed to function as a security boundary. AWS saying "user-based authorization was fine" is saying "you couldn't exploit this anonymously from the internet," which was never the threat model in question. The threat model is the contractor with valid SSO credentials whose admin tried to lock them out of some datasets.

Why this matters more than it sounds

Amazon Quick's access model is already an outlier: IAM policies don't govern Quick's AI Chat Agent, SCPs don't apply, and RCPs don't apply.
Custom permissions are the only knob the service provides. If those don't enforce, nothing else does. And per AWS's own follow-up, literally nobody was using them anyway. Both halves of that sentence should be alarming, and AWS is offering them as reassurance. AWS's competitive moat for the last decade hasn't been pricing. It sure as poop hasn't been developer experience, documentation, console design, or the inscrutable poetry of service names. It's been the well-earned belief that AWS gets the foundational things right: boundaries, identity, durability, reliability, and the parts customers can't easily verify themselves. Customers have paid the AWS premium because they trusted the boring stuff. This year that trust is being tested in a way it hasn't been before. The 2025–2026 cadence of AWS security advisories has noticeably increased, for reasons that are as yet unclear. Coordinated disclosures from independent researchers keep surfacing missing authorization checks in newer, AI-adjacent services. The fixes are landing fast, which is good. The customer communication isn't landing at all, which is, charitably, a choice. A "severity: none" rating on a bypass of the only access control a service offers is not an objective security finding so much as it is a communication decision. And the communication decision now reads, with the benefit of AWS's follow-up: "We'll fix the bug, we won't tell you it existed, and if you ask we'll explain that you weren't using the feature anyway." AWS gets a lot of forgiveness on the small stuff because they own the big stuff. They might want to reconsider how much of the big stuff they keep classifying as "none." ®
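The underlying lesson is older than Quick: an access control only counts if it's enforced where the request is handled, not just where the button is drawn. A generic sketch of the server-side check that was evidently missing - the framework style, names, and user IDs are all hypothetical, not AWS code:

```python
from http import HTTPStatus

# Hypothetical deny list; in a real service this is whatever the admin's
# "custom permissions" configuration resolves to for the chat agent.
DENIED_CHAT_USERS = {"contractor-1234"}

def answer_with_agent(prompt):
    # Placeholder for the actual agent call.
    return f"agent response to: {prompt!r}"

def handle_chat_request(user_id, prompt):
    """Enforce the deny rule on the API path itself. Hiding the chat widget
    in the UI does nothing if this check is absent server-side."""
    if user_id in DENIED_CHAT_USERS:
        return HTTPStatus.FORBIDDEN, "chat agent access denied by administrator"
    return HTTPStatus.OK, answer_with_agent(prompt)

print(handle_chat_request("contractor-1234", "Tell me about mangoes"))  # 403
print(handle_chat_request("analyst-5678", "Tell me about mangoes"))     # 200
```

The bug class is simply the absence of the `if` branch: the UI consulted the permission, the endpoint never did.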
Categories: Linux fréttir

Google's AI-enabled mouse pointer understands 'this' and 'that'

Wed, 2026-05-13 22:19
Google doesn't design mouse traps, so it's trying to design a better mouse.

Google DeepMind announced a research effort to transform the standard computer mouse cursor into a context-aware, AI-powered tool, marking what the company described as the first major rethinking of the cursor in more than 50 years. The project by researchers Adrien Baranes and Rob Marchant integrated Google's Gemini AI model with an experimental context-aware mouse pointer. In this way, the company said, the system can understand where a user clicks, what they are clicking on, and the likely intent behind the interaction.

Researchers said there is a persistent friction in how people currently interact with AI tools. Most AI assistants today live in a separate window, requiring users to copy, paste, or drag content into a chat interface before receiving help. The new approach aims to reverse that dynamic. "We want the opposite: intuitive AI that meets users across all the tools they use, without interrupting their flow," the researchers stated in the blog post.

The mouse pointer works alongside the computer's microphone, allowing Gemini to listen as the user points. This lets users refer to features on the screen with object pronouns like "this" and "that." On a demonstration website, a user can hover a cursor over a crab and say "move this here," and the system understands enough context to grab the crab and move it to where the cursor indicates.

The first computer mouse, a one-button prototype with metal wheels for the x- and y-axis, was built out of wood in 1964 and was patented in 1970 by its inventors Doug Engelbart and Bill English, who worked at the Stanford Research Institute. Engelbart foresaw a day when humans and computers would interact more easily and naturally, which he talked about during his 1997 acceptance speech for the Lemelson-MIT Prize. "The computer technology, the digital capabilities, it's affecting communications, displays, storage, computer processing. It's affecting the way you can interface to things a lot more flexibly," he said. "That's going to be so pervasively high-impact in our society and our organizations that it's more than anything we've had to cope with evolutionary wise."

Maintain the flow

At Google, the team said it laid out four design principles guiding the project. The first, which the researchers called "Maintain the flow," stated that AI capabilities should work across all applications rather than forcing users into separate AI-specific environments. Under this principle, a user could point at a PDF and request a summary, or hover over a statistics table and ask for a chart, all without leaving the current application.

The next, "Show and tell," addressed the burden of prompt writing. The researchers stated that an AI-enabled pointer could capture visual and semantic context from the screen, reducing the need for users to write detailed text instructions to the model. They also developed the AI cursor based on how humans naturally communicate using short phrases and gestures like "this" and "that." The researchers stated that the system would allow users to issue commands like "Fix this" or "Move that here" while the AI fills in the contextual gaps.

The fourth principle, "Turn pixels into actionable entities," lets the pointer recognize structured objects within on-screen content. The researchers stated that this capability could turn a photo of a handwritten note into an interactive to-do list, or convert a paused video frame showing a restaurant into a booking link.
In the blog, the researchers said that Google DeepMind has already begun integrating the lessons learned into products. A feature called Magic Pointer will soon roll out on the forthcoming Googlebook laptop platform, which The Chocolate Factory introduced earlier this week. The company said the technology will also allow users of Gemini in Chrome to point at specific parts of a webpage and ask questions, rather than composing a full text prompt. Experimental demos of the AI-enabled pointer are currently available through Google AI Studio, where users can test image-editing and map-based interactions using the point-and-speak approach. The company said it plans to continue testing the concept across additional platforms, including Google Labs' Disco. ®
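Google hasn't published the plumbing, but the "this" and "that" behaviour described above amounts to pairing the spoken word with whatever the pointer is over at that moment. A toy hit-testing sketch - the objects and coordinates are invented for illustration:

```python
# Each on-screen entity the system knows about: a name and a bounding box (x1, y1, x2, y2).
SCREEN_OBJECTS = [
    ("crab", (120, 340, 180, 400)),
    ("rock", (400, 380, 470, 440)),
]

def resolve_this(cursor_xy, objects):
    """Return the object under the cursor - the referent of 'this' or 'that'."""
    x, y = cursor_xy
    for name, (x1, y1, x2, y2) in objects:
        if x1 <= x <= x2 and y1 <= y <= y2:
            return name
    return None

# "Move this here": resolve "this" at the moment the word is spoken,
# then "here" at wherever the cursor sits when the command finishes.
print(resolve_this((150, 360), SCREEN_OBJECTS))  # -> "crab"
```

The hard part, and presumably where Gemini earns its keep, is producing that object list from raw pixels in the first place and guessing intent when the cursor sits between two plausible targets.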
Categories: Linux fréttir

Datacenters are having fewer, but bigger failures

Wed, 2026-05-13 20:48
There's good news and bad news when it comes to datacenter uptime. According to a recent report from the Uptime Institute, bit barns have actually gotten more resilient over the past five years. However, the report suggests that those datacenter failures that do occur are lasting longer and costing more to resolve. According to Uptime, half of the operators surveyed reported an impactful or serious outage in the past three years. “This is the lowest level recorded since 2020 and continues a multi-year trend of improving reliability.” However, the report also finds that datacenter operators may be having a harder time adding additional 9s of reliability to their SLAs. According to Uptime, failure rates are falling at a slower pace, suggesting that existing efforts to improve resiliency may be at the point of diminishing returns. This doesn’t appear to be the result of complacency. Instead, analysts suggest that efforts to improve uptime are being offset by greater system complexity and more challenging operating environments caused by the widespread deployment of power dense infrastructure used in AI training and inference. “Higher rack densities, load variability, and operating closer to available power limits may increase the likelihood of cascading failures,” Uptime warns. Shortages of critical physical infrastructure like generators, switchgear, transformers, and other power and cooling systems have driven some operators to adopt second-hand or unproven hardware. “This is believed to have contributed to several failures and incidents at some datacenters,” the report reads. Power-related failures remain the leading cause of major datacenter disruptions, but even this is improving. “While power issues accounted for 45 percent of respondents’ most impactful outages in 2025, this is down from 54 percent in 2024,” the analysts write. However, the analysts also warn that this could change as local grids are stressed by ever larger datacenter deployments. While Uptime doesn’t expect grid power failure to be a primary cause of outages going forward, grid failures can still affect the availability of onsite power. During an outage, datacenters have a limited window to switch over to onsite generators, which can and do fail. Overburdened grids aren’t the only external factors on Uptime’s radar. The industry watchers note that many public outages have been linked to fiber cuts and other networking disruptions. “Digital infrastructure is becoming more distributed with outages originating outside the datacenter, including those tied to power availability, network connectivity or the reliance on external cloud services playing a larger role,” Uptime Analyst Andy Lawrence said in a statement. According to the report, networking-related issues remain the most frequently cited cause for IT disruptions. Even if the datacenter itself doesn’t fail, a bad network configuration can still result in service outages. The good news is that wide adoption of software-defined networking and automated traffic rerouting has helped mitigate this risk. The report found that 20 percent of those surveyed reported having no IT service outages in the past three years, an improvement of nine points from 2024. Software-level resiliency is helping to mitigate localized disruptions, like a fiber cut, by distributing the workload across multiple sites. However, this software resiliency comes with its own challenges, most notably complexity. 
As we saw with the drone strikes on Amazon’s UAE and Bahrain datacenters, spreading your workloads out across multiple availability zones doesn’t do much good if the failure spreads to multiple sites. While Uptime observed fewer outages in 2025, the report suggests outages may be lasting longer. "While a majority of publicly reported incidents are still resolved within 12 hours (55 percent), the share lasting more than 48 hours has increased for the second consecutive year." As we mentioned earlier, many of these were tied to factors like damaged fiber lines, which Uptime notes occurred more than twice as often as usual. As you might expect, the longer the outage, the more costly it can be, particularly when it concerns highly leveraged AI infrastructure. Uptime reports that one in five outages now exceeds $1 million in total costs, and expects that figure to continue to rise in the coming years. ®
Categories: Linux fréttir

Anthropic butts in to small business, promises help with payroll and other core tasks

Wed, 2026-05-13 20:48
Anthropic is pushing into the small business space with a set of new plug-and-play tools designed for those without a tech team budget, but be warned: Depending on your Anthropic subscription tier, some business data might get sucked up to train Claude.

Anthropic announced Claude for Small Business (CSB) on Wednesday, describing the new plugin as a way for SMB owners without AI expertise to automate the basic business tasks they're saddled with, like payroll, chasing payments, and launching campaigns, that are usually the purview of different departments at the enterprise level. Installation is designed to be dead simple, with Pro, Max, and Team plan users able to add it as a plugin from the Cowork space in the Claude Desktop app. Skills can then be run using natural language prompts or slash commands outlined here.

Users will find "a package of connectors and ready-to-run workflows" inside the CSB plugin, according to the announcement. The aforementioned capabilities of the plugin are part of 15 skills based on common repeatable business tasks, while 15 agentic workflows are also included across areas like finance, operations, marketing and the like.

As for the connectors themselves, Anthropic specifically mentions seven of them included in Claude for Small Business: Intuit Quickbooks, PayPal, HubSpot, Canva, Docusign, Google Workspace, and Microsoft 365. An Anthropic spokesperson told The Register in an email that CSB isn't limited to those connectors, but the skills and workflows rolling out for the plugin were only optimized for those connectors to start. Anthropic told us it chose those products based on the results of a survey of SMB owners, but it plans to add support for more connectors in the coming months. In other words, if you're a small business owner and you rely on a platform not on that list, you'll have to keep waiting a while longer if you want to pull that info into Claude.

Gotta reach 'em all

It's logical that Anthropic is pushing into the SMB space. The company has seen a leap in business customer subscriptions this year, taking advantage of OpenAI's slip in the professional user space, and with growth comes the search for new markets to tap. As Anthropic notes in the announcement, and as many analysts have pointed out, AI adoption among SMBs has historically lagged enterprises. That's to be expected, of course: Enterprises have far more resources to invest in new, unproven technologies and the money to absorb failure when said new tech doesn't pan out as expected.

Anthropic said in its CSB announcement that it specifically designed the new plugin for "those who have historically been last in line for new technology," or small businesses, in other words. The company also launched an AI fluency for small business course to help SMB owners understand what exactly they're installing when they tell Claude to install CSB.

But if you're taking part, you have to be OK with the idea that Anthropic might train its AI on your business data. Anthropic points out in the announcement that it doesn't train its AI models on the data of its business customers "on our Team and Enterprise Plans." But as we noted above, Anthropic is marketing CSB to those on Pro, Max, and Team plans, and the privacy policy page for Pro and Max says something quite different: "We will use your chats and coding sessions (including to improve our models)," the page states.
“Chat and coding session data we may use for improving our models includes the entire related conversation, along with any content, custom styles or conversation preferences, as well as data collected when using Claude for Chrome.” Raw content from connectors isn’t included, the page explains, “though data may be included if it’s directly copied into your conversation with Claude.” This only applies to users who, under regular circumstances, have chosen to allow Anthropic to use chats to improve Claude, but it likely won’t shock any El Reg readers to learn that permission is on by default – Anthropic told us that it's on users to turn it off. If you're copacetic with all this, you can start using CSB today - there’s no extra cost associated with installing the tool for anyone on a Pro, Max, or Teams plan. ®
Categories: Linux fréttir

Bug hunter tracks down three massive MCP flaws and one vendor won't fix theirs

Wed, 2026-05-13 20:17
Security vulnerabilities in MCP servers for three popular database projects could let attackers execute unintended SQL statements on Apache Doris, exfiltrate sensitive metadata from Alibaba RDS, and potentially take over Apache Pinot instances exposed to the internet. Alibaba, meanwhile, declined to patch its flaw.

Apache issued a patch and a CVE tracker for Doris MCP, and there's an open ticket in the MCP Pinot GitHub repository for the flaw, we're told. However, Alibaba decided not to patch the vulnerability in RDS MCP, according to Akamai security analyst Tomer Peled, who wrote about the flaws on Tuesday and will present his full research next month at x33fcon.

MCP, or Model Context Protocol, is an open source protocol originally developed by Anthropic that allows LLMs, AI applications, and agents to connect to external data, systems, and one another. While security issues are never a good thing - and they are especially concerning when they exist in a server sitting between an AI agent and a production database - these in particular point to a larger problem in the way MCPs are developed. "There is missing or faulty security validation between the MCP server and its back end," Peled wrote, adding that these security "gaps will become high-value targets for attackers and we expect more of these issues to surface."

Here's a closer look at all three, starting with the flaw that has since been fixed and assigned a CVE. Apache Doris is a high-speed analytics and search database with more than 10,000 mid- and large-enterprise users. Its MCP server allows AI agents to interact with and perform operations on Doris instances, including running SQL queries and retrieving table and schema metadata - which foreshadows the flaw that was found: CVE-2025-66335, a SQL injection vulnerability that affects Apache Doris MCP Server versions earlier than 0.6.1.

When an MCP tool is called, the server's "exec_query" function fails to validate one of its five parameters (the db_name parameter) before constructing the SQL query. This means an attacker can invoke the function and inject malicious SQL through the db_name parameter, which gets prepended to the beginning of the final SQL statement. Plus, the SQL validator only checks the first portion of the query, so all it sees is the attacker's directive. "As a result, any attacker that gains access to a client connected to the Doris MCP server can execute arbitrary commands on the victim's Apache Doris instance," Peled said. Apache issued a patch in December to fix this flaw.

The second issue, an authentication validation bypass in Apache Pinot MCP, can also lead to SQL injection attacks and full database takeover. Apache Pinot is another super-fast analytics database, and StarTree's MCP integration for Pinot before v2.0.0 allowed users to run queries directly from their AI agent against their Pinot instance. The open-source project uses HTTP as the transport layer without requiring any type of authentication. This exposes the endpoint to remote attackers who can reach it, allowing them to invoke MCP tools, including those used for SQL execution. "In environments where the MCP endpoint is reachable externally, this behavior allows unauthenticated attackers to execute queries against the Pinot instance, which can allow a full remote takeover of the database," Peled wrote.
StarTree has since added OAuth as an authentication option when using HTTP, which he says lowers the threat of SQL injection (but it still exists in the code), and Apache has also opened a security issue in the MCP Pinot GitHub repository. Pinot MCP v1.1.0 and earlier versions are affected. Neither Apache nor StarTree responded to The Register's requests for comment.

The third security flaw, an information disclosure issue in the Alibaba RDS MCP server, also stems from the server not authenticating users before invoking the retrieval-augmented generation (RAG) MCP tool, which allows AI models to connect with and query databases. This means "any client able to reach the MCP endpoint can issue requests to the server without any query validation," according to Peled. "The vector index may contain table names, schema definitions, or other potentially sensitive metadata, and unauthenticated attackers can exfiltrate this data with little or no effort." All versions of Alibaba RDS MCP are affected by this vuln.

The bug hunter says that he reported the issue to Alibaba in November, and the cloud giant told him the issue is "not applicable" for a fix - so it's still in the codebase. Akamai also reported this inaction to the CERT Coordination Center (CERT/CC). Alibaba did not respond to The Register's inquiries.

Peled said that the threat-hunting team, upon starting this investigation, assumed that there would be some baseline security specification for all MCP servers. Turns out they were wrong, and as the research found, flaws like SQL injection, missing authentication, and insufficient query validation exist in the code. "This means that more attention should be given not just to the specification but also to the best security practices guides when developing secure MCP servers," he wrote. ®
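The Doris bug is a textbook example of splicing a caller-controlled identifier straight into a SQL string. A generic sketch of the anti-pattern and the usual fix - this illustrates the bug class, not Doris MCP's actual code:

```python
import re

# Identifiers can't be bound as ordinary query parameters, so allowlist
# their shape before they are ever spliced into SQL text.
IDENTIFIER = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def build_query_vulnerable(db_name, table):
    # Anti-pattern: a caller-controlled identifier is concatenated straight into
    # the statement, so a value like "sales; DROP TABLE users; --" rides along.
    return f"USE {db_name}; SELECT * FROM {table} LIMIT 10"

def build_query_safer(db_name, table):
    # Reject anything that doesn't look like a plain identifier, and do it for
    # every parameter, not just the ones a partial validator happens to reach.
    for ident in (db_name, table):
        if not IDENTIFIER.match(ident):
            raise ValueError(f"illegal identifier: {ident!r}")
    return f"USE {db_name}; SELECT * FROM {table} LIMIT 10"

print(build_query_vulnerable("sales; DROP TABLE users; --", "orders"))
try:
    build_query_safer("sales; DROP TABLE users; --", "orders")
except ValueError as err:
    print(err)
```

The same logic applies to the validator Peled describes: a check that only inspects the first portion of the assembled statement will happily wave through whatever was prepended ahead of it.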
Categories: Linux fréttir

See through local AI lies with Irish eyes

Wed, 2026-05-13 19:35
You may find yourself living with a new tech stack. And you may find yourself in an unfamiliar world. And you may find yourself behind a keyboard and screen, with a mouse and inscrutable AI. And you may ask yourself, how do I vet this? The Irish Council for Civil Liberties (ICCL) Enforce project has a suggestion: install its Verity MCP server. "LLMs [large language models] confidently claim things that are manifestly untrue," the advocacy organization explains. "Enforce has developed Verity, a tool that helps minimise false claims and fake sources from self-hosted LLMs." An MCP (Model Context Protocol) server provides AI models with access to external tools, data, and services. The Verity MCP server offers access to a set of smaller models that will try to assess the accuracy of a primary local LLM, something more people have begun to explore in response to rising prices at cloud AI providers, availability issues, and privacy concerns. Verity is not simply an LLM-as-judge setup in which one LLM evaluates the output of another. Rather, it's a set of seven layers designed to review model output. The system involves: strict rules for fact sourcing; a strong critic LLM that differs from the primary model family; a small critic LLM similar to the strong critic but from different training data; an encoder transformer trained on entailment labels; a regex evaluator; a stochastic re-sampler for catching low-confidence guesses; and a logprob analyser that checks token entropy. The reference build assumes a 2021 PC, with an Nvidia RTX 5070 Ti (16 GB, 2025) for the primary model (Qwen 3.5 9B, Q4_K_M), and an AMD Radeon RX 5700 XT (8 GB, 2019) for the critic models (IBM Granite 3.2 8B & 2B, Q4_K_M). The hardware recommendations call for a system with two GPUs, but that's to allow concurrent delivery of a second opinion from the Verity checker. On a machine with one GPU, like a MacBook Pro or Mac mini, the system can be configured to evaluate the primary LLM's output after the fact. "Even the biggest LLMs inevitably produce false claims," said Dr Johnny Ryan, director of ICCL Enforce, in an email to The Register. "This is a feature of how an LLM works. But this is dangerous when people start to put their faith in these systems. LLMs are being incorporated into judicial systems, public services, corporate life, the military, and people’s private decision making. For example, Google’s automated search summaries will confidently claim things that are not proven by the sources they cite." Ryan said if people intend to rely on LLMs for factual answers, they need a verification process. "One beauty of running LLMs on your own hardware is that you can take steps against this," he said, adding that local operation provides an opportunity to enlist old hardware to offer a second opinion alongside the main model output. "So for example, when your machine is working on an answer to a question you have asked, you can have an old graphics card that independently produces a second opinion using a different LLM on the same machine, and the two can then debate at the end without significantly slowing down the process," he explained. The downside of an all-local approach is that models have a training cutoff and won't be very useful for checking facts established after that date unless armed with tools for online data fetching. If you can get Verity up and running, that shouldn't be a problem. ®
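Most of Verity's layers need models of their own, but the last one - the logprob analyser - is easy to illustrate. A minimal sketch of flagging high-entropy (low-confidence) tokens; the threshold and the toy distributions are ours, not Verity's:

```python
import math

def token_entropy(probs):
    """Shannon entropy (in bits) of one token's probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def flag_uncertain_tokens(token_dists, threshold_bits=2.0):
    """Return positions where the model spread its probability across many
    options - a rough proxy for a guess that deserves a second opinion."""
    return [i for i, probs in enumerate(token_dists)
            if token_entropy(probs) > threshold_bits]

# Toy example: the second "token" is close to a uniform five-way guess
# (about 2.3 bits of entropy), so it gets flagged for review.
dists = [
    [0.97, 0.01, 0.01, 0.01],
    [0.22, 0.20, 0.20, 0.19, 0.19],
]
print(flag_uncertain_tokens(dists))  # -> [1]
```

On its own this only says where the model was guessing; Verity's other layers (the critic models, the entailment check, the re-sampler) are what decide whether the guess was also wrong.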
Categories: Linux fréttir

Dissatisfied: Three-fourths of AI customer service rollouts are a letdown

Wed, 2026-05-13 16:53
If you're thinking you can replace your human call center staff with a server farm of bots, think again. Nearly three-quarters of enterprises that deploy AI customer communications agents later roll them back or shut them down, according to new research suggesting the systems are far harder to manage reliably in production than the AI hype implied. Swedish comms-as-a-service firm Sinch surveyed more than 2,500 AI decision makers from various countries and industries for its AI Production Paradox study. The starkest finding is undoubtedly the 74 percent rollback or shutdown rate for deployed AI customer communications agents tied to governance failures, but that’s not the only sign enterprise AI deployments are falling short of expectations. AI rollback rates, which Sinch told us specifically refer to AI projects that were deployed and pulled from live service rather than projects that failed before launch, actually rise to 81 percent among organizations that it describes as having “fully mature guardrails.” That, says Sinch Chief Product Officer Daniel Morris, suggests governance alone is not fixing the problem. "The most advanced organizations aren't failing less; they're seeing failures sooner. Higher rollback rates reflect better monitoring and control, not weaker performance," Morris said in a press release. “If governance was the fix, the most mature teams would roll back less, not more. Our data points to a deeper issue.” According to the findings, 84 percent of AI engineering teams are spending at least half their time on safety infrastructure, leaving little time to develop AI. This is exacerbated by the fact that most firms said spending on AI trust, security, and compliance ranks ahead of AI development itself. “When 75% put trust, security, and compliance in that top three — ahead of AI development itself at 63% — that’s a finding about where the priority sits within their AI customer communications programs,” a Sinch spokesperson told us in an email. In other words, it seems like most organizations realize that their biggest issue with AI isn’t getting it working properly - it’s getting it to just work safely in the first place. “The operational cost of running AI safely at scale is much larger than most organizations expect,” the Sinch representative explained. The numbers don’t change based on organizational size or budget, either, Sinch told us. “The rollback rate holds consistently across every region and every industry in the study, which suggests size isn’t a meaningful protective factor,” the company said. “Rollback isn’t a symptom of under-investment or being too small to afford proper guardrails.” Of course, as a business communications service provider, Sinch linked its results back to AI customer service agents not being properly deployed on comms infrastructure designed for AI agents, a problem it’s naturally positioned to offer a fix for. Regardless, that three-quarter rollback figure doesn’t seem too out of place when you consider recent customer service automation news. As we’ve reported on multiple occasions, replacing customer service staff with AI hasn’t gone to plan for many businesses. Gartner said in June 2025 that half of organizations expecting AI to significantly reduce customer service headcount would abandon those plans by 2027. Sinch’s numbers suggest the problem may extend beyond staffing cuts to the AI agents themselves. 
Not that far-fetched when Gartner was already warning last year that fully agentless contact centers were not practical in the real world. "Our vendor evaluations reveal that an agentless contact center is not yet technically feasible, nor is it operationally desirable," Brian Weber, VP analyst in the Gartner Customer Service & Support practice, told The Register, adding that unexpected costs and unintended results were contributing to abandonment plans - just like what Sinch is reporting now. ®
Categories: Linux fréttir

Utah mega datacenter could dump 23 atomic bombs worth of energy per day

Wed, 2026-05-13 16:36
A proposed mega-scale datacenter in the US state of Utah has caused controversy after a physics professor estimated that the facility and its associated power generation could dump 23 atomic bombs' worth of energy per day. But the real question is whether it will actually ever get built.

The datacenter is part of the Stratos Project Area in Box Elder County, Utah, overseen by the Military Installation Development Authority (MIDA), a state agency straddling the military, local government, and private developers. Creation of the Stratos Project Area, covering about 40,000 acres of land, was given the go-ahead in a May 4 announcement from the Box Elder County Commission, which had earlier delayed a vote amid residents' concerns.

At full buildout, the proposed Stratos campus could require up to 9 GW of power, making it one of the largest datacenter developments in the world. Meta's planned Hyperion cluster is aiming for 5 GW, for example, while the first facilities hitting 1 GW are only expected to come online this year. For context, 9 GW is roughly comparable to New York City's average electricity demand.

Utah State University physics professor Dr Rob Davies estimated that the proposed Stratos campus and its associated natural gas power plant could dump energy equivalent to 23 atomic bombs per day into the surrounding Hansel Valley. Davies' preliminary analysis said this could raise daytime temperatures by 2°F to 5°F (1°C to 3°C) and nighttime temperatures by 8°F to 12°F (4°C to 6°C), potentially causing serious ecological impacts in the high-desert valley.

Not surprisingly, many have questioned Davies' figures, especially as he hasn't published his math, and the topic has been debated on forums such as Reddit. However, even skeptics such as Andy Masley, a writer and researcher who claims to have taught high school physics, find that the math broadly checks out, so long as the bomb you measure it by is the one dropped on Hiroshima, which, at about 15 kilotons, was much smaller than modern weapons.

The key thing to bear in mind, however, is that an atomic bomb releases its energy all at once in the blink of an eye, whereas in the datacenter's case, the release of the heat will be spread across 24 hours. However, the point Davies was making is that this will be extra energy being pumped into what is already a fragile desert environment, and the figures "strongly indicate the need for thorough and independent ecological assessment" of the impact of the Stratos Project.

A recent study by a team at the University of Cambridge also suggested that datacenters can create heat islands, raising surrounding temperatures by several degrees at distances of up to 10 km (over 6 miles). This was met with skepticism by Omdia Senior Research Director Vlad Galabov, who told The Register that "Simple physics suggests that even very large datacenters contribute only a small additional heat flux when spread over kilometres."

The Stratos Project is intended as a long-term scheme, with a multi-year buildout, meaning that it may not reach full capacity for a decade, if at all. Reports suggest that the finance industry is becoming increasingly concerned about the level of borrowing that is needed to continue this datacenter build boom. The Financial Times reported recently that banks are looking for new ways to offload risks, with JPMorgan Chase and Morgan Stanley trying to distribute datacenter-related deals across a broader range of investors.
CoStar Group also warned that construction costs for modern bit barn campuses have surged, thanks to massive upfront spending on land, power support systems and specialized construction, leading to large projects running into the billions of dollars. According to some estimates, building 1 GW of AI datacenter capacity costs around $35 billion, with Nvidia’s figures said to peg the costs at $50 billion to $60 billion. If correct, the developers of the Stratos facility will be looking at costs in excess of $300 billion at full buildout. Alan Howard, principal analyst for colocation and DC building at Omdia, puts the figure slightly lower, but still sees problems ahead. “Thinking about a rough $8m per MW, that would put datacenter construction at ~$8 billion for just building construction including power and cooling. The power generation and IT equipment would be on top of that, so the number would be over $100 billion,” he told The Register. “What’s important here is that the money comes from different sources: Stratos pays for site development; other companies will likely pay for building construction; even other companies will build and operate the onsite power generation; and even other companies will buy and operate the IT equipment.” “The tricky part is the tepid climate for funding these big projects. While there will be multiple companies providing funding for different pieces, the debt financing underwriting process will look at the broader project as part of their risk assessment,” he stated. Developers in the US and elsewhere are also facing increasing opposition to datacenter projects from local communities, with projects being delayed or entirely canceled in response. ®
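Davies has not published his working, but the headline figure is straightforward to reproduce on the back of an envelope. The short Python sketch below is our own illustration rather than Davies' or MIDA's analysis: the roughly 55 percent combined-cycle efficiency assumed for the on-site gas plant, and the use of the ~15 kiloton Hiroshima yield as the unit, are assumptions made purely for the arithmetic.

# Back-of-envelope check of the "23 atomic bombs per day" estimate.
# Assumptions (ours, not Davies' or MIDA's): 9 GW of IT load fed by an
# on-site gas plant running at ~55 percent efficiency, so the electricity
# and the waste heat of generating it both end up warming the valley;
# one "bomb" is the ~15 kiloton Hiroshima yield.

TNT_JOULES_PER_KILOTON = 4.184e12
HIROSHIMA_JOULES = 15 * TNT_JOULES_PER_KILOTON    # ~6.3e13 J

IT_LOAD_WATTS = 9e9                               # 9 GW at full buildout
PLANT_EFFICIENCY = 0.55                           # assumed, combined cycle
SECONDS_PER_DAY = 86_400

thermal_watts = IT_LOAD_WATTS / PLANT_EFFICIENCY  # ~16.4 GW of heat
joules_per_day = thermal_watts * SECONDS_PER_DAY  # ~1.4e15 J
bombs_per_day = joules_per_day / HIROSHIMA_JOULES
print(f"~{bombs_per_day:.0f} Hiroshima bombs' worth of heat per day")  # ~23

# Cost sanity check using the figures quoted above.
print(f"At $35bn per GW, 9 GW comes to roughly ${35 * 9}bn")           # $315bn

Run with the electrical output alone, ignoring generation losses, the same sums give roughly 12 bombs per day, which is why the assumed efficiency of the gas plant matters so much to the headline number.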
Categories: Linux fréttir

Mystery Microsoft bug leaker keeps the zero-days coming

Wed, 2026-05-13 16:16
The anonymous security researcher who has already maliciously exposed three Windows zero-days this year has revealed two more, dropping them just after Microsoft's monthly Patch Tuesday update. Nightmare-Eclipse, or Chaotic Eclipse, depending on which of their aliases you prefer, released details about YellowKey and GreenPlasma - respectively a BitLocker bypass and a privilege escalation flaw handing SYSTEM access to attackers. Experts speaking to The Register warned that both vulnerabilities present serious security concerns, especially since Nightmare-Eclipse released substantial technical information about exploiting them. Nightmare-Eclipse described YellowKey as "one of the most insane discoveries I ever found." They provided the necessary files, which have to be loaded onto a USB drive; if the attacker completes the key sequence correctly, they are granted unrestricted shell access to a BitLocker-protected machine. We would usually treat claims like these with some caution, not least because the bug requires physical access to a Windows PC. However, seeing that BitLocker acts as Windows' last line of defense for stolen devices, bypassing the technology grants thieves the ability to access encrypted files. Rik Ferguson, VP of security intelligence at Forescout, said: "If [the researcher's claim] holds up, a stolen laptop stops being a hardware problem and becomes a breach notification." Despite the physical access requirement, Gavin Knapp, cyber threat intelligence principal lead at Bridewell, told The Register that YellowKey remains "a huge security problem for organizations using BitLocker." Citing information shared in cyber threat intelligence circles, he added that YellowKey can be mitigated by implementing a BitLocker PIN and a BIOS password lock. Nightmare-Eclipse hinted at YellowKey also acting as a backdoor, allegedly injected by Microsoft, although the people we spoke to said this was impossible to verify based on the information available. The researcher also published partial exploit code for GreenPlasma, rather than a fully formed proof-of-concept (PoC) exploit. Ferguson noted attackers need to take the code provided by the researcher and figure out how to weaponize it themselves, which is no small task: in its current state it triggers a UAC consent prompt in default Windows configurations, meaning a silent exploit remains a work in progress. Knapp warned that these kinds of privilege escalation flaws are often used by attackers after they gain an initial foothold in a victim's system. "These elevation of privilege vulnerabilities are often weaponized during post-exploitation to enable threat actors to discover and harvest credentials and data, before moving laterally to other systems, prior to end goals such as data theft and/or ransomware deployment," he said. "Currently, there is no known mitigation for GreenPlasma. It will be important to patch when Microsoft addresses the issue." Four, five… and more? YellowKey and GreenPlasma are the latest in a series of five Microsoft zero-day bugs the researcher has exposed this year. When Nightmare-Eclipse released BlueHammer (CVE-2026-32201, CVSS 6.5) - patched by Microsoft in April - they were described as a disgruntled researcher and have since been rumored to be a former Microsoft employee. According to their maiden blog post under the Chaotic Eclipse alias, the bug leak began after an alleged violation of trust. "I never wanted to reopen a blog and a new GitHub account to drop code," they wrote.
"But someone violated our agreement and left me homeless with nothing. They knew this will happen and they still stabbed me in the back anyways, this is their decision not mine." In early April, the researcher leaked proof-of-concept code for Windows Defender exploits they called RedSun and UnDefend - another admin privilege escalation bug and denial-of-service flaw, respectively - as well as BlueHammer. Both RedSun and UnDefend remain unfixed, and according to Huntress, the proof-of-concept code released was quickly picked up and abused in real-world attacks. Ferguson described the exposure of YellowKey and GreenPlasma as the latest in an escalating, retaliatory campaign against Microsoft, and warned of more coming. "Prior releases include BlueHammer and RedSun, both of which attracted serious community attention and real forks," he said. "The same post linking yesterday's releases warns of another Patch Tuesday surprise and hints at future RCE disclosures. They claim to have a dead man's switch with more ready to go. This researcher has followed through on every prior threat." ®
Categories: Linux fréttir

Rust stalks IBM mainframes, but only in nightly form

Wed, 2026-05-13 15:21
IBM's effort to bring in-kernel Rust to its mainframe platform has taken a step forward, although anyone hoping to use it on production iron will need to be comfortable with a nightly Rust compiler for now. Engineer Jan Polensky has submitted a patch series titled "s390: enable Rust support and add required arch glue." If accepted, it will allow Rust code to be used in the Linux kernel on IBM mainframe hardware, which the kernel still refers to as s390 after the generation of IBM mainframe kit introduced in 1990. He notes that, for now, the work relies on a nightly Rust toolchain. Using a nightly build of the Rust compiler does not sound to us like the sort of thing many conservative mainframe shops are likely to embrace with enthusiasm, but even big new features have to start somewhere. This is a significant step. When Rust was introduced into the kernel in 2022, The Register mentioned a problem that we rarely see raised elsewhere: while the kernel is generally compiled with GCC, the standard Rust compiler, rustc, is based on LLVM instead. Wikipedia has a list of LLVM backends, and although the number is growing, it's a shorter list than GCC's 48. There is an experimental GCC front-end for Rust but it's not ready for prime time yet. The Linux kernel itself has supported compilation using LLVM since kernel 6.9 over two years ago. At the moment, the kernel development team is still working on version 7.1, which at the time of writing is on release candidate 3 – so relatively early days. Last month, we reported on its new NTFS driver and the removal of some fairly ancient hardware support. The final version of Linux 7.1 will probably appear about halfway through 2026, meaning that kernel 7.2 is still quite far off. It might be in time to appear in Ubuntu 26.10 – but then again, we suspect that very few IBM mainframe customers use interim Ubuntu releases. ®
Categories: Linux fréttir

Royal Household seeks £3M finance system fit for a King

Wed, 2026-05-13 14:33
The UK's Royal Household plans to spend £3 million ($4 million) on a new finance system to replace one that is more than 15 years old, the Buckingham Palace-based organization said in a procurement notice published on May 7. King Charles III gets to announce the British government's overall plans at the start of each parliamentary session but also has his own miniature civil service in the shape of the Royal Household, described by the procurement notice as a "public undertaking (commercial organization subject to public authority oversight)." The household received £86.3 million ($116 million) in taxpayers' money, known as the Sovereign Grant, in 2024-25, and generated a further £21.5 million ($29 million) from activities including tours of Buckingham Palace. The King also receives private income from the Duchy of Lancaster, an estate held in trust for the sovereign, while the Royal Collection Trust, a charity, cares for the royal art collection and runs visitor attractions including galleries at Buckingham Palace and Holyroodhouse in Edinburgh. According to the procurement notice, the Royal Household contacted suppliers on various Crown Commercial Service (now Government Commercial Agency) frameworks for information last year. It now plans to award a contract for financial software, implementation, training, and support, initially for five years from September 30, 2026, with a possible two-year extension, covering the household and a number of affiliated organizations. To manage this project, last year the household's Privy Purse and Treasurer's Office – its back-office division that runs finance, human resources, technology and facilities management – advertised for a finance systems technical project manager, a two-year fixed-term contract paying £60,000 to £65,000 annually. According to the job advert, the role requires "a proven track record of delivering successful ERP or finance system projects" and the ability to "tailor your style to suit technical and non-technical audiences alike." As well as a 15 percent pension contribution, perks of leading the installation of His Majesty's new financial software include free entry to royal locations and 20 percent off at Royal Collection Trust gift shops. ®
Categories: Linux fréttir

Microsoft aims to speed Windows with 'leap forward' in WinUI 3 perf

Wed, 2026-05-13 13:49
Microsoft claims to have achieved a "leap forward" in performance for WinUI 3, the current native framework for Windows apps, with a 25 percent improvement for the parts of File Explorer coded using this framework. Software engineer lead Beth Pan posted figures for the WinUI portion of File Explorer, showing 41 percent fewer memory allocations and 45 percent fewer function calls. She added that some optimizations "involve small or large breaking changes," so they will be opt-in at first for developers using the framework. The plan is for the optimizations to become the default in future versions of WinUI and the Windows App SDK, with opt-out available when needed. The new optimizations are part of a push to make Windows more responsive. In March, Windows boss Pavan Davuluri promised to improve the quality of the operating system, including a commitment to a "faster and more dependable File Explorer." His post noted that Microsoft intends to "move more experiences to WinUI 3" for faster responsiveness. Pan's post is bittersweet for developers. Performance issues with WinUI 3 have been well known for years. Although Microsoft calls it a native framework, that is a stretch. WinUI 3 is based on WinRT (Windows Runtime), a component interface first used in Windows 8 that sits between application code and the underlying Win32 API, which has a better claim to being native. An advantage of WinUI 3 is its support for Fluent UI, the Windows design system. Developers using WinUI 3 get the Windows 11 look and feel, but not the best performance. "WinUI 3 is currently measurably slower than both WPF [Windows Presentation Foundation] and UWP [Universal Windows Platform]… this is NOT OK," said one comment. Another said that "you can't build a WinUI app and call it smooth at the same time." Component vendor DevExpress has also posted about WinUI 3 performance issues. The company stated that WinUI component architecture has the potential for fast rendering and animation, but that "unfortunately, each action within a component requires WinRT interop, which is slow." These concerns undermine Davuluri's hope that using more WinUI 3 will fix Windows 11 performance, unless the framework itself is improved, as Pan now claims. Another longstanding gripe among Windows devs is that Microsoft's developer division has created frameworks that the Windows and Office teams have not always adopted consistently. Internal tensions go back many years. Some may still remember early builds of "Longhorn," the code name for Windows Vista, having to be reworked before Vista's eventual release in 2007 because of performance issues with .NET. This caused distrust of .NET in the Windows team. "What you need to do is actually use your framework across the company," said another comment. Pan replied, insisting "that's the push." This is exactly what developers using WinUI 3 want to hear, but the long and tangled history of Windows UI frameworks suggests that a consistent and enduring company-wide approach is unlikely. ®
Categories: Linux fréttir

SpaceX sets date for Starship test that asks: Did we break anything in the upgrade?

Wed, 2026-05-13 13:29
SpaceX has named the earliest date for the next Starship launch – May 19. The company has already completed a Wet Dress Rehearsal (WDR), so the next step is to cross fingers and launch the stainless steel behemoth. The launch window for the 12th flight test of Starship opens at 5:30 pm CT, and, as with previous test flights, the vehicle will be on a suborbital trajectory. The launch, from an entirely new pad, will be the first flight of SpaceX's third-generation Starship and will validate that there have been no inadvertent regressions. Raptor 3 engines power the Starship and Super Heavy Booster. On the booster, SpaceX has reduced the number of grid fins used during recovery from four to three, increased their size by 50 percent, and added a new catch point, although there are no plans to catch the booster on the next flight – it is destined for the Gulf of Mexico. SpaceX wrote: "As this is the first flight test of a significantly redesigned vehicle, the booster will not attempt a return to the launch site for catch." For Starship, changes include a redesign of the propulsion system, increased propellant tank size, and improvements to the reaction control system. The Starlink dispenser mechanism has also been updated to increase satellite deployment speed – 22 mass simulators will be carried on this mission. Ultimately, this is a considerably enhanced rocket, with more powerful engines and a new launchpad and tower. Hence the need to demonstrate that nothing has been broken along the way. The objectives are therefore familiar. SpaceX will call the booster's flight a success if it achieves launch, ascent, stage separation, a boostback burn, and finally a landing burn in the Gulf of Mexico. Starship's objectives are to deploy Starlink simulators, which will also be on a suborbital trajectory to burn up harmlessly, restart a single Raptor engine, and survive a controlled re-entry, although the vehicle will not be recovered for reuse this time around. Two of the Starlink simulators will be able to scan and capture imagery of the vehicle's heat shield, allowing engineers to assess its readiness for return to the launch site on future missions. As well as painting some heat shield tiles white to simulate missing tiles for the imaging test, SpaceX has removed a single tile to see what happens to those around it. Finally, the vehicle will attempt a dynamic banking maneuver to mimic the trajectory of a future return-to-Starbase mission. ®
Categories: Linux fréttir

Greater Manchester still says no to NHS data platform with Palantir at its heart

Wed, 2026-05-13 13:07
One of the UK's biggest health regions has doubled down on its decision not to join the NHS Federated Data Platform (FDP), owing to concerns over its lead supplier, Palantir, and a lack of evidence for the technology's benefits. Greater Manchester Integrated Care Board (ICB), which manages health services for 2.8 million people, deferred a decision on whether to sign up to the FDP last year. It is the only ICB in England to do so. A board meeting in May 2025 heard that NHS England had not addressed the ICB's concerns about risks. The ICB added that Greater Manchester's capability in data analytics was greater than what the FDP currently offered. In a November meeting, Greater Manchester ICB said it would review its position. However, a recent Freedom of Information response said that review was now off the table, and the ICB would stick with its decision not to join the FDP. "It was proposed that a paper would be produced in due course to guide a review of the ICB position," the response said. "This paper has not been produced yet and work has not started on this paper because it's clear that the public concerns have heightened rather than diminished since the deferral decision has been taken and there does not appear to be any compelling evidence that the value proposition for NHS GM from FDP has materially changed in favour of adoption." NHS England has been offered the opportunity to comment. The FDP was created by Palantir under a much-criticized £330 million procurement for a seven-year contract awarded in November 2023. NHS England signed the deal after it awarded £60 million to the vendor without competition during the pandemic. The FDP is designed to improve information flow through various NHS organizations and reduce the backlog in non-urgent "elective care," which skyrocketed during the COVID-19 outbreak. NHS England confirmed that Palantir staff could access patient data following a change in policy, provoking outrage from those concerned about the US spy-tech firm's place at the heart of NHS data after a series of outspoken political positions from its leadership. Last month, the junior minister responsible for the FDP said the government would consider using a break clause in the FDP contract to remove Palantir, although he defended the system's performance. Liberal Democrat MP Martin Wrigley claimed the NHS was locked into the Palantir contract and owned none of the software or intellectual property resulting from it. ®
Categories: Linux fréttir

Microsoft gives Windows Update a Ctrl-Z for bad drivers

Wed, 2026-05-13 13:00
Microsoft is getting to grips with Windows drivers that leave the operating system in an unstable state, using a proactive rollback mechanism dubbed "Cloud-Initiated Driver Recovery." "When a driver is identified as having quality issues during our shiproom evaluation process, Microsoft can now initiate a recovery action from the cloud, replacing the problematic driver on affected devices without requiring manual intervention from the user or the hardware partner," the company explained. The process applies to drivers distributed through Windows Update. Microsoft's partners can use the Windows Update service to distribute new code, but sometimes things go awry, and no amount of finger-pointing from Redmond will dispel the general feeling of instability and unreliability that surrounds the company's flagship operating system. A cynic might wish that bugs were ironed out before faulty drivers are inflicted on users via Windows Update, or wonder why this functionality was not already present, but this is a step in the right direction. Previously, if a vendor found a problem with a driver, remediation involved either a swift driver update or having the user perform manual steps to remove the faulty code. In some cases, users could be stuck with a defective driver for an extended period. The change means Microsoft can now proactively kick off recovery action, rolling back the code to the last known good version via Windows Update. The hardware partner doesn't have to do anything to get the balky code off user computers – it's all handled by Microsoft. We would, however, expect some words with the vendor and the code fixed before long. For Windows device users, the change is expected to improve quality and reliability. Driver partners, meanwhile, can leave Microsoft to handle rollbacks when defects are detected. The change should also be transparent to users, although partners will need to be aware of what is happening. Microsoft wrote: "We encourage partners to continue monitoring their driver quality metrics in the Hardware Dev Center dashboard and to respond promptly to any shiproom feedback on rejected submissions." Rollout will happen over the coming months. ®
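Microsoft handles the rollback itself, but admins who want to see which third-party driver packages are staged on a machine - the packages such a recovery would swap out - can enumerate them locally. The sketch below is our own illustration using the built-in pnputil tool, not part of Microsoft's announced feature, and assumes an English-language Windows install and an elevated prompt (the field names differ in other display languages).

# Minimal sketch: list the third-party driver packages currently staged
# in the Windows driver store, i.e. the packages a rollback would replace.
import subprocess

output = subprocess.run(["pnputil", "/enum-drivers"],
                        capture_output=True, text=True, check=True).stdout

# pnputil prints one "Field: value" block per package, separated by blank
# lines; fold each block into a dictionary so it can be filtered easily.
packages, current = [], {}
for line in output.splitlines():
    if ":" in line:
        key, _, value = line.partition(":")
        current[key.strip()] = value.strip()
    elif current:
        packages.append(current)
        current = {}
if current:
    packages.append(current)

for pkg in packages:
    print(pkg.get("Published Name", "?"),
          pkg.get("Original Name", "?"),
          pkg.get("Driver Version", "?"))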
Categories: Linux fréttir
