TheRegister
Agent harnesses, like OpenClaw, are changing how we build and run AI models
After nearly four years and hundreds of billions burned building smarter and more capable models, folks understandably would like to see them do something more than run a chatbot. In this respect, OpenClaw served as blood in the water, demonstrating that, in spite of a seemingly endless supply of security flaws, LLMs really can be used to automate complex tasks.

Since then, you've probably noticed the term "harness" coming up more frequently to describe agentic AI frameworks, and for good reason. You don't need a harness to interact with a chatbot – local tools like Ollama send API calls directly to the LLM – but for today's advanced agentic work, harnesses are essential.

On their face, AI harnesses are just a bit of code that wraps around an LLM's API endpoint, orchestrates tool calls, and manages context. OpenClaw, Claude Code, Codex, and Pi Coding Agent are all examples of code-focused harnesses you may already be familiar with. As simple as all this sounds, harnesses are changing the way we think about everything from training new models to how we build and run them at scale.

LLM inference on its own is pretty dumb – not the models so much as the way we interact with them. The OpenAI-compatible API calls that have become the de facto standard are transactional: with most early chatbots, you made a request and the API supplied a response. A harness, by comparison, orchestrates those API calls, breaking one request down into many. If you were to ask a code agent to build an app that parses logs, the harness might make one request to plan things out, another to review the log directory, a third to generate and execute that code in an interpreter, and a fourth to debug and fix any errors. This multi-step loop continues until the work is done or the harness cuts it short to ask for user input.

At least for coding, these harnesses are getting good enough to be useful.
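The multi-step loop described above can be sketched in a few lines. This is a minimal illustration, not any real harness's implementation: `call_llm` is a placeholder standing in for an OpenAI-compatible API client, and the step structure is invented for the example.

```python
# Minimal agent-harness loop: one user request becomes several model calls.
# call_llm() is a stand-in for any OpenAI-compatible chat-completion client.

def call_llm(prompt: str) -> str:
    """Placeholder for a real API call; returns a canned response here."""
    return f"<response to: {prompt[:40]}>"

def run_agent(task: str, max_steps: int = 4) -> list[str]:
    transcript = []
    # Step 1: ask the model to plan before touching any tools.
    plan = call_llm(f"Plan the steps needed to: {task}")
    transcript.append(plan)
    # Steps 2..n: a real harness alternates tool calls (read files,
    # run code in an interpreter) with further model calls.
    for step in range(max_steps - 1):
        result = call_llm(f"Execute step {step + 1} of: {plan}")
        transcript.append(result)
        # A real harness would stop early on success or ask the user for input.
    return transcript

steps = run_agent("build an app that parses logs")
print(len(steps))  # one planning call plus three follow-up calls
```

The point is the shape, not the detail: a single user request fans out into a planning call plus a series of tool-mediated follow-ups, each of which is its own API transaction.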
In fact, the harness may have a bigger impact on whether a code assistant succeeds than the model itself. Even Qwen3.6-27B, a small-to-medium-sized LLM, proved to be a surprisingly effective alternative to larger paid models when paired with harnesses like Anthropic’s Claude Code or Cline. And yes, if you didn’t know, Claude Code works with any model you like. Indeed, the realization that small models with well-designed harnesses can now automate complex tasks has contributed to a shortage of Mac Minis, as AI enthusiasts race to self-host OpenClaw and LLMs on them.

Changing the way we build models

Training dominated the first two years of the AI boom. OpenAI, Google, Microsoft, and others raced to build smarter models using as much data as they could harvest. But by the end of 2024, the payoff of building ever larger models started to taper off, as the extra parameters yielded only small gains in intelligence.

DeepSeek R1 brought “reasoning” models and test-time scaling to the mainstream. To be clear, these models don’t actually reason; instead, they trade time and tokens for higher-quality answers and a lower propensity to make stuff up (aka "hallucinate," although we at El Reg try to avoid anthropomorphizing AI). R1 wasn’t the first – OpenAI’s o1 beat it to market – but it was the first widely adopted open-weights model that used reinforcement learning (RL) to teach the model new skills, like chain-of-thought reasoning.

Over the past year, agentic code assistants have steadily gained traction. Consequently, people are increasingly using RL to teach models to use the tools and resources that agent harnesses expose to them. Look at many of the recent model releases on Hugging Face and you’ll notice a strong emphasis on agentic tool calling and long-context reasoning. If you want a model to work effectively with an agent harness, it needs to execute tool calls reliably.
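"Executing tool calls reliably" means, concretely, emitting valid JSON that matches a schema the harness supplied. A rough sketch of what that contract looks like, using the de facto OpenAI-compatible tool-definition shape (the tool name, path, and model output below are invented for illustration):

```python
import json

# A tool definition in the de facto OpenAI-compatible shape: the harness
# sends this schema with every request so the model knows what it may call.
list_dir_tool = {
    "type": "function",
    "function": {
        "name": "list_directory",
        "description": "List files in a directory on the user's machine",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}

# What a well-trained model should emit: valid JSON matching the schema.
model_output = '{"name": "list_directory", "arguments": {"path": "/var/log"}}'

call = json.loads(model_output)
required = list_dir_tool["function"]["parameters"]["required"]
missing = [k for k in required if k not in call["arguments"]]
print(call["name"], "ok" if not missing else f"missing {missing}")
```

A model that gets the name wrong, drops a required argument, or produces malformed JSON breaks the loop, which is why tool-calling accuracy now features so prominently in model cards.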
And since those tool calls can return large quantities of information, you also need the model not to lose track of it. While these qualities make for better agentic models, they also demand a very different set of hardware.

CPUs take center stage

Compute to run these agent harnesses is in high demand. After living in the shadow of high-end GPUs and AI accelerators for the past few years, CPUs are back in the limelight. Intel Xeon processors are selling faster than Intel can make them. Meta is buying up every chip it can get from Arm and Nvidia, and renting boatloads of Amazon’s Graviton CPUs while it awaits delivery. This is happening because agent harnesses don’t run on GPUs.

Even with enough CPU cores to execute these tasks at scale, the sheer volume of requests is also reshaping the way we run models. If you haven’t noticed, inference costs have been on the rise. OpenAI recently raised the price of GPT-5.5, Microsoft moved GitHub Copilot to a purely usage-based pricing model, and Anthropic could soon force Claude Code users onto its pricier “Max” subscriptions.

Some of this is because of increased demand. Like it or not, vibe coding is catching on and probably isn’t going away. However, we suspect some of it may be down to the fact that these models are running on hardware that was originally built for training and is now having to pull double duty for inference. Only in the last year and a half have we started to see inference-optimized systems like Nvidia’s NVL72 racks hit the market. AWS, AMD, and others are now racing to catch up with rack-scale compute platforms of their own.

But it turns out that even these systems aren’t enough on their own. If agentic code harnesses are making dozens of requests, each generating hundreds of lines of code, inference performance becomes a major bottleneck. In the early days of ChatGPT, it might have been enough to churn out tokens faster than the average person could read.
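Some rough arithmetic shows why the bottleneck moves. Every number below is an illustrative assumption, not a vendor figure, but the shape of the result holds for any plausible values:

```python
# Rough throughput arithmetic: why agent loops stress inference serving.
# All numbers are illustrative assumptions, not vendor figures.
GEN_SPEED = 50                 # tokens/second from one serving slot

requests_per_task = 40         # model calls in one agentic session
output_per_request = 2_000     # tokens generated per call

# A chatbot turn: one answer, and the human reads along as it streams.
chat_wait = output_per_request / GEN_SPEED

# An agent loop: every call must finish before the next can start,
# so generation latency compounds across the whole session.
agent_wait = requests_per_task * output_per_request / GEN_SPEED

print(f"one reply: {chat_wait:.0f}s; full agent loop: {agent_wait / 60:.0f} min")
```

At human reading speed, 40 seconds per answer is invisible; multiplied across an agent's serialized loop it becomes tens of minutes of wall-clock wait, which is exactly the gap interactivity-focused hardware is chasing.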
Remove the meatbag from the equation and speed becomes everything. GPUs are incredibly compute-dense parallel processors, but their memory isn’t great for the kind of auto-regressive large models these harnesses are saddled with.

Groq and Cerebras get their moment under the AI sun

Faced with these challenges, infrastructure providers have adopted new compute architectures that combine GPUs with specialized AI accelerators. Nvidia’s acquihire of Groq is a prime example. Late last year, Nvidia dropped $20 billion to license the AI chipmaker’s language processing unit (LPU) chip tech and hire away its engineering staff. As we wrote at the time, Nvidia could have built its own SRAM-heavy decode accelerator if it wanted to, but it was faster to use someone else’s.

By combining its compute-heavy GPUs with Groq’s high-bandwidth LPUs, Nvidia was able to churn out more tokens faster and, in theory, improve the economics for AI agents. Higher interactivity is key for agentic workloads because they can then serve more requests in the same amount of time, or “think” about the information that’s been provided to them for longer. We’ve previously explored Nvidia’s new Groq-based LPXs back at GTC, as well as the market dynamics behind the multi-rack architecture. AWS is using recently public Cerebras Systems' wafer-scale AI accelerators in much the same way, while Intel is now working with SambaNova on its own disaggregated compute architecture.

The pendulum swings

Given the sheer amount of compute these agent harnesses require, there’s a good chance we’ll start to see hyperscalers cut costs by offloading some of the work onto client devices. Because of the way these harnesses work, simpler requests like planning could be run on small models running locally on the user’s PC. In fact, Google appears to be doing just that.
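What would such a local/cloud split look like inside a harness? A hypothetical routing policy might be as simple as the sketch below; the step taxonomy and model names are invented for illustration, and a real harness would more likely route on estimated difficulty or context size than a fixed step list:

```python
# Hypothetical router: cheap planning/drafting steps go to a small local
# model; heavy debugging goes to the cloud. All names are illustrative.
LOCAL_STEPS = {"plan", "draft", "test"}

def route(step: str) -> str:
    """Pick a backend for a harness step (invented labels)."""
    return "local-4b" if step in LOCAL_STEPS else "cloud-frontier"

workflow = ["plan", "draft", "test", "debug", "test"]
assignments = {s: route(s) for s in set(workflow)}
offloaded = sum(1 for s in workflow if route(s) == "local-4b")
print(f"{offloaded}/{len(workflow)} steps stay on the client device")
```

Even a crude policy like this moves most of the request volume off the datacenter, which is the economic appeal for hyperscalers.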
As we reported earlier this month, Google quietly began shipping as part of Chrome a small LLM that eats up 4 GB of disk space, and presumably just as much memory when in operation. The model appears to power basic features like “help me write,” scam detection, and the other AI-assisted functionality that has steadily invaded our browsers of late.

It’s not hard to imagine code agents doing something similar. A small local model could be used to draft and test code snippets while the larger cloud-hosted model is used to debug and correct errors, shifting much of the load off datacenters and onto client devices. For that to work, we’re going to need systems with a whole lot more high-speed memory, which poses a bit of a problem in light of the DRAM and NAND shortage.

While user-facing agent harnesses could be used to shove some of the computational load onto customer devices, many still want to see agents carrying out entire departments' worth of work. Take the human out of the loop, and these agents wouldn’t be constrained by the limitations of their fleshy masters and could work orders of magnitude faster given enough compute resources. So, just like the rise of PCs didn’t spell the end of mainframes, local AI is unlikely to end investors' obsession with ever hotter and more power-hungry bit barns any time soon. ®
Categories: Linux fréttir
Enough with the AI FOMO, go slow-mo, says Domo CDO
Chris Willis, chief design officer and futurist for data platform biz Domo, wonders why people aren't more annoyed with AI companies. Willis said he was in San Francisco a few weeks ago and couldn't fathom the lack of resentment. "Why aren't people more resentful that these companies have pushed this technology upon them, and now everyone is feeling a tremendous amount of anxiety?" he told The Register in an interview. "I'm sure you've seen the surveys and the research. Everyone from the C-suite on down feels like the clock is ticking and their careers are on the line."

San Francisco is the home of OpenAI and Anthropic. Google, Microsoft, and Amazon are also in town. So there's a lot of self-interested AI enthusiasm in the city by the bay. The resentment is there if you look beyond the billboard evangelism shouting its way down the US 101 corridor that connects the city to Silicon Valley proper. But the existential dread behind Stop AI, Pause AI, Poison Fountain, and the firebombing of OpenAI CEO Sam Altman's home isn't quite what Willis has in mind. He's concerned with the way AI has been marketed through fear: act now or be left behind by this technology that might just take everyone's job and enable DIY biological weapons, now that LLMs can more reliably count the number of "r"s in "strawberry."

"Fear," he said, "is not a durable strategy for innovating."

The problem as Willis sees it begins with the fact that AI models are a product without a spec. "When you're trying to create a product and you're trying to figure out how that product fits in the market, you have to figure out who it's for and what it's going to do and what it's not going to do," he said. "And these large language models, essentially the feature spec is: 'It'll do anything for anyone, anyway, anyhow, in any language.'" So it's not surprising, he said, that there's some confusion.
"From a leadership perspective, we've seen many times the pattern where there is a lot of pressure for companies to suddenly innovate with a technology that's not well understood," he said. "And so organizations are spending a lot on buying these AI tools and then expecting innovation to just happen. And that's not usually how innovation works."

What company leaders face, he said, is not an innovation problem but an impatience problem. "They're thinking, 'We have to do something now,'" he said, "and so AI in many ways is becoming a sort of theater. We have to show that we're doing something."

The phenomenon known as "tokenmaxxing" – buying access to AI models and directing or expecting employees to use them as much as possible – illustrates the lack of strategy, Willis said. "In certain organizations where AI is theater and impatience is driving rather than innovation, tokenmaxxing is a convenient way to feed that narrative," he said. "But it doesn't change anything. The research does suggest that you might have people putting through a lot of tokens and maybe they are personally becoming more productive. But it's not changing the bottom line."

The deeper problem, he said, is that companies are treating AI itself as a solution rather than as a tool to help power the solution. The result is a lot of proof-of-concept projects that lack what's required to make them durable, trustworthy, and deployable at scale.

Starting with business needs first is essential, Willis argues. "If you don't understand the process and the automations and the workflows in your business, you run the risk of putting in a very powerful engine that's going to drive your business way faster, but with the lights off, at night," he said. Willis suggests companies should not set moonshot goals for AI, but instead start with something simple, like automating processes tied to a spreadsheet.
He described work done with one customer that involved developing an app to go through company invoices, check for discrepancies, and surface anomalies for review by a person. The clients were thrilled. Understanding where human judgement is required, and where decisions can be verified and hence automated, is key, he said. "Usually that question is not asked."

Failing to ask questions like that invites problems. Willis pointed to the way Swedish fintech biz Klarna replaced customer service staff with AI, only to backtrack and replace the AI with people. "It's very enticing to say we're just going to replace everything with a chatbot," he said. "Frankly, no customer ever just wants to talk to your chatbot."

Willis said there's no magic formula for innovating. Companies need to do the hard work of understanding how AI may or may not be useful for the desired outcome. "There will be a reckoning when it comes to budgets around these things," he said, "because CFOs are starting to ask, 'Why are we spending all this money and not gaining anything?'" ®
Classic 7 is Windows 10 LTSC cosplaying as Windows 7
For those who miss what Windows looked like in 2009, Classic 7 is a heavily modified version of Windows 10 IoT LTSC, reworked to look as much as possible like Windows 7 while still being in support and receiving updates. This has been accomplished thanks to a large compilation of skins, themes, add-ons, tweaks, and so on – some of which are real components from older versions of Windows, adapted and modified to run on Windows 10.

We were not sure whether to cover Classic 7, because while it is impressive and fun, we are not at all sure it is legitimate to use. But we can see a target audience.

This isn't just a layer of makeup; it's more like a face transplant. It includes some real binaries from Windows 7, and indeed earlier versions, adapted and grafted onto Windows 10. One component is the Windows Media Center from Windows XP, a feature that was cut from Windows 10 before release.

The specific version of Windows 10 that's been modified is significant: it's Windows 10 IoT LTSC. We talked about this specific edition in April 2025 because it's the last version of Windows 10 that is still in support and receiving updates. The standard Windows 10 Enterprise LTSC release will continue to receive updates until 2027, and the IoT edition, which is only available in US English, will get updates until 2032 – so this is the longest-lived version of Windows 10.

At the bottom of our story on Windows 10 LTSC, we mentioned the slightly shady world of third-party modified editions of Windows. Classic 7 is one of them: a modified version of an Enterprise edition of Windows, one that's only available for legitimate licensing via a Volume License Agreement. Unless you have appropriate volume licensing for the underlying Windows edition and have paid the fairly hefty fee, this is an unlicensed copy of Windows. So we have to spell out that this is not for production use, and you should not use it in any working environment.
It's an interesting hack, though, and it might be a bit of fun for a home gaming machine or something like that. As an aside, one of the most widely used tools for activating unauthorized copies of Windows and Office, MassGrave, is in fact hosted on GitHub. In other words, Microsoft itself is hosting tools to activate unlicensed copies of Windows and Office. Whether that counts as tacit approval, we wouldn't like to say.

Classic 7 has been under construction for over a year and a half, and it's the sequel to an earlier project called Reunion7 – also hosted on GitHub, as it happens. As its list of credits shows, Classic 7 is in part a compilation of a lot of existing tools. Some of them are relatively well known, such as Winaero Tweaker, which can run on any copy of Windows and, among lots of other options, allows some of the less desirable changes in the Windows UI to be undone – for instance, switching to the hidden Aero Lite theme. Classic 7 includes this and a lot more besides.

We could identify some of the couple of dozen credited projects, such as the Aero11 theme, itself a port of Aero10 to Windows 11. This works alongside OpenGlass, which brings Aero-style transparency to Windows 10. There's also the Windows NT Modding Utility, and another hack that lets you change the Windows version number reported on the command line, called Custom CMD Version Text. Multiple sub-components come from the Windhawk mods collection, some credited to a developer called ImSwordQueen, whose themes can be seen on DeviantArt.

Other components are more than just cosmetic. For instance, the remarkable description of Explorer7: "explorer7 is a wrapper library that allows Windows 7's explorer.exe to run properly on modern Windows versions, aiming to resurrect the original Windows 7 shell experience." So this is not merely a theme for Windows 10 Explorer: as far as we can tell, it's the real Windows 7 Explorer, but running on top of 10.
The same appears to apply to Control Panel, thanks to the Control Panel Restoration Pack. And thanks to the Windows Media Center (Modern Hardware) effort, the bundled Media Center is the real XP version, which an on-screen message says replaced the Windows 8 version used in an older build.

We tried Classic 7 in VMware, and the experience is quite uncanny. We did hit some glitches: our first installation failed when we let it do its own disk partitioning. Deleting all the partitions, manually creating a single large C: drive, and telling the installer to use that worked. A few error messages did appear here and there. Trying to change screen resolution went badly awry until we installed the VMware guest additions. Opening Windows Update just threw an error.

Overall, though, it is genuinely remarkable. It looks and feels like Windows 7 – but in principle, you can run the latest apps and drivers and they should work. It even includes your choice of older Firefox versions, including version 115 ESR, skinned to look exactly like Internet Explorer – an effort called BeautyFox.

Last year, we wrote a piece on running Windows 7 in 2025, and it reminded us how great the 2009 release looked compared to anything that's come since. Apparently, that late-noughties translucent look is now known as Frutiger Aero, and frankly we miss it.

In all honesty, we feel Classic 7 goes too far. We don't want the Help/About dialog boxes, the winver tool, and the ver command all lying to us. We'd prefer something that told the truth, but looked pretty while doing it.

But as we wrote last year, some personal friends are still running Windows 7 by choice, and compatibility is starting to become a problem. If you want a recent Firefox, you're out of luck. Firefox 115 from 2023 still works, though, and remarkably, it's still getting security fixes: the March end-of-life has been postponed again, and it currently stands at August 2026.
The Irish Sea wing of Vulture Towers is still running it on OS X 10.13 and it works flawlessly. This is a way out: to keep the 17-year-old vintage look, while running a codebase that still has another five years in it. If you're that determined, it's an option… and it's undeniably an attractive GUI. Whether this unauthorized rebuild of an unlicensed OS is an attractive option, though – you must decide that for yourself. ®
Wanted: Digital chief for England's schools. Must enjoy data, AI, and concrete problems
England's Department for Education is advertising a role paying up to £200,000 a year to lead a new digital and infrastructure group overseeing school buildings and maintenance, as well as technology and data.

Its Director General, Digital and Infrastructure, will lead the technology function of around 1,800 staff, develop a new strategy covering digital services, data, and artificial intelligence, and lead work on a unique identifier for children and other learners in England. Scotland, Wales, and Northern Ireland run education services on a devolved basis. The successful candidate will also implement a new strategy for "the education estate" of schools, colleges, nurseries, and children's homes.

The job ad warns the function "carries some of the highest levels of risk and accountability in the department - including life-and-death decisions on safety," citing ongoing work to remove unsafe reinforced autoclaved aerated concrete (RAAC) from schools.

"I am looking for a leader who is motivated by impact - someone who is able to combine their digital and data expertise with their drive to improve outcomes for children and young people," writes the department's permanent secretary, Susan Acland-Hood, in a briefing document with the advert. "Whilst you do not need to be an expert on education policy, you need to be curious and committed to rapidly building your understanding of the latest evidence, system, and policy landscape."

The department is willing to base the job in Bristol, Cambridge, Coventry, Darlington, London, Manchester, Nottingham, or Sheffield, although those who do not work in the capital will need to go there frequently. Applications close on June 1.

Several other departments have recently advertised digital director-general posts, the civil service job category just below permanent secretary (equivalent to chief executive).
In January, England's Department of Health and Social Care advertised the role of director general for technology, digital and data with a salary of up to £285,000 a year. In February, the Ministry of Defence offered £270,000 to £300,000 for its chief digital and information officer job. And in April, the Department for Science, Innovation and Technology advertised for three directors-general, one paying £174,000 and the other two paying between £200,000 and £260,000 annually. ®
AI-generated code is 'pain waiting to happen'
INTERVIEW Enthusiasm among managers to adopt AI tools has outpaced developers' ability to learn those tools and use them effectively.

Moshe Sambol, VP of customer solutions at software observability outfit Lightrun, told The Register in an interview that he speaks with a lot of companies. Some of the developers in those organizations, he said, are very comfortable with AI tools. "But the reality is that a lot of developers are much earlier in the curve," he said. "The expectations of businesses are getting ahead of where the developers are in terms of their mental model and in terms of the training that they're providing, the enablement they're providing to make their teams comfortable with the tools, and the rate at which these tools are evolving."

Sambol said the degree of AI tool adoption varies. "I absolutely have customers who've told their developers, 'You don't write code anymore. You review code. No one should write a line of code unless for some reason you failed after three attempts getting GenAI to do it,'" he said. "I have customers like that. I don't know if I should name them, but absolutely." And he said on the other side of the spectrum, there are organizations like banks that are just starting to roll AI tools out due to compliance obligations and traditional industry caution.

"It's an exciting time to be adopting these tools and learning these tools, but it puts a lot of pressure on the developer," he said. "It puts this expectation of being more productive." Not everyone manages that, and Sambol said he has a lot of sympathy for developers who have been directed to use AI tools without training and organizational guidance.

Generative AI models will produce a lot of code quickly, he said, and because the code seems correct initially, it often gets pushed forward. "If it's not creating bugs en masse today, it's just pain waiting to happen," he said. "The number one question I think we have to be asking developers is, 'Can you explain that code?
Have you validated that the code actually fits in the context of the broader system?'"

Sambol said the answer isn't necessarily yes or no, because developers have different levels of experience and often work on large projects where they focus only on a specific part of the code base. It's common in enterprises, he said, that no one person will understand the entire system end-to-end, which is why problem resolution often requires a group of people. The issue he sees is that generative AI systems don't help bridge the missing knowledge gap. They don't provide the context to understand all the components involved.

Sambol went on to describe an incident in which a developer was using an AI assistant to build an Ansible automated workflow. "The generative AI was creating the Ansible template for him, which seems like a perfect match – it's drudge work," he explained. "And it's much better at getting the syntax exactly right."

It worked. And then it stopped working.

"The system that he was deploying to, all of a sudden, he could not get the component up," Sambol said. "It just wouldn't start. A process that had been going smoothly for a couple of hours in the morning, now all of a sudden, his service is down and it will not run.

"And he's pulling his hair out trying to unstitch the day's work so far to figure out what went wrong, why is the service not working," he said, adding that the AI agent proved unhelpful by going off in the wrong direction, reinstalling the operating system, and undertaking other ineffective steps to effect repairs.

What happened, Sambol explained, is that earlier in the day, the developer had installed the component in a certain way – it was running in a container with a systemd service. As such, it needed access to the ports on the device, which precluded running the component in Docker. "So the AI model re-wrapped it, repackaged it, and deployed it in a different way, but kept the original one running," he explained.
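The collision at the heart of that incident, an old instance still holding the port a new one needs, can be reproduced in a few lines. This sketch uses a throwaway socket on an ephemeral port to stand in for the real service:

```python
import errno
import socket

def claim_port() -> tuple:
    """Bind an ephemeral port, standing in for the original deployment."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    s.listen()
    return s, s.getsockname()[1]

first, port = claim_port()  # the forgotten, still-running first deployment

# The re-packaged second deployment tries to take the same port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    outcome = "started"
except OSError as e:
    outcome = "blocked" if e.errno == errno.EADDRINUSE else "error"
finally:
    second.close()
    first.close()

print(outcome)  # the second service cannot start while the first holds the port
```

One `EADDRINUSE` check would have surfaced the problem instantly; neither the developer nor the AI agent thought to look for the process already listening on the port.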
"So it was simply a matter of the fact that the one he had initially deployed was still running and it was blocking the port and the second one couldn't run.

"It's a fairly simple, easy-to-understand problem once you see it, but he lost the entire afternoon going down all kinds of dead ends with the AI looking at this, looking at that, because the AI model didn't remember that it had guided him to deploy the system a certain way earlier in the day."

Sambol said various studies show a significant percentage of AI-generated code contains errors and creates technical debt. That's not to say human developers are without fault. Sambol said developers have their own weaknesses. Many companies, he said, have offshored or globally distributed development teams, so there's a lot of variation. He argues that it's important to acknowledge that imperfection and work toward processes that improve results.

One way to do that is to automate the prompting process in a way that makes it more repeatable. "When you do that, you identify where you're starting to get good results and you don't expect everybody to come up with a well-structured long prompt."

Sambol added, "I think these tools are absolutely getting better. And so I'm reluctant to call any of them junk or deeply flawed. They're getting better shockingly rapidly. If you can take advantage of a couple of different ones – with a human being in the loop – then you are more likely to get output that is at least as good as you were getting before." ®
Cloud-managed earbuds sound strange - as a concept, and on a plane
Last year, The Register spotted Dell selling cloud-manageable wireless earbuds that feature the company’s famously stoic styling at a price higher than Apple charges for its latest AirPods. Dell eventually offered your correspondent a pair of the Pro Plus Earbuds to try so we could hear what all the fuss is about – and we accepted, on condition that the company showed us the cloudy management tools that make the buds worth the big bucks.

Divya Soni, a go-to-market lead, showed me Dell’s cloudy Device Management Console, a tool that lets admins enroll and track the buds, send them new firmware, or do things like turn on active noise cancellation by default across a fleet of earbuds. New firmware matters for earbuds because they’re Bluetooth devices, and the wireless protocol has had its fair share of security scares over the years. The buds have already earned Microsoft’s Teams Open Office Certification – a seal of approval for being able to handle noisy offices – plus a Zoom accreditation. New firmware might help there, too.

Soni admitted earbuds aren’t the main priority for the Device Management Console, which Dell expects customers will mostly use to manage docks and displays. Dell delivers firmware updates to those devices at least once a year, to address security issues or fix bugs. The tool can do the same for keyboards or headsets.

I can’t imagine anyone would adopt Dell’s Device Manager just to keep an eye on earbuds. I’m also not sure anyone would buy the buds for personal use. I say that because I own two sets of wireless earbuds, and in their own way both are better than the Dells. My go-to buds are JBL’s $40 Vibe Beam 2, which fit brilliantly, bring out some nice nuances in my music, boast batteries that last about six hours, and need only about 15 minutes to recharge. That makes them satisfactory for long-haul flights, during which they drop a warmly enveloping cone of silence when active noise cancelling kicks in.
My other pair are $100 Soundcore Space A40s (bought after destroying another pair). These buds have even nicer noise cancelling powers but fit terribly: I recently endured quite the scene when running to catch a bus and one dropped out of my ear and bounced into a shrub. The Soundcores redeem themselves with impressive microphones, so I use them when Zooming or recording a podcast. I prefer them to stay home because the case is bulbous and a little conspicuous in a front jeans pocket. The Dells are even bigger.

They fit my ears well and battery life is strong at around eight hours. Active noise cancelling is poor: a high hiss persists in-flight and I perceived distracting artefacts when using them in noisy environments on the ground.

Neither of my two PCs made a Bluetooth connection with the Dell buds. Dell has a fix for that – the buds’ case houses a small USB-C dongle devoted to connecting with the buds. It works every time, delivers a more stable connection than Bluetooth, and brings out some musical nuances that I can’t hear with my other buds or desktop speaker.

The dongle feels like a clue about how Dell imagines these buds will be used, because today's laptops seldom offer more than a pair of USB-C ports, and they’re commonly used for power in and video out. Dedicating a port to earbuds seems wasteful … unless you’re using a Dell dock or monitor that offers more ports. The USB-C audio connector therefore made it hard to escape the idea that Dell expects these buds will almost always be sold as part of a corporate peripheral purchase.

I can’t imagine consumers would prefer them to Apple’s AirPods, or the many cheaper earbuds that match them for performance. But if the boss decides your organization must have cloud-manageable earbuds, it would be churlish to turn down the chance to use a pair of Pro Plus Earbuds for work and play. The experience of using them is in the name: they're built for the office but can handle after-hours activities.
They’re not delightful, but they’re far from trashy, annoying, or inconvenient. And when I inevitably lose or destroy my current buds I’ll be very happy if I have the Dells on hand. ®
Categories: Linux fréttir
Europe built sovereign clouds to escape US control. Then forgot about the processors
FEATURE Can digital sovereignty exist on American silicon? Europe is pouring more than €2 billion into sovereign cloud initiatives designed to reduce exposure to US legal reach. The EU's IPCEI-CIS program funds infrastructure development. France qualifies operators under SecNumCloud, a framework with nearly 1,200 technical requirements promising "immunity from extraterritorial laws." But most datacenters and qualified cloud operators still rely heavily on Intel or AMD processors. And inside those processors sits a computer beneath the computer: management engines operating at Ring -3, below the operating system, outside the control of host security software, persistent even when the machine appears powered off. Under the US Reforming Intelligence and Securing America Act (RISAA) 2024, hardware manufacturers count as "electronic communications service providers" subject to secret government orders. Europe's frameworks certify the clouds. They don't assess the silicon. The computer your OS can't see That computer beneath the computer has a name. On Intel processors, it is the Management Engine (ME), or more precisely the Converged Security and Management Engine (CSME). On AMD, it is the Platform Security Processor (PSP). Both run at what security researchers call Ring -3, below the operating system, below the hypervisor, in a privilege level the host cannot see or log. "It's a computer inside your computer," explains John Goodacre, Professor of Computer Architectures and former director of the UK's £200 million Digital Security by Design program. He is clear about what that means in practice. The ME has its own memory, its own clock, and its own network stack, and because it can share the host's MAC and IP addresses, any traffic it generates is indistinguishable from the host's own traffic to the firewall. The architecture is not theoretical. 
Embedded in the Platform Controller Hub, the CSME is a separate microcontroller that operates independently of the host, with direct memory, device access, and network connectivity the host operating system cannot monitor. AMD's PSP works the same way. Intel's Active Management Technology (AMT), the remote management feature the ME enables, exposes at least TCP ports 16992, 16993, 16994, and 16995 on provisioned devices. Goodacre notes that an attack surface exists on unprovisioned hardware too. These ports deliver keyboard-video-mouse redirection, storage redirection, Serial-over-LAN, and power control to administrators managing fleets of devices remotely. The capability has legitimate uses. It also provides a channel that operates at a level below what European sovereignty frameworks can attest. Microsoft documented in 2017 that the PLATINUM nation state actor used Intel's Serial-over-LAN (SOL) as a covert exfiltration channel. SOL traffic transits the Management Engine and the NIC sideband path, delivered to the ME before the host TCP/IP stack runs. The host firewall and endpoint detection saw nothing, and any security tooling running on the compromised machine itself was equally blind. PLATINUM did not exploit a vulnerability. It exploited a feature, requiring only that AMT be enabled and credentials obtained. In documented cases, those credentials were the factory default: admin, with no password set. Goodacre catalogues this and related scenarios in a 37-page risk assessment prepared for CISOs evaluating Intel vPro hardware connected to corporate networks. Its conclusion is blunt: connecting an untouched-ME device to corporate resources "exposes the organization to a class of compromise that defeats the host security stack in its entirety." The ME does not stop when the machine appears to. Users recognize the symptom: a laptop powered off and stored for weeks is found, on next boot, to have a depleted battery. 
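Those AMT service ports can be probed from elsewhere on the network with nothing more than a TCP connect. The sketch below is illustrative, not a vendor tool: it simply reports which of the well-known AMT ports accept a connection on a given host (the hostname you pass is your own assumption; a closed port just refuses).

```python
import socket

# TCP ports Intel AMT exposes on provisioned hardware, as listed above:
# 16992/16993 (web UI over HTTP/HTTPS), 16994/16995 (redirection over TLS).
AMT_PORTS = (16992, 16993, 16994, 16995)

def open_amt_ports(host: str, timeout: float = 0.5) -> list[int]:
    """Return the known AMT ports on `host` that accept a TCP connection."""
    found = []
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:  # connection refused, timed out, or unreachable
            pass
    return found
```

Note the limit of such a check: it only spots the provisioned AMT services listening on those ports. ME-originated traffic that shares the host's MAC and IP addresses, as described above, cannot be distinguished this way.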
On modern thin-and-light platforms, under what Microsoft documents as Modern Standby, "off" does not correspond to "all subsystems unpowered." The system-on-chip components the Management Engine runs on remain in low-power states, drawing on the order of 100-200 mW continuously, enough to drain a 55 Wh battery over weeks. The implication is documented in Goodacre's risk assessment: "Whether the radio is in a Wake-on-Wireless-LAN listening state is firmware policy. On a device whose firmware has been tampered with during transit through the supply chain, the answer cannot be inferred from the visible power state." A laptop that appears off, in a bag, can associate with a hostile network the user has no knowledge of. Professor Aurélien Francillon, a security researcher at French engineering school EURECOM, has spent years studying exactly this class of problem. Working with colleagues, he built a fully functional backdoor in hard disk drive firmware [PDF], a proof of concept demonstrating how storage devices could silently exfiltrate data through covert channels. Three months after presenting it at an academic conference, the Snowden disclosures revealed the NSA's ANT catalogue, which documented an identical capability already deployed in the field. "The NSA were already doing it," Francillon says flatly. "Quite amazing." That background informs his assessment of the ME. "Yes, it can probably be used as a backdoor, like many other things, including BMC [baseboard management controller] and many other firmwares," he says. The question, he argues, is not whether the backdoor exists but whether operational controls make it unreachable in practice. AMD faces the same architectural question. On April 14, 2026, researchers demonstrated the Fabricked attack against AMD's SEV-SNP confidential computing technology, achieving a 100 percent success rate with a software-only exploit. The Platform Security Processor proved vulnerable to the same class of compromise. 
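The battery arithmetic behind that symptom is easy to check: at the cited parasitic draw, a 55 Wh pack empties on exactly the "weeks" timescale described.

```python
# Back-of-envelope check on the standby-drain figures cited above.
BATTERY_WH = 55.0  # typical thin-and-light laptop battery capacity

def days_to_empty(draw_watts: float) -> float:
    """Days until a full battery drains at a constant parasitic draw."""
    return BATTERY_WH / draw_watts / 24.0

# 200 mW empties the pack in about 11.5 days; 100 mW in about 23 days.
print(round(days_to_empty(0.2), 1), round(days_to_empty(0.1), 1))
```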
On server hardware, the picture is the same. Intel ME runs on servers under a different name, Server Platform Services or SPS, and the BMC, the remote administration controller standard in datacenter hardware, relies on it. "More or less the same," Francillon says of the server variant. For datacenter operators, he sharpens the focus further: "If I look at cloud systems, servers, I would be more concerned with the BMC," pointing to published research demonstrating remote exploitation of BMC vulnerabilities that could allow an attacker to reinstall or fully compromise a server. The BMC is not a separate concern from the ME: on server hardware, it is the primary network entry point into the SPS, making it both the most exposed interface and the most consequential. Both Intel and AMD processors contain management engines that operate below the operating system. The silicon is designed by American companies and subject to American legal process. The backdoor the CLOUD Act doesn't use That legal process has teeth that most European policymakers underestimate. The CLOUD Act, passed in 2018, gave US authorities extraterritorial reach to data held by American companies. FISA Section 702 allows intelligence agencies to compel US persons and companies to provide access to communications. Both are well known in European sovereignty discussions. They operate through the front door: a legal order served on a company that controls data. Less well known is RISAA 2024, a law that opens a different entrance entirely. RISAA amended FISA's definition of "electronic communications service provider" in ways that go beyond cloud operators and platform companies, and beyond the bilateral agreements that European policymakers have built their legal defenses around. Hardware manufacturers now fall within scope. Intel and AMD can be compelled, via secret orders with gag clauses, to cooperate with US intelligence access. 
The mechanism through which that access could be exercised is the management engine: a persistent, privileged, network-connected runtime that operates below anything the host operating system can see or block. A SecNumCloud-certified operator can be legally isolated from American data demands. The processor inside its servers cannot. "You've actually got a policy mechanism by which any such machine anywhere can deliver any of its information," Goodacre says. RISAA's two-year term expired on April 20, 2026, but Congress extended it by 45 days while debating reforms. Whether it is renewed, amended or allowed to lapse, the architecture it targets does not change. SecNumCloud's blind spot France's SecNumCloud is Europe's most rigorous attempt to build a cloud certification that is legally immune to American law. It did not emerge from nowhere. ANSSI, France's national cybersecurity agency, was established in 2009 as part of a broader effort to build institutional muscle on digital sovereignty long before the term became fashionable. When Edward Snowden revealed the scale of NSA surveillance in 2013, France's response was technical rather than rhetorical: ANSSI published the first SecNumCloud framework in July 2014. A decade later, that framework has grown to nearly 1,200 technical requirements. At the time, SecNumCloud was a cybersecurity qualification, not a sovereignty instrument: it set requirements for architecture, encryption standards, access controls, and incident response, but said nothing about who controlled the underlying infrastructure or whose laws applied to it. The CLOUD Act changed that. Passed in 2018, it gave American authorities extraterritorial reach to data held by US companies, and suddenly a French cybersecurity framework had a geopolitical dimension it was not designed for. 
Version 3.2, introduced in 2022, added Chapter 19: a set of explicit requirements targeting extraterritorial law, mandating that only EU operators could run the service, that no non-EU party could access customer data, and that the provider could operate autonomously without external intervention. It promised "immunity from extraterritorial laws." In December 2025, S3NS, a joint venture between French defense and technology group Thales and Google Cloud, operating Google Cloud Platform technology under French control, became the first "hybrid" cloud to receive SecNumCloud qualification. The certification triggered heated debate: was this real sovereignty, or American technology with a European flag? But the debate missed a more fundamental question. Does SecNumCloud's certification reach as far as the silicon it runs on? Francillon is positioned to see both sides of that question. He sits on the French Technology Academy's working group on cloud security, a body that advises on the technical foundations of frameworks like SecNumCloud. And he has spent years studying firmware backdoors in academic literature and demonstrated them in practice. He knows what the hardware can do, and he knows what the certification requires. His starting point is that SecNumCloud provides genuinely valuable protection, and that the silicon gap does not negate that. When asked whether SecNumCloud explicitly addresses Intel Management Engine or AMD Platform Security Processor vulnerabilities, his answer is unambiguous: "There is no direct requirement for firmware backdoor prevention." The framework is not designed to be a technical specification for hardware-layer security. "The document aims to be generic and not dive into technical details," Francillon says. "Most of it is organizational security." 
What SecNumCloud does require is that providers build a proper threat model, consider mitigation mechanisms, and monitor administration gateways where external tech support could be exploited. The hardware layer was not omitted through oversight; it was left out by design. Francillon's assessment is not a fringe view. Vincent Strubel, the director of ANSSI, the very agency that designed and administers SecNumCloud, is equally explicit about what the framework does and does not cover. In a January 2026 LinkedIn post addressing SecNumCloud's scope, he writes that all cloud offerings, hybrid or not, depend on electronic components whose design and updates are not 100 percent controlled in Europe. If Europe were ever cut off from American or Chinese technology, he argues, the result would be a global problem of security degradation, not just in hybrid clouds, but everywhere. Strubel frames SecNumCloud carefully: it is "a cybersecurity tool, not an industrial policy tool." It protects against extraterritorial law enforcement and kill-switch scenarios. It was never designed to eliminate technology dependencies at the hardware layer, and no actor, state, or enterprise fully controls the entire cloud technology stack anyway. One technology frequently cited in sovereignty discussions is OpenTitan, Google's open source secure element deployed on its server hardware and used within the S3NS infrastructure. Francillon is clear about what it is and, critically, what it is not. "OpenTitan is a secure element, a small chip on the side that can be used for protecting sensitive keys, providing signatures, making attestations," he explains. "It's a bit like a TPM." What it is not is a replacement for the main processor. "The Linux and all your applications will not run on it." OpenTitan sits alongside x86 infrastructure as an external root of trust, independent of the ME. That matters because the default embedded TPM lives inside the ME, making it subject to the ME attack surface. 
OpenTitan sits outside that boundary. The two address different problems entirely, and conflating them, as sovereignty advocates sometimes do, obscures where the residual exposure actually lies. ANSSI's own technical position paper [PDF] on confidential computing, published in October 2025, concludes that Intel SGX, TDX, and AMD SEV-SNP are "not sufficient on their own to secure an entire system, or to meet the sovereignty requirements of SecNumCloud 3.2." Physical attackers are "explicitly out-of-scope" of vendor security targets. Supply chain attackers are "explicitly out-of-scope." The ME attack surface discussed in this article falls into neither category: it is a remote network threat, not a physical one. The paper's conclusion for users concerned about hostile cloud providers is stark: "Switch to a cloud provider they trust, or use their own hardware with physical security protection measures." The castle with a structural flaw Francillon does not dispute that SecNumCloud leaves the ME unassessed. His argument is that this does not matter in practice. "What I mean is that if there is a backdoor to access a room, it cannot be directly used if the room is in a castle. You have to pass the castle walls first." Network isolation, monitoring, and threat modeling are the walls. SecNumCloud's operational requirements mandate that administration gateways be isolated, that external tech support be monitored, that network segmentation prevents lateral movement. The Management Engine backdoor may exist, but the framework makes it unreachable except in what Francillon calls "very high-end attacks." That qualifier matters. Francillon is not claiming perfect security. He is claiming that proper operational controls reduce the threat to a level where only nation state actors with significant resources could exploit it. For most threat models, he argues, that is sufficient. 
"Saying it is useless to do SecNumCloud because there is ME, or whatever backdoor in some hardware we don't control, is a mistake," he says. SecNumCloud improves security over deployments without such controls, he argues, provided that hardware is carefully evaluated and firmware securely configured. The castle walls have a structural flaw that Goodacre's risk assessment documents in detail. Corporate perimeter firewalls see the device's traffic, but because the ME shares the host's MAC and IP addresses, they cannot tell ME-originated flows apart from legitimate host traffic. "The perimeter cannot attribute a flow to host-versus-CSME origin without out-of-band knowledge," Goodacre writes. A TLS-encrypted tunnel from the ME to an attacker server on port 443 looks, to the perimeter, like any other HTTPS connection the laptop makes. Network filtering reduces attack surface. It does not eliminate the exposure. Goodacre's position is that a "Tier-3 supply-chain residual remains in both cases and is the irreducible cost of buying any silicon that ships with a Ring -3 manageability engine." He defines Tier 3 as nation state cyber services operating at the level of compromising firmware in transit, mis-issuing CA certificates via in-country authorities, and modifying hardware at customs or courier hubs. The NSA's Tailored Access Operations division treated supply chain interdiction as routine business, with explicit doctrinal preference for BIOS and firmware implants over disk-level malware. His risk assessment's data on fleet vulnerability is concrete. Industry telemetry from Eclypsium, analyzing production enterprise environments, found that approximately 72 percent of devices observed remained vulnerable to INTEL-SA-00391 years after public disclosure, and 61 percent remained vulnerable to INTEL-SA-00295. 
The same reporting documented that the Conti ransomware group developed proof-of-concept Intel ME exploit code with the intent of installing highly persistent firmware-resident implants. "Connecting an untouched-ME vPro laptop to corporate resources exposes the organization to a class of compromise that defeats the host security stack in its entirety," Goodacre concludes. "The exposed controls include BitLocker full-disk encryption, FIDO2-protected sign-in, endpoint detection and response, the host firewall and the corporate VPN." The disagreement between Francillon and Goodacre is not about whether the vulnerability exists. Both confirm it does. Both confirm AMD faces the same issue. Both confirm software alone cannot fix it. The disagreement is about whether operational controls, Francillon's castle walls, make an architectural backdoor irrelevant in practice, or merely reduce its exploitability while leaving nation state actors with a path through. For SecNumCloud operators processing sensitive government or commercial data, the distinction is not academic. It is worth noting that SecNumCloud is designed for a higher level of security than standard cloud certifications, but is not intended for classified or restricted government data. The threat that can still slip through Francillon's castle walls is precisely the threat SecNumCloud was designed to keep out. The gap nobody names Goodacre told The Register he tested awareness of the Management Engine with various attendees at the CyberUK conference in April 2026. "Almost no one" knew about it, he reports. The gap between the sovereignty rhetoric and the silicon reality is not being surfaced in policy discussions, procurement decisions, or public debate over what digital sovereignty means. The debate that does happen, hybrid versus non-hybrid, Google/Thales versus pure European providers, focuses on operational control and legal structure. It does not address the shared silicon foundation. 
Strubel's LinkedIn post pushes back against the framing: "Imagining this problem is limited to hybrid cloud offerings is pure fantasy that doesn't survive confrontation with facts." Every cloud provider, hybrid or not, depends on components they don't fully control. The distinction isn't hybrid versus sovereign. It is what you're protecting against, and whether the controls you're implementing address that threat. There is no immediate solution. RISC-V, the open source processor architecture European sovereignty advocates point to as a long-term alternative, remains years from competitive performance in datacenter workloads. "It will take decades," Francillon says. Arm is a cautionary precedent: it took nearly 20 years from the first server attempts before Arm achieved any meaningful datacenter presence. Can sovereignty exist on compromised silicon? For Goodacre, the bottom line is simple: the Tier-3 supply chain residual is "the irreducible cost of buying silicon with a Ring -3 manageability engine." Francillon argues that operational controls, including network isolation, monitoring, and threat modeling, make the backdoor unreachable except in very high-end attacks. Strubel acknowledges hardware dependencies are real but maintains that SecNumCloud provides valuable protection for what it does cover: legal control, kill-switch resistance, defense against cyberattacks and insider threats. The disagreement is not about technical facts. It is about risk tolerance and threat model calibration. For European CIOs choosing SecNumCloud-certified providers, the question to ask vendors is: how do you address Intel Management Engine and AMD Platform Security Processor in your threat model? The answer will clarify whether the vendor treats the hardware layer as out of scope, or has implemented controls that reduce but do not eliminate the exposure. For European policymakers, the question is broader. Can digital sovereignty exist on non-sovereign silicon? 
The current frameworks do not answer that question. They certify operational controls, legal structure, and autonomous execution capability. They do not certify silicon-layer immunity, because the hardware is American or Chinese, subject to American or Chinese law, designed with management engines that European authorities did not specify, cannot legally compel on their own terms, and cannot replace. Whether that is a gap worth addressing, or a risk worth accepting as the unavoidable cost of participating in global technology supply chains, is a question Europe will need to answer for itself. ®
One in seven Brits swapped their GP for ChatGPT, study finds
Brits are now asking chatbots about mysterious lumps and weird rashes instead of calling their GP, which is probably not the digital healthcare revolution anybody meant to build. A new study from King's College London found that one in seven people in the UK have used AI instead of contacting a doctor or healthcare service, while one in ten said they had turned to chatbots rather than professional mental health support. Convenience was the biggest reason, cited by 46 percent of respondents, closely followed by curiosity at 45 percent. Another 39 percent said they used AI because they were unsure whether their symptoms were serious enough to bother a GP in the first place. The report, based on a survey of more than 2,000 adults, suggests that AI systems are quietly becoming Britain's unofficial second-opinion service while regulators are still arguing about what counts as "AI-enabled healthcare" in the first place. However, some respondents said the chatbot conversations ended up replacing medical care altogether. Around one in five respondents said chatbot advice discouraged them from seeking professional help, and 21 percent said they skipped contacting a healthcare provider because of something the AI told them. Public confidence in AI healthcare also looks shaky. The survey found Britons are almost perfectly split on whether AI should be involved in clinical decision-making, with 37 percent supporting its use and 38 percent opposing it. Safety and accuracy worries topped the list of public concerns about NHS AI use. Women, in particular, were less comfortable with the idea than men, and far more likely to say patients should be told when AI is involved in their care. Oddly, younger adults were among the most skeptical. Nearly half of 18 to 24-year-olds opposed clinical AI use, compared with 36 percent of people over 65. The public also appears to think AI has already taken over GP surgeries to a much greater extent than is the case. 
Respondents guessed that around 39 percent of GPs use AI in clinical decision-making, when the actual figure is closer to 8 percent. Professor Graham Lord, executive director at King's Health Partners, warned that responsibility for AI mistakes often lands on clinicians even when they have little control over the systems being deployed. "When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced," Lord said. Which sounds suspiciously like someone in healthcare has already seen the incoming paperwork. ®
Google reimburses Register sources who were victims of API fraud
Two of the Google Cloud developers who were hit with bills for thousands of dollars following unauthorized API calls to Gemini models have had their bills reversed, the users told The Register in recent days. But Google plans to continue automatically expanding users' spending limits, leaving them and countless other customers vulnerable to bills they cannot afford, whether from fraud or a sudden traffic surge. Australia-based developer Isuru Fonseka – whose usage bill skyrocketed to $17,000 in minutes after Google automatically upgraded his $250 spending tier when a hacker took control of his account – told us that he was happy to put this behind him. “It’s so good. It felt like they were just giving me the run around until your article. I just hope they fix it properly for everyone,” he said. “It’s great that the article was able to get the refund but it’s sad that it had to go to that level for them to process it urgently.” Despite refunding his money, Google seems to have lost a customer. Fonseka said that he has since ensured his API key cannot be used with Google’s stable of AI products, and will likely try one of the independent foundation models if he needs those features. “I’ve disabled Gemini on everything – if I ever plan to use AI on my projects, I’m better off using it via a different service such as OpenRouter or going directly to one of the other LLM providers – just as a way to keep Gemini out of my account and the risk as low as possible,” he said. Fonseka said he was blindsided by a Google policy that allowed the company to automatically upgrade a user’s billing tier without permission or adequate warning. He had thought by signing up for a user tier with a $250 spending cap that his bills would be restricted to that amount. It was only after attackers exploited his API key that he learned Google would upgrade the cap automatically based on his history of spending. 
While Google acknowledged that the automatic tier upgrades allowed credential hijackers to rack up thousands of dollars in bills in cases like the one Fonseka described to The Register, it said it has not reconsidered the policy. In a statement to The Register, Google said that it wants to prioritize access to Google Cloud services without interruption, preferring to prevent service outages over respecting users' budget preferences. “With our automated growth tiers, we helped businesses scale as usage increased, built on their historic reputation of payments and usage,” a Google spokesperson told us in a statement. “This prevents their business having a hard service outage once they pass an artificial system quota.” Tiers vs spending caps There is some confusion between Google's usage tiers and its newly introduced spending caps, and Google’s documentation hasn't helped much. Google says its users can set their usage tiers not to exceed a certain spending level. For example, the maximum spending allowed for a Tier 1 user like Fonseka is $250. However, if the account is older than 30 days and if, over the lifetime of their work with Google, they have spent at least $1,000, then Google will automatically allow that account to spend up to $100,000. So good customers have the most to fear from fraud or from an unexpected spike in usage. In several cases shared on social media, Google users were only aware of this after their credit cards were billed thousands of dollars. On April 22, Google introduced a trial of hard caps on spending within Google Cloud, but those are in a preview and are approved on a case-by-case basis. "We’re excited to announce that Spend Caps are coming soon to Google Cloud. Designed to work with Google Cloud Budgets, FinOps and DevOps can set budgets that enforce automated cost boundaries (caps) at the project level for AIS, Agent Platform, Cloud Run, Cloud Run Functions, and Maps," Google wrote. 
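The tier policy as described reduces to a simple rule, which is easy to see once written out. The function below is an illustrative model of the behavior reported above, not Google's actual billing code; the thresholds come from the article, the function name is our own.

```python
def effective_spend_cap(account_age_days: int,
                        lifetime_spend_usd: float,
                        chosen_cap_usd: float = 250.0) -> float:
    """Illustrative model (not Google's code) of the auto-upgrade policy
    described above: once an account is over 30 days old and has spent at
    least $1,000 lifetime, the user's chosen Tier 1 cap is silently
    replaced by a $100,000 ceiling."""
    if account_age_days > 30 and lifetime_spend_usd >= 1000.0:
        return 100_000.0
    return chosen_cap_usd
```

On this model, a brand-new account keeps its $250 cap, while an established customer's cap quietly becomes 400 times larger, which is exactly why long-standing users were the ones hit hardest by stolen keys.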
"These caps alert and ultimately pause API traffic once your set budget is reached, but leave your resources intact. If you need the traffic to resume, simply suspend the Spend Cap." Spend caps can only be set per project for a single, eligible service, Google said. Eligible services for this preview include Gemini API, Agent Platform (previously known as VertexAI), Cloud Run, Cloud Run Functions, Maps, Google said. Users who apply for a spending cap will have their submissions reviewed on a “one to two week basis” and customers are added in the order they submitted. “Once onboarded, you will receive an email with instructions on how to access the feature as well as details on how to submit feedback,” Google writes in its sign up page. Rod Danan, CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities, told The Register earlier this week that he saw his bill skyrocket to $10,000 in just 30 minutes of usage by attackers who exploited his public API key. Google forgave the charges on Thursday, he said. “They got back to me today agreeing to a refund,” he told us. “It's definitely relieving. You want to focus on the business. You don't want to have to focus on going and getting refunds from some crazy charges.” He said the stress of running a startup is hard enough without the addition of fighting one of the largest companies in the world imposing erroneous five-figure charges. “I'm happy that it's behind me. I wish it was easier,” he said. “I've learned, yeah, definitely don't give up. Be annoying whenever something is wrong and just keep pushing. Again, try to make it as public as possible, get louder and louder until the people you need to hear you actually hear you.” Google said any unauthorized use of API keys will be investigated and it historically has treated customers compassionately when there is clear evidence of fraud or error. 
“We take reports of credential abuse and the financial security of our customers extremely seriously; and as you know are investigating these specific cases you have pointed to and we will work directly with any impacted users to resolve charges resulting from fraudulent activity,” Google said. ®
Datacenters slurping up so much juice they boosted prices 75% in largest US energy market
Prices in the United States' largest wholesale power market have jumped 75 percent in the past year thanks to demand from datacenters. And an independent watchdog predicts things will only get worse without some serious changes. The PJM Interconnection serves all or parts of 13 states and the District of Columbia in the eastern US, including Northern Virginia, home to the densest cluster of datacenters in the world. The surge in wholesale power costs across PJM was outlined on Thursday by Monitoring Analytics, a firm that serves as the official market monitor for the Interconnection, in its Q1 2026 state of the market report. According to the report, the total cost per megawatt-hour (MWh) of wholesale power rose from $77.78 in the first three months of 2025 to $136.53 in the same period this year, an increase of 75.5 percent year over year. Monitoring Analytics didn’t mince words in its report, identifying datacenter load growth as the main driver of recent capacity market conditions and rising prices in PJM. “Data center load growth is the primary reason for recent and expected capacity market conditions, including total forecast load growth, the tight supply and demand balance, and high prices,” the report reads. “But for data center growth, both actual and forecast, the capacity market would not have seen the same tight supply demand conditions.” As for what might come next, the report doesn’t ignore the likely outcome of the current situation, either. “The price impacts on customers have been very large and are not reversible,” the report states, but the bad news doesn’t stop there. “The price impacts will be even larger in the near term unless the issues associated with data center load are addressed in a timely manner.” Based on the rest of the report, a timely resolution to the datacenter load issue shouldn’t be expected, at least not in a way that’ll benefit locals. 
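The report's headline figure checks out against the quoted quarterly numbers:

```python
# Year-over-year change in PJM's total wholesale cost per MWh,
# using the Monitoring Analytics Q1 figures quoted above.
q1_2025 = 77.78    # $/MWh, Q1 2025
q1_2026 = 136.53   # $/MWh, Q1 2026

pct_increase = (q1_2026 - q1_2025) / q1_2025 * 100
print(f"{pct_increase:.1f}%")  # prints "75.5%"
```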
For starters, Monitoring Analytics found that – like pretty much everywhere right now – power grids aren’t ready for the datacenter boom. PJM has taken steps to upgrade its power commitment and dispatch software to better operate its grid, but those upgrades have been delayed multiple times with no implementation date on the calendar, per the report. “The current supply of capacity in PJM is not adequate to meet the demand from large data center loads and will not be adequate in the foreseeable future,” Monitoring Analytics asserted.

Current plan: Shift the risk to everyone else

PJM has been planning a one-time backstop auction to procure new power generation for datacenter projects in the region at the request of the Trump administration and the governors of the states it serves, but Monitoring Analytics isn’t convinced the Interconnection is going about the process in the right way. The currently proposed auction structure, says the watchdog, would “generally shift significant risk to other PJM customers,” a temptation the group says “should be resisted.”

“Other PJM customers, whether residential, commercial or industrial, should not be treated as a free source of insurance, or collateral, or financing for data centers,” the report continued. “Yet that is what most of the proposals related to a backstop auction actually do.”

As for what PJM ought to be doing, you probably won’t need to rack your brain to figure that out: Monitoring Analytics says datacenters ought to be required to bring their own power. Such a rule, says the group, should include fast-track interconnection options for BYOP datacenters, and otherwise a queue that would only connect datacenters when there is adequate capacity to serve them.
“This broad bring-your-own new generation solution to the issues created by the addition of unprecedented amounts of large data center load does not require a continued massive wealth transfer through ongoing shortage pricing,” the analysts argue.

When asked for its response to the problems raised by the Monitoring Analytics report, PJM told us that it was fully aware of the impact of electricity cost increases on its customers. “PJM is working with states and member companies to address these consumer impacts on multiple fronts, including extending market caps put in place since the 2025/2026 auction, authorizing multiple transmission expansion projects that are now in development, and reforming wholesale electricity market rules,” the Interconnection told us. Monitoring Analytics didn’t respond to questions.

Americans have become increasingly hostile to new datacenter projects driven by the AI boom, with 71 percent of respondents to a Gallup survey saying they opposed datacenter projects in their neighborhoods. Projects in multiple states have been abandoned recently due to pushback from locals, many of whom are concerned not only with electricity price increases, noise, and eyesores, but environmental harm as well. ®
Categories: Linux fréttir
Git is unprepared for the AI coding tsunami
Last month, HashiCorp co-founder Mitchell Hashimoto publicly declared that he was moving his popular open source Ghostty terminal emulator project off GitHub, which runs the world’s largest service built on the Git distributed version control system created by Linus Torvalds. Once an enthusiastic user, Hashimoto grew disillusioned with service disruptions and increasingly slow pull requests. “This is no longer a place for serious work if it just blocks you out for hours per day, every day,” he wrote.

Hashimoto was quick to defend Git itself: “The issue isn't Git, it's the infrastructure we rely on around it: issues, PRs, Actions, etc.”

Many have blamed GitHub’s performance on Microsoft, which acquired the company in 2018. But to be fair, GitHub itself has been experiencing heavier-than-expected traffic thanks to a proliferation of AI-generated pull requests. In 2025, GitHub saw 206 percent year-over-year growth in AI-generated projects, measured by the use of Bash shell scripts, a widespread way of running agents. And more AI code means more bugs: research from GitClear found that AI-generated code carried 10.83 issues per pull request, compared to 6.45 for the old-fashioned human variety.

Our new agentic workforce is raising big questions about how the entire software development lifecycle (SDLC) should evolve, and whether Git should come along. “Agents are nudging us toward a continuous flow,” warned Peco Karayanev, co-founder of DevOps platform provider Autoptic, which bridges Git-based deployments with observability tools for agent-based remediation. Autoptic’s entire user base runs on some form of Git, either homebrew or from a service provider like GitLab. Given the volume and magnitude of changes across repos, “we need git to start operating in a more continuous mode,” Karayanev wrote in an email interview. Git operations, especially when used in GitOps-style automated deployments, still need to be managed by people.
Updates, commits, pushes, and merges are often yoked into sequences of “stop/go” episodes where someone has to hit enter on the keyboard a few times to continue the workflow, Karayanev noted. This model may not hold up once agents start getting priority.

A butler for Git

Git has always had its share of critics, especially among those who use the tool daily. There may not be another piece of software that is so widely adopted and yet so inscrutable. Torvalds and other Linux kernel developers built Git in 2005 after frustrations with trying to shoehorn Linux code into the commercial BitKeeper tool. Linux, a global group project of mammoth proportions, required a distributed version control system able to support non-linear development across thousands of parallel branches. Like any distributed system, Git can be difficult to understand. Scott Chacon, one of GitHub’s co-founders, co-wrote a book on using Git (2009’s Pro Git), and he still finds himself occasionally flummoxed by the version control system. There are still “sharp edges” to Git, Chacon told The Register. “There's a lot of stuff that it doesn't do very well from a usability standpoint,” he said.

Chacon co-founded GitButler as a way to “rethink the porcelain” of Git – to make it more suitable for modern workflows. (Last month, GitButler received $17 million in venture capital funding.)

Think of GitButler as a super-powered Git client. It allows the developer to work on two different branches simultaneously, using a technique called virtual branching, and it reconciles the code a developer is working on with the upstream code. Developers can reorder commits, or edit the message of a previous commit. It offers richer metadata about the files being worked on, and can show which commits are unique to a branch.
Best of all, it eliminates what many developers call “rebase hell,” where merges into an updated codebase must be checked one at a time – a problem GitButler solves by keeping the user’s code synchronized with what is upstream. Many of the actions GitButler offers can be performed through Git’s own commands – although Git’s command language, and its rules, can be so obtuse that “you will probably make a mistake at some point,” Chacon said.

A Git for agents

Chacon believes GitHub’s current reliability issues stem from the tsunami of agentic work. This is “ironic” because GitHub was built to scale Git, he said. “But an influx of agents is pushing the service to the brink.” The problem lies not with Git itself, but with everyone using one service, Chacon argued. Last year, GitHub had about 180 million users working across 630 million repositories – with 121 million created in 2025 alone, according to the company’s most recent annual Octoverse report.

“From the longer-term perspective, it doesn't need to be like this,” he argued. Maybe Git should be run locally, mirrored globally, and managed with clients … such as GitButler, Chacon suggested. Perhaps Git-based version control systems could be customized for specific industry verticals. We need to think about how we “distribute these systems more,” he said. “Git is designed to be distributed but we’re not distributing it.”

GitButler has created a command line interface specifically for agents. It was designed to give MCP servers an integrated map of the repository, which would otherwise require stitching together multiple Git commands. The Virtual Files concept allows an agent to work on a section of code that is also being worked on by a developer, or by another agent. These are changes that point to a rethinking of how a Git workflow should run. “I think all of these systems should fundamentally change, because all of our workflows have changed, right?
There needs to be different, sort of primitives for how to deal with these problem sets,” Chacon said.

A tip from gaming development

One company that wants its platform to replace Git altogether is Diversion, which has built an eponymous distributed version control system initially pitched at large-scale game development. “Git's architecture is actually an issue that prevents scaling,” argued Diversion CEO Sasha Medvedovsky in an interview with The Register. “Fundamentally it's an architecture problem that can't be fixed and is a bottleneck for end users and hosting services.”

Git is distributed only insofar as every user, or hosted service, keeps a full copy of the repository database (much like blockchain). “It's not distributed in the regular sense but rather replicated,” Medvedovsky wrote in an exchange with The Register on LinkedIn. Operations run on a single thread, ruling out concurrency, so the larger the repository, the slower commit operations become – a deadly combination for fast-paced agentic software development, he noted.

Of course, every CEO will have their talking points ready about a competitor’s weaknesses (Diversion is finalizing a blog post with hard numbers about Git and GitHub performance). But there are a growing number of other initiatives aimed at prepping Git for the challenging times ahead.

Perhaps most notable is Jujutsu, a Git-compatible distributed version control system stewarded by Google senior software engineer Martin von Zweigbergk. Like GitButler, Jujutsu (jj) aims to eliminate many of the annoyances that come with Git. It includes an undo button and the ability to keep committing even when there is a conflict. And because everything written in C must be recast into Rust these days, long-time Git contributor Sebastian Thiel started a project called Gitoxide to rebuild Git in Rust.
Potential benefits include significant performance improvements through multicore processing, and the much-needed memory safety that comes with Rust.

Will Git 3 solve all the problems?

Git’s chief maintainer is Junio Hamano, who took the reins from Torvalds in 2005, and he remains busy keeping Git current. At FOSDEM this February, core Git contributor and GitLab engineering manager Patrick Steinhardt discussed some of the changes coming in the next version of Git, version 3, which is gradually being rolled out this year.

One of the chief improvements will be in the way Git manages commit references, the IDs that point to each change being made. Surprisingly, this operation is a real bottleneck for the software. “The design is inefficient,” Steinhardt told the audience. Every time a programmer commits a code change, it gets recorded in a “packed-refs” file, which saves time by not giving each commit its own reference file. As projects grow larger, however, it takes longer for Git to amend or delete a reference in packed-refs. (One GitLab repo has a packed-refs file of more than 20 million references, Steinhardt said.) This is especially problematic when you have multiple simultaneous readers and writers of that file. And just forget about getting a consistent view of all the references.

The freshly implemented Reftable feature, which will be the default in Git 3.0, stores references in an indexable binary format, a concept the Git folks borrowed from the Eclipse Foundation’s JGit Java implementation of Git. Reftable allows for block updates, eliminating the need to rewrite a 2 GB file for a single entry. And it is much faster for reading, which paves the way for Git to support larger, more sprawling repositories – perfect for an ever-busy agentic workforce.

For nearly two decades, Git has proved to be the version control system of choice for geeks worldwide.
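The packed-refs pain point is easy to demonstrate. Below is a deliberately simplified sketch – the real packed-refs format carries headers and peeled-tag lines, omitted here – showing why the design hurts at scale: all references live in one flat file, so dropping a single one means scanning and rewriting every line, which is exactly the full-file rewrite that reftable's indexed, block-updatable format avoids.

```python
import hashlib

def make_packed_refs(n):
    # One "sha refname" line per branch, like a stripped-down packed-refs file
    lines = [
        f"{hashlib.sha1(str(i).encode()).hexdigest()} refs/heads/branch-{i}"
        for i in range(n)
    ]
    return "\n".join(lines) + "\n"

def delete_ref(packed, name):
    # No in-place edit is possible: every line must be scanned and the whole
    # file rewritten, even to remove a single reference
    kept = [line for line in packed.splitlines()
            if line.split(" ", 1)[1] != name]
    return "\n".join(kept) + "\n"

packed = make_packed_refs(1000)
smaller = delete_ref(packed, "refs/heads/branch-42")
print(len(packed.splitlines()), "->", len(smaller.splitlines()))  # 1000 -> 999
```

At GitLab's reported scale of 20 million references, that rewrite touches a multi-gigabyte file for every update; reftable instead rewrites only the affected block.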
But even with these new features and various third-party enhancements, can it retain relevance for a new generation of agentically enhanced coders? The battle is on. ®
Categories: Linux fréttir
AI agents show they can create exploits, not just find vulns
Sure, AI agents such as Mythos can find security vulnerabilities in software, but the bigger question is whether they can turn those flaws into functional exploits that work in the real world. After all, many AI-discovered bugs prove minor or difficult to weaponize. New research, however, suggests frontier models can indeed develop working exploits when directed to do so. To better understand the rapidly changing security landscape, computer scientists from UC Berkeley, Max Planck Institute for Security and Privacy, UC Santa Barbara, Arizona State University, Anthropic, OpenAI, and Google decided to build ExploitGym, a benchmark for evaluating the exploitation capabilities of AI agents. This is not an entirely disinterested set of investigators – Anthropic, OpenAI, and Google all sell AI services. And both Anthropic and OpenAI have talked up the risk of leading models Claude Mythos Preview and GPT-5.5 while selling access to government partners. Since Anthropic announced Mythos in early April, the security community has been critical of the company's approach, described by some as fear-mongering. And various security experts have made the case that even commercially available AI models can find security flaws. Nonetheless, Mythos and GPT-5.5 outshine their peers in ExploitGym, as described in the paper, "ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?" ExploitGym consists of 898 real vulnerabilities found in applications, Google's V8 JavaScript engine, and the Linux kernel. Its workout consists of presenting an AI agent with a vulnerability and proof-of-concept input that triggers it, to see whether the agent can create an exploit capable of arbitrary code execution. According to the UC Berkeley Center for Responsible Decentralized Intelligence, Mythos Preview successfully exploited 157 test instances and GPT-5.5 managed 120 in the allotted two-hour window. 
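Set against the benchmark's full 898 vulnerabilities, those counts translate into modest but non-trivial success rates:

```python
# Success rates implied by the reported counts: 898 vulnerabilities total,
# 157 exploited by Mythos Preview and 120 by GPT-5.5 within the time limit.
total = 898
rates = {name: wins / total
         for name, wins in [("Mythos Preview", 157), ("GPT-5.5", 120)]}
for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")  # roughly one in six, and one in seven
```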
"Even when standard security defenses like ASLR or the V8 sandbox were turned on, a meaningful number of exploits still worked," the boffins wrote in a blog post. "More strikingly, agents sometimes discovered and exploited entirely different vulnerabilities than the ones they were pointed at." The agents (CLI + model) tested were Claude Code with Claude Opus 4.6, Claude Opus 4.7, Claude Mythos Preview, and GLM-5.1; Codex CLI with GPT-5.4/GPT-5.5; and Gemini CLI with Gemini 3.1 Pro. And even the ancient models released in February (Opus 4.6 and Gemini 3.1 Pro) had some success. The researchers say that one of their more interesting findings is that these models sometimes went "off-script" in capture-the-flag (CTF) environments, where an agent has to find and retrieve some hidden value. This was most evident with Mythos Preview and GPT-5.5. The former succeeded in 226 CTF exercises but only used the intended bug in 157 instances, while the latter captured 210 flags and only used the intended bug in 120 of those cases. The authors also note that while there was some overlap in the exploits discovered, the various models found different exploits. This suggests applying a diverse set of models might be advantageous both in attack and defense scenarios. It's worth adding that ExploitGym tests were done with security guardrails disabled. When the test was re-run on GPT-5.5 with default safety filters active, the model refused 88.2 percent of the time before making any tool call. The Register, however, has seen security researchers craft prompts in a way to avoid triggering refusals. So safeguards of that sort have limits. "Our results show that autonomous exploit development by frontier AI agents is no longer a hypothetical capability," the authors state in their paper. "While current agents are not yet reliable across all targets, they already exploit a non-trivial fraction of real-world vulnerabilities, including complex targets such as kernel components." ®
Categories: Linux fréttir
LocalSend puts your sneakernet out of business
FOSS It happens all the time: you have a file on one of your devices and you need it on another. You could put the file on a USB flash drive and walk it over (the so-called sneakernet), you could email it to yourself, or you could try to set up some kind of network resource. LocalSend, a free open source tool, makes sharing files on a LAN easier than anything else, and it works on Windows, Linux, macOS, Android, and more.

The Reg FOSS desk is not routinely a fan of Apple fondleslabs. (We’ve tried, but they’re a bit too locked down for us.) That said, from what we’ve heard, LocalSend is a bit like Apple’s AirDrop, but for grown-up computers and non-Apple kit. For Linux Mint users, it’s a bit like the included Warpinator – and as that page says, don’t search for it and go to warpinator.com, which is a fake site.

It’s a free download from its GitHub page and is also available in Canonical’s Snap store and on Flathub. You run it, and it gives that computer a cute nickname in the form of (adjective)+(fruit). Run it on two computers on the same local network, and they should see each other. You click “send” on one and “receive” on the other, and that’s about it: pick the file or folder, and off it goes. LocalSend isn’t very big – the installation packages are mostly around the 15 MB mark – so it’s pretty fast to download and install.

This vulture found and tried it after downloading a just-over-4 GB file and then working out we’d downloaded it onto the wrong OS on the wrong machine. It takes a good few minutes to download several gigabytes – we live on a small, remote island, where our 100 Mbps broadband costs about four times what 1 Gbps broadband used to cost in Czechia – and it seemed worth trying to transfer the file rather than grab another copy. The gist of the idea is that LocalSend is quicker than using a USB key.
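Under the hood, tools in this category announce themselves on the local network and listen for peers. Here is a minimal sketch of that announce-and-listen pattern, using plain UDP on loopback so it is self-contained – LocalSend's actual protocol (its multicast group, port, and message schema) differs, and the nickname below is made up:

```python
import json
import socket
import threading

def listen(sock, results):
    # Block until a single announcement arrives, then record it
    data, _addr = sock.recvfrom(4096)
    results.append(json.loads(data.decode()))

# Receiver: bind to an OS-assigned UDP port on loopback
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

results = []
t = threading.Thread(target=listen, args=(receiver, results))
t.start()

# Announcer: advertise a device nickname so peers can find it.
# "Brave Banana" mimics the (adjective)+(fruit) names LocalSend hands out.
announcer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
announcer.sendto(json.dumps({"alias": "Brave Banana"}).encode(),
                 ("127.0.0.1", port))
t.join()
announcer.close()
receiver.close()

print(results[0]["alias"])  # the peer we "discovered"
```

In the real thing, the announcement goes to a multicast group so every device on the LAN hears it, and the actual file transfer then runs over an encrypted point-to-point connection.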
You know the sort of process: find a big enough USB key, check it has space, copy the file onto it, eject it, go to the other machine, insert it, and copy the file off again. Even if it goes perfectly, LocalSend is still less hassle. It’s also easier than configuring some kind of temporary folder-sharing setup between different OSes on different computers with different login names. (The Irish Sea wing of Vulture Towers recently moved house and has yet to finalize his office layout and reconnect his NAS servers. It’s climbing to the top of the to-do list, though.) LocalSend is also available on both the iOS App Store and Google Play Store, so it can help for devices that you can’t readily plug a USB key into. The transfer happens across your local network, so it won’t use up bandwidth on metered internet connections, and will even work if your internet connection is down. Warpinator is Mint’s solution – but in our case, we initially needed to move the file from Windows to macOS. Both have ports of Warpinator, but both seem unofficial, and while the machines could see one another, file transfers failed. We’ve also tried SyncThing, but it’s not good at keeping machines in sync when they’re rarely on at the same time – and we’ve had problems with it recursively duplicating directory trees into themselves so deeply that no GUI tool could delete them. Ideally, you should have an always-on home server that also runs SyncThing – and if you have one of those, then for one-off file transfers, you don’t really need SyncThing: just copy it to the server, and off again. LocalSend just worked, and for us, it worked identically whether either end was running Windows, Linux, or macOS. We couldn’t ask for more. ®
Categories: Linux fréttir
Microsoft puts stability in the driver's seat with new initiative
Microsoft has laid out plans for how it and its partners will deal with iffy drivers causing stability problems in the company's flagship operating system.

Dubbed the Driver Quality Initiative (DQI), the program rests on four pillars: Architecture – hardening kernel-mode drivers and enabling third-party kernel-mode drivers to transition to user mode; Trust – raising the bar for trusted partners and drivers; Lifecycle – addressing outdated and low-quality drivers; and Quality Measures – going beyond simple crash counts to measure driver quality.

It's all very laudable, although, aside from references in the architecture pillar, Microsoft's WinHEC 2026 announcement said little about how Redmond ended up in a situation where drivers can run at a privilege level that allows a failure to leave the operating system hopelessly borked. The infamous CrowdStrike incident of 2024, which crashed millions of Windows devices, ably demonstrated the dangers of drivers running around in the Windows kernel. Microsoft later blamed a 2009 undertaking with the European Commission for how that situation came to be, although it skipped over the whole not-creating-an-API-so-security-vendors-didn't-need-kernel-access part.

In the months after the CrowdStrike incident (or the "learnings" from it, as Microsoft delicately put it), the Windows Resiliency Initiative was announced. According to Microsoft, "DQI builds on the learnings and infrastructure established through the Windows Resiliency Initiative."

Drivers are the bane of many Windows users: a faulty driver can make the entire operating system unstable. Sure, a customer might wonder how such a situation has been allowed to happen. Still, we are where we are, and dealing with it requires Microsoft to harden the operating system and provide ways for vendors to work with Windows that don't involve breaking down the kernel's doors. Those same vendors need to ensure that their drivers are high-quality and reliable.
"Driver and platform quality," wrote Microsoft, "is central to the customer experience." The company has espoused much in recent months about how it intends to "fix" Windows after a disastrous few years that have taken a hatchet to consumer confidence. Fripperies like moving the taskbar and rethinking Redmond's relentless pushing of Copilot are one thing. Dealing with driver-related crashes is quite another. WinHEC 2026 has shown that at least some within Microsoft are determined to deal with the fundamentals, and that requires taking the Windows maker's hardware partners along for the ride. ®
Categories: Linux fréttir
Google'll grab your gigs if you don’t cough up your number
Google is testing a cut to the free storage it gives new accounts unless a phone number is provided. The trial by the Chocolate Factory reduces free storage for new accounts from 15 GB to a miserly 5 GB unless the user hands over a telephone number.

Not all new users are impacted. We created a Gmail account today and were given the full 15 GB of storage without being required to provide a phone number (although it did ask for one for activation code purposes). The test is also regional and, it must be emphasized, is just that at this stage – a test. However, it could point to a future where tech vendors demand more data in return for using a 'free' service. Arguably, we're living in that future right now.

A Google spokesperson told The Register: "We're testing a new storage policy for new accounts created in select regions that will help us continue to provide a high quality storage service to our users, while encouraging users to improve their account security and data recovery."

A Reddit thread on the matter contained all manner of theories regarding what the data might be used for, including nefarious commercial purposes. Judging by a screenshot posted in the thread, Google is trying to curb people who create multiple accounts to gain more storage.

15 GB is not a lot of storage these days, particularly given the relentless growth in media file sizes. That said, a drop to 5 GB would bring Google into line with Apple, which gives customers the same amount unless they upgrade to iCloud+. Microsoft gives users 15 GB of free Outlook.com storage, and Proton Mail's free tier gives users 1 GB (initially 500 MB until a starting checklist is completed).

Should the test become reality, it could be seen as yet another step on a worrying path. Sure, you can have more free storage: just sign here and agree to hand over these bits of your personal information.
As demand for storage increases, vendor offerings are looking ever more miserly, and a cut from Google, even with the best of intentions, will rankle. Then again, if you are concerned about privacy and your personal information being used for commercial purposes, it could be that, for all its convenience, Gmail might not be the right tool for you. Reducing storage to 5 GB for new users (existing users aren't affected) unless a telephone number is handed over might be the nudge that some users need to look elsewhere for their email needs. ®
Categories: Linux fréttir
NASA's Psyche mission set for a brief encounter with Mars
More than two years after launch, NASA's Psyche mission will whizz past Mars on May 15, using the planet's gravity to tweak its trajectory and accelerate on to its asteroid destination. The spacecraft, launched on October 13, 2023, will pass just 2,800 miles (4,500 kilometers) above the surface of the red planet at 12,333 mph (19,848 kph) on its way to the metal-rich asteroid Psyche.

In February, the spacecraft's thrusters were fired for 12 hours to refine its approach to Mars. That refinement played its part in today's flyby. However, it won't be until a Doppler shift is recorded in the spacecraft's signals as it passes Mars that scientists will be able to definitively confirm its new speed and trajectory.

These techniques are not new. Gravity assist maneuvers have been a thing since the dawn of the space age, and were theorized long before. One of the most famous beneficiaries was the Voyager mission, which took advantage of a rare planetary alignment to undertake a "Grand Tour" of Jupiter, Saturn, Uranus, and Neptune, a trajectory that saved significant propellant. And, of course, the use of gravity assists highlights the work undertaken by boffins in trajectory planning to calculate exactly how a spacecraft should be launched and what corrections are needed to achieve the required precision.

Psyche is due to reach its destination in 2029, and the Mars flyby will allow scientists to check out the spacecraft's payload. For example, the multispectral imager will capture thousands of observations of Mars. Sarah Bairstow, Psyche's mission planning lead at NASA's Jet Propulsion Laboratory in Southern California, said: "This is our first opportunity in flight to calibrate Psyche's imager with something bigger than a few pixels, and we’ll also make observations with the mission's other science instruments."
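The mechanics behind that free speed boost can be sketched in a few lines: in the planet's rest frame, a flyby only rotates the spacecraft's velocity vector (speed relative to the planet is conserved), but adding the planet's own heliocentric velocity back in can leave the craft moving faster around the Sun. The velocities and deflection angle below are purely illustrative, not Psyche's actual flyby geometry:

```python
import math

v_planet = (24.0, 0.0)   # planet's heliocentric velocity, km/s (illustrative)
v_craft = (18.0, 10.0)   # spacecraft's heliocentric velocity, km/s (illustrative)

# Velocity relative to the planet before the encounter
rel = (v_craft[0] - v_planet[0], v_craft[1] - v_planet[1])

# The hyperbolic flyby rotates the relative velocity by a deflection angle
theta = math.radians(-120)  # illustrative deflection
rel_out = (rel[0] * math.cos(theta) - rel[1] * math.sin(theta),
           rel[0] * math.sin(theta) + rel[1] * math.cos(theta))

# Back in the Sun's frame: add the planet's velocity back on
v_out = (rel_out[0] + v_planet[0], rel_out[1] + v_planet[1])

# Planet-frame speed is unchanged, but heliocentric speed has grown
print(f"planet-frame speed: {math.hypot(*rel):.2f} -> {math.hypot(*rel_out):.2f} km/s")
print(f"heliocentric speed: {math.hypot(*v_craft):.2f} -> {math.hypot(*v_out):.2f} km/s")
```

The "stolen" momentum comes from the planet itself, whose orbit slows by an utterly negligible amount.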
A bit of bonus science is always welcome, as well as a rehearsal for the main event, when Psyche reaches its destination. "Ultimately, though, the only reason for this flyby is to get a little help from Mars to speed us up and tilt our trajectory in the direction of the asteroid Psyche," said Lindy Elkins-Tanton, principal investigator for Psyche at the University of California, Berkeley. "But if all our instruments are powered up, and we can do important testing and calibration of the science instruments, that would be the icing on the cake." ®
Categories: Linux fréttir
Anthropic urges Uncle Sam to kneecap China's AI ambitions before 2028
AI monger Anthropic wants America and its allies to tighten measures aimed at curbing China's AI progress, warning of the consequences if "authoritarian governments" take the lead rather than Uncle Sam. In a lengthy missive posted on its website, the San Francisco-based org says it expects AI to deliver "transformational economic and societal impacts" in the coming years, and whether the transition goes well depends on where the most capable systems are built first. Since the technology is advancing swiftly, democratic countries have only a limited time in which to act, Anthropic believes. The measures it wants to see are nothing new: enforcing tighter export controls on chips used for AI development, such as Nvidia's GPUs, and cutting off access to American AI models. Recent history suggests these controls "have been incredibly successful," it says. But if Chinese researchers are only several months behind the US in AI capabilities, as many experts estimate, how successful can those efforts have been? AI labs in China have only built models that come close to those in America because of their talent and their knack for exploiting loopholes to get around export controls, Anthropic claims, along with distillation attacks that "illicitly extract the innovations of American companies." Many will suspect this is Anthropic's chief motivation in calling for action against China. Back in February, the Claude model maker accused China-based rivals including DeepSeek of using distillation to train their models by siphoning knowledge from Anthropic's own. As The Register pointed out at the time, accusing China of copying, while using content created by others to train your own models, shows a staggering lack of self-awareness from the AI industry. Anthropic's sermon also shows blinkered thinking. It implies that China can only advance by riding on America's coattails, and is incapable of innovating. 
This is despite the shockwaves generated by the release of the DeepSeek R1 model early in 2025, believed to be on a par with the best US models. Numerous reports also indicate that Chinese organizations have made huge strides with domestically developed AI silicon, and Beijing even tried to discourage tech companies in the country from buying and using Nvidia chips. Anthropic sets out two scenarios for what the world could look like in 2028, a date when it expects "transformative AI systems" to have emerged. In the first scenario, America has "successfully defended its compute advantage," and "democracies set the rules and norms around AI." The second has China overtaking the US, leading to AI norms and rules being shaped by authoritarian regimes, with the best models enabling "automated repression at scale." Another problem with Anthropic's plan is that many countries, especially in Europe, view both American and Chinese AI supremacy as a threat to democracy. There is a concerted push in Europe for "digital sovereignty" to minimize reliance on US technology, for example. Others warn it could erode democracy in America itself. Anthropic can draw little comfort from the Trump administration, which has a constantly shifting attitude to China. Export controls were said not to be high on the agenda during the President's trip to Beijing this week, and it was reported that the US has now cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200. ®
Categories: Linux fréttir
Exploited Exchange Server flaw turns OWA inboxes into script launchpads
Microsoft has confirmed a vulnerability in on-premises Exchange Server that could result in surprise script execution in victims' browsers. Tracked as CVE-2026-42897, the flaw affects Outlook Web Access (OWA) and can be triggered by a specially crafted email opened in OWA, assuming "certain interaction conditions are met." The prize for attackers is arbitrary JavaScript execution in the mark's browser context. The advisory describes the flaw as a spoofing vulnerability stemming from cross-site scripting, which will set alarm bells ringing for administrators, and it appears the vulnerability is being exploited. The bug was assigned a CVSS score of 8.1. Exchange Server 2016, 2019, and the latest version, Exchange Server Subscription Edition (SE), are all affected regardless of their update level. A mitigation has been released via the Exchange Emergency Mitigation (EM) Service. However, Microsoft warned the mitigation might break other things – inline images might stop working in the recipient's OWA reading pane (use attachments instead) and the OWA Print Calendar functionality might not work (use a screenshot or the Outlook Desktop client). Finally, OWA Light might not work properly. Microsoft deprecated this in 2024, so affected users should consider an upgrade. The mitigation can also be applied manually in scenarios where customers are not using the EM service. These might be disconnected or air-gapped environments – exactly the sort of environments where on-premises Exchange tends to linger. Microsoft is working on a full security update, although only the Exchange SE version will be publicly available. Exchange 2016 and 2019 customers will receive it only if enrolled in Period 2 of the Exchange Server Extended Security Updates (ESU) program. The second period of Exchange Server ESU kicked off this month, with Microsoft sternly warning that there would be no extensions past its end. The vulnerability does not affect Exchange Online. 
Microsoft has not given any details on how the exploit works, nor how widely it is being exploited. ®
Categories: Linux fréttir
Patch time for Cisco SD-WAN admins as vendor drops yet another make-me-admin zero-day
Cisco admins face emergency patch duty after Switchzilla disclosed a max-severity make-me-admin bug affecting Catalyst SD-WAN Controller and Manager.

Switchzilla dropped an advisory for CVE-2026-20182 (CVSS 10.0) on Thursday, saying that both components, formerly known as vSmart and vManage, were vulnerable in all deployment types, and that fixes were available. The bug allows unauthenticated remote attackers to bypass authentication and gain admin privileges on an affected system.

According to Rapid7, whose researchers Stephen Fewer and Jonah Burgess found the vulnerability, attackers exploiting CVE-2026-20182 could then start issuing arbitrary NETCONF commands. That means they could steal data, intercept traffic, manipulate an organization's firewall rules, or simply bring the network down, opening up opportunities for attackers of all stripes: state-backed, financially motivated, hacktivists – you name it.

Offering a high-level overview of the vulnerability, Cisco said: "This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to the affected system.

"A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric."

Cisco confirmed that, in May 2026, it became aware that CVE-2026-20182 had been exploited as a zero-day, although it did not attribute the activity. The Cybersecurity and Infrastructure Security Agency (CISA) also added CVE-2026-20182 to its Known Exploited Vulnerabilities (KEV) catalog, which is reserved for security flaws that are both actively exploited and a threat to federal agencies.
The US cyber agency gave Federal Civilian Executive Branch agencies just three days to apply Cisco's patches. While CISA has set similarly short deadlines before, they are rare and typically reserved for vulnerabilities deemed especially urgent. There was no word of the bug being exploited in ransomware attacks.

Cisco said in its advisory that there are no workarounds available, and it "strongly recommends" applying the available fixes.

Any admin responsible for their org's Cisco SD-WAN system should hunt through their logs, Cisco said, and be aware that indicators of compromise may appear among otherwise normal-looking operational logs. Specifically, they should audit /var/log/auth.log for "Accepted publickey for vmanage-admin" entries originating from unknown or unauthorized IP addresses, then check those addresses against the configured System IPs listed in the Cisco Catalyst SD-WAN Manager web UI, the vendor said.

Cisco thanked the Rapid7 researchers, who first reported the vulnerability in early March after looking into a separate authentication bypass zero-day in Cisco Catalyst SD-WAN Controller (CVE-2026-20127, also CVSS 10.0) from February. ®
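Cisco's audit guidance boils down to a simple log triage. A minimal sketch of that check, using made-up log lines and a hypothetical allowlist of configured System IPs (the real list lives in the SD-WAN Manager web UI), might look like this:

```python
import re

# Matches sshd-style auth.log entries such as:
#   "Jun  3 10:15:01 vmanage sshd[1234]: Accepted publickey for vmanage-admin from 203.0.113.9 port 4242"
PATTERN = re.compile(r"Accepted publickey for vmanage-admin from (\d+\.\d+\.\d+\.\d+)")

def suspicious_logins(log_lines, known_system_ips):
    """Return source IPs of vmanage-admin publickey logins that are not
    among the configured System IPs (hypothetical allowlist)."""
    hits = set()
    for line in log_lines:
        match = PATTERN.search(line)
        if match and match.group(1) not in known_system_ips:
            hits.add(match.group(1))
    return sorted(hits)

# Made-up sample data -- in practice, read /var/log/auth.log
sample = [
    "Jun  3 10:15:01 vmanage sshd[1234]: Accepted publickey for vmanage-admin from 10.0.0.5 port 4242",
    "Jun  3 10:16:44 vmanage sshd[1240]: Accepted publickey for vmanage-admin from 203.0.113.9 port 51514",
    "Jun  3 10:17:02 vmanage sshd[1244]: Failed password for root from 198.51.100.7 port 2222",
]
print(suspicious_logins(sample, known_system_ips={"10.0.0.5"}))  # -> ['203.0.113.9']
```

A hit here is not proof of compromise on its own, but per Cisco's advisory any vmanage-admin login from an address outside the configured System IPs deserves investigation.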
X tells Ofcom it will finally check its moderation inbox
Britain's media regulator has extracted a set of promises from X over illegal hate speech and terrorist content, suggesting that even "free speech absolutism" eventually meets a compliance department.

Under commitments accepted by Ofcom, X said it will review and assess reports of suspected illegal terrorist and hate content from UK users within an average of 24 hours, with at least 85 percent handled within 48 hours through its dedicated UK reporting channel.

The company also committed to engaging with external experts on how its reporting systems work, after several organizations complained they could not tell whether reports submitted to X were even being received, let alone acted on. X also said it would withhold access in the UK to accounts operated by or on behalf of terrorist organizations proscribed in Britain if those accounts are reported for posting illegal terrorist content.

Ofcom said X will now submit quarterly performance data over a 12-month period so the regulator can monitor whether the company is actually sticking to those promises.

"Following intensive engagement carried out by Ofcom's online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely," said Oliver Griffiths, Ofcom's Online Safety Group Director. "We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action."

The regulator launched a compliance investigation in December to examine whether major social media platforms have adequate systems to address illegal hate and terrorist material. Ofcom said evidence gathered alongside organizations including Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust pointed to illegal hate and terror content remaining visible across some of the internet's largest platforms.
Ofcom said the issue was of "particular concern" following several recent antisemitic incidents and attacks on Jewish sites in Britain, including attacks in Manchester and Golders Green, and recent arson attempts in London.

The watchdog also made clear this is not the end of its scrutiny of X, reminding the platform that Ofcom's separate investigation, which includes issues related to Grok, is ongoing, and that it will continue to probe X's broader illegal content compliance systems. ®
