TheRegister

Subscribe to TheRegister feed
Articles from www.theregister.com
Updated: 1 hour 13 min ago

AI-generated code is 'pain waiting to happen'

1 hour 22 min ago
INTERVIEW Enthusiasm among managers to adopt AI tools has outpaced developers' ability to learn those tools and use them effectively. Moshe Sambol, VP of customer solutions at software observability outfit Lightrun, told The Register in an interview that he speaks with a lot of companies. Some of the developers in those organizations, he said, are very comfortable with AI tools. "But the reality is that a lot of developers are much earlier in the curve," he said. "The expectations of businesses are getting ahead of where the developers are in terms of their mental model and in terms of the training that they're providing, the enablement they're providing to make their teams comfortable with the tools, and the rate at which these tools are evolving." Sambol said the degree of AI tool adoption varies. "I absolutely have customers who've told their developers, 'You don't write code anymore. You review code. No one should write a line of code unless for some reason you failed after three attempts getting GenAI to do it,'" he said. "I have customers like that. I don't know if I should name them, but absolutely." And he said on the other side of the spectrum, there are organizations like banks that are just starting to roll AI tools out due to compliance obligations and traditional industry caution. "It's an exciting time to be adopting these tools and learning these tools, but it puts a lot of pressure on the developer," he said. "It puts this expectation of being more productive." Not everyone manages that, and Sambol said he has a lot of sympathy for developers who have been directed to use AI tools without training and organizational guidance. Generative AI models will produce a lot of code quickly, he said, and because the code seems correct initially, it often gets pushed forward. "If it's not creating bugs en masse today, it's just pain waiting to happen," he said. "The number one question I think we have to be asking developers is, 'Can you explain that code? 
Have you validated that the code actually fits in the context of the broader system?'" Sambol said the answer isn't necessarily yes or no because developers have different levels of experience and often work on large projects where they focus only on a specific part of the code base. It's common in enterprises, he said, that no one person will understand the entire system end-to-end, which is why problem resolution often requires a group of people. The issue he sees is that generative AI systems don't help bridge the missing knowledge gap. They don't provide the context to understand all the components involved. Sambol went on to describe an incident in which a developer was using an AI assistant to build an Ansible automated workflow. "The generative AI was creating the Ansible template for him, which seems like a perfect match – it's drudge work," he explained. "And it's much better at getting the syntax exactly right." It worked. And then it stopped working. "The system that he was deploying to, all of a sudden, he could not get the component up," Sambol said. "It just wouldn't start. A process that had been going smoothly for a couple of hours in the morning, now all of a sudden, his service is down and it will not run. "And he's pulling his hair out trying to unstitch the day's work so far to figure out what went wrong, why is the service not working," he said, adding that the AI agent proved unhelpful by going off in the wrong direction, reinstalling the operating system, and undertaking other ineffective steps to effect repairs. What happened, Sambol explained, is that earlier in the day, the developer had installed the component in a certain way – it was running in a container with a systemd service. As such, it needed access to the ports on the device, which precluded running the component in Docker. "So the AI model re-wrapped it, repackaged it, and deployed it in a different way, but kept the original one running," he explained. 
"So it was simply a matter of the fact that the one he had initially deployed was still running and it was blocking the port and the second one couldn't run. "It's a fairly simple, easy-to-understand problem once you see it, but he lost the entire afternoon going down all kinds of dead ends with the AI looking at this, looking at that, because the AI model didn't remember that it had guided him to deploy the system a certain way earlier in the day." Sambol said various studies show a significant percentage of AI-generated code contains errors and creates technical debt. That's not to say human developers are without fault. Sambol said developers have their own weaknesses. Many companies, he said, have offshored or globally distributed development teams, so there's a lot of variation. He argues that it's important to acknowledge that imperfection and work toward processes that improve results. One way to do that is to automate the prompting process in a way that makes it more repeatable. "When you do that, you identify where you're starting to get good results and you don't expect everybody to come up with a well-structured long prompt." Sambol added, "I think these tools are absolutely getting better. And so I'm reluctant to call any of them junk or deeply flawed. They're getting better shockingly rapidly. If you can take advantage of a couple of different ones – with a human being in the loop – then you are more likely to get output that is at least as good as you were getting before." ®
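The failure mode in Sambol's anecdote can be reproduced in a few lines. Here is a minimal local simulation of the port conflict – two processes trying to bind the same port – not the actual Ansible/Docker setup the developer was using:

```python
import errno
import socket

# The original service (the one systemd kept alive) grabs a port first.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))            # let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

# The repackaged "second deployment" then tries to bind the same port.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    blocked = False
except OSError as e:
    blocked = (e.errno == errno.EADDRINUSE)  # "Address already in use"
finally:
    second.close()
    first.close()

print(blocked)  # True: the old instance is still holding the port
```

Once you see it, the remedy is the one-liner the developer spent an afternoon hunting for: stop the original service before redeploying.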
Categories: Linux fréttir

Cloud-managed earbuds sound strange – as a concept, and on a plane

3 hours 53 min ago
Last year, The Register spotted Dell selling cloud-manageable wireless earbuds that feature the company’s famously stoic styling at a price higher than Apple charges for its latest AirPods. Dell eventually offered your correspondent a pair of the Pro Plus Earbuds to try so we could hear what all the fuss is about – and we accepted, on condition that the company showed us the cloudy management tools that make the buds worth the big bucks. Divya Soni, a go-to-market lead, showed me Dell’s cloudy Device Management Console, a tool that lets admins enroll and track the buds, send them new firmware, or do things like turn on active noise cancellation by default across a fleet of earbuds. New firmware matters for earbuds because they’re Bluetooth devices and the wireless protocol has had its fair share of security scares over the years. The buds have already earned Microsoft’s Teams Open Office Certification – a seal of approval for being able to handle noisy offices – plus a Zoom accreditation. New firmware might help there, too. Soni admitted earbuds aren’t the main priority for the Device Management Console, which Dell expects customers will mostly use to manage docks and displays. Dell delivers firmware updates to those devices at least once a year, to address security issues or fix bugs. The tool can do the same for keyboards or headsets. I can’t imagine anyone would adopt Dell’s Device Manager just to keep an eye on earbuds. I’m also not sure anyone would buy the buds for personal use. I say that because I own two sets of wireless earbuds and in their own way both are better than the Dells. My go-to buds are JBL’s $40 Vibe Beam 2, which fit brilliantly, bring out some nice nuances in my music, boast batteries that last about six hours and only need about 15 minutes to recharge. That makes them satisfactory for long-haul flights, during which they drop a warmly enveloping cone of silence when active noise cancelling kicks in. 
My other pair are $100 Soundcore Space A40s (bought after destroying another pair). These buds have even nicer noise cancelling powers but fit terribly: I recently endured quite the scene when running to catch a bus and one dropped out of my ear and bounced into a shrub. The Soundcores redeem themselves with impressive microphones, so I use them when Zooming or recording a podcast. I prefer them to stay home because the case is bulbous and a little conspicuous in a front jeans pocket. The Dells are even bigger. They fit my ears well and battery life is strong at around eight hours. Active noise cancelling is poor: A high hiss persists in-flight and I perceived distracting artefacts when using them in noisy environments on the ground. Neither of my two PCs made a Bluetooth connection with the Dell buds. Dell has a fix for that – the buds’ case houses a small USB-C dongle devoted to connecting with the buds. It works every time and delivers a more stable connection than Bluetooth and brings out some musical nuances that I can’t hear with my other buds or desktop speaker. The dongle feels like a clue about how Dell imagines these buds will be used, because today's laptops seldom offer more than a pair of USB-C ports and they’re commonly used for power in and video out. Dedicating a port to earbuds seems wasteful … unless you’re using a Dell dock or monitor that offers more ports. The USB-C audio connector therefore made it hard to escape the idea that Dell expects these buds will almost always be sold as part of a corporate peripheral purchase. I can’t imagine consumers would prefer them to Apple’s AirPods, or the many cheaper earbuds that match them for performance. But if the boss decides your organization must have cloud-manageable earbuds it would be churlish to turn down the chance to use a pair of Pro Plus Earbuds for work and play. The experience of using them is in the name: they're built for the office but can handle after hours activities. 
They’re not delightful, but they’re far from trashy, annoying, or inconvenient. And when I inevitably lose or destroy my current buds I’ll be very happy if I have the Dells on hand. ®

Europe built sovereign clouds to escape US control. Then forgot about the processors

7 hours 53 min ago
FEATURE Can digital sovereignty exist on American silicon? Europe is pouring more than €2 billion into sovereign cloud initiatives designed to reduce exposure to US legal reach. The EU's IPCEI-CIS program funds infrastructure development. France qualifies operators under SecNumCloud, a framework with nearly 1,200 technical requirements promising "immunity from extraterritorial laws." But most datacenters and qualified cloud operators still rely heavily on Intel or AMD processors. And inside those processors sits a computer beneath the computer: management engines operating at Ring -3, below the operating system, outside the control of host security software, persistent even when the machine appears powered off. Under the US Reforming Intelligence and Securing America Act (RISAA) 2024, hardware manufacturers count as "electronic communications service providers" subject to secret government orders. Europe's frameworks certify the clouds. They don't assess the silicon. The computer your OS can't see That computer beneath the computer has a name. On Intel processors, it is the Management Engine (ME), or more precisely the Converged Security and Management Engine (CSME). On AMD, it is the Platform Security Processor (PSP). Both run at what security researchers call Ring -3, below the operating system, below the hypervisor, in a privilege level the host cannot see or log. "It's a computer inside your computer," explains John Goodacre, Professor of Computer Architectures and former director of the UK's £200 million Digital Security by Design program. He is clear about what that means in practice. The ME has its own memory, its own clock, and its own network stack, and because it can share the host's MAC and IP addresses, any traffic it generates is indistinguishable from the host's own traffic to the firewall. The architecture is not theoretical. 
Embedded in the Platform Controller Hub, the CSME is a separate microcontroller that operates independently of the host, with direct memory, device access, and network connectivity the host operating system cannot monitor. AMD's PSP works the same way. Intel's Active Management Technology (AMT), the remote management feature the ME enables, exposes at least TCP ports 16992, 16993, 16994, and 16995 on provisioned devices. Goodacre notes that an attack surface exists on unprovisioned hardware too. These ports deliver keyboard-video-mouse redirection, storage redirection, Serial-over-LAN, and power control to administrators managing fleets of devices remotely. The capability has legitimate uses. It also provides a channel that operates at a level below what European sovereignty frameworks can attest. Microsoft documented in 2017 that the PLATINUM nation state actor used Intel's Serial-over-LAN (SOL) as a covert exfiltration channel. SOL traffic transits the Management Engine and the NIC sideband path, delivered to the ME before the host TCP/IP stack runs. The host firewall and endpoint detection saw nothing, and any security tooling running on the compromised machine itself was equally blind. PLATINUM did not exploit a vulnerability. It exploited a feature, requiring only that AMT be enabled and credentials obtained. In documented cases, those credentials were the factory default: admin, with no password set. Goodacre catalogues this and related scenarios in a 37-page risk assessment prepared for CISOs evaluating Intel vPro hardware connected to corporate networks. Its conclusion is blunt: connecting an untouched-ME device to corporate resources "exposes the organization to a class of compromise that defeats the host security stack in its entirety." The ME does not stop when the machine appears to. Users recognize the symptom: a laptop powered off and stored for weeks is found, on next boot, to have a depleted battery. 
On modern thin-and-light platforms running what Microsoft documents as Modern Standby, "off" does not mean "all subsystems unpowered." The system-on-chip components the Management Engine runs on remain in low-power states, drawing on the order of 100-200 mW continuously – enough to drain a 55 Wh battery over a few weeks. The implication is documented in Goodacre's risk assessment: "Whether the radio is in a Wake-on-Wireless-LAN listening state is firmware policy. On a device whose firmware has been tampered with during transit through the supply chain, the answer cannot be inferred from the visible power state." A laptop that appears off, in a bag, can associate with a hostile network the user has no knowledge of. Professor Aurélien Francillon, a security researcher at French engineering school EURECOM, has spent years studying exactly this class of problem. Working with colleagues, he built a fully functional backdoor in hard disk drive firmware [PDF], a proof of concept demonstrating how storage devices could silently exfiltrate data through covert channels. Three months after presenting it at an academic conference, the Snowden disclosures revealed the NSA's ANT catalogue, which documented an identical capability already deployed in the field. "The NSA were already doing it," Francillon says flatly. "Quite amazing." That background informs his assessment of the ME. "Yes, it can probably be used as a backdoor, like many other things, including BMC [baseboard management controller] and many other firmwares," he says. The question, he argues, is not whether the backdoor exists but whether operational controls make it unreachable in practice. AMD faces the same architectural question. On April 14, 2026, researchers demonstrated the Fabricked attack against AMD's SEV-SNP confidential computing technology, achieving a 100 percent success rate with a software-only exploit. The Platform Security Processor proved vulnerable to the same class of compromise. 
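The standby-drain figures quoted above are easy to sanity-check with nothing beyond the article's own numbers – a 55 Wh battery against a 100-200 mW continuous draw:

```python
# Back-of-envelope check of the standby-drain claim: how long does a
# 55 Wh battery last at 100-200 mW of continuous ME/SoC draw?
battery_wh = 55.0
estimates = {}
for draw_mw in (100, 200):
    hours = battery_wh / (draw_mw / 1000.0)   # Wh divided by W gives hours
    estimates[draw_mw] = hours
    print(f"{draw_mw} mW: {hours:.0f} h = {hours / 24:.1f} days")
```

That works out to roughly 11 to 23 days to a flat battery – squarely in line with the "stored for weeks, found depleted" symptom users report.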
On server hardware, the picture is the same. Intel ME runs on servers under a different name, Server Platform Services or SPS, and the BMC, the remote administration controller standard in datacenter hardware, relies on it. "More or less the same," Francillon says of the server variant. For datacenter operators, he sharpens the focus further: "If I look at cloud systems, servers, I would be more concerned with the BMC," pointing to published research demonstrating remote exploitation of BMC vulnerabilities that could allow an attacker to reinstall or fully compromise a server. The BMC is not a separate concern from the ME: on server hardware, it is the primary network entry point into the SPS, making it both the most exposed interface and the most consequential. Both Intel and AMD processors contain management engines that operate below the operating system. The silicon is designed by American companies and subject to American legal process. The backdoor the CLOUD Act doesn't use That legal process has teeth that most European policymakers underestimate. The CLOUD Act, passed in 2018, gave US authorities extraterritorial reach to data held by American companies. FISA Section 702 allows intelligence agencies to compel US persons and companies to provide access to communications. Both are well known in European sovereignty discussions. They operate through the front door: a legal order served on a company that controls data. Less well known is RISAA 2024, a law that opens a different entrance entirely. RISAA amended FISA's definition of "electronic communications service provider" in ways that go beyond cloud operators and platform companies, and beyond the bilateral agreements that European policymakers have built their legal defenses around. Hardware manufacturers now fall within scope. Intel and AMD can be compelled, via secret orders with gag clauses, to cooperate with US intelligence access. 
The mechanism through which that access could be exercised is the management engine: a persistent, privileged, network-connected runtime that operates below anything the host operating system can see or block. A SecNumCloud-certified operator can be legally isolated from American data demands. The processor inside its servers cannot. "You've actually got a policy mechanism by which any such machine anywhere can deliver any of its information," Goodacre says. RISAA's two-year term expired on April 20, 2026, but Congress extended it by 45 days while debating reforms. Whether it is renewed, amended or allowed to lapse, the architecture it targets does not change. SecNumCloud's blind spot France's SecNumCloud is Europe's most rigorous attempt to build a cloud certification that is legally immune to American law. It did not emerge from nowhere. ANSSI, France's national cybersecurity agency, was established in 2009 as part of a broader effort to build institutional muscle on digital sovereignty long before the term became fashionable. When Edward Snowden revealed the scale of NSA surveillance in 2013, France's response was technical rather than rhetorical: ANSSI published the first SecNumCloud framework in July 2014. A decade later, that framework has grown to nearly 1,200 technical requirements. At the time, SecNumCloud was a cybersecurity qualification, not a sovereignty instrument: it set requirements for architecture, encryption standards, access controls, and incident response, but said nothing about who controlled the underlying infrastructure or whose laws applied to it. The CLOUD Act changed that. Passed in 2018, it gave American authorities extraterritorial reach to data held by US companies, and suddenly a French cybersecurity framework had a geopolitical dimension it was not designed for. 
Version 3.2, introduced in 2022, added Chapter 19: a set of explicit requirements targeting extraterritorial law, mandating that only EU operators could run the service, that no non-EU party could access customer data, and that the provider could operate autonomously without external intervention. It promised "immunity from extraterritorial laws." In December 2025, S3NS, a joint venture between French defense and technology group Thales and Google Cloud, operating Google Cloud Platform technology under French control, became the first "hybrid" cloud to receive SecNumCloud qualification. The certification triggered heated debate: was this real sovereignty, or American technology with a European flag? But the debate missed a more fundamental question. Does SecNumCloud's certification reach as far as the silicon it runs on? Francillon is positioned to see both sides of that question. He sits on the French Technology Academy's working group on cloud security, a body that advises on the technical foundations of frameworks like SecNumCloud. And he has spent years studying firmware backdoors in academic literature and demonstrated them in practice. He knows what the hardware can do, and he knows what the certification requires. His starting point is that SecNumCloud provides genuinely valuable protection, and that the silicon gap does not negate that. When asked whether SecNumCloud explicitly addresses Intel Management Engine or AMD Platform Security Processor vulnerabilities, his answer is unambiguous: "There is no direct requirement for firmware backdoor prevention." The framework is not designed to be a technical specification for hardware-layer security. "The document aims to be generic and not dive into technical details," Francillon says. "Most of it is organizational security." 
What SecNumCloud does require is that providers build a proper threat model, consider mitigation mechanisms, and monitor administration gateways where external tech support could be exploited. The hardware layer was not omitted by oversight; it was left out by design. Francillon's assessment is not a fringe view. Vincent Strubel, the director of ANSSI, the very agency that designed and administers SecNumCloud, is equally explicit about what the framework does and does not cover. In a January 2026 LinkedIn post addressing SecNumCloud's scope, he writes that all cloud offerings, hybrid or not, depend on electronic components whose design and updates are not 100 percent controlled in Europe. If Europe were ever cut off from American or Chinese technology, he argues, the result would be a global problem of security degradation, not just in hybrid clouds, but everywhere. Strubel frames SecNumCloud carefully: it is "a cybersecurity tool, not an industrial policy tool." It protects against extraterritorial law enforcement and kill-switch scenarios. It was never designed to eliminate technology dependencies at the hardware layer, and no actor, state, or enterprise fully controls the entire cloud technology stack anyway. One technology frequently cited in sovereignty discussions is OpenTitan, Google's open source secure element deployed on its server hardware and used within the S3NS infrastructure. Francillon is clear about what it is and, critically, what it is not. "OpenTitan is a secure element, a small chip on the side that can be used for protecting sensitive keys, providing signatures, making attestations," he explains. "It's a bit like a TPM." What it is not is a replacement for the main processor. "The Linux and all your applications will not run on it." OpenTitan sits alongside x86 infrastructure as an external root of trust, independent of the ME. That matters because the default embedded TPM lives inside the ME, making it subject to the ME attack surface. 
OpenTitan sits outside that boundary. The two address different problems entirely, and conflating them, as sovereignty advocates sometimes do, obscures where the residual exposure actually lies. ANSSI's own technical position paper [PDF] on confidential computing, published in October 2025, concludes that Intel SGX, TDX, and AMD SEV-SNP are "not sufficient on their own to secure an entire system, or to meet the sovereignty requirements of SecNumCloud 3.2." Physical attackers are "explicitly out-of-scope" of vendor security targets. Supply chain attackers are "explicitly out-of-scope." The ME attack surface discussed in this article falls into neither category: it is a remote network threat, not a physical one. The paper's conclusion for users concerned about hostile cloud providers is stark: "Switch to a cloud provider they trust, or use their own hardware with physical security protection measures." The castle with a structural flaw Francillon does not dispute that SecNumCloud leaves the ME unassessed. His argument is that this does not matter in practice. "What I mean is that if there is a backdoor to access a room, it cannot be directly used if the room is in a castle. You have to pass the castle walls first." Network isolation, monitoring, and threat modeling are the walls. SecNumCloud's operational requirements mandate that administration gateways be isolated, that external tech support be monitored, that network segmentation prevents lateral movement. The Management Engine backdoor may exist, but the framework makes it unreachable except in what Francillon calls "very high-end attacks." That qualifier matters. Francillon is not claiming perfect security. He is claiming that proper operational controls reduce the threat to a level where only nation state actors with significant resources could exploit it. For most threat models, he argues, that is sufficient. 
"Saying it is useless to do SecNumCloud because there is ME, or whatever backdoor in some hardware we don't control, is a mistake," he says. SecNumCloud improves security over deployments without such controls, he argues, provided that hardware is carefully evaluated and firmware securely configured. The castle walls have a structural flaw that Goodacre's risk assessment documents in detail. Corporate perimeter firewalls see the device's traffic, but because the ME shares the host's MAC and IP addresses, they cannot tell ME-originated flows apart from legitimate host traffic. "The perimeter cannot attribute a flow to host-versus-CSME origin without out-of-band knowledge," Goodacre writes. A TLS-encrypted tunnel from the ME to an attacker server on port 443 looks, to the perimeter, like any other HTTPS connection the laptop makes. Network filtering reduces attack surface. It does not eliminate the exposure. Goodacre's position is that a "Tier-3 supply-chain residual remains in both cases and is the irreducible cost of buying any silicon that ships with a Ring -3 manageability engine." He defines Tier 3 as nation state cyber services operating at the level of compromising firmware in transit, mis-issuing CA certificates via in-country authorities, and modifying hardware at customs or courier hubs. The NSA's Tailored Access Operations division treated supply chain interdiction as routine business, with explicit doctrinal preference for BIOS and firmware implants over disk-level malware. His risk assessment's data on fleet vulnerability is concrete. Industry telemetry from Eclypsium, analyzing production enterprise environments, found that approximately 72 percent of devices observed remained vulnerable to INTEL-SA-00391 years after public disclosure, and 61 percent remained vulnerable to INTEL-SA-00295. 
The same reporting documented that the Conti ransomware group developed proof-of-concept Intel ME exploit code with the intent of installing highly persistent firmware-resident implants. "Connecting an untouched-ME vPro laptop to corporate resources exposes the organization to a class of compromise that defeats the host security stack in its entirety," Goodacre concludes. "The exposed controls include BitLocker full-disk encryption, FIDO2-protected sign-in, endpoint detection and response, the host firewall and the corporate VPN." The disagreement between Francillon and Goodacre is not about whether the vulnerability exists. Both confirm it does. Both confirm AMD faces the same issue. Both confirm software alone cannot fix it. The disagreement is about whether operational controls, Francillon's castle walls, make an architectural backdoor irrelevant in practice, or merely reduce its exploitability while leaving nation state actors with a path through. For SecNumCloud operators processing sensitive government or commercial data, the distinction is not academic. It is worth noting that SecNumCloud is designed for a higher level of security than standard cloud certifications, but is not intended for classified or restricted government data. The threat that can still slip through Francillon's castle walls is precisely the threat SecNumCloud was designed to keep out. The gap nobody names Goodacre told The Register he tested awareness of the Management Engine with various attendees at the CyberUK conference in April 2026. "Almost no one" knew about it, he reports. The gap between the sovereignty rhetoric and the silicon reality is not being surfaced in policy discussions, procurement decisions, or public debate over what digital sovereignty means. The debate that does happen, hybrid versus non-hybrid, Google/Thales versus pure European providers, focuses on operational control and legal structure. It does not address the shared silicon foundation. 
Strubel's LinkedIn post pushes back against the framing: "Imagining this problem is limited to hybrid cloud offerings is pure fantasy that doesn't survive confrontation with facts." Every cloud provider, hybrid or not, depends on components they don't fully control. The distinction isn't hybrid versus sovereign. It is what you're protecting against, and whether the controls you're implementing address that threat. There is no immediate solution. RISC-V, the open source processor architecture European sovereignty advocates point to as a long-term alternative, remains years from competitive performance in datacenter workloads. "It will take decades," Francillon says flatly. Arm is a cautionary precedent: it took nearly 20 years from the first server attempts before Arm achieved any meaningful datacenter presence. Can sovereignty exist on compromised silicon? For Goodacre, the bottom line is simple: the Tier-3 supply chain residual is "the irreducible cost of buying silicon with a Ring -3 manageability engine." Francillon argues that operational controls, including network isolation, monitoring, and threat modeling make the backdoor unreachable except in very high-end attacks. Strubel acknowledges hardware dependencies are real but maintains that SecNumCloud provides valuable protection for what it does cover: legal control, kill-switch resistance, defense against cyberattacks and insider threats. The disagreement is not about technical facts. It is about risk tolerance and threat model calibration. For European CIOs choosing SecNumCloud-certified providers, the question to ask vendors is: how do you address Intel Management Engine and AMD Platform Security Processor in your threat model? The answer will clarify whether the vendor treats the hardware layer as out of scope, or has implemented controls that reduce but do not eliminate the exposure. For European policymakers, the question is broader. Can digital sovereignty exist on non-sovereign silicon? 
The current frameworks do not answer that question. They certify operational controls, legal structure, and autonomous execution capability. They do not certify silicon-layer immunity, because the hardware is American or Chinese, subject to American or Chinese law, designed with management engines that European authorities did not specify, cannot legally compel on their own terms, and cannot replace. Whether that is a gap worth addressing, or a risk worth accepting as the unavoidable cost of participating in global technology supply chains, is a question Europe will need to answer for itself. ®

One in seven Brits swapped their GP for ChatGPT, study finds

9 hours 50 min ago
Brits are now asking chatbots about mysterious lumps and weird rashes instead of calling their GP, which is probably not the digital healthcare revolution anybody meant to build. A new study from King's College London found that one in seven people in the UK have used AI instead of contacting a doctor or healthcare service, while one in ten said they had turned to chatbots rather than professional mental health support. Convenience was the biggest reason, cited by 46 percent of respondents, closely followed by curiosity at 45 percent. Another 39 percent said they used AI because they were unsure whether their symptoms were serious enough to bother a GP in the first place. The report, based on a survey of more than 2,000 adults, suggests that AI systems are quietly becoming Britain's unofficial second-opinion service while regulators are still arguing about what counts as "AI-enabled healthcare" in the first place. However, some respondents said the chatbot conversations ended up replacing medical care altogether. Around one in five respondents said chatbot advice discouraged them from seeking professional help, and 21 percent said they skipped contacting a healthcare provider because of something the AI told them. Public confidence in AI healthcare also looks shaky. The survey found Britons are almost perfectly split on whether AI should be involved in clinical decision-making, with 37 percent supporting its use and 38 percent opposing it. Safety and accuracy worries topped the list of public concerns about NHS AI use. Women, in particular, were less comfortable with the idea than men, and far more likely to say patients should be told when AI is involved in their care. Oddly, younger adults were among the most skeptical. Nearly half of 18 to 24-year-olds opposed clinical AI use, compared with 36 percent of people over 65. The public also appears to think AI has already taken over GP surgeries to a much greater extent than is the case. 
Respondents guessed that around 39 percent of GPs use AI in clinical decision-making, when the actual figure is closer to 8 percent. Professor Graham Lord, executive director at King's Health Partners, warned that responsibility for AI mistakes often lands on clinicians even when they have little control over the systems being deployed. "When something goes wrong with AI, responsibility is often placed on clinicians, even where they have limited control over how AI tools are introduced," Lord said. Which sounds suspiciously like someone in healthcare has already seen the incoming paperwork. ®
Categories: Linux fréttir

Google reimburses Register sources who were victims of API fraud

Fri, 2026-05-15 21:26
Two of the Google Cloud developers who were hit with bills for thousands of dollars following unauthorized API calls to Gemini models have had their bills reversed, the users told The Register in recent days. But Google plans to continue automatically expanding users' spending limits, leaving them and countless other customers vulnerable to bills they cannot afford, whether from fraud or a sudden traffic surge. Australia-based developer Isuru Fonseka – whose usage bill skyrocketed to $17,000 in minutes after Google automatically upgraded his $250 spending tier when a hacker took control of his account – told us that he was happy to put this behind him. “It’s so good. It felt like they were just giving me the runaround until your article. I just hope they fix it properly for everyone,” he said. “It’s great that the article was able to get the refund but it’s sad that it had to go to that level for them to process it urgently.” Despite refunding his money, Google seems to have lost a customer. Fonseka said that he has since ensured his API keys cannot be used with Google’s stable of AI products, and will likely try one of the independent foundation models if he needs those features. “I’ve disabled Gemini on everything – if I ever plan to use AI on my projects, I’m better off using it via a different service such as OpenRouter or going directly to one of the other LLM providers – just as a way to keep Gemini out of my account and the risk as low as possible,” he said. Fonseka said he was blindsided by a Google policy that allowed the company to automatically upgrade a user’s billing tier without permission or adequate warning. He had thought that by signing up for a user tier with a $250 spending cap, his bills would be restricted to that amount. It was only after attackers exploited his API key that he learned Google would upgrade the cap automatically based on his history of spending. 
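The mechanism Fonseka describes can be sketched in a few lines. The thresholds below (30 days, $1,000 of lifetime spend, a $100,000 ceiling) are the ones reported in this article; the function itself is purely illustrative and is not Google's actual billing logic:

```python
# Toy model of the automatic tier upgrade as described in this article.
# Thresholds come from the article; everything else is illustrative.

def effective_spend_limit(chosen_cap, account_age_days, lifetime_spend):
    """Return the spending ceiling that would actually be enforced."""
    if account_age_days > 30 and lifetime_spend >= 1_000:
        # Established accounts are silently upgraded past their chosen cap.
        return 100_000
    return chosen_cap

# A brand-new Tier 1 account keeps its $250 cap...
print(effective_spend_limit(250, 10, 200))     # 250
# ...but a year-old account with $1,000+ of history does not.
print(effective_spend_limit(250, 365, 1_500))  # 100000
```

Which is the nub of the complaint: the customers with the longest, best payment histories are exactly the ones with the most exposure if a key is hijacked.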
While Google acknowledged that the automatic tier upgrades allowed credential hijackers to rack up thousands of dollars in bills in cases like the one Fonseka described to The Register, it said it has not reconsidered the policy. In a statement to The Register, Google said that it wants to prioritize access to Google Cloud services without interruption, preferring to prevent service outages over respecting users' budget preferences. “With our automated growth tiers, we helped businesses scale as usage increased, built on their historic reputation of payments and usage,” a Google spokesperson told us in a statement. “This prevents their business having a hard service outage once they pass an artificial system quota.”
Tiers vs spending caps
There is some confusion between Google's usage tiers and its newly introduced spending caps, and Google’s documentation hasn't helped much. Google says its users can choose a usage tier that caps spending at a certain level. For example, the maximum spending allowed for a Tier 1 user like Fonseka is $250. However, if the account is older than 30 days and if, over the lifetime of their work with Google, they have spent at least $1,000, then Google will automatically allow that account to spend up to $100,000. So good customers have the most to fear from fraud or from an unexpected spike in usage. In several cases shared on social media, Google users were only aware of this after their credit cards were billed thousands of dollars. On April 22, Google introduced a trial of hard caps on spending within Google Cloud, but those are in a preview and are approved on a case-by-case basis. "We’re excited to announce that Spend Caps are coming soon to Google Cloud. Designed to work with Google Cloud Budgets, FinOps and DevOps can set budgets that enforce automated cost boundaries (caps) at the project level for AIS, Agent Platform, Cloud Run, Cloud Run Functions, and Maps," Google wrote. 
"These caps alert and ultimately pause API traffic once your set budget is reached, but leave your resources intact. If you need the traffic to resume, simply suspend the Spend Cap." Spend caps can only be set per project for a single, eligible service, Google said. Eligible services for this preview include Gemini API, Agent Platform (previously known as VertexAI), Cloud Run, Cloud Run Functions, Maps, Google said. Users who apply for a spending cap will have their submissions reviewed on a “one to two week basis” and customers are added in the order they submitted. “Once onboarded, you will receive an email with instructions on how to access the feature as well as details on how to submit feedback,” Google writes in its sign up page. Rod Danan, CEO of Prentus, a company that helps job applicants with interview preparation and tracks job placements for universities, told The Register earlier this week that he saw his bill skyrocket to $10,000 in just 30 minutes of usage by attackers who exploited his public API key. Google forgave the charges on Thursday, he said. “They got back to me today agreeing to a refund,” he told us. “It's definitely relieving. You want to focus on the business. You don't want to have to focus on going and getting refunds from some crazy charges.” He said the stress of running a startup is hard enough without the addition of fighting one of the largest companies in the world imposing erroneous five-figure charges. “I'm happy that it's behind me. I wish it was easier,” he said. “I've learned, yeah, definitely don't give up. Be annoying whenever something is wrong and just keep pushing. Again, try to make it as public as possible, get louder and louder until the people you need to hear you actually hear you.” Google said any unauthorized use of API keys will be investigated and it historically has treated customers compassionately when there is clear evidence of fraud or error. 
“We take reports of credential abuse and the financial security of our customers extremely seriously; and as you know are investigating these specific cases you have pointed to and we will work directly with any impacted users to resolve charges resulting from fraudulent activity,” Google said. ®

Datacenters slurping up so much juice they boosted prices 75% in largest US energy market

Fri, 2026-05-15 21:02
Prices in the United States' largest wholesale power market have surged 75 percent in the past year thanks to demand from datacenters. And an independent watchdog predicts things will only get worse without some serious changes. The PJM Interconnection serves all or parts of 13 states and the District of Columbia in the eastern US, including Northern Virginia, home to the densest cluster of datacenters in the world. The surge in wholesale power costs across PJM was outlined on Thursday by Monitoring Analytics, a firm that serves as the official market monitor for the Interconnection, in its Q1 2026 state of the market report. According to the report, the total cost per megawatt-hour (MWh) of wholesale power rose from $77.78 in the first three months of 2025 to $136.53 in the same period this year, an increase of 75.5 percent year over year. Monitoring Analytics didn’t mince words in its report, identifying datacenter load growth as the main driver of recent capacity market conditions and rising prices in PJM. “Data center load growth is the primary reason for recent and expected capacity market conditions, including total forecast load growth, the tight supply and demand balance, and high prices,” the report reads. “But for data center growth, both actual and forecast, the capacity market would not have seen the same tight supply demand conditions.” As for what might come next, the report doesn’t ignore the likely outcome of the current situation, either. “The price impacts on customers have been very large and are not reversible,” the report states, but the bad news doesn’t stop there. “The price impacts will be even larger in the near term unless the issues associated with data center load are addressed in a timely manner.” Based on the rest of the report, a timely resolution to the datacenter load issue shouldn’t be expected, at least not in a way that’ll benefit locals. 
For starters, Monitoring Analytics found that, like pretty much everywhere right now, power grids aren’t ready for the datacenter boom. PJM has taken steps to upgrade its power commitment and dispatch software to better operate its grid, but planned upgrades have been delayed multiple times with no planned implementation date on the calendar, per the report. “The current supply of capacity in PJM is not adequate to meet the demand from large data center loads and will not be adequate in the foreseeable future,” Monitoring Analytics asserted.
Current plan: Shift the risk to everyone else
PJM has been planning a one-time backstop auction to procure new power generation for datacenter projects in the region at the request of the Trump administration and the governors of the states it serves, but Monitoring Analytics isn’t convinced the Interconnection is going about the process in the right way. The currently proposed auction structure, says the watchdog, would “generally shift significant risk to other PJM customers,” which is a temptation the group says “should be resisted.” “Other PJM customers, whether residential, commercial or industrial, should not be treated as a free source of insurance, or collateral, or financing for data centers,” the report continued. “Yet that is what most of the proposals related to a backstop auction actually do.” As for what PJM ought to be doing, you probably won’t need to rack your brain to figure that out: Monitoring Analytics says datacenters ought to be required to bring their own power. Such a rule, says the group, should include fast-track options for interconnection for BYOP datacenters, and otherwise a queue that would only connect datacenters when there is adequate capacity to serve them. 
“This broad bring-your-own new generation solution to the issues created by the addition of unprecedented amounts of large data center load does not require a continued massive wealth transfer through ongoing shortage pricing,” the analysts argue. When asked for its response to the problems raised by the Monitoring Analytics report, PJM told us that it was fully aware of the impact of electricity cost increases on its customers. “PJM is working with states and member companies to address these consumer impacts on multiple fronts, including extending market caps put in place since the 2025/2026 auction, authorizing multiple transmission expansion projects that are now in development, and reforming wholesale electricity market rules,” the Interconnection told us. Monitoring Analytics didn’t respond to questions. Americans have become increasingly hostile to new datacenter projects driven by the AI boom, with 71 percent of respondents to a Gallup survey saying they opposed datacenter projects in their neighborhoods. Projects in multiple states have been abandoned recently due to pushback from locals, many of whom are concerned not only about electricity price increases, noise, and eyesores, but about environmental harm as well. ®

Git is unprepared for the AI coding tsunami

Fri, 2026-05-15 20:15
Last month, Mitchell Hashimoto, HashiCorp co-founder, publicly declared that he was moving his popular open source Ghostty terminal emulator project from GitHub. GitHub runs the world’s largest service built on the Git distributed version control system, created by Linus Torvalds. Once an enthusiastic user, Hashimoto grew disillusioned with service disruptions and increasingly slow pull requests. “This is no longer a place for serious work if it just blocks you out for hours per day, every day,” he wrote. Hashimoto was quick to defend Git itself: “The issue isn't Git, it's the infrastructure we rely on around it: issues, PRs, Actions, etc.” Many have blamed GitHub’s performance on Microsoft, which acquired the company in 2018. But to be fair, GitHub itself has been experiencing heavier-than-expected traffic thanks to a proliferation of AI-generated pull requests. In 2025, GitHub saw 206 percent year-over-year growth in AI-generated projects, measured by the use of Bash shell scripts, a widespread way of running agents. And more AI code means more bugs. Research from GitClear found that AI-generated code averaged 10.83 issues per pull request, compared to 6.45 for the old-fashioned human variety. Our new agentic workforce is raising big questions about how the entire software development lifecycle (SDLC) should evolve, and whether Git should come along. “Agents are nudging us toward a continuous flow,” warned Peco Karayanev, co-founder of DevOps platform provider Autoptic, which bridges Git-based deployments with observability tools for agent-based remediation. Autoptic’s entire user base runs on some form of Git, either homebrew or from a service provider like GitLab. Given the volume and magnitude of changes across repos, “we need git to start operating in a more continuous mode,” Karayanev wrote in an email interview. Git operations, especially when used in GitOps-style automated deployments, still need to be managed by people. 
Updates, commits, pushes, and merges are often yoked into sequences of “stop/go” episodes where someone has to hit enter on the keyboard a few times to continue the workflow, Karayanev noted. This model may not hold up once agents start getting priority.
A butler for Git
Git has always had its share of critics, especially those who use the tool daily. There may not be another piece of software that is so widely adopted and yet so inscrutable. Torvalds and other Linux kernel developers built Git in 2005 after frustrations trying to shoehorn Linux code into the commercial BitKeeper tool. Linux, a global group project of mammoth proportions, required a distributed version control system able to support non-linear development of thousands of parallel branches. Like any distributed system, Git can be difficult to understand. GitHub co-founder Scott Chacon co-wrote a book on using Git (2009’s Pro Git), and even he finds himself occasionally flummoxed by the version control system. There are still “sharp edges” to Git, Chacon told The Register. “There's a lot of stuff that it doesn't do very well from a usability standpoint,” he said. Chacon co-founded GitButler as a way to “rethink the porcelain” of Git, to make it more suitable for modern workflows. (Last month, GitButler received $17 million in venture capital funding.) Think of GitButler as a super-powered Git client. It allows the developer to work on two different branches simultaneously, using a technique called virtual branching. It reconciles the code a developer is working on with the upstream code. Developers can reorder commits, or edit the message of a previous commit. It offers richer metadata about the files being worked on. It can show which commits are unique to that branch. 
Best of all, it eliminates what many developers call “rebase hell,” where merges into an updated codebase must be checked one at a time, a problem GitButler solves by keeping the user’s code synchronized with what is upstream. Many of the actions GitButler offers can be performed through Git commands directly – although Git’s command language and its rules can be so obtuse that “you will probably make a mistake at some point,” Chacon said.
A Git for agents
Chacon believes GitHub’s current reliability issues stem from the tsunami of agentic work. This is “ironic” because GitHub was built to scale Git, he said. “But an influx of agents is pushing the service to the brink.” The problem lies not with Git itself, but with everyone using one service, Chacon argued. Last year, GitHub had about 180 million users working across 630 million repositories – with 121 million created in 2025 alone, according to the company’s most recent annual Octoverse report. “From the longer-term perspective, it doesn't need to be like this,” he argued. Maybe Git should be run locally, mirrored globally, and managed with clients … such as GitButler, Chacon suggested. Perhaps Git-based version control systems could be customized for specific industry verticals. We need to think about how we “distribute these systems more,” he said. “Git is designed to be distributed but we’re not distributing it,” he said. GitButler has created a command line interface specifically for agents. It was designed to give MCP servers an integrated map of the repository, which otherwise would require stitching together multiple Git commands. The Virtual Files concept allows the agent to work on a section of code that is also being worked on by a developer, or another agent. These are changes that point to a rethinking of how a Git workflow should run. “I think all of these systems should fundamentally change, because all of our workflows have changed, right? 
There needs to be different, sort of primitives for how to deal with these problem sets,” Chacon said.
A tip from gaming development
One company that wants its platform to replace Git altogether is Diversion, which has built an eponymous distributed version control system initially pitched for large-scale game design. “Git's architecture is actually an issue that prevents scaling,” argues Diversion CEO Sasha Medvedovsky in an interview with The Register. “Fundamentally it's an architecture problem that can't be fixed and is a bottleneck for end users and hosting services.” Git is a distributed system insofar as every user, or hosted service, requires a dedicated database (much like blockchain). “It's not distributed in the regular sense but rather replicated,” he wrote in an exchange with The Register on LinkedIn. Operations run on a single thread, making concurrent operations impossible. As a result, the larger the repository, the slower the commit operations – a deadly combination for fast-paced agentic software development, Medvedovsky noted. Of course, every CEO will have their talking points ready about a competitor’s weaknesses (Diversion is finalizing a blog post with hard numbers about Git and GitHub performance). But a growing number of other initiatives are prepping Git for the challenging times ahead. Perhaps most notable is Jujutsu, a Git-compatible distributed version control system stewarded by Google senior software engineer Martin von Zweigbergk. Like GitButler, Jujutsu (jj) aims to eliminate a lot of the annoyances that come with Git. It includes an undo button and the ability to keep committing even when there is a conflict. And because everything written in C must be recast into Rust these days, long-time Git contributor Sebastian Thiel started a project called Gitoxide to rebuild Git in Rust. 
Potential benefits include significant performance improvements through multicore processing and the much-needed memory safety that comes with Rust.
Will Git 3 solve all the problems?
Git’s chief maintainer is Junio Hamano, who took the reins from Torvalds in 2005. And he remains busy keeping Git current. At FOSDEM this February, core Git contributor and GitLab engineering manager Patrick Steinhardt discussed some of the changes coming in the next version of Git, version 3, which is gradually being rolled out this year. One of the chief improvements will be in the way Git manages references, the pointers that track branches, tags, and other lines of development. Surprisingly, this bookkeeping is a real bottleneck for the software. “The design is inefficient,” Steinhardt told the audience. Rather than giving every reference its own loose file on disk, Git can consolidate them all into a single “packed-refs” file, which saves time when reading. As projects grow larger, however, it takes longer for Git to amend or to delete a reference in packed-refs (one GitLab repo has a packed-refs file of more than 20 million references, Steinhardt said). This is especially problematic when you have multiple, simultaneous readers and writers of that file. And just forget about getting a consistent view of all the references. The freshly implemented Reftable feature, which will be the default in Git 3.0, stores references in an indexable binary format. The Git folks borrowed this concept from the Eclipse Foundation’s JGit Java implementation of Git. Reftable allows for block updates, eliminating the need to rewrite a 2 GB file for a single entry. And it is much faster for reading, which would pave the way for Git supporting larger, more sprawling repositories – perfect for an ever-busy agentic workforce. For more than two decades, Git has proved to be the version control system of choice for geeks worldwide. 
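The access-pattern difference Steinhardt describes is easy to model in miniature: a single flat file pays for every reference on every update, while a block-indexed store only touches the block that holds the changed reference. The following toy Python sketch illustrates the pattern only; neither function matches Git's real packed-refs or reftable on-disk formats:

```python
# Toy contrast between a flat "packed-refs"-style store and a
# block-indexed store. Illustrative only; not Git's actual formats.

def update_flat(refs, name, sha):
    # Flat file: one change means rewriting every entry, so the work
    # (and the lock contention) grows with the total number of refs.
    return [(n, sha if n == name else s) for n, s in refs]

def update_blocked(blocks, name, sha):
    # Block-indexed store: locate the one block holding the ref and
    # rewrite only that block; the others are left untouched.
    for block in blocks:
        if name in block:
            block[name] = sha
            return 1  # number of blocks rewritten
    return 0

flat = [("refs/heads/main", "aaa"), ("refs/heads/dev", "bbb")]
blocks = [{"refs/heads/main": "aaa"}, {"refs/heads/dev": "bbb"}]

flat = update_flat(flat, "refs/heads/dev", "ccc")        # rewrote every entry
print(update_blocked(blocks, "refs/heads/dev", "ccc"))   # 1
```

At the 20-million-reference scale Steinhardt mentioned, the flat version rewrites 20 million entries per update; the blocked version still rewrites one block.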
But even with these new features and various third-party enhancements, can it retain relevance for a new generation of agentically enhanced coders? The battle is on. ®

AI agents show they can create exploits, not just find vulns

Fri, 2026-05-15 19:45
Sure, AI agents such as Mythos can find security vulnerabilities in software, but the bigger question is whether they can turn those flaws into functional exploits that work in the real world. After all, many AI-discovered bugs prove minor or difficult to weaponize. New research, however, suggests frontier models can indeed develop working exploits when directed to do so. To better understand the rapidly changing security landscape, computer scientists from UC Berkeley, Max Planck Institute for Security and Privacy, UC Santa Barbara, Arizona State University, Anthropic, OpenAI, and Google decided to build ExploitGym, a benchmark for evaluating the exploitation capabilities of AI agents. This is not an entirely disinterested set of investigators – Anthropic, OpenAI, and Google all sell AI services. And both Anthropic and OpenAI have talked up the risk of leading models Claude Mythos Preview and GPT-5.5 while selling access to government partners. Since Anthropic announced Mythos in early April, the security community has been critical of the company's approach, described by some as fear-mongering. And various security experts have made the case that even commercially available AI models can find security flaws. Nonetheless, Mythos and GPT-5.5 outshine their peers in ExploitGym, as described in the paper, "ExploitGym: Can AI Agents Turn Security Vulnerabilities into Real Attacks?" ExploitGym consists of 898 real vulnerabilities found in applications, Google's V8 JavaScript engine, and the Linux kernel. Its workout consists of presenting an AI agent with a vulnerability and proof-of-concept input that triggers it, to see whether the agent can create an exploit capable of arbitrary code execution. According to the UC Berkeley Center for Responsible Decentralized Intelligence, Mythos Preview successfully exploited 157 test instances and GPT-5.5 managed 120 in the allotted two-hour window. 
"Even when standard security defenses like ASLR or the V8 sandbox were turned on, a meaningful number of exploits still worked," the boffins wrote in a blog post. "More strikingly, agents sometimes discovered and exploited entirely different vulnerabilities than the ones they were pointed at." The agents (CLI + model) tested were Claude Code with Claude Opus 4.6, Claude Opus 4.7, Claude Mythos Preview, and GLM-5.1; Codex CLI with GPT-5.4/GPT-5.5; and Gemini CLI with Gemini 3.1 Pro. And even the ancient models released in February (Opus 4.6 and Gemini 3.1 Pro) had some success. The researchers say that one of their more interesting findings is that these models sometimes went "off-script" in capture-the-flag (CTF) environments, where an agent has to find and retrieve some hidden value. This was most evident with Mythos Preview and GPT-5.5. The former succeeded in 226 CTF exercises but only used the intended bug in 157 instances, while the latter captured 210 flags and only used the intended bug in 120 of those cases. The authors also note that while there was some overlap in the exploits discovered, the various models found different exploits. This suggests applying a diverse set of models might be advantageous both in attack and defense scenarios. It's worth adding that ExploitGym tests were done with security guardrails disabled. When the test was re-run on GPT-5.5 with default safety filters active, the model refused 88.2 percent of the time before making any tool call. The Register, however, has seen security researchers craft prompts in a way to avoid triggering refusals. So safeguards of that sort have limits. "Our results show that autonomous exploit development by frontier AI agents is no longer a hypothetical capability," the authors state in their paper. "While current agents are not yet reliable across all targets, they already exploit a non-trivial fraction of real-world vulnerabilities, including complex targets such as kernel components." ®

LocalSend puts your sneakernet out of business

Fri, 2026-05-15 19:10
FOSS It happens all the time. You have a file on one of your devices and you need to have it on another one. You could put the file on a USB flash drive and walk it over (the so-called sneakernet), you could email it to yourself, or you could try to set up some kind of network resource. LocalSend, a free open source tool, makes the process of sharing files on a LAN easier than anything else, and it works on Windows, Linux, macOS, Android, and more. The Reg FOSS desk is not routinely a fan of Apple fondleslabs. (We’ve tried, but they’re a bit too locked down for us.) That said, from what we’ve heard, LocalSend is a bit like Apple’s AirDrop but for grown-up computers and non-Apple kit. For Linux Mint users, it’s a bit like the included Warpinator – and as that page says, don’t search for it and go to warpinator.com, as it’s a fake site. It’s a free download from its GitHub page and is also available in Canonical’s Snap store and on Flathub. You run it, and it gives that computer a cute nickname in the form of (adjective)+(fruit). Run it on two computers on the same local network, and they should see each other. You click “send” on one, and “receive” on the other, and that’s about it: pick the file or folder, and off it goes. LocalSend isn’t very big – the installation packages are mostly around the 15 MB mark – so it’s pretty fast to download or install. This vulture found and tried it when we downloaded a just-over-4 GB file and then worked out we’d downloaded it onto the wrong OS on the wrong machine. It takes a good few minutes to download several gigabytes – we live on a small, remote island, where our 100 Mbps broadband costs about four times what 1 Gbps broadband used to cost in Czechia – and it seemed worth trying to transfer it rather than grab another copy. The gist of the idea is that LocalSend is quicker than using a USB key. 
You know the sort of process: find a big enough USB key, check it has space, copy the file onto it, eject it, go to the other machine, insert it, and copy the file off again. Even if it goes perfectly, LocalSend is still less hassle. It’s also easier than configuring some kind of temporary folder-sharing setup between different OSes on different computers with different login names. (The Irish Sea wing of Vulture Towers recently moved house and has yet to finalize his office layout and reconnect his NAS servers. It’s climbing to the top of the to-do list, though.) LocalSend is also available on both the iOS App Store and Google Play Store, so it can help for devices that you can’t readily plug a USB key into. The transfer happens across your local network, so it won’t use up bandwidth on metered internet connections, and will even work if your internet connection is down. Warpinator is Mint’s solution – but in our case, we initially needed to move the file from Windows to macOS. Both have ports of Warpinator, but both seem unofficial, and while the machines could see one another, file transfers failed. We’ve also tried Syncthing, but it’s not good at keeping machines in sync when they’re rarely on at the same time – and we’ve had problems with it recursively duplicating directory trees into themselves so deeply that no GUI tool could delete them. Ideally, you should have an always-on home server that also runs Syncthing – and if you have one of those, then for one-off file transfers, you don’t really need Syncthing: just copy it to the server, and off again. LocalSend just worked, and for us, it worked identically whether either end was running Windows, Linux, or macOS. We couldn’t ask for more. ®

Microsoft puts stability in the driver's seat with new initiative

Fri, 2026-05-15 17:25
Microsoft has laid out plans for how it and its partners will deal with iffy drivers causing stability problems in the company's flagship operating system. Dubbed the Driver Quality Initiative (DQI), the program rests on four pillars. These are Architecture – hardening kernel-mode drivers and enabling third-party kernel-mode drivers to transition to user mode; Trust – raising the bar for trusted partners and drivers; Lifecycle – addressing outdated and low-quality drivers; and Quality Measures – going beyond simple crash counts to measure driver quality. It's all very laudable, although, aside from references in the architecture pillar, Microsoft's WinHEC 2026 announcement said little about how Redmond ended up in a situation where drivers can run at a privilege level that allows a failure to leave the operating system hopelessly borked. The infamous CrowdStrike incident of 2024, which crashed millions of Windows devices, ably demonstrated the dangers of drivers running around in the Windows kernel. Microsoft later blamed a 2009 undertaking with the European Commission for how that situation came to be, although it skipped over the whole not-creating-an-API-so-security-vendors-didn't-need-kernel-access part. In the months after the CrowdStrike incident (or "learnings", as Microsoft delicately put it), the Windows Resiliency Initiative was announced. According to Microsoft, "DQI builds on the learnings and infrastructure established through the Windows Resiliency Initiative." Drivers are the bane of many Windows users. A faulty driver can make the entire operating system unstable. Sure, a customer might wonder how such a situation has been allowed to happen. Still, we are where we are, and dealing with it requires Microsoft to harden the operating system and provide ways for vendors to work with Windows that don't involve breaking down the kernel's doors. Those same vendors need to ensure that drivers are high-quality and reliable. 
"Driver and platform quality," wrote Microsoft, "is central to the customer experience." The company has espoused much in recent months about how it intends to "fix" Windows after a disastrous few years that have taken a hatchet to consumer confidence. Fripperies like moving the taskbar and rethinking Redmond's relentless pushing of Copilot are one thing. Dealing with driver-related crashes is quite another. WinHEC 2026 has shown that at least some within Microsoft are determined to deal with the fundamentals, and that requires taking the Windows maker's hardware partners along for the ride. ®

Google'll grab your gigs if you don’t cough up your number

Fri, 2026-05-15 16:09
Google is testing a storage reduction for new accounts: the change the Chocolate Factory is trialing cuts free storage from 15 GB to a miserly 5 GB unless the user provides a telephone number. Not all new users are impacted. We created a Gmail account today, and were given the full 15 GB of storage without being required to provide a phone number (although it did ask for one for activation code purposes). The test is also regional and, it must be emphasized, is just that at this stage – a test. However, it could point to a future where tech vendors demand more data in return for using a 'free' service. Arguably, we're living in that future right now. A Google spokesperson told The Register: "We're testing a new storage policy for new accounts created in select regions that will help us continue to provide a high quality storage service to our users, while encouraging users to improve their account security and data recovery." A Reddit thread on the matter contained all manner of theories regarding what the data might be used for, including nefarious commercial purposes. Judging by the screenshot, Google is trying to curb people who create multiple accounts to gain more storage. 15 GB is not a lot of storage these days, particularly given the relentless growth in media file sizes. That said, a drop to 5 GB would bring Google into line with Apple, which gives customers the same amount unless they upgrade to iCloud+. Microsoft gives users 15 GB of free Outlook.com storage, and Proton Mail's free tier gives users 1 GB (initially 500 MB until a starting checklist is completed). Should the test become reality, it could be seen as yet another step on a worrying path. Sure, you can have more free storage: sign here and agree to hand over these bits of your personal information. 
As demand for storage increases, vendor offerings are looking ever more miserly, and a cut from Google, even with the best of intentions, will rankle. Then again, if you are concerned about privacy and your personal information being used for commercial purposes, it could be that, for all its convenience, Gmail might not be the right tool for you. Reducing storage to 5 GB for new users (existing users aren't affected) unless a telephone number is handed over might be the nudge that some users need to look elsewhere for their email needs. ®

NASA's Psyche mission set for a brief encounter with Mars

Fri, 2026-05-15 14:09
More than two years after launch, NASA's Psyche mission will whizz past Mars on May 15, using the planet's gravity to tweak its trajectory and accelerate on to its asteroid destination. The spacecraft, which was launched on October 13, 2023, will pass just 2,800 miles (4,500 kilometers) above the surface of the red planet at 12,333 mph (19,848 kph) on its way to the metal-rich asteroid, Psyche. In February, the spacecraft's thrusters were fired for 12 hours to refine its approach to Mars. That refinement played its part in today's flyby. However, it won't be until a Doppler shift is recorded in the signals from the spacecraft as it passes Mars that scientists will be able to definitively confirm its new speed and trajectory. These techniques are not new. Gravity assist maneuvers have been a thing since the dawn of the space age, and were theorized long before. One of the most famous beneficiaries is the Voyager mission, which took advantage of a rare planetary alignment to undertake a "Grand Tour" of Jupiter, Saturn, Uranus, and Neptune. The trajectory allowed significant propellant to be saved. And, of course, the use of gravity assists highlights the work undertaken by boffins in trajectory planning to calculate exactly how a spacecraft should be launched and what corrections are needed to achieve the required precision. Psyche is due to reach its destination in 2029, and the Mars flyby will allow scientists to check out the spacecraft's payload. For example, the multispectral imager will capture thousands of observations of Mars. According to NASA, Sarah Bairstow, Psyche's mission planning lead at the Jet Propulsion Laboratory in Southern California, said: "This is our first opportunity in flight to calibrate Psyche's imager with something bigger than a few pixels, and we’ll also make observations with the mission's other science instruments." 
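The Doppler confirmation NASA describes is, at heart, simple arithmetic: a change in the spacecraft's line-of-sight velocity shifts the received downlink carrier in proportion to v/c. A minimal sketch, assuming for illustration an X-band carrier near 8.4 GHz (the actual carrier frequency is not given here):

```python
# Back-of-the-envelope Doppler sketch (not mission software): a change
# in the spacecraft's line-of-sight velocity shifts the received radio
# carrier by roughly f * v / c at these speeds.
C = 299_792_458.0  # speed of light, m/s

def doppler_shift(f_carrier_hz: float, v_radial_ms: float) -> float:
    """First-order Doppler shift in Hz; positive v_radial_ms means receding."""
    return -f_carrier_hz * v_radial_ms / C

# Assumed X-band carrier near 8.4 GHz: a 1 km/s increase in recession
# speed drops the received frequency by roughly 28 kHz, easily
# measurable against a stable reference.
print(doppler_shift(8.4e9, 1_000.0))
```

Measuring that shift over time, rather than a single sample, is what lets navigators reconstruct the post-flyby trajectory.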
A bit of bonus science is always welcome, as well as a rehearsal for the main event, when Psyche reaches its destination. "Ultimately, though, the only reason for this flyby is to get a little help from Mars to speed us up and tilt our trajectory in the direction of the asteroid Psyche," said Lindy Elkins-Tanton, principal investigator for Psyche at Arizona State University. "But if all our instruments are powered up, and we can do important testing and calibration of the science instruments, that would be the icing on the cake." ®

Anthropic urges Uncle Sam to kneecap China's AI ambitions before 2028

Fri, 2026-05-15 13:33
AI monger Anthropic wants America and its allies to tighten measures aimed at curbing China's AI progress, warning of the consequences if "authoritarian governments" take the lead rather than Uncle Sam. In a lengthy missive posted on its website, the San Francisco-based org says it expects AI to deliver "transformational economic and societal impacts" in the coming years, and whether the transition goes well depends on where the most capable systems are built first. Since the technology is advancing swiftly, democratic countries have only a limited time in which to act, Anthropic believes. The measures it wants to see are nothing new: enforcing tighter export controls on chips used for AI development, such as Nvidia's GPUs, and cutting off access to American AI models. Recent history suggests these controls "have been incredibly successful," it says. But if Chinese researchers are only several months behind the US in AI capabilities, as many experts estimate, how successful can those efforts have been? AI labs in China have only built models that come close to those in America because of their talent and their knack for exploiting loopholes to get around export controls, Anthropic claims, along with distillation attacks that "illicitly extract the innovations of American companies." Many will suspect this is Anthropic's chief motivation in calling for action against China. Back in February, the Claude model maker accused China-based rivals including DeepSeek of using distillation to train their models by siphoning knowledge from Anthropic's own. As The Register pointed out at the time, accusing China of copying, while using content created by others to train your own models, shows a staggering lack of self-awareness from the AI industry. Anthropic's sermon also shows blinkered thinking. It implies that China can only advance by riding on America's coattails, and is incapable of innovating. 
This is despite the shockwaves generated by the release of the DeepSeek R1 model early in 2025, believed to be on a par with the best US models. Numerous reports also indicate that Chinese organizations have made huge strides with domestically developed AI silicon, and Beijing even tried to discourage tech companies in the country from buying and using Nvidia chips. Anthropic sets out two scenarios for what the world could look like in 2028, a date when it expects "transformative AI systems" to have emerged. In the first scenario, America has "successfully defended its compute advantage," and "democracies set the rules and norms around AI." The second has China overtaking the US, leading to AI norms and rules being shaped by authoritarian regimes, with the best models enabling "automated repression at scale." Another problem with Anthropic's plan is that many countries, especially in Europe, view both American and Chinese AI supremacy as a threat to democracy. There is a concerted push in Europe for "digital sovereignty" to minimize reliance on US technology, for example. Others warn it could erode democracy in America itself. Anthropic can draw little comfort from the Trump administration, which has a constantly shifting attitude to China. Export controls were said not to be high on the agenda during the President's trip to Beijing this week, and it was reported that the US has now cleared around 10 Chinese firms to buy Nvidia's second-most powerful AI chip, the H200. ®

Exploited Exchange Server flaw turns OWA inboxes into script launchpads

Fri, 2026-05-15 11:51
Microsoft has confirmed a vulnerability in on-premises Exchange Server that could result in surprise script execution in victims' browsers. Tracked as CVE-2026-42897, the flaw affects Outlook Web Access (OWA) and can be triggered by a specially crafted email opened in OWA, assuming "certain interaction conditions are met." The prize for attackers is arbitrary JavaScript execution in the mark's browser context. The advisory describes the flaw as a spoofing vulnerability stemming from cross-site scripting, which will set alarm bells ringing for administrators, and it appears the vulnerability is being exploited. The bug was assigned a CVSS score of 8.1. Exchange Server 2016, 2019, and the latest version, Exchange Server Subscription Edition (SE), are all affected regardless of their update level. A mitigation has been released via the Exchange Emergency Mitigation (EM) Service. However, Microsoft warned the mitigation might break other things – inline images might stop working in the recipient's OWA reading pane (use attachments instead) and the OWA Print Calendar functionality might not work (use a screenshot or the Outlook Desktop client). Finally, OWA Light might not work properly. Microsoft deprecated this in 2024, so affected users should consider an upgrade. The mitigation can also be applied manually in scenarios where customers are not using the EM service. These might be disconnected or air-gapped environments – exactly the sort of environments where on-premises Exchange tends to linger. Microsoft is working on a full security update, although only the Exchange SE version will be publicly available. Exchange 2016 and 2019 customers will receive it only if enrolled in Period 2 of the Exchange Server Extended Security Updates (ESU) program. The second period of Exchange Server ESU kicked off this month, with Microsoft sternly warning that there would be no extensions past its end. The vulnerability does not affect Exchange Online. 
Microsoft has not given any details on how the exploit works, nor how widely it is being exploited. ®

Patch time for Cisco SD-WAN admins as vendor drops yet another make-me-admin zero-day

Fri, 2026-05-15 11:15
Cisco admins face emergency patch duty after Switchzilla disclosed a max-severity make-me-admin bug affecting Catalyst SD-WAN Controller and Manager. Switchzilla dropped an advisory for CVE-2026-20182 (10.0) on Thursday, saying that both components, formerly known as vSmart and vManage, were vulnerable in all deployment types, and that fixes were available. The bug allows unauthenticated remote attackers to bypass authentication and gain admin privileges on an affected system. According to Rapid7, whose researchers Stephen Fewer and Jonah Burgess found the vulnerability, attackers exploiting CVE-2026-20182 could then start issuing arbitrary NETCONF commands. It means they could steal data, intercept traffic, manipulate an organization's firewall rules, or just bring the network down, opening up opportunities for attackers of all stripes: state-backed, financially motivated, hacktivists – you name it. Offering a high-level overview of the vulnerability, Cisco said: "This vulnerability exists because the peering authentication mechanism in an affected system is not working properly. An attacker could exploit this vulnerability by sending crafted requests to the affected system. "A successful exploit could allow the attacker to log in to an affected Cisco Catalyst SD-WAN Controller as an internal, high-privileged, non-root user account. Using this account, the attacker could access NETCONF, which would then allow the attacker to manipulate network configuration for the SD-WAN fabric." Cisco confirmed that, in May 2026, it became aware that CVE-2026-20182 had been exploited as a zero-day, although it did not attribute the activity. The Cybersecurity and Infrastructure Security Agency (CISA) also added CVE-2026-20182 to its Known Exploited Vulnerabilities (KEV) catalog, which is reserved for the security flaws that are both actively being exploited and threaten federal agencies. 
The US cyber agency gave Federal Civilian Executive Branch agencies just three days to apply Cisco's patches. While CISA has set similarly short deadlines before, they are rare and typically reserved for vulnerabilities deemed especially urgent. There was no word of the bug being exploited in ransomware attacks. Cisco said in its advisory there are no workarounds available, and it "strongly recommends" applying the available fixes. Any admin responsible for their org's Cisco SD-WAN system should hunt through their logs, Cisco said, and be aware that indicators of compromise may appear among otherwise normal-looking operational logs. Specifically, they should be auditing the auth.log file at /var/log/auth.log for entries related to Accepted publickey for vmanage-admin from unknown or unauthorized IP addresses. Then, check those IP addresses against the configured System IPs that are listed in the Cisco Catalyst SD-WAN Manager web UI, the vendor said. Cisco thanked the Rapid7 researchers, who first reported the vulnerability in early March after looking into a separate authentication bypass zero-day in Cisco Catalyst SD-WAN Controller (CVE-2026-20127, 10.0) from February. ®
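Cisco's hunting advice amounts to a text search plus a cross-check. A minimal, hypothetical triage sketch (the sample entries are invented; real sshd log lines and field layout vary by deployment):

```python
# Hypothetical triage sketch following Cisco's guidance for
# CVE-2026-20182: pull the source IPs of public-key logins as
# vmanage-admin out of auth.log, ready to be cross-checked against the
# System IPs configured in SD-WAN Manager.
import re

LOGIN = re.compile(r"Accepted publickey for vmanage-admin from (\S+)")

def suspect_login_ips(log_text: str) -> set[str]:
    """Return the unique source addresses of vmanage-admin key logins."""
    return {m.group(1) for m in LOGIN.finditer(log_text)}

sample = """\
May 15 01:02:03 vmanage sshd[4821]: Accepted publickey for vmanage-admin from 203.0.113.7 port 52114 ssh2
May 15 01:09:41 vmanage sshd[4907]: Accepted password for netadmin from 192.0.2.10 port 50222 ssh2
May 15 02:17:55 vmanage sshd[5110]: Accepted publickey for vmanage-admin from 203.0.113.7 port 52990 ssh2
"""
# Any address here that is not a configured System IP warrants a closer look.
print(suspect_login_ips(sample))
```

The same pattern applies whether the log is read from `/var/log/auth.log` directly or pulled into a SIEM first.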

X tells Ofcom it will finally check its moderation inbox

Fri, 2026-05-15 10:52
Britain's media regulator has extracted a set of promises from X over illegal hate speech and terrorist content, suggesting that even "free speech absolutism" eventually meets a compliance department. Under commitments accepted by Ofcom, X said it will review and assess reports of suspected illegal terrorist and hate content from UK users within an average of 24 hours, with at least 85 percent handled within 48 hours through its dedicated UK reporting channel. The company also committed to engaging with external experts on how its reporting systems work, following several organizations' complaints that they were unclear whether reports submitted to X were even being received, let alone acted on. X also said it would withhold access in the UK to accounts operated by or on behalf of terrorist organizations proscribed in Britain if the accounts are reported for posting illegal terrorist content. Ofcom said X will now submit quarterly performance data over a 12-month period so the regulator can monitor whether the company is actually sticking to those promises. "Following intensive engagement carried out by Ofcom's online safety team, X have committed to implementing stronger protections for UK users, which we will now monitor closely," said Oliver Griffiths, Ofcom's Online Safety Group Director. "We have evidence that terrorist content and illegal hate speech is persisting on some of the largest social media sites. We are challenging them to tackle the problem and expect them to take firm action." The regulator launched a compliance investigation in December to examine whether major social media platforms have adequate systems to address illegal hate and terrorist material. Ofcom said evidence gathered alongside organizations including Tech Against Terrorism, Tell MAMA, and the Antisemitism Policy Trust pointed to illegal hate and terror content remaining visible across some of the internet's largest platforms. 
Ofcom said the issue was of "particular concern" following several recent antisemitic incidents and attacks on Jewish sites in Britain, including attacks in Manchester, Golders Green, and recent arson attempts in London. The watchdog also made clear this is not the end of its scrutiny of X, reminding the platform that Ofcom's separate investigation including issues related to Grok is ongoing and that it will continue to probe X's broader illegal content compliance systems. ®
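The commitment X signed up to is a two-part statistical threshold, so Ofcom's quarterly monitoring can be checked mechanically once handling times are known. A minimal sketch with invented numbers:

```python
# Minimal sketch of the two-part threshold X agreed to: a mean review
# time within 24 hours, and at least 85% of reports handled within 48.
# The handling times below are invented for illustration.

def meets_commitment(handling_hours: list[float]) -> bool:
    mean_ok = sum(handling_hours) / len(handling_hours) <= 24
    within_48 = sum(1 for h in handling_hours if h <= 48) / len(handling_hours)
    return mean_ok and within_48 >= 0.85

print(meets_commitment([2, 5, 30, 47, 10]))  # mean 18.8h, all within 48h
print(meets_commitment([2, 5, 30, 90, 10]))  # mean 27.4h, only 80% within 48h
```

Note that both conditions must hold: a platform could clear the 85 percent bar while a few very slow cases drag the average past 24 hours, and it would still miss the commitment.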

ZTE showcases at GSMA M360 LATAM 2026, driving future business model restructuring - AI & network two-way integration

Fri, 2026-05-15 10:26
Partner Content ZTE Corporation (0763.HK / 000063.SZ), a global leading provider of integrated information and communication technology solutions, participated in GSMA M360 LATAM 2026. Ms. Chen Zhiping, Chief International Ecosystem Representative of ZTE, delivered a keynote speech entitled "Driving Future Business Model Restructuring — AI & Network Two-Way Integration" at the conference. Ms. Chen provided an in-depth analysis of the industrial value of the two-way integration of AI and networks, sharing ZTE's achievements in the Latin American market over the past two decades, its AI-Native network innovation practices, and its full-scenario intelligent solutions, helping Latin American operators complete their strategic upgrade from "connectivity providers" to "digital economy enablers". Facing the AI industry wave, ZTE released its global strategic vision in 2025: "All in AI, AI for All, Becoming a Leader in Connectivity and Intelligent Computing". Ms. Chen stated that this strategy is highly aligned with the core concepts of this GSMA Summit. In the future, ZTE will move beyond traditional network connectivity services, continuously upgrade its basic network capabilities, and comprehensively expand its AI and intelligent computing business layout. Through a two-way integration model of AI empowering the network and the network supporting AI, ZTE will reconstruct a new business model adapted to the AI era and activate new growth momentum for the Latin American digital economy. In terms of AI-enabled network upgrades, ZTE has pioneered the AI-Native network concept, deeply embedding AI capabilities into all network layers and processes to maximize network efficiency and optimize costs. In the wireless network field, ZTE's new 5G BBU integrates native intelligent computing capabilities, effectively improving the overall efficiency of hardware and software resources and increasing cell throughput by 20%. 
Simultaneously, by combining Super-N high-performance power amplifiers and AI intelligent optimization technology, equipment energy consumption is reduced by 38%. Currently, AAU and RRU products equipped with this technology have been deployed on a large scale in several Latin American countries, including Chile, Ecuador, Bolivia, Brazil, and Peru, with over 37,000 units deployed to date, saving local operators millions of dollars in electricity costs annually and achieving efficient, green, and intelligent network upgrades. Built upon AI-Native technology, the AIR Net advanced intelligent network solution enables commercial deployment of "autonomous driving" for networks, comprehensively revolutionizing operator operation and maintenance models and reducing overall TCO. This solution has already been commercially deployed in multiple locations globally. Currently, ZTE's intelligent network capabilities have obtained authoritative L4-level certification from the TM Forum, and its self-developed Co-Claw enterprise-level intelligent agent has been fully implemented internally, continuously improving network automation and intelligence levels and helping operators move towards advanced intelligent networks. In response to the complex and diverse network environment in Latin America, ZTE continues to implement scenario-based coverage solutions to bridge the regional digital divide. In indoor scenarios, ZTE has partnered with Chilean company Millicom to deploy the Qcell solution, achieving stable gigabit coverage throughout buildings. In remote rural scenarios, ZTE collaborates with Brazilian company Claro to implement the RuralPilot simplified rural network solution, addressing network coverage challenges in the vast Amazon region with its low cost and ease of maintenance. ZTE also offers a wide range of home coverage solutions, precisely matching the networking needs of different regions and scenarios in Latin America.
Ms. Chen Zhiping stated that ZTE will continue to be rooted in the Latin American market, deepen the two-way integration and innovation of AI and networks, and continue to implement green, efficient, and intelligent full-stack ICT solutions to help local operators complete their strategic transformation, upgrade from traditional connectivity service providers to digital economy enablers, comprehensively meet the intelligent needs of industries and families in all scenarios, and work together to build a smart, inclusive, and sustainable new digital ecosystem in Latin America. Contributed by ZTE.

OpenAI caught in TanStack npm supply chain chaos after employee devices compromised

Fri, 2026-05-15 10:08
OpenAI says attackers behind the TanStack npm supply chain compromise stole internal credentials after reaching two employee devices, forcing the company to rotate signing certificates for several desktop products. The company disclosed this week that it had been caught up in the wider "Mini Shai-Hulud" campaign targeting npm ecosystems and developer infrastructure, though it said there was no evidence that customer data, production systems, or deployed software were compromised. OpenAI said the incident happened during a phased rollout of new supply chain security controls introduced after a previous Axios-related incident. According to the company, the two compromised employee devices had not yet received updated package management protections that would have blocked the malicious dependency. The attackers carried out "credential-focused exfiltration activity" against a limited set of internal repositories reachable from the affected employee machines, according to OpenAI. It said "only limited credential material was successfully exfiltrated from these code repositories." That was apparently enough to trigger a precautionary reset across multiple products. OpenAI is rotating the certificates used to sign macOS versions of ChatGPT Desktop, Codex App, Codex CLI, and Atlas, and is requiring users to update the affected software by June 12. The incident ties OpenAI to the increasingly messy supply chain campaign that has spent the past several weeks worming through npm ecosystems, CI/CD infrastructure, and GitHub Actions workflows. Security firm Socket linked the TanStack compromise to the broader "Mini Shai-Hulud" operation, which abused poisoned automation workflows and stolen publishing credentials to push malicious package updates into trusted software pipelines. 
Researchers tracking the wider Mini Shai-Hulud campaign have connected the activity to a threat group known as TeamPCP, which appears to have developed an unhealthy interest in poisoning npm ecosystems and rifling through developer credentials. TanStack confirmed this week that 84 malicious package versions spanning 42 @tanstack/* packages had been published after attackers compromised parts of its release infrastructure. The poisoned packages were designed largely to steal credentials, including GitHub tokens, cloud secrets, npm credentials, and CI/CD authentication material. The campaign appears linked to earlier Mini Shai-Hulud attacks involving SAP-related npm packages, suggesting the same credential-stealing operation is spreading across multiple developer ecosystems. OpenAI said it is continuing to investigate the incident and monitor for any downstream abuse tied to the stolen credentials. The reassuring news is that OpenAI says no production systems were breached. The less reassuring news is that attackers keep getting deeper into the software assembly line before anybody notices. ®
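Defenders responding to this kind of campaign typically start by checking lockfiles against published indicator lists. A minimal, hypothetical sketch (the package/version pair below is invented, not taken from TanStack's actual advisory):

```python
# Hypothetical lockfile triage sketch: scan an npm package-lock.json
# (v2/v3 format, with a "packages" map keyed by node_modules path) for
# entries on a known-bad list. The BAD set here is a placeholder.
import json

BAD = {("@tanstack/query-core", "9.9.9")}  # invented indicator, not real

def find_compromised(lockfile_text: str) -> list[tuple[str, str]]:
    lock = json.loads(lockfile_text)
    hits = []
    for path, meta in lock.get("packages", {}).items():
        # Package name is usually implied by the node_modules path.
        name = meta.get("name") or path.rsplit("node_modules/", 1)[-1]
        if (name, meta.get("version")) in BAD:
            hits.append((name, meta["version"]))
    return hits

sample = json.dumps({"packages": {
    "node_modules/@tanstack/query-core": {"version": "9.9.9"},
    "node_modules/left-pad": {"version": "1.3.0"},
}})
print(find_compromised(sample))
```

A lockfile scan only catches what is already pinned; the stolen-token side of the campaign still requires rotating any credentials that passed through an affected CI pipeline.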

Fusion for the future: XLSMART and ZTE partnering for a boundless digital Indonesia

Fri, 2026-05-15 09:59
Partner Content In Indonesia, the magic of “Bumbu”, that perfect spice blend, creates unforgettable flavors. Today, in the digital world, an even grander "fusion" is taking place. Facing the challenge of unifying separate networks across Indonesia's diverse geography, XLSMART partnered with ZTE on a landmark dual-network convergence project, integrating over 20,000 4G base stations and deploying more than 7,000 new 5G sites in just eight months. The initiative has launched the country's first nationwide 5G blanket coverage network, validated by Ookla as the fastest 5G network in H2 2025. Leveraging digital-intelligent tools and ecosystem collaboration, the project significantly enhanced coverage, capacity, and user experience for 73 million subscribers — turning complex delivery challenges into measurable gains in speed and efficiency. Fusion for the Future. Watch how the converged network is powering Indonesia's digital growth. Contributed by ZTE.

UK reloads artillery plans with £1B remote-control howitzer order

Fri, 2026-05-15 09:45
The British Army is to get 72 next-gen mobile artillery units, in the shape of a remote-controlled howitzer (RCH) module that mounts onto the Boxer armored vehicle already in service. The Ministry of Defence (MoD) announced a £1 billion ($1.35 billion) contract to provide the Army with a modern mobile system capable of providing artillery support against targets up to 70 km (44 miles) away. First deliveries of the RCH 155 units are expected in 2028, with a "minimum deployable capability" expected before the end of the decade. It follows a £52 million early capability demonstrator contract signed in December 2025. The RCH 155 is basically a 155 mm gun housed in a turreted artillery module mounted on the Boxer drive module. It is an auto-loading weapon, capable of firing eight rounds per minute. The unit features a fire control computer with integrated ballistics calculation, plus radio data transmission to a remote artillery control system. Boxer is an eight-wheeled (8x8), all-terrain vehicle designed to take a number of different bolt-on mission modules allowing it to fulfill various roles. The British Army initially chose just a few of these types, primarily the troop carrier variant, but also the ambulance module and command vehicle unit. According to the MoD, the barrel, breech, recoil system, and trunnions will be manufactured by German defense biz Rheinmetall at its large-caliber production facility in Telford, using British steel supplied by Sheffield Forgemasters. The Boxer drive modules/chassis, engine, and drivetrain that the weapon system sits on will be manufactured by the UK division of pan-European defense firm KNDS in Stockport. The Army is to receive a total of 623 Boxer vehicles across all variants. A new mobile artillery platform was needed to replace the UK's aging fleet of AS-90 self-propelled howitzers. These could easily be mistaken for a tank, thanks to their tracked chassis and turret-mounted gun. 
The last of these were donated to Ukraine over the past few years to help it fight Russia. The UK also procured a small number (14) of Archer mobile artillery systems as a stop-gap while a successor for AS-90 was selected. This is an automated 155 mm gun mounted on a 6x6 articulated truck chassis. "This major investment is defence delivering for the battlefield and for Britain's economy," said Defence Secretary John Healey MP. "By securing next-generation artillery with Germany, not only are we rearming to strengthen NATO against growing Russian aggression but also creating highly skilled jobs here in Britain." Ironically, Britain was one of the earliest partners in the Boxer joint venture, but withdrew from it in 2003 to focus on a different program, the Future Rapid Effect System (FRES). One strand of FRES eventually led to what is now known as the Ajax family of armored vehicles. You may have heard of it. The UK government announced it was rejoining the Boxer program in 2018 in order to meet its Mechanized Infantry Vehicle (MIV) requirement. ®
